licenses | version | tree_hash | path | type | size | text | package_name | repo |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 1.0.0 | c170775f98cb8738f72c31b6ec3f00055a60184a | docs | 330 | # Installation
## Install Package
using Pkg
Pkg.add(PackageSpec(url="https://github.com/LiScI-Lab/HeartRateVariability.jl"))
## Using WFDB files
To use the `infile` function with WFDB files, you have to download and install the [WFDB Software Package](https://archive.physionet.org/physiotools/wfdb.shtml) from PhysioNet.
| HeartRateVariability | https://github.com/LiScI-Lab/HeartRateVariability.jl.git |
|
[
"MIT"
] | 1.0.0 | c170775f98cb8738f72c31b6ec3f00055a60184a | docs | 3290 | # Introduction
# Time-Domain Analysis
The time-domain analysis includes the following methods:
###### Mean:
This is the mean of the NN intervals. It is calculated by summing all NN intervals and then dividing by their number. [Read more](https://en.wikipedia.org/wiki/Mean#Arithmetic_mean_(AM))
###### SDNN:
This is the standard deviation of the NN intervals. [Read more](https://en.wikipedia.org/wiki/Heart_rate_variability#Time-domain_methods[36])
###### RMSSD:
This is the root mean square of the differences between successive NN intervals. [Read more](https://en.wikipedia.org/wiki/Heart_rate_variability#Time-domain_methods[36])
###### SDSD:
This is the standard deviation of the differences between successive NN intervals. [Read more](https://en.wikipedia.org/wiki/Heart_rate_variability#Time-domain_methods[36])
###### NN20/NN50:
This is the number of pairs of successive NN intervals that differ by more than 20ms/50ms. [Read more](https://en.wikipedia.org/wiki/Heart_rate_variability#Time-domain_methods[36])
###### pNN20/pNN50:
This is the percentage of pairs of successive NN intervals that differ by more than 20ms/50ms. [Read more](https://en.wikipedia.org/wiki/Heart_rate_variability#Time-domain_methods[36])
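The measures above follow directly from their definitions. Below is a minimal, illustrative sketch in plain Julia (not the HeartRateVariability API; the NN intervals are made-up values in ms):
```julia
using Statistics

nn = [800.0, 810.0, 790.0, 805.0, 815.0]   # hypothetical NN intervals in ms
d  = diff(nn)                              # differences of successive intervals

mean_nn = mean(nn)                         # Mean
sdnn    = std(nn)                          # SDNN
rmssd   = sqrt(mean(d .^ 2))               # RMSSD
sdsd    = std(d)                           # SDSD
nn50    = count(x -> abs(x) > 50, d)       # NN50
pnn50   = 100 * nn50 / length(d)           # pNN50 in percent
```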
###### rRR:
The relative RR intervals are calculated, for i = 2, ..., n, using the equation
```math
rr_{i} := \frac{2 \cdot (RR_{i}-RR_{i-1})}{RR_{i}+RR_{i-1}}
```
where n is the number of RR intervals.\
The HRV is measured by the median of the Euclidean distances of the relative RR intervals to the average of the relative RR intervals. [Read more](https://marcusvollmer.github.io/HRV/files/paper_method.pdf)
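A literal sketch of this computation (illustrative only; the function name is made up and the package/paper implementation may differ in detail):
```julia
using Statistics

# relative RR intervals rr_i for i = 2, ..., n and the median distance to their average
function rrr_measure(rr::AbstractVector{<:Real})
    rrel = [2 * (rr[i] - rr[i-1]) / (rr[i] + rr[i-1]) for i in 2:length(rr)]
    return median(abs.(rrel .- mean(rrel)))
end

rrr_measure([800.0, 810.0, 790.0, 805.0, 815.0])
```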
# Frequency-Domain Analysis
Frequency-domain analysis uses a Lomb-Scargle transformation to determine the power spectral density and the power in each frequency band. The frequency bands are defined as follows:
- **VLF:** very low frequency, from 0.003 to 0.04 Hz
- **LF:** low frequency, from 0.04 to 0.15 Hz
- **HF:** high frequency, from 0.15 to 0.4 Hz
- **LF/HF:** The ratio of LF and HF
- **Total Power:** The sum of VLF, LF and HF
[Read more](https://en.wikipedia.org/wiki/Heart_rate_variability#Frequency-domain_methods[36])
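For illustration, once frequencies and power spectral density values are available (placeholder numbers below; this is not the package implementation), the band powers and derived measures can be obtained as follows:
```julia
# hypothetical frequency grid (Hz) and PSD values
f = collect(0.0:0.005:0.5)
p = rand(length(f))

bandpower(f, p, lo, hi) = sum(p[(f .>= lo) .& (f .< hi)])

vlf = bandpower(f, p, 0.003, 0.04)
lf  = bandpower(f, p, 0.04, 0.15)
hf  = bandpower(f, p, 0.15, 0.4)

lf_hf       = lf / hf          # LF/HF
total_power = vlf + lf + hf    # Total Power
```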
# Nonlinear Analysis
###### Approximate entropy
This is a technique for quantifying the degree of regularity and unpredictability of the RR intervals. [Read more](https://en.wikipedia.org/wiki/Approximate_entropy)
###### Sample entropy
This is a modification of the approximate entropy that is used to assess the complexity of physiological time series signals. [Read more](https://en.wikipedia.org/wiki/Sample_entropy)
###### Hurst exponent
The Hurst exponent is used to measure the long-term memory of time series. [Read more](https://en.wikipedia.org/wiki/Hurst_exponent)
###### Rényi entropy
The Rényi entropy is a measure of diversity and forms the basis of the concept of generalized dimensions. [Read more](https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy)
# Geometric Analysis
###### Poincaré plot
This plot is used to quantify self-similarity in processes. [Read more](https://en.wikipedia.org/wiki/Poincar%C3%A9_plot)
###### Recurrence plot
This plot is used to visualize the periodic nature of a trajectory through a phase space. [Read more](https://en.wikipedia.org/wiki/Recurrence_plot)
| HeartRateVariability | https://github.com/LiScI-Lab/HeartRateVariability.jl.git |
|
[
"MIT"
] | 1.0.0 | c170775f98cb8738f72c31b6ec3f00055a60184a | docs | 1881 | # Quickstart
## Data import
The data can be read from a WFDB file using the infile function. To do this, the name and the annotator of the record are passed to the function. For the record "e1304.atr" with the header "e1304.hea", the name to be passed is "e1304" and the annotator is "atr".
data = HeartRateVariability.infile("e1304","atr")
Additionally, txt or csv files can be read in by passing the path of the file as a parameter.
```@example 1
using HeartRateVariability #hide
data = HeartRateVariability.infile("e1304.txt")
```
## Analysis
To analyze the read-in data, the array with the data is passed to the respective analysis function. The results are returned in a NamedTuple.
### Time-Domain Analysis
```@example 1
td = HeartRateVariability.time_domain(data)
```
### Frequency-Domain Analysis
```@example 1
fd = HeartRateVariability.frequency(data)
```
### Nonlinear Analysis
```@example 1
nl = HeartRateVariability.nonlinear(data)
```
### Geometric Analysis
```@example 1
g = HeartRateVariability.geometric(data)
```
The individual results in the NamedTuple can be accessed by their field names.
```@example 1
g.poincare
```
```@example 1
g.recurrence
```
The data used in this example is from the European ST-T Database[^1] and was found on [Physionet](https://physionet.org/content/edb/1.0.0/)[^2].
[^1]: Taddei A, Distante G, Emdin M, Pisani P, Moody GB, Zeelenberg C, Marchesi C. The European ST-T Database: standard for evaluating systems for the analysis of ST-T changes in ambulatory electrocardiography. European Heart Journal 13: 1164-1172 (1992).
[^2]: Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.
| HeartRateVariability | https://github.com/LiScI-Lab/HeartRateVariability.jl.git |
|
[
"MIT"
] | 0.1.2 | 7d83f64d2eced2b8080544c50c04f93db18d2ac8 | code | 546 | using Documenter
using AesKeywrapNettle
push!(LOAD_PATH,"../src/")
makedocs(
sitename = "AesKeywrapNettle.jl Documentation",
pages = [
"Index" => "index.md",
"AesKeywrapNettle" => "AesKeywrapNettle.md",
],
format = Documenter.HTML(prettyurls = false)
)
# Documenter can also automatically deploy documentation to gh-pages.
# See "Hosting Documentation" and deploydocs() in the Documenter manual
# for more information.
deploydocs(
repo = "github.com/pst-lz/AesKeywrapNettle.jl.git",
devbranch = "main"
)
| AesKeywrapNettle | https://github.com/pst-lz/AesKeywrapNettle.jl.git |
|
[
"MIT"
] | 0.1.2 | 7d83f64d2eced2b8080544c50c04f93db18d2ac8 | code | 6790 | """
AesKeywrapNettle
AES keywrap in Julia
(uses https://github.com/JuliaCrypto/Nettle.jl for AES)
"""
module AesKeywrapNettle
export aes_wrap_key, aes_unwrap_key
using Nettle
"""
aes_wrap_key(kek, plaintext[, iv])
Wraps the key "plaintext" using the key "kek" und the initial vector "iv" with the "Advanced Encryption Standard (AES) Key Wrap Algorithm"
# Arguments
- `kek::Array{UInt8}`: the key-encryption key, possible key lengths for "kek" are 128 bit, 192 bit, and 256 bit
- `plaintext::Array{UInt8}`: the key (or plaintext) to wrap, the length of "plaintext" must be a multiple of 64 bit
- `iv::Array{UInt8}`: the 64-bit initial value used during the wrapping process; if no iv is specified, the default iv [0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6] from RFC 3394 is used.
# Examples
```@meta
DocTestSetup = quote
using AesKeywrapNettle
end
```
```jldoctest
julia> a = aes_wrap_key([0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f], [0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff], [0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6])
24-element Vector{UInt8}:
0x1f
0xa6
0x8b
0x0a
0x81
0x12
0xb4
0x47
0xae
0xf3
⋮
0x82
0x9d
0x3e
0x86
0x23
0x71
0xd2
0xcf
0xe5
```
"""
function aes_wrap_key(kek::Array{UInt8}, plaintext::Array{UInt8}, iv::Array{UInt8}=[0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6])
# for Byte-Array
cryptalg = ""
if length(kek) == 16
cryptalg = "aes128"
elseif length(kek) == 24
cryptalg = "aes192"
elseif length(kek) == 32
cryptalg = "aes256"
else
error("wrong key length")
end
if length(iv) != 8
error("wrong iv length")
end
if length(plaintext) % 8 != 0 || length(plaintext) == 0
error("wrong plaintext length")
end
n = length(plaintext) ÷ 8
P = zeros(UInt8, n, 8)
for i in 1:n, j in 1:8
P[i, j] = plaintext[j + (i - 1) * 8]
end
R = zeros(UInt8, n + 1, 8)
A = copy(iv)
for i in 1:n, j in 1:8
R[i + 1, j] = P[i, j]
end
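# Wrapping rounds following RFC 3394, Section 2.2.1:
# in each step B = AES(kek, A | R[i]), then A = MSB64(B) xor t and R[i] = LSB64(B)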
for j in 0:5
for i in 1:n
for k in 1:8
push!(A, R[i + 1, k])
end
B = encrypt(cryptalg, kek, A)
t :: UInt64 = 0
t = (n * j) + i
if t <= 255
A = B[1:8]
A[8] = A[8] ⊻ t
else
BMSB :: UInt64 = 0
temp :: UInt64 = 0
for k in 1:8
temp = B[k]
BMSB += temp << (8 * (8 - k))
end
A64 :: UInt64 = 0
A64 = BMSB ⊻ t
A = hex2bytes(string(A64, base = 16, pad = 16))
end
for k in 1:8
R[i + 1, k] = B[8 + k]
end
end
end
C = zeros(UInt8, 8 * (n + 1))
for i in 1:8
C[i] = A[i]
end
for i in 1:8, j in 2:n+1
C[i + (j - 1) * 8] = R[j, i]
end
return C
end
"""
aes_unwrap_key(kek, wrapped[, iv])
Unwraps the key "plaintext" using the key "kek" with the "Advanced Encryption Standard (AES) Key Wrap Algorithm"
The initial vector "iv" is used for integrity check.
# Arguments
- `kek::Array{UInt8}`: the key-encryption key, possible key lengths for "kek" are 128 bit, 192 bit, and 256 bit
- `wrapped::Array{UInt8}`: the wrapped key (ciphertext) to unwrap, the length of "wrapped" must be a multiple of 64 bit
- `iv::Array{UInt8}`: the 64-bit initial value that was used during the wrapping process; if no iv is specified, the default iv [0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6] from RFC 3394 is used.
# Examples
```@meta
DocTestSetup = quote
using AesKeywrapNettle
end
```
```jldoctest
julia> b = aes_unwrap_key([0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f], [0x1f, 0xa6, 0x8b, 0x0a, 0x81, 0x12, 0xb4, 0x47, 0xae, 0xf3, 0x4b, 0xd8, 0xfb, 0x5a, 0x7b, 0x82, 0x9d, 0x3e, 0x86, 0x23, 0x71, 0xd2, 0xcf, 0xe5], [0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6])
16-element Vector{UInt8}:
0x00
0x11
0x22
0x33
0x44
0x55
0x66
0x77
0x88
0x99
0xaa
0xbb
0xcc
0xdd
0xee
0xff
```
"""
function aes_unwrap_key(kek::Array{UInt8}, wrapped::Array{UInt8}, iv::Array{UInt8}=[0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6, 0xa6])
# for Byte-Array
cryptalg = ""
if length(kek) == 16
cryptalg = "aes128"
elseif length(kek) == 24
cryptalg = "aes192"
elseif length(kek) == 32
cryptalg = "aes256"
else
error("wrong key length")
end
if length(iv) != 8
error("wrong iv length")
end
if length(wrapped) % 8 != 0
error("wrong wrapped length")
end
n = length(wrapped) ÷ 8 - 1
if n <= 0
error("wrong wrapped length")
end
C = zeros(UInt8, n + 1, 8)
for i in 1:n+1, j in 1:8
C[i, j] = wrapped[j + (i-1)*8]
end
A = zeros(UInt8, 8)
for i in 1:8
A[i] = C[1, i]
end
R = copy(C)
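# Unwrapping rounds following RFC 3394, Section 2.2.2 (inverse of the wrapping loop):
# in each step A = A xor t, B = AES-decrypt(kek, A | R[i]), then A = MSB64(B) and R[i] = LSB64(B)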
for j in 5:-1:0
for i in n:-1:1
t :: UInt64 = 0
t = (n * j) + i
if t <= 255
A[8] = A[8] ⊻ t
else
A64 :: UInt64 = 0
temp :: UInt64 = 0
for k in 1:8
temp = A[k]
A64 += temp << (8 * (8 - k))
end
A64 = A64 ⊻ t
A = hex2bytes(string(A64, base = 16, pad = 16))
end
for k in 1:8
push!(A, R[i + 1, k])
end
B = decrypt(cryptalg, kek, A)
A = copy(B[1:8])
for k in 1:8
R[i + 1, k] = B[8 + k]
end
end
end
if iv == A
P = zeros(UInt8, 8*n)
for i in 1:8, j in 1:n
P[i + (j - 1) * 8] = R[j + 1, i]
end
return P
else
error("wrong intial vector")
end
end
end # module
| AesKeywrapNettle | https://github.com/pst-lz/AesKeywrapNettle.jl.git |
|
[
"MIT"
] | 0.1.2 | 7d83f64d2eced2b8080544c50c04f93db18d2ac8 | code | 8540 | using Test, AesKeywrapNettle
function test_wrap_unwrap_iv(kekstring, datastring, ivstring, cipherstring)
test_correct :: Bool = false
kek = hex2bytes(kekstring)
data = hex2bytes(datastring)
iv = hex2bytes(ivstring)
cipherref = hex2bytes(cipherstring)
# wrap
cipher = aes_wrap_key(kek, data, iv)
test_correct = lowercase(bytes2hex(cipher)) == lowercase(cipherstring)
# unwrap
plain = aes_unwrap_key(kek, cipherref, iv)
test_correct = test_correct && lowercase(bytes2hex(plain)) == lowercase(datastring)
return test_correct
end
function test_wrap_unwrap(kekstring, datastring, cipherstring)
test_correct :: Bool = false
kek = hex2bytes(kekstring)
data = hex2bytes(datastring)
cipherref = hex2bytes(cipherstring)
# wrap
cipher = aes_wrap_key(kek, data)
test_correct = lowercase(bytes2hex(cipher)) == lowercase(cipherstring)
# unwrap
plain = aes_unwrap_key(kek, cipherref)
test_correct = test_correct && lowercase(bytes2hex(plain)) == lowercase(datastring)
return test_correct
end
@testset "tests from rfc3394" begin
# tests from rfc3394
# 4.1
name1 = "Wrap 128 bits of Key Data with a 128-bit KEK"
kekstring1 = "000102030405060708090A0B0C0D0E0F"
datastring1 = "00112233445566778899AABBCCDDEEFF"
ivstring1 = "A6A6A6A6A6A6A6A6"
cipherstring1 = "1FA68B0A8112B447AEF34BD8FB5A7B829D3E862371D2CFE5"
@test test_wrap_unwrap_iv(kekstring1, datastring1, ivstring1, cipherstring1)
@test test_wrap_unwrap(kekstring1, datastring1, cipherstring1)
# 4.2
name2 = "Wrap 128 bits of Key Data with a 192-bit KEK"
kekstring2 = "000102030405060708090A0B0C0D0E0F1011121314151617"
datastring2 = "00112233445566778899AABBCCDDEEFF"
ivstring2 = "A6A6A6A6A6A6A6A6"
cipherstring2 = "96778B25AE6CA435F92B5B97C050AED2468AB8A17AD84E5D"
@test test_wrap_unwrap_iv(kekstring2, datastring2, ivstring2, cipherstring2)
@test test_wrap_unwrap(kekstring2, datastring2, cipherstring2)
# 4.3
name3 = "Wrap 128 bits of Key Data with a 256-bit KEK"
kekstring3 = "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F"
datastring3 = "00112233445566778899AABBCCDDEEFF"
ivstring3 = "A6A6A6A6A6A6A6A6"
cipherstring3 = "64E8C3F9CE0F5BA263E9777905818A2A93C8191E7D6E8AE7"
@test test_wrap_unwrap_iv(kekstring3, datastring3, ivstring3, cipherstring3)
@test test_wrap_unwrap(kekstring3, datastring3, cipherstring3)
# 4.4
name4 = "Wrap 192 bits of Key Data with a 192-bit KEK"
kekstring4 = "000102030405060708090A0B0C0D0E0F1011121314151617"
datastring4 = "00112233445566778899AABBCCDDEEFF0001020304050607"
ivstring4 = "A6A6A6A6A6A6A6A6"
cipherstring4 = "031D33264E15D33268F24EC260743EDCE1C6C7DDEE725A936BA814915C6762D2"
@test test_wrap_unwrap_iv(kekstring4, datastring4, ivstring4, cipherstring4)
@test test_wrap_unwrap(kekstring4, datastring4, cipherstring4)
# 4.5
name5 = "Wrap 192 bits of Key Data with a 256-bit KEK"
kekstring5 = "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F"
datastring5 = "00112233445566778899AABBCCDDEEFF0001020304050607"
ivstring5 = "A6A6A6A6A6A6A6A6"
cipherstring5 = "A8F9BC1612C68B3FF6E6F4FBE30E71E4769C8B80A32CB8958CD5D17D6B254DA1"
@test test_wrap_unwrap_iv(kekstring5, datastring5, ivstring5, cipherstring5)
@test test_wrap_unwrap(kekstring5, datastring5, cipherstring5)
# 4.6
name6 = "Wrap 256 bits of Key Data with a 256-bit KEK"
kekstring6 = "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F"
datastring6 = "00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F"
ivstring6 = "A6A6A6A6A6A6A6A6"
cipherstring6 = "28C9F404C4B810F4CBCCB35CFB87F8263F5786E2D80ED326CBC7F0E71A99F43BFB988B9B7A02DD21"
@test test_wrap_unwrap_iv(kekstring6, datastring6, ivstring6, cipherstring6)
@test test_wrap_unwrap(kekstring6, datastring6, cipherstring6)
end
@testset "long key" begin
name7 = "Wrap 4096 bits of Key Data with a 256-bit KEK"
kekstring7 = "000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F"
datastring7 = "00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F00112233445566778899AABBCCDDEEFF000102030405060708090A0B0C0D0E0F"
ivstring7 = "A6A6A6A6A6A6A6A6"
cipherstring7 = "0cd47d1b501296187d893d813b0cf5911ce87fe75c5841f93c8e3b616cd9d638b1eb043da206af1bf52c186f9ff2270c2078deacb7992cc4782dc01f7594a46bbaeabc7003fc2b6295d27f9b345cddfc29de19f5b6d8a969121b7d2e49c4474e6eb6e965a0623f9e21305330122e6974beee87d345568260ec4c6606d3dc424819a9d5ab976bf952c02e141aa7fda07b14b2e4169bf49879dc2b6dc8ee6cc3aadc95ee2869a0ea9fa1bb62db3f09d1e1da8c13c1f70dc98fb1139296d45ac766364371008a688a9d992d62f9fd3a714212c9dfed285ad258387392ddaaffcdf3a0060c78970ae36d824007febff98d36c830c8010743e554c8ebe6eb79ccb2869267e129824e65210e39bf0f2f4191a858b89139b6babea64ef4b9b15bee4f0a9ef20417e01893cb380c6dbd140df82e8c3bb2d086e9510bb241f2731e73641e12c19abb680fc60efc035b7f5103eb0d6c609b0d1166e6a74907928c36930fa25e63bb8f522883e8c8af56f6efad4667ddf5c721f04ed9daedd5cf6c45c68ed5e1964f31f22e8208e0c12ccc34a8d72c2f73246d253872086a9ca485d346d93f14e90b1fa265decb74086170a87ab86b00739bcbeaa1ff27003cb2202cae43d8f7101460085741d540dc26480149e4b1b2fde4b07242d8398c885f5e4188291f92337332167db6a20f3c14a2f31ac7bd18430ad11d22b9431128561f8e38c912cff3f81f11e4782a"
@test test_wrap_unwrap_iv(kekstring7, datastring7, ivstring7, cipherstring7)
@test test_wrap_unwrap(kekstring7, datastring7, cipherstring7)
end
@testset "errors" begin
# wrong iv
@test_throws ErrorException aes_unwrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes("1FA68B0A8112B447AEF34BD8FB5A7B829D3E862371D2CFE5"), hex2bytes("A6A6A6A6A6A6A6A0"))
# wrong kek length unwrap
@test_throws ErrorException aes_unwrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F0A"), hex2bytes("1FA68B0A8112B447AEF34BD8FB5A7B829D3E862371D2CFE5"), hex2bytes("A6A6A6A6A6A6A6A6"))
# wrong wrapped length unwrap
@test_throws ErrorException aes_unwrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes("AA1FA68B0A8112B447AEF34BD8FB5A7B829D3E862371D2CFE5"), hex2bytes("A6A6A6A6A6A6A6A6"))
# wrong wrapped length unwrap (too short 1)
@test_throws ErrorException aes_unwrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes("1FA68B0A8112B447"), hex2bytes("A6A6A6A6A6A6A6A6"))
# wrong wrapped length unwrap (too short 2)
@test_throws ErrorException aes_unwrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes(""), hex2bytes("A6A6A6A6A6A6A6A6"))
# wrong iv length unwrap
@test_throws ErrorException aes_unwrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes("1FA68B0A8112B447AEF34BD8FB5A7B829D3E862371D2CFE5"), hex2bytes("A6A6A6A6A6A6A6A6CA"))
# wrong kek length wrap
@test_throws ErrorException aes_wrap_key(hex2bytes("AA000102030405060708090A0B0C0D0E0F"), hex2bytes("00112233445566778899AABBCCDDEEFF"), hex2bytes("A6A6A6A6A6A6A6A6"))
# wrong plaintext length wrap
@test_throws ErrorException aes_wrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes("FFAA00112233445566778899AABBCCDDEEFF"), hex2bytes("A6A6A6A6A6A6A6A6"))
# wrong plaintext length wrap (too short)
@test_throws ErrorException aes_wrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes(""), hex2bytes("A6A6A6A6A6A6A6A6"))
# wrong iv length wrap
@test_throws ErrorException aes_wrap_key(hex2bytes("000102030405060708090A0B0C0D0E0F"), hex2bytes("00112233445566778899AABBCCDDEEFF"), hex2bytes("A6A6A6A6A6A6A6A6BB"))
end | AesKeywrapNettle | https://github.com/pst-lz/AesKeywrapNettle.jl.git |
|
[
"MIT"
] | 0.1.2 | 7d83f64d2eced2b8080544c50c04f93db18d2ac8 | docs | 629 | # AesKeywrapNettle
[](https://github.com/pst-lz/AesKeywrapNettle.jl/actions)
[](http://codecov.io/github/pst-lz/AesKeywrapNettle.jl?branch=main)
[](https://pst-lz.github.io/AesKeywrapNettle.jl/stable)
[](https://pst-lz.github.io/AesKeywrapNettle.jl/dev)
AES key wrap in Julia. Uses https://github.com/JuliaCrypto/Nettle.jl for AES.
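A minimal usage sketch (the key and data are the test vector from RFC 3394, Section 4.1, which is also used in the package's doctests):
```julia
using AesKeywrapNettle

kek       = hex2bytes("000102030405060708090A0B0C0D0E0F")  # 128-bit key-encryption key
plaintext = hex2bytes("00112233445566778899AABBCCDDEEFF")  # key material to wrap

wrapped   = aes_wrap_key(kek, plaintext)    # 24-byte wrapped key (default RFC 3394 iv)
unwrapped = aes_unwrap_key(kek, wrapped)    # recovers plaintext; throws on integrity-check failure
```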
| AesKeywrapNettle | https://github.com/pst-lz/AesKeywrapNettle.jl.git |
|
[
"MIT"
] | 0.1.2 | 7d83f64d2eced2b8080544c50c04f93db18d2ac8 | docs | 285 | # The AesKeywrapNettle.jl Module
```@docs
AesKeywrapNettle
```
## Module Index
```@index
Modules = [AesKeywrapNettle]
Order = [:constant, :type, :function, :macro]
```
## Detailed API
```@autodocs
Modules = [AesKeywrapNettle]
Order = [:constant, :type, :function, :macro]
```
| AesKeywrapNettle | https://github.com/pst-lz/AesKeywrapNettle.jl.git |
|
[
"MIT"
] | 0.1.2 | 7d83f64d2eced2b8080544c50c04f93db18d2ac8 | docs | 109 | # AesKeywrapNettle.jl
Documentation for [AesKeywrapNettle.jl](https://github.com/pst-lz/AesKeywrapNettle.jl) | AesKeywrapNettle | https://github.com/pst-lz/AesKeywrapNettle.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 528 | #!/usr/bin/env -S julia --color=yes --startup-file=no --threads=auto
## Usage
# Call with: `<path-to-file>/romeo.jl ARGS`
# On windows use: `julia --threads=auto <path-to-file>/romeo.jl ARGS`
# Example call:
# `./romeo.jl -p phase.nii.gz -m mag.nii.gz -t [0.025 0.05] -o output.nii.gz`
import Pkg
Pkg.activate(@__DIR__)
try
using ROMEO, MriResearchTools, ArgParse
catch
Pkg.add(["ROMEO", "MriResearchTools", "ArgParse"])
using ROMEO, MriResearchTools, ArgParse
end
@time msg = unwrapping_main(ARGS)
println(msg)
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 607 | using ROMEO
using Documenter
DocMeta.setdocmeta!(ROMEO, :DocTestSetup, :(using ROMEO); recursive=true)
makedocs(;
modules=[ROMEO],
authors="Korbinian Eckstein [email protected]",
repo="https://github.com/korbinian90/ROMEO.jl/blob/{commit}{path}#{line}",
sitename="ROMEO.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://korbinian90.github.io/ROMEO.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/korbinian90/ROMEO.jl",
devbranch="master",
)
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 126 | module RomeoApp
using ArgParse
using MriResearchTools
using ROMEO
include("argparse.jl")
include("caller.jl")
end # module
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 12624 | function getargs(args::AbstractVector, version)
if isempty(args)
args = ["--help"]
else
if !('-' in args[1]) prepend!(args, Ref("-p")) end # if phase is first without -p
if length(args) >= 2 && !("-p" in args || "--phase" in args) && !('-' in args[end-1]) # if phase is last without -p
insert!(args, length(args), "-p")
end
end
s = ArgParseSettings(;
exc_handler=exception_handler,
add_version=true,
version,
)
@add_arg_table! s begin
"--phase", "-p"
help = "The phase image that should be unwrapped"
"--magnitude", "-m"
help = "The magnitude image (better unwrapping if specified)"
"--output", "-o"
help = "The output path or filename"
default = "unwrapped.nii"
"--echo-times", "-t"
help = """The echo times in [ms] required for temporal unwrapping
specified in array or range syntax (eg. "[1.5,3.0]" or
"3.5:3.5:14"). For identical echo times, "-t epi" can be
used with the possibility to specify the echo time as
e.g. "-t epi 5.3" (for B0 calculation)."""
nargs = '+'
"--mask", "-k"
help = """nomask | qualitymask <threshold> | robustmask | <mask_file>.
<threshold>=0.1 for qualitymask in [0;1]"""
default = ["robustmask"]
nargs = '+'
"--mask-unwrapped", "-u"
help = """Apply the mask on the unwrapped result. If mask is
"nomask", sets it to "robustmask"."""
action = :store_true
"--unwrap-echoes", "-e"
help = "Load only the specified echoes from disk"
default = [":"]
nargs = '+'
"--weights", "-w"
help = """romeo | romeo2 | romeo3 | romeo4 | romeo6 |
bestpath | <4d-weights-file> | <flags>.
<flags> are up to 6 bits to activate individual weights
(eg. "1010"). The weights are (1)phasecoherence
(2)phasegradientcoherence (3)phaselinearity (4)magcoherence
(5)magweight (6)magweight2"""
default = "romeo"
"--compute-B0", "-B"
help = """Calculate combined B0 map in [Hz].
Supports the B0 output filename as optional input.
This activates MCPC3Ds phase offset correction (monopolar)
for multi-echo data."""
default = ""
nargs = '?'
constant = "B0"
"--phase-offset-correction"
help = """on | off | bipolar.
Applies the MCPC3Ds method to perform phase offset
determination and removal (for multi-echo).
"bipolar" removes eddy current artefacts (requires >= 3 echoes)."""
default = "default off"
nargs = '?'
constant = "on"
"--phase-offset-smoothing-sigma-mm"
help = """default: [7,7,7]
Only applied if phase-offset-correction is activated. The given
sigma size is divided by the voxel size from the nifti phase
file to obtain a smoothing size in voxels. A value of [0,0,0]
deactivates phase offset smoothing (not recommended)."""
nargs = '+'
"--write-phase-offsets"
help = "Saves the estimated phase offsets to the output folder"
action = :store_true
"--individual-unwrapping", "-i"
help = """Unwraps the echoes individually (not temporal).
This might be necessary if there is large movement
(timeseries) or phase-offset-correction is not
applicable."""
action = :store_true
"--template"
help = """Template echo that is spatially unwrapped and used for
temporal unwrapping"""
arg_type = Int
default = 1
"--no-mmap", "-N"
help = """Deactivate memory mapping. Memory mapping might cause
problems on network storage"""
action = :store_true
"--no-phase-rescale", "--no-rescale"
help = """Deactivate rescaling of input images. By default the
input phase is rescaled to the range [-π;π]. This option
allows inputting already unwrapped phase images without
manually wrapping them first."""
action = :store_true
"--fix-ge-phase"
help = """GE systems write corrupted phase output (slice jumps).
This option fixes the phase problems."""
action = :store_true
"--threshold"
help = """<maximum number of wraps>.
Thresholds the unwrapped phase to the maximum number of wraps
and sets exceeding values to 0"""
arg_type = Float64
default = Inf
"--verbose", "-v"
help = "verbose output messages"
action = :store_true
"--correct-global", "-g"
help = """Phase is corrected to remove global n2π phase offset. The
median of phase values (inside mask if given) is used to
calculate the correction term. This also corrects multi-echo
phase for individual unwrapping, and might require MCPC3Ds
phase offset correction."""
action = :store_true
"--write-quality", "-q"
help = """Writes out the ROMEO quality map as a 3D image with one
value per voxel"""
action = :store_true
"--write-quality-all", "-Q"
help = """Writes out an individual quality map for each of the
ROMEO weights."""
action = :store_true
"--max-seeds", "-s"
help = """EXPERIMENTAL! Sets the maximum number of seeds for
unwrapping. Higher values allow more separated regions."""
arg_type = Int
default = 1
"--merge-regions"
help = """EXPERIMENTAL! Spatially merges neighboring regions after
unwrapping."""
action = :store_true
"--correct-regions"
help = """EXPERIMENTAL! Performed after merging. Brings the median
of each region closest to 0 (mod 2π)."""
action = :store_true
"--wrap-addition"
help = """[0;π] EXPERIMENTAL! Usually the true phase difference of
neighboring voxels cannot exceed π to be able to unwrap
them. This setting increases the limit and uses 'linear
unwrapping' of 3 voxels in a line. Neighbors can have
(π + wrap-addition) phase difference."""
arg_type = Float64
default = 0.0
"--temporal-uncertain-unwrapping"
help = """EXPERIMENTAL! Uses spatial unwrapping on voxels that have
high uncertainty values after temporal unwrapping."""
action = :store_true
end
return parse_args(args, s)
end
function exception_handler(settings::ArgParseSettings, err, err_code::Int=1)
if err == ArgParseError("too many arguments")
println(stderr,
"""wrong argument formatting!"""
)
end
ArgParse.default_handler(settings, err, err_code)
end
function getechoes(settings, neco)
echoes = eval(Meta.parse(join(settings["unwrap-echoes"], " ")))
if echoes isa Int
echoes = [echoes]
elseif echoes isa Matrix
echoes = echoes[:]
end
echoes = (1:neco)[echoes] # expands ":"
if (length(echoes) == 1) echoes = echoes[1] end
return echoes
end
function getTEs(settings, neco, echoes)
if isempty(settings["echo-times"])
if neco == 1 || length(echoes) == 1
return 1
else
error("multi-echo data is used, but no echo times are given. Please specify the echo times using the -t option.")
end
end
TEs = if settings["echo-times"][1] == "epi"
ones(neco) .* if length(settings["echo-times"]) > 1; parse(Float64, settings["echo-times"][2]) else 1 end
else
eval(Meta.parse(join(settings["echo-times"], " ")))
end
if TEs isa AbstractMatrix
TEs = TEs[:]
end
if 1 < length(TEs) == neco
TEs = TEs[echoes]
end
return TEs
end
function get_phase_offset_smoothing_sigma(settings)
if isempty(settings["phase-offset-smoothing-sigma-mm"])
return (7,7,7)
end
return eval(Meta.parse(join(settings["phase-offset-smoothing-sigma-mm"], " ")))[:]
end
function parseweights(settings)
if isfile(settings["weights"]) && splitext(settings["weights"])[2] != ""
return UInt8.(niread(settings["weights"]))
else
try
reform = "Bool[$(join(collect(settings["weights"]), ','))]"
flags = falses(6)
flags_tmp = eval(Meta.parse(reform))
flags[1:length(flags_tmp)] = flags_tmp
return flags
catch
return Symbol(settings["weights"])
end
end
end
function saveconfiguration(writedir, settings, args, version)
writedir = abspath(writedir)
open(joinpath(writedir, "settings_romeo.txt"), "w") do io
for (fname, val) in settings
if !(val isa AbstractArray || fname == "header")
println(io, "$fname: " * string(val))
end
end
println(io, """Arguments: $(join(args, " "))""")
println(io, "RomeoApp version: $version")
end
open(joinpath(writedir, "citations_romeo.txt"), "w") do io
println(io, "# For the algorithms used, please cite:")
println(io)
println(io, """Dymerska, B., Eckstein, K., Bachrata, B., Siow, B., Trattnig, S., Shmueli, K., Robinson, S.D., 2020.
Phase Unwrapping with a Rapid Opensource Minimum Spanning TreE AlgOrithm (ROMEO).
Magnetic Resonance in Medicine.
https://doi.org/10.1002/mrm.28563""")
println(io)
if settings["multi-channel"] || settings["phase-offset-correction"] != "off"
println(io, """Eckstein, K., Dymerska, B., Bachrata, B., Bogner, W., Poljanc, K., Trattnig, S., Robinson, S.D., 2018.
Computationally Efficient Combination of Multi-channel Phase Data From Multi-echo Acquisitions (ASPIRE).
Magnetic Resonance in Medicine 79, 2996-3006.
https://doi.org/10.1002/mrm.26963""")
println(io)
end
if settings["weights"] == "bestpath"
println(io, """Abdul-Rahman, H.S., Gdeisat, M.A., Burton, D.R., Lalor, M.J., Lilley, F., Moore, C.J., 2007.
Fast and robust three-dimensional best path phase unwrapping algorithm.
Applied Optics 46, 6623-6635.
https://doi.org/10.1364/AO.46.006623""")
println(io)
end
println(io, """Eckstein, K., Dymerska, B., Bachrata, B., Bogner, W., Poljanc, K., Trattnig, S., Robinson, S.D., 2018.
Computationally Efficient Combination of Multi-channel Phase Data From Multi-echo Acquisitions (ASPIRE).
Magnetic Resonance in Medicine 79, 2996-3006.
https://doi.org/10.1002/mrm.26963""")
println(io)
println(io)
println(io, "# Optional citations:")
println(io)
println(io, """Hagberg, G.E., Eckstein, K., Tuzzi, E., Zhou, J., Robinson, S.D., Scheffler, K., 2022.
Phase-based masking for quantitative susceptibility mapping of the human brain at 9.4T.
Magnetic Resonance in Medicine.
https://doi.org/10.1002/mrm.29368""")
println(io)
println(io, """Stewart, A.W., Robinson, S.D., O'Brien, K., Jin, J., Widhalm, G., Hangel, G., Walls, A., Goodwin, J., Eckstein, K., Tourell, M., Morgan, C., Narayanan, A., Barth, M., Bollmann, S., 2022.
QSMxT: Robust masking and artifact reduction for quantitative susceptibility mapping.
Magnetic Resonance in Medicine.
https://doi.org/10.1002/mrm.29048""")
println(io)
println(io, """Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B., 2017.
Julia: A fresh approach to numerical computing
SIAM Review 59, 65--98
https://doi.org/10.1137/141000671""")
end
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 11559 | function ROMEO.unwrapping_main(args; version="App 4.0")
settings = getargs(args, version)
data = load_data_and_resolve_args!(settings)
mkpath(settings["output"])
saveconfiguration(settings["output"], settings, args, version)
## Perform phase offset correction with MCPC-3D-S (and, in the case of 5D input, coil combination)
if settings["phase-offset-correction"] in ["monopolar", "bipolar"]
phase_offset_correction!(settings, data)
end
select_echoes!(data, settings)
set_mask!(data, settings)
keyargs = get_keyargs(settings, data)
unwrapping(data, settings, keyargs)
if settings["threshold"] != Inf
max = settings["threshold"] * 2π
data["phase"][data["phase"] .> max] .= 0
data["phase"][data["phase"] .< -max] .= 0
end
if settings["mask-unwrapped"] && haskey(data, "mask")
data["phase"] .*= data["mask"]
end
save(data["phase"], settings["filename"], settings)
if !isempty(settings["compute-B0"])
computeB0(settings, data)
end
write_qualitymap(settings, data, keyargs)
return 0
end
function load_data_and_resolve_args!(settings)
settings["filename"] = "unwrapped"
if endswith(settings["output"], ".nii") || endswith(settings["output"], ".nii.gz")
settings["filename"] = basename(settings["output"])
settings["output"] = dirname(settings["output"])
end
if settings["weights"] == "romeo"
if isnothing(settings["magnitude"])
settings["weights"] = "romeo4"
else
settings["weights"] = "romeo3"
end
end
if settings["mask-unwrapped"] && settings["mask"][1] == "nomask"
settings["mask"][1] = "robustmask"
end
if settings["mask"][1] == "robustmask" && isnothing(settings["magnitude"])
settings["mask"][1] = "nomask"
@warn "robustmask was chosen but no magnitude is available. No mask is used!" maxlog=2
end
settings["mmap-phase"] = !settings["no-mmap"] && !endswith(settings["phase"], ".gz") && !settings["fix-ge-phase"]
settings["mmap-mag"] = !settings["no-mmap"] && (isnothing(settings["magnitude"]) || !endswith(settings["magnitude"], ".gz"))
data = Dict{String, AbstractArray}()
data["phase"] = readphase(settings["phase"]; mmap=settings["mmap-phase"], rescale=!settings["no-phase-rescale"], fix_ge=settings["fix-ge-phase"])
settings["verbose"] && println("Phase loaded!")
if !isnothing(settings["magnitude"])
data["mag"] = readmag(settings["magnitude"], mmap=settings["mmap-mag"])
settings["verbose"] && println("Mag loaded!")
end
settings["header"] = header(data["phase"])
settings["neco"] = size(data["phase"], 4)
# activate phase-offset-correction as default (monopolar)
settings["multi-channel"] = size(data["phase"], 5) > 1
if (!isempty(settings["compute-B0"]) || settings["multi-channel"] || settings["phase-offset-correction"] == "on") && settings["phase-offset-correction"] ∉ ["bipolar", "off"]
settings["phase-offset-correction"] = "monopolar"
settings["verbose"] && println("Phase offset correction with MCPC-3D-S set to monopolar")
end
if settings["neco"] == 1
settings["phase-offset-correction"] = "off"
settings["verbose"] && println("Phase offset correction with MCPC-3D-S turned off (only one echo)")
end
if settings["phase-offset-correction"] == "default off"
settings["phase-offset-correction"] = "off"
end
## Echoes for unwrapping
settings["echoes"] = try
getechoes(settings, settings["neco"])
catch y
if isa(y, BoundsError)
error("echoes=$(join(settings["unwrap-echoes"], " ")): specified echo out of range! Number of echoes is $(settings["neco"])")
else
error("echoes=$(join(settings["unwrap-echoes"], " ")) wrongly formatted!")
end
end
settings["verbose"] && println("Echoes are $(settings["echoes"])")
settings["TEs"] = getTEs(settings, settings["neco"], settings["echoes"])
settings["verbose"] && println("TEs are $(settings["TEs"])")
if 1 < length(settings["echoes"]) && length(settings["echoes"]) != length(settings["TEs"])
error("Number of chosen echoes is $(length(settings["echoes"])) ($(settings["neco"]) in .nii data), but $(length(settings["TEs"])) TEs were specified!")
end
if haskey(data, "mag") && (size.(Ref(data["mag"]), 1:3) != size.(Ref(data["phase"]), 1:3) || size(data["mag"], 4) < maximum(settings["echoes"]))
error("size of magnitude and phase does not match!")
end
equal_echo_time = length(settings["TEs"]) >= 2 && settings["TEs"][1] == settings["TEs"][2]
if settings["phase-offset-correction"] != "off" && equal_echo_time
@warn "The echo times 1 and 2 ($(settings["TEs"])) need to be different for MCPC-3D-S phase offset correction! No phase offset correction performed"
settings["phase-offset-correction"] = "off"
end
return data
end
function phase_offset_correction!(settings, data)
polarity = settings["phase-offset-correction"]
bipolar_correction = polarity == "bipolar"
TEs = getTEs(settings, settings["neco"], :) # get all TEs here (not only selected)
if settings["neco"] != length(TEs) error("Phase offset determination requires all echo times! ($(length(TEs)) given, $(settings["neco"]) required)") end
settings["verbose"] && println("Perform phase offset correction with MCPC-3D-S ($polarity)")
settings["verbose"] && settings["multi-channel"] && println("Perform coil combination with MCPC-3D-S ($polarity)")
po = zeros(eltype(data["phase"]), (size(data["phase"])[1:3]...,size(data["phase"],5)))
sigma_mm = get_phase_offset_smoothing_sigma(settings)
sigma_vox = sigma_mm ./ header(data["phase"]).pixdim[2:4]
mag = if haskey(data, "mag") data["mag"] else ones(size(data["phase"])) end
phase, mcomb = mcpc3ds(data["phase"], mag; TEs, po, bipolar_correction, sigma=sigma_vox)
data["phase"] = phase
if size(mag, 5) != 1
data["mag"] = mcomb
end
if settings["multi-channel"]
settings["verbose"] && println("Saving combined_phase, combined_mag and phase_offset")
save(phase, "combined_phase", settings)
save(mcomb, "combined_mag", settings)
else
settings["verbose"] && println("Saving corrected_phase and phase_offset")
save(phase, "corrected_phase", settings)
end
settings["write-phase-offsets"] && save(po, "phase_offset", settings)
end
function get_keyargs(settings, data)
keyargs = Dict{Symbol, Any}()
if haskey(data, "mag")
keyargs[:mag] = data["mag"]
end
if haskey(data, "mask")
keyargs[:mask] = data["mask"]
end
keyargs[:TEs] = settings["TEs"]
keyargs[:correctglobal] = settings["correct-global"]
keyargs[:weights] = parseweights(settings)
keyargs[:maxseeds] = settings["max-seeds"]
settings["verbose"] && keyargs[:maxseeds] != 1 && println("Maxseeds are $(keyargs[:maxseeds])")
keyargs[:merge_regions] = settings["merge-regions"]
settings["verbose"] && keyargs[:merge_regions] && println("Region merging is activated")
keyargs[:correct_regions] = settings["correct-regions"]
settings["verbose"] && keyargs[:correct_regions] && println("Region correcting is activated")
keyargs[:wrap_addition] = settings["wrap-addition"]
keyargs[:temporal_uncertain_unwrapping] = settings["temporal-uncertain-unwrapping"]
keyargs[:individual] = settings["individual-unwrapping"]
settings["verbose"] && println("individual unwrapping is $(keyargs[:individual])")
keyargs[:template] = settings["template"]
settings["verbose"] && !settings["individual-unwrapping"] && println("echo $(keyargs[:template]) used as template")
return keyargs
end
function select_echoes!(data, settings)
data["phase"] = data["phase"][:,:,:,settings["echoes"]]
if haskey(data, "mag")
data["mag"] = data["mag"][:,:,:,settings["echoes"]]
end
end
function set_mask!(data, settings)
if isfile(settings["mask"][1])
settings["verbose"] && println("Trying to read mask from file $(settings["mask"][1])")
data["mask"] = niread(settings["mask"][1]).raw .!= 0
if size(data["mask"]) != size(data["phase"])[1:3]
error("size of mask is $(size(data["mask"])), but it should be $(size(data["phase"])[1:3])!")
end
elseif settings["mask"][1] == "robustmask" && haskey(data, "mag")
settings["verbose"] && println("Calculate robustmask from magnitude, saved as mask.nii")
template_echo = min(settings["template"], size(data["mag"], 4))
data["mask"] = robustmask(data["mag"][:,:,:,template_echo])
save(data["mask"], "mask", settings)
elseif contains(settings["mask"][1], "qualitymask")
threshold = if length(settings["mask"]) > 1
parse(Float32, settings["mask"][2])
elseif length(split(settings["mask"][1])) > 1
parse(Float32, split(settings["mask"][1])[2])
else
0.1 # default threshold
end
qmap = voxelquality(data["phase"]; get_keyargs(settings, data)...)
data["mask"] = robustmask(qmap; threshold)
save(data["mask"], "mask", settings)
elseif settings["mask"][1] != "nomask"
opt = settings["mask"][1]
error("masking option '$opt' is undefined" * ifelse(tryparse(Float32, opt) isa Float32, " (Maybe '-k qualitymask $opt' was meant?)", ""))
end
end
function unwrapping(data, settings, keyargs)
settings["verbose"] && println("perform unwrapping...")
regions = zeros(UInt8, size(data["phase"])[1:3]) # regions is an output
unwrap!(data["phase"]; keyargs..., regions)
settings["verbose"] && println("unwrapping finished!")
if settings["max-seeds"] > 1
settings["verbose"] && println("writing regions...")
save(regions, "regions", settings)
end
end
function computeB0(settings, data)
if isempty(settings["echo-times"])
error("echo times are required for B0 calculation! Unwrapping has been performed")
end
if !haskey(data, "mag")
if length(settings["TEs"]) > 1
@warn "B0 frequency estimation without magnitude might result in poor handling of noise in later echoes!"
end
data["mag"] = to_dim(exp.(-settings["TEs"]/20), 4) # T2*=20ms decay (low value to reduce noise contribution of later echoes)
end
B0 = calculateB0_unwrapped(data["phase"], data["mag"], settings["TEs"])
save(B0, settings["compute-B0"], settings)
end
function write_qualitymap(settings, data, keyargs)
# no mask used for writing quality maps
if settings["write-quality"]
settings["verbose"] && println("Calculate and write quality map...")
save(voxelquality(data["phase"]; keyargs...), "quality", settings)
end
if settings["write-quality-all"]
for i in 1:6
flags = falses(6)
flags[i] = true
settings["verbose"] && println("Calculate and write quality map $i...")
qm = voxelquality(data["phase"]; keyargs..., weights=flags)
if all(qm[1:end-1,1:end-1,1:end-1] .== 1.0)
settings["verbose"] && println("quality map $i skipped for the given inputs")
else
save(qm, "quality_$i", settings)
end
end
end
end
save(image, name, settings::Dict) = savenii(image, name, settings["output"], settings["header"])
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 503 | module ROMEO
using Statistics
using StatsBase
const NBINS = 256
include("utility.jl")
include("priorityqueue.jl")
include("weights.jl")
include("seed.jl")
include("region_handling.jl")
include("algorithm.jl")
include("unwrapping.jl")
include("voxelquality.jl")
unwrapping_main(args...; kwargs...) = @warn("Type `using ArgParse` to use this function \n `?unwrapping_main` for argument help")
export unwrap, unwrap!, unwrap_individual, unwrap_individual!, voxelquality, unwrapping_main
end # module
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 2756 | function grow_region_unwrap!(
wrapped, weights, visited=zeros(UInt8, size(wrapped)), pqueue=PQueue{Int}(NBINS);
maxseeds=1, merge_regions=false, correct_regions=false, wrap_addition=0, keyargs...
)
## Init
maxseeds = min(255, maxseeds) # Hard Limit, Stored in UInt8
dimoffsets = getdimoffsets(wrapped)
notvisited(i) = checkbounds(Bool, visited, i) && (visited[i] == 0)
seeds = Int[]
new_seed_thresh = 256
if isempty(pqueue) # no seed added yet
addseed! = getseedfunction(seeds, pqueue, visited, weights, wrapped, keyargs)
new_seed_thresh = addseed!()
end
## MST loop
while !isempty(pqueue)
if length(seeds) < maxseeds && pqueue.min > new_seed_thresh
new_seed_thresh = addseed!()
end
edge = dequeue!(pqueue)
oldvox, newvox = getvoxelsfromedge(edge, visited, dimoffsets)
if visited[newvox] == 0
unwrapedge!(wrapped, oldvox, newvox, visited, wrap_addition)
visited[newvox] = visited[oldvox]
for i in 1:6 # 6 directions
e = getnewedge(newvox, notvisited, dimoffsets, i)
if e != 0 && weights[e] > 0
enqueue!(pqueue, e, weights[e])
end
end
end
end
## Region merging
if merge_regions regions = merge_regions!(wrapped, visited, length(seeds), weights) end
if merge_regions && correct_regions correct_regions!(wrapped, visited, regions) end
return visited
end
function getvoxelsfromedge(edge, visited, stridelist)
dim = getdimfromedge(edge)
vox = getfirstvoxfromedge(edge)
neighbor = vox + stridelist[dim] # direct neighbor in dim
if visited[neighbor] == 0
return vox, neighbor
else
return neighbor, vox
end
end
# edge calculations
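# (every voxel owns 3 edges, one per dimension towards the higher index: edge = dim + 3 * (voxel - 1))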
getdimfromedge(edge) = (edge - 1) % 3 + 1
getfirstvoxfromedge(edge) = div(edge - 1, 3) + 1
getedgeindex(leftvoxel, dim) = dim + 3(leftvoxel-1)
function unwrapedge!(wrapped, oldvox, newvox, visited, x)
oo = 2oldvox - newvox
d = 0
if checkbounds(Bool, wrapped, oo) && visited[oo] != 0 # neighbor behind is visited
v = wrapped[oldvox] - wrapped[oo]
d = if v < -x # threshold
-x
elseif v > x
x
else
v
end
end
wrapped[newvox] = unwrapvoxel(wrapped[newvox], wrapped[oldvox] + d)
end
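# shift `new` by the multiple of 2π that brings it closest to `old`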
unwrapvoxel(new, old) = new - 2pi * round((new - old) / 2pi)
function getnewedge(v, notvisited, stridelist, i)
iDim = div(i+1,2)
n = stridelist[iDim] # neighbor-offset in dimension iDim
if iseven(i)
if notvisited(v+n) getedgeindex(v, iDim) else 0 end
else
if notvisited(v-n) getedgeindex(v-n, iDim) else 0 end
end
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 668 | mutable struct PQueue{T}
min::Int
nbins::Int
content::Vector{Vector{T}}
end
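# Bucket-based priority queue: `content[w]` stores the items enqueued with weight `w`;
# `min` is the lowest non-empty bucket (nbins + 1 when the queue is empty).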
# initialize new queue
PQueue{T}(nbins) where T = PQueue(nbins + 1, nbins, [Vector{T}() for _ in 1:nbins])
PQueue(nbins, item::T, weight=1) where T = enqueue!(PQueue{T}(nbins), item, weight)
Base.isempty(q::PQueue) = q.min > q.nbins
function enqueue!(q::PQueue, item, weight)
push!(q.content[weight], item)
q.min = min(q.min, weight)
return q
end
function dequeue!(q::PQueue)
elem = pop!(q.content[q.min])
# increase smallestbin, if elem was last in bin
while q.min ≤ q.nbins && isempty(q.content[q.min])
q.min += 1
end
return elem
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 2600 | function correct_regions!(wrapped, visited, regions)
for r in regions
wrapped[visited .== r] .-= (2π * median(round.(wrapped[visited .== r] ./ 2π)))
end
end
function merge_regions!(wrapped, visited, nregions, weights)
mask = sum(weights; dims=1)
region_size = countmap(visited) # TODO could use weight instead
offsets = zeros(nregions, nregions)
offset_counts = zeros(Int, nregions, nregions)
stridelist = getdimoffsets(wrapped)
for dim in 1:3
neighbor = CartesianIndex(ntuple(i->i==dim ? 1 : 0, 3))
for I in CartesianIndices(wrapped)
J = I + neighbor
if checkbounds(Bool, wrapped, J)# && weights[dim, I] < 255
ri = visited[I]
rj = visited[J]
if ri != 0 && rj != 0 && ri != rj
w = 255 - weights[dim,I]
if w == 255 w = 0 end
offsets[ri, rj] += (wrapped[I] - wrapped[J]) * w
offset_counts[ri, rj] += w
end
end
end
end
for i in 1:nregions, j in i:nregions
offset_counts[i,j] += offset_counts[j,i]
offset_counts[j,i] = offset_counts[i,j]
offsets[i,j] -= offsets[j,i]
offsets[j,i] = -offsets[i,j]
end
corrected = falses(nregions)
remaining_regions = Int[]
while !all(corrected)
largest_uncorrected_region = try
findmax(filter(p -> first(p) != 0 && !corrected[first(p)], region_size))[2]
catch
@show region_size size(corrected) nregions
throw(error())
end
# TODO correct region? No, offsets are already calculated
corrected[largest_uncorrected_region] = true
push!(remaining_regions, largest_uncorrected_region)
# TODO multiple rounds until no change?
sorted_offsets = get_offset_count_sorted(offset_counts, corrected)
for I in sorted_offsets
(i,j) = Tuple(I)
offset = round((offsets[I] / offset_counts[I]) / 2π)
if offset != 0
wrapped[visited .== j] .+= offset * 2π
visited[visited .== j] .= i
# TODO region merging offset_count and offset calculation
end
corrected[j] = true
offset_counts[i,j] = offset_counts[j,i] = -1
end
end
return remaining_regions
end
function get_offset_count_sorted(offset_counts, corrected)
f(I) = corrected[I[1]] && !corrected[I[2]]
return sort(filter(f, findall(offset_counts .> 0)), by=i->offset_counts[i], rev=true)
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 2093 | # returns a function that can repeatedly and efficiently create new seeds
function getseedfunction(seeds, pqueue, visited, weights, wrapped, keyargs)
seedqueue = getseedqueue(weights)
notvisited(i) = checkbounds(Bool, visited, i) && (visited[i] == 0)
stridelist = getdimoffsets(wrapped)
function addseed!()
seed = findseed!(seedqueue, weights, visited)
if seed == 0
return 255
end
for i in 1:6 # 6 directions
e = getnewedge(seed, notvisited, stridelist, i)
if e != 0 && weights[e] > 0
enqueue!(pqueue, e, weights[e])
end
end
seedcorrection!(wrapped, seed, keyargs)
push!(seeds, seed)
visited[seed] = length(seeds)
# new seed thresh
seed_weights = weights[getedgeindex.(seed, 1:3)]
new_seed_thresh = NBINS - div(NBINS - sum(seed_weights)/3, 2)
return new_seed_thresh
end
return addseed!
end
function getseedqueue(weights)
queue = PQueue{Int}(3NBINS)
for (i, w) in enumerate(sum([w == 0 ? UInt8(255) : w for w in weights]; dims=1))
enqueue!(queue, i, w)
end
return queue
end
function findseed!(queue::PQueue, weights, visited)
while !isempty(queue)
ind = dequeue!(queue)
if visited[ind] == 0
return ind
end
end
return 0
end
function seedcorrection!(wrapped, vox, keyargs)
if haskey(keyargs, :phase2) && haskey(keyargs, :TEs) # requires multiecho
phase2 = keyargs[:phase2]
TEs = keyargs[:TEs]
best = Inf
offset = 0
for off1 in -2:2, off2 in -1:1
diff = abs((wrapped[vox] + 2π*off1) / TEs[1] - (phase2[vox] + 2π*off2) / TEs[2])
diff += (abs(off1) + abs(off2)) / 100 # small penalty for wraps (if TE1 == 2*TE2 wrong value is chosen otherwise)
if diff < best
best = diff
offset = off1
end
end
wrapped[vox] += 2π * offset
else
wrapped[vox] = rem2pi(wrapped[vox], RoundNearest)
end
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 6696 | function unwrap!(wrapped::AbstractArray{T,3}; regions=zeros(UInt8, size(wrapped)), keyargs...) where T
weights = calculateweights(wrapped; keyargs...)
@assert sum(weights) != 0 "Unwrap-weights are all zero!"
regions .= grow_region_unwrap!(wrapped, weights; keyargs...)
if haskey(keyargs, :correctglobal) && keyargs[:correctglobal]
mask = if haskey(keyargs, :mask)
keyargs[:mask]
else
trues(size(wrapped))
end
wrapped .-= (2π * median(round.(filter(isfinite, wrapped[mask]) ./ 2π))) # TODO time (sample)
end
return wrapped
end
function unwrap!(wrapped::AbstractArray; keyargs...)
sz = size(wrapped)
@assert ndims(wrapped) <= 3 "This is 3D (4D) unwrapping! data is $(ndims(wrapped))D"
if ndims(wrapped) <= 2 # algorithm requires 3D input
wrapped = reshape(wrapped, size(wrapped)..., ones(Int, 3-ndims(wrapped))...)
end
unwrap!(wrapped; keyargs...)
return reshape(wrapped, sz)
end
"""
unwrap(wrapped::AbstractArray; keyargs...)
ROMEO unwrapping for 3D and 4D data.
### Optional keyword arguments:
- `TEs`: Required for 4D data. The echo times for multi-echo data. In the case of single-echo
    application with `phase2`, the echo times of phase and phase2 are given as a tuple (eg. (5, 10) or [5, 10]).
- `weights`: Options are [`:romeo`] | `:romeo2` | `:romeo3` | `:bestpath`.
- `mag`: The magnitude is used to improve the unwrapping-path.
- `mask`: Unwrapping is only performed inside the mask.
- `phase2`: A second reference phase image (possibly with different echo time).
It is used for calculating the phasecoherence weight. This is automatically
done for 4D multi-echo input and therefore not required.
- `correctglobal=false`: If `true` corrects global n2π offsets.
- `individual=false`: If `true` perform individual unwrapping of echoes.
Type `?unwrap_individual` for more information
- `template=1`: echo that is spatially unwrapped (if `individual` is `false`)
- `maxseeds=1`: higher values allow more separate regions
- `merge_regions=false`: spatially merge neighboring regions after unwrapping
- `correct_regions=false`: bring each region's median closest to 0 by adding n2π
- `wrap_addition=0`: [0;π], allows 'linear unwrapping'; neighbors can have up to
(π+wrap_addition) phase difference
- `temporal_uncertain_unwrapping=false`: uses spatial unwrapping on voxels that
have high uncertainty values after temporal unwrapping
# Examples
```julia-repl
julia> using MriResearchTools
julia> phase = readphase("phase_3echo.nii")
julia> unwrapped = unwrap(phase; TEs=[1,2,3])
julia> savenii(unwrapped, "unwrapped.nii"; header=header(phase))
```
"""
unwrap, unwrap!
unwrap(wrapped; keyargs...) = unwrap!(copy(wrapped); keyargs...)
function unwrap!(wrapped::AbstractArray{T,4}; TEs, individual=false,
template=1, p2ref=ifelse(template==1, 2, template-1),
temporal_uncertain_unwrapping=false, keyargs...) where T
if individual return unwrap_individual!(wrapped; TEs, keyargs...) end
## INIT
args = Dict{Symbol, Any}(keyargs)
args[:phase2] = wrapped[:,:,:,p2ref]
args[:TEs] = TEs[[template, p2ref]]
if haskey(args, :mag)
args[:mag] = args[:mag][:,:,:,template]
end
## Calculate
weights = calculateweights(view(wrapped,:,:,:,template); args...)
unwrap!(view(wrapped,:,:,:,template); args..., weights) # rightmost keyarg takes precedence
quality = similar(wrapped)
V = falses(size(wrapped))
for ieco in [(template-1):-1:1; (template+1):length(TEs)]
iref = if (ieco < template) ieco+1 else ieco-1 end
refvalue = wrapped[:,:,:,iref] .* (TEs[ieco] / TEs[iref])
w = view(wrapped,:,:,:,ieco)
w .= unwrapvoxel.(w, refvalue) # temporal unwrapping
if temporal_uncertain_unwrapping # TODO extract as function
quality[:,:,:,ieco] .= getquality.(w, refvalue)
visited = quality[:,:,:,ieco] .< π/2
mask = if haskey(keyargs, :mask)
keyargs[:mask]
else
dropdims(sum(weights; dims=1); dims=1) .< 100
end
visited[.!mask] .= true
V[:,:,:,ieco] = visited
if any(visited) && !all(visited)
edges = getseededges(visited)
edges = filter(e -> weights[e] != 0, edges)
grow_region_unwrap!(w, weights, visited, initqueue(edges, weights))
end
end
end
return wrapped#, quality, weights, V
end
function getquality(vox, ref)
return abs(vox - ref)
end
function getseededges(visited::BitArray)
stridelist = getdimoffsets(visited)
edges = Int64[]
for dim in 1:3, I in LinearIndices(visited)
J = I + stridelist[dim]
if checkbounds(Bool, visited, J) # borders should be no problem due to invalid weights
if visited[I] + visited[J] == 1 # one visited and one not visited
push!(edges, getedgeindex(I, dim))
end
end
end
return edges
end
initqueue(seed::Int, weights) = initqueue([seed], weights)
function initqueue(seeds, weights)
pq = PQueue{eltype(seeds)}(NBINS)
for seed in seeds
enqueue!(pq, seed, weights[seed])
end
return pq
end
"""
unwrap_individual(wrapped::AbstractArray{T,4}; TEs, keyargs...) where T
Performs individual unwrapping of the echoes instead of temporal unwrapping.
Still uses multi-echo information to improve the quality map.
This function is identical to `unwrap` with the flag `individual=true`.
The syntax is identical to unwrap, but doesn't support the `temporal_uncertain_unwrapping` and `template` options:
$(@doc unwrap)
"""
unwrap_individual, unwrap_individual!
unwrap_individual(wrapped; keyargs...) = unwrap_individual!(copy(wrapped); keyargs...)
function unwrap_individual!(wrapped::AbstractArray{T,4}; TEs, keyargs...) where T
args = Dict{Symbol,Any}(keyargs)
Threads.@threads for i in 1:length(TEs)
e2 = if (i == 1) 2 else i-1 end
if haskey(keyargs, :mag) args[:mag] = keyargs[:mag][:,:,:,i] end
unwrap!(view(wrapped,:,:,:,i); phase2=wrapped[:,:,:,e2], TEs=TEs[[i,e2]], args...)
end
if haskey(keyargs, :correctglobal) && keyargs[:correctglobal]
correct_multi_echo_wraps!(wrapped; TEs, keyargs...)
end
return wrapped
end
function correct_multi_echo_wraps!(wrapped; TEs, mask=trues(size(wrapped)), keyargs...)
for ieco in 2:length(TEs)
iref = ieco - 1
nwraps = median(round.((filter(isfinite, wrapped[:,:,:,iref][mask]) .* (TEs[ieco] / TEs[iref]) .- filter(isfinite, wrapped[:,:,:,ieco][mask])) / 2π))
wrapped[:,:,:,ieco] .+= 2π * nwraps
end
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 229 | function γ(x::AbstractFloat) # faster if only one wrap can occur
if x < -π
x+typeof(x)(2π)
elseif x > π
x-typeof(x)(2π)
else
x
end
end
# Linear-index offsets (strides) along each dimension of A
getdimoffsets(A) = (1, cumprod(size(A)[1:end-1])...)
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 1099 | """
voxelquality(phase::AbstractArray; keyargs...)
Calculates a quality for each voxel. The voxel quality can be used to create a mask.
# Examples
```julia-repl
julia> qmap = voxelquality(phase_3echo; TEs=[1,2,3]);
julia> mask = robustmask(qmap);
```
Takes the same inputs as `romeo`/`unwrap`:
$(@doc unwrap)
See also [`unwrap`](@ref)
"""
function voxelquality(phase; keyargs...)
weights = calculateweights(phase; type=Float32, rescale=x->x, keyargs...) # [0;1]
qmap = dropdims(sum(weights; dims=1); dims=1)
qmap[2:end,:,:] .+= weights[1,1:end-1,:,:]
qmap[:,2:end,:] .+= weights[2,:,1:end-1,:]
qmap[:,:,2:end] .+= weights[3,:,:,1:end-1]
return qmap ./ 6 # [0;1]
end
function calculateweights(phase::AbstractArray{T,4}; TEs, template=1, p2ref=2, keyargs...) where T
args = Dict{Symbol, Any}(keyargs)
args[:phase2] = phase[:,:,:,p2ref]
args[:TEs] = TEs[[template, p2ref]]
if haskey(args, :mag) && size(args[:mag], 4) > 1
args[:mag] = args[:mag][:,:,:,template]
end
return calculateweights(view(phase,:,:,:,template); args...)
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 5837 | ## weights
function getweight(P, i, j, P2, TEs, M, maxmag, flags) # Phase, index, neighbor, ...
weight = 1.0
if flags[1] weight *= (0.1 + 0.9phasecoherence(P, i, j)) end
if flags[2] weight *= (0.1 + 0.9phasegradientcoherence(P, P2, TEs, i, j)) end
if flags[3] weight *= (0.1 + 0.9phaselinearity(P, i, j)) end
if !isnothing(M)
small, big = minmax(M[i], M[j])
if flags[4] weight *= (0.1 + 0.9magcoherence(small, big)) end
if flags[5] weight *= (0.1 + 0.9magweight(small, maxmag)) end
if flags[6] weight *= (0.1 + 0.9magweight2(big, maxmag)) end
end
return weight
end
phasecoherence(P, i, j) = 1 - abs(γ(P[i] - P[j]) / π)
phasegradientcoherence(P, P2, TEs, i, j) = max(0, 1 - abs(γ(P[i] - P[j]) - γ(P2[i] - P2[j]) * TEs[1] / TEs[2]))
magcoherence(small, big) = (small / big) ^ 2
magweight(small, maxmag) = 0.5 + 0.5min(1, small / (0.5 * maxmag))
magweight2(big, maxmag) = 0.5 + 0.5min(1, (0.5 * maxmag) / big) # too high magnitude is not good either (flow artifact)
function phaselinearity(P, i, j, k)
pl = max(0, 1 - abs(rem2pi(P[i] - 2P[j] + P[k], RoundNearest)/2))
if isnan(pl)
return 0.5
end
return pl
end
function phaselinearity(P, i, j)
neighbor = j - i
h = i - neighbor
k = j + neighbor
if 0 < h && k <= length(P)
return phaselinearity(P, h, i, j) * phaselinearity(P, i, j, k)
else
return 0.9
end
end
"""
calculateweights(wrapped; weights=:romeo, kwargs...)
Calculates weights for all edges.
size(weights) == [3, size(wrapped)...]
### Optional keyword arguments:
- `weights`: Options are [`:romeo`] | `:romeo2` | `:romeo3` | `:bestpath`.
- `mag`: Additional mag weights are used.
- `mask`: Unwrapping is only performed inside the mask.
- `phase2`: A second reference phase image (possibly with a different echo time).
   It is used for calculating the phasegradientcoherence weight.
- `TEs`: The echo times of the phase and the phase2 images as a tuple (e.g. (5, 10) or [5, 10]).
"""
function calculateweights(wrapped; weights=:romeo, kwargs...)
weights = if weights isa AbstractArray{<:Real,4}
weights
elseif weights == :bestpath
calculateweights_bestpath(wrapped; kwargs...)
else
calculateweights_romeo(wrapped, weights; kwargs...)
end
# these edges do not exist
weights[1,end,:,:] .= 0
weights[2,:,end,:] .= 0
weights[3,:,:,end] .= 0
return weights
end
calculateweights_romeo(wrapped, weights::AbstractArray{T,4}; kw...) where T = UInt8.(weights)
# defines default weights
function calculateweights_romeo(wrapped, weights::Symbol; kwargs...)
flags = falses(6)
if weights == :romeo
flags[1:4] .= true
elseif weights == :romeo2
flags[[1,4]] .= true # phasecoherence, magcoherence
elseif weights == :romeo3
flags[[1,2,4]] .= true # phasecoherence, phasegradientcoherence, magcoherence
elseif weights == :romeo4
flags[1:4] .= true
elseif weights == :romeo6
flags[1:6] .= true # additional magweight, magweight2
else
throw(ArgumentError("Weight '$weights' not defined!"))
end
return calculateweights_romeo(wrapped, flags; kwargs...)
end
function calculateweights_romeo(wrapped, flags::AbstractArray{Bool,1}; type::Type{T}=UInt8, rescale=rescale, kwargs...) where T
mask, P2, TEs, M, maxmag = parsekwargs(kwargs, wrapped)
flags = updateflags(flags, P2, TEs, M)
stridelist = getdimoffsets(wrapped)
weights = zeros(T, 3, size(wrapped)...)
Threads.@threads for dim in 1:3
neighbor = stridelist[dim]
for I in LinearIndices(wrapped)
J = I + neighbor
if mask[I] && checkbounds(Bool, wrapped, J)
w = getweight(wrapped, I, J, P2, TEs, M, maxmag, flags)
weights[dim + (I-1)*3] = rescale(w)
end
end
end
return weights
end
## utility functions
function parsekwargs(kwargs, wrapped)
getval(key) = if haskey(kwargs, key) kwargs[key] else nothing end
mag = getval(:mag)
mask = getval(:mask)
if !isnothing(mask)
if !isnothing(mag)
mag = mag .* mask
end
else
mask = trues(size(wrapped))
end
maxmag = if !isnothing(mag)
quantile(mag[isfinite.(mag)], 0.95) else nothing
end
return mask, getval(:phase2), getval(:TEs), mag, maxmag
end
function updateflags(flags, P2, TEs, M)
flags = copy(flags)
if isnothing(M)
flags[4:6] .= false
end
if isnothing(P2) || isnothing(TEs)
flags[2] = false
end
return flags
end
# from: 1 is best and 0 worst
# to: 1 is best, NBINS is worst, 0 is not valid (not added to queue)
function rescale(w)
if 0 ≤ w ≤ 1
max(round(Int, (1 - w) * (NBINS - 1)), 1)
else
0
end
end
## best path weights
# Abdul-Rahamn https://doi.org/10.1364/AO.46.006623
function calculateweights_bestpath(wrapped; kwargs...)
scale(w) = UInt8.(min(max(round((1 - (w / 10)) * (NBINS - 1)), 1), 255))
weights = scale.(getbestpathweight(wrapped))
if haskey(kwargs, :mask) # apply mask to weights
mask = kwargs[:mask]
weights .*= reshape(mask, 1, size(mask)...)
end
weights
end
function getbestpathweight(φ)
R = 1 ./ getD(φ)
weight = zeros(3, size(R)...)
for idim in 1:3
n = getdimoffsets(R)[idim]
@inbounds for i in 1:length(R)-n
weight[idim + 3i] = R[i] + R[i+n]
end
end
return weight
end
function getD(φ)
directions = Iterators.product(-1:1,-1:1,-1:1)
neighbors = unique(abs(sum(getdimoffsets(φ) .* d)) for d in directions if d != (0,0,0))
D2 = zeros(size(φ))
@inbounds for n in neighbors, i in 1+n:length(φ)-n
D2[i] += (γ(φ[i-n] - φ[i]) - γ(φ[i] - φ[i+n])) ^ 2
end
return sqrt.(D2)
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 3498 | @testset "Unwrap 1D" begin
@test unwrap([0.1, 0.2, 0.3, 0.4]) ≈ [0.1, 0.2, 0.3, 0.4]
@test unwrap([0.1, 0.2 + 2pi, 0.3, 0.4]) ≈ [0.1, 0.2, 0.3, 0.4]
@test unwrap([0.1, 0.2 - 2pi, 0.3, 0.4]) ≈ [0.1, 0.2, 0.3, 0.4]
@test unwrap([0.1, 0.2 - 2pi, 0.3 - 2pi, 0.4]) ≈ [0.1, 0.2, 0.3, 0.4]
@test unwrap([0.1 + 2pi, 0.2, 0.3, 0.4]) ≈ [0.1, 0.2, 0.3, 0.4]
@test unwrap([0.1, 0.2 + 2pi, 0.3, 0.4]) ≈ [0.1, 0.2, 0.3, 0.4]
test_v = [0.1, 0.2, 0.3 + 2pi, 0.4]
res_v = unwrap(test_v)
@test test_v ≈ [0.1, 0.2, 0.3 + 2pi, 0.4]
res_v .= 0
unwrap!(test_v)
@test test_v ≈ [0.1, 0.2, 0.3, 0.4]
# test unwrapping within multi-dimensional array
wrapped = [0.1, 0.2 + 2pi, 0.3, 0.4]
unwrapped = [0.1, 0.2, 0.3, 0.4]
wrapped = hcat(wrapped, wrapped)
unwrapped = hcat(unwrapped, unwrapped)
# test generically typed unwrapping
types = (Float32, Float64)
for T in types
A_unwrapped = collect(range(0, stop=4convert(T, π), length=10))
A_wrapped = A_unwrapped .% (2convert(T, π))
test(I, J) = I ≈ J || I .+ 2π ≈ J || I .+ 4π ≈ J
@test test(unwrap(A_wrapped), A_unwrapped)
unwrap!(A_wrapped)
@test test(A_wrapped, A_unwrapped)
end
end
# tests for multi-dimensional unwrapping
@testset "Unwrap 2D" begin
types = (Float32, Float64)
for T in types
v_unwrapped = collect(range(0, stop=4convert(T, π), length=7))
A_unwrapped = v_unwrapped .+ v_unwrapped'
A_wrapped = A_unwrapped .% (2convert(T, π))
test_unwrapped = unwrap(A_wrapped)
d = first(A_unwrapped) - first(test_unwrapped)
@test (test_unwrapped .+ d) ≈ A_unwrapped
unwrap!(A_wrapped)
d = first(A_unwrapped) - first(A_wrapped)
@test (A_wrapped .+ d) ≈ A_unwrapped
v_unwrapped_range = collect(range(0, stop=4, length=7))
A_unwrapped_range = v_unwrapped_range .+ v_unwrapped_range'
test_range = convert(T, 2)
A_wrapped_range = A_unwrapped_range .% test_range
end
end
@testset "Unwrap 3D" begin
types = (Float32, Float64)
f(x, y, z) = 0.1x^2 - 2y + 2z
f_wraparound2(x, y, z) = 5*sin(x) + 2*cos(y) + z
f_wraparound3(x, y, z) = 5*sin(x) + 2*cos(y) - 4*cos(z)
for T in types
grid = range(zero(T), stop=2convert(T, π), length=11)
f_uw = f.(grid, grid', reshape(grid, 1, 1, :))
f_wr = f_uw .% (2convert(T, π))
uw_test = unwrap(f_wr)
offset = first(f_uw) - first(uw_test)
@test (uw_test.+offset) ≈ f_uw rtol=eps(T) #oop, nowrap
# test in-place version
unwrap!(f_wr)
offset = first(f_uw) - first(f_wr)
@test (f_wr.+offset) ≈ f_uw rtol=eps(T) #ip, nowrap
f_uw = f_wraparound2.(grid, grid', reshape(grid, 1, 1, :))
f_wr = f_uw .% (2convert(T, π))
uw_test = unwrap(f_wr)
offset = first(f_uw) - first(uw_test)
@test (uw_test.+offset) ≈ f_uw #oop, 2wrap
# test in-place version
unwrap!(f_wr)
offset = first(f_uw) - first(f_wr)
@test (f_wr.+offset) ≈ f_uw #ip, 2wrap
f_uw = f_wraparound3.(grid, grid', reshape(grid, 1, 1, :))
f_wr = f_uw .% (2convert(T, π))
uw_test = unwrap(f_wr)
offset = first(f_uw) - first(uw_test)
@test (uw_test.+offset) ≈ f_uw #oop, 3wrap
# test in-place version
unwrap!(f_wr)
offset = first(f_uw) - first(f_wr)
        @test (f_wr.+offset) ≈ f_uw #ip, 3wrap
end
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 443 | @testset "Features" begin
for l in 7:5:20
for offset in 2π .* [0, 2, -1, 10, -50]
phase_uw = collect(range(-2π; stop=2π, length=l))
phase = rem2pi.(phase_uw, RoundNearest) .+ offset
for mag in [nothing, collect(range(10; stop=1, length=l)), collect(range(1; stop=10, length=l))]
unwrapped = unwrap(phase; mag=mag, correctglobal=true)
@test unwrapped ≈ phase_uw
end
end
end
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 2723 | @testset "MRI tests" begin
phasefile = joinpath("data", "small", "Phase.nii")
magfile = joinpath("data", "small", "Mag.nii")
echo = 3
echo2 = 2
phase4D = niread(phasefile).raw
mag4D = niread(magfile).raw
phase = phase4D[:,:,:,echo]
mag = mag4D[:,:,:,echo]
phase2 = niread(phasefile).raw[:,:,:,echo2]
TEs = [echo, echo2]
## accuracy tests
function unwrap_test(wrapped; keyargs...)
unwrapped = unwrap(wrapped; keyargs...)
# test that unwrapped is not a copy of phase
@test unwrapped != wrapped
# test that all resulting values are only 2π different
@test all(isapprox.(rem2pi.(unwrapped - wrapped, RoundNearest), 0; atol=1e-5))
unwrapped
end
t = []
push!(t, unwrap_test(phase))
push!(t, unwrap_test(phase; mag=mag))
#push!(t, unwrap_test(phase4D)) #TODO use test data set with noise to see difference
push!(t, unwrap_test(phase4D; mag=mag4D, TEs=TEs))
push!(t, unwrap_individual(phase4D; mag=mag4D, TEs=TEs))
push!(t, unwrap_test(phase; weights=:bestpath))
#push!(t, unwrap_test(phase; weights=:romeo, mask=robustmask(mag)))
push!(t, unwrap_test(phase; weights=:romeo2, mag=mag, TEs=TEs, phase2=phase2))
push!(t, unwrap_test(phase; weights=:romeo3, mag=mag, TEs=TEs, phase2=phase2))
push!(t, unwrap_test(phase; weights=:romeo4, mag=mag, TEs=TEs, phase2=phase2))
push!(t, unwrap_test(phase; mag=mag, maxseeds=50))
push!(t, unwrap_test(phase4D; mag=mag4D, TEs=TEs, template=2))
push!(t, unwrap_test(phase4D; mag=mag4D, TEs=TEs, template=2, temporal_uncertain_unwrapping=true))
# all results should be different
for i in 1:length(t), j in 1:(i-1)
if (t[i] == t[j])
@show i j
end
@test t[i] != t[j]
end
t = []
push!(t, unwrap_test(phase; mag=mag, maxseeds=50))
push!(t, unwrap_test(phase; mag=mag, maxseeds=50, merge_regions=true, correct_regions=true))
#TODO correct_regions does not affect outcome
#push!(t, unwrap_test(phase; mag=mag, maxseeds=50, merge_regions=true))
for i in 1:length(t), j in 1:(i-1)
@test t[i] != t[j]
end
@test unwrap_individual(phase4D; mag=mag4D, TEs=TEs) == unwrap(phase4D; mag=mag4D, TEs=TEs, individual=true)
## performance tests (not at beginning to avoid first run overhead)
if VERSION ≥ v"1.8" # different performance on older julia versions
@test (@timed unwrap(phase))[5].poolalloc < 6e3
@test (@timed unwrap(phase; mag=mag))[5].poolalloc < 7e3
@test (@timed unwrap(phase; weights=:bestpath))[5].poolalloc < 3e4
end
## NaN tests
nanphase = copy(phase)
nanphase[1,:,:] .= NaN
nan_unwrapped = unwrap_test(phase)
nan_unwrapped[1,:,:] .= NaN
@test nan_test(unwrap(nanphase), nan_unwrapped)
nanmag = copy(mag)
nanmag[1,:,:] .= NaN
@test nan_test(unwrap(phase; mag=nanmag)[2:end,:,:], unwrap_test(phase; mag=mag)[2:end,:,:])
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 544 | using ROMEO
using Test
using MriResearchTools
nan_test(I1, I2) = I1[.!isnan.(I1)] ≈ I2[.!isnan.(I2)]
@testset "ROMEO.jl" begin
include("features.jl")
include("specialcases.jl")
include("dsp_tests.jl")
include("mri.jl")
include("voxelquality.jl")
#include("timing.jl")
end
if VERSION ≥ v"1.9"
@testset "RomeoApp" begin
using ArgParse
include("RomeoApp/dataset_small.jl")
include("RomeoApp/dataset_small2.jl")
end
end
## print version to verify
println()
unwrapping_main(["--version"])
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 1164 | @testset "Special Cases" begin
phase = ones(3,3,3)
unwrap(phase)
point = ones(1,1,1)
@test_throws AssertionError unwrap(point)
## weight test
function weight_test(w, out)
w = reshape(w, length(w), 1, 1)
@test UInt8.(out) == ROMEO.calculateweights(w)[1,:,1,1]
end
weight_test([0.1, 0.2 + 2pi, 0.3, 0.4], [30, 7, 30, 0]) # phase linearity penalty (30) at borders
if VERSION ≥ v"1.8" # different NaN handling on older julia versions
weight_test([0.1, 0.2 + 2pi, 0.3, NaN], [30, 119, 0, 0]) # 119 to voxel bordering to NaN voxel, 0 to NaN voxel
end
## NaN test
@test nan_test(unwrap([0.1, 0.2 + 2pi, 0.3, 0.4]), [0.1, 0.2, 0.3, 0.4])
@test nan_test(unwrap([0.1, 0.2 + 2pi, 0.3, NaN]), [0.1, 0.2, 0.3, NaN])
@test nan_test(unwrap([0.1, 0.2 + 2pi, 0.3, 0.4]; mag=[1, 1, 1, NaN]), [0.1, 0.2, 0.3, 0.4])
## test dimensions
function dim_test(a; keyargs...)
ca = copy(a)
ua = unwrap(a; keyargs...)
@test a != ua
@test size(a) == size(ua)
@test a == ca
unwrap!(a; keyargs...)
@test a != ca
@test a == ua
end
dim_test(10rand(50))
dim_test(10rand(50,20))
dim_test(10rand(15,20,10))
dim_test(10rand(7,9,10,3); TEs=ones(3))
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 460 | using BenchmarkTools
phasefile = joinpath("data", "small", "Phase.nii")
magfile = joinpath("data", "small", "Mag.nii")
echo = 3
phase = niread(phasefile).raw[:,:,:,echo]
mag = niread(magfile).raw[:,:,:,echo]
big(I, factor=3) = repeat(I; outer=factor .* [1,1,1])
bigphase = big(phase)
bigmag = big(mag)
@btime ROMEO.calculateweights($bigphase; mag=$bigmag)
@btime unwrap($bigphase; mag=$bigmag)
# akku - high power
# 450ms
# 1.4s
# PC
# 360ms
# 1.1s
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 390 | @testset "voxelquality" begin
phasefile = joinpath("data", "small", "Phase.nii")
magfile = joinpath("data", "small", "Mag.nii")
phase4D = niread(phasefile).raw
mag4D = niread(magfile).raw
qm1 = voxelquality(phase4D[:,:,:,1])
qm2 = voxelquality(phase4D[:,:,:,1]; mag=mag4D[:,:,:,1])
    qm3 = voxelquality(phase4D; mag=mag4D, TEs=[4,8,12])
@test qm1 != qm2
@test qm1 != qm3
@test qm2 != qm3
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 9573 | @testset "ROMEO function tests" begin
original_path = abspath(".")
p = abspath(joinpath("data", "small"))
tmpdir = mktempdir()
cd(tmpdir)
phasefile_me = joinpath(p, "Phase.nii")
phasefile_me_nan = joinpath(p, "phase_with_nan.nii")
magfile_me = joinpath(p, "Mag.nii")
phasefile_1eco = joinpath(tmpdir, "Phase.nii")
phasefile_2D = joinpath(tmpdir, "Phase2D.nii")
magfile_1eco = joinpath(tmpdir, "Mag.nii")
magfile_2D = joinpath(tmpdir, "Mag2D.nii")
phasefile_1arreco = joinpath(tmpdir, "Phase.nii")
magfile_1arreco = joinpath(tmpdir, "Mag.nii")
maskfile = joinpath(tmpdir, "Mask.nii")
savenii(niread(magfile_me)[:,:,:,1] |> I -> I .> MriResearchTools.median(I), maskfile)
savenii(niread(phasefile_me)[:,:,:,1], phasefile_1eco)
savenii(niread(magfile_me)[:,:,:,1], magfile_1eco)
savenii(niread(phasefile_me)[:,:,:,[1]], phasefile_1arreco)
savenii(niread(magfile_me)[:,:,:,[1]], magfile_1arreco)
savenii(niread(phasefile_me)[:,:,1,1], phasefile_2D)
savenii(niread(magfile_me)[:,:,1,1], magfile_2D)
phasefile_me_5D = joinpath(tmpdir, "phase_multi_channel.nii")
magfile_5D = joinpath(tmpdir, "mag_multi_channel.nii")
savenii(repeat(niread(phasefile_me),1,1,1,1,2), phasefile_me_5D)
savenii(repeat(niread(magfile_me),1,1,1,1,2), magfile_5D)
function test_romeo(args)
folder = tempname()
args = [args..., "-o", folder]
try
msg = unwrapping_main(args)
@test msg == 0
@test isfile(joinpath(folder, "unwrapped.nii"))
catch e
println(args)
println(sprint(showerror, e, catch_backtrace()))
@test "test failed" == "with error" # signal a failed test
end
end
configurations_se(pf, mf) = vcat(configurations_se.([[pf], [pf, "-m", mf]])...)
configurations_se(pm) = [
[pm...],
[pm..., "-g"],
[pm..., "-N"],
[pm..., "-i"],
[pm..., "-q"],
[pm..., "-Q"],
[pm..., "-u"],
[pm..., "-w", "romeo"],
[pm..., "-w", "bestpath"],
[pm..., "-w", "1010"],
[pm..., "-w", "101011"],
[pm..., "--threshold", "4"],
[pm..., "-s", "50"],
[pm..., "-s", "50", "--merge-regions"],
[pm..., "-s", "50", "--merge-regions", "--correct-regions"],
[pm..., "--wrap-addition", "0.1"],
[pm..., "-k", "robustmask"],
[pm..., "-k", "nomask"],
[pm..., "-k", "qualitymask"],
[pm..., "-k", "qualitymask", "0.1"],
]
configurations_me(phasefile_me, magfile_me) = vcat(configurations_me.([[phasefile_me], [phasefile_me, "-m", magfile_me]])...)
configurations_me(pm) = [
[pm..., "-e", "1:2", "-t", "[2,4]"], # giving two echo times for two echoes used out of three
[pm..., "-e", "[1,3]", "-t", "[2,4,6]"], # giving three echo times for two echoes used out of three
[pm..., "-e", "[1", "3]", "-t", "[2,4,6]"],
[pm..., "-t", "[2,4,6]"],
[pm..., "-t", "2:2:6"],
[pm..., "-t", "[2.1,4.2,6.3]"],
[pm..., "-t", "epi"], # shorthand for "ones(<num-echoes>)"
[pm..., "-t", "epi", "5.3"], # shorthand for "5.3*ones(<num-echoes>)"
[pm..., "-B", "-t", "[2,4,6]"],
[pm..., "-B", "-t", "[2" ,"4", "6]"], # when written like [2 4 6] in command line
[pm..., "--temporal-uncertain-unwrapping", "-t", "[2,4,6]"],
[pm..., "--template", "1", "-t", "[2,4,6]"],
[pm..., "--template", "3", "-t", "[2,4,6]"],
[pm..., "--phase-offset-correction", "-t", "[2,4,6]"],
[pm..., "--phase-offset-correction", "bipolar", "-t", "[2,4,6]"],
[pm..., "--phase-offset-correction", "-t", "[2,4,6]", "--phase-offset-smoothing-sigma-mm", "[5,8,4]"],
[pm..., "--phase-offset-correction", "-t", "[2,4,6]", "--write-phase-offsets"],
]
files = [(phasefile_1eco, magfile_1eco), (phasefile_1arreco, magfile_1arreco), (phasefile_1eco, magfile_1arreco), (phasefile_1arreco, magfile_1eco), (phasefile_2D, magfile_2D)]
for (pf, mf) in files, args in configurations_se(pf, mf)
test_romeo(args)
end
for args in configurations_me(phasefile_me, magfile_me)
test_romeo(args)
end
for args in configurations_se(["-p", phasefile_me, "-m", magfile_me, "-t", "[2,4,6]"])
test_romeo(args)
end
for args in configurations_me(phasefile_me_5D, magfile_5D)[end-2:end] # test the last 3 configurations_me lines for coil combination
test_romeo(args)
end
files_se = [(phasefile_1eco, magfile_1eco), (phasefile_1arreco, magfile_1arreco)]
for (pf, mf) in files_se
b_args = ["-B", "-t", "3.06"]
test_romeo(["-p", pf, b_args...])
test_romeo(["-p", pf, "-m", mf, b_args...])
end
test_romeo([phasefile_me_nan, "-t", "[2,4]", "-k", "nomask"])
## Test error and warning messages
m = "multi-echo data is used, but no echo times are given. Please specify the echo times using the -t option."
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_me, "-o", tmpdir, "-v"])
m = "masking option '0.8' is undefined (Maybe '-k qualitymask 0.8' was meant?)"
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_1eco, "-o", tmpdir, "-v", "-k", "0.8"])
m = "masking option 'blub' is undefined"
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_1eco, "-o", tmpdir, "-v", "-k", "blub"])
m = "Phase offset determination requires all echo times! (2 given, 3 required)"
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_me_5D, "-o", tmpdir, "-v", "-t", "[1,2]", "-e", "[1,2]", "--phase-offset-correction"])
m = "echoes=[1,5]: specified echo out of range! Number of echoes is 3"
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_me, "-o", tmpdir, "-v", "-t", "[1,2,3]", "-e", "[1,5]"])
m = "echoes=[1,5} wrongly formatted!"
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_me, "-o", tmpdir, "-v", "-t", "[1,2,3]", "-e", "[1,5}"])
m = "Number of chosen echoes is 2 (3 in .nii data), but 5 TEs were specified!"
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_me, "-o", tmpdir, "-v", "-t", "[1,2,3,4,5]", "-e", "[1,2]"])
m = "size of magnitude and phase does not match!"
@test_throws ErrorException(m) unwrapping_main(["-p", phasefile_me, "-o", tmpdir, "-v", "-t", "[1,2,3]", "-m", magfile_1eco])
m = "robustmask was chosen but no magnitude is available. No mask is used!"
@test_logs (:warn, m) match_mode=:any unwrapping_main(["-p", phasefile_1eco, "-o", tmpdir])
m = "The echo times 1 and 2 ([1.1, 1.1, 1.1]) need to be different for MCPC-3D-S phase offset correction! No phase offset correction performed"
@test_logs (:warn, m) match_mode=:any unwrapping_main(["-p", phasefile_me, "-m", magfile_me, "-o", tmpdir, "-t", "[1.1, 1.1, 1.1]", "--phase-offset-correction"])
@test_logs unwrapping_main(["-p", phasefile_1eco, "-o", tmpdir, "-m", magfile_1eco]) # test that no warning appears
## test maskfile
unwrapping_main([phasefile_1eco, "-k", maskfile])
## test no-rescale
phasefile_me_uw = joinpath(tempname(), "unwrapped.nii")
phasefile_me_uw_wrong = joinpath(tempname(), "wrong_unwrapped.nii")
phasefile_me_uw_again = joinpath(tempname(), "again_unwrapped.nii")
phasefile_me_uw_legacy_rescale = joinpath(tempname(), "legacy_rescale_unwrapped.nii")
unwrapping_main([phasefile_me, "-o", phasefile_me_uw, "-t", "[2,4,6]"])
unwrapping_main([phasefile_me_uw, "-o", phasefile_me_uw_wrong, "-t", "[2,4,6]"])
unwrapping_main([phasefile_me_uw, "-o", phasefile_me_uw_again, "-t", "[2,4,6]", "--no-phase-rescale"])
unwrapping_main([phasefile_me_uw, "-o", phasefile_me_uw_legacy_rescale, "-t", "[2,4,6]", "--no-rescale"]) # legacy
@test readphase(phasefile_me_uw_again; rescale=false).raw == readphase(phasefile_me_uw; rescale=false).raw
@test readphase(phasefile_me_uw_legacy_rescale; rescale=false).raw == readphase(phasefile_me_uw; rescale=false).raw
@test readphase(phasefile_me_uw_wrong; rescale=false).raw != readphase(phasefile_me_uw; rescale=false).raw
## test ROMEO output files
testpath = joinpath(tmpdir, "test_name_1")
unwrapping_main([phasefile_1eco, "-o", testpath])
@test isfile(joinpath(testpath, "unwrapped.nii"))
testpath = joinpath(tmpdir, "test_name_2")
fn = joinpath(testpath, "unwrap_name.nii")
unwrapping_main([phasefile_1eco, "-o", fn])
@test isfile(fn)
testpath = joinpath(tmpdir, "test_name_2")
gz_fn = joinpath(testpath, "unwrap_name.nii.gz")
unwrapping_main([phasefile_1eco, "-o", gz_fn])
@test isfile(gz_fn)
## test .gz input file
unwrapping_main([gz_fn, "-o", joinpath(testpath, "gz_read_test.nii")])
unwrapping_main([gz_fn, "-m", gz_fn, "-o", joinpath(testpath, "gz_read_test.nii")])
## test mcpc3ds output files
testpath = joinpath(tmpdir, "test5d")
unwrapping_main([phasefile_me_5D, "-o", testpath, "-m", magfile_5D, "-t", "[2,4,6]"])
@test isfile(joinpath(testpath, "combined_mag.nii"))
@test isfile(joinpath(testpath, "combined_phase.nii"))
testpath = joinpath(tmpdir, "test4d")
unwrapping_main([phasefile_me, "-o", testpath, "-m", magfile_me, "-t", "[2,4,6]", "--phase-offset-correction"])
@test isfile(joinpath(testpath, "corrected_phase.nii"))
## test B0 output files
testpath = joinpath(tmpdir, "testB0_1")
unwrapping_main([phasefile_me, "-o", testpath, "-m", magfile_me, "-t", "[2,4,6]", "-B"])
@test isfile(joinpath(testpath, "B0.nii"))
testpath = joinpath(tmpdir, "testB0_2")
name = "B0_output"
unwrapping_main([phasefile_me, "-o", testpath, "-m", magfile_me, "-t", "[2,4,6]", "-B", name])
@test isfile(joinpath(testpath, "$name.nii"))
## TODO add and test homogeneity corrected output
## test quality map
unwrapping_main([phasefile_me, "-m", magfile_me, "-o", tmpdir, "-t", "[2,4,6]", "-qQ"])
fns = joinpath.(tmpdir, ["quality.nii", ("quality_$i.nii" for i in 1:4)...])
for i in 1:length(fns), j in i+1:length(fns)
@test niread(fns[i]).raw != niread(fns[j]).raw
end
cd(original_path)
GC.gc()
rm(tmpdir, recursive=true)
end
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | code | 266 | @testset "Dataset small2" begin
p = joinpath("data", "small2")
phasefile_me = joinpath(p, "Phase.nii")
fn_mag = joinpath(p, "Mag.nii")
fn_phase = joinpath(p, "Phase.nii")
tmpdir = mktempdir()
unwrapping_main(["-p", fn_phase, "-m", fn_mag, "-o", tmpdir, "-v"])
end | ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | docs | 3543 | # ROMEO Unwrapping
[](https://korbinian90.github.io/ROMEO.jl/dev)
[](https://github.com/korbinian90/ROMEO.jl/actions/workflows/ci.yml)
[](https://codecov.io/gh/korbinian90/ROMEO.jl)
Please cite [ROMEO](https://onlinelibrary.wiley.com/doi/10.1002/mrm.28563) if you use it!
## Getting Started
This repository contains ROMEO 3D/4D unwrapping on arrays and a command line interface for MR data in NIfTI format.
A compiled command line tool is available under [ROMEO](https://github.com/korbinian90/ROMEO) (Windows and Linux executables; no Julia installation required). Otherwise, for opening NIfTI files in Julia, [NIfTI.jl](https://github.com/JuliaIO/NIfTI.jl) or [MriResearchTools.jl](https://github.com/korbinian90/MriResearchTools.jl) can be helpful.
### Usage - command line
Install Julia 1.9 or newer (https://julialang.org)
Copy the file [romeo.jl](https://github.com/korbinian90/ROMEO.jl/blob/master/romeo.jl) from this repository to a convenient location. An alias for `romeo` as `julia <path-to-file>/romeo.jl` might be useful.
```bash
$ julia <path-to-file>/romeo.jl phase.nii -m mag.nii -t [2.1,4.2,6.3] -o results
```
On the first run, the dependencies will be installed automatically.
For an extended explanation of the command line interface see [ROMEO](https://github.com/korbinian90/ROMEO)
### Usage - Julia
```julia
using ROMEO
unwrapped = unwrap(phasedata3D; mag=magdata3D)
```
or via MriResearchTools:
```julia
using MriResearchTools
phase4D = readphase("Phase.nii") # 4D phase in NIfTI format
unwrapped = unwrap(phase4D; TEs=[4,8,12])
```
### Function Reference
https://korbinian90.github.io/ROMEO.jl/dev
## Different Use Cases
### Multi-Echo
If multi-echo data is available, supplying ROMEO with multi-echo information should improve the unwrapping accuracy. The same is true for magnitude information.
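For example, a minimal call that passes both the echo times and the magnitude (the variable names `phase4D`, `mag4D` and the echo times are placeholders for your own data):

```julia
using ROMEO
# 4D phase and magnitude arrays (x, y, z, echo) with three echoes
unwrapped = unwrap(phase4D; mag=mag4D, TEs=[4, 8, 12])
```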
### Phase Offsets
If the multi-echo data contains **large phase offsets** (phase at echo time zero), default template unwrapping might fail. Setting the `individual-unwrapping` flag is a solution, as it performs spatial unwrapping for each echo instead. The computed B0 map is not corrected for remaining phase offsets.
For proper handling, the phase offsets can be removed using `mcpc3ds` from `MriResearchTools`. This works for monopolar and bipolar data, both for combined and uncombined channels. However, if the phase is already "corrupted" by other coil combination algorithms, it might not be possible to estimate and remove the phase offsets.
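A minimal sketch of the two options (variable names and echo times are placeholders; the exact `mcpc3ds` call form is an assumption here and should be checked in the MriResearchTools documentation):

```julia
using ROMEO
# Option 1: unwrap each echo spatially on its own, which is robust to large phase offsets
unwrapped = unwrap(phase4D; mag=mag4D, TEs=[4, 8, 12], individual=true)

# Option 2 (assumed call form): remove the phase offsets first with MriResearchTools,
# then apply the default template unwrapping to the corrected phase
# using MriResearchTools
# corrected = mcpc3ds(phase4D, mag4D; TEs=[4, 8, 12])
```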
### Repeated Measurements (EPI)
4D data with an equal echo time for all volumes should be unwrapped as 4D for best accuracy and temporal stability. The echo times can be set to `TEs=ones(size(phase,4))`.
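A minimal sketch for such a time series (with `phase4D` standing in for your own repeated-measurement data):

```julia
using ROMEO
# every volume shares the same echo time, so a vector of ones is sufficient
unwrapped = unwrap(phase4D; TEs=ones(size(phase4D, 4)))
```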
### Setting the Template Echo
In certain cases, the phase of the first echo/time-point looks differently than the rest of the acquisition, which can occur due to flow compensation of only the first echo or not having reached the steady state in fMRI. This might cause template unwrapping to fail, as the first echo is chosen as the template by default.
With the optional argument `template=2`, this can be changed to the second (or any other) echo/time-point.
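For example (placeholder data names and echo times):

```julia
using ROMEO
# use the second echo as the template for spatial unwrapping
unwrapped = unwrap(phase4D; mag=mag4D, TEs=[4, 8, 12], template=2)
```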
## License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/korbinian90/ROMEO.jl/blob/master/LICENSE) for details
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.2.0 | 37477fcfbb6c6c23abdfa6f24e624a20b4301814 | docs | 164 | ```@meta
CurrentModule = ROMEO
```
# ROMEO
Documentation for [ROMEO](https://github.com/korbinian90/ROMEO.jl).
```@index
```
```@autodocs
Modules = [ROMEO]
```
| ROMEO | https://github.com/korbinian90/ROMEO.jl.git |
|
[
"MIT"
] | 1.1.2 | 74619231a7aa262a76f82ae05c7385622d8a5945 | code | 1005 | using PlyIO
ply = Ply()
push!(ply, PlyComment("An example ply file"))
nverts = 1000
# Random vertices with position and color
vertex = PlyElement("vertex",
ArrayProperty("x", randn(nverts)),
ArrayProperty("y", randn(nverts)),
ArrayProperty("z", randn(nverts)),
ArrayProperty("r", rand(nverts)),
ArrayProperty("g", rand(nverts)),
ArrayProperty("b", rand(nverts)))
push!(ply, vertex)
# Some triangular faces
vertex_index = ListProperty("vertex_index", Int32, Int32)
for i=1:nverts
push!(vertex_index, rand(0:nverts-1,3))
end
push!(ply, PlyElement("face", vertex_index))
# Some edges
vertex_index = ListProperty("vertex_index", Int32, Int32)
for i=1:nverts
push!(vertex_index, rand(0:nverts-1,2))
end
push!(ply, PlyElement("edge", vertex_index))
# For the sake of the example, ascii format is used, the default binary mode is faster.
save_ply(ply, "example1.ply", ascii=true)
| PlyIO | https://github.com/JuliaGeometry/PlyIO.jl.git |
|
[
"MIT"
] | 1.1.2 | 74619231a7aa262a76f82ae05c7385622d8a5945 | code | 309 | __precompile__()
module PlyIO
# Types for the ply data model
export Ply, PlyElement, PlyComment, ArrayProperty, ListProperty
export plyname # Is there something in base we could overload for this?
# High level file IO
# (TODO: FileIO?)
export load_ply, save_ply
include("types.jl")
include("io.jl")
end
| PlyIO | https://github.com/JuliaGeometry/PlyIO.jl.git |
|
[
"MIT"
] | 1.1.2 | 74619231a7aa262a76f82ae05c7385622d8a5945 | code | 14314 | #-------------------------------------------------------------------------------
# Ply file IO functionality
#--------------------------------------------------
# Header IO
@enum Format Format_ascii Format_binary_little Format_binary_big
function ply_type(type_name)
if type_name == "char" || type_name == "int8"; return Int8
elseif type_name == "short" || type_name == "int16"; return Int16
elseif type_name == "int" || type_name == "int32" ; return Int32
elseif type_name == "int64"; return Int64
elseif type_name == "uchar" || type_name == "uint8"; return UInt8
elseif type_name == "ushort" || type_name == "uint16"; return UInt16
elseif type_name == "uint" || type_name == "uint32"; return UInt32
elseif type_name == "uint64"; return UInt64
elseif type_name == "float" || type_name == "float32"; return Float32
elseif type_name == "double" || type_name == "float64"; return Float64
else
error("type_name $type_name unrecognized/unimplemented")
end
end
ply_type_name(::Type{UInt8}) = "uint8"
ply_type_name(::Type{UInt16}) = "uint16"
ply_type_name(::Type{UInt32}) = "uint32"
ply_type_name(::Type{UInt64}) = "uint64"
ply_type_name(::Type{Int8}) = "int8"
ply_type_name(::Type{Int16}) = "int16"
ply_type_name(::Type{Int32}) = "int32"
ply_type_name(::Type{Int64}) = "int64"
ply_type_name(::Type{Float32}) = "float32"
ply_type_name(::Type{Float64}) = "float64"
ply_type_name(::UInt8) = "uint8"
ply_type_name(::UInt16) = "uint16"
ply_type_name(::UInt32) = "uint32"
ply_type_name(::UInt64) = "uint64"
ply_type_name(::Int8) = "int8"
ply_type_name(::Int16) = "int16"
ply_type_name(::Int32) = "int32"
ply_type_name(::Int64) = "int64"
ply_type_name(::Float32) = "float32"
ply_type_name(::Float64) = "float64"
const PlyNativeType = Union{UInt8,UInt16,UInt32,UInt64,Int8,Int16,Int32,Int64,Float32,Float64}
ply_type_name(A::AbstractArray{T}) where {T<:PlyNativeType} = ply_type_name(T)
ply_type_name(A::AbstractArray) = !isempty(A) ? ply_type_name(A[1]) :
error("Unknown ply element type name for empty array of type $(typeof(A))")
const _host_is_little_endian = (ENDIAN_BOM == 0x04030201)
function read_header(ply_file)
firstline = readline(ply_file)
if firstline != "ply"
throw(ErrorException("Expected \"ply\" header, got \"$firstline\""))
end
element_name = ""
element_numel = 0
element_props = Vector{AbstractVector}()
elements = PlyElement[]
comments = PlyComment[]
format = nothing
while true
line = readline(ply_file)
if line == "" && eof(ply_file)
throw(ErrorException("Unexpected end of file reading ply header"))
elseif line == "end_header"
break
elseif startswith(line, "comment")
push!(comments, PlyComment(strip(line[8:end]), false, length(elements)+1))
elseif startswith(line, "obj_info")
push!(comments, PlyComment(strip(line[9:end]), true, length(elements)+1))
else
tokens = split(line)
length(tokens) > 2 || throw(ErrorException("Bad ply header, line: \"$line\""))
if tokens[1] == "format"
length(tokens) == 3 || throw(ErrorException("Bad ply header, line: \"$line\""))
_, format_type, format_version = tokens
if format_version != "1.0"
throw(ErrorException("Expected ply version 1.0, got $format_version"))
end
format = format_type == "ascii" ? Format_ascii :
format_type == "binary_little_endian" ? Format_binary_little :
format_type == "binary_big_endian" ? Format_binary_big :
error("Unknown ply format $format_type")
elseif tokens[1] == "element"
if !isempty(element_name)
push!(elements, PlyElement(element_name, element_numel, element_props))
element_props = Vector{AbstractVector}()
end
length(tokens) == 3 || throw(ErrorException("Bad ply header, line: \"$line\""))
_, element_name, element_numel = tokens
element_numel = parse(Int,element_numel)
elseif tokens[1] == "property"
!isempty(element_name) || throw(ErrorException("Bad ply header: property before first element"))
if tokens[2] == "list"
length(tokens) == 5 || throw(ErrorException("Bad ply header, line: \"$line\""))
count_type_name, type_name, prop_name = tokens[3:end]
count_type = ply_type(count_type_name)
type_ = ply_type(type_name)
push!(element_props, ListProperty(prop_name, ply_type(count_type_name), ply_type(type_name)))
else
length(tokens) == 3 || throw(ErrorException("Bad ply header, line: \"$line\""))
type_name, prop_name = tokens[2:end]
push!(element_props, ArrayProperty(prop_name, ply_type(type_name)))
end
else
throw(ErrorException("Bad ply header, line: \"$line\""))
end
end
end
if !isempty(element_name)
push!(elements, PlyElement(element_name, element_numel, element_props))
end
elements, format, comments
end
function write_header_field(stream::IO, prop::ArrayProperty)
println(stream, "property $(ply_type_name(prop.data)) $(prop.name)")
end
function write_header_field(stream::IO, prop::ArrayProperty{T,Names}) where {T,Names<:PropNameList}
for n in prop.name
println(stream, "property $(ply_type_name(prop.data)) $(n)")
end
end
function write_header_field(stream::IO, prop::ListProperty{S}) where {S}
println(stream, "property list $(ply_type_name(S)) $(ply_type_name(prop.data)) $(prop.name)")
end
function write_header_field(stream::IO, comment::PlyComment)
prefix = comment.obj_info ? "obj_info " : "comment "
println(stream, prefix, comment.comment)
end
function write_header(ply, stream::IO, ascii)
println(stream, "ply")
if ascii
println(stream, "format ascii 1.0")
else
endianness = _host_is_little_endian ? "little" : "big"
println(stream, "format binary_$(endianness)_endian 1.0")
end
commentidx = 1
for (elemidx,element) in enumerate(ply.elements)
while commentidx <= length(ply.comments) && ply.comments[commentidx].location == elemidx
write_header_field(stream, ply.comments[commentidx])
commentidx += 1
end
println(stream, "element $(element.name) $(length(element))")
for property in element.properties
write_header_field(stream, property)
end
end
while commentidx <= length(ply.comments)
write_header_field(stream, ply.comments[commentidx])
commentidx += 1
end
println(stream, "end_header")
end
#-------------------------------------------------------------------------------
# ASCII IO for properties and elements
function parse_ascii(::Type{T}, io::IO) where {T}
# FIXME: sadly unbuffered, will probably have terrible performance.
buf = UInt8[]
while !eof(io)
c = read(io, UInt8)
if c == UInt8(' ') || c == UInt8('\t') || c == UInt8('\r') || c == UInt8('\n')
if !isempty(buf)
break
end
else
push!(buf, c)
end
end
parse(T, String(buf))
end
function read_ascii_value!(stream::IO, prop::ArrayProperty{T}, index) where {T}
prop.data[index] = parse_ascii(T, stream)
end
function read_ascii_value!(stream::IO, prop::ListProperty{S,T}, index) where {S,T}
N = parse_ascii(S, stream)
prop.start_inds[index+1] = prop.start_inds[index] + N
for i=1:N
push!(prop.data, parse_ascii(T, stream))
end
end
#-------------------------------------------------------------------------------
# Binary IO for properties and elements
#--------------------------------------------------
# property IO
function read_binary_value!(stream::IO, prop::ArrayProperty{T}, index) where {T}
prop.data[index] = read(stream, T)
end
function read_binary_value!(stream::IO, prop::ListProperty{S,T}, index) where {S,T}
N = read(stream, S)
prop.start_inds[index+1] = prop.start_inds[index] + N
inds = read!(stream, Vector{T}(undef, Int(N)))
append!(prop.data, inds)
end
function write_binary_value(stream::IO, prop::ArrayProperty, index)
write(stream, prop.data[index])
end
function write_binary_value(stream::IO, prop::ListProperty{S}, index) where {S}
len = prop.start_inds[index+1] - prop.start_inds[index]
write(stream, convert(S, len))
esize = sizeof(eltype(prop.data))
unsafe_write(stream, pointer(prop.data) + esize*(prop.start_inds[index]-1), esize*len)
end
function write_ascii_value(stream::IO, prop::ListProperty, index)
print(stream, prop.start_inds[index+1] - prop.start_inds[index], ' ')
for i = prop.start_inds[index]:prop.start_inds[index+1]-1
if i != prop.start_inds[index]
write(stream, ' ')
end
print(stream, prop.data[i])
end
end
function write_ascii_value(stream::IO, prop::ArrayProperty, index)
print(stream, prop.data[index])
end
function write_ascii_value(stream::IO, prop::ArrayProperty{<:AbstractArray}, index)
p = prop.data[index]
for i = 1:length(p)
if i != 1
write(stream, ' ')
end
print(stream, p[i])
end
end
#--------------------------------------------------
# Batched element IO
# Read/write values for an element as binary. We codegen a version for each
# number of properties so we can unroll the inner loop to get type inference
# for individual properties. (Could this be done efficiently by mapping over a
# tuple of properties? Alternatively a generated function would be ok...)
for numprop=1:16
propnames = [Symbol("p$i") for i=1:numprop]
@eval function write_binary_values(stream::IO, elen, $(propnames...))
for i=1:elen
$([:(write_binary_value(stream, $(propnames[j]), i)) for j=1:numprop]...)
end
end
@eval function read_binary_values!(stream::IO, elen, $(propnames...))
for i=1:elen
$([:(read_binary_value!(stream, $(propnames[j]), i)) for j=1:numprop]...)
end
end
end
# Fallback for large numbers of properties
function write_binary_values(stream::IO, elen, props...)
for i=1:elen
for p in props
            write_binary_value(stream, p, i)
end
end
end
function read_binary_values!(stream::IO, elen, props...)
for i=1:elen
for p in props
read_binary_value!(stream, p, i)
end
end
end
# Optimization: special cases for a single array property within an element
function write_binary_values(stream::IO, elen, prop::ArrayProperty)
write(stream, prop.data)
end
function read_binary_values!(stream::IO, elen, prop::ArrayProperty)
read!(stream, prop.data)
end
# Optimization: For properties with homogeneous type, shuffle into a buffer
# matrix before a batch-wise call to write(). This is a big speed improvement
# for elements constructed of simple arrays with homogenous type -
# serialization speed generally seems to be limited by the many individual
# calls to write() with small buffers.
function write_binary_values(stream::IO, elen, props::ArrayProperty{T}...) where {T}
batchsize = 100
numprops = length(props)
buf = Matrix{T}(undef, numprops, batchsize)
for i=1:batchsize:elen
thisbatchsize = min(batchsize, elen-i+1)
for j=1:numprops
buf[j,1:thisbatchsize] = props[j].data[i:i+thisbatchsize-1]
end
unsafe_write(stream, pointer(buf), sizeof(T)*numprops*thisbatchsize)
end
end
#-------------------------------------------------------------------------------
# High level IO for complete files
"""
load_ply(file)
Load data from a ply file and return a `Ply` datastructure. `file` may either
be a file name or an open stream.
"""
function load_ply(io::IO)
elements, format, comments = read_header(io)
if format != Format_ascii
if _host_is_little_endian && format != Format_binary_little
error("Reading big endian ply on little endian host is not implemented")
elseif !_host_is_little_endian && format != Format_binary_big
error("Reading little endian ply on big endian host is not implemented")
end
end
for element in elements
for prop in element.properties
resize!(prop, length(element))
end
if format == Format_ascii
for i = 1:length(element)
for prop in element.properties
read_ascii_value!(io, prop, i)
end
end
else # format == Format_binary_little
read_binary_values!(io, length(element), element.properties...)
end
end
Ply(elements, comments)
end
function load_ply(file_name::AbstractString)
open(file_name, "r") do fid
load_ply(fid)
end
end
"""
save_ply(ply::Ply, file; [ascii=false])
Save data from `Ply` data structure into `file` which may be a filename or an
open stream. The file will be native endian binary, unless the keyword
argument `ascii` is set to `true`.
"""
function save_ply(ply, stream::IO; ascii::Bool=false)
write_header(ply, stream, ascii)
for element in ply
if ascii
for i=1:length(element)
for (j,property) in enumerate(element.properties)
if j != 1
write(stream, ' ')
end
write_ascii_value(stream, property, i)
end
println(stream)
end
else # binary
write_binary_values(stream, length(element), element.properties...)
end
end
end
function save_ply(ply, file_name::AbstractString; kwargs...)
open(file_name, "w") do fid
save_ply(ply, fid; kwargs...)
end
end
| PlyIO | https://github.com/JuliaGeometry/PlyIO.jl.git |
|
[
"MIT"
] | 1.1.2 | 74619231a7aa262a76f82ae05c7385622d8a5945 | code | 7033 | #-------------------------------------------------------------------------------
# Types representing the ply data model
"""
plyname(data)
Return the name that `data` is associated with when serialized in a ply file
"""
function plyname
end
const PropNameList = Union{AbstractVector,Tuple}
#--------------------------------------------------
"""
ArrayProperty(name, T)
A ply `property \$T \$name`, modelled as an abstract vector, with a name which
can be retrieved using `plyname()`.
"""
mutable struct ArrayProperty{T,Name} <: AbstractVector{T}
name::Name
data::Vector{T}
end
#=
# FIXME: Ambiguous constructor
function ArrayProperty(names::PropNameList, data::AbstractVector{T}) where {T}
if length(names) != length(T)
error("Number of property names in $names does not match length($T)")
end
ArrayProperty(names, data)
end
=#
ArrayProperty(name::AbstractString, ::Type{T}) where {T} = ArrayProperty(String(name), Vector{T}())
Base.summary(prop::ArrayProperty) = "$(length(prop))-element $(typeof(prop)) \"$(plyname(prop))\""
# AbstractArray methods
Base.size(prop::ArrayProperty) = size(prop.data)
Base.getindex(prop::ArrayProperty, i::Int) = prop.data[i]
Base.setindex!(prop::ArrayProperty, v, i::Int) = prop.data[i] = v
Base.IndexStyle(::Type{<:ArrayProperty}) = IndexLinear()
# List methods
Base.resize!(prop::ArrayProperty, len) = resize!(prop.data, len)
Base.push!(prop::ArrayProperty, val) = (push!(prop.data, val); prop)
# Ply methods
plyname(prop::ArrayProperty) = prop.name
#--------------------------------------------------
"""
ListProperty(name, S, T)
ListProperty(name, list_of_vectors)
A ply `property list \$S \$T \$name`, modelled as a abstract vector of vectors,
with a name which can be retrieved using `plyname()`.
"""
mutable struct ListProperty{S,T} <: AbstractVector{Vector{T}}
name::String
start_inds::Vector{Int}
data::Vector{T}
end
ListProperty(name, ::Type{S}, ::Type{T}) where {S,T} = ListProperty{S,T}(String(name), ones(Int,1), Vector{T}())
function ListProperty(name::AbstractString, a::AbstractVector)
# Construct list from an array of arrays
prop = ListProperty(name, Int32, eltype(a[1]))
foreach(ai->push!(prop,ai), a)
prop
end
Base.summary(prop::ListProperty) = "$(length(prop))-element $(typeof(prop)) \"$(plyname(prop))\""
# AbstractArray methods
Base.length(prop::ListProperty) = length(prop.start_inds)-1
Base.size(prop::ListProperty) = (length(prop),)
Base.getindex(prop::ListProperty, i::Int) = prop.data[prop.start_inds[i]:prop.start_inds[i+1]-1]
Base.IndexStyle(::Type{<:ListProperty}) = IndexLinear()
# TODO: Do we need Base.setindex!() ? Hard to provide with above formulation...
# List methods
function Base.resize!(prop::ListProperty, len)
resize!(prop.start_inds, len+1)
prop.start_inds[1] = 1
end
function Base.push!(prop::ListProperty, list)
push!(prop.start_inds, prop.start_inds[end]+length(list))
append!(prop.data, list)
prop
end
# Ply methods
plyname(prop::ListProperty) = prop.name
#--------------------------------------------------
"""
PlyElement(name, [len | props...])
Construct a ply `element \$name \$len`, containing a list of properties with a
name which can be retrieved using `plyname`. Properties can be accessed with
the array interface, or looked up by indexing with a string.
The expected length `len` is used if it is set, otherwise the length shared by
the property vectors is used.
"""
mutable struct PlyElement
name::String
prior_len::Int # Length as expected, or as read from file
properties::Vector
end
PlyElement(name::AbstractString, len::Int=-1) = PlyElement(name, len, Vector{Any}())
function PlyElement(name::AbstractString, props::AbstractVector...)
PlyElement(name, -1, collect(props))
end
function Base.show(io::IO, elem::PlyElement)
prop_names = join(["\"$(plyname(prop))\"" for prop in elem.properties], ", ")
print(io, "PlyElement \"$(plyname(elem))\" of length $(length(elem)) with properties [$prop_names]")
end
# Table-like methods
function Base.length(elem::PlyElement)
# Check that lengths are consistent and return the length
if elem.prior_len != -1
return elem.prior_len
end
if isempty(elem.properties)
return 0
end
len = length(elem.properties[1])
if any(prop->len != length(prop), elem.properties)
proplens = [length(p) for p in elem.properties]
throw(ErrorException("Element $(plyname(elem)) has inconsistent property lengths: $proplens"))
end
return len
end
function Base.getindex(element::PlyElement, prop_name)
# Get first property with a matching name
for prop in element.properties
if plyname(prop) == prop_name
return prop
end
end
error("property $prop_name not found in Ply element $(plyname(element))")
end
# List methods
Base.iterate(elem::PlyElement, s...) = iterate(elem.properties, s...)
# Ply methods
plyname(elem::PlyElement) = elem.name
#--------------------------------------------------
"""
PlyComment(string; [obj_info=false])
A ply comment.
Nonstandard [obj_info header lines](
http://docs.pointclouds.org/1.5.1/structpcl_1_1io_1_1ply_1_1obj__info.html)
may be represented by setting obj_info flag.
"""
struct PlyComment
comment::String
obj_info::Bool # Set for comment-like "obj_info" lines
location::Int # index of previous element (TODO: move this out of the comment)
end
PlyComment(string::AbstractString; obj_info::Bool=false) =
PlyComment(string, obj_info, -1)
function Base.:(==)(a::PlyComment, b::PlyComment)
a.comment == b.comment &&
a.obj_info == b.obj_info &&
a.location == b.location
end
#--------------------------------------------------
"""
Ply()
Container for the contents of a ply file. This type directly models the
contents of the header. Ply elements and comments can be added using
`push!()`, elements can be iterated over with the standard iterator
interface, and looked up by indexing with a string.
"""
mutable struct Ply
elements::Vector{PlyElement}
comments::Vector{PlyComment}
end
Ply() = Ply(Vector{PlyElement}(), Vector{PlyComment}())
function Base.show(io::IO, ply::Ply)
buf = IOBuffer()
write_header(ply, buf, true)
headerstr = String(take!(buf))
headerstr = replace(strip(headerstr), "\n"=>"\n ")
print(io, "$Ply with header:\n $headerstr")
end
# List methods
Base.push!(ply::Ply, el::PlyElement) = (push!(ply.elements, el); ply)
function Base.push!(ply::Ply, c::PlyComment)
push!(ply.comments, PlyComment(c.comment, c.obj_info, length(ply.elements)+1))
ply
end
# Element search and iteration
function Base.getindex(ply::Ply, elem_name::AbstractString)
for elem in ply.elements
if plyname(elem) == elem_name
return elem
end
end
error("$elem_name not found in Ply element list")
end
Base.length(ply::Ply) = length(ply.elements)
Base.iterate(ply::Ply, s...) = iterate(ply.elements, s...)
| PlyIO | https://github.com/JuliaGeometry/PlyIO.jl.git |
|
[
"MIT"
] | 1.1.2 | 74619231a7aa262a76f82ae05c7385622d8a5945 | code | 2990 | # Very junky script testing performance against direct binary IO
#
# In this test, it's about 5x slower to write to ply than to directly dump the
# bytes, this seems to be due to the many calls to write()
using BufferedStreams
using Profile
import PlyIO: save_ply, write_header, Ply, PlyElement, ArrayProperty, ListProperty
ply = Ply()
nverts = 100000
vertex = PlyElement("vertex",
ArrayProperty("x", randn(nverts)),
ArrayProperty("y", randn(nverts)),
ArrayProperty("z", randn(nverts)),
ArrayProperty("r", rand(nverts)),
ArrayProperty("g", rand(nverts)),
ArrayProperty("b", rand(nverts)))
push!(ply, vertex)
# Some triangular faces
vertex_index = ListProperty("vertex_index", Int32, Int32)
for i=1:nverts
push!(vertex_index, rand(0:nverts-1,3))
end
push!(ply, PlyElement("face", vertex_index))
#=
# Some edges
vertex_index = ListProperty("vertex_index", Int32, Int32)
for i=1:nverts
push!(vertex_index, rand(0:nverts-1,2))
end
push!(ply, Element("edge", vertex_index))
=#
function save_ply_ref(ply, stream::IO)
write_header(ply, stream, false)
x = ply["vertex"]["x"].data::Vector{Float64}
y = ply["vertex"]["y"].data::Vector{Float64}
z = ply["vertex"]["z"].data::Vector{Float64}
r = ply["vertex"]["r"].data::Vector{Float64}
g = ply["vertex"]["g"].data::Vector{Float64}
b = ply["vertex"]["b"].data::Vector{Float64}
vertex_index = ply["face"]["vertex_index"]
viinds = vertex_index.start_inds::Vector{Int32}
vidata = vertex_index.data::Vector{Int32}
#=
for i=1:length(x)
write(stream, x[i])
write(stream, y[i])
write(stream, z[i])
write(stream, r[i])
write(stream, g[i])
write(stream, b[i])
end
for i=1:length(viinds)-1
len = viinds[i+1] - viinds[i]
write(stream, len)
write(stream, vidata[viinds[i]:viinds[i+1]-1])
end
=#
# Buffered reordering
#=
write(stream, [x y z r g b]')
write(stream, viinds)
write(stream, vidata)
=#
# Benchmark against direct binary IO (invalid ply!), which should be about
# as fast as you can hope for.
write(stream, x)
write(stream, y)
write(stream, z)
write(stream, r)
write(stream, g)
write(stream, b)
write(stream, viinds)
write(stream, vidata)
end
function save_ply_ref(ply, filename::AbstractString)
open(filename, "w") do fid
save_ply_ref(ply, fid)
end
end
function save_ply_buffered(ply, filename; kwargs...)
open(filename, "w") do fid
save_ply(ply, BufferedOutputStream(fid, 2^16); kwargs...)
end
end
save_ply(ply, "test.ply", ascii=false)
save_ply_buffered(ply, "test.ply", ascii=false)
save_ply_ref(ply, "test2.ply")
Profile.clear_malloc_data()
@time save_ply(ply, "test.ply", ascii=false)
@time save_ply_buffered(ply, "test.ply", ascii=false)
@time save_ply_ref(ply, "test2.ply")
#run(`displaz -script test.ply`)
| PlyIO | https://github.com/JuliaGeometry/PlyIO.jl.git |
|
[
"MIT"
] | 1.1.2 | 74619231a7aa262a76f82ae05c7385622d8a5945 | code | 8344 | using PlyIO
using StaticArrays
using Test
@testset "PlyIO" begin
@testset "types" begin
ply = Ply()
push!(ply, PlyComment("PlyComment"))
elt = PlyElement("A", ArrayProperty("x", UInt8[1,2,3]),
ArrayProperty("y", Float32[1.1,2.2,3.3]))
push!(ply, elt)
@test sprint(show, ply) == """
Ply with header:
ply
format ascii 1.0
comment PlyComment
element A 3
property uint8 x
property float32 y
end_header"""
@test sprint(show, elt) == "PlyElement \"A\" of length 3 with properties [\"x\", \"y\"]"
end
@testset "simple" begin
ply = Ply()
push!(ply, PlyComment("PlyComment about A"))
push!(ply, PlyElement("A",
ArrayProperty("x", UInt8[1,2,3]),
ArrayProperty("y", Float32[1.1,2.2,3.3]),
ListProperty("a_list", Vector{Int64}[[0,1], [2,3,4], [5]])))
push!(ply, PlyComment("PlyComment about B"))
push!(ply, PlyComment("PlyObjInfo", obj_info=true))
push!(ply, PlyComment("PlyComment about B 2"))
push!(ply, PlyElement("B",
ArrayProperty("r", Int16[-1,1]),
ArrayProperty("g", Int16[1,1])))
push!(ply, PlyComment("Final comment"))
buf = IOBuffer()
save_ply(ply, buf, ascii=true)
str = String(take!(buf))
open("simple_test_tmp.ply", "w") do fid
write(fid, str)
end
@test str ==
"""
ply
format ascii 1.0
comment PlyComment about A
element A 3
property uint8 x
property float32 y
property list int32 int64 a_list
comment PlyComment about B
obj_info PlyObjInfo
comment PlyComment about B 2
element B 2
property int16 r
property int16 g
comment Final comment
end_header
1 1.1 2 0 1
2 2.2 3 2 3 4
3 3.3 1 5
-1 1
1 1
"""
end
@testset "empty elements" begin
ply = Ply()
push!(ply, PlyElement("A", ArrayProperty("x", UInt8[])))
push!(ply, PlyElement("B", ListProperty("a_list", Vector{Int64}[[], [], []])))
buf = IOBuffer()
save_ply(ply, buf, ascii=true)
str = String(take!(buf))
open("empty_test_tmp.ply", "w") do fid
write(fid, str)
end
@test str ==
"""
ply
format ascii 1.0
element A 0
property uint8 x
element B 3
property list int32 int64 a_list
end_header
0
0
0
"""
end
@testset "roundtrip" begin
@testset "ascii=$test_ascii" for test_ascii in [false, true]
ply = Ply()
push!(ply, PlyComment("A comment"))
push!(ply, PlyComment("Blah blah"))
push!(ply, PlyComment("x=10", obj_info=true))
nverts = 10
x = collect(Float64, 1:nverts)
y = collect(Int16, 1:nverts)
push!(ply, PlyElement("vertex", ArrayProperty("x", x),
ArrayProperty("y", y)))
# Some triangular faces
vertex_index = ListProperty("vertex_index", Int32, Int32)
for i=1:nverts
push!(vertex_index, rand(0:nverts-1,3))
end
push!(ply, PlyElement("face", vertex_index))
save_ply(ply, "roundtrip_test_tmp.ply", ascii=test_ascii)
newply = load_ply("roundtrip_test_tmp.ply")
# TODO: Need a better way to access the data arrays than this.
@test newply["vertex"]["x"] == x
@test newply["vertex"]["y"] == y
@test newply["face"]["vertex_index"] == vertex_index
@test newply.comments == [PlyComment("A comment",false,1),
PlyComment("Blah blah",false,1),
PlyComment("x=10",true,1)]
end
@testset "proptype=$proptype" for proptype in [Int8, Int16, Int32, Int64,
UInt8, UInt16, UInt32, UInt64,
Float32, Float64]
ply = Ply()
arrayprop = fill(proptype(42), 1)
listprop = ListProperty("listprop", proptype<:Integer ? proptype : Int32, proptype)
push!(listprop, collect(proptype, 1:10))
push!(ply, PlyElement("test", ArrayProperty("arrayprop", arrayprop), listprop))
io = IOBuffer()
save_ply(ply, io)
seek(io, 0)
newply = load_ply(io)
@test length(newply) == 1
@test typeof(newply["test"]["arrayprop"][1]) == proptype
@test newply["test"]["arrayprop"] == arrayprop
@test typeof(newply["test"]["listprop"][1]) == Vector{proptype}
@test newply["test"]["listprop"] == listprop
end
@testset "Batched writes for homogenous properties" begin
ply = Ply()
nverts = 1000
x, y, z = [rand(nverts) for _ = 1:3]
push!(ply, PlyElement("vertex",
ArrayProperty("x", x),
ArrayProperty("y", y),
ArrayProperty("z", z)))
io = IOBuffer()
save_ply(ply, io)
seek(io, 0)
newply = load_ply(io)
@test length(newply) == 1
@test newply["vertex"]["x"] == x
@test newply["vertex"]["y"] == y
@test newply["vertex"]["z"] == z
end
end
@testset "SVector properties" begin
ply = Ply()
push!(ply, PlyElement("A",
ArrayProperty(["x","y"], SVector{2,Float64}[SVector(1,2), SVector(3,4)])
))
push!(ply, PlyElement("B",
ArrayProperty(["r","g","b"], SVector{3,UInt8}[SVector(1,2,3)])
))
buf = IOBuffer()
save_ply(ply, buf, ascii=true)
str = String(take!(buf))
open("SVector_properties_test_tmp.ply", "w") do fid
write(fid, str)
end
@test str ==
"""
ply
format ascii 1.0
element A 2
property float64 x
property float64 y
element B 1
property uint8 r
property uint8 g
property uint8 b
end_header
1.0 2.0
3.0 4.0
1 2 3
"""
end
@testset "Malformed ply headers" begin
@test_throws ErrorException load_ply(IOBuffer("asdf"))
@test_throws ErrorException load_ply(IOBuffer("ply"))
@test_throws ErrorException load_ply(IOBuffer("""
ply
format ascii 2.0
"""))
@test_throws ErrorException load_ply(IOBuffer("""
ply
format 1.0
end_header"""))
@test_throws ErrorException load_ply(IOBuffer("""
ply
format ascii 1.0
asdf
end_header"""))
@test_throws ErrorException load_ply(IOBuffer("""
ply
format ascii 1.0
element el
end_header"""))
@test_throws ErrorException load_ply(IOBuffer("""
ply
format ascii 1.0
property float x
end_header"""))
@test_throws ErrorException load_ply(IOBuffer("""
ply
format ascii 1.0
element el 0
property
end_header"""))
@test_throws ErrorException load_ply(IOBuffer("""
ply
format ascii 1.0
element el 0
property list
end_header"""))
end
end # @testset PlyIO
| PlyIO | https://github.com/JuliaGeometry/PlyIO.jl.git |
|
[
"MIT"
] | 1.1.2 | 74619231a7aa262a76f82ae05c7385622d8a5945 | docs | 4091 | # Ply polygon file IO
**PlyIO** is a package for reading and writing data in the
[Ply](http://paulbourke.net/dataformats/ply/) polygon file format, also called
the Stanford triangle format.
[](https://github.com/JuliaGeometry/PlyIO.jl/actions?query=workflow%3ACI)
## Quick start
### Writing ply
Here's an example of how to write a basic ply file containing random triangles
and edges:
```julia
using PlyIO
ply = Ply()
push!(ply, PlyComment("An example ply file"))
nverts = 1000
# Random vertices with position and color
vertex = PlyElement("vertex",
ArrayProperty("x", randn(nverts)),
ArrayProperty("y", randn(nverts)),
ArrayProperty("z", randn(nverts)),
ArrayProperty("r", rand(nverts)),
ArrayProperty("g", rand(nverts)),
ArrayProperty("b", rand(nverts)))
push!(ply, vertex)
# Some triangular faces.
# The UInt8 is the type used for serializing the number of list elements (equal
# to 3 for a triangular mesh); the Int32 is the type used to serialize indices
# into the vertex array.
vertex_index = ListProperty("vertex_index", UInt8, Int32)
for i=1:nverts
push!(vertex_index, rand(0:nverts-1,3))
end
push!(ply, PlyElement("face", vertex_index))
# Some edges
vertex_index = ListProperty("vertex_index", Int32, Int32)
for i=1:nverts
push!(vertex_index, rand(0:nverts-1,2))
end
push!(ply, PlyElement("edge", vertex_index))
# For the sake of the example, ascii format is used; the default binary mode is faster.
save_ply(ply, "example1.ply", ascii=true)
```
Opening this file using a program like
[displaz](https://github.com/c42f/displaz), for example using `displaz example1.ply`,
you should see something like

### Reading ply
Reading the ply file generated above is quite simple:
```julia
julia> using PlyIO
julia> ply = load_ply("example1.ply")
PlyIO.Ply with header:
ply
format ascii 1.0
comment An example ply file
element vertex 1000
property float64 x
property float64 y
property float64 z
property float64 r
property float64 g
property float64 b
element face 1000
property list int32 int32 vertex_index
element edge 1000
property list int32 int32 vertex_index
end_header
julia> ply["vertex"]
PlyElement "vertex" of length 1000 with properties ["x", "y", "z", "r", "g", "b"]
julia> ply["vertex"]["x"]
1000-element PlyIO.ArrayProperty{Float64,String} "x":
-0.472592
1.04326
-0.982202
⋮
-2.55605
0.773923
-2.10675
```
## API
### The file format
Conceptually, the ply format is a container for a set of named tables of numeric
data. Each table, or **element**, has several named columns or **properties**.
Properties can be either simple numeric arrays (floating point or
signed/unsigned integers), or arrays of variable length lists of such numeric
values.
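
For example, a file holding a 100-row `vertex` table with columns `x` and `y`, plus a 50-row `face` table of index lists, declares exactly that structure in its header (the counts and types here are only illustrative):

```
element vertex 100
property float64 x
property float64 y
element face 50
property list int32 int32 vertex_index
```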
As described, ply is quite a generic format but it's primarily used for
geometric data. For this use there are some loose
[naming conventions](http://paulbourke.net/dataformats/ply/) which attach
geometric meaning to certain combinations of element and property names.
Unfortunately there's no official standard.
### Document object model
Ply elements are represented with the `PlyElement` type, which is a list of
properties that may be looked up by name.
Properties may be represented by an `AbstractArray` type which has the
`plyname` function defined; `plyname` should return a name for the property. The
builtin types `ArrayProperty` and `ListProperty` are used as containers for data
when reading a ply file.
The `Ply` type is a container for several interleaved `PlyElement` and
`PlyComment` fields, in the order in which they would appear in a standard ply
header.
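
In code, these pieces fit together as follows (the element and property names here are purely illustrative; see the quick start above for a fuller example):

```julia
using PlyIO

elt = PlyElement("vertex",
                 ArrayProperty("x", [1.0, 2.0]),
                 ArrayProperty("y", [3.0, 4.0]))
elt["y"]        # properties are looked up by name

ply = Ply()
push!(ply, PlyComment("two vertices"))
push!(ply, elt) # elements and comments are kept in header order
```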
### Reading and writing
To read and write `Ply` objects from files or `IO` streams, use the functions
`load_ply()` and `save_ply()`.
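
For example, a `Ply` can be round-tripped entirely in memory through an `IOBuffer`:

```julia
using PlyIO

ply = Ply()
push!(ply, PlyElement("vertex", ArrayProperty("x", [1.0, 2.0, 3.0])))

io = IOBuffer()
save_ply(ply, io)      # binary by default; pass ascii=true for text output
seek(io, 0)
ply2 = load_ply(io)
ply2["vertex"]["x"]    # == [1.0, 2.0, 3.0]
```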
## Acknowledgements
[](https://github.com/FugroRoames)
| PlyIO | https://github.com/JuliaGeometry/PlyIO.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 4192 | using QuasinormalModes
# ------------------------------------------------------------------
# 1. Setting up Schwarzschild black hole data structure for
# computing the QNMs numerically.
# ------------------------------------------------------------------
struct NSchwarzschildData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
l::N
s::N
end
function NSchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
return NSchwarzschildData{N,T}(nIter, x0, l, s)
end
QuasinormalModes.λ0(::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (-1 + (2*im)*ω + x*(4 - 3*x + (4*im)*(-2 + x)*ω))/((-1 + x)^2*x)
QuasinormalModes.S0(d::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (d.l + d.l^2 + (-1 + d.s^2)*(-1 + x) + (4*im)*(-1 + x)*ω + 4*(-2 + x)*ω^2)/((-1 + x)^2*x)
QuasinormalModes.get_niter(d::NSchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NSchwarzschildData{N,T}) where {N,T} = d.x0
# ------------------------------------------------------------------
# 2. Benchmark
# ------------------------------------------------------------------
"""
bench_iter(iter_start, iter_end, x0, l, s, xt, ft, nls_iter, reference_ωr, reference_ωi, repeat)
Measures the time required to compute a pair of quasinormal frequencies as a function of the
number of AIM iterations performed. The result is also compared with a known result in order
to provide an error measurement.
# Input
- `iter_start`: First number of iterations to perform.
- `iter_end`: Last number of iterations.
- `x0`: The point around which AIM functions will be Taylor expanded.
- `l`: The l value of the sought mode.
- `s`: The s value of the sought mode.
- `xt`: The `xtol` value passed to NLSolve.
- `ft`: The `ftol` value passed to NLSolve.
- `nls_iter`: The number of iterations that NLSolve will perform.
- `reference_ωr`: The reference value for the real QNM frequency.
- `reference_ωi`: The reference value for the imaginary QNM frequency.
- `repeat`: The number of times to repeat the operation.
# Output
nothing
"""
function bench_iter(iter_start, iter_end, x0, l, s, xt, ft, nls_iter, reference_ωr, reference_ωi, repeat)
for i in 1:repeat
println("Benchmark iteration ", i, ":")
file = open("run_$(i).dat", "w")
println(file, "# 1:time(ns) 2:iter 3:x0 4:l 5:s 6:xt 7:ft 8:w_r 9:w_i 10:error in w_r 11:error in w_i")
for iter in iter_start:iter_end
println(" AIM iteration ", iter)
p_num = NSchwarzschildData(iter, x0, l, s);
c_num = AIMCache(p_num)
t0 = time_ns();
ev = computeEigenvalues(Threaded(), p_num, c_num, typeof(x0)(reference_ωr, reference_ωi), nls_xtol = xt, nls_ftol = ft, nls_iterations = nls_iter)
elapsed_time = time_ns() - t0
if(ev.x_converged || ev.f_converged)
println(file,
elapsed_time, " ",
iter, " ",
convert(Float64, x0), " ",
l, " ",
s, " ",
convert(Float64, xt), " ",
convert(Float64, ft), " ",
ev.zero[1], " ",
ev.zero[2], " ",
ev.zero[1] - reference_ωr, " ",
ev.zero[2] - reference_ωi
)
flush(file)
end
end
close(file)
end
end
# We call the function once and discard the results so that compilation time is not included in the benchmark.
# Reference values are obtained from https://pages.jh.edu/eberti2/ringdown/
bench_iter(convert(UInt32, 1), convert(UInt32, 100), Complex(big"0.39", big"0.0"), 0x00000, 0x00000, big"1.0e-55", big"1.0e-55", 5000000, big"0.2209098781608393", big"-0.2097914341737619", 1)
bench_iter(convert(UInt32, 1), convert(UInt32, 100), Complex(big"0.39", big"0.0"), 0x00000, 0x00000, big"1.0e-55", big"1.0e-55", 5000000, big"0.2209098781608393", big"-0.2097914341737619", 20)
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 432 | # To view in browser start a server in the build dir:
# python -m http.server --bind localhost
using Documenter
using QuasinormalModes
makedocs(sitename = "QuasinormalModes.jl",
modules = [QuasinormalModes],
pages = [
"index.md",
"intro.md",
"org.md",
"schw.md",
"sho.md",
"api_ref.md"
]
)
deploydocs(
repo = "github.com/lucass-carneiro/QuasinormalModes.jl.git",
)
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 2708 | using QuasinormalModes
using SymEngine
# ------------------------------------------------------------------
# 1. Analytic harmonic oscillator
# ------------------------------------------------------------------
struct HarmonicOscilatorData{N,T} <: QuadraticEigenvalueProblem{N,T}
nIter::N
x0::T
vars::Tuple{Basic,Basic}
exprs::Tuple{Basic,Basic}
end
function HarmonicOscilatorData(nIter::N, x0::T) where {N,T}
vars = @vars x ω
λ0 = 2 * x
S0 = 1 - ω
return HarmonicOscilatorData{N,T}(nIter, x0, vars, (λ0, S0))
end
QuasinormalModes.λ0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.exprs[1]
QuasinormalModes.S0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.exprs[2]
QuasinormalModes.get_niter(d::HarmonicOscilatorData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.x0
QuasinormalModes.get_ODEvar(d::HarmonicOscilatorData{N,T}) where {N,T} = d.vars[1]
QuasinormalModes.get_ODEeigen(d::HarmonicOscilatorData{N,T}) where {N,T} = d.vars[2]
# ------------------------------------------------------------------
# 2. Numeric harmonic oscillator
# ------------------------------------------------------------------
struct NHarmonicOscilatorData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
end
function NHarmonicOscilatorData(nIter::N, x0::T) where {N,T}
return NHarmonicOscilatorData{N,T}(nIter, x0)
end
QuasinormalModes.λ0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x, ω) -> 2 * x
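# Note: the `+ x - x` in S0 below cancels algebraically (S0 is just 1 - ω); it appears to be
# written this way only so that the closure formally depends on x as well as ω.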
QuasinormalModes.S0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x, ω) -> 1 - ω + x - x
QuasinormalModes.get_niter(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.x0
# ------------------------------------------------------------------
# 3. Constructing problems and caches
# ------------------------------------------------------------------
p_ana = HarmonicOscilatorData(0x0000A, 0.5);
p_num = NHarmonicOscilatorData(0x0000A, 0.5);
c_ana = AIMCache(p_ana)
c_num = AIMCache(p_num)
# ------------------------------------------------------------------
# 4. Computing quasinormal modes
# ------------------------------------------------------------------
ev_ana = computeEigenvalues(Serial(), p_ana, c_ana)
ev_num = eigenvaluesInGrid(Serial(), p_num, c_num, (0.0, 21.0))
function printEigen(eigenvalues)
println("--------------------------------------")
for i in eachindex(eigenvalues)
println("n = $i, ω = $(eigenvalues[i])")
end
println("--------------------------------------")
return nothing
end
println("Analytic results")
printEigen(reverse!(ev_ana))
println("Numeric results")
printEigen(ev_num)
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 3171 | using QuasinormalModes
using RootsAndPoles
# ------------------------------------------------------------------
# 1. Numeric harmonic oscillator
# ------------------------------------------------------------------
struct NHarmonicOscilatorData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
end
function NHarmonicOscilatorData(nIter::N, x0::T) where {N,T}
return NHarmonicOscilatorData{N,T}(nIter, x0)
end
QuasinormalModes.λ0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x, ω) -> 2 * x
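# Note: the `+ x - x` in S0 below cancels algebraically (S0 is just 1 - ω); it appears to be
# written this way only so that the closure formally depends on x as well as ω.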
QuasinormalModes.S0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x, ω) -> 1 - ω + x - x
QuasinormalModes.get_niter(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.x0
# ------------------------------------------------------------------
# 2. Constructing problems and caches
# ------------------------------------------------------------------
const m = Serial()
const p = NHarmonicOscilatorData(0x0000A, Complex(0.5, 0.0))
const c = AIMCache(p)
# ------------------------------------------------------------------
# 3. Creating a wrapper function to pass to RootsAndPoles.jl
# ------------------------------------------------------------------
function δ_wrapper(z)
computeDelta!(m, p, c, z)
end
# ------------------------------------------------------------------
# 4. RootsAndPoles.jl search domain and mesh construction
# ------------------------------------------------------------------
const xb = 0.0 # real part begin
const xe = 21.0 # real part end
const yb = 0.0 # imag part begin
const ye = 21.0 # imag part end
const r = 1.0 # initial mesh step
const origcoords = rectangulardomain(Complex(xb, yb), Complex(xe, ye), r)
# ------------------------------------------------------------------
# 5. RootsAndPoles.jl search settings
# ------------------------------------------------------------------
# For details, see https://github.com/fgasdia/RootsAndPoles.jl
params = GRPFParams(
100, # the maximum number of refinement iterations before `grpf` returns.
50000, # the maximum number of Delaunay tessellation nodes before `grpf` returns.
3, # maximum ratio of the longest to shortest side length of Delaunay triangles before they are split during `grpf` refinement iterations.
5000, # provide a size hint to the total number of expected nodes in the Delaunay tessellation. Setting this number approximately correctly can improve performance
1.0e-12, # maximum allowed edge length of the tessellation defined in the `origcoords` domain before returning
false # use `Threads.@threads` to run the user-provided function `fcn` across the `DelaunayTriangulation`
)
# ------------------------------------------------------------------
# 6. Root finding and printing
# ------------------------------------------------------------------
roots, poles = grpf(δ_wrapper, origcoords, params)
sort!(roots, by = z -> real(z))
sort!(poles, by = z -> real(z))
println("Roots:")
for root in roots
println(root)
end
println("-------------------------------------------")
println("Poles:")
for pole in poles
println(pole)
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 3255 | using QuasinormalModes
using SymEngine
# ------------------------------------------------------------------
# 1. Analytic Schwarzschild Black Hole
# ------------------------------------------------------------------
struct SchwarzschildData{N,T} <: QuadraticEigenvalueProblem{N,T}
nIter::N
x0::T
vars::Tuple{Basic, Basic}
exprs::Tuple{Basic, Basic}
end
function SchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
vars = @vars x ω
λ0 = (-1 + (2*im)*ω + x*(4 - 3*x + (4*im)*(-2 + x)*ω))/((-1 + x)^2*x)
S0 = (l + l^2 + (-1 + s^2)*(-1 + x) + (4*im)*(-1 + x)*ω + 4*(-2 + x)*ω^2)/((-1 + x)^2*x)
return SchwarzschildData{N,T}(nIter, x0, vars, (λ0, S0))
end
QuasinormalModes.λ0(d::SchwarzschildData{N,T}) where {N,T} = d.exprs[1]
QuasinormalModes.S0(d::SchwarzschildData{N,T}) where {N,T} = d.exprs[2]
QuasinormalModes.get_niter(d::SchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::SchwarzschildData{N,T}) where {N,T} = d.x0
QuasinormalModes.get_ODEvar(d::SchwarzschildData{N,T}) where {N,T} = d.vars[1]
QuasinormalModes.get_ODEeigen(d::SchwarzschildData{N,T}) where {N,T} = d.vars[2]
# ------------------------------------------------------------------
# 2. Numeric Schwarzschild Black Hole
# ------------------------------------------------------------------
struct NSchwarzschildData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
l::N
s::N
end
function NSchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
return NSchwarzschildData{N,T}(nIter, x0, l, s)
end
QuasinormalModes.λ0(::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (-1 + (2*im)*ω + x*(4 - 3*x + (4*im)*(-2 + x)*ω))/((-1 + x)^2*x)
QuasinormalModes.S0(d::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (d.l + d.l^2 + (-1 + d.s^2)*(-1 + x) + (4*im)*(-1 + x)*ω + 4*(-2 + x)*ω^2)/((-1 + x)^2*x)
QuasinormalModes.get_niter(d::NSchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NSchwarzschildData{N,T}) where {N,T} = d.x0
# ------------------------------------------------------------------
# 3. Constructing problems and caches
# ------------------------------------------------------------------
p_ana = SchwarzschildData(0x000030, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00000, 0x00000);
p_num = NSchwarzschildData(0x00030, Complex(0.43, 0.0), 0x00000, 0x00000);
c_ana = AIMCache(p_ana)
c_num = AIMCache(p_num)
# ------------------------------------------------------------------
# 4. Computing quasinormal modes
# ------------------------------------------------------------------
m_ana = computeEigenvalues(Serial(), p_ana, c_ana)
function printQNMs(qnms, cutoff, instab)
println("-"^165)
println("|", " "^36, "Re(omega)", " "^36, " ", " "^36, "Im(omega)", " "^36, "|")
println("-"^165)
for qnm in qnms
if real(qnm) > cutoff && ( instab ? true : imag(qnm) < big"0.0" )
println(real(qnm), " ", imag(qnm))
end
end
println("-"^165)
return nothing
end
sort!(m_ana, by = x -> imag(x))
println("Analytic results")
printQNMs(m_ana, 1.0e-10, false)
ev = computeEigenvalues(Serial(), p_num, c_num, Complex(0.22, -0.20), nls_xtol = 1.0e-10, nls_ftol = 1.0e-10)
println("Numeric results:")
println(ev)
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 3269 | using QuasinormalModes
using RootsAndPoles
# ------------------------------------------------------------------
# 1. Numeric Schwarzschild Black Hole
# ------------------------------------------------------------------
struct NSchwarzschildData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
l::N
s::N
end
function NSchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
return NSchwarzschildData{N,T}(nIter, x0, l, s)
end
QuasinormalModes.λ0(::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (-1 + (2*im)*ω + x*(4 - 3*x + (4*im)*(-2 + x)*ω))/((-1 + x)^2*x)
QuasinormalModes.S0(d::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (d.l + d.l^2 + (-1 + d.s^2)*(-1 + x) + (4*im)*(-1 + x)*ω + 4*(-2 + x)*ω^2)/((-1 + x)^2*x)
QuasinormalModes.get_niter(d::NSchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NSchwarzschildData{N,T}) where {N,T} = d.x0
# ------------------------------------------------------------------
# 2. Constructing problems and caches
# ------------------------------------------------------------------
const m = Serial()
const p = NSchwarzschildData(0x00030, Complex(0.5, 0.0), 0x00000, 0x00000);
const c = AIMCache(p)
# ------------------------------------------------------------------
# 3. Creating a wrapper function to pass to RootsAndPoles.jl
# ------------------------------------------------------------------
function δ_wrapper(z)
computeDelta!(m, p, c, z)
end
# ------------------------------------------------------------------
# 4. RootsAndPoles.jl search domain and mesh construction
# ------------------------------------------------------------------
const xb = 0.1 # real part begin
const xe = 1.0 # real part end
const yb = -1.0 # imag part begin
const ye = 0.1 # imag part end
const r = 6.0e-3 # initial mesh step
const origcoords = rectangulardomain(Complex(xb, yb), Complex(xe, ye), r)
# ------------------------------------------------------------------
# 5. RootsAndPoles.jl search settings
# ------------------------------------------------------------------
# For details, see https://github.com/fgasdia/RootsAndPoles.jl
params = GRPFParams(
100, # the maximum number of refinement iterations before `grpf` returns.
50000, # the maximum number of Delaunay tessellation nodes before `grpf` returns.
3, # maximum ratio of the longest to shortest side length of Delaunay triangles before they are split during `grpf` refinement iterations.
5000, # provide a size hint to the total number of expected nodes in the Delaunay tessellation. Setting this number approximately correctly can improve performance
1.0e-12, # maximum allowed edge length of the tessellation defined in the `origcoords` domain before returning
false # use `Threads.@threads` to run the user-provided function `fcn` across the `DelaunayTriangulation`
)
# ------------------------------------------------------------------
# 6. Root finding and printing
# ------------------------------------------------------------------
roots, poles = grpf(δ_wrapper, origcoords, params)
println("Roots:")
for root in roots
println(root)
end
println("-------------------------------------------")
println("Poles:")
for pole in poles
println(pole)
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 796 | # ------------------------------------------------------------------
# Spin 0, 1 and 2 field on a Schwarzschild background
# ------------------------------------------------------------------
struct Schwarzschild_boson{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
M::T
l::N
s::N
end
QuasinormalModes.λ0(d::Schwarzschild_boson{N,T}) where {N,T} = (x,ω) -> (4 * d.M * im * ω * (2 * x^2 - 4 * x + 1) - (1 - 3 * x) * (1 - x))/(x * (1 - x)^2)
QuasinormalModes.S0(d::Schwarzschild_boson{N,T}) where {N,T} = (x,ω) -> (16 * d.M^2 * ω^2 * (x - 2) - 8 * d.M * im * ω * (1 - x) + d.l * (d.l + 1) + (1 - d.s^2)*(1 - x))/(x * (1-x)^2)
QuasinormalModes.get_niter(d::Schwarzschild_boson{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::Schwarzschild_boson{N,T}) where {N,T} = d.x0
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 1595 | # --------------------------------------------------------------------
# Compute Schwarzschild QNMs perturbed by spin 0,1 or 2 field and save
# --------------------------------------------------------------------
function compute_boson(iter, s)
x0 = Complex(big"0.43", big"0.0")
M = Complex(big"1.0", big"0.0")
p_num = Schwarzschild_boson(iter, x0, M, 0x00000, convert(UInt32, s))
c_num = AIMCache(p_num)
file = open("high_l_qnm.dat", "w")
println(file, "# 1:Iterations 2:l 3:n 4:real(guess) 5:imag(guess) 6:re(omega) 7:im(omega)")
if s == 0
refArray = spin_0
elseif s == 1
refArray = spin_1
elseif s == 2
refArray = spin_2
else
error("Bosonic spin must be either 0, 1 or 2")
end
println("Starting computation")
for reference in refArray
l = UInt32(reference[1])
n = reference[2]
println("Computing l = ", l, " n = ", n)
guess = Complex(BigFloat(reference[3]), BigFloat(reference[4]))
print(file, iter, " ", l, " ", n, " ", reference[3], " ", reference[4]," ")
p_num = Schwarzschild_boson(iter, x0, M, l, convert(UInt32, s))
qnm = computeEigenvalues(Serial(), p_num, c_num, guess, nls_xtol = big"1.0e-20", nls_ftol = big"1.0e-20", nls_iterations = 10000000)
if(qnm.x_converged || qnm.f_converged)
print(file, qnm.zero[1], " ", qnm.zero[2], "\n")
else
print(file, "Did not converge Did not converge\n")
end
flush(file)
flush(stdout)
end
close(file)
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 31855 | # ------------------------------------------------------------------
# Reference QNMs for use as initial guesses
#
# Format: (l, n, real part, imaginary part)
# ------------------------------------------------------------------
const spin_0 = [
(0, "0", "0.1104", "-0.1048"),
(1, "0", "0.2911", "-0.0980"),
(2, "0", "0.4832", "-0.0968"),
(2, "1", "0.4632", "-0.2958"),
(3, "0", "0.6752", "-0.0965"),
(3, "1", "0.6604", "-0.2923"),
(3, "2", "0.6348", "-0.4941"),
(4, "0", "0.8673", "-0.0964"),
(4, "1", "0.8857", "-0.2909"),
(4, "2", "0.8345", "-0.4895"),
(4, "3", "0.8064", "-0.6926"),
(5, "0", "1.0585", "-0.096225"),
(5, "1", "1.0585", "-0.28868"),
(5, "2", "1.0585", "-0.48113"),
(5, "3", "1.0585", "-0.67358"),
(5, "4", "1.0585", "-0.86603"),
(5, "5", "1.0585", "-1.0585"),
(6, "0", "1.2509", "-0.096225"),
(6, "1", "1.2509", "-0.28868"),
(6, "2", "1.2509", "-0.48113"),
(6, "3", "1.2509", "-0.67358"),
(6, "4", "1.2509", "-0.86603"),
(6, "5", "1.2509", "-1.0585"),
(6, "6", "1.2509", "-1.2509"),
(7, "0", "1.4434", "-0.096225"),
(7, "1", "1.4434", "-0.28868"),
(7, "2", "1.4434", "-0.48113"),
(7, "3", "1.4434", "-0.67358"),
(7, "4", "1.4434", "-0.86603"),
(7, "5", "1.4434", "-1.0585"),
(7, "6", "1.4434", "-1.2509"),
(7, "7", "1.4434", "-1.4434"),
(8, "0", "1.6358", "-0.096225"),
(8, "1", "1.6358", "-0.28868"),
(8, "2", "1.6358", "-0.48113"),
(8, "3", "1.6358", "-0.67358"),
(8, "4", "1.6358", "-0.86603"),
(8, "5", "1.6358", "-1.0585"),
(8, "6", "1.6358", "-1.2509"),
(8, "7", "1.6358", "-1.4434"),
(8, "8", "1.6358", "-1.6358"),
(9, "0", "1.8283", "-0.096225"),
(9, "1", "1.8283", "-0.28868"),
(9, "2", "1.8283", "-0.48113"),
(9, "3", "1.8283", "-0.67358"),
(9, "4", "1.8283", "-0.86603"),
(9, "5", "1.8283", "-1.0585"),
(9, "6", "1.8283", "-1.2509"),
(9, "7", "1.8283", "-1.4434"),
(9, "8", "1.8283", "-1.6358"),
(9, "9", "1.8283", "-1.8283"),
(10, "0", "2.0207", "-0.096225"),
(10, "1", "2.0207", "-0.28868"),
(10, "2", "2.0207", "-0.48113"),
(10, "3", "2.0207", "-0.67358"),
(10, "4", "2.0207", "-0.86603"),
(10, "5", "2.0207", "-1.0585"),
(10, "6", "2.0207", "-1.2509"),
(10, "7", "2.0207", "-1.4434"),
(10, "8", "2.0207", "-1.6358"),
(10, "9", "2.0207", "-1.8283"),
(10, "10", "2.0207", "-2.0207"),
(11, "0", "2.2132", "-0.096225"),
(11, "1", "2.2132", "-0.28868"),
(11, "2", "2.2132", "-0.48113"),
(11, "3", "2.2132", "-0.67358"),
(11, "4", "2.2132", "-0.86603"),
(11, "5", "2.2132", "-1.0585"),
(11, "6", "2.2132", "-1.2509"),
(11, "7", "2.2132", "-1.4434"),
(11, "8", "2.2132", "-1.6358"),
(11, "9", "2.2132", "-1.8283"),
(11, "10", "2.2132", "-2.0207"),
(11, "11", "2.2132", "-2.2132"),
(12, "0", "2.4056", "-0.096225"),
(12, "1", "2.4056", "-0.28868"),
(12, "2", "2.4056", "-0.48113"),
(12, "3", "2.4056", "-0.67358"),
(12, "4", "2.4056", "-0.86603"),
(12, "5", "2.4056", "-1.0585"),
(12, "6", "2.4056", "-1.2509"),
(12, "7", "2.4056", "-1.4434"),
(12, "8", "2.4056", "-1.6358"),
(12, "9", "2.4056", "-1.8283"),
(12, "10", "2.4056", "-2.0207"),
(12, "11", "2.4056", "-2.2132"),
(12, "12", "2.4056", "-2.4056"),
(13, "0", "2.5981", "-0.096225"),
(13, "1", "2.5981", "-0.28868"),
(13, "2", "2.5981", "-0.48113"),
(13, "3", "2.5981", "-0.67358"),
(13, "4", "2.5981", "-0.86603"),
(13, "5", "2.5981", "-1.0585"),
(13, "6", "2.5981", "-1.2509"),
(13, "7", "2.5981", "-1.4434"),
(13, "8", "2.5981", "-1.6358"),
(13, "9", "2.5981", "-1.8283"),
(13, "10", "2.5981", "-2.0207"),
(13, "11", "2.5981", "-2.2132"),
(13, "12", "2.5981", "-2.4056"),
(13, "13", "2.5981", "-2.5981"),
(14, "0", "2.7905", "-0.096225"),
(14, "1", "2.7905", "-0.28868"),
(14, "2", "2.7905", "-0.48113"),
(14, "3", "2.7905", "-0.67358"),
(14, "4", "2.7905", "-0.86603"),
(14, "5", "2.7905", "-1.0585"),
(14, "6", "2.7905", "-1.2509"),
(14, "7", "2.7905", "-1.4434"),
(14, "8", "2.7905", "-1.6358"),
(14, "9", "2.7905", "-1.8283"),
(14, "10", "2.7905", "-2.0207"),
(14, "11", "2.7905", "-2.2132"),
(14, "12", "2.7905", "-2.4056"),
(14, "13", "2.7905", "-2.5981"),
(14, "14", "2.7905", "-2.7905"),
(15, "0", "2.9830", "-0.096225"),
(15, "1", "2.9830", "-0.28868"),
(15, "2", "2.9830", "-0.48113"),
(15, "3", "2.9830", "-0.67358"),
(15, "4", "2.9830", "-0.86603"),
(15, "5", "2.9830", "-1.0585"),
(15, "6", "2.9830", "-1.2509"),
(15, "7", "2.9830", "-1.4434"),
(15, "8", "2.9830", "-1.6358"),
(15, "9", "2.9830", "-1.8283"),
(15, "10", "2.9830", "-2.0207"),
(15, "11", "2.9830", "-2.2132"),
(15, "12", "2.9830", "-2.4056"),
(15, "13", "2.9830", "-2.5981"),
(15, "14", "2.9830", "-2.7905"),
(15, "15", "2.9830", "-2.9830"),
(16, "0", "3.1754", "-0.096225"),
(16, "1", "3.1754", "-0.28868"),
(16, "2", "3.1754", "-0.48113"),
(16, "3", "3.1754", "-0.67358"),
(16, "4", "3.1754", "-0.86603"),
(16, "5", "3.1754", "-1.0585"),
(16, "6", "3.1754", "-1.2509"),
(16, "7", "3.1754", "-1.4434"),
(16, "8", "3.1754", "-1.6358"),
(16, "9", "3.1754", "-1.8283"),
(16, "10", "3.1754", "-2.0207"),
(16, "11", "3.1754", "-2.2132"),
(16, "12", "3.1754", "-2.4056"),
(16, "13", "3.1754", "-2.5981"),
(16, "14", "3.1754", "-2.7905"),
(16, "15", "3.1754", "-2.9830"),
(16, "16", "3.1754", "-3.1754"),
(17, "0", "3.3679", "-0.096225"),
(17, "1", "3.3679", "-0.28868"),
(17, "2", "3.3679", "-0.48113"),
(17, "3", "3.3679", "-0.67358"),
(17, "4", "3.3679", "-0.86603"),
(17, "5", "3.3679", "-1.0585"),
(17, "6", "3.3679", "-1.2509"),
(17, "7", "3.3679", "-1.4434"),
(17, "8", "3.3679", "-1.6358"),
(17, "9", "3.3679", "-1.8283"),
(17, "10", "3.3679", "-2.0207"),
(17, "11", "3.3679", "-2.2132"),
(17, "12", "3.3679", "-2.4056"),
(17, "13", "3.3679", "-2.5981"),
(17, "14", "3.3679", "-2.7905"),
(17, "15", "3.3679", "-2.9830"),
(17, "16", "3.3679", "-3.1754"),
(17, "17", "3.3679", "-3.3679"),
(18, "0", "3.5603", "-0.096225"),
(18, "1", "3.5603", "-0.28868"),
(18, "2", "3.5603", "-0.48113"),
(18, "3", "3.5603", "-0.67358"),
(18, "4", "3.5603", "-0.86603"),
(18, "5", "3.5603", "-1.0585"),
(18, "6", "3.5603", "-1.2509"),
(18, "7", "3.5603", "-1.4434"),
(18, "8", "3.5603", "-1.6358"),
(18, "9", "3.5603", "-1.8283"),
(18, "10", "3.5603", "-2.0207"),
(18, "11", "3.5603", "-2.2132"),
(18, "12", "3.5603", "-2.4056"),
(18, "13", "3.5603", "-2.5981"),
(18, "14", "3.5603", "-2.7905"),
(18, "15", "3.5603", "-2.9830"),
(18, "16", "3.5603", "-3.1754"),
(18, "17", "3.5603", "-3.3679"),
(18, "18", "3.5603", "-3.5603"),
(19, "0", "3.7528", "-0.096225"),
(19, "1", "3.7528", "-0.28868"),
(19, "2", "3.7528", "-0.48113"),
(19, "3", "3.7528", "-0.67358"),
(19, "4", "3.7528", "-0.86603"),
(19, "5", "3.7528", "-1.0585"),
(19, "6", "3.7528", "-1.2509"),
(19, "7", "3.7528", "-1.4434"),
(19, "8", "3.7528", "-1.6358"),
(19, "9", "3.7528", "-1.8283"),
(19, "10", "3.7528", "-2.0207"),
(19, "11", "3.7528", "-2.2132"),
(19, "12", "3.7528", "-2.4056"),
(19, "13", "3.7528", "-2.5981"),
(19, "14", "3.7528", "-2.7905"),
(19, "15", "3.7528", "-2.9830"),
(19, "16", "3.7528", "-3.1754"),
(19, "17", "3.7528", "-3.3679"),
(19, "18", "3.7528", "-3.5603"),
(19, "19", "3.7528", "-3.7528"),
(20, "0", "3.9452", "-0.096225"),
(20, "1", "3.9452", "-0.28868"),
(20, "2", "3.9452", "-0.48113"),
(20, "3", "3.9452", "-0.67358"),
(20, "4", "3.9452", "-0.86603"),
(20, "5", "3.9452", "-1.0585"),
(20, "6", "3.9452", "-1.2509"),
(20, "7", "3.9452", "-1.4434"),
(20, "8", "3.9452", "-1.6358"),
(20, "9", "3.9452", "-1.8283"),
(20, "10", "3.9452", "-2.0207"),
(20, "11", "3.9452", "-2.2132"),
(20, "12", "3.9452", "-2.4056"),
(20, "13", "3.9452", "-2.5981"),
(20, "14", "3.9452", "-2.7905"),
(20, "15", "3.9452", "-2.9830"),
(20, "16", "3.9452", "-3.1754"),
(20, "17", "3.9452", "-3.3679"),
(20, "18", "3.9452", "-3.5603"),
(20, "19", "3.9452", "-3.7528"),
(20, "20", "3.9452", "-3.9452"),
(21, "0", "4.1377", "-0.096225"),
(21, "1", "4.1377", "-0.28868"),
(21, "2", "4.1377", "-0.48113"),
(21, "3", "4.1377", "-0.67358"),
(21, "4", "4.1377", "-0.86603"),
(21, "5", "4.1377", "-1.0585"),
(21, "6", "4.1377", "-1.2509"),
(21, "7", "4.1377", "-1.4434"),
(21, "8", "4.1377", "-1.6358"),
(21, "9", "4.1377", "-1.8283"),
(21, "10", "4.1377", "-2.0207"),
(21, "11", "4.1377", "-2.2132"),
(21, "12", "4.1377", "-2.4056"),
(21, "13", "4.1377", "-2.5981"),
(21, "14", "4.1377", "-2.7905"),
(21, "15", "4.1377", "-2.9830"),
(21, "16", "4.1377", "-3.1754"),
(21, "17", "4.1377", "-3.3679"),
(21, "18", "4.1377", "-3.5603"),
(21, "19", "4.1377", "-3.7528"),
(21, "20", "4.1377", "-3.9452"),
(21, "21", "4.1377", "-4.1377"),
(22, "0", "4.3301", "-0.096225"),
(22, "1", "4.3301", "-0.28868"),
(22, "2", "4.3301", "-0.48113"),
(22, "3", "4.3301", "-0.67358"),
(22, "4", "4.3301", "-0.86603"),
(22, "5", "4.3301", "-1.0585"),
(22, "6", "4.3301", "-1.2509"),
(22, "7", "4.3301", "-1.4434"),
(22, "8", "4.3301", "-1.6358"),
(22, "9", "4.3301", "-1.8283"),
(22, "10", "4.3301", "-2.0207"),
(22, "11", "4.3301", "-2.2132"),
(22, "12", "4.3301", "-2.4056"),
(22, "13", "4.3301", "-2.5981"),
(22, "14", "4.3301", "-2.7905"),
(22, "15", "4.3301", "-2.9830"),
(22, "16", "4.3301", "-3.1754"),
(22, "17", "4.3301", "-3.3679"),
(22, "18", "4.3301", "-3.5603"),
(22, "19", "4.3301", "-3.7528"),
(22, "20", "4.3301", "-3.9452"),
(22, "21", "4.3301", "-4.1377"),
(22, "22", "4.3301", "-4.3301"),
(23, "0", "4.5226", "-0.096225"),
(23, "1", "4.5226", "-0.28868"),
(23, "2", "4.5226", "-0.48113"),
(23, "3", "4.5226", "-0.67358"),
(23, "4", "4.5226", "-0.86603"),
(23, "5", "4.5226", "-1.0585"),
(23, "6", "4.5226", "-1.2509"),
(23, "7", "4.5226", "-1.4434"),
(23, "8", "4.5226", "-1.6358"),
(23, "9", "4.5226", "-1.8283"),
(23, "10", "4.5226", "-2.0207"),
(23, "11", "4.5226", "-2.2132"),
(23, "12", "4.5226", "-2.4056"),
(23, "13", "4.5226", "-2.5981"),
(23, "14", "4.5226", "-2.7905"),
(23, "15", "4.5226", "-2.9830"),
(23, "16", "4.5226", "-3.1754"),
(23, "17", "4.5226", "-3.3679"),
(23, "18", "4.5226", "-3.5603"),
(23, "19", "4.5226", "-3.7528"),
(23, "20", "4.5226", "-3.9452"),
(23, "21", "4.5226", "-4.1377"),
(23, "22", "4.5226", "-4.3301"),
(23, "23", "4.5226", "-4.5226"),
(24, "0", "4.7150", "-0.096225"),
(24, "1", "4.7150", "-0.28868"),
(24, "2", "4.7150", "-0.48113"),
(24, "3", "4.7150", "-0.67358"),
(24, "4", "4.7150", "-0.86603"),
(24, "5", "4.7150", "-1.0585"),
(24, "6", "4.7150", "-1.2509"),
(24, "7", "4.7150", "-1.4434"),
(24, "8", "4.7150", "-1.6358"),
(24, "9", "4.7150", "-1.8283"),
(24, "10", "4.7150", "-2.0207"),
(24, "11", "4.7150", "-2.2132"),
(24, "12", "4.7150", "-2.4056"),
(24, "13", "4.7150", "-2.5981"),
(24, "14", "4.7150", "-2.7905"),
(24, "15", "4.7150", "-2.9830"),
(24, "16", "4.7150", "-3.1754"),
(24, "17", "4.7150", "-3.3679"),
(24, "18", "4.7150", "-3.5603"),
(24, "19", "4.7150", "-3.7528"),
(24, "20", "4.7150", "-3.9452"),
(24, "21", "4.7150", "-4.1377"),
(24, "22", "4.7150", "-4.3301"),
(24, "23", "4.7150", "-4.5226"),
(24, "24", "4.7150", "-4.7150"),
(25, "0", "4.9075", "-0.096225"),
(25, "1", "4.9075", "-0.28868"),
(25, "2", "4.9075", "-0.48113"),
(25, "3", "4.9075", "-0.67358"),
(25, "4", "4.9075", "-0.86603"),
(25, "5", "4.9075", "-1.0585"),
(25, "6", "4.9075", "-1.2509"),
(25, "7", "4.9075", "-1.4434"),
(25, "8", "4.9075", "-1.6358"),
(25, "9", "4.9075", "-1.8283"),
(25, "10", "4.9075", "-2.0207"),
(25, "11", "4.9075", "-2.2132"),
(25, "12", "4.9075", "-2.4056"),
(25, "13", "4.9075", "-2.5981"),
(25, "14", "4.9075", "-2.7905"),
(25, "15", "4.9075", "-2.9830"),
(25, "16", "4.9075", "-3.1754"),
(25, "17", "4.9075", "-3.3679"),
(25, "18", "4.9075", "-3.5603"),
(25, "19", "4.9075", "-3.7528"),
(25, "20", "4.9075", "-3.9452"),
(25, "21", "4.9075", "-4.1377"),
(25, "22", "4.9075", "-4.3301"),
(25, "23", "4.9075", "-4.5226"),
(25, "24", "4.9075", "-4.7150"),
(25, "25", "4.9075", "-4.9075"),
(26, "0", "5.0999", "-0.096225"),
(26, "1", "5.0999", "-0.28868"),
(26, "2", "5.0999", "-0.48113"),
(26, "3", "5.0999", "-0.67358"),
(26, "4", "5.0999", "-0.86603"),
(26, "5", "5.0999", "-1.0585"),
(26, "6", "5.0999", "-1.2509"),
(26, "7", "5.0999", "-1.4434"),
(26, "8", "5.0999", "-1.6358"),
(26, "9", "5.0999", "-1.8283"),
(26, "10", "5.0999", "-2.0207"),
(26, "11", "5.0999", "-2.2132"),
(26, "12", "5.0999", "-2.4056"),
(26, "13", "5.0999", "-2.5981"),
(26, "14", "5.0999", "-2.7905"),
(26, "15", "5.0999", "-2.9830"),
(26, "16", "5.0999", "-3.1754"),
(26, "17", "5.0999", "-3.3679"),
(26, "18", "5.0999", "-3.5603"),
(26, "19", "5.0999", "-3.7528"),
(26, "20", "5.0999", "-3.9452"),
(26, "21", "5.0999", "-4.1377"),
(26, "22", "5.0999", "-4.3301"),
(26, "23", "5.0999", "-4.5226"),
(26, "24", "5.0999", "-4.7150"),
(26, "25", "5.0999", "-4.9075"),
(26, "26", "5.0999", "-5.0999"),
(27, "0", "5.2924", "-0.096225"),
(27, "1", "5.2924", "-0.28868"),
(27, "2", "5.2924", "-0.48113"),
(27, "3", "5.2924", "-0.67358"),
(27, "4", "5.2924", "-0.86603"),
(27, "5", "5.2924", "-1.0585"),
(27, "6", "5.2924", "-1.2509"),
(27, "7", "5.2924", "-1.4434"),
(27, "8", "5.2924", "-1.6358"),
(27, "9", "5.2924", "-1.8283"),
(27, "10", "5.2924", "-2.0207"),
(27, "11", "5.2924", "-2.2132"),
(27, "12", "5.2924", "-2.4056"),
(27, "13", "5.2924", "-2.5981"),
(27, "14", "5.2924", "-2.7905"),
(27, "15", "5.2924", "-2.9830"),
(27, "16", "5.2924", "-3.1754"),
(27, "17", "5.2924", "-3.3679"),
(27, "18", "5.2924", "-3.5603"),
(27, "19", "5.2924", "-3.7528"),
(27, "20", "5.2924", "-3.9452"),
(27, "21", "5.2924", "-4.1377"),
(27, "22", "5.2924", "-4.3301"),
(27, "23", "5.2924", "-4.5226"),
(27, "24", "5.2924", "-4.7150"),
(27, "25", "5.2924", "-4.9075"),
(27, "26", "5.2924", "-5.0999"),
(27, "27", "5.2924", "-5.2924"),
(28, "0", "5.4848", "-0.096225"),
(28, "1", "5.4848", "-0.28868"),
(28, "2", "5.4848", "-0.48113"),
(28, "3", "5.4848", "-0.67358"),
(28, "4", "5.4848", "-0.86603"),
(28, "5", "5.4848", "-1.0585"),
(28, "6", "5.4848", "-1.2509"),
(28, "7", "5.4848", "-1.4434"),
(28, "8", "5.4848", "-1.6358"),
(28, "9", "5.4848", "-1.8283"),
(28, "10", "5.4848", "-2.0207"),
(28, "11", "5.4848", "-2.2132"),
(28, "12", "5.4848", "-2.4056"),
(28, "13", "5.4848", "-2.5981"),
(28, "14", "5.4848", "-2.7905"),
(28, "15", "5.4848", "-2.9830"),
(28, "16", "5.4848", "-3.1754"),
(28, "17", "5.4848", "-3.3679"),
(28, "18", "5.4848", "-3.5603"),
(28, "19", "5.4848", "-3.7528"),
(28, "20", "5.4848", "-3.9452"),
(28, "21", "5.4848", "-4.1377"),
(28, "22", "5.4848", "-4.3301"),
(28, "23", "5.4848", "-4.5226"),
(28, "24", "5.4848", "-4.7150"),
(28, "25", "5.4848", "-4.9075"),
(28, "26", "5.4848", "-5.0999"),
(28, "27", "5.4848", "-5.2924"),
(28, "28", "5.4848", "-5.4848"),
(29, "0", "5.6773", "-0.096225"),
(29, "1", "5.6773", "-0.28868"),
(29, "2", "5.6773", "-0.48113"),
(29, "3", "5.6773", "-0.67358"),
(29, "4", "5.6773", "-0.86603"),
(29, "5", "5.6773", "-1.0585"),
(29, "6", "5.6773", "-1.2509"),
(29, "7", "5.6773", "-1.4434"),
(29, "8", "5.6773", "-1.6358"),
(29, "9", "5.6773", "-1.8283"),
(29, "10", "5.6773", "-2.0207"),
(29, "11", "5.6773", "-2.2132"),
(29, "12", "5.6773", "-2.4056"),
(29, "13", "5.6773", "-2.5981"),
(29, "14", "5.6773", "-2.7905"),
(29, "15", "5.6773", "-2.9830"),
(29, "16", "5.6773", "-3.1754"),
(29, "17", "5.6773", "-3.3679"),
(29, "18", "5.6773", "-3.5603"),
(29, "19", "5.6773", "-3.7528"),
(29, "20", "5.6773", "-3.9452"),
(29, "21", "5.6773", "-4.1377"),
(29, "22", "5.6773", "-4.3301"),
(29, "23", "5.6773", "-4.5226"),
(29, "24", "5.6773", "-4.7150"),
(29, "25", "5.6773", "-4.9075"),
(29, "26", "5.6773", "-5.0999"),
(29, "27", "5.6773", "-5.2924"),
(29, "28", "5.6773", "-5.4848"),
(29, "29", "5.6773", "-5.6773"),
(30, "0", "5.8697", "-0.096225"),
(30, "1", "5.8697", "-0.28868"),
(30, "2", "5.8697", "-0.48113"),
(30, "3", "5.8697", "-0.67358"),
(30, "4", "5.8697", "-0.86603"),
(30, "5", "5.8697", "-1.0585"),
(30, "6", "5.8697", "-1.2509"),
(30, "7", "5.8697", "-1.4434"),
(30, "8", "5.8697", "-1.6358"),
(30, "9", "5.8697", "-1.8283"),
(30, "10", "5.8697", "-2.0207"),
(30, "11", "5.8697", "-2.2132"),
(30, "12", "5.8697", "-2.4056"),
(30, "13", "5.8697", "-2.5981"),
(30, "14", "5.8697", "-2.7905"),
(30, "15", "5.8697", "-2.9830"),
(30, "16", "5.8697", "-3.1754"),
(30, "17", "5.8697", "-3.3679"),
(30, "18", "5.8697", "-3.5603"),
(30, "19", "5.8697", "-3.7528"),
(30, "20", "5.8697", "-3.9452"),
(30, "21", "5.8697", "-4.1377"),
(30, "22", "5.8697", "-4.3301"),
(30, "23", "5.8697", "-4.5226"),
(30, "24", "5.8697", "-4.7150"),
(30, "25", "5.8697", "-4.9075"),
(30, "26", "5.8697", "-5.0999"),
(30, "27", "5.8697", "-5.2924"),
(30, "28", "5.8697", "-5.4848"),
(30, "29", "5.8697", "-5.6773"),
(30, "30", "5.8697", "-5.8697"),
(31,"0","6.0622","-0.096225"),
(31,"1","6.0622","-0.28868"),
(31,"2","6.0622","-0.48113"),
(31,"3","6.0622","-0.67358"),
(31,"4","6.0622","-0.86603"),
(31,"5","6.0622","-1.0585"),
(31,"6","6.0622","-1.2509"),
(31,"7","6.0622","-1.4434"),
(31,"8","6.0622","-1.6358"),
(31,"9","6.0622","-1.8283"),
(31,"10","6.0622","-2.0207"),
(31,"11","6.0622","-2.2132"),
(31,"12","6.0622","-2.4056"),
(31,"13","6.0622","-2.5981"),
(31,"14","6.0622","-2.7905"),
(31,"15","6.0622","-2.9830"),
(31,"16","6.0622","-3.1754"),
(31,"17","6.0622","-3.3679"),
(31,"18","6.0622","-3.5603"),
(31,"19","6.0622","-3.7528"),
(31,"20","6.0622","-3.9452"),
(31,"21","6.0622","-4.1377"),
(31,"22","6.0622","-4.3301"),
(31,"23","6.0622","-4.5226"),
(31,"24","6.0622","-4.7150"),
(31,"25","6.0622","-4.9075"),
(31,"26","6.0622","-5.0999"),
(31,"27","6.0622","-5.2924"),
(31,"28","6.0622","-5.4848"),
(31,"29","6.0622","-5.6773"),
(31,"30","6.0622","-5.8697"),
(31,"31","6.0622","-6.0622"),
(31,"32","6.0622","-6.2546"),
(31,"33","6.0622","-6.4471"),
(31,"34","6.0622","-6.6395"),
(31,"35","6.0622","-6.8320"),
(31,"36","6.0622","-7.0244"),
(31,"37","6.0622","-7.2169"),
(31,"38","6.0622","-7.4093"),
(31,"39","6.0622","-7.6018"),
(32,"0","6.2546","-0.096225"),
(32,"1","6.2546","-0.28868"),
(32,"2","6.2546","-0.48113"),
(32,"3","6.2546","-0.67358"),
(32,"4","6.2546","-0.86603"),
(32,"5","6.2546","-1.0585"),
(32,"6","6.2546","-1.2509"),
(32,"7","6.2546","-1.4434"),
(32,"8","6.2546","-1.6358"),
(32,"9","6.2546","-1.8283"),
(32,"10","6.2546","-2.0207"),
(32,"11","6.2546","-2.2132"),
(32,"12","6.2546","-2.4056"),
(32,"13","6.2546","-2.5981"),
(32,"14","6.2546","-2.7905"),
(32,"15","6.2546","-2.9830"),
(32,"16","6.2546","-3.1754"),
(32,"17","6.2546","-3.3679"),
(32,"18","6.2546","-3.5603"),
(32,"19","6.2546","-3.7528"),
(32,"20","6.2546","-3.9452"),
(32,"21","6.2546","-4.1377"),
(32,"22","6.2546","-4.3301"),
(32,"23","6.2546","-4.5226"),
(32,"24","6.2546","-4.7150"),
(32,"25","6.2546","-4.9075"),
(32,"26","6.2546","-5.0999"),
(32,"27","6.2546","-5.2924"),
(32,"28","6.2546","-5.4848"),
(32,"29","6.2546","-5.6773"),
(32,"30","6.2546","-5.8697"),
(32,"31","6.2546","-6.0622"),
(32,"32","6.2546","-6.2546"),
(32,"33","6.2546","-6.4471"),
(32,"34","6.2546","-6.6395"),
(32,"35","6.2546","-6.8320"),
(32,"36","6.2546","-7.0244"),
(32,"37","6.2546","-7.2169"),
(32,"38","6.2546","-7.4093"),
(32,"39","6.2546","-7.6018"),
(33,"0","6.4471","-0.096225"),
(33,"1","6.4471","-0.28868"),
(33,"2","6.4471","-0.48113"),
(33,"3","6.4471","-0.67358"),
(33,"4","6.4471","-0.86603"),
(33,"5","6.4471","-1.0585"),
(33,"6","6.4471","-1.2509"),
(33,"7","6.4471","-1.4434"),
(33,"8","6.4471","-1.6358"),
(33,"9","6.4471","-1.8283"),
(33,"10","6.4471","-2.0207"),
(33,"11","6.4471","-2.2132"),
(33,"12","6.4471","-2.4056"),
(33,"13","6.4471","-2.5981"),
(33,"14","6.4471","-2.7905"),
(33,"15","6.4471","-2.9830"),
(33,"16","6.4471","-3.1754"),
(33,"17","6.4471","-3.3679"),
(33,"18","6.4471","-3.5603"),
(33,"19","6.4471","-3.7528"),
(33,"20","6.4471","-3.9452"),
(33,"21","6.4471","-4.1377"),
(33,"22","6.4471","-4.3301"),
(33,"23","6.4471","-4.5226"),
(33,"24","6.4471","-4.7150"),
(33,"25","6.4471","-4.9075"),
(33,"26","6.4471","-5.0999"),
(33,"27","6.4471","-5.2924"),
(33,"28","6.4471","-5.4848"),
(33,"29","6.4471","-5.6773"),
(33,"30","6.4471","-5.8697"),
(33,"31","6.4471","-6.0622"),
(33,"32","6.4471","-6.2546"),
(33,"33","6.4471","-6.4471"),
(33,"34","6.4471","-6.6395"),
(33,"35","6.4471","-6.8320"),
(33,"36","6.4471","-7.0244"),
(33,"37","6.4471","-7.2169"),
(33,"38","6.4471","-7.4093"),
(33,"39","6.4471","-7.6018"),
(34,"0","6.6395","-0.096225"),
(34,"1","6.6395","-0.28868"),
(34,"2","6.6395","-0.48113"),
(34,"3","6.6395","-0.67358"),
(34,"4","6.6395","-0.86603"),
(34,"5","6.6395","-1.0585"),
(34,"6","6.6395","-1.2509"),
(34,"7","6.6395","-1.4434"),
(34,"8","6.6395","-1.6358"),
(34,"9","6.6395","-1.8283"),
(34,"10","6.6395","-2.0207"),
(34,"11","6.6395","-2.2132"),
(34,"12","6.6395","-2.4056"),
(34,"13","6.6395","-2.5981"),
(34,"14","6.6395","-2.7905"),
(34,"15","6.6395","-2.9830"),
(34,"16","6.6395","-3.1754"),
(34,"17","6.6395","-3.3679"),
(34,"18","6.6395","-3.5603"),
(34,"19","6.6395","-3.7528"),
(34,"20","6.6395","-3.9452"),
(34,"21","6.6395","-4.1377"),
(34,"22","6.6395","-4.3301"),
(34,"23","6.6395","-4.5226"),
(34,"24","6.6395","-4.7150"),
(34,"25","6.6395","-4.9075"),
(34,"26","6.6395","-5.0999"),
(34,"27","6.6395","-5.2924"),
(34,"28","6.6395","-5.4848"),
(34,"29","6.6395","-5.6773"),
(34,"30","6.6395","-5.8697"),
(34,"31","6.6395","-6.0622"),
(34,"32","6.6395","-6.2546"),
(34,"33","6.6395","-6.4471"),
(34,"34","6.6395","-6.6395"),
(34,"35","6.6395","-6.8320"),
(34,"36","6.6395","-7.0244"),
(34,"37","6.6395","-7.2169"),
(34,"38","6.6395","-7.4093"),
(34,"39","6.6395","-7.6018"),
(35,"0","6.8320","-0.096225"),
(35,"1","6.8320","-0.28868"),
(35,"2","6.8320","-0.48113"),
(35,"3","6.8320","-0.67358"),
(35,"4","6.8320","-0.86603"),
(35,"5","6.8320","-1.0585"),
(35,"6","6.8320","-1.2509"),
(35,"7","6.8320","-1.4434"),
(35,"8","6.8320","-1.6358"),
(35,"9","6.8320","-1.8283"),
(35,"10","6.8320","-2.0207"),
(35,"11","6.8320","-2.2132"),
(35,"12","6.8320","-2.4056"),
(35,"13","6.8320","-2.5981"),
(35,"14","6.8320","-2.7905"),
(35,"15","6.8320","-2.9830"),
(35,"16","6.8320","-3.1754"),
(35,"17","6.8320","-3.3679"),
(35,"18","6.8320","-3.5603"),
(35,"19","6.8320","-3.7528"),
(35,"20","6.8320","-3.9452"),
(35,"21","6.8320","-4.1377"),
(35,"22","6.8320","-4.3301"),
(35,"23","6.8320","-4.5226"),
(35,"24","6.8320","-4.7150"),
(35,"25","6.8320","-4.9075"),
(35,"26","6.8320","-5.0999"),
(35,"27","6.8320","-5.2924"),
(35,"28","6.8320","-5.4848"),
(35,"29","6.8320","-5.6773"),
(35,"30","6.8320","-5.8697"),
(35,"31","6.8320","-6.0622"),
(35,"32","6.8320","-6.2546"),
(35,"33","6.8320","-6.4471"),
(35,"34","6.8320","-6.6395"),
(35,"35","6.8320","-6.8320"),
(35,"36","6.8320","-7.0244"),
(35,"37","6.8320","-7.2169"),
(35,"38","6.8320","-7.4093"),
(35,"39","6.8320","-7.6018"),
(36,"0","7.0244","-0.096225"),
(36,"1","7.0244","-0.28868"),
(36,"2","7.0244","-0.48113"),
(36,"3","7.0244","-0.67358"),
(36,"4","7.0244","-0.86603"),
(36,"5","7.0244","-1.0585"),
(36,"6","7.0244","-1.2509"),
(36,"7","7.0244","-1.4434"),
(36,"8","7.0244","-1.6358"),
(36,"9","7.0244","-1.8283"),
(36,"10","7.0244","-2.0207"),
(36,"11","7.0244","-2.2132"),
(36,"12","7.0244","-2.4056"),
(36,"13","7.0244","-2.5981"),
(36,"14","7.0244","-2.7905"),
(36,"15","7.0244","-2.9830"),
(36,"16","7.0244","-3.1754"),
(36,"17","7.0244","-3.3679"),
(36,"18","7.0244","-3.5603"),
(36,"19","7.0244","-3.7528"),
(36,"20","7.0244","-3.9452"),
(36,"21","7.0244","-4.1377"),
(36,"22","7.0244","-4.3301"),
(36,"23","7.0244","-4.5226"),
(36,"24","7.0244","-4.7150"),
(36,"25","7.0244","-4.9075"),
(36,"26","7.0244","-5.0999"),
(36,"27","7.0244","-5.2924"),
(36,"28","7.0244","-5.4848"),
(36,"29","7.0244","-5.6773"),
(36,"30","7.0244","-5.8697"),
(36,"31","7.0244","-6.0622"),
(36,"32","7.0244","-6.2546"),
(36,"33","7.0244","-6.4471"),
(36,"34","7.0244","-6.6395"),
(36,"35","7.0244","-6.8320"),
(36,"36","7.0244","-7.0244"),
(36,"37","7.0244","-7.2169"),
(36,"38","7.0244","-7.4093"),
(36,"39","7.0244","-7.6018"),
(37,"0","7.2169","-0.096225"),
(37,"1","7.2169","-0.28868"),
(37,"2","7.2169","-0.48113"),
(37,"3","7.2169","-0.67358"),
(37,"4","7.2169","-0.86603"),
(37,"5","7.2169","-1.0585"),
(37,"6","7.2169","-1.2509"),
(37,"7","7.2169","-1.4434"),
(37,"8","7.2169","-1.6358"),
(37,"9","7.2169","-1.8283"),
(37,"10","7.2169","-2.0207"),
(37,"11","7.2169","-2.2132"),
(37,"12","7.2169","-2.4056"),
(37,"13","7.2169","-2.5981"),
(37,"14","7.2169","-2.7905"),
(37,"15","7.2169","-2.9830"),
(37,"16","7.2169","-3.1754"),
(37,"17","7.2169","-3.3679"),
(37,"18","7.2169","-3.5603"),
(37,"19","7.2169","-3.7528"),
(37,"20","7.2169","-3.9452"),
(37,"21","7.2169","-4.1377"),
(37,"22","7.2169","-4.3301"),
(37,"23","7.2169","-4.5226"),
(37,"24","7.2169","-4.7150"),
(37,"25","7.2169","-4.9075"),
(37,"26","7.2169","-5.0999"),
(37,"27","7.2169","-5.2924"),
(37,"28","7.2169","-5.4848"),
(37,"29","7.2169","-5.6773"),
(37,"30","7.2169","-5.8697"),
(37,"31","7.2169","-6.0622"),
(37,"32","7.2169","-6.2546"),
(37,"33","7.2169","-6.4471"),
(37,"34","7.2169","-6.6395"),
(37,"35","7.2169","-6.8320"),
(37,"36","7.2169","-7.0244"),
(37,"37","7.2169","-7.2169"),
(37,"38","7.2169","-7.4093"),
(37,"39","7.2169","-7.6018"),
(38,"0","7.4093","-0.096225"),
(38,"1","7.4093","-0.28868"),
(38,"2","7.4093","-0.48113"),
(38,"3","7.4093","-0.67358"),
(38,"4","7.4093","-0.86603"),
(38,"5","7.4093","-1.0585"),
(38,"6","7.4093","-1.2509"),
(38,"7","7.4093","-1.4434"),
(38,"8","7.4093","-1.6358"),
(38,"9","7.4093","-1.8283"),
(38,"10","7.4093","-2.0207"),
(38,"11","7.4093","-2.2132"),
(38,"12","7.4093","-2.4056"),
(38,"13","7.4093","-2.5981"),
(38,"14","7.4093","-2.7905"),
(38,"15","7.4093","-2.9830"),
(38,"16","7.4093","-3.1754"),
(38,"17","7.4093","-3.3679"),
(38,"18","7.4093","-3.5603"),
(38,"19","7.4093","-3.7528"),
(38,"20","7.4093","-3.9452"),
(38,"21","7.4093","-4.1377"),
(38,"22","7.4093","-4.3301"),
(38,"23","7.4093","-4.5226"),
(38,"24","7.4093","-4.7150"),
(38,"25","7.4093","-4.9075"),
(38,"26","7.4093","-5.0999"),
(38,"27","7.4093","-5.2924"),
(38,"28","7.4093","-5.4848"),
(38,"29","7.4093","-5.6773"),
(38,"30","7.4093","-5.8697"),
(38,"31","7.4093","-6.0622"),
(38,"32","7.4093","-6.2546"),
(38,"33","7.4093","-6.4471"),
(38,"34","7.4093","-6.6395"),
(38,"35","7.4093","-6.8320"),
(38,"36","7.4093","-7.0244"),
(38,"37","7.4093","-7.2169"),
(38,"38","7.4093","-7.4093"),
(38,"39","7.4093","-7.6018"),
(39,"0","7.6018","-0.096225"),
(39,"1","7.6018","-0.28868"),
(39,"2","7.6018","-0.48113"),
(39,"3","7.6018","-0.67358"),
(39,"4","7.6018","-0.86603"),
(39,"5","7.6018","-1.0585"),
(39,"6","7.6018","-1.2509"),
(39,"7","7.6018","-1.4434"),
(39,"8","7.6018","-1.6358"),
(39,"9","7.6018","-1.8283"),
(39,"10","7.6018","-2.0207"),
(39,"11","7.6018","-2.2132"),
(39,"12","7.6018","-2.4056"),
(39,"13","7.6018","-2.5981"),
(39,"14","7.6018","-2.7905"),
(39,"15","7.6018","-2.9830"),
(39,"16","7.6018","-3.1754"),
(39,"17","7.6018","-3.3679"),
(39,"18","7.6018","-3.5603"),
(39,"19","7.6018","-3.7528"),
(39,"20","7.6018","-3.9452"),
(39,"21","7.6018","-4.1377"),
(39,"22","7.6018","-4.3301"),
(39,"23","7.6018","-4.5226"),
(39,"24","7.6018","-4.7150"),
(39,"25","7.6018","-4.9075"),
(39,"26","7.6018","-5.0999"),
(39,"27","7.6018","-5.2924"),
(39,"28","7.6018","-5.4848"),
(39,"29","7.6018","-5.6773"),
(39,"30","7.6018","-5.8697"),
(39,"31","7.6018","-6.0622"),
(39,"32","7.6018","-6.2546"),
(39,"33","7.6018","-6.4471"),
(39,"34","7.6018","-6.6395"),
(39,"35","7.6018","-6.8320"),
(39,"36","7.6018","-7.0244"),
(39,"37","7.6018","-7.2169"),
(39,"38","7.6018","-7.4093"),
(39,"39","7.6018","-7.6018"),
(40,"0","7.7942","-0.096225"),
(40,"1","7.7942","-0.28868"),
(40,"2","7.7942","-0.48113"),
(40,"3","7.7942","-0.67358"),
(40,"4","7.7942","-0.86603"),
(40,"5","7.7942","-1.0585"),
(40,"6","7.7942","-1.2509"),
(40,"7","7.7942","-1.4434"),
(40,"8","7.7942","-1.6358"),
(40,"9","7.7942","-1.8283"),
(40,"10","7.7942","-2.0207"),
(40,"11","7.7942","-2.2132"),
(40,"12","7.7942","-2.4056"),
(40,"13","7.7942","-2.5981"),
(40,"14","7.7942","-2.7905"),
(40,"15","7.7942","-2.9830"),
(40,"16","7.7942","-3.1754"),
(40,"17","7.7942","-3.3679"),
(40,"18","7.7942","-3.5603"),
(40,"19","7.7942","-3.7528"),
(40,"20","7.7942","-3.9452"),
(40,"21","7.7942","-4.1377"),
(40,"22","7.7942","-4.3301"),
(40,"23","7.7942","-4.5226"),
(40,"24","7.7942","-4.7150"),
(40,"25","7.7942","-4.9075"),
(40,"26","7.7942","-5.0999"),
(40,"27","7.7942","-5.2924"),
(40,"28","7.7942","-5.4848"),
(40,"29","7.7942","-5.6773"),
(40,"30","7.7942","-5.8697"),
(40,"31","7.7942","-6.0622"),
(40,"32","7.7942","-6.2546"),
(40,"33","7.7942","-6.4471"),
(40,"34","7.7942","-6.6395"),
(40,"35","7.7942","-6.8320"),
(40,"36","7.7942","-7.0244"),
(40,"37","7.7942","-7.2169"),
(40,"38","7.7942","-7.4093"),
(40,"39","7.7942","-7.6018")
]
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 428 | # ------------------------------------------------------------------
# Compute Schwarzschild QNMs perturbed by a scalar field
# ------------------------------------------------------------------
using QuasinormalModes
using ProgressMeter
using Dates
include("Schwarzschild_boson.jl")
include("guesses.jl")
include("compute_boson.jl")
# Compute Schwarzschild QNMs for a spin-0 (scalar) perturbation with 100 AIM iterations and write them to high_l_qnm.dat
compute_boson(convert(UInt32, 100), 0)
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 1097 | using Plots
using CSV
using DataFrames
using LaTeXStrings
file = CSV.File("high_l_qnm.dat", header=["iter", "l", "n", "real_guess", "imag_guess", "real", "imag"], delim=' ', skipto=2, ignorerepeated=true)
df = DataFrame(file)
function plot_ns(save=true)
dfs = [filter(row -> row.n == n, df) for n in 0:3]
p1 = plot(dfs[1].real,
-dfs[1].imag,
seriestype=:scatter,
xlabel=L"\Re(\omega)",
ylabel=L"-\Im(\omega)",
label=L"n = 0",
frame=:box,
grid=false,
color=:red,
fontfamily="Computer Modern",
legend=:outertopright
)
p2 = plot!(p1, dfs[2].real, -dfs[2].imag, seriestype=:scatter, label=L"n = 1", color=:black)
p3 = plot!(p2, dfs[3].real, -dfs[3].imag, seriestype=:scatter, label=L"n = 2", color=:blue)
p4 = plot!(p3, dfs[4].real, -dfs[4].imag, seriestype=:scatter, label=L"n = 3", color=:green)
if save
savefig(p4, "high_l_Schwarzschild/l_plot.svg")
end
return p4
end
plot_ns()
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 5952 | __precompile__()
"""
This package contains routines for computing eigenvalues of second
order ordinary differential equations and in particular the
quasinormal modes (QNMs) of black holes in General Relativity using
the "Asymptotic Iteration Method" [1] using the implementation
based on the "improved" version of the AIM, described in [2].
References:
[1] (https://arxiv.org/abs/math-ph/0309066v1)
[2] (https://arxiv.org/abs/1111.5024)
"""
module QuasinormalModes
# ------------------------------------------------------------------
# 1. Imports
# ------------------------------------------------------------------
using SymEngine
using Polynomials
using PolynomialRoots
using TaylorSeries
using NLsolve
using Roots
# ------------------------------------------------------------------
# 2. Public API (Exports)
# ------------------------------------------------------------------
# --- Type hierarchy ---
export AIMProblem
export AnalyticAIMProblem
export NumericAIMProblem
export QuadraticEigenvalueProblem
# --- Extra (concrete) types ---
export AIMCache
# --- Steping methods (singleton types) ---
export Serial
export Threaded
# --- Mandatory methods ---
export λ0, S0, get_niter, get_x0
export get_ODEvar, get_ODEeigen
# --- AIM methods ---
export computeDelta!
export computeEigenvalues
export eigenvaluesInGrid
# ------------------------------------------------------------------
# 3. Type hierarchy
# ------------------------------------------------------------------
"""
Parent super-type of all problems that can be solved using the AIM.
"""
abstract type AIMProblem{N<:Unsigned,T<:Number} end
"""
Parent super-type of all problems that can be solved using the AIM semi-analytically.
"""
abstract type AnalyticAIMProblem{N<:Unsigned,T<:Number} <: AIMProblem{N,T} end
"""
Parent super-type of all problems that can be solved using the AIM numerically.
"""
abstract type NumericAIMProblem{N<:Unsigned,T<:Number} <: AIMProblem{N,T} end
"""
Parent super-type of all problems whose eigenvalue is a quadratic polynomial.
"""
abstract type QuadraticEigenvalueProblem{N<:Unsigned,T<:Number} <: AnalyticAIMProblem{N,T} end
# ------------------------------------------------------------------
# 4. Traits
# ------------------------------------------------------------------
"""
Super-type of traits describing the analyticity of eigenvalue problems.
"""
abstract type AnalyticityTrait end
"""
All problems with eigenvalues that *can* be described by analytic functions have this trait.
"""
struct IsAnalytic <: AnalyticityTrait end
"""
All problems with eigenvalues that *can't* be described by analytic functions have this trait.
"""
struct IsNumeric <: AnalyticityTrait end
"""
The default trait of AIMProblem(s).
"""
AnalyticityTrait(::Type{<:AIMProblem}) = IsNumeric()
"""
The trait of AnalyticAIMProblem(s).
"""
AnalyticityTrait(::Type{<:AnalyticAIMProblem}) = IsAnalytic()
"""
The trait of NumericAIMProblem(s).
"""
AnalyticityTrait(::Type{<:NumericAIMProblem}) = IsNumeric()
"""
All problem types must implement a λ0 function.
This behavior is enforced by the default implementations.
"""
λ0(x::T) where {T} = λ0(AnalyticityTrait(T), x)
λ0(::IsAnalytic, x) = error("Please implement a λ0 function for ", typeof(x))
λ0(::IsNumeric, x) = error("Please implement a λ0 function for ", typeof(x))
"""
All problem types must implement a S0 function.
This behavior is enforced by the default implementations.
"""
S0(x::T) where {T} = S0(AnalyticityTrait(T), x)
S0(::IsAnalytic, x) = error("Please implement a S0 function for ", typeof(x))
S0(::IsNumeric, x) = error("Please implement a S0 function for ", typeof(x))
"""
All problem types must implement get_niter to return the number of iterations to perform.
"""
get_niter(x::T) where {T} = get_niter(AnalyticityTrait(T), x)
get_niter(::IsAnalytic, x) = error("Please implement a get_niter function for ", typeof(x))
get_niter(::IsNumeric, x) = error("Please implement a get_niter function for ", typeof(x))
"""
All problem types must implement get_x0 to return AIM's point of evaluation.
"""
get_x0(x::T) where {T} = get_x0(AnalyticityTrait(T), x)
get_x0(::IsAnalytic, x) = error("Please implement a get_x0 function for ", typeof(x))
get_x0(::IsNumeric, x) = error("Please implement a get_x0 function for ", typeof(x))
"""
Analytic problems must implement an accessor to the variable of the ODE.
"""
get_ODEvar(x::T) where {T} = get_ODEvar(AnalyticityTrait(T), x)
get_ODEvar(::IsAnalytic, x) = error("Please implement a get_ODEvar function for ", typeof(x))
get_ODEvar(::IsNumeric, x) = error("Numeric problem ", typeof(x), " cannot implement get_ODEvar")
"""
Analytic problems must implement an accessor to the eigenvalue of the ODE.
"""
get_ODEeigen(x::T) where {T} = get_ODEeigen(AnalyticityTrait(T), x)
get_ODEeigen(::IsAnalytic, x) = error("Please implement a get_ODEeigen function for ", typeof(x))
get_ODEeigen(::IsNumeric, x) = error("Numeric problem ", typeof(x), " cannot implement get_ODEeigen")
# ------------------------------------------------------------------
# 5. AIM stepping methods
# ------------------------------------------------------------------
"""
Super-type of all stepping methods used internally by the AIM.
"""
abstract type AIMSteppingMethod end
"""
Perform all AIM steps sequentially
"""
struct Serial <: AIMSteppingMethod end
"""
Perform all AIM steps in parallel using threads
"""
struct Threaded <: AIMSteppingMethod end
# ------------------------------------------------------------------
# 6. Includes
# ------------------------------------------------------------------
# --- Include core functionality ---
include("./core/aim_cache.jl")
include("./core/analytic_tools.jl")
include("./core/aim_step.jl")
include("./core/compute_delta.jl")
# --- Include eigenvalue computing functionality ---
include("./eigenvalue_computing/compute_eigenvalues.jl")
include("./eigenvalue_computing/eigenvalues_in_grid.jl")
end # QuasinormalModes
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 3459 | """
Cache of coefficient arrays for the AIM.
To each AIM problem corresponds a cache. As long as the problem doesn't change, the cache can be reused.
# Members
- `icda::Array{T,1}`: Hold the initial c data, i.e., c^i_0.
- `ccda::Array{T,1}`: Hold the coefficients for the current aim step, c^i_n.
- `pcda::Array{T,1}`: Hold the coefficients for the previous aim step, c^i_{n-1}.
- `bcda::Array{T,1}`: The work buffer used to actually compute the c coefficients in parallel.
- `idda::Array{T,1}`: Hold the initial d data, i.e., d^i_0.
- `cdda::Array{T,1}`: Hold the coefficients for the current aim step, d^i_n.
- `pdda::Array{T,1}`: Hold the coefficients for the previous aim step, d^i_{n-1}.
- `bdda::Array{T,1}`: The work buffer used to actually compute the d coefficients in parallel.
- `size::N`: The size of the arrays in the cache.
"""
struct AIMCache{N <: Unsigned, T <: Any}
icda::Array{T,1}
ccda::Array{T,1}
pcda::Array{T,1}
bcda::Array{T,1}
idda::Array{T,1}
cdda::Array{T,1}
pdda::Array{T,1}
bdda::Array{T,1}
size::N
end
"""
AIMCache(p::QuadraticEigenvalueProblem{N,T}) where {N <: Unsigned, T <: Number}
Create an AIMCache object suitable for Quadratic Eigenvalue Problems.
# Input
- `p::QuadraticEigenvalueProblem`: The problem data.
# Output
An `AIMCache{N,Polynomial{T}}` object.
"""
function AIMCache(p::QuadraticEigenvalueProblem{N,T}) where {N <: Unsigned, T <: Number}
P = Polynomial{T}
size = get_niter(p) + one(N)
# The initial data arrays are initialized using the Taylor coefficient formula f^(k)(x0)/k!
# f^(k)(x0) is a polynomial constructed by createPoly
icda = [createPoly(p, i, λ0) for i in zero(N):get_niter(p)]
ccda = zeros(P, size)
pcda = zeros(P, size)
bcda = zeros(P, size)
idda = [createPoly(p, i, S0) for i in zero(N):get_niter(p)]
cdda = zeros(P, size)
pdda = zeros(P, size)
bdda = zeros(P, size)
AIMCache{N,P}(icda, ccda, pcda, bcda, idda, cdda, pdda, bdda, size)
end
"""
AIMCache(p::NumericAIMProblem{N,T}) where {N <: Unsigned, T <: Number}
Create an AIMCache object suitable for Numeric Eigenvalue Problems.
# Input
- `p::NumericAIMProblem`: The problem data.
# Output
An `AIMCache{N,T}` object.
"""
function AIMCache(p::NumericAIMProblem{N,T}) where {N <: Unsigned, T <: Number}
size = get_niter(p) + one(N)
icda = zeros(T, size)
ccda = zeros(T, size)
pcda = zeros(T, size)
bcda = zeros(T, size)
idda = zeros(T, size)
cdda = zeros(T, size)
pdda = zeros(T, size)
bdda = zeros(T, size)
AIMCache{N,T}(icda, ccda, pcda, bcda, idda, cdda, pdda, bdda, size)
end
"""
recomputeInitials!(p::NumericAIMProblem{N,T}, c::AIMCache{N,T}, ω::T) where {N <: Unsigned, T <: Number}
Reevaluate (in-place) the initial data arrays. The initial data array elements are the Taylor expansion
coefficients of λ0 and S0 in the ODE variable x around x0 of order get_niter(p) at a point ω.
# Input
- `p::NumericAIMProblem`: The problem data.
- `c::AIMCache`: The problem data associated cache.
- `ω::T`: The value of the eigenvalue to evaluate the arrays in.
# Output
nothing
"""
function recomputeInitials!(p::NumericAIMProblem{N,T}, c::AIMCache{N,T}, ω::T) where {N <: Unsigned, T <: Number}
t = get_x0(p) + Taylor1(T, convert(Int64, get_niter(p)))
c1 = λ0(p)(t,ω).coeffs
c2 = S0(p)(t,ω).coeffs
copy!(c.icda, c1)
copy!(c.idda, c2)
return nothing
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 4057 | """
AIMStep!(::Threaded, p::AIMProblem{N,T}, c::AIMCache{N,U}) where {N <: Unsigned, T <: Number, U <: Any}
Performs a single step of the AIM algorithm in parallel using threads:
1. The initial data arrays are not altered.
2. The previous arrays receive the values of the current arrays.
3. The results of the next step are computed using the initial and current arrays and stored in the buffer arrays.
4. The current arrays receive the contents of the buffer arrays.
# Input
- `::Threaded`: Instance of the `Threaded` object.
- `p::AIMProblem`: The problem data to use in the computation.
- `c::AIMCache`: The cache of arrays that corresponds to the problem p.
# Output
nothing
"""
function AIMStep!(::Threaded, p::AIMProblem{N,T}, c::AIMCache{N,U}) where {N <: Unsigned, T <: Number, U <: Any}
zeroN = zero(N)
oneN = one(N)
twoN = oneN + oneN
# The currently computed coefficients will be the previous iteration coefficients
# after the iteration is done, so we store the current values to the previous array
@inbounds copy!(c.pcda, c.ccda)
@inbounds copy!(c.pdda, c.cdda)
# Apply the AIM formula to compute the coefficients, writing the results to the
# buffer arrays and reading data from all the other arrays.
# The use of a buffer array as temporary storage for the coefficients allows the
# computation of the coefficients to be parallelized
Threads.@threads for i in zeroN:(get_niter(p) - oneN)
c_sum = zero(T)
d_sum = zero(T)
for k in zeroN:i
@inbounds c_sum += c.icda[k + oneN] * c.ccda[i - k + oneN]
@inbounds d_sum += c.idda[k + oneN] * c.ccda[i - k + oneN]
end
@inbounds c.bcda[i + oneN] = (i + oneN) * c.ccda[i + twoN] + c.cdda[i + oneN] + c_sum
@inbounds c.bdda[i + oneN] = (i + oneN) * c.cdda[i + twoN] + d_sum
end
# Copy the data in the buffer arrays to the current arrays
@inbounds copy!(c.ccda, c.bcda)
@inbounds copy!(c.cdda, c.bdda)
return nothing
end
"""
AIMStep!(::Serial, p::AIMProblem{N,T}, c::AIMCache{N,U}) where {N <: Unsigned, T <: Number, U <: Any}
Performs a single step of the AIM algorithm sequentially:
1. The initial data arrays are not altered.
2. The previous arrays receive the values of the current arrays.
3. The results of the next step are computed using the initial and current arrays and stored in the buffer arrays.
4. The current arrays receive the contents of the buffer arrays.
# Input
- `::Serial`: Instance of `Serial` object.
- `p::AIMProblem`: The problem data to use in the computation.
- `c::AIMCache`: The cache of arrays that corresponds to the problem p.
# Output
nothing
"""
function AIMStep!(::Serial, p::AIMProblem{N,T}, c::AIMCache{N,U}) where {N <: Unsigned, T <: Number, U <: Any}
zeroN = zero(N)
oneN = one(N)
twoN = oneN + oneN
# The currently computed coefficients will be the previous iteration coefficients
# after the iteration is done, so we store the current values to the previous array
@inbounds copy!(c.pcda, c.ccda)
@inbounds copy!(c.pdda, c.cdda)
# Apply the AIM formula to compute the coefficients, writing the results to the
# buffer arrays and reading data from all the other arrays.
# The use of a buffer array as temporary storage for the coefficients allows the
# computation of the coefficients to be parallelized
for i in zeroN:(get_niter(p) - oneN)
c_sum = zero(T)
d_sum = zero(T)
for k in zeroN:i
@inbounds c_sum += c.icda[k + oneN] * c.ccda[i - k + oneN]
@inbounds d_sum += c.idda[k + oneN] * c.ccda[i - k + oneN]
end
@inbounds c.bcda[i + oneN] = (i + oneN) * c.ccda[i + twoN] + c.cdda[i + oneN] + c_sum
@inbounds c.bdda[i + oneN] = (i + oneN) * c.cdda[i + twoN] + d_sum
end
# Copy the data in the buffer arrays to the current arrays
@inbounds copy!(c.ccda, c.bcda)
@inbounds copy!(c.cdda, c.bdda)
return nothing
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 3992 | """
dnx(n::T, f::Basic, v::Basic) where {T <: Unsigned}
Computes the n-th derivative of the function f with respect to the variable v.
This function re-implements SymEngine's own `diff` function using an early-exit strategy.
# Input
- `n::T`: The order of the derivative.
- `f::Basic`: The expression to compute the derivative.
- `v::Basic`: The variable with respect to which the derivative will be computed.
# Output
A `SymEngine.Basic` object with the derived expression.
"""
function dnx(n::T, f::Basic, v::Basic) where {T <: Unsigned}
b0 = Basic(0)
zt = zero(T)
ot = one(T)
f == b0 && return b0
n == zt && return f
n == ot && return SymEngine.diff(f, v)
n > 1 && return dnx(n - ot, SymEngine.diff(f, v), v)
end
"""
dnx(p::AnalyticAIMProblem{N,T}, n::N, f::Function, v::Function) where {N <: Unsigned, T <: Number}
Computes the n-th derivative of the AIM expressions with respect to ODE's variable.
This function is only a thin wrapper around SymEngine's own `diff` function.
It works as a barrier function that produces a type stable `Basic` result.
# Input
- `p::AnalyticAIMProblem`: The problem data with the expressions to compute the derivative.
- `n::Unsigned`: The order of the derivative.
- `f::Function`: The actual expression to compute the derivative. Either λ0 or S0.
- `v::Function`: The variable with respect to which the derivative will be computed. Either get_ODEvar or get_ODEeigen.
# Output
A `SymEngine.Basic` object with the derived expression.
"""
function dnx(p::AnalyticAIMProblem{N,T}, n::N, f::Function, v::Function) where {N <: Unsigned, T <: Number}
return convert(SymEngine.Basic, dnx(n, f(p), v(p)))
end
"""
computePolynomialFactors(p::QuadraticEigenvalueProblem{N,T}, n::N, f::Function) where {N <: Unsigned, T <: Number}
Create a second order `Polynomial` object in the variable `ω` by computing derivatives of `λ0` or `S0`.
# Input
- `p::QuadraticEigenvalueProblem`: The problem data with the expressions to compute the derivative.
- `n::Unsigned`: The order of the derivative.
- `f::Function`: the function to extract the polynomial from. Either λ0 or S0.
# Output
An object of type `Polynomial{T}` containing the polynomial resulting from the derivation of the expression.
"""
function computePolynomialFactors(p::QuadraticEigenvalueProblem{N,T}, n::N, f::Function) where {N <: Unsigned, T <: Number}
SYB = SymEngine.Basic
# Compute the derivative at x0 and expand it to obtain a polynomial in ω
step1::SYB = convert(SYB, subs(dnx(p, n, f, get_ODEvar), get_ODEvar(p) => get_x0(p)))
step2::SYB = convert(SYB, expand(step1))
# Extract the factors of the polynomial in ω
ω = get_ODEeigen(p)
p0::T = convert(T, SymEngine.N(coeff(step2, ω, Basic(0))))
p1::T = convert(T, SymEngine.N(coeff(step2, ω, Basic(1))))
p2::T = convert(T, SymEngine.N(coeff(step2, ω, Basic(2))))
poly = Polynomial{T}([p0, p1, p2])
return poly
end
"""
createPoly(p::QuadraticEigenvalueProblem{N,T}, n::N, f::Function)
Compute the n-th coefficient of the Taylor expansion around x0 for the functions `λ0` or `S0`
# Input
- `p::QuadraticEigenvalueProblem`: The problem data with the expressions to compute the derivative.
- `n::Unsigned`: The order of the derivative.
- `f::Function`: the function to extract the polynomial from. Either λ0 or S0.
# Output
An object of type `Polynomial{T}` containing the polynomial Taylor coefficient.
"""
function createPoly(p::QuadraticEigenvalueProblem{N,T}, n::N, f::Function) where {N <: Unsigned, T <: Number}
computePolynomialFactors(p, n, f)/factorial(n)
end
function createPoly(p::QuadraticEigenvalueProblem{N,T}, n::N, f::Function) where {N <: Unsigned, T <: BigFloat}
computePolynomialFactors(p, n, f)/factorial(big(n))
end
function createPoly(p::QuadraticEigenvalueProblem{N,T}, n::N, f::Function) where {N <: Unsigned, T <: Complex{BigFloat}}
computePolynomialFactors(p, n, f)/factorial(big(n))
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 2502 | """
computeDelta!(m::AIMSteppingMethod, p::QuadraticEigenvalueProblem{N,T}, c::AIMCache{N,Polynomial{T}}) where {N <: Unsigned, T <: Number}
Compute and return the AIM "quantization condition".
# Input
- `m::AIMSteppingMethod`: The stepping method to use.
- `p::QuadraticEigenvalueProblem`: A quadratic frequency problem.
- `c::AIMCache`: An AIM cache object created from p.
# Output
An object of type `Polynomial{T}` whose roots are the problem's eigenvalues.
"""
function computeDelta!(m::AIMSteppingMethod, p::QuadraticEigenvalueProblem{N,T}, c::AIMCache{N,Polynomial{T}}) where {N <: Unsigned, T <: Number}
if (get_niter(p) + one(N)) != c.size
error("The provided cache cannot hold data for $(get_niter(p)) AIM iterations.
Make sure that the passed AIMCache corresponds to the passed AIM problem")
end
# Initialize the current data arrays with the initial data for the first step
copy!(c.ccda, c.icda)
copy!(c.cdda, c.idda)
# Perform the aim steps
for i in one(N):get_niter(p)
AIMStep!(m, p, c)
end
# Compute and return the AIM "quantization condition"
return c.cdda[1]*c.pcda[1] - c.pdda[1]*c.ccda[1]
end
"""
computeDelta!(m::AIMSteppingMethod, p::NumericAIMProblem{N,T}, c::AIMCache{N,T}, ω::T) where {N <: Unsigned, T <: Number}
Compute and return the AIM "quantization condition".
# Input
- `m::AIMSteppingMethod`: The stepping method to use.
- `p::NumericAIMProblem`: A numeric eigenvalue problem.
- `c::AIMCache`: An AIM cache object created from p.
- `ω::T`: Point to evaluate the quantization condition.
# Output
An object of type `T` which represents the AIM quantization condition at point ω.
"""
function computeDelta!(m::AIMSteppingMethod, p::NumericAIMProblem{N,T}, c::AIMCache{N,T}, ω::T) where {N <: Unsigned, T <: Number}
if (get_niter(p) + one(N)) != c.size
error("The provided cache cannot hold data for $(get_niter(p)) AIM iterations.
Make sure that the passed AIMCache corresponds to the passed AIM problem")
end
# Fill initial arrays with data corresponding to the point ω
recomputeInitials!(p,c,ω)
# Initialize the current data arrays with the initial data for the first step
copy!(c.ccda, c.icda)
copy!(c.cdda, c.idda)
# Perform the aim steps
for i in one(N):get_niter(p)
AIMStep!(m, p, c)
end
# Compute and return the AIM "quantization condition" at point ω
return c.cdda[1]*c.pcda[1] - c.pdda[1]*c.ccda[1]
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 4980 | """
computeEigenvalues(
m::AIMSteppingMethod,
p::QuadraticEigenvalueProblem{N,T},
c::AIMCache{N,Polynomial{T}};
plr_polish::Bool = true,
plr_epsilon::Real = convert(T, 1.0e-10)
) where {N <: Unsigned, T <: Number}
Compute the eigenvalues for the problem `p` with corresponding cache `c`.
# Input
- `m::AIMSteppingMethod`: The stepping method to use.
- `p::QuadraticEigenvalueProblem`: The previously defined problem data.
- `c::AIMCache`: The cache constructed from p.
- `plr_polish::Bool`: Tell PolynomialRoots to divide the original polynomial by each root found and polish the results using the full polynomial.
- `plr_epsilon::Real`: The stopping criterion described in Skowron & Gould paper. This is not the precision with which the roots will be calculated.
# Output
An object of type `Array{T,1}` containing the computed eigenvalues.
"""
function computeEigenvalues(
m::AIMSteppingMethod,
p::QuadraticEigenvalueProblem{N,T},
c::AIMCache{N,Polynomial{T}};
plr_polish::Bool = true,
plr_epsilon::Real = 1.0e-10
) where {N<:Unsigned,T<:Number}
# Compute the AIM "quantization condition"
δ = computeDelta!(m, p, c)
# Solve the quantization condition to obtain the eigenvalues
eigenvalues = PolynomialRoots.roots(coeffs(δ), polish = plr_polish, epsilon = plr_epsilon)
if isempty(eigenvalues)
@warn "The computed mode array is empty. This means that no roots of the polynomial equation in ω were found."
end
return eigenvalues
end
"""
computeEigenvalues(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
guess::T;
nls_xtol::Real = convert(T, 1.0e-10),
nls_ftol::Real = convert(T, 1.0e-10),
nls_iterations::Int = 1000
) where {N <: Unsigned, T <: Complex}
Compute a single eigenvalue for the problem `p` with corresponding cache `c`.
# Input
- `m::AIMSteppingMethod`: The stepping method to use.
- `p::NumericAIMProblem`: The previously defined problem data.
- `c::AIMCache`: The cache constructed from p.
- `guess::T`: The initial guess for the eigenvalue.
- `nls_xtol::Real`: Norm difference in x between two successive iterates under which convergence is declared.
- `nls_ftol::Real`: Infinite norm of residuals under which convergence is declared.
- `nls_iterations::Int`: Maximum number of iterations performed by NLsolve.
# Output
An object of type `SolverResults` returned by `nlsolve`. See [NLsolve.jl](https://github.com/JuliaNLSolvers/NLsolve.jl) for further details.
"""
function computeEigenvalues(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
guess::T;
nls_xtol::Real = 1.0e-10,
nls_ftol::Real = 1.0e-10,
nls_iterations::Int = 1000
) where {N<:Unsigned,T<:Complex}
# This function is passed to NLsolve to find the roots of δ
function f!(F, x)
y = computeDelta!(m, p, c, Complex(x[1], x[2]))
F[1] = real(y)
F[2] = imag(y)
end
# compute the roots of δ using NLsolve
return nlsolve(f!, [real(guess), imag(guess)], ftol = nls_ftol, xtol = nls_xtol, iterations = nls_iterations)
end
"""
computeEigenvalues(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
guess::T;
roots_atol::Real = 1.0e-10,
roots_rtol::Real = 1.0e-10,
roots_xatol::Real = 1.0e-10,
roots_xrtol::Real = 1.0e-10,
roots_maxevals::Int = 100
) where {N <: Unsigned, T <: Real}
Compute a single eigenvalue for the problem `p` with corresponding cache `c`.
For details on convergence settings see [Roots.jl](https://juliahub.com/docs/Roots/o0Xsi/1.0.7/reference/#Convergence).
# Input
- `m::AIMSteppingMethod`: The stepping method to use.
- `p::NumericAIMProblem{N,T}`: The previously defined problem data.
- `c::AIMCache{N,T}`: The cache constructed from p.
- `guess::T`: The initial guess for the eigenvalue.
- `roots_atol::Real`: Absolute tolerance.
- `roots_rtol::Real`: Relative tolerance.
- `roots_xatol::Real`: Floating point comparison absolute tolerance.
- `roots_xrtol::Real`: Floating point comparison relative tolerance.
- `roots_maxevals::Int`: Number of algorithm iterations performed.
# Output
An object of type T containing the found eigenvalue.
"""
function computeEigenvalues(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
guess::T;
roots_atol::Real = 1.0e-10,
roots_rtol::Real = 1.0e-10,
roots_xatol::Real = 1.0e-10,
roots_xrtol::Real = 1.0e-10,
roots_maxevals::Int = 100
) where {N<:Unsigned,T<:Real}
try
find_zero(
x -> computeDelta!(m, p, c, x),
guess,
atol = roots_atol,
rtol = roots_rtol,
xatol = roots_xatol,
xrtol = roots_xrtol,
maxevals = roots_maxevals
)
catch
println("find_zeros was unable to converge to an eigenvalue")
end
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 4789 | """
eigenvaluesInGrid(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
grid::Tuple{T,T,Int64,Int64};
xtol::Real = 1.0e-10,
ftol::Real = 1.0e-10,
iterations::Int = 1000
) where {N <: Unsigned, T <: Complex}
Attempts to find eigenvalues using a grid of complex plane data points as initial guesses passed to nlsolve.
# Input
- `m::AIMSteppingMethod`: The stepping method to use.
- `p::NumericAIMProblem{N,T}`: The previously defined problem data.
- `c::AIMCache{N,T}`: The cache constructed from p.
- `grid::Tuple{T,T,Int64,Int64}`: A tuple consisting of (start point, end point, num. of real pts., num. of imag. pts.).
- `xtol::Real`: Norm difference in x between two successive iterates under which convergence is declared.
- `ftol::Real`: Infinite norm of residuals under which convergence is declared.
- `iterations::Int`: Maximum number of iterations performed by NLsolve.
# Output
An object of type `Array{T,1}` containing the modes found within the grid.
"""
function eigenvaluesInGrid(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
grid::Tuple{T,T,Int64,Int64};
xtol::Real=1.0e-10,
ftol::Real=1.0e-10,
iterations::Int=1000
) where {N <: Unsigned,T <: Complex}
if grid[3] < 2 || grid[4] < 2
    error("The grid must contain at least two points in each direction")
end
# Extract grid quantities
re_start = real(grid[1])
re_end = real(grid[2])
re_size = grid[3]
re_step = (re_end - re_start) / (re_size - 1)
re_range = re_start:re_step:re_end
im_start = imag(grid[1])
im_end = imag(grid[2])
im_size = grid[4]
im_step = (im_end - im_start) / (im_size - 1)
im_range = im_start:im_step:im_end
# Check that the grid is valid
if (re_end <= re_start || re_step < 0 ) || (im_end <= im_start || im_step < 0 )
error("The grid specified must be tuple in the form (start, end, real points, imag. points) where end > start.")
end
# Build the (initially empty) output array that will hold the eigenvalues
eigenvalues::Array{T,1} = []
for realPart in re_range
for imagPart in im_range
# Compute the solution, if any
sol = computeEigenvalues(
m,
p,
c,
T(realPart, imagPart),
nls_xtol=xtol,
nls_ftol=ftol,
nls_iterations=iterations
)
# If nlsolve converged, store the result; otherwise discard it
if sol.f_converged || sol.x_converged
push!(eigenvalues, T(sol.zero[1], sol.zero[2]))
end
end
end
return eigenvalues
end
"""
eigenvaluesInGrid(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
grid::Tuple{T,T};
roots_atol::Real = 1.0e-10,
roots_rtol::Real = 1.0e-10,
roots_xatol::Real = 1.0e-10,
roots_xrtol::Real = 1.0e-10,
roots_maxevals::Int = 100
) where {N <: Unsigned, T <: Real}
Attempts to find eigenvalues using a range of real data points as a search region to find_zeros.
For details on convergence settings see [Roots.jl](https://juliahub.com/docs/Roots/o0Xsi/1.0.7/reference/#Convergence).
# Input
- `m::AIMSteppingMethod`: The stepping method to use.
- `p::NumericAIMProblem{N,T}`: The previously defined problem data.
- `c::AIMCache{N,T}`: The cache constructed from p.
- `grid::Tuple{T,T}`: A tuple consisting of (start point, end point).
- `roots_atol::Real`: Absolute tolerance.
- `roots_rtol::Real`: Relative tolerance.
- `roots_xatol::Real`: Floating point comparison absolute tolerance.
- `roots_xrtol::Real`: Floating point comparison relative tolerance.
- `roots_maxevals::Int`: Number of algorithm iterations performed.
# Output
An object of type `Array{T,1}` containing the eigenvalues found within the grid.
"""
function eigenvaluesInGrid(
m::AIMSteppingMethod,
p::NumericAIMProblem{N,T},
c::AIMCache{N,T},
grid::Tuple{T,T};
roots_atol::Real=1.0e-10,
roots_rtol::Real=1.0e-10,
roots_xatol::Real=1.0e-10,
roots_xrtol::Real=1.0e-10,
roots_maxevals::Int=100,
) where {N <: Unsigned,T <: Real}
if grid[1] > grid[2]
error("The interval for searching eigenvalues must be ordered.")
end
try
find_zeros(
x -> computeDelta!(m, p, c, x),
grid[1],
grid[2],
atol=roots_atol,
rtol=roots_rtol,
xatol=roots_xatol,
xrtol=roots_xrtol,
maxevals=roots_maxevals
)
catch
println("find_zeros was unable to converge to an eigenvalue")
end
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 700 | # ------------------------------------------------------------------
# Compute the code coverage locally.
# See https://github.com/JuliaCI/Coverage.jl
# ------------------------------------------------------------------
using Coverage
coverage = process_folder()
coverage = merge_coverage_counts(coverage, filter!(
let prefixes = (joinpath(pwd(), "src", ""),) # a collection of path prefixes, as expected by the filter below
c -> any(p -> startswith(c.filename, p), prefixes)
end,
LCOV.readfolder("test")))
covered_lines, total_lines = get_summary(coverage)
println("Total lines: ", total_lines)
println("Covered lines: ", covered_lines)
println("Missing lines: ", total_lines - covered_lines)
println("Code convergence: ", covered_lines / total_lines)
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 14951 | using QuasinormalModes
using SymEngine
using Test
# ------------------------------------------------------------------
# 1. Analytic harmonic oscillator
# ------------------------------------------------------------------
struct HarmonicOscilatorData{N,T} <: QuadraticEigenvalueProblem{N,T}
nIter::N
x0::T
vars::Tuple{Basic,Basic}
exprs::Tuple{Basic,Basic}
end
function HarmonicOscilatorData(nIter::N, x0::T) where {N,T}
vars = @vars x ω
λ0 = 2 * x
S0 = 1 - ω
return HarmonicOscilatorData{N,T}(nIter, x0, vars, (λ0, S0))
end
QuasinormalModes.λ0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.exprs[1]
QuasinormalModes.S0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.exprs[2]
QuasinormalModes.get_niter(d::HarmonicOscilatorData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.x0
QuasinormalModes.get_ODEvar(d::HarmonicOscilatorData{N,T}) where {N,T} = d.vars[1]
QuasinormalModes.get_ODEeigen(d::HarmonicOscilatorData{N,T}) where {N,T} = d.vars[2]
# ------------------------------------------------------------------
# 2. Numeric harmonic oscillator
# ------------------------------------------------------------------
struct NHarmonicOscilatorData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
end
function NHarmonicOscilatorData(nIter::N, x0::T) where {N,T}
return NHarmonicOscilatorData{N,T}(nIter, x0)
end
QuasinormalModes.λ0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x, ω) -> 2 * x
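# The `+ x - x` term below is presumably there to keep the expression formally x-dependent,
# so that the Taylor expansion performed in recomputeInitials! returns a full-length
# coefficient array (an assumption about the intent; it is not documented in the package).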
QuasinormalModes.S0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x, ω) -> 1 - ω + x - x
QuasinormalModes.get_niter(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.x0
# ------------------------------------------------------------------
# 3. Serial Numeric Tests
# ------------------------------------------------------------------
@testset "Serial Numeric correctness - 10 iterations" begin
p = NHarmonicOscilatorData(0x0000A, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Serial(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Serial(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Serial Numeric correctness - 20 iterations" begin
p = NHarmonicOscilatorData(0x00014, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Serial(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Serial(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Serial Numeric correctness - 50 iterations" begin
p = NHarmonicOscilatorData(0x00032, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Serial(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Serial(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Serial Numeric correctness - 100 iterations" begin
p = NHarmonicOscilatorData(0x00064, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Serial(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Serial(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Serial Numeric correctness - 25 iterations with BigFloat" begin
p = NHarmonicOscilatorData(0x00019, BigFloat("0.5"))
c = AIMCache(p)
ev = eigenvaluesInGrid(Serial(), p, c, (BigFloat("0.0"), BigFloat("20.0")))
ce = [computeEigenvalues(Serial(), p, c, guess) for guess in BigFloat("1.0"):BigFloat("2.0"):BigFloat("19.0")]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
# ------------------------------------------------------------------
# 4. Serial Analytic Tests
# ------------------------------------------------------------------
@testset "Serial Analytic correctness - 10 iterations" begin
p = HarmonicOscilatorData(0x0000A, 0.5)
c = AIMCache(p)
ev = computeEigenvalues(Serial(), p, c)
@test length(ev) >= 10
@test round(real(ev[end])) == 1
@test round(real(ev[end-1])) == 3
@test round(real(ev[end-2])) == 5
@test round(real(ev[end-3])) == 7
@test round(real(ev[end-4])) == 9
@test round(real(ev[end-5])) == 11
@test round(real(ev[end-6])) == 13
@test round(real(ev[end-7])) == 15
@test round(real(ev[end-8])) == 17
@test round(real(ev[end-9])) == 19
end
@testset "Serial Analytic correctness - 20 iterations" begin
p = HarmonicOscilatorData(0x00014, 0.5)
c = AIMCache(p)
ev = computeEigenvalues(Serial(), p, c)
@test length(ev) >= 10
@test round(real(ev[end])) == 1
@test round(real(ev[end-1])) == 3
@test round(real(ev[end-2])) == 5
@test round(real(ev[end-3])) == 7
@test round(real(ev[end-4])) == 9
@test round(real(ev[end-5])) == 11
@test round(real(ev[end-6])) == 13
@test round(real(ev[end-7])) == 15
@test round(real(ev[end-8])) == 17
@test round(real(ev[end-9])) == 19
end
@testset "Serial Analytic correctness - 25 iterations with BigFloat" begin
p = HarmonicOscilatorData(0x00019, BigFloat("0.5"))
c = AIMCache(p)
ev = computeEigenvalues(Serial(), p, c)
@test length(ev) >= 10
@test round(real(ev[end])) == 1
@test round(real(ev[end-1])) == 3
@test round(real(ev[end-2])) == 5
@test round(real(ev[end-3])) == 7
@test round(real(ev[end-4])) == 9
@test round(real(ev[end-5])) == 11
@test round(real(ev[end-6])) == 13
@test round(real(ev[end-7])) == 15
end
# ------------------------------------------------------------------
# 5. Threaded Numeric Tests
# ------------------------------------------------------------------
@testset "Threaded Numeric correctness - 10 iterations" begin
p = NHarmonicOscilatorData(0x0000A, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Threaded(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Threaded(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Threaded Numeric correctness - 20 iterations" begin
p = NHarmonicOscilatorData(0x00014, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Threaded(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Threaded(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Threaded Numeric correctness - 50 iterations" begin
p = NHarmonicOscilatorData(0x00032, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Threaded(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Threaded(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Threaded Numeric correctness - 100 iterations" begin
p = NHarmonicOscilatorData(0x00064, 0.5)
c = AIMCache(p)
ev = eigenvaluesInGrid(Threaded(), p, c, (0.0, 20.0))
ce = [computeEigenvalues(Threaded(), p, c, guess) for guess in 1.0:2.0:19.0]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
@testset "Threaded Numeric correctness - 25 iterations with BigFloat" begin
p = NHarmonicOscilatorData(0x00019, BigFloat("0.5"))
c = AIMCache(p)
ev = eigenvaluesInGrid(Threaded(), p, c, (big"0.0", big"20.0"))
ce = [computeEigenvalues(Threaded(), p, c, guess) for guess in BigFloat("1.0"):BigFloat("2.0"):BigFloat("19.0")]
@test length(ev) == 10
@test round(ev[1]) == 1
@test round(ev[2]) == 3
@test round(ev[3]) == 5
@test round(ev[4]) == 7
@test round(ev[5]) == 9
@test round(ev[6]) == 11
@test round(ev[7]) == 13
@test round(ev[8]) == 15
@test round(ev[9]) == 17
@test round(ev[10]) == 19
@test length(ce) == 10
@test round(ce[1]) == 1
@test round(ce[2]) == 3
@test round(ce[3]) == 5
@test round(ce[4]) == 7
@test round(ce[5]) == 9
@test round(ce[6]) == 11
@test round(ce[7]) == 13
@test round(ce[8]) == 15
@test round(ce[9]) == 17
@test round(ce[10]) == 19
end
# ------------------------------------------------------------------
# 6. Threaded Analytic Tests
# ------------------------------------------------------------------
@testset "Threaded Analytic correctness - 10 iterations" begin
p = HarmonicOscilatorData(0x0000A, 0.5)
c = AIMCache(p)
ev = computeEigenvalues(Threaded(), p, c)
@test length(ev) >= 10
@test round(real(ev[end])) == 1
@test round(real(ev[end-1])) == 3
@test round(real(ev[end-2])) == 5
@test round(real(ev[end-3])) == 7
@test round(real(ev[end-4])) == 9
@test round(real(ev[end-5])) == 11
@test round(real(ev[end-6])) == 13
@test round(real(ev[end-7])) == 15
@test round(real(ev[end-8])) == 17
@test round(real(ev[end-9])) == 19
end
@testset "Threaded Analytic correctness - 20 iterations" begin
p = HarmonicOscilatorData(0x00014, 0.5)
c = AIMCache(p)
ev = computeEigenvalues(Threaded(), p, c)
@test length(ev) >= 10
@test round(real(ev[end])) == 1
@test round(real(ev[end-1])) == 3
@test round(real(ev[end-2])) == 5
@test round(real(ev[end-3])) == 7
@test round(real(ev[end-4])) == 9
@test round(real(ev[end-5])) == 11
@test round(real(ev[end-6])) == 13
@test round(real(ev[end-7])) == 15
@test round(real(ev[end-8])) == 17
@test round(real(ev[end-9])) == 19
end
@testset "Threaded Analytic correctness - 25 iterations with BigFloat" begin
p = HarmonicOscilatorData(0x00019, BigFloat("0.5"))
c = AIMCache(p)
ev = computeEigenvalues(Threaded(), p, c)
@test length(ev) >= 10
@test round(real(ev[end])) == 1
@test round(real(ev[end-1])) == 3
@test round(real(ev[end-2])) == 5
@test round(real(ev[end-3])) == 7
@test round(real(ev[end-4])) == 9
@test round(real(ev[end-5])) == 11
@test round(real(ev[end-6])) == 13
@test round(real(ev[end-7])) == 15
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 268 | using SafeTestsets
@time begin
@time @safetestset "Quantum Harmonic oscillator tests" begin
include("harmonic_oscillator.jl")
end
@time @safetestset "Schwarzschild quasinormal modes tests" begin
include("schwarzschild_qnm.jl")
end
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | code | 15429 | using QuasinormalModes
using SymEngine
using Test
# ------------------------------------------------------------------
# 1. Analytic Schwarzschild Black Hole
# ------------------------------------------------------------------
struct SchwarzschildData{N,T} <: QuadraticEigenvalueProblem{N,T}
nIter::N
x0::T
vars::Tuple{Basic,Basic}
exprs::Tuple{Basic,Basic}
end
function SchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
vars = @vars x ω
λ0 = (-1 + (2 * im) * ω + x * (4 - 3 * x + (4 * im) * (-2 + x) * ω)) / ((-1 + x)^2 * x)
S0 = (l + l^2 + (-1 + s^2) * (-1 + x) + (4 * im) * (-1 + x) * ω + 4 * (-2 + x) * ω^2) / ((-1 + x)^2 * x)
return SchwarzschildData{N,T}(nIter, x0, vars, (λ0, S0))
end
QuasinormalModes.λ0(d::SchwarzschildData{N,T}) where {N,T} = d.exprs[1]
QuasinormalModes.S0(d::SchwarzschildData{N,T}) where {N,T} = d.exprs[2]
QuasinormalModes.get_niter(d::SchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::SchwarzschildData{N,T}) where {N,T} = d.x0
QuasinormalModes.get_ODEvar(d::SchwarzschildData{N,T}) where {N,T} = d.vars[1]
QuasinormalModes.get_ODEeigen(d::SchwarzschildData{N,T}) where {N,T} = d.vars[2]
# ------------------------------------------------------------------
# 2. Numeric Schwarzschild Black Hole
# ------------------------------------------------------------------
struct NSchwarzschildData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
l::N
s::N
end
function NSchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
return NSchwarzschildData{N,T}(nIter, x0, l, s)
end
QuasinormalModes.λ0(::NSchwarzschildData{N,T}) where {N,T} = (x, ω) -> (-1 + (2 * im) * ω + x * (4 - 3 * x + (4 * im) * (-2 + x) * ω)) / ((-1 + x)^2 * x)
QuasinormalModes.S0(d::NSchwarzschildData{N,T}) where {N,T} = (x, ω) -> (d.l + d.l^2 + (-1 + d.s^2) * (-1 + x) + (4 * im) * (-1 + x) * ω + 4 * (-2 + x) * ω^2) / ((-1 + x)^2 * x)
QuasinormalModes.get_niter(d::NSchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NSchwarzschildData{N,T}) where {N,T} = d.x0
# ------------------------------------------------------------------
# 3. Numeric Tests
# ------------------------------------------------------------------
@testset "Serial Numeric correctness - BigFloat, 100 iterations, s = 0, l = 0" begin
p = NSchwarzschildData(0x00064, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00000, 0x00000);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(BigFloat("0.22"), BigFloat("-0.20")),
nls_xtol=BigFloat("1.0e-50"),
nls_ftol=BigFloat("1.0e-50")
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - BigFloat("0.2209098781608393")) < BigFloat("1.0e-7")
@test abs(ev.zero[2] + BigFloat("0.2097914341737619")) < BigFloat("1.0e-7")
end
@testset "Serial Numeric correctness - BigFloat, 100 iterations, s = 1, l = 1" begin
p = NSchwarzschildData(0x00064, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00001, 0x00001);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(BigFloat("0.49"), BigFloat("-0.18")),
nls_xtol=BigFloat("1.0e-50"),
nls_ftol=BigFloat("1.0e-50")
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - BigFloat("0.4965265283562174")) < BigFloat("1.0e-7")
@test abs(ev.zero[2] + BigFloat("0.1849754359058844")) < BigFloat("1.0e-7")
end
@testset "Serial Numeric correctness - BigFloat, 100 iterations, s = 1, l = 1" begin
p = NSchwarzschildData(0x00064, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00002, 0x00002);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(BigFloat("0.74"), BigFloat("-0.17")),
nls_xtol=BigFloat("1.0e-50"),
nls_ftol=BigFloat("1.0e-50")
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - BigFloat("0.7473433688360838")) < BigFloat("1.0e-7")
@test abs(ev.zero[2] + BigFloat("0.1779246313778714")) < BigFloat("1.0e-7")
end
@testset "Serial Numeric correctness - Float64, 48 iterations, s = 0, l = 0" begin
p = NSchwarzschildData(0x00030, Complex(0.43, 0.0), 0x00000, 0x00000);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.22, -0.20),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.2209098781608393) < 1.0e-4
@test abs(ev.zero[2] + 0.2097914341737619) < 1.0e-4
# --- n = 1 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.17, -0.69),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.1722338366727985) < 1.0e-3
@test abs(ev.zero[2] + 0.6961048936129209) < 1.0e-3
# --- n = 2 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.15, -1.2),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.1514838710703517) < 1.0e-2
@test abs(ev.zero[2] + 0.1202157180071607e1) < 1.0e-2
end
@testset "Serial Numeric correctness - Float64, 47 iterations, s = 1, l = 1" begin
p = NSchwarzschildData(0x0002F, Complex(0.43, 0.0), 0x00001, 0x00001);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.49, -0.18),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.4965265283562174) < 1.0e-9
@test abs(ev.zero[2] + 0.1849754359058844) < 1.0e-9
# --- n = 1 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.42, -0.58),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.4290308391272117) < 1.0e-6
@test abs(ev.zero[2] + 0.5873352910914573) < 1.0e-6
# --- n = 2 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.34, -1.05),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.3495471352140215) < 1.0e-4
@test abs(ev.zero[2] + 0.1050375198717648e1) < 1.0e-4
end
@testset "Serial Numeric correctness - Float64, 46 iterations, s = 2, l = 2" begin
p = NSchwarzschildData(0x0002D, Complex(0.43, 0.0), 0x00002, 0x00002);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.74, -0.17),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.7473433688360838) < 1.0e-9
@test abs(ev.zero[2] + 0.1779246313778714) < 1.0e-9
# --- n = 1 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.69, -0.54),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.6934219937583268) < 1.0e-8
@test abs(ev.zero[2] + 0.5478297505824697) < 1.0e-8
# --- n = 2 ---
ev = computeEigenvalues(
Serial(),
p,
c,
Complex(0.60, -0.95),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.6021069092247328) < 1.0e-6
@test abs(ev.zero[2] + 0.9565539664461437) < 1.0e-6
end
# ------------------------------------------------------------------
# 4. Threaded Numeric Tests
# ------------------------------------------------------------------
@testset "Threaded Numeric correctness - BigFloat, 100 iterations, s = 0, l = 0" begin
p = NSchwarzschildData(0x00064, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00000, 0x00000);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(BigFloat("0.22"), BigFloat("-0.20")),
nls_xtol=BigFloat("1.0e-50"),
nls_ftol=BigFloat("1.0e-50")
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - BigFloat("0.2209098781608393")) < BigFloat("1.0e-7")
@test abs(ev.zero[2] + BigFloat("0.2097914341737619")) < BigFloat("1.0e-7")
end
@testset "Threaded Numeric correctness - BigFloat, 100 iterations, s = 1, l = 1" begin
p = NSchwarzschildData(0x00064, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00001, 0x00001);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(BigFloat("0.49"), BigFloat("-0.18")),
nls_xtol=BigFloat("1.0e-50"),
nls_ftol=BigFloat("1.0e-50")
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - BigFloat("0.4965265283562174")) < BigFloat("1.0e-7")
@test abs(ev.zero[2] + BigFloat("0.1849754359058844")) < BigFloat("1.0e-7")
end
@testset "Threaded Numeric correctness - BigFloat, 100 iterations, s = 1, l = 1" begin
p = NSchwarzschildData(0x00064, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00002, 0x00002);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(BigFloat("0.74"), BigFloat("-0.17")),
nls_xtol=BigFloat("1.0e-50"),
nls_ftol=BigFloat("1.0e-50")
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - BigFloat("0.7473433688360838")) < BigFloat("1.0e-7")
@test abs(ev.zero[2] + BigFloat("0.1779246313778714")) < BigFloat("1.0e-7")
end
@testset "Threaded Numeric correctness - Float64, 48 iterations, s = 0, l = 0" begin
p = NSchwarzschildData(0x00030, Complex(0.43, 0.0), 0x00000, 0x00000);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.22, -0.20),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.2209098781608393) < 1.0e-4
@test abs(ev.zero[2] + 0.2097914341737619) < 1.0e-4
# --- n = 1 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.17, -0.69),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.1722338366727985) < 1.0e-3
@test abs(ev.zero[2] + 0.6961048936129209) < 1.0e-3
# --- n = 2 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.15, -1.2),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.1514838710703517) < 1.0e-2
@test abs(ev.zero[2] + 0.1202157180071607e1) < 1.0e-2
end
@testset "Threaded Numeric correctness - Float64, 47 iterations, s = 1, l = 1" begin
p = NSchwarzschildData(0x0002F, Complex(0.43, 0.0), 0x00001, 0x00001);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.49, -0.18),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.4965265283562174) < 1.0e-9
@test abs(ev.zero[2] + 0.1849754359058844) < 1.0e-9
# --- n = 1 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.42, -0.58),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.4290308391272117) < 1.0e-6
@test abs(ev.zero[2] + 0.5873352910914573) < 1.0e-6
# --- n = 2 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.34, -1.05),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.3495471352140215) < 1.0e-4
@test abs(ev.zero[2] + 0.1050375198717648e1) < 1.0e-4
end
@testset "Threaded Numeric correctness - Float64, 46 iterations, s = 2, l = 2" begin
p = NSchwarzschildData(0x0002D, Complex(0.43, 0.0), 0x00002, 0x00002);
c = AIMCache(p)
# --- n = 0 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.74, -0.17),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.7473433688360838) < 1.0e-9
@test abs(ev.zero[2] + 0.1779246313778714) < 1.0e-9
# --- n = 1 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.69, -0.54),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.6934219937583268) < 1.0e-8
@test abs(ev.zero[2] + 0.5478297505824697) < 1.0e-8
# --- n = 2 ---
ev = computeEigenvalues(
Threaded(),
p,
c,
Complex(0.60, -0.95),
nls_xtol=1.0e-10,
nls_ftol=1.0e-10
)
@test (ev.x_converged || ev.f_converged) == true
@test abs(ev.zero[1] - 0.6021069092247328) < 1.0e-6
@test abs(ev.zero[2] + 0.9565539664461437) < 1.0e-6
end
# ------------------------------------------------------------------
# 5. Grid search tests
# ------------------------------------------------------------------
@testset "Serial grid search - Float64, 46 iterations, s = 2, l = 2" begin
p = NSchwarzschildData(0x0002D, Complex(0.43, 0.0), 0x00002, 0x00002);
c = AIMCache(p)
ev = eigenvaluesInGrid(Serial(), p, c, (Complex(0.60, -0.95), Complex(0.74, -0.17), 3, 3))
cutoff = 1.0e-5
filter!(x -> real(x) > cutoff && imag(x) < 0.0, ev)
@test abs(real(ev[1]) - 0.6021069092247328) < 1.0e-6 && abs(imag(ev[1]) + 0.9565539664461437) < 1.0e-6
@test abs(real(ev[5]) - 0.6934219937583268) < 1.0e-6 && abs(imag(ev[5]) + 0.5478297505824697) < 1.0e-6
@test abs(real(ev[6]) - 0.7473433688360838) < 1.0e-6 && abs(imag(ev[6]) + 0.1779246313778714) < 1.0e-6
end
@testset "Serial grid search - BigFloat, 46 iterations, s = 2, l = 2" begin
p = NSchwarzschildData(0x0002D, Complex(big"0.43", big"0.0"), 0x00002, 0x00002);
c = AIMCache(p)
ev = eigenvaluesInGrid(Threaded(), p, c, (Complex(big"0.60", big"-0.95"), Complex(big"0.74", big"-0.17"), 3, 3))
cutoff = big"1.0e-5"
filter!(x -> real(x) > cutoff && imag(x) < big"0.0", ev)
@test abs(real(ev[1]) - big"0.6021069092247328") < 1.0e-6 && abs(imag(ev[1]) + big"0.9565539664461437") < 1.0e-6
@test abs(real(ev[5]) - big"0.6934219937583268") < 1.0e-6 && abs(imag(ev[5]) + big"0.5478297505824697") < 1.0e-6
@test abs(real(ev[6]) - big"0.7473433688360838") < 1.0e-6 && abs(imag(ev[6]) + big"0.1779246313778714") < 1.0e-6
end
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | docs | 3417 | # QuasinormalModes.jl

This is a [Julia](http://julialang.org) package whose primary objective is to compute the discrete eigenvalues of second order ordinary differential equations. It was written with the intent to be used for computing quasinormal modes (QNMs) of black holes in General Relativity efficiently and accurately. QNMs are the discrete spectrum of characteristic oscillations produced by black holes when perturbed. These oscillations decay exponentially in time and thus it's said that QNMs contain a real ``\omega_R`` oscillation frequency and an imaginary ``\omega_I`` frequency that represents the mode's decay rate. These frequencies are often described by a discrete eigenvalue in a second order ODE. For a comprehensive review see [[1]](https://arxiv.org/abs/0905.2975).
To compute eigenvalues (and thus quasinormal frequencies) this package uses the Asymptotic Iteration Method (AIM) [[2]](https://arxiv.org/abs/math-ph/0309066v1), more specifically the "improved" version of the AIM as described in [[3]](https://arxiv.org/abs/1111.5024). The AIM can be used to find the eigenvectors and eigenvalues of any second order differential equation (the class of problems to which the quasinormal modes belong), and thus this package can be used not only in the context of General Relativity but also to find the discrete eigenvalues of other systems, such as the eigenenergies of a quantum system described by the time-independent Schrödinger equation.
[](https://lucass-carneiro.github.io/QuasinormalModes.jl/stable)
[](https://lucass-carneiro.github.io/QuasinormalModes.jl/dev)

[](https://codecov.io/gh/lucass-carneiro/QuasinormalModes.jl)
[](https://zenodo.org/badge/latestdoi/316366566)
[](https://doi.org/10.21105/joss.04077)
# Author
[Lucas T. Sanches]([email protected]), Centro de Ciências Naturais e Humanas, Universidade Federal do ABC (UFABC).
# License
`QuasinormalModes` is licensed under the [MIT license](./LICENSE.md).
# Installation
This package can be installed using the Julia package manager. From the Julia REPL, type `]` to enter the Pkg REPL mode and run
```julia
pkg> add QuasinormalModes
```
and then type backspace to exit back to the REPL.
# Using
For detailed usage instructions please read the [documentation](https://lucass-carneiro.github.io/QuasinormalModes.jl/). You can also find examples [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/tree/master/examples).
# Contributing
There are many ways to contribute to this package:
- Report an issue if you encounter some odd behavior, or if you have suggestions to improve the package.
- Contribute with code addressing some open issues, that add new functionality or that improve the performance.
- When contributing with code, add docstrings and comments, so others may understand the methods implemented.
- Contribute by updating and improving the documentation.
# Benchmarks
This folder contains all the necessary resources to reproduce the benchmark and error convergence results presented in the JOSS paper:
1. `benchmark.jl`: The Julia script that produces the benchmark results. The benchmark consists of a user-configurable number of "runs". Each run computes a quasinormal frequency with a varying (and also configurable) number of AIM iterations. The function that performs the benchmark is called twice, so that Julia's compilation time is not erroneously included at the first iteration.
2. `perf.py`: This Python script takes as an input a number of runs and produces a plot of the average runtime versus the number of iterations from a set of benchmark result files.
3. `err.py`: This Python script produces a plot of the error of the computed frequencies versus the number of AIM iterations from a set of benchmark result files. Since the convergence rate does not change with the number of runs, it computes the plot using the data for the first run.
Additionally, it contains the raw data files with the benchmark results that were used in the production of the JOSS paper compressed inside the `runs.tar.gz` file, alongside the original `.pdf` images included in the paper.
# Hardware and commands
The included data files were produced using 16 threads on an Intel(R) Core(TM) i9-7900X @ 3.30GHz CPU with 256 bit precision floating point numbers. The command issued to produce the results was
```
julia -O3 --startup-file=no --threads=16 benchmark.jl
```
Once the data files were produced with the command above, the `perf.pdf` plot was created by issuing
```
python perf.py 20
```
and finally, the convergence rate plot was produced with
```
python err.py
```
# API Reference
Here we present the API reference for all functions and types within the module. The end user should only use the exported objects, but private objects are also documented for completeness.
## Public Modules
```@autodocs
Modules = [QuasinormalModes]
Private = false
Order = [:module]
```
## Public types
```@autodocs
Modules = [QuasinormalModes]
Private = false
Order = [:type]
```
## Public functions
```@autodocs
Modules = [QuasinormalModes]
Private = false
Order = [:function]
```
## Private types
```@autodocs
Modules = [QuasinormalModes]
Public = false
Order = [:type]
```
## Private functions
```@autodocs
Modules = [QuasinormalModes]
Public = false
Order = [:function]
```
# Table of Contents
```@contents
Pages = ["intro.md", "org.md", "schw.md", "sho.md", "api_ref.md"]
```
# Introduction
This package's primary objective is to compute the discrete eigenvalues of second order ordinary differential equations. It was written with the intent to be used for computing quasinormal modes (QNMs) of black holes in General Relativity efficiently and accurately. QNMs are the discrete spectrum of characteristic oscillations produced by black holes when perturbed. These oscillations decay exponentially in time and thus it's said that QNMs contain a real ``\omega_R`` oscillation frequency and an imaginary ``\omega_I`` frequency that represents the mode's decay rate. These frequencies are often described by a discrete eigenvalue in a second order ODE. For a comprehensive review see [[1]](https://arxiv.org/abs/0905.2975).
To compute eigenvalues (and thus quasinormal frequencies) this package uses the Asymptotic Iteration Method (AIM) [[2]](https://arxiv.org/abs/math-ph/0309066v1), more specifically the "improved" version of the AIM as described in [[3]](https://arxiv.org/abs/1111.5024). The AIM can be used to find the eigenvectors and eigenvalues of any second order differential equation (the class of problems to which quasinormal modes belong), and thus this package can be used not only in the context of General Relativity but also to find the discrete eigenvalues of other systems, such as the eigenenergies of a quantum system described by the time independent Schrödinger equation.
In the following sections you will find the QuasinormalModes.jl API and instructions on how to use it in a series of (hopefully sufficient) examples.
# Installing
This package can be installed using the Julia package manager. From the Julia REPL, type `]` to enter the Pkg REPL mode and run
```julia
pkg> add QuasinormalModes
```
and then type backspace to exit back to the REPL.
# Package organization
## A Brief description of the AIM
`QuasinormalModes.jl` is at its core an implementation of the Asymptotic Iteration Method. For a complete description of the general method the reader is encouraged to read [this paper](https://arxiv.org/abs/math-ph/0309066v1). Our implementation is based on the variation of the method described in [this paper](https://arxiv.org/abs/1111.5024). The method requires 3 basic steps:
1. Incorporate the asymptotic boundary conditions into the ODE.
2. Compactify the domain of the problem (if it isn't already compact).
3. Write the ODE in the form ``y^{\prime\prime}(x) = \lambda_0(x)y^{\prime}(x) + s_0(x)y(x)``
From the ODE coefficients ``\lambda_0(x)`` and ``s_0(x)`` the AIM computes the eigenvalues by requiring that the "quantization condition"
```math
\delta_n = s_n\lambda_{n-1} - s_{n-1}\lambda_{n} = 0
```
is satisfied, where
```math
\lambda_n = \lambda^\prime_{n-1} + s_{n-1} + \lambda_0 \lambda_{n-1}
```
and
```math
s_n = s^\prime_{n-1} + s_0 \lambda_{n-1}.
```
`QuasinormalModes.jl` expects as input the ``\lambda_0(x)`` and ``s_0(x)`` coefficients computed with the 3 steps described above. This documentation contains two practical examples of how to obtain and feed such coefficients to the package.
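To make the recursion concrete, the sketch below iterates it symbolically with `SymEngine`, using the harmonic-oscillator coefficients treated later in this documentation (``\lambda_0 = 2x``, ``s_0 = 1 - \omega``). It is purely illustrative: the package itself implements the improved, Taylor-expansion based variant of the AIM rather than this naive loop.
```julia
using SymEngine

# Naive AIM recursion, written only to illustrate the equations above.
function naive_aim_delta(λ0, s0, x, niter)
    λ, s, δ = λ0, s0, Basic(0)
    for _ in 1:niter
        λn = diff(λ, x) + s + λ0*λ   # λₙ = λ′ₙ₋₁ + sₙ₋₁ + λ₀λₙ₋₁
        sn = diff(s, x) + s0*λ       # sₙ = s′ₙ₋₁ + s₀λₙ₋₁
        δ  = expand(sn*λ - s*λn)     # δₙ = sₙλₙ₋₁ - sₙ₋₁λₙ
        λ, s = λn, sn
    end
    return δ
end

@vars x ω
δ = naive_aim_delta(2x, 1 - ω, x, 3)
subs(δ, x => 0.5)  # a polynomial in ω whose roots approximate the lowest eigenvalues
```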
## The type hierarchy
`QuasinormalModes.jl` employs two main strategies in order to find eigenvalues using the AIM: problems can be solved in a semi-analytic or purely numeric fashion. We make use of Julia's type system in order to implement structures that reflect these operation modes. All of the package's exported functionality is designed to operate on sub-types of abstract types that reflect the desired solution strategy (semi-analytic or numeric). The user is responsible for constructing concrete types that are sub-types of the exported abstract types with the actual problem specific information. It's thus useful to start by inspecting the package's exported type hierarchy:
```@raw html
<table border="0"><tr>
<td>
<figure>
<img src='../assets/types.svg' alt='missing'><br>
<figcaption><em>QuasinormalModes.jl type hierarchy</em></figcaption>
</figure>
</td>
</tr></table>
```
1. `AIMProblem` is the parent type of all problems that can be solved with this package. All problems must sub-type it, and a user can use it to construct functions that operate on all AIM-solvable problems.
2. `NumericAIMProblem` is the parent type of all problems that can be solved using a numeric approach.
3. `AnalyticAIMProblem` is the parent type of all problems that can be solved using a semi-analytic approach.
4. `QuadraticEigenvalueProblem` is a specific type of analytic problem whose eigenvalues appear in the ODE as a (possibly incomplete) quadratic polynomial.
All types are parameterized by two parameters: `N <: Unsigned` and `T <: Number` which represent respectively, the type used to represent the number of iterations the AIM will perform and the type used in the numeric computations of the method.
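As a small illustration of this design, a user-defined helper can dispatch on the abstract type and thus work for any problem defined in the examples that follow. The snippet assumes that `AIMProblem`, `get_niter` and `get_x0` are exported; if they are not in the version you use, qualify them with `QuasinormalModes.`.
```julia
using QuasinormalModes

# Hypothetical helper: a one-line summary valid for every AIM-solvable problem.
function problem_summary(p::AIMProblem{N,T}) where {N,T}
    "AIM problem: $(get_niter(p)) iterations (stored as $N), expanded at x0 = $(get_x0(p)) :: $T"
end
```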
## Type traits
Type traits are non-exported abstract types that help the user ensure that their sub-types implement the correct functions. Currently there is only one defined trait, called `AnalyticityTrait`. This trait can have two possible "values", `IsAnalytic` and `IsNumeric`, that are represented by concrete types. The default trait of an `AIMProblem` is `IsNumeric`, while any sub-type of `AnalyticAIMProblem` has the `IsAnalytic` trait and any sub-type of `NumericAIMProblem` has the `IsNumeric` trait.
With these traits, we enforce that the user implements, for all problem types, the following functions:
1. `λ0`: Return the λ0 component of the ODE. The actual implementation depends heavily on the problem type.
2. `S0`: Return the S0 component of the ODE. The actual implementation depends heavily on the problem type.
3. `get_niter`: Return the number of iterations that the AIM will perform.
4. `get_x0`: Return the expansion point of the AIM.
For problems with the `IsAnalytic` trait, the user must implement the following functions:
1. `get_ODEvar` which returns an object that represents the ODE's variable.
2. `get_ODEeigen` which returns an object that represents the ODE's eigenvalue.
Failure to implement these functions returns an error with the appropriate message. Note that these traits only check that such functions are implemented for a certain problem type and not that they follow a particular implementation pattern. The contract on the functions implementations is *soft* and will be clarified further on. Failure to abide by these soft contracts results in undefined behavior.
## Extending the default functionality
The following assumes that the package `SymEngine` is installed. If a problem type `P{N,T}` is a sub-type of `AnalyticAIMProblem{N,T}`, the user must extend the default implementations abiding by the following rules
1. `QuasinormalModes.λ0(p::P{N,T}) where {N,T}` must return a `SymEngine.Basic` object representing the symbolic expression for the `λ0` part of the ODE.
2. `QuasinormalModes.S0(p::P{N,T}) where {N,T}` must return a `SymEngine.Basic` object representing the symbolic expression for the `S0` part of the ODE.
3. `QuasinormalModes.get_ODEvar(p::P{N,T}) where {N,T}` must return a `SymEngine.Basic` object representing the `SymEngine` variable associated with the ODE's variable.
4. `QuasinormalModes.get_ODEeigen(p::P{N,T}) where {N,T}` must return a `SymEngine.Basic` object representing the `SymEngine` variable associated with the ODE's eigenvalue.
If a problem type `P{N,T}` is a sub-type of `NumericAIMProblem{N,T}`, the user must extend the default implementations abiding by the following rules
1. `QuasinormalModes.λ0(p::P{N,T}) where {N,T}` must return a lambda function of two parameters, the first representing the ODE's variable and the second representing the ODE's eigenvalue where the body represents the expression for the `λ0` part of the ODE.
2. `QuasinormalModes.S0(p::P{N,T}) where {N,T}` must return a lambda function of two parameters, the first representing the ODE's variable and the second representing the ODE's eigenvalue where the body represents the expression for the `S0` part of the ODE.
All problems `P{N,T}` that are a sub-type of `AIMProblem{N,T}` must extend the default implementations abiding by the following rules
1. `QuasinormalModes.get_niter(p::P{N,T}) where {N,T}` must return an unsigned number of type `N` representing the number of iterations for the AIM to perform.
2. `QuasinormalModes.get_x0(p::P{N,T}) where {N,T}` must return a number of type `T` representing the evaluation point of the AIM.
In the following sections, concrete examples of problems will be illustrated in order to better acquaint the user with the package and hopefully clear out any remaining misunderstandings.
!!! note "Semi-analytic VS numeric approach"
Because of the semi-analytic nature of the operation performed when a structure is a subtype of `AnalyticAIMProblem`, `QuasinormalModes.jl` is naturally slower to compute modes in this case. One may also find that for a large number of iterations the AIM might even fail to find modes. A general good approach would be to use the semi-analytic mode to generate lists of eigenvalues for a number of iterations that runs reasonably fast and then use these results as initial guesses for the numeric mode with a high number of iterations.
## The memory cache
In order to minimize memory allocations, all functions that actually compute eigenvalues require an `AIMCache` object. Given a certain problem `P{N,T}`, it initializes memory for 8 arrays of `get_niter(p) + one(N)` elements of type `T`. These arrays are used to store intermediate and final computation results. By using a cache object, we guarantee that memory for the computation data is allocated only once and not at each step of the AIM.
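A minimal sketch of this pattern, using the numeric Schwarzschild problem defined in the Schwarzschild example of this documentation: the cache is created once and then reused for every initial guess.
```julia
p = NSchwarzschildData(0x00030, Complex(0.43, 0.0), 0x00000, 0x00000)
c = AIMCache(p)  # all working memory is allocated here, once
for guess in (Complex(0.22, -0.20), Complex(0.21, -0.70))
    sol = computeEigenvalues(Serial(), p, c, guess)
    println(sol.zero)  # [Re(ω), Im(ω)] of the converged candidate
end
```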
## Stepping methods
In order to compute ``\delta_n``, `QuasinormalModes.jl` evolves ``\lambda_0`` and ``s_0`` according to the previously stated equations. The evolution from step 1 to step ``n`` must happen sequentially but the step itself, that is, the computation of new values of ``\lambda`` and ``s`` from old ones can be performed in parallel. We've provided singleton types that allow the user to control this behavior by passing instances of those types to the eigenvalue computing functions. The user can currently choose the following stepping methods:
1. `Serial`: Each instruction in a single AIM step is executed sequentially.
2. `Threaded`: Instructions in a single AIM step are executed in parallel using Julia's built-in `Threads.@threads` macro.
## Computing eigenvalues and general workflow guidelines
To compute eigenvalues, 3 functions are provided:
1. `computeDelta!`: Computes the AIM "quantization condition" ``\delta_n``.
2. `computeEigenvalues`: Computes a single, or a list of eigenvalues.
3. `eigenvaluesInGrid`: Find all eigenvalues in a certain numerical grid.
Depending on the problem type, these functions return different results and behave differently. In a `QuadraticEigenvalueProblem`:
1. `computeDelta!`: Returns a polynomial whose roots are the eigenvalues of the ODE.
2. `computeEigenvalues`: Computes the complete list of eigenvalues given by the roots of the computed polynomial.
In a `NumericAIMProblem`:
1. `computeDelta!`: Returns a value of the quantization condition at a given point in the complex plane.
2. `computeEigenvalues`: Computes a single eigenvalue from an initial trial frequency.
3. `eigenvaluesInGrid`: Attempts to find eigenvalues using a grid of real or complex data points as initial trial frequencies passed to `NLsolve`.
For more detail on these functions and their behaviors with each problem type refer to the [API Reference](api_ref.md) where specific descriptions can be found.
The AIM provides the user with "two degrees of freedom" when computing eigenvalues: the number of iterations to perform (which we refer to as ``n``) and the point around which the ODE functions will be expanded (which we refer to as ``x_0``). Additionally, our implementation asks for an initial guess in `NumericAIMProblem`s to find the roots of ``\delta_n``, adding yet another degree of freedom to the method. So far, the literature on the AIM does not provide a general prescription for choosing optimal values for ``n`` or ``x_0``; however, it is known that ``x_0`` can affect the speed at which the method converges to a correct solution, and if ``n`` is chosen to be too small, no eigenvalues will be found. Furthermore, because `computeEigenvalues` employs a Newton-like rootfinding method (provided by `NLsolve`) that is based on an initial guess for the root, choosing this guess "too far" from the correct solution might prevent convergence to a root, or it might be that the root is unstable and any small perturbation around an initial guess produces wildly different results. That being said, we can still outline a general procedure that works empirically when finding quasinormal modes based on the different problem types.
First, when the optimal values of ``n`` and ``x_0`` are unknown, start with ``x_0`` at the midpoint of the compactified domain and ``n`` around 20 or 30. This number of iterations will not yield the most accurate results but it will be enough to determine if we are on the right track while also not being too computationally expensive. From here, we can take one of two different paths:
1. If we have a `QuadraticEigenvalueProblem`, we will have a list of several eigenvalue candidates that are roots of the ``\delta_n`` polynomial but are not necessarily eigenvalues of the ODE. To determine the true eigenvalues, we need to call `computeEigenvalues` repeatedly with an increasing number of AIM iterations (a minimal sketch of this refinement loop is given after this list). Eigenvalues that persist or change slowly when the number of iterations changes are very likely to be true eigenvalues of the ODE. Other values are likely to be spurious numerical results. This procedure is similar to the one employed when computing eigenvalues using pseudospectral methods: various spurious results are produced and the true ones are found by repeatedly refining and comparing results. Once true eigenvalues start to emerge, we can start to play around with ``x_0`` to see if more eigenvalues emerge in the list.
2. If we have a `NumericAIMProblem`, a call to `computeEigenvalues` can only produce a single eigenvalue based on an initial guess. Assuming that `NLsolve` actually converges to a solution, this mode is also at risk of returning spurious results. Here, the wisdom of the `QuadraticEigenvalueProblem`s remains: true results must be refined when the number of iterations increases (indicating numerical convergence). If the returned eigenvalue changes wildly for a fixed initial guess, this might indicate that the result is spurious. Once an eigenvalue is found, fine-tuning of ``x_0`` can be done. A good value for ``x_0`` will make `NLsolve` converge to a root faster (with fewer iterations) than a bad one. Also, note that the optimal ``x_0`` value for a certain eigenvalue might not be optimal for all eigenvalues in the spectrum of the ODE (this has been observed empirically). This means that if we are sure that there is an eigenvalue in the vicinity of an initial guess (because we have obtained it with another method, for instance) and `computeEigenvalues` cannot find it even when the number of AIM iterations is high, tuning ``x_0`` might make these modes emerge. Furthermore, in `NumericAIMProblem`s the function `computeDelta!` is a point-wise function that returns the value of ``\delta_n`` anywhere in the complex plane. Using this function, the user can employ a different root finding method than `NLsolve`. This flexibility allows one to eliminate the additional degree of freedom imposed by the initial guess. We can, for instance, use [RootsAndPoles.jl](https://github.com/fgasdia/RootsAndPoles.jl) to find all roots of ``\delta_n`` or any other root finding method desired. An implementation of this idea is presented [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/schwarzschild_roots_and_poles.jl) and [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/harmonic_oscillator_roots_and_poles.jl).
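The snippet below is a minimal sketch of the refinement loop mentioned in item 1, using the semi-analytic Schwarzschild problem defined later in this documentation: the spectrum is computed for two iteration counts and only the candidates that persist (up to a tolerance) between the runs are kept.
```julia
function persistent_modes(niters; tol = 1e-4)
    spectra = map(niters) do n
        p = SchwarzschildData(n, Complex(big"0.43", big"0.0"), 0x00000, 0x00000)
        computeEigenvalues(Serial(), p, AIMCache(p))
    end
    # Keep candidates from the cheapest run that reappear (within tol) in the most expensive one.
    filter(ev -> minimum(abs.(spectra[end] .- ev)) < tol, spectra[1])
end

persistent_modes((0x00020, 0x00030))
```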
!!! note "Determining initial guesses"
Finding initial guesses to supply to `NumericAIMProblem`s can be difficult when solving a new physics problem. If possible, one could first try to find eigenvalue candidates by implementing the problem of interest as a `QuadraticEigenvalueProblem` or extending the code to work semi-analytically with other function types. Furthermore, one could guess a reasonable region where modes would be and use a root bracketing scheme, as described [here](https://lucass-carneiro.github.io/QuasinormalModes.jl/dev/schw/#Interpreting-SolverResults-output) and exemplified [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/harmonic_oscillator_roots_and_poles.jl) and [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/schwarzschild_roots_and_poles.jl).
# Complete Example: Schwarzschild Quasinormal Modes
To illustrate how to use `QuasinormalModes.jl` we will show from start to finish how to compute the quasinormal modes of a Schwarzschild black hole perturbed by an external field. This section will follow closely Emanuele Berti's lectures on black hole perturbation theory, which can be found [here](https://www.dropbox.com/sh/9th1um175m8gco9/AACCIkvNa3h-zdHBMZkQ31Baa?dl=0) and in Ref. [[3]](https://arxiv.org/abs/1111.5024)
# Mathematical preliminaries
Let's say that our Schwarzschild black hole is being perturbed by an external field ``\psi_{ls}`` where ``s`` is the spin of the field (``s = 0, 1, 2`` for scalar, electromagnetic and gravitational perturbations, respectively) and ``l`` is the angular index of the perturbation. Using mass units such that ``2M=1`` the "master" radial equation governing the perturbation is
```math
r(r-1)\psi_{ls}^{\prime\prime}(r) + \psi_{ls}^{\prime}(r) - \left[ l(l+1) - \frac{s^2-1}{r} - \frac{\omega^2 r^3}{r-1} \right]\psi_{ls}(r) = 0
```
where primes denote derivatives with respect to the radial coordinate ``r`` and ``\omega`` are the quasinormal frequencies. Since we are solving for quasinormal modes, we need to enforce the proper boundary conditions in the master equation: classically no wave can escape from the BH's event horizon and at spatial infinity waves can only "leave" the space-time. It's thus said that our field is purely *ingoing* in the event horizon (when ``r\rightarrow 1``) and purely *outgoing* at spatial infinity (when ``r\rightarrow\infty``). Mathematically, this means that the solution to the master equation must be of the form
```math
\psi_{ls}(r) = (r-1)^{-i \omega} r^{2 i \omega} e^{i \omega (r-1)}f_{ls}(r).
```
By substituting this solution *ansatz* in the master equation, we obtain a new 2nd order ODE, now for the function ``f_{ls}(r)``. This new ODE enforces the correct quasinormal mode boundary conditions. This process is usually referred to as incorporating the boundary conditions into the differential equation. The resulting equation reads
```math
r \left((r-1) r f^{\prime\prime}(r)+\left(1+2 i \left(r^2-2\right) \omega \right) f^\prime(r)\right)+f(r) \left(-r \left(l^2+l-4 \omega ^2\right)+s^2+(2 \omega +i)^2\right) = 0.
```
The last step, although not strictly required, facilitates the numerical handling of the equation. Because the radial coordinate extends from the event horizon to infinity, that is, ``r\in [1,\infty]``, and computers can't handle infinities, we re-scale the ODE's domain to a finite one. This can be easily done with the change of variables
```math
x = 1 - \frac{1}{r}
```
which implies that when ``r=1`` we have ``x=0`` and when ``r\rightarrow\infty`` we have ``x = 1``. Thus the solution domain has been successfully compactified to the interval ``x\in[0,1]``. By making this change of variables we get to the final form of the master equation which we will actually feed to `QuasinormalModes.jl`
```math
-x (x-1)^2 f^{\prime\prime}(x) + (x (4 i (x-2) \omega -3 x+4)+2 i \omega -1) f^\prime(x)+f(x) \left(l^2+l+\left(s^2-1\right) (x-1)+4 (x-2) \omega ^2+4 i (x-1) \omega \right) = 0.
```
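Comparing this with the AIM form ``f^{\prime\prime}(x) = \lambda_0(x)f^\prime(x) + s_0(x)f(x)`` (divide the equation through by ``x(x-1)^2``), we read off
```math
\lambda_0(x) = \frac{x\left(4i(x-2)\omega - 3x + 4\right) + 2i\omega - 1}{x(x-1)^2}, \qquad
s_0(x) = \frac{l(l+1) + (s^2-1)(x-1) + 4(x-2)\omega^2 + 4i(x-1)\omega}{x(x-1)^2},
```
which are exactly the expressions entered in the code below.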
# Implementing the master equation as an analytic problem
The first step is to load the required packages to run this example: `QuasinormalModes` and `SymEngine`:
```julia
using QuasinormalModes
using SymEngine
```
Next, we create a parametric type that sub-types `AnalyticAIMProblem`. As the eigenvalue appears in the master equation as a quadratic polynomial, we will sub-type `QuadraticEigenvalueProblem` with the following structure:
```julia
struct SchwarzschildData{N,T} <: QuadraticEigenvalueProblem{N,T}
nIter::N
x0::T
vars::Tuple{Basic, Basic}
exprs::Tuple{Basic, Basic}
end
```
As the reader might notice, the structure is quite simple. The variables `nIter` and `x0` store the AIM's number of iterations and expansion point, respectively, while `vars` stores the `SymEngine` variables representing the ODE's variable and eigenvalue, respectively, as a tuple. Finally, `exprs` stores the `SymEngine` expressions for the `λ0` and `S0` parts of the ODE.
Next we create a parametric constructor for `SchwarzschildData` that initializes the fields:
```julia
function SchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
vars = @vars x ω
λ0 = (-1 + (2*im)*ω + x*(4 - 3*x + (4*im)*(-2 + x)*ω))/((-1 + x)^2*x)
S0 = (l + l^2 + (-1 + s^2)*(-1 + x) + (4*im)*(-1 + x)*ω + 4*(-2 + x)*ω^2)/((-1 + x)^2*x)
return SchwarzschildData{N,T}(nIter, x0, vars, (λ0, S0))
end
```
This constructor can be used by passing the values directly instead of explicitly declaring type parameters. The final step is to extend the default accessors functions to operate on `SchwarzschildData`:
```julia
QuasinormalModes.λ0(d::SchwarzschildData{N,T}) where {N,T} = d.exprs[1]
QuasinormalModes.S0(d::SchwarzschildData{N,T}) where {N,T} = d.exprs[2]
QuasinormalModes.get_niter(d::SchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::SchwarzschildData{N,T}) where {N,T} = d.x0
QuasinormalModes.get_ODEvar(d::SchwarzschildData{N,T}) where {N,T} = d.vars[1]
QuasinormalModes.get_ODEeigen(d::SchwarzschildData{N,T}) where {N,T} = d.vars[2]
```
These functions are fairly straightforward accessors and require no additional comment.
# Implementing the master equation as a numeric problem
Again we start by defining a structure but this time around we sub-type `NumericAIMProblem`
```julia
struct NSchwarzschildData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
l::N
s::N
end
```
Here `nIter` and `x0` have the same meaning as before, but now instead of storing symbolic variables and expressions we store two additional unsigned integers, `l` and `s`. These are the angular and spin parameters of the master equation. Here we must store them in the struct as they can't be "embedded" into the expressions for `λ0` and `S0` as in the analytic case.
We proceed once again by creating a more convenient constructor. This time no intermediate computation is required upon construction:
```julia
function NSchwarzschildData(nIter::N, x0::T, l::N, s::N) where {N,T}
return NSchwarzschildData{N,T}(nIter, x0, l, s)
end
```
Finally we extend the default implementations
```julia
QuasinormalModes.λ0(::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (-1 + (2*im)*ω + x*(4 - 3*x + (4*im)*(-2 + x)*ω))/((-1 + x)^2*x)
QuasinormalModes.S0(d::NSchwarzschildData{N,T}) where {N,T} = (x,ω) -> (d.l + d.l^2 + (-1 + d.s^2)*(-1 + x) + (4*im)*(-1 + x)*ω + 4*(-2 + x)*ω^2)/((-1 + x)^2*x)
QuasinormalModes.get_niter(d::NSchwarzschildData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NSchwarzschildData{N,T}) where {N,T} = d.x0
```
This time `λ0` and `S0` return two-parameter lambda functions that will be called multiple times during the evaluation of the AIM. As we've previously mentioned, the first parameter is assumed to be the ODE's variable while the second is the ODE's eigenvalue. The body of each lambda is the expression for its respective part of the ODE.
# Constructing problems and initializing the cache
We create our problems and cache objects by calling the constructors:
```julia
p_ana = SchwarzschildData(0x00030, Complex(BigFloat("0.43"), BigFloat("0.0")), 0x00000, 0x00000);
p_num = NSchwarzschildData(0x00030, Complex(0.43, 0.0), 0x00000, 0x00000);
c_ana = AIMCache(p_ana)
c_num = AIMCache(p_num)
```
Here we are setting up problems to be solved using 48 iterations with `x0 = 0.43 + 0.0*im` and `l = s = 0`.
# Computing the eigenvalues
Finally, to compute the quasinormal frequencies we will call `computeEigenvalues(Serial(), p_ana, c_ana)` (or `computeEigenvalues(Threaded(), p_ana, c_ana)` if you wish, but don't forget to start Julia with the `--threads` option). This returns an array with all the roots of the quantization condition. We will sort the array by the imaginary part, filter it to remove entries whose real part is too small or whose imaginary part is positive, and print the result to `stdout`:
```julia
m_ana = computeEigenvalues(Serial(), p_ana, c_ana)
function printQNMs(qnms, cutoff, instab)
println("-"^165)
println("|", " "^36, "Re(omega)", " "^36, " ", " "^36, "Im(omega)", " "^36, "|")
println("-"^165)
for qnm in qnms
if real(qnm) > cutoff && ( instab ? true : imag(qnm) < big"0.0" )
println(real(qnm), " ", imag(qnm))
end
end
println("-"^165)
return nothing
end
sort!(m_ana, by = x -> imag(x))
printQNMs(m_ana, 1.0e-10, false)
```
Remember that (as discussed [here](org.md#Computing-eigenvalues-and-general-workflow-guidelines)) not all values are actually eigenvalues of the ODE (that is, quasinormal modes). Next we will call
```julia
ev = computeEigenvalues(Serial(), p_num, c_num, Complex(0.22, -0.20), nls_xtol = 1.0e-10, nls_ftol = 1.0e-10)
```
The variable `ev` now contains a `SolverResults` object from the [NLsolve.jl](https://github.com/JuliaNLSolvers/NLsolve.jl) package. The first solution element represents the real part of the computed mode while the second represents the imaginary part. The object also contains information about the convergence of the method. Note that with a numerical problem we can only find one mode at a time using a certain initial guess. This can be somewhat remedied by using `eigenvaluesInGrid`, which uses multiple initial conditions as a guess and collects the converged results. The complete source code of this example can be found [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/schwarzschild.jl).
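A sketch of such a grid search is shown below. It follows the tuple form used in the package's own test suite, which appears to take two opposite corners of a rectangle in the complex plane followed by the number of guesses along each direction; check the [API Reference](api_ref.md) for the exact meaning of the arguments.
```julia
grid_ev = eigenvaluesInGrid(Serial(), p_num, c_num,
                            (Complex(0.1, -1.0), Complex(1.0, -0.1), 3, 3))
filter!(x -> real(x) > 1.0e-5 && imag(x) < 0.0, grid_ev)  # drop spurious or unconverged values
```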
# Interpreting `SolverResults` output
If we print `ev` to `stdout`, we will see something like
```
Results of Nonlinear Solver Algorithm
* Algorithm: Trust-region with dogleg and autoscaling
* Starting Point: [0.22, -0.2]
* Zero: [0.22090807949439797, -0.20979076729905097]
* Inf-norm of residuals: (a large number)
* Iterations: 5
* Convergence: true
* |x - x'| < 1.0e-10: true
* |f(x)| < 1.0e-10: false
* Function Calls (f): 6
* Jacobian Calls (df/dx): 6
```
This is precisely the information returned by the `SolverResults` object in human readable format. Most of these fields are self explanatory, but we must pay close attention to the `Convergence` field where we see that for convergence to be declared, either of two tests must pass:
1. The `|x - x'| < 1.0e-10: true` test indicates that the difference between two solution candidates obtained by two consecutive iterations of the trust region algorithm are differing by an amount smaller than `1.0e-10`
2. The `|f(x)| < 1.0e-10: false` test indicates that the Inf-norm of the residuals (the value of the function we are trying to find the root for yields after we substitute a solution candidate back into it) is not smaller than `1.0e-10`. In fact, in this example it is a large number.
The fact that the first test passes and the second one does not suggests that the root finding method is converging, since at each iteration the difference between candidate solutions gets smaller, but not to a true root of the AIM quantization condition (and therefore a quasinormal mode), since we get a number far from zero when we substitute this value back into the quantization condition. To resolve this kind of ambiguity, we can employ a different root finding method that does not "polish" roots like Newton's method but "brackets" them, like the bisection method, which is guaranteed to converge to a root if it exists within an interval. Such a method for finding all roots and poles of a complex function within a region exists and is known as the [GRPF](https://github.com/PioKow/GRPF) method, implemented in Julia by the [RootsAndPoles.jl](https://github.com/fgasdia/RootsAndPoles.jl) package. This algorithm works by subdividing a complex plane domain into triangles and applying a discretized version of Cauchy's argument principle, which counts the roots and poles of a complex function within a region. Despite relying internally on `NLsolve.jl` in the `computeEigenvalues` or `eigenvaluesInGrid` family of functions, `QuasinormalModes.jl`'s core responsibility is to compute the AIM quantization condition at any point in the complex plane for a given problem. Such computation is provided by the `computeDelta!` family of functions. By exposing this functionality, the user has complete freedom to choose what root finding method will be employed. In fact, internally, `computeEigenvalues` simply wraps `computeDelta!` into another function that is in the format accepted by `NLsolve.jl`. To see a concrete example of finding Schwarzschild modes with `RootsAndPoles.jl`, look at [this](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/schwarzschild_roots_and_poles.jl) example, which when asked to search for roots in the ``[0.1 - 1.0i, 1.0 + 1.0i]`` range reports
```
Roots:
0.2082018398054518 - 0.7014133118267079im
0.2209861849099763 - 0.2095673605939246im
0.2082018397886055 - 0.7014133118222261im
-------------------------------------------
Poles:
0.2082018397894562 - 0.7014133118211809im
```
Within this list, we can find the fundamental and first excited mode. By comparing these two results we can be sure that a found root is in fact a root of the quantization condition. Whether or not this root is an actual eigenvalue of the original second order ODE is a different story and was [previously](https://lucass-carneiro.github.io/QuasinormalModes.jl/dev/org/#Computing-eigenvalues-and-general-workflow-guidelines) discussed.
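A rough sketch of this workflow is given below. It is not a verbatim copy of the linked example, and the argument order of `computeDelta!` assumed here is a guess that should be checked against the [API Reference](api_ref.md).
```julia
using RootsAndPoles

p = NSchwarzschildData(0x00030, Complex(0.43, 0.0), 0x00000, 0x00000)
c = AIMCache(p)
# Assumed call signature: stepping method, problem, cache, trial frequency.
f(ω) = computeDelta!(Serial(), p, c, ω)

origcoords = rectangulardomain(complex(0.1, -1.0), complex(1.0, -0.01), 0.05)
zroots, zpoles = grpf(f, origcoords)
```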
# Computing high ``\ell`` quasinormal modes
As a final example, we have included [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/tree/master/examples/high_l_Schwarzschild) source files for computing and plotting a large list of Schwarzschild quasinormal modes with varying values of ``\ell``. This code is an excerpt of a production code, created to compute Schwarzschild QNMs by perturbing the system with fields of various spins. This example not only shows that we are able to recover literature values but also lets us "visualize" the different frequencies.
```@raw html
<table border="0"><tr>
<td>
<figure>
<img src='../assets/l_plot.svg' alt='missing'><br>
<figcaption><em>Quasinormal modes in a Schwarzschild background</em></figcaption>
</figure>
</td>
</tr></table>
```
Each point in the plot represents a certain value of ``\ell`` and the colors indicate a fixed ``n`` value. The list of numeric values can be found [here](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/high_l_Schwarzschild/qnm_2022-04-05%2017:49:10_s_0.dat).
# Complete Example: The Harmonic Oscillator
We will now turn away from general relativity and use `QuasinormalModes.jl` to compute the energy eigenvalues of the quantum harmonic oscillator following [this paper](https://arxiv.org/abs/1111.5024).
# Mathematical preliminaries
If we measure the energy of the system in units of ``\hbar\omega`` and distance in units of ``\sqrt{\hbar/(m\omega)}`` the time independent Schrödinger equation for the quantum harmonic oscillator is written as
```math
-\psi^{\prime\prime}(x) + x^2\psi(x) = \epsilon\psi(x),
```
where we defined ``\epsilon \equiv 2 E`` and ``E`` is the quantum state's energy. Imposing that ``\psi(x)`` decays like a Gaussian distribution asymptotically, we apply the ansatz
```math
\psi(x) = e^{-x^2/2}f(x)
```
which, substituting in the original equation, yields
```math
f^{\prime\prime}(x) = 2 x f^\prime(x) + (1-\epsilon)f(x).
```
This allows us to easily identify ``\lambda_0 = 2x`` and ``s_0 = 1 - \epsilon``. In all our implementations we shall refer to the sought eigenvalue ``\epsilon`` using the variable `ω` in order to maintain consistency with the previous example.
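It is instructive to run a single AIM iteration by hand before writing any code. With ``\lambda_0 = 2x`` and ``s_0 = 1 - \epsilon`` the recursion gives
```math
\lambda_1 = \lambda_0^\prime + s_0 + \lambda_0^2 = 3 - \epsilon + 4x^2, \qquad
s_1 = s_0^\prime + s_0\lambda_0 = 2x(1 - \epsilon),
```
so that the quantization condition reads
```math
\delta_1 = s_1\lambda_0 - s_0\lambda_1 = (1 - \epsilon)(\epsilon - 3) = 0,
```
whose roots ``\epsilon = 1`` and ``\epsilon = 3`` are already the first two eigenvalues (``E = 1/2`` and ``E = 3/2``). Further iterations add the higher states, which is what the code below automates.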
# Implementing the master equation as an analytic problem
The first step is to load the required packages to run this example: `QuasinormalModes` and `SymEngine`:
```julia
using QuasinormalModes
using SymEngine
```
Next, we create a parametric type that sub-types `AnalyticAIMProblem`. As the eigenvalue enters the master equation polynomially (here linearly, a special case of the quadratic polynomials handled by `QuadraticEigenvalueProblem`), we will sub-type `QuadraticEigenvalueProblem` with the following structure:
```julia
struct HarmonicOscilatorData{N,T} <: QuadraticEigenvalueProblem{N,T}
nIter::N
x0::T
vars::Tuple{Basic, Basic}
exprs::Tuple{Basic, Basic}
end
```
Now we implement the constructor and extend the default implementations:
```julia
function HarmonicOscilatorData(nIter::N, x0::T) where {N,T}
vars = @vars x ω
λ0 = 2*x
S0 = 1 - ω
return HarmonicOscilatorData{N,T}(nIter, x0, vars, (λ0, S0))
end
QuasinormalModes.λ0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.exprs[1]
QuasinormalModes.S0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.exprs[2]
QuasinormalModes.get_niter(d::HarmonicOscilatorData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::HarmonicOscilatorData{N,T}) where {N,T} = d.x0
QuasinormalModes.get_ODEvar(d::HarmonicOscilatorData{N,T}) where {N,T} = d.vars[1]
QuasinormalModes.get_ODEeigen(d::HarmonicOscilatorData{N,T}) where {N,T} = d.vars[2]
```
# Implementing the master equation as a numeric problem
The structure, constructor and extensions are
```julia
struct NHarmonicOscilatorData{N,T} <: NumericAIMProblem{N,T}
nIter::N
x0::T
end
function NHarmonicOscilatorData(nIter::N, x0::T) where {N,T}
return NHarmonicOscilatorData{N,T}(nIter, x0)
end
QuasinormalModes.λ0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x,ω) -> 2*x
QuasinormalModes.S0(::NHarmonicOscilatorData{N,T}) where {N,T} = (x,ω) -> 1 - ω + x - x
QuasinormalModes.get_niter(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.nIter
QuasinormalModes.get_x0(d::NHarmonicOscilatorData{N,T}) where {N,T} = d.x0
```
# Constructing problems and initializing the cache
Once again, we create our problems and cache objects by calling the constructors:
```julia
p_ana = HarmonicOscilatorData(0x0000A, 0.5);
p_num = NHarmonicOscilatorData(0x0000A, 0.5);
c_ana = AIMCache(p_ana)
c_num = AIMCache(p_num)
```
Here we are setting up problems to be solved using 10 iterations with `x0 = 0.5`
# Computing the eigenvalues
Once again we compute the eigenvalues by calling
```julia
ev_ana = computeEigenvalues(Serial(), p_ana, c_ana)
ev_num = eigenvaluesInGrid(Serial(), p_num, c_num, (0.0, 21.0))
```
The results are two arrays, containing the eigenvalues. As before, we define a function to print the results to `stdout`
```julia
function printEigen(eigenvalues)
println("--------------------------------------")
for i in eachindex(eigenvalues)
println("n = $i, ω = $(eigenvalues[i])")
end
println("--------------------------------------")
return nothing
end
println("Analytic results")
printEigen(reverse!(ev_ana))
println("Numeric results")
printEigen(ev_num)
```
The complete source file for this example can be found in [harmonic_oscillator.jl](https://github.com/lucass-carneiro/QuasinormalModes.jl/blob/master/examples/harmonic_oscillator.jl). The output is in agreement with the expected result for the eigenenergies of the harmonic oscillator, that is, ``E_n = n + 1/2`` (recall that the printed values are ``\epsilon = 2E``, so they are the odd integers).
```
Analytic results
--------------------------------------
n = 1, ω = 0.9999999999999999 + 0.0im
n = 2, ω = 2.9999999999999964 + 0.0im
n = 3, ω = 4.999999999999426 + 0.0im
n = 4, ω = 7.000000000006788 + 0.0im
n = 5, ω = 8.999999999980533 + 0.0im
n = 6, ω = 10.999999804542819 + 0.0im
n = 7, ω = 13.000000959453153 + 0.0im
n = 8, ω = 14.999998108295404 + 0.0im
n = 9, ω = 17.00000187312756 + 0.0im
n = 10, ω = 18.999999068409203 - 0.0im
n = 11, ω = 21.000000186185098 + 0.0im
--------------------------------------
Numeric results
--------------------------------------
n = 1, ω = 1.0
n = 2, ω = 3.000000000000006
n = 3, ω = 5.000000000000002
n = 4, ω = 7.000000000000006
n = 5, ω = 8.999999999999988
n = 6, ω = 11.0
n = 7, ω = 12.999999999999977
n = 8, ω = 15.000000000000014
n = 9, ω = 16.999999999999908
n = 10, ω = 19.000000000000025
n = 11, ω = 21.0
--------------------------------------
```
| QuasinormalModes | https://github.com/lucass-carneiro/QuasinormalModes.jl.git |
|
[
"MIT"
] | 1.1.1 | a7c8cc3ba0ad32e0c468884c95d44991866c2a42 | docs | 1327 | # QuasinormalModes.jl Examples
In this folder you can find examples of `QuasinormalModes.jl` in action. Here is a list and brief description of each example:
1. `schwarzschild.jl`: Shows how to compute the quasinormal modes of a Schwarzschild black hole both numerically and semi-analytically. A complete description of how to assemble this example can be found [here](https://lucass-carneiro.github.io/QuasinormalModes.jl/dev/schw/).
2. `harmonic_oscillator.jl`: Shows how to obtain the eigenenergies from a quantum harmonic oscillator. A complete description of how to assemble this example can be found [here](https://lucass-carneiro.github.io/QuasinormalModes.jl/dev/sho/).
3. `schwarzschild_roots_and_poles.jl`: Shows how one can combine the `computeDelta` function with any root finding scheme or package in order to find eigenvalues. This example makes use of the `RootsAndPoles.jl` package, which finds all roots of the AIM quantization condition in a given region of the complex plane.
4. `harmonic_oscillator_roots_and_poles.jl`: Shows how one can use `computeDelta` and `RootsAndPoles.jl` to find the first few eigenenergies of the quantum harmonic oscillator.
Note that this folder has its own `Project.toml` and `Manifest.toml`. This allows tracking example dependencies separately from the main package code.
---
title: '`QuasinormalModes.jl`: A Julia package for computing discrete eigenvalues of second order ODEs'
tags:
- Julia
- Differential equations
- Black holes
- Discrete eigenvalues
authors:
- name: Lucas Timotheo Sanches
orcid: 0000-0001-6764-1812
affiliation: 1
affiliations:
- name: Centro de Ciências Naturais e Humanas, Universidade Federal do ABC (UFABC)
index: 1
date: 17 September 2021
bibliography: paper.bib
---
# Summary
In General Relativity, when perturbing a black hole with an external field, or particle, the system relaxes by emitting damped gravitational waves, known as *quasinormal modes*. These are the characteristic frequencies of the black hole and gain the *quasi* prefix due to the fact that they have a real frequency, which represents the oscillation frequency of the response, and an imaginary frequency that represents the decay rate of said oscillations. In many cases, such perturbations can be described by a second order homogeneous ordinary differential equation (ODE) with discrete complex eigenvalues.
Determining these characteristic frequencies quickly and accurately for a large range of models is important for many practical reasons. It has been shown that the gravitational wave signal emitted at the final stage of the coalescence of two compact objects is well described by quasinormal modes [@buonanno; @seidel]. This means that if one has access to a database of quasinormal modes and of gravitational wave signals from astrophysical collision events, it is possible to characterize the remnant object using its quasinormal frequencies. Since there are many different models that aim to describe remnants, being able to compute the quasinormal frequencies for such models in a reliable way is paramount for confirming or discarding them.
# Statement of need
`QuasinormalModes.jl` is a `Julia` package for computing the quasinormal modes of any General Relativity model whose perturbation equation can be expressed as a second order homogeneous ODE. Not only that, the package can be used to compute the discrete eigenvalues of *any* second order homogeneous ODE (such as the energy eigenstates of the time independent Schrödinger equation) provided that these eigenvalues actually exist. The package features a flexible and user friendly API where the user simply needs to provide the coefficients of the problem ODE after incorporating boundary and asymptotic conditions into it. The user can also choose to use machine or arbitrary precision arithmetic for the underlying floating point operations involved and whether or not to do computations sequentially or in parallel using threads. The API also tries not to force any particular workflow on the users so that they can incorporate and adapt the existing functionality in their research pipelines without unwanted intrusions. Often user friendliness, flexibility and performance are treated as mutually exclusive, particularly in scientific applications. By using `Julia` as an implementation language, the package can have all of these features simultaneously.
Another important motivation for using `Julia` and writing this package was the lack of generalist, free (both in the financial and license-wise sense) open source tools that serve the same purpose. More precisely, there are tools which are free and open source, but run on top of a proprietary, paid and expensive software framework, such as the ones developed by @qnmspectral and @spectralbp, which are both excellent packages that aim to perform the same task as `QuasinormalModes.jl` and can be obtained and modified freely but, unfortunately, require the user to own a license of the proprietary `Wolfram Mathematica` CAS. Furthermore, their implementations are limited to solving problems where the eigenvalue must appear in the ODE as a polynomial of order $p$. While this is not prohibitively restrictive for most astrophysics problems, it can be an important limitation in other areas. There are also packages that are free and run on top of `Mathematica` but are not aimed at being general eigenvalue solvers at all, such as the one by @bhpt_quasinormalmodes, which can only compute modes of Schwarzschild and Kerr black holes. Finally, the Python package by @bhpt_qnm is open source and free but can only compute Kerr quasinormal modes.
`QuasinormalModes.jl` fills the existing gap for free, open source tools that are able to compute discrete eigenvalues (and in particular, quasinormal modes) efficiently for a broad class of models and problems. The package was developed during the author's PhD research where it is actively used for producing novel results that shall appear in the author's thesis. It is also actively used in a collaborative research effort (of which the author is one of the members) for computing quasinormal modes produced by perturbations with integer (but different than 0) spins and semi-integer spins. These results are being contrasted with those obtained by other methods and so far show excellent agreement with each other and with literature results.
# Underlying algorithm
`QuasinormalModes.jl` internally uses a relatively new numerical method called the Asymptotic Iteration Method (AIM). The method was introduced by @aim_original but the actual implementation used in this package is based on the revision performed by @aim_improved. The main purpose of the AIM is to solve the following general linear homogeneous second order ODE:
\begin{equation}
y^{\prime\prime}(x) - \lambda_0(x)y^\prime(x) - s_0(x)y(x) = 0,
\label{eq:aim_general_ode}
\end{equation}
where primes denote derivatives with respect to the variable $x$ (that is defined over some interval that is not necessarily bounded), $\lambda_0(x) \neq 0$ and $s_0(x) \in C_\infty$. The method is based upon the following theorem: let $\lambda_0$ and $s_0$ be functions of the variable $x \in (a,b)$ that are $C_\infty$ on the same interval; then the solution of the differential equation, Eq. \eqref{eq:aim_general_ode}, has the form
\begin{equation}
y(x) = \exp\left( -\int\alpha\mathrm{d} t \right) \times \left[ C_2 + C_1 \int^{x} \exp \left( \int^{t} ( \lambda_0(\tau) + 2\alpha(\tau) )\mathrm{d} \tau \right) \mathrm{d} t \right]
\label{eq:aim_general_solution}
\end{equation}
if for some $n>0$ the condition
\begin{equation}
\delta \equiv s_n\lambda_{n-1} - \lambda_{n}s_{n-1} = 0
\label{eq:aim_delta_definition}
\end{equation}
is satisfied, where
\begin{align}
\lambda_k(x) \equiv & \lambda^\prime_{k-1}(x) + s_{k-1}(x) + \lambda_0(x)\lambda_{k-1}(x) \label{eq:aim_lambda_k}\\
s_k(x) \equiv & s^\prime_{k-1}(x) + s_0\lambda_{k-1}(x) \label{eq:aim_sk}
\end{align}
where $k$ is an integer that ranges from $1$ to $n$.
Provided that the theorem is satisfied we can find both the eigenvalues and eigenvectors of the second order ODE using, respectively, Eq. \eqref{eq:aim_delta_definition} and Eq. \eqref{eq:aim_general_solution}. Due to the recursive nature of Eq.\eqref{eq:aim_lambda_k} and Eq.\eqref{eq:aim_sk}, to compute the quantization condition, Eq.\eqref{eq:aim_delta_definition}, using $n$ iterations the $n$-th derivatives of $\lambda_0$ and $s_0$ must be computed multiple times. To address this issue, @aim_improved proposed the use of a Taylor expansion of both $\lambda$ and $s$ around a point $\xi$ where the AIM is to be performed. This improved version is implemented in `QuasinormalModes.jl`.
# Benchmark
To show `QuasinormalModes.jl` in action, this section provides a simple benchmark where the fundamental quasinormal mode ($n=\ell=m=s=0$) of a Schwarzschild black hole was computed using 16 threads on an Intel(R) Core(TM) i9-7900X @ 3.30GHz CPU with 256 bit precision floating point numbers.
{width=60%}
{width=60%}
In order to quantify the rate at which the method converges to the correct results, the error measure $\varepsilon$ was defined as follows: given the computed real and imaginary quasinormal frequencies, denoted respectively by $\omega_R$ and $\omega_I$, and the reference frequencies given by @berti_ringdown, denoted respectively as $\overline{\omega}_R$ and $\overline{\omega}_I$ we have that $|\varepsilon| = |\omega_{R,I} - \overline{\omega}_{R,I}|$. In \autoref{fig:convergence} $\varepsilon$ is plotted as a function of the number of iterations for both the real and imaginary frequencies. We can see that, as the number of iterations increases, the error in the computed values rapidly decreases.
In \autoref{fig:time}, the time required to perform a certain number of iterations is plotted in logarithmic scale. Each data point is the arithmetic mean time obtained after 10 runs of the algorithm for a given number of iterations. The time measurement takes care to exclude the overhead induced at "startup" due to Julia's JIT compilation. Even though the time taken to perform a certain number of iterations increases with a power law, the time scale required to achieve highly accurate results is still around 10s. This time would be even smaller if one chooses to use built-in floating point types instead of arbitrary precision numbers.
# Acknowledgements
I would like to thank my PhD advisor, Dr. Maurício Richartz, for introducing me to the AIM and providing helpful discussions and comments about the method's inner workings as well as helpful revision and comments about the paper's structure. I would also like to thank Dr. Iara Ota for the helpful comments, discussions and revision of this paper. I would also like to thank Dr. Erik Schnetter, Soham Mukherjee and Stamatis Vretinaris for the help and discussions regarding root finding methods and to Dr. Schnetter for directly contributing documentation typo corrections and suggestions for improving the package's overall presentation and documentation. This research was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES, Brazil) - Finance Code 001.
# References
using Documenter, MonteCarloMeasurements, Unitful
using Plots
plotly()
makedocs(
sitename = "MonteCarloMeasurements Documentation",
doctest = false,
modules = [MonteCarloMeasurements],
pages = [
"Home" => "index.md",
"Supporting new functions" => "overloading.md",
"Examples" => "examples.md",
"Linear vs. Monte-Carlo uncertainty propagation" => "comparison.md",
"Performance tips" => "performance.md",
"Advanced usage" => "advanced_usage.md",
"API" => "api.md",
],
format = Documenter.HTML(prettyurls = haskey(ENV, "CI")),
) # Due to lots of plots, this will just have to be run on my local machine
deploydocs(
deps = Deps.pip("pygments", "mkdocs", "python-markdown-math", "mkdocs-cinder"),
repo = "github.com/baggepinnen/MonteCarloMeasurements.jl.git",
)
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 2076 | # PGFPlots.save("logo.tex", fig.o, include_preamble=true)
# add \begin{axis}[ticks=none,height...
# add begin{tikzpicture}[scale=0.5]
# Font: https://github.com/JuliaGraphics/julia-logo-graphics/blob/master/font/TamilMN-Bold.ttf
# pdftopng test.pdf logo.png
# convert logo.pdf logo.svg
using MonteCarloMeasurements, Plots, Random, KernelDensity, Colors
darker_blue = RGB(0.251, 0.388, 0.847)
lighter_blue = RGB(0.4, 0.51, 0.878)
darker_purple = RGB(0.584, 0.345, 0.698)
lighter_purple = RGB(0.667, 0.475, 0.757)
darker_green = RGB(0.22, 0.596, 0.149)
lighter_green = RGB(0.376, 0.678, 0.318)
darker_red = RGB(0.796, 0.235, 0.2)
lighter_red = RGB(0.835, 0.388, 0.361)
N = 30
Random.seed!(3)
a,b,c = [0.2randn(N,2) for _ in 1:3]
a .+= [0 0]
b .+= [1 0]
c .+= [0.5 1]
pa,pb,pc = Particles.((a,b,c))
opts = (markersize=9, markerstrokewidth=2, markerstrokealpha=0.8, markeralpha=0.8, size=(300,300))
scatter(eachcol(a)...; c=lighter_red, markerstrokecolor=darker_red, opts..., axis=false, grid=false, legend=false)
scatter!(eachcol(b)...; c=lighter_purple, markerstrokecolor=darker_purple, opts...)
# scatter!(eachcol(c)...; c=lighter_green, markerstrokecolor=darker_, opts...)
ls = (linewidth=10,markerstrokewidth=2)
plot!(pa[1:1],pa[2:2]; c=lighter_red, markerstrokecolor=darker_red, ls...)
plot!(pb[1:1],pb[2:2]; c=lighter_purple, markerstrokecolor=darker_purple, ls...)
# plot!(pc[1:1],pc[2:2]; c=lighter_green, markerstrokecolor=darker_, ls...)
##
x = LinRange(-0.5,1.45,30)
f(x) = (x-0.3)^2 + 0.5
y = f.(x)
plot!(x,y, l=(darker_blue,))
bi = 1:3:N
plot!([b[bi,1] b[bi,1]]', [b[bi,2] f.(b[bi,1])]', l=(lighter_purple, :dash, 0.2))
plot!([b[bi,1] fill(0.5,length(bi))]', [f.(b[bi,1]) f.(b[bi,1])]', l=(lighter_green, :dash, 0.2))
scatter!(fill(0.5,N), f.(b[:,1]); c=lighter_green, markerstrokecolor=darker_green, opts...)
kd = kde(f.(b[:,1]), npoints=200, bandwidth=0.09, boundary=(0.5,1.8))
fig = plot!(0.5 .- 0.2kd.density, kd.x, c=lighter_green, markerstrokecolor=darker_green, fill=true, fillalpha=0.2)
# PGFPlots.save("test.tex", fig.o, include_preamble=true)
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 2801 | # In this example we solve a robust linear programming problem using Optim. The problem is taken from [wikipedia](https://en.wikipedia.org/wiki/Robust_optimization#Example_1)
# $$\text{maximize}_{x,y} \; 3x+2y \quad \text{s.t}. x,y > 0, \quad cx+dy < 10 ∀ c,d ∈ P$$
# Where $c$ and $d$ are uncertain. We encode the constraint into the cost and solve it using 4 different algorithms
using MonteCarloMeasurements, Optim, ForwardDiff, Zygote
const c = 1 ∓ 0.1 # These are the uncertain parameters
const d = 1 ∓ 0.1 # These are the uncertain parameters
# In the cost function below, we ensure that $cx+dy < 10 \; ∀ \; c,d ∈ P$ by looking at the worst case
Base.findmax(p::AbstractParticles; dims=:) = findmax(p.particles; dims=dims)
function cost(pars)
x,y = pars
-(3x+2y) + 10000sum(pars .< 0) + 10000*(pmaximum(c*x+d*y) > 10)
end
pars = [1., 1] # Initial guess
cost(pars) # Try the cost function
cost'(pars)
# We now solve the problem using the following list of algorithms
function solvemany()
algos = [NelderMead(), SimulatedAnnealing(), BFGS(), Newton()]
map(algos) do algo
res = Optim.optimize(cost, cost', pars, algo, inplace=false)
m = res.minimizer
cost(m)
end
end
solvemany()'
# All methods find more or less the same minimum, but the gradient-free methods actually do a bit better
# How long time does it take to solve all the problems?
@time solvemany();
# It was quite fast
# We can also see whether or not it's possible to take the gradient of
# 1. A deterministic function with respect to deterministic parameters
# 2. A deterministic function with respect to uncertain parameters
# 3. An uncertain function with respect to deterministic parameters
# 4. An uncertain function with respect to uncertain parameters
function strange(x,y)
(x.^2)'*(y.^2)
end
deterministic = [1., 2] # Initial guess
uncertain = [1., 2] .+ 0.001 .* StaticParticles.(10) # Initial guess
ForwardDiff.gradient(x->strange(x,deterministic), deterministic)
#
ForwardDiff.gradient(x->strange(x,deterministic), uncertain)
#
ForwardDiff.gradient(x->strange(x,uncertain), deterministic)
#
a = ForwardDiff.gradient(x->strange(x,uncertain), uncertain);
# mean.(a)
# The last one here is commented because it sometimes segfaults. When it doesn't, it seems to produce the correct result with the complicated type Particles{Particles{Float64,N},N}, which errors when printed.
# We can also do the same using Zygote. The result is the same, and Zygote also handles the last version without producing a weird type in the result!
using Zygote
Zygote.gradient(x->strange(x,deterministic), deterministic)
#
Zygote.gradient(x->strange(x,deterministic), uncertain)
#
Zygote.gradient(x->strange(x,uncertain), deterministic)
#
Zygote.gradient(x->strange(x,uncertain), uncertain)
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 5611 | # # ControlSystems using MonteCarloMeasurements
# In this example, we will create a transfer function with uncertain coefficients, and use it to calculate Bode diagrams and simulate the system.
using ControlSystemsBase, MonteCarloMeasurements, StatsPlots
using Test, LinearAlgebra, Statistics
import MonteCarloMeasurements: ⊗
unsafe_comparisons(true, verbose=false) # This file requires mean comparisons for displaying transfer functions in text form as well as for discretizing an LTI system
default(size=(2000,1200))
p = 1 ± 0.1
ζ = 0.3 ± 0.05
ω = 1 ± 0.1;
# Alternative definitions of the uncertain parameters are given by
# `p,ζ,ω = outer_product([Normal(1,0.1),Normal(0.3,0.05),Normal(1,0.1)], 2_000)`
# `p,ζ,ω = [1,0.3,1] ⊗ [0.1, 0.05, 0.1] # Defaults to N≈100_000`
G = tf([p*ω], [1, 2ζ*ω, ω^2])
dc = dcgain(G)[]
density(dc)
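# The DC-gain distribution can also be summarized numerically; as an illustrative addition
# (not part of the original example), a 90% interval can be extracted with `pquantile`
pquantile(dc, 0.05), pquantile(dc, 0.95)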
#
w = exp10.(LinRange(-0.7,0.7,100))
@time mag, phase = bode(G,w) .|> vec;
# ## Bode plot
scales = (yscale=:log10, xscale=:log10)
errorbarplot(w,mag,0.00; scales..., layout=3, subplot=1, lab="q=$(0.00)")
errorbarplot!(w,mag,0.01, subplot=1, lab="q=$(0.01)")
errorbarplot!(w,mag,0.1, subplot=1, lab="q=$(0.1)", legend=:bottomleft, linewidth=3)
mcplot!(w,mag; scales..., alpha=0.2, subplot=2, c=:black)
ribbonplot!(w,mag, 0.95; yscale=:log10, xscale=:log10, alpha=0.2, subplot=3)
# A ribbonplot is not always suitable for plots with logarithmic scales.
# ## Nyquist plot
# We can visualize the uncertainty in the Nyquist plot in a number of different ways, here are two examples
reny,imny,wny = nyquist(G,w) .|> vec
plot(reny, imny, 0.005, lab="Nyquist curve 99%")
plot!(reny, imny, 0.025, lab="Nyquist curve 95%", ylims=(-4,0.2), xlims=(-2,2), legend=:bottomright)
vline!([-1], l=(:dash, :red), primary=false)
vline!([0], l=(:dash, :black), primary=false)
hline!([0], l=(:dash, :black), primary=false)
#
plot(reny, imny, lab="Nyquist curve 95%", ylims=(-4,0.2), xlims=(-2,2), legend=:bottomright, points=true)
vline!([-1], l=(:dash, :red), primary=false)
vline!([0], l=(:dash, :black), primary=false)
hline!([0], l=(:dash, :black), primary=false)
#
# ## Time Simulations
# We start by discretizing the system to obtain a discrete-time model.
@unsafe Pd = c2d(G, 0.1)
# We then simulate and plot the results
y,t,x = step(Pd, 20)
errorbarplot(t,y[:], 0.00, layout=3, subplot=1, alpha=0.5)
errorbarplot!(t,y[:], 0.05, subplot=1, alpha=0.5)
errorbarplot!(t,y[:], 0.1, subplot=1, alpha=0.5)
mcplot!(t,y[:], subplot=2, l=(:black, 0.02))
ribbonplot!(t,y[:], subplot=3)
# # System identification
using MonteCarloMeasurements, ControlSystemIdentification, ControlSystemsBase
using Random, LinearAlgebra
# We start by creating a system to use as the subject of identification and some data to use for identification
N = 500 # Number of time steps
t = 1:N
Δt = 1 # Sample time
u = randn(1, N) # A random control input
G = tf(0.8, [1,-0.9], 1) # An interesting system
y = lsim(G,u,t)[1][:]
yn = y + randn(size(y));
# Validation data
uv = randn(1, N)
yv = lsim(G,uv,t)[1][:]
ynv = yv + randn(size(yv));
# Identification parameters
na,nb,nc = 2,1,1
data = iddata(yn,u,Δt)
Gls = arx(data,na,nb,stochastic=true) # Regular least-squares estimation
Gtls = arx(data,na,nb,stochastic=true, estimator=tls) # Total least-squares estimation
Gwtls = arx(data,na,nb,stochastic=true, estimator=wtls_estimator(y,na,nb)) # Weighted Total least-squares estimation
# We now calculate and plot the Bode diagrams for the uncertain transfer functions
scales = (yscale=:log10, xscale=:log10)
w = exp10.(LinRange(-3,log10(π),30))
magG = bodev(G,w)[1]
mag = bodev(Gls,w)[1]
errorbarplot(w,mag,0.01; scales..., layout=3, subplot=1, lab="ls")
# plot(w,mag; scales..., layout=3, subplot=1, lab="ls") # src
plot!(w,magG, subplot=1)
mag = bodev(Gtls,w)[1]
errorbarplot!(w,mag,0.01; scales..., subplot=2, lab="qtls")
# plot!(w,mag; scales..., subplot=2, lab="qtls") # src
plot!(w,magG, subplot=2)
mag = bodev(Gwtls,w)[1]
errorbarplot!(w,mag,0.01; scales..., subplot=3, lab="wtls")
# plot!(w,mag; scales..., subplot=3, lab="wtls") # src
plot!(w,magG, subplot=3)
## bode benchmark =========================================
using MonteCarloMeasurements, BenchmarkTools, Printf, ControlSystemsBase
using Measurements
using ChangePrecision
@changeprecision Float32 begin
w = exp10.(LinRange(-3,log10(π),30))
p = 1. ± 0.1
ζ = 0.3 ± 0.1
ω = 1. ± 0.1
G = tf([p*ω], [1, 2ζ*ω, ω^2])
t1 = @belapsed bode($G,$w)
p = 1.
ζ = 0.3
ω = 1.
G = tf([p*ω], [1., 2ζ*ω, ω^2])
sleep(0.5)
t2 = @belapsed bode($G,$w)
p = Measurements.:(±)(1.0, 0.1)
ζ = Measurements.:(±)(0.3, 0.1)
ω = Measurements.:(±)(1.0, 0.1)
G = tf([p*ω], [1, 2ζ*ω, ω^2])
sleep(0.5)
t3 = @belapsed bode($G,$w)
p = 1. ∓ 0.1
ζ = 0.3 ∓ 0.1
ω = 1. ∓ 0.1
G = tf([p*ω], [1, 2ζ*ω, ω^2])
sleep(0.5)
t4 = @belapsed bode($G,$w)
p,ζ,ω = StaticParticles(sigmapoints([1, 0.3, 1], 0.1^2))
G = tf([p*ω], [1, 2ζ*ω, ω^2])
sleep(0.5)
t5 = @belapsed bode($G,$w)
end
##
@printf("
| Benchmark | Result |
|-----------|--------|
| Time with 2000 particles | %16.4fms |
| Time with regular floating point | %7.4fms |
| Time with Measurements | %17.4fms |
| Time with 100 static part. | %13.4fms |
| Time with static sigmapoints. | %10.4fms |
| 2000×floating point time | %16.4fms |
| Speedup factor vs. Manual | %11.1fx |
| Slowdown factor vs. Measurements | %4.1fx |
| Slowdown static vs. Measurements | %4.1fx |
| Slowdown sigma vs. Measurements | %5.1fx|\n",
1000*t1, 1000*t2, 1000*t3, 1000*t4, 1000*t5, 1000*2000t2, 2000t2/t1, t1/t3, t4/t3, t5/t3) #src
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 1983 | # This script illustrates how to use latin hypercube sampling. In the first example, we produce a sample with a non-diagonal covariance matrix to illustrate that the latin property is lost for all dimensions but the first:
using MonteCarloMeasurements, LatinHypercubeSampling, Test, Plots
ndims = 2
N = 40 # Number of particles
ngen = 2000 # How long to run optimization
X, fit = LHCoptim(N,ndims,ngen)
m, Σ = [1,2], [2 1; 1 4] # Desired mean and covariance
particles = transform_moments(X, m, Σ)
@test mean(particles, dims=1)[:] ≈ m
@test cov(particles) ≈ Σ
p = Particles(particles)
plot(scatter(eachcol(particles)..., title="Sample"), plot(fit, title="Fitness vs. iteration"))
vline!(particles[:,1]) # First dimension is still latin
hline!(particles[:,2]) # Second dimension is not
# If we do the same thing with a diagonal covariance matrix, the latin property is approximately preserved in all dimensions provided that the latin optimizer was run sufficiently long.
m, Σ = [1,2], [2 0; 0 4] # Desired mean and covariance
particles = transform_moments(X, m, Σ)
p = Particles(particles)
plot(scatter(eachcol(particles)..., title="Sample"), plot(fit, title="Fitness vs. iteration"))
vline!(particles[:,1]) # First dimension is still latin
hline!(particles[:,2]) # Second dimension is now approximately latin as well
# We provide a method which absolutely preserves the latin property in all dimensions, but if you use this, the covariance of the sample will be slightly wrong
particles = transform_moments(X, m, Σ, preserve_latin=true)
p = Particles(particles)
plot(scatter(eachcol(particles)..., title="Sample"), plot(fit, title="Fitness vs. iteration"))
vline!(particles[:,1]) # First dimension is still latin
hline!(particles[:,2]) # Second dimension is now exactly latin as well
@test mean(particles, dims=1)[:] ≈ m
@test cov(particles) ≉ Σ # Note the ≉ (not approximately equal)!
# We can also visualize the statistics of the sample
using StatsPlots
corrplot(particles)
#
plot(density(p[1]), density(p[2]))
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 5309 | using MonteCarloMeasurements, Measurements, Plots, BenchmarkTools, OrdinaryDiffEq, PrettyTables, ChangePrecision, LinearAlgebra
# pgfplots()
default(size=(600,400))
color_palette = [
RGB(0.1399999, 0.1399999, 0.4),
RGB(1.0, 0.7075, 0.35),
RGB(0.414999, 1.0, 1.0),
RGB(0.6, 0.21, 0.534999),
RGB(0,0.6,0),
]
function sim((±)::F, tspan, plotfun=plot!, args...; kwargs...) where F
@changeprecision Float32 begin
g = 9.79 ± 0.02; # Gravitational acceleration
L = 1.00 ± 0.01; # Length of the pendulum
u₀ = [0.0 ± 0.0, π / 3.0 ± 0.02] # Initial speed and initial angle
# @show typeof(u₀)
gL = g/L
#Define the problem
function simplependulum(du,u,p,t)
θ = u[1]
dθ = u[2]
du[1] = dθ
du[2] = -gL * sin(θ)
end
prob = ODEProblem(simplependulum, u₀, tspan)
sol = solve(prob, Tsit5(), reltol = 1e-6)
plotfun(sol.t, getindex.(sol.u, 2), args...; kwargs...)
end
end
## Special function needed to construct the parameters as sigmapoints, as they all have to be constructed in one single call
function sigmasim(tspan, plotfun=plot!, args...; kwargs...)
@changeprecision Float32 begin
g,L,u02 = StaticParticles(sigmapoints([9.79, 1.0, pi/3.0], diagm([0.02, 0.01, 0.02].^2)))
u₀ = [0, u02] # Initial speed and initial angle
# @show typeof(u₀)
gL = g/L
#Define the problem
function simplependulum(du,u,p,t)
θ = u[1]
dθ = u[2]
du[1] = dθ
du[2] = -gL * sin(θ)
end
prob = ODEProblem(simplependulum, u₀, tspan)
sol = solve(prob, Tsit5(), reltol = 1e-6)
plotfun(sol.t, getindex.(sol.u, 2), args...; kwargs...)
end
end
##
tspan = (0.0f0, 0.5f0)
plot()
sim(Measurements.:±, tspan, label = "Linear", xlims=(tspan[2]-2,tspan[2]), color=color_palette[4])
sim(MonteCarloMeasurements.:±, tspan, errorbarplot!, 0.8413, label = "MCM", xlims=(tspan[2]-0.5,tspan[2]), l=(:dot,), color=color_palette[2], xlabel="Time [s]", ylabel="\$\\theta\$")
# savefig("/home/fredrikb/mcm_paper/figs/0-2.pdf")
##
tspan = (0.0f0, 200)
plot()
sim(Measurements.:±, tspan, label = "Linear", xlims=(tspan[2]-5,tspan[2]), color=color_palette[4])
sim(MonteCarloMeasurements.:±, tspan, label = "Monte Carlo", xlims=(tspan[2]-5,tspan[2]), l=(:dot,), color=color_palette[2], xlabel="Time [s]", ylabel="\$\\theta\$")
##
# We now integrate over 200 seconds and look at the last 5 seconds. This result may look a bit confusing: the linear uncertainty propagation is very sure about the amplitude at certain points but not at others, whereas the Monte-Carlo approach is completely unsure. Furthermore, the linear approach thinks that the amplitude at some points is actually much higher than the starting amplitude, implying that energy somehow has been added to the system! The picture might become a bit clearer by plotting the individual trajectories of the particles
tspan = (0.0f0, 200)
plot()
sim(Measurements.:±, tspan, label = "Linear", xlims=(tspan[2]-5,tspan[2]), l=(5,), color=color_palette[4])
sim(MonteCarloMeasurements.:∓, tspan, mcplot!, xlims=(tspan[2]-5,tspan[2]), l=(color_palette[2],0.3), xlabel="Time [s]", ylabel="\$\\theta\$", label="MCM")
sigmasim(tspan, mcplot!, xlims=(tspan[2]-5,tspan[2]), l=(color_palette[3],0.9,2), xlabel="Time [s]", ylabel="\$\\theta\$", label="MCM \\Sigma")
# savefig("/home/fredrikb/mcm_paper/figs/mcplot.pdf")
# It now becomes clear that each trajectory has a constant amplitude (although the amplitudes of individual trajectories vary slightly due to the uncertainty in the initial angle), but the phase is all mixed up due to the slightly different frequencies!
# These problems grow with increasing uncertainty and increasing integration time. In fact, the uncertainty reported by Measurements.jl goes to infinity as the integration time increases.
# Of course, the added accuracy from using MonteCarloMeasurements does not come for free, as it costs some additional computation. We have the following timings for integrating the above system for 100 seconds using several different uncertainty representations
##
function naive_mc(tspan)
for i = 1:100
sim(certain, tspan, (args...;kwargs...)->nothing)
end
end
tspan = (0.0f0, 100f0)
certain = (x,y)->x+y*randn()
table = Matrix{Any}(undef,3,6)
t1 = @benchmark sim($certain, $tspan, (args...;kwargs...)->nothing)
t2 = @benchmark sim($Measurements.:±, $tspan, (args...;kwargs...)->nothing) samples=500
t3 = @benchmark sim($MonteCarloMeasurements.:∓, $tspan, (args...;kwargs...)->nothing) samples=500
t4 = @benchmark sigmasim($tspan, (args...;kwargs...)->nothing) samples=500
t5 = @benchmark naive_mc($tspan) samples=500
# table[1,1] = ""
table[1,1] = "Time [ms]"
table[2,1] = "Memory [MiB]"
table[3,1] = "k Allocations"
for (i,t) in enumerate((t1,t2,t3,t4,t5))
table[1,i+1] = time(t)/1000_000
table[2,i+1] = memory(t)/1000_000
table[3,i+1] = allocs(t)/1000
end
# pretty_table(table, ["" "Float32" "Linear" "MCM" "MCM \\Sigma" "Naive MC"], backend=:latex, formatters=ft_printf("%5.1f"))
pretty_table(table, ["" "Float32" "Linear" "MCM" "MCM \\Sigma" "Naive MC"], backend=:text, tf=markdown, formatters=ft_printf("%5.1f"))
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 3080 | # In this script, we will design a PID controller by optimization. The system model has uncertain parameters, and we pay a price not only for poor performance of the closed-loop system, but also for a high variance in the performance. In addition to this, we place a constraint on the 90th percentile of the maximum of the sensitivity function. This way, we will get a doubly robust controller as a result :) To avoid excessive amplification of measurement noise, we penalize noise amplification above a certain frequency.
# We start by defining the system and some initial controller parameters
using MonteCarloMeasurements, Optim, ControlSystems, Plots
using MonteCarloMeasurements: ∓
unsafe_comparisons(true)
p = 1 + 0.1*Particles(100)
ζ = 0.3 + 0.05*Particles(100)
ω = 1 + 0.05*Particles(100)
const P = tf([p*ω], [1, 2ζ*ω, ω^2]) |> ss
const w = exp10.(LinRange(-2,3,100))
params = log.([1,0.1,0.1])
const Msc = 1.2 # Constraint on Ms
# We now define the cost function, which includes the constraint on the maximum sensitivity function
function systems(params::AbstractVector{T}) where T
kp,ki,kd = exp.(params)
C = convert(StateSpace{Continuous, T}, pid(kp=kp,ki=ki,kd=kd)*tf(1, [0.05, 1])^2, balance=false)
G = feedback(P*C) # Closed-loop system
S = 1/(1 + P*C) # Sensitivity function
CS = C*S # Noise amplification
local Gd
try
Gd = c2d(G,0.1) # Discretize the system. This might fail for some parameters, so we catch these cases and return a high value
catch
return T(10000)
end
y,t,_ = step(Gd,15) .|> vec # This is the time-domain simulation
C, G, S, CS, y, t
end
function cost(params::AbstractVector{T}) where T
C, G, S, CS, y, t = systems(params)
Ms = maximum(bode(S, w)[1]) # Maximum of the sensitivity function
q = pquantile(Ms, 0.9)
performance = mean(abs, 1 .- y) # This is our performance measure
robustness = (q > Msc ? 10000(q-Msc) : zero(T)) # This is our robustness constraint
variance = pstd(performance) # This is the price we pay for high variance in the performance
noise = pmean(sum(bode(CS, w[end-30:end])[1]))
100pmean(performance) + robustness + 10variance + 0.002noise
end
# We are now ready to test the cost function.
@time cost(params)
#
res = Optim.optimize(cost, params, NelderMead(), Optim.Options(iterations=1000, show_trace=true, show_every=20));
println("Final cost: ", res.minimum)
# We can now perform the same computations as above to visualize the found controller
fig = plot(layout=2)
for params = (params, res.minimizer)
C, G, S, CS, y, t = systems(params)
mag = bode(S, w)[1][:]
plot!(t,y[:], title="Time response", subplot=1, legend=false)
plot!(w, mag, title="Sensitivity function", xscale=:log10, yscale=:log10, subplot=2, legend=false)
end
hline!([Msc], l=(:black, :dash), subplot=2)
display(fig)
# Other things that could potentially be relevant are probabilistic constraints on the time-domain output, such as requiring that the probability of the step response exceeding 1.5 be < 0.05, etc. A sketch of how this could look is given below.
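# A minimal sketch of how such a probabilistic constraint could enter the cost function
# (hypothetical, not part of the original example; assumes `y` and `t` are obtained from
# `systems(params)` as above):
#
#   overshoot_prob = maximum(i -> @prob(y[i] > 1.5), eachindex(y)) # worst-case probability of exceeding 1.5
#   penalty        = overshoot_prob > 0.05 ? 10000 * (overshoot_prob - 0.05) : zero(overshoot_prob)
#
# and `penalty` would then be added to the value returned by `cost`.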
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 2283 |
# This file produces a figure that shows how particles perform uncertainty propagation, and compares the result to uncertainty propagation through linearization like Measurements.jl does.
using MonteCarloMeasurements, StatsPlots, NNlib, Measurements, KernelDensity
using Measurements: value, uncertainty
default(lab="")
N = 20 # Number of particles
f = x -> σ(12x-6) # Nonlinear function
l = 0 # Left boundary
r = 1 # Right boundary
d = Normal(0.5, 0.15) # The probability density of the input
m = Measurements.:(±)(d.μ, d.σ) # For comparison to Measurements.jl using linear uncertainty propagation
my = f(m) # output measurement
dm = Normal(value(my), uncertainty(my)) # Output density according to Measurements
x = Particles(N,d, permute=false) # Create particles distributed according to d (permute=false keeps them sorted for visualization)
y = f(x).particles # corresponding output particles
x = x.particles # extract vector to plot manually
xr = LinRange(l,r,100) # x values for plotting
noll = zeros(N)
plot(f, l, r, legend=:right, xlims=(l,r), ylims=(l,r), axis=false, grid=false, lab="f(x)", xlabel="Input space", ylabel="Output space")
plot!(x->0.2pdf(d,x),l,r, lab="Input dens.")
# Estimate the true output density using a large sample
kdt = kde(f.(rand(d,100000)), npoints=200, bandwidth=0.08)
plot!(l .+ 0.2kdt.density, kdt.x, lab="True output dens.")
# This is the output density as approximated by linear uncertainty propagation
plot!(l .+ 0.2pdf.(Ref(dm),xr), xr, lab="Linear Gaussian propagation")
# Estimate the output density corresponding to the particles
kd = kde(y, npoints=200, bandwidth=0.08)
plot!(l .+ 0.2kd.density, kd.x, lab="Particle kernel dens. est.", l=:dash)
# Draw helper lines that show how particles are transformed from input space to output space
plot!([x x][1:2:end,:]', [noll y][1:2:end,:]', l=(:black, :arrow, :dash, 0.1))
plot!([x fill(l,N).+0.02][1:2:end,:]', [y y][1:2:end,:]', l=(:black, :arrow, :dash, 0.1))
# Plot the particles
scatter!(x, 0y, lab="Input particles")
scatter!(fill(l,N) .+ 0.02, y, lab="Output particles")
# Draw mean lines, these show how the mean is transformed using linear uncertainty propagation
plot!([d.μ,d.μ], [0,f(d.μ)], l=(:red, :dash, 0.2))
plot!([l,d.μ], [f(d.μ),f(d.μ)], l=(:red, :dash, 0.2))
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 1223 | module MakieExt
using Makie
using MonteCarloMeasurements
Makie.used_attributes(::Type{<:Series}, ::AbstractVector, ::AbstractVector{<:Particles}) = (:N,)
Makie.used_attributes(::Type{<:Series}, ::AbstractVector{<:Tuple{<:Real,<:Particles}}) = (:N,)
Makie.convert_arguments(ct::Type{<:Series}, x::AbstractVector, y::AbstractVector{<:Particles}; N=7) = convert_arguments(ct, x, Matrix(y)[1:min(N, end), :])
Makie.used_attributes(::Type{<:Union{Rangebars,Band}}, ::AbstractVector, ::AbstractVector{<:Particles}) = (:q,)
Makie.used_attributes(::Type{<:Union{Rangebars,Band}}, ::AbstractVector{<:Tuple{<:Real,<:Particles}}) = (:q,)
Makie.convert_arguments(p::Type{<:Union{Hist,Density}}, x::Particles) = convert_arguments(p, Vector(x))
Makie.convert_arguments(ct::Type{<:Union{Rangebars,Band}}, x::AbstractVector, y::AbstractVector{<:Particles}; q=0.16) = convert_arguments(ct, x, pquantile.(y, q), pquantile.(y, 1-q))
Makie.convert_arguments(ct::Type{<:AbstractPlot}, X::AbstractVector{<:Tuple{<:Real,<:Particles}}; kwargs...) = convert_arguments(ct, first.(X), last.(X); kwargs...)
Makie.convert_arguments(ct::PointBased, x::AbstractVector{<:Real}, y::AbstractVector{<:Particles}) = convert_arguments(ct, x, pmean.(y))
end
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 7916 | """
This package facilitates working with probability distributions by means of Monte-Carlo methods, in a way that allows for propagation of probability distributions through functions. This is useful for, e.g., nonlinear [uncertainty propagation](https://en.wikipedia.org/wiki/Propagation_of_uncertainty). A variable or parameter might be associated with uncertainty if it is measured or otherwise estimated from data. We provide two core types to represent probability distributions: `Particles` and `StaticParticles`, both `<: Real`. (The name "Particles" comes from the [particle-filtering](https://en.wikipedia.org/wiki/Particle_filter) literature.) These types all form a Monte-Carlo approximation of the distribution of a floating point number, i.e., the distribution is represented by samples/particles. **Correlated quantities** are handled as well, see [multivariate particles](https://baggepinnen.github.io/MonteCarloMeasurements.jl/stable/#Multivariate-particles-1) below.
A number of type `Particles` behaves just as any other `Number` while partaking in calculations. Particles also behave like a distribution, so after a calculation, an approximation to the **complete distribution** of the output is captured and represented by the output particles. `mean`, `std` etc. can be extracted from the particles using the corresponding functions `pmean` and `pstd`. `Particles` also interact with [Distributions.jl](https://github.com/JuliaStats/Distributions.jl), so that you can call, e.g., `Normal(p)` and get back a `Normal` type from distributions or `fit(Gamma, p)` to get a `Gamma` distribution. Particles can also be asked for `maximum/minimum`, `quantile` etc. using functions with a prefix `p`, i.e., `pmaximum`. If particles are plotted with `plot(p)`, a histogram is displayed. This requires Plots.jl. A kernel-density estimate can be obtained by `density(p)` if StatsPlots.jl is loaded.
## Quick start
```julia
julia> using MonteCarloMeasurements, Plots
julia> a = π ± 0.1 # Construct Gaussian uncertain parameters using ± (\\pm)
Particles{Float64,2000}
3.14159 ± 0.1
julia> b = 2 ∓ 0.1 # ∓ (\\mp) creates StaticParticles (with StaticArrays)
StaticParticles{Float64,100}
2.0 ± 0.0999
julia> pstd(a) # Ask about statistical properties
0.09999231528930486
julia> sin(a) # Use them like any real number
Particles{Float64,2000}
1.2168e-16 ± 0.0995
julia> plot(a) # Plot them
julia> b = sin.(1:0.1:5) .± 0.1; # Create multivariate uncertain numbers
julia> plot(b) # Vectors of particles can be plotted
julia> using Distributions
julia> c = Particles(500, Poisson(3.)) # Create uncertain numbers distributed according to a given distribution
Particles{Int64,500}
2.882 ± 1.7
```
For further help, see the [documentation](https://baggepinnen.github.io/MonteCarloMeasurements.jl/stable), the [examples folder](https://github.com/baggepinnen/MonteCarloMeasurements.jl/tree/master/examples) or the [arXiv paper](https://arxiv.org/abs/2001.07625).
"""
module MonteCarloMeasurements
using LinearAlgebra, Statistics, Random, StaticArrays, RecipesBase, MacroTools, SLEEFPirates, GenericSchur
using Distributed: pmap
import Base: add_sum
using Distributions, StatsBase, Requires
using ForwardDiff
const DEFAULT_NUM_PARTICLES = 2000
const DEFAULT_STATIC_NUM_PARTICLES = 100
function pmean end
"""
The function used to reduce particles to a number for comparison. Defaults to `pmean`. Change using `set_comparison_function`.
"""
const COMPARISON_FUNCTION = Ref{Function}(pmean)
const COMPARISON_MODE = Ref(:safe)
"""
unsafe_comparisons(onoff=true; verbose=true)
Toggle the use of a comparison function without warning. By default `pmean` is used to reduce particles to a floating point number for comparisons. This function can be changed, for example: `set_comparison_function(median)`
    unsafe_comparisons(mode=:reduction; verbose=true)
One can also specify a comparison mode; `mode` can take the values `:safe, :montecarlo, :reduction`. `:safe` is the same as calling `unsafe_comparisons(false)` and `:reduction` corresponds to `true`.
"""
function unsafe_comparisons(mode=true; verbose=true)
mode == false && (mode = :safe)
mode == true && (mode = :reduction)
COMPARISON_MODE[] = mode
if mode != :safe && verbose
if mode === :reduction
@info "Unsafe comparisons using the function `$(COMPARISON_FUNCTION[])` has been enabled globally. Use `@unsafe` to enable in a local expression only or `unsafe_comparisons(false)` to turn off unsafe comparisons"
elseif mode === :montecarlo
@info "Comparisons using the monte carlo has been enabled globally. Call `unsafe_comparisons(false)` to turn off unsafe comparisons"
end
end
mode ∉ (:safe, :montecarlo, :reduction) && error("Got unsupported comparison mode")
end
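# Hypothetical usage sketch (comment added for illustration, not part of the original source):
#   unsafe_comparisons(true)         # reduce particles with the comparison function (default `pmean`) before comparing
#   unsafe_comparisons(:montecarlo)  # use the :montecarlo comparison mode
#   unsafe_comparisons(false)        # restore safe comparisons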
"""
set_comparison_function(f)
Change the function used to reduce particles to a number for comparison operators.
Toggle the use of a comparison function without warning using the function `unsafe_comparisons`.
"""
function set_comparison_function(f)
if f in (mean, median, maximum, minimum)
@warn "This comparison function ($(f)) is probably not the right choice, consider if you want the particle version (p$(f)) instead."
end
COMPARISON_FUNCTION[] = f
end
"""
@unsafe expression
Activates unsafe comparisons for the provided expression only. The expression is surrounded by a try/finally block so that the previous comparison mode is robustly restored in case of an exception.
"""
macro unsafe(ex)
ex2 = if @capture(ex, assigned_vars__ = y_)
if length(assigned_vars) == 1
esc(assigned_vars[1])
else
esc.(assigned_vars[1].args)
end
else
:(res)
end
quote
previous_state = COMPARISON_MODE[]
unsafe_comparisons(true, verbose=false)
local res
try
res = ($(esc(ex)))
finally
unsafe_comparisons(previous_state, verbose=false)
end
$ex2 = res
end
end
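# Hypothetical usage sketch (comment added for illustration, not part of the original source):
#   p = 1 ± 0.1
#   y = @unsafe p > 0.9 ? 2p : p   # unsafe comparisons are active only inside this expression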
export ±, ∓, .., ⊠, ⊞, AbstractParticles,Particles,StaticParticles, MvParticles, sigmapoints, transform_moments, ≲,≳, systematic_sample, ess, outer_product, meanstd, meanvar, register_primitive, register_primitive_multi, register_primitive_single, ℝⁿ2ℝⁿ_function, ℝⁿ2ℂⁿ_function, ℂ2ℂ_function, ℂ2ℂ_function!, bootstrap, sqrt!, exp!, sin!, cos!, wasserstein, with_nominal, nominal, nparticles, particleeltype
# Plot exports
export errorbarplot, mcplot, ribbonplot
# Statistics reexport
export mean, std, cov, var, quantile, median
export pmean, pstd, pcov, pcor, pvar, pquantile, pmedian, pmiddle, piterate, pextrema, pminimum, pmaximum
# Distributions reexport
export Normal, MvNormal, Cauchy, Beta, Exponential, Gamma, Laplace, Uniform, fit, logpdf
export unsafe_comparisons, @unsafe, set_comparison_function
export bymap, bypmap, @bymap, @bypmap, @prob, Workspace, with_workspace, has_particles, mean_object
include("types.jl")
include("register_primitive.jl")
include("sampling.jl")
include("particles.jl")
include("distances.jl")
include("complex.jl")
include("sigmapoints.jl")
include("resampling.jl")
include("bymap.jl")
include("deconstruct.jl")
include("diff.jl")
include("plotting.jl")
include("optimize.jl")
include("sleefpirates.jl")
include("nominal.jl")
include("forwarddiff.jl")
# This is defined here so that @bymap is loaded
LinearAlgebra.norm2(p::AbstractArray{<:AbstractParticles}) = bymap(LinearAlgebra.norm2,p)
Base.:\(x::AbstractVecOrMat{<:AbstractParticles}, y::AbstractVecOrMat{<:AbstractParticles}) = bymap(\, x, y)
Base.:\(x::Diagonal{<:AbstractParticles}, y::Vector{<:AbstractParticles}) = bymap(\, x, y) # required for ambiguity
function __init__()
@require Measurements="eff96d63-e80a-5855-80a2-b1b0885c5ab7" include("measurements.jl")
@require Unitful = "1986cc42-f94f-5a68-af5c-568840ba703d" include("unitful.jl")
end
end
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 5307 | import Base.Cartesian.@ntuple
nparticles(p) = length(p)
nparticles(p::Type) = 1
nparticles(p::Type{<:AbstractParticles{T,N}}) where {T,N} = N
nparticles(p::AbstractParticles{T,N}) where {T,N} = N
nparticles(p::ParticleArray) = nparticles(eltype(p))
nparticles(p::Type{<:ParticleArray}) = nparticles(eltype(p))
particletype(p::AbstractParticles) = typeof(p)
particletype(::Type{P}) where P <: AbstractParticles = P
particletype(p::AbstractArray{<:AbstractParticles}) = eltype(p)
particleeltype(::AbstractParticles{T,N}) where {T,N} = T
particleeltype(::AbstractArray{<:AbstractParticles{T,N}}) where {T,N} = T
"""
    vecindex(p, i)

Index into the particle dimension: return particle `i` of `p`. Plain numbers are returned as-is, while arrays of particles and named tuples are indexed recursively.
"""
vecindex(p::Number,i) = p
vecindex(p,i) = getindex(p,i)
vecindex(p::AbstractParticles,i) = getindex(p.particles,i)
vecindex(p::ParticleArray,i) = vecindex.(p,i)
vecindex(p::NamedTuple,i) = (; Pair.(keys(p), ntuple(j->arggetter(i,p[j]), fieldcount(typeof(p))))...)
function indexof_particles(args)
inds = findall(a-> a <: SomeKindOfParticles, args)
inds === nothing && throw(ArgumentError("At least one argument should be <: AbstractParticles. If particles appear nested as fields inside an argument, see `with_workspace` and `Workspace`"))
all(nparticles(a) == nparticles(args[inds[1]]) for a in args[inds]) || throw(ArgumentError("All p::Particles must have the same number of particles."))
(inds...,)
# TODO: test all same number of particles
end
function arggetter(i,a::Union{SomeKindOfParticles, NamedTuple})
vecindex(a,i)
end
arggetter(i,a) = a
"""
@bymap f(p, args...)
Call `f` with particles or vectors of particles by using `map`. This can be utilized if registering `f` using [`register_primitive`](@ref) fails. See also [`Workspace`](@ref) if `bymap` fails.
"""
macro bymap(ex)
@capture(ex, f_(args__)) || error("expected a function call")
quote
bymap($(esc(f)),$(esc.(args)...))
end
end
"""
bymap(f, args...)
Uncertainty propagation using the `map` function.
Call `f` with particles or vectors of particles by using `map`. This can be utilized if registering `f` using [`register_primitive`](@ref) fails. See also [`Workspace`](@ref) if `bymap` fails.
"""
function bymap(f::F, args...) where F
inds = indexof_particles(typeof.(args))
T,N,PT = particletypetuple(args[first(inds)])
individuals = map(1:N) do i
argsi = ntuple(j->arggetter(i,args[j]), length(args))
f(argsi...)
end
PTNT = PT{eltype(eltype(individuals)),N}
if (eltype(individuals) <: AbstractArray{TT,0} where TT) || eltype(individuals) <: Number
PTNT(individuals)
elseif eltype(individuals) <: AbstractArray{TT,1} where TT
PTNT(copy(reduce(hcat,individuals)'))
elseif eltype(individuals) <: AbstractArray{TT,2} where TT
# @show PT{eltype(individuals),N}
reshape(PTNT(copy(reduce(hcat,vec.(individuals))')), size(individuals[1],1),size(individuals[1],2))::Matrix{PTNT}
else
error("Output with dimension >2 is currently not supported by `bymap`. Consider if `ℝⁿ2ℝⁿ_function($(f), $(args...))` works for your use case.")
end
end
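# Hypothetical usage sketch (comment added for illustration, not part of the original source):
#   p = 1 ± 0.1
#   bymap(x -> [x, x^2], p)     # f is mapped over the particles; returns a Vector of Particles
#   @bymap clamp(p, 0.9, 1.1)   # macro form of the same mechanism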
"""
Distributed uncertainty propagation using the `pmap` function. See [`bymap`](@ref) for more details.
"""
function bypmap(f::F, args...) where F
inds = indexof_particles(typeof.(args))
T,N,PT = particletypetuple(args[first(inds)])
individuals = pmap(1:N) do i
argsi = ntuple(j->arggetter(i,args[j]), length(args))
f(argsi...)
end
PTNT = PT{eltype(eltype(individuals)),N}
if (eltype(individuals) <: AbstractArray{TT,0} where TT) || eltype(individuals) <: Number
PTNT(individuals)
elseif eltype(individuals) <: AbstractArray{TT,1} where TT
PTNT(copy(reduce(hcat,individuals)'))
elseif eltype(individuals) <: AbstractArray{TT,2} where TT
# @show PT{eltype(individuals),N}
reshape(PTNT(copy(reduce(hcat,vec.(individuals))')), size(individuals[1],1),size(individuals[1],2))::Matrix{PTNT}
else
error("Output with dimension >2 is currently not supported by `bymap`. Consider if `ℝⁿ2ℝⁿ_function($(f), $(args...))` works for your use case.")
end
end
"""
@bypmap f(p, args...)
Call `f` with particles or vectors of particles by using parallel `pmap`. This can be utilized if registering `f` using [`register_primitive`](@ref) fails. See also [`Workspace`](@ref) if `bymap` fails.
"""
macro bypmap(ex)
@capture(ex, f_(args__)) || error("expected a function call")
quote
bypmap($(esc(f)),$(esc.(args)...))
end
end
"""
@prob a < b
Calculate the probability that an event on any of the forms `a < b, a > b, a <= b, a >= b` occurs, where `a` and/or `b` are of type `AbstractParticles`.
"""
macro prob(ex)
ex.head == :call && ex.args[1] ∈ (:<,:>,:<=,:>=) || error("Expected an expression on any of the forms `a < b, a > b, a <= b, a >= b`")
op = ex.args[1]
a = ex.args[2]
b = ex.args[3]
quote
mean($op.(MonteCarloMeasurements.maybe_particles($(esc(a))), MonteCarloMeasurements.maybe_particles($(esc(b)))))
end
end
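# Hypothetical usage sketch (comment added for illustration, not part of the original source):
#   p = 0 ± 1
#   @prob p > 1        # ≈ 0.16 for a standard normal
#   @prob 2p < p + 1   # composite expressions on either side work too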
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 6566 | for PT in (:Particles, :StaticParticles)
@eval begin
Base.promote_rule(::Type{Complex{S}}, ::Type{$PT{T,N}}) where {S<:Real,T<:Real,N} = Complex{$PT{promote_type(S,T),N}}
end
for ff in (^,)
f = nameof(ff)
@eval Base.$f(z::Complex{$PT{T,N}}, x::Real) where {T,N} = ℂ2ℂ_function($f, z, x)
@eval Base.$f(z::Real, x::Complex{$PT{T,N}}) where {T,N} = ℂ2ℂ_function($f, z, x)
@eval Base.$f(z::Complex{$PT{T,N}}, x::Complex{$PT{T,N}}) where {T,N} = ℂ2ℂ_function($f, z, x)
@eval Base.$f(z::Complex{$PT{T,N}}, x::Int) where {T,N} = ℂ2ℂ_function($f, z, x)
@eval Base.$f(z::Int, x::Complex{$PT{T,N}}) where {T,N} = ℂ2ℂ_function($f, z, x)
end
end
@inline maybe_complex_particles(x,i) = x
@inline maybe_complex_particles(p::AbstractParticles,i) = complex(real(p.particles[i]), imag(p.particles[i]))
@inline maybe_complex_particles(p::Complex{<:AbstractParticles},i) = complex(p.re.particles[i], p.im.particles[i])
"""
ℂ2ℂ_function(f::Function, z::Complex{<:AbstractParticles})
Helper function for uncertainty propagation through complex-valued functions of complex arguments.
Applies `f : ℂ → ℂ` to `z::Complex{<:AbstractParticles}`.
"""
function ℂ2ℂ_function(f::F, z::Complex{T}) where {F<:Union{Function,DataType},T<:AbstractParticles}
s = map(1:length(z.re.particles)) do i
@inbounds f(maybe_complex_particles(z, i))
end
complex(T(real.(s)), T(imag.(s)))
end
function ℂ2ℂ_function(f::F, z::Union{Complex{T},T}, a::R) where {F<:Union{Function,DataType},T<:AbstractParticles,R<:Real}
s = map(1:length(z.re.particles)) do i
@inbounds f(maybe_complex_particles(z, i), vecindex(a, i))
end
complex(T(real.(s)), T(imag.(s)))
end
function ℂ2ℂ_function(f::F, z::R, a::Complex{S}) where {F<:Union{Function,DataType},S<:AbstractParticles,R<:Real}
s = map(1:length(a.re.particles)) do i
@inbounds f(vecindex(z, i), maybe_complex_particles(a, i))
end
complex(S(real.(s)), S(imag.(s)))
end
function ℂ2ℂ_function(f::F, z::Complex{T}, a::Complex{S}) where {F<:Union{Function,DataType},T<:AbstractParticles,S<:AbstractParticles}
s = map(1:length(z.re.particles)) do i
@inbounds f(maybe_complex_particles(z, i), maybe_complex_particles(a, i))
end
complex(T(real.(s)), T(imag.(s)))
end
# function ℂ2ℂ_function(f::F, z::Complex{T}, a::Complex{S}) where {F<:Union{Function,DataType},T<:AbstractParticles,S<:AbstractParticles}
# out = deepcopy(z)
# rp, ip = out.re.particles, out.im.particles
# @inbounds for i = 1:length(z.re.particles)
# res = f(maybe_complex_particles(z, i), maybe_complex_particles(a, i))
# rp[i] = res.re
# ip[i] = res.im
# end
# out
# end
function ℂ2ℂ_function!(f::F, s, z::Complex{T}) where {F,T<:AbstractParticles}
map!(s, 1:length(z.re.particles)) do i
@inbounds f(maybe_complex_particles(z, i))
end
complex(T(real.(s)), T(imag.(s)))
end
for ff in (sqrt, exp, exp10, log, log10, sin, cos, tan)
f = nameof(ff)
@eval Base.$f(z::Complex{<: AbstractParticles}) = ℂ2ℂ_function($f, z)
@eval $(Symbol(f,:!))(s, z::Complex{<: AbstractParticles}) = ℂ2ℂ_function!($f, s, z)
end
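# Hypothetical usage sketch (comment added for illustration, not part of the original source):
#   z = complex(1 ± 0.1, 0.5 ± 0.05)
#   sqrt(z)                # propagates through the functions registered above
#   ℂ2ℂ_function(sin, z)   # the helper can also be called directly with any f : ℂ → ℂ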
Base.isinf(p::Complex{<: AbstractParticles}) = isinf(real(p)) || isinf(imag(p))
Base.isfinite(p::Complex{<: AbstractParticles}) = isfinite(real(p)) && isfinite(imag(p))
function Base.:(/)(a::Union{T, Complex{T}}, b::Complex{T}) where T<:AbstractParticles
ℂ2ℂ_function(/, a, b)
end
function Base.FastMath.div_fast(a::Union{T, Complex{T}}, b::Complex{T}) where T<:AbstractParticles
ℂ2ℂ_function(Base.FastMath.div_fast, a, b)
end
for f in (:pmean, :pmaximum, :pminimum, :psum, :pstd, :pcov)
@eval $f(p::Complex{<: AbstractParticles}) = Complex($f(p.re), $f(p.im))
end
function switch_representation(d::Complex{<:V}) where {V<:AbstractParticles}
MonteCarloMeasurements.nakedtypeof(V)(complex.(d.re.particles, d.im.particles))
end
function complex_array(R::AbstractArray{Complex{V}}) where {V<:AbstractParticles}
R = switch_representation.(R)
permutedims(reinterpret(reshape, Float64, vec(R)), dims=())
end
function ℂⁿ2ℂⁿ_function(f::F, R::Matrix{<:Complex{<:AbstractParticles{T, N}}}) where {F, T, N}
E = similar(R)
for i in eachindex(E)
E[i] = Complex(Particles(zeros(N)), Particles(zeros(N)))
end
r = zeros(Complex{T}, size(R)...)
for n in 1:N
for j in eachindex(R)
r[j] = Complex(R[j].re.particles[n], R[j].im.particles[n])
end
e = f(r)
for i in eachindex(e)
E[i].re.particles[n] = e[i].re
E[i].im.particles[n] = e[i].im
end
end
E
end
Base.exp(R::Matrix{<:Complex{<:AbstractParticles}}) = ℂⁿ2ℂⁿ_function(exp, R)
LinearAlgebra.exp!(R::Matrix{<:Complex{<:AbstractParticles}}) = ℂⁿ2ℂⁿ_function(LinearAlgebra.exp!, R)
Base.log(R::Matrix{<:Complex{<:AbstractParticles}}) = ℂⁿ2ℂⁿ_function(log, R)
function ℂⁿ2ℂ_function(f::F, D::Matrix{Complex{PT}}) where {F, PT <: AbstractParticles}
D0 = similar(D, ComplexF64)
parts = map(1:nparticles(D[1].re)) do i
for j in eachindex(D0)
D0[j] = Complex(D[j].re.particles[i], D[j].im.particles[i])
end
f(D0)
end
# PT = nakedtypeof(P)
Complex(PT(getfield.(parts, :re)), PT(getfield.(parts, :im)))
end
LinearAlgebra.det(R::Matrix{<:Complex{<:AbstractParticles}}) = ℂⁿ2ℂ_function(det, R)
function LinearAlgebra.eigvals(R::Matrix{<:Complex{<:AbstractParticles{T, N}}}; kwargs...) where {T, N}
E = Vector{Complex{Particles{T,N}}}(undef, size(R,1))
for i in eachindex(E)
E[i] = Complex(Particles(zeros(N)), Particles(zeros(N)))
end
r = zeros(Complex{T}, size(R)...)
for n in 1:N
for j in eachindex(R)
r[j] = Complex(R[j].re.particles[n], R[j].im.particles[n])
end
e = eigvals!(r; kwargs...)
for i in eachindex(e)
E[i].re.particles[n] = e[i].re
E[i].im.particles[n] = e[i].im
end
end
E
end
function LinearAlgebra.svdvals(R::Matrix{<:Complex{<:AbstractParticles{T, N}}}; kwargs...) where {T, N}
E = Vector{Particles{T,N}}(undef, size(R,1))
for i in eachindex(E)
E[i] = Particles(zeros(N))
end
r = zeros(Complex{T}, size(R)...)
for n in 1:N
for j in eachindex(R)
r[j] = Complex(R[j].re.particles[n], R[j].im.particles[n])
end
e = svdvals!(r; kwargs...)
for i in eachindex(e)
E[i].particles[n] = e[i]
end
end
E
end | MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 17506 | """
has_particles(P)
Determine whether or not the object `P` has some kind of particles inside it. This function examines fields of `P` recursively and looks inside arrays etc.
"""
function has_particles(P)
P isa AbstractParticles && (return true)
if P isa AbstractArray
length(P) < 1 && (return eltype(P) <: AbstractParticles)
return has_particles(P[1])
end
any(fieldnames(typeof(P))) do n
fp = getfield(P,n)
if fp isa Union{AbstractArray, Tuple}
length(fp) < 1 && (return eltype(fp) <: AbstractParticles)
return has_particles(fp[1]) # Particles can occur inside arrays or tuples
else
return has_particles(fp)
end
end
end
"""
has_mutable_particles(P)
Similar to `has_particles`, but only returns true if the found particles are mutable, i.e., are not `StaticParticles`
"""
function has_mutable_particles(P)
P isa Particles && (return true)
P isa StaticParticles && (return false)
P isa AbstractArray && (return has_mutable_particles(P[1]))
all(fieldnames(typeof(P))) do n
fp = getfield(P,n)
if fp isa Union{AbstractArray, Tuple}
length(fp) < 1 && (return eltype(fp) <: Particles)
return has_mutable_particles(fp[1])
else
return has_mutable_particles(fp)
end
end
end
"""
nakedtypeof(x)
Returns the type of `x` with type parameters removed. Uses internals and should ideally not be used at all. Do not use inside a generated function.
"""
nakedtypeof(x::Type) = x.name.wrapper
nakedtypeof(x) = nakedtypeof(typeof(x))
"""
build_mutable_container(P)
Recursively visits all fields of `P` and replaces all instances of `StaticParticles` with `Particles`
"""
function build_mutable_container(P)
has_mutable_particles(P) && (return P)
replace_particles(P, replacer=P->Particles(Vector(P.particles)))
end
"""
make_scalar(P)
Replaces all fields of `P` that are particles with `Particles(1)`
"""
function make_scalar(P)
replace_particles(P, replacer=P->Particles([pmean(P)]))
end
"""
restore_scalar(P, N)
Replaces all fields of `P` that are `Particles(1)` with `Particles(N)`
"""
function restore_scalar(P, N)
replace_particles(P, replacer = P->Particles(N))
end
"""
make_static(P)
Replaces all mutable particles inside `P` with `StaticParticles`.
"""
function make_static(P)
!has_mutable_particles(P) && (return P)
replace_particles(P, replacer = P->StaticParticles(P.particles))
end
"""
build_container(P)
Recursively visits all fields of `P` and replaces all instances of `AbstractParticles{T,N}` with `::T`
"""
build_container(P) = replace_particles(P)
"""
mean_object(x)
Returns an object similar to `x`, but where all internal instances of `Particles` are replaced with their mean. The generalization of this function is `replace_particles`.
"""
mean_object(p::AbstractParticles) = pmean(p)
mean_object(p::AbstractArray{<:AbstractParticles}) = pmean.(p)
mean_object(P) = replace_particles(P; replacer = P->pmean(P))
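# Hypothetical usage sketch (comment added for illustration, not part of the original source):
#   struct Pendulum; g; L; end               # user type with uncertain fields
#   P = Pendulum(9.82 ± 0.01, 1.0 ± 0.01)
#   mean_object(P)                           # Pendulum with plain Float64 fields (the means)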
"""
replace_particles(x; condition=P->P isa AbstractParticles,replacer = P->vecindex(P, 1))
This function recursively scans through the structure `x`; every time a field that matches `condition` is found, `replacer` is called on that field and the result is used in its place. See the function `mean_object`, which uses this function to replace all instances of `Particles` with their mean.
"""
function replace_particles(P; condition::F1=P->P isa AbstractParticles,replacer::F2 = P->vecindex(P, 1)) where {F1,F2}
# @show typeof(P)
condition(P) && (return replacer(P))
has_particles(P) || (return P) # No need to carry on
if P isa AbstractArray # Special handling for arrays
return map(P->replace_particles(P;condition,replacer), P)
end
P isa Complex && condition(real(P)) && (return complex(replacer(real(P)), replacer(imag(P))))
P isa Number && (return P)
fields = map(fieldnames(typeof(P))) do n
f = getfield(P,n)
has_particles(f) || (return f)
# @show typeof(f), n
replace_particles(f; condition,replacer)
end
T = nakedtypeof(P)
try
if T <: NamedTuple # Special case required for NamedTuple
return (;Pair.(keys(P), fields)...) # The semicolon is required to create named tuple
else
return T(fields...)
end
catch e
if has_mutable_particles(P)
@error("Failed to create a `$T` by calling it with its fields in order. For this to work, `$T` must have a constructor that accepts all fields in the order they appear in the struct and accept that the fields that contained particles are replaced by 0. Try defining a meaningful constructor that accepts arguments with the type signature \n`$(T)$(typeof.(fields))`\nThe error thrown by `$T` was ")
else
mutable_fields = build_mutable_container.(fields)
@error("Failed to create a `$T` by calling it with its fields in order. For this to work, `$T` must have a constructor that accepts all fields in the order they appear in the struct and accept that the fields that contained particles are replaced by 0. Try defining a meaningful constructor that accepts arguments with the following two type signatures \n`$(T)$(typeof.(fields))`\n`$(T)$(typeof.(mutable_fields))`\nThe error thrown by `$T` was ")
end
rethrow(e)
end
end
"""
particletypetuple(p::AbstractParticles{T,N}) = (T,N,ParticleType)
"""
particletypetuple(p::Particles{T,N}) where {T,N} = (T,N,Particles)
particletypetuple(::Type{Particles{T,N}}) where {T,N} = (T,N,Particles)
particletypetuple(p::StaticParticles{T,N}) where {T,N} = (T,N,StaticParticles)
particletypetuple(::Type{StaticParticles{T,N}}) where {T,N} = (T,N,StaticParticles)
particletypetuple(a::AbstractArray) = particletypetuple(eltype(a))
"""
particle_paths(P)
Figure out all paths down through fields of `P` that lead to an instance of `<: AbstractParticles`. The returned structure is a list where each list element is a tuple. The tuple looks like this: (path, particletypetuple, particlenumber)
`path` in turn looks like this: (:fieldname, fieldtype, size)
"""
function particle_paths(P, allpaths=[], path=[])
T = typeof(P)
if T <: AbstractParticles
push!(allpaths, (path,particletypetuple(T)...))
return
end
if T <: AbstractArray
particle_paths(P[1], allpaths, [path; (:input, T, size(P))])
end
if T <: Tuple
particle_paths(P[1], allpaths, [path; (:input, T, length(P))])
end
for n in fieldnames(T)
fp = getfield(P,n)
FT = typeof(fp)
if FT <: AbstractArray
particle_paths(fp[1], allpaths, [path; (n, FT, size(fp))])
elseif FT <: Tuple
particle_paths(fp[1], allpaths, [path; (n, FT, length(fp))])
else
particle_paths(fp, allpaths, [path; (n, FT, ())])
end
end
ntuple(i->allpaths[i], length(allpaths))
end
"""
vecpartind2vec!(v, pv, j)
Extract particle `j` into vector `v`
# Arguments:
- `v`: vector
- `pv`: vector of `Particles`
- `j`: index
"""
function vecpartind2vec!(v, pv, j)
for i in eachindex(v)
v[i] = vecindex(pv[i], j)
end
end
"""
vec2vecpartind!(pv, v, j)
Extract particle `j` from vector `v` into particles
# Arguments:
- `v`: vector
- `pv`: vector of `Particles`
- `j`: index
"""
function vec2vecpartind!(pv, v, j)
for i in eachindex(v)
pv[i].particles[j] = v[i]
end
end
"""
s1 = get_buffer_setter(paths)
Returns a function that is to be used to update the work buffer inside `Workspace`.
This function is `@eval`ed and can cause world-age problems unless called with `invokelatest`.
"""
function get_buffer_setter(paths)
setbufex = map(paths) do p # for each encountered particle
getbufex = :(input)
setbufex = :(simple_input)
for (i,(fn,ft,fs)) in enumerate(p[1][1:end-1]) # p[1] is a tuple vector where the first element in each tuple is the fieldname
if ft <: AbstractArray
# Here we should recursively branch down into all the elements, but this seems very complicated so we'll just access element 1 for now
getbufex = :($(getbufex).$(fn)[1])
setbufex = :($(setbufex).$(fn)[1])
else # It was no array, regular field
getbufex = :($(getbufex).$(fn))
setbufex = :($(setbufex).$(fn))
end
end
(fn,ft,fs) = p[1][end] # The last element, we've reached the particles
if ft <: AbstractArray
setbufex = :(vecpartind2vec!($(setbufex).$(fn), $(getbufex).$(fn), partind))
# getbufex = :(getindex.($(getbufex).$(fn), partind))
else # It was no array, regular field
getbufex = :($(getbufex).$(fn)[partind])
setbufex = :($(getbufex).$(fn)[partind] = $getbufex)
end
setbufex = MacroTools.postwalk(setbufex) do x
@capture(x, y_.input) && (return y)
x
end
setbufex
end
# setbufex,getbufex = getindex.(setbufex, 1),getindex.(getbufex, 2)
setbufex = Expr(:block, setbufex...)
# getbufex = Expr(:block, getbufex...)
@eval setbuffun = (input,simple_input,partind)-> $setbufex
setbuffun
end
"""
get_result_setter(result)
See `get_buffer_setter`.
"""
function get_result_setter(result)
paths = particle_paths(result)
setresex = map(paths) do p # for each encountered particle
getresex = :(result)
setresex = :(simple_result)
for (fn,ft,fs) in p[1][1:end-1] # p[1] is a tuple vector where the first element in each tuple is the fieldname
if ft <: Union{AbstractArray, Tuple}
# Here we should recursively branch down into all the elements, but this seems very complicated so we'll just access element 1 for now
getresex = :($(getresex).$(fn)[1])
setresex = :($(setresex).$(fn)[1])
else # It was no array, regular field
getresex = :($(getresex).$(fn))
setresex = :($(setresex).$(fn))
end
end
(fn,ft,fs) = p[1][end] # The last element, we've reached the particles
if ft <: AbstractArray
setresex = :(vec2vecpartind!($(getresex).$(fn), $(setresex).$(fn), partind))
# getresex = :(getindex.($(getresex).$(fn), partind))
else # It was no array, regular field
getresex = :($(getresex).$(fn)[partind])
setresex = :($(getresex).$(fn)[partind] = $getbufex)
end
setresex = MacroTools.postwalk(setresex) do x
@capture(x, y_.input) && (return y)
x
end
setresex
end
setresex = Expr(:block, setresex...)
@eval setresfun = (result,simple_result,partind)-> $setresex
setresfun
end
##
# We create a two-stage process with the outer function `withbuffer` and an inner macro with the same name. The reason is that the function generates an expression at *runtime* and this should ideally be compiled into the body of the function without a runtime call to eval. The macro allows us to do this
"""
struct Workspace{T1, T2, T3, T4, T5, T6}
# Arguments:
- `simple_input`: Input object `f` will be called with, does not contain any particles
- `simple_result`: Simple output from `f` without particles
- `result`: Complete output of `f` including particles
- `buffersetter`: Helper function to shift data between objects
- `resultsetter`: Helper function to shift data between objects
- `f`: Function to call
- `N`: Number of particles
"""
struct Workspace{T1,T2,T3,T4,T5,T6}
simple_input::T1
simple_result::T2
result::T3
buffersetter::T4
resultsetter::T5
f::T6
N::Int
end
"""
Workspace(f, input)
Create a `Workspace` object for inputs of type `typeof(input)`. Useful if `input` is a structure with fields of type `<: AbstractParticles` (can be deeply nested). See also `with_workspace`.
"""
function Workspace(f,input)
paths = particle_paths(input)
buffersetter = get_buffer_setter(paths)
@assert all(n == paths[1][3] for n in getindex.(paths,3))
simple_input = build_container(input)
N = paths[1][3]
Base.invokelatest(buffersetter,input,simple_input,1)
simple_result = f(simple_input) # We first use index 1 to peek at the result
result = @unsafe restore_scalar(build_mutable_container(f(make_scalar(input))), N) # Heuristic, see what the result is if called with particles and unsafe_comparisons. TODO: If the reason the workspace approach is used is that the function f fails for a different reason than comparisons, this will fail here. Maybe Particles{1} can act as constant and be propagated through
resultsetter = get_result_setter(result)
Workspace(simple_input,simple_result,result,buffersetter, resultsetter,f,N)
end
"""
with_workspace(f,P)
In some cases, defining a primitive function which particles are to be propagated through is not possible, but allowing unsafe comparisons is not acceptable. One such case is functions that internally calculate eigenvalues of uncertain matrices. The eigenvalue calculation makes use of comparison operators. If the uncertainty is large, eigenvalues might change place in the sorted list of returned eigenvalues, completely ruining downstream computations. For this we recommend, in order of preference
1. Use `@bymap` detailed [in the documentation](https://github.com/baggepinnen/MonteCarloMeasurements.jl#monte-carlo-simulation-by-mappmap). Applicable if all uncertain values appear as arguments to your entry function.
2. Create a `Workspace` object and call it using your entry function. Applicable if uncertain parameters appear nested in an object that is an argument to your entry function:
```julia
# desired computation: y = f(obj), obj contains uncertain parameters inside
y = with_workspace(f, obj)
# or equivalently
w = Workspace(f, obj)
use_invokelatest = true # Set this to false to gain 0.1-1 ms, at the expense of world-age problems if w is created and used in the same function.
w(obj, use_invokelatest)
```
"""
with_workspace(f,P) = Workspace(f,P)(P, true)
function (w::Workspace)(input)
simple_input,simple_result,result,buffersetter,resultsetter,N,f = w.simple_input,w.simple_result,w.result,w.buffersetter,w.resultsetter,w.N,w.f
for partind = 1:N
buffersetter(input,simple_input, partind)
simple_result = f(simple_input)
Base.invokelatest(resultsetter, result,simple_result, partind)
end
has_mutable_particles(input) ? result : make_static(result)
end
function (w::Workspace)(input, invlatest::Bool)
invlatest || return w(input)
simple_input,simple_result,result,buffersetter,resultsetter,N,f = w.simple_input,w.simple_result,w.result,w.buffersetter,w.resultsetter,w.N,w.f
for partind = 1:N
Base.invokelatest(buffersetter, input,simple_input, partind)
simple_result = f(simple_input)
Base.invokelatest(resultsetter, result,simple_result, partind)
end
has_mutable_particles(input) ? result : make_static(result)
end
"""
array_of_structs(f, arg)
Executes `f` on each instance of `arg` represented by internal particles of `arg`. This is useful as a last resort if all other methods to propagate particles through `f` fail. The function returns an array (length = num. particles) of structs rather than particles, each struct is the result of `f(replace_particles(arg, p->p[i]))`.
"""
function array_of_structs(f, arg)
N = particle_paths(arg)[end][end-1]
map(1:N) do i
arg_i = replace_particles(arg, replacer=p->vecindex(p, i))
f(arg_i)
end
end
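# Hypothetical usage sketch (comment added for illustration, not part of the original source),
# reusing the `Pendulum` type from the `mean_object` sketch above:
#   array_of_structs(identity, P)   # Vector of plain `Pendulum`s, one per particle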
# macro withbuffer(f,P,simple_input,setters,setters2,N)
# quote
# $(esc(:(partind = 1))) # Because we need the actual name partind
# $(esc(setters))($(esc(P)),$(esc(simple_input)), $(esc(:partind)))
# $(esc(:simple_result)) = $(esc(f))($(esc(simple_input))) # We first to index 1 to peek at the result
# result = @unsafe build_mutable_container($(esc(f))($(esc(P)))) # Heuristic, see what the result is if called with particles and unsafe_comparisons
# $(esc(setters2))(result,$(esc(:simple_result)), $(esc(:partind)))
# for $(esc(:partind)) = 2:$(esc(N))
# $(esc(setters))($(esc(P)),$(esc(simple_input)), $(esc(:partind)))
# $(esc(:simple_result)) = $(esc(f))($(esc(simple_input)))
# $(esc(setters2))(result,$(esc(:simple_result)), $(esc(:partind)))
# end
# result
# end
# end
#
# @generated function withbufferg(f,P,simple_input,N,setters,setters2)
# ex = Expr(:block)
# push!(ex.args, quote
# partind = 1 # Because we need the actual name partind
# end)
# push!(ex.args, :(setters(P,simple_input,partind)))
# push!(ex.args, quote
# simple_result = f(simple_input) # We first to index 1 to peek at the result
# result = @unsafe f(P)
# end) #build_container(paths, results[1])
# push!(ex.args, :(setters2(result,simple_result,partind)))
# loopex = Expr(:block, :(setters(P,simple_input,partind)))
# push!(loopex.args, :(simple_result = f(simple_input)))
# push!(loopex.args, :(setters2(result,simple_result,partind)))
# push!(ex.args, quote
# for partind = 2:N
# $loopex
# end
# end)
# push!(ex.args, :result)
# ex
# end
#
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 921 | """
gradient(f, p::AbstractParticles)
Calculate the gradient of `f` in `p`. This corresponds to a smoothed finite-difference approximation where the smoothing kernel is given by the distribution of `p`.
Return mean and std.
"""
function gradient(f,p::MonteCarloMeasurements.AbstractParticles)
r = 2(p\f(p))
pmean(r), pstd(r)
end
function gradient(f,p::Union{Integer, AbstractFloat})
p = p ± 0.000001
r = 2(p\f(p))
pmean(r)
end
function gradient(f::Function,p::MonteCarloMeasurements.MvParticles)
r = (p-pmean.(p))\(f(p) - f(pmean.(p)))
end
"""
jacobian(f::Function, p::Vector{<:AbstractParticles})
Calculate the Jacobian of `f` in `p`. This corresponds to a smoothed finite-difference approximation where the smoothing kernel is given by the distribution of `p`.
"""
function jacobian(f::Function,p::MonteCarloMeasurements.MvParticles)
r = (p-pmean.(p))\(f(p) - f(pmean.(p)))
end
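# Illustrative usage sketch (not part of the original source), following the docstrings above:
#
#   p = 1.0 ± 0.01
#   g, gstd = gradient(sin, p)                  # smoothed derivative estimate and its std
#   pv = [1.0, 2.0] .± 0.01
#   J = jacobian(x -> [x[1]*x[2], x[1]^2], pv)  # smoothed Jacobian estimate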
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 417 | """
wasserstein(p1::AbstractParticles,p2::AbstractParticles,p)
Returns the Wasserstein distance (earth mover's distance) of order `p`, to the `p`th power, between `p1` and `p2`.
I.e., for `p=2`, this returns W₂²
"""
function wasserstein(p1::AbstractParticles,p2::AbstractParticles,p)
p1 = sort(p1.particles)
p2 = sort(p2.particles)
wp = mean(eachindex(p1)) do i
abs(p1[i]-p2[i])^p
end
end
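# Illustrative usage sketch (not part of the original source):
#
#   p1 = Particles(2000, Normal(0, 1))
#   p2 = Particles(2000, Normal(1, 1))
#   wasserstein(p1, p2, 2)   # W₂² between the two empirical distributions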
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 3030 | import .ForwardDiff: Dual, value, partials, Partials # The dot in .ForwardDiff is an artefact of using Requires.jl
"""
switch_representation(d::Dual{T, V, N}) where {T, V <: AbstractParticles, N}
Goes from Dual{Particles} to Particles{Dual}
"""
function switch_representation(d::Dual{T,V,N}) where {T,V<:AbstractParticles,N}
part = partials(d)
MonteCarloMeasurements.nakedtypeof(V)([Dual{T}(value(d).particles[i], ntuple(j->part[j].particles[i], N)) for i ∈ 1:nparticles(V)])
end
# function switch_representation(p::Particles{Dual{T,V,N},NP}) where {T,V<:AbstractParticles,N,NP}
# Dual{T}(Particles(value.(p.particles)), Particles(partials.(p.particles)))
# end
const DualParticles = Dual{T,V,N} where {T,V<:AbstractParticles,N}
for ff in [maximum,minimum,std,var,cov,mean,median,quantile]
f = nameof(ff)
pname = Symbol("p"*string(f))
m = Base.parentmodule(ff)
@eval ($pname)(d::DualParticles) = ($pname)(switch_representation(d))
end
macro andreverse(ex)
def = splitdef(ex)
if haskey(def,:whereparams) && !isempty(def[:whereparams])
quote
$(esc(ex))
$(esc(def[:name]))($(esc(def[:args][2])), $(esc(def[:args][1]))) where $(esc(def[:whereparams]...)) = $(esc(def[:body]))
end
else
quote
$(esc(ex))
$(esc(def[:name]))($(esc(def[:args][2])), $(esc(def[:args][1]))) = $(esc(def[:body]))
end
end
end
for PT in (Particles, StaticParticles)
@eval begin
@andreverse function Base.:(*)(p::$PT, d::Dual{T}) where {T}
Dual{T}($PT(p.particles .* value(d)), ntuple(i->$PT(p.particles .* partials(d)[i]) ,length(partials(d))))
end
@andreverse function Base.:(+)(p::$PT, d::Dual{T}) where {T}
Dual{T}($PT(p.particles .+ value(d)), ntuple(i->$PT(0p.particles .+ partials(d)[i]) ,length(partials(d))))
end
function Base.:(-)(p::$PT, d::Dual{T}) where {T}
Dual{T}($PT(p.particles .- value(d)), ntuple(i->$PT(0p.particles .- partials(d)[i]) ,length(partials(d))))
end
function Base.:(-)(d::Dual{T}, p::$PT) where {T}
Dual{T}($PT(value(d) .- p.particles), ntuple(i->$PT(0p.particles .+ partials(d)[i]) ,length(partials(d))))
end
function Base.promote_rule(::Type{ForwardDiff.Dual{T,V,NP}}, ::Type{$PT{S, N}}) where {T, V, NP, S, N}
VS = promote_type(V,S)
Dual{T, $PT{VS, N}, NP}
# Dual{T}($PT(fill(value(d), N)), ntuple(i->$PT(fill(partials(d)[i], N)) ,length(partials(d))))
end
# function Base.promote_rule(::Type{ForwardDiff.Dual{T, V, N}}, ::Type{$PT{T, N}}) where {T, V, N, T, N}
# Dual$PT{}
# end
end
end
# Base.hidigit(x::AbstractParticles, base) = Base.hidigit(mean(x), base) # To avoid stackoverflow in some printing situations
# Base.hidigit(x::Dual, base) = Base.hidigit(x.value, base) # To avoid stackoverflow in some printing situations
# Base.round(d::Dual, r::RoundingMode) = round(d.value,r)
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 977 |
for PT in (:Particles, :StaticParticles)
@eval begin
function $PT(N, m::Measurements.Measurement{T})::$PT{T,N} where T
$PT(N, Normal(Measurements.value(m),Measurements.uncertainty(m)))
end
end
end
"""
Convert an uncertain number from Measurements.jl to the equivalent particle representation with the default number of particles.
"""
function Particles(m::Measurements.Measurement{T}) where T
Particles(DEFAULT_NUM_PARTICLES, m)
end
"""
Convert an uncertain number from Measurements.jl to the equivalent particle representation with the default number of particles.
"""
function StaticParticles(m::Measurements.Measurement{T}) where T
StaticParticles(DEFAULT_STATIC_NUM_PARTICLES, m)
end
"""
Measurements.value(p::AbstractParticles) = pmean(p)
"""
Measurements.value(p::AbstractParticles) = pmean(p)
"""
Measurements.uncertainty(p::AbstractParticles) = pstd(p)
"""
Measurements.uncertainty(p::AbstractParticles) = pstd(p)
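# Illustrative usage sketch (not part of the original source); requires Measurements.jl to be loaded:
#
#   m = Measurements.measurement(1.0, 0.1)               # value 1.0, uncertainty 0.1
#   p = Particles(m)                                      # particle representation, default sample size
#   Measurements.value(p), Measurements.uncertainty(p)   # ≈ (1.0, 0.1)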
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 1287 | function shuffle_and_insert(p::AbstractParticles, ind, val)
p = deepcopy(p)
p.particles[ind] = p.particles[1]
p.particles[1] = val
p
end
function shuffle_and_insert(p::StaticParticles, ind, val)
part = p.particles
part = setindex(part, part[1], ind)
part = setindex(part, val, 1)
StaticParticles(part)
end
"""
pn = with_nominal(p, val)
Endow particles `p` with a nominal value `val`. The particle closest to `val` will be replaced with val, and moved to index 1. This operation introduces a slight bias in the statistics of `pn`, but the operation is asymptotically unbiased for large sample sizes. To obtain the nominal value of `pn`, call `nominal(pn)`.
"""
function with_nominal(p::AbstractParticles, val)
minind = argmin(abs.(p.particles .- val))
shuffle_and_insert(p, minind, val)
end
function with_nominal(p::MvParticles, val::AbstractVector)
M = Matrix(p)
minind = argmin(vec(sum(abs2, M .- val', dims=1)))
shuffle_and_insert.(p, minind, val)
end
"""
nominal(p)
Return the nominal value of `p` (assumes that `p` has been endowed with a nominal value using `with_nominal`).
"""
nominal(p::AbstractParticles) = p.particles[1]
nominal(p::MvParticles) = nominal.(p)
nominal(P) = replace_particles(P, replacer=nominal)
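# Illustrative usage sketch (not part of the original source):
#
#   p  = Particles(500, Normal(0, 1))
#   pn = with_nominal(p, 0.0)   # the particle closest to 0.0 is replaced by 0.0 and moved to index 1
#   nominal(pn)                 # == 0.0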
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 1031 | ## Optimization =======================================
function perturb(rng::AbstractRNG, p, Cp)
d = MvNormal(pmean(p), 1.1Cp + 1e-12I)
Particles(rng, nparticles(p[1]), d)
end
"""
res = optimize([rng::AbstractRNG,] f,p,τ=1,iters=10000)
Find the minimum of the function `f`, starting from the initial distribution described by `p::Vector{Particles}`. `τ` is the initial temperature.
"""
function optimize(rng::AbstractRNG,f,p,τ=1; τi=1.005, iters=10000, tol=1e-8)
p = deepcopy(p)
N = nparticles(p[1])
we = zeros(N)
for i = 1:iters
y = -(f(p).particles)
we .= exp.(τ.*y)
j = sample(rng,1:N, ProbabilityWeights(we), N, ordered=true)
foreach(x->(x.particles .= x.particles[j]), p); # @test length(unique(p[1].particles)) == length(unique(j))
Cp = pcov(p)
tr(Cp) < tol && (@info "Converged at iteration $i"; return p)
p = perturb(rng, p, Cp)
τ *= τi
end
p
end
optimize(f,p,τ=1; kwargs...) = optimize(Random.GLOBAL_RNG,f,p,τ; kwargs...)
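# Illustrative usage sketch (not part of the original source): minimize the Rosenbrock function.
#
#   rosenbrock(x) = (1 - x[1])^2 + 100(x[2] - x[1]^2)^2
#   p0   = [0.0, 0.0] .± 1.0                   # initial search distribution
#   popt = optimize(rosenbrock, p0, 1; iters = 2000)
#   pmean.(popt)                               # should approach [1, 1] if converged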
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 28684 | """
μ ± σ
Creates $DEFAULT_NUM_PARTICLES `Particles` with mean `μ` and std `σ`. It can also be used as a unary operator, in which case a mean of 0 is used with std `σ`.
If `μ` is a vector, the constructor `MvNormal` is used, and `σ` is thus treated as std if it's a scalar, and variances if it's a matrix or vector.
See also [`∓`](@ref), [`..`](@ref)
"""
±
"""
μ ∓ σ
Creates $DEFAULT_STATIC_NUM_PARTICLES `StaticParticles` with mean `μ` and std `σ`. It can also be used as a unary operator, in which case a mean of 0 is used with std `σ`.
If `μ` is a vector, the constructor `MvNormal` is used, and `σ` is thus treated as std if it's a scalar, and variances if it's a matrix or vector.
See also [`±`](@ref), [`⊗`](@ref)
"""
∓
±(μ::Real,σ) = Particles{promote_type(float(typeof(μ)),float(typeof(σ))),DEFAULT_NUM_PARTICLES}(systematic_sample(DEFAULT_NUM_PARTICLES,Normal(μ,σ); permute=true))
±(μ::AbstractVector,σ) = Particles(DEFAULT_NUM_PARTICLES, MvNormal(μ, σ))
±(σ) = zero(σ) ± σ
∓(μ::Real,σ) = StaticParticles{promote_type(float(typeof(μ)),float(typeof(σ))),DEFAULT_STATIC_NUM_PARTICLES}(systematic_sample(DEFAULT_STATIC_NUM_PARTICLES,Normal(μ,σ); permute=true))
∓(μ::AbstractVector,σ) = StaticParticles(DEFAULT_STATIC_NUM_PARTICLES, MvNormal(μ, σ))
∓(σ) = zero(σ) ∓ σ
"""
a .. b
Creates $DEFAULT_NUM_PARTICLES `Particles` with a `Uniform` distribution between `a` and `b`.
See also [`±`](@ref), [`⊗`](@ref)
"""
(..)(a,b) = Particles{float(promote_type(eltype(a), eltype(b))), DEFAULT_NUM_PARTICLES}(Random.GLOBAL_RNG, Uniform(a,b))
"""
a ⊠ Distribution()
Multiplies `a` by $DEFAULT_NUM_PARTICLES `Particles` sampled from a specified `::Distribution`.
Shorthand for `a * Particles(Distribution())`, e.g., `a ⊠ Gamma(1)`.
"""
⊠(a,d::Distribution) = a * Particles{eltype(d), DEFAULT_NUM_PARTICLES}(Random.GLOBAL_RNG, d)
"""
a ⊞ Distribution()
Adds $DEFAULT_NUM_PARTICLES `Particles` sampled from a specified `::Distribution` to `a`.
Shorthand for `a + Particles(Distribution())`, e.g., `1 ⊞ Binomial(3)`.
"""
⊞(a,d::Distribution) = a + Particles{eltype(d), DEFAULT_NUM_PARTICLES}(Random.GLOBAL_RNG, d)
"""
⊗(μ,σ) = outer_product(Normal.(μ,σ))
See also [`outer_product`](@ref), [`±`](@ref)
"""
⊗(μ,σ) = outer_product(Normal.(μ,σ))
"""
p = outer_product([rng::AbstractRNG,] dists::Vector{<:Distribution}, N=100_000)
Creates a multivariate systematic sample where each dimension is sampled according to the corresponding univariate distribution in `dists`. Returns `p::Vector{Particles}` where each Particles has a length approximately equal to `N`.
The particles form the outer product between `d` systematically sampled vectors with length given by the d:th root of N, where `d` is the length of `dists`. All particles will be independent and have marginal distributions given by `dists`.
See also `MonteCarloMeasurements.⊗`
"""
function outer_product(rng::AbstractRNG, dists::AbstractVector{<:Distribution}, N=100_000)
d = length(dists)
N = floor(Int,N^(1/d))
dims = map(dists) do dist
v = systematic_sample(rng,N,dist; permute=true)
end
cart_prod = vec(collect(Iterators.product(dims...)))
p = map(1:d) do i
Particles(getindex.(cart_prod,i))
end
end
function outer_product(dists::AbstractVector{<:Distribution}, N=100_000)
return outer_product(Random.GLOBAL_RNG, dists, N)
end
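# Illustrative usage sketch (not part of the original source):
#
#   p = outer_product([Normal(0, 1), Normal(2, 0.5), Uniform(0, 1)], 10_000)
#   length(p)          # 3 uncertain values
#   nparticles(p[1])   # 21^3 = 9261 (the cube of the rounded third root of N)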
# StaticParticles(N::Integer = DEFAULT_NUM_PARTICLES; permute=true) = StaticParticles{Float64,N}(SVector{N,Float64}(systematic_sample(N, permute=permute)))
function print_functions_to_extend()
excluded_functions = [fill, |>, <, display, show, promote, promote_rule, promote_type, size, length, ndims, convert, isapprox, ≈, <, (<=), (==), zeros, zero, eltype, getproperty, fieldtype, rand, randn]
functions_to_extend = setdiff(names(Base), Symbol.(excluded_functions))
for fs in functions_to_extend
ff = @eval $fs
ff isa Function || continue
isempty(methods(ff)) && continue # Sort out intrinsics and builtins
f = nameof(ff)
if !isempty(methods(ff, (Real,Real)))
println(f, ",")
end
end
end
"""
shortform(p::AbstractParticles)
Return a short string describing the type
"""
shortform(p::Particles) = "Part"
shortform(p::StaticParticles) = "SPart"
function to_num_str(p::AbstractParticles{T}, d=3, ds=d-1) where {T<:Union{Number,AbstractArray}}
s = pstd(p)
# TODO: be smart and select sig digits based on s
if T <: AbstractFloat && s < eps(p)
string(round(pmean(p), sigdigits=d))
else
string(round(pmean(p), sigdigits=d), " ± ", round(s, sigdigits=ds))
end
end
to_num_str(p::AbstractParticles{T}, d, ds) where T = ""
function Base.show(io::IO, p::AbstractParticles{T,N}) where {T,N}
print(io, to_num_str(p, 3))
end
function Base.show(io::IO, ::MIME"text/plain", p::AbstractParticles{T,N}) where {T,N}
sPT = MonteCarloMeasurements.shortform(p)
compact = get(io, :compact, false)
if compact
print(io, MonteCarloMeasurements.to_num_str(p, 6, 3))
else
print(io, MonteCarloMeasurements.to_num_str(p, 6, 3), " $(typeof(p))\n")
end
end
Base.show(io::IO, z::Complex{PT}) where PT <: AbstractParticles =
show(io, MIME"text/plain"(), z)
function Base.show(io::IO, ::MIME"text/plain", z::Complex{PT}) where PT <: AbstractParticles
r, i = reim(z)
compact = get(io, :compact, false)
print(io, "(")
show(io, r)
print(io, ")")
if pmaximum(i) < 0
i = -i
print(io, compact ? "-" : " - ")
else
print(io, compact ? "+" : " + ")
end
print(io, "(")
show(io, i)
print(io, ")")
print(io, "im")
end
# function Base.show(io::IO, p::MvParticles)
# sPT = shortform(p)
# print(io, "(", N, " $sPT with mean ", round.(mean(p), sigdigits=3), " and std ", round.(sqrt.(diag(cov(p))), sigdigits=3),")")
# end
for mime in (MIME"text/x-tex", MIME"text/x-latex")
@eval function Base.show(io::IO, ::$mime, p::AbstractParticles)
print(io, "\$"); show(io, p); print("\$")
end
@eval function Base.show(io::IO, ::$mime, z::Complex{<:AbstractParticles})
print(io, "\$")
r, i = reim(z)
compact = get(io, :compact, false)
print(io, "(")
show(io, r)
print(io, ")")
if pmaximum(i) < 0
i = -i
print(io, compact ? "-" : " - ")
else
print(io, compact ? "+" : " + ")
end
print(io, "(")
show(io, i)
print(io, ")")
print(io, "i")
print("\$")
end
end
# Two-argument functions
# foreach(register_primitive_binop, [+,-,*,/,//,^])
foreach(register_primitive_multi, [+,-,*,/,//,^,max,min,mod,mod1,atan,atand,add_sum,hypot,clamp])
# One-argument functions
foreach(register_primitive_single, [+,-,
exp,exp2,exp10,expm1,
log,log10,log2,log1p,
sin,cos,tan,sind,cosd,tand,sinh,cosh,tanh,
asin,acos,atan,asind,acosd,atand,asinh,acosh,atanh,
zero,sign,abs,sqrt,rad2deg,deg2rad,float])
MvParticles(x::AbstractVector{<:AbstractArray{<:Number}}) = Particles(copy(reduce(hcat, x)'))
MvParticles(v::AbstractVector{<:Number}) = Particles(v)
function MvParticles(v::AbstractVector{<:Tuple})
Particles.([getindex.(v,i) for i in 1:length(v[1])])
end
function MvParticles(s::Vector{NamedTuple{vs, T}}) where {vs, T}
nt = NamedTuple()
for k in keys(s[1])
nt = merge(nt, [k => MvParticles(getproperty.(s,k))])
end
nt
end
function _finish_individuals(::Type{PT}, N, individuals::AbstractArray{<:Tuple}, p) where PT
ntuple(length(individuals[1])) do ti
RT = eltype(first(individuals)[ti])
PRT = PT{RT,N}
no = length(individuals[1][ti])
out = Vector{PRT}(undef, no)
for i = 1:no
out[i] = PRT(getindex.(getindex.(individuals,ti), i))
end
reshape(out, size(individuals[1][ti]))
end
end
function _finish_individuals(::Type{PT}, ::Val{N}, individuals, p) where {PT, N}
RT = eltype(eltype(individuals))
PRT = PT{RT,N}
out = similar(p, PRT)
for i = 1:length(p)
out[i] = PRT(getindex.(individuals,i))
end
reshape(out, size(p))
end
for PT in ParticleSymbols
# Constructors
@eval begin
"""
ℝⁿ2ℝⁿ_function(f::Function, p::AbstractArray{T})
Helper function for performing uncertainty propagation through vector-valued functions with vector inputs.
Applies `f : ℝⁿ → ℝⁿ` to an array of particles. E.g., `Base.log(p::Matrix{<:AbstractParticles}) = ℝⁿ2ℝⁿ_function(log,p)`
"""
function ℝⁿ2ℝⁿ_function(f::F, p::AbstractArray{$PT{T,N}}) where {F,T,N}
individuals = map(1:nparticles(p[1])) do i
f(vecindex.(p,i))
end
_finish_individuals($PT, Val{N}(), individuals, p)
end
function ℝⁿ2ℝⁿ_function(f::F, p::AbstractArray{$PT{T,N}}, p2::AbstractArray{$PT{T,N}}) where {F,T,N}
individuals = map(1:nparticles(p[1])) do i
f(vecindex.(p,i), vecindex.(p2,i))
end
_finish_individuals($PT, Val{N}(), individuals, p)
end
"""
ℝⁿ2ℂⁿ_function(f::Function, p::AbstractArray{T})
Helper function for performing uncertainty propagation through complex-valued functions with vector inputs.
Applies `f : ℝⁿ → Cⁿ` to an array of particles. E.g., `LinearAlgebra.eigvals(p::Matrix{<:AbstractParticles}) = ℝⁿ2ℂⁿ_function(eigvals,p)`
"""
function ℝⁿ2ℂⁿ_function(f::F, p::AbstractArray{$PT{T,N}}; kwargs...) where {F,T,N}
individuals = map(1:nparticles(p[1])) do i
f(vecindex.(p,i); kwargs...)
end
PRT = $PT{T,N}
RT = eltype(eltype(individuals))
if RT <: Complex
CRT = Complex{PRT}
else
CRT = PRT
end
out = Array{CRT}(undef, size(individuals[1]))
for i = eachindex(out)
ind = getindex.(individuals,i)
if RT <: Complex
out[i] = complex(PRT(real.(ind)), PRT(imag.(ind)))
else
out[i] = PRT(ind)
end
end
out
end
#
# function ℝⁿ2ℂⁿ_function(f::F, p::AbstractArray{$PT{T,N}}, p2::AbstractArray{$PT{T,N}}) where {F,T,N}
# individuals = map(1:nparticles(p[1])) do i
# f(getindex.(p,i), getindex.(p2,i))
# end
# RT = eltype(eltype(individuals))
# @assert RT <: Complex
# PRT = $PT{T,N}
# CRT = Complex{PRT}
# out = similar(p, CRT)
# for i = 1:length(p)
# ind = getindex.(individuals,i)
# out[i] = complex(PRT(real.(ind)), PRT(imag.(ind)))
# end
# reshape(out, size(p))
# end
end
end
for ff in (var, std)
f = nameof(ff)
@eval function (Statistics.$f)(p::ParticleDistribution{<:AbstractParticles{T,N}}, args...; kwargs...) where {N,T}
N == 1 && (return zero(T))
$f(p.p.particles, args...;kwargs...)
end
pname = Symbol("p"*string(f))
@eval function ($pname)(p::AbstractParticles{T,N}, args...; kwargs...) where {N,T}
N == 1 && (return zero(T))
$f(p.particles, args...;kwargs...)
end
end
# Instead of @forward
# TODO: convert all these to operate on ParticleDistribution
for ff in [Statistics.mean, Statistics.cov, Statistics.median, Statistics.quantile, Statistics.middle, Base.iterate, Base.extrema, Base.minimum, Base.maximum]
f = nameof(ff)
m = Base.parentmodule(ff)
@eval ($m.$f)(p::ParticleDistribution, args...; kwargs...) = ($m.$f)(p.p.particles, args...; kwargs...)
pname = Symbol("p"*string(f))
@eval ($pname)(p::AbstractParticles, args...; kwargs...) = ($m.$f)(p.particles, args...; kwargs...)
@eval ($pname)(p::Number, args...; kwargs...) = ($m.$f)(p, args...; kwargs...)
end
for PT in ParticleSymbols
@eval begin
# Base.length(::Type{$PT{T,N}}) where {T,N} = N
# Base.eltype(::Type{$PT{T,N}}) where {T,N} = $PT{T,N} # TODO: remove
Base.convert(::Type{StaticParticles{T,N}}, p::$PT{T,N}) where {T,N} = StaticParticles(p.particles)
Base.convert(::Type{$PT{T,N}}, f::Real) where {T,N} = $PT{T,N}(fill(T(f),N))
Base.convert(::Type{$PT{T,N}}, f::$PT{S,N}) where {T,N,S} = $PT{promote_type(T,S),N}(convert.(promote_type(T,S),f.particles))
function Base.convert(::Type{S}, p::$PT{T,N}) where {S<:ConcreteFloat,T,N}
N == 1 && (return S(p.particles[1]))
pstd(p) < eps(S) || throw(ArgumentError("Cannot convert a particle distribution to a float if not all particles are the same."))
return S(p.particles[1])
end
function Base.convert(::Type{S}, p::$PT{T,N}) where {S<:ConcreteInt,T,N}
isinteger(p) || throw(ArgumentError("Cannot convert a particle distribution to an int if not all particles are the same."))
return S(p.particles[1])
end
Base.zeros(::Type{$PT{T,N}}, dim::Integer) where {T,N} = [$PT{T,N}(zeros(eltype(T),N)) for d = 1:dim]
Base.zero(::Type{$PT{T,N}}) where {T,N} = $PT{T,N}(zeros(eltype(T),N))
Base.isfinite(p::$PT{T,N}) where {T,N} = isfinite(pmean(p))
Base.round(p::$PT{T,N}, r::RoundingMode, args...; kwargs...) where {T,N} = $PT{T,N}(round.(p.particles, r, args...; kwargs...))
Base.round(::Type{S}, p::$PT{T,N}, args...; kwargs...) where {S,T,N} = $PT{S,N}(round.(S, p.particles, args...; kwargs...))
function Base.AbstractFloat(p::$PT{T,N}) where {T,N}
N == 1 && (return p.particles[1])
pstd(p) < eps(T) || throw(ArgumentError("Cannot convert a particle distribution to a number if not all particles are the same."))
return p.particles[1]
end
Base.rem(p1::$PT{T,N}, p2::$PT{T,N}, args...) where {T,N} = $PT{T,N}(Base.rem.(p1.particles, p2.particles, args...))
Base.div(p1::$PT{T,N}, p2::$PT{T,N}, args...) where {T,N} = $PT{T,N}(Base.div.(p1.particles, p2.particles, args...))
"""
union(p1::AbstractParticles, p2::AbstractParticles)
A `Particles` containing all particles from both `p1` and `p2`. Note, this will be twice as long as `p1` or `p2` and thus of a different type.
`pu = Particles([p1.particles; p2.particles])`
"""
function Base.union(p1::$PT{T,NT},p2::$PT{T,NS}) where {T,NT,NS}
$PT{T,NT+NS}([p1.particles; p2.particles])
end
"""
intersect(p1::AbstractParticles, p2::AbstractParticles)
A `Particles` containing all particles from the common support of `p1` and `p2`. Note, this will be of undetermined length and thus undetermined type.
"""
function Base.intersect(p1::$PT,p2::$PT)
mi = max(pminimum(p1),pminimum(p2))
ma = min(pmaximum(p1),pmaximum(p2))
f = x-> mi <= x <= ma
$PT([filter(f, p1.particles); filter(f, p2.particles)])
end
function Base.:^(p::$PT{T,N}, i::Integer) where {T,N} # Resolves ambiguity
res = p.particles.^i
$PT{eltype(res),N}(res)
end
Base.:\(p::Vector{<:$PT}, p2::Vector{<:$PT}) = Matrix(p)\Matrix(p2) # Must be here to be most specific
function LinearAlgebra.eigvals(p::Matrix{$PT{T,N}}; kwargs...) where {T,N} # Special case to propte types differently
individuals = map(1:N) do i
eigvals(vecindex.(p,i); kwargs...)
end
PRT = Complex{$PT{T,N}}
out = Vector{PRT}(undef, length(individuals[1]))
for i = eachindex(out)
c = getindex.(individuals,i)
out[i] = complex($PT{T,N}(real(c)),$PT{T,N}(imag(c)))
end
out
end
end
for XT in (:Number, :($PT{<:Number,N})), YT in (:Number, :($PT{<:Number,N})), ZT in (:Number, :($PT{<:Number,N}))
XT == YT == ZT == :Number && continue
@eval function Base.muladd(x::$XT,y::$YT,z::$ZT) where {N}
res = muladd.(maybe_particles(x),maybe_particles(y),maybe_particles(z))
$PT{eltype(res),N}(res)
end
end
@eval Base.promote_rule(::Type{S}, ::Type{$PT{T,N}}) where {S<:Number,T,N} = $PT{promote_type(S,T),N} # This is hard to hit due to method for real 3 lines down
@eval Base.promote_rule(::Type{Bool}, ::Type{$PT{T,N}}) where {T,N} = $PT{promote_type(Bool,T),N}
for PT2 in ParticleSymbols
if PT == PT2
@eval Base.promote_rule(::Type{$PT{S,N}}, ::Type{$PT{T,N}}) where {S,T,N} = $PT{promote_type(S,T),N}
elseif any(==(:StaticParticles), (PT, PT2))
@eval Base.promote_rule(::Type{$PT{S,N}}, ::Type{$PT2{T,N}}) where {S,T,N} = StaticParticles{promote_type(S,T),N}
else
@eval Base.promote_rule(::Type{$PT{S,N}}, ::Type{$PT2{T,N}}) where {S,T,N} = Particles{promote_type(S,T),N}
end
end
@eval Base.promote_rule(::Type{<:AbstractParticles}, ::Type{$PT{T,N}}) where {T,N} = Union{}
end
# Base.length(p::AbstractParticles{T,N}) where {T,N} = N
Base.ndims(p::AbstractParticles{T,N}) where {T,N} = ndims(T)
Base.:\(H::MvParticles,p::AbstractParticles) = Matrix(H)\p.particles
# Base.:\(p::AbstractParticles, H) = p.particles\H
# Base.:\(p::MvParticles, H) = Matrix(p)\H
# Base.:\(H,p::MvParticles) = H\Matrix(p)
Base.Broadcast.broadcastable(p::AbstractParticles) = Ref(p)
# Base.setindex!(p::AbstractParticles, val, i::Integer) = setindex!(p.particles, val, i)
# Base.getindex(p::AbstractParticles, i::Integer) = getindex(p.particles, i)
# Base.getindex(v::MvParticles, i::Int, j::Int) = v[j][i] # Defining this methods screws with show(::MvParticles)
Base.Array(p::AbstractParticles) = p.particles
Base.Vector(p::AbstractParticles) = Array(p)
function Base.Array(v::Array{<:AbstractParticles})
m = reduce(hcat, Array.(v))
return reshape(m, size(m, 1), size(v)...)
end
Base.Matrix(v::MvParticles) = Array(v)
# function Statistics.var(v::MvParticles,args...;kwargs...) # Not sure if it's a good idea to define this. Is needed for when var(v::AbstractArray) is used
# s2 = map(1:length(v[1])) do i
# var(getindex.(v,i))
# end
# eltype(v)(s2)
# end
pmean(v::MvParticles) = pmean.(v)
pcov(v::MvParticles,args...;kwargs...) = cov(Matrix(v), args...; kwargs...)
pcor(v::MvParticles,args...;kwargs...) = cor(Matrix(v), args...; kwargs...)
pvar(v::MvParticles,args...; corrected = true, kwargs...) = sum(abs2, v)/(nparticles(v) - corrected)
Distributions.fit(d::Type{<:MultivariateDistribution}, p::MvParticles) = fit(d,Matrix(p)')
Distributions.fit(d::Type{<:Distribution}, p::AbstractParticles) = fit(d,p.particles)
Distributions.Normal(p::AbstractParticles) = Normal(pmean(p), pstd(p))
Distributions.MvNormal(p::MvParticles) = MvNormal(pmean(p), pcov(p))
meanstd(p::AbstractParticles) = pstd(p)/sqrt(nparticles(p))
meanvar(p::AbstractParticles) = pvar(p)/nparticles(p)
Base.:(==)(p1::AbstractParticles{T,N},p2::AbstractParticles{T,N}) where {T,N} = p1.particles == p2.particles
Base.:(!=)(p1::AbstractParticles{T,N},p2::AbstractParticles{T,N}) where {T,N} = p1.particles != p2.particles
function Base.hash(p::AbstractParticles, h::UInt)
h = hash(p.particles, h)
hash(typeof(p), h)
end
function zip_longest(a_,b_)
a,b = maybe_particles(a_), maybe_particles(b_)
l = max(length(a), length(b))
Iterators.take(zip(Iterators.cycle(a), Iterators.cycle(b)), l)
end
function safe_comparison(a_, b_, op::F) where F
a,b = maybe_particles(a_), maybe_particles(b_)
all(((a,b),)->op(a,b), Iterators.product(extrema(a),extrema(b))) && (return true)
!any(((a,b),)->op(a,b), Iterators.product(extrema(a),extrema(b))) && (return false)
_comparison_error()
end
function do_comparison(a,b,op::F) where F
mode = COMPARISON_MODE[]
if mode === :reduction
op(COMPARISON_FUNCTION[](a), COMPARISON_FUNCTION[](b))
elseif mode === :montecarlo
all(((a,b),)->op(a,b), zip_longest(a,b)) && return true
!any(((a,b),)->op(a,b), zip_longest(a,b)) && return false
_comparison_error()
elseif mode === :safe
safe_comparison(a,b,op)
else
error("Got unsupported comparison mode.")
end
end
function _comparison_error()
msg = "Comparison of uncertain values using comparison mode $(COMPARISON_MODE[]) failed. Comparison operators are not well defined for uncertain values. Call `unsafe_comparisons(true)` to enable comparison operators for particles using the current reduction function $(COMPARISON_FUNCTION[]). Change this function using `set_comparison_function(f)`. "
if COMPARISON_MODE[] === :safe
msg *= "For safety reasons, the default safe comparison function is maximally conservative and tests if the extreme values of the distributions fulfil the comparison operator."
elseif COMPARISON_MODE[] === :montecarlo
msg *= "For safety reasons, montecarlo comparison is conservative and tests if pairwise particles fulfil the comparison operator. If some do *and* some do not, this error is thrown. Consider if you can define a primitive function ([docs](https://baggepinnen.github.io/MonteCarloMeasurements.jl/stable/overloading/#Overloading-a-new-function-1)) or switch to `unsafe_comparisons(:reduction)`"
end
error(msg)
end
function Base.:<(a::Real,p::AbstractParticles)
do_comparison(a,p,<)
end
function Base.:<(p::AbstractParticles,a::Real)
do_comparison(p,a,<)
end
function Base.:<(p::AbstractParticles, a::AbstractParticles)
do_comparison(p,a,<)
end
function Base.:(<=)(p::AbstractParticles{T,N}, a::AbstractParticles{T,N}) where {T,N}
do_comparison(p,a,<=)
end
"""
p1 ≈ p2
Determine if two particles are not significantly different
"""
Base.:≈(p::AbstractParticles, a::AbstractParticles, lim=2) = abs(pmean(p)-pmean(a))/(2sqrt(pstd(p)^2 + pstd(a)^2)) < lim
function Base.:≈(a::Real,p::AbstractParticles, lim=2)
m = pmean(p)
s = pstd(p, mean=m)
s == 0 && (return m == a)
abs(pmean(p)-a)/pstd(p) < lim
end
function Base.:≈(p::AbstractParticles, a::Real, lim=2)
m = pmean(p)
s = pstd(p, mean=m)
s == 0 && (return m == a)
abs(pmean(p)-a)/pstd(p) < lim
end
Base.:≈(p::MvParticles, a::AbstractVector) = all(a ≈ b for (a,b) in zip(a,p))
Base.:≈(a::AbstractVector, p::MvParticles) = all(a ≈ b for (a,b) in zip(a,p))
Base.:≈(a::MvParticles, p::MvParticles) = all(a ≈ b for (a,b) in zip(a,p))
Base.:≉(a,b::AbstractParticles,lim=2) = !(≈(a,b,lim))
Base.:≉(a::AbstractParticles,b,lim=2) = !(≈(a,b,lim))
"""
p1 ≉ p2
Determine if two particles are significantly different
"""
Base.:≉(a::AbstractParticles,b::AbstractParticles,lim=2) = !(≈(a,b,lim))
Base.sincos(x::AbstractParticles) = sin(x),cos(x)
Base.minmax(x::AbstractParticles,y::AbstractParticles) = (min(x,y), max(x,y))
Base.:!(p::AbstractParticles) = all(p.particles .== 0)
Base.isinteger(p::AbstractParticles) = all(isinteger, p.particles)
Base.iszero(p::AbstractParticles) = all(iszero, p.particles)
Base.iszero(p::AbstractParticles, tol) = abs(mean(p.particles)) < tol
≲(a,b,args...) = a < b
≲(a::Real,p::AbstractParticles,lim=2) = (pmean(p)-a)/pstd(p) > lim
≲(p::AbstractParticles,a::Real,lim=2) = (a-pmean(p))/pstd(p) > lim
≲(p::AbstractParticles,a::AbstractParticles,lim=2) = (pmean(a)-pmean(p))/(2sqrt(pstd(p)^2 + pstd(a)^2)) > lim
≳(a::Real,p::AbstractParticles,lim=2) = ≲(p,a,lim)
≳(p::AbstractParticles,a::Real,lim=2) = ≲(a,p,lim)
≳(p::AbstractParticles,a::AbstractParticles,lim=2) = ≲(a,p,lim)
Base.eps(p::Type{<:AbstractParticles{T,N}}) where {T,N} = eps(T)
Base.eps(p::AbstractParticles{T,N}) where {T,N} = eps(T)
Base.eps(p::AbstractParticles{<:Complex{T},N}) where {T,N} = eps(T)
Base.rtoldefault(::Type{<:AbstractParticles{T,N}}) where {T,N} = sqrt(eps(T))
LinearAlgebra.norm(x::AbstractParticles, args...) = abs(x)
Base.log(p::Matrix{<:AbstractParticles}) = ℝⁿ2ℂⁿ_function(log,p) # Matrix more specific than StridedMatrix used in Base.log
# LinearAlgebra.eigvals(p::Matrix{<:AbstractParticles}; kwargs...) = ℝⁿ2ℂⁿ_function(eigvals,p; kwargs...) # Replaced with implementation below
Base.exp(p::AbstractMatrix{<:AbstractParticles}) = ℝⁿ2ℝⁿ_function(exp, p)
LinearAlgebra.exp!(p::AbstractMatrix{<:AbstractParticles}) = ℝⁿ2ℝⁿ_function(LinearAlgebra.exp!, p)
LinearAlgebra.lyap(p1::Matrix{<:AbstractParticles}, p2::Matrix{<:AbstractParticles}) = ℝⁿ2ℝⁿ_function(lyap, p1, p2)
LinearAlgebra.hessenberg!(A::StridedMatrix{<: AbstractParticles}) = GenericSchur._hessenberg!(A)
Base.floatmin(::Type{<:AbstractParticles{T}}) where T = floatmin(T)
Base.floatmax(::Type{<:AbstractParticles{T}}) where T = floatmax(T)
## Particle BLAS
# pgemv is up to twice as fast as the naive way already for A(2,2)-A(20,20)
"""
_pgemv(A, p::Vector{StaticParticles{T, N}}) where {T, N}
Perform `A*p::Vector{StaticParticles{T,N}}` using BLAS matrix-matrix multiply. This function is automatically used when applicable and there is no need to call it manually.
"""
function _pgemv(
A,
p::Vector{StaticParticles{T,N}},
) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N}
pm = reinterpret(T, p)
M = reshape(pm, N, :)'
AM = A * M
reinterpret(StaticParticles{T,N}, vec(AM'))
end
Base.:*(A::Matrix{T}, p::Vector{StaticParticles{T,N}}) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N} = _pgemv(A,p)
"""
_pdot(v::Vector{T}, p::Vector{StaticParticles{T, N}}) where {T, N}
Perform `v'p::Vector{StaticParticles{T,N}}` using BLAS matrix-vector multiply. This function is automatically used when applicable and there is no need to call it manually.
"""
function _pdot(
v::AbstractVector{T},
p::Vector{StaticParticles{T,N}},
) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N}
pm = reinterpret(T, p)
M = reshape(pm, N, :)
Mv = M*v
StaticParticles{T,N}(Mv)
end
LinearAlgebra.dot(v::AbstractVector{T}, p::Vector{StaticParticles{T,N}}) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N} = _pdot(v,p)
LinearAlgebra.dot(p::Vector{StaticParticles{T,N}}, v::AbstractVector{T}) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N} = _pdot(v,p)
function _paxpy!(
a::T,
x::Vector{StaticParticles{T,N}},
y::Vector{StaticParticles{T,N}},
) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N}
X = reinterpret(T, x)
Y = reinterpret(T, y)
LinearAlgebra.axpy!(a,X,Y)
reinterpret(StaticParticles{T,N}, Y)
end
LinearAlgebra.axpy!(
a::T,
x::Vector{StaticParticles{T,N}},
y::Vector{StaticParticles{T,N}},
) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N} = _paxpy!(a,x,y)
function LinearAlgebra.mul!(
y::Vector{StaticParticles{T,N}},
A::AbstractMatrix{T},
b::Vector{StaticParticles{T,N}},
) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N}
Bv = reinterpret(T, b)
B = reshape(Bv, N, :)'
# Y0 = A*B
# reinterpret(StaticParticles{T,N}, vec(Y0'))
Yv = reinterpret(T, y)
Y = reshape(Yv, :, N)
mul!(Y,A,B)
reinterpret(StaticParticles{T,N}, vec(Y'))
end
function LinearAlgebra.mul!(
y::Vector{Particles{T,N}},
A::AbstractMatrix{T},
b::Vector{Particles{T,N}},
) where {T<:Union{Float32,Float64,ComplexF32,ComplexF64},N}
B = Matrix(b)
# Y = A*B
Y = B*A' # This order makes slicing below more efficient
@inbounds if isdefined(y, 1)
for i in eachindex(y)
@views y[i].particles .= Y[:,i]
end
else
for i in eachindex(y)
y[i] = Particles(Y[:,i])
end
end
y
end
"""
particle_dict2dict_vec(dict)
Take a dict that maps keys to uncertain values, and return a vector of dicts where each dict has a single sample (particle) of the uncertain values. The length of the returned vector is the number of samples (particles) for all uncertain parameters.
"""
function particle_dict2dict_vec(dict)
# check the validity of uncertain parameters
found_particle_numbers = Set{Int}()
uncertain_parameters = Set{Base.keytype(dict)}()
for (k, v) in dict
if v isa AbstractParticles
push!(found_particle_numbers, nparticles(v))
push!(uncertain_parameters, k)
end
end
if length(found_particle_numbers) > 1
error("The number of samples (particles) for all uncertain parameters must be the same, but I found $(found_particle_numbers)")
elseif isempty(found_particle_numbers)
return [dict] # not much to do here
end
N = only(found_particle_numbers)
map(1:N) do i
Dict(k => vecindex(v, i) for (k, v) in dict)
end
end
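# Illustrative usage sketch (not part of the original source):
#
#   d  = Dict(:a => 1.0 ± 0.1, :b => 2.0)
#   dv = particle_dict2dict_vec(d)   # Vector of nparticles(d[:a]) plain Dicts
#   dv[1][:a]                        # a single sample of :a, while :b stays 2.0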
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 7761 |
@recipe function plot(p::AbstractParticles)
seriestype --> :histogram
@series p.particles
end
# @recipe f(::Type{<:AbstractParticles}, p::AbstractParticles) = p.particles # Does not seem to be needed
const RealOrTuple = Union{Real, Tuple}
handle_args(y::AbstractVecOrMat{<:AbstractParticles}, q::RealOrTuple=0.025) = 1:size(y,1), y, q
handle_args(x::AbstractVector, y::AbstractVecOrMat{<:AbstractParticles}, q::RealOrTuple=0.025) = x, y, q
handle_args(p) = handle_args(p.args...)
handle_args(args...) = throw(ArgumentError("The plot function should be called with the signature plotfun([x=1:length(y)], y::Vector{Particles}, [q=0.025])"))
function quantiles(y,q::Number)
m = vec(pmean.(y))
q > 0.5 && (q = 1-q)
lower = reshape(-(pquantile.(vec(y),q)-m), size(y))
upper = reshape(pquantile.(vec(y),1-q)-m, size(y))
lower,upper
end
function quantiles(y,q)
m = vec(pmean.(y))
lower = reshape(-(pquantile.(vec(y),q[1])-m), size(y))
upper = reshape(pquantile.(vec(y),q[2])-m, size(y))
lower,upper
end
@userplot Errorbarplot
@recipe function plt(p::Errorbarplot; quantile=nothing)
x,y,q = handle_args(p)
q = quantile === nothing ? q : quantile
m = pmean.(y)
label --> "Mean with $q quantile"
Q = quantiles(y, q)
if y isa AbstractMatrix
for c in 1:size(y,2)
@series begin
yerror := (Q[1][:,c], Q[2][:,c])
x,m[:,c]
end
end
else
yerror := Q
@series x,m
end
end
"This is a helper function to make multiple series into one series separated by `Inf`. This makes plotting vastly more efficient."
function to1series(x,y)
r,c = size(y)
y2 = vec([y; fill(Inf, 1, c)])
x2 = repeat([x; Inf], c)
x2,y2
end
to1series(y) = to1series(1:size(y,1),y)
@userplot MCplot
@recipe function plt(p::MCplot; oneseries=true)
x,y,q = handle_args(p)
to1series = oneseries ? MonteCarloMeasurements.to1series : (x,y) -> (x,y)
N = nparticles(y)
selected = q > 1 ? randperm(N)[1:q] : 1:N
N = length(selected)
label --> ""
seriesalpha --> 1/log(N)
if y isa AbstractMatrix
for c in 1:size(y,2)
m = Matrix(y[:,c])'
@series to1series(x, m[:, selected])
end
else
m = Matrix(y)'
@series to1series(x, m[:, selected])
end
end
@userplot Ribbonplot
@recipe function plt(p::Ribbonplot; N=false, quantile=nothing, oneseries=true)
x,y,q = handle_args(p)
q = quantile === nothing ? q : quantile
to1series = oneseries ? MonteCarloMeasurements.to1series : identity
if N > 0
for col = 1:size(y,2)
yc = y[:,col]
m = pmean.(yc)
@series begin
label --> "Mean with $q quantile"
ribbon := quantiles(yc, q)
x,m
end
@series begin
ribbon := quantiles(yc, q)
m
end
@series begin
M = Matrix(yc)
np,ny = size(M)
primary := false
nc = N > 1 ? N : min(np, 50)
seriesalpha --> max(1/sqrt(nc), 0.1)
chosen = randperm(np)[1:nc]
to1series(M[chosen, :]')
end
end
else
@series begin
label --> "Mean with $q quantile"
m = pmean.(y)
ribbon := quantiles(y, q)
x,m
end
end
end
"""
errorbarplot(x,y,[q=0.025])
Plots a vector of particles with error bars at quantile `q`.
If `q::Tuple`, then you can specify both lower and upper quantile, e.g., `(0.01, 0.99)`.
"""
errorbarplot
"""
mcplot(x,y,[N=0])
Plots all trajectories represented by a vector of particles. `N > 1` controls the number of trajectories to plot.
"""
mcplot
"""
ribbonplot(x,y,[q=0.025]; N=true)
Plots a vector of particles with a ribbon covering quantiles `q, 1-q`.
If `q::Tuple`, then you can specify both lower and upper quantile, e.g., `(0.01, 0.99)`.
If a positive number `N` is provided, `N` sample trajectories will be plotted on top of the ribbon.
"""
ribbonplot
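# Illustrative usage sketch (not part of the original source); requires Plots.jl to be loaded:
#
#   using Plots, MonteCarloMeasurements
#   t = 0:0.1:2π
#   y = [sin(ti) ± 0.1 for ti in t]
#   errorbarplot(t, y, 0.05)
#   mcplot(t, y)
#   ribbonplot(t, y, (0.025, 0.975))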
@recipe function plt(y::Union{MvParticles,AbstractMatrix{<:AbstractParticles}}, q=0.025; N=true, ri=true, quantile=nothing, oneseries=true)
q = quantile === nothing ? q : quantile
label --> "Mean with ($q, $(1-q)) quantiles"
to1series = oneseries ? MonteCarloMeasurements.to1series : identity
for col = 1:size(y,2)
yc = y[:,col]
if ri
@series begin
ribbon := quantiles(yc, q)
pmean.(yc)
end
end
if N > 0
@series begin
M = Matrix(yc)
np,ny = size(M)
primary := !ri
nc = N > 1 ? N : min(np, 50)
seriesalpha --> max(1/sqrt(nc), 0.1)
chosen = randperm(np)[1:nc]
to1series(M[chosen, :]') # We want different columns to look different, but M here represents a single column (Matrix(yc)), so each column in M corresponds to the same yc
end
end
end
end
@recipe function plt(func::Function, x::Union{MvParticles,AbstractMatrix{<:AbstractParticles}}, q=0.025; quantile=nothing)
y = func.(x)
q = quantile === nothing ? q : quantile
label --> "Mean with ($q, $(1-q)) quantiles"
xerror := quantiles(x, q)
yerror := quantiles(y, q)
pmean.(x), pmean.(y)
end
@recipe function plt(x::Union{MvParticles,AbstractMatrix{<:AbstractParticles}}, y::Union{MvParticles,AbstractMatrix{<:AbstractParticles}}, q=0.025; points=false, quantile=nothing)
my = pmean.(y)
mx = pmean.(x)
q = quantile === nothing ? q : quantile
if points
@series begin
seriestype --> :scatter
primary := true
seriesalpha --> 0.1
vec(Matrix(x)), vec(Matrix(y))
end
else
@series begin
yerror := quantiles(y, q)
xerror := quantiles(x, q)
label --> "Mean with $q quantile"
mx, my
end
end
end
@recipe function plt(x::Union{MvParticles,AbstractMatrix{<:AbstractParticles}}, y::AbstractArray, q=0.025; quantile=nothing)
mx = pmean.(x)
q = quantile === nothing ? q : quantile
lower,upper = quantiles(x, q)
xerror := (lower,upper)
mx, y
end
@recipe function plt(x::AbstractArray, y::Union{MvParticles,AbstractMatrix{<:AbstractParticles}}, q=0.025; N=true, ri=true, quantile=nothing, oneseries=true)
samedim = size(x) === size(y)
# layout --> max(size(x, 2), size(y, 2))
q = quantile === nothing ? q : quantile
to1series = oneseries ? MonteCarloMeasurements.to1series : (x,y) -> (x,y)
if N > 0
for col = 1:size(y,2)
yc = y[:,col]
if ri
@series begin
# seriescolor --> col
# subplot --> col
ribbon := quantiles(yc, q)
label --> "Mean with ($q, $(1-q)) quantiles"
x, pmean.(yc)
end
end
@series begin
# seriescolor --> col
# subplot --> col
M = Matrix(yc)
np,ny = size(M)
primary := !ri
nc = N > 1 ? N : min(np, 50)
seriesalpha --> max(1/sqrt(nc), 0.1)
chosen = randperm(np)[1:nc]
to1series(samedim ? x[:, col] : x, M[chosen, :]')
end
end
else
@series begin
ribbon := quantiles(y, q)
label --> "Mean with ($q, $(1-q)) quantiles"
x, pmean.(y)
end
end
end
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 4348 | @inline maybe_particles(x) = x
@inline maybe_particles(p::AbstractParticles) = p.particles
"""
register_primitive(f, eval=eval)
Register both single- and multi-argument methods of a function so that it works with particles. If you want to register functions from within a module, you must pass the module's `eval` function.
"""
function register_primitive(ff, eval=eval)
register_primitive_multi(ff, eval)
register_primitive_single(ff, eval)
end
"""
register_primitive_multi(ff, eval=eval)
Register a multi-argument function so that it works with particles. If you want to register functions from within a module, you must pass the module's `eval` function.
"""
function register_primitive_multi(ff, eval=eval)
f = nameof(ff)
m = Base.parentmodule(ff)
# for PT in (:Particles, :StaticParticles)
# eval(quote
# function ($m.$f)(p::$PT{T,N},a::Real...) where {T,N}
# res = ($m.$f).(p.particles, MonteCarloMeasurements.maybe_particles.(a)...) # maybe_particles introduced to handle >2 arg operators
# return $PT{eltype(res),N}(res)
# end
# function ($m.$f)(a::Real,p::$PT{T,N}) where {T,N}
# res = map(x->($m.$f)(a,x), p.particles)
# return $PT{eltype(res),N}(res)
# end
# function ($m.$f)(p1::$PT{T,N},p2::$PT{T,N}) where {T,N}
# res = map(($m.$f), p1.particles, p2.particles)
# return $PT{eltype(res),N}(res)
# end
# function ($m.$f)(p1::$PT{T,N},p2::$PT{S,N}) where {T,S,N} # Needed for particles of different float types :/
# res = map(($m.$f), p1.particles, p2.particles)
# return $PT{eltype(res),N}(res)
# end
# end)
# end
for PT in (:Particles, :StaticParticles)
eval(quote
function ($m.$f)(p::$PT{T,N},a::Real...) where {T,N}
res = ($m.$f).(p.particles, MonteCarloMeasurements.maybe_particles.(a)...) # maybe_particles introduced to handle >2 arg operators
return $PT{eltype(res),N}(res)
end
function ($m.$f)(a::Real,p::$PT{T,N}) where {T,N}
res = ($m.$f).(a, p.particles)
return $PT{eltype(res),N}(res)
end
function ($m.$f)(p1::$PT{T,N},p2::$PT{T,N}) where {T,N}
res = ($m.$f).(p1.particles, p2.particles)
return $PT{eltype(res),N}(res)
end
function ($m.$f)(p1::$PT{T,N},p2::$PT{S,N}) where {T,S,N} # Needed for particles of different float types :/
res = ($m.$f).(p1.particles, p2.particles)
return $PT{eltype(res),N}(res)
end
end)
end
# The code below is resolving some method ambiguities
eval(quote
function ($m.$f)(p1::StaticParticles{T,N},p2::Particles{T,N}) where {T,N}
res = map(($m.$f), p1.particles, p2.particles)
return StaticParticles{eltype(res),N}(res)
end
function ($m.$f)(p1::StaticParticles{T,N},p2::Particles{S,N}) where {T,S,N} # Needed for particles of different float types :/
res = map(($m.$f), p1.particles, p2.particles)
return StaticParticles{eltype(res),N}(res)
end
function ($m.$f)(p1::Particles{T,N},p2::StaticParticles{T,N}) where {T,N}
res = map(($m.$f), p1.particles, p2.particles)
return StaticParticles{eltype(res),N}(res)
end
function ($m.$f)(p1::Particles{T,N},p2::StaticParticles{S,N}) where {T,S,N} # Needed for particles of different float types :/
res = map(($m.$f), p1.particles, p2.particles)
return StaticParticles{eltype(res),N}(res)
end
end)
end
"""
register_primitive_single(ff, eval=eval)
Register a single-argument function so that it works with particles. If you want to register functions from within a module, you must pass the module's `eval` function.
"""
function register_primitive_single(ff, eval=eval)
f = nameof(ff)
m = Base.parentmodule(ff)
for PT in (:Particles, :StaticParticles)
eval(quote
function ($m.$f)(p::$PT{T,N}) where {T,N}
res = ($m.$f).(p.particles)
return $PT{eltype(res),N}(res)
end
end)
end
end
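# Illustrative registration sketch (not part of the original source); `MyMod` and `foo` are
# hypothetical names:
#
#   module MyMod
#       using MonteCarloMeasurements
#       foo(x) = x^2 + sin(x)
#       MonteCarloMeasurements.register_primitive(foo, eval)   # pass this module's `eval`
#   end
#   MyMod.foo(1.0 ± 0.1)   # now propagates particles through `foo`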
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 3020 | # """
# logΣexp, Σexp = logsumexp!(p::WeightedParticles)
# Return log(∑exp(w)). Modifies the weight vector to `w = exp(w-offset)`
# Uses a numerically stable algorithm with offset to control for overflow and `log1p` to control for underflow. `Σexp` is the sum of the weights in the state they are left, i.e., `sum(exp.(w).-offset)`.
#
# References:
# https://arxiv.org/pdf/1412.8695.pdf eq 3.8 for p(y)
# https://discourse.julialang.org/t/fast-logsumexp/22827/7?u=baggepinnen for stable logsumexp
# """
# function logsumexp!(p::WeightedParticles)
# N = length(p)
# w = p.logweights
# offset, maxind = findmax(w)
# w .= exp.(w .- offset)
# Σ = sum_all_but(w,maxind) # Σ = ∑wₑ-1
# log1p(Σ) + offset, Σ+1
# end
#
# """
# sum_all_but(w, i)
#
# Add all elements of vector `w` except for index `i`. The element at index `i` is assumed to have value 1
# """
# function sum_all_but(w,i)
# w[i] -= 1
# s = sum(w)
# w[i] += 1
# s
# end
#
# """
# loglik = resample!(p::WeightedParticles)
# Resample the particles based on the `p.logweights`. After a call to this function, weights will be reset to sum to one. Returns log-likelihood.
# """
# function resample!(p::WeightedParticles)
# N = length(p)
# w = p.logweights
# logΣexp,Σ = logsumexp!(p)
# _resample!(p,Σ)
# # fill!(p.weights, 1/N)
# fill!(w, -log(N))
# logΣexp - log(N)
# end
# """
# In-place systematic resampling of `p`, returns the sum of weights.
# `p.logweights` should be exponentiated before calling this function.
# """
# function _resample!(p::WeightedParticles,Σ)
# x,w = p.particles, p.logweights
# N = length(w)
# bin = w[1]
# s = rand()*Σ/N
# bo = 1
# for i = 1:N
# @inbounds for b = bo:N
# if s < bin
# x[i] = x[b]
# bo = b
# break
# end
# bin += w[b+1] # should never reach here when b==N
# end
# s += Σ/N
# end
# Σ
# end
for PT in ParticleSymbols
@eval begin
"""
bootstrap([rng::AbstractRNG,] p::Particles, n = nparticles(p))
Return Particles resampled with replacement. `n` specifies the number of samples to draw. Also works for arrays of Particles, in which case a single set of indices is drawn and used to extract samples from all elements in the array.
"""
function bootstrap(rng::AbstractRNG, p::$PT, n::Integer = nparticles(p))
$PT(p.particles[[rand(rng, 1:nparticles(p)) for _ in 1:n]])
end
function bootstrap(rng::AbstractRNG, p::AbstractArray{<:$PT}, n::Integer = nparticles(p))
inds = [rand(rng, 1:nparticles(p)) for _ in 1:n]
newpart = [p.particles[inds] for p in p]
$PT.(newpart)
end
end
end
bootstrap(p::T, n::Integer = nparticles(p)) where T <: AbstractParticles = bootstrap(Random.GLOBAL_RNG, p, n)
bootstrap(p::MvParticles, n::Integer = nparticles(p)) = bootstrap(Random.GLOBAL_RNG, p, n)
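# Illustrative usage sketch (not part of the original source):
#
#   p   = Particles(1000, Normal(0, 1))
#   pb  = bootstrap(p)        # resampled with replacement, same number of particles
#   pb2 = bootstrap(p, 500)   # draw 500 samples instead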
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 1222 | """
systematic_sample([rng::AbstractRNG,] N, d=Normal(0,1); permute=true)
Returns a `Vector` of length `N` sampled systematically from the distribution `d`. If `permute=false`, this vector will be sorted.
"""
function systematic_sample(rng::AbstractRNG, N, d=Normal(0,1); permute=true)
T = eltype(mean(d)) # eltype(d) does not appear to be sufficient
e = T(0.5/N) # rand()/N
y = e:1/N:1
o = quantile.((d, ),y)
permute && permute!(o, randperm(rng, N))
return eltype(o) == T ? o : T.(o)
end
function systematic_sample(N, d=Normal(0,1); kwargs...)
return systematic_sample(Random.GLOBAL_RNG, N, d; kwargs...)
end
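# Illustrative usage sketch (not part of the original source):
#
#   v = systematic_sample(500, Gamma(2, 1))   # 500 systematically spaced samples
#   p = Particles(v)                          # wrap them as particles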
"""
ess(p::AbstractParticles{T,N})
Calculates the effective sample size. This is useful if particles come from MCMC sampling and are correlated in time. The ESS is a number in the interval [0, N].
Initial source: https://github.com/tpapp/MCMCDiagnostics.jl
"""
function ess(p::AbstractParticles)
ac = autocor(p.particles,1:min(250, nparticles(p)÷2))
N = length(ac)
τ_inv = 1 + 2ac[1]
K = 2
while K < N - 2
Δ = ac[K] + ac[K+1]
Δ < 0 && break
τ_inv += 2Δ
K += 2
end
min(1 / τ_inv, one(τ_inv))*nparticles(p)
end
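# Illustrative usage sketch (not part of the original source):
#
#   chain = Particles(randn(2000))   # uncorrelated samples
#   ess(chain)                       # close to 2000 for white noise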
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 2632 | """
sigmapoints(m, Σ)
sigmapoints(d::Normal)
sigmapoints(d::MvNormal)
The [unscented transform](https://en.wikipedia.org/wiki/Unscented_transform#Sigma_points) uses a small number of points to propagate the first and second moments of a probability density, called *sigma points*. We provide a function `sigmapoints(μ, Σ)` that creates a `Matrix` of `2n+1` sigma points, where `n` is the dimension. This can be used to initialize any kind of `AbstractParticles`, e.g.:
```julia
julia> m = [1,2]
julia> Σ = [3. 1; 1 4]
julia> p = StaticParticles(sigmapoints(m,Σ))
2-element Array{StaticParticles{Float64,5},1}:
(5 StaticParticles: 1.0 ± 1.73)
(5 StaticParticles: 2.0 ± 2.0)
julia> cov(p) ≈ Σ
true
julia> mean(p) ≈ m
true
```
Make sure to pass the variance (not std) as second argument in case `μ` and `Σ` are scalars.
# Caveat
If you are creating several one-dimensional uncertain values using sigmapoints independently, they will be strongly correlated. Use the multidimensional constructor! Example:
```julia
p = StaticParticles(sigmapoints(1, 0.1^2)) # Wrong!
ζ = StaticParticles(sigmapoints(0.3, 0.1^2)) # Wrong!
ω = StaticParticles(sigmapoints(1, 0.1^2)) # Wrong!
p,ζ,ω = StaticParticles(sigmapoints([1, 0.3, 1], 0.1^2)) # Correct
```
"""
function sigmapoints(m, Σ::AbstractMatrix)
n = length(m)
# X = sqrt(n*Σ)
X = cholesky(Symmetric(n*Σ)).U # Much faster than sqrt
T = promote_type(eltype(m), eltype(X))
[X; -X; zeros(T,1,n)] .+ m'
end
sigmapoints(m, Σ::Number) = sigmapoints(m, diagm(0=>fill(Σ, length(m))))
sigmapoints(d::Normal) = sigmapoints(mean(d), var(d))
sigmapoints(d::MvNormal) = sigmapoints(mean(d), Matrix(cov(d)))
"""
Y = transform_moments(X::Matrix, m, Σ; preserve_latin=false)
Transforms `X` such that it gets the specified mean and covariance.
```julia
m, Σ = [1,2], [2 1; 1 4] # Desired mean and covariance
particles = transform_moments(X, m, Σ)
julia> cov(particles) ≈ Σ
true
```
**Note**, if `X` is a latin hypercube and `Σ` is non-diagonal, then the latin property is destroyed for all dimensions but the first.
We provide a keyword argument (`preserve_latin=true`) which absolutely preserves the latin property in all dimensions, but if you use this, the covariance of the sample will be slightly wrong.
"""
function transform_moments(X,m,Σ; preserve_latin=false)
X = X .- mean(X,dims=1) # Normalize the sample
if preserve_latin
xl = Diagonal(std(X,dims=1)[:])
# xl = cholesky(Diagonal(var(X,dims=1)[:])).L
else
xl = cholesky(cov(X)).L
end
Matrix((m .+ (cholesky(Σ).L/xl)*X')')
end
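# Illustrative usage sketch (not part of the original source):
#
#   X = randn(1000, 2)                        # raw sample
#   m, Σ = [1.0, 2.0], [2.0 1.0; 1.0 4.0]
#   Y = transform_moments(X, m, Σ)
#   p = Particles(Y)                          # two Particles with cov(p) ≈ Σ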
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 1097 |
for ff in (exp,)
f = nameof(ff)
m = Base.parentmodule(ff)
for PT in (:Particles, :StaticParticles)
@eval begin
@inline function ($m.$f)(p::$PT{Float64,N}) where N
res = (SLEEFPirates.$f).(p.particles)
return $PT{Float64,N}(res)
end
@inline function ($m.$f)(p::$PT{Float32,N}) where N
res = (SLEEFPirates.$f).(p.particles)
return $PT{Float32,N}(res)
end
end
end
end
for ff in (log, sin, cos, asin, acos, atan) # tan is not faster
f = nameof(ff)
fs = Symbol(f,"_fast")
m = Base.parentmodule(ff)
for PT in (:Particles, :StaticParticles)
@eval begin
@inline function ($m.$f)(p::$PT{Float64,N}) where N
res = (SLEEFPirates.$fs).(p.particles)
return $PT{Float64,N}(res)
end
@inline function ($m.$f)(p::$PT{Float32,N}) where N
res = (SLEEFPirates.$fs).(p.particles)
return $PT{Float32,N}(res)
end
end
end
end
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |
|
[
"MIT"
] | 1.2.1 | 36ccc5e09dbba9aea61d78cd7bc46c5113e6ad84 | code | 5739 | const ConcreteFloat = Union{Float64,Float32,Float16,BigFloat}
const ConcreteInt = Union{Bool,Int8,Int16,Int32,Int64,Int128,BigInt}
abstract type AbstractParticles{T,N} <: Real end
"""
struct Particles{T, N} <: AbstractParticles{T, N}
This type represents uncertainty using a cloud of particles.
# Constructors:
- `Particles()`
- `Particles(N::Integer)`
- `Particles([rng::AbstractRNG,] d::Distribution)`
- `Particles([rng::AbstractRNG,] N::Integer, d::Distribution; permute=true, systematic=true)`
- `Particles(v::Vector{T} where T)`
- `Particles(m::Matrix{T} where T)`: Creates multivariate particles (Vector{Particles})
"""
struct Particles{T,N} <: AbstractParticles{T,N}
particles::Vector{T}
end
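# Illustrative construction sketch (not part of the original source), matching the constructors listed above:
#
#   p1 = Particles()                   # default sample size, standard normal
#   p2 = Particles(500, Gamma(2, 1))   # 500 particles from a Gamma distribution
#   p3 = Particles(randn(1000))        # wrap an existing sample
#   P  = Particles(randn(1000, 3))     # Vector of 3 Particles built column-wise from a matrix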
"""
struct StaticParticles{T, N} <: AbstractParticles{T, N}
See `?Particles` for help. The difference between `StaticParticles` and `Particles` is that the `StaticParticles` store particles in a static vector. This makes runtimes much shorter, but compile times longer. See the documentation for some benchmarks. Only recommended for sample sizes of ≲ 300-400.
"""
struct StaticParticles{T,N} <: AbstractParticles{T,N}
particles::SArray{Tuple{N}, T, 1, N}
end
DNP(PT) = PT === Particles ? DEFAULT_NUM_PARTICLES : DEFAULT_STATIC_NUM_PARTICLES
const ParticleSymbols = (:Particles, :StaticParticles)
for PT in ParticleSymbols
for D in (2,3,4,5)
D1 = D-1
@eval function $PT{T,N}(m::AbstractArray{T,$D}) where {T,N} # ::Array{$PT{T,N},$D1}
size(m, 1) == N || throw(ArgumentError("The first dimension of the array must be the same as the number N of particles."))
inds = CartesianIndices(axes(m)[2:end])
map(inds) do ind
$PT{T,N}(@view(m[:,ind]))::$PT{T,N}
end#::Array{$PT{T,N},$(D1)}
end
@eval function $PT(m::AbstractArray{T,$D}) where T
N = size(m, 1)
inds = CartesianIndices(axes(m)[2:end])
map(inds) do ind
$PT{T,N}(@view(m[:,ind]))
end
end
end
@eval begin
$PT(v::Vector) = $PT{eltype(v),length(v)}(v)
function $PT{T,N}(n::Real) where {T,N} # This constructor is potentially dangerous, replace with convert?
if n isa AbstractParticles
return convert($PT{T,N}, n)
end
v = fill(n,N)
$PT{T,N}(v)
end
$PT{T,N}(p::$PT{T,N}) where {T,N} = p
function $PT(rng::AbstractRNG, N::Integer=DNP($PT), d::Distribution{<:Any,VS}=Normal(0,1); permute=true, systematic=VS==Continuous) where VS
if systematic
v = systematic_sample(rng, N, d; permute)
else
v = rand(rng, d, N)
end
$PT{eltype(v),N}(v)
end
function $PT{T,N}(rng::AbstractRNG, d::Distribution{<:Any,VS}=Normal(0,1); permute=true, systematic=VS==Continuous) where {T,N,VS}
if systematic
v = systematic_sample(rng, N, d; permute)
else
v = rand(rng, d, N)
end
$PT{T,N}(v)
end
function $PT(N::Integer=DNP($PT), d::Distribution{<:Any,VS}=Normal(0,1); kwargs...) where VS
return $PT(Random.GLOBAL_RNG, N, d; kwargs...)
end
function $PT(::Type{T}, N::Integer=DNP($PT), d::Distribution=Normal(T(0),T(1)); kwargs...) where {T <: Real}
eltype(d) == T || throw(ArgumentError("Element type of the provided distribution $d does not match $T. The element type of a distribution is the element type of a value sampled from it. Some distributions, like `Gamma(0.1f0)`, generate `Float64` random numbers even though it appears like they should generate `Float32` numbers."))
return $PT(Random.GLOBAL_RNG, N, d; kwargs...)
end
function $PT(rng::AbstractRNG, N::Integer, d::MultivariateDistribution)
v = rand(rng,d,N)'
$PT{eltype(v), N}(v)
end
function $PT{T,N}(rng::AbstractRNG, d::MultivariateDistribution) where {T,N}
v = rand(rng,d,N)'
$PT{T, N}(v)
end
$PT(N::Integer, d::MultivariateDistribution) = $PT(Random.GLOBAL_RNG, N, d)
nakedtypeof(p::$PT{T,N}) where {T,N} = $PT
nakedtypeof(::Type{$PT{T,N}}) where {T,N} = $PT
end
end
function Particles(rng::AbstractRNG, d::Distribution; kwargs...)
Particles{eltype(d), DEFAULT_NUM_PARTICLES}(rng, d; kwargs...)
end
Particles(d::Distribution; kwargs...) = Particles(Random.GLOBAL_RNG, d; kwargs...)
function StaticParticles(rng::AbstractRNG, d::Distribution;kwargs...)
StaticParticles(rng, DEFAULT_STATIC_NUM_PARTICLES, d; kwargs...)
end
StaticParticles(d::Distribution;kwargs...) = StaticParticles(Random.GLOBAL_RNG, d; kwargs...)
const AbstractMvParticles = AbstractVector{<:AbstractParticles}
const MvParticles = Vector{<:AbstractParticles} # This can not be AbstractVector since it causes some methods below to be less specific than desired
const ParticleArray = AbstractArray{<:AbstractParticles}
const SomeKindOfParticles = Union{<:AbstractParticles, ParticleArray}
"""
This is an experimental wrapper around `Particles` that changes the semantics from `Particles <: Real` to `ParticleDistribution <: Distribution`. Note that this type is to be considered experimental and subject to change at any time.
"""
struct ParticleDistribution{T <: SomeKindOfParticles, U} <: Distribution{U, Continuous}
p::T
end
"Experimental"
pdist(p::AbstractParticles) = ParticleDistribution{particleeltype(p), Univariate}(p)
pdist(p::AbstractMvParticles) = ParticleDistribution{particleeltype(p), Multivariate}(p)
Particles(p::StaticParticles{T,N}) where {T,N} = Particles{T,N}(p.particles)
| MonteCarloMeasurements | https://github.com/baggepinnen/MonteCarloMeasurements.jl.git |