# benchmarks/SimResults/combinations_spec_mylocality/oldstuff/cmp_soplexmcfcalculixgcc/power.py
# Source repository: TugberkArkose/MLScheduler
power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.181181,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.344996,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.977935,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.486054,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.841669,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.482721,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.81044,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.330514,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 7.28395,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.184753,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0176198,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.195265,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.130309,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.380018,
'Execution Unit/Register Files/Runtime Dynamic': 0.147929,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.521478,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.08927,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 3.79801,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00272158,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00272158,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.0023766,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000923356,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00187191,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00969166,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0258763,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.12527,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.372767,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.425473,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 0.959077,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.090727,
'L2/Runtime Dynamic': 0.0127692,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.08122,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.38167,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0920133,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0920133,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 4.51749,
'Load Store Unit/Runtime Dynamic': 1.92746,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.226889,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.453778,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0805237,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0817258,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.061585,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.697703,
'Memory Management Unit/Runtime Dynamic': 0.143311,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 26.1203,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.644561,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0326103,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.237087,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 0.914258,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 7.75489,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.11996,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.29691,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.64733,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.234954,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.378972,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.191292,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.805218,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.169475,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 5.2954,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.122295,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00985502,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.116195,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0728839,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.23849,
'Execution Unit/Register Files/Runtime Dynamic': 0.0827389,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.274787,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.565173,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.15542,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00133282,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00133282,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00118494,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000471861,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00104698,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00489756,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0119197,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0700652,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 4.45674,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.197355,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.237973,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 6.89155,
'Instruction Fetch Unit/Runtime Dynamic': 0.522211,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0504299,
'L2/Runtime Dynamic': 0.0069462,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.70196,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.713329,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0473909,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0473909,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.92575,
'Load Store Unit/Runtime Dynamic': 0.994436,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.116858,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.233716,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0414733,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0421754,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.277104,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0325171,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.504457,
'Memory Management Unit/Runtime Dynamic': 0.0746925,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 19.2571,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.321701,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0145155,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.111753,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.44797,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 4.20167,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.0065108,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.207803,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0335685,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.102536,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.165386,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.0834813,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.351403,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.112125,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.10223,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.00634181,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0043008,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0336025,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0318071,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0399443,
'Execution Unit/Register Files/Runtime Dynamic': 0.0361079,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0724192,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.179703,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.18039,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00112696,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00112696,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000995662,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000393137,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000456911,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0037065,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0103022,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0305769,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 1.94496,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.0958958,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.103853,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.25787,
'Instruction Fetch Unit/Runtime Dynamic': 0.244335,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0538499,
'L2/Runtime Dynamic': 0.0148173,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.02873,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.40237,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0256105,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0256104,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.14967,
'Load Store Unit/Runtime Dynamic': 0.554282,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.063151,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.126302,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0224125,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0232096,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.12093,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0157552,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.31554,
'Memory Management Unit/Runtime Dynamic': 0.0389648,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 14.4686,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0166828,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.00482915,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0520126,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0735245,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.10632,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.00682822,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.208052,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 0.0364806,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.106185,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.171272,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.0864526,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 0.36391,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.115853,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 4.11398,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.00689197,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.00445387,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.0347798,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.0329391,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.0416718,
'Execution Unit/Register Files/Runtime Dynamic': 0.037393,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.0749788,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.202833,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 1.21756,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000625326,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000625326,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000550159,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000215984,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.000473173,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00227399,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00579905,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0316652,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 2.01418,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.0689457,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.107549,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 4.33045,
'Instruction Fetch Unit/Runtime Dynamic': 0.216233,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0418086,
'L2/Runtime Dynamic': 0.00989266,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.36015,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.554162,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0363327,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0363327,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 2.53172,
'Load Store Unit/Runtime Dynamic': 0.769675,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.0895903,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.17918,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0317959,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0324228,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.125234,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0113054,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.335963,
'Memory Management Unit/Runtime Dynamic': 0.0437282,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 14.9434,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.0181291,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0050114,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.0551057,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.0782462,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 2.33534,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 3.868411224021876,
'Runtime Dynamic': 3.868411224021876,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.371973,
'Runtime Dynamic': 0.183113,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 75.1614,
'Peak Power': 108.274,
'Runtime Dynamic': 16.5813,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 74.7894,
'Total Cores/Runtime Dynamic': 16.3982,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.371973,
'Total L3s/Runtime Dynamic': 0.183113,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | <filename>benchmarks/SimResults/combinations_spec_mylocality/oldstuff/cmp_soplexmcfcalculixgcc/power.py
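
# The nested dictionary above follows McPAT-style output: top-level entries for
# 'Core' (a list with one entry per core), 'L3', 'DRAM', 'BUSES', and
# 'Processor', each holding per-component values keyed by slash-separated
# component paths. The short demo below is only a minimal sketch of how the
# structure can be queried; it is not part of the generated results, and the
# assumption that the dynamic/leakage figures are in watts follows the usual
# McPAT convention rather than anything stated in this file.

if __name__ == '__main__':
    # Runtime dynamic power reported for each core.
    for idx, core in enumerate(power['Core']):
        print('core %d runtime dynamic: %.4f W' % (idx, core['Runtime Dynamic']))

    # A rough chip-level figure: runtime dynamic plus subthreshold leakage at
    # the 'Processor' level (gate leakage is reported separately).
    proc = power['Processor']
    print('processor runtime dynamic + subthreshold leakage: %.4f W'
          % (proc['Runtime Dynamic'] + proc['Subthreshold Leakage']))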
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.371973,
'Total L3s/Runtime Dynamic': 0.183113,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}} | none | 1 | 1.286766 | 1 |
|
packages/gtmcore/gtmcore/environment/conda.py | gigabackup/gigantum-client | 60 | 10201 | from typing import List, Dict
import json
from gtmcore.http import ConcurrentRequestManager, ConcurrentRequest
from gtmcore.environment.packagemanager import PackageManager, PackageResult, PackageMetadata
from gtmcore.container import container_for_context
from gtmcore.labbook import LabBook
from gtmcore.logging import LMLogger
logger = LMLogger.get_logger()
class CondaPackageManagerBase(PackageManager):
"""Class to implement the conda package manager
"""
def __init__(self):
# String to be set in child classes indicating which python version you are checking. Typically should be either
# python 3.6* or python 2.7*
self.python_depends_str = None
# String of the name of the conda environment (e.g. py36 or py27, as created via container build)
self.python_env = None
# Note, currently we hard code channel config. Future changes to support the user specifying channels
# will modify this behavior
self.channel_priority = ['conda-forge', 'anaconda']
self.request_mgr = ConcurrentRequestManager()
def list_versions(self, package_name: str, labbook: LabBook, username: str) -> List[str]:
"""Method to list all available versions of a package based on the package name
Args:
package_name: Name of the package to query
labbook: Subject LabBook
username: username of current user
Returns:
list(str): Version strings
"""
# Check for package in channels, picking out version by priority
request_list = list()
for channel in self.channel_priority:
request_list.append(ConcurrentRequest(f"https://api.anaconda.org/package/{channel}/{package_name}",
headers={'Accept': 'application/json'}))
responses = self.request_mgr.resolve_many(request_list)
versions = None
for response in responses:
if response.status_code != 200:
continue
versions = response.json.get('versions')
break
if not versions:
            raise ValueError(f"Package {package_name} not found in channels {', '.join(self.channel_priority)}.")
versions.reverse()
return versions
def list_installed_packages(self, labbook: LabBook, username: str) -> List[Dict[str, str]]:
"""Method to get a list of all packages that are currently installed
Note, this will return results for the computer/container in which it is executed. To get the properties of
a LabBook container, a docker exec command would be needed from the Gigantum application container.
return format is a list of dicts with the format (name: <package name>, version: <version string>)
Returns:
list
"""
project_container = container_for_context(username, labbook=labbook)
result = project_container.run_container("conda list --no-pip --json", wait_for_output=True)
        if result:
            data = json.loads(result)
            if data:
                return [{"name": x['name'], 'version': x['version']} for x in data]
        # fall back to an empty list when the container returns no usable output
        return []
def validate_packages(self, package_list: List[Dict[str, str]], labbook: LabBook, username: str) \
-> List[PackageResult]:
"""Method to validate a list of packages, and if needed fill in any missing versions
Should check both the provided package name and version. If the version is omitted, it should be generated
from the latest version.
Args:
package_list(list): A list of dictionaries of packages to validate
labbook(str): The labbook instance
username(str): The username for the logged in user
Returns:
namedtuple: namedtuple indicating if the package and version are valid
"""
result = list()
# Check for package in channels, picking out version by priority
request_list = list()
for pkg in package_list:
for channel in self.channel_priority:
request_list.append(ConcurrentRequest(f"https://api.anaconda.org/package/{channel}/{pkg['package']}",
headers={'Accept': 'application/json'}))
responses = self.request_mgr.resolve_many(request_list)
# Repack into groups by package
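        # one request was queued per channel for each package, so chunk the flat response list into len(channel_priority)-sized tuples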
responses_per_package = list(zip(*(iter(responses),) * len(self.channel_priority)))
for package, responses in zip(package_list, responses_per_package):
versions = None
latest_version = None
for response in responses:
if response.status_code != 200:
continue
versions = response.json.get('versions')
latest_version = response.json.get('latest_version')
break
if not versions:
# Package is not found
result.append(PackageResult(package=package['package'], version=package.get('version'), error=True))
continue
if package.get('version'):
# Package has been set, so validate it
if package.get('version') in versions:
# Both package name and version are valid
result.append(PackageResult(package=package['package'], version=package.get('version'),
error=False))
else:
# The package version is not in the list, so invalid
result.append(PackageResult(package=package['package'], version=package.get('version'), error=True))
else:
# You need to look up the latest version since not included
result.append(PackageResult(package=package['package'], version=str(latest_version),
error=False))
return result
def get_packages_metadata(self, package_list: List[str], labbook: LabBook, username: str) -> List[PackageMetadata]:
"""Method to get package metadata
Args:
package_list: List of package names
labbook(str): The labbook instance
username(str): The username for the logged in user
Returns:
list
"""
def _extract_metadata(data):
"""Extraction method to pull out the docs URL and description"""
latest_val = data.get('latest_version')
description_val = data.get('summary').strip()
docs_val = data.get('doc_url')
if not docs_val:
docs_val = data.get('html_url')
return latest_val, description_val, docs_val
# Check for package in channels, picking out version by priority
request_list = list()
for pkg in package_list:
for channel in self.channel_priority:
request_list.append(ConcurrentRequest(f"https://api.anaconda.org/package/{channel}/{pkg}",
headers={'Accept': 'application/json'},
extraction_function=_extract_metadata))
responses = self.request_mgr.resolve_many(request_list)
# Repack into groups by package
responses_per_package = list(zip(*(iter(responses),) * len(self.channel_priority)))
result = list()
for package, responses in zip(package_list, responses_per_package):
data = None
for response in responses:
if response.status_code == 200:
data = response.extracted_json
break
if data:
latest_version, description, docs_url = data
result.append(PackageMetadata(package_manager="conda", package=package, latest_version=latest_version,
description=description, docs_url=docs_url))
else:
result.append(PackageMetadata(package_manager="conda", package=package, latest_version=None,
description=None, docs_url=None))
return result
def generate_docker_install_snippet(self, packages: List[Dict[str, str]], single_line: bool = False) -> List[str]:
"""Method to generate a docker snippet to install 1 or more packages
        Note: Because conda can be so slow to solve environments with conda-forge included, always single line it.
Args:
packages(list(dict)): A list of package names and versions to install
            single_line(bool): If true, collapse the installs into a single RUN line
Returns:
list
"""
package_strings = [f"{x['name']}={x['version']}" for x in packages]
if single_line:
return [f"RUN conda install -yq {' '.join(package_strings)}"]
else:
return [f"RUN conda install -yq {' '.join(package_strings)}"]
class Conda3PackageManager(CondaPackageManagerBase):
"""Class to implement the conda3 package manager
"""
def __init__(self):
super().__init__()
self.python_depends_str = 'python 3.6*'
self.python_env = 'py36'
class Conda2PackageManager(CondaPackageManagerBase):
"""Class to implement the conda2 package manager
"""
def __init__(self):
super().__init__()
self.python_depends_str = 'python 2.7*'
self.python_env = 'py27'
| from typing import List, Dict
import json
from gtmcore.http import ConcurrentRequestManager, ConcurrentRequest
from gtmcore.environment.packagemanager import PackageManager, PackageResult, PackageMetadata
from gtmcore.container import container_for_context
from gtmcore.labbook import LabBook
from gtmcore.logging import LMLogger
logger = LMLogger.get_logger()
class CondaPackageManagerBase(PackageManager):
"""Class to implement the conda package manager
"""
def __init__(self):
# String to be set in child classes indicating which python version you are checking. Typically should be either
# python 3.6* or python 2.7*
self.python_depends_str = None
# String of the name of the conda environment (e.g. py36 or py27, as created via container build)
self.python_env = None
# Note, currently we hard code channel config. Future changes to support the user specifying channels
# will modify this behavior
self.channel_priority = ['conda-forge', 'anaconda']
self.request_mgr = ConcurrentRequestManager()
def list_versions(self, package_name: str, labbook: LabBook, username: str) -> List[str]:
"""Method to list all available versions of a package based on the package name
Args:
package_name: Name of the package to query
labbook: Subject LabBook
username: username of current user
Returns:
list(str): Version strings
"""
# Check for package in channels, picking out version by priority
request_list = list()
for channel in self.channel_priority:
request_list.append(ConcurrentRequest(f"https://api.anaconda.org/package/{channel}/{package_name}",
headers={'Accept': 'application/json'}))
responses = self.request_mgr.resolve_many(request_list)
versions = None
for response in responses:
if response.status_code != 200:
continue
versions = response.json.get('versions')
break
if not versions:
            raise ValueError(f"Package {package_name} not found in channels {', '.join(self.channel_priority)}.")
versions.reverse()
return versions
def list_installed_packages(self, labbook: LabBook, username: str) -> List[Dict[str, str]]:
"""Method to get a list of all packages that are currently installed
Note, this will return results for the computer/container in which it is executed. To get the properties of
a LabBook container, a docker exec command would be needed from the Gigantum application container.
return format is a list of dicts with the format (name: <package name>, version: <version string>)
Returns:
list
"""
project_container = container_for_context(username, labbook=labbook)
result = project_container.run_container("conda list --no-pip --json", wait_for_output=True)
        if result:
            data = json.loads(result)
            if data:
                return [{"name": x['name'], 'version': x['version']} for x in data]
        # fall back to an empty list when the container returns no usable output
        return []
def validate_packages(self, package_list: List[Dict[str, str]], labbook: LabBook, username: str) \
-> List[PackageResult]:
"""Method to validate a list of packages, and if needed fill in any missing versions
Should check both the provided package name and version. If the version is omitted, it should be generated
from the latest version.
Args:
package_list(list): A list of dictionaries of packages to validate
labbook(str): The labbook instance
username(str): The username for the logged in user
Returns:
namedtuple: namedtuple indicating if the package and version are valid
"""
result = list()
# Check for package in channels, picking out version by priority
request_list = list()
for pkg in package_list:
for channel in self.channel_priority:
request_list.append(ConcurrentRequest(f"https://api.anaconda.org/package/{channel}/{pkg['package']}",
headers={'Accept': 'application/json'}))
responses = self.request_mgr.resolve_many(request_list)
# Repack into groups by package
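        # one request was queued per channel for each package, so chunk the flat response list into len(channel_priority)-sized tuples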
responses_per_package = list(zip(*(iter(responses),) * len(self.channel_priority)))
for package, responses in zip(package_list, responses_per_package):
versions = None
latest_version = None
for response in responses:
if response.status_code != 200:
continue
versions = response.json.get('versions')
latest_version = response.json.get('latest_version')
break
if not versions:
# Package is not found
result.append(PackageResult(package=package['package'], version=package.get('version'), error=True))
continue
if package.get('version'):
# Package has been set, so validate it
if package.get('version') in versions:
# Both package name and version are valid
result.append(PackageResult(package=package['package'], version=package.get('version'),
error=False))
else:
# The package version is not in the list, so invalid
result.append(PackageResult(package=package['package'], version=package.get('version'), error=True))
else:
# You need to look up the latest version since not included
result.append(PackageResult(package=package['package'], version=str(latest_version),
error=False))
return result
def get_packages_metadata(self, package_list: List[str], labbook: LabBook, username: str) -> List[PackageMetadata]:
"""Method to get package metadata
Args:
package_list: List of package names
labbook(str): The labbook instance
username(str): The username for the logged in user
Returns:
list
"""
def _extract_metadata(data):
"""Extraction method to pull out the docs URL and description"""
latest_val = data.get('latest_version')
description_val = data.get('summary').strip()
docs_val = data.get('doc_url')
if not docs_val:
docs_val = data.get('html_url')
return latest_val, description_val, docs_val
# Check for package in channels, picking out version by priority
request_list = list()
for pkg in package_list:
for channel in self.channel_priority:
request_list.append(ConcurrentRequest(f"https://api.anaconda.org/package/{channel}/{pkg}",
headers={'Accept': 'application/json'},
extraction_function=_extract_metadata))
responses = self.request_mgr.resolve_many(request_list)
# Repack into groups by package
responses_per_package = list(zip(*(iter(responses),) * len(self.channel_priority)))
result = list()
for package, responses in zip(package_list, responses_per_package):
data = None
for response in responses:
if response.status_code == 200:
data = response.extracted_json
break
if data:
latest_version, description, docs_url = data
result.append(PackageMetadata(package_manager="conda", package=package, latest_version=latest_version,
description=description, docs_url=docs_url))
else:
result.append(PackageMetadata(package_manager="conda", package=package, latest_version=None,
description=None, docs_url=None))
return result
def generate_docker_install_snippet(self, packages: List[Dict[str, str]], single_line: bool = False) -> List[str]:
"""Method to generate a docker snippet to install 1 or more packages
        Note: Because conda can be so slow to solve environments with conda-forge included, always single line it.
Args:
packages(list(dict)): A list of package names and versions to install
            single_line(bool): If true, collapse the installs into a single RUN line
Returns:
list
"""
package_strings = [f"{x['name']}={x['version']}" for x in packages]
if single_line:
return [f"RUN conda install -yq {' '.join(package_strings)}"]
else:
return [f"RUN conda install -yq {' '.join(package_strings)}"]
class Conda3PackageManager(CondaPackageManagerBase):
"""Class to implement the conda3 package manager
"""
def __init__(self):
super().__init__()
self.python_depends_str = 'python 3.6*'
self.python_env = 'py36'
class Conda2PackageManager(CondaPackageManagerBase):
"""Class to implement the conda2 package manager
"""
def __init__(self):
super().__init__()
self.python_depends_str = 'python 2.7*'
self.python_env = 'py27'
| en | 0.821075 | Class to implement the conda package manager # String to be set in child classes indicating which python version you are checking. Typically should be either # python 3.6* or python 2.7* # String of the name of the conda environment (e.g. py36 or py27, as created via container build) # Note, currently we hard code channel config. Future changes to support the user specifying channels # will modify this behavior Method to list all available versions of a package based on the package name Args: package_name: Name of the package to query labbook: Subject LabBook username: username of current user Returns: list(str): Version strings # Check for package in channels, picking out version by priority Method to get a list of all packages that are currently installed Note, this will return results for the computer/container in which it is executed. To get the properties of a LabBook container, a docker exec command would be needed from the Gigantum application container. return format is a list of dicts with the format (name: <package name>, version: <version string>) Returns: list Method to validate a list of packages, and if needed fill in any missing versions Should check both the provided package name and version. If the version is omitted, it should be generated from the latest version. Args: package_list(list): A list of dictionaries of packages to validate labbook(str): The labbook instance username(str): The username for the logged in user Returns: namedtuple: namedtuple indicating if the package and version are valid # Check for package in channels, picking out version by priority # Repack into groups by package # Package is not found # Package has been set, so validate it # Both package name and version are valid # The package version is not in the list, so invalid # You need to look up the latest version since not included Method to get package metadata Args: package_list: List of package names labbook(str): The labbook instance username(str): The username for the logged in user Returns: list Extraction method to pull out the docs URL and description # Check for package in channels, picking out version by priority # Repack into groups by package Method to generate a docker snippet to install 1 or more packages Note: Because conda be so slow to solve environments with conda-forge included, always single line it. Args: packages(list(dict)): A list of package names and versions to install single_line(bool): If true, collapse Returns: list Class to implement the conda3 package manager Class to implement the conda2 package manager | 2.353314 | 2 |
netchos/io/io_mpl_to_px.py | brainets/netchos | 11 | 10202 | """Conversion of Matplotlib / Seaborn inputs to plotly."""
import os.path as op
from pkg_resources import resource_filename
import json
def mpl_to_px_inputs(inputs, plt_types=None):
"""Convert typical matplotlib inputs to plotly to simplify API.
Parameters
----------
inputs : dict
Dictionary of inputs
plt_types : string or list or None
        Sub-select some plotting types (e.g. heatmap, line etc.). If None, all
types are used
Returns
-------
outputs : dict
Dictionary of converted inputs
"""
# load reference table
file = op.join(op.dirname(__file__), "io_mpl_to_px.json")
with open(file, 'r') as f:
table = json.load(f)
# go through the desired plotting types for conversion
if plt_types is None:
plt_types = list(table.keys())
if isinstance(plt_types, str):
plt_types = [plt_types]
ref = {}
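    # merge the per-type tables; later plot types override duplicate keys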
for plt_type in plt_types:
ref.update(table[plt_type])
# convert inputs
outputs = {}
for k, v in inputs.items():
if k in ref.keys():
k = ref[k]
outputs[k] = v
return outputs
| """Conversion of Matplotlib / Seaborn inputs to plotly."""
import os.path as op
from pkg_resources import resource_filename
import json
def mpl_to_px_inputs(inputs, plt_types=None):
"""Convert typical matplotlib inputs to plotly to simplify API.
Parameters
----------
inputs : dict
Dictionary of inputs
plt_types : string or list or None
        Sub-select some plotting types (e.g. heatmap, line etc.). If None, all
types are used
Returns
-------
outputs : dict
Dictionary of converted inputs
"""
# load reference table
file = op.join(op.dirname(__file__), "io_mpl_to_px.json")
with open(file, 'r') as f:
table = json.load(f)
# go through the desired plotting types for conversion
if plt_types is None:
plt_types = list(table.keys())
if isinstance(plt_types, str):
plt_types = [plt_types]
ref = {}
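    # merge the per-type tables; later plot types override duplicate keys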
for plt_type in plt_types:
ref.update(table[plt_type])
# convert inputs
outputs = {}
for k, v in inputs.items():
if k in ref.keys():
k = ref[k]
outputs[k] = v
return outputs | en | 0.415941 | Conversion of Matplotlib / Seaborn inputs to plotly. Convert typical matplotlib inputs to plotly to simplify API. Parameters ---------- inputs : dict Dictionary of inputs plt_types : string or list or None Sub select some plotting types (e.g heatmap, line etc.). If None, all types are used Returns ------- outputs : dict Dictionary of converted inputs # load reference table # go through the desired plotting types for conversion # convert inputs | 3.279682 | 3 |
fizzbuzz_for_02.py | toastyxen/FizzBuzz | 0 | 10203 | """Fizzbuzz for loop variant 3"""
for x in range(1, 101):
OUTPUT = ""
if x % 3 == 0:
OUTPUT += "Fizz"
if x % 5 == 0:
OUTPUT += "Buzz"
print(OUTPUT or x)
| """Fizzbuzz for loop variant 3"""
for x in range(1, 101):
OUTPUT = ""
if x % 3 == 0:
OUTPUT += "Fizz"
if x % 5 == 0:
OUTPUT += "Buzz"
print(OUTPUT or x)
| en | 0.564812 | Fizzbuzz for loop variant 3 | 3.897786 | 4 |
cnn/struct/layer/parse_tensor_module.py | hslee1539/GIS_GANs | 0 | 10204 | from tensor.main_module import Tensor
import numpy as np
def getTensor(value):
if type(value) is np.ndarray:
return Tensor.numpy2Tensor(value)
elif type(value) is Tensor:
return value
else:
raise Exception | from tensor.main_module import Tensor
import numpy as np
def getTensor(value):
if type(value) is np.ndarray:
return Tensor.numpy2Tensor(value)
elif type(value) is Tensor:
return value
else:
raise Exception | none | 1 | 2.842778 | 3 |
|
openstack_dashboard/dashboards/admin/volume_types/qos_specs/forms.py | hemantsonawane95/horizon-apelby | 0 | 10205 | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from django.urls import reverse
from django.utils.translation import gettext_lazy as _
from horizon import exceptions
from horizon import forms
from horizon import messages
from openstack_dashboard import api
KEY_NAME_REGEX = re.compile(r"^[a-zA-Z0-9-_:. /]+$", re.UNICODE)
KEY_ERROR_MESSAGES = {
    'invalid': _("The key must match the following regex: "
"'^[a-zA-Z0-9-_:. /]'")}
class CreateKeyValuePair(forms.SelfHandlingForm):
# this if for creating a spec key-value pair for an existing QOS Spec
key = forms.RegexField(max_length=255, label=_("Key"),
regex=KEY_NAME_REGEX,
error_messages=KEY_ERROR_MESSAGES)
value = forms.CharField(max_length=255, label=_("Value"))
def handle(self, request, data):
qos_spec_id = self.initial['qos_spec_id']
try:
# first retrieve current value of specs
specs = api.cinder.qos_spec_get(request, qos_spec_id)
# now add new key-value pair to list of specs
specs.specs[data['key']] = data['value']
api.cinder.qos_spec_set_keys(request,
qos_spec_id,
specs.specs)
msg = _('Created spec "%s".') % data['key']
messages.success(request, msg)
return True
except Exception:
redirect = reverse("horizon:admin:volume_types:index")
exceptions.handle(request,
_("Unable to create spec."),
redirect=redirect)
class EditKeyValuePair(forms.SelfHandlingForm):
value = forms.CharField(max_length=255, label=_("Value"))
# update the backend with the new qos spec value
def handle(self, request, data):
key = self.initial['key']
qos_spec_id = self.initial['qos_spec_id']
# build up new 'specs' object with all previous values plus new value
try:
# first retrieve current value of specs
specs = api.cinder.qos_spec_get_keys(request,
qos_spec_id,
raw=True)
specs.specs[key] = data['value']
api.cinder.qos_spec_set_keys(request,
qos_spec_id,
specs.specs)
msg = _('Saved spec "%s".') % key
messages.success(request, msg)
return True
except Exception:
redirect = reverse("horizon:admin:volume_types:index")
exceptions.handle(request,
_("Unable to edit spec."),
redirect=redirect)
| # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from django.urls import reverse
from django.utils.translation import gettext_lazy as _
from horizon import exceptions
from horizon import forms
from horizon import messages
from openstack_dashboard import api
KEY_NAME_REGEX = re.compile(r"^[a-zA-Z0-9-_:. /]+$", re.UNICODE)
KEY_ERROR_MESSAGES = {
    'invalid': _("The key must match the following regex: "
"'^[a-zA-Z0-9-_:. /]'")}
class CreateKeyValuePair(forms.SelfHandlingForm):
# this if for creating a spec key-value pair for an existing QOS Spec
key = forms.RegexField(max_length=255, label=_("Key"),
regex=KEY_NAME_REGEX,
error_messages=KEY_ERROR_MESSAGES)
value = forms.CharField(max_length=255, label=_("Value"))
def handle(self, request, data):
qos_spec_id = self.initial['qos_spec_id']
try:
# first retrieve current value of specs
specs = api.cinder.qos_spec_get(request, qos_spec_id)
# now add new key-value pair to list of specs
specs.specs[data['key']] = data['value']
api.cinder.qos_spec_set_keys(request,
qos_spec_id,
specs.specs)
msg = _('Created spec "%s".') % data['key']
messages.success(request, msg)
return True
except Exception:
redirect = reverse("horizon:admin:volume_types:index")
exceptions.handle(request,
_("Unable to create spec."),
redirect=redirect)
class EditKeyValuePair(forms.SelfHandlingForm):
value = forms.CharField(max_length=255, label=_("Value"))
# update the backend with the new qos spec value
def handle(self, request, data):
key = self.initial['key']
qos_spec_id = self.initial['qos_spec_id']
# build up new 'specs' object with all previous values plus new value
try:
# first retrieve current value of specs
specs = api.cinder.qos_spec_get_keys(request,
qos_spec_id,
raw=True)
specs.specs[key] = data['value']
api.cinder.qos_spec_set_keys(request,
qos_spec_id,
specs.specs)
msg = _('Saved spec "%s".') % key
messages.success(request, msg)
return True
except Exception:
redirect = reverse("horizon:admin:volume_types:index")
exceptions.handle(request,
_("Unable to edit spec."),
redirect=redirect) | en | 0.789195 | # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # this if for creating a spec key-value pair for an existing QOS Spec # first retrieve current value of specs # now add new key-value pair to list of specs # update the backend with the new qos spec value # build up new 'specs' object with all previous values plus new value # first retrieve current value of specs | 1.806878 | 2 |
data_structure/const_tree.py | alipay/StructuredLM_RTDT | 42 | 10206 | # coding=utf-8
# Copyright (c) 2021 <NAME>
import sys
LABEL_SEP = '@'
INDENT_STRING1 = '│ '
INDENT_STRING2 = '├──'
EMPTY_TOKEN = '___EMPTY___'
def print_tree(const_tree, indent=0, out=sys.stdout):
for i in range(indent - 1):
out.write(INDENT_STRING1)
if indent > 0:
out.write(INDENT_STRING2)
out.write(const_tree.tag)
if not isinstance(const_tree.children[0], ConstTree):
out.write(f' {const_tree.children[0].string}\n')
else:
out.write('\n')
for child in const_tree.children:
print_tree(child, indent + 1, out)
def _make_tree(string, make_leaf_fn, make_internal_fn):
tokens = string.replace('(', ' ( ').replace(')', ' ) ').split()
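    # simple shift-reduce parse: '(' pushes a new internal node, ')' pops it and attaches it to its parent (or makes it the root)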
index, stack = 0, []
lexicons = []
root = None
while index < len(tokens):
token = tokens[index]
index += 1
if token == ')':
if not stack:
raise ConstTreeParserError('redundant ")" at token ' + str(index))
node = stack.pop()
if not stack:
root = node
else:
stack[-1].children.append(node)
elif token == '(':
tag = tokens[index]
index += 1
stack.append(make_internal_fn(tag))
else:
if not stack:
raise ConnectionError('??? at pos ' + str(index))
new_token = []
while token != ')':
if not token != '(':
raise Exception('bracket error')
new_token.append(token)
token = tokens[index]
index += 1
# is lexicon
leaf_node = make_leaf_fn('_'.join(new_token))
lexicons.append(leaf_node)
postag_node = stack.pop()
postag_node.children.append(leaf_node)
if not stack:
root = postag_node
else:
stack[-1].children.append(postag_node)
if not root or stack:
raise ConstTreeParserError('missing ")".')
return root, lexicons
class ConstTreeParserError(Exception):
pass
class Lexicon:
__slots__ = ('string', 'span', 'parent')
def __init__(self, string, span=None):
self.string = string
self.span = span
def __str__(self):
return f'<Lexicon {self.string}>'
def __repr__(self):
return str(self)
def __eq__(self, other):
return self.string == other.string
def __hash__(self):
return hash(self.string) + 2
@property
def tag(self):
return self.string
def to_string(self, quote_lexicon):
if quote_lexicon:
return f'"{self.string}"'
return self.string
class ConstTree:
__slots__ = ('children', 'tag', 'span', 'index', 'parent', 'attrs')
ROOT_LABEL = 'ROOT'
def __init__(self, tag, children=None, span=None):
self.tag = tag
self.children = children if children is not None else []
self.span = span
self.index = None
def __str__(self):
child_string = ' + '.join(child.tag for child in self.children)
return f'{self.span} {self.tag} => {child_string}'
def __repr__(self):
return str(self)
def __getitem__(self, index):
if isinstance(index, int):
return self.children[index]
if isinstance(index, str):
for child in self.children:
if isinstance(child, ConstTree) and child.tag == index.upper():
return child
raise KeyError
def to_string(self, quote_lexicon=False):
child_string = ' '.join(child.to_string(quote_lexicon) for child in self.children)
return f'({self.tag} {child_string})'
@staticmethod
def from_string(string):
""" Construct ConstTree from parenthesis representation.
:param string: string of parenthesis representation
:return: ConstTree root and all leaf Lexicons
"""
tree, lexicons = _make_tree(string, Lexicon, ConstTree)
for index, lexicon in enumerate(lexicons):
lexicon.span = index, index + 1
tree.populate_spans_internal()
return tree, lexicons
def traverse_postorder(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.traverse_postorder()
yield self
def traverse_postorder_with_lexicons(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.traverse_postorder_with_lexicons()
else:
yield child
yield self
def generate_preterminals(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.generate_preterminals()
for child in self.children:
if isinstance(child, Lexicon):
yield self
def generate_lexicons(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.generate_lexicons()
for child in self.children:
if isinstance(child, Lexicon):
yield child
def is_binary_tree(self):
if isinstance(self.children[0], Lexicon):
return True
        return len(self.children) <= 2 and all(child.is_binary_tree() for child in self.children)
def condensed_unary_chain(self, include_preterminal=True, remove_root=None):
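        # collapse unary chains (nodes with a single ConstTree child) into one node whose tag joins the chain with LABEL_SEP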
if self.tag == remove_root:
assert len(self.children) == 1
return self.children[0].condensed_unary_chain(include_preterminal=include_preterminal)
if len(self.children) > 1:
return ConstTree(self.tag,
children=list(child.condensed_unary_chain()
for child in self.children),
span=self.span)
if isinstance(self.children[0], Lexicon):
return ConstTree((self.tag if include_preterminal else EMPTY_TOKEN),
children=list(self.children),
span=self.span)
assert isinstance(self.children[0], ConstTree)
node = self
new_tag = self.tag
while len(node.children) == 1 and isinstance(node.children[0], ConstTree):
node = node.children[0]
if include_preterminal or isinstance(node.children[0], ConstTree):
new_tag += LABEL_SEP + node.tag
if len(node.children) == 1:
children = list(node.children)
else:
children = list(child.condensed_unary_chain() for child in node.children)
return ConstTree(new_tag, children=children, span=self.span)
def expanded_unary_chain(self, add_root=None):
if isinstance(self.children[0], Lexicon):
children = list(self.children)
else:
children = list(child.expanded_unary_chain() for child in self.children)
tags = self.tag.split(LABEL_SEP)
for tag in reversed(tags):
children = [ConstTree(tag, children=children, span=self.span)]
root = children[0]
if add_root:
root = ConstTree(add_root, children=[root])
return root
def calculate_span(self):
self.span = self.children[0].span[0], self.children[-1].span[1]
def populate_spans_internal(self):
for child in self.children:
if isinstance(child, ConstTree):
child.populate_spans_internal()
self.calculate_span()
def add_postorder_index(self):
for index, node in enumerate(self.traverse_postorder()):
node.index = index
def add_parents(self, parent=None):
self.parent = parent
for child in self.children:
if isinstance(child, ConstTree):
child.add_parents(self)
def is_ancestor_of(self, other):
other = other.parent
while other is not None and other is not self:
other = other.parent
return other is not None
def generate_path_to_root(self, include_self=False):
node = self
if not include_self:
node = self.parent
while node is not None:
yield node
node = node.parent
def lowest_common_ancestor(self, other):
path = list(other.generate_path_to_root())
for node in self.generate_path_to_root():
try:
return path[path.index(node)]
except ValueError:
pass
def remove_nodes(self, filter):
_children = []
for c in self.children:
if isinstance(c, ConstTree):
if filter(c):
pass
else:
filtered_node = c.remove_nodes(filter)
_children.append(filtered_node)
else:
_children.append(c)
return ConstTree(self.tag, _children)
| # coding=utf-8
# Copyright (c) 2021 <NAME>
import sys
LABEL_SEP = '@'
INDENT_STRING1 = '│ '
INDENT_STRING2 = '├──'
EMPTY_TOKEN = '___EMPTY___'
def print_tree(const_tree, indent=0, out=sys.stdout):
for i in range(indent - 1):
out.write(INDENT_STRING1)
if indent > 0:
out.write(INDENT_STRING2)
out.write(const_tree.tag)
if not isinstance(const_tree.children[0], ConstTree):
out.write(f' {const_tree.children[0].string}\n')
else:
out.write('\n')
for child in const_tree.children:
print_tree(child, indent + 1, out)
def _make_tree(string, make_leaf_fn, make_internal_fn):
tokens = string.replace('(', ' ( ').replace(')', ' ) ').split()
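    # simple shift-reduce parse: '(' pushes a new internal node, ')' pops it and attaches it to its parent (or makes it the root)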
index, stack = 0, []
lexicons = []
root = None
while index < len(tokens):
token = tokens[index]
index += 1
if token == ')':
if not stack:
raise ConstTreeParserError('redundant ")" at token ' + str(index))
node = stack.pop()
if not stack:
root = node
else:
stack[-1].children.append(node)
elif token == '(':
tag = tokens[index]
index += 1
stack.append(make_internal_fn(tag))
else:
if not stack:
raise ConnectionError('??? at pos ' + str(index))
new_token = []
while token != ')':
if not token != '(':
raise Exception('bracket error')
new_token.append(token)
token = tokens[index]
index += 1
# is lexicon
leaf_node = make_leaf_fn('_'.join(new_token))
lexicons.append(leaf_node)
postag_node = stack.pop()
postag_node.children.append(leaf_node)
if not stack:
root = postag_node
else:
stack[-1].children.append(postag_node)
if not root or stack:
raise ConstTreeParserError('missing ")".')
return root, lexicons
class ConstTreeParserError(Exception):
pass
class Lexicon:
__slots__ = ('string', 'span', 'parent')
def __init__(self, string, span=None):
self.string = string
self.span = span
def __str__(self):
return f'<Lexicon {self.string}>'
def __repr__(self):
return str(self)
def __eq__(self, other):
return self.string == other.string
def __hash__(self):
return hash(self.string) + 2
@property
def tag(self):
return self.string
def to_string(self, quote_lexicon):
if quote_lexicon:
return f'"{self.string}"'
return self.string
class ConstTree:
__slots__ = ('children', 'tag', 'span', 'index', 'parent', 'attrs')
ROOT_LABEL = 'ROOT'
def __init__(self, tag, children=None, span=None):
self.tag = tag
self.children = children if children is not None else []
self.span = span
self.index = None
def __str__(self):
child_string = ' + '.join(child.tag for child in self.children)
return f'{self.span} {self.tag} => {child_string}'
def __repr__(self):
return str(self)
def __getitem__(self, index):
if isinstance(index, int):
return self.children[index]
if isinstance(index, str):
for child in self.children:
if isinstance(child, ConstTree) and child.tag == index.upper():
return child
raise KeyError
def to_string(self, quote_lexicon=False):
child_string = ' '.join(child.to_string(quote_lexicon) for child in self.children)
return f'({self.tag} {child_string})'
@staticmethod
def from_string(string):
""" Construct ConstTree from parenthesis representation.
:param string: string of parenthesis representation
:return: ConstTree root and all leaf Lexicons
"""
tree, lexicons = _make_tree(string, Lexicon, ConstTree)
for index, lexicon in enumerate(lexicons):
lexicon.span = index, index + 1
tree.populate_spans_internal()
return tree, lexicons
def traverse_postorder(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.traverse_postorder()
yield self
def traverse_postorder_with_lexicons(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.traverse_postorder_with_lexicons()
else:
yield child
yield self
def generate_preterminals(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.generate_preterminals()
for child in self.children:
if isinstance(child, Lexicon):
yield self
def generate_lexicons(self):
for child in self.children:
if isinstance(child, ConstTree):
yield from child.generate_lexicons()
for child in self.children:
if isinstance(child, Lexicon):
yield child
def is_binary_tree(self):
if isinstance(self.children[0], Lexicon):
return True
        return len(self.children) <= 2 and all(child.is_binary_tree() for child in self.children)
def condensed_unary_chain(self, include_preterminal=True, remove_root=None):
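        # collapse unary chains (nodes with a single ConstTree child) into one node whose tag joins the chain with LABEL_SEP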
if self.tag == remove_root:
assert len(self.children) == 1
return self.children[0].condensed_unary_chain(include_preterminal=include_preterminal)
if len(self.children) > 1:
return ConstTree(self.tag,
children=list(child.condensed_unary_chain()
for child in self.children),
span=self.span)
if isinstance(self.children[0], Lexicon):
return ConstTree((self.tag if include_preterminal else EMPTY_TOKEN),
children=list(self.children),
span=self.span)
assert isinstance(self.children[0], ConstTree)
node = self
new_tag = self.tag
while len(node.children) == 1 and isinstance(node.children[0], ConstTree):
node = node.children[0]
if include_preterminal or isinstance(node.children[0], ConstTree):
new_tag += LABEL_SEP + node.tag
if len(node.children) == 1:
children = list(node.children)
else:
children = list(child.condensed_unary_chain() for child in node.children)
return ConstTree(new_tag, children=children, span=self.span)
def expanded_unary_chain(self, add_root=None):
if isinstance(self.children[0], Lexicon):
children = list(self.children)
else:
children = list(child.expanded_unary_chain() for child in self.children)
tags = self.tag.split(LABEL_SEP)
for tag in reversed(tags):
children = [ConstTree(tag, children=children, span=self.span)]
root = children[0]
if add_root:
root = ConstTree(add_root, children=[root])
return root
def calculate_span(self):
self.span = self.children[0].span[0], self.children[-1].span[1]
def populate_spans_internal(self):
for child in self.children:
if isinstance(child, ConstTree):
child.populate_spans_internal()
self.calculate_span()
def add_postorder_index(self):
for index, node in enumerate(self.traverse_postorder()):
node.index = index
def add_parents(self, parent=None):
self.parent = parent
for child in self.children:
if isinstance(child, ConstTree):
child.add_parents(self)
def is_ancestor_of(self, other):
other = other.parent
while other is not None and other is not self:
other = other.parent
return other is not None
def generate_path_to_root(self, include_self=False):
node = self
if not include_self:
node = self.parent
while node is not None:
yield node
node = node.parent
def lowest_common_ancestor(self, other):
path = list(other.generate_path_to_root())
for node in self.generate_path_to_root():
try:
return path[path.index(node)]
except ValueError:
pass
def remove_nodes(self, filter):
_children = []
for c in self.children:
if isinstance(c, ConstTree):
if filter(c):
pass
else:
filtered_node = c.remove_nodes(filter)
_children.append(filtered_node)
else:
_children.append(c)
return ConstTree(self.tag, _children) | en | 0.743059 | # coding=utf-8 # Copyright (c) 2021 <NAME> # is lexicon Construct ConstTree from parenthesis representation. :param string: string of parenthesis representation :return: ConstTree root and all leaf Lexicons | 3.503178 | 4 |
tests/test_minimize.py | The-Ludwig/iminuit | 0 | 10207 | import pytest
from iminuit import minimize
import numpy as np
from numpy.testing import assert_allclose, assert_equal
opt = pytest.importorskip("scipy.optimize")
def func(x, *args):
c = args[0] if args else 1
return c + x[0] ** 2 + (x[1] - 1) ** 2 + (x[2] - 2) ** 2
def grad(x, *args):
return 2 * (x - (0, 1, 2))
def test_simple():
result = minimize(func, (1, 1, 1))
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 1)
assert result.nfev > 0
assert result.njev == 0
def test_gradient():
result = minimize(func, (1, 1, 1), jac=grad)
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 1)
assert result.nfev > 0
assert result.njev > 0
def test_args():
result = minimize(func, np.ones(3), args=(5,))
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 5)
assert result.nfev > 0
assert result.njev == 0
def test_callback():
trace = []
result = minimize(func, np.ones(3), callback=lambda x: trace.append(x.copy()))
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 1)
assert result.nfev == len(trace)
assert_allclose(trace[0], np.ones(3), atol=1e-2)
assert_allclose(trace[-1], result.x, atol=1e-2)
def test_tol():
ref = np.ones(2)
def rosen(par):
x, y = par
return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2
r1 = minimize(rosen, (0, 0), tol=1)
r2 = minimize(rosen, (0, 0), tol=1e-6)
assert max(np.abs(r2.x - ref)) < max(np.abs(r1.x - ref))
def test_disp(capsys):
minimize(lambda x: x ** 2, 0)
assert capsys.readouterr()[0] == ""
minimize(lambda x: x ** 2, 0, options={"disp": True})
assert capsys.readouterr()[0] != ""
def test_hessinv():
r = minimize(func, (1, 1, 1))
href = np.zeros((3, 3))
for i in range(3):
href[i, i] = 0.5
assert_allclose(r.hess_inv, href, atol=1e-8)
def test_unsupported():
with pytest.raises(ValueError):
minimize(func, (1, 1, 1), constraints=[])
with pytest.raises(ValueError):
minimize(func, (1, 1, 1), jac=True)
def test_call_limit():
ref = minimize(func, (1, 1, 1))
with pytest.warns(UserWarning):
r1 = minimize(func, (1, 1, 1), options={"maxiter": 1})
assert r1.nfev < ref.nfev
assert not r1.success
assert "Call limit" in r1.message
with pytest.warns(DeprecationWarning):
r2 = minimize(func, (1, 1, 1), options={"maxfev": 1})
assert not r2.success
assert r2.nfev == r1.nfev
r3 = minimize(func, (1, 1, 1), options={"maxfun": 1})
assert not r3.success
assert r3.nfev == r1.nfev
def test_eps():
ref = minimize(func, (1, 1, 1))
r = minimize(func, (1, 1, 1), options={"eps": 1e-10})
assert np.any(ref.x != r.x)
assert_allclose(r.x, ref.x, atol=1e-9)
def test_bad_function():
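    # the objective changes value between identical calls, so the minimizer cannot report a trustworthy minimum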
class Fcn:
n = 0
def __call__(self, x):
self.n += 1
return x ** 2 + 1e-4 * (self.n % 3)
r = minimize(Fcn(), [1], options={"maxfun": 100000000})
assert not r.success
assert "Estimated distance to minimum too large" in r.message
def test_bounds():
r1 = minimize(func, (1.5, 1.7, 1.5), bounds=opt.Bounds((1, 1.5, 1), (2, 2, 2)))
assert r1.success
assert_allclose(r1.x, (1, 1.5, 2), atol=1e-2)
r2 = minimize(func, (1.5, 1.7, 1.5), bounds=((1, 2), (1.5, 2), (1, 2)))
assert r2.success
assert_equal(r1.x, r2.x)
def test_method_warn():
with pytest.raises(ValueError):
minimize(func, (1.5, 1.7, 1.5), method="foo")
def test_hess_warn():
with pytest.warns(UserWarning):
minimize(func, (1.5, 1.7, 1.5), hess=True)
def test_unreliable_uncertainties():
r = minimize(func, (1.5, 1.7, 1.5), options={"stra": 0})
assert (
r.message
== "Optimization terminated successfully, but uncertainties are unrealiable."
)
def test_simplex():
r = minimize(func, (1.5, 1.7, 1.5), method="simplex", tol=1e-4)
assert r.success
assert_allclose(r.x, (0, 1, 2), atol=2e-3)
 | import pytest
from iminuit import minimize
import numpy as np
from numpy.testing import assert_allclose, assert_equal
opt = pytest.importorskip("scipy.optimize")
def func(x, *args):
c = args[0] if args else 1
return c + x[0] ** 2 + (x[1] - 1) ** 2 + (x[2] - 2) ** 2
def grad(x, *args):
return 2 * (x - (0, 1, 2))
def test_simple():
result = minimize(func, (1, 1, 1))
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 1)
assert result.nfev > 0
assert result.njev == 0
def test_gradient():
result = minimize(func, (1, 1, 1), jac=grad)
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 1)
assert result.nfev > 0
assert result.njev > 0
def test_args():
result = minimize(func, np.ones(3), args=(5,))
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 5)
assert result.nfev > 0
assert result.njev == 0
def test_callback():
trace = []
result = minimize(func, np.ones(3), callback=lambda x: trace.append(x.copy()))
assert_allclose(result.x, (0, 1, 2), atol=1e-8)
assert_allclose(result.fun, 1)
assert result.nfev == len(trace)
assert_allclose(trace[0], np.ones(3), atol=1e-2)
assert_allclose(trace[-1], result.x, atol=1e-2)
def test_tol():
ref = np.ones(2)
def rosen(par):
x, y = par
return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2
r1 = minimize(rosen, (0, 0), tol=1)
r2 = minimize(rosen, (0, 0), tol=1e-6)
assert max(np.abs(r2.x - ref)) < max(np.abs(r1.x - ref))
def test_disp(capsys):
minimize(lambda x: x ** 2, 0)
assert capsys.readouterr()[0] == ""
minimize(lambda x: x ** 2, 0, options={"disp": True})
assert capsys.readouterr()[0] != ""
def test_hessinv():
r = minimize(func, (1, 1, 1))
href = np.zeros((3, 3))
for i in range(3):
href[i, i] = 0.5
assert_allclose(r.hess_inv, href, atol=1e-8)
def test_unsupported():
with pytest.raises(ValueError):
minimize(func, (1, 1, 1), constraints=[])
with pytest.raises(ValueError):
minimize(func, (1, 1, 1), jac=True)
def test_call_limit():
ref = minimize(func, (1, 1, 1))
with pytest.warns(UserWarning):
r1 = minimize(func, (1, 1, 1), options={"maxiter": 1})
assert r1.nfev < ref.nfev
assert not r1.success
assert "Call limit" in r1.message
with pytest.warns(DeprecationWarning):
r2 = minimize(func, (1, 1, 1), options={"maxfev": 1})
assert not r2.success
assert r2.nfev == r1.nfev
r3 = minimize(func, (1, 1, 1), options={"maxfun": 1})
assert not r3.success
assert r3.nfev == r1.nfev
def test_eps():
ref = minimize(func, (1, 1, 1))
r = minimize(func, (1, 1, 1), options={"eps": 1e-10})
assert np.any(ref.x != r.x)
assert_allclose(r.x, ref.x, atol=1e-9)
def test_bad_function():
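    # the objective changes value between identical calls, so the minimizer cannot report a trustworthy minimum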
class Fcn:
n = 0
def __call__(self, x):
self.n += 1
return x ** 2 + 1e-4 * (self.n % 3)
r = minimize(Fcn(), [1], options={"maxfun": 100000000})
assert not r.success
assert "Estimated distance to minimum too large" in r.message
def test_bounds():
r1 = minimize(func, (1.5, 1.7, 1.5), bounds=opt.Bounds((1, 1.5, 1), (2, 2, 2)))
assert r1.success
assert_allclose(r1.x, (1, 1.5, 2), atol=1e-2)
r2 = minimize(func, (1.5, 1.7, 1.5), bounds=((1, 2), (1.5, 2), (1, 2)))
assert r2.success
assert_equal(r1.x, r2.x)
def test_method_warn():
with pytest.raises(ValueError):
minimize(func, (1.5, 1.7, 1.5), method="foo")
def test_hess_warn():
with pytest.warns(UserWarning):
minimize(func, (1.5, 1.7, 1.5), hess=True)
def test_unreliable_uncertainties():
r = minimize(func, (1.5, 1.7, 1.5), options={"stra": 0})
assert (
r.message
== "Optimization terminated successfully, but uncertainties are unrealiable."
)
def test_simplex():
r = minimize(func, (1.5, 1.7, 1.5), method="simplex", tol=1e-4)
assert r.success
assert_allclose(r.x, (0, 1, 2), atol=2e-3)
| none | 1 | 2.427253 | 2 |
|
murtanto/parsing.py | amandatv20/botfb | 1 | 10208 | # coded by: salism3
# 23 - 05 - 2020 23:18 (<NAME>)
from bs4 import BeautifulSoup as parser
from . import sorting
import re
def to_bs4(html):
return parser(html, "html.parser")
def refsrc(html):
return True if re.search(r'http.+\Wrefsrc', html) else False
def parsing_href(html, href, one = False, bs4_class = False):
data = to_bs4(html)
if one:
data = data.find("a", href = lambda x: x and href in x)
if not bs4_class and data != None:
data = sorting.to_mbasic(data["href"])
else:
data = data.find_all("a", href = lambda x: x and href in x)
if not bs4_class:
data = [sorting.to_mbasic(x["href"]) for x in data]
return data
def parsing_href_regex(html, pattern, one = False, bs4_class = False):
data = to_bs4(html)
if one:
data = data.find("a", href = lambda x: x and re.search(pattern, x))
if not bs4_class and data != None:
data = sorting.to_mbasic(data["href"])
else:
data = data.find_all("a", href = lambda x: x and re.search(pattern, x))
if not bs4_class:
data = [sorting.to_mbasic(x["href"]) for x in data]
return data
def getMyName(html):
data = to_bs4(html).find("title").text
return data
def getName(html):
data = to_bs4(html).find("title").text
return data
def getMyId(html):
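    # the numeric profile id is embedded in the /<id>/allactivity link on the profile page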
data = to_bs4(html).find("a", href = lambda x:"/allactivity" in x)["href"]
data = re.search(r"/\d+/?", data).group().replace("/", "")
return data
def getHiddenInput(html, post_action):
rv = {}
data = to_bs4(html).find("form", action = lambda x: post_action in x)
data = data.find_all("input", {"type":"hidden", "name":True, "value":True})
for x in data:
rv[x["name"]] = x["value"]
return rv
def friendRequestParser(html):
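    # each pending request exposes paired "?confirm=" and "?delete=" links; zip pairs them per request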
confirm = parsing_href(html, "?confirm=")
reject = parsing_href(html, "?delete=")
rv = list(zip(confirm, reject))
next = parsing_href(html, "?ppk=", one = True)
return {"items":rv, "next":next}
def listFriendParser(html):
data = parsing_href(html, "fref=fr_tab", bs4_class = True)
nama = [x.text for x in data]
id_ = [re.search(r"\w[\w.]+", x["href"].replace("/", "").replace("profile.php?id=", "")).group() for x in data]
img = [x["src"] for x in to_bs4(html).find_all("img", alt = lambda x: x and "profile picture" in x)]
if "/allactivity?" in html:
del img[0]
next = parsing_href(html, "unit_cursor=", one = True)
    return {"items":list(zip(nama, id_, img)), "next":next, "html":html} |
# coded by: salism3
# 23 - 05 - 2020 23:18 (<NAME>)
from bs4 import BeautifulSoup as parser
from . import sorting
import re
def to_bs4(html):
return parser(html, "html.parser")
def refsrc(html):
return True if re.search(r'http.+\Wrefsrc', html) else False
def parsing_href(html, href, one = False, bs4_class = False):
data = to_bs4(html)
if one:
data = data.find("a", href = lambda x: x and href in x)
if not bs4_class and data != None:
data = sorting.to_mbasic(data["href"])
else:
data = data.find_all("a", href = lambda x: x and href in x)
if not bs4_class:
data = [sorting.to_mbasic(x["href"]) for x in data]
return data
def parsing_href_regex(html, pattern, one = False, bs4_class = False):
data = to_bs4(html)
if one:
data = data.find("a", href = lambda x: x and re.search(pattern, x))
if not bs4_class and data != None:
data = sorting.to_mbasic(data["href"])
else:
data = data.find_all("a", href = lambda x: x and re.search(pattern, x))
if not bs4_class:
data = [sorting.to_mbasic(x["href"]) for x in data]
return data
def getMyName(html):
data = to_bs4(html).find("title").text
return data
def getName(html):
data = to_bs4(html).find("title").text
return data
def getMyId(html):
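    # the numeric profile id is embedded in the /<id>/allactivity link on the profile page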
data = to_bs4(html).find("a", href = lambda x:"/allactivity" in x)["href"]
data = re.search(r"/\d+/?", data).group().replace("/", "")
return data
def getHiddenInput(html, post_action):
rv = {}
data = to_bs4(html).find("form", action = lambda x: post_action in x)
data = data.find_all("input", {"type":"hidden", "name":True, "value":True})
for x in data:
rv[x["name"]] = x["value"]
return rv
def friendRequestParser(html):
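    # each pending request exposes paired "?confirm=" and "?delete=" links; zip pairs them per request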
confirm = parsing_href(html, "?confirm=")
reject = parsing_href(html, "?delete=")
rv = list(zip(confirm, reject))
next = parsing_href(html, "?ppk=", one = True)
return {"items":rv, "next":next}
def listFriendParser(html):
data = parsing_href(html, "fref=fr_tab", bs4_class = True)
nama = [x.text for x in data]
id_ = [re.search(r"\w[\w.]+", x["href"].replace("/", "").replace("profile.php?id=", "")).group() for x in data]
img = [x["src"] for x in to_bs4(html).find_all("img", alt = lambda x: x and "profile picture" in x)]
if "/allactivity?" in html:
del img[0]
next = parsing_href(html, "unit_cursor=", one = True)
return {"items":list(zip(nama, id_, img)), "next":next, "html":html} | en | 0.44789 | # coded by: salism3 # 23 - 05 - 2020 23:18 (<NAME>) | 2.850233 | 3 |
test/test_watchdog_status.py | ike709/tgs4-api-pyclient | 0 | 10209 | # coding: utf-8
"""
TGS API
A production scale tool for BYOND server management # noqa: E501
OpenAPI spec version: 9.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import swagger_client
from swagger_client.models.watchdog_status import WatchdogStatus # noqa: E501
from swagger_client.rest import ApiException
class TestWatchdogStatus(unittest.TestCase):
"""WatchdogStatus unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testWatchdogStatus(self):
"""Test WatchdogStatus"""
# FIXME: construct object with mandatory attributes with example values
# model = swagger_client.models.watchdog_status.WatchdogStatus() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| # coding: utf-8
"""
TGS API
A production scale tool for BYOND server management # noqa: E501
OpenAPI spec version: 9.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import swagger_client
from swagger_client.models.watchdog_status import WatchdogStatus # noqa: E501
from swagger_client.rest import ApiException
class TestWatchdogStatus(unittest.TestCase):
"""WatchdogStatus unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testWatchdogStatus(self):
"""Test WatchdogStatus"""
# FIXME: construct object with mandatory attributes with example values
# model = swagger_client.models.watchdog_status.WatchdogStatus() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| en | 0.55272 | # coding: utf-8 TGS API A production scale tool for BYOND server management # noqa: E501 OpenAPI spec version: 9.0.0 Generated by: https://github.com/swagger-api/swagger-codegen.git # noqa: E501 WatchdogStatus unit test stubs Test WatchdogStatus # FIXME: construct object with mandatory attributes with example values # model = swagger_client.models.watchdog_status.WatchdogStatus() # noqa: E501 | 1.742767 | 2 |
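The FIXME in the generated stub above asks for a model constructed with example values. One way to complete it, assuming the usual swagger-codegen Python model conventions (a no-argument constructor and a generated to_dict method); both are assumptions about the generated client, not code shown here:

import swagger_client

def test_watchdog_status_construct():
    model = swagger_client.models.watchdog_status.WatchdogStatus()
    # Generated models default every optional property to None.
    assert isinstance(model.to_dict(), dict)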
setup.py | joesan/housing-classification-example | 0 | 10210 | from setuptools import find_packages, setup
setup(
name='src',
packages=find_packages(),
version='0.1.0',
description='Python codebase for the housing classification ML problem',
author='Joesan',
license='',
)
| from setuptools import find_packages, setup
setup(
name='src',
packages=find_packages(),
version='0.1.0',
description='Python codebase for the housing classification ML problem',
author='Joesan',
license='',
)
| none | 1 | 1.049739 | 1 |
|
tests/test_models/test_backbones/test_sr_backbones/test_edvr_net.py | wangruohui/mmediting | 45 | 10211 | # Copyright (c) OpenMMLab. All rights reserved.
import pytest
import torch
from mmedit.models.backbones.sr_backbones.edvr_net import (EDVRNet,
PCDAlignment,
TSAFusion)
def test_pcd_alignment():
"""Test PCDAlignment."""
# cpu
pcd_alignment = PCDAlignment(mid_channels=4, deform_groups=2)
input_list = []
for i in range(3, 0, -1):
input_list.append(torch.rand(1, 4, 2**i, 2**i))
pcd_alignment = pcd_alignment
input_list = [v for v in input_list]
output = pcd_alignment(input_list, input_list)
assert output.shape == (1, 4, 8, 8)
with pytest.raises(AssertionError):
pcd_alignment(input_list[0:2], input_list)
# gpu
if torch.cuda.is_available():
pcd_alignment = PCDAlignment(mid_channels=4, deform_groups=2)
input_list = []
for i in range(3, 0, -1):
input_list.append(torch.rand(1, 4, 2**i, 2**i))
pcd_alignment = pcd_alignment.cuda()
input_list = [v.cuda() for v in input_list]
output = pcd_alignment(input_list, input_list)
assert output.shape == (1, 4, 8, 8)
with pytest.raises(AssertionError):
pcd_alignment(input_list[0:2], input_list)
def test_tsa_fusion():
"""Test TSAFusion."""
# cpu
tsa_fusion = TSAFusion(mid_channels=4, num_frames=5, center_frame_idx=2)
input_tensor = torch.rand(1, 5, 4, 8, 8)
output = tsa_fusion(input_tensor)
assert output.shape == (1, 4, 8, 8)
# gpu
if torch.cuda.is_available():
tsa_fusion = tsa_fusion.cuda()
input_tensor = input_tensor.cuda()
output = tsa_fusion(input_tensor)
assert output.shape == (1, 4, 8, 8)
def test_edvrnet():
"""Test EDVRNet."""
# cpu
# with tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=True)
input_tensor = torch.rand(1, 5, 3, 8, 8)
edvrnet.init_weights(pretrained=None)
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
# without tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=False)
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
with pytest.raises(AssertionError):
# The height and width of inputs should be a multiple of 4
input_tensor = torch.rand(1, 5, 3, 3, 3)
edvrnet(input_tensor)
with pytest.raises(TypeError):
# pretrained should be str or None
edvrnet.init_weights(pretrained=[1])
# gpu
if torch.cuda.is_available():
# with tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=True).cuda()
input_tensor = torch.rand(1, 5, 3, 8, 8).cuda()
edvrnet.init_weights(pretrained=None)
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
# without tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=False).cuda()
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
with pytest.raises(AssertionError):
# The height and width of inputs should be a multiple of 4
input_tensor = torch.rand(1, 5, 3, 3, 3).cuda()
edvrnet(input_tensor)
with pytest.raises(TypeError):
# pretrained should be str or None
edvrnet.init_weights(pretrained=[1])
| # Copyright (c) OpenMMLab. All rights reserved.
import pytest
import torch
from mmedit.models.backbones.sr_backbones.edvr_net import (EDVRNet,
PCDAlignment,
TSAFusion)
def test_pcd_alignment():
"""Test PCDAlignment."""
# cpu
pcd_alignment = PCDAlignment(mid_channels=4, deform_groups=2)
input_list = []
for i in range(3, 0, -1):
input_list.append(torch.rand(1, 4, 2**i, 2**i))
pcd_alignment = pcd_alignment
input_list = [v for v in input_list]
output = pcd_alignment(input_list, input_list)
assert output.shape == (1, 4, 8, 8)
with pytest.raises(AssertionError):
pcd_alignment(input_list[0:2], input_list)
# gpu
if torch.cuda.is_available():
pcd_alignment = PCDAlignment(mid_channels=4, deform_groups=2)
input_list = []
for i in range(3, 0, -1):
input_list.append(torch.rand(1, 4, 2**i, 2**i))
pcd_alignment = pcd_alignment.cuda()
input_list = [v.cuda() for v in input_list]
output = pcd_alignment(input_list, input_list)
assert output.shape == (1, 4, 8, 8)
with pytest.raises(AssertionError):
pcd_alignment(input_list[0:2], input_list)
def test_tsa_fusion():
"""Test TSAFusion."""
# cpu
tsa_fusion = TSAFusion(mid_channels=4, num_frames=5, center_frame_idx=2)
input_tensor = torch.rand(1, 5, 4, 8, 8)
output = tsa_fusion(input_tensor)
assert output.shape == (1, 4, 8, 8)
# gpu
if torch.cuda.is_available():
tsa_fusion = tsa_fusion.cuda()
input_tensor = input_tensor.cuda()
output = tsa_fusion(input_tensor)
assert output.shape == (1, 4, 8, 8)
def test_edvrnet():
"""Test EDVRNet."""
# cpu
# with tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=True)
input_tensor = torch.rand(1, 5, 3, 8, 8)
edvrnet.init_weights(pretrained=None)
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
# without tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=False)
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
with pytest.raises(AssertionError):
# The height and width of inputs should be a multiple of 4
input_tensor = torch.rand(1, 5, 3, 3, 3)
edvrnet(input_tensor)
with pytest.raises(TypeError):
# pretrained should be str or None
edvrnet.init_weights(pretrained=[1])
# gpu
if torch.cuda.is_available():
# with tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=True).cuda()
input_tensor = torch.rand(1, 5, 3, 8, 8).cuda()
edvrnet.init_weights(pretrained=None)
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
# without tsa
edvrnet = EDVRNet(
3,
3,
mid_channels=8,
num_frames=5,
deform_groups=2,
num_blocks_extraction=1,
num_blocks_reconstruction=1,
center_frame_idx=2,
with_tsa=False).cuda()
output = edvrnet(input_tensor)
assert output.shape == (1, 3, 32, 32)
with pytest.raises(AssertionError):
# The height and width of inputs should be a multiple of 4
input_tensor = torch.rand(1, 5, 3, 3, 3).cuda()
edvrnet(input_tensor)
with pytest.raises(TypeError):
# pretrained should be str or None
edvrnet.init_weights(pretrained=[1])
| en | 0.78364 | # Copyright (c) OpenMMLab. All rights reserved. Test PCDAlignment. # cpu # gpu Test TSAFusion. # cpu # gpu Test EDVRNet. # cpu # with tsa # without tsa # The height and width of inputs should be a multiple of 4 # pretrained should be str or None # gpu # with tsa # without tsa # The height and width of inputs should be a multiple of 4 # pretrained should be str or None | 2.108957 | 2 |
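Outside the test suite, the same constructor arguments give a quick inference sketch; the only constraints exercised above are that input height and width are multiples of 4 and that the network upscales by a factor of 4 (8x8 inputs produce 32x32 outputs in the tests):

import torch
from mmedit.models.backbones.sr_backbones.edvr_net import EDVRNet

net = EDVRNet(3, 3, mid_channels=8, num_frames=5, deform_groups=2,
              num_blocks_extraction=1, num_blocks_reconstruction=1,
              center_frame_idx=2, with_tsa=True)
clip = torch.rand(1, 5, 3, 16, 16)  # (batch, frames, channels, H, W), H and W multiples of 4
print(net(clip).shape)              # torch.Size([1, 3, 64, 64])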
mat2py/core/datastoreio.py | mat2py/mat2py | 0 | 10212 | # type: ignore
__all__ = [
"readDatastoreImage",
"datastore",
]
def readDatastoreImage(*args):
raise NotImplementedError("readDatastoreImage")
def datastore(*args):
raise NotImplementedError("datastore")
| # type: ignore
__all__ = [
"readDatastoreImage",
"datastore",
]
def readDatastoreImage(*args):
raise NotImplementedError("readDatastoreImage")
def datastore(*args):
raise NotImplementedError("datastore")
| it | 0.190853 | # type: ignore | 1.800924 | 2 |
enjoliver-api/tests/test_generate_groups.py | netturpin/enjoliver | 11 | 10213 | import os
from shutil import rmtree
from tempfile import mkdtemp
from unittest import TestCase
from enjoliver import generator
class GenerateGroupTestCase(TestCase):
api_uri = None
test_matchbox_path = None
test_resources_path = None
tests_path = None
@classmethod
def setUpClass(cls):
cls.tests_path = mkdtemp(dir='/tmp')
cls.test_matchbox_path = os.path.join(cls.tests_path, 'test_matchbox')
cls.test_resources_path = os.path.join(cls.tests_path, 'test_resources')
os.mkdir(cls.test_matchbox_path)
os.mkdir(cls.test_resources_path)
os.mkdir(os.path.join(cls.test_matchbox_path, 'groups'))
cls.api_uri = "http://127.0.0.1:5000"
@classmethod
def tearDownClass(cls):
rmtree(cls.tests_path)
class TestGenerateGroups(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
matchbox_path=cls.test_matchbox_path
)
cls.gen.profiles_path = cls.test_resources_path
def test_instantiate_generate_group_with_incorrect_parameters(self):
with self.assertRaises(TypeError):
generator.GenerateGroup()
def test_instantiate_generate_group_with_non_existing_matchbox_path(self):
with self.assertRaises(OSError):
generator.GenerateGroup(
api_uri='foobar',
_id='foo',
name='foo-bar',
profile='foo-bar-baz',
matchbox_path='/foo/bar'
)
def test_instantiate_generate_group(self):
sandbox = mkdtemp(dir='/tmp')
os.mkdir(os.path.join(sandbox, 'groups'))
generator.GenerateGroup(
api_uri='foobar',
_id='foo',
name='foo-bar',
profile='foo-bar-baz',
matchbox_path=sandbox
)
rmtree(sandbox)
def test_00_uri(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {'etcd_initial_cluster': '',
'api_uri': '%s' % self.gen.api_uri,
'ssh_authorized_keys': []}
self.gen._metadata()
self.assertEqual(expect['api_uri'], self.gen._target_data["metadata"]["api_uri"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': '%s' % self.gen.api_uri,
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy'
}
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="etcd-proxy.yaml",
matchbox_path=self.test_matchbox_path
)
result = new.generate()
self.assertEqual(expect["profile"], result["profile"])
self.assertEqual(expect["id"], result["id"])
self.assertEqual(expect["name"], result["name"])
self.assertEqual(expect["metadata"]["api_uri"], result["metadata"]["api_uri"])
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id=_id,
name="etcd-test",
profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
self.assertFalse(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id=_id,
name="etcd-test",
profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"one": "selector"}
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
class TestGenerateGroupsSelectorLower(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
os.environ["API_URI"] = "http://127.0.0.1:5000"
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=cls.test_matchbox_path
)
def test_00_api_uri(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {
'api_uri': "%s" % self.gen.api_uri,
'ssh_authorized_keys': []
}
self.gen._metadata()
self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, self.gen._target_data["metadata"])
def test_02_selector(self):
expect = {'mac': '08:00:27:37:28:2e'}
self.gen._selector()
self.assertEqual(expect, self.gen._target_data["selector"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': self.gen.api_uri,
'selector': {'mac': '08:00:27:37:28:2e'},
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy',
'selector': {'mac': '08:00:27:37:28:2e'}
}
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="etcd-proxy", name="etcd-proxy", profile="etcd-proxy.yaml",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=self.test_matchbox_path)
result = new.generate()
result["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, result)
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"mac": "08:00:27:37:28:2e"}
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
class TestGenerateGroupsSelectorUpper(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
os.environ["API_URI"] = "http://127.0.0.1:5000"
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
selector={"mac": "08:00:27:37:28:2E"},
matchbox_path=cls.test_matchbox_path
)
def test_00_ip_address(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {
'api_uri': "%s" % self.gen.api_uri,
'ssh_authorized_keys': []
}
self.gen._metadata()
self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, self.gen._target_data["metadata"])
def test_02_selector(self):
expect = {'mac': '08:00:27:37:28:2e'}
self.gen._selector()
self.assertEqual(expect, self.gen._target_data["selector"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': "%s" % self.gen.api_uri,
'selector': {'mac': '08:00:27:37:28:2e'},
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy',
'selector': {'mac': '08:00:27:37:28:2e'}
}
new = generator.GenerateGroup(
api_uri=self.api_uri, _id="etcd-proxy",
name="etcd-proxy",
profile="etcd-proxy.yaml",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=self.test_matchbox_path
)
result = new.generate()
result["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, result)
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"mac": "08:00:27:37:28:2e"}
)
new.dump()
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
class TestGenerateGroupsExtraMetadata(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
os.environ["API_URI"] = "http://127.0.0.1:5000"
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
selector={"mac": "08:00:27:37:28:2E"},
metadata={"etcd_initial_cluster": "static0=http://192.168.1.1:2379",
"api_seed": "http://192.168.1.2:5000"},
matchbox_path=cls.test_matchbox_path
)
def test_00_api_uri(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {'etcd_initial_cluster': 'static0=http://192.168.1.1:2379',
'api_uri': "%s" % self.gen.api_uri,
'api_seed': 'http://192.168.1.2:5000',
'ssh_authorized_keys': []}
self.gen._metadata()
self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, self.gen._target_data["metadata"])
def test_02_selector(self):
expect = {'mac': '08:00:27:37:28:2e'}
self.gen._selector()
self.assertEqual(expect, self.gen._target_data["selector"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': "%s" % self.gen.api_uri,
'selector': {'mac': '08:00:27:37:28:2e'},
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy',
'selector': {'mac': '08:00:27:37:28:2e'}
}
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="etcd-proxy", name="etcd-proxy", profile="etcd-proxy.yaml",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=self.test_matchbox_path
)
result = new.generate()
result["metadata"]["ssh_authorized_keys"] = []
self.assertEqual(expect, result)
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"mac": "08:00:27:37:28:2e"}
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
self.assertTrue(new.dump())
for i in range(10):
self.assertFalse(new.dump())
new.api_uri = "http://google.com"
self.assertTrue(new.dump())
self.assertFalse(new.dump())
| import os
from shutil import rmtree
from tempfile import mkdtemp
from unittest import TestCase
from enjoliver import generator
class GenerateGroupTestCase(TestCase):
api_uri = None
test_matchbox_path = None
test_resources_path = None
tests_path = None
@classmethod
def setUpClass(cls):
cls.tests_path = mkdtemp(dir='/tmp')
cls.test_matchbox_path = os.path.join(cls.tests_path, 'test_matchbox')
cls.test_resources_path = os.path.join(cls.tests_path, 'test_resources')
os.mkdir(cls.test_matchbox_path)
os.mkdir(cls.test_resources_path)
os.mkdir(os.path.join(cls.test_matchbox_path, 'groups'))
cls.api_uri = "http://127.0.0.1:5000"
@classmethod
def tearDownClass(cls):
rmtree(cls.tests_path)
class TestGenerateGroups(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
matchbox_path=cls.test_matchbox_path
)
cls.gen.profiles_path = cls.test_resources_path
def test_instantiate_generate_group_with_incorrect_parameters(self):
with self.assertRaises(TypeError):
generator.GenerateGroup()
def test_instantiate_generate_group_with_non_existing_matchbox_path(self):
with self.assertRaises(OSError):
generator.GenerateGroup(
api_uri='foobar',
_id='foo',
name='foo-bar',
profile='foo-bar-baz',
matchbox_path='/foo/bar'
)
def test_instantiate_generate_group(self):
sandbox = mkdtemp(dir='/tmp')
os.mkdir(os.path.join(sandbox, 'groups'))
generator.GenerateGroup(
api_uri='foobar',
_id='foo',
name='foo-bar',
profile='foo-bar-baz',
matchbox_path=sandbox
)
rmtree(sandbox)
def test_00_uri(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {'etcd_initial_cluster': '',
'api_uri': '%s' % self.gen.api_uri,
'ssh_authorized_keys': []}
self.gen._metadata()
self.assertEqual(expect['api_uri'], self.gen._target_data["metadata"]["api_uri"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': '%s' % self.gen.api_uri,
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy'
}
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="etcd-proxy.yaml",
matchbox_path=self.test_matchbox_path
)
result = new.generate()
self.assertEqual(expect["profile"], result["profile"])
self.assertEqual(expect["id"], result["id"])
self.assertEqual(expect["name"], result["name"])
self.assertEqual(expect["metadata"]["api_uri"], result["metadata"]["api_uri"])
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id=_id,
name="etcd-test",
profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
self.assertFalse(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id=_id,
name="etcd-test",
profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"one": "selector"}
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
class TestGenerateGroupsSelectorLower(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
os.environ["API_URI"] = "http://127.0.0.1:5000"
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=cls.test_matchbox_path
)
def test_00_api_uri(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {
'api_uri': "%s" % self.gen.api_uri,
'ssh_authorized_keys': []
}
self.gen._metadata()
self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, self.gen._target_data["metadata"])
def test_02_selector(self):
expect = {'mac': '08:00:27:37:28:2e'}
self.gen._selector()
self.assertEqual(expect, self.gen._target_data["selector"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': self.gen.api_uri,
'selector': {'mac': '08:00:27:37:28:2e'},
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy',
'selector': {'mac': '08:00:27:37:28:2e'}
}
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="etcd-proxy", name="etcd-proxy", profile="etcd-proxy.yaml",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=self.test_matchbox_path)
result = new.generate()
result["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, result)
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"mac": "08:00:27:37:28:2e"}
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
class TestGenerateGroupsSelectorUpper(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
os.environ["API_URI"] = "http://127.0.0.1:5000"
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
selector={"mac": "08:00:27:37:28:2E"},
matchbox_path=cls.test_matchbox_path
)
def test_00_ip_address(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {
'api_uri': "%s" % self.gen.api_uri,
'ssh_authorized_keys': []
}
self.gen._metadata()
self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, self.gen._target_data["metadata"])
def test_02_selector(self):
expect = {'mac': '08:00:27:37:28:2e'}
self.gen._selector()
self.assertEqual(expect, self.gen._target_data["selector"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': "%s" % self.gen.api_uri,
'selector': {'mac': '08:00:27:37:28:2e'},
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy',
'selector': {'mac': '08:00:27:37:28:2e'}
}
new = generator.GenerateGroup(
api_uri=self.api_uri, _id="etcd-proxy",
name="etcd-proxy",
profile="etcd-proxy.yaml",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=self.test_matchbox_path
)
result = new.generate()
result["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, result)
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"mac": "08:00:27:37:28:2e"}
)
new.dump()
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
class TestGenerateGroupsExtraMetadata(GenerateGroupTestCase):
@classmethod
def setUpClass(cls):
super().setUpClass()
os.environ["MATCHBOX_URI"] = "http://127.0.0.1:8080"
os.environ["API_URI"] = "http://127.0.0.1:5000"
cls.gen = generator.GenerateGroup(
api_uri=cls.api_uri,
_id="etcd-proxy",
name="etcd-proxy",
profile="TestGenerateProfiles",
selector={"mac": "08:00:27:37:28:2E"},
metadata={"etcd_initial_cluster": "static0=http://192.168.1.1:2379",
"api_seed": "http://192.168.1.2:5000"},
matchbox_path=cls.test_matchbox_path
)
def test_00_api_uri(self):
ip = self.gen.api_uri
self.assertIsNotNone(ip)
def test_01_metadata(self):
expect = {'etcd_initial_cluster': 'static0=http://192.168.1.1:2379',
'api_uri': "%s" % self.gen.api_uri,
'api_seed': 'http://192.168.1.2:5000',
'ssh_authorized_keys': []}
self.gen._metadata()
self.gen._target_data["metadata"]['ssh_authorized_keys'] = []
self.assertEqual(expect, self.gen._target_data["metadata"])
def test_02_selector(self):
expect = {'mac': '08:00:27:37:28:2e'}
self.gen._selector()
self.assertEqual(expect, self.gen._target_data["selector"])
def test_990_generate(self):
expect = {
'profile': 'etcd-proxy.yaml',
'metadata': {
'api_uri': "%s" % self.gen.api_uri,
'selector': {'mac': '08:00:27:37:28:2e'},
'ssh_authorized_keys': []
},
'id': 'etcd-proxy',
'name': 'etcd-proxy',
'selector': {'mac': '08:00:27:37:28:2e'}
}
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="etcd-proxy", name="etcd-proxy", profile="etcd-proxy.yaml",
selector={"mac": "08:00:27:37:28:2e"},
matchbox_path=self.test_matchbox_path
)
result = new.generate()
result["metadata"]["ssh_authorized_keys"] = []
self.assertEqual(expect, result)
def test_991_dump(self):
_id = "etcd-test-%s" % self.test_991_dump.__name__
new = generator.GenerateGroup(
api_uri=self.api_uri,
_id="%s" % _id, name="etcd-test", profile="etcd-test.yaml",
matchbox_path=self.test_matchbox_path,
selector={"mac": "08:00:27:37:28:2e"}
)
self.assertTrue(new.dump())
self.assertTrue(os.path.isfile("%s/groups/%s.json" % (self.test_matchbox_path, _id)))
os.remove("%s/groups/%s.json" % (self.test_matchbox_path, _id))
self.assertTrue(new.dump())
for i in range(10):
self.assertFalse(new.dump())
new.api_uri = "http://google.com"
self.assertTrue(new.dump())
self.assertFalse(new.dump())
| none | 1 | 2.376044 | 2 |
|
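Condensing the tests above into a single flow, a GenerateGroup can be driven like this; the matchbox path and MAC address are placeholders, and the directory must already contain a groups/ subfolder or the constructor raises OSError:

from enjoliver import generator

gen = generator.GenerateGroup(
    api_uri="http://127.0.0.1:5000",
    _id="etcd-member",
    name="etcd-member",
    profile="etcd-member.yaml",
    matchbox_path="/var/lib/matchbox",      # placeholder; needs an existing groups/ directory
    selector={"mac": "52:54:00:e8:32:5b"},  # placeholder MAC; selectors are lower-cased
)
gen.dump()   # writes groups/etcd-member.json and returns True
gen.dump()   # nothing changed, returns False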
HackerRank/Calendar Module/solution.py | nikku1234/Code-Practise | 9 | 10214 | # Enter your code here. Read input from STDIN. Print output to STDOUT
import calendar
mm,dd,yyyy = map(int,input().split())
day = ["MONDAY","TUESDAY","WEDNESDAY","THURSDAY","FRIDAY","SATURDAY","SUNDAY"]
val = calendar.weekday(yyyy, mm, dd)
print(day[val])
| # Enter your code here. Read input from STDIN. Print output to STDOUT
import calendar
mm,dd,yyyy = map(int,input().split())
day = ["MONDAY","TUESDAY","WEDNESDAY","THURSDAY","FRIDAY","SATURDAY","SUNDAY"]
val = calendar.weekday(yyyy, mm, dd)
print(day[val])
| en | 0.824269 | # Enter your code here. Read input from STDIN. Print output to STDOUT | 3.817077 | 4 |
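calendar.weekday(year, month, day) returns 0 for Monday through 6 for Sunday, which is why the lookup list starts with "MONDAY" and needs no offset. A worked check with the classic sample input, read as month day year exactly as above:

import calendar

day = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"]
# "08 05 2015" -> calendar.weekday(2015, 8, 5) == 2
print(day[calendar.weekday(2015, 8, 5)])  # WEDNESDAY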
scale/trigger/models.py | stevevarner/scale | 0 | 10215 | """Defines the models for trigger rules and events"""
from __future__ import unicode_literals
import django.contrib.postgres.fields
from django.db import models, transaction
from django.utils.timezone import now
class TriggerEventManager(models.Manager):
"""Provides additional methods for handling trigger events
"""
def create_trigger_event(self, trigger_type, rule, description, occurred):
"""Creates a new trigger event and returns the event model. The given rule model, if not None, must have already
been saved in the database (it must have an ID). The returned trigger event model will be saved in the database.
:param trigger_type: The type of the trigger that occurred
:type trigger_type: str
:param rule: The rule that triggered the event, possibly None
:type rule: :class:`trigger.models.TriggerRule`
:param description: The JSON description of the event as a dict
:type description: dict
:param occurred: When the event occurred
:type occurred: :class:`datetime.datetime`
:returns: The new trigger event
:rtype: :class:`trigger.models.TriggerEvent`
"""
if trigger_type is None:
raise Exception('Trigger event must have a type')
if description is None:
raise Exception('Trigger event must have a JSON description')
if occurred is None:
raise Exception('Trigger event must have a timestamp')
event = TriggerEvent()
event.type = trigger_type
event.rule = rule
event.description = description
event.occurred = occurred
event.save()
return event
class TriggerEvent(models.Model):
"""Represents an event where a trigger occurred
:keyword type: The type of the trigger that occurred
:type type: :class:`django.db.models.CharField`
:keyword rule: The rule that triggered this event, possibly None (some events are not triggered by rules)
:type rule: :class:`django.db.models.ForeignKey`
:keyword description: JSON description of the event. This will contain fields specific to the type of the trigger
that occurred.
:type description: :class:`django.contrib.postgres.fields.JSONField`
:keyword occurred: When the event occurred
:type occurred: :class:`django.db.models.DateTimeField`
"""
type = models.CharField(db_index=True, max_length=50)
rule = models.ForeignKey('trigger.TriggerRule', blank=True, null=True, on_delete=models.PROTECT)
description = django.contrib.postgres.fields.JSONField(default=dict)
occurred = models.DateTimeField(db_index=True)
objects = TriggerEventManager()
class Meta(object):
"""meta information for the db"""
db_table = 'trigger_event'
class TriggerRuleManager(models.Manager):
"""Provides additional methods for handling trigger rules
"""
@transaction.atomic
def archive_trigger_rule(self, trigger_rule_id):
"""Archives the trigger rule (will no longer be active) with the given ID
:param trigger_rule_id: The ID of the trigger rule to archive
:type trigger_rule_id: int
"""
rule = TriggerRule.objects.select_for_update().get(pk=trigger_rule_id)
rule.is_active = False
rule.archived = now()
rule.save()
def create_trigger_rule(self, trigger_type, configuration, name='', is_active=True):
"""Creates a new trigger rule and returns the rule model. The returned trigger rule model will be saved in the
database.
:param trigger_type: The type of this trigger rule
:type trigger_type: str
:param configuration: The rule configuration
:type configuration: :class:`trigger.configuration.TriggerRuleConfiguration`
:param name: An optional name for the trigger
:type name: str
:param is_active: Whether or not the trigger should be active
:type is_active: bool
:returns: The new trigger rule
:rtype: :class:`trigger.models.TriggerRule`
:raises trigger.configuration.exceptions.InvalidTriggerRule: If the configuration is invalid
"""
if not trigger_type:
raise Exception('Trigger rule must have a type')
if not configuration:
raise Exception('Trigger rule must have a configuration')
configuration.validate()
rule = TriggerRule()
rule.type = trigger_type
rule.name = name
rule.is_active = is_active
rule.configuration = configuration.get_dict()
rule.save()
return rule
def get_by_natural_key(self, name):
"""Django method to retrieve a trigger rule for the given natural key. NOTE: All trigger rule names are NOT
unique. This is implemented to allow the loading of defined system trigger rules which do have unique names.
:param name: The name of the trigger rule
:type name: str
:returns: The trigger rule defined by the natural key
:rtype: :class:`error.models.Error`
"""
return self.get(name=name)
class TriggerRule(models.Model):
"""Represents a rule that, when triggered, creates a trigger event
:keyword type: The type of the trigger for the rule
:type type: :class:`django.db.models.CharField`
:keyword name: The identifying name of the trigger rule used by clients for queries
:type name: :class:`django.db.models.CharField`
:keyword configuration: JSON configuration for the rule. This will contain fields specific to the type of the
trigger.
:type configuration: :class:`django.contrib.postgres.fields.JSONField`
:keyword is_active: Whether the rule is still active (false once rule is archived)
:type is_active: :class:`django.db.models.BooleanField`
:keyword created: When the rule was created
:type created: :class:`django.db.models.DateTimeField`
:keyword archived: When the rule was archived (no longer active)
:type archived: :class:`django.db.models.DateTimeField`
:keyword last_modified: When the rule was last modified
:type last_modified: :class:`django.db.models.DateTimeField`
"""
type = models.CharField(max_length=50, db_index=True)
name = models.CharField(blank=True, max_length=50)
configuration = django.contrib.postgres.fields.JSONField(default=dict)
is_active = models.BooleanField(default=True, db_index=True)
created = models.DateTimeField(auto_now_add=True)
archived = models.DateTimeField(blank=True, null=True)
last_modified = models.DateTimeField(auto_now=True)
objects = TriggerRuleManager()
def get_configuration(self):
"""Returns the configuration for this trigger rule
:returns: The configuration for this trigger rule
:rtype: :class:`trigger.configuration.trigger_rule.TriggerRuleConfiguration`
:raises :class:`trigger.configuration.exceptions.InvalidTriggerType`: If the trigger type is invalid
"""
from trigger.handler import get_trigger_rule_handler
handler = get_trigger_rule_handler(self.type)
return handler.create_configuration(self.configuration)
def natural_key(self):
"""Django method to define the natural key for a trigger rule as the name
:returns: A tuple representing the natural key
:rtype: tuple(str,)
"""
return (self.name,)
class Meta(object):
"""meta information for the db"""
db_table = 'trigger_rule'
| """Defines the models for trigger rules and events"""
from __future__ import unicode_literals
import django.contrib.postgres.fields
from django.db import models, transaction
from django.utils.timezone import now
class TriggerEventManager(models.Manager):
"""Provides additional methods for handling trigger events
"""
def create_trigger_event(self, trigger_type, rule, description, occurred):
"""Creates a new trigger event and returns the event model. The given rule model, if not None, must have already
been saved in the database (it must have an ID). The returned trigger event model will be saved in the database.
:param trigger_type: The type of the trigger that occurred
:type trigger_type: str
:param rule: The rule that triggered the event, possibly None
:type rule: :class:`trigger.models.TriggerRule`
:param description: The JSON description of the event as a dict
:type description: dict
:param occurred: When the event occurred
:type occurred: :class:`datetime.datetime`
:returns: The new trigger event
:rtype: :class:`trigger.models.TriggerEvent`
"""
if trigger_type is None:
raise Exception('Trigger event must have a type')
if description is None:
raise Exception('Trigger event must have a JSON description')
if occurred is None:
raise Exception('Trigger event must have a timestamp')
event = TriggerEvent()
event.type = trigger_type
event.rule = rule
event.description = description
event.occurred = occurred
event.save()
return event
class TriggerEvent(models.Model):
"""Represents an event where a trigger occurred
:keyword type: The type of the trigger that occurred
:type type: :class:`django.db.models.CharField`
:keyword rule: The rule that triggered this event, possibly None (some events are not triggered by rules)
:type rule: :class:`django.db.models.ForeignKey`
:keyword description: JSON description of the event. This will contain fields specific to the type of the trigger
that occurred.
:type description: :class:`django.contrib.postgres.fields.JSONField`
:keyword occurred: When the event occurred
:type occurred: :class:`django.db.models.DateTimeField`
"""
type = models.CharField(db_index=True, max_length=50)
rule = models.ForeignKey('trigger.TriggerRule', blank=True, null=True, on_delete=models.PROTECT)
description = django.contrib.postgres.fields.JSONField(default=dict)
occurred = models.DateTimeField(db_index=True)
objects = TriggerEventManager()
class Meta(object):
"""meta information for the db"""
db_table = 'trigger_event'
class TriggerRuleManager(models.Manager):
"""Provides additional methods for handling trigger rules
"""
@transaction.atomic
def archive_trigger_rule(self, trigger_rule_id):
"""Archives the trigger rule (will no longer be active) with the given ID
:param trigger_rule_id: The ID of the trigger rule to archive
:type trigger_rule_id: int
"""
rule = TriggerRule.objects.select_for_update().get(pk=trigger_rule_id)
rule.is_active = False
rule.archived = now()
rule.save()
def create_trigger_rule(self, trigger_type, configuration, name='', is_active=True):
"""Creates a new trigger rule and returns the rule model. The returned trigger rule model will be saved in the
database.
:param trigger_type: The type of this trigger rule
:type trigger_type: str
:param configuration: The rule configuration
:type configuration: :class:`trigger.configuration.TriggerRuleConfiguration`
:param name: An optional name for the trigger
:type name: str
:param is_active: Whether or not the trigger should be active
:type is_active: bool
:returns: The new trigger rule
:rtype: :class:`trigger.models.TriggerRule`
:raises trigger.configuration.exceptions.InvalidTriggerRule: If the configuration is invalid
"""
if not trigger_type:
raise Exception('Trigger rule must have a type')
if not configuration:
raise Exception('Trigger rule must have a configuration')
configuration.validate()
rule = TriggerRule()
rule.type = trigger_type
rule.name = name
rule.is_active = is_active
rule.configuration = configuration.get_dict()
rule.save()
return rule
def get_by_natural_key(self, name):
"""Django method to retrieve a trigger rule for the given natural key. NOTE: All trigger rule names are NOT
unique. This is implemented to allow the loading of defined system trigger rules which do have unique names.
:param name: The name of the trigger rule
:type name: str
:returns: The trigger rule defined by the natural key
:rtype: :class:`error.models.Error`
"""
return self.get(name=name)
class TriggerRule(models.Model):
"""Represents a rule that, when triggered, creates a trigger event
:keyword type: The type of the trigger for the rule
:type type: :class:`django.db.models.CharField`
:keyword name: The identifying name of the trigger rule used by clients for queries
:type name: :class:`django.db.models.CharField`
:keyword configuration: JSON configuration for the rule. This will contain fields specific to the type of the
trigger.
:type configuration: :class:`django.contrib.postgres.fields.JSONField`
:keyword is_active: Whether the rule is still active (false once rule is archived)
:type is_active: :class:`django.db.models.BooleanField`
:keyword created: When the rule was created
:type created: :class:`django.db.models.DateTimeField`
:keyword archived: When the rule was archived (no longer active)
:type archived: :class:`django.db.models.DateTimeField`
:keyword last_modified: When the rule was last modified
:type last_modified: :class:`django.db.models.DateTimeField`
"""
type = models.CharField(max_length=50, db_index=True)
name = models.CharField(blank=True, max_length=50)
configuration = django.contrib.postgres.fields.JSONField(default=dict)
is_active = models.BooleanField(default=True, db_index=True)
created = models.DateTimeField(auto_now_add=True)
archived = models.DateTimeField(blank=True, null=True)
last_modified = models.DateTimeField(auto_now=True)
objects = TriggerRuleManager()
def get_configuration(self):
"""Returns the configuration for this trigger rule
:returns: The configuration for this trigger rule
:rtype: :class:`trigger.configuration.trigger_rule.TriggerRuleConfiguration`
:raises :class:`trigger.configuration.exceptions.InvalidTriggerType`: If the trigger type is invalid
"""
from trigger.handler import get_trigger_rule_handler
handler = get_trigger_rule_handler(self.type)
return handler.create_configuration(self.configuration)
def natural_key(self):
"""Django method to define the natural key for a trigger rule as the name
:returns: A tuple representing the natural key
:rtype: tuple(str,)
"""
return (self.name,)
class Meta(object):
"""meta information for the db"""
db_table = 'trigger_rule'
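A sketch of how the two managers are meant to be driven from application code, assuming Django is configured for this project; the stub configuration class, the 'PARSE' trigger type and the description payload are illustrative stand-ins, not values taken from this module:

from django.utils.timezone import now
from trigger.models import TriggerEvent, TriggerRule

class StubConfiguration(object):
    """Stand-in for a concrete TriggerRuleConfiguration (hypothetical)."""
    def validate(self):
        pass
    def get_dict(self):
        return {'condition': {'media_type': 'text/plain'}}

rule = TriggerRule.objects.create_trigger_rule(
    trigger_type='PARSE',
    configuration=StubConfiguration(),
    name='parse-example',
)
event = TriggerEvent.objects.create_trigger_event(
    trigger_type='PARSE',
    rule=rule,
    description={'file_name': 'example.txt'},
    occurred=now(),
)
TriggerRule.objects.archive_trigger_rule(rule.id)  # flips is_active and stamps archived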
| en | 0.757686 | Defines the models for trigger rules and events Provides additional methods for handling trigger events Creates a new trigger event and returns the event model. The given rule model, if not None, must have already been saved in the database (it must have an ID). The returned trigger event model will be saved in the database. :param trigger_type: The type of the trigger that occurred :type trigger_type: str :param rule: The rule that triggered the event, possibly None :type rule: :class:`trigger.models.TriggerRule` :param description: The JSON description of the event as a dict :type description: dict :param occurred: When the event occurred :type occurred: :class:`datetime.datetime` :returns: The new trigger event :rtype: :class:`trigger.models.TriggerEvent` Represents an event where a trigger occurred :keyword type: The type of the trigger that occurred :type type: :class:`django.db.models.CharField` :keyword rule: The rule that triggered this event, possibly None (some events are not triggered by rules) :type rule: :class:`django.db.models.ForeignKey` :keyword description: JSON description of the event. This will contain fields specific to the type of the trigger that occurred. :type description: :class:`django.contrib.postgres.fields.JSONField` :keyword occurred: When the event occurred :type occurred: :class:`django.db.models.DateTimeField` meta information for the db Provides additional methods for handling trigger rules Archives the trigger rule (will no longer be active) with the given ID :param trigger_rule_id: The ID of the trigger rule to archive :type trigger_rule_id: int Creates a new trigger rule and returns the rule model. The returned trigger rule model will be saved in the database. :param trigger_type: The type of this trigger rule :type trigger_type: str :param configuration: The rule configuration :type configuration: :class:`trigger.configuration.TriggerRuleConfiguration` :param name: An optional name for the trigger :type name: str :param is_active: Whether or not the trigger should be active :type is_active: bool :returns: The new trigger rule :rtype: :class:`trigger.models.TriggerRule` :raises trigger.configuration.exceptions.InvalidTriggerRule: If the configuration is invalid Django method to retrieve a trigger rule for the given natural key. NOTE: All trigger rule names are NOT unique. This is implemented to allow the loading of defined system trigger rules which do have unique names. :param name: The name of the trigger rule :type name: str :returns: The trigger rule defined by the natural key :rtype: :class:`error.models.Error` Represents a rule that, when triggered, creates a trigger event :keyword type: The type of the trigger for the rule :type type: :class:`django.db.models.CharField` :keyword name: The identifying name of the trigger rule used by clients for queries :type name: :class:`django.db.models.CharField` :keyword configuration: JSON configuration for the rule. This will contain fields specific to the type of the trigger. 
:type configuration: :class:`django.contrib.postgres.fields.JSONField` :keyword is_active: Whether the rule is still active (false once rule is archived) :type is_active: :class:`django.db.models.BooleanField` :keyword created: When the rule was created :type created: :class:`django.db.models.DateTimeField` :keyword archived: When the rule was archived (no longer active) :type archived: :class:`django.db.models.DateTimeField` :keyword last_modified: When the rule was last modified :type last_modified: :class:`django.db.models.DateTimeField` Returns the configuration for this trigger rule :returns: The configuration for this trigger rule :rtype: :class:`trigger.configuration.trigger_rule.TriggerRuleConfiguration` :raises :class:`trigger.configuration.exceptions.InvalidTriggerType`: If the trigger type is invalid Django method to define the natural key for a trigger rule as the name :returns: A tuple representing the natural key :rtype: tuple(str,) meta information for the db | 2.763077 | 3 |
leetcode/0506_relative_ranks.py | chaosWsF/Python-Practice | 0 | 10216 | """
Given scores of N athletes, find their relative ranks and the people with the top
three highest scores, who will be awarded medals: "Gold Medal", "Silver Medal" and
"Bronze Medal".
Example 1:
Input: [5, 4, 3, 2, 1]
Output: ["Gold Medal", "Silver Medal", "Bronze Medal", "4", "5"]
Explanation: The first three athletes got the top three highest scores, so they
got "Gold Medal", "Silver Medal" and "Bronze Medal". For the left two athletes,
you just need to output their relative ranks according to their scores.
Note:
N is a positive integer and won't exceed 10,000.
All the scores of athletes are guaranteed to be unique.
"""
class Solution:
def findRelativeRanks(self, nums):
scores_rank = sorted(nums, reverse=True)
d = {}
for i, score in enumerate(scores_rank):
if i == 0:
d[score] = 'Gold Medal'
elif i == 1:
d[score] = 'Silver Medal'
elif i == 2:
d[score] = 'Bronze Medal'
else:
d[score] = str(i + 1)
return [d[x] for x in nums]
| """
Given scores of N athletes, find their relative ranks and the people with the top
three highest scores, who will be awarded medals: "Gold Medal", "Silver Medal" and
"Bronze Medal".
Example 1:
Input: [5, 4, 3, 2, 1]
Output: ["Gold Medal", "Silver Medal", "Bronze Medal", "4", "5"]
Explanation: The first three athletes got the top three highest scores, so they
got "Gold Medal", "Silver Medal" and "Bronze Medal". For the left two athletes,
you just need to output their relative ranks according to their scores.
Note:
N is a positive integer and won't exceed 10,000.
All the scores of athletes are guaranteed to be unique.
"""
class Solution:
def findRelativeRanks(self, nums):
scores_rank = sorted(nums, reverse=True)
d = {}
for i, score in enumerate(scores_rank):
if i == 0:
d[score] = 'Gold Medal'
elif i == 1:
d[score] = 'Silver Medal'
elif i == 2:
d[score] = 'Bronze Medal'
else:
d[score] = str(i + 1)
return [d[x] for x in nums]
| en | 0.931818 | Given scores of N athletes, find their relative ranks and the people with the top three highest scores, who will be awarded medals: "Gold Medal", "Silver Medal" and "Bronze Medal". Example 1: Input: [5, 4, 3, 2, 1] Output: ["Gold Medal", "Silver Medal", "Bronze Medal", "4", "5"] Explanation: The first three athletes got the top three highest scores, so they got "Gold Medal", "Silver Medal" and "Bronze Medal". For the left two athletes, you just need to output their relative ranks according to their scores. Note: N is a positive integer and won't exceed 10,000. All the scores of athletes are guaranteed to be unique. | 4.113877 | 4 |
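Two quick checks of the Solution class above; the first is the problem's own example, the second is an arbitrary permutation chosen to show how the non-medal ranks are rendered:

s = Solution()
print(s.findRelativeRanks([5, 4, 3, 2, 1]))   # ['Gold Medal', 'Silver Medal', 'Bronze Medal', '4', '5']
print(s.findRelativeRanks([10, 3, 8, 9, 4]))  # ['Gold Medal', '5', 'Bronze Medal', 'Silver Medal', '4']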
barriers/models/history/assessments/economic_impact.py | felix781/market-access-python-frontend | 1 | 10217 | from ..base import BaseHistoryItem, GenericHistoryItem
from ..utils import PolymorphicBase
class ArchivedHistoryItem(BaseHistoryItem):
field = "archived"
field_name = "Valuation assessment: Archived"
def get_value(self, value):
if value is True:
return "Archived"
elif value is False:
return "Unarchived"
class ExplanationHistoryItem(BaseHistoryItem):
field = "explanation"
field_name = "Valuation assessment: Explanation"
class ImpactHistoryItem(BaseHistoryItem):
field = "impact"
field_name = "Valuation assessment: Impact"
def get_value(self, value):
if value:
return value.get("name")
class EconomicImpactAssessmentHistoryItem(PolymorphicBase):
model = "economic_impact_assessment"
key = "field"
subclasses = (
ArchivedHistoryItem,
ExplanationHistoryItem,
ImpactHistoryItem,
)
default_subclass = GenericHistoryItem
class_lookup = {}
| from ..base import BaseHistoryItem, GenericHistoryItem
from ..utils import PolymorphicBase
class ArchivedHistoryItem(BaseHistoryItem):
field = "archived"
field_name = "Valuation assessment: Archived"
def get_value(self, value):
if value is True:
return "Archived"
elif value is False:
return "Unarchived"
class ExplanationHistoryItem(BaseHistoryItem):
field = "explanation"
field_name = "Valuation assessment: Explanation"
class ImpactHistoryItem(BaseHistoryItem):
field = "impact"
field_name = "Valuation assessment: Impact"
def get_value(self, value):
if value:
return value.get("name")
class EconomicImpactAssessmentHistoryItem(PolymorphicBase):
model = "economic_impact_assessment"
key = "field"
subclasses = (
ArchivedHistoryItem,
ExplanationHistoryItem,
ImpactHistoryItem,
)
default_subclass = GenericHistoryItem
class_lookup = {}
| none | 1 | 2.689965 | 3 |
|
link_prob_show.py | Rheinwalt/spatial-effects-networks | 3 | 10218 | <filename>link_prob_show.py
import sys
import numpy as np
from sern import *
ids, lon, lat = np.loadtxt('nodes', unpack = True)
links = np.loadtxt('links', dtype = 'int')
A, b = AdjacencyMatrix(ids, links)
lon, lat = lon[b], lat[b]
n = A.shape[0]
# LinkProbability expects A as triu
A = A[np.triu_indices(n, 1)]
# play around with the scale, maybe you don't need log binning?
D, x = IntegerDistances(lat, lon, scale = 50)
p = LinkProbability(A, D)
from matplotlib import pyplot as pl
pl.plot(p, 'bo')
pl.ylabel('Link probability given distance')
pl.xlabel('Bin number')
pl.savefig('link_prob_bin.png')
pl.close('all')
pl.semilogx(x, p, 'bo')
pl.ylabel('Link probability given distance')
pl.xlabel('Distance [km]')
pl.savefig('link_prob_distance.png')
| <filename>link_prob_show.py
import sys
import numpy as np
from sern import *
ids, lon, lat = np.loadtxt('nodes', unpack = True)
links = np.loadtxt('links', dtype = 'int')
A, b = AdjacencyMatrix(ids, links)
lon, lat = lon[b], lat[b]
n = A.shape[0]
# LinkProbability expects A as triu
A = A[np.triu_indices(n, 1)]
# play around with the scale, maybe you don't need log binning?
D, x = IntegerDistances(lat, lon, scale = 50)
p = LinkProbability(A, D)
from matplotlib import pyplot as pl
pl.plot(p, 'bo')
pl.ylabel('Link probability given distance')
pl.xlabel('Bin number')
pl.savefig('link_prob_bin.png')
pl.close('all')
pl.semilogx(x, p, 'bo')
pl.ylabel('Link probability given distance')
pl.xlabel('Distance [km]')
pl.savefig('link_prob_distance.png')
| en | 0.989835 | # LinkProbability expects A as triu # play around with the scale, maybe you don't need log binning? | 2.420012 | 2 |
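The script's inputs are implied by the loadtxt calls: a whitespace-separated 'nodes' file with one row per node (id, longitude, latitude) and a 'links' file with one node-id pair per undirected edge. A toy pair of input files could be generated like this (coordinates arbitrary):

import numpy as np

np.savetxt('nodes', [[0, 13.40, 52.52], [1, 2.35, 48.86], [2, -0.13, 51.51]], fmt='%.2f')
np.savetxt('links', [[0, 1], [1, 2], [0, 2]], fmt='%d')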
controller/components/app.py | isabella232/flight-lab | 15 | 10219 | # Copyright 2018 Flight Lab authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library for components related to running apps."""
import subprocess
import threading
from components import base
from protos import controller_pb2
from utils import app
class AppComponent(base.Component):
"""Component to run command-line based app on any platform.
This component can start app, restart app upon crash, and stop app.
Events:
"status_changed": when status of the app is changed.
Args:
app_component: instance of this class.
"""
def __init__(self, proto, *args, **kwargs):
"""Initializes the component.
Args:
proto: flightlab.App proto defining app details and options.
"""
super(AppComponent, self).__init__(proto, *args, **kwargs)
self._app = app.Application(
name=self.name,
bin_path=self.settings.executable_path,
arguments=(list(self.settings.arguments)
if self.settings.arguments else []),
working_dir=self.settings.working_dir,
restart_on_crash=(self.settings.restart_on_crash
if self.settings.restart_on_crash else False),
env=(self.settings.env if self.settings.env else None))
self._app.on('started', self._on_app_started)
self._app.on('stopped', self._on_app_stopped)
self._monitor = threading.Timer(1, self._check_status)
self._monitor.start()
def close(self):
if self._monitor:
self._monitor.cancel()
self._monitor = None
self._app.stop()
def _check_status(self):
if self._app.has_running_instance():
component_status = controller_pb2.Component.ON
app_status = controller_pb2.App.RUNNING
else:
component_status = controller_pb2.Component.OFF
app_status = controller_pb2.App.NOT_RUNNING
if (self.proto.status != component_status or
self.settings.status != app_status):
self.proto.status = component_status
self.settings.status = app_status
self.emit('status_changed', self)
def _start(self):
self.logger.info('[App - {0}] Starting...'.format(self.name))
self._app.start()
def _stop(self):
self.logger.info('[App - {0}] Stopping...'.format(self.name))
self._app.stop()
def _restart(self):
self._stop()
self._start()
def _on_app_started(self, app):
self.logger.info('[App - {0}] Started.'.format(self.name))
self.settings.status = controller_pb2.App.RUNNING
self.proto.status = controller_pb2.Component.ON
self.emit('status_changed', self)
def _on_app_stopped(self, app):
self.logger.info('[App - {0}] Stopped.'.format(self.name))
self.settings.status = controller_pb2.App.NOT_RUNNING
self.proto.status = controller_pb2.Component.OFF
self.emit('status_changed', self)
class CommandLineComponent(base.Component):
"""Component to run command-line based apps on any platform."""
def _start(self):
for cmd in self.settings.when_on:
self.logger.info('[{0}] Running: {1}'.format(self.name, cmd))
ret = subprocess.call(cmd)
self.logger.info('[{0}] Done (return code={1})'.format(self.name, ret))
def _stop(self):
for cmd in self.settings.when_off:
self.logger.info('[{0}] Running: {1}'.format(self.name, cmd))
ret = subprocess.call(cmd)
self.logger.info('[{0}] Done (return code={1})'.format(self.name, ret)) | # Copyright 2018 Flight Lab authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Library for components related to running apps."""
import subprocess
import threading
from components import base
from protos import controller_pb2
from utils import app
class AppComponent(base.Component):
"""Component to run command-line based app on any platform.
This component can start app, restart app upon crash, and stop app.
Events:
"status_changed": when status of the app is changed.
Args:
app_component: instance of this class.
"""
def __init__(self, proto, *args, **kwargs):
"""Initializes the component.
Args:
proto: flightlab.App proto defining app details and options.
"""
super(AppComponent, self).__init__(proto, *args, **kwargs)
self._app = app.Application(
name=self.name,
bin_path=self.settings.executable_path,
arguments=(list(self.settings.arguments)
if self.settings.arguments else []),
working_dir=self.settings.working_dir,
restart_on_crash=(self.settings.restart_on_crash
if self.settings.restart_on_crash else False),
env=(self.settings.env if self.settings.env else None))
self._app.on('started', self._on_app_started)
self._app.on('stopped', self._on_app_stopped)
self._monitor = threading.Timer(1, self._check_status)
self._monitor.start()
def close(self):
if self._monitor:
self._monitor.cancel()
self._monitor = None
self._app.stop()
def _check_status(self):
if self._app.has_running_instance():
component_status = controller_pb2.Component.ON
app_status = controller_pb2.App.RUNNING
else:
component_status = controller_pb2.Component.OFF
app_status = controller_pb2.App.NOT_RUNNING
if (self.proto.status != component_status or
self.settings.status != app_status):
self.proto.status = component_status
self.settings.status = app_status
self.emit('status_changed', self)
def _start(self):
self.logger.info('[App - {0}] Starting...'.format(self.name))
self._app.start()
def _stop(self):
self.logger.info('[App - {0}] Stopping...'.format(self.name))
self._app.stop()
def _restart(self):
self._stop()
self._start()
def _on_app_started(self, app):
self.logger.info('[App - {0}] Started.'.format(self.name))
self.settings.status = controller_pb2.App.RUNNING
self.proto.status = controller_pb2.Component.ON
self.emit('status_changed', self)
def _on_app_stopped(self, app):
self.logger.info('[App - {0}] Stopped.'.format(self.name))
self.settings.status = controller_pb2.App.NOT_RUNNING
self.proto.status = controller_pb2.Component.OFF
self.emit('status_changed', self)
class CommandLineComponent(base.Component):
"""Component to run command-line based apps on any platform."""
def _start(self):
for cmd in self.settings.when_on:
self.logger.info('[{0}] Running: {1}'.format(self.name, cmd))
ret = subprocess.call(cmd)
self.logger.info('[{0}] Done (return code={1})'.format(self.name, ret))
def _stop(self):
for cmd in self.settings.when_off:
self.logger.info('[{0}] Running: {1}'.format(self.name, cmd))
ret = subprocess.call(cmd)
self.logger.info('[{0}] Done (return code={1})'.format(self.name, ret)) | en | 0.859134 | # Copyright 2018 Flight Lab authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Library for components related to running apps. Component to run command-line based app on any platform. This component can start app, restart app upon crash, and stop app. Events: "status_changed": when status of the app is changed. Args: app_component: instance of this class. Initializes the component. Args: proto: flightlab.App proto defining app details and options. Component to run command-line based apps on any platform. | 2.215003 | 2 |
botorch/acquisition/__init__.py | jmren168/botorch | 1 | 10220 | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from .acquisition import AcquisitionFunction
from .analytic import (
AnalyticAcquisitionFunction,
ConstrainedExpectedImprovement,
ExpectedImprovement,
NoisyExpectedImprovement,
PosteriorMean,
ProbabilityOfImprovement,
UpperConfidenceBound,
)
from .fixed_feature import FixedFeatureAcquisitionFunction
from .monte_carlo import (
MCAcquisitionFunction,
qExpectedImprovement,
qNoisyExpectedImprovement,
qProbabilityOfImprovement,
qSimpleRegret,
qUpperConfidenceBound,
)
from .objective import (
ConstrainedMCObjective,
GenericMCObjective,
IdentityMCObjective,
LinearMCObjective,
MCAcquisitionObjective,
ScalarizedObjective,
)
from .utils import get_acquisition_function
__all__ = [
"AcquisitionFunction",
"AnalyticAcquisitionFunction",
"ConstrainedExpectedImprovement",
"ExpectedImprovement",
"FixedFeatureAcquisitionFunction",
"NoisyExpectedImprovement",
"PosteriorMean",
"ProbabilityOfImprovement",
"UpperConfidenceBound",
"qExpectedImprovement",
"qNoisyExpectedImprovement",
"qProbabilityOfImprovement",
"qSimpleRegret",
"qUpperConfidenceBound",
"ConstrainedMCObjective",
"GenericMCObjective",
"IdentityMCObjective",
"LinearMCObjective",
"MCAcquisitionFunction",
"MCAcquisitionObjective",
"ScalarizedObjective",
"get_acquisition_function",
]
| #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from .acquisition import AcquisitionFunction
from .analytic import (
AnalyticAcquisitionFunction,
ConstrainedExpectedImprovement,
ExpectedImprovement,
NoisyExpectedImprovement,
PosteriorMean,
ProbabilityOfImprovement,
UpperConfidenceBound,
)
from .fixed_feature import FixedFeatureAcquisitionFunction
from .monte_carlo import (
MCAcquisitionFunction,
qExpectedImprovement,
qNoisyExpectedImprovement,
qProbabilityOfImprovement,
qSimpleRegret,
qUpperConfidenceBound,
)
from .objective import (
ConstrainedMCObjective,
GenericMCObjective,
IdentityMCObjective,
LinearMCObjective,
MCAcquisitionObjective,
ScalarizedObjective,
)
from .utils import get_acquisition_function
__all__ = [
"AcquisitionFunction",
"AnalyticAcquisitionFunction",
"ConstrainedExpectedImprovement",
"ExpectedImprovement",
"FixedFeatureAcquisitionFunction",
"NoisyExpectedImprovement",
"PosteriorMean",
"ProbabilityOfImprovement",
"UpperConfidenceBound",
"qExpectedImprovement",
"qNoisyExpectedImprovement",
"qProbabilityOfImprovement",
"qSimpleRegret",
"qUpperConfidenceBound",
"ConstrainedMCObjective",
"GenericMCObjective",
"IdentityMCObjective",
"LinearMCObjective",
"MCAcquisitionFunction",
"MCAcquisitionObjective",
"ScalarizedObjective",
"get_acquisition_function",
]
| en | 0.797894 | #!/usr/bin/env python3 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved | 1.340029 | 1 |
examples/pybullet/gym/pybullet_envs/minitaur/envs/env_randomizers/minitaur_alternating_legs_env_randomizer.py | felipeek/bullet3 | 9,136 | 10221 | """Randomize the minitaur_gym_alternating_leg_env when reset() is called.
The randomization includes swing_offset, extension_offset of all legs that mimic
bent legs, desired_pitch from user input, battery voltage and motor damping.
"""
import os, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
parentdir = os.path.dirname(os.path.dirname(parentdir))
os.sys.path.insert(0, parentdir)
import numpy as np
import tensorflow.compat.v1 as tf
from pybullet_envs.minitaur.envs import env_randomizer_base
# Absolute range.
NUM_LEGS = 4
BATTERY_VOLTAGE_RANGE = (14.8, 16.8)
MOTOR_VISCOUS_DAMPING_RANGE = (0, 0.01)
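# Swing, extension and pitch perturbations are sampled uniformly in [-bound, +bound]
# and applied as offsets on top of the nominal alternating-leg gait.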
class MinitaurAlternatingLegsEnvRandomizer(env_randomizer_base.EnvRandomizerBase):
"""A randomizer that changes the minitaur_gym_alternating_leg_env."""
def __init__(self,
perturb_swing_bound=0.1,
perturb_extension_bound=0.1,
perturb_desired_pitch_bound=0.01):
super(MinitaurAlternatingLegsEnvRandomizer, self).__init__()
self.perturb_swing_bound = perturb_swing_bound
self.perturb_extension_bound = perturb_extension_bound
self.perturb_desired_pitch_bound = perturb_desired_pitch_bound
def randomize_env(self, env):
perturb_magnitude = np.random.uniform(low=-self.perturb_swing_bound,
high=self.perturb_swing_bound,
size=NUM_LEGS)
env.set_swing_offset(perturb_magnitude)
tf.logging.info("swing_offset: {}".format(perturb_magnitude))
perturb_magnitude = np.random.uniform(low=-self.perturb_extension_bound,
high=self.perturb_extension_bound,
size=NUM_LEGS)
env.set_extension_offset(perturb_magnitude)
tf.logging.info("extension_offset: {}".format(perturb_magnitude))
perturb_magnitude = np.random.uniform(low=-self.perturb_desired_pitch_bound,
high=self.perturb_desired_pitch_bound)
env.set_desired_pitch(perturb_magnitude)
tf.logging.info("desired_pitch: {}".format(perturb_magnitude))
randomized_battery_voltage = np.random.uniform(BATTERY_VOLTAGE_RANGE[0],
BATTERY_VOLTAGE_RANGE[1])
env.minitaur.SetBatteryVoltage(randomized_battery_voltage)
tf.logging.info("battery_voltage: {}".format(randomized_battery_voltage))
randomized_motor_damping = np.random.uniform(MOTOR_VISCOUS_DAMPING_RANGE[0],
MOTOR_VISCOUS_DAMPING_RANGE[1])
env.minitaur.SetMotorViscousDamping(randomized_motor_damping)
tf.logging.info("motor_damping: {}".format(randomized_motor_damping))
| """Randomize the minitaur_gym_alternating_leg_env when reset() is called.
The randomization includes swing_offset, extension_offset of all legs that mimic
bent legs, desired_pitch from user input, battery voltage and motor damping.
"""
import os, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(os.path.dirname(currentdir))
parentdir = os.path.dirname(os.path.dirname(parentdir))
os.sys.path.insert(0, parentdir)
import numpy as np
import tensorflow.compat.v1 as tf
from pybullet_envs.minitaur.envs import env_randomizer_base
# Absolute range.
NUM_LEGS = 4
BATTERY_VOLTAGE_RANGE = (14.8, 16.8)
MOTOR_VISCOUS_DAMPING_RANGE = (0, 0.01)
class MinitaurAlternatingLegsEnvRandomizer(env_randomizer_base.EnvRandomizerBase):
"""A randomizer that changes the minitaur_gym_alternating_leg_env."""
def __init__(self,
perturb_swing_bound=0.1,
perturb_extension_bound=0.1,
perturb_desired_pitch_bound=0.01):
super(MinitaurAlternatingLegsEnvRandomizer, self).__init__()
self.perturb_swing_bound = perturb_swing_bound
self.perturb_extension_bound = perturb_extension_bound
self.perturb_desired_pitch_bound = perturb_desired_pitch_bound
def randomize_env(self, env):
perturb_magnitude = np.random.uniform(low=-self.perturb_swing_bound,
high=self.perturb_swing_bound,
size=NUM_LEGS)
env.set_swing_offset(perturb_magnitude)
tf.logging.info("swing_offset: {}".format(perturb_magnitude))
perturb_magnitude = np.random.uniform(low=-self.perturb_extension_bound,
high=self.perturb_extension_bound,
size=NUM_LEGS)
env.set_extension_offset(perturb_magnitude)
tf.logging.info("extension_offset: {}".format(perturb_magnitude))
perturb_magnitude = np.random.uniform(low=-self.perturb_desired_pitch_bound,
high=self.perturb_desired_pitch_bound)
env.set_desired_pitch(perturb_magnitude)
tf.logging.info("desired_pitch: {}".format(perturb_magnitude))
randomized_battery_voltage = np.random.uniform(BATTERY_VOLTAGE_RANGE[0],
BATTERY_VOLTAGE_RANGE[1])
env.minitaur.SetBatteryVoltage(randomized_battery_voltage)
tf.logging.info("battery_voltage: {}".format(randomized_battery_voltage))
randomized_motor_damping = np.random.uniform(MOTOR_VISCOUS_DAMPING_RANGE[0],
MOTOR_VISCOUS_DAMPING_RANGE[1])
env.minitaur.SetMotorViscousDamping(randomized_motor_damping)
tf.logging.info("motor_damping: {}".format(randomized_motor_damping))
| en | 0.691932 | Randomize the minitaur_gym_alternating_leg_env when reset() is called. The randomization include swing_offset, extension_offset of all legs that mimics bent legs, desired_pitch from user input, battery voltage and motor damping. # Absolute range. A randomizer that changes the minitaur_gym_alternating_leg_env. | 2.704479 | 3 |
pygsti/modelmembers/states/tensorprodstate.py | pyGSTi-Developers/pyGSTi | 73 | 10222 | """
The TensorProductState class and supporting functionality.
"""
#***************************************************************************************************
# Copyright 2015, 2019 National Technology & Engineering Solutions of Sandia, LLC (NTESS).
# Under the terms of Contract DE-NA0003525 with NTESS, the U.S. Government retains certain rights
# in this software.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0 or in the LICENSE file in the root pyGSTi directory.
#***************************************************************************************************
import functools as _functools
import itertools as _itertools
import numpy as _np
from pygsti.modelmembers.states.state import State as _State
from pygsti.modelmembers import modelmember as _modelmember, term as _term
from pygsti.baseobjs import statespace as _statespace
from pygsti.tools import listtools as _lt
from pygsti.tools import matrixtools as _mt
class TensorProductState(_State):
"""
A state vector that is a tensor-product of other state vectors.
Parameters
----------
factors : list of States
a list of the component states to take the tensor product of.
state_space : StateSpace, optional
The state space for this operation.
"""
def __init__(self, factors, state_space):
assert(len(factors) > 0), "Must have at least one factor!"
self.factors = factors # do *not* copy - needs to reference common objects
evotype = self.factors[0]._evotype
rep = evotype.create_tensorproduct_state_rep([f._rep for f in factors], state_space)
_State.__init__(self, rep, evotype)
self.init_gpindices() # initialize our gpindices based on sub-members
self._update_rep() # initializes rep data
#Note: no to_memoized_dict needed, as ModelMember version does all we need.
@classmethod
def _from_memoized_dict(cls, mm_dict, serial_memo):
state_space = _statespace.StateSpace.from_nice_serialization(mm_dict['state_space'])
factors = [serial_memo[i] for i in mm_dict['submembers']]
return cls(factors, state_space)
def submembers(self):
"""
Get the ModelMember-derived objects contained in this one.
Returns
-------
list
"""
return self.factors # factor POVM object
def _update_rep(self):
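        # Signal the underlying tensor-product rep that its factor reps changed so it
        # can refresh any internally cached data.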
self._rep.reps_have_changed()
@property
def parameter_labels(self):
"""
An array of labels (usually strings) describing this model member's parameters.
"""
vl = _np.empty(self.num_params, dtype=object)
for factor_state, factor_local_inds in zip(self.factors, self._submember_rpindices):
vl[factor_local_inds] = factor_state.parameter_labels
return vl
def to_dense(self, on_space='minimal', scratch=None):
"""
Return this state vector as a (dense) numpy array.
        The memory in `scratch` may be used when it is not None.
Parameters
----------
on_space : {'minimal', 'Hilbert', 'HilbertSchmidt'}
The space that the returned dense operation acts upon. For unitary matrices and bra/ket vectors,
use `'Hilbert'`. For superoperator matrices and super-bra/super-ket vectors use `'HilbertSchmidt'`.
`'minimal'` means that `'Hilbert'` is used if possible given this operator's evolution type, and
otherwise `'HilbertSchmidt'` is used.
scratch : numpy.ndarray, optional
scratch space available for use.
Returns
-------
numpy.ndarray
"""
return self._rep.to_dense(on_space)
def taylor_order_terms(self, order, max_polynomial_vars=100, return_coeff_polys=False):
"""
Get the `order`-th order Taylor-expansion terms of this state vector.
This function either constructs or returns a cached list of the terms at
the given order. Each term is "rank-1", meaning that it is a state
preparation followed by or POVM effect preceded by actions on a
density matrix `rho` of the form:
`rho -> A rho B`
The coefficients of these terms are typically polynomials of the
State's parameters, where the polynomial's variable indices index the
*global* parameters of the State's parent (usually a :class:`Model`)
, not the State's local parameter array (i.e. that returned from
`to_vector`).
Parameters
----------
order : int
The order of terms to get.
max_polynomial_vars : int, optional
maximum number of variables the created polynomials can have.
return_coeff_polys : bool
Whether a parallel list of locally-indexed (using variable indices
corresponding to *this* object's parameters rather than its parent's)
polynomial coefficients should be returned as well.
Returns
-------
terms : list
A list of :class:`RankOneTerm` objects.
coefficients : list
Only present when `return_coeff_polys == True`.
A list of *compact* polynomial objects, meaning that each element
is a `(vtape,ctape)` 2-tuple formed by concatenating together the
output of :method:`Polynomial.compact`.
"""
terms = []
fnq = [int(round(_np.log2(f.dim))) // 2 for f in self.factors] # num of qubits per factor
# assumes density matrix evolution
total_nQ = sum(fnq) # total number of qubits
for p in _lt.partition_into(order, len(self.factors)):
factor_lists = [self.factors[i].taylor_order_terms(pi, max_polynomial_vars) for i, pi in enumerate(p)]
# When possible, create COLLAPSED factor_lists so each factor has just a single
# (State) pre & post op, which can be formed into the new terms'
# TensorProdState ops.
# - DON'T collapse stabilizer states & clifford ops - can't for POVMs
collapsible = False # bool(self._evotype =="svterm") # need to use reps for collapsing now... TODO?
if collapsible:
factor_lists = [[t.collapse_vec() for t in fterms] for fterms in factor_lists]
for factors in _itertools.product(*factor_lists):
# create a term with a TensorProdState - Note we always create
# "prep"-mode vectors, since even when self._prep_or_effect == "effect" these
# vectors are created with factor (prep- or effect-type) States not factor POVMs
# we workaround this by still allowing such "prep"-mode
# TensorProdStates to be represented as effects (i.e. in torep('effect'...) works)
coeff = _functools.reduce(lambda x, y: x.mult(y), [f.coeff for f in factors])
pre_rep = self._evotype.create_tensorproduct_state_rep(
[f.pre_state for f in factors if (f.pre_state is not None)], self.state_space)
post_rep = self._evotype.create_tensorproduct_state_rep(
[f.post_state for f in factors if (f.post_state is not None)], self.state_space)
term = _term.RankOnePolynomialPrepTerm.create_from(coeff, pre_rep, post_rep,
self._evotype, self.state_space)
if not collapsible: # then may need to add more ops. Assume factor ops are clifford gates
# Embed each factors ops according to their target qubit(s) and just daisy chain them
ss = _statespace.QubitSpace(total_nQ); curQ = 0
for f, nq in zip(factors, fnq):
targetLabels = tuple(range(curQ, curQ + nq)); curQ += nq
term._rep.pre_ops.extend([self._evotype.create_embedded_rep(ss, targetLabels, op)
for op in f.pre_ops]) # embed and add ops
term._rep.post_ops.extend([self._evotype.create_embedded_rep(ss, targetLabels, op)
for op in f.post_ops]) # embed and add ops
terms.append(term)
if return_coeff_polys:
def _decompose_indices(x):
return tuple(_modelmember._decompose_gpindices(
self.gpindices, _np.array(x, _np.int64)))
poly_coeffs = [t.coeff.map_indices(_decompose_indices) for t in terms] # with *local* indices
tapes = [poly.compact(complex_coeff_tape=True) for poly in poly_coeffs]
if len(tapes) > 0:
vtape = _np.concatenate([t[0] for t in tapes])
ctape = _np.concatenate([t[1] for t in tapes])
else:
vtape = _np.empty(0, _np.int64)
ctape = _np.empty(0, complex)
coeffs_as_compact_polys = (vtape, ctape)
#self.local_term_poly_coeffs[order] = coeffs_as_compact_polys #FUTURE?
return terms, coeffs_as_compact_polys
else:
return terms # Cache terms in FUTURE?
@property
def num_params(self):
"""
Get the number of independent parameters which specify this state vector.
Returns
-------
int
the number of independent parameters.
"""
return len(self.gpindices_as_array())
def to_vector(self):
"""
Get the state vector parameters as an array of values.
Returns
-------
numpy array
The parameters as a 1D array with length num_params().
"""
v = _np.empty(self.num_params, 'd')
for factor_state, factor_local_inds in zip(self.factors, self._submember_rpindices):
v[factor_local_inds] = factor_state.to_vector()
return v
def from_vector(self, v, close=False, dirty_value=True):
"""
Initialize the state vector using a 1D array of parameters.
Parameters
----------
v : numpy array
The 1D vector of state vector parameters. Length
must == num_params()
close : bool, optional
Whether `v` is close to this state vector's current
set of parameters. Under some circumstances, when this
is true this call can be completed more quickly.
dirty_value : bool, optional
The value to set this object's "dirty flag" to before exiting this
call. This is passed as an argument so it can be updated *recursively*.
Leave this set to `True` unless you know what you're doing.
Returns
-------
None
"""
for factor_state, factor_local_inds in zip(self.factors, self._submember_rpindices):
factor_state.from_vector(v[factor_local_inds], close, dirty_value)
#Update representation, which may be a dense matrix or
# just fast-kron arrays or a stabilizer state.
self._update_rep() # TODO - how does this apply to state reps??
def deriv_wrt_params(self, wrt_filter=None):
"""
        The element-wise derivative of this state vector.
Construct a matrix whose columns are the derivatives of the state vector
with respect to a single param. Thus, each column is of length
dimension and there is one column per state vector parameter.
An empty 2D array in the StaticState case (num_params == 0).
Parameters
----------
wrt_filter : list or numpy.ndarray
List of parameter indices to take derivative with respect to.
            (None means to use all of this operation's parameters.)
Returns
-------
numpy array
Array of derivatives, shape == (dimension, num_params)
"""
typ = self.factors[0].to_dense(on_space='minimal').dtype if len(self.factors) > 0 else 'd'
#HACK to deal with fact that output of to_dense is really what is differentiated
# but this may not match self.dim == self.state_space.dim, e.g. for pure state vecs.
dims = [len(fct.to_dense(on_space='minimal')) for fct in self.factors]
dim = int(_np.product(dims))
derivMx = _np.zeros((dim, self.num_params), typ)
#Product rule to compute jacobian
# loop over the spamvec/povm we differentiate wrt:
for i, (fct, fct_local_inds, fct_dim) in enumerate(zip(self.factors, self._submember_rpindices, dims)):
vec = fct
if vec.num_params == 0: continue # no contribution
deriv = vec.deriv_wrt_params(None) # TODO: use filter?? / make relative to this gate...
deriv.shape = (fct_dim, vec.num_params)
if i > 0: # factors before ith
pre = self.factors[0].to_dense(on_space='minimal')
for vecA in self.factors[1:i]:
pre = _np.kron(pre, vecA.to_dense(on_space='minimal'))
deriv = _np.kron(pre[:, None], deriv) # add a dummy 1-dim to 'pre' and do kron properly...
if i + 1 < len(self.factors): # factors after ith
post = self.factors[i + 1].to_dense(on_space='minimal')
for vecA in self.factors[i + 2:]:
post = _np.kron(post, vecA.to_dense(on_space='minimal'))
deriv = _np.kron(deriv, post[:, None]) # add a dummy 1-dim to 'post' and do kron properly...
assert(fct_local_inds is not None), \
"Error: gpindices has not been initialized for factor %d - cannot compute derivative!" % i
derivMx[:, fct_local_inds] += deriv
derivMx.shape = (dim, self.num_params) # necessary?
if wrt_filter is None:
return derivMx
else:
return _np.take(derivMx, wrt_filter, axis=1)
def has_nonzero_hessian(self):
"""
Whether this state vector has a non-zero Hessian with respect to its parameters.
Returns
-------
bool
"""
return False
def __str__(self):
s = "Tensor product %s vector with length %d\n" % (self._prep_or_effect, self.dim)
#ar = self.to_dense()
#s += _mt.mx_to_string(ar, width=4, prec=2)
# factors are just other States
s += " x ".join([_mt.mx_to_string(fct.to_dense(on_space='minimal'), width=4, prec=2) for fct in self.factors])
return s
| """
The TensorProductState class and supporting functionality.
"""
#***************************************************************************************************
# Copyright 2015, 2019 National Technology & Engineering Solutions of Sandia, LLC (NTESS).
# Under the terms of Contract DE-NA0003525 with NTESS, the U.S. Government retains certain rights
# in this software.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0 or in the LICENSE file in the root pyGSTi directory.
#***************************************************************************************************
import functools as _functools
import itertools as _itertools
import numpy as _np
from pygsti.modelmembers.states.state import State as _State
from pygsti.modelmembers import modelmember as _modelmember, term as _term
from pygsti.baseobjs import statespace as _statespace
from pygsti.tools import listtools as _lt
from pygsti.tools import matrixtools as _mt
class TensorProductState(_State):
"""
A state vector that is a tensor-product of other state vectors.
Parameters
----------
factors : list of States
a list of the component states to take the tensor product of.
state_space : StateSpace, optional
The state space for this operation.
"""
def __init__(self, factors, state_space):
assert(len(factors) > 0), "Must have at least one factor!"
self.factors = factors # do *not* copy - needs to reference common objects
evotype = self.factors[0]._evotype
rep = evotype.create_tensorproduct_state_rep([f._rep for f in factors], state_space)
_State.__init__(self, rep, evotype)
self.init_gpindices() # initialize our gpindices based on sub-members
self._update_rep() # initializes rep data
#Note: no to_memoized_dict needed, as ModelMember version does all we need.
@classmethod
def _from_memoized_dict(cls, mm_dict, serial_memo):
state_space = _statespace.StateSpace.from_nice_serialization(mm_dict['state_space'])
factors = [serial_memo[i] for i in mm_dict['submembers']]
return cls(factors, state_space)
def submembers(self):
"""
Get the ModelMember-derived objects contained in this one.
Returns
-------
list
"""
return self.factors # factor POVM object
def _update_rep(self):
self._rep.reps_have_changed()
@property
def parameter_labels(self):
"""
An array of labels (usually strings) describing this model member's parameters.
"""
vl = _np.empty(self.num_params, dtype=object)
for factor_state, factor_local_inds in zip(self.factors, self._submember_rpindices):
vl[factor_local_inds] = factor_state.parameter_labels
return vl
def to_dense(self, on_space='minimal', scratch=None):
"""
Return this state vector as a (dense) numpy array.
        The memory in `scratch` may be used when it is not None.
Parameters
----------
on_space : {'minimal', 'Hilbert', 'HilbertSchmidt'}
The space that the returned dense operation acts upon. For unitary matrices and bra/ket vectors,
use `'Hilbert'`. For superoperator matrices and super-bra/super-ket vectors use `'HilbertSchmidt'`.
`'minimal'` means that `'Hilbert'` is used if possible given this operator's evolution type, and
otherwise `'HilbertSchmidt'` is used.
scratch : numpy.ndarray, optional
scratch space available for use.
Returns
-------
numpy.ndarray
"""
return self._rep.to_dense(on_space)
def taylor_order_terms(self, order, max_polynomial_vars=100, return_coeff_polys=False):
"""
Get the `order`-th order Taylor-expansion terms of this state vector.
This function either constructs or returns a cached list of the terms at
the given order. Each term is "rank-1", meaning that it is a state
preparation followed by or POVM effect preceded by actions on a
density matrix `rho` of the form:
`rho -> A rho B`
The coefficients of these terms are typically polynomials of the
State's parameters, where the polynomial's variable indices index the
*global* parameters of the State's parent (usually a :class:`Model`)
, not the State's local parameter array (i.e. that returned from
`to_vector`).
Parameters
----------
order : int
The order of terms to get.
max_polynomial_vars : int, optional
maximum number of variables the created polynomials can have.
return_coeff_polys : bool
Whether a parallel list of locally-indexed (using variable indices
corresponding to *this* object's parameters rather than its parent's)
polynomial coefficients should be returned as well.
Returns
-------
terms : list
A list of :class:`RankOneTerm` objects.
coefficients : list
Only present when `return_coeff_polys == True`.
A list of *compact* polynomial objects, meaning that each element
is a `(vtape,ctape)` 2-tuple formed by concatenating together the
output of :method:`Polynomial.compact`.
"""
terms = []
fnq = [int(round(_np.log2(f.dim))) // 2 for f in self.factors] # num of qubits per factor
# assumes density matrix evolution
total_nQ = sum(fnq) # total number of qubits
for p in _lt.partition_into(order, len(self.factors)):
factor_lists = [self.factors[i].taylor_order_terms(pi, max_polynomial_vars) for i, pi in enumerate(p)]
# When possible, create COLLAPSED factor_lists so each factor has just a single
# (State) pre & post op, which can be formed into the new terms'
# TensorProdState ops.
# - DON'T collapse stabilizer states & clifford ops - can't for POVMs
collapsible = False # bool(self._evotype =="svterm") # need to use reps for collapsing now... TODO?
if collapsible:
factor_lists = [[t.collapse_vec() for t in fterms] for fterms in factor_lists]
for factors in _itertools.product(*factor_lists):
# create a term with a TensorProdState - Note we always create
# "prep"-mode vectors, since even when self._prep_or_effect == "effect" these
# vectors are created with factor (prep- or effect-type) States not factor POVMs
# we workaround this by still allowing such "prep"-mode
# TensorProdStates to be represented as effects (i.e. in torep('effect'...) works)
coeff = _functools.reduce(lambda x, y: x.mult(y), [f.coeff for f in factors])
pre_rep = self._evotype.create_tensorproduct_state_rep(
[f.pre_state for f in factors if (f.pre_state is not None)], self.state_space)
post_rep = self._evotype.create_tensorproduct_state_rep(
[f.post_state for f in factors if (f.post_state is not None)], self.state_space)
term = _term.RankOnePolynomialPrepTerm.create_from(coeff, pre_rep, post_rep,
self._evotype, self.state_space)
if not collapsible: # then may need to add more ops. Assume factor ops are clifford gates
# Embed each factors ops according to their target qubit(s) and just daisy chain them
ss = _statespace.QubitSpace(total_nQ); curQ = 0
for f, nq in zip(factors, fnq):
targetLabels = tuple(range(curQ, curQ + nq)); curQ += nq
term._rep.pre_ops.extend([self._evotype.create_embedded_rep(ss, targetLabels, op)
for op in f.pre_ops]) # embed and add ops
term._rep.post_ops.extend([self._evotype.create_embedded_rep(ss, targetLabels, op)
for op in f.post_ops]) # embed and add ops
terms.append(term)
if return_coeff_polys:
def _decompose_indices(x):
return tuple(_modelmember._decompose_gpindices(
self.gpindices, _np.array(x, _np.int64)))
poly_coeffs = [t.coeff.map_indices(_decompose_indices) for t in terms] # with *local* indices
tapes = [poly.compact(complex_coeff_tape=True) for poly in poly_coeffs]
if len(tapes) > 0:
vtape = _np.concatenate([t[0] for t in tapes])
ctape = _np.concatenate([t[1] for t in tapes])
else:
vtape = _np.empty(0, _np.int64)
ctape = _np.empty(0, complex)
coeffs_as_compact_polys = (vtape, ctape)
#self.local_term_poly_coeffs[order] = coeffs_as_compact_polys #FUTURE?
return terms, coeffs_as_compact_polys
else:
return terms # Cache terms in FUTURE?
@property
def num_params(self):
"""
Get the number of independent parameters which specify this state vector.
Returns
-------
int
the number of independent parameters.
"""
return len(self.gpindices_as_array())
def to_vector(self):
"""
Get the state vector parameters as an array of values.
Returns
-------
numpy array
The parameters as a 1D array with length num_params().
"""
v = _np.empty(self.num_params, 'd')
for factor_state, factor_local_inds in zip(self.factors, self._submember_rpindices):
v[factor_local_inds] = factor_state.to_vector()
return v
def from_vector(self, v, close=False, dirty_value=True):
"""
Initialize the state vector using a 1D array of parameters.
Parameters
----------
v : numpy array
The 1D vector of state vector parameters. Length
must == num_params()
close : bool, optional
Whether `v` is close to this state vector's current
set of parameters. Under some circumstances, when this
is true this call can be completed more quickly.
dirty_value : bool, optional
The value to set this object's "dirty flag" to before exiting this
call. This is passed as an argument so it can be updated *recursively*.
Leave this set to `True` unless you know what you're doing.
Returns
-------
None
"""
for factor_state, factor_local_inds in zip(self.factors, self._submember_rpindices):
factor_state.from_vector(v[factor_local_inds], close, dirty_value)
#Update representation, which may be a dense matrix or
# just fast-kron arrays or a stabilizer state.
self._update_rep() # TODO - how does this apply to state reps??
def deriv_wrt_params(self, wrt_filter=None):
"""
        The element-wise derivative of this state vector.
Construct a matrix whose columns are the derivatives of the state vector
with respect to a single param. Thus, each column is of length
dimension and there is one column per state vector parameter.
An empty 2D array in the StaticState case (num_params == 0).
Parameters
----------
wrt_filter : list or numpy.ndarray
List of parameter indices to take derivative with respect to.
            (None means to use all of this operation's parameters.)
Returns
-------
numpy array
Array of derivatives, shape == (dimension, num_params)
"""
typ = self.factors[0].to_dense(on_space='minimal').dtype if len(self.factors) > 0 else 'd'
#HACK to deal with fact that output of to_dense is really what is differentiated
# but this may not match self.dim == self.state_space.dim, e.g. for pure state vecs.
dims = [len(fct.to_dense(on_space='minimal')) for fct in self.factors]
dim = int(_np.product(dims))
derivMx = _np.zeros((dim, self.num_params), typ)
#Product rule to compute jacobian
# loop over the spamvec/povm we differentiate wrt:
for i, (fct, fct_local_inds, fct_dim) in enumerate(zip(self.factors, self._submember_rpindices, dims)):
vec = fct
if vec.num_params == 0: continue # no contribution
deriv = vec.deriv_wrt_params(None) # TODO: use filter?? / make relative to this gate...
deriv.shape = (fct_dim, vec.num_params)
if i > 0: # factors before ith
pre = self.factors[0].to_dense(on_space='minimal')
for vecA in self.factors[1:i]:
pre = _np.kron(pre, vecA.to_dense(on_space='minimal'))
deriv = _np.kron(pre[:, None], deriv) # add a dummy 1-dim to 'pre' and do kron properly...
if i + 1 < len(self.factors): # factors after ith
post = self.factors[i + 1].to_dense(on_space='minimal')
for vecA in self.factors[i + 2:]:
post = _np.kron(post, vecA.to_dense(on_space='minimal'))
deriv = _np.kron(deriv, post[:, None]) # add a dummy 1-dim to 'post' and do kron properly...
assert(fct_local_inds is not None), \
"Error: gpindices has not been initialized for factor %d - cannot compute derivative!" % i
derivMx[:, fct_local_inds] += deriv
derivMx.shape = (dim, self.num_params) # necessary?
if wrt_filter is None:
return derivMx
else:
return _np.take(derivMx, wrt_filter, axis=1)
def has_nonzero_hessian(self):
"""
Whether this state vector has a non-zero Hessian with respect to its parameters.
Returns
-------
bool
"""
return False
def __str__(self):
s = "Tensor product %s vector with length %d\n" % (self._prep_or_effect, self.dim)
#ar = self.to_dense()
#s += _mt.mx_to_string(ar, width=4, prec=2)
# factors are just other States
s += " x ".join([_mt.mx_to_string(fct.to_dense(on_space='minimal'), width=4, prec=2) for fct in self.factors])
return s
| en | 0.753729 | The TensorProductState class and supporting functionality. #*************************************************************************************************** # Copyright 2015, 2019 National Technology & Engineering Solutions of Sandia, LLC (NTESS). # Under the terms of Contract DE-NA0003525 with NTESS, the U.S. Government retains certain rights # in this software. # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except # in compliance with the License. You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 or in the LICENSE file in the root pyGSTi directory. #*************************************************************************************************** A state vector that is a tensor-product of other state vectors. Parameters ---------- factors : list of States a list of the component states to take the tensor product of. state_space : StateSpace, optional The state space for this operation. # do *not* copy - needs to reference common objects # initialize our gpindices based on sub-members # initializes rep data #Note: no to_memoized_dict needed, as ModelMember version does all we need. Get the ModelMember-derived objects contained in this one. Returns ------- list # factor POVM object An array of labels (usually strings) describing this model member's parameters. Return this state vector as a (dense) numpy array. The memory in `scratch` maybe used when it is not-None. Parameters ---------- on_space : {'minimal', 'Hilbert', 'HilbertSchmidt'} The space that the returned dense operation acts upon. For unitary matrices and bra/ket vectors, use `'Hilbert'`. For superoperator matrices and super-bra/super-ket vectors use `'HilbertSchmidt'`. `'minimal'` means that `'Hilbert'` is used if possible given this operator's evolution type, and otherwise `'HilbertSchmidt'` is used. scratch : numpy.ndarray, optional scratch space available for use. Returns ------- numpy.ndarray Get the `order`-th order Taylor-expansion terms of this state vector. This function either constructs or returns a cached list of the terms at the given order. Each term is "rank-1", meaning that it is a state preparation followed by or POVM effect preceded by actions on a density matrix `rho` of the form: `rho -> A rho B` The coefficients of these terms are typically polynomials of the State's parameters, where the polynomial's variable indices index the *global* parameters of the State's parent (usually a :class:`Model`) , not the State's local parameter array (i.e. that returned from `to_vector`). Parameters ---------- order : int The order of terms to get. max_polynomial_vars : int, optional maximum number of variables the created polynomials can have. return_coeff_polys : bool Whether a parallel list of locally-indexed (using variable indices corresponding to *this* object's parameters rather than its parent's) polynomial coefficients should be returned as well. Returns ------- terms : list A list of :class:`RankOneTerm` objects. coefficients : list Only present when `return_coeff_polys == True`. A list of *compact* polynomial objects, meaning that each element is a `(vtape,ctape)` 2-tuple formed by concatenating together the output of :method:`Polynomial.compact`. # num of qubits per factor # assumes density matrix evolution # total number of qubits # When possible, create COLLAPSED factor_lists so each factor has just a single # (State) pre & post op, which can be formed into the new terms' # TensorProdState ops. 
# - DON'T collapse stabilizer states & clifford ops - can't for POVMs # bool(self._evotype =="svterm") # need to use reps for collapsing now... TODO? # create a term with a TensorProdState - Note we always create # "prep"-mode vectors, since even when self._prep_or_effect == "effect" these # vectors are created with factor (prep- or effect-type) States not factor POVMs # we workaround this by still allowing such "prep"-mode # TensorProdStates to be represented as effects (i.e. in torep('effect'...) works) # then may need to add more ops. Assume factor ops are clifford gates # Embed each factors ops according to their target qubit(s) and just daisy chain them # embed and add ops # embed and add ops # with *local* indices #self.local_term_poly_coeffs[order] = coeffs_as_compact_polys #FUTURE? # Cache terms in FUTURE? Get the number of independent parameters which specify this state vector. Returns ------- int the number of independent parameters. Get the state vector parameters as an array of values. Returns ------- numpy array The parameters as a 1D array with length num_params(). Initialize the state vector using a 1D array of parameters. Parameters ---------- v : numpy array The 1D vector of state vector parameters. Length must == num_params() close : bool, optional Whether `v` is close to this state vector's current set of parameters. Under some circumstances, when this is true this call can be completed more quickly. dirty_value : bool, optional The value to set this object's "dirty flag" to before exiting this call. This is passed as an argument so it can be updated *recursively*. Leave this set to `True` unless you know what you're doing. Returns ------- None #Update representation, which may be a dense matrix or # just fast-kron arrays or a stabilizer state. # TODO - how does this apply to state reps?? The element-wise derivative this state vector. Construct a matrix whose columns are the derivatives of the state vector with respect to a single param. Thus, each column is of length dimension and there is one column per state vector parameter. An empty 2D array in the StaticState case (num_params == 0). Parameters ---------- wrt_filter : list or numpy.ndarray List of parameter indices to take derivative with respect to. (None means to use all the this operation's parameters.) Returns ------- numpy array Array of derivatives, shape == (dimension, num_params) #HACK to deal with fact that output of to_dense is really what is differentiated # but this may not match self.dim == self.state_space.dim, e.g. for pure state vecs. #Product rule to compute jacobian # loop over the spamvec/povm we differentiate wrt: # no contribution # TODO: use filter?? / make relative to this gate... # factors before ith # add a dummy 1-dim to 'pre' and do kron properly... # factors after ith # add a dummy 1-dim to 'post' and do kron properly... # necessary? Whether this state vector has a non-zero Hessian with respect to its parameters. Returns ------- bool #ar = self.to_dense() #s += _mt.mx_to_string(ar, width=4, prec=2) # factors are just other States | 1.80285 | 2 |
edivorce/apps/core/views/graphql.py | gerritvdm/eDivorce | 6 | 10223 | <filename>edivorce/apps/core/views/graphql.py
import graphene
import graphene_django
from django.http import HttpResponseForbidden
from graphene_django.views import GraphQLView
from graphql import GraphQLError
from edivorce.apps.core.models import Document
class PrivateGraphQLView(GraphQLView):
def dispatch(self, request, *args, **kwargs):
if not request.user.is_authenticated:
return HttpResponseForbidden()
return super().dispatch(request, *args, **kwargs)
class DocumentType(graphene_django.DjangoObjectType):
file_url = graphene.String(source='get_file_url')
content_type = graphene.String(source='get_content_type')
class Meta:
model = Document
exclude = ('id', 'file')
class Query(graphene.ObjectType):
documents = graphene.List(DocumentType, doc_type=graphene.String(required=True), party_code=graphene.Int(required=True))
def resolve_documents(self, info, **kwargs):
if info.context.user.is_anonymous:
raise GraphQLError('Unauthorized')
q = Document.objects.filter(bceid_user=info.context.user, **kwargs)
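        # If any backing file has gone missing from storage, treat the whole set as
        # stale: delete the records and return an empty queryset instead.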
for doc in q:
if not doc.file_exists():
q.delete()
return Document.objects.none()
return q
class DocumentInput(graphene.InputObjectType):
filename = graphene.String(required=True)
size = graphene.Int(required=True)
width = graphene.Int()
height = graphene.Int()
rotation = graphene.Int()
class DocumentMetaDataInput(graphene.InputObjectType):
files = graphene.List(DocumentInput, required=True)
doc_type = graphene.String(required=True)
party_code = graphene.Int(required=True)
class UpdateMetadata(graphene.Mutation):
class Arguments:
input = DocumentMetaDataInput(required=True)
documents = graphene.List(DocumentType)
def mutate(self, info, **kwargs):
input_ = kwargs['input']
documents = Document.objects.filter(bceid_user=info.context.user, doc_type=input_['doc_type'], party_code=input_['party_code'])
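        # De-duplicate the submitted file descriptors; the counts below must match the
        # stored documents exactly or the update is rejected.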
unique_files = [dict(s) for s in set(frozenset(d.items()) for d in input_['files'])]
if documents.count() != len(input_['files']) or documents.count() != len(unique_files):
raise GraphQLError("Invalid input: there must be the same number of files")
for i, file in enumerate(input_['files']):
try:
doc = documents.get(filename=file['filename'], size=file['size'])
doc.sort_order = i + 1
doc.width = file.get('width', doc.width)
doc.height = file.get('height', doc.height)
doc.rotation = file.get('rotation', doc.rotation)
if doc.rotation not in [0, 90, 180, 270]:
raise GraphQLError(f"Invalid rotation {doc.rotation}, must be 0, 90, 180, 270")
doc.save()
except Document.DoesNotExist:
raise GraphQLError(f"Couldn't find document '{file['filename']}' with size '{file['size']}'")
return UpdateMetadata(documents=documents.all())
class Mutations(graphene.ObjectType):
update_metadata = UpdateMetadata.Field()
graphql_schema = graphene.Schema(query=Query, mutation=Mutations)
| <filename>edivorce/apps/core/views/graphql.py
import graphene
import graphene_django
from django.http import HttpResponseForbidden
from graphene_django.views import GraphQLView
from graphql import GraphQLError
from edivorce.apps.core.models import Document
class PrivateGraphQLView(GraphQLView):
def dispatch(self, request, *args, **kwargs):
if not request.user.is_authenticated:
return HttpResponseForbidden()
return super().dispatch(request, *args, **kwargs)
class DocumentType(graphene_django.DjangoObjectType):
file_url = graphene.String(source='get_file_url')
content_type = graphene.String(source='get_content_type')
class Meta:
model = Document
exclude = ('id', 'file')
class Query(graphene.ObjectType):
documents = graphene.List(DocumentType, doc_type=graphene.String(required=True), party_code=graphene.Int(required=True))
def resolve_documents(self, info, **kwargs):
if info.context.user.is_anonymous:
raise GraphQLError('Unauthorized')
q = Document.objects.filter(bceid_user=info.context.user, **kwargs)
for doc in q:
if not doc.file_exists():
q.delete()
return Document.objects.none()
return q
class DocumentInput(graphene.InputObjectType):
filename = graphene.String(required=True)
size = graphene.Int(required=True)
width = graphene.Int()
height = graphene.Int()
rotation = graphene.Int()
class DocumentMetaDataInput(graphene.InputObjectType):
files = graphene.List(DocumentInput, required=True)
doc_type = graphene.String(required=True)
party_code = graphene.Int(required=True)
class UpdateMetadata(graphene.Mutation):
class Arguments:
input = DocumentMetaDataInput(required=True)
documents = graphene.List(DocumentType)
def mutate(self, info, **kwargs):
input_ = kwargs['input']
documents = Document.objects.filter(bceid_user=info.context.user, doc_type=input_['doc_type'], party_code=input_['party_code'])
unique_files = [dict(s) for s in set(frozenset(d.items()) for d in input_['files'])]
if documents.count() != len(input_['files']) or documents.count() != len(unique_files):
raise GraphQLError("Invalid input: there must be the same number of files")
for i, file in enumerate(input_['files']):
try:
doc = documents.get(filename=file['filename'], size=file['size'])
doc.sort_order = i + 1
doc.width = file.get('width', doc.width)
doc.height = file.get('height', doc.height)
doc.rotation = file.get('rotation', doc.rotation)
if doc.rotation not in [0, 90, 180, 270]:
raise GraphQLError(f"Invalid rotation {doc.rotation}, must be 0, 90, 180, 270")
doc.save()
except Document.DoesNotExist:
raise GraphQLError(f"Couldn't find document '{file['filename']}' with size '{file['size']}'")
return UpdateMetadata(documents=documents.all())
class Mutations(graphene.ObjectType):
update_metadata = UpdateMetadata.Field()
graphql_schema = graphene.Schema(query=Query, mutation=Mutations)
| none | 1 | 2.255126 | 2 |
|
amazing/maze.py | danieloconell/maze-solver | 0 | 10224 | from .exceptions import MazeNotSolved, AlgorithmNotFound
from .dijkstra import Dijkstra
from .astar import Astar
from functools import wraps
import warnings
from daedalus import Maze as _maze
from PIL import Image
warnings.simplefilter("once", UserWarning)
class Maze:
"""
Create a maze and solve it.
Available algorithms:
dijkstra
astar (WIP)
Steps:
1. Create maze using the daedalus library.
2. Convert maze to graph.
3. Solve maze with algorithm.
"""
WHITE = (0, 0, 0)
BLACK = (255, 255, 255)
RED = (255, 0, 0)
def __init__(self, width, height, algorithm="dijkstra"):
"""Set algorithm to be used when solving.
Args:
algorithm (str) to be used when solving maze
width (int) of maze in pixels
height (int) of maze in pixels
"""
self.algorithm = algorithm
if not width % 2 or not height % 2:
warnings.warn(
"Using even width or height, use even numbers for optimal images"
)
self._create_maze(width, height)
self._create_graph()
self.width = width
self.height = height
def _create_maze(self, width, height):
"""Make maze to be solved and add border to maze.
Args:
width (int) of maze
height (int) of maze
"""
# create maze
self.maze = _maze(width, height)
self.maze.create_perfect()
# define maze variables
self.entrance = self.maze.entrance
self.exit = self.maze.exit
# add index to maze
self.maze = {
row_i: {item_i: item for item_i, item in enumerate(row)}
for row_i, row in enumerate(self.maze)
}
def _create_graph(self):
"""Remove unnecessary states from maze and convert maze to graph to be
solved."""
self.graph = {}
# convert to graph
for column in self.maze.keys():
for row in self.maze[column].keys():
item = self.maze[column][row]
if item != 1:
neighbours = []
try:
if self.maze[column][row - 1] != 1:
neighbours.append(["left", (column, row - 1)])
except KeyError:
None
try:
if self.maze[column][row + 1] != 1:
neighbours.append(["right", (column, row + 1)])
except KeyError:
None
try:
if self.maze[column - 1][row] != 1:
neighbours.append(["above", (column - 1, row)])
except KeyError:
None
try:
if self.maze[column + 1][row] != 1:
neighbours.append(["below", (column + 1, row)])
except KeyError:
None
self.graph[(column, row)] = {x[:][1]: 1 for x in neighbours}
# TODO: remove unnecessary states
def _maze_maker(file_name):
def real_decorator(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
data = []
for row_i, row in enumerate(list(self.maze)):
for item_i, item in enumerate(self.maze[row].values()):
func(self, data, item, row_i=row_i, item_i=item_i)
# save maze
image = Image.new("RGB", (self.width, self.height))
image.putdata(data)
image.save(file_name)
return wrapper
return real_decorator
@_maze_maker("maze.png")
def save(self, data, item, row_i=None, item_i=None):
"""Save maze locally as an image."""
# invert maze because maze is incorrect
if item:
data.append(self.WHITE)
else:
data.append(self.BLACK)
def solve(self):
""" Solve maze using specified algorithm.
Returns:
shortest path as a queue from start to finish of maze
"""
if self.algorithm == "astar":
algorithm = Astar()
elif self.algorithm == "dijkstra":
algorithm = Dijkstra()
else:
raise AlgorithmNotFound(
f"Invalid algorithm: {self.algorithm}. See help({type(self).__name__}) for available algorithms."
)
# add nodes to graph
for node in self.graph:
algorithm.add_node(node, self.graph[node])
# pydaedalus stores y then x value which need to be reversed
self.entrance = tuple(reversed(self.entrance))
self.exit = tuple(reversed(self.exit))
self.path = algorithm.shortest_path(self.entrance, self.exit)
@_maze_maker("solution.png")
def save_solution(self, data, item, row_i=None, item_i=None):
"""Save maze image and the shortest path."""
if not hasattr(self, "path"):
raise MazeNotSolved(
f"Maze must be solved to save solution. Run {type(self).__name__}.solve() first."
)
if (row_i, item_i) in self.path:
data.append(self.RED)
elif item:
data.append(self.WHITE)
else:
data.append(self.BLACK)
def __str__(self):
"""Just cause it looks nice."""
string = []
for row in self.maze:
string.append(["█" if item else " " for item in self.maze[row].values()])
return "\n".join(["".join(line) for line in string])
def __repr__(self):
"""Easier on the eyes."""
return f"Maze(algorithm='{self.algorithm}', width={self.width}, height={self.height})"
| from .exceptions import MazeNotSolved, AlgorithmNotFound
from .dijkstra import Dijkstra
from .astar import Astar
from functools import wraps
import warnings
from daedalus import Maze as _maze
from PIL import Image
warnings.simplefilter("once", UserWarning)
class Maze:
"""
Create a maze and solve it.
Available algorithms:
dijkstra
astar (WIP)
Steps:
1. Create maze using the daedalus library.
2. Convert maze to graph.
3. Solve maze with algorithm.
"""
WHITE = (0, 0, 0)
BLACK = (255, 255, 255)
RED = (255, 0, 0)
def __init__(self, width, height, algorithm="dijkstra"):
"""Set algorithm to be used when solving.
Args:
algorithm (str) to be used when solving maze
width (int) of maze in pixels
height (int) of maze in pixels
"""
self.algorithm = algorithm
if not width % 2 or not height % 2:
warnings.warn(
"Using even width or height, use even numbers for optimal images"
)
self._create_maze(width, height)
self._create_graph()
self.width = width
self.height = height
def _create_maze(self, width, height):
"""Make maze to be solved and add border to maze.
Args:
width (int) of maze
height (int) of maze
"""
# create maze
self.maze = _maze(width, height)
self.maze.create_perfect()
# define maze variables
self.entrance = self.maze.entrance
self.exit = self.maze.exit
# add index to maze
self.maze = {
row_i: {item_i: item for item_i, item in enumerate(row)}
for row_i, row in enumerate(self.maze)
}
def _create_graph(self):
"""Remove unnecessary states from maze and convert maze to graph to be
solved."""
self.graph = {}
# convert to graph
for column in self.maze.keys():
for row in self.maze[column].keys():
item = self.maze[column][row]
if item != 1:
neighbours = []
try:
if self.maze[column][row - 1] != 1:
neighbours.append(["left", (column, row - 1)])
except KeyError:
None
try:
if self.maze[column][row + 1] != 1:
neighbours.append(["right", (column, row + 1)])
except KeyError:
None
try:
if self.maze[column - 1][row] != 1:
neighbours.append(["above", (column - 1, row)])
except KeyError:
None
try:
if self.maze[column + 1][row] != 1:
neighbours.append(["below", (column + 1, row)])
except KeyError:
None
self.graph[(column, row)] = {x[:][1]: 1 for x in neighbours}
# TODO: remove unnecessary states
def _maze_maker(file_name):
def real_decorator(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
data = []
for row_i, row in enumerate(list(self.maze)):
for item_i, item in enumerate(self.maze[row].values()):
func(self, data, item, row_i=row_i, item_i=item_i)
# save maze
image = Image.new("RGB", (self.width, self.height))
image.putdata(data)
image.save(file_name)
return wrapper
return real_decorator
@_maze_maker("maze.png")
def save(self, data, item, row_i=None, item_i=None):
"""Save maze locally as an image."""
# invert maze because maze is incorrect
if item:
data.append(self.WHITE)
else:
data.append(self.BLACK)
def solve(self):
""" Solve maze using specified algorithm.
Returns:
shortest path as a queue from start to finish of maze
"""
if self.algorithm == "astar":
algorithm = Astar()
elif self.algorithm == "dijkstra":
algorithm = Dijkstra()
else:
raise AlgorithmNotFound(
f"Invalid algorithm: {self.algorithm}. See help({type(self).__name__}) for available algorithms."
)
# add nodes to graph
for node in self.graph:
algorithm.add_node(node, self.graph[node])
# pydaedalus stores y then x value which need to be reversed
self.entrance = tuple(reversed(self.entrance))
self.exit = tuple(reversed(self.exit))
self.path = algorithm.shortest_path(self.entrance, self.exit)
@_maze_maker("solution.png")
def save_solution(self, data, item, row_i=None, item_i=None):
"""Save maze image and the shortest path."""
if not hasattr(self, "path"):
raise MazeNotSolved(
f"Maze must be solved to save solution. Run {type(self).__name__}.solve() first."
)
if (row_i, item_i) in self.path:
data.append(self.RED)
elif item:
data.append(self.WHITE)
else:
data.append(self.BLACK)
def __str__(self):
"""Just cause it looks nice."""
string = []
for row in self.maze:
string.append(["█" if item else " " for item in self.maze[row].values()])
return "\n".join(["".join(line) for line in string])
def __repr__(self):
"""Easier on the eyes."""
return f"Maze(algorithm='{self.algorithm}', width={self.width}, height={self.height})"
| en | 0.809515 | Create a maze and solve it. Available algorithms: dijkstra astar (WIP) Steps: 1. Create maze using the daedalus library. 2. Convert maze to graph. 3. Solve maze with algorithm. Set algorithm to be used when solving. Args: algorithm (str) to be used when solving maze width (int) of maze in pixels height (int) of maze in pixels Make maze to be solved and add border to maze. Args: width (int) of maze height (int) of maze # create maze # define maze variables # add index to maze Remove unnecessary states from maze and convert maze to graph to be solved. # convert to graph # TODO: remove unnecessary states # save maze Save maze locally as an image. # invert maze because maze is incorrect Solve maze using specified algorithm. Returns: shortest path as a queue from start to finish of maze # add nodes to graph # pydaedalus stores y then x value which need to be reversed Save maze image and the shortest path. Just cause it looks nice. Easier on the eyes. | 3.357956 | 3 |
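# --- Editor's usage sketch for the Maze class above (not part of the original
# record). The import path "maze" and the maze dimensions are assumptions; it
# needs the same pydaedalus / Pillow / solver dependencies as the package.
from maze import Maze

labyrinth = Maze(width=31, height=21, algorithm="dijkstra")
labyrinth.save()            # writes maze.png
labyrinth.solve()           # computes labyrinth.path with Dijkstra
labyrinth.save_solution()   # writes solution.png with the path drawn in red
print(labyrinth)            # ASCII rendering of the maze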
config.py | FarbodFarhangfar/midi_player_python | 0 | 10225 | <reponame>FarbodFarhangfar/midi_player_python
import os
def get_note_dic():
_note_dic = {'C': 0, 'C#': 1, 'Db': 1, 'D': 2, 'D#': 3, 'Eb': 3, 'E': 4, 'F': 5, 'F#': 6,
'Gb': 6, 'G': 7,
'G#': 8, 'Ab': 8, 'A': 9, 'A#': 10, 'Bb': 10, 'B': 11}
return _note_dic
def get_value_list():
values = {"16": 16, "8": 8, "4": 4, "2": 2, "1": 1, "0.5": 0.5, "1/2": 0.5, "0.25": 0.25, "1/4": 0.25,
"0.125": 0.125, "1/8": 0.125, "0.0625": 0.0625, "1/16": 0.0625,
"0.03125": 0.03125, "1/32": 0.03125}
return values
def instruments(inst):
instruments_dict = {
# Piano
'Acoustic Grand Piano': '1', 'Bright Acoustic Piano': '2', 'Electric Grand Piano': '3',
'Honky-tonk Piano': '4',
'Electric Piano 1': '5', 'Electric Piano 2': '6', 'Harpsichord': '7', 'Clavi': '8',
# Chromatic Percussion
'Celesta': '9',
'Glockenspiel': '10', 'Music Box': '11', 'Vibraphone': '12', 'Marimba': '13', 'Xylophone': '14',
'Tubular Bells': '15', 'Dulcimer': '16',
# Organ
'Drawbar Organ': '17', 'Percussive Organ': '18',
'Rock Organ': '19',
'Church Organ': '20', 'Reed Organ': '21', 'Accordion': '22', 'Harmonica': '23',
'Tango Accordion': '24',
# Guitar
'Acoustic Guitar (nylon)': '25', 'Acoustic Guitar (steel)': '26',
'Electric Guitar (jazz)': '27',
'Electric Guitar (clean)': '28', 'Electric Guitar (muted)': '29', 'Overdriven Guitar': '30',
'Distortion Guitar': '31', 'Guitar Harmonics': '32',
# Bass
'Acoustic Bass': '33',
'Electric Bass (finger)': '34',
'Electric Bass (pick)': '35', 'Fretless Bass': '36', 'Slap Bass 1': '37', 'Slap Bass 2': '38',
'Synth Bass 1': '39', 'Synth Bass 2': '40',
# Strings
'Violin': '41', 'Viola': '42', 'Cello': '43',
'Contrabass': '44',
'Tremolo Strings': '45', 'Pizzicato Strings': '46', 'Orchestral Harp': '47', 'Timpani': '48',
# Ensemble
'String Ensemble 1': '49', 'String Ensemble 2': '50', 'Synth Strings 1': '51',
'Synth Strings 2': '52',
'Choir Aahs': '53', 'Voice Oohs': '54', 'Synth Choir': '55', 'Orchestra Hit': '56',
# Brass
'Trumpet': '57',
'Trombone': '58', 'Tuba': '59', 'Muted Trumpet': '60', 'French Horn': '61',
'Brass Section': '62',
'Synth Brass 1': '63', 'Synth Brass 2': '64',
# Reed
'Soprano Sax': '65', 'Alto Sax': '66',
'Tenor Sax': '67',
'Baritone Sax': '68', 'Oboe': '69', 'English Horn': '70', 'Bassoon': '71', 'Clarinet': '72',
# Pipe
'Piccolo': '73',
'Flute': '74', 'Recorder': '75', 'Pan Flute': '76', 'Blown bottle': '77', 'Shakuhachi': '78',
'Whistle': '79',
'Ocarina': '80',
# Synth Lead
'Lead 1 (square)': '81', 'Lead 2 (sawtooth)': '82', 'Lead 3 (calliope)': '83',
'Lead 4 (chiff)': '84', 'Lead 5 (charang)': '85', 'Lead 6 (voice)': '86',
'Lead 7 (fifths)': '87',
'Lead 8 (bass + lead)': '88',
# Synth Pad
'Pad 1 (new age)': '89', 'Pad 2 (warm)': '90',
'Pad 3 (polysynth)': '91',
'Pad 4 (choir)': '92', 'Pad 5 (bowed)': '93', 'Pad 6 (metallic)': '94', 'Pad 7 (halo)': '95',
'Pad 8 (sweep)': '96',
# Synth Effects
'FX 1 (rain)': '97', 'FX 2 (soundtrack)': '98', 'FX 3 (crystal)': '99',
'FX 4 (atmosphere)': '100', 'FX 5 (brightness)': '101', 'FX 6 (goblins)': '102',
'FX 7 (echoes)': '103',
'FX 8 (sci-fi)': '104',
# Ethnic
'Sitar': '105', 'Banjo': '106', 'Shamisen': '107', 'Koto': '108',
'Kalimba': '109',
'Bagpipe': '110', 'Fiddle': '111', 'Shanai': '112',
# Percussive
'Tinkle Bell': '113', 'Agogo': '114',
'Steel Drums': '115',
'Woodblock': '116', 'Taiko Drum': '117', 'Melodic Tom': '118', 'Synth Drum': '119',
'Reverse Cymbal': '120',
# Sound effects
'Guitar Fret Noise': '121', 'Breath Noise': '122', 'Seashore': '123', 'Bird Tweet': '124',
'Telephone Ring': '125',
'Helicopter': '126', 'Applause': '127'}
return instruments_dict
| import os
def get_note_dic():
_note_dic = {'C': 0, 'C#': 1, 'Db': 1, 'D': 2, 'D#': 3, 'Eb': 3, 'E': 4, 'F': 5, 'F#': 6,
'Gb': 6, 'G': 7,
'G#': 8, 'Ab': 8, 'A': 9, 'A#': 10, 'Bb': 10, 'B': 11}
return _note_dic
def get_value_list():
values = {"16": 16, "8": 8, "4": 4, "2": 2, "1": 1, "0.5": 0.5, "1/2": 0.5, "0.25": 0.25, "1/4": 0.25,
"0.125": 0.125, "1/8": 0.125, "0.0625": 0.0625, "1/16": 0.0625,
"0.03125": 0.03125, "1/32": 0.03125}
return values
def instruments(inst):
instruments_dict = {
# Piano
'Acoustic Grand Piano': '1', 'Bright Acoustic Piano': '2', 'Electric Grand Piano': '3',
'Honky-tonk Piano': '4',
'Electric Piano 1': '5', 'Electric Piano 2': '6', 'Harpsichord': '7', 'Clavi': '8',
# Chromatic Percussion
'Celesta': '9',
'Glockenspiel': '10', 'Music Box': '11', 'Vibraphone': '12', 'Marimba': '13', 'Xylophone': '14',
'Tubular Bells': '15', 'Dulcimer': '16',
# Organ
'Drawbar Organ': '17', 'Percussive Organ': '18',
'Rock Organ': '19',
'Church Organ': '20', 'Reed Organ': '21', 'Accordion': '22', 'Harmonica': '23',
'Tango Accordion': '24',
# Guitar
'Acoustic Guitar (nylon)': '25', 'Acoustic Guitar (steel)': '26',
'Electric Guitar (jazz)': '27',
'Electric Guitar (clean)': '28', 'Electric Guitar (muted)': '29', 'Overdriven Guitar': '30',
'Distortion Guitar': '31', 'Guitar Harmonics': '32',
# Bass
'Acoustic Bass': '33',
'Electric Bass (finger)': '34',
'Electric Bass (pick)': '35', 'Fretless Bass': '36', 'Slap Bass 1': '37', 'Slap Bass 2': '38',
'Synth Bass 1': '39', 'Synth Bass 2': '40',
# Strings
'Violin': '41', 'Viola': '42', 'Cello': '43',
'Contrabass': '44',
'Tremolo Strings': '45', 'Pizzicato Strings': '46', 'Orchestral Harp': '47', 'Timpani': '48',
# Ensemble
'String Ensemble 1': '49', 'String Ensemble 2': '50', 'Synth Strings 1': '51',
'Synth Strings 2': '52',
'Choir Aahs': '53', 'Voice Oohs': '54', 'Synth Choir': '55', 'Orchestra Hit': '56',
# Brass
'Trumpet': '57',
'Trombone': '58', 'Tuba': '59', 'Muted Trumpet': '60', 'French Horn': '61',
'Brass Section': '62',
'Synth Brass 1': '63', 'Synth Brass 2': '64',
# Reed
'Soprano Sax': '65', 'Alto Sax': '66',
'Tenor Sax': '67',
'Baritone Sax': '68', 'Oboe': '69', 'English Horn': '70', 'Bassoon': '71', 'Clarinet': '72',
# Pipe
'Piccolo': '73',
'Flute': '74', 'Recorder': '75', 'Pan Flute': '76', 'Blown bottle': '77', 'Shakuhachi': '78',
'Whistle': '79',
'Ocarina': '80',
# Synth Lead
'Lead 1 (square)': '81', 'Lead 2 (sawtooth)': '82', 'Lead 3 (calliope)': '83',
'Lead 4 (chiff)': '84', 'Lead 5 (charang)': '85', 'Lead 6 (voice)': '86',
'Lead 7 (fifths)': '87',
'Lead 8 (bass + lead)': '88',
# Synth Pad
'Pad 1 (new age)': '89', 'Pad 2 (warm)': '90',
'Pad 3 (polysynth)': '91',
'Pad 4 (choir)': '92', 'Pad 5 (bowed)': '93', 'Pad 6 (metallic)': '94', 'Pad 7 (halo)': '95',
'Pad 8 (sweep)': '96',
# Synth Effects
'FX 1 (rain)': '97', 'FX 2 (soundtrack)': '98', 'FX 3 (crystal)': '99',
'FX 4 (atmosphere)': '100', 'FX 5 (brightness)': '101', 'FX 6 (goblins)': '102',
'FX 7 (echoes)': '103',
'FX 8 (sci-fi)': '104',
# Ethnic
'Sitar': '105', 'Banjo': '106', 'Shamisen': '107', 'Koto': '108',
'Kalimba': '109',
'Bagpipe': '110', 'Fiddle': '111', 'Shanai': '112',
# Percussive
'Tinkle Bell': '113', 'Agogo': '114',
'Steel Drums': '115',
'Woodblock': '116', 'Taiko Drum': '117', 'Melodic Tom': '118', 'Synth Drum': '119',
'Reverse Cymbal': '120',
# Sound effects
'Guitar Fret Noise': '121', 'Breath Noise': '122', 'Seashore': '123', 'Bird Tweet': '124',
'Telephone Ring': '125',
'Helicopter': '126', 'Applause': '127'}
return instruments_dict | en | 0.497871 | #': 1, 'Db': 1, 'D': 2, 'D#': 3, 'Eb': 3, 'E': 4, 'F': 5, 'F#': 6, #': 8, 'Ab': 8, 'A': 9, 'A#': 10, 'Bb': 10, 'B': 11} # Piano # Chromatic Percussion # Organ # Guitar # Bass # Strings # Ensemble # Brass # Reed # Pipe # Synth Lead # Synth Pad # Synth Effects # Ethnic # Percussive # Sound effects | 2.8247 | 3 |
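# --- Editor's usage sketch for the config helpers above. The module name
# "config" and the octave arithmetic are assumptions made for illustration.
from config import get_note_dic, get_value_list, instruments

note_dic = get_note_dic()
values = get_value_list()
gm = instruments(None)                  # the inst argument is currently unused

semitone = note_dic['C#']               # 1
midi_note = 12 * (4 + 1) + semitone     # C#4 -> MIDI note 61 (middle C = 60)
quarter_note = values['1/4']            # 0.25
program = int(gm['Violin']) - 1         # GM programs are 1-based, MIDI is 0-based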
roles/openshift_health_checker/library/ocutil.py | shgriffi/openshift-ansible | 164 | 10226 | #!/usr/bin/python
"""Interface to OpenShift oc command"""
import os
import shlex
import shutil
import subprocess
from ansible.module_utils.basic import AnsibleModule
ADDITIONAL_PATH_LOOKUPS = ['/usr/local/bin', os.path.expanduser('~/bin')]
def locate_oc_binary():
"""Find and return oc binary file"""
# https://github.com/openshift/openshift-ansible/issues/3410
# oc can be in /usr/local/bin in some cases, but that may not
# be in $PATH due to ansible/sudo
paths = os.environ.get("PATH", os.defpath).split(os.pathsep) + ADDITIONAL_PATH_LOOKUPS
oc_binary = 'oc'
# Use shutil.which if it is available, otherwise fallback to a naive path search
try:
which_result = shutil.which(oc_binary, path=os.pathsep.join(paths))
if which_result is not None:
oc_binary = which_result
except AttributeError:
for path in paths:
if os.path.exists(os.path.join(path, oc_binary)):
oc_binary = os.path.join(path, oc_binary)
break
return oc_binary
def main():
"""Module that executes commands on a remote OpenShift cluster"""
module = AnsibleModule(
argument_spec=dict(
namespace=dict(type="str", required=False),
config_file=dict(type="str", required=True),
cmd=dict(type="str", required=True),
extra_args=dict(type="list", default=[]),
),
)
cmd = [locate_oc_binary(), '--config', module.params["config_file"]]
if module.params["namespace"]:
cmd += ['-n', module.params["namespace"]]
cmd += shlex.split(module.params["cmd"]) + module.params["extra_args"]
failed = True
try:
cmd_result = subprocess.check_output(list(cmd), stderr=subprocess.STDOUT)
failed = False
except subprocess.CalledProcessError as exc:
cmd_result = '[rc {}] {}\n{}'.format(exc.returncode, ' '.join(exc.cmd), exc.output)
except OSError as exc:
# we get this when 'oc' is not there
cmd_result = str(exc)
module.exit_json(
changed=False,
failed=failed,
result=cmd_result,
)
if __name__ == '__main__':
main()
| #!/usr/bin/python
"""Interface to OpenShift oc command"""
import os
import shlex
import shutil
import subprocess
from ansible.module_utils.basic import AnsibleModule
ADDITIONAL_PATH_LOOKUPS = ['/usr/local/bin', os.path.expanduser('~/bin')]
def locate_oc_binary():
"""Find and return oc binary file"""
# https://github.com/openshift/openshift-ansible/issues/3410
# oc can be in /usr/local/bin in some cases, but that may not
# be in $PATH due to ansible/sudo
paths = os.environ.get("PATH", os.defpath).split(os.pathsep) + ADDITIONAL_PATH_LOOKUPS
oc_binary = 'oc'
# Use shutil.which if it is available, otherwise fallback to a naive path search
try:
which_result = shutil.which(oc_binary, path=os.pathsep.join(paths))
if which_result is not None:
oc_binary = which_result
except AttributeError:
for path in paths:
if os.path.exists(os.path.join(path, oc_binary)):
oc_binary = os.path.join(path, oc_binary)
break
return oc_binary
def main():
"""Module that executes commands on a remote OpenShift cluster"""
module = AnsibleModule(
argument_spec=dict(
namespace=dict(type="str", required=False),
config_file=dict(type="str", required=True),
cmd=dict(type="str", required=True),
extra_args=dict(type="list", default=[]),
),
)
cmd = [locate_oc_binary(), '--config', module.params["config_file"]]
if module.params["namespace"]:
cmd += ['-n', module.params["namespace"]]
cmd += shlex.split(module.params["cmd"]) + module.params["extra_args"]
failed = True
try:
cmd_result = subprocess.check_output(list(cmd), stderr=subprocess.STDOUT)
failed = False
except subprocess.CalledProcessError as exc:
cmd_result = '[rc {}] {}\n{}'.format(exc.returncode, ' '.join(exc.cmd), exc.output)
except OSError as exc:
# we get this when 'oc' is not there
cmd_result = str(exc)
module.exit_json(
changed=False,
failed=failed,
result=cmd_result,
)
if __name__ == '__main__':
main()
| en | 0.793541 | #!/usr/bin/python Interface to OpenShift oc command Find and return oc binary file # https://github.com/openshift/openshift-ansible/issues/3410 # oc can be in /usr/local/bin in some cases, but that may not # be in $PATH due to ansible/sudo # Use shutil.which if it is available, otherwise fallback to a naive path search Module that executes commands on a remote OpenShift cluster # we get this when 'oc' is not there | 2.471472 | 2 |
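# --- Editor's note: ocutil above is an Ansible module, so it is normally driven
# from a playbook task rather than imported directly. The task below is an
# illustrative sketch only; the kubeconfig path and command are assumptions.
EXAMPLE_TASK = """
- name: List cluster nodes via ocutil
  ocutil:
    config_file: /etc/origin/master/admin.kubeconfig
    namespace: default
    cmd: get nodes
  register: oc_output
"""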
code/network/__init__.py | michalochman/complex-networks | 0 | 10227 | import fractions
class Network(object):
def __init__(self, network):
self.network = network
def degree(self, link_type, key):
return len(self.network.get(link_type).get(key))
def average_degree(self, link_type):
degree = 0
for link in self.network.get(link_type).itervalues():
degree += len(link)
return float(degree) / float(len(self.network.get(link_type)))
def nn_degree(self, link_type, link_n_type, key):
degree = self.degree(link_type, key)
nn_degree = 0
for n_key in self.network.get(link_type, key):
nn_degree += self.degree(link_n_type, n_key)
return '%d/%d' % (nn_degree, degree)
def jaccard_index(self, set_a, set_b):
n = len(set_a & set_b)
return float(n)/float(len(set_a) + len(set_b) - n)
def jaccard_similarity(self, link_type, key_a, key_b, return_string=False):
key_a = int(key_a)
key_b = int(key_b)
set_a = set(self.network.get(link_type).get(key_a).values())
set_b = set(self.network.get(link_type).get(key_b).values())
if return_string:
intersection = len(set_a & set_b)
union = len(set_a | set_b)
gcd = fractions.gcd(intersection, union)
return '%d/%d' % (intersection/gcd, union/gcd)
return self.jaccard_index(set_a, set_b)
def collaborative_similarity(self, link_type, link_n_type, key, return_string=False):
degree = self.degree(link_type, key)
if degree <= 1:
return 0
similarity_sum = 0
for n_key_1 in self.network.get(link_type).get(key).itervalues():
for n_key_2 in self.network.get(link_type).get(key).itervalues():
if n_key_1 == n_key_2:
continue
similarity_sum += self.jaccard_similarity(link_n_type, n_key_1, n_key_2)
if return_string:
precision = 1e3
new_similarity_sum = round(similarity_sum * degree*(degree-1) * precision)
gcd = fractions.gcd(new_similarity_sum, degree*(degree-1) * precision)
new_similarity_sum /= gcd
return '%d/%d' % (new_similarity_sum, degree*(degree-1)*round(new_similarity_sum/similarity_sum))
return similarity_sum / (degree*(degree-1))
def average_jaccard_similarity(self, link_type, link_n_type, return_string=False):
nodes = 0
similarity_sum = 0
for key_links in self.network.get(link_type).itervalues():
for n_key_1 in key_links.itervalues():
for n_key_2 in key_links.itervalues():
if n_key_1 == n_key_2:
continue
nodes += 1
similarity_sum += self.jaccard_similarity(link_n_type, n_key_1, n_key_2)
if nodes == 0:
return 0
if return_string:
precision = 1e3
new_similarity_sum = round(similarity_sum * nodes * precision)
gcd = fractions.gcd(new_similarity_sum, nodes * precision)
new_similarity_sum /= gcd
return '%d/%d' % (new_similarity_sum, nodes*round(new_similarity_sum/similarity_sum))
return similarity_sum / nodes
def network_collaborative_similarity(self, link_type, link_n_type, return_string=False):
nodes = 0
similarity_sum = 0
for key, key_links in self.network.get(link_type).iteritems():
if self.degree(link_type, key) <= 1:
continue
nodes += 1
collaborative_similarity = self.collaborative_similarity(link_type, link_n_type, key)
similarity_sum += collaborative_similarity
if nodes == 0:
return 0
if return_string:
precision = 1e3
new_similarity_sum = round(similarity_sum * nodes * precision)
gcd = fractions.gcd(new_similarity_sum, nodes * precision)
new_similarity_sum /= gcd
return '%d/%d' % (new_similarity_sum, nodes*(new_similarity_sum/similarity_sum))
return similarity_sum/nodes
| import fractions
class Network(object):
def __init__(self, network):
self.network = network
def degree(self, link_type, key):
return len(self.network.get(link_type).get(key))
def average_degree(self, link_type):
degree = 0
for link in self.network.get(link_type).itervalues():
degree += len(link)
return float(degree) / float(len(self.network.get(link_type)))
def nn_degree(self, link_type, link_n_type, key):
degree = self.degree(link_type, key)
nn_degree = 0
for n_key in self.network.get(link_type, key):
nn_degree += self.degree(link_n_type, n_key)
return '%d/%d' % (nn_degree, degree)
def jaccard_index(self, set_a, set_b):
n = len(set_a & set_b)
return float(n)/float(len(set_a) + len(set_b) - n)
def jaccard_similarity(self, link_type, key_a, key_b, return_string=False):
key_a = int(key_a)
key_b = int(key_b)
set_a = set(self.network.get(link_type).get(key_a).values())
set_b = set(self.network.get(link_type).get(key_b).values())
if return_string:
intersection = len(set_a & set_b)
union = len(set_a | set_b)
gcd = fractions.gcd(intersection, union)
return '%d/%d' % (intersection/gcd, union/gcd)
return self.jaccard_index(set_a, set_b)
def collaborative_similarity(self, link_type, link_n_type, key, return_string=False):
degree = self.degree(link_type, key)
if degree <= 1:
return 0
similarity_sum = 0
for n_key_1 in self.network.get(link_type).get(key).itervalues():
for n_key_2 in self.network.get(link_type).get(key).itervalues():
if n_key_1 == n_key_2:
continue
similarity_sum += self.jaccard_similarity(link_n_type, n_key_1, n_key_2)
if return_string:
precision = 1e3
new_similarity_sum = round(similarity_sum * degree*(degree-1) * precision)
gcd = fractions.gcd(new_similarity_sum, degree*(degree-1) * precision)
new_similarity_sum /= gcd
return '%d/%d' % (new_similarity_sum, degree*(degree-1)*round(new_similarity_sum/similarity_sum))
return similarity_sum / (degree*(degree-1))
def average_jaccard_similarity(self, link_type, link_n_type, return_string=False):
nodes = 0
similarity_sum = 0
for key_links in self.network.get(link_type).itervalues():
for n_key_1 in key_links.itervalues():
for n_key_2 in key_links.itervalues():
if n_key_1 == n_key_2:
continue
nodes += 1
similarity_sum += self.jaccard_similarity(link_n_type, n_key_1, n_key_2)
if nodes == 0:
return 0
if return_string:
precision = 1e3
new_similarity_sum = round(similarity_sum * nodes * precision)
gcd = fractions.gcd(new_similarity_sum, nodes * precision)
new_similarity_sum /= gcd
return '%d/%d' % (new_similarity_sum, nodes*round(new_similarity_sum/similarity_sum))
return similarity_sum / nodes
def network_collaborative_similarity(self, link_type, link_n_type, return_string=False):
nodes = 0
similarity_sum = 0
for key, key_links in self.network.get(link_type).iteritems():
if self.degree(link_type, key) <= 1:
continue
nodes += 1
collaborative_similarity = self.collaborative_similarity(link_type, link_n_type, key)
similarity_sum += collaborative_similarity
if nodes == 0:
return 0
if return_string:
precision = 1e3
new_similarity_sum = round(similarity_sum * nodes * precision)
gcd = fractions.gcd(new_similarity_sum, nodes * precision)
new_similarity_sum /= gcd
return '%d/%d' % (new_similarity_sum, nodes*(new_similarity_sum/similarity_sum))
return similarity_sum/nodes
| none | 1 | 3.337132 | 3 |
|
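# --- Editor's note: a standalone worked example of the Jaccard index computed
# by the Network class above (the class itself relies on Python 2 idioms such
# as itervalues() and fractions.gcd()).
set_a = {1, 2, 3, 4}
set_b = {3, 4, 5}
intersection = len(set_a & set_b)                   # 2
union = len(set_a) + len(set_b) - intersection      # 5
jaccard = intersection / float(union)               # 0.4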
invoke_ansible.py | samvarankashyap/ansible_api_usage | 0 | 10228 | import ansible
import pprint
from ansible import utils
from jinja2 import Environment, PackageLoader
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.plugins.callback import CallbackBase
from callbacks import PlaybookCallback
def invoke_ansible_playbook(module_path, e_vars, playbook_path="site.yml", console=True):
""" Invokes playbook """
loader = DataLoader()
variable_manager = VariableManager()
variable_manager.extra_vars = e_vars
inventory = Inventory(loader=loader,
variable_manager=variable_manager,
host_list=['localhost'])
passwords = {}
utils.VERBOSITY = 4
Options = namedtuple('Options', ['listtags',
'listtasks',
'listhosts',
'syntax',
'connection',
'module_path',
'forks',
'remote_user',
'private_key_file',
'ssh_common_args',
'ssh_extra_args',
'sftp_extra_args',
'scp_extra_args',
'become',
'become_method',
'become_user',
'verbosity',
'check'])
options = Options(listtags=False,
listtasks=False,
listhosts=False,
syntax=False,
connection='ssh',
module_path=module_path,
forks=100,
remote_user='root',
private_key_file=None,
ssh_common_args=None,
ssh_extra_args=None,
sftp_extra_args=None,
scp_extra_args=None,
become=False,
become_method=None,
become_user='root',
verbosity=utils.VERBOSITY,
check=False)
pbex = PlaybookExecutor(playbooks=[playbook_path],
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=passwords)
if not console:
cb = PlaybookCallback()
pbex._tqm._stdout_callback = cb
return_code = pbex.run()
results = cb.results
else:
results = pbex.run()
return results
| import ansible
import pprint
from ansible import utils
from jinja2 import Environment, PackageLoader
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.plugins.callback import CallbackBase
from callbacks import PlaybookCallback
def invoke_ansible_playbook(module_path, e_vars, playbook_path="site.yml", console=True):
""" Invokes playbook """
loader = DataLoader()
variable_manager = VariableManager()
variable_manager.extra_vars = e_vars
inventory = Inventory(loader=loader,
variable_manager=variable_manager,
host_list=['localhost'])
passwords = {}
utils.VERBOSITY = 4
Options = namedtuple('Options', ['listtags',
'listtasks',
'listhosts',
'syntax',
'connection',
'module_path',
'forks',
'remote_user',
'private_key_file',
'ssh_common_args',
'ssh_extra_args',
'sftp_extra_args',
'scp_extra_args',
'become',
'become_method',
'become_user',
'verbosity',
'check'])
options = Options(listtags=False,
listtasks=False,
listhosts=False,
syntax=False,
connection='ssh',
module_path=module_path,
forks=100,
remote_user='root',
private_key_file=None,
ssh_common_args=None,
ssh_extra_args=None,
sftp_extra_args=None,
scp_extra_args=None,
become=False,
become_method=None,
become_user='root',
verbosity=utils.VERBOSITY,
check=False)
pbex = PlaybookExecutor(playbooks=[playbook_path],
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=passwords)
if not console:
cb = PlaybookCallback()
pbex._tqm._stdout_callback = cb
return_code = pbex.run()
results = cb.results
else:
results = pbex.run()
return results
| en | 0.212026 | Invokes playbook | 2.031261 | 2 |
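# --- Editor's note: a hypothetical call into invoke_ansible_playbook above.
# The module name, playbook path and extra vars are placeholders, and the
# function targets the pre-2.4 Ansible Python API, so it will not work
# unchanged against current Ansible releases.
from invoke_ansible import invoke_ansible_playbook

results = invoke_ansible_playbook(
    module_path=None,
    e_vars={"greeting": "hello"},
    playbook_path="site.yml",
    console=False,   # capture per-task results via PlaybookCallback instead of stdout
)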
bin/python/csv2es.py | reid-wagner/proteomics-pipelines | 2 | 10229 | #!/usr/bin/env python3
import itertools
import string
from elasticsearch import Elasticsearch,helpers
import sys
import os
from glob import glob
import pandas as pd
import json
host = sys.argv[1]
port = int(sys.argv[2])
alias = sys.argv[3]
print(host)
print(port)
print(alias)
es = Elasticsearch([{'host': host, 'port': port}])
# create our test index
# Get all csv files in /root/data
files = [y for x in os.walk('/root/data') for y in glob(os.path.join(x[0], '*.csv'))]
count = 0
def clean_field(val):
val = val.split('.')
val = [i for i in val if i != '']
val = '_'.join(val)
val = val.split()
val = [i for i in val if i != '']
val = '_'.join(val)
val = val.split('/')
val = [i for i in val if i != '']
val = '_'.join(val)
return val
es.indices.delete(index=alias + '*', ignore=[400, 404])
indices = []
for file in files:
data = pd.read_csv(file, sep=None, engine='python')
index = alias + '_'.join(file.split('/'))
index = clean_field(index).lower().split('_csv')[0]
indices.append(index)
es.indices.create(index)
for col in data.columns:
if col.startswith('Unnamed'):
del data[col]
else:
data.rename(columns= { col : clean_field(col) },inplace=True )
data = data.reset_index() # Make sure there is no duplicate indexing
data.rename(columns={'index':'row'},inplace =True)
data['File'] = file
data['_id'] = data['File'] + '.{}.'.format(str(count)) + data.reset_index()['index'].apply(str)
data['_type'] = "document"
data['_index'] = index
records = data.to_json(orient='records')
records = json.loads(records)
helpers.bulk(es, records, chunk_size=100)
count += 1
print(es.count(index=index))
# Create an index table in elasticsearch to locate the files
indices_table = pd.DataFrame()
indices_table['Index'] = pd.Series(indices)
indices_table['File'] = pd.Series(files)
indices_table['Alias'] = alias
indices_table['_id'] = indices_table['Alias'] + '.' + indices_table['File']
indices_table['_type'] = "document"
indices_table['_index'] = alias + '_indices'
es.indices.create(alias + '_indices')
records = indices_table.to_json(orient='records')
records = json.loads(records)
helpers.bulk(es, records, chunk_size=100)
print(es.count(index=alias + '_indices'))
| #!/usr/bin/env python3
import itertools
import string
from elasticsearch import Elasticsearch,helpers
import sys
import os
from glob import glob
import pandas as pd
import json
host = sys.argv[1]
port = int(sys.argv[2])
alias = sys.argv[3]
print(host)
print(port)
print(alias)
es = Elasticsearch([{'host': host, 'port': port}])
# create our test index
# Get all csv files in /root/data
files = [y for x in os.walk('/root/data') for y in glob(os.path.join(x[0], '*.csv'))]
count = 0
def clean_field(val):
val = val.split('.')
val = [i for i in val if i != '']
val = '_'.join(val)
val = val.split()
val = [i for i in val if i != '']
val = '_'.join(val)
val = val.split('/')
val = [i for i in val if i != '']
val = '_'.join(val)
return val
es.indices.delete(index=alias + '*', ignore=[400, 404])
indices = []
for file in files:
data = pd.read_csv(file, sep=None, engine='python')
index = alias + '_'.join(file.split('/'))
index = clean_field(index).lower().split('_csv')[0]
indices.append(index)
es.indices.create(index)
for col in data.columns:
if col.startswith('Unnamed'):
del data[col]
else:
data.rename(columns= { col : clean_field(col) },inplace=True )
data = data.reset_index() # Make sure there is no duplicate indexing
data.rename(columns={'index':'row'},inplace =True)
data['File'] = file
data['_id'] = data['File'] + '.{}.'.format(str(count)) + data.reset_index()['index'].apply(str)
data['_type'] = "document"
data['_index'] = index
records = data.to_json(orient='records')
records = json.loads(records)
helpers.bulk(es, records, chunk_size=100)
count += 1
print(es.count(index=index))
# Create an index table in elasticsearch to locate the files
indices_table = pd.DataFrame()
indices_table['Index'] = pd.Series(indices)
indices_table['File'] = pd.Series(files)
indices_table['Alias'] = alias
indices_table['_id'] = indices_table['Alias'] + '.' + indices_table['File']
indices_table['_type'] = "document"
indices_table['_index'] = alias + '_indices'
es.indices.create(alias + '_indices')
records = indices_table.to_json(orient='records')
records = json.loads(records)
helpers.bulk(es, records, chunk_size=100)
print(es.count(index=alias + '_indices'))
| en | 0.575823 | #!/usr/bin/env python3 # create our test index # Get all csv files in /root/data # Make sure there is no duplicate indexing # Create an index table in elasticsearch to locate the files | 2.789907 | 3 |
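# --- Editor's note: the indexer above is driven entirely by argv, e.g.
#   python csv2es.py localhost 9200 proteomics
# (host, port and alias are placeholders). A follow-up query against the
# generated indices might look like this sketch, using the same older
# elasticsearch client API as the script:
from elasticsearch import Elasticsearch

es = Elasticsearch([{"host": "localhost", "port": 9200}])
listing = es.search(index="proteomics_indices", body={"query": {"match_all": {}}})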
main/src/preparation/parsers/tree-sitter-python/examples/crlf-line-endings.py | jason424217/Artificial-Code-Gen | 0 | 10230 | <filename>main/src/preparation/parsers/tree-sitter-python/examples/crlf-line-endings.py<gh_stars>0
print a
if b:
if c:
d
e
| <filename>main/src/preparation/parsers/tree-sitter-python/examples/crlf-line-endings.py<gh_stars>0
print a
if b:
if c:
d
e
| none | 1 | 1.835831 | 2 |
|
Src/main.py | DukeA/DAT02X-19-03-MachineLearning-Starcraft2 | 0 | 10231 |
from absl import app
from mainLoop import main
if __name__ == '__main__':
app.run(main)
|
from absl import app
from mainLoop import main
if __name__ == '__main__':
app.run(main)
| none | 1 | 1.250541 | 1 |
|
bos_sarcat_scraper/__main__.py | hysds/bos_sarcat_scraper | 1 | 10232 | <gh_stars>1-10
from __future__ import absolute_import
from builtins import str
from builtins import input
import sys
import argparse
from . import bosart_scrape
import datetime
import json
def valid_date(s):
try:
try:
date = datetime.datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")
except:
date = datetime.datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ")
return date
except ValueError:
msg = "Not a valid date: '{0}'.".format(s)
raise argparse.ArgumentTypeError(msg)
def geojson(spatial_extent):
if type(json.loads(spatial_extent)) is dict:
return spatial_extent
def sort_field(s_f):
if s_f == "start_time" or s_f == "stop_time" or s_f == "bos_ingest":
return s_f
else:
        raise argparse.ArgumentTypeError("The value for sortBy should be either start_time, stop_time or bos_ingest, not %s." % s_f)
def sort_order(order):
if order == "asc" or order == "des":
return order
else:
        raise argparse.ArgumentTypeError("The value for sort should be either asc or des, not %s." % order)
def check_inputs(args):
yes = "y"
no = "n"
if not args.fromTime and not args.fromBosIngestTime:
print ("You have NOT specified any start time using --fromTime, -from or --fromBosIngestTime. \nYou are asking to find all acquisitions from the beginning of time! \nThis query will take a very long time.\nTHIS IS NOT RECOMMENDED.")
        resp = str(input('Are you sure you want to proceed? (y/n):'))
if resp.lower() == yes.lower():
print("Okay! Please wait...")
return True
elif resp.lower() == no.lower():
print("Please try again with the start time specified using --fromTime, -from or --fromBosIngestTime.")
exit()
else:
print("Please specify y/n\n")
return False
return True
def main():
parser = argparse.ArgumentParser(description='Query BOS SarCat for acquisitions.')
parser.add_argument("-from","--fromTime", help='specify the temporal start point in format , to get acquisitions starting after the given timestamp in the format yyyy-mm-ddThh:mm:ss.sssZ', type=valid_date)
parser.add_argument("--fromBosIngestTime", help='provide date and time in format , to get acquisitions acquired by BOS after the given timestamp in the format yyyy-mm-ddThh:mm:ss.sssZ', type=valid_date)
parser.add_argument("-to","--toTime", help='specify the temporal end point in format , to get acquisitions ending before the given timestamp in the format yyyy-mm-ddThh:mm:ss.sssZ', type=valid_date)
parser.add_argument("--spatialExtent", help='specify the area of interest in GeoJSON format', type = geojson)
parser.add_argument("--sortBy", help='type "start_time" , "stop_time" or "bos_ingest" to sort results by field', type = sort_field)
parser.add_argument("--sort", help='type "asc" or "des" to get results in ascending or descending order of time respectively. If sortBy is specified but sort is not, then defaults to ascending', type = sort_order)
args = parser.parse_args()
checked = False
while not checked:
checked = check_inputs(args)
# construct the parameter list based on user specified restrictions
params = {}
if args.fromTime:
params["fromTime"] = args.fromTime
if args.fromBosIngestTime:
params["fromBosIngestTime"] = args.fromBosIngestTime
if args.toTime:
params["toTime"] = args.toTime
if args.spatialExtent:
params["spatialExtent"] = json.dumps(args.spatialExtent)
if args.sortBy:
params["sortBy"] = args.sortBy
if args.sort:
params["sort"] = args.sort
print(bosart_scrape.make_api_call(parameters=params))
if __name__ == '__main__':
main()
| from __future__ import absolute_import
from builtins import str
from builtins import input
import sys
import argparse
from . import bosart_scrape
import datetime
import json
def valid_date(s):
try:
try:
date = datetime.datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")
except:
date = datetime.datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ")
return date
except ValueError:
msg = "Not a valid date: '{0}'.".format(s)
raise argparse.ArgumentTypeError(msg)
def geojson(spatial_extent):
if type(json.loads(spatial_extent)) is dict:
return spatial_extent
def sort_field(s_f):
if s_f == "start_time" or s_f == "stop_time" or s_f == "bos_ingest":
return s_f
else:
        raise argparse.ArgumentTypeError("The value for sortBy should be either start_time, stop_time or bos_ingest, not %s." % s_f)
def sort_order(order):
if order == "asc" or order == "des":
return order
else:
        raise argparse.ArgumentTypeError("The value for sort should be either asc or des, not %s." % order)
def check_inputs(args):
yes = "y"
no = "n"
if not args.fromTime and not args.fromBosIngestTime:
print ("You have NOT specified any start time using --fromTime, -from or --fromBosIngestTime. \nYou are asking to find all acquisitions from the beginning of time! \nThis query will take a very long time.\nTHIS IS NOT RECOMMENDED.")
        resp = str(input('Are you sure you want to proceed? (y/n):'))
if resp.lower() == yes.lower():
print("Okay! Please wait...")
return True
elif resp.lower() == no.lower():
print("Please try again with the start time specified using --fromTime, -from or --fromBosIngestTime.")
exit()
else:
print("Please specify y/n\n")
return False
return True
def main():
parser = argparse.ArgumentParser(description='Query BOS SarCat for acquisitions.')
parser.add_argument("-from","--fromTime", help='specify the temporal start point in format , to get acquisitions starting after the given timestamp in the format yyyy-mm-ddThh:mm:ss.sssZ', type=valid_date)
parser.add_argument("--fromBosIngestTime", help='provide date and time in format , to get acquisitions acquired by BOS after the given timestamp in the format yyyy-mm-ddThh:mm:ss.sssZ', type=valid_date)
parser.add_argument("-to","--toTime", help='specify the temporal end point in format , to get acquisitions ending before the given timestamp in the format yyyy-mm-ddThh:mm:ss.sssZ', type=valid_date)
parser.add_argument("--spatialExtent", help='specify the area of interest in GeoJSON format', type = geojson)
parser.add_argument("--sortBy", help='type "start_time" , "stop_time" or "bos_ingest" to sort results by field', type = sort_field)
parser.add_argument("--sort", help='type "asc" or "des" to get results in ascending or descending order of time respectively. If sortBy is specified but sort is not, then defaults to ascending', type = sort_order)
args = parser.parse_args()
checked = False
while not checked:
checked = check_inputs(args)
# construct the parameter list based on user specified restrictions
params = {}
if args.fromTime:
params["fromTime"] = args.fromTime
if args.fromBosIngestTime:
params["fromBosIngestTime"] = args.fromBosIngestTime
if args.toTime:
params["toTime"] = args.toTime
if args.spatialExtent:
params["spatialExtent"] = json.dumps(args.spatialExtent)
if args.sortBy:
params["sortBy"] = args.sortBy
if args.sort:
params["sort"] = args.sort
print(bosart_scrape.make_api_call(parameters=params))
if __name__ == '__main__':
main() | en | 0.404797 | # construct the parameter list based on user specified restrictions | 2.897735 | 3 |
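# --- Editor's note: illustrative invocations of the CLI above via the package
# entry point (dates and the GeoJSON extent are placeholders):
#   python -m bos_sarcat_scraper --fromTime 2019-01-01T00:00:00Z \
#       --toTime 2019-02-01T00:00:00Z --sortBy start_time --sort asc
#   python -m bos_sarcat_scraper --fromTime 2019-01-01T00:00:00Z \
#       --spatialExtent '{"type": "Polygon", "coordinates": [[[-118, 33], [-117, 33], [-117, 34], [-118, 34], [-118, 33]]]}'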
vgm2electron.py | simondotm/vgm2electron | 2 | 10233 | #!/usr/bin/env python
# vgm2electron.py
# Tool for converting SN76489-based PSG VGM data to Acorn Electron
# By <NAME> (https://github.com/simondotm/)
# See https://github.com/simondotm/vgm-packer
#
# Copyright (c) 2019 <NAME>. All rights reserved.
#
# "MIT License":
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the Software
# is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import functools
import itertools
import struct
import sys
import time
import binascii
import math
import operator
import os
from modules.vgmparser import VgmStream
class VgmElectron:
OUTPUT_RAWDATA = False # output raw dumps of the data that was compressed by LZ4/Huffman
VERBOSE = True
# 0-3 represents approx the loudest 50% of volumes (=ON), 4-15 are the quietest 50% (=OFF)
ATTENTUATION_THRESHOLD1 = 10
ATTENTUATION_THRESHOLD2 = 10
ATTENTUATION_THRESHOLD3 = 10
# define the number of octaves to transpose whole song by, in case too much bass getting lost
TRANSPOSE_OCTAVES1 = 0
TRANSPOSE_OCTAVES2 = 0
TRANSPOSE_OCTAVES3 = 0 #-1
ENABLE_CHANNEL1 = True
ENABLE_CHANNEL2 = True
ENABLE_CHANNEL3 = True
USE_TECHNIQUE = 2
def __init__(self):
print("init")
#----------------------------------------------------------
# Utilities
#----------------------------------------------------------
# split the packed raw data into 11 separate streams
# returns array of 11 bytearrays
def split_raw(self, rawData, stripCommands = True):
registers = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
registers_opt = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
latched_channel = -1
output_block = bytearray()
output_blocks = []
for o in range(11):
output_blocks.append( bytearray() )
if stripCommands:
register_mask = 15
else:
register_mask = 255
# unpack the raw binary data in 11 arrays of register data without any deltas between them
# eg. the raw chip writes to all 11 registers every frame
n = 0
Packet = True
verbose = False
while (Packet):
packet_size = rawData[n]
if verbose:
print("packet_size=" + str(packet_size))
n += 1
if packet_size == 255:
Packet = False
else:
for x in range(packet_size):
d = rawData[n+x]
#if verbose:
# print " frame byte number=" +str(x)
# print " frame byte=" +str(d)
if d & 128:
# latch
c = (d>>5)&3
latched_channel = c
if d & 16:
# volume
if verbose:
print(" volume on channel " + str(c))
registers[c+7] = d & register_mask
else:
# tone
if verbose:
print(" tone on channel " + str(c))
registers[c*2+0] = d & register_mask
else:
if verbose:
print(" tone data on latched channel " + str(latched_channel))
registers[latched_channel*2+1] = d # we no longer do any masking here # d & 63 # tone data only contains 6 bits of info anyway, so no need for mask
if latched_channel == 3:
print("ERROR CHANNEL")
# emit current state of each of the 11 registers to 11 different bytearrays
for x in range(11):
output_blocks[x].append( registers[x] )
# next packet
n += packet_size
#print(output_blocks[6])
#IGNORE we no longer do this - let the decoder do it instead.
if False:
# make sure we only emit tone3 when it changes, or 15 for no-change
# this prevents the LFSR from being reset
lastTone3 = 255
for x in range(len(output_blocks[6])):
t = output_blocks[6][x]
if t == lastTone3:
output_blocks[6][x] = 15
lastTone3 = t
# print(output_blocks[6])
# Add EOF marker (0x08) to tone3 byte stream
output_blocks[6].append(0x08) # 0x08 is an invalid noise tone.
# return the split blocks
return output_blocks
# given an array of data points, serialize it to a bytearray
# size is the number of bytes to be used to represent each element in the source array.
def toByteArray(self, array, size = 1):
r = bytearray()
for v in array:
if size < 2:
r.append(v & 255)
else:
r.append(v & 255)
r.append(v >> 8)
return r
#----------------------------------------------------------
# Process(filename)
# Convert the given VGM file to an electron VGM file
#----------------------------------------------------------
def process(self, src_filename, dst_filename):
# load the VGM file, or alternatively interpret as a binary
if src_filename.lower()[-4:] != ".vgm":
print("ERROR: Not a VGM source")
return
vgm = VgmStream(src_filename)
data_block = vgm.as_binary()
data_offset = 0
# parse the header
header_size = data_block[0] # header size
play_rate = data_block[1] # play rate
if header_size == 5 and play_rate == 50:
packet_count = data_block[2] + data_block[3]*256 # packet count LO
duration_mm = data_block[4] # duration mm
duration_ss = data_block[5] # duration ss
data_offset = header_size+1
data_offset += data_block[data_offset]+1
data_offset += data_block[data_offset]+1
print("header_size=" +str(header_size))
print("play_rate="+str(play_rate))
print("packet_count="+str(packet_count))
print("duration_mm="+str(duration_mm))
print("duration_ss="+str(duration_ss))
print("data_offset="+str(data_offset))
else:
print("No header.")
print("")
# Trim off the header data. The rest is raw data.
data_block = data_block[data_offset:]
#----------------------------------------------------------
# Unpack the register data into 11 separate data streams
#----------------------------------------------------------
registers = self.split_raw(data_block, True)
#----------------------------------------------------------
# Begin VGM conversion to Electron
#----------------------------------------------------------
# Filter out channels we do not need
# Modify all volumes to full or none
# Interleave sound to a single channel
# output final VGM
vgm_stream = bytearray()
vgm_time = 0
electron_data = bytearray()
# given an SN76489 tone register value, return the equivalent Electron ULA register setting
def sn_to_electron(tone_value):
# hack to protect against divbyzero
if (tone_value == 0):
tone_value = 1
hz = float(vgm.vgm_source_clock) / ( 2.0 * float(tone_value) * 16.0)
print(" sn_to_electron freq " + str(hz) + "hz")
# electron
# Sound frequency = 1 MHz / [32 * (S + 1)]
# f * 32*(S+1) = 1Mhz
# 32*(S+1) = 1Mhz / f
# (S+1) = 1Mhz / f*32
#print ("SN freq is " + str(hz))
ula6 = int( 1000000.0 / (hz * 32.0) ) - 1
# check we are within range
if ula6 < 0:
print(" WARNING: Electron freqency '" + str(ula6) + "' too high (" + str(hz) + ")")
ula6 = 0
if ula6 > 255:
print(" WARNING: Electron frequency '" + str(ula6) + "' too low (" + str(hz) + ")")
ula6 = 255
return ula6
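		# --- Editor's note: a worked example of the mapping above, assuming the
		# common 4 MHz SN76489 source clock. A tone register value of 254 gives
		# f = 4000000 / (2 * 254 * 16) ~= 492 Hz, and the ULA divider is then
		# int(1000000 / (492 * 32)) - 1 = 62, so writing 62 to the ULA register
		# reproduces approximately the same pitch on the Electron.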
#--------------------------------------------------------------
# conversion settings
#--------------------------------------------------------------
# convert the register data to a vgm stream
sample_interval = int(44100 / vgm.metadata['rate']) # 882 # 50hz - TODO: use frame rate
print("sample_interval=" + str(sample_interval))
USE_TONE3 = VgmElectron.ENABLE_CHANNEL3 # True
# TODO: make these all parameters
# Add channel filter option
# Add mix type options
# --attentuation 468 --filter 123 --transpose 00F --mix 123 --arpeggio 2 --rate 50
# Add option to clamp or transpose out of range frequencies
# Make the .ula output file filename.electron.ula
# Add 0x01 as a terminating byte in the output ULA
MIX_RATE = 2 # modulo 2 for interleaving channels
# other options
# bias for channels
# transpose or silence out of range notes
channel_mix = 0
#--------------------------------------------------------------
# pre-process music to suit Electron capabilities
#--------------------------------------------------------------
for i in range(len(registers[0])):
print("Frame " + str(i))
#--------------------------------------------------------------
# step 1- map volumes to 1-bit precision
#--------------------------------------------------------------
# 11 registers per frame
# Tone 0 HL Tone 1 HL Tone 2 HL Tone 3 Vol 0123
for r in range(11):
if r > 6:
register_data = registers[r][i]
# apply the threshold for each channel
threshold = VgmElectron.ATTENTUATION_THRESHOLD1
if r == 8:
threshold = VgmElectron.ATTENTUATION_THRESHOLD2
if r == 9:
threshold = VgmElectron.ATTENTUATION_THRESHOLD3
# if its a volume, map to loudest volume or no volume (using logarithmic scale)
if register_data < threshold:
register_data = 0 # full volume
else:
register_data = 15 # zero volume
if r == 7 and VgmElectron.ENABLE_CHANNEL1 == False:
register_data = 15 # zero volume
if r == 8 and VgmElectron.ENABLE_CHANNEL2 == False:
register_data = 15 # zero volume
if r == 9 and VgmElectron.ENABLE_CHANNEL3 == False:
register_data = 15 # zero volume
registers[r][i] = register_data
#--------------------------------------------------------------
# step 2 - transpose to fit frequency range
#--------------------------------------------------------------
# final step - bring tone1 into the frequency range of the electron
# if the frequency goes below the range of the ULA capabilities, add an octave
def retune(octaves, l,h,v):
#if (octaves == 0):
# print(" No transpose performed, octaves set to 0")
# return
print( " tonehi=" + str(registers[h][i]) + ", tonelo=" + str(registers[l][i]))
tone_value = (registers[h][i] << 4) + registers[l][i]
if tone_value > 0:
tone_freq = float(vgm.vgm_source_clock) / ( 2.0 * float(tone_value) * 16.0)
print(" Retune, Channel " + str(int(l/2)) + " tone=" + str(tone_value) + ", freq=" + str(tone_freq))
# electron baseline is 122Hz not 244Hz as the AUG states.
baseline_freq = 1000000.0 / (32.0*256.0)
target_freq = tone_freq
retuned = 0
transpose = abs(octaves)
while retuned != transpose: # target_freq < baseline_freq:
if (octaves < 0):
target_freq /= 2.0
else:
target_freq *= 2.0
retuned += 1
# if cant reach baseline freq, transpose once, then silence if still too low :(
if target_freq < baseline_freq:
print(" WARNING: Freq too low - Added " + str(1) + " octave(s) - from " + str(target_freq) + " to " + str(target_freq*2.0) + "Hz")
# better to just clamp low frequencies at the bottom, and risk tuning issues rather than transposition jumps
target_freq = baseline_freq #*= 2.0
retuned = 1
if target_freq < baseline_freq:
registers[v][i] = 15
print(" Tone " + str(i) + " silenced because frequency too low - " + str(target_freq))
#target_freq *= 2.0
#retuned += 1
if retuned:
#print(" WARNING: Freq too low - Added " + str(retuned) + " octave(s) - from " + str(tone_freq) + " to " + str(target_freq) + "Hz")
tone_value = int( round( float(vgm.vgm_source_clock) / (2.0 * target_freq * 16.0 ) ) )
registers[h][i] = tone_value >> 4
registers[l][i] = tone_value & 15
# transpose
#if TRANSPOSE_OCTAVES > 0:
print(" Transposing ")
retune(VgmElectron.TRANSPOSE_OCTAVES1, 0,1,7)
retune(VgmElectron.TRANSPOSE_OCTAVES2, 2,3,8)
retune(VgmElectron.TRANSPOSE_OCTAVES3, 4,5,9)
#--------------------------------------------------------------
# Step 3 - mix the 2 primary channels down to 1 channel
#--------------------------------------------------------------
# map channel 2 to channel 1
# noise channel is completely ignored
ENABLE_DOWNMIX = True
if ENABLE_DOWNMIX:
print(" Downmix channels ")
#print("Frame " + str(i))
vol1 = registers[7][i]
vol2 = registers[8][i]
vol3 = registers[9][i]
tone1_active = vol1 != 15
tone2_active = vol2 != 15
tone3_active = vol3 != 15
tone_active = tone1_active or tone2_active or tone3_active
if tone_active:
print(" Tone active, mixing")
output_tone = 1
if self.USE_TECHNIQUE == 2:
c1f = (registers[1][i] << 4) + registers[0][i]
c2f = (registers[3][i] << 4) + registers[2][i]
c3f = (registers[5][i] << 4) + registers[4][i]
active_channels = [ False, False, False ]
if tone1_active:
active_channels[0] = True
print("Channel 1 is active volume")
if tone2_active:
active_channels[1] = True
print("Channel 2 is active volume")
if tone3_active:
active_channels[2] = True
print("Channel 3 is active volume")
# any channels playing the same frequency are filtered out
if tone1_active and tone2_active and c2f == c1f:
active_channels[1] = False
print("Channel 2 is same freq as Channel 1, filtered")
if tone1_active and tone3_active and c3f == c1f:
active_channels[2] = False
print("Channel 3 is same freq as Channel 1, filtered")
if tone2_active and tone3_active and c2f == c3f:
active_channels[2] = False
print("Channel 3 is same freq as Channel 2, filtered")
channel_count = 0
if active_channels[0]: channel_count += 1
if active_channels[1]: channel_count += 1
if active_channels[2]: channel_count += 1
print("channel_count=" + str(channel_count))
output_mix = []
if active_channels[0]: output_mix.append(1)
if active_channels[1]: output_mix.append(2)
if active_channels[2]: output_mix.append(3)
mix = (i % channel_count)
output_tone = output_mix[mix]
if self.USE_TECHNIQUE == 1:
# interleaving of channels 1+2 is done on odd/even frames for a consistent effect
mix = (i % MIX_RATE) == 0 #(i & 1) == 0
# random is no good, thought it might average out but it sounds , well random
#mix = random.random() < 0.5
# test code to see if modulo 3 any good, it wasn't
if False:
if channel_mix == 0 and vol1 != 0:
channel_mix = (channel_mix + 1) % 3
if channel_mix == 1 and vol2 != 0:
channel_mix = (channel_mix + 1) % 3
if channel_mix == 1 and vol3 != 0:
channel_mix = (channel_mix + 1) % 3
output_tone = (channel_mix % 3) + 1
print("output tone=" + str(output_tone))
channel_mix = (channel_mix + 1) % 3
if True:
# detect if channel 1 needs priority this frame
# - its volume is on, and the alternative frame mix flag is good
c1p = vol1 == 0 and mix
# don't give channel 2 priority if tone is the same and channel1 is playing
c1f = (registers[1][i] << 4) + registers[0][i]
c2f = (registers[3][i] << 4) + registers[2][i]
sametone = (c1f == c2f/2) or (c1f == c2f * 2) or (c1f == c2f)
sametone = sametone and (vol1 == vol2) and (vol1 == 0)
if vol1 == 0 and sametone: #diff < 100: #registers[0][i] == registers[2][i] and registers[1][i] == registers[2][i] and vol1 == 0:
c1p = True
print(" NOTE: channel 1 & channel 2 have same tone")
# replace channel 1 data with channel 2 data
# if, channel2 is active, but c1 doesn't have priority this frame
if vol2 == 0 and not c1p:# and vol1 != 0:
output_tone = 2
# if no volume on tone1, we can look at channel 3 too
if USE_TONE3:
#if registers[7][i] == 15:
if vol1 == 15 and vol2 == 15 and vol3 == 0 and not mix:# and not c1p and output_tone != 2:
print("tone3 active")
output_tone = 3
# pick which tone to output
if output_tone == 1:
# do nothing, because tone1 register frequency already setup
output_tone = 1
elif output_tone == 2:
# replace tone 1 frequency with tone 2 frequency
registers[0][i] = registers[2][i]
registers[1][i] = registers[3][i]
registers[7][i] = registers[8][i]
elif output_tone == 3:
# replace tone 1 frequency with tone 3 frequency
registers[0][i] = registers[4][i]
registers[1][i] = registers[5][i]
registers[7][i] = registers[9][i]
else:
print("UNHANDLED CASE - output_tone not set")
# output ULA data
final_volume = registers[7][i]
ula_tone = 0 # zero is highest freq. so inaudible, so thats how we handle volume
if final_volume == 0:
final_tone1 = (registers[1][i] << 4) + registers[0][i]
ula_tone = sn_to_electron(final_tone1)
electron_data.append( ula_tone )
# write to output ULA file
ula_file = open(dst_filename + ".ula.bin", 'wb')
ula_file.write(electron_data)
ula_file.close()
#--------------------------------------------------------------
# Final stage - output to vgm
#--------------------------------------------------------------
# Tone1----- Tone2----- Tone3----- Tone4 Vol1 Vol2 Vol3 Vol4
control = [ 0x80, 0x00, 0xa0, 0x00, 0xc0, 0x00, 0xe0, 0x90, 0xb0, 0xd0, 0xf0 ]
#filter = [ 0,1,2,3,7,8 ]
#filter = [ 2,3,8 ]
#filter = [ 0,1,2,3,4,5,6,7,8,9,10 ]
filter = [ 0,1,2,3,4,5,7,8,9 ]
if ENABLE_DOWNMIX:
filter = [ 0,1,7 ]
last_tone3 = 255
for i in range(len(registers[0])):
# 11 registers per frame
# Tone 0 HL Tone 1 HL Tone 2 HL Tone 3 Vol 0123
for r in range(11):
register_data = registers[r][i]
# dont update noise register unless different
update = True
if r == 6:
if register_data == last_tone3:
update = False
else:
last_tone3 = register_data
if not r in filter:
update = False
if update:
register_data |= control[r]
vgm_stream.extend( struct.pack('B', 0x50) ) # COMMAND
vgm_stream.extend( struct.pack('B', register_data) ) # DATA
# next frame
if sample_interval == 882: # wait 50
vgm_stream.extend( struct.pack('B', 0x63) )
elif sample_interval == 735: # wait 60
vgm_stream.extend( struct.pack('B', 0x62) )
else:
vgm_stream.extend( struct.pack('B', 0x61) )
vgm_stream.extend( struct.pack('B', int(sample_interval % 256)) )
vgm_stream.extend( struct.pack('B', int(sample_interval / 256)) )
# END command
vgm_stream.extend( struct.pack('B', 0x66) )
vgm.write_vgm(vgm_stream, dst_filename)
#output = bytearray()
# write the electron vgm file
#open(dst_filename, "wb").write( output )
#------------------------------------------------------------------------
# Main()
#------------------------------------------------------------------------
import argparse
# Determine if running as a script
if __name__ == '__main__':
print("Vgm2Electron.py : VGM music converter for Acorn Electron")
print("Written in 2019 by <NAME>, https://github.com/simondotm/vgm-packer")
print("")
epilog_string = ""
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=epilog_string)
parser.add_argument("input", help="VGM source file (must be single SN76489 PSG format) [input]")
parser.add_argument("-o", "--output", metavar="<output>", help="write VGC file <output> (default is '[input].vgc')")
parser.add_argument("-v", "--verbose", help="Enable verbose mode", action="store_true")
parser.add_argument("-a", "--attenuation", default="444", metavar="<nnn>", help="Set attenuation threshold for each channel, 3 character string where each character is 0-F and 0 is loudest, 4 is 50%, F is quietest, default: 444")
parser.add_argument("-t", "--transpose", default="000", metavar="<nnn>", help="Set octaves to transpose for each channel, where 1 is +1 octave and F is -1 octave.")
parser.add_argument("-c", "--channels", default="123", metavar="[1][2][3]", help="Set which channels will be included in the conversion, default 123, which means all 3 channels")
parser.add_argument("-q", "--technique", default=2, metavar="<n>", help="Set which downmix technique to use 1 or 2.")
args = parser.parse_args()
src = args.input
dst = args.output
if dst == None:
dst = os.path.splitext(src)[0] + ".electron.vgm"
# attenuation options
attenuation = args.attenuation
if (len(attenuation) != 3):
print("ERROR: attenuation must be 3 values eg. '444'")
sys.exit()
#print("attenuation=" + attenuation)
VgmElectron.ATTENTUATION_THRESHOLD1 = int(attenuation[0],16)
VgmElectron.ATTENTUATION_THRESHOLD2 = int(attenuation[1],16)
VgmElectron.ATTENTUATION_THRESHOLD3 = int(attenuation[2],16)
# transpose options
transpose = args.transpose
if (len(transpose) != 3):
print("ERROR: transpose must be 3 values eg. '000'")
sys.exit()
#print("transpose=" + transpose)
# 0 1 2 3 4 5 6 7 8 9 a b c d e f
ttable = [0,1,2,3,4,5,6,7,-8,-7,-6,-5,-4,-3,-2,-1]
VgmElectron.TRANSPOSE_OCTAVES1 = ttable[ int(transpose[0],16) ]
VgmElectron.TRANSPOSE_OCTAVES2 = ttable[ int(transpose[1],16) ]
VgmElectron.TRANSPOSE_OCTAVES3 = ttable[ int(transpose[2],16) ]
# channel options
print(args.channels)
VgmElectron.ENABLE_CHANNEL1 = args.channels.find("1") >= 0
VgmElectron.ENABLE_CHANNEL2 = args.channels.find("2") >= 0
VgmElectron.ENABLE_CHANNEL3 = args.channels.find("3") >= 0
print("Channel 1: Enabled=" + str(VgmElectron.ENABLE_CHANNEL1) + ", Transpose=" + str(VgmElectron.TRANSPOSE_OCTAVES1) + ", Attenuation="+str(VgmElectron.ATTENTUATION_THRESHOLD1))
print("Channel 2: Enabled=" + str(VgmElectron.ENABLE_CHANNEL2) + ", Transpose=" + str(VgmElectron.TRANSPOSE_OCTAVES2) + ", Attenuation="+str(VgmElectron.ATTENTUATION_THRESHOLD2))
print("Channel 3: Enabled=" + str(VgmElectron.ENABLE_CHANNEL3) + ", Transpose=" + str(VgmElectron.TRANSPOSE_OCTAVES3) + ", Attenuation="+str(VgmElectron.ATTENTUATION_THRESHOLD3))
# technique
VgmElectron.USE_TECHNIQUE = int(args.technique)
print("Using technique " + str(VgmElectron.USE_TECHNIQUE))
# check for missing files
if not os.path.isfile(src):
print("ERROR: File '" + src + "' not found")
sys.exit()
packer = VgmElectron()
packer.VERBOSE = args.verbose
packer.process(src, dst)
| #!/usr/bin/env python
# vgm2electron.py
# Tool for converting SN76489-based PSG VGM data to Acorn Electron
# By <NAME> (https://github.com/simondotm/)
# See https://github.com/simondotm/vgm-packer
#
# Copyright (c) 2019 <NAME>. All rights reserved.
#
# "MIT License":
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the Software
# is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import functools
import itertools
import struct
import sys
import time
import binascii
import math
import operator
import os
from modules.vgmparser import VgmStream
class VgmElectron:
OUTPUT_RAWDATA = False # output raw dumps of the data that was compressed by LZ4/Huffman
VERBOSE = True
# 0-3 represents approx the loudest 50% of volumes (=ON), 4-15 are the quietest 50% (=OFF)
ATTENTUATION_THRESHOLD1 = 10
ATTENTUATION_THRESHOLD2 = 10
ATTENTUATION_THRESHOLD3 = 10
# define the number of octaves to transpose whole song by, in case too much bass getting lost
TRANSPOSE_OCTAVES1 = 0
TRANSPOSE_OCTAVES2 = 0
TRANSPOSE_OCTAVES3 = 0 #-1
ENABLE_CHANNEL1 = True
ENABLE_CHANNEL2 = True
ENABLE_CHANNEL3 = True
USE_TECHNIQUE = 2
def __init__(self):
print("init")
#----------------------------------------------------------
# Utilities
#----------------------------------------------------------
# split the packed raw data into 11 separate streams
# returns array of 11 bytearrays
def split_raw(self, rawData, stripCommands = True):
registers = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
registers_opt = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
latched_channel = -1
output_block = bytearray()
output_blocks = []
for o in range(11):
output_blocks.append( bytearray() )
if stripCommands:
register_mask = 15
else:
register_mask = 255
# unpack the raw binary data in 11 arrays of register data without any deltas between them
# eg. the raw chip writes to all 11 registers every frame
n = 0
Packet = True
verbose = False
while (Packet):
packet_size = rawData[n]
if verbose:
print("packet_size=" + str(packet_size))
n += 1
if packet_size == 255:
Packet = False
else:
for x in range(packet_size):
d = rawData[n+x]
#if verbose:
# print " frame byte number=" +str(x)
# print " frame byte=" +str(d)
if d & 128:
# latch
c = (d>>5)&3
latched_channel = c
if d & 16:
# volume
if verbose:
print(" volume on channel " + str(c))
registers[c+7] = d & register_mask
else:
# tone
if verbose:
print(" tone on channel " + str(c))
registers[c*2+0] = d & register_mask
else:
if verbose:
print(" tone data on latched channel " + str(latched_channel))
registers[latched_channel*2+1] = d # we no longer do any masking here # d & 63 # tone data only contains 6 bits of info anyway, so no need for mask
if latched_channel == 3:
print("ERROR CHANNEL")
# emit current state of each of the 11 registers to 11 different bytearrays
for x in range(11):
output_blocks[x].append( registers[x] )
# next packet
n += packet_size
#print(output_blocks[6])
#IGNORE we no longer do this - let the decoder do it instead.
if False:
# make sure we only emit tone3 when it changes, or 15 for no-change
# this prevents the LFSR from being reset
lastTone3 = 255
for x in range(len(output_blocks[6])):
t = output_blocks[6][x]
if t == lastTone3:
output_blocks[6][x] = 15
lastTone3 = t
# print(output_blocks[6])
# Add EOF marker (0x08) to tone3 byte stream
output_blocks[6].append(0x08) # 0x08 is an invalid noise tone.
# return the split blocks
return output_blocks
# given an array of data points, serialize it to a bytearray
# size is the number of bytes to be used to represent each element in the source array.
def toByteArray(self, array, size = 1):
r = bytearray()
for v in array:
if size < 2:
r.append(v & 255)
else:
r.append(v & 255)
r.append(v >> 8)
return r
#----------------------------------------------------------
# Process(filename)
# Convert the given VGM file to an electron VGM file
#----------------------------------------------------------
def process(self, src_filename, dst_filename):
# load the VGM file, or alternatively interpret as a binary
if src_filename.lower()[-4:] != ".vgm":
print("ERROR: Not a VGM source")
return
vgm = VgmStream(src_filename)
data_block = vgm.as_binary()
data_offset = 0
# parse the header
header_size = data_block[0] # header size
play_rate = data_block[1] # play rate
if header_size == 5 and play_rate == 50:
packet_count = data_block[2] + data_block[3]*256 # packet count LO
duration_mm = data_block[4] # duration mm
duration_ss = data_block[5] # duration ss
data_offset = header_size+1
data_offset += data_block[data_offset]+1
data_offset += data_block[data_offset]+1
print("header_size=" +str(header_size))
print("play_rate="+str(play_rate))
print("packet_count="+str(packet_count))
print("duration_mm="+str(duration_mm))
print("duration_ss="+str(duration_ss))
print("data_offset="+str(data_offset))
else:
print("No header.")
print("")
# Trim off the header data. The rest is raw data.
data_block = data_block[data_offset:]
#----------------------------------------------------------
# Unpack the register data into 11 separate data streams
#----------------------------------------------------------
registers = self.split_raw(data_block, True)
#----------------------------------------------------------
# Begin VGM conversion to Electron
#----------------------------------------------------------
# Filter out channels we do not need
# Modify all volumes to full or none
# Interleave sound to a single channel
# output final VGM
vgm_stream = bytearray()
vgm_time = 0
electron_data = bytearray()
# given an SN76489 tone register value, return the equivalent Electron ULA register setting
def sn_to_electron(tone_value):
# hack to protect against divbyzero
if (tone_value == 0):
tone_value = 1
hz = float(vgm.vgm_source_clock) / ( 2.0 * float(tone_value) * 16.0)
print(" sn_to_electron freq " + str(hz) + "hz")
# electron
# Sound frequency = 1 MHz / [32 * (S + 1)]
# f * 32*(S+1) = 1Mhz
# 32*(S+1) = 1Mhz / f
# (S+1) = 1Mhz / f*32
#print ("SN freq is " + str(hz))
ula6 = int( 1000000.0 / (hz * 32.0) ) - 1
# check we are within range
if ula6 < 0:
print(" WARNING: Electron freqency '" + str(ula6) + "' too high (" + str(hz) + ")")
ula6 = 0
if ula6 > 255:
print(" WARNING: Electron frequency '" + str(ula6) + "' too low (" + str(hz) + ")")
ula6 = 255
return ula6
#--------------------------------------------------------------
# conversion settings
#--------------------------------------------------------------
# convert the register data to a vgm stream
sample_interval = int(44100 / vgm.metadata['rate']) # 882 # 50hz - TODO: use frame rate
print("sample_interval=" + str(sample_interval))
USE_TONE3 = VgmElectron.ENABLE_CHANNEL3 # True
# TODO: make these all parameters
# Add channel filter option
# Add mix type options
# --attentuation 468 --filter 123 --transpose 00F --mix 123 --arpeggio 2 --rate 50
# Add option to clamp or transpose out of range frequencies
# Make the .ula output file filename.electron.ula
# Add 0x01 as a terminating byte in the output ULA
MIX_RATE = 2 # modulo 2 for interleaving channels
# other options
# bias for channels
# transpose or silence out of range notes
channel_mix = 0
#--------------------------------------------------------------
# pre-process music to suit Electron capabilities
#--------------------------------------------------------------
for i in range(len(registers[0])):
print("Frame " + str(i))
#--------------------------------------------------------------
# step 1- map volumes to 1-bit precision
#--------------------------------------------------------------
# 11 registers per frame
# Tone 0 HL Tone 1 HL Tone 2 HL Tone 3 Vol 0123
for r in range(11):
if r > 6:
register_data = registers[r][i]
# apply the threshold for each channel
threshold = VgmElectron.ATTENTUATION_THRESHOLD1
if r == 8:
threshold = VgmElectron.ATTENTUATION_THRESHOLD2
if r == 9:
threshold = VgmElectron.ATTENTUATION_THRESHOLD3
# if it's a volume, map to loudest volume or no volume (using logarithmic scale)
if register_data < threshold:
register_data = 0 # full volume
else:
register_data = 15 # zero volume
if r == 7 and VgmElectron.ENABLE_CHANNEL1 == False:
register_data = 15 # zero volume
if r == 8 and VgmElectron.ENABLE_CHANNEL2 == False:
register_data = 15 # zero volume
if r == 9 and VgmElectron.ENABLE_CHANNEL3 == False:
register_data = 15 # zero volume
registers[r][i] = register_data
#--------------------------------------------------------------
# step 2 - transpose to fit frequency range
#--------------------------------------------------------------
# final step - bring tone1 into the frequency range of the electron
# if the frequency goes below the range of the ULA capabilities, add an octave
def retune(octaves, l,h,v):
#if (octaves == 0):
# print(" No transpose performed, octaves set to 0")
# return
print( " tonehi=" + str(registers[h][i]) + ", tonelo=" + str(registers[l][i]))
tone_value = (registers[h][i] << 4) + registers[l][i]
if tone_value > 0:
tone_freq = float(vgm.vgm_source_clock) / ( 2.0 * float(tone_value) * 16.0)
print(" Retune, Channel " + str(int(l/2)) + " tone=" + str(tone_value) + ", freq=" + str(tone_freq))
# electron baseline is 122Hz not 244Hz as the AUG states.
baseline_freq = 1000000.0 / (32.0*256.0)
target_freq = tone_freq
retuned = 0
transpose = abs(octaves)
while retuned != transpose: # target_freq < baseline_freq:
if (octaves < 0):
target_freq /= 2.0
else:
target_freq *= 2.0
retuned += 1
# if we can't reach baseline freq, transpose once, then silence if still too low :(
if target_freq < baseline_freq:
print(" WARNING: Freq too low - Added " + str(1) + " octave(s) - from " + str(target_freq) + " to " + str(target_freq*2.0) + "Hz")
# better to just clamp low frequencies at the bottom, and risk tuning issues rather than transposition jumps
target_freq = baseline_freq #*= 2.0
retuned = 1
if target_freq < baseline_freq:
registers[v][i] = 15
print(" Tone " + str(i) + " silenced because frequency too low - " + str(target_freq))
#target_freq *= 2.0
#retuned += 1
if retuned:
#print(" WARNING: Freq too low - Added " + str(retuned) + " octave(s) - from " + str(tone_freq) + " to " + str(target_freq) + "Hz")
tone_value = int( round( float(vgm.vgm_source_clock) / (2.0 * target_freq * 16.0 ) ) )
registers[h][i] = tone_value >> 4
registers[l][i] = tone_value & 15
# transpose
#if TRANSPOSE_OCTAVES > 0:
print(" Transposing ")
retune(VgmElectron.TRANSPOSE_OCTAVES1, 0,1,7)
retune(VgmElectron.TRANSPOSE_OCTAVES2, 2,3,8)
retune(VgmElectron.TRANSPOSE_OCTAVES3, 4,5,9)
#--------------------------------------------------------------
# Step 3 - mix the 2 primary channels down to 1 channel
#--------------------------------------------------------------
# map channel 2 to channel 1
# noise channel is completely ignored
ENABLE_DOWNMIX = True
if ENABLE_DOWNMIX:
print(" Downmix channels ")
#print("Frame " + str(i))
vol1 = registers[7][i]
vol2 = registers[8][i]
vol3 = registers[9][i]
tone1_active = vol1 != 15
tone2_active = vol2 != 15
tone3_active = vol3 != 15
tone_active = tone1_active or tone2_active or tone3_active
if tone_active:
print(" Tone active, mixing")
output_tone = 1
if self.USE_TECHNIQUE == 2:
c1f = (registers[1][i] << 4) + registers[0][i]
c2f = (registers[3][i] << 4) + registers[2][i]
c3f = (registers[5][i] << 4) + registers[4][i]
active_channels = [ False, False, False ]
if tone1_active:
active_channels[0] = True
print("Channel 1 is active volume")
if tone2_active:
active_channels[1] = True
print("Channel 2 is active volume")
if tone3_active:
active_channels[2] = True
print("Channel 3 is active volume")
# any channels playing the same frequency are filtered out
if tone1_active and tone2_active and c2f == c1f:
active_channels[1] = False
print("Channel 2 is same freq as Channel 1, filtered")
if tone1_active and tone3_active and c3f == c1f:
active_channels[2] = False
print("Channel 3 is same freq as Channel 1, filtered")
if tone2_active and tone3_active and c2f == c3f:
active_channels[2] = False
print("Channel 3 is same freq as Channel 2, filtered")
channel_count = 0
if active_channels[0]: channel_count += 1
if active_channels[1]: channel_count += 1
if active_channels[2]: channel_count += 1
print("channel_count=" + str(channel_count))
output_mix = []
if active_channels[0]: output_mix.append(1)
if active_channels[1]: output_mix.append(2)
if active_channels[2]: output_mix.append(3)
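# Technique 2 behaves like an arpeggio: cycle round-robin through whichever channels
# are active this frame, playing one of them per frame so chords are spread over time.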
mix = (i % channel_count)
output_tone = output_mix[mix]
if self.USE_TECHNIQUE == 1:
# interleaving of channels 1+2 is done on odd/even frames for a consistent effect
mix = (i % MIX_RATE) == 0 #(i & 1) == 0
# random is no good, thought it might average out but it sounds , well random
#mix = random.random() < 0.5
# test code to see if modulo 3 any good, it wasn't
if False:
if channel_mix == 0 and vol1 != 0:
channel_mix = (channel_mix + 1) % 3
if channel_mix == 1 and vol2 != 0:
channel_mix = (channel_mix + 1) % 3
if channel_mix == 1 and vol3 != 0:
channel_mix = (channel_mix + 1) % 3
output_tone = (channel_mix % 3) + 1
print("output tone=" + str(output_tone))
channel_mix = (channel_mix + 1) % 3
if True:
# detect if channel 1 needs priority this frame
# - its volume is on, and the alternative frame mix flag is good
c1p = vol1 == 0 and mix
# don't give channel 2 priority if tone is the same and channel1 is playing
c1f = (registers[1][i] << 4) + registers[0][i]
c2f = (registers[3][i] << 4) + registers[2][i]
sametone = (c1f == c2f/2) or (c1f == c2f * 2) or (c1f == c2f)
sametone = sametone and (vol1 == vol2) and (vol1 == 0)
if vol1 == 0 and sametone: #diff < 100: #registers[0][i] == registers[2][i] and registers[1][i] == registers[2][i] and vol1 == 0:
c1p = True
print(" NOTE: channel 1 & channel 2 have same tone")
# replace channel 1 data with channel 2 data
# if, channel2 is active, but c1 doesn't have priority this frame
if vol2 == 0 and not c1p:# and vol1 != 0:
output_tone = 2
# if no volume on tone1, we can look at channel 3 too
if USE_TONE3:
#if registers[7][i] == 15:
if vol1 == 15 and vol2 == 15 and vol3 == 0 and not mix:# and not c1p and output_tone != 2:
print("tone3 active")
output_tone = 3
# pick which tone to output
if output_tone == 1:
# do nothing, because tone1 register frequency already setup
output_tone = 1
elif output_tone == 2:
# replace tone 1 frequency with tone 2 frequency
registers[0][i] = registers[2][i]
registers[1][i] = registers[3][i]
registers[7][i] = registers[8][i]
elif output_tone == 3:
# replace tone 1 frequency with tone 3 frequency
registers[0][i] = registers[4][i]
registers[1][i] = registers[5][i]
registers[7][i] = registers[9][i]
else:
print("UNHANDLED CASE - output_tone not set")
# output ULA data
final_volume = registers[7][i]
ula_tone = 0 # zero is highest freq. so inaudible, so thats how we handle volume
if final_volume == 0:
final_tone1 = (registers[1][i] << 4) + registers[0][i]
ula_tone = sn_to_electron(final_tone1)
electron_data.append( ula_tone )
# write to output ULA file
ula_file = open(dst_filename + ".ula.bin", 'wb')
ula_file.write(electron_data)
ula_file.close()
#--------------------------------------------------------------
# Final stage - output to vgm
#--------------------------------------------------------------
# Tone1----- Tone2----- Tone3----- Tone4 Vol1 Vol2 Vol3 Vol4
control = [ 0x80, 0x00, 0xa0, 0x00, 0xc0, 0x00, 0xe0, 0x90, 0xb0, 0xd0, 0xf0 ]
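# SN76489 latch/command prefixes OR'd into each 4-bit register value: 0x80/0xA0/0xC0 latch
# the three tone channels (0x00 marks the follow-up data byte), 0xE0 is the noise register,
# and 0x90/0xB0/0xD0/0xF0 are the four volume registers.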
#filter = [ 0,1,2,3,7,8 ]
#filter = [ 2,3,8 ]
#filter = [ 0,1,2,3,4,5,6,7,8,9,10 ]
filter = [ 0,1,2,3,4,5,7,8,9 ]
if ENABLE_DOWNMIX:
filter = [ 0,1,7 ]
last_tone3 = 255
for i in range(len(registers[0])):
# 11 registers per frame
# Tone 0 HL Tone 1 HL Tone 2 HL Tone 3 Vol 0123
for r in range(11):
register_data = registers[r][i]
# dont update noise register unless different
update = True
if r == 6:
if register_data == last_tone3:
update = False
else:
last_tone3 = register_data
if not r in filter:
update = False
if update:
register_data |= control[r]
vgm_stream.extend( struct.pack('B', 0x50) ) # COMMAND
vgm_stream.extend( struct.pack('B', register_data) ) # DATA
# next frame
if sample_interval == 882: # wait 50
vgm_stream.extend( struct.pack('B', 0x63) )
elif sample_interval == 735: # wait 60
vgm_stream.extend( struct.pack('B', 0x62) )
else:
vgm_stream.extend( struct.pack('B', 0x61) )
vgm_stream.extend( struct.pack('B', int(sample_interval % 256)) )
vgm_stream.extend( struct.pack('B', int(sample_interval / 256)) )
# END command
vgm_stream.extend( struct.pack('B', 0x66) )
vgm.write_vgm(vgm_stream, dst_filename)
#output = bytearray()
# write the electron vgm file
#open(dst_filename, "wb").write( output )
#------------------------------------------------------------------------
# Main()
#------------------------------------------------------------------------
import argparse
# Determine if running as a script
if __name__ == '__main__':
print("Vgm2Electron.py : VGM music converter for Acorn Electron")
print("Written in 2019 by <NAME>, https://github.com/simondotm/vgm-packer")
print("")
epilog_string = ""
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=epilog_string)
parser.add_argument("input", help="VGM source file (must be single SN76489 PSG format) [input]")
parser.add_argument("-o", "--output", metavar="<output>", help="write VGC file <output> (default is '[input].vgc')")
parser.add_argument("-v", "--verbose", help="Enable verbose mode", action="store_true")
parser.add_argument("-a", "--attenuation", default="444", metavar="<nnn>", help="Set attenuation threshold for each channel, 3 character string where each character is 0-F and 0 is loudest, 4 is 50%, F is quietest, default: 444")
parser.add_argument("-t", "--transpose", default="000", metavar="<nnn>", help="Set octaves to transpose for each channel, where 1 is +1 octave and F is -1 octave.")
parser.add_argument("-c", "--channels", default="123", metavar="[1][2][3]", help="Set which channels will be included in the conversion, default 123, which means all 3 channels")
parser.add_argument("-q", "--technique", default=2, metavar="<n>", help="Set which downmix technique to use 1 or 2.")
args = parser.parse_args()
src = args.input
dst = args.output
if dst == None:
dst = os.path.splitext(src)[0] + ".electron.vgm"
# attenuation options
attenuation = args.attenuation
if (len(attenuation) != 3):
print("ERROR: attenuation must be 3 values eg. '444'")
sys.exit()
#print("attenuation=" + attenuation)
VgmElectron.ATTENTUATION_THRESHOLD1 = int(attenuation[0],16)
VgmElectron.ATTENTUATION_THRESHOLD2 = int(attenuation[1],16)
VgmElectron.ATTENTUATION_THRESHOLD3 = int(attenuation[2],16)
# transpose options
transpose = args.transpose
if (len(transpose) != 3):
print("ERROR: transpose must be 3 values eg. '000'")
sys.exit()
#print("transpose=" + transpose)
# 0 1 2 3 4 5 6 7 8 9 a b c d e f
ttable = [0,1,2,3,4,5,6,7,-8,-7,-6,-5,-4,-3,-2,-1]
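# The transpose digit is read like a signed nibble: 1-7 transpose up by that many octaves,
# while 8-F map to -8..-1 and transpose down.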
VgmElectron.TRANSPOSE_OCTAVES1 = ttable[ int(transpose[0],16) ]
VgmElectron.TRANSPOSE_OCTAVES2 = ttable[ int(transpose[1],16) ]
VgmElectron.TRANSPOSE_OCTAVES3 = ttable[ int(transpose[2],16) ]
# channel options
print(args.channels)
VgmElectron.ENABLE_CHANNEL1 = args.channels.find("1") >= 0
VgmElectron.ENABLE_CHANNEL2 = args.channels.find("2") >= 0
VgmElectron.ENABLE_CHANNEL3 = args.channels.find("3") >= 0
print("Channel 1: Enabled=" + str(VgmElectron.ENABLE_CHANNEL1) + ", Transpose=" + str(VgmElectron.TRANSPOSE_OCTAVES1) + ", Attenuation="+str(VgmElectron.ATTENTUATION_THRESHOLD1))
print("Channel 2: Enabled=" + str(VgmElectron.ENABLE_CHANNEL2) + ", Transpose=" + str(VgmElectron.TRANSPOSE_OCTAVES2) + ", Attenuation="+str(VgmElectron.ATTENTUATION_THRESHOLD2))
print("Channel 3: Enabled=" + str(VgmElectron.ENABLE_CHANNEL3) + ", Transpose=" + str(VgmElectron.TRANSPOSE_OCTAVES3) + ", Attenuation="+str(VgmElectron.ATTENTUATION_THRESHOLD3))
# technique
VgmElectron.USE_TECHNIQUE = int(args.technique)
print("Using technique " + str(VgmElectron.USE_TECHNIQUE))
# check for missing files
if not os.path.isfile(src):
print("ERROR: File '" + src + "' not found")
sys.exit()
packer = VgmElectron()
packer.VERBOSE = args.verbose
packer.process(src, dst)
| en | 0.590557 | #!/usr/bin/env python # vgm2electron.py # Tool for converting SN76489-based PSG VGM data to Acorn Electron # By <NAME> (https://github.com/simondotm/) # See https://github.com/simondotm/vgm-packer # # Copyright (c) 2019 <NAME>. All rights reserved. # # "MIT License": # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the Software # is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included # in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # output raw dumps of the data that was compressed by LZ4/Huffman # 0-3 represents approx the loudest 50% of volumes (=ON), 4-15 are the quietest 50% (=OFF) # define the number of octaves to transpose whole song by, in case too much bass getting lost #-1 #---------------------------------------------------------- # Utilities #---------------------------------------------------------- # split the packed raw data into 11 separate streams # returns array of 11 bytearrays # unpack the raw binary data in 11 arrays of register data without any deltas between them # eg. the raw chip writes to all 11 registers every frame #if verbose: # print " frame byte number=" +str(x) # print " frame byte=" +str(d) # latch # volume # tone # we no longer do any masking here # d & 63 # tone data only contains 6 bits of info anyway, so no need for mask # emit current state of each of the 11 registers to 11 different bytearrays # next packet #print(output_blocks[6]) #IGNORE we no longer do this - let the decoder do it instead. # make sure we only emit tone3 when it changes, or 15 for no-change # this prevents the LFSR from being reset # print(output_blocks[6]) # Add EOF marker (0x08) to tone3 byte stream # 0x08 is an invalid noise tone. # return the split blocks # given an array of data points, serialize it to a bytearray # size is the number of bytes to be used to represent each element in the source array. #---------------------------------------------------------- # Process(filename) # Convert the given VGM file to an electron VGM file #---------------------------------------------------------- # load the VGM file, or alternatively interpret as a binary # parse the header # header size # play rate # packet count LO # duration mm # duration ss # Trim off the header data. The rest is raw data. 
#---------------------------------------------------------- # Unpack the register data into 11 separate data streams #---------------------------------------------------------- #---------------------------------------------------------- # Begin VGM conversion to Electron #---------------------------------------------------------- # Filter out channels we do not need # Modify all volumes to full or none # Interleave sound to a single channel # output final VGM # given an SN76489 tone register value, return the equivalent Electron ULA register setting # hack to protect against divbyzero # electron # Sound frequency = 1 MHz / [32 * (S + 1)] # f * 32*(S+1) = 1Mhz # 32*(S+1) = 1Mhz / f # (S+1) = 1Mhz / f*32 #print ("SN freq is " + str(hz)) # check we are within range #-------------------------------------------------------------- # conversion settings #-------------------------------------------------------------- # convert the register data to a vgm stream # 882 # 50hz - TODO: use frame rate # True # TODO: make these all parameters # Add channel filter option # Add mix type options # --attentuation 468 --filter 123 --transpose 00F --mix 123 --arpeggio 2 --rate 50 # Add option to clamp or transpose out of range frequencies # Make the .ula output file filename.electron.ula # Add 0x01 as a terminating byte in the output ULA # modulo 2 for interleaving channels # other options # bias for channels # transpose or silence out of range notes #-------------------------------------------------------------- # pre-process music to suit Electron capabilities #-------------------------------------------------------------- #-------------------------------------------------------------- # step 1- map volumes to 1-bit precision #-------------------------------------------------------------- # 11 registers per frame # Tone 0 HL Tone 1 HL Tone 2 HL Tone 3 Vol 0123 # apply the threshold for each channel # if its a volume, map to loudest volume or no volume (using logarithmic scale) # full volume # zero volume # zero volume # zero volume # zero volume #-------------------------------------------------------------- # step 2 - transpose to fit frequency range #-------------------------------------------------------------- # final step - bring tone1 into the frequency range of the electron # if the frequency goes below the range of the ULA capabilities, add an octave #if (octaves == 0): # print(" No transpose performed, octaves set to 0") # return # electron baseline is 122Hz not 244Hz as the AUG states. 
# target_freq < baseline_freq: # if cant reach baseline freq, transpose once, then silence if still too low :( # better to just clamp low frequencies at the bottom, and risk tuning issues rather than transposition jumps #*= 2.0 #target_freq *= 2.0 #retuned += 1 #print(" WARNING: Freq too low - Added " + str(retuned) + " octave(s) - from " + str(tone_freq) + " to " + str(target_freq) + "Hz") # transpose #if TRANSPOSE_OCTAVES > 0: #-------------------------------------------------------------- # Step 3 - mix the 2 primary channels down to 1 channel #-------------------------------------------------------------- # map channel 2 to channel 1 # noise channel is completely ignored #print("Frame " + str(i)) # any channels playing the same frequency are filtered out # interleaving of channels 1+2 is done on odd/even frames for a consistent effect #(i & 1) == 0 # random is no good, thought it might average out but it sounds , well random #mix = random.random() < 0.5 # test code to see if modulo 3 any good, it wasn't # detect if channel 1 needs priority this frame # - its volume is on, and the alternative frame mix flag is good # don't give channel 2 priority if tone is the same and channel1 is playing #diff < 100: #registers[0][i] == registers[2][i] and registers[1][i] == registers[2][i] and vol1 == 0: # replace channel 1 data with channel 2 data # if, channel2 is active, but c1 doesn't have priority this frame # and vol1 != 0: # if no volume on tone1, we can look at channel 3 too #if registers[7][i] == 15: # and not c1p and output_tone != 2: # pick which tone to output # do nothing, because tone1 register frequency already setup # replace tone 1 frequency with tone 2 frequency # replace tone 1 frequency with tone 3 frequency # output ULA data # zero is highest freq. so inaudible, so thats how we handle volume # write to output ULA file #-------------------------------------------------------------- # Final stage - output to vgm #-------------------------------------------------------------- # Tone1----- Tone2----- Tone3----- Tone4 Vol1 Vol2 Vol3 Vol4 #filter = [ 0,1,2,3,7,8 ] #filter = [ 2,3,8 ] #filter = [ 0,1,2,3,4,5,6,7,8,9,10 ] # 11 registers per frame # Tone 0 HL Tone 1 HL Tone 2 HL Tone 3 Vol 0123 # dont update noise register unless different # COMMAND # DATA # next frame # wait 50 # wait 60 # END command #output = bytearray() # write the electron vgm file #open(dst_filename, "wb").write( output ) #------------------------------------------------------------------------ # Main() #------------------------------------------------------------------------ # Determine if running as a script # attenuation options #print("attenuation=" + attenuation) # transpose options #print("transpose=" + transpose) # 0 1 2 3 4 5 6 7 8 9 a b c d e f # channel options # technique # check for missing files | 1.670812 | 2 |
twitter_sent.py | rthorst/TwitterSentiment | 6 | 10234 | <gh_stars>1-10
import webapp2
import tweepy
import json
import csv
import os
import statistics
import bokeh
from bokeh.io import show, output_file
from bokeh.plotting import figure
from bokeh.models import HoverTool, ColumnDataSource
from bokeh.embed import components, json_item
from bokeh.resources import INLINE
from bokeh.models.glyphs import Line, Text
import numpy as np
import random
import operator
from collections import Counter
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
"""
---AUTHOR: ---
<NAME>
<EMAIL>
---LICENSE: ---
MIT License.
---ABOUT: ---
Application to get the sentiment of recent tweets based on a keyword.
Example:
keyword -> "taco bell"
retrieve 300 recent tweets mentioning taco bell.
get average sentiment.
plot distribution of tweets and sentiment.
plot most informative words for this application.
This script runs based on google app server.
Expects Python 2.7
Dependencies need to be included in the lib/ directory (pip install -t lib [PACKAGE_NAME])
The main work is done by the MainPage class. The get() method runs the main pipeline of code and returns HTML as a
string.
Working online version: https://twittersentiment-247018.appspot.com/
"""
def get_tweets(keyword, max_tweets=200):
"""
Given a keyword as a string (e.g. "data science"), get recent tweets matching that string up to # max_tweets.
Return a list of tweets, represented as strings.
"""
# API keys.
consumer_key = "kNOG1klRMMUYbsjMuY5TKl4lE"
consumer_secret = "ieghv6WI1qseYly43A0Ra1MPksEw1i5Onma0txfEu5aHantD2v"
access_key = "<KEY>"
access_secret = "<KEY>"
# Initialize tweepy API object and authorize using API key.
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
""" Get tweets."""
alltweets = []
for status in tweepy.Cursor(
api.search,
q=keyword + " -RT", # the -RT flag excludes retweets.
count=1000,
result_type="recent",
include_entities=True,
monitor_rate_limit=True,
wait_on_rate_limit=True,
lang="en",
).items():
# get text of the tweet, encoding as utf-8.
text = str(status.text.encode("utf-8"))
# add to the data structure, alltweets, holding the tweets.
alltweets.append(text)
# if we've reached max_tweets, break.
if len(alltweets) >= max_tweets:
break
return alltweets
class VaderSentimentModel:
"""
Calculate sentiment using a mostly lexicon-based approach that is optimized for social media.
Approach is social media aware, for example emoticons are part of the lexicon and tokenization is twitter-sensitive.
There are also some basic rules, e.g. it's sensitive to negations.
"""
def __init__(self):
# Initialize a vader_analyzer object which does the work of sentiment analysis.
self.vader_analyzer = SentimentIntensityAnalyzer()
pass
def classify_sentiment(self, tweet):
# Classify sentiment of a single tweet.
# Input tweet: as string.
# Return sentiment score :
# range -1 (very negative) to +1 (very positive).
# score is calculated as p(positive) - p(negative)
# normalizing to range from -1 to 1.
# calculate sentiment in a dictionary. key is polarity ("pos", "neg", "neut") and value is probability.
sentiment_dict = self.vader_analyzer.polarity_scores(tweet)
# retrieve the compound sentiment score, which is p(pos) - p(neg), but normalized to range from {-1, 1}
score = sentiment_dict["compound"] # compound is the combined score scaled to {-1, 1}
return score
def plot_tweets(tweets, sentiment_scores):
"""
Create a histogram-style barplot of tweets and their sentiment.
Return a bokeh plot object, expressed as a tuple of (resources, script, div).
Where :
resources: some CSS, etc. that goes in the head of the webpage for styling the plot.
script: javascript for the plot to function. expressed as string.
div: html div container for the plot. expressed as string.
"""
# Sort tweets from negative to positive.
# This step is not strictly necessary, but makes it easier to see the overall shape of the data.
sorted_indices = np.argsort(sentiment_scores)
sentiment_scores = np.array(sentiment_scores)[sorted_indices]
tweets = np.array(tweets)[sorted_indices]
# Express the data as a bokeh data source object.
source = ColumnDataSource(data={
"text": tweets,
"sentiment": sentiment_scores,
"x": np.arange(len(tweets)),
})
"""
Create plot.
"""
# Create plot object.
width = 0.9
p = figure(x_axis_label="Tweet", y_axis_label="Sentiment (0 = Neutral)")
p.vbar(source=source, x="x", top="sentiment", width=width)
# Add hover tool, allowing mouseover to view text and sentiment.
hover = HoverTool(
tooltips=[
("text", "@text"),
("sentiment", "@sentiment")
],
formatters={
"text": "printf",
"sentiment": "printf"
},
mode="vline"
)
p.add_tools(hover)
"""
Format plot.
"""
# axis font size
p.xaxis.axis_label_text_font_size = "15pt"
p.yaxis.axis_label_text_font_size = "15pt"
# remove tick marks from axes
p.xaxis.major_tick_line_color = None
p.xaxis.minor_tick_line_color = None
p.yaxis.major_tick_line_color = None
p.yaxis.minor_tick_line_color = None
# adjust plot width, height
scale = 1.5
p.plot_height = int(250 * scale)
p.plot_width = int(450 * scale)
# remove toolbar (e.g. move, resize, etc) from right of plot.
p.toolbar.logo = None
p.toolbar_location = None
# remove gridlines
p.xgrid.visible = False
p.ygrid.visible = False
# remove x axis tick labels (done by setting label fontsize to 0 pt)
p.xaxis.major_label_text_font_size = '0pt'
"""
Export plot
"""
# Create resources string, which is CSS, etc. that goes in the head of
resources = INLINE.render()
# Get javascript (script) and HTML div (div) for the plot.
script, div = components(p)
return (resources, script, div)
def plot_reason(tweets, sentiment_scores):
"""
Plot the top words that lead us to the classification as positive or negative.
Return:
script : javascript for the plot, expressed as string.
div : html container for the plot, expressed as string.
NOTE: requires the shared resources attribute from plot_tweets() in the HTML header.
"""
"""
Calculate the sentiment of each individual token in the tweets.
"""
# list tokens, keeping only unique tokens (e.g. remove repeated words).
all_toks = []
for tweet in tweets:
toks = tweet.lower().split()
all_toks.extend(toks)
all_toks = [tok for tok in set(all_toks)] # remove duplicates.
# calculate sentiment of each token.
sm = VaderSentimentModel()
toks_sentiment = [sm.classify_sentiment(tok) for tok in all_toks]
"""
sort tokens by sentiment.
if overall valence is negative, sort negative to positive.
if overall valence is positive, sort positive to negative.
thus, in any case, the earliest elements in the list are the most informative words.
"""
nwords = 20
# negative? sort neg -> positive.
if np.mean(sentiment_scores) < 0:
sorted_indices = np.argsort(toks_sentiment)
# else (positive)? sort positive -> negative
else:
sorted_indices = np.argsort(toks_sentiment)[::-1]
# toks_to_plot: shape (nwords, ) list of informative tokens.
# sentiment_to_plot: shape (nwords, ) list of sentiment of these tokens.
toks_to_plot = np.array(all_toks)[sorted_indices][:nwords]
sentiment_to_plot = np.array(toks_sentiment)[sorted_indices][:nwords]
# convert all sentiment scores to positive values.
# this is for DISPLAY only, to make all plots go from left to right.
# we still retain the correct tokens and sorting order.
sentiment_to_plot = np.array([abs(v) for v in sentiment_to_plot])
"""
Set up plot.
- create data source object.
- define formatting variables.
"""
text_offset = 0.1
source = ColumnDataSource(data={
"token": toks_to_plot,
"sentiment": sentiment_to_plot,
"x": np.arange(len(toks_to_plot))[::-1],
"label_x": sentiment_to_plot + text_offset
})
"""
Make plot.
"""
# Create initial plot.
width = 0.9
xrange = [0, max(sentiment_to_plot) + 1]
p2 = figure(x_axis_label="Sentiment", y_axis_label="Word", x_range=xrange)
p2.hbar(source=source, y="x", right="sentiment", height=width)
"""
Format plot.
"""
# Annotate each bar with the word being represented.
glyph = Text(x="label_x", y="x", text="token")
p2.add_glyph(source, glyph)
# Axis labels.
p2.xaxis.axis_label_text_font_size = "15pt"
p2.yaxis.axis_label_text_font_size = "15pt"
# Remove ticks.
p2.xaxis.major_tick_line_color = None
p2.xaxis.minor_tick_line_color = None
p2.yaxis.major_tick_line_color = None
p2.yaxis.minor_tick_line_color = None
# Remove y axis tick labels.
p2.yaxis.major_label_text_font_size = '0pt'
# Plot width, height.
scale = 1.5
p2.plot_height = int(250 * scale)
p2.plot_width = int(250 * scale)
# remove toolbar (e.g. move, resize, etc) from right of plot.
p2.toolbar.logo = None
p2.toolbar_location = None
# remove gridlines
p2.xgrid.visible = False
p2.ygrid.visible = False
# remove x axis tick labels (set font to 0pt)
p2.xaxis.major_label_text_font_size = '0pt'
# get bokeh component for plot 2.
script2, div2 = components(p2)
return (script2, div2)
class MainPage(webapp2.RequestHandler):
"""
This class does the work of writing HTML to the google app server.
Thus, we allow the get() method to incorporate:
our main pipeline (getting tweets, analyzing sentiment, producing graphs)
writing html
"""
def get(self):
"""
Get tweets and sentiment scores.
"""
# Retrieve keyword from the HTML form. If no keyword provided, use a random suggested keyword.
keyword = self.request.get("keyword")
if not keyword:
suggested_keywords = ["alarm clocks", "the future", "miller lite", "taco bell", "yoga", "netflix",
"life", "traffic", "elon musk", "beards", "world trade", "pepsi", "amazon"]
indices = np.arange(len(suggested_keywords))
random.shuffle(indices)
keyword = suggested_keywords[indices[0]]
# Get recent tweets based on the keyword, up to 300 maximum tweets.
tweets = get_tweets(keyword, max_tweets=300)
# Compute the sentiment of each tweet.
v = VaderSentimentModel()
sentiment_scores = [v.classify_sentiment(tw) for tw in tweets] # shape (ntweets,)
# Label sentiment categorically, e.g. "negative" or "positive"
M_sent = np.mean(sentiment_scores)
map = {1 : "positive", 0 : "negative"}
valence = map[int(M_sent > 0)]
"""
Create plots.
"""
#############
# Plot #1:
############
# Plot the distribution of tweets and sentiment.
# Resources is CSS code that goes in the header of the HTML. Shared across all bokeh plots.
# Script1 is javascript for this plot.
# Div1 is an HTML container for the plot. Goes where you want the plot to appear.
resources, script1, div1 = plot_tweets(tweets=tweets, sentiment_scores=sentiment_scores)
#############
# Plot #2:
############
# Plot the key words that lead us to this classification.
# Script2 is javascript for this plot.
# Div2 is an HTML container for this plot. Goes where you want the plot to appear.
# Requires the HTML to include the shared resources, generated above, in the <HEAD>
script2, div2 = plot_reason(tweets=tweets, sentiment_scores=sentiment_scores)
"""
Create HTML output.
"""
# Load HTML template.
# This is a functioning webpage, with some placeholders for the keywords and plots we have created.
html_p = os.path.join("html", "index.html")
html = open(html_p, "r").read()
# Fill in placeholders in the HTML with variables we have created.
term_to_value = {
"[[!KEYWORD]]" : keyword,
"[[!VALENCE]]" : valence,
"[[!BOKEH_SCRIPT]]" : script1,
"[[!BOKEH_SCRIPT2]]": script2,
"[[!BOKEH_DIV]]" : div1,
"[[!BOKEH_RESOURCES]]" : resources,
"[[!BOKEH_DIV2]]" : div2
}
for term, val in term_to_value.items():
html = html.replace(term, val)
"""
Write a response.
This essentially returns HTML to the google app engine.
This will render a webpage visible to the user.
"""
self.response.headers["Content-Type"] = "text/html"
self.response.write(html)
# Run application.
routes = [('/', MainPage)]
my_app = webapp2.WSGIApplication(routes, debug=True) | import webapp2
import tweepy
import json
import csv
import os
import statistics
import bokeh
from bokeh.io import show, output_file
from bokeh.plotting import figure
from bokeh.models import HoverTool, ColumnDataSource
from bokeh.embed import components, json_item
from bokeh.resources import INLINE
from bokeh.models.glyphs import Line, Text
import numpy as np
import random
import operator
from collections import Counter
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
"""
---AUTHOR: ---
<NAME>
<EMAIL>
---LICENSE: ---
MIT License.
---ABOUT: ---
Application to get the sentiment of recent tweets based on a keyword.
Example:
keyword -> "taco bell"
retrieve 300 recent tweets mentioning taco bell.
get average sentiment.
plot distribution of tweets and sentiment.
plot most informative words for this application.
This script runs based on google app server.
Expects Python 2.7
Dependencies need to be included in the lib/ directory (pip install -t lib [PACKAGE_NAME])
The main work is done by the MainPage class. The get() method runs the main pipeline of code and returns HTML as a
string.
Working online version: https://twittersentiment-247018.appspot.com/
"""
def get_tweets(keyword, max_tweets=200):
"""
Given a keyword as a string (e.g. "data science"), get recent tweets matching that string up to # max_tweets.
Return a list of tweets, represented as strings.
"""
# API keys.
consumer_key = "kNOG1klRMMUYbsjMuY5TKl4lE"
consumer_secret = "ieghv6WI1qseYly43A0Ra1MPksEw1i5Onma0txfEu5aHantD2v"
access_key = "<KEY>"
access_secret = "<KEY>"
# Initialize tweepy API object and authorize using API key.
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
""" Get tweets."""
alltweets = []
for status in tweepy.Cursor(
api.search,
q=keyword + " -RT", # the -RT flag excludes retweets.
count=1000,
result_type="recent",
include_entities=True,
monitor_rate_limit=True,
wait_on_rate_limit=True,
lang="en",
).items():
# get text of the tweet, encoding as utf-8.
text = str(status.text.encode("utf-8"))
# add to the data structure, alltweets, holding the tweets.
alltweets.append(text)
# if we've reached max_tweets, break.
if len(alltweets) >= max_tweets:
break
return alltweets
class VaderSentimentModel:
"""
Calculate sentiment using a mostly lexicon-based approach that is optimized for social media.
Approach is social media aware, for example emoticons are part of the lexicon and tokenization is twitter-sensitive.
There are also some basic rules, e.g. it's sensitive to negations.
"""
def __init__(self):
# Initialize a vader_analyzer object which does the work of sentiment analysis.
self.vader_analyzer = SentimentIntensityAnalyzer()
pass
def classify_sentiment(self, tweet):
# Classify sentiment of a single tweet.
# Input tweet: as string.
# Return sentiment score :
# range -1 (very negative) to +1 (very positive).
# score is calculated as p(positive) - p(negative)
# normalizing to range from -1 to 1.
# calculate sentiment in a dictionary. key is polarity ("pos", "neg", "neut") and value is probability.
sentiment_dict = self.vader_analyzer.polarity_scores(tweet)
# retrieve the compound sentiment score, which is p(pos) - p(neg), but normalized to range from {-1, 1}
score = sentiment_dict["compound"] # compound is the combined score scaled to {-1, 1}
return score
def plot_tweets(tweets, sentiment_scores):
"""
Create a histogram-style barplot of tweets and their sentiment.
Return a bokeh plot object, expressed as a tuple of (resources, script, div).
Where :
resources: some CSS, etc. that goes in the head of the webpage for styling the plot.
script: javascript for the plot to function. expressed as string.
div: html div container for the plot. expressed as string.
"""
# Sort tweets from negative to positive.
# This step is not strictly necessary, but makes it easier to see the overall shape of the data.
sorted_indices = np.argsort(sentiment_scores)
sentiment_scores = np.array(sentiment_scores)[sorted_indices]
tweets = np.array(tweets)[sorted_indices]
# Express the data as a bokeh data source object.
source = ColumnDataSource(data={
"text": tweets,
"sentiment": sentiment_scores,
"x": np.arange(len(tweets)),
})
"""
Create plot.
"""
# Create plot object.
width = 0.9
p = figure(x_axis_label="Tweet", y_axis_label="Sentiment (0 = Neutral)")
p.vbar(source=source, x="x", top="sentiment", width=width)
# Add hover tool, allowing mouseover to view text and sentiment.
hover = HoverTool(
tooltips=[
("text", "@text"),
("sentiment", "@sentiment")
],
formatters={
"text": "printf",
"sentiment": "printf"
},
mode="vline"
)
p.add_tools(hover)
"""
Format plot.
"""
# axis font size
p.xaxis.axis_label_text_font_size = "15pt"
p.yaxis.axis_label_text_font_size = "15pt"
# remove tick marks from axes
p.xaxis.major_tick_line_color = None
p.xaxis.minor_tick_line_color = None
p.yaxis.major_tick_line_color = None
p.yaxis.minor_tick_line_color = None
# adjust plot width, height
scale = 1.5
p.plot_height = int(250 * scale)
p.plot_width = int(450 * scale)
# remove toolbar (e.g. move, resize, etc) from right of plot.
p.toolbar.logo = None
p.toolbar_location = None
# remove gridlines
p.xgrid.visible = False
p.ygrid.visible = False
# remove x axis tick labels (done by setting label fontsize to 0 pt)
p.xaxis.major_label_text_font_size = '0pt'
"""
Export plot
"""
# Create resources string, which is CSS, etc. that goes in the head of
resources = INLINE.render()
# Get javascript (script) and HTML div (div) for the plot.
script, div = components(p)
return (resources, script, div)
def plot_reason(tweets, sentiment_scores):
"""
Plot the top words that lead us to the classification as positive or negative.
Return:
script : javascript for the plot, expressed as string.
div : html container for the plot, expressed as string.
NOTE: requires the shared resources attribute from plot_tweets() in the HTML header.
"""
"""
Calculate the sentiment of each individual token in the tweets.
"""
# list tokens, keeping only unique tokens (e.g. remove repeated words).
all_toks = []
for tweet in tweets:
toks = tweet.lower().split()
all_toks.extend(toks)
all_toks = [tok for tok in set(all_toks)] # remove duplicates.
# calculate sentiment of each token.
sm = VaderSentimentModel()
toks_sentiment = [sm.classify_sentiment(tok) for tok in all_toks]
"""
sort tokens by sentiment.
if overall valence is negative, sort negative to positive.
if overall valence is positive, sort positive to negative.
thus, in any case, the earliest elements in the list are the most informative words.
"""
nwords = 20
# negative? sort neg -> positive.
if np.mean(sentiment_scores) < 0:
sorted_indices = np.argsort(toks_sentiment)
# else (positive)? sort positive -> negative
else:
sorted_indices = np.argsort(toks_sentiment)[::-1]
# toks_to_plot: shape (nwords, ) list of informative tokens.
# sentiment_to_plot: shape (nwords, ) list of sentiment of these tokens.
toks_to_plot = np.array(all_toks)[sorted_indices][:nwords]
sentiment_to_plot = np.array(toks_sentiment)[sorted_indices][:nwords]
# convert all sentiment scores to positive values.
# this is for DISPLAY only, to make all plots go from left to right.
# we still retain the correct tokens and sorting order.
sentiment_to_plot = np.array([abs(v) for v in sentiment_to_plot])
"""
Set up plot.
- create data source object.
- define formatting variables.
"""
text_offset = 0.1
source = ColumnDataSource(data={
"token": toks_to_plot,
"sentiment": sentiment_to_plot,
"x": np.arange(len(toks_to_plot))[::-1],
"label_x": sentiment_to_plot + text_offset
})
"""
Make plot.
"""
# Create initial plot.
width = 0.9
xrange = [0, max(sentiment_to_plot) + 1]
p2 = figure(x_axis_label="Sentiment", y_axis_label="Word", x_range=xrange)
p2.hbar(source=source, y="x", right="sentiment", height=width)
"""
Format plot.
"""
# Annotate each bar with the word being represented.
glyph = Text(x="label_x", y="x", text="token")
p2.add_glyph(source, glyph)
# Axis labels.
p2.xaxis.axis_label_text_font_size = "15pt"
p2.yaxis.axis_label_text_font_size = "15pt"
# Remove ticks.
p2.xaxis.major_tick_line_color = None
p2.xaxis.minor_tick_line_color = None
p2.yaxis.major_tick_line_color = None
p2.yaxis.minor_tick_line_color = None
# Remove y axis tick labels.
p2.yaxis.major_label_text_font_size = '0pt'
# Plot width, height.
scale = 1.5
p2.plot_height = int(250 * scale)
p2.plot_width = int(250 * scale)
# remove toolbar (e.g. move, resize, etc) from right of plot.
p2.toolbar.logo = None
p2.toolbar_location = None
# remove gridlines
p2.xgrid.visible = False
p2.ygrid.visible = False
# remove x axis tick labels (set font to 0pt)
p2.xaxis.major_label_text_font_size = '0pt'
# get bokeh component for plot 2.
script2, div2 = components(p2)
return (script2, div2)
class MainPage(webapp2.RequestHandler):
"""
This class does the work of writing HTML to the google app server.
Thus, we allow the get() method to incorporate:
our main pipeline (getting tweets, analyzing sentiment, producing graphs)
writing html
"""
def get(self):
"""
Get tweets and sentiment scores.
"""
# Retrieve keyword from the HTML form. If no keyword provided, use a random suggested keyword.
keyword = self.request.get("keyword")
if not keyword:
suggested_keywords = ["alarm clocks", "the future", "miller lite", "taco bell", "yoga", "netflix",
"life", "traffic", "elon musk", "beards", "world trade", "pepsi", "amazon"]
indices = np.arange(len(suggested_keywords))
random.shuffle(indices)
keyword = suggested_keywords[indices[0]]
# Get recent tweets based on the keyword, up to 300 maximum tweets.
tweets = get_tweets(keyword, max_tweets=300)
# Compute the sentiment of each tweet.
v = VaderSentimentModel()
sentiment_scores = [v.classify_sentiment(tw) for tw in tweets] # shape (ntweets,)
# Label sentiment categorically, e.g. "negative" or "positive"
M_sent = np.mean(sentiment_scores)
map = {1 : "positive", 0 : "negative"}
valence = map[int(M_sent > 0)]
"""
Create plots.
"""
#############
# Plot #1:
############
# Plot the distribution of tweets and sentiment.
# Resources is CSS code that goes in the header of the HTML. Shared across all bokeh plots.
# Script1 is javascript for this plot.
# Div1 is an HTML container for the plot. Goes where you want the plot to appear.
resources, script1, div1 = plot_tweets(tweets=tweets, sentiment_scores=sentiment_scores)
#############
# Plot #2:
############
# Plot the key words that lead us to this classification.
# Script2 is javascript for this plot.
# Div2 is an HTML container for this plot. Goes where you want the plot to appear.
# Requires the HTML to include the shared resources, generated above, in the <HEAD>
script2, div2 = plot_reason(tweets=tweets, sentiment_scores=sentiment_scores)
"""
Create HTML output.
"""
# Load HTML template.
# This is a functioning webpage, with some placeholders for the keywords and plots we have created.
html_p = os.path.join("html", "index.html")
html = open(html_p, "r").read()
# Fill in placeholders in the HTML with variables we have created.
term_to_value = {
"[[!KEYWORD]]" : keyword,
"[[!VALENCE]]" : valence,
"[[!BOKEH_SCRIPT]]" : script1,
"[[!BOKEH_SCRIPT2]]": script2,
"[[!BOKEH_DIV]]" : div1,
"[[!BOKEH_RESOURCES]]" : resources,
"[[!BOKEH_DIV2]]" : div2
}
for term, val in term_to_value.items():
html = html.replace(term, val)
"""
Write a response.
This essentially returns HTML to the google app engine.
This will render a webpage visible to the user.
"""
self.response.headers["Content-Type"] = "text/html"
self.response.write(html)
# Run application.
routes = [('/', MainPage)]
my_app = webapp2.WSGIApplication(routes, debug=True) | en | 0.824098 | ---AUTHOR: --- <NAME> <EMAIL> ---LICENSE: --- MIT License. ---ABOUT: --- Application to get the sentiment of recent tweets based on a keyword. Example: keyword -> "taco bell" retrieve 300 recent tweets mentioning taco bell. get average sentiment. plot distribution of tweets and sentiment. plot most informative words for this application. This script runs based on google app server. Expects Python 2.7 Depenencies need to be included in the lib/ directory (pip install -t lib [PACKAGE_NAME]) The main work is done by the MainPage class. The get() method runs the main pipeline of code and returns HTML as a string. Working online version: https://twittersentiment-247018.appspot.com/ Given a keyword as a string (e.g. "data science"), get recent tweets matching that string up to # max_tweets. Return a list of tweets, represented as strings. # API keys. # Initialize tweepy API object and authorize using API key. Get tweets. # the -RT flag excludes retweets. # get text of the tweet, encoding as utf-8. # add to the data structure, alltweets, holding the tweets. # if we've reached max_tweets, break. Calculate sentiment using a mostly lexicon-based approach that is optimized for social media. Approach is social media aware, for example emoticons are part of the lexicon and tokenization is twitter-sensitive. There are also some basic rules, e.g. it's sensitive to negations. # Initialize a vader_analyzer object which does the work of sentiment analysis. # Classify sentiment of a single tweet. # Input tweet: as string. # Return sentiment score : # range -1 (very negaitve) to +1 (very positive). # score is calculated as p(positive) - p(negative) # normalizing to range from -1 to 1. # calculate sentiment in a dictionary. key is polarity ("pos", "neg", "neut") and value is probability. # retrieve the compound sentiment score, which is p(pos) - p(neg), but normalized to range from {-1, 1} # compound is the combined score scaled to {-1, 1} Create a histogram-style barplot of tweets and their sentiment. Return a bokeh plot object, expressed as a tuple of (resources, script, div). Where : resources: some CSS, etc. that goes in the head of the webpage for styling the plot. script: javascript for the plot to function. expressed as string. div: html div container for the plot. expressed as string. # Sort tweets from negative to positive. # This step is not strictly necessary, but makes it easier to see the overall shape of the data. # Express the data as a bokeh data source object. Create plot. # Create plot object. # Add hover tool, allowing mouseover to view text and sentiment. Format plot. # axis font size # remove tick marks from axes # adjust plot width, height # remove toolbar (e.g. move, resize, etc) from right of plot. # remove gridlines # remove x axis tick labels (done by setting label fontsize to 0 pt) Export plot # Create resources string, which is CSS, etc. that goes in the head of # Get javascript (script) and HTML div (div) for the plot. Plot the top words that lead us to the classification as positive or negative. Return: script : javascript for the plot, expressed as string. div : html container for the plot, expressed as string. NOTE: requires the shared resources attribute from plot_tweets() in the HTML header. Calculate the sentiment of each individual token in the tweets. # list tokens, keeping only unique tokens (e.g. remove repeated words). # remove duplicates. # calculate sentiment of each token. sort tokens by sentiment. 
if overall valence is negative, sort negative to postitive. if overall valence is positive, sort positive to negative. thus, in any case, the earliest elements in the list are the most informative words. # negative? sort neg -> positive. # else (positive)? sort positive -> negative # toks_to_plot: shape (nwords, ) list of informative tokens. # sentiment_to_plot: shape (nwords, ) list of sentiment of these tokens. # convert all sentiment scores to positive values. # this is for DISPLAY only, to make all plots go from left to right. # we still retain the correct tokens and sorting order. Set up plot. - create data source object. - define formatting variables. Make plot. # Create initial plot. Format plot. # Annotate each bar with the word being represented. # Axis labels. # Remove ticks. # Remove y axis tick labels. # Plot width, height. # remove toolbar (e.g. move, resize, etc) from right of plot. # remove gridlines # remove x axis tick labels (set font to 0pt) # get bokeh component for plot 2. This class does the work of writing HTML to the google app server. Thus, we allow the get() method to incorporate: our main pipeline (getting tweets, analyzing sentiment, producing graphs) writing html Get tweets and sentiment scores. # Retrieve keyword from the HTML form. If no keyword provided, use a random suggested keyword. # Get recent tweets based on the keyword, up to 300 maximum tweets. # Compute the sentiment of each tweet. # shape (ntweets,) # Label sentiment categorically, e.g. "negative" or "positive" Create plots. ############# # Plot #1: ############ # Plot the distribution of tweets and sentiment. # Resources is CSS code that goes in the header of the HTML. Shared across all bokeh plots. # Script1 is javascript for this plot. # Div1 is an HTML container for the plot. Goes where you want the plot to appear. ############# # Plot #2: ############ # Plot the key words that lead us to this classification. # Script2 is javascript for this plot. # Div2 is an HTML container for this plot. Goes where you want the plot to appear. # Requires the HTML to include the shared resources, generated above, in the <HEAD> Create HTML output. # Load HTML template. # This is a functioning webpage, with some placeholders for the keywords and plots we have created. # Fill in placeholders in the HTML with varibles we have created. Write a response. This essentially returns HTML to the google app engine. This will render a webpage visible to the user. # Run application. | 2.989811 | 3 |
tests/testcgatools.py | ereide/pyga-camcal | 5 | 10235 | <reponame>ereide/pyga-camcal<gh_stars>1-10
import unittest
import clifford as cl
from clifford import g3c
from numpy import pi, e
import numpy as np
from scipy.sparse.linalg.matfuncs import _sinch as sinch
from clifford import MultiVector
from pygacal.common.cgatools import ( Sandwich, Dilator, Translator, Reflector,
inversion, Rotor, Transversor, I3, I5,
VectorEquality, Distance, ga_log, ga_exp, MVEqual, Meet,
extractBivectorParameters_complicated, ga_exp_complicated, one)
from pygacal.geometry import createRandomBivector, createRandomVector, createRandomPoints
from pygacal.geometry.lines import createLine
from pygacal.geometry.planes import createPlane
layout = g3c.layout
locals().update(g3c.blades)
ep, en, up, down, homo, E0, ninf, no = (g3c.stuff["ep"], g3c.stuff["en"],
g3c.stuff["up"], g3c.stuff["down"], g3c.stuff["homo"],
g3c.stuff["E0"], g3c.stuff["einf"], -g3c.stuff["eo"])
np.random.seed(2512)
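# Unit tests for the conformal geometric algebra (CGA) helpers in pygacal: basic
# conformal operators (dilation, translation, rotation, inversion), point distances,
# meets of lines/planes, and the bivector logarithm/exponential maps.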
def AssertMVEqual(actual, expected, rtol = 1e-5, atol = 1e-6, verbose = False):
assert(MVEqual(actual, expected, rtol, atol, verbose))
def AssertMVUnEqual(actual, expected, rtol = 1e-5, atol = 1e-6, verbose = False):
assert(not MVEqual(actual, expected, rtol, atol, verbose))
class TestCGAOperators(unittest.TestCase):
def testDilator(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
assert(down(Sandwich(X, Dilator(0.1))) == x * 0.1)
def testTranslation(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
a = 2 * e1 + e3
assert(down(Sandwich(X, Translator(a))) == x + a)
def testRotation(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
actual = down(Sandwich(X, Rotor(e12, pi/2)))
expected = (-3.0)*e1 + 2.0*e2 + 4.0 * e3
assert(actual == expected)
def testInversion(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
assert(down(inversion(X)) * x == 1)
def testDistance(self):
a = e1
b = e2
A, B = up(a), up(b)
assert(Distance(A, B) == np.sqrt(2))
def testMeet(self):
A, B, C, D = createRandomPoints(N = 4, scale = 50)
L = createLine(A, B)
L2 = createLine(A, C)
P1 = createPlane(A, B, C)
P2 = createPlane(A, B, D)
L_actual = Meet(P1, P2)
assert(MVEqual(L, L_actual))
#Plane to line
Q = (ninf ^ A).normal()
P3 = A ^ C ^ D ^ ninf
Q_actual = Meet(P3, L).normal() #How do we define order/direction?
assert(MVEqual(Q, Q_actual))
def testAssertEqual(self):
verbose = False
a = createRandomBivector()
b = a + 0.01
a2 = b - 0.01
c = a + 1
d = c - a
AssertMVEqual(a, a2, verbose = verbose)
AssertMVUnEqual(a, b, verbose = verbose)
AssertMVEqual(d, 1, verbose = verbose)
def testLogarithm(self):
verbose = False
if verbose:
print("\nTest Logarithms and exponents")
phi = 0.5 #Rotation amount
P = (e12 + 2*e23 + 3*e13).normal() #Rotation Plane
P_n = P*I3
t = 2.73 * e1 + 3.14*e2 #Translation vector
t_nor = (P_n | t) * P_n #Decomposition into normal component
        t_par = t - t_nor                 #Decomposition into parallel component
assert(t_par + t_nor == t)
if verbose:
print("P = ", P)
print("phi = ", phi)
print("t = ", t)
print("t_nor = ", t_nor)
print("t_par = ", t_par)
print("")
assert(P|t_nor == 0) #Normal to P
assert(P^t_nor != 0) #Normal to P
assert(P|t_par != 0) #Parallel to P
assert(P^t_par == 0) #Parallel to P
assert(P*t != 0) #Non zero product
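        # Closed-form reference rotor for B = phi*P + t*ninf: the translation is split
        # into t_nor (normal to the rotation plane, commuting with the rotation) and
        # t_par, which picks up the sinc(phi) factor; ga_exp(B) should reproduce this.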
R_expected = (np.cos(phi) + (np.sin(phi) * P))*(1 + (t_nor*ninf)) + np.sinc(phi/np.pi)*t_par * ninf
B_expected = phi * P + t*ninf
R_exponential = np.exp(B_expected)
R_actual = ga_exp(B_expected, verbose = verbose)
B_new = ga_log(R_expected, verbose = verbose)
R_ga = ga_exp(B_new)
if verbose:
print("R_old ", R_expected)
print("R_expected ", R_actual)
print("R_exponential", R_exponential)
print("R_ga ", R_ga)
print("B_new ", B_new)
print("B_expected ", B_expected)
#Rotor properties
AssertMVEqual(R_expected * ~R_expected, 1, verbose = verbose)
AssertMVEqual(R_ga * ~R_ga, 1, verbose = verbose)
#Equalities
AssertMVEqual(R_actual, R_expected, verbose = verbose)
AssertMVEqual(R_exponential, R_expected, verbose = verbose)
AssertMVEqual(B_new, B_expected, verbose = verbose)
AssertMVEqual(R_ga, R_expected, verbose = verbose)
N = 100
#Random bivectors to test this as well
for i in range(N):
B = createRandomBivector()
AssertMVEqual(B, ga_log(ga_exp(B, verbose = verbose), verbose = verbose), verbose = verbose)
def testComplicatedLogarithm(self):
verbose = True
if verbose:
print("\nTest Complicated Logarithms and exponents")
phi = 0.2 #Rotation amount
P = (e12 + 2*e23 + 3*e13).normal() #Rotation Plane
P_n = P*I3
#t = 0
t = 2.73 * e1 + 3.14*e2 #Translation vector
t_nor = (P_n | t) * P_n #Decomposition into normal component
        t_par = t - t_nor                 #Decomposition into parallel component
omega = 0.1
assert(t_par + t_nor == t)
if verbose:
print("P = ", P)
print("phi = ", phi)
print("t = ", t)
print("t_nor = ", t_nor)
print("t_par = ", t_par)
print("omega = ", omega)
print("")
"""
assert(P|t_nor == 0) #Normal to P
assert(P^t_nor != 0) #Normal to P
assert(P|t_par != 0) #Parallel to P
assert(P^t_par == 0) #Parallel to P
assert(P*t != 0) #Non zero product
assert(t_par|t_nor == 0) #Non zero product
"""
B_expected = (phi * P) + (t*ninf) + (omega * E0)
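        # Closed-form reference rotor for the extended bivector B = phi*P + t*ninf + omega*E0;
        # the omega (E0) term contributes the cosh/sinh factors and couples phi and omega
        # to t_par through k = omega^2 + phi^2 below.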
k = (omega * omega + phi * phi)
R_expected = (np.cos(phi) + np.sin(phi) * P)*(np.cosh(omega) + np.sinh(omega) * E0 + sinch(omega) * t_nor*ninf)
if (k > 0):
R_expected += 1/k* ( (-omega * np.sin(phi) * np.cosh(omega) + phi * np.cos(phi) * np.sinh(omega)) * P
+ ( omega * np.cos(phi) * np.sinh(omega) + phi * np.sin(phi) * np.cosh(omega))) * t_par * ninf
else:
R_expected += t_par * ninf
phi_test, P_test, t_nor_test, t_par_test, omega_test = extractBivectorParameters_complicated(B_expected)
B_actual = phi_test * P_test + (t_nor_test + t_par_test)*ninf + omega_test * E0
#Testing some basic properties of the extraction
AssertMVEqual(phi*(P * ~P), phi*one, verbose = False)
AssertMVEqual(phi*P, phi*P_test, verbose = False)
R_exponential = np.exp(B_expected)
R_actual = ga_exp_complicated(B_expected, verbose = verbose)
#B_new = ga_log(R_expected, verbose = verbose)
#R_ga = ga_exp(B_new)
if verbose:
print("R_expected ", R_expected)
print("R_actual ", R_actual)
print("R_exponential ", R_exponential)
#print("R_ga ", R_ga)
#print("B_new ", B_new)
print("B_expected ", B_expected)
print()
#BivectorExtraction
AssertMVEqual(B_actual, B_expected, verbose = verbose)
AssertMVEqual(R_expected * ~R_expected, one, verbose = verbose)
#Rotor properties
AssertMVEqual(R_actual * ~R_actual, one, verbose = verbose)
#Only an approximation
AssertMVEqual(R_exponential * ~R_exponential, one, verbose = verbose)
#AssertMVEqual(R_expected * ~R_expected, 1, verbose = verbose)
#AssertMVEqual(R_ga * ~R_ga, 1, verbose = verbose)
#Equalities
#AssertMVEqual(R_actual, R_expected, verbose = verbose)
AssertMVEqual(R_exponential, R_actual, rtol = 1e-2, atol = 1e-3, verbose = verbose)
#AssertMVEqual(B_new, B_expected, verbose = verbose)
#AssertMVEqual(R_ga, R_expected, verbose = verbose)
#N = 100
#Random bivectors to test this as well
#for i in range(N):
# B = createRandomBivector()
# AssertMVEqual(B, ga_log(ga_exp(B, verbose = verbose), verbose = verbose), verbose = verbose)
if __name__ == "__main__":
unittest.main()
| import unittest
import clifford as cl
from clifford import g3c
from numpy import pi, e
import numpy as np
from scipy.sparse.linalg.matfuncs import _sinch as sinch
from clifford import MultiVector
from pygacal.common.cgatools import ( Sandwich, Dilator, Translator, Reflector,
inversion, Rotor, Transversor, I3, I5,
VectorEquality, Distance, ga_log, ga_exp, MVEqual, Meet,
extractBivectorParameters_complicated, ga_exp_complicated, one)
from pygacal.geometry import createRandomBivector, createRandomVector, createRandomPoints
from pygacal.geometry.lines import createLine
from pygacal.geometry.planes import createPlane
layout = g3c.layout
locals().update(g3c.blades)
ep, en, up, down, homo, E0, ninf, no = (g3c.stuff["ep"], g3c.stuff["en"],
g3c.stuff["up"], g3c.stuff["down"], g3c.stuff["homo"],
g3c.stuff["E0"], g3c.stuff["einf"], -g3c.stuff["eo"])
np.random.seed(2512)
def AssertMVEqual(actual, expected, rtol = 1e-5, atol = 1e-6, verbose = False):
assert(MVEqual(actual, expected, rtol, atol, verbose))
def AssertMVUnEqual(actual, expected, rtol = 1e-5, atol = 1e-6, verbose = False):
assert(not MVEqual(actual, expected, rtol, atol, verbose))
class TestCGAOperators(unittest.TestCase):
def testDilator(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
assert(down(Sandwich(X, Dilator(0.1))) == x * 0.1)
def testTranslation(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
a = 2 * e1 + e3
assert(down(Sandwich(X, Translator(a))) == x + a)
def testRotation(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
actual = down(Sandwich(X, Rotor(e12, pi/2)))
expected = (-3.0)*e1 + 2.0*e2 + 4.0 * e3
assert(actual == expected)
def testInversion(self):
x = 2*e1 + 3* e2 + 4*e3
X = up(x)
assert(down(inversion(X)) * x == 1)
def testDistance(self):
a = e1
b = e2
A, B = up(a), up(b)
assert(Distance(A, B) == np.sqrt(2))
def testMeet(self):
A, B, C, D = createRandomPoints(N = 4, scale = 50)
L = createLine(A, B)
L2 = createLine(A, C)
P1 = createPlane(A, B, C)
P2 = createPlane(A, B, D)
L_actual = Meet(P1, P2)
assert(MVEqual(L, L_actual))
#Plane to line
Q = (ninf ^ A).normal()
P3 = A ^ C ^ D ^ ninf
Q_actual = Meet(P3, L).normal() #How do we define order/direction?
assert(MVEqual(Q, Q_actual))
def testAssertEqual(self):
verbose = False
a = createRandomBivector()
b = a + 0.01
a2 = b - 0.01
c = a + 1
d = c - a
AssertMVEqual(a, a2, verbose = verbose)
AssertMVUnEqual(a, b, verbose = verbose)
AssertMVEqual(d, 1, verbose = verbose)
def testLogarithm(self):
verbose = False
if verbose:
print("\nTest Logarithms and exponents")
phi = 0.5 #Rotation amount
P = (e12 + 2*e23 + 3*e13).normal() #Rotation Plane
P_n = P*I3
t = 2.73 * e1 + 3.14*e2 #Translation vector
t_nor = (P_n | t) * P_n #Decomposition into normal component
        t_par = t - t_nor                 #Decomposition into parallel component
assert(t_par + t_nor == t)
if verbose:
print("P = ", P)
print("phi = ", phi)
print("t = ", t)
print("t_nor = ", t_nor)
print("t_par = ", t_par)
print("")
assert(P|t_nor == 0) #Normal to P
assert(P^t_nor != 0) #Normal to P
assert(P|t_par != 0) #Parallel to P
assert(P^t_par == 0) #Parallel to P
assert(P*t != 0) #Non zero product
R_expected = (np.cos(phi) + (np.sin(phi) * P))*(1 + (t_nor*ninf)) + np.sinc(phi/np.pi)*t_par * ninf
B_expected = phi * P + t*ninf
R_exponential = np.exp(B_expected)
R_actual = ga_exp(B_expected, verbose = verbose)
B_new = ga_log(R_expected, verbose = verbose)
R_ga = ga_exp(B_new)
if verbose:
print("R_old ", R_expected)
print("R_expected ", R_actual)
print("R_exponential", R_exponential)
print("R_ga ", R_ga)
print("B_new ", B_new)
print("B_expected ", B_expected)
#Rotor properties
AssertMVEqual(R_expected * ~R_expected, 1, verbose = verbose)
AssertMVEqual(R_ga * ~R_ga, 1, verbose = verbose)
#Equalities
AssertMVEqual(R_actual, R_expected, verbose = verbose)
AssertMVEqual(R_exponential, R_expected, verbose = verbose)
AssertMVEqual(B_new, B_expected, verbose = verbose)
AssertMVEqual(R_ga, R_expected, verbose = verbose)
N = 100
#Random bivectors to test this as well
for i in range(N):
B = createRandomBivector()
AssertMVEqual(B, ga_log(ga_exp(B, verbose = verbose), verbose = verbose), verbose = verbose)
def testComplicatedLogarithm(self):
verbose = True
if verbose:
print("\nTest Complicated Logarithms and exponents")
phi = 0.2 #Rotation amount
P = (e12 + 2*e23 + 3*e13).normal() #Rotation Plane
P_n = P*I3
#t = 0
t = 2.73 * e1 + 3.14*e2 #Translation vector
t_nor = (P_n | t) * P_n #Decomposition into normal component
        t_par = t - t_nor                 #Decomposition into parallel component
omega = 0.1
assert(t_par + t_nor == t)
if verbose:
print("P = ", P)
print("phi = ", phi)
print("t = ", t)
print("t_nor = ", t_nor)
print("t_par = ", t_par)
print("omega = ", omega)
print("")
"""
assert(P|t_nor == 0) #Normal to P
assert(P^t_nor != 0) #Normal to P
assert(P|t_par != 0) #Parallel to P
assert(P^t_par == 0) #Parallel to P
assert(P*t != 0) #Non zero product
assert(t_par|t_nor == 0) #Non zero product
"""
B_expected = (phi * P) + (t*ninf) + (omega * E0)
k = (omega * omega + phi * phi)
R_expected = (np.cos(phi) + np.sin(phi) * P)*(np.cosh(omega) + np.sinh(omega) * E0 + sinch(omega) * t_nor*ninf)
if (k > 0):
R_expected += 1/k* ( (-omega * np.sin(phi) * np.cosh(omega) + phi * np.cos(phi) * np.sinh(omega)) * P
+ ( omega * np.cos(phi) * np.sinh(omega) + phi * np.sin(phi) * np.cosh(omega))) * t_par * ninf
else:
R_expected += t_par * ninf
phi_test, P_test, t_nor_test, t_par_test, omega_test = extractBivectorParameters_complicated(B_expected)
B_actual = phi_test * P_test + (t_nor_test + t_par_test)*ninf + omega_test * E0
#Testing some basic properties of the extraction
AssertMVEqual(phi*(P * ~P), phi*one, verbose = False)
AssertMVEqual(phi*P, phi*P_test, verbose = False)
R_exponential = np.exp(B_expected)
R_actual = ga_exp_complicated(B_expected, verbose = verbose)
#B_new = ga_log(R_expected, verbose = verbose)
#R_ga = ga_exp(B_new)
if verbose:
print("R_expected ", R_expected)
print("R_actual ", R_actual)
print("R_exponential ", R_exponential)
#print("R_ga ", R_ga)
#print("B_new ", B_new)
print("B_expected ", B_expected)
print()
#BivectorExtraction
AssertMVEqual(B_actual, B_expected, verbose = verbose)
AssertMVEqual(R_expected * ~R_expected, one, verbose = verbose)
#Rotor properties
AssertMVEqual(R_actual * ~R_actual, one, verbose = verbose)
#Only an approximation
AssertMVEqual(R_exponential * ~R_exponential, one, verbose = verbose)
#AssertMVEqual(R_expected * ~R_expected, 1, verbose = verbose)
#AssertMVEqual(R_ga * ~R_ga, 1, verbose = verbose)
#Equalities
#AssertMVEqual(R_actual, R_expected, verbose = verbose)
AssertMVEqual(R_exponential, R_actual, rtol = 1e-2, atol = 1e-3, verbose = verbose)
#AssertMVEqual(B_new, B_expected, verbose = verbose)
#AssertMVEqual(R_ga, R_expected, verbose = verbose)
#N = 100
#Random bivectors to test this as well
#for i in range(N):
# B = createRandomBivector()
# AssertMVEqual(B, ga_log(ga_exp(B, verbose = verbose), verbose = verbose), verbose = verbose)
if __name__ == "__main__":
unittest.main() | en | 0.775235 | #Plane to line #How do we define order/direction? #Rotation amount #Rotation Plane #Translation vector #Decomposition into normal component #Decomposition into paralel component #Normal to P #Normal to P #Parallel to P #Parallel to P #Non zero product #Rotor properties #Equalities #Random bivectors to test this as well #Rotation amount #Rotation Plane #t = 0 #Translation vector #Decomposition into normal component #Decomposition into paralel component assert(P|t_nor == 0) #Normal to P assert(P^t_nor != 0) #Normal to P assert(P|t_par != 0) #Parallel to P assert(P^t_par == 0) #Parallel to P assert(P*t != 0) #Non zero product assert(t_par|t_nor == 0) #Non zero product #Testing some basic properties of the extraction #B_new = ga_log(R_expected, verbose = verbose) #R_ga = ga_exp(B_new) #print("R_ga ", R_ga) #print("B_new ", B_new) #BivectorExtraction #Rotor properties #Only an approximation #AssertMVEqual(R_expected * ~R_expected, 1, verbose = verbose) #AssertMVEqual(R_ga * ~R_ga, 1, verbose = verbose) #Equalities #AssertMVEqual(R_actual, R_expected, verbose = verbose) #AssertMVEqual(B_new, B_expected, verbose = verbose) #AssertMVEqual(R_ga, R_expected, verbose = verbose) #N = 100 #Random bivectors to test this as well #for i in range(N): # B = createRandomBivector() # AssertMVEqual(B, ga_log(ga_exp(B, verbose = verbose), verbose = verbose), verbose = verbose) | 2.070098 | 2 |
router/posts.py | DiegoLing33/prestij.xyz-api | 0 | 10236 | # ██╗░░░░░██╗███╗░░██╗░██████╗░░░░██████╗░██╗░░░░░░█████╗░░█████╗░██╗░░██╗
# ██║░░░░░██║████╗░██║██╔════╝░░░░██╔══██╗██║░░░░░██╔══██╗██╔══██╗██║░██╔╝
# ██║░░░░░██║██╔██╗██║██║░░██╗░░░░██████╦╝██║░░░░░███████║██║░░╚═╝█████═╝░
# ██║░░░░░██║██║╚████║██║░░╚██╗░░░██╔══██╗██║░░░░░██╔══██║██║░░██╗██╔═██╗░
# ███████╗██║██║░╚███║╚██████╔╝░░░██████╦╝███████╗██║░░██║╚█████╔╝██║░╚██╗
# ╚══════╝╚═╝╚═╝░░╚══╝░╚═════╝░░░░╚═════╝░╚══════╝╚═╝░░╚═╝░╚════╝░╚═╝░░╚═╝
#
# Developed by <NAME> (C) Ling • Black 2020
# @site http://ling.black
from typing import List
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from core.response import RequestLimit
from database import get_db, DatabaseUtils
from database.wow.models import PostModel, PostCommentsModel
from wow.interface.entity import PostCategory, Post, PostCategoryCreate, PostCreate, PostLikeCreate, PostCommentCreate
from wow.utils.posts import PostsUtils
from wow.utils.users import BlizzardUsersUtils
router = APIRouter()
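# All post, category, like and comment endpoints below are registered on this router.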
class TokenArgs(BaseModel):
token: str
class TokenPostIdArgs(BaseModel):
token: str
post_id: int
class CommentIdAndToken(TokenArgs):
comment_id: int
class PostAPIList(BaseModel):
items: List[Post]
count: int
class PostAPIListResponse(BaseModel):
response: PostAPIList
request: RequestLimit
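# Paginated response envelope: `response` holds the matching posts and their count,
# `request` carries the request-limit metadata.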
# -----------------------------------
# CATEGORIES
# -----------------------------------
@router.post(
"/categories",
response_model=PostCategory,
summary='Adds the category'
)
def add_category(body: PostCategoryCreate):
"""
Adds the category
:param body:
:return:
"""
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_category(user_id=blizzard_id, url=body.url, title=body.title)
@router.get(
"/categories",
response_model=List[PostCategory],
summary='Returns the categories'
)
def get_categories():
"""
Returns the categories list
:return:
"""
return PostsUtils.get_categories()
# -----------------------------------
# POSTS
# -----------------------------------
@router.get(
"/",
response_model=PostAPIListResponse,
summary='Returns all the posts'
)
def get_posts_all(limit: int = 100, offset: int = 0):
return PostsUtils.get_posts_limit(
limit=limit,
offset=offset
)
@router.get(
"/category/{category_url}",
response_model=PostAPIListResponse,
summary='Returns the posts in category'
)
def get_posts_by_category(category_url: int, limit: int = 100, offset: int = 0):
"""
Returns all the posts by category
:param category_url:
:param limit:
:param offset:
:return:
"""
return PostsUtils.get_posts_by_category_limit(
category_id=category_url,
limit=limit,
offset=offset
)
@router.get(
"/user/{blizzard_id}",
response_model=PostAPIListResponse,
summary='Returns the posts by users'
)
def get_posts_by_user(blizzard_id: int, limit: int = 100, offset: int = 0):
    """
    Returns all the posts by a given user (blizzard_id)
:param blizzard_id:
:param limit:
:param offset:
:return:
"""
return PostsUtils.get_posts_by_blizzard_id(
blizzard_id=blizzard_id,
limit=limit,
offset=offset
)
@router.post(
"/like",
summary='Likes the post',
tags=['Лайки']
)
def like_post(body: PostLikeCreate):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_like(
user_id=blizzard_id,
post_id=body.post_id,
)
@router.post(
"/unlike",
summary='Unlikes the post',
tags=['Лайки']
)
def unlike_post(body: PostLikeCreate):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.remove_like(
user_id=blizzard_id,
post_id=body.post_id,
)
@router.post(
"/comment",
summary='Adds the comment',
tags=['Комментарии']
)
def comment_post(body: PostCommentCreate):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_comment(
user_id=blizzard_id,
post_id=body.post_id,
reply_id=body.reply_id,
text=body.text,
)
@router.delete(
"/comment",
summary='Removes the comment',
tags=['Комментарии']
)
def remove_comment(body: CommentIdAndToken, db=Depends(get_db)):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
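    # Only the comment's author (matched by blizzard_id) may delete their own comment.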
com = db.query(PostCommentsModel).filter(PostCommentsModel.id == body.comment_id).filter(
PostCommentsModel.user_id == blizzard_id)
if com.count() > 0:
com.delete()
db.commit()
return True
return False
@router.post(
"/",
response_model=Post,
summary='Adds the post'
)
def add_post(body: PostCreate):
"""
Adds the post item
:param body:
:return:
"""
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_post(
user_id=blizzard_id,
category_id=body.category_id,
title=body.title,
content=body.content,
tags=body.tags,
image=body.image
)
@router.delete(
"/{post_id}",
summary='Deletes the post'
)
def delete_post(post_id: int, body: TokenArgs, db=Depends(get_db)):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
q = db.query(PostModel).filter(PostModel.id == post_id).filter(PostModel.user_id == blizzard_id)
if q.count() == 0:
raise HTTPException(status_code=404, detail='Post is undefined')
return DatabaseUtils.remove_query(db, q)
@router.post(
"/{post_id}",
summary='Edits the post'
)
def edit_post(post_id: int, body: PostCreate, db=Depends(get_db)):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
q = db.query(PostModel).filter(PostModel.id == post_id).filter(PostModel.user_id == blizzard_id)
if q.count() == 0:
raise HTTPException(status_code=404, detail='Post is undefined')
q.update({
'title': body.title,
'content': body.content,
'category_id': body.category_id,
'image': body.image,
'tags': body.tags,
})
db.commit()
return True
@router.get(
"/{post_id}",
response_model=Post,
summary='Returns the post'
)
def get_post(post_id: int, db=Depends(get_db)):
return db.query(PostModel).filter(PostModel.id == post_id).first()
| # ██╗░░░░░██╗███╗░░██╗░██████╗░░░░██████╗░██╗░░░░░░█████╗░░█████╗░██╗░░██╗
# ██║░░░░░██║████╗░██║██╔════╝░░░░██╔══██╗██║░░░░░██╔══██╗██╔══██╗██║░██╔╝
# ██║░░░░░██║██╔██╗██║██║░░██╗░░░░██████╦╝██║░░░░░███████║██║░░╚═╝█████═╝░
# ██║░░░░░██║██║╚████║██║░░╚██╗░░░██╔══██╗██║░░░░░██╔══██║██║░░██╗██╔═██╗░
# ███████╗██║██║░╚███║╚██████╔╝░░░██████╦╝███████╗██║░░██║╚█████╔╝██║░╚██╗
# ╚══════╝╚═╝╚═╝░░╚══╝░╚═════╝░░░░╚═════╝░╚══════╝╚═╝░░╚═╝░╚════╝░╚═╝░░╚═╝
#
# Developed by <NAME> (C) Ling • Black 2020
# @site http://ling.black
from typing import List
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from core.response import RequestLimit
from database import get_db, DatabaseUtils
from database.wow.models import PostModel, PostCommentsModel
from wow.interface.entity import PostCategory, Post, PostCategoryCreate, PostCreate, PostLikeCreate, PostCommentCreate
from wow.utils.posts import PostsUtils
from wow.utils.users import BlizzardUsersUtils
router = APIRouter()
class TokenArgs(BaseModel):
token: str
class TokenPostIdArgs(BaseModel):
token: str
post_id: int
class CommentIdAndToken(TokenArgs):
comment_id: int
class PostAPIList(BaseModel):
items: List[Post]
count: int
class PostAPIListResponse(BaseModel):
response: PostAPIList
request: RequestLimit
# -----------------------------------
# CATEGORIES
# -----------------------------------
@router.post(
"/categories",
response_model=PostCategory,
summary='Adds the category'
)
def add_category(body: PostCategoryCreate):
"""
Adds the category
:param body:
:return:
"""
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_category(user_id=blizzard_id, url=body.url, title=body.title)
@router.get(
"/categories",
response_model=List[PostCategory],
summary='Returns the categories'
)
def get_categories():
"""
Returns the categories list
:return:
"""
return PostsUtils.get_categories()
# -----------------------------------
# POSTS
# -----------------------------------
@router.get(
"/",
response_model=PostAPIListResponse,
summary='Returns all the posts'
)
def get_posts_all(limit: int = 100, offset: int = 0):
return PostsUtils.get_posts_limit(
limit=limit,
offset=offset
)
@router.get(
"/category/{category_url}",
response_model=PostAPIListResponse,
summary='Returns the posts in category'
)
def get_posts_by_category(category_url: int, limit: int = 100, offset: int = 0):
"""
Returns all the posts by category
:param category_url:
:param limit:
:param offset:
:return:
"""
return PostsUtils.get_posts_by_category_limit(
category_id=category_url,
limit=limit,
offset=offset
)
@router.get(
"/user/{blizzard_id}",
response_model=PostAPIListResponse,
summary='Returns the posts by users'
)
def get_posts_by_user(blizzard_id: int, limit: int = 100, offset: int = 0):
    """
    Returns all the posts by a given user (blizzard_id)
:param blizzard_id:
:param limit:
:param offset:
:return:
"""
return PostsUtils.get_posts_by_blizzard_id(
blizzard_id=blizzard_id,
limit=limit,
offset=offset
)
@router.post(
"/like",
summary='Likes the post',
tags=['Лайки']
)
def like_post(body: PostLikeCreate):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_like(
user_id=blizzard_id,
post_id=body.post_id,
)
@router.post(
"/unlike",
summary='Unlikes the post',
tags=['Лайки']
)
def unlike_post(body: PostLikeCreate):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.remove_like(
user_id=blizzard_id,
post_id=body.post_id,
)
@router.post(
"/comment",
summary='Adds the comment',
tags=['Комментарии']
)
def comment_post(body: PostCommentCreate):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_comment(
user_id=blizzard_id,
post_id=body.post_id,
reply_id=body.reply_id,
text=body.text,
)
@router.delete(
"/comment",
summary='Removes the comment',
tags=['Комментарии']
)
def remove_comment(body: CommentIdAndToken, db=Depends(get_db)):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
com = db.query(PostCommentsModel).filter(PostCommentsModel.id == body.comment_id).filter(
PostCommentsModel.user_id == blizzard_id)
if com.count() > 0:
com.delete()
db.commit()
return True
return False
@router.post(
"/",
response_model=Post,
summary='Adds the post'
)
def add_post(body: PostCreate):
"""
Adds the post item
:param body:
:return:
"""
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
return PostsUtils.add_post(
user_id=blizzard_id,
category_id=body.category_id,
title=body.title,
content=body.content,
tags=body.tags,
image=body.image
)
@router.delete(
"/{post_id}",
summary='Deletes the post'
)
def delete_post(post_id: int, body: TokenArgs, db=Depends(get_db)):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
q = db.query(PostModel).filter(PostModel.id == post_id).filter(PostModel.user_id == blizzard_id)
if q.count() == 0:
raise HTTPException(status_code=404, detail='Post is undefined')
return DatabaseUtils.remove_query(db, q)
@router.post(
"/{post_id}",
summary='Edits the post'
)
def edit_post(post_id: int, body: PostCreate, db=Depends(get_db)):
blizzard_id = BlizzardUsersUtils.id__safe(body.token)
q = db.query(PostModel).filter(PostModel.id == post_id).filter(PostModel.user_id == blizzard_id)
if q.count() == 0:
raise HTTPException(status_code=404, detail='Post is undefined')
q.update({
'title': body.title,
'content': body.content,
'category_id': body.category_id,
'image': body.image,
'tags': body.tags,
})
db.commit()
return True
@router.get(
"/{post_id}",
response_model=Post,
summary='Returns the post'
)
def get_post(post_id: int, db=Depends(get_db)):
return db.query(PostModel).filter(PostModel.id == post_id).first()
| en | 0.085405 | # ██╗░░░░░██╗███╗░░██╗░██████╗░░░░██████╗░██╗░░░░░░█████╗░░█████╗░██╗░░██╗ # ██║░░░░░██║████╗░██║██╔════╝░░░░██╔══██╗██║░░░░░██╔══██╗██╔══██╗██║░██╔╝ # ██║░░░░░██║██╔██╗██║██║░░██╗░░░░██████╦╝██║░░░░░███████║██║░░╚═╝█████═╝░ # ██║░░░░░██║██║╚████║██║░░╚██╗░░░██╔══██╗██║░░░░░██╔══██║██║░░██╗██╔═██╗░ # ███████╗██║██║░╚███║╚██████╔╝░░░██████╦╝███████╗██║░░██║╚█████╔╝██║░╚██╗ # ╚══════╝╚═╝╚═╝░░╚══╝░╚═════╝░░░░╚═════╝░╚══════╝╚═╝░░╚═╝░╚════╝░╚═╝░░╚═╝ # # Developed by <NAME> (C) Ling • Black 2020 # @site http://ling.black # ██╗░░░░░██╗███╗░░██╗░██████╗░░░░██████╗░██╗░░░░░░█████╗░░█████╗░██╗░░██╗ # ██║░░░░░██║████╗░██║██╔════╝░░░░██╔══██╗██║░░░░░██╔══██╗██╔══██╗██║░██╔╝ # ██║░░░░░██║██╔██╗██║██║░░██╗░░░░██████╦╝██║░░░░░███████║██║░░╚═╝█████═╝░ # ██║░░░░░██║██║╚████║██║░░╚██╗░░░██╔══██╗██║░░░░░██╔══██║██║░░██╗██╔═██╗░ # ███████╗██║██║░╚███║╚██████╔╝░░░██████╦╝███████╗██║░░██║╚█████╔╝██║░╚██╗ # ╚══════╝╚═╝╚═╝░░╚══╝░╚═════╝░░░░╚═════╝░╚══════╝╚═╝░░╚═╝░╚════╝░╚═╝░░╚═╝ # # Developed by <NAME> (C) Ling • Black 2020 # @site http://ling.black # ----------------------------------- # CATEGORIES # ----------------------------------- Adds the category :param body: :return: Returns the categories list :return: # ----------------------------------- # POSTS # ----------------------------------- Returns all the posts by category :param category_url: :param limit: :param offset: :return: Returns all the posts by category :param blizzard_id: :param limit: :param offset: :return: Adds the post item :param body: :return: | 2.165511 | 2 |
toontown/catalog/CatalogChatBalloon.py | CrankySupertoon01/Toontown-2 | 1 | 10237 | <filename>toontown/catalog/CatalogChatBalloon.py
from pandac.PandaModules import *
class CatalogChatBalloon:
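    """
    Builds a chat balloon around rendered text: scales the balloon model to fit
    the text, optionally attaches a button node, and returns the balloon root
    NodePath together with the 2-D frame (left, right, bottom, top) it occupies.
    """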
TEXT_SHIFT = (0.1, -0.05, 1.1)
TEXT_SHIFT_REVERSED = -0.05
TEXT_SHIFT_PROP = 0.08
NATIVE_WIDTH = 10.0
MIN_WIDTH = 2.5
MIN_HEIGHT = 1
BUBBLE_PADDING = 0.3
BUBBLE_PADDING_PROP = 0.05
BUTTON_SCALE = 6
BUTTON_SHIFT = (-0.2, 0, 0.6)
FRAME_SHIFT = (0.2, 1.4)
def __init__(self, model):
self.model = model
def generate(self, text, font, textColor=(0,0,0,1), balloonColor=(1,1,1,1),
wordWrap = 10.0, button=None, reversed=False):
root = NodePath('balloon')
# Add balloon geometry:
balloon = self.model.copyTo(root)
top = balloon.find('**/top')
middle = balloon.find('**/middle')
bottom = balloon.find('**/bottom')
balloon.setColor(balloonColor)
if balloonColor[3] < 1.0:
balloon.setTransparency(1)
# Render the text into a TextNode, using the font:
t = root.attachNewNode(TextNode('text'))
t.node().setFont(font)
t.node().setWordwrap(wordWrap)
t.node().setText(text)
t.node().setTextColor(textColor)
width, height = t.node().getWidth(), t.node().getHeight()
# Turn off depth write for the text: The place in the depth buffer is
# held by the chat bubble anyway, and the text renders after the bubble
# so there's no risk of the bubble overwriting the text's pixels.
t.setAttrib(DepthWriteAttrib.make(0))
t.setPos(self.TEXT_SHIFT)
t.setX(t, self.TEXT_SHIFT_PROP*width)
t.setZ(t, height)
if reversed:
# The nametag code wants the text on the left side of the axis,
# rather than on the right side. Therefore, we move the text to the
# opposite side:
t.setX(self.TEXT_SHIFT_REVERSED - self.TEXT_SHIFT_PROP*width - width)
# Give the chat bubble a button, if one is requested:
if button:
np = button.copyTo(root)
np.setPos(t, width, 0, -height)
np.setPos(np, self.BUTTON_SHIFT)
np.setScale(self.BUTTON_SCALE)
# Set a minimum width and height for short or empty messages
if width < self.MIN_WIDTH:
width = self.MIN_WIDTH
if reversed:
t.setX(t, -width/2.0)
else:
t.setX(t, width/2.0)
t.node().setAlign(TextNode.ACenter)
if height < self.MIN_HEIGHT:
height = self.MIN_HEIGHT
t.setX(t, height/2.0)
t.node().setAlign(TextNode.ACenter)
# Set the balloon's size:
width *= 1+self.BUBBLE_PADDING_PROP
width += self.BUBBLE_PADDING
balloon.setSx(width/self.NATIVE_WIDTH)
if reversed:
balloon.setSx(-balloon.getSx())
balloon.setTwoSided(True) # Render the backface of the balloon
middle.setSz(height)
top.setZ(top, height-1)
# Calculate the frame occupied by the balloon:
left, bottom = self.FRAME_SHIFT
if reversed:
left = -left - width
frame = (left, left+width, bottom, bottom+height+1)
return root, frame
| <filename>toontown/catalog/CatalogChatBalloon.py
from pandac.PandaModules import *
class CatalogChatBalloon:
TEXT_SHIFT = (0.1, -0.05, 1.1)
TEXT_SHIFT_REVERSED = -0.05
TEXT_SHIFT_PROP = 0.08
NATIVE_WIDTH = 10.0
MIN_WIDTH = 2.5
MIN_HEIGHT = 1
BUBBLE_PADDING = 0.3
BUBBLE_PADDING_PROP = 0.05
BUTTON_SCALE = 6
BUTTON_SHIFT = (-0.2, 0, 0.6)
FRAME_SHIFT = (0.2, 1.4)
def __init__(self, model):
self.model = model
def generate(self, text, font, textColor=(0,0,0,1), balloonColor=(1,1,1,1),
wordWrap = 10.0, button=None, reversed=False):
root = NodePath('balloon')
# Add balloon geometry:
balloon = self.model.copyTo(root)
top = balloon.find('**/top')
middle = balloon.find('**/middle')
bottom = balloon.find('**/bottom')
balloon.setColor(balloonColor)
if balloonColor[3] < 1.0:
balloon.setTransparency(1)
# Render the text into a TextNode, using the font:
t = root.attachNewNode(TextNode('text'))
t.node().setFont(font)
t.node().setWordwrap(wordWrap)
t.node().setText(text)
t.node().setTextColor(textColor)
width, height = t.node().getWidth(), t.node().getHeight()
# Turn off depth write for the text: The place in the depth buffer is
# held by the chat bubble anyway, and the text renders after the bubble
# so there's no risk of the bubble overwriting the text's pixels.
t.setAttrib(DepthWriteAttrib.make(0))
t.setPos(self.TEXT_SHIFT)
t.setX(t, self.TEXT_SHIFT_PROP*width)
t.setZ(t, height)
if reversed:
# The nametag code wants the text on the left side of the axis,
# rather than on the right side. Therefore, we move the text to the
# opposite side:
t.setX(self.TEXT_SHIFT_REVERSED - self.TEXT_SHIFT_PROP*width - width)
# Give the chat bubble a button, if one is requested:
if button:
np = button.copyTo(root)
np.setPos(t, width, 0, -height)
np.setPos(np, self.BUTTON_SHIFT)
np.setScale(self.BUTTON_SCALE)
# Set a minimum width and height for short or empty messages
if width < self.MIN_WIDTH:
width = self.MIN_WIDTH
if reversed:
t.setX(t, -width/2.0)
else:
t.setX(t, width/2.0)
t.node().setAlign(TextNode.ACenter)
if height < self.MIN_HEIGHT:
height = self.MIN_HEIGHT
t.setX(t, height/2.0)
t.node().setAlign(TextNode.ACenter)
# Set the balloon's size:
width *= 1+self.BUBBLE_PADDING_PROP
width += self.BUBBLE_PADDING
balloon.setSx(width/self.NATIVE_WIDTH)
if reversed:
balloon.setSx(-balloon.getSx())
balloon.setTwoSided(True) # Render the backface of the balloon
middle.setSz(height)
top.setZ(top, height-1)
# Calculate the frame occupied by the balloon:
left, bottom = self.FRAME_SHIFT
if reversed:
left = -left - width
frame = (left, left+width, bottom, bottom+height+1)
return root, frame
| en | 0.842173 | # Add balloon geometry: # Render the text into a TextNode, using the font: # Turn off depth write for the text: The place in the depth buffer is # held by the chat bubble anyway, and the text renders after the bubble # so there's no risk of the bubble overwriting the text's pixels. # The nametag code wants the text on the left side of the axis, # rather than on the right side. Therefore, we move the text to the # opposite side: # Give the chat bubble a button, if one is requested: # Set a minimum width and height for short or empty messages # Set the balloon's size: # Render the backface of the balloon # Calculate the frame occupied by the balloon: | 2.802248 | 3 |
TTS/vocoder/tf/utils/io.py | mightmay/Mien-TTS | 0 | 10238 | import datetime
import pickle
import tensorflow as tf
def save_checkpoint(model, current_step, epoch, output_path, **kwargs):
""" Save TF Vocoder model """
state = {
'model': model.weights,
'step': current_step,
'epoch': epoch,
'date': datetime.date.today().strftime("%B %d, %Y"),
}
state.update(kwargs)
pickle.dump(state, open(output_path, 'wb'))
def load_checkpoint(model, checkpoint_path):
""" Load TF Vocoder model """
checkpoint = pickle.load(open(checkpoint_path, 'rb'))
chkp_var_dict = {var.name: var.numpy() for var in checkpoint['model']}
tf_vars = model.weights
for tf_var in tf_vars:
layer_name = tf_var.name
chkp_var_value = chkp_var_dict[layer_name]
tf.keras.backend.set_value(tf_var, chkp_var_value)
return model
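# Illustrative usage (names are placeholders, not part of this module):
#   save_checkpoint(model, current_step=step, epoch=epoch, output_path='checkpoint.pkl')
#   model = load_checkpoint(model, 'checkpoint.pkl')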
| import datetime
import pickle
import tensorflow as tf
def save_checkpoint(model, current_step, epoch, output_path, **kwargs):
""" Save TF Vocoder model """
state = {
'model': model.weights,
'step': current_step,
'epoch': epoch,
'date': datetime.date.today().strftime("%B %d, %Y"),
}
state.update(kwargs)
pickle.dump(state, open(output_path, 'wb'))
def load_checkpoint(model, checkpoint_path):
""" Load TF Vocoder model """
checkpoint = pickle.load(open(checkpoint_path, 'rb'))
chkp_var_dict = {var.name: var.numpy() for var in checkpoint['model']}
tf_vars = model.weights
for tf_var in tf_vars:
layer_name = tf_var.name
chkp_var_value = chkp_var_dict[layer_name]
tf.keras.backend.set_value(tf_var, chkp_var_value)
return model
| en | 0.399366 | Save TF Vocoder model Load TF Vocoder model | 2.468851 | 2 |
tests/test_path_choice.py | jataware/flee | 3 | 10239 | <reponame>jataware/flee
from flee import flee
"""
Generation 1 code. Incorporates only distance, travel always takes one day.
"""
def test_path_choice():
print("Testing basic data handling and simulation kernel.")
flee.SimulationSettings.MinMoveSpeed = 5000.0
flee.SimulationSettings.MaxMoveSpeed = 5000.0
flee.SimulationSettings.MaxWalkSpeed = 5000.0
e = flee.Ecosystem()
l1 = e.addLocation(name="A", movechance=1.0)
_ = e.addLocation(name="B", movechance=1.0)
_ = e.addLocation(name="C1", movechance=1.0)
_ = e.addLocation(name="C2", movechance=1.0)
_ = e.addLocation(name="D1", movechance=1.0)
_ = e.addLocation(name="D2", movechance=1.0)
_ = e.addLocation(name="D3", movechance=1.0)
# l2 = e.addLocation(name="B", movechance=1.0)
# l3 = e.addLocation(name="C1", movechance=1.0)
# l4 = e.addLocation(name="C2", movechance=1.0)
# l5 = e.addLocation(name="D1", movechance=1.0)
# l6 = e.addLocation(name="D2", movechance=1.0)
# l7 = e.addLocation(name="D3", movechance=1.0)
e.linkUp(endpoint1="A", endpoint2="B", distance=10.0)
e.linkUp(endpoint1="A", endpoint2="C1", distance=10.0)
e.linkUp(endpoint1="A", endpoint2="D1", distance=10.0)
e.linkUp(endpoint1="C1", endpoint2="C2", distance=10.0)
e.linkUp(endpoint1="D1", endpoint2="D2", distance=10.0)
e.linkUp(endpoint1="D2", endpoint2="D3", distance=10.0)
e.addAgent(location=l1)
print("Test successful!")
if __name__ == "__main__":
test_path_choice()
| from flee import flee
"""
Generation 1 code. Incorporates only distance, travel always takes one day.
"""
def test_path_choice():
print("Testing basic data handling and simulation kernel.")
flee.SimulationSettings.MinMoveSpeed = 5000.0
flee.SimulationSettings.MaxMoveSpeed = 5000.0
flee.SimulationSettings.MaxWalkSpeed = 5000.0
e = flee.Ecosystem()
l1 = e.addLocation(name="A", movechance=1.0)
_ = e.addLocation(name="B", movechance=1.0)
_ = e.addLocation(name="C1", movechance=1.0)
_ = e.addLocation(name="C2", movechance=1.0)
_ = e.addLocation(name="D1", movechance=1.0)
_ = e.addLocation(name="D2", movechance=1.0)
_ = e.addLocation(name="D3", movechance=1.0)
# l2 = e.addLocation(name="B", movechance=1.0)
# l3 = e.addLocation(name="C1", movechance=1.0)
# l4 = e.addLocation(name="C2", movechance=1.0)
# l5 = e.addLocation(name="D1", movechance=1.0)
# l6 = e.addLocation(name="D2", movechance=1.0)
# l7 = e.addLocation(name="D3", movechance=1.0)
e.linkUp(endpoint1="A", endpoint2="B", distance=10.0)
e.linkUp(endpoint1="A", endpoint2="C1", distance=10.0)
e.linkUp(endpoint1="A", endpoint2="D1", distance=10.0)
e.linkUp(endpoint1="C1", endpoint2="C2", distance=10.0)
e.linkUp(endpoint1="D1", endpoint2="D2", distance=10.0)
e.linkUp(endpoint1="D2", endpoint2="D3", distance=10.0)
e.addAgent(location=l1)
print("Test successful!")
if __name__ == "__main__":
test_path_choice() | en | 0.281603 | Generation 1 code. Incorporates only distance, travel always takes one day. # l2 = e.addLocation(name="B", movechance=1.0) # l3 = e.addLocation(name="C1", movechance=1.0) # l4 = e.addLocation(name="C2", movechance=1.0) # l5 = e.addLocation(name="D1", movechance=1.0) # l6 = e.addLocation(name="D2", movechance=1.0) # l7 = e.addLocation(name="D3", movechance=1.0) | 2.96455 | 3 |
archive/old_plots/plot_supplemental_divergence_correlations.py | garudlab/mother_infant | 2 | 10240 | <reponame>garudlab/mother_infant<filename>archive/old_plots/plot_supplemental_divergence_correlations.py
import matplotlib
matplotlib.use('Agg')
import config
import parse_midas_data
import parse_HMP_data
import os.path
import pylab
import sys
import numpy
import diversity_utils
import gene_diversity_utils
import calculate_substitution_rates
import stats_utils
import matplotlib.colors as colors
import matplotlib.cm as cmx
from math import log10,ceil
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from numpy.random import randint
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.cluster.hierarchy import cophenet
from scipy.cluster.hierarchy import fcluster
from scipy.stats import gaussian_kde
mpl.rcParams['font.size'] = 6
mpl.rcParams['lines.linewidth'] = 0.5
mpl.rcParams['legend.frameon'] = False
mpl.rcParams['legend.fontsize'] = 'small'
################################################################################
#
# Standard header to read in argument information
#
################################################################################
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--debug", help="Loads only a subset of SNPs for speed", action="store_true")
parser.add_argument("--chunk-size", type=int, help="max number of records to load", default=1000000000)
args = parser.parse_args()
debug = args.debug
chunk_size = args.chunk_size
################################################################################
good_species_list = ['Bacteroides_vulgatus_57955', 'Bacteroides_uniformis_57318', 'Alistipes_putredinis_61533']
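# Panel 1 plots SNP vs gene divergence for the first species; panels 2 and 3
# correlate its SNP divergence with that of the second and third species.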
####################################################
#
# Set up Figure (3 panels, arranged in 1x3 grid)
#
####################################################
pylab.figure(1,figsize=(7,1.5))
fig = pylab.gcf()
# make three panels panels
outer_grid = gridspec.GridSpec(1,3,width_ratios=[1,1,1],wspace=0.1)
#######
#
# SNP divergence vs Gene divergence in B. vulgatus
#
#######
gene_axis = plt.Subplot(fig, outer_grid[0])
fig.add_subplot(gene_axis)
gene_axis.set_ylabel('SNP divergence\n %s' % (good_species_list[0]))
gene_axis.set_xlabel('Gene divergence\n %s' % (good_species_list[0]))
gene_axis.set_ylim([1e-06,1e-01])
#gene_axis.set_xlim([1e-02,1])
gene_axis.spines['top'].set_visible(False)
gene_axis.spines['right'].set_visible(False)
gene_axis.get_xaxis().tick_bottom()
gene_axis.get_yaxis().tick_left()
#######
#
# SNP divergence (B vulgatus) vs SNP divergence (B uniformis)
#
#######
species_axis_1 = plt.Subplot(fig, outer_grid[1])
fig.add_subplot(species_axis_1)
species_axis_1.set_xlabel('SNP divergence\n %s' % (good_species_list[1]))
species_axis_1.set_ylim([1e-06,1e-01])
species_axis_1.set_xlim([1e-06,1e-01])
species_axis_1.spines['top'].set_visible(False)
species_axis_1.spines['right'].set_visible(False)
species_axis_1.get_xaxis().tick_bottom()
species_axis_1.get_yaxis().tick_left()
#######
#
# SNP divergence (B vulgatus) vs SNP divergence (A putredinis)
#
#######
species_axis_2 = plt.Subplot(fig, outer_grid[2])
fig.add_subplot(species_axis_2)
species_axis_2.set_xlabel('SNP divergence\n %s' % (good_species_list[2]))
species_axis_2.set_ylim([1e-06,1e-01])
species_axis_2.set_xlim([1e-06,1e-01])
species_axis_2.spines['top'].set_visible(False)
species_axis_2.spines['right'].set_visible(False)
species_axis_2.get_xaxis().tick_bottom()
species_axis_2.get_yaxis().tick_left()
########
#
# Now do calculation and plot figures
#
########
sys.stderr.write("Loading sample metadata...\n")
subject_sample_map = parse_HMP_data.parse_subject_sample_map()
sample_order_map = parse_HMP_data.parse_sample_order_map()
sys.stderr.write("Done!\n")
snp_divergence_map = {species_name: {} for species_name in good_species_list}
gene_divergence_map = {species_name: {} for species_name in good_species_list}
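# Both maps are keyed by species, then by frozenset({sample_i, sample_j}) for pairs
# of samples from different subjects, with divergence per opportunity as values.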
for species_name in good_species_list:
sys.stderr.write("Loading haploid samples...\n")
snp_samples = diversity_utils.calculate_haploid_samples(species_name, debug=debug)
sys.stderr.write("Calculating unique samples...\n")
# Only consider one sample per person
snp_samples = snp_samples[parse_midas_data.calculate_unique_samples(subject_sample_map, sample_list=snp_samples)]
sys.stderr.write("Loading pre-computed substitution rates for %s...\n" % species_name)
substitution_rate_map = calculate_substitution_rates.load_substitution_rate_map(species_name)
sys.stderr.write("Calculating snp matrix...\n")
dummy_samples, snp_difference_matrix, snp_opportunity_matrix = calculate_substitution_rates.calculate_matrices_from_substitution_rate_map(substitution_rate_map, 'core', allowed_samples=snp_samples)
snp_samples = dummy_samples
sys.stderr.write("Done!\n")
sys.stderr.write("Calculating gene matrix...\n")
gene_samples, gene_difference_matrix, gene_opportunity_matrix = calculate_substitution_rates.calculate_matrices_from_substitution_rate_map(substitution_rate_map, 'genes', allowed_samples=snp_samples)
snp_samples = gene_samples
sys.stderr.write("Done!\n")
# Focus on the subset of samples that have sufficient gene depth and snp depth
desired_samples = gene_samples
# Figure out which pairs of indices in desired_samples belong to diff subjects
desired_same_sample_idxs, desired_same_subject_idxs, desired_diff_subject_idxs = parse_midas_data.calculate_subject_pairs( subject_sample_map, desired_samples)
# Turn these into indices for snp and gene matrices
snp_sample_idx_map = parse_midas_data.calculate_sample_idx_map(desired_samples, snp_samples)
gene_sample_idx_map = parse_midas_data.calculate_sample_idx_map(desired_samples, gene_samples)
same_subject_snp_idxs = parse_midas_data.apply_sample_index_map_to_indices(snp_sample_idx_map, desired_same_subject_idxs)
same_subject_gene_idxs = parse_midas_data.apply_sample_index_map_to_indices(gene_sample_idx_map, desired_same_subject_idxs)
diff_subject_snp_idxs = parse_midas_data.apply_sample_index_map_to_indices(snp_sample_idx_map, desired_diff_subject_idxs)
diff_subject_gene_idxs = parse_midas_data.apply_sample_index_map_to_indices(gene_sample_idx_map, desired_diff_subject_idxs)
for sample_pair_idx in xrange(0,len(diff_subject_snp_idxs[0])):
snp_i = diff_subject_snp_idxs[0][sample_pair_idx]
snp_j = diff_subject_snp_idxs[1][sample_pair_idx]
gene_i = diff_subject_gene_idxs[0][sample_pair_idx]
gene_j = diff_subject_gene_idxs[1][sample_pair_idx]
sample_i = desired_samples[gene_i]
sample_j = desired_samples[gene_j]
# This will serve as a key in snp_divergence_map
sample_pair = frozenset([sample_i,sample_j])
# Focus on pairs of samples with sufficient coverage
if snp_opportunity_matrix[snp_i,snp_j]>0:
snp_d = snp_difference_matrix[snp_i,snp_j]*1.0/snp_opportunity_matrix[snp_i,snp_j]
snp_divergence_map[species_name][sample_pair] = snp_d
if gene_opportunity_matrix[gene_i, gene_j]>0:
gene_d = gene_difference_matrix[gene_i, gene_j]*1.0/gene_opportunity_matrix[gene_i, gene_j]
gene_divergence_map[species_name][sample_pair] = gene_d
#################
#
# Plot figures!
#
#################
# First calculate SNP vs gene divergence in B. vulgatus
species_name = good_species_list[0]
snp_divergences = []
gene_divergences = []
# Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map
for sample_pair in (set(snp_divergence_map[species_name].keys()) & set(gene_divergence_map[species_name].keys()) ):
snp_divergences.append( snp_divergence_map[species_name][sample_pair] )
gene_divergences.append( gene_divergence_map[species_name][sample_pair] )
snp_divergences = numpy.array(snp_divergences)
gene_divergences = numpy.array(gene_divergences)
# Null expectation (medians line up)
median_ratio = numpy.median(snp_divergences)/numpy.median(gene_divergences)
gene_axis.loglog([1e-02,1],[1e-02*median_ratio,1*median_ratio],'k-',linewidth=0.25)
gene_axis.loglog(gene_divergences, snp_divergences, 'r.', markersize=2,alpha=0.5,markeredgewidth=0, rasterized=True)
# Then SNP divergence between two species
species_1 = good_species_list[0]
species_2 = good_species_list[1]
snp_divergences_1 = []
snp_divergences_2 = []
# Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map
for sample_pair in (set(snp_divergence_map[species_1].keys()) & set(snp_divergence_map[species_2].keys()) ):
snp_divergences_1.append( snp_divergence_map[species_1][sample_pair] )
snp_divergences_2.append( snp_divergence_map[species_2][sample_pair] )
snp_divergences_1 = numpy.array(snp_divergences_1)
snp_divergences_2 = numpy.array(snp_divergences_2)
# Null expectation (medians line up)
median_ratio = numpy.median(snp_divergences_1)/numpy.median(snp_divergences_2)
species_axis_1.loglog([1e-06,1e-01],[1e-06*median_ratio,1e-01*median_ratio],'k-',linewidth=0.25)
# Observed values
species_axis_1.loglog(snp_divergences_2, snp_divergences_1, 'r.', markersize=2,alpha=0.5,markeredgewidth=0, rasterized=True)
# Then SNP divergence between other two species
species_1 = good_species_list[0]
species_2 = good_species_list[2]
snp_divergences_1 = []
snp_divergences_2 = []
# Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map
for sample_pair in (set(snp_divergence_map[species_1].keys()) & set(snp_divergence_map[species_2].keys()) ):
snp_divergences_1.append( snp_divergence_map[species_1][sample_pair] )
snp_divergences_2.append( snp_divergence_map[species_2][sample_pair] )
snp_divergences_1 = numpy.array(snp_divergences_1)
snp_divergences_2 = numpy.array(snp_divergences_2)
# Null expectation (medians line up)
median_ratio = numpy.median(snp_divergences_1)/numpy.median(snp_divergences_2)
species_axis_2.loglog([1e-06,1e-01],[1e-06*median_ratio,1e-01*median_ratio],'k-',linewidth=0.25)
species_axis_2.loglog(snp_divergences_2, snp_divergences_1, 'r.', markersize=2,alpha=0.5,markeredgewidth=0,rasterized=True)
# Since y-axes are shared, do not duplicate ticklabels
species_axis_1.set_yticklabels([])
species_axis_2.set_yticklabels([])
sys.stderr.write("Saving figure...\t")
fig.savefig('%s/supplemental_divergence_correlations.pdf' % (parse_midas_data.analysis_directory),bbox_inches='tight',dpi=600)
sys.stderr.write("Done!\n")
| import matplotlib
matplotlib.use('Agg')
import config
import parse_midas_data
import parse_HMP_data
import os.path
import pylab
import sys
import numpy
import diversity_utils
import gene_diversity_utils
import calculate_substitution_rates
import stats_utils
import matplotlib.colors as colors
import matplotlib.cm as cmx
from math import log10,ceil
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from numpy.random import randint
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.cluster.hierarchy import cophenet
from scipy.cluster.hierarchy import fcluster
from scipy.stats import gaussian_kde
mpl.rcParams['font.size'] = 6
mpl.rcParams['lines.linewidth'] = 0.5
mpl.rcParams['legend.frameon'] = False
mpl.rcParams['legend.fontsize'] = 'small'
################################################################################
#
# Standard header to read in argument information
#
################################################################################
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--debug", help="Loads only a subset of SNPs for speed", action="store_true")
parser.add_argument("--chunk-size", type=int, help="max number of records to load", default=1000000000)
args = parser.parse_args()
debug = args.debug
chunk_size = args.chunk_size
################################################################################
good_species_list = ['Bacteroides_vulgatus_57955', 'Bacteroides_uniformis_57318', 'Alistipes_putredinis_61533']
####################################################
#
# Set up Figure (3 panels, arranged in 1x3 grid)
#
####################################################
pylab.figure(1,figsize=(7,1.5))
fig = pylab.gcf()
# make three panels panels
outer_grid = gridspec.GridSpec(1,3,width_ratios=[1,1,1],wspace=0.1)
#######
#
# SNP divergence vs Gene divergence in B. vulgatus
#
#######
gene_axis = plt.Subplot(fig, outer_grid[0])
fig.add_subplot(gene_axis)
gene_axis.set_ylabel('SNP divergence\n %s' % (good_species_list[0]))
gene_axis.set_xlabel('Gene divergence\n %s' % (good_species_list[0]))
gene_axis.set_ylim([1e-06,1e-01])
#gene_axis.set_xlim([1e-02,1])
gene_axis.spines['top'].set_visible(False)
gene_axis.spines['right'].set_visible(False)
gene_axis.get_xaxis().tick_bottom()
gene_axis.get_yaxis().tick_left()
#######
#
# SNP divergence (B vulgatus) vs SNP divergence (B uniformis)
#
#######
species_axis_1 = plt.Subplot(fig, outer_grid[1])
fig.add_subplot(species_axis_1)
species_axis_1.set_xlabel('SNP divergence\n %s' % (good_species_list[1]))
species_axis_1.set_ylim([1e-06,1e-01])
species_axis_1.set_xlim([1e-06,1e-01])
species_axis_1.spines['top'].set_visible(False)
species_axis_1.spines['right'].set_visible(False)
species_axis_1.get_xaxis().tick_bottom()
species_axis_1.get_yaxis().tick_left()
#######
#
# SNP divergence (B vulgatus) vs SNP divergence (A putredinis)
#
#######
species_axis_2 = plt.Subplot(fig, outer_grid[2])
fig.add_subplot(species_axis_2)
species_axis_2.set_xlabel('SNP divergence\n %s' % (good_species_list[2]))
species_axis_2.set_ylim([1e-06,1e-01])
species_axis_2.set_xlim([1e-06,1e-01])
species_axis_2.spines['top'].set_visible(False)
species_axis_2.spines['right'].set_visible(False)
species_axis_2.get_xaxis().tick_bottom()
species_axis_2.get_yaxis().tick_left()
########
#
# Now do calculation and plot figures
#
########
sys.stderr.write("Loading sample metadata...\n")
subject_sample_map = parse_HMP_data.parse_subject_sample_map()
sample_order_map = parse_HMP_data.parse_sample_order_map()
sys.stderr.write("Done!\n")
snp_divergence_map = {species_name: {} for species_name in good_species_list}
gene_divergence_map = {species_name: {} for species_name in good_species_list}
for species_name in good_species_list:
sys.stderr.write("Loading haploid samples...\n")
snp_samples = diversity_utils.calculate_haploid_samples(species_name, debug=debug)
sys.stderr.write("Calculating unique samples...\n")
# Only consider one sample per person
snp_samples = snp_samples[parse_midas_data.calculate_unique_samples(subject_sample_map, sample_list=snp_samples)]
sys.stderr.write("Loading pre-computed substitution rates for %s...\n" % species_name)
substitution_rate_map = calculate_substitution_rates.load_substitution_rate_map(species_name)
sys.stderr.write("Calculating snp matrix...\n")
dummy_samples, snp_difference_matrix, snp_opportunity_matrix = calculate_substitution_rates.calculate_matrices_from_substitution_rate_map(substitution_rate_map, 'core', allowed_samples=snp_samples)
snp_samples = dummy_samples
sys.stderr.write("Done!\n")
sys.stderr.write("Calculating gene matrix...\n")
gene_samples, gene_difference_matrix, gene_opportunity_matrix = calculate_substitution_rates.calculate_matrices_from_substitution_rate_map(substitution_rate_map, 'genes', allowed_samples=snp_samples)
snp_samples = gene_samples
sys.stderr.write("Done!\n")
# Focus on the subset of samples that have sufficient gene depth and snp depth
desired_samples = gene_samples
# Figure out which pairs of indices in desired_samples belong to diff subjects
desired_same_sample_idxs, desired_same_subject_idxs, desired_diff_subject_idxs = parse_midas_data.calculate_subject_pairs( subject_sample_map, desired_samples)
# Turn these into indices for snp and gene matrices
snp_sample_idx_map = parse_midas_data.calculate_sample_idx_map(desired_samples, snp_samples)
gene_sample_idx_map = parse_midas_data.calculate_sample_idx_map(desired_samples, gene_samples)
same_subject_snp_idxs = parse_midas_data.apply_sample_index_map_to_indices(snp_sample_idx_map, desired_same_subject_idxs)
same_subject_gene_idxs = parse_midas_data.apply_sample_index_map_to_indices(gene_sample_idx_map, desired_same_subject_idxs)
diff_subject_snp_idxs = parse_midas_data.apply_sample_index_map_to_indices(snp_sample_idx_map, desired_diff_subject_idxs)
diff_subject_gene_idxs = parse_midas_data.apply_sample_index_map_to_indices(gene_sample_idx_map, desired_diff_subject_idxs)
for sample_pair_idx in xrange(0,len(diff_subject_snp_idxs[0])):
snp_i = diff_subject_snp_idxs[0][sample_pair_idx]
snp_j = diff_subject_snp_idxs[1][sample_pair_idx]
gene_i = diff_subject_gene_idxs[0][sample_pair_idx]
gene_j = diff_subject_gene_idxs[1][sample_pair_idx]
sample_i = desired_samples[gene_i]
sample_j = desired_samples[gene_j]
# This will serve as a key in snp_divergence_map
sample_pair = frozenset([sample_i,sample_j])
# Focus on pairs of samples with sufficient coverage
if snp_opportunity_matrix[snp_i,snp_j]>0:
snp_d = snp_difference_matrix[snp_i,snp_j]*1.0/snp_opportunity_matrix[snp_i,snp_j]
snp_divergence_map[species_name][sample_pair] = snp_d
if gene_opportunity_matrix[gene_i, gene_j]>0:
gene_d = gene_difference_matrix[gene_i, gene_j]*1.0/gene_opportunity_matrix[gene_i, gene_j]
gene_divergence_map[species_name][sample_pair] = gene_d
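# Both maps are keyed by frozensets of sample ids, so the same pair can be
# looked up across species and across the SNP/gene maps below.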
#################
#
# Plot figures!
#
#################
# First calculate SNP vs gene divergence in B. vulgatus
species_name = good_species_list[0]
snp_divergences = []
gene_divergences = []
# Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map
for sample_pair in (set(snp_divergence_map[species_name].keys()) & set(gene_divergence_map[species_name].keys()) ):
snp_divergences.append( snp_divergence_map[species_name][sample_pair] )
gene_divergences.append( gene_divergence_map[species_name][sample_pair] )
snp_divergences = numpy.array(snp_divergences)
gene_divergences = numpy.array(gene_divergences)
# Null expectation (medians line up)
median_ratio = numpy.median(snp_divergences)/numpy.median(gene_divergences)
gene_axis.loglog([1e-02,1],[1e-02*median_ratio,1*median_ratio],'k-',linewidth=0.25)
gene_axis.loglog(gene_divergences, snp_divergences, 'r.', markersize=2,alpha=0.5,markeredgewidth=0, rasterized=True)
# Then SNP divergence between two species
species_1 = good_species_list[0]
species_2 = good_species_list[1]
snp_divergences_1 = []
snp_divergences_2 = []
# Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map
for sample_pair in (set(snp_divergence_map[species_1].keys()) & set(snp_divergence_map[species_2].keys()) ):
snp_divergences_1.append( snp_divergence_map[species_1][sample_pair] )
snp_divergences_2.append( snp_divergence_map[species_2][sample_pair] )
snp_divergences_1 = numpy.array(snp_divergences_1)
snp_divergences_2 = numpy.array(snp_divergences_2)
# Null expectation (medians line up)
median_ratio = numpy.median(snp_divergences_1)/numpy.median(snp_divergences_2)
species_axis_1.loglog([1e-06,1e-01],[1e-06*median_ratio,1e-01*median_ratio],'k-',linewidth=0.25)
# Observed values
species_axis_1.loglog(snp_divergences_2, snp_divergences_1, 'r.', markersize=2,alpha=0.5,markeredgewidth=0, rasterized=True)
# Then SNP divergence between other two species
species_1 = good_species_list[0]
species_2 = good_species_list[2]
snp_divergences_1 = []
snp_divergences_2 = []
# Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map
for sample_pair in (set(snp_divergence_map[species_1].keys()) & set(snp_divergence_map[species_2].keys()) ):
snp_divergences_1.append( snp_divergence_map[species_1][sample_pair] )
snp_divergences_2.append( snp_divergence_map[species_2][sample_pair] )
snp_divergences_1 = numpy.array(snp_divergences_1)
snp_divergences_2 = numpy.array(snp_divergences_2)
# Null expectation (medians line up)
median_ratio = numpy.median(snp_divergences_1)/numpy.median(snp_divergences_2)
species_axis_2.loglog([1e-06,1e-01],[1e-06*median_ratio,1e-01*median_ratio],'k-',linewidth=0.25)
species_axis_2.loglog(snp_divergences_2, snp_divergences_1, 'r.', markersize=2,alpha=0.5,markeredgewidth=0,rasterized=True)
# Since y-axes are shared, do not duplicate ticklabels
species_axis_1.set_yticklabels([])
species_axis_2.set_yticklabels([])
sys.stderr.write("Saving figure...\t")
fig.savefig('%s/supplemental_divergence_correlations.pdf' % (parse_midas_data.analysis_directory),bbox_inches='tight',dpi=600)
sys.stderr.write("Done!\n") | en | 0.498721 | ################################################################################ # # Standard header to read in argument information # ################################################################################ ################################################################################ #################################################### # # Set up Figure (3 panels, arranged in 1x3 grid) # #################################################### # make three panels panels ####### # # SNP divergence vs Gene divergence in B. vulgatus # ####### #gene_axis.set_xlim([1e-02,1]) ####### # # SNP divergence (B vulgatus) vs SNP divergence (A putredinis) # ####### ####### # # SNP divergence (B vulgatus) vs SNP divergence (A putredinis) # ####### ######## # # Now do calculation and plot figures # ######## # Only consider one sample per person # Focus on the subset of samples that have sufficient gene depth and snp depth # Figure out which pairs of indices in desired_samples belong to diff subjects # Turn these into indices for snp and gene matrices # This will serve as a key in snp_divergence_map # Focus on pairs of samples with sufficient coverage ################# # # Plot figures! # ################# # First calculate SNP vs gene divergence in B. vulgatus # Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map # Null expectation (medians line up) # Then SNP divergence between two species # Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map # Null expectation (medians line up) # Observed values # Then SNP divergence between other two species # Loop over sample pairs that are in both snp_divergence_map and gene_divergence_map # Null expectation (medians line up) # Since y-axes are shared, do not duplicate ticklables | 2.018415 | 2 |
multivis/plotFeatures.py | brettChapman/cimcb_vis | 1 | 10241 | import sys
import copy
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
from .utils import *
import numpy as np
import pandas as pd
class plotFeatures:
usage = """Produces different feature plots given a data table and peak table.
Initial_Parameters
----------
peaktable : Pandas dataframe containing peak data. Must contain 'Name' and 'Label'.
datatable : Pandas dataframe containing matrix of values to plot (N samples x N features). Columns/features must be same as 'Name' from Peak Table.
Methods
-------
set_params : Set parameters -
plot_type: The type of plot. Either "point", "violin", "box", "swarm", "violin-swarm" or "box-swarm" (default: 'point')
column_numbers: The number of columns to display in the plots (default: 4)
log_data: Perform a log ('natural', base 2 or base 10) on all data (default: (True, 2))
scale_data: Scale the data ('standard' (centers to the mean and scales to unit variance), 'minmax' (scales between 0 and 1), 'maxabs' (scales to the absolute maximum value), 'robust' (centers to the median and scales to between 25th and 75th quantile range) (default: (True, 'minmax'))
impute_data: Impute any missing values using KNN impute with a set number of nearest neighbours (default: (True, 3))
style: Set the matplotlib style (see https://matplotlib.org/stable/tutorials/introductory/customizing.html) (default: 'seaborn-white')
transparent: Setting to 'True' will make the background transparent (default: False)
figSize: The figure size as a tuple (width,height) (default: (15,10))
fontSize: The font size for all text (default: 12)
colour_palette: The colour palette to use for the plot (default: None)
y_axis_label: The label to customise the y axis (default: None)
x_axis_rotation: Rotate the x axis labels this number of degrees (default: 0)
group_column_name: The group column name used in the datatable (e.g. 'Class') (default: None)
point_estimator: The statistical function to use for the point plot. Either "mean" or "median" (default: 'mean')
point_ci: The bootstrapped confidence interval for the point plot. Can also be standard deviation ("sd") (default: 95)
violin_distribution_type: The representation of the distribution of data points within the violin plot. Either "quartile", "box", "point", "stick" or None (default: 'box')
violin_width_scale: The method used to scale the width of the violin plot. Either "area", "count" or "width" (default: "width")
box_iqr: The proportion past the lower and upper quartiles to extend the plot whiskers for the box plot. Points outside this range will be identified as outliers (default: 1.5)
saveImage: Setting to 'True' will save the image to file (default: True)
imageFileName: The image file name to save to (default: [plot_type]_features.png')
dpi: The number of Dots Per Inch (DPI) for the image (default: 200)
help : Print this help text
plot : Generates feature plots
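Example (a minimal sketch; assumes 'peak_df' and 'data_df' are pandas DataFrames prepared by the caller):
fp = plotFeatures(peak_df, data_df)
fp.set_params(plot_type='violin', group_column_name='Class')
fp.plot()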
"""
def __init__(self, peaktable, datatable):
peaktable = self.__checkPeakTable(self.__checkData(peaktable))
datatable = self.__checkData(datatable)
# Slice the meta-data, and select only peaks from the peaktable for processing, and add the meta-data back
meta = datatable.T[~datatable.T.index.isin(peaktable['Name'])].T.reset_index(drop=True)
dat = datatable[peaktable['Name']].reset_index()
datatable = pd.concat([meta, dat], axis=1).set_index(['index'])
datatable.index.name = None
self.__peaktable = peaktable
# Search for duplicate labels and amend with a suffix, to avoid issues when relabelling the datatable
labels = copy.deepcopy(list(peaktable['Label']))
label_counts = {k: v for k, v in Counter(labels).items() if v > 1}
for i in reversed(range(len(labels))):
item = str(labels[i])
if item in label_counts and label_counts[item]:
labels[i] += "_" + str(label_counts[item])
label_counts[item] -= 1
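# e.g. a label occurring twice becomes 'label_1' and 'label_2'; iterating in reverse keeps the numbering in the original order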
#Label datatable with peak labels instead of names for ease of feature plotting
col_label_dict = dict(zip(list(peaktable['Name']), labels))
datatable.rename(columns=col_label_dict, inplace=True)
self.__peak_labels = labels
self.__datatable = datatable
self.set_params()
def help(self):
print(plotFeatures.usage)
def set_params(self, plot_type='point', column_numbers=4, log_data=(True, 2), scale_data=(True, 'minmax'), impute_data=(True, 3), style='seaborn-white', transparent=False, figSize = (15, 10), fontSize = 12, colour_palette=None, y_axis_label=None, x_axis_rotation=0, group_column_name=None, point_estimator='mean', point_ci=95, violin_distribution_type='box', violin_width_scale='width', box_iqr=1.5, saveImage=True, imageFileName='_features.png', dpi = 200):
plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi = self.__paramCheck(plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi)
self.__plot_type = plot_type;
self.__column_numbers = column_numbers;
self.__log_data = log_data;
self.__scale_data = scale_data;
self.__impute_data = impute_data;
self.__style = style;
self.__transparent = transparent;
self.__figSize = figSize;
self.__fontSize = fontSize;
self.__colour_palette = colour_palette;
self.__y_axis_label = y_axis_label;
self.__x_axis_rotation = x_axis_rotation;
self.__group_column_name = group_column_name;
self.__point_estimator = point_estimator;
self.__point_ci = point_ci;
self.__violin_distribution_type = violin_distribution_type;
self.__violin_width_scale = violin_width_scale;
self.__box_iqr = box_iqr;
self.__saveImage = saveImage;
self.__imageFileName = imageFileName;
self.__dpi = dpi;
def plot(self):
datatable = copy.deepcopy(self.__datatable)
labels = self.__peak_labels
plot_type = self.__plot_type
group_column_name = self.__group_column_name
column_numbers = self.__column_numbers
colour_palette = self.__colour_palette
point_ci = self.__point_ci
point_estimator = self.__point_estimator
log_data = self.__log_data
scale_data = self.__scale_data
impute_data = self.__impute_data
x_axis_rotation = self.__x_axis_rotation
y_axis_label = self.__y_axis_label
violin_distribution_type = self.__violin_distribution_type
violin_width_scale = self.__violin_width_scale
box_iqr = self.__box_iqr
imageFileName = self.__imageFileName
saveImage = self.__saveImage
fontSize = self.__fontSize
style = self.__style
transparent = self.__transparent
dpi = self.__dpi
figSize = self.__figSize
meta = datatable.T[~datatable.T.index.isin(labels)].T.reset_index(drop=True)
X = datatable[labels].reset_index(drop=True)
(log_bool, log_base) = log_data;
if log_bool:
if isinstance(log_base, str) and log_base.lower() == 'natural':
X = X.applymap(np.log);
elif log_base == 2:
X = X.applymap(np.log2);
elif log_base == 10:
X = X.applymap(np.log10);
else:
print("Error: The chosen log type is invalid.")
sys.exit()
(scale_bool, scale_type) = scale_data
if scale_bool:
if isinstance(scale_type, str) and scale_type.lower() == 'standard':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
elif isinstance(scale_type, str) and scale_type.lower() == 'minmax':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
elif isinstance(scale_type, str) and scale_type.lower() == 'maxabs':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
elif isinstance(scale_type, str) and scale_type.lower() == 'robust':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
else:
print("Error: The chosen scale type is invalid.")
sys.exit()
(impute_bool, k) = impute_data;
if impute_bool:
X = imputeData(X, k=k).reset_index(drop=True)
if not isinstance(X, pd.DataFrame):
X = pd.DataFrame(X, columns=labels)
# Add the meta data back in with the logged, scaled, or imputed data
datatable = pd.concat([meta, X], axis=1).reset_index(drop=True)
with plt.style.context(style):
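# One subplot per peak label, laid out row-wise in 'column_numbers' columns with a shared y-axis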
fig, axes = plt.subplots(nrows=int(np.ceil(float(len(labels) / column_numbers))), ncols=column_numbers, sharey=True, figsize=figSize)
if plot_type == 'point':
for peak_index, peak in enumerate(labels):
if point_estimator.lower() == 'mean':
point_estimator = 'Mean'
ax = sns.pointplot(data=datatable, x=group_column_name, y=peak, estimator=np.nanmean, capsize=0.1, ci=point_ci, palette=colour_palette, ax=axes.flat[peak_index])
elif point_estimator.lower() == 'median':
point_estimator = 'Median'
ax = sns.pointplot(data=datatable, x=group_column_name, y=peak, estimator=np.nanmedian, capsize=0.1, ci=point_ci, palette=colour_palette, ax=axes.flat[peak_index])
else:
print("Error: Invalid point plot estimator type.")
sys.exit()
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
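# The title and y-axis label below mirror the preprocessing (log/scale) and CI settings chosen above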
if log_bool:
if scale_bool:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) {} Peak Area within SD'.format(log_base, scale_type, point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) {} Peak Area & {}% CI'.format(log_base, scale_type, point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) {} Peak Area within SD'.format(log_base, point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) {} Peak Area & {}% CI'.format(log_base, point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) {} Peak Area within SD'.format(scale_type, point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) {} Peak Area & {}% CI'.format(scale_type, point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('{} Peak Area within SD'.format(point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('{} Peak Area & {}% CI'.format(point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'violin':
for peak_index, peak in enumerate(labels):
ax = sns.violinplot(data=datatable, x=group_column_name, y=peak, linewidth=1, inner=violin_distribution_type, scale=violin_width_scale, palette=colour_palette, ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'box':
for peak_index, peak in enumerate(labels):
ax = sns.boxplot(data=datatable, x=group_column_name, y=peak, palette=colour_palette, whis=box_iqr, ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'swarm':
for peak_index, peak in enumerate(labels):
ax = sns.swarmplot(data=datatable, x=group_column_name, y=peak, size=10, palette=colour_palette, ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'violin-swarm':
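# Violin outline only (inner=None); the individual observations are drawn by the swarm overlay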
for peak_index, peak in enumerate(labels):
ax = sns.violinplot(data=datatable, x=group_column_name, y=peak, linewidth=1, inner=None, scale=violin_width_scale, palette=colour_palette, ax=axes.flat[peak_index])
ax = sns.swarmplot(data=datatable, x=group_column_name, y=peak, color="white", edgecolor="gray", ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'box-swarm':
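# whis=np.inf extends the whiskers to the data extremes so no points are drawn as outliers; the swarm overlay shows every observation instead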
for peak_index, peak in enumerate(labels):
ax = sns.boxplot(data=datatable, x=group_column_name, y=peak, palette=colour_palette, whis=np.inf, ax=axes.flat[peak_index])
ax = sns.swarmplot(data=datatable, x=group_column_name, y=peak, color="0.2", ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
fig.tight_layout(h_pad=5, w_pad=2)
if saveImage:
plt.savefig(plot_type + 'Plot' + imageFileName, dpi=dpi, transparent=transparent)
plt.show()
def __paramCheck(self, plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi):
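# Validate every user-supplied parameter, exiting with an explanatory message on the first invalid value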
cmap_list = list(matplotlib.cm.cmaps_listed) + list(matplotlib.cm.datad)
cmap_list_r = [cmap + '_r' for cmap in cmap_list]
cmap_list = cmap_list + cmap_list_r
plot_types = ['point', 'violin', 'box', 'swarm', 'violin-swarm', 'box-swarm']
estimator_types = ['mean', 'median']
datatable = self.__datatable
if plot_type.lower() not in plot_types:
print("Error: Plot type is not valid. Choose one of the following: {}.".format(', '.join(plot_types)))
sys.exit()
if not isinstance(column_numbers, int):
print("Error: Column numbers is not valid. Choose an integer value.")
sys.exit()
if not isinstance(log_data, tuple):
print("Error: Log data type is not a tuple. Please ensure the value is a tuple (e.g. (True, 2)).")
sys.exit()
else:
(log_bool, log_base) = log_data
if not isinstance(log_bool, bool):
print("Error: Log data first tuple item is not a boolean value. Choose either \"True\" or \"False\".")
sys.exit()
base_types = ['natural', 2, 10]
if isinstance(log_base, str):
log_base = log_base.lower()
if log_base not in base_types:
print("Error: Log data second tuple item is not valid. Choose one of {}.".format(', '.join(base_types)))
sys.exit()
if not isinstance(scale_data, tuple):
print("Error: Scale data type is not a tuple. Please ensure the value is a tuple (e.g. (True, 'standard')).")
sys.exit()
else:
(scale_bool, scale_type) = scale_data
if not isinstance(scale_bool, bool):
print("Error: Scale data first tuple item is not a boolean value. Choose either \"True\" or \"False\".")
sys.exit()
scale_types = ['standard', 'minmax', 'maxabs', 'robust']
if isinstance(scale_type, str):
scale_type = scale_type.lower()
if scale_type not in scale_types:
print("Error: Scale data second tuple item is not valid. Choose one of {}.".format(', '.join(scale_types)))
sys.exit()
if not isinstance(impute_data, tuple):
print("Error: Impute data type is not a tuple. Please ensure the value is a tuple (e.g. (True, 3)).")
sys.exit()
else:
(impute_bool, k) = impute_data
if not isinstance(impute_bool, bool):
print("Error: Impute data first tuple item is not a boolean value. Choose either \"True\" or \"False\".")
sys.exit()
if not isinstance(k, float):
if not isinstance(k, int):
print("Error: Impute data second tuple item, the nearest neighbours k value, is not valid. Choose a float or integer value.")
sys.exit()
if not isinstance(style, str):
print("Error: Seaborn style is not valid. Choose a string value.")
sys.exit()
else:
styleList = list(plt.style.available)
if style not in styleList:
print("Error: Chosen style is not valid. Choose one of the following: {}.".format(', '.join(styleList)))
sys.exit()
if not isinstance(transparent, bool):
print("Error: The transparent value is not valid. Choose either \"True\" or \"False\".")
sys.exit()
if not isinstance(figSize, tuple):
print("Error: Figure size is not valid. Choose a tuple of length 2.")
sys.exit()
else:
for length in figSize:
if not isinstance(length, float):
if not isinstance(length, int):
print("Error: Figure size value is not valid. Choose a float or integer value.")
sys.exit()
if not isinstance(fontSize, float):
if not isinstance(fontSize, int):
print("Error: Font size is not valid. Choose a float or integer value.")
sys.exit()
if colour_palette is not None:
if not isinstance(colour_palette, str):
print("Error: The colour palette is not valid. Choose a string value.")
sys.exit()
else:
if colour_palette not in cmap_list:
print("Error: The colour palette is not valid. Choose one of the following: {}.".format(', '.join(cmap_list)))
sys.exit()
if y_axis_label is not None:
if not isinstance(y_axis_label, str):
print("Error: The y axis label is not valid. Choose a string value.")
sys.exit()
if not isinstance(x_axis_rotation, float):
if not isinstance(x_axis_rotation, int):
print("Error: The x axis rotation value is not valid. Choose a float or integer value.")
sys.exit()
if ((x_axis_rotation < 0) or (x_axis_rotation > 360)):
print("Error: The x axis rotation value is not valid. Choose a value >=0 or <= 360.")
sys.exit()
if group_column_name is not None:
if not isinstance(group_column_name, str):
print("Error: Group column name is not valid. Choose a string value.")
sys.exit()
else:
if group_column_name not in list(datatable.columns):
print("Error: Group column name not valid. Choose one of {}.".format(', '.join(list(datatable.columns))))
sys.exit()
if point_estimator.lower() not in estimator_types:
print("Error: The chosen point plot estimator is invalid. Choose one of \"{}\".".format('\" or \"'.join(estimator_types)))
sys.exit()
if isinstance(point_ci, str):
if point_ci != 'sd':
print("Error: The string value for point plot ci is invalid. Choose a float, integer or 'sd' value for standard deviation.")
sys.exit()
else:
if not isinstance(point_ci, float):
if not isinstance(point_ci, int):
print("Error: The value for point plot ci is invalid. Choose a float, integer or 'sd' value for standard deviation.")
sys.exit()
violin_distribution_types = ['quartile', 'box', 'point', 'stick', None]
violin_width_scale_types = ['area', 'count', 'width']
if plot_type.lower() == "violin":
if violin_distribution_type not in violin_distribution_types:
print("Error: Violin distribution type not valid. Choose one of the following: {}.".format(', '.join(violin_distribution_types)))
sys.exit()
if violin_width_scale not in violin_width_scale_types:
print("Error: Violin width scale type not valid. Choose one of the following: {}.".format(', '.join(violin_width_scale_types)))
sys.exit()
if plot_type.lower() == "box":
if not isinstance(box_iqr, float):
if not isinstance(box_iqr, int):
print(
"Error: The box plot interquartile range extension beyond whiskers is not valid. Choose a float or integer value.")
sys.exit()
if not isinstance(saveImage, bool):
print("Error: Save image is not valid. Choose either \"True\" or \"False\".")
sys.exit()
if not isinstance(imageFileName, str):
print("Error: Image file name is not valid. Choose a string value.")
sys.exit()
if not isinstance(dpi, float):
if not isinstance(dpi, int):
print("Error: Dpi is not valid. Choose a float or integer value.")
sys.exit()
return plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi
def __checkData(self, df):
if not isinstance(df, pd.DataFrame):
print("Error: A dataframe was not entered. Please check your data.")
return df
def __checkPeakTable(self, PeakTable):
if "Name" not in PeakTable.columns:
print("Error: \"Name\" column not in Peak Table. Please check your data.")
sys.exit()
if "Label" not in PeakTable.columns:
print("Error: \"Label\" column not in Peak Table. Please check your data.")
sys.exit()
# Do not assume the peaks/nodes have been indexed correctly. Remove any index columns and reindex.
column_list = [column.lower() for column in PeakTable.columns]
if 'idx' in column_list:
index = column_list.index('idx')
column_name = PeakTable.columns[index]
PeakTable = PeakTable.drop(columns=[column_name])
if 'index' in column_list:
index = column_list.index('index')
column_name = PeakTable.columns[index]
PeakTable = PeakTable.drop(columns=[column_name])
PeakTable = PeakTable.reset_index(drop=True)
PeakTable.index.name = 'Idx'
PeakTable = PeakTable.reset_index()
return PeakTable | import sys
import copy
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from collections import Counter
from .utils import *
import numpy as np
import pandas as pd
class plotFeatures:
usage = """Produces different feature plots given a data table and peak table.
Initial_Parameters
----------
peaktable : Pandas dataframe containing peak data. Must contain 'Name' and 'Label'.
datatable : Pandas dataframe containing matrix of values to plot (N samples x N features). Columns/features must be same as 'Name' from Peak Table.
Methods
-------
set_params : Set parameters -
plot_type: The type of plot. Either "point", "violin", "box", "swarm", "violin-swarm" or "box-swarm" (default: 'point')
column_numbers: The number of columns to display in the plots (default: 4)
log_data: Perform a log ('natural', base 2 or base 10) on all data (default: (True, 2))
scale_data: Scale the data ('standard' (centers to the mean and scales to unit variance), 'minmax' (scales between 0 and 1), 'maxabs' (scales to the absolute maximum value), 'robust' (centers to the median and scales to between 25th and 75th quantile range) (default: (True, 'minmax'))
impute_data: Impute any missing values using KNN impute with a set number of nearest neighbours (default: (True, 3))
style: Set the matplotlib style (see https://matplotlib.org/stable/tutorials/introductory/customizing.html) (default: 'seaborn-white')
transparent: Setting to 'True' will make the background transparent (default: False)
figSize: The figure size as a tuple (width,height) (default: (15,10))
fontSize: The font size for all text (default: 12)
colour_palette: The colour palette to use for the plot (default: None)
y_axis_label: The label to customise the y axis (default: None)
x_axis_rotation: Rotate the x axis labels this number of degrees (default: 0)
group_column_name: The group column name used in the datatable (e.g. 'Class') (default: None)
point_estimator: The statistical function to use for the point plot. Either "mean" or "median" (default: 'mean')
point_ci: The bootstrapped confidence interval for the point plot. Can also be standard deviation ("sd") (default: 95)
violin_distribution_type: The representation of the distribution of data points within the violin plot. Either "quartile", "box", "point", "stick" or None (default: 'box')
violin_width_scale: The method used to scale the width of the violin plot. Either "area", "count" or "width" (default: "width")
box_iqr: The proportion past the lower and upper quartiles to extend the plot whiskers for the box plot. Points outside this range will be identified as outliers (default: 1.5)
saveImage: Setting to 'True' will save the image to file (default: True)
imageFileName: The image file name to save to (default: [plot_type]_features.png')
dpi: The number of Dots Per Inch (DPI) for the image (default: 200)
help : Print this help text
plot : Generates feature plots
"""
def __init__(self, peaktable, datatable):
peaktable = self.__checkPeakTable(self.__checkData(peaktable))
datatable = self.__checkData(datatable)
# Slice the meta-data, and select only peaks from the peaktable for processing, and add the meta-data back
meta = datatable.T[~datatable.T.index.isin(peaktable['Name'])].T.reset_index(drop=True)
dat = datatable[peaktable['Name']].reset_index()
datatable = pd.concat([meta, dat], axis=1).set_index(['index'])
datatable.index.name = None
self.__peaktable = peaktable
# Search for duplicate labels and amend with a suffix, to avoid issues when relabelling the datatable
labels = copy.deepcopy(list(peaktable['Label']))
label_counts = {k: v for k, v in Counter(labels).items() if v > 1}
for i in reversed(range(len(labels))):
item = str(labels[i])
if item in label_counts and label_counts[item]:
labels[i] += "_" + str(label_counts[item])
label_counts[item] -= 1
#Label datatable with peak labels instead of names for ease of feature plotting
col_label_dict = dict(zip(list(peaktable['Name']), labels))
datatable.rename(columns=col_label_dict, inplace=True)
self.__peak_labels = labels
self.__datatable = datatable
self.set_params()
def help(self):
print(plotFeatures.usage)
def set_params(self, plot_type='point', column_numbers=4, log_data=(True, 2), scale_data=(True, 'minmax'), impute_data=(True, 3), style='seaborn-white', transparent=False, figSize = (15, 10), fontSize = 12, colour_palette=None, y_axis_label=None, x_axis_rotation=0, group_column_name=None, point_estimator='mean', point_ci=95, violin_distribution_type='box', violin_width_scale='width', box_iqr=1.5, saveImage=True, imageFileName='_features.png', dpi = 200):
plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi = self.__paramCheck(plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi)
self.__plot_type = plot_type;
self.__column_numbers = column_numbers;
self.__log_data = log_data;
self.__scale_data = scale_data;
self.__impute_data = impute_data;
self.__style = style;
self.__transparent = transparent;
self.__figSize = figSize;
self.__fontSize = fontSize;
self.__colour_palette = colour_palette;
self.__y_axis_label = y_axis_label;
self.__x_axis_rotation = x_axis_rotation;
self.__group_column_name = group_column_name;
self.__point_estimator = point_estimator;
self.__point_ci = point_ci;
self.__violin_distribution_type = violin_distribution_type;
self.__violin_width_scale = violin_width_scale;
self.__box_iqr = box_iqr;
self.__saveImage = saveImage;
self.__imageFileName = imageFileName;
self.__dpi = dpi;
def plot(self):
datatable = copy.deepcopy(self.__datatable)
labels = self.__peak_labels
plot_type = self.__plot_type
group_column_name = self.__group_column_name
column_numbers = self.__column_numbers
colour_palette = self.__colour_palette
point_ci = self.__point_ci
point_estimator = self.__point_estimator
log_data = self.__log_data
scale_data = self.__scale_data
impute_data = self.__impute_data
x_axis_rotation = self.__x_axis_rotation
y_axis_label = self.__y_axis_label
violin_distribution_type = self.__violin_distribution_type
violin_width_scale = self.__violin_width_scale
box_iqr = self.__box_iqr
imageFileName = self.__imageFileName
saveImage = self.__saveImage
fontSize = self.__fontSize
style = self.__style
transparent = self.__transparent
dpi = self.__dpi
figSize = self.__figSize
meta = datatable.T[~datatable.T.index.isin(labels)].T.reset_index(drop=True)
X = datatable[labels].reset_index(drop=True)
(log_bool, log_base) = log_data;
if log_bool:
if isinstance(log_base, str) and log_base.lower() == 'natural':
X = X.applymap(np.log);
elif log_base == 2:
X = X.applymap(np.log2);
elif log_base == 10:
X = X.applymap(np.log10);
else:
print("Error: The chosen log type is invalid.")
sys.exit()
(scale_bool, scale_type) = scale_data
if scale_bool:
if isinstance(scale_type, str) and scale_type.lower() == 'standard':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
elif isinstance(scale_type, str) and scale_type.lower() == 'minmax':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
elif isinstance(scale_type, str) and scale_type.lower() == 'maxabs':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
elif isinstance(scale_type, str) and scale_type.lower() == 'robust':
X = scaler(X, type=scale_type.lower()).reset_index(drop=True)
else:
print("Error: The chosen scale type is invalid.")
sys.exit()
(impute_bool, k) = impute_data;
if impute_bool:
X = imputeData(X, k=k).reset_index(drop=True)
if not isinstance(X, pd.DataFrame):
X = pd.DataFrame(X, columns=labels)
# Add the meta data back in with the logged, scaled, or imputed data
datatable = pd.concat([meta, X], axis=1).reset_index(drop=True)
with plt.style.context(style):
fig, axes = plt.subplots(nrows=int(np.ceil(float(len(labels) / column_numbers))), ncols=column_numbers, sharey=True, figsize=figSize)
if plot_type == 'point':
for peak_index, peak in enumerate(labels):
if point_estimator.lower() == 'mean':
point_estimator = 'Mean'
ax = sns.pointplot(data=datatable, x=group_column_name, y=peak, estimator=np.nanmean, capsize=0.1, ci=point_ci, palette=colour_palette, ax=axes.flat[peak_index])
elif point_estimator.lower() == 'median':
point_estimator = 'Median'
ax = sns.pointplot(data=datatable, x=group_column_name, y=peak, estimator=np.nanmedian, capsize=0.1, ci=point_ci, palette=colour_palette, ax=axes.flat[peak_index])
else:
print("Error: Invalid point plot estimator type.")
sys.exit()
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
if log_bool:
if scale_bool:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) {} Peak Area within SD'.format(log_base, scale_type, point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) {} Peak Area & {}% CI'.format(log_base, scale_type, point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) {} Peak Area within SD'.format(log_base, point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Log({}) {} Peak Area & {}% CI'.format(log_base, point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) {} Peak Area within SD'.format(scale_type, point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) {} Peak Area & {}% CI'.format(scale_type, point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if isinstance(point_ci, str):
if point_ci == 'sd':
ax.set_title(peak + ' within SD', fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('{} Peak Area within SD'.format(point_estimator), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
ax.set_title(peak + ' with {}% CI'.format(point_ci), fontsize=fontSize)
ax.set_xlabel('')
if y_axis_label is None:
ax.set_ylabel('{} Peak Area & {}% CI'.format(point_estimator, point_ci), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'violin':
for peak_index, peak in enumerate(labels):
ax = sns.violinplot(data=datatable, x=group_column_name, y=peak, linewidth=1, inner=violin_distribution_type, scale=violin_width_scale, palette=colour_palette, ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'box':
for peak_index, peak in enumerate(labels):
ax = sns.boxplot(data=datatable, x=group_column_name, y=peak, palette=colour_palette, whis=box_iqr, ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'swarm':
for peak_index, peak in enumerate(labels):
ax = sns.swarmplot(data=datatable, x=group_column_name, y=peak, size=10, palette=colour_palette, ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'violin-swarm':
for peak_index, peak in enumerate(labels):
ax = sns.violinplot(data=datatable, x=group_column_name, y=peak, linewidth=1, inner=None, scale=violin_width_scale, palette=colour_palette, ax=axes.flat[peak_index])
ax = sns.swarmplot(data=datatable, x=group_column_name, y=peak, color="white", edgecolor="gray", ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
elif plot_type.lower() == 'box-swarm':
for peak_index, peak in enumerate(labels):
ax = sns.boxplot(data=datatable, x=group_column_name, y=peak, palette=colour_palette, whis=np.inf, ax=axes.flat[peak_index])
ax = sns.swarmplot(data=datatable, x=group_column_name, y=peak, color="0.2", ax=axes.flat[peak_index])
ax.tick_params(labelrotation=x_axis_rotation, labelsize=fontSize)
ax.set_title(peak, fontsize=fontSize)
ax.set_xlabel('')
if log_bool:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Log({}) scaled ({}) Peak Area'.format(log_base, scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Log({}) Peak Area'.format(log_base), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if scale_bool:
if y_axis_label is None:
ax.set_ylabel('Scaled ({}) Peak Area'.format(scale_type), fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
else:
if y_axis_label is None:
ax.set_ylabel('Peak Area', fontsize=fontSize)
else:
ax.set_ylabel(y_axis_label, fontsize=fontSize)
fig.tight_layout(h_pad=5, w_pad=2)
if saveImage:
plt.savefig(plot_type + 'Plot' + imageFileName, dpi=dpi, transparent=transparent)
plt.show()
def __paramCheck(self, plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi):
cmap_list = list(matplotlib.cm.cmaps_listed) + list(matplotlib.cm.datad)
cmap_list_r = [cmap + '_r' for cmap in cmap_list]
cmap_list = cmap_list + cmap_list_r
plot_types = ['point', 'violin', 'box', 'swarm', 'violin-swarm', 'box-swarm']
estimator_types = ['mean', 'median']
datatable = self.__datatable
if plot_type.lower() not in plot_types:
print("Error: Plot type is not valid. Choose one of the following: {}.".format(', '.join(plot_types)))
sys.exit()
if not isinstance(column_numbers, int):
print("Error: Column numbers is not valid. Choose an integer value.")
sys.exit()
if not isinstance(log_data, tuple):
print("Error: Log data type is not a tuple. Please ensure the value is a tuple (e.g. (True, 2)).")
sys.exit()
else:
(log_bool, log_base) = log_data
if not isinstance(log_bool, bool):
print("Error: Log data first tuple item is not a boolean value. Choose either \"True\" or \"False\".")
sys.exit()
base_types = ['natural', 2, 10]
if isinstance(log_base, str):
log_base = log_base.lower()
if log_base not in base_types:
print("Error: Log data second tuple item is not valid. Choose one of {}.".format(', '.join(base_types)))
sys.exit()
if not isinstance(scale_data, tuple):
print("Error: Scale data type is not a tuple. Please ensure the value is a tuple (e.g. (True, 'standard')).")
sys.exit()
else:
(scale_bool, scale_type) = scale_data
if not isinstance(scale_bool, bool):
print("Error: Scale data first tuple item is not a boolean value. Choose either \"True\" or \"False\".")
sys.exit()
scale_types = ['standard', 'minmax', 'maxabs', 'robust']
if isinstance(scale_type, str):
scale_type = scale_type.lower()
if scale_type not in scale_types:
print("Error: Scale data second tuple item is not valid. Choose one of {}.".format(', '.join(scale_types)))
sys.exit()
if not isinstance(impute_data, tuple):
print("Error: Impute data type is not a tuple. Please ensure the value is a tuple (e.g. (True, 3)).")
sys.exit()
else:
(impute_bool, k) = impute_data
if not isinstance(impute_bool, bool):
print("Error: Impute data first tuple item is not a boolean value. Choose either \"True\" or \"False\".")
sys.exit()
if not isinstance(k, float):
if not isinstance(k, int):
print("Error: Impute data second tuple item, the nearest neighbours k value, is not valid. Choose a float or integer value.")
sys.exit()
if not isinstance(style, str):
print("Error: Seaborn style is not valid. Choose a string value.")
sys.exit()
else:
styleList = list(plt.style.available)
if style not in styleList:
print("Error: Chosen style is not valid. Choose one of the following: {}.".format(', '.join(styleList)))
sys.exit()
if not isinstance(transparent, bool):
print("Error: The transparent value is not valid. Choose either \"True\" or \"False\".")
sys.exit()
if not isinstance(figSize, tuple):
print("Error: Figure size is not valid. Choose a tuple of length 2.")
sys.exit()
else:
for length in figSize:
if not isinstance(length, float):
if not isinstance(length, int):
print("Error: Figure size value is not valid. Choose a float or integer value.")
sys.exit()
if not isinstance(fontSize, float):
if not isinstance(fontSize, int):
print("Error: Font size is not valid. Choose a float or integer value.")
sys.exit()
if colour_palette is not None:
if not isinstance(colour_palette, str):
print("Error: The colour palette is not valid. Choose a string value.")
sys.exit()
else:
if colour_palette not in cmap_list:
print("Error: The colour palette is not valid. Choose one of the following: {}.".format(', '.join(cmap_list)))
sys.exit()
if y_axis_label is not None:
if not isinstance(y_axis_label, str):
print("Error: The y axis label is not valid. Choose a string value.")
sys.exit()
if not isinstance(x_axis_rotation, float):
if not isinstance(x_axis_rotation, int):
print("Error: The x axis rotation value is not valid. Choose a float or integer value.")
sys.exit()
if ((x_axis_rotation < 0) or (x_axis_rotation > 360)):
print("Error: The x axis rotation value is not valid. Choose a value >=0 or <= 360.")
sys.exit()
if group_column_name is not None:
if not isinstance(group_column_name, str):
print("Error: Group column name is not valid. Choose a string value.")
sys.exit()
else:
if group_column_name not in list(datatable.columns):
print("Error: Group column name not valid. Choose one of {}.".format(', '.join(list(datatable.columns))))
sys.exit()
if point_estimator.lower() not in estimator_types:
print("Error: The chosen point plot estimator is invalid. Choose one of \"{}\".".format('\" or \"'.join(estimator_types)))
sys.exit()
if isinstance(point_ci, str):
if point_ci != 'sd':
print("Error: The string value for point plot ci is invalid. Choose a float, integer or 'sd' value for standard deviation.")
sys.exit()
else:
if not isinstance(point_ci, float):
if not isinstance(point_ci, int):
print("Error: The value for point plot ci is invalid. Choose a float, integer or 'sd' value for standard deviation.")
sys.exit()
violin_distribution_types = ['quartile', 'box', 'point', 'stick', None]
violin_width_scale_types = ['area', 'count', 'width']
if plot_type.lower() == "violin":
if violin_distribution_type not in violin_distribution_types:
print("Error: Violin distribution type not valid. Choose one of the following: {}.".format(', '.join(violin_distribution_types)))
sys.exit()
if violin_width_scale not in violin_width_scale_types:
print("Error: Violin width scale type not valid. Choose one of the following: {}.".format(', '.join(violin_width_scale_types)))
sys.exit()
if plot_type.lower() == "box":
if not isinstance(box_iqr, float):
if not isinstance(box_iqr, int):
print(
"Error: The box plot interquartile range extension beyond whiskers is not valid. Choose a float or integer value.")
sys.exit()
if not isinstance(saveImage, bool):
print("Error: Save image is not valid. Choose either \"True\" or \"False\".")
sys.exit()
if not isinstance(imageFileName, str):
print("Error: Image file name is not valid. Choose a string value.")
sys.exit()
if not isinstance(dpi, float):
if not isinstance(dpi, int):
print("Error: Dpi is not valid. Choose a float or integer value.")
sys.exit()
return plot_type, column_numbers, log_data, scale_data, impute_data, style, transparent, figSize, fontSize, colour_palette, y_axis_label, x_axis_rotation, group_column_name, point_estimator, point_ci, violin_distribution_type, violin_width_scale, box_iqr, saveImage, imageFileName, dpi
def __checkData(self, df):
if not isinstance(df, pd.DataFrame):
print("Error: A dataframe was not entered. Please check your data.")
return df
def __checkPeakTable(self, PeakTable):
if "Name" not in PeakTable.columns:
print("Error: \"Name\" column not in Peak Table. Please check your data.")
sys.exit()
if "Label" not in PeakTable.columns:
print("Error: \"Label\" column not in Peak Table. Please check your data.")
sys.exit()
# Do not assume the peaks/nodes have been indexed correctly. Remove any index columns and reindex.
column_list = [column.lower() for column in PeakTable.columns]
if 'idx' in column_list:
index = column_list.index('idx')
column_name = PeakTable.columns[index]
PeakTable = PeakTable.drop(columns=[column_name])
if 'index' in column_list:
index = column_list.index('index')
column_name = PeakTable.columns[index]
PeakTable = PeakTable.drop(columns=[column_name])
PeakTable = PeakTable.reset_index(drop=True)
PeakTable.index.name = 'Idx'
PeakTable = PeakTable.reset_index()
return PeakTable | en | 0.600224 | Produces different feature plots given a data table and peak table. Initial_Parameters ---------- peaktable : Pandas dataframe containing peak data. Must contain 'Name' and 'Label'. datatable : Pandas dataframe containing matrix of values to plot (N samples x N features). Columns/features must be same as 'Name' from Peak Table. Methods ------- set_params : Set parameters - plot_type: The type of plot. Either "point", "violin", "box", "swarm", "violin-swarm" or "box-swarm" (default: 'point') column_numbers: The number of columns to display in the plots (default: 4) log_data: Perform a log ('natural', base 2 or base 10) on all data (default: (True, 2)) scale_data: Scale the data ('standard' (centers to the mean and scales to unit variance), 'minmax' (scales between 0 and 1), 'maxabs' (scales to the absolute maximum value), 'robust' (centers to the median and scales to between 25th and 75th quantile range) (default: (True, 'minmax')) impute_data: Impute any missing values using KNN impute with a set number of nearest neighbours (default: (True, 3)) style: Set the matplotlib style (see https://matplotlib.org/stable/tutorials/introductory/customizing.html) (default: 'seaborn-white') transparent: Setting to 'True' will make the background transparent (default: False) figSize: The figure size as a tuple (width,height) (default: (15,10)) fontSize: The font size for all text (default: 12) colour_palette: The colour palette to use for the plot (default: None) y_axis_label: The label to customise the y axis (default: None) x_axis_rotation: Rotate the x axis labels this number of degrees (default: 0) group_column_name: The group column name used in the datatable (e.g. 'Class') (default: None) point_estimator: The statistical function to use for the point plot. Either "mean" or "median" (default: 'mean') point_ci: The bootstrapped confidence interval for the point plot. Can also be standard deviation ("sd") (default: 95) violin_distribution_type: The representation of the distribution of data points within the violin plot. Either "quartile", "box", "point", "stick" or None (default: 'box') violin_width_scale: The method used to scale the width of the violin plot. Either "area", "count" or "width" (default: "width") box_iqr: The proportion past the lower and upper quartiles to extend the plot whiskers for the box plot. Points outside this range will be identified as outliers (default: 1.5) saveImage: Setting to 'True' will save the image to file (default: True) imageFileName: The image file name to save to (default: [plot_type]_features.png') dpi: The number of Dots Per Inch (DPI) for the image (default: 200) help : Print this help text plot : Generates feature plots # Slice the meta-data, and select only peaks from the peaktable for processing, and add the meta-data back # Search for duplicate labels and amend with a suffix, to avoid issues when relabelling the datatable #Label datatable with peak labels instead of names for ease of feature plotting # Add the meta data back in with the logged, scaled, or imputed data # Do not assume the peaks/nodes have been indexed correctly. Remove any index columns and reindex. | 3.178132 | 3 |
core/data/DataWriter.py | berendkleinhaneveld/Registrationshop | 25 | 10242 | <reponame>berendkleinhaneveld/Registrationshop<gh_stars>10-100
"""
DataWriter.py
"""
from DataController import DataController
from DataReader import DataReader
from vtk import vtkMetaImageWriter
from vtk import vtkXMLImageDataWriter
class DataWriter(DataController):
"""
DataWriter writes an image data object to
disk using the provided format.
"""
def __init__(self):
super(DataWriter, self).__init__()
self.supportedExtensions = [DataReader.TypeMHD,
DataReader.TypeVTI,
DataReader.TypeMHA]
def WriteToFile(self, imageData, exportFileName, fileType):
if fileType == DataReader.TypeMHD:
if not exportFileName.endswith(".mhd"):
exportFileName = exportFileName + ".mhd"
writer = vtkMetaImageWriter()
writer.SetFileName(exportFileName)
writer.SetInputData(imageData)
writer.Write()
elif fileType == DataReader.TypeVTI:
writer = vtkXMLImageDataWriter()
writer.SetFileName(exportFileName)
writer.SetInputData(imageData)
writer.Write()
elif fileType == DataReader.TypeMHA:
writer = vtkMetaImageWriter()
writer.SetFileName(exportFileName)
writer.SetInputData(imageData)
writer.Write()
else:
raise NotImplementedError("No writing support for type " + str(fileType))
| """
DataWriter.py
"""
from DataController import DataController
from DataReader import DataReader
from vtk import vtkMetaImageWriter
from vtk import vtkXMLImageDataWriter
class DataWriter(DataController):
"""
DataWriter writes an image data object to
disk using the provided format.
"""
def __init__(self):
super(DataWriter, self).__init__()
self.supportedExtensions = [DataReader.TypeMHD,
DataReader.TypeVTI,
DataReader.TypeMHA]
def WriteToFile(self, imageData, exportFileName, fileType):
if fileType == DataReader.TypeMHD:
if not exportFileName.endswith(".mhd"):
exportFileName = exportFileName + ".mhd"
writer = vtkMetaImageWriter()
writer.SetFileName(exportFileName)
writer.SetInputData(imageData)
writer.Write()
elif fileType == DataReader.TypeVTI:
writer = vtkXMLImageDataWriter()
writer.SetFileName(exportFileName)
writer.SetInputData(imageData)
writer.Write()
elif fileType == DataReader.TypeMHA:
writer = vtkMetaImageWriter()
writer.SetFileName(exportFileName)
writer.SetInputData(imageData)
writer.Write()
else:
raise NotImplementedError("No writing support for type " + str(fileType)) | en | 0.563685 | DataWriter.py DataWriter writes an image data object to disk using the provided format. | 2.945427 | 3 |
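DataWriter above dispatches on the requested file type and hands the image data to the matching VTK writer. A minimal sketch of the same write pattern against VTK directly, assuming the vtk Python package is installed; the ellipsoid source is only there to produce some synthetic image data to write:

import vtk

# Synthetic image volume so there is something to serialize.
source = vtk.vtkImageEllipsoidSource()
source.SetWholeExtent(0, 15, 0, 15, 0, 15)
source.Update()

# Same calls DataWriter issues for the .mhd/.mha branches.
writer = vtk.vtkMetaImageWriter()
writer.SetFileName("volume.mhd")
writer.SetInputData(source.GetOutput())
writer.Write()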
parkings/models/permit.py | klemmari1/parkkihubi | 12 | 10243 | from itertools import chain
from django.conf import settings
from django.contrib.gis.db import models as gis_models
from django.db import models, router, transaction
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from ..fields import CleaningJsonField
from ..validators import DictListValidator, TextField, TimestampField
from .constants import GK25FIN_SRID
from .enforcement_domain import EnforcementDomain
from .mixins import TimestampedModelMixin
from .parking import Parking
class PermitArea(TimestampedModelMixin):
name = models.CharField(max_length=40, verbose_name=_('name'))
domain = models.ForeignKey(
EnforcementDomain, on_delete=models.PROTECT,
related_name='permit_areas')
identifier = models.CharField(max_length=10, verbose_name=_('identifier'))
geom = gis_models.MultiPolygonField(
srid=GK25FIN_SRID, verbose_name=_('geometry'))
permitted_user = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models.PROTECT, verbose_name=_("permitted_user"))
class Meta:
unique_together = [('domain', 'identifier')]
ordering = ('identifier',)
def __str__(self):
return '{}/{}: {}'.format(self.domain.code, self.identifier, self.name)
class PermitSeriesQuerySet(models.QuerySet):
def active(self):
return self.filter(active=True)
def latest_active(self):
return self.active().order_by('-modified_at').first()
def prunable(self, time_limit=None):
limit = time_limit or (
timezone.now() - settings.PARKKIHUBI_PERMITS_PRUNABLE_AFTER)
return self.filter(created_at__lt=limit, active=False)
class PermitSeries(TimestampedModelMixin, models.Model):
active = models.BooleanField(default=False)
owner = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models.PROTECT, verbose_name=_("owner"))
objects = PermitSeriesQuerySet.as_manager()
class Meta:
ordering = ('created_at', 'id')
verbose_name = _("permit series")
verbose_name_plural = _("permit series")
@classmethod
def delete_prunable_series(cls, time_limit=None):
prunable = cls.objects.prunable(time_limit)
Permit.objects.filter(series__in=prunable).delete()
prunable.delete()
def __str__(self):
return str(self.id)
class PermitQuerySet(models.QuerySet):
def active(self):
return self.filter(series__active=True)
def by_time(self, timestamp):
lookup_items = PermitLookupItem.objects.by_time(timestamp)
return self.filter(lookup_items__in=lookup_items).distinct()
def by_subject(self, registration_number):
lookup_items = PermitLookupItem.objects.by_subject(registration_number)
return self.filter(lookup_items__in=lookup_items).distinct()
def by_area(self, area):
lookup_items = PermitLookupItem.objects.by_area(area)
return self.filter(lookup_items__in=lookup_items).distinct()
def bulk_create(self, permits, *args, **kwargs):
for permit in permits:
assert isinstance(permit, Permit)
permit.full_clean()
with transaction.atomic(using=self.db, savepoint=False):
created_permits = super().bulk_create(permits, *args, **kwargs)
PermitLookupItem.objects.using(self.db).bulk_create(
chain(*(x._make_lookup_items() for x in created_permits)))
return created_permits
class Permit(TimestampedModelMixin, models.Model):
domain = models.ForeignKey(
EnforcementDomain, on_delete=models.PROTECT,
related_name='permits')
series = models.ForeignKey(PermitSeries, on_delete=models.PROTECT)
external_id = models.CharField(max_length=50, null=True, blank=True)
subjects = CleaningJsonField(blank=True, validators=[DictListValidator({
'start_time': TimestampField(),
'end_time': TimestampField(),
'registration_number': TextField(max_length=20),
})])
areas = CleaningJsonField(blank=True, validators=[DictListValidator({
'start_time': TimestampField(),
'end_time': TimestampField(),
'area': TextField(max_length=10),
})])
objects = PermitQuerySet.as_manager()
class Meta:
unique_together = [('series', 'external_id')]
indexes = [
models.Index(fields=['series', 'id']),
]
ordering = ('series', 'id')
def __str__(self):
return 'Permit {id} ({series}{active}/{external_id} {dom})'.format(
id=self.id,
dom=self.domain.code,
series=self.series,
active='*' if self.series.active else '',
external_id=self.external_id)
def save(self, using=None, *args, **kwargs):
self.full_clean()
using = using or router.db_for_write(type(self), instance=self)
with transaction.atomic(using=using, savepoint=False):
super(Permit, self).save(using=using, *args, **kwargs)
self.lookup_items.all().using(using).delete()
new_lookup_items = self._make_lookup_items()
PermitLookupItem.objects.using(using).bulk_create(new_lookup_items)
def _make_lookup_items(self):
for area in self.areas:
for subject in self.subjects:
max_start_time = max(subject['start_time'], area['start_time'])
min_end_time = min(subject['end_time'], area['end_time'])
if max_start_time >= min_end_time:
continue
yield PermitLookupItem(
permit=self,
registration_number=Parking.normalize_reg_num(
subject['registration_number']),
area=PermitArea.objects.get(identifier=area['area'], domain=self.domain),
start_time=max_start_time,
end_time=min_end_time
)
class PermitLookupItemQuerySet(models.QuerySet):
def active(self):
return self.filter(permit__series__active=True)
def by_time(self, timestamp):
return self.filter(start_time__lte=timestamp, end_time__gte=timestamp)
def by_subject(self, registration_number):
normalized_reg_num = Parking.normalize_reg_num(registration_number)
return self.filter(registration_number=normalized_reg_num)
def by_area(self, area):
return self.filter(area=area)
class PermitLookupItem(models.Model):
permit = models.ForeignKey(
Permit, related_name="lookup_items", on_delete=models.CASCADE)
registration_number = models.CharField(max_length=20)
area = models.ForeignKey(PermitArea, on_delete=models.PROTECT, default=None, null=True, blank=True)
start_time = models.DateTimeField()
end_time = models.DateTimeField()
objects = PermitLookupItemQuerySet.as_manager()
class Meta:
indexes = [
models.Index(fields=[
'registration_number', 'start_time', 'end_time',
'area', 'permit']),
]
ordering = ('registration_number', 'start_time', 'end_time')
def __str__(self):
return (
'{start_time:%Y-%m-%d %H:%M} -- {end_time:%Y-%m-%d %H:%M} / '
'{registration_number} / {area}'
).format(
start_time=self.start_time, end_time=self.end_time,
registration_number=self.registration_number,
area=self.area.identifier)
| from itertools import chain
from django.conf import settings
from django.contrib.gis.db import models as gis_models
from django.db import models, router, transaction
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from ..fields import CleaningJsonField
from ..validators import DictListValidator, TextField, TimestampField
from .constants import GK25FIN_SRID
from .enforcement_domain import EnforcementDomain
from .mixins import TimestampedModelMixin
from .parking import Parking
class PermitArea(TimestampedModelMixin):
name = models.CharField(max_length=40, verbose_name=_('name'))
domain = models.ForeignKey(
EnforcementDomain, on_delete=models.PROTECT,
related_name='permit_areas')
identifier = models.CharField(max_length=10, verbose_name=_('identifier'))
geom = gis_models.MultiPolygonField(
srid=GK25FIN_SRID, verbose_name=_('geometry'))
permitted_user = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models.PROTECT, verbose_name=_("permitted_user"))
class Meta:
unique_together = [('domain', 'identifier')]
ordering = ('identifier',)
def __str__(self):
return '{}/{}: {}'.format(self.domain.code, self.identifier, self.name)
class PermitSeriesQuerySet(models.QuerySet):
def active(self):
return self.filter(active=True)
def latest_active(self):
return self.active().order_by('-modified_at').first()
def prunable(self, time_limit=None):
limit = time_limit or (
timezone.now() - settings.PARKKIHUBI_PERMITS_PRUNABLE_AFTER)
return self.filter(created_at__lt=limit, active=False)
class PermitSeries(TimestampedModelMixin, models.Model):
active = models.BooleanField(default=False)
owner = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models.PROTECT, verbose_name=_("owner"))
objects = PermitSeriesQuerySet.as_manager()
class Meta:
ordering = ('created_at', 'id')
verbose_name = _("permit series")
verbose_name_plural = _("permit series")
@classmethod
def delete_prunable_series(cls, time_limit=None):
prunable = cls.objects.prunable(time_limit)
Permit.objects.filter(series__in=prunable).delete()
prunable.delete()
def __str__(self):
return str(self.id)
class PermitQuerySet(models.QuerySet):
def active(self):
return self.filter(series__active=True)
def by_time(self, timestamp):
lookup_items = PermitLookupItem.objects.by_time(timestamp)
return self.filter(lookup_items__in=lookup_items).distinct()
def by_subject(self, registration_number):
lookup_items = PermitLookupItem.objects.by_subject(registration_number)
return self.filter(lookup_items__in=lookup_items).distinct()
def by_area(self, area):
lookup_items = PermitLookupItem.objects.by_area(area)
return self.filter(lookup_items__in=lookup_items).distinct()
def bulk_create(self, permits, *args, **kwargs):
for permit in permits:
assert isinstance(permit, Permit)
permit.full_clean()
with transaction.atomic(using=self.db, savepoint=False):
created_permits = super().bulk_create(permits, *args, **kwargs)
PermitLookupItem.objects.using(self.db).bulk_create(
chain(*(x._make_lookup_items() for x in created_permits)))
return created_permits
class Permit(TimestampedModelMixin, models.Model):
domain = models.ForeignKey(
EnforcementDomain, on_delete=models.PROTECT,
related_name='permits')
series = models.ForeignKey(PermitSeries, on_delete=models.PROTECT)
external_id = models.CharField(max_length=50, null=True, blank=True)
subjects = CleaningJsonField(blank=True, validators=[DictListValidator({
'start_time': TimestampField(),
'end_time': TimestampField(),
'registration_number': TextField(max_length=20),
})])
areas = CleaningJsonField(blank=True, validators=[DictListValidator({
'start_time': TimestampField(),
'end_time': TimestampField(),
'area': TextField(max_length=10),
})])
objects = PermitQuerySet.as_manager()
class Meta:
unique_together = [('series', 'external_id')]
indexes = [
models.Index(fields=['series', 'id']),
]
ordering = ('series', 'id')
def __str__(self):
return 'Permit {id} ({series}{active}/{external_id} {dom})'.format(
id=self.id,
dom=self.domain.code,
series=self.series,
active='*' if self.series.active else '',
external_id=self.external_id)
def save(self, using=None, *args, **kwargs):
self.full_clean()
using = using or router.db_for_write(type(self), instance=self)
with transaction.atomic(using=using, savepoint=False):
super(Permit, self).save(using=using, *args, **kwargs)
self.lookup_items.all().using(using).delete()
new_lookup_items = self._make_lookup_items()
PermitLookupItem.objects.using(using).bulk_create(new_lookup_items)
def _make_lookup_items(self):
for area in self.areas:
for subject in self.subjects:
max_start_time = max(subject['start_time'], area['start_time'])
min_end_time = min(subject['end_time'], area['end_time'])
if max_start_time >= min_end_time:
continue
yield PermitLookupItem(
permit=self,
registration_number=Parking.normalize_reg_num(
subject['registration_number']),
area=PermitArea.objects.get(identifier=area['area'], domain=self.domain),
start_time=max_start_time,
end_time=min_end_time
)
class PermitLookupItemQuerySet(models.QuerySet):
def active(self):
return self.filter(permit__series__active=True)
def by_time(self, timestamp):
return self.filter(start_time__lte=timestamp, end_time__gte=timestamp)
def by_subject(self, registration_number):
normalized_reg_num = Parking.normalize_reg_num(registration_number)
return self.filter(registration_number=normalized_reg_num)
def by_area(self, area):
return self.filter(area=area)
class PermitLookupItem(models.Model):
permit = models.ForeignKey(
Permit, related_name="lookup_items", on_delete=models.CASCADE)
registration_number = models.CharField(max_length=20)
area = models.ForeignKey(PermitArea, on_delete=models.PROTECT, default=None, null=True, blank=True)
start_time = models.DateTimeField()
end_time = models.DateTimeField()
objects = PermitLookupItemQuerySet.as_manager()
class Meta:
indexes = [
models.Index(fields=[
'registration_number', 'start_time', 'end_time',
'area', 'permit']),
]
ordering = ('registration_number', 'start_time', 'end_time')
def __str__(self):
return (
'{start_time:%Y-%m-%d %H:%M} -- {end_time:%Y-%m-%d %H:%M} / '
'{registration_number} / {area}'
).format(
start_time=self.start_time, end_time=self.end_time,
registration_number=self.registration_number,
area=self.area.identifier)
| none | 1 | 1.983396 | 2 |
|
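Permit._make_lookup_items above only emits a lookup item when a subject's validity window actually overlaps an area's window: the overlap starts at the later of the two start times, ends at the earlier of the two end times, and is skipped when empty. The same interval-intersection rule in isolation, with made-up dates:

from datetime import datetime

def overlap(a_start, a_end, b_start, b_end):
    """Return the intersection of two time intervals, or None if they do not overlap."""
    start = max(a_start, b_start)
    end = min(a_end, b_end)
    return (start, end) if start < end else None

subject = (datetime(2024, 1, 1), datetime(2024, 6, 30))
area = (datetime(2024, 3, 1), datetime(2024, 12, 31))
print(overlap(*subject, *area))   # (2024-03-01 00:00, 2024-06-30 00:00)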
poi_mining/biz/LSA/logEntropy.py | yummydeli/machine_learning | 1 | 10244 | <filename>poi_mining/biz/LSA/logEntropy.py
#!/usr/bin/env python
# encoding:utf-8
# ##############################################################################
# The MIT License (MIT)
#
# Copyright (c) [2015] [baidu.com]
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ##############################################################################
"""
Generate the LogEntropy matrix and select the suitable terms from it
"""
import glob
import collections
import pandas
from sklearn.feature_extraction.text import CountVectorizer
import math
class LogEntropy(object):
    """Compute log-entropy and derive the keywords for each category"""
def __init__(self):
self.fnames = glob.glob('data/segs/names.*')
    def extract_segs(self):
        """Read the word-segmentation results from the segmented files"""
idx = []
words = []
for f in self.fnames:
lines = []
for i, line in enumerate(open(f)):
if i % 2 == 1:
non_int = '\t'.join([e for e in line.decode('GBK').rstrip('\n').split('\t') \
if not e.isdigit()])
lines.append(non_int)
words.append('\t'.join(lines))
idx.append(f.split('.')[1][1:])
return words, idx
    def mk_document_term_matrix(self):
        """Build the TDM (document-term matrix)"""
words, idx = self.extract_segs()
countvec = CountVectorizer()
dtm = pandas.DataFrame(countvec.fit_transform(words).toarray(),
columns=countvec.get_feature_names(),
index=idx)
"""
canting faguo riben zhongwen
1001 1 0 0 1
991 1 0 1 0
203 1 1 0 0
"""
return dtm
def global_weighting(self, dtm):
""" 1 - Entropy(words) / log(N) """
# normalized entropy for word
pdtm = (dtm / dtm.sum(axis=0))
ndocs = pdtm.shape[0]
gw = 1 + (pdtm.applymap(lambda x: x * math.log(x) if x != 0 else 0).sum() / math.log(ndocs))
"""
canting 2.220446e-16
faguo 1.000000e+00
riben 1.000000e+00
zhongwen 1.000000e+00
"""
return gw
def local_weighting(self, dtm):
""" math.log(freq + 1)"""
lw = dtm.applymap(lambda freq: math.log(freq + 1))
"""
canting faguo riben zhongwen
1001 0.693147 0.000000 0.000000 0.693147
991 0.693147 0.000000 0.693147 0.000000
203 0.693147 0.693147 0.000000 0.000000
"""
return lw
    def logEntropyWeighting(self):
        """Compute the final log-entropy scores"""
dtm = self.mk_document_term_matrix()
"""
canting faguo riben zhongwen
1001 1.539096e-16 0.000000 0.000000 0.693147
991 1.539096e-16 0.000000 0.693147 0.000000
203 1.539096e-16 0.693147 0.000000 0.000000
"""
logEntro = (self.global_weighting(dtm.copy()) *
self.local_weighting(dtm)).applymap(
lambda x: 0 if x < 0.001 else x
)
logEntro.T.to_csv('data/keyWords.cates', sep='\t', encoding='UTF-8')
if __name__ == '__main__':
lsaEntropy = LogEntropy()
lsaEntropy.logEntropyWeighting()
| <filename>poi_mining/biz/LSA/logEntropy.py
#!/usr/bin/env python
# encoding:utf-8
# ##############################################################################
# The MIT License (MIT)
#
# Copyright (c) [2015] [baidu.com]
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ##############################################################################
"""
Generate the LogEntropy matrix and select the suitable terms from it
"""
import glob
import collections
import pandas
from sklearn.feature_extraction.text import CountVectorizer
import math
class LogEntropy(object):
    """Compute log-entropy and derive the keywords for each category"""
def __init__(self):
self.fnames = glob.glob('data/segs/names.*')
    def extract_segs(self):
        """Read the word-segmentation results from the segmented files"""
idx = []
words = []
for f in self.fnames:
lines = []
for i, line in enumerate(open(f)):
if i % 2 == 1:
non_int = '\t'.join([e for e in line.decode('GBK').rstrip('\n').split('\t') \
if not e.isdigit()])
lines.append(non_int)
words.append('\t'.join(lines))
idx.append(f.split('.')[1][1:])
return words, idx
    def mk_document_term_matrix(self):
        """Build the TDM (document-term matrix)"""
words, idx = self.extract_segs()
countvec = CountVectorizer()
dtm = pandas.DataFrame(countvec.fit_transform(words).toarray(),
columns=countvec.get_feature_names(),
index=idx)
"""
canting faguo riben zhongwen
1001 1 0 0 1
991 1 0 1 0
203 1 1 0 0
"""
return dtm
def global_weighting(self, dtm):
""" 1 - Entropy(words) / log(N) """
# normalized entropy for word
pdtm = (dtm / dtm.sum(axis=0))
ndocs = pdtm.shape[0]
gw = 1 + (pdtm.applymap(lambda x: x * math.log(x) if x != 0 else 0).sum() / math.log(ndocs))
"""
canting 2.220446e-16
faguo 1.000000e+00
riben 1.000000e+00
zhongwen 1.000000e+00
"""
return gw
def local_weighting(self, dtm):
""" math.log(freq + 1)"""
lw = dtm.applymap(lambda freq: math.log(freq + 1))
"""
canting faguo riben zhongwen
1001 0.693147 0.000000 0.000000 0.693147
991 0.693147 0.000000 0.693147 0.000000
203 0.693147 0.693147 0.000000 0.000000
"""
return lw
    def logEntropyWeighting(self):
        """Compute the final log-entropy scores"""
dtm = self.mk_document_term_matrix()
"""
canting faguo riben zhongwen
1001 1.539096e-16 0.000000 0.000000 0.693147
991 1.539096e-16 0.000000 0.693147 0.000000
203 1.539096e-16 0.693147 0.000000 0.000000
"""
logEntro = (self.global_weighting(dtm.copy()) *
self.local_weighting(dtm)).applymap(
lambda x: 0 if x < 0.001 else x
)
logEntro.T.to_csv('data/keyWords.cates', sep='\t', encoding='UTF-8')
if __name__ == '__main__':
lsaEntropy = LogEntropy()
lsaEntropy.logEntropyWeighting()
| en | 0.504559 | #!/usr/bin/env python # encoding:utf-8 # ############################################################################## # The MIT License (MIT) # # Copyright (c) [2015] [baidu.com] # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. # ############################################################################## 生成LogEntropy矩阵并筛选出合适的词汇 计算logentropy, 得到类别关键字 分词文件中获取分词结果 生成TDM矩阵 canting faguo riben zhongwen 1001 1 0 0 1 991 1 0 1 0 203 1 1 0 0 1 - Entropy(words) / log(N) # normalized entropy for word canting 2.220446e-16 faguo 1.000000e+00 riben 1.000000e+00 zhongwen 1.000000e+00 math.log(freq + 1) canting faguo riben zhongwen 1001 0.693147 0.000000 0.000000 0.693147 991 0.693147 0.000000 0.693147 0.000000 203 0.693147 0.693147 0.000000 0.000000 计算最终的logentropy得分 canting faguo riben zhongwen 1001 1.539096e-16 0.000000 0.000000 0.693147 991 1.539096e-16 0.000000 0.693147 0.000000 203 1.539096e-16 0.693147 0.000000 0.000000 | 1.218488 | 1 |
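logEntropy.py above weights each cell of the document-term matrix by a global factor 1 + sum(p*log p)/log(N) per term and a local factor log(freq + 1), then zeroes anything below 0.001. A self-contained sketch of the same weighting on a tiny hand-made matrix (the toy counts below are illustrative, not the project's data):

import math
import pandas as pd

# Toy document-term matrix: rows are documents, columns are terms.
dtm = pd.DataFrame(
    {"canting": [1, 1, 1], "faguo": [0, 0, 1], "riben": [0, 1, 0]},
    index=["1001", "991", "203"],
)

ndocs = dtm.shape[0]
p = dtm / dtm.sum(axis=0)                                    # per-term occurrence probabilities
global_w = 1 + p.applymap(lambda x: x * math.log(x) if x else 0).sum() / math.log(ndocs)
local_w = dtm.applymap(lambda freq: math.log(freq + 1))      # log(freq + 1)

log_entropy = (global_w * local_w).applymap(lambda x: 0 if x < 0.001 else x)
print(log_entropy)   # 'canting' (present in every document) drops to ~0; rarer terms keep weight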
Python/swap_numbers.py | saurabhcommand/Hello-world | 1,428 | 10245 | a = 5
b = 7
a,b = b,a
print a
print b
| a = 5
b = 7
a,b = b,a
print a
print b
| none | 1 | 2.308451 | 2 |
|
algorithms/tests/test_unionfind.py | tommyod/PythonAlgorithms | 1 | 10246 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Tests for the union find data structure.
"""
try:
from ..unionfind import UnionFind
except ValueError:
pass
def test_unionfind_basics():
"""
Test the basic properties of unionfind.
"""
u = UnionFind([1, 2, 3])
assert u.in_same_set(1, 2) is False
assert u.in_same_set(2, 3) is False
u.union(1, 3)
assert u.in_same_set(1, 2) is False
assert u.in_same_set(3, 1)
assert u.get_root(1) == u.get_root(3)
def test_unionfind_adding_elements():
"""
Test adding operations, mostly syntactic sugar.
"""
u = UnionFind([1, 2])
u.add(['a', 'b'])
assert 1 in u
assert 'a' in u
def test_unionfind_example():
"""
    Test on a slightly more involved example.
"""
u = UnionFind([1, 2, 3, 4, 5])
u.union(1, 3)
u.union(2, 4)
assert u.in_same_set(1, 3)
assert u.in_same_set(4, 2)
assert not u.in_same_set(2, 5)
assert not u.in_same_set(2, 1)
assert not u.in_same_set(1, 4)
u.union(5, 1)
assert u.in_same_set(3, 5)
def test_unionfind_several():
"""
Test that we can take union of more than two elements.
"""
u = UnionFind([1, 2, 3, 4, 5, 6, 7, 8])
u.union([1, 2, 3])
u.union([4, 5, 6])
u.union([7, 8])
assert u.in_same_set(1, 3)
assert u.in_same_set(6, 4)
assert u.in_same_set(7, 8)
assert not u.in_same_set(2, 5)
assert not u.in_same_set(4, 8)
def test_unionfind_compression():
"""
Test path compression and the union by rank.
"""
# Test the ranking
elements = list(range(100))
u = UnionFind(elements)
for i in range(len(elements) - 1):
u.union(elements[i], elements[i + 1])
assert max(u._rank.values()) == 1
# Test path compression
parent_nodes = list(u._parent.values())
assert all(parent == parent_nodes[0] for parent in parent_nodes)
if __name__ == "__main__":
import pytest
# --durations=10 <- May be used to show potentially slow tests
pytest.main(args=['.', '--doctest-modules', '-v']) | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Tests for the union find data structure.
"""
try:
from ..unionfind import UnionFind
except ValueError:
pass
def test_unionfind_basics():
"""
Test the basic properties of unionfind.
"""
u = UnionFind([1, 2, 3])
assert u.in_same_set(1, 2) is False
assert u.in_same_set(2, 3) is False
u.union(1, 3)
assert u.in_same_set(1, 2) is False
assert u.in_same_set(3, 1)
assert u.get_root(1) == u.get_root(3)
def test_unionfind_adding_elements():
"""
Test adding operations, mostly syntactic sugar.
"""
u = UnionFind([1, 2])
u.add(['a', 'b'])
assert 1 in u
assert 'a' in u
def test_unionfind_example():
"""
    Test on a slightly more involved example.
"""
u = UnionFind([1, 2, 3, 4, 5])
u.union(1, 3)
u.union(2, 4)
assert u.in_same_set(1, 3)
assert u.in_same_set(4, 2)
assert not u.in_same_set(2, 5)
assert not u.in_same_set(2, 1)
assert not u.in_same_set(1, 4)
u.union(5, 1)
assert u.in_same_set(3, 5)
def test_unionfind_several():
"""
Test that we can take union of more than two elements.
"""
u = UnionFind([1, 2, 3, 4, 5, 6, 7, 8])
u.union([1, 2, 3])
u.union([4, 5, 6])
u.union([7, 8])
assert u.in_same_set(1, 3)
assert u.in_same_set(6, 4)
assert u.in_same_set(7, 8)
assert not u.in_same_set(2, 5)
assert not u.in_same_set(4, 8)
def test_unionfind_compression():
"""
Test path compression and the union by rank.
"""
# Test the ranking
elements = list(range(100))
u = UnionFind(elements)
for i in range(len(elements) - 1):
u.union(elements[i], elements[i + 1])
assert max(u._rank.values()) == 1
# Test path compression
parent_nodes = list(u._parent.values())
assert all(parent == parent_nodes[0] for parent in parent_nodes)
if __name__ == "__main__":
import pytest
# --durations=10 <- May be used to show potentially slow tests
    pytest.main(args=['.', '--doctest-modules', '-v']) | en | 0.808516 | #!/usr/bin/env python3 # -*- coding: utf-8 -*- Tests for the union find data structure. Test the basic properties of unionfind. Test adding operations, mostly syntactic sugar. Test on a slightly more involved example. Test that we can take union of more than two elements. Test path compression and the union by rank. # Test the ranking # Test path compression # --durations=10 <- May be used to show potentially slow tests | 3.456439 | 3 |
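The tests above pin down the UnionFind API: construct from an iterable, add() more elements, union() a pair or a whole list, and query with in_same_set() and get_root(). A short usage sketch under that assumption (it runs only with the repo's unionfind module importable):

from unionfind import UnionFind  # the repo's module, imported relatively in the tests

u = UnionFind([1, 2, 3, 4, 5, 6])
u.union(1, 2)            # pairwise union, as in the basic test
u.union([3, 4, 5])       # union of several elements at once

# Group elements into connected components by their representative root.
components = {}
for element in [1, 2, 3, 4, 5, 6]:
    components.setdefault(u.get_root(element), []).append(element)
print(list(components.values()))   # e.g. [[1, 2], [3, 4, 5], [6]]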
a3/ga.py | mishless/LearningSystems | 1 | 10247 | <filename>a3/ga.py
# Genetic Algorithm for solving the Traveling Salesman problem
# Authors: <NAME>, <NAME>
# Includes
import configparser
import math
import matplotlib.pyplot as plt
import numpy
import random
import sys
from operator import itemgetter
#Global variables(yay!)
# Configuration variables(read from config.txt)
mutation_rate = 0;
population_size = 0;
elitism_rate = 0;
tournament_rate = 0;
max_iterations = 0;
input_file_name = "";
parent_rate = 0;
# General global variables
cities = {};
number_of_cities = 0;
parent_number = 0;
tournament_size = 0;
elite_number = 0;
crossover_number = 0;
def read_config():
global mutation_rate;
global elitism_rate;
global tournament_rate;
global population_size;
global input_file_name;
global max_iterations;
global parent_rate;
global parent_number;
global tournament_size;
global elite_number;
global crossover_number;
config = configparser.ConfigParser();
config.read("config.txt");
mutation_rate = float(config['general']['mutation_rate']);
population_size = int(config['general']['population_size']);
elitism_rate = float(config['general']['elitism_rate']);
tournament_rate = float(config['general']['tournament_rate']);
max_iterations = int(config['general']['max_iterations']);
parent_rate = float(config['general']['parent_rate']);
input_file_name = config['general']['input_file_name'];
parent_number = int(population_size * parent_rate);
elite_number = int(population_size * elitism_rate);
tournament_size = int(population_size * tournament_rate);
crossover_number = population_size - elite_number;
def print_config():
print("***** CONFIGURATION *****");
print_var("Population size", population_size);
print_var("Elitism rate", elitism_rate);
print_var("Tournament rate", tournament_rate);
print_var("Mutation rate", mutation_rate);
print_var("Parent rate", parent_rate);
print_var("Iteration number", max_iterations);
print("");
print_var("Tournament size", tournament_size);
print_var("Parent number", parent_number);
print_var("Elite number", elite_number);
print_var("Crossover number", crossover_number);
print("");
def read_input_file():
global number_of_cities;
file = open(input_file_name, "r");
file_lines = file.readlines();
file.close();
for file_line in file_lines:
temp = file_line.split();
cities[int(temp[0])] = {'x' : float(temp[1]), 'y' : float(temp[2])};
number_of_cities = len(cities);
def get_distance(city1, city2):
return math.sqrt( ((city1['x']-city2['x'])**2) +
((city1['y']-city2['y'])**2));
def print_cities():
print("***** CITIES *****");
for key, city in cities.items():
print("#" + "%2s" % str(key) + ": (" +
"%6s" % str(city['x']) + ', ' +
"%6s" % str(city['y']) + ')');
print("");
def print_var(name, var):
print(name + ":" + " "*(17-len(name)) + str(var));
def init():
read_config();
read_input_file();
print_config();
def create_random_individual():
individual = [];
# We must begin at first city
individual.append(1);
# Create list of city indexes
indexes = list(range(2,number_of_cities+1));
while len(indexes) > 0:
picked_index = random.choice(indexes);
indexes.remove(picked_index);
individual.append(picked_index);
# We must end at first city
individual.append(1);
return individual;
def print_population(population, name):
print("***** POPULATION: " + name + " *****");
print("Population size = " + str(len(population)));
i = 0;
for individual in population:
print("IND #" + str(i) + ": " + str(individual));
i += 1;
def print_population_2(population, name):
print("***** POPULATION: " + name + " *****");
print("Population size = " + str(len(population)));
i = 0;
for individual in population:
print("IND #" + str(i) + " distance = " +
str(evaluate_individual(individual)));
i += 1;
print("");
def print_population_3(population, name):
print("***** POPULATION: " + name + " *****");
print("Population size = " + str(len(population)));
for individual in population:
print(str(individual) + ": distance = " +
str(evaluate_individual(individual)));
print("");
def create_random_population(population_size):
population = [];
for i in range(0, population_size):
population.append(create_random_individual());
return population;
def evaluate_individual(individual):
distance_traveled = 0;
for i in range(0, len(individual)-1):
distance_traveled = (distance_traveled +
get_distance(cities[individual[i]], cities[individual[i+1]]));
return distance_traveled;
def evaluate_population(population):
evaluations = [];
for individual in population:
evaluations.append((evaluate_individual(individual), individual));
return evaluations;
def select_tournament_pool(data):
tournament_pool = [];
indexes = list(range(0, len(data)));
for i in range(0, tournament_size):
chosen_index = random.choice(indexes);
tournament_pool.append(data[chosen_index]);
indexes.remove(chosen_index);
return tournament_pool;
def best_solution(pool):
best_individual = {'eval' : sys.float_info.max};
for individual in pool:
if individual['eval'] < best_individual['eval']:
best_individual = individual;
return best_individual;
def run_tournament(pool):
return best_solution(pool);
def merge_popul_and_eval(population, evaluations):
data = [];
for i in range(0, len(population)):
data.append({'ind' : population[i],
'eval' : evaluations[i]});
return data;
def select_parent_pool(population, evaluations):
parent_pool = [];
data = merge_popul_and_eval(population, evaluations);
for i in range(0, parent_number):
tournament_pool = select_tournament_pool(data);
parent = run_tournament(tournament_pool);
parent_pool.append(parent['ind']);
data.remove(parent);
return parent_pool;
def is_individual_valid(individual):
if(len(individual) != (number_of_cities+1)):
print("INVALID " + str(individual));
return False;
if(individual[0] != 1):
print("INVALID " + str(individual));
return False;
if(individual[-1] != 1):
print("INVALID " + str(individual));
return False;
for city in individual:
if city == 1:
if individual.count(city) != 2:
print("INVALID " + str(individual));
return False;
else:
if individual.count(city) != 1:
print("INVALID " + str(individual));
return False;
return True;
def is_population_valid(population):
for individual in population:
if is_individual_valid(individual) == False:
return False;
return True;
def create_child(parent1, parent2):
l = len(parent1);
x = random.randint(1, l-1);
y = random.randint(x, l-1);
child = [];
extract = parent1[x:y];
"""print_var("P1", parent1);
print_var("P2", parent2);
print_var("x", x);
print_var("y", y);
print_var("Extract", extract);"""
i = 0;
for j in range(0, x):
while(parent2[i] in extract):
i += 1;
child.append(parent2[i]);
i += 1;
child.extend(extract);
for j in range(y, l):
while(parent2[i] in extract):
i += 1;
child.append(parent2[i]);
i += 1;
return child;
def generate_children(parent_pool, child_num):
children = [];
for i in range(0, child_num):
parent1 = random.choice(parent_pool);
parent_pool.remove(parent1);
parent2 = random.choice(parent_pool);
parent_pool.append(parent1);
new_child = create_child(parent1, parent2);
children.append(new_child);
return children;
def generate_elites(population, evaluations, number):
data = merge_popul_and_eval(population, evaluations);
elites = [];
for i in range(0, number):
best = best_solution(data);
elites.append(best['ind']);
data.remove(best);
return elites;
def mutate_individual(individual):
i = random.randint(1, len(individual)-2);
j = i;
while j == i:
j = random.randint(1, len(individual)-2);
individual[i], individual[j] = individual[j], individual[i];
def mutate_population(population):
for individual in population:
if random.random() < mutation_rate:
mutate_individual(individual);
def test_stuff():
"""
p1 = "abcdefg";
p2 = "1234567";
for i in range(0,10):
print(create_child(p1,p2));
ind = [1,2,3,4,5,6];
print("Before", ind);
mutate_individual(ind);
print("After", ind);
exit();"""
def perform_GA():
best_solutions = [];
best_individuals = [];
best_solution = None;
#print("***** ALGORITHM START *****");
population = create_random_population(population_size);
iteration_counter = 1;
while True:
#print("Running iteration " + str(iteration_counter) + ":");
evaluations = evaluate_population(population);
best_solution = min(evaluations, key=lambda evaluation:evaluation[0])
best_solutions.append(best_solution[0]);
best_individuals.append(best_solution[1]);
evaluations = [evaluation[0] for evaluation in evaluations]
if iteration_counter == max_iterations:
break;
parent_pool = select_parent_pool(population, evaluations);
children = generate_children(parent_pool, crossover_number);
mutate_population(children);
elites = generate_elites(population, evaluations, elite_number);
# Prepare population for the next iteration
population = children + elites;
iteration_counter += 1;
if is_population_valid(population) == False:
break;
return (best_solutions, best_individuals);
def do_what_needs_to_be_done():
results = [];
bests = [];
print("***** ALGORITHM START *****");
sys.stdout.flush()
for i in range(0, 10):
print("Starting cycle " + str(i+1));
results.append(perform_GA());
bests.append((results[i][0][-1], results[i][1][-1]));
best_ind = bests.index(min(bests, key=lambda best:best[0]));
print(str(best_ind));
print("***** RESULTS *****");
print("Best result is " + str(bests[best_ind][0]));
print("Best result is " + str(bests[best_ind][1]));
plt.plot(results[best_ind][0]);
plt.show();
#main
init();
do_what_needs_to_be_done()
| <filename>a3/ga.py
# Genetic Algorithm for solving the Traveling Salesman problem
# Authors: <NAME>, <NAME>
# Includes
import configparser
import math
import matplotlib.pyplot as plt
import numpy
import random
import sys
from operator import itemgetter
#Global variables(yay!)
# Configuration variables(read from config.txt)
mutation_rate = 0;
population_size = 0;
elitism_rate = 0;
tournament_rate = 0;
max_iterations = 0;
input_file_name = "";
parent_rate = 0;
# General global variables
cities = {};
number_of_cities = 0;
parent_number = 0;
tournament_size = 0;
elite_number = 0;
crossover_number = 0;
def read_config():
global mutation_rate;
global elitism_rate;
global tournament_rate;
global population_size;
global input_file_name;
global max_iterations;
global parent_rate;
global parent_number;
global tournament_size;
global elite_number;
global crossover_number;
config = configparser.ConfigParser();
config.read("config.txt");
mutation_rate = float(config['general']['mutation_rate']);
population_size = int(config['general']['population_size']);
elitism_rate = float(config['general']['elitism_rate']);
tournament_rate = float(config['general']['tournament_rate']);
max_iterations = int(config['general']['max_iterations']);
parent_rate = float(config['general']['parent_rate']);
input_file_name = config['general']['input_file_name'];
parent_number = int(population_size * parent_rate);
elite_number = int(population_size * elitism_rate);
tournament_size = int(population_size * tournament_rate);
crossover_number = population_size - elite_number;
def print_config():
print("***** CONFIGURATION *****");
print_var("Population size", population_size);
print_var("Elitism rate", elitism_rate);
print_var("Tournament rate", tournament_rate);
print_var("Mutation rate", mutation_rate);
print_var("Parent rate", parent_rate);
print_var("Iteration number", max_iterations);
print("");
print_var("Tournament size", tournament_size);
print_var("Parent number", parent_number);
print_var("Elite number", elite_number);
print_var("Crossover number", crossover_number);
print("");
def read_input_file():
global number_of_cities;
file = open(input_file_name, "r");
file_lines = file.readlines();
file.close();
for file_line in file_lines:
temp = file_line.split();
cities[int(temp[0])] = {'x' : float(temp[1]), 'y' : float(temp[2])};
number_of_cities = len(cities);
def get_distance(city1, city2):
return math.sqrt( ((city1['x']-city2['x'])**2) +
((city1['y']-city2['y'])**2));
def print_cities():
print("***** CITIES *****");
for key, city in cities.items():
print("#" + "%2s" % str(key) + ": (" +
"%6s" % str(city['x']) + ', ' +
"%6s" % str(city['y']) + ')');
print("");
def print_var(name, var):
print(name + ":" + " "*(17-len(name)) + str(var));
def init():
read_config();
read_input_file();
print_config();
def create_random_individual():
individual = [];
# We must begin at first city
individual.append(1);
# Create list of city indexes
indexes = list(range(2,number_of_cities+1));
while len(indexes) > 0:
picked_index = random.choice(indexes);
indexes.remove(picked_index);
individual.append(picked_index);
# We must end at first city
individual.append(1);
return individual;
def print_population(population, name):
print("***** POPULATION: " + name + " *****");
print("Population size = " + str(len(population)));
i = 0;
for individual in population:
print("IND #" + str(i) + ": " + str(individual));
i += 1;
def print_population_2(population, name):
print("***** POPULATION: " + name + " *****");
print("Population size = " + str(len(population)));
i = 0;
for individual in population:
print("IND #" + str(i) + " distance = " +
str(evaluate_individual(individual)));
i += 1;
print("");
def print_population_3(population, name):
print("***** POPULATION: " + name + " *****");
print("Population size = " + str(len(population)));
for individual in population:
print(str(individual) + ": distance = " +
str(evaluate_individual(individual)));
print("");
def create_random_population(population_size):
population = [];
for i in range(0, population_size):
population.append(create_random_individual());
return population;
def evaluate_individual(individual):
distance_traveled = 0;
for i in range(0, len(individual)-1):
distance_traveled = (distance_traveled +
get_distance(cities[individual[i]], cities[individual[i+1]]));
return distance_traveled;
def evaluate_population(population):
evaluations = [];
for individual in population:
evaluations.append((evaluate_individual(individual), individual));
return evaluations;
def select_tournament_pool(data):
tournament_pool = [];
indexes = list(range(0, len(data)));
for i in range(0, tournament_size):
chosen_index = random.choice(indexes);
tournament_pool.append(data[chosen_index]);
indexes.remove(chosen_index);
return tournament_pool;
def best_solution(pool):
best_individual = {'eval' : sys.float_info.max};
for individual in pool:
if individual['eval'] < best_individual['eval']:
best_individual = individual;
return best_individual;
def run_tournament(pool):
return best_solution(pool);
def merge_popul_and_eval(population, evaluations):
data = [];
for i in range(0, len(population)):
data.append({'ind' : population[i],
'eval' : evaluations[i]});
return data;
def select_parent_pool(population, evaluations):
parent_pool = [];
data = merge_popul_and_eval(population, evaluations);
for i in range(0, parent_number):
tournament_pool = select_tournament_pool(data);
parent = run_tournament(tournament_pool);
parent_pool.append(parent['ind']);
data.remove(parent);
return parent_pool;
def is_individual_valid(individual):
if(len(individual) != (number_of_cities+1)):
print("INVALID " + str(individual));
return False;
if(individual[0] != 1):
print("INVALID " + str(individual));
return False;
if(individual[-1] != 1):
print("INVALID " + str(individual));
return False;
for city in individual:
if city == 1:
if individual.count(city) != 2:
print("INVALID " + str(individual));
return False;
else:
if individual.count(city) != 1:
print("INVALID " + str(individual));
return False;
return True;
def is_population_valid(population):
for individual in population:
if is_individual_valid(individual) == False:
return False;
return True;
def create_child(parent1, parent2):
l = len(parent1);
x = random.randint(1, l-1);
y = random.randint(x, l-1);
child = [];
extract = parent1[x:y];
"""print_var("P1", parent1);
print_var("P2", parent2);
print_var("x", x);
print_var("y", y);
print_var("Extract", extract);"""
i = 0;
for j in range(0, x):
while(parent2[i] in extract):
i += 1;
child.append(parent2[i]);
i += 1;
child.extend(extract);
for j in range(y, l):
while(parent2[i] in extract):
i += 1;
child.append(parent2[i]);
i += 1;
return child;
def generate_children(parent_pool, child_num):
children = [];
for i in range(0, child_num):
parent1 = random.choice(parent_pool);
parent_pool.remove(parent1);
parent2 = random.choice(parent_pool);
parent_pool.append(parent1);
new_child = create_child(parent1, parent2);
children.append(new_child);
return children;
def generate_elites(population, evaluations, number):
data = merge_popul_and_eval(population, evaluations);
elites = [];
for i in range(0, number):
best = best_solution(data);
elites.append(best['ind']);
data.remove(best);
return elites;
def mutate_individual(individual):
i = random.randint(1, len(individual)-2);
j = i;
while j == i:
j = random.randint(1, len(individual)-2);
individual[i], individual[j] = individual[j], individual[i];
def mutate_population(population):
for individual in population:
if random.random() < mutation_rate:
mutate_individual(individual);
def test_stuff():
"""
p1 = "abcdefg";
p2 = "1234567";
for i in range(0,10):
print(create_child(p1,p2));
ind = [1,2,3,4,5,6];
print("Before", ind);
mutate_individual(ind);
print("After", ind);
exit();"""
def perform_GA():
best_solutions = [];
best_individuals = [];
best_solution = None;
#print("***** ALGORITHM START *****");
population = create_random_population(population_size);
iteration_counter = 1;
while True:
#print("Running iteration " + str(iteration_counter) + ":");
evaluations = evaluate_population(population);
best_solution = min(evaluations, key=lambda evaluation:evaluation[0])
best_solutions.append(best_solution[0]);
best_individuals.append(best_solution[1]);
evaluations = [evaluation[0] for evaluation in evaluations]
if iteration_counter == max_iterations:
break;
parent_pool = select_parent_pool(population, evaluations);
children = generate_children(parent_pool, crossover_number);
mutate_population(children);
elites = generate_elites(population, evaluations, elite_number);
# Prepare population for the next iteration
population = children + elites;
iteration_counter += 1;
if is_population_valid(population) == False:
break;
return (best_solutions, best_individuals);
def do_what_needs_to_be_done():
results = [];
bests = [];
print("***** ALGORITHM START *****");
sys.stdout.flush()
for i in range(0, 10):
print("Starting cycle " + str(i+1));
results.append(perform_GA());
bests.append((results[i][0][-1], results[i][1][-1]));
best_ind = bests.index(min(bests, key=lambda best:best[0]));
print(str(best_ind));
print("***** RESULTS *****");
print("Best result is " + str(bests[best_ind][0]));
print("Best result is " + str(bests[best_ind][1]));
plt.plot(results[best_ind][0]);
plt.show();
#main
init();
do_what_needs_to_be_done()
| en | 0.516051 | # Genetic Algorithm for solving the Traveling Salesman problem # Authors: <NAME>, <NAME> # Includes #Global variables(yay!) # Configuration variables(read from config.txt) # General global variables # We must begin at first city # Create list of city indexes # We must end at first city #" + str(i) + ": " + str(individual)); #" + str(i) + " distance = " + print_var("P1", parent1); print_var("P2", parent2); print_var("x", x); print_var("y", y); print_var("Extract", extract); p1 = "abcdefg"; p2 = "1234567"; for i in range(0,10): print(create_child(p1,p2)); ind = [1,2,3,4,5,6]; print("Before", ind); mutate_individual(ind); print("After", ind); exit(); #print("***** ALGORITHM START *****"); #print("Running iteration " + str(iteration_counter) + ":"); # Prepare population for the next iteration #main | 2.929989 | 3 |
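create_child above is an order crossover: it copies a random slice of parent 1 and fills the remaining positions, in order, from parent 2 while skipping cities already taken. The same logic in isolation with a fixed slice so the result is deterministic (plain permutations here, not the GA's city-1-to-city-1 tours):

def order_crossover(parent1, parent2, x, y):
    """Copy parent1[x:y], then fill the positions around it from parent2 in order."""
    extract = parent1[x:y]
    remainder = [city for city in parent2 if city not in extract]
    return remainder[:x] + extract + remainder[x:]

p1 = [1, 2, 3, 4, 5, 6, 7]
p2 = [3, 7, 5, 1, 6, 2, 4]
print(order_crossover(p1, p2, 2, 5))   # [7, 1, 3, 4, 5, 6, 2]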
products/migrations/0010_remove_product_updated_at.py | UB-ES-2021-A1/wannasell-backend | 0 | 10248 | # Generated by Django 3.2.8 on 2021-11-25 17:50
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('products', '0009_auto_20211125_1846'),
]
operations = [
migrations.RemoveField(
model_name='product',
name='updated_at',
),
]
| # Generated by Django 3.2.8 on 2021-11-25 17:50
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('products', '0009_auto_20211125_1846'),
]
operations = [
migrations.RemoveField(
model_name='product',
name='updated_at',
),
]
| en | 0.882256 | # Generated by Django 3.2.8 on 2021-11-25 17:50 | 1.295391 | 1 |
ResumeAnalyser/apps.py | samyakj2307/recruitai_resume_backend | 0 | 10249 | <gh_stars>0
from django.apps import AppConfig
class ResumeanalyserConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'ResumeAnalyser'
| from django.apps import AppConfig
class ResumeanalyserConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'ResumeAnalyser' | none | 1 | 1.263755 | 1 |
|
plugins/core/player_manager_plugin/__init__.py | StarryPy/StarryPy-Historic | 38 | 10250 | <filename>plugins/core/player_manager_plugin/__init__.py<gh_stars>10-100
from plugins.core.player_manager_plugin.plugin import PlayerManagerPlugin
from plugins.core.player_manager_plugin.manager import (
Banned,
UserLevels,
permissions,
PlayerManager
)
| <filename>plugins/core/player_manager_plugin/__init__.py<gh_stars>10-100
from plugins.core.player_manager_plugin.plugin import PlayerManagerPlugin
from plugins.core.player_manager_plugin.manager import (
Banned,
UserLevels,
permissions,
PlayerManager
)
| none | 1 | 1.208188 | 1 |
|
src/config/svc-monitor/svc_monitor/services/loadbalancer/drivers/ha_proxy/custom_attributes/haproxy_validator.py | jnpr-pranav/contrail-controller | 37 | 10251 | from builtins import str
from builtins import range
from builtins import object
import logging
import inspect
import os
class CustomAttr(object):
"""This type handles non-flat data-types like
int, str, bool.
"""
def __init__(self, key, value):
self._value = value
self._key = key
def validate(self):
pass
def post_validation(self):
pass
class CustomAttrTlsContainer(CustomAttr):
def __init__(self, key, value):
super(CustomAttrTlsContainer, self).__init__(key, value)
def validate(self):
return True
def post_validation(self):
return self._value
def validate_custom_attributes(custom_attributes_dict, section,
custom_attributes):
section_dict = {}
if custom_attributes and section in custom_attributes_dict:
for key, value in list(custom_attributes.items()):
if key in custom_attributes_dict[section]:
#Sanitize the value
try:
type_attr = custom_attributes_dict[section][key]['type']
limits = custom_attributes_dict[section][key]['limits']
if type_attr == 'int':
value = int(value)
if value in range(limits[0], limits[1]):
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
elif type_attr == 'str':
if len(value) in range(limits[0], limits[1]):
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
elif type_attr == 'bool':
if value in limits:
if value == 'True':
value = ''
elif value == 'False':
value = 'no '
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
elif inspect.isclass(eval(type_attr)):
new_custom_attr = eval(type_attr)(key, value)
if new_custom_attr.validate():
value = new_custom_attr.post_validation()
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
except Exception as e:
logging.error(str(e))
continue
return section_dict
| from builtins import str
from builtins import range
from builtins import object
import logging
import inspect
import os
class CustomAttr(object):
"""This type handles non-flat data-types like
int, str, bool.
"""
def __init__(self, key, value):
self._value = value
self._key = key
def validate(self):
pass
def post_validation(self):
pass
class CustomAttrTlsContainer(CustomAttr):
def __init__(self, key, value):
super(CustomAttrTlsContainer, self).__init__(key, value)
def validate(self):
return True
def post_validation(self):
return self._value
def validate_custom_attributes(custom_attributes_dict, section,
custom_attributes):
section_dict = {}
if custom_attributes and section in custom_attributes_dict:
for key, value in list(custom_attributes.items()):
if key in custom_attributes_dict[section]:
#Sanitize the value
try:
type_attr = custom_attributes_dict[section][key]['type']
limits = custom_attributes_dict[section][key]['limits']
if type_attr == 'int':
value = int(value)
if value in range(limits[0], limits[1]):
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
elif type_attr == 'str':
if len(value) in range(limits[0], limits[1]):
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
elif type_attr == 'bool':
if value in limits:
if value == 'True':
value = ''
elif value == 'False':
value = 'no '
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
elif inspect.isclass(eval(type_attr)):
new_custom_attr = eval(type_attr)(key, value)
if new_custom_attr.validate():
value = new_custom_attr.post_validation()
section_dict.update({key:value})
else:
logging.info("Skipping key: %s, value: %s due to" \
"validation failure" % (key, value))
except Exception as e:
logging.error(str(e))
continue
return section_dict
| en | 0.725463 | This type handles non-flat data-types like int, str, bool. #Sanitize the value | 2.978925 | 3 |
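validate_custom_attributes above filters a caller's attribute dict against a schema shaped like {section: {key: {'type': ..., 'limits': ...}}}, coercing ints, length-checking strings, and mapping booleans to haproxy-style '' / 'no ' prefixes. A hedged example call; the import path is shortened and the schema values are illustrative, not Contrail's real haproxy attribute table:

from haproxy_validator import validate_custom_attributes  # repo module, path shortened here

custom_attributes_dict = {
    "frontend": {
        "timeout_client": {"type": "int", "limits": [1, 5000001]},
        "http_server_close": {"type": "bool", "limits": ["True", "False"]},
    }
}

requested = {"timeout_client": "300000", "http_server_close": "True", "unknown_key": "1"}
print(validate_custom_attributes(custom_attributes_dict, "frontend", requested))
# -> {'timeout_client': 300000, 'http_server_close': ''}; keys missing from the schema are dropped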
python/janitor/typecache.py | monkeyman79/janitor | 2 | 10252 |
import gdb
class TypeCache(object):
def __init__(self):
self.cache = {}
self.intptr_type = False
def clear(self):
self.cache = {}
self.intptr_type = False
def get_type(self, typename):
if typename in self.cache:
return self.cache[typename]
try:
gdb_type = gdb.lookup_type(typename)
self.cache[typename] = gdb_type
return gdb_type
except:
pass
try:
proto = gdb.parse_and_eval("(%s*)0" % typename)
gdb_type = proto.type.target()
self.cache[typename] = gdb_type
return gdb_type
except:
pass
return None
def get_intptr_type(self):
if self.intptr_type != False:
return self.intptr_type
ptr_type = self.get_type("void*")
if ptr_type == None:
self.intptr_type = None
return None
ulong_type = self.get_type("unsigned long")
if ulong_type == None:
self.intptr_type = None
return None
if ulong_type.sizeof >= ptr_type.sizeof:
self.intptr_type = ulong_type
return ulong_type
ullong_type = self.get_type("unsigned long long")
self.intptr_type = ullong_type
return ullong_type
cache = TypeCache()
|
import gdb
class TypeCache(object):
def __init__(self):
self.cache = {}
self.intptr_type = False
def clear(self):
self.cache = {}
self.intptr_type = False
def get_type(self, typename):
if typename in self.cache:
return self.cache[typename]
try:
gdb_type = gdb.lookup_type(typename)
self.cache[typename] = gdb_type
return gdb_type
except:
pass
try:
proto = gdb.parse_and_eval("(%s*)0" % typename)
gdb_type = proto.type.target()
self.cache[typename] = gdb_type
return gdb_type
except:
pass
return None
def get_intptr_type(self):
if self.intptr_type != False:
return self.intptr_type
ptr_type = self.get_type("void*")
if ptr_type == None:
self.intptr_type = None
return None
ulong_type = self.get_type("unsigned long")
if ulong_type == None:
self.intptr_type = None
return None
if ulong_type.sizeof >= ptr_type.sizeof:
self.intptr_type = ulong_type
return ulong_type
ullong_type = self.get_type("unsigned long long")
self.intptr_type = ullong_type
return ullong_type
cache = TypeCache()
| none | 1 | 2.447621 | 2 |
|
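TypeCache above memoizes gdb type lookups and falls back to evaluating a null-pointer cast when gdb.lookup_type fails. A short usage sketch of the module-level cache instance; it only works inside a gdb session with Python scripting, since the gdb module is not importable elsewhere:

# Inside gdb:  (gdb) source this_snippet.py   (assumes the janitor package is on gdb's Python path)
from janitor.typecache import cache   # the shared TypeCache() instance defined above

size_t = cache.get_type("size_t")     # gdb.Type, cached on first lookup, or None
intptr = cache.get_intptr_type()      # unsigned long or unsigned long long
if size_t is not None and intptr is not None:
    print("size_t is %d bytes; pointer-sized integer type: %s" % (size_t.sizeof, str(intptr)))
cache.clear()                         # e.g. after switching to a different binary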
key_phrase.py | Santara/autoSLR | 1 | 10253 | import os
import sys
directory = sys.argv[1]
outfile = open("key_phrases.csv","w")
files = {}
for filename in os.listdir(directory):
text=[]
with open(os.path.join(directory, filename)) as f:
text=[l.strip() for l in f if len(l.strip())>2]
data=''
for t in text:
if len(t.split()) > 1:
data = data+'. '+t.strip()
	whitelist = set('abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ')
answer = ''.join(filter(whitelist.__contains__, data))
answer=' '.join(answer.split())
import rake
import operator
rake_object = rake.Rake("/home/ashutosh/Sudeshna/RAKE-tutorial/data/stoplists/SmartStoplist.txt", 3,3,1)
import pprint
pp = pprint.PrettyPrinter()
keywords = rake_object.run(answer)
for entry in keywords:
outfile.write("%s, %s\n" % (entry[0], str(entry[1])) )
outfile.close()
| import os
import sys
directory = sys.argv[1]
outfile = open("key_phrases.csv","w")
files = {}
for filename in os.listdir(directory):
text=[]
with open(os.path.join(directory, filename)) as f:
text=[l.strip() for l in f if len(l.strip())>2]
data=''
for t in text:
if len(t.split()) > 1:
data = data+'. '+t.strip()
whitelist = set('abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ')
answer = ''.join(filter(whitelist.__contains__, data))
answer=' '.join(answer.split())
import rake
import operator
rake_object = rake.Rake("/home/ashutosh/Sudeshna/RAKE-tutorial/data/stoplists/SmartStoplist.txt", 3,3,1)
import pprint
pp = pprint.PrettyPrinter()
keywords = rake_object.run(answer)
for entry in keywords:
outfile.write("%s, %s\n" % (entry[0], str(entry[1])) )
outfile.close()
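# Editor's note: a small self-contained sketch of the character-whitelist step
# used above, added to make its intent explicit; the sample string in the last
# comment is illustrative only.
def _filter_to_letters(text):
    allowed = set('abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ')
    kept = ''.join(ch for ch in text if ch in allowed)
    return ' '.join(kept.split())  # collapse whitespace runs left by removals
# e.g. _filter_to_letters("Systematic reviews (SLRs), 2019!") == "Systematic reviews SLRs"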
| none | 1 | 2.646277 | 3 |
|
tests/test_utils.py | aced-differentiate/dft-input-gen | 1 | 10254 | """Unit tests for helper utilities in :mod:`dftinputgen.utils`."""
import os
import pytest
from ase import io as ase_io
from dftinputgen.utils import get_elem_symbol
from dftinputgen.utils import read_crystal_structure
from dftinputgen.utils import get_kpoint_grid_from_spacing
from dftinputgen.utils import DftInputGeneratorUtilsError
test_base_dir = os.path.dirname(__file__)
feo_conv_file = os.path.join(test_base_dir, "qe", "files", "feo_conv.vasp")
feo_conv = ase_io.read(feo_conv_file)
def test_get_elem_symbol():
assert get_elem_symbol("Fe-34") == "Fe"
assert get_elem_symbol("3RGe-34") == "Ge"
with pytest.raises(DftInputGeneratorUtilsError):
get_elem_symbol("G23")
def test_read_crystal_structure():
# str with path to crystal structure file is OK
cs = read_crystal_structure(feo_conv_file)
assert cs == feo_conv
# any other type of input should throw an error
with pytest.raises(TypeError):
read_crystal_structure(feo_conv)
def test_kpoint_grid_from_spacing():
assert get_kpoint_grid_from_spacing(feo_conv, 0.2) == pytest.approx(
[7, 7, 7]
)
| """Unit tests for helper utilities in :mod:`dftinputgen.utils`."""
import os
import pytest
from ase import io as ase_io
from dftinputgen.utils import get_elem_symbol
from dftinputgen.utils import read_crystal_structure
from dftinputgen.utils import get_kpoint_grid_from_spacing
from dftinputgen.utils import DftInputGeneratorUtilsError
test_base_dir = os.path.dirname(__file__)
feo_conv_file = os.path.join(test_base_dir, "qe", "files", "feo_conv.vasp")
feo_conv = ase_io.read(feo_conv_file)
def test_get_elem_symbol():
assert get_elem_symbol("Fe-34") == "Fe"
assert get_elem_symbol("3RGe-34") == "Ge"
with pytest.raises(DftInputGeneratorUtilsError):
get_elem_symbol("G23")
def test_read_crystal_structure():
# str with path to crystal structure file is OK
cs = read_crystal_structure(feo_conv_file)
assert cs == feo_conv
# any other type of input should throw an error
with pytest.raises(TypeError):
read_crystal_structure(feo_conv)
def test_kpoint_grid_from_spacing():
assert get_kpoint_grid_from_spacing(feo_conv, 0.2) == pytest.approx(
[7, 7, 7]
)
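# Editor's note: these tests are collected by pytest; a typical invocation from
# the repository root would be `pytest tests/test_utils.py -q` (illustrative).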
| en | 0.726543 | Unit tests for helper utilities in :mod:`dftinputgen.utils`. # str with path to crystal structure file is OK # any other type of input should throw an error | 2.515499 | 3 |
core/models.py | nforesperance/Django-Channels-ChatApp | 2 | 10255 | from django.contrib.auth.models import User
from django.db.models import (Model, TextField, DateTimeField, ForeignKey,
CASCADE)
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.db import models
import json
class MessageModel(Model):
"""
This class represents a chat message. It has an owner (user), timestamp and
the message body.
"""
user = ForeignKey(User, on_delete=CASCADE, verbose_name='user',
related_name='from_user', db_index=True)
recipient = ForeignKey(User, on_delete=CASCADE, verbose_name='recipient',
related_name='to_user', db_index=True)
timestamp = DateTimeField('timestamp', auto_now_add=True, editable=False,
db_index=True)
body = TextField('body')
def __str__(self):
return str(self.id)
def characters(self):
"""
Toy function to count body characters.
:return: body's char number
"""
return len(self.body)
def notify_ws_clients(self):
"""
Inform client there is a new message.
"""
notification = {
'type': 'chat_message',
'message': '{}'.format(self.id)
}
channel_layer = get_channel_layer()
print("user.id {}".format(self.user.id))
print("user.id {}".format(self.recipient.id))
async_to_sync(channel_layer.group_send)("{}".format(self.user.id), notification)
async_to_sync(channel_layer.group_send)("{}".format(self.recipient.id), notification)
def save(self, *args, **kwargs):
"""
Trims white spaces, saves the message and notifies the recipient via WS
if the message is new.
"""
new = self.id
self.body = self.body.strip() # Trimming whitespaces from the body
super(MessageModel, self).save(*args, **kwargs)
if new is None:
self.notify_ws_clients()
# Meta
class Meta:
app_label = 'core'
verbose_name = 'message'
verbose_name_plural = 'messages'
ordering = ('-timestamp',)
class Group(models.Model):
name = models.CharField(max_length = 20)
members = models.TextField()
messages = models.TextField ()
def set_members(self,user_id_list):
self.members = json.dumps(user_id_list)
def get_members(self):
return json.loads(self.members)
def add(self,user_id):
current_list = self.get_members()
if user_id in current_list:
print("user is already in the group")
else:
current_list.append(user_id)
self.set_members(current_list)
def remove(self,user_id):
current_list = self.get_members()
if user_id in current_list:
current_list.remove(user_id)
self.set_members(current_list)
else:
print("User is not a member of theis group")
def has(self,user_id):
current_list = self.get_members()
return(user_id in current_list)
# Set of functions for dealing with group messages
def set_messages(self,message_id_list):
self.messages = json.dumps(message_id_list)
def get_messages(self):
return json.loads(self.messages)
def add_message(self,message_id):
current_list = self.get_messages()
current_list.append(message_id)
self.set_messages(current_list)
def delete_message(self,message_id):
current_list = self.get_messages()
if message_id in current_list:
current_list.remove(message_id)
self.set_messages(current_list)
def save(self, *args, **kwargs):
if self.pk is None or self.members is None or self.members == '':
self.set_members([])
if self.pk is None or self.messages is None or self.messages == '':
self.set_messages([])
super(Group, self).save(*args, **kwargs)
def __str__(self):
return self.name+" ID: "+str(self.id)
# Meta
class Meta:
app_label = 'core'
verbose_name = 'Group'
verbose_name_plural = 'Groups'
ordering = ('name',)
class GroupMessage(Model):
"""
This class represents a chat message. It has an owner (user), timestamp and
the message body.
"""
sender = ForeignKey(User, on_delete=CASCADE, verbose_name='sender',
related_name='from_sender', db_index=True)
group = ForeignKey(Group, on_delete=CASCADE, verbose_name='group',
related_name='to_group', db_index=True)
time = DateTimeField('time', auto_now_add=True, editable=False,
db_index=True)
body = TextField('body')
def __str__(self):
return str(self.id)
def characters(self):
"""
Toy function to count body characters.
:return: body's char number
"""
return len(self.body)
def notify_ws_clients(self):
"""
Inform client there is a new message.
"""
notification = {
'type': 'group_message',
'group': '{}'.format(self.id)
}
channel_layer = get_channel_layer()
group_id = "group"+str(self.group.id)
print("group.id {}".format(group_id))
async_to_sync(channel_layer.group_send)(group_id, notification)
def save(self, *args, **kwargs):
"""
Trims white spaces, saves the message and notifies the recipient via WS
if the message is new.
"""
new = self.id
self.body = self.body.strip() # Trimming whitespaces from the body
super(GroupMessage, self).save(*args, **kwargs)
if new is None:
self.notify_ws_clients()
# Meta
class Meta:
app_label = 'core'
verbose_name = 'group message'
verbose_name_plural = 'group messages'
ordering = ('-time',)
| from django.contrib.auth.models import User
from django.db.models import (Model, TextField, DateTimeField, ForeignKey,
CASCADE)
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.db import models
import json
class MessageModel(Model):
"""
This class represents a chat message. It has an owner (user), timestamp and
the message body.
"""
user = ForeignKey(User, on_delete=CASCADE, verbose_name='user',
related_name='from_user', db_index=True)
recipient = ForeignKey(User, on_delete=CASCADE, verbose_name='recipient',
related_name='to_user', db_index=True)
timestamp = DateTimeField('timestamp', auto_now_add=True, editable=False,
db_index=True)
body = TextField('body')
def __str__(self):
return str(self.id)
def characters(self):
"""
Toy function to count body characters.
:return: body's char number
"""
return len(self.body)
def notify_ws_clients(self):
"""
Inform client there is a new message.
"""
notification = {
'type': 'chat_message',
'message': '{}'.format(self.id)
}
channel_layer = get_channel_layer()
print("user.id {}".format(self.user.id))
print("user.id {}".format(self.recipient.id))
async_to_sync(channel_layer.group_send)("{}".format(self.user.id), notification)
async_to_sync(channel_layer.group_send)("{}".format(self.recipient.id), notification)
def save(self, *args, **kwargs):
"""
Trims white spaces, saves the message and notifies the recipient via WS
if the message is new.
"""
new = self.id
self.body = self.body.strip() # Trimming whitespaces from the body
super(MessageModel, self).save(*args, **kwargs)
if new is None:
self.notify_ws_clients()
# Meta
class Meta:
app_label = 'core'
verbose_name = 'message'
verbose_name_plural = 'messages'
ordering = ('-timestamp',)
class Group(models.Model):
name = models.CharField(max_length = 20)
members = models.TextField()
messages = models.TextField ()
def set_members(self,user_id_list):
self.members = json.dumps(user_id_list)
def get_members(self):
return json.loads(self.members)
def add(self,user_id):
current_list = self.get_members()
if user_id in current_list:
print("user is already in the group")
else:
current_list.append(user_id)
self.set_members(current_list)
def remove(self,user_id):
current_list = self.get_members()
if user_id in current_list:
current_list.remove(user_id)
self.set_members(current_list)
else:
print("User is not a member of theis group")
def has(self,user_id):
current_list = self.get_members()
return(user_id in current_list)
# Set of functions for dealing with group messages
def set_messages(self,message_id_list):
self.messages = json.dumps(message_id_list)
def get_messages(self):
return json.loads(self.messages)
def add_message(self,message_id):
current_list = self.get_messages()
current_list.append(message_id)
self.set_messages(current_list)
def delete_message(self,message_id):
current_list = self.get_messages()
if message_id in current_list:
current_list.remove(message_id)
self.set_messages(current_list)
def save(self, *args, **kwargs):
if self.pk is None or self.members is None or self.members == '':
self.set_members([])
if self.pk is None or self.messages is None or self.messages == '':
self.set_messages([])
super(Group, self).save(*args, **kwargs)
def __str__(self):
return self.name+" ID: "+str(self.id)
# Meta
class Meta:
app_label = 'core'
verbose_name = 'Group'
verbose_name_plural = 'Groups'
ordering = ('name',)
class GroupMessage(Model):
"""
This class represents a chat message. It has an owner (user), timestamp and
the message body.
"""
sender = ForeignKey(User, on_delete=CASCADE, verbose_name='sender',
related_name='from_sender', db_index=True)
group = ForeignKey(Group, on_delete=CASCADE, verbose_name='group',
related_name='to_group', db_index=True)
time = DateTimeField('time', auto_now_add=True, editable=False,
db_index=True)
body = TextField('body')
def __str__(self):
return str(self.id)
def characters(self):
"""
Toy function to count body characters.
:return: body's char number
"""
return len(self.body)
def notify_ws_clients(self):
"""
Inform client there is a new message.
"""
notification = {
'type': 'group_message',
'group': '{}'.format(self.id)
}
channel_layer = get_channel_layer()
group_id = "group"+str(self.group.id)
print("group.id {}".format(group_id))
async_to_sync(channel_layer.group_send)(group_id, notification)
def save(self, *args, **kwargs):
"""
Trims white spaces, saves the message and notifies the recipient via WS
if the message is new.
"""
new = self.id
self.body = self.body.strip() # Trimming whitespaces from the body
super(GroupMessage, self).save(*args, **kwargs)
if new is None:
self.notify_ws_clients()
# Meta
class Meta:
app_label = 'core'
verbose_name = 'group message'
verbose_name_plural = 'group messages'
ordering = ('-time',)
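# Editor's note: minimal usage sketch, not part of the original app. Model and
# method names come from this file; it assumes Django is configured with a
# channel layer, migrations for `core` are applied, and `alice`/`bob` are
# existing auth User rows.
def _example_group_flow(alice, bob):
    group = Group(name="demo")
    group.save()                      # save() seeds members/messages with "[]"
    group.add(alice.id)
    group.add(bob.id)
    msg = GroupMessage(sender=alice, group=group, body=" hello ")
    msg.save()                        # trims the body and notifies WS clients
    group.add_message(msg.id)
    group.save()
    return group.get_members(), group.get_messages()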
| en | 0.771246 | This class represents a chat message. It has a owner (user), timestamp and the message body. Toy function to count body characters. :return: body's char number Inform client there is a new message. Trims white spaces, saves the message and notifies the recipient via WS if the message is new. # Trimming whitespaces from the body # Meta # Set of functions for dealing with group messages # Meta This class represents a chat message. It has a owner (user), timestamp and the message body. Toy function to count body characters. :return: body's char number Inform client there is a new message. Trims white spaces, saves the message and notifies the recipient via WS if the message is new. # Trimming whitespaces from the body # Meta | 2.333341 | 2 |
backup/model.py | jsikyoon/ASNP-RMR | 8 | 10256 | <reponame>jsikyoon/ASNP-RMR
import tensorflow as tf
import numpy as np
# utility methods
def batch_mlp(input, output_sizes, variable_scope):
"""Apply MLP to the final axis of a 3D tensor (reusing already defined MLPs).
Args:
input: input tensor of shape [B,n,d_in].
output_sizes: An iterable containing the output sizes of the MLP as defined
in `basic.Linear`.
variable_scope: String giving the name of the variable scope. If this is set
to be the same as a previously defined MLP, then the weights are reused.
Returns:
tensor of shape [B,n,d_out] where d_out=output_sizes[-1]
"""
# Get the shapes of the input and reshape to parallelise across observations
batch_size, _, filter_size = input.shape.as_list()
output = tf.reshape(input, (-1, filter_size))
output.set_shape((None, filter_size))
# Pass through MLP
with tf.variable_scope(variable_scope, reuse=tf.AUTO_REUSE):
for i, size in enumerate(output_sizes[:-1]):
output = tf.nn.relu(
tf.layers.dense(output, size, name="layer_{}".format(i)))
# Last layer without a ReLu
output = tf.layers.dense(
output, output_sizes[-1], name="layer_{}".format(i + 1))
# Bring back into original shape
output = tf.reshape(output, (batch_size, -1, output_sizes[-1]))
return output
class DeterministicEncoder(object):
"""The Deterministic Encoder."""
def __init__(self, output_sizes, attention):
"""(A)NP deterministic encoder.
Args:
output_sizes: An iterable containing the output sizes of the encoding MLP.
attention: The attention module.
"""
self._output_sizes = output_sizes
self._attention = attention
def __call__(self, context_x, context_y, target_x):
"""Encodes the inputs into one representation.
Args:
context_x: Tensor of shape [B,observations,d_x]. For this 1D regression
task this corresponds to the x-values.
context_y: Tensor of shape [B,observations,d_y]. For this 1D regression
task this corresponds to the y-values.
target_x: Tensor of shape [B,target_observations,d_x].
For this 1D regression task this corresponds to the x-values.
Returns:
The encoded representation. Tensor of shape [B,target_observations,d]
"""
# Concatenate x and y along the filter axes
encoder_input = tf.concat([context_x, context_y], axis=-1)
# Pass final axis through MLP
hidden = batch_mlp(encoder_input, self._output_sizes,
"deterministic_encoder")
# Apply attention
with tf.variable_scope("deterministic_encoder", reuse=tf.AUTO_REUSE):
hidden = self._attention(context_x, target_x, hidden)
return hidden
class LatentEncoder(object):
"""The Latent Encoder."""
def __init__(self, output_sizes, num_latents):
"""(A)NP latent encoder.
Args:
output_sizes: An iterable containing the output sizes of the encoding MLP.
num_latents: The latent dimensionality.
"""
self._output_sizes = output_sizes
self._num_latents = num_latents
def __call__(self, x, y):
"""Encodes the inputs into one representation.
Args:
x: Tensor of shape [B,observations,d_x]. For this 1D regression
task this corresponds to the x-values.
y: Tensor of shape [B,observations,d_y]. For this 1D regression
task this corresponds to the y-values.
Returns:
A normal distribution over tensors of shape [B, num_latents]
"""
# Concatenate x and y along the filter axes
encoder_input = tf.concat([x, y], axis=-1)
# Pass final axis through MLP
hidden = batch_mlp(encoder_input, self._output_sizes, "latent_encoder")
# Aggregator: take the mean over all points
hidden = tf.reduce_mean(hidden, axis=1)
# Have further MLP layers that map to the parameters of the Gaussian latent
with tf.variable_scope("latent_encoder", reuse=tf.AUTO_REUSE):
# First apply intermediate relu layer
hidden = tf.nn.relu(
tf.layers.dense(hidden,
(self._output_sizes[-1] + self._num_latents)/2,
name="penultimate_layer"))
# Then apply further linear layers to output latent mu and log sigma
mu = tf.layers.dense(hidden, self._num_latents, name="mean_layer")
log_sigma = tf.layers.dense(hidden, self._num_latents, name="std_layer")
# Compute sigma
sigma = 0.1 + 0.9 * tf.sigmoid(log_sigma)
return tf.contrib.distributions.Normal(loc=mu, scale=sigma)
class Decoder(object):
"""The Decoder."""
def __init__(self, output_sizes):
"""(A)NP decoder.
Args:
output_sizes: An iterable containing the output sizes of the decoder MLP
as defined in `basic.Linear`.
"""
self._output_sizes = output_sizes
def __call__(self, representation, target_x):
"""Decodes the individual targets.
Args:
representation: The representation of the context for target predictions.
Tensor of shape [B,target_observations,?].
target_x: The x locations for the target query.
Tensor of shape [B,target_observations,d_x].
Returns:
dist: A multivariate Gaussian over the target points. A distribution over
tensors of shape [B,target_observations,d_y].
mu: The mean of the multivariate Gaussian.
Tensor of shape [B,target_observations,d_x].
sigma: The standard deviation of the multivariate Gaussian.
Tensor of shape [B,target_observations,d_x].
"""
# concatenate target_x and representation
hidden = tf.concat([representation, target_x], axis=-1)
# Pass final axis through MLP
hidden = batch_mlp(hidden, self._output_sizes, "decoder")
# Get the mean and the variance
mu, log_sigma = tf.split(hidden, 2, axis=-1)
# Bound the variance
sigma = 0.1 + 0.9 * tf.nn.softplus(log_sigma)
# Get the distribution
dist = tf.contrib.distributions.MultivariateNormalDiag(
loc=mu, scale_diag=sigma)
return dist, mu, sigma
class LatentModel(object):
"""The (A)NP model."""
def __init__(self, latent_encoder_output_sizes, num_latents,
decoder_output_sizes, use_deterministic_path=True,
deterministic_encoder_output_sizes=None, attention=None):
"""Initialises the model.
Args:
latent_encoder_output_sizes: An iterable containing the sizes of hidden
layers of the latent encoder.
num_latents: The latent dimensionality.
decoder_output_sizes: An iterable containing the sizes of hidden layers of
the decoder. The last element should correspond to d_y * 2
(it encodes both mean and variance concatenated)
use_deterministic_path: a boolean that indicates whether the deterministic
encoder is used or not.
deterministic_encoder_output_sizes: An iterable containing the sizes of
hidden layers of the deterministic encoder. The last one is the size
of the deterministic representation r.
attention: The attention module used in the deterministic encoder.
Only relevant when use_deterministic_path=True.
"""
self._latent_encoder = LatentEncoder(latent_encoder_output_sizes,
num_latents)
self._decoder = Decoder(decoder_output_sizes)
self._use_deterministic_path = use_deterministic_path
if use_deterministic_path:
self._deterministic_encoder = DeterministicEncoder(
deterministic_encoder_output_sizes, attention)
def __call__(self, query, num_targets, target_y=None):
"""Returns the predicted mean and variance at the target points.
Args:
query: Array containing ((context_x, context_y), target_x) where:
context_x: Tensor of shape [B,num_contexts,d_x].
Contains the x values of the context points.
context_y: Tensor of shape [B,num_contexts,d_y].
Contains the y values of the context points.
target_x: Tensor of shape [B,num_targets,d_x].
Contains the x values of the target points.
num_targets: Number of target points.
target_y: The ground truth y values of the target y.
Tensor of shape [B,num_targets,d_y].
Returns:
log_p: The log_probability of the target_y given the predicted
distribution. Tensor of shape [B,num_targets].
mu: The mean of the predicted distribution.
Tensor of shape [B,num_targets,d_y].
sigma: The variance of the predicted distribution.
Tensor of shape [B,num_targets,d_y].
"""
(context_x, context_y), target_x = query
# Pass query through the encoder and the decoder
prior = self._latent_encoder(context_x, context_y)
# For training, when target_y is available, use targets for latent encoder.
# Note that targets contain contexts by design.
if target_y is None:
latent_rep = prior.sample()
# For testing, when target_y unavailable, use contexts for latent encoder.
else:
posterior = self._latent_encoder(target_x, target_y)
latent_rep = posterior.sample()
latent_rep = tf.tile(tf.expand_dims(latent_rep, axis=1),
[1, num_targets, 1])
if self._use_deterministic_path:
deterministic_rep = self._deterministic_encoder(context_x, context_y,
target_x)
representation = tf.concat([deterministic_rep, latent_rep], axis=-1)
else:
representation = latent_rep
dist, mu, sigma = self._decoder(representation, target_x)
# If we want to calculate the log_prob for training we will make use of the
# target_y. At test time the target_y is not available so we return None.
if target_y is not None:
log_p = dist.log_prob(target_y)
posterior = self._latent_encoder(target_x, target_y)
kl = tf.reduce_sum(
tf.contrib.distributions.kl_divergence(posterior, prior),
axis=-1, keepdims=True)
kl = tf.tile(kl, [1, num_targets])
loss = - tf.reduce_mean(log_p - kl / tf.cast(num_targets, tf.float32))
else:
log_p = None
kl = None
loss = None
return mu, sigma, log_p, kl, loss
def uniform_attention(q, v):
"""Uniform attention. Equivalent to np.
Args:
q: queries. tensor of shape [B,m,d_k].
v: values. tensor of shape [B,n,d_v].
Returns:
tensor of shape [B,m,d_v].
"""
total_points = tf.shape(q)[1]
rep = tf.reduce_mean(v, axis=1, keepdims=True) # [B,1,d_v]
rep = tf.tile(rep, [1, total_points, 1])
return rep
def laplace_attention(q, k, v, scale, normalise):
"""Computes laplace exponential attention.
Args:
q: queries. tensor of shape [B,m,d_k].
k: keys. tensor of shape [B,n,d_k].
v: values. tensor of shape [B,n,d_v].
scale: float that scales the L1 distance.
normalise: Boolean that determines whether weights sum to 1.
Returns:
tensor of shape [B,m,d_v].
"""
k = tf.expand_dims(k, axis=1) # [B,1,n,d_k]
q = tf.expand_dims(q, axis=2) # [B,m,1,d_k]
unnorm_weights = - tf.abs((k - q) / scale) # [B,m,n,d_k]
unnorm_weights = tf.reduce_sum(unnorm_weights, axis=-1) # [B,m,n]
if normalise:
weight_fn = tf.nn.softmax
else:
weight_fn = lambda x: 1 + tf.tanh(x)
weights = weight_fn(unnorm_weights) # [B,m,n]
rep = tf.einsum('bik,bkj->bij', weights, v) # [B,m,d_v]
return rep
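# Editor's note (added comment): with normalise=True the weights above are
# weights[b,i,j] = softmax_j(-||k[b,j] - q[b,i]||_1 / scale), i.e. a Laplace
# kernel over the L1 distance between each query and key.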
def dot_product_attention(q, k, v, normalise):
"""Computes dot product attention.
Args:
q: queries. tensor of shape [B,m,d_k].
k: keys. tensor of shape [B,n,d_k].
v: values. tensor of shape [B,n,d_v].
normalise: Boolean that determines whether weights sum to 1.
Returns:
tensor of shape [B,m,d_v].
"""
d_k = tf.shape(q)[-1]
scale = tf.sqrt(tf.cast(d_k, tf.float32))
unnorm_weights = tf.einsum('bjk,bik->bij', k, q) / scale # [B,m,n]
if normalise:
weight_fn = tf.nn.softmax
else:
weight_fn = tf.sigmoid
weights = weight_fn(unnorm_weights) # [B,m,n]
rep = tf.einsum('bik,bkj->bij', weights, v) # [B,m,d_v]
return rep
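# Editor's note (added comment): with normalise=True this is standard scaled
# dot-product attention, rep = softmax(Q K^T / sqrt(d_k)) V.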
def multihead_attention(q, k, v, num_heads=8):
"""Computes multi-head attention.
Args:
q: queries. tensor of shape [B,m,d_k].
k: keys. tensor of shape [B,n,d_k].
v: values. tensor of shape [B,n,d_v].
num_heads: number of heads. Should divide d_v.
Returns:
tensor of shape [B,m,d_v].
"""
d_k = q.get_shape().as_list()[-1]
d_v = v.get_shape().as_list()[-1]
head_size = d_v // num_heads  # integer division so Conv1D receives an int filter count
key_initializer = tf.random_normal_initializer(stddev=d_k**-0.5)
value_initializer = tf.random_normal_initializer(stddev=d_v**-0.5)
rep = tf.constant(0.0)
for h in range(num_heads):
o = dot_product_attention(
tf.layers.Conv1D(head_size, 1, kernel_initializer=key_initializer,
name='wq%d' % h, use_bias=False, padding='VALID')(q),
tf.layers.Conv1D(head_size, 1, kernel_initializer=key_initializer,
name='wk%d' % h, use_bias=False, padding='VALID')(k),
tf.layers.Conv1D(head_size, 1, kernel_initializer=key_initializer,
name='wv%d' % h, use_bias=False, padding='VALID')(v),
normalise=True)
rep += tf.layers.Conv1D(d_v, 1, kernel_initializer=value_initializer,
name='wo%d' % h, use_bias=False, padding='VALID')(o)
return rep
class Attention(object):
"""The Attention module."""
def __init__(self, rep, output_sizes, att_type, scale=1., normalise=True,
num_heads=8):
"""Create attention module.
Takes in context inputs, target inputs and
representations of each context input/output pair
to output an aggregated representation of the context data.
Args:
rep: transformation to apply to contexts before computing attention.
One of: ['identity','mlp'].
output_sizes: list of number of hidden units per layer of mlp.
Used only if rep == 'mlp'.
att_type: type of attention. One of the following:
['uniform','laplace','dot_product','multihead']
scale: scale of attention.
normalise: Boolean determining whether to:
1. apply softmax to weights so that they sum to 1 across context pts or
2. apply custom transformation to have weights in [0,1].
num_heads: number of heads for multihead.
"""
self._rep = rep
self._output_sizes = output_sizes
self._type = att_type
self._scale = scale
self._normalise = normalise
if self._type == 'multihead':
self._num_heads = num_heads
def __call__(self, x1, x2, r):
"""Apply attention to create aggregated representation of r.
Args:
x1: tensor of shape [B,n1,d_x].
x2: tensor of shape [B,n2,d_x].
r: tensor of shape [B,n1,d].
Returns:
tensor of shape [B,n2,d]
Raises:
NameError: The argument for rep/type was invalid.
"""
if self._rep == 'identity':
k, q = (x1, x2)
elif self._rep == 'mlp':
# Pass through MLP
k = batch_mlp(x1, self._output_sizes, "attention")
q = batch_mlp(x2, self._output_sizes, "attention")
else:
raise NameError("'rep' not among ['identity','mlp']")
if self._type == 'uniform':
rep = uniform_attention(q, r)
elif self._type == 'laplace':
rep = laplace_attention(q, k, r, self._scale, self._normalise)
elif self._type == 'dot_product':
rep = dot_product_attention(q, k, r, self._normalise)
elif self._type == 'multihead':
rep = multihead_attention(q, k, r, self._num_heads)
else:
raise NameError(("'att_type' not among ['uniform','laplace','dot_product'"
",'multihead']"))
return rep
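# Editor's note: a minimal wiring sketch, not from the original repo, showing
# how Attention and LatentModel compose for 1D regression with TF1-style
# placeholders; all sizes below are illustrative assumptions.
def _build_example_graph(batch_size=16, num_context=10, num_target=50, d_x=1, d_y=1):
    context_x = tf.placeholder(tf.float32, [batch_size, num_context, d_x])
    context_y = tf.placeholder(tf.float32, [batch_size, num_context, d_y])
    target_x = tf.placeholder(tf.float32, [batch_size, num_target, d_x])
    target_y = tf.placeholder(tf.float32, [batch_size, num_target, d_y])
    attention = Attention(rep='mlp', output_sizes=[128, 128], att_type='multihead')
    model = LatentModel(latent_encoder_output_sizes=[128, 128],
                        num_latents=128,
                        decoder_output_sizes=[128, 128, 2 * d_y],
                        use_deterministic_path=True,
                        deterministic_encoder_output_sizes=[128, 128],
                        attention=attention)
    query = ((context_x, context_y), target_x)
    mu, sigma, log_p, kl, loss = model(query, num_target, target_y)
    return loss, mu, sigma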
| import tensorflow as tf
import numpy as np
# utility methods
def batch_mlp(input, output_sizes, variable_scope):
"""Apply MLP to the final axis of a 3D tensor (reusing already defined MLPs).
Args:
input: input tensor of shape [B,n,d_in].
output_sizes: An iterable containing the output sizes of the MLP as defined
in `basic.Linear`.
variable_scope: String giving the name of the variable scope. If this is set
to be the same as a previously defined MLP, then the weights are reused.
Returns:
tensor of shape [B,n,d_out] where d_out=output_sizes[-1]
"""
# Get the shapes of the input and reshape to parallelise across observations
batch_size, _, filter_size = input.shape.as_list()
output = tf.reshape(input, (-1, filter_size))
output.set_shape((None, filter_size))
# Pass through MLP
with tf.variable_scope(variable_scope, reuse=tf.AUTO_REUSE):
for i, size in enumerate(output_sizes[:-1]):
output = tf.nn.relu(
tf.layers.dense(output, size, name="layer_{}".format(i)))
# Last layer without a ReLu
output = tf.layers.dense(
output, output_sizes[-1], name="layer_{}".format(i + 1))
# Bring back into original shape
output = tf.reshape(output, (batch_size, -1, output_sizes[-1]))
return output
class DeterministicEncoder(object):
"""The Deterministic Encoder."""
def __init__(self, output_sizes, attention):
"""(A)NP deterministic encoder.
Args:
output_sizes: An iterable containing the output sizes of the encoding MLP.
attention: The attention module.
"""
self._output_sizes = output_sizes
self._attention = attention
def __call__(self, context_x, context_y, target_x):
"""Encodes the inputs into one representation.
Args:
context_x: Tensor of shape [B,observations,d_x]. For this 1D regression
task this corresponds to the x-values.
context_y: Tensor of shape [B,observations,d_y]. For this 1D regression
task this corresponds to the y-values.
target_x: Tensor of shape [B,target_observations,d_x].
For this 1D regression task this corresponds to the x-values.
Returns:
The encoded representation. Tensor of shape [B,target_observations,d]
"""
# Concatenate x and y along the filter axes
encoder_input = tf.concat([context_x, context_y], axis=-1)
# Pass final axis through MLP
hidden = batch_mlp(encoder_input, self._output_sizes,
"deterministic_encoder")
# Apply attention
with tf.variable_scope("deterministic_encoder", reuse=tf.AUTO_REUSE):
hidden = self._attention(context_x, target_x, hidden)
return hidden
class LatentEncoder(object):
"""The Latent Encoder."""
def __init__(self, output_sizes, num_latents):
"""(A)NP latent encoder.
Args:
output_sizes: An iterable containing the output sizes of the encoding MLP.
num_latents: The latent dimensionality.
"""
self._output_sizes = output_sizes
self._num_latents = num_latents
def __call__(self, x, y):
"""Encodes the inputs into one representation.
Args:
x: Tensor of shape [B,observations,d_x]. For this 1D regression
task this corresponds to the x-values.
y: Tensor of shape [B,observations,d_y]. For this 1D regression
task this corresponds to the y-values.
Returns:
A normal distribution over tensors of shape [B, num_latents]
"""
# Concatenate x and y along the filter axes
encoder_input = tf.concat([x, y], axis=-1)
# Pass final axis through MLP
hidden = batch_mlp(encoder_input, self._output_sizes, "latent_encoder")
# Aggregator: take the mean over all points
hidden = tf.reduce_mean(hidden, axis=1)
# Have further MLP layers that map to the parameters of the Gaussian latent
with tf.variable_scope("latent_encoder", reuse=tf.AUTO_REUSE):
# First apply intermediate relu layer
hidden = tf.nn.relu(
tf.layers.dense(hidden,
(self._output_sizes[-1] + self._num_latents)/2,
name="penultimate_layer"))
# Then apply further linear layers to output latent mu and log sigma
mu = tf.layers.dense(hidden, self._num_latents, name="mean_layer")
log_sigma = tf.layers.dense(hidden, self._num_latents, name="std_layer")
# Compute sigma
sigma = 0.1 + 0.9 * tf.sigmoid(log_sigma)
return tf.contrib.distributions.Normal(loc=mu, scale=sigma)
class Decoder(object):
"""The Decoder."""
def __init__(self, output_sizes):
"""(A)NP decoder.
Args:
output_sizes: An iterable containing the output sizes of the decoder MLP
as defined in `basic.Linear`.
"""
self._output_sizes = output_sizes
def __call__(self, representation, target_x):
"""Decodes the individual targets.
Args:
representation: The representation of the context for target predictions.
Tensor of shape [B,target_observations,?].
target_x: The x locations for the target query.
Tensor of shape [B,target_observations,d_x].
Returns:
dist: A multivariate Gaussian over the target points. A distribution over
tensors of shape [B,target_observations,d_y].
mu: The mean of the multivariate Gaussian.
Tensor of shape [B,target_observations,d_x].
sigma: The standard deviation of the multivariate Gaussian.
Tensor of shape [B,target_observations,d_x].
"""
# concatenate target_x and representation
hidden = tf.concat([representation, target_x], axis=-1)
# Pass final axis through MLP
hidden = batch_mlp(hidden, self._output_sizes, "decoder")
# Get the mean and the variance
mu, log_sigma = tf.split(hidden, 2, axis=-1)
# Bound the variance
sigma = 0.1 + 0.9 * tf.nn.softplus(log_sigma)
# Get the distribution
dist = tf.contrib.distributions.MultivariateNormalDiag(
loc=mu, scale_diag=sigma)
return dist, mu, sigma
class LatentModel(object):
"""The (A)NP model."""
def __init__(self, latent_encoder_output_sizes, num_latents,
decoder_output_sizes, use_deterministic_path=True,
deterministic_encoder_output_sizes=None, attention=None):
"""Initialises the model.
Args:
latent_encoder_output_sizes: An iterable containing the sizes of hidden
layers of the latent encoder.
num_latents: The latent dimensionality.
decoder_output_sizes: An iterable containing the sizes of hidden layers of
the decoder. The last element should correspond to d_y * 2
(it encodes both mean and variance concatenated)
use_deterministic_path: a boolean that indicates whether the deterministic
encoder is used or not.
deterministic_encoder_output_sizes: An iterable containing the sizes of
hidden layers of the deterministic encoder. The last one is the size
of the deterministic representation r.
attention: The attention module used in the deterministic encoder.
Only relevant when use_deterministic_path=True.
"""
self._latent_encoder = LatentEncoder(latent_encoder_output_sizes,
num_latents)
self._decoder = Decoder(decoder_output_sizes)
self._use_deterministic_path = use_deterministic_path
if use_deterministic_path:
self._deterministic_encoder = DeterministicEncoder(
deterministic_encoder_output_sizes, attention)
def __call__(self, query, num_targets, target_y=None):
"""Returns the predicted mean and variance at the target points.
Args:
query: Array containing ((context_x, context_y), target_x) where:
context_x: Tensor of shape [B,num_contexts,d_x].
Contains the x values of the context points.
context_y: Tensor of shape [B,num_contexts,d_y].
Contains the y values of the context points.
target_x: Tensor of shape [B,num_targets,d_x].
Contains the x values of the target points.
num_targets: Number of target points.
target_y: The ground truth y values of the target y.
Tensor of shape [B,num_targets,d_y].
Returns:
log_p: The log_probability of the target_y given the predicted
distribution. Tensor of shape [B,num_targets].
mu: The mean of the predicted distribution.
Tensor of shape [B,num_targets,d_y].
sigma: The variance of the predicted distribution.
Tensor of shape [B,num_targets,d_y].
"""
(context_x, context_y), target_x = query
# Pass query through the encoder and the decoder
prior = self._latent_encoder(context_x, context_y)
# For training, when target_y is available, use targets for latent encoder.
# Note that targets contain contexts by design.
if target_y is None:
latent_rep = prior.sample()
# For testing, when target_y unavailable, use contexts for latent encoder.
else:
posterior = self._latent_encoder(target_x, target_y)
latent_rep = posterior.sample()
latent_rep = tf.tile(tf.expand_dims(latent_rep, axis=1),
[1, num_targets, 1])
if self._use_deterministic_path:
deterministic_rep = self._deterministic_encoder(context_x, context_y,
target_x)
representation = tf.concat([deterministic_rep, latent_rep], axis=-1)
else:
representation = latent_rep
dist, mu, sigma = self._decoder(representation, target_x)
# If we want to calculate the log_prob for training we will make use of the
# target_y. At test time the target_y is not available so we return None.
if target_y is not None:
log_p = dist.log_prob(target_y)
posterior = self._latent_encoder(target_x, target_y)
kl = tf.reduce_sum(
tf.contrib.distributions.kl_divergence(posterior, prior),
axis=-1, keepdims=True)
kl = tf.tile(kl, [1, num_targets])
loss = - tf.reduce_mean(log_p - kl / tf.cast(num_targets, tf.float32))
else:
log_p = None
kl = None
loss = None
return mu, sigma, log_p, kl, loss
def uniform_attention(q, v):
"""Uniform attention. Equivalent to np.
Args:
q: queries. tensor of shape [B,m,d_k].
v: values. tensor of shape [B,n,d_v].
Returns:
tensor of shape [B,m,d_v].
"""
total_points = tf.shape(q)[1]
rep = tf.reduce_mean(v, axis=1, keepdims=True) # [B,1,d_v]
rep = tf.tile(rep, [1, total_points, 1])
return rep
def laplace_attention(q, k, v, scale, normalise):
"""Computes laplace exponential attention.
Args:
q: queries. tensor of shape [B,m,d_k].
k: keys. tensor of shape [B,n,d_k].
v: values. tensor of shape [B,n,d_v].
scale: float that scales the L1 distance.
normalise: Boolean that determines whether weights sum to 1.
Returns:
tensor of shape [B,m,d_v].
"""
k = tf.expand_dims(k, axis=1) # [B,1,n,d_k]
q = tf.expand_dims(q, axis=2) # [B,m,1,d_k]
unnorm_weights = - tf.abs((k - q) / scale) # [B,m,n,d_k]
unnorm_weights = tf.reduce_sum(unnorm_weights, axis=-1) # [B,m,n]
if normalise:
weight_fn = tf.nn.softmax
else:
weight_fn = lambda x: 1 + tf.tanh(x)
weights = weight_fn(unnorm_weights) # [B,m,n]
rep = tf.einsum('bik,bkj->bij', weights, v) # [B,m,d_v]
return rep
def dot_product_attention(q, k, v, normalise):
"""Computes dot product attention.
Args:
q: queries. tensor of shape [B,m,d_k].
k: keys. tensor of shape [B,n,d_k].
v: values. tensor of shape [B,n,d_v].
normalise: Boolean that determines whether weights sum to 1.
Returns:
tensor of shape [B,m,d_v].
"""
d_k = tf.shape(q)[-1]
scale = tf.sqrt(tf.cast(d_k, tf.float32))
unnorm_weights = tf.einsum('bjk,bik->bij', k, q) / scale # [B,m,n]
if normalise:
weight_fn = tf.nn.softmax
else:
weight_fn = tf.sigmoid
weights = weight_fn(unnorm_weights) # [B,m,n]
rep = tf.einsum('bik,bkj->bij', weights, v) # [B,m,d_v]
return rep
def multihead_attention(q, k, v, num_heads=8):
"""Computes multi-head attention.
Args:
q: queries. tensor of shape [B,m,d_k].
k: keys. tensor of shape [B,n,d_k].
v: values. tensor of shape [B,n,d_v].
num_heads: number of heads. Should divide d_v.
Returns:
tensor of shape [B,m,d_v].
"""
d_k = q.get_shape().as_list()[-1]
d_v = v.get_shape().as_list()[-1]
head_size = d_v // num_heads  # integer division so Conv1D receives an int filter count
key_initializer = tf.random_normal_initializer(stddev=d_k**-0.5)
value_initializer = tf.random_normal_initializer(stddev=d_v**-0.5)
rep = tf.constant(0.0)
for h in range(num_heads):
o = dot_product_attention(
tf.layers.Conv1D(head_size, 1, kernel_initializer=key_initializer,
name='wq%d' % h, use_bias=False, padding='VALID')(q),
tf.layers.Conv1D(head_size, 1, kernel_initializer=key_initializer,
name='wk%d' % h, use_bias=False, padding='VALID')(k),
tf.layers.Conv1D(head_size, 1, kernel_initializer=key_initializer,
name='wv%d' % h, use_bias=False, padding='VALID')(v),
normalise=True)
rep += tf.layers.Conv1D(d_v, 1, kernel_initializer=value_initializer,
name='wo%d' % h, use_bias=False, padding='VALID')(o)
return rep
class Attention(object):
"""The Attention module."""
def __init__(self, rep, output_sizes, att_type, scale=1., normalise=True,
num_heads=8):
"""Create attention module.
Takes in context inputs, target inputs and
representations of each context input/output pair
to output an aggregated representation of the context data.
Args:
rep: transformation to apply to contexts before computing attention.
One of: ['identity','mlp'].
output_sizes: list of number of hidden units per layer of mlp.
Used only if rep == 'mlp'.
att_type: type of attention. One of the following:
['uniform','laplace','dot_product','multihead']
scale: scale of attention.
normalise: Boolean determining whether to:
1. apply softmax to weights so that they sum to 1 across context pts or
2. apply custom transformation to have weights in [0,1].
num_heads: number of heads for multihead.
"""
self._rep = rep
self._output_sizes = output_sizes
self._type = att_type
self._scale = scale
self._normalise = normalise
if self._type == 'multihead':
self._num_heads = num_heads
def __call__(self, x1, x2, r):
"""Apply attention to create aggregated representation of r.
Args:
x1: tensor of shape [B,n1,d_x].
x2: tensor of shape [B,n2,d_x].
r: tensor of shape [B,n1,d].
Returns:
tensor of shape [B,n2,d]
Raises:
NameError: The argument for rep/type was invalid.
"""
if self._rep == 'identity':
k, q = (x1, x2)
elif self._rep == 'mlp':
# Pass through MLP
k = batch_mlp(x1, self._output_sizes, "attention")
q = batch_mlp(x2, self._output_sizes, "attention")
else:
raise NameError("'rep' not among ['identity','mlp']")
if self._type == 'uniform':
rep = uniform_attention(q, r)
elif self._type == 'laplace':
rep = laplace_attention(q, k, r, self._scale, self._normalise)
elif self._type == 'dot_product':
rep = dot_product_attention(q, k, r, self._normalise)
elif self._type == 'multihead':
rep = multihead_attention(q, k, r, self._num_heads)
else:
raise NameError(("'att_type' not among ['uniform','laplace','dot_product'"
",'multihead']"))
return rep | en | 0.72758 | # utility methods Apply MLP to the final axis of a 3D tensor (reusing already defined MLPs). Args: input: input tensor of shape [B,n,d_in]. output_sizes: An iterable containing the output sizes of the MLP as defined in `basic.Linear`. variable_scope: String giving the name of the variable scope. If this is set to be the same as a previously defined MLP, then the weights are reused. Returns: tensor of shape [B,n,d_out] where d_out=output_sizes[-1] # Get the shapes of the input and reshape to parallelise across observations # Pass through MLP # Last layer without a ReLu # Bring back into original shape The Deterministic Encoder. (A)NP deterministic encoder. Args: output_sizes: An iterable containing the output sizes of the encoding MLP. attention: The attention module. Encodes the inputs into one representation. Args: context_x: Tensor of shape [B,observations,d_x]. For this 1D regression task this corresponds to the x-values. context_y: Tensor of shape [B,observations,d_y]. For this 1D regression task this corresponds to the y-values. target_x: Tensor of shape [B,target_observations,d_x]. For this 1D regression task this corresponds to the x-values. Returns: The encoded representation. Tensor of shape [B,target_observations,d] # Concatenate x and y along the filter axes # Pass final axis through MLP # Apply attention The Latent Encoder. (A)NP latent encoder. Args: output_sizes: An iterable containing the output sizes of the encoding MLP. num_latents: The latent dimensionality. Encodes the inputs into one representation. Args: x: Tensor of shape [B,observations,d_x]. For this 1D regression task this corresponds to the x-values. y: Tensor of shape [B,observations,d_y]. For this 1D regression task this corresponds to the y-values. Returns: A normal distribution over tensors of shape [B, num_latents] # Concatenate x and y along the filter axes # Pass final axis through MLP # Aggregator: take the mean over all points # Have further MLP layers that map to the parameters of the Gaussian latent # First apply intermediate relu layer # Then apply further linear layers to output latent mu and log sigma # Compute sigma The Decoder. (A)NP decoder. Args: output_sizes: An iterable containing the output sizes of the decoder MLP as defined in `basic.Linear`. Decodes the individual targets. Args: representation: The representation of the context for target predictions. Tensor of shape [B,target_observations,?]. target_x: The x locations for the target query. Tensor of shape [B,target_observations,d_x]. Returns: dist: A multivariate Gaussian over the target points. A distribution over tensors of shape [B,target_observations,d_y]. mu: The mean of the multivariate Gaussian. Tensor of shape [B,target_observations,d_x]. sigma: The standard deviation of the multivariate Gaussian. Tensor of shape [B,target_observations,d_x]. # concatenate target_x and representation # Pass final axis through MLP # Get the mean an the variance # Bound the variance # Get the distribution The (A)NP model. Initialises the model. Args: latent_encoder_output_sizes: An iterable containing the sizes of hidden layers of the latent encoder. num_latents: The latent dimensionality. decoder_output_sizes: An iterable containing the sizes of hidden layers of the decoder. The last element should correspond to d_y * 2 (it encodes both mean and variance concatenated) use_deterministic_path: a boolean that indicates whether the deterministic encoder is used or not. 
deterministic_encoder_output_sizes: An iterable containing the sizes of hidden layers of the deterministic encoder. The last one is the size of the deterministic representation r. attention: The attention module used in the deterministic encoder. Only relevant when use_deterministic_path=True. Returns the predicted mean and variance at the target points. Args: query: Array containing ((context_x, context_y), target_x) where: context_x: Tensor of shape [B,num_contexts,d_x]. Contains the x values of the context points. context_y: Tensor of shape [B,num_contexts,d_y]. Contains the y values of the context points. target_x: Tensor of shape [B,num_targets,d_x]. Contains the x values of the target points. num_targets: Number of target points. target_y: The ground truth y values of the target y. Tensor of shape [B,num_targets,d_y]. Returns: log_p: The log_probability of the target_y given the predicted distribution. Tensor of shape [B,num_targets]. mu: The mean of the predicted distribution. Tensor of shape [B,num_targets,d_y]. sigma: The variance of the predicted distribution. Tensor of shape [B,num_targets,d_y]. # Pass query through the encoder and the decoder # For training, when target_y is available, use targets for latent encoder. # Note that targets contain contexts by design. # For testing, when target_y unavailable, use contexts for latent encoder. # If we want to calculate the log_prob for training we will make use of the # target_y. At test time the target_y is not available so we return None. Uniform attention. Equivalent to np. Args: q: queries. tensor of shape [B,m,d_k]. v: values. tensor of shape [B,n,d_v]. Returns: tensor of shape [B,m,d_v]. # [B,1,d_v] Computes laplace exponential attention. Args: q: queries. tensor of shape [B,m,d_k]. k: keys. tensor of shape [B,n,d_k]. v: values. tensor of shape [B,n,d_v]. scale: float that scales the L1 distance. normalise: Boolean that determines whether weights sum to 1. Returns: tensor of shape [B,m,d_v]. # [B,1,n,d_k] # [B,m,1,d_k] # [B,m,n,d_k] # [B,m,n] # [B,m,n] # [B,m,d_v] Computes dot product attention. Args: q: queries. tensor of shape [B,m,d_k]. k: keys. tensor of shape [B,n,d_k]. v: values. tensor of shape [B,n,d_v]. normalise: Boolean that determines whether weights sum to 1. Returns: tensor of shape [B,m,d_v]. # [B,m,n] # [B,m,n] # [B,m,d_v] Computes multi-head attention. Args: q: queries. tensor of shape [B,m,d_k]. k: keys. tensor of shape [B,n,d_k]. v: values. tensor of shape [B,n,d_v]. num_heads: number of heads. Should divide d_v. Returns: tensor of shape [B,m,d_v]. The Attention module. Create attention module. Takes in context inputs, target inputs and representations of each context input/output pair to output an aggregated representation of the context data. Args: rep: transformation to apply to contexts before computing attention. One of: ['identity','mlp']. output_sizes: list of number of hidden units per layer of mlp. Used only if rep == 'mlp'. att_type: type of attention. One of the following: ['uniform','laplace','dot_product','multihead'] scale: scale of attention. normalise: Boolean determining whether to: 1. apply softmax to weights so that they sum to 1 across context pts or 2. apply custom transformation to have weights in [0,1]. num_heads: number of heads for multihead. Apply attention to create aggregated representation of r. Args: x1: tensor of shape [B,n1,d_x]. x2: tensor of shape [B,n2,d_x]. r: tensor of shape [B,n1,d]. 
Returns: tensor of shape [B,n2,d] Raises: NameError: The argument for rep/type was invalid. # Pass through MLP | 3.012563 | 3 |
minotaur/_minotaur.py | giannitedesco/minotaur | 172 | 10257 | <filename>minotaur/_minotaur.py
from typing import Dict, Tuple, Optional
from pathlib import Path
import asyncio
from ._mask import Mask
from ._event import Event
from ._base import InotifyBase
__all__ = ('Minotaur',)
class Notification:
__slots__ = (
'_path',
'_type',
'_isdir',
'_unmount',
'_qoverflow',
)
def __init__(self,
path: Path,
type: Mask,
isdir: bool,
unmount: bool,
qoverflow: bool = False):
self._path = path
self._type = type
self._isdir = bool(isdir)
self._unmount = bool(unmount)
self._qoverflow = bool(qoverflow)
@property
def isdir(self) -> bool:
return self._isdir
@property
def unmount(self) -> bool:
return self._unmount
@property
def qoverflow(self) -> bool:
return self._qoverflow
@property
def path(self) -> Path:
return self._path
def __repr__(self) -> str:
t = self._isdir and 'dir' or 'file'
return f'{type(self).__name__}({self._type.name} {t} {self._path})'
@classmethod
def create(cls, path: Path, mask: Mask) -> 'Notification':
return cls(path,
mask & Mask.EVENT_TYPE,
bool(mask & Mask.ISDIR),
bool(mask & Mask.UNMOUNT),
bool(mask & Mask.Q_OVERFLOW))
class Minotaur(InotifyBase):
"""
Fancy interface for Inotify which does questionable things like:
1. Resolve watch-descriptors back to paths (which races with renames of
original paths and can't be used safely, but other inotify packages
provide this feature, so here it is for your delectation).
2. Link rename_from/rename_to events together. This feature would be
useful but isn't yet actually implemented. Working on it...
"""
__slots__ = (
'_wdmap',
'_cmap',
)
_wdmap: Dict[int, Path]
_cmap: Dict[Tuple[int, int], Event]
def __init__(self,
blocking: bool = True,
cloexec: bool = True,
loop: Optional[asyncio.AbstractEventLoop] = None,
) -> None:
super().__init__(blocking, cloexec, loop)
self._wdmap = {}
self._cmap = {}
def add_watch(self, p: Path, mask: Mask) -> int:
try:
wd = super().add_watch(p, mask)
except Exception:
raise
else:
self._wdmap[wd] = p.resolve()
return wd
def rm_watch(self, wd: int) -> int:
try:
ret = super().rm_watch(wd)
except Exception:
raise
else:
# a `return` inside `try` would skip this else-block and leak the wd mapping
del self._wdmap[wd]
return ret
def _resolve_path(self, wd: int, name: Path) -> Path:
try:
base_dir = self._wdmap[wd]
except KeyError:
path = name
else:
path = base_dir / name
return path
def __next__(self) -> Notification:
evt = super()._next_event()
if evt is None:
raise StopIteration
# TODO: Link rename_from/rename_to together if we have them
path = self._resolve_path(evt.wd, evt.name)
return Notification.create(path, evt.mask)
async def __anext__(self) -> Notification:
evt = await super()._next_event_async()
if evt is None:
raise StopAsyncIteration
path = self._resolve_path(evt.wd, evt.name)
return Notification.create(path, evt.mask)
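# Editor's note: illustrative usage only, not taken from the package docs. It
# assumes Mask also exposes inotify-style flags such as CREATE/DELETE (only
# EVENT_TYPE/ISDIR/UNMOUNT/Q_OVERFLOW are referenced above) and that
# InotifyBase provides __aiter__ so `async for` drives the __anext__ defined above.
async def _print_events(directory: Path) -> None:
    mn = Minotaur(blocking=False)
    mn.add_watch(directory, Mask.CREATE | Mask.DELETE)
    async for notification in mn:
        print(notification.path, notification.isdir)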
| <filename>minotaur/_minotaur.py
from typing import Dict, Tuple, Optional
from pathlib import Path
import asyncio
from ._mask import Mask
from ._event import Event
from ._base import InotifyBase
__all__ = ('Minotaur',)
class Notification:
__slots__ = (
'_path',
'_type',
'_isdir',
'_unmount',
'_qoverflow',
)
def __init__(self,
path: Path,
type: Mask,
isdir: bool,
unmount: bool,
qoverflow: bool = False):
self._path = path
self._type = type
self._isdir = bool(isdir)
self._unmount = bool(unmount)
self._qoverflow = bool(qoverflow)
@property
def isdir(self) -> bool:
return self._isdir
@property
def unmount(self) -> bool:
return self._unmount
@property
def qoverflow(self) -> bool:
return self._qoverflow
@property
def path(self) -> Path:
return self._path
def __repr__(self) -> str:
t = self._isdir and 'dir' or 'file'
return f'{type(self).__name__}({self._type.name} {t} {self._path})'
@classmethod
def create(cls, path: Path, mask: Mask) -> 'Notification':
return cls(path,
mask & Mask.EVENT_TYPE,
bool(mask & Mask.ISDIR),
bool(mask & Mask.UNMOUNT),
bool(mask & Mask.Q_OVERFLOW))
class Minotaur(InotifyBase):
"""
Fancy interface for Inotify which does questionable things like:
1. Resolve watch-descriptors back to paths (which races with renames of
original paths and can't be used safely, but other inotify packages
provide this feature, so here it is for your delectation).
2. Link rename_from/rename_to events together. This feature would be
useful but isn't yet actually implemented. Working on it...
"""
__slots__ = (
'_wdmap',
'_cmap',
)
_wdmap: Dict[int, Path]
_cmap: Dict[Tuple[int, int], Event]
def __init__(self,
blocking: bool = True,
cloexec: bool = True,
loop: Optional[asyncio.AbstractEventLoop] = None,
) -> None:
super().__init__(blocking, cloexec, loop)
self._wdmap = {}
self._cmap = {}
def add_watch(self, p: Path, mask: Mask) -> int:
try:
wd = super().add_watch(p, mask)
except Exception:
raise
else:
self._wdmap[wd] = p.resolve()
return wd
def rm_watch(self, wd: int) -> int:
try:
ret = super().rm_watch(wd)
except Exception:
raise
else:
# a `return` inside `try` would skip this else-block and leak the wd mapping
del self._wdmap[wd]
return ret
def _resolve_path(self, wd: int, name: Path) -> Path:
try:
base_dir = self._wdmap[wd]
except KeyError:
path = name
else:
path = base_dir / name
return path
def __next__(self) -> Notification:
evt = super()._next_event()
if evt is None:
raise StopIteration
# TODO: Link rename_from/rename_to together if we have them
path = self._resolve_path(evt.wd, evt.name)
return Notification.create(path, evt.mask)
async def __anext__(self) -> Notification:
evt = await super()._next_event_async()
if evt is None:
raise StopAsyncIteration
path = self._resolve_path(evt.wd, evt.name)
return Notification.create(path, evt.mask)
| en | 0.932626 | Fancy interface for Inotify which does questionable things like: 1. Resolve watch-descriptors back to paths (which races with renames of original paths and can't be used safely, but other inotify packages provide this feature, so here it is for your delectation). 2. Link rename_from/rename_to events together. This feature would be useful but isn't yet actually implemented. Working on it... # TODO: Link rename_from/rename_to together if we have them | 2.328925 | 2 |
pyclustering/container/examples/__init__.py | JosephChataignon/pyclustering | 1,013 | 10258 | <reponame>JosephChataignon/pyclustering
"""!
@brief Collection of examples devoted to containers.
@authors <NAME> (<EMAIL>)
@date 2014-2020
@copyright BSD-3-Clause
""" | """!
@brief Collection of examples devoted to containers.
@authors <NAME> (<EMAIL>)
@date 2014-2020
@copyright BSD-3-Clause
""" | en | 0.532854 | !
@brief Collection of examples devoted to containers.
@authors <NAME> (<EMAIL>)
@date 2014-2020
@copyright BSD-3-Clause | 1.233931 | 1 |
novelty-detection/train_wood_vgg19.py | matherm/python-data-science | 1 | 10259 | <reponame>matherm/python-data-science
import argparse
import sys
import torch
import numpy as np
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
parser = argparse.ArgumentParser(description='PyTorch Novelty Detection')
# TRAINING PARAMS
parser.add_argument('--epochs', type=int, default=100, metavar='',
help='Amount of epochs for training (default: 100)')
parser.add_argument('--batch_size', type=int, default=1000, metavar='',
help='Batch size for SGD (default: 1000)')
parser.add_argument('--lrate', type=float, default=0.0001, metavar="",
help="Learning rate (default: 0.0001)")
parser.add_argument('--with_cuda', action='store_true', dest='use_cuda',
help="Shall cuda be used (default: False)")
parser.add_argument('--model', type=int, default=0,
help="Which model to train (0=KLminimizer, 1=Euclidean-Minimizer) (default: 0)")
parser.add_argument('--plots', action='store_true', dest='plots',
help="Shall matplotlib be used (default: False)")
parser.add_argument('--grid', action='store_true', dest='grid',
help="Grid search (default: False)")
argv = parser.parse_args()
sys.argv = [sys.argv[0]]
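# Editor's note: typical invocation (illustrative), using the flags defined above:
#   python train_wood_vgg19.py --epochs 100 --batch_size 1000 --with_cuda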
from ummon import *
from negvarbound import *
from model import *
from helpers import Evaluator
import helpers
torch.manual_seed(4)
if __name__ == '__main__':
# WOOD
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), VGG19Features("pool4"), helpers.flatten_transform])
wood_data = ImagePatches("/ext/data/Wood-0035.png", mode='rgb', train=True, stride_y=14, stride_x=14, window_size=28, transform=transform)
wood_data_test = AnomalyImagePatches("/ext/data/Wood-0035.png", mode='rgb', train=True, stride_y=14, stride_x=14, window_size=28, transform=transform, propability=1.0, anomaly=SquareAnomaly(size=8, color=255))
wood_data = [wood_data[i][0].data for i in range(len(wood_data))]
wood_data = torch.stack(wood_data).numpy() / 10
wood_data_test = [wood_data_test[i][0].data for i in range(len(wood_data_test))]
wood_data_test = torch.stack(wood_data_test).numpy() / 10
# Novelty
data_novelty = wood_data_test
# Train
data_train = wood_data
# Val
data_val = data_train
######################################################
# NORMAL DISTRIBUTION
######################################################
# Model
model = ModelNormal(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = KLLoss(model=model, size_average=False)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=argv.lrate, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-5)
# START TRAINING
my_trainer.fit(dataloader_training=(wood_data, 20),
epochs=200)
evaluator.evaluate_model(argv)
######################################################
# LOGNORMAL
######################################################
# Model
model = ModelLogNormal(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = KLLoss_lognormal(model=model, size_average=False)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=argv.lrate, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-5)
# START TRAINING
my_trainer.fit(dataloader_training=(data_train, 20),
epochs=argv.epochs)
evaluator.evaluate_model(argv)
######################################################
# LAPLACE
######################################################
# Model
model = ModelLaplace(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = KLLoss_laplace(model=model, size_average=False, mean=2, scale=0.5)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=0.000001, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-1)
# START TRAINING
my_trainer.fit(dataloader_training=(data_train, 20),
epochs=300)
evaluator.evaluate_model(argv)
# {'AUROC LAT (TRAIN)': 0.8743801652892562,
# 'AUROC LAT (VAL)': 0.8661157024793389,
# 'AUROC REC (TRAIN)': 0.86900826446281,
# 'AUROC REC (VAL)': 0.8528925619834712}
######################################################
# LAPLACE WITH R-SHIFT
######################################################
class CombinedLoss(nn.Module):
def __init__(self, model, *args, **kwargs):
super(CombinedLoss, self).__init__()
self.model = model
self.r_shift = KLLoss_shift_r(model=model, size_average=False)
self.kl_loss = KLLoss_laplace(model=model, size_average=False, mean=10, scale=0.3)
def forward(self, inpt, outpt):
self.r_shift()
return self.kl_loss(inpt,outpt)
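    # Note (assumption, not stated in the original code): r_shift appears to
    # be called purely for its side effect on the model's internal state,
    # while the Laplace KL term provides the loss value that is actually
    # returned to the optimizer.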
# Model
model = ModelLaplace(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = CombinedLoss(model)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=argv.lrate, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-3)
# START TRAINING
my_trainer.fit(dataloader_training=(data_train, 20),
epochs=200)
evaluator.evaluate_model(argv)
# {'AUROC LAT (TRAIN)': 0.8590909090909091,
# 'AUROC LAT (VAL)': 0.8752066115702479,
# 'AUROC REC (TRAIN)': 0.8677685950413224,
# 'AUROC REC (VAL)': 0.8619834710743801}
| import argparse
import sys
import torch
import numpy as np
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.datasets import CIFAR10
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
parser = argparse.ArgumentParser(description='PyTorch Novelty Detection')
# TRAINING PARAMS
parser.add_argument('--epochs', type=int, default=100, metavar='',
help='Amount of epochs for training (default: 100)')
parser.add_argument('--batch_size', type=int, default=1000, metavar='',
                    help='Batch size for SGD (default: 1000)')
parser.add_argument('--lrate', type=float, default=0.0001, metavar="",
                    help="Learning rate (default: 0.0001)")
parser.add_argument('--with_cuda', action='store_true', dest='use_cuda',
help="Shall cuda be used (default: False)")
parser.add_argument('--model', type=int, default=0,
help="Which model to train (0=KLminimizer, 1=Euclidean-Minimizer) (default: 0)")
parser.add_argument('--plots', action='store_true', dest='plots',
help="Shall matplotlib be used (default: False)")
parser.add_argument('--grid', action='store_true', dest='grid',
help="Grid search (default: False)")
argv = parser.parse_args()
sys.argv = [sys.argv[0]]
from ummon import *
from negvarbound import *
from model import *
from helpers import Evaluator
import helpers
torch.manual_seed(4)
if __name__ == '__main__':
# WOOD
    transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), VGG19Features("pool4"), helpers.flatten_transform])
wood_data = ImagePatches("/ext/data/Wood-0035.png", mode='rgb', train=True, stride_y=14, stride_x=14, window_size=28, transform=transform)
wood_data_test = AnomalyImagePatches("/ext/data/Wood-0035.png", mode='rgb', train=True, stride_y=14, stride_x=14, window_size=28, transform=transform, propability=1.0, anomaly=SquareAnomaly(size=8, color=255))
wood_data = [wood_data[i][0].data for i in range(len(wood_data))]
wood_data = torch.stack(wood_data).numpy() / 10
wood_data_test = [wood_data_test[i][0].data for i in range(len(wood_data_test))]
wood_data_test = torch.stack(wood_data_test).numpy() / 10
# Novelty
data_novelty = wood_data_test
# Train
data_train = wood_data
# Val
data_val = data_train
######################################################
# NORMAL DISTRIBUTION
######################################################
# Model
model = ModelNormal(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = KLLoss(model=model, size_average=False)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=argv.lrate, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-5)
# START TRAINING
my_trainer.fit(dataloader_training=(wood_data, 20),
epochs=200)
evaluator.evaluate_model(argv)
######################################################
# LOGNORMAL
######################################################
# Model
model = ModelLogNormal(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = KLLoss_lognormal(model=model, size_average=False)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=argv.lrate, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-5)
# START TRAINING
my_trainer.fit(dataloader_training=(data_train, 20),
epochs=argv.epochs)
evaluator.evaluate_model(argv)
######################################################
# LAPLACE
######################################################
# Model
model = ModelLaplace(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = KLLoss_laplace(model=model, size_average=False, mean=2, scale=0.5)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=0.000001, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-1)
# START TRAINING
my_trainer.fit(dataloader_training=(data_train, 20),
epochs=300)
evaluator.evaluate_model(argv)
# {'AUROC LAT (TRAIN)': 0.8743801652892562,
# 'AUROC LAT (VAL)': 0.8661157024793389,
# 'AUROC REC (TRAIN)': 0.86900826446281,
# 'AUROC REC (VAL)': 0.8528925619834712}
######################################################
# LAPLACE WITH R-SHIFT
######################################################
class CombinedLoss(nn.Module):
def __init__(self, model, *args, **kwargs):
super(CombinedLoss, self).__init__()
self.model = model
self.r_shift = KLLoss_shift_r(model=model, size_average=False)
self.kl_loss = KLLoss_laplace(model=model, size_average=False, mean=10, scale=0.3)
def forward(self, inpt, outpt):
self.r_shift()
return self.kl_loss(inpt,outpt)
# Model
model = ModelLaplace(input_features = data_train.shape[1], hidden_layer=20, latent_features=20)
torch.manual_seed(4)
# LOSS
criterion = CombinedLoss(model)
# INSTANTIATE OPTIMIZER
optimizer = torch.optim.SGD(model.parameters(), lr=argv.lrate, weight_decay=1)
#Evaluator
evaluator = Evaluator(model, data_train, data_val, data_novelty)
# Activate matplotlib
argv.plots = True
with Logger(loglevel=10, log_batch_interval=601) as lg:
# CREATE A TRAINER
my_trainer = UnsupervisedTrainer(lg,
model,
criterion,
optimizer,
trainingstate = Trainingstate(),
model_filename="KL_MIN",
use_cuda= argv.use_cuda,
profile = False,
convergence_eps = 1e-3)
# START TRAINING
my_trainer.fit(dataloader_training=(data_train, 20),
epochs=200)
evaluator.evaluate_model(argv)
# {'AUROC LAT (TRAIN)': 0.8590909090909091,
# 'AUROC LAT (VAL)': 0.8752066115702479,
# 'AUROC REC (TRAIN)': 0.8677685950413224,
# 'AUROC REC (VAL)': 0.8619834710743801} | de | 0.33531 | # TRAINING PARAMS # WOOD # Novelty # Train # Val ###################################################### # NORMAL DISTRIBUTION ###################################################### # Model # LOSS # INSTANTIATE OPTIMIZER #Evaluator # Activate matplotlib # CREATE A TRAINER # START TRAINING ###################################################### # LOGNORMAL ###################################################### # Model # LOSS # INSTANTIATE OPTIMIZER #Evaluator # Activate matplotlib # CREATE A TRAINER # START TRAINING ###################################################### # LAPLACE ###################################################### # Model # LOSS # INSTANTIATE OPTIMIZER #Evaluator # Activate matplotlib # CREATE A TRAINER # START TRAINING # {'AUROC LAT (TRAIN)': 0.8743801652892562, # 'AUROC LAT (VAL)': 0.8661157024793389, # 'AUROC REC (TRAIN)': 0.86900826446281, # 'AUROC REC (VAL)': 0.8528925619834712} ###################################################### # LAPLACE WITH R-SHIFT ###################################################### # Model # LOSS # INSTANTIATE OPTIMIZER #Evaluator # Activate matplotlib # CREATE A TRAINER # START TRAINING # {'AUROC LAT (TRAIN)': 0.8590909090909091, # 'AUROC LAT (VAL)': 0.8752066115702479, # 'AUROC REC (TRAIN)': 0.8677685950413224, # 'AUROC REC (VAL)': 0.8619834710743801} | 2.441559 | 2 |
sample_architectures/cnn.py | hvarS/PyTorch-Refer | 0 | 10260 | # -*- coding: utf-8 -*-
"""CNN.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1Tq6HUya2PrC0SmyOIFo2c_eVtguRED2q
"""
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
import torchvision.datasets as datasets
import torchvision.transforms as transforms
class CNN(nn.Module):
def __init__(self,in_channels = 1,num_classes = 10):
super(CNN,self).__init__()
self.conv1 = nn.Conv2d(in_channels= in_channels,out_channels = 8,kernel_size =(3,3),stride = (1,1),padding = (1,1))
self.pool1 = nn.MaxPool2d(kernel_size=(2,2),stride=(2,2))
self.conv2 = nn.Conv2d(in_channels= 8,out_channels = 16,kernel_size =(3,3),stride = (1,1),padding = (1,1))
self.pool2 = nn.MaxPool2d(kernel_size=(2,2),stride=(2,2))
self.fc1 = nn.Linear(16*7*7,num_classes)
def forward(self,x):
x = F.relu(self.conv1(x))
x = self.pool1(x)
x = F.relu(self.conv2(x))
x = self.pool2(x)
x = x.reshape(x.shape[0],-1)
x = self.fc1(x)
return x
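# Shape walk-through for the network above (MNIST input is (N, 1, 28, 28)):
# conv1 uses a 3x3 kernel with stride 1 and padding 1, so the spatial size
# stays 28x28 (8 channels); pool1 halves it to 14x14. conv2 likewise keeps
# 14x14 (16 channels) and pool2 halves it to 7x7, hence the flattened feature
# size of 16 * 7 * 7 = 784 expected by fc1.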
model = CNN(1,10)
x = torch.randn((64,1,28,28))
print(model(x).shape)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
in_channels = 1
num_classes = 10
learning_rate = 0.001
batch_size = 64
num_epochs = 4
train_dataset = datasets.MNIST(root = "dataset/",train = True,transform = transforms.ToTensor(),download = True)
train_loader = DataLoader(dataset=train_dataset,batch_size=64,shuffle=True)
test_dataset = datasets.MNIST(root = "dataset/",train = False,transform = transforms.ToTensor(),download = True)
test_loader = DataLoader(dataset = test_dataset,batch_size = batch_size,shuffle = True)
model = CNN(1,10).to(device = device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(),lr = learning_rate)
for epoch in range(num_epochs):
for batch_idx,(data,targets) in enumerate(train_loader):
#get data to cuda if possible
data = data.cuda()
targets = targets.cuda()
scores = model(data)
loss = criterion(scores,targets)
#backward
optimizer.zero_grad()
loss.backward()
#gradient_descent or adam-step
optimizer.step()
# Check the accuracy for the training step
def check_accuracy(loader,model):
if loader.dataset.train:
print("Checking accuracy on training data")
else:
print("Checking accuracy on test data")
num_correct = 0
num_samples = 0
model.eval()
with torch.no_grad():
for x,y in loader:
x = x.cuda()
y = y.cuda()
scores = model(x)
_,predictions = scores.max(1)
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
print(f' Got {num_correct}/{num_samples} with accuracy ={float(num_correct)/float(num_samples)*100:.2f} ')
model.train()
check_accuracy(train_loader,model)
check_accuracy(test_loader,model)
| # -*- coding: utf-8 -*-
"""CNN.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1Tq6HUya2PrC0SmyOIFo2c_eVtguRED2q
"""
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
import torchvision.datasets as datasets
import torchvision.transforms as transforms
class CNN(nn.Module):
def __init__(self,in_channels = 1,num_classes = 10):
super(CNN,self).__init__()
self.conv1 = nn.Conv2d(in_channels= in_channels,out_channels = 8,kernel_size =(3,3),stride = (1,1),padding = (1,1))
self.pool1 = nn.MaxPool2d(kernel_size=(2,2),stride=(2,2))
self.conv2 = nn.Conv2d(in_channels= 8,out_channels = 16,kernel_size =(3,3),stride = (1,1),padding = (1,1))
self.pool2 = nn.MaxPool2d(kernel_size=(2,2),stride=(2,2))
self.fc1 = nn.Linear(16*7*7,num_classes)
def forward(self,x):
x = F.relu(self.conv1(x))
x = self.pool1(x)
x = F.relu(self.conv2(x))
x = self.pool2(x)
x = x.reshape(x.shape[0],-1)
x = self.fc1(x)
return x
model = CNN(1,10)
x = torch.randn((64,1,28,28))
print(model(x).shape)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
in_channels = 1
num_classes = 10
learning_rate = 0.001
batch_size = 64
num_epochs = 4
train_dataset = datasets.MNIST(root = "dataset/",train = True,transform = transforms.ToTensor(),download = True)
train_loader = DataLoader(dataset=train_dataset,batch_size=64,shuffle=True)
test_dataset = datasets.MNIST(root = "dataset/",train = False,transform = transforms.ToTensor(),download = True)
test_loader = DataLoader(dataset = test_dataset,batch_size = batch_size,shuffle = True)
model = CNN(1,10).to(device = device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(),lr = learning_rate)
for epoch in range(num_epochs):
for batch_idx,(data,targets) in enumerate(train_loader):
#get data to cuda if possible
data = data.cuda()
targets = targets.cuda()
scores = model(data)
loss = criterion(scores,targets)
#backward
optimizer.zero_grad()
loss.backward()
#gradient_descent or adam-step
optimizer.step()
# Check the accuracy for the training step
def check_accuracy(loader,model):
if loader.dataset.train:
print("Checking accuracy on training data")
else:
print("Checking accuracy on test data")
num_correct = 0
num_samples = 0
model.eval()
with torch.no_grad():
for x,y in loader:
x = x.cuda()
y = y.cuda()
scores = model(x)
_,predictions = scores.max(1)
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
print(f' Got {num_correct}/{num_samples} with accuracy ={float(num_correct)/float(num_samples)*100:.2f} ')
model.train()
check_accuracy(train_loader,model)
check_accuracy(test_loader,model)
| en | 0.798453 | # -*- coding: utf-8 -*- CNN.ipynb Automatically generated by Colaboratory. Original file is located at https://colab.research.google.com/drive/1Tq6HUya2PrC0SmyOIFo2c_eVtguRED2q #get data to cuda if possible #backward #gradient_descent or adam-step # Check the accuracy for the training step | 3.112416 | 3 |
aispace/layers/callbacks/qa_evaluators.py | SmileGoat/AiSpace | 32 | 10261 | <reponame>SmileGoat/AiSpace
# -*- coding: utf-8 -*-
# @Time : 2020-07-30 15:06
# @Author : yingyuankai
# @Email : <EMAIL>
# @File : qa_evaluators.py
import os
import logging
import numpy as np
import tensorflow as tf
import json
from pprint import pprint
from collections import defaultdict
from aispace.utils.eval_utils import calc_em_score, calc_f1_score
from aispace.utils.io_utils import save_json
from aispace.utils.print_utils import print_boxed
from aispace.utils.metrics_utils import ConfusionMatrix
__all__ = [
'EvaluatorForQaSimple',
'EvaluatorForQaWithImpossible'
]
logger = logging.getLogger(__name__)
class EvaluatorForQaSimple(tf.keras.callbacks.Callback):
"""
start_top_log_prob and end_top_log_prob's shape is [batch, k]
ref: https://keras.io/examples/nlp/text_extraction_with_bert/
"""
def __init__(self, validation_dataset, validation_steps, test_dataset, test_steps, report_dir, max_answer_length=64, n_best_size=5):
self.validation_dataset = validation_dataset
self.validation_steps = validation_steps
self.test_dataset = test_dataset
self.test_steps = test_steps
self.max_answer_length = max_answer_length
self.n_best_size = n_best_size
self.report_dir = report_dir
def on_epoch_end(self, epoch, logs=None):
new_logs = self.eval_process(self.validation_dataset, self.validation_steps)
logs = logs or {}
logs.update(new_logs)
print(f"Epoch: {epoch + 1}, val_f1_score: {logs['val_f1_score']:.4f}, val_em_score: {logs['val_em_score']:.4f}, "
f"val_f1_em_avg_score: {logs['val_f1_em_avg_score']:.4f}")
def on_train_end(self, logs=None):
logger.info("Start Evaluate.")
if not os.path.exists(self.report_dir):
os.makedirs(self.report_dir)
new_logs = self.eval_process(self.test_dataset, self.test_steps)
save_json(os.path.join(self.report_dir, 'performance.json'), new_logs)
print_boxed(f"Question Answer Evaluation")
pprint(new_logs)
logger.info(f"Save question answer reports in {self.report_dir}")
def eval_process(self, dataset, n_steps=None):
f1 = 0
em = 0
total_count = 0
skip_count = 0
start_top_res, end_top_res, unique_id_res = self.model.predict(dataset, steps=n_steps)
start_top_log_prob, start_top_index = start_top_res[:, :, 0], start_top_res[:, :, 1].astype(np.int) # [b, k]
end_top_log_prob, end_top_index = end_top_res[:, :, 0], end_top_res[:, :, 1].astype(np.int) # [b, k]
unique_id_res = unique_id_res.astype(np.int)
# predict results
results = {}
for i in range(end_top_index.shape[0]):
unique_id = unique_id_res[i][0]
itm = {
'unique_id': unique_id,
'start_top_log_prob': start_top_log_prob[i],
'start_top_index': start_top_index[i],
'end_top_log_prob': end_top_log_prob[i],
'end_top_index': end_top_index[i],
}
results[unique_id] = itm
# raw inputs
start_n_top, end_n_top = start_top_index.shape[-1], end_top_index.shape[-1]
qas_id_to_examples = defaultdict(list)
unique_id_to_examples = {}
for idx, (inputs, outputs) in enumerate(dataset):
if n_steps is not None and idx >= n_steps:
break
unique_ids = inputs['unique_id'].numpy().astype(np.int).tolist()
offsets = inputs['offset'].numpy().astype(np.int).tolist()
qas_ids = inputs['qas_id'].numpy().astype(str).tolist()
doc_token2char_raw_start_indexs = inputs['doc_token2char_raw_start_index'].numpy().astype(str).tolist()
doc_token2char_raw_end_indexs = inputs['doc_token2char_raw_end_index'].numpy().astype(str).tolist()
doc_token2doc_indexs = inputs['doc_token2doc_index'].numpy().astype(str).tolist()
all_answers = inputs['all_answers'].numpy().astype(str).tolist()
answer_texts = inputs['answer_text'].numpy().tolist()
context_texts = inputs['context_text'].numpy().tolist()
question_texts = inputs['question_text'].numpy().tolist()
is_impossibles = inputs['is_impossible'].numpy().tolist()
p_masks = inputs['p_mask'].numpy().astype(np.int).tolist()
for t in range(len(unique_ids)):
itm = {
'unique_id': unique_ids[t],
'qas_id': qas_ids[t],
'question_text': question_texts[t].decode("utf8"),
'context_text': context_texts[t].decode("utf8"),
'answer_text': answer_texts[t].decode("utf8"),
'all_answers': json.loads(all_answers[t]),
'doc_token2char_raw_start_index': json.loads(doc_token2char_raw_start_indexs[t]),
'doc_token2char_raw_end_index': json.loads(doc_token2char_raw_end_indexs[t]),
'doc_token2doc_index': json.loads(doc_token2doc_indexs[t]),
'is_impossible': is_impossibles[t],
'p_mask': p_masks[t],
'offset': offsets[t]
}
unique_id_to_examples[unique_ids[t]] = itm
qas_id_to_examples[qas_ids[t]].append(itm)
for qas_id, examples in qas_id_to_examples.items():
example_all_predicts = []
answers = set()
for example in examples:
cur_unique_id = example['unique_id']
if cur_unique_id not in results:
continue
if example['is_impossible'] == 1:
continue
# if example['answer_text'] not in answers:
# answers.append(example['answer_text'])
answers |= set(example['all_answers'])
cur_result = results.get(cur_unique_id)
cur_start_top_log_prob = cur_result['start_top_log_prob']
cur_start_top_index = cur_result['start_top_index']
cur_end_top_log_prob = cur_result['end_top_log_prob']
cur_end_top_index = cur_result['end_top_index']
cur_p_mask = example['p_mask']
for i in range(start_n_top):
start_prob = cur_start_top_log_prob[i]
start_index = cur_start_top_index[i]
if not cur_p_mask[start_index]:
continue
for j in range(end_n_top):
end_prob = cur_end_top_log_prob[j]
end_index = cur_end_top_index[j]
if not cur_p_mask[end_index]:
continue
answer_length = end_index - start_index + 1
if end_index < start_index or answer_length > self.max_answer_length:
continue
itm = {
'unique_id': cur_unique_id,
'start_prob': start_prob,
'start_index': start_index,
'end_prob': end_prob,
'end_index': end_index,
'predict_score': np.log(start_prob) + np.log(end_prob)
}
example_all_predicts.append(itm)
if len(answers) != 0:
total_count += 1
else:
skip_count += 1
continue
example_all_predicts.sort(key=lambda s: s['predict_score'], reverse=True)
example_top_predicts = []
is_visited = set()
for example_predict in example_all_predicts:
if len(example_top_predicts) >= self.n_best_size:
break
example_feature = unique_id_to_examples[example_predict['unique_id']]
if example_predict['start_index'] - example_feature['offset'] < 0 or example_predict['end_index'] - example_feature['offset'] < 0:
predict_text = ""
else:
predict_start = example_feature['doc_token2char_raw_start_index'][
example_predict['start_index'] - example_feature['offset']]
predict_end = example_feature['doc_token2char_raw_end_index'][
example_predict['end_index'] - example_feature['offset']]
predict_text = example_feature['context_text'][predict_start: predict_end + 1].strip()
if predict_text in is_visited:
continue
is_visited.add(predict_text)
itm = {
'predict_text': predict_text,
'start_prob': example_predict['start_prob'],
'end_prob': example_predict['end_prob'],
'predict_score': example_predict['predict_score']
}
example_top_predicts.append(itm)
if len(example_top_predicts) == 0:
example_top_predicts.append(
{
'predict_text': "",
'start_prob': 0.,
'end_prob': 0.,
'predict_score': 0.
}
)
example_best_predict = example_top_predicts[0]
cur_f1 = calc_f1_score(list(answers), example_best_predict['predict_text'])
cur_em = calc_em_score(list(answers), example_best_predict['predict_text'])
f1 += cur_f1
em += cur_em
# debug
if cur_f1 == 0 or cur_em == 0:
example_output = {}
example_output.update(example_best_predict)
example_output['question'] = examples[0]['question_text']
example_output['answer'] = answers
example_output['f1'] = cur_f1
example_output['em'] = cur_em
print(example_output)
# total_count = len(qas_id_to_examples)
f1_score = f1 / total_count
em_score = em / total_count
logs = {}
logs['skip_count'] = skip_count
logs['total'] = total_count
logs['val_f1_score'] = f1_score
logs['val_em_score'] = em_score
logs['val_f1_em_avg_score'] = (em_score + f1_score) / 2.
return logs
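# Illustrative sketch (not part of the original module and not used by the
# callbacks): given top-k start/end probabilities for a single example, the
# best span is the (start, end) pair maximising log(p_start) + log(p_end),
# subject to start <= end and a maximum answer length -- the same scoring
# rule EvaluatorForQaSimple applies above (p_mask filtering omitted here).
def _best_span_sketch(start_probs, start_indexes, end_probs, end_indexes,
                      max_answer_length=64):
    """Return (start_index, end_index, score) of the best span, or None."""
    best = None
    for s_prob, s_idx in zip(start_probs, start_indexes):
        for e_prob, e_idx in zip(end_probs, end_indexes):
            if e_idx < s_idx or e_idx - s_idx + 1 > max_answer_length:
                continue
            score = np.log(s_prob) + np.log(e_prob)
            if best is None or score > best[2]:
                best = (int(s_idx), int(e_idx), float(score))
    return best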
class EvaluatorForQaWithImpossible(tf.keras.callbacks.Callback):
"""
start_top_log_prob and end_top_log_prob's shape is [batch, k, k]
ref: https://keras.io/examples/nlp/text_extraction_with_bert/
"""
def __init__(self, validation_dataset, validation_steps, test_dataset, test_steps,
report_dir, max_answer_length=64, n_best_size=5, is_impossible_threshold=0.5, weights=[1., 1., 1.]):
self.validation_dataset = validation_dataset
self.validation_steps = validation_steps
self.test_dataset = test_dataset
self.test_steps = test_steps
self.max_answer_length = max_answer_length
self.n_best_size = n_best_size
self.report_dir = report_dir
self.is_impossible_threshold = is_impossible_threshold
self.weights = weights
def on_epoch_end(self, epoch, logs=None):
new_logs = self.eval_process(self.validation_dataset, self.validation_steps)
logs = logs or {}
logs.update(new_logs)
print(f"\nEpoch: {epoch + 1}, val_f1_score: {logs['val_f1_score']:.4f}, "
f"val_em_score: {logs['val_em_score']:.4f}, "
f"val_f1_em_avg_score: {logs['val_f1_em_avg_score']:.4f},"
f" val_f1_for_impossible: {logs['val_f1_for_impossible']:.4f},"
f" val_f1_avg_score: {logs['val_f1_avg_score']:.4f},")
def on_train_end(self, logs=None):
logger.info("Start Evaluate.")
if not os.path.exists(self.report_dir):
os.makedirs(self.report_dir)
new_logs = self.eval_process(self.test_dataset, self.test_steps)
save_json(os.path.join(self.report_dir, 'performance.json'), new_logs)
print_boxed(f"Question Answer Evaluation")
pprint(new_logs)
logger.info(f"Save question answer reports in {self.report_dir}")
def eval_process(self, dataset, n_steps=None):
f1 = 0
em = 0
total_count = 0
skip_count = 0
start_top_res, end_top_res, answer_prob, unique_id_res = self.model.predict(dataset, steps=n_steps)
start_top_log_prob, start_top_index = start_top_res[:, :, 0], start_top_res[:, :, 1].astype(np.int) # [b, k]
end_top_log_prob, end_top_index = end_top_res[:, :, :, 0], end_top_res[:, :, :, 1].astype(np.int) # [b, k, k]
unique_id_res = unique_id_res.astype(np.int)
# predict results
results = {}
for i in range(end_top_index.shape[0]):
unique_id = unique_id_res[i][0]
itm = {
'unique_id': unique_id,
'start_top_log_prob': start_top_log_prob[i],
'start_top_index': start_top_index[i],
'end_top_log_prob': end_top_log_prob[i],
'end_top_index': end_top_index[i],
'is_impossible_prob': answer_prob[i][0]
}
results[unique_id] = itm
# raw inputs
start_n_top, end_n_top = end_top_index.shape[1:]
qas_id_to_examples = defaultdict(list)
unique_id_to_examples = {}
for idx, (inputs, outputs) in enumerate(dataset):
if n_steps is not None and idx >= n_steps:
break
unique_ids = inputs['unique_id'].numpy().astype(np.int).tolist()
offsets = inputs['offset'].numpy().astype(np.int).tolist()
qas_ids = inputs['qas_id'].numpy().astype(str).tolist()
doc_token2char_raw_start_indexs = inputs['doc_token2char_raw_start_index'].numpy().astype(str).tolist()
doc_token2char_raw_end_indexs = inputs['doc_token2char_raw_end_index'].numpy().astype(str).tolist()
doc_token2doc_indexs = inputs['doc_token2doc_index'].numpy().astype(str).tolist()
all_answers = inputs['all_answers'].numpy().astype(str).tolist()
answer_texts = inputs['answer_text'].numpy().tolist()
context_texts = inputs['context_text'].numpy().tolist()
question_texts = inputs['question_text'].numpy().tolist()
is_impossibles = inputs['is_impossible'].numpy().tolist()
p_masks = inputs['p_mask'].numpy().astype(np.int).tolist()
for t in range(len(unique_ids)):
itm = {
'unique_id': unique_ids[t],
'qas_id': qas_ids[t],
'question_text': question_texts[t].decode("utf8"),
'context_text': context_texts[t].decode("utf8"),
'answer_text': answer_texts[t].decode("utf8"),
'all_answers': json.loads(all_answers[t]),
'doc_token2char_raw_start_index': json.loads(doc_token2char_raw_start_indexs[t]),
'doc_token2char_raw_end_index': json.loads(doc_token2char_raw_end_indexs[t]),
'doc_token2doc_index': json.loads(doc_token2doc_indexs[t]),
'is_impossible': is_impossibles[t],
'p_mask': p_masks[t],
'offset': offsets[t]
}
unique_id_to_examples[unique_ids[t]] = itm
qas_id_to_examples[qas_ids[t]].append(itm)
ground_truth_for_impossible, predictions_for_impossible = [], []
for qas_id, examples in qas_id_to_examples.items():
example_all_predicts = []
answers = set()
for example in examples:
cur_unique_id = example['unique_id']
if cur_unique_id not in results:
continue
# if example['answer_text'] not in answers:
# answers.append(example['answer_text'])
answers |= set(example['all_answers'])
cur_result = results.get(cur_unique_id)
cur_start_top_log_prob = cur_result['start_top_log_prob']
cur_start_top_index = cur_result['start_top_index']
cur_end_top_log_prob = cur_result['end_top_log_prob']
cur_end_top_index = cur_result['end_top_index']
ground_truth_for_impossible.append(example['is_impossible'])
predictions_for_impossible.append(int(cur_result['is_impossible_prob'] >= self.is_impossible_threshold))
if example['is_impossible'] == 1:
continue
cur_p_mask = example['p_mask']
for i in range(start_n_top):
start_prob = cur_start_top_log_prob[i]
start_index = cur_start_top_index[i]
if not cur_p_mask[start_index]:
continue
for j in range(end_n_top):
end_prob = cur_end_top_log_prob[i, j]
end_index = cur_end_top_index[i, j]
if not cur_p_mask[end_index]:
continue
answer_length = end_index - start_index + 1
if end_index < start_index or answer_length > self.max_answer_length:
continue
itm = {
'unique_id': cur_unique_id,
'start_prob': start_prob,
'start_index': start_index,
'end_prob': end_prob,
'end_index': end_index,
'predict_score': np.log(end_prob)
}
example_all_predicts.append(itm)
if len(answers) != 0 and "" not in answers:
total_count += 1
else:
skip_count += 1
continue
example_all_predicts.sort(key=lambda s: s['predict_score'], reverse=True)
example_top_predicts = []
is_visited = set()
for example_predict in example_all_predicts:
if len(example_top_predicts) >= self.n_best_size:
break
example_feature = unique_id_to_examples[example_predict['unique_id']]
if example_predict['start_index'] - example_feature['offset'] < 0 or example_predict['end_index'] - example_feature['offset'] < 0:
predict_text = ""
else:
predict_start = example_feature['doc_token2char_raw_start_index'][
example_predict['start_index'] - example_feature['offset']]
predict_end = example_feature['doc_token2char_raw_end_index'][
example_predict['end_index'] - example_feature['offset']]
predict_text = example_feature['context_text'][predict_start: predict_end + 1].strip()
if predict_text in is_visited:
continue
is_visited.add(predict_text)
itm = {
'predict_text': predict_text,
'start_prob': example_predict['start_prob'],
'end_prob': example_predict['end_prob'],
'predict_score': example_predict['predict_score']
}
example_top_predicts.append(itm)
if len(example_top_predicts) == 0:
example_top_predicts.append(
{
'predict_text': "",
'start_prob': 0.,
'end_prob': 0.,
'predict_score': 0.
}
)
example_best_predict = example_top_predicts[0]
cur_f1 = calc_f1_score(list(answers), example_best_predict['predict_text'])
cur_em = calc_em_score(list(answers), example_best_predict['predict_text'])
f1 += cur_f1
em += cur_em
# debug
if cur_f1 == 0 or cur_em == 0:
example_output = {}
example_output.update(example_best_predict)
example_output['question'] = examples[0]['question_text']
example_output['answer'] = answers
example_output['f1'] = cur_f1
example_output['em'] = cur_em
print(example_output)
# total_count = len(qas_id_to_examples)
f1_score = f1 / total_count
em_score = em / total_count
cm = ConfusionMatrix(ground_truth_for_impossible, predictions_for_impossible)
logs = {}
logs['skip_count'] = skip_count
logs['total'] = total_count
logs['val_f1_score'] = f1_score
logs['val_em_score'] = em_score
logs['val_f1_em_avg_score'] = (em_score * self.weights[0] + f1_score * self.weights[1]) / sum(self.weights[:2])
logs['val_f1_for_impossible'] = cm.avg_f1_score(average='macro')
logs['val_accuracy_for_impossible'] = cm.overall_accuracy()
logs['val_f1_avg_score'] = (em_score * self.weights[0] + f1_score * self.weights[1] +
logs['val_f1_for_impossible'] * self.weights[2]) / sum(self.weights)
return logs
| # -*- coding: utf-8 -*-
# @Time : 2020-07-30 15:06
# @Author : yingyuankai
# @Email : <EMAIL>
# @File : qa_evaluators.py
import os
import logging
import numpy as np
import tensorflow as tf
import json
from pprint import pprint
from collections import defaultdict
from aispace.utils.eval_utils import calc_em_score, calc_f1_score
from aispace.utils.io_utils import save_json
from aispace.utils.print_utils import print_boxed
from aispace.utils.metrics_utils import ConfusionMatrix
__all__ = [
'EvaluatorForQaSimple',
'EvaluatorForQaWithImpossible'
]
logger = logging.getLogger(__name__)
class EvaluatorForQaSimple(tf.keras.callbacks.Callback):
"""
start_top_log_prob and end_top_log_prob's shape is [batch, k]
ref: https://keras.io/examples/nlp/text_extraction_with_bert/
"""
def __init__(self, validation_dataset, validation_steps, test_dataset, test_steps, report_dir, max_answer_length=64, n_best_size=5):
self.validation_dataset = validation_dataset
self.validation_steps = validation_steps
self.test_dataset = test_dataset
self.test_steps = test_steps
self.max_answer_length = max_answer_length
self.n_best_size = n_best_size
self.report_dir = report_dir
def on_epoch_end(self, epoch, logs=None):
new_logs = self.eval_process(self.validation_dataset, self.validation_steps)
logs = logs or {}
logs.update(new_logs)
print(f"Epoch: {epoch + 1}, val_f1_score: {logs['val_f1_score']:.4f}, val_em_score: {logs['val_em_score']:.4f}, "
f"val_f1_em_avg_score: {logs['val_f1_em_avg_score']:.4f}")
def on_train_end(self, logs=None):
logger.info("Start Evaluate.")
if not os.path.exists(self.report_dir):
os.makedirs(self.report_dir)
new_logs = self.eval_process(self.test_dataset, self.test_steps)
save_json(os.path.join(self.report_dir, 'performance.json'), new_logs)
print_boxed(f"Question Answer Evaluation")
pprint(new_logs)
logger.info(f"Save question answer reports in {self.report_dir}")
def eval_process(self, dataset, n_steps=None):
f1 = 0
em = 0
total_count = 0
skip_count = 0
start_top_res, end_top_res, unique_id_res = self.model.predict(dataset, steps=n_steps)
start_top_log_prob, start_top_index = start_top_res[:, :, 0], start_top_res[:, :, 1].astype(np.int) # [b, k]
end_top_log_prob, end_top_index = end_top_res[:, :, 0], end_top_res[:, :, 1].astype(np.int) # [b, k]
unique_id_res = unique_id_res.astype(np.int)
# predict results
results = {}
for i in range(end_top_index.shape[0]):
unique_id = unique_id_res[i][0]
itm = {
'unique_id': unique_id,
'start_top_log_prob': start_top_log_prob[i],
'start_top_index': start_top_index[i],
'end_top_log_prob': end_top_log_prob[i],
'end_top_index': end_top_index[i],
}
results[unique_id] = itm
# raw inputs
start_n_top, end_n_top = start_top_index.shape[-1], end_top_index.shape[-1]
qas_id_to_examples = defaultdict(list)
unique_id_to_examples = {}
for idx, (inputs, outputs) in enumerate(dataset):
if n_steps is not None and idx >= n_steps:
break
unique_ids = inputs['unique_id'].numpy().astype(np.int).tolist()
offsets = inputs['offset'].numpy().astype(np.int).tolist()
qas_ids = inputs['qas_id'].numpy().astype(str).tolist()
doc_token2char_raw_start_indexs = inputs['doc_token2char_raw_start_index'].numpy().astype(str).tolist()
doc_token2char_raw_end_indexs = inputs['doc_token2char_raw_end_index'].numpy().astype(str).tolist()
doc_token2doc_indexs = inputs['doc_token2doc_index'].numpy().astype(str).tolist()
all_answers = inputs['all_answers'].numpy().astype(str).tolist()
answer_texts = inputs['answer_text'].numpy().tolist()
context_texts = inputs['context_text'].numpy().tolist()
question_texts = inputs['question_text'].numpy().tolist()
is_impossibles = inputs['is_impossible'].numpy().tolist()
p_masks = inputs['p_mask'].numpy().astype(np.int).tolist()
for t in range(len(unique_ids)):
itm = {
'unique_id': unique_ids[t],
'qas_id': qas_ids[t],
'question_text': question_texts[t].decode("utf8"),
'context_text': context_texts[t].decode("utf8"),
'answer_text': answer_texts[t].decode("utf8"),
'all_answers': json.loads(all_answers[t]),
'doc_token2char_raw_start_index': json.loads(doc_token2char_raw_start_indexs[t]),
'doc_token2char_raw_end_index': json.loads(doc_token2char_raw_end_indexs[t]),
'doc_token2doc_index': json.loads(doc_token2doc_indexs[t]),
'is_impossible': is_impossibles[t],
'p_mask': p_masks[t],
'offset': offsets[t]
}
unique_id_to_examples[unique_ids[t]] = itm
qas_id_to_examples[qas_ids[t]].append(itm)
for qas_id, examples in qas_id_to_examples.items():
example_all_predicts = []
answers = set()
for example in examples:
cur_unique_id = example['unique_id']
if cur_unique_id not in results:
continue
if example['is_impossible'] == 1:
continue
# if example['answer_text'] not in answers:
# answers.append(example['answer_text'])
answers |= set(example['all_answers'])
cur_result = results.get(cur_unique_id)
cur_start_top_log_prob = cur_result['start_top_log_prob']
cur_start_top_index = cur_result['start_top_index']
cur_end_top_log_prob = cur_result['end_top_log_prob']
cur_end_top_index = cur_result['end_top_index']
cur_p_mask = example['p_mask']
for i in range(start_n_top):
start_prob = cur_start_top_log_prob[i]
start_index = cur_start_top_index[i]
if not cur_p_mask[start_index]:
continue
for j in range(end_n_top):
end_prob = cur_end_top_log_prob[j]
end_index = cur_end_top_index[j]
if not cur_p_mask[end_index]:
continue
answer_length = end_index - start_index + 1
if end_index < start_index or answer_length > self.max_answer_length:
continue
itm = {
'unique_id': cur_unique_id,
'start_prob': start_prob,
'start_index': start_index,
'end_prob': end_prob,
'end_index': end_index,
'predict_score': np.log(start_prob) + np.log(end_prob)
}
example_all_predicts.append(itm)
if len(answers) != 0:
total_count += 1
else:
skip_count += 1
continue
example_all_predicts.sort(key=lambda s: s['predict_score'], reverse=True)
example_top_predicts = []
is_visited = set()
for example_predict in example_all_predicts:
if len(example_top_predicts) >= self.n_best_size:
break
example_feature = unique_id_to_examples[example_predict['unique_id']]
if example_predict['start_index'] - example_feature['offset'] < 0 or example_predict['end_index'] - example_feature['offset'] < 0:
predict_text = ""
else:
predict_start = example_feature['doc_token2char_raw_start_index'][
example_predict['start_index'] - example_feature['offset']]
predict_end = example_feature['doc_token2char_raw_end_index'][
example_predict['end_index'] - example_feature['offset']]
predict_text = example_feature['context_text'][predict_start: predict_end + 1].strip()
if predict_text in is_visited:
continue
is_visited.add(predict_text)
itm = {
'predict_text': predict_text,
'start_prob': example_predict['start_prob'],
'end_prob': example_predict['end_prob'],
'predict_score': example_predict['predict_score']
}
example_top_predicts.append(itm)
if len(example_top_predicts) == 0:
example_top_predicts.append(
{
'predict_text': "",
'start_prob': 0.,
'end_prob': 0.,
'predict_score': 0.
}
)
example_best_predict = example_top_predicts[0]
cur_f1 = calc_f1_score(list(answers), example_best_predict['predict_text'])
cur_em = calc_em_score(list(answers), example_best_predict['predict_text'])
f1 += cur_f1
em += cur_em
# debug
if cur_f1 == 0 or cur_em == 0:
example_output = {}
example_output.update(example_best_predict)
example_output['question'] = examples[0]['question_text']
example_output['answer'] = answers
example_output['f1'] = cur_f1
example_output['em'] = cur_em
print(example_output)
# total_count = len(qas_id_to_examples)
f1_score = f1 / total_count
em_score = em / total_count
logs = {}
logs['skip_count'] = skip_count
logs['total'] = total_count
logs['val_f1_score'] = f1_score
logs['val_em_score'] = em_score
logs['val_f1_em_avg_score'] = (em_score + f1_score) / 2.
return logs
class EvaluatorForQaWithImpossible(tf.keras.callbacks.Callback):
"""
start_top_log_prob and end_top_log_prob's shape is [batch, k, k]
ref: https://keras.io/examples/nlp/text_extraction_with_bert/
"""
def __init__(self, validation_dataset, validation_steps, test_dataset, test_steps,
report_dir, max_answer_length=64, n_best_size=5, is_impossible_threshold=0.5, weights=[1., 1., 1.]):
self.validation_dataset = validation_dataset
self.validation_steps = validation_steps
self.test_dataset = test_dataset
self.test_steps = test_steps
self.max_answer_length = max_answer_length
self.n_best_size = n_best_size
self.report_dir = report_dir
self.is_impossible_threshold = is_impossible_threshold
self.weights = weights
def on_epoch_end(self, epoch, logs=None):
new_logs = self.eval_process(self.validation_dataset, self.validation_steps)
logs = logs or {}
logs.update(new_logs)
print(f"\nEpoch: {epoch + 1}, val_f1_score: {logs['val_f1_score']:.4f}, "
f"val_em_score: {logs['val_em_score']:.4f}, "
f"val_f1_em_avg_score: {logs['val_f1_em_avg_score']:.4f},"
f" val_f1_for_impossible: {logs['val_f1_for_impossible']:.4f},"
f" val_f1_avg_score: {logs['val_f1_avg_score']:.4f},")
def on_train_end(self, logs=None):
logger.info("Start Evaluate.")
if not os.path.exists(self.report_dir):
os.makedirs(self.report_dir)
new_logs = self.eval_process(self.test_dataset, self.test_steps)
save_json(os.path.join(self.report_dir, 'performance.json'), new_logs)
print_boxed(f"Question Answer Evaluation")
pprint(new_logs)
logger.info(f"Save question answer reports in {self.report_dir}")
def eval_process(self, dataset, n_steps=None):
f1 = 0
em = 0
total_count = 0
skip_count = 0
start_top_res, end_top_res, answer_prob, unique_id_res = self.model.predict(dataset, steps=n_steps)
start_top_log_prob, start_top_index = start_top_res[:, :, 0], start_top_res[:, :, 1].astype(np.int) # [b, k]
end_top_log_prob, end_top_index = end_top_res[:, :, :, 0], end_top_res[:, :, :, 1].astype(np.int) # [b, k, k]
unique_id_res = unique_id_res.astype(np.int)
# predict results
results = {}
for i in range(end_top_index.shape[0]):
unique_id = unique_id_res[i][0]
itm = {
'unique_id': unique_id,
'start_top_log_prob': start_top_log_prob[i],
'start_top_index': start_top_index[i],
'end_top_log_prob': end_top_log_prob[i],
'end_top_index': end_top_index[i],
'is_impossible_prob': answer_prob[i][0]
}
results[unique_id] = itm
# raw inputs
start_n_top, end_n_top = end_top_index.shape[1:]
qas_id_to_examples = defaultdict(list)
unique_id_to_examples = {}
for idx, (inputs, outputs) in enumerate(dataset):
if n_steps is not None and idx >= n_steps:
break
unique_ids = inputs['unique_id'].numpy().astype(np.int).tolist()
offsets = inputs['offset'].numpy().astype(np.int).tolist()
qas_ids = inputs['qas_id'].numpy().astype(str).tolist()
doc_token2char_raw_start_indexs = inputs['doc_token2char_raw_start_index'].numpy().astype(str).tolist()
doc_token2char_raw_end_indexs = inputs['doc_token2char_raw_end_index'].numpy().astype(str).tolist()
doc_token2doc_indexs = inputs['doc_token2doc_index'].numpy().astype(str).tolist()
all_answers = inputs['all_answers'].numpy().astype(str).tolist()
answer_texts = inputs['answer_text'].numpy().tolist()
context_texts = inputs['context_text'].numpy().tolist()
question_texts = inputs['question_text'].numpy().tolist()
is_impossibles = inputs['is_impossible'].numpy().tolist()
p_masks = inputs['p_mask'].numpy().astype(np.int).tolist()
for t in range(len(unique_ids)):
itm = {
'unique_id': unique_ids[t],
'qas_id': qas_ids[t],
'question_text': question_texts[t].decode("utf8"),
'context_text': context_texts[t].decode("utf8"),
'answer_text': answer_texts[t].decode("utf8"),
'all_answers': json.loads(all_answers[t]),
'doc_token2char_raw_start_index': json.loads(doc_token2char_raw_start_indexs[t]),
'doc_token2char_raw_end_index': json.loads(doc_token2char_raw_end_indexs[t]),
'doc_token2doc_index': json.loads(doc_token2doc_indexs[t]),
'is_impossible': is_impossibles[t],
'p_mask': p_masks[t],
'offset': offsets[t]
}
unique_id_to_examples[unique_ids[t]] = itm
qas_id_to_examples[qas_ids[t]].append(itm)
ground_truth_for_impossible, predictions_for_impossible = [], []
for qas_id, examples in qas_id_to_examples.items():
example_all_predicts = []
answers = set()
for example in examples:
cur_unique_id = example['unique_id']
if cur_unique_id not in results:
continue
# if example['answer_text'] not in answers:
# answers.append(example['answer_text'])
answers |= set(example['all_answers'])
cur_result = results.get(cur_unique_id)
cur_start_top_log_prob = cur_result['start_top_log_prob']
cur_start_top_index = cur_result['start_top_index']
cur_end_top_log_prob = cur_result['end_top_log_prob']
cur_end_top_index = cur_result['end_top_index']
ground_truth_for_impossible.append(example['is_impossible'])
predictions_for_impossible.append(int(cur_result['is_impossible_prob'] >= self.is_impossible_threshold))
if example['is_impossible'] == 1:
continue
cur_p_mask = example['p_mask']
for i in range(start_n_top):
start_prob = cur_start_top_log_prob[i]
start_index = cur_start_top_index[i]
if not cur_p_mask[start_index]:
continue
for j in range(end_n_top):
end_prob = cur_end_top_log_prob[i, j]
end_index = cur_end_top_index[i, j]
if not cur_p_mask[end_index]:
continue
answer_length = end_index - start_index + 1
if end_index < start_index or answer_length > self.max_answer_length:
continue
itm = {
'unique_id': cur_unique_id,
'start_prob': start_prob,
'start_index': start_index,
'end_prob': end_prob,
'end_index': end_index,
'predict_score': np.log(end_prob)
}
example_all_predicts.append(itm)
if len(answers) != 0 and "" not in answers:
total_count += 1
else:
skip_count += 1
continue
example_all_predicts.sort(key=lambda s: s['predict_score'], reverse=True)
example_top_predicts = []
is_visited = set()
for example_predict in example_all_predicts:
if len(example_top_predicts) >= self.n_best_size:
break
example_feature = unique_id_to_examples[example_predict['unique_id']]
if example_predict['start_index'] - example_feature['offset'] < 0 or example_predict['end_index'] - example_feature['offset'] < 0:
predict_text = ""
else:
predict_start = example_feature['doc_token2char_raw_start_index'][
example_predict['start_index'] - example_feature['offset']]
predict_end = example_feature['doc_token2char_raw_end_index'][
example_predict['end_index'] - example_feature['offset']]
predict_text = example_feature['context_text'][predict_start: predict_end + 1].strip()
if predict_text in is_visited:
continue
is_visited.add(predict_text)
itm = {
'predict_text': predict_text,
'start_prob': example_predict['start_prob'],
'end_prob': example_predict['end_prob'],
'predict_score': example_predict['predict_score']
}
example_top_predicts.append(itm)
if len(example_top_predicts) == 0:
example_top_predicts.append(
{
'predict_text': "",
'start_prob': 0.,
'end_prob': 0.,
'predict_score': 0.
}
)
example_best_predict = example_top_predicts[0]
cur_f1 = calc_f1_score(list(answers), example_best_predict['predict_text'])
cur_em = calc_em_score(list(answers), example_best_predict['predict_text'])
f1 += cur_f1
em += cur_em
# debug
if cur_f1 == 0 or cur_em == 0:
example_output = {}
example_output.update(example_best_predict)
example_output['question'] = examples[0]['question_text']
example_output['answer'] = answers
example_output['f1'] = cur_f1
example_output['em'] = cur_em
print(example_output)
# total_count = len(qas_id_to_examples)
f1_score = f1 / total_count
em_score = em / total_count
cm = ConfusionMatrix(ground_truth_for_impossible, predictions_for_impossible)
logs = {}
logs['skip_count'] = skip_count
logs['total'] = total_count
logs['val_f1_score'] = f1_score
logs['val_em_score'] = em_score
logs['val_f1_em_avg_score'] = (em_score * self.weights[0] + f1_score * self.weights[1]) / sum(self.weights[:2])
logs['val_f1_for_impossible'] = cm.avg_f1_score(average='macro')
logs['val_accuracy_for_impossible'] = cm.overall_accuracy()
logs['val_f1_avg_score'] = (em_score * self.weights[0] + f1_score * self.weights[1] +
logs['val_f1_for_impossible'] * self.weights[2]) / sum(self.weights)
return logs | en | 0.509923 | # -*- coding: utf-8 -*- # @Time : 2020-07-30 15:06 # @Author : yingyuankai # @Email : <EMAIL> # @File : qa_evaluators.py start_top_log_prob and end_top_log_prob's shape is [batch, k] ref: https://keras.io/examples/nlp/text_extraction_with_bert/ # [b, k] # [b, k] # predict results # raw inputs # if example['answer_text'] not in answers: # answers.append(example['answer_text']) # debug # total_count = len(qas_id_to_examples) start_top_log_prob and end_top_log_prob's shape is [batch, k, k] ref: https://keras.io/examples/nlp/text_extraction_with_bert/ # [b, k] # [b, k, k] # predict results # raw inputs # if example['answer_text'] not in answers: # answers.append(example['answer_text']) # debug # total_count = len(qas_id_to_examples) | 2.171053 | 2 |
tests/conftest.py | junjunjunk/torchgpipe | 532 | 10262 | <reponame>junjunjunk/torchgpipe
import pytest
import torch
@pytest.fixture(autouse=True)
def manual_seed_zero():
torch.manual_seed(0)
@pytest.fixture(scope='session')
def cuda_sleep():
# Warm-up CUDA.
torch.empty(1, device='cuda')
# From test/test_cuda.py in PyTorch.
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
torch.cuda._sleep(1000000)
end.record()
end.synchronize()
cycles_per_ms = 1000000 / start.elapsed_time(end)
def cuda_sleep(seconds):
torch.cuda._sleep(int(seconds * cycles_per_ms * 1000))
return cuda_sleep
def pytest_report_header():
return f'torch: {torch.__version__}'
| import pytest
import torch
@pytest.fixture(autouse=True)
def manual_seed_zero():
torch.manual_seed(0)
@pytest.fixture(scope='session')
def cuda_sleep():
# Warm-up CUDA.
torch.empty(1, device='cuda')
# From test/test_cuda.py in PyTorch.
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
torch.cuda._sleep(1000000)
end.record()
end.synchronize()
cycles_per_ms = 1000000 / start.elapsed_time(end)
def cuda_sleep(seconds):
torch.cuda._sleep(int(seconds * cycles_per_ms * 1000))
return cuda_sleep
def pytest_report_header():
return f'torch: {torch.__version__}' | en | 0.545309 | # Warm-up CUDA. # From test/test_cuda.py in PyTorch. | 2.126187 | 2 |
lib/python/treadmill/scheduler/__init__.py | drienyov/treadmill | 0 | 10263 | """Treadmill hierarchical scheduler.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import abc
import collections
import datetime
import heapq
import itertools
import logging
import operator
import sys
import time
import enum
import numpy as np
import six
_LOGGER = logging.getLogger(__name__)
MAX_PRIORITY = 100
DEFAULT_RANK = 100
_UNPLACED_RANK = sys.maxsize
DIMENSION_COUNT = None
_MAX_UTILIZATION = float('inf')
_GLOBAL_ORDER_BASE = time.mktime((2014, 1, 1, 0, 0, 0, 0, 0, 0))
# 21 day
DEFAULT_SERVER_UPTIME = 21 * 24 * 60 * 60
# 1 day
MIN_SERVER_UPTIME = 1 * 24 * 60 * 60
# 7 days
DEFAULT_MAX_APP_LEASE = 7 * 24 * 60 * 60
# Default partition threshold
DEFAULT_THRESHOLD = 0.9
# pylint: disable=C0302,too-many-lines
def _bit_count(value):
"""Returns number of bits set.
"""
count = 0
while value:
value &= value - 1
count += 1
return count
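# Example: _bit_count(0b1011) == 3. Each iteration of "value &= value - 1"
# clears the lowest set bit, so the loop body runs once per set bit.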
def zero_capacity():
"""Returns zero capacity vector.
"""
assert DIMENSION_COUNT is not None, 'Dimension count not set.'
return np.zeros(DIMENSION_COUNT)
def eps_capacity():
"""Returns eps capacity vector.
"""
assert DIMENSION_COUNT is not None, 'Dimension count not set.'
return np.array(
[np.finfo(float).eps for _x in range(0, DIMENSION_COUNT)]
)
def _global_order():
    """Use timestamp in microseconds, from Jan 1st 2014, to break ties in
    scheduling conflicts for apps of the same priority, in a FIFO fashion.
    """
    # Take the current epoch time in microseconds.
global_order = int(time.time() * 1000000) - _GLOBAL_ORDER_BASE
return global_order
def utilization(demand, allocated, available):
"""Calculates utilization score.
"""
return np.max(np.subtract(demand, allocated) / available)
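# Worked example (sketch): with demand=[2, 2], allocated=[1, 0] and
# available=[4, 8], the per-dimension ratios are [(2 - 1) / 4, (2 - 0) / 8]
# = [0.25, 0.25], so utilization(...) returns 0.25. A positive value means
# demand exceeds the allocated reservation in at least one dimension.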
def _all(oper, left, right):
"""Short circuit all for ndarray.
"""
return all(
oper(ai, bi)
for ai, bi in six.moves.zip(left, right)
)
def _any(oper, left, right):
"""Short circuit any for ndarray.
"""
return any(
oper(ai, bi)
for ai, bi in six.moves.zip(left, right)
)
def _any_eq(left, right):
"""Short circuit any eq for ndarray.
"""
return _any(operator.eq, left, right)
def _any_isclose(left, right):
"""Short circuit any isclose for ndarray.
"""
return _any(np.isclose, left, right)
def _any_lt(left, right):
"""Short circuit any lt for ndarray.
"""
return _any(operator.lt, left, right)
def _any_le(left, right):
"""Short circuit any le for ndarray.
"""
return _any(operator.le, left, right)
def _any_gt(left, right):
"""Short circuit any gt for ndarray.
"""
return _any(operator.gt, left, right)
def _any_ge(left, right):
"""Short circuit any ge for ndarray.
"""
return _any(operator.ge, left, right)
def _all_eq(left, right):
"""Short circuit all eq for ndarray.
"""
return _all(operator.eq, left, right)
def _all_isclose(left, right):
"""Short circuit all isclose for ndarray.
"""
return _all(np.isclose, left, right)
def _all_lt(left, right):
"""Short circuit all lt for ndarray.
"""
return _all(operator.lt, left, right)
def _all_le(left, right):
"""Short circuit all le for ndarray.
"""
return _all(operator.le, left, right)
def _all_gt(left, right):
"""Short circuit all gt for ndarray.
"""
return _all(operator.gt, left, right)
def _all_ge(left, right):
"""Short circuit all ge for ndarray.
"""
return _all(operator.ge, left, right)
class IdentityGroup:
"""Identity group.
"""
__slots__ = (
'available',
'count',
)
def __init__(self, count=0):
self.count = count
self.available = set(range(0, count))
def acquire(self):
"""Return next available identity or None.
"""
if self.available:
return self.available.pop()
else:
return None
def release(self, ident):
"""Mark identity as available.
"""
if ident < self.count:
self.available.add(ident)
def adjust(self, count):
"""Adjust identities with new count.
If count is larger, add additional identities to the set.
If count is lower, remove identities that are no longer valid.
All apps that have invalid identities will be adjusted in the
schedule cycle.
"""
if count >= self.count:
self.available ^= set(six.moves.xrange(self.count, count))
else:
self.available -= set(six.moves.xrange(count, self.count))
self.count = count
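# Usage sketch (illustrative only, not executed at import time):
#
#     group = IdentityGroup(3)   # identities {0, 1, 2} are available
#     ident = group.acquire()    # an arbitrary free identity, or None if empty
#     group.release(ident)       # the identity becomes available again
#     group.adjust(2)            # identities >= 2 are dropped from the pool
#
# acquire() uses set.pop(), so no particular ordering of identities is
# guaranteed.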
class State(enum.Enum):
"""Enumeration of node/server states.
"""
# Ready to accept new applications.
# TODO: Fix attribute name
up = 'up' # pylint: disable=invalid-name
# Applications need to be migrated.
down = 'down'
# Existing applications can stay, but will not accept new.
frozen = 'frozen'
class Affinity:
"""Model affinity and affinity limits.
"""
__slots__ = (
'name',
'limits',
'constraints',
)
def __init__(self, name, limits=None):
self.name = name
self.limits = collections.defaultdict(lambda: float('inf'))
if limits:
self.limits.update(limits)
# freeze affinity shape constraints.
self.constraints = tuple([self.name] + sorted(self.limits.values()))
class Application:
"""Application object.
"""
__slots__ = (
'global_order',
'name',
'demand',
'affinity',
'priority',
'allocation',
'data_retention_timeout',
'server',
'lease',
'identity',
'identity_group',
'identity_group_ref',
'schedule_once',
'evicted',
'placement_expiry',
'renew',
'unschedule',
'final_rank',
'final_util',
'constraints',
)
def __init__(self, name, priority, demand, affinity,
affinity_limits=None,
data_retention_timeout=0,
lease=0,
identity_group=None,
identity=None,
schedule_once=False):
self.global_order = _global_order()
self.allocation = None
self.server = None
self.name = name
self.affinity = Affinity(affinity, affinity_limits)
self.priority = priority
self.demand = np.array(demand, dtype=float)
self.data_retention_timeout = data_retention_timeout
self.lease = lease
self.identity_group = identity_group
self.identity = identity
self.identity_group_ref = None
self.schedule_once = schedule_once
self.evicted = False
self.unschedule = False
self.placement_expiry = None
self.renew = False
def shape(self):
"""Return tuple of application (constraints, demand).
Application shape is tuple of constraints that affect application
placement. Currently this includes affinity constraints and app lease
time.
"""
constraints = (self.affinity.constraints + (self.lease,))
if self.allocation:
constraints += self.allocation.constraints
return constraints, self.demand
def acquire_identity(self):
"""Try to acquire identity if belong to the group.
Returns True if successfull or if identity group is none.
"""
if not self.identity_group_ref:
return True
if self.identity is None:
self.identity = self.identity_group_ref.acquire()
_LOGGER.info('Acquired identity: %s: %s - %s',
self.name, self.identity_group, self.identity)
return self.identity is not None
def release_identity(self):
"""Release app identity.
"""
if self.identity_group_ref and self.identity is not None:
self.identity_group_ref.release(self.identity)
self.identity = None
def force_set_identity(self, identity):
"""Force identity of the app.
"""
if identity is not None:
assert self.identity_group_ref
self.identity = identity
self.identity_group_ref.available.discard(identity)
def has_identity(self):
"""Checks if app has identity if identity group is specified.
"""
return self.identity_group_ref is None or self.identity is not None
@property
def traits(self):
"""The app traits are derived from allocation.
"""
if self.allocation is None:
return 0
else:
return self.allocation.traits
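# Illustrative sketch (hypothetical helper, not part of the original module):
# an Application's shape is the tuple of placement constraints (affinity
# constraints plus lease) together with its demand vector; apps with the same
# shape succeed or fail placement together.  The two-dimensional demand and
# the proid below are made-up illustration values.
def _example_app_shape():
    """Construct an Application and inspect its shape()."""
    app = Application('proid.app#001', priority=10, demand=[1.0, 1.0],
                      affinity='proid.app', lease=60)
    constraints, demand = app.shape()
    # constraints == ('proid.app', 60) while the app has no allocation.
    return constraints, demand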
@six.add_metaclass(abc.ABCMeta)
class Strategy:
"""Base class for all placement strategies.
"""
@abc.abstractmethod
def suggested_node(self):
"""Suggested node that should be tried first.
"""
pass
@abc.abstractmethod
def next_node(self):
"""Next node to try, if previous suggestion was rejected.
"""
pass
class SpreadStrategy(Strategy):
"""Spread strategy will suggest new node for each subsequent placement.
"""
__slots__ = (
'current_idx',
'node',
)
def __init__(self, node):
self.current_idx = 0
self.node = node
def suggested_node(self):
"""Suggest next node from the cycle.
"""
for _ in six.moves.xrange(0, len(self.node.children)):
if self.current_idx == len(self.node.children):
self.current_idx = 0
current = self.node.children[self.current_idx]
self.current_idx += 1
if current:
return current
# Not a single non-none node.
return None
def next_node(self):
"""Suggest next node from the cycle.
"""
return self.suggested_node()
class PackStrategy(Strategy):
"""Pack strategy will suggest same node until it is full.
"""
__slots__ = (
'current_idx',
'node',
)
def __init__(self, node):
self.current_idx = 0
self.node = node
def suggested_node(self):
"""Suggest same node as previous placement.
"""
for _ in six.moves.xrange(0, len(self.node.children)):
if self.current_idx == len(self.node.children):
self.current_idx = 0
node = self.node.children[self.current_idx]
if node:
return node
return None
def next_node(self):
"""Suggest next node from the cycle.
"""
self.current_idx += 1
return self.suggested_node()
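# Illustrative sketch (hypothetical helper, not part of the original module):
# the two strategies differ only in how they advance - SpreadStrategy moves on
# after every suggestion, PackStrategy keeps suggesting the same child until
# next_node() is called.  A namedtuple stands in for a real node here.
def _example_strategies():
    """Compare SpreadStrategy and PackStrategy suggestions."""
    fake_node = collections.namedtuple('FakeNode', ['children'])(['a', 'b', 'c'])
    spread = SpreadStrategy(fake_node)
    pack = PackStrategy(fake_node)
    # Spread cycles 'a' then 'b'; pack suggests 'a' both times.
    return ([spread.suggested_node(), spread.suggested_node()],
            [pack.suggested_node(), pack.suggested_node()])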
class TraitSet:
"""Hierarchical set of traits.
"""
__slots__ = (
'self_traits',
'children_traits',
'traits',
)
def __init__(self, traits=0):
if not traits:
traits = 0
# Private traits.
assert isinstance(traits, six.integer_types)
self.self_traits = traits
# Union of all children traits.
self.children_traits = dict()
self._recalculate()
def _recalculate(self):
"""Calculate combined set of all traits.
"""
self.traits = self.self_traits
for trait in six.itervalues(self.children_traits):
self.traits |= trait
def has(self, traits):
"""Check if all traits are present.
"""
return (self.traits & traits) == traits
def add(self, child, traits):
"""Add a child with given traits.
"""
# Update children traits.
self.children_traits[child] = traits
self._recalculate()
def remove(self, child):
"""Remove child traits from the list.
"""
if child in self.children_traits:
del self.children_traits[child]
self._recalculate()
def is_same(self, other):
"""Compares own traits, ignore child.
"""
return self.self_traits == other.self_traits
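# Illustrative sketch (hypothetical helper, not part of the original module):
# traits are bitmasks, so a parent TraitSet keeps reporting a trait as long as
# at least one child still carries it.
def _example_traitset():
    """Walk through hierarchical trait propagation."""
    ssd, gpu = 0b01, 0b10
    parent = TraitSet()
    parent.add('child-a', ssd)
    parent.add('child-b', ssd | gpu)
    assert parent.has(ssd | gpu)
    parent.remove('child-b')
    assert parent.has(ssd) and not parent.has(gpu)
    return parent.traits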
class AffinityCounter:
"""Manages affinity count.
"""
__slots__ = (
'affinity_counter',
)
def __init__(self):
self.affinity_counter = collections.Counter()
class Node:
"""Abstract placement node.
"""
__slots__ = (
'name',
'level',
'free_capacity',
'parent',
'children',
'children_by_name',
'traits',
'labels',
'affinity_counters',
'valid_until',
'_state',
'_state_since',
)
def __init__(self, name, traits, level, valid_until=0):
self.name = name
self.level = level
self.free_capacity = zero_capacity()
self.parent = None
self.children = list()
self.children_by_name = dict()
self.traits = TraitSet(traits)
self.labels = set()
self.affinity_counters = collections.Counter()
self.valid_until = valid_until
self._state = State.up
self._state_since = time.time()
def empty(self):
"""Return true if there are no children.
"""
return not bool(self.children_by_name)
def children_iter(self):
"""Iterate over active children.
"""
for child in self.children:
if child:
yield child
def get_state(self):
"""Returns tuple of (state, since).
"""
return self._state, self._state_since
def set_state(self, state, since):
"""Sets the state and time since.
"""
if self._state is not state:
self._state_since = since
self._state = state
_LOGGER.debug('state: %s - (%s, %s)',
self.name, self._state, self._state_since)
@property
def state(self):
"""Return current state.
"""
return self._state
@state.setter
def state(self, new_state):
"""Set node state and records time.
"""
self.set_state(new_state, time.time())
def add_child_traits(self, node):
"""Recursively add child traits up.
"""
self.traits.add(node.name, node.traits.traits)
if self.parent:
self.parent.remove_child_traits(self.name)
self.parent.add_child_traits(self)
def adjust_valid_until(self, child_valid_until):
"""Recursively adjust valid until time.
"""
if child_valid_until:
self.valid_until = max(self.valid_until, child_valid_until)
else:
if self.empty():
self.valid_until = 0
else:
self.valid_until = max([node.valid_until
for node in self.children_iter()])
if self.parent:
self.parent.adjust_valid_until(child_valid_until)
def remove_child_traits(self, node_name):
"""Recursively remove child traits up.
"""
self.traits.remove(node_name)
if self.parent:
self.parent.remove_child_traits(self.name)
self.parent.add_child_traits(self)
def reset_children(self):
"""Reset children to empty list.
"""
for child in self.children_iter():
child.parent = None
self.children = list()
self.children_by_name = dict()
def add_node(self, node):
"""Add child node, set the traits and propagate traits up.
"""
assert node.parent is None
assert node.name not in self.children_by_name
node.parent = self
self.children.append(node)
self.children_by_name[node.name] = node
self.add_child_traits(node)
self.increment_affinity(node.affinity_counters)
self.add_labels(node.labels)
self.adjust_valid_until(node.valid_until)
def add_labels(self, labels):
"""Recursively add labels to self and parents.
"""
self.labels.update(labels)
if self.parent:
self.parent.add_labels(self.labels)
def remove_node(self, node):
"""Remove child node and adjust the traits.
"""
assert node.name in self.children_by_name
del self.children_by_name[node.name]
for idx in six.moves.xrange(0, len(self.children)):
if self.children[idx] == node:
self.children[idx] = None
self.remove_child_traits(node.name)
self.decrement_affinity(node.affinity_counters)
self.adjust_valid_until(None)
node.parent = None
return node
def remove_node_by_name(self, nodename):
"""Removes node by name.
"""
assert nodename in self.children_by_name
return self.remove_node(self.children_by_name[nodename])
def check_app_constraints(self, app):
"""Find app placement on the node.
"""
if app.allocation is not None:
if app.allocation.label not in self.labels:
_LOGGER.info('Missing label: %s on %s', app.allocation.label,
self.name)
return False
if app.traits != 0 and not self.traits.has(app.traits):
_LOGGER.info('Missing traits: %s on %s', app.traits, self.name)
return False
if not self.check_app_affinity_limit(app):
return False
if _any_gt(app.demand, self.free_capacity):
_LOGGER.info('Not enough free capacity: %s', self.free_capacity)
return False
return True
def check_app_affinity_limit(self, app):
"""Check app affinity limits
"""
count = self.affinity_counters[app.affinity.name]
limit = app.affinity.limits[self.level]
return count < limit
def put(self, _app):
"""Abstract method, should never be called.
"""
raise Exception('Not implemented.')
def size(self, label):
"""Returns total capacity of the children.
"""
if self.empty() or label not in self.labels:
return eps_capacity()
return np.sum([
n.size(label) for n in self.children_iter()], 0)
def members(self):
"""Return set of all leaf node names.
"""
names = dict()
for node in self.children_iter():
names.update(node.members())
return names
def increment_affinity(self, counters):
"""Increment affinity counters recursively.
"""
self.affinity_counters.update(counters)
if self.parent:
self.parent.increment_affinity(counters)
def decrement_affinity(self, counters):
"""Decrement affinity counters recursively.
"""
self.affinity_counters.subtract(counters)
if self.parent:
self.parent.decrement_affinity(counters)
class Bucket(Node):
"""Collection of nodes/buckets.
"""
__slots__ = (
'affinity_strategies',
'traits',
)
_default_strategy_t = SpreadStrategy
def __init__(self, name, traits=0, level=None):
super(Bucket, self).__init__(name, traits, level)
self.affinity_strategies = dict()
self.traits = TraitSet(traits)
def set_affinity_strategy(self, affinity, strategy_t):
"""Initilaizes placement strategy for given affinity.
"""
self.affinity_strategies[affinity] = strategy_t(self)
def get_affinity_strategy(self, affinity):
"""Returns placement strategy for the affinity, defaults to spread.
"""
if affinity not in self.affinity_strategies:
self.set_affinity_strategy(affinity, Bucket._default_strategy_t)
return self.affinity_strategies[affinity]
def adjust_capacity_up(self, new_capacity):
"""Node can only increase capacity.
"""
self.free_capacity = np.maximum(self.free_capacity, new_capacity)
if self.parent:
self.parent.adjust_capacity_up(self.free_capacity)
def adjust_capacity_down(self, prev_capacity=None):
"""Called when capacity is decreased.
"""
if self.empty():
self.free_capacity = zero_capacity()
if self.parent:
self.parent.adjust_capacity_down()
else:
if prev_capacity is not None and _all_lt(prev_capacity,
self.free_capacity):
return
free_capacity = zero_capacity()
for child_node in self.children_iter():
if child_node.state is not State.up:
continue
free_capacity = np.maximum(free_capacity,
child_node.free_capacity)
# If the resulting free_capacity is less than the previous value, we need
# to adjust the parent; otherwise nothing needs to be done.
prev_capacity = self.free_capacity.copy()
if _any_lt(free_capacity, self.free_capacity):
self.free_capacity = free_capacity
if self.parent:
self.parent.adjust_capacity_down(prev_capacity)
def add_node(self, node):
"""Adds node to the bucket.
"""
super(Bucket, self).add_node(node)
self.adjust_capacity_up(node.free_capacity)
def remove_node(self, node):
"""Removes node from the bucket.
"""
super(Bucket, self).remove_node(node)
# if _any_isclose(self.free_capacity, node.free_capacity):
self.adjust_capacity_down(node.free_capacity)
return node
def put(self, app):
"""Try to put app on one of the nodes that belong to the bucket.
"""
# Check if it is feasible to put app on some node low in the
# hierarchy
_LOGGER.debug('bucket.put: %s => %s', app.name, self.name)
if not self.check_app_constraints(app):
return False
strategy = self.get_affinity_strategy(app.affinity.name)
node = strategy.suggested_node()
if node is None:
_LOGGER.debug('All nodes in the bucket deleted.')
return False
nodename0 = node.name
first = True
while True:
# End of iteration.
if not first and node.name == nodename0:
_LOGGER.debug('Finished iterating on: %s.', self.name)
break
first = False
_LOGGER.debug('Trying node: %s:', node.name)
if node.state is not State.up:
_LOGGER.debug('Node not up: %s, %s', node.name, node.state)
else:
if node.put(app):
return True
node = strategy.next_node()
return False
class Server(Node):
"""Server object, final app placement.
"""
__slots__ = (
'init_capacity',
'apps',
'up_since',
'presence_id',
)
def __init__(self, name, capacity, up_since=0, valid_until=0,
traits=0, label=None, presence_id=None):
super(Server, self).__init__(name, traits=traits, level='server',
valid_until=valid_until)
self.labels = set([label])
self.init_capacity = np.array(capacity, dtype=float)
self.free_capacity = self.init_capacity.copy()
self.apps = dict()
self.up_since = up_since
self.presence_id = presence_id
def __str__(self):
return 'server: %s %s' % (self.name, self.init_capacity)
def is_same(self, other):
"""Compares capacity and traits against another server.
valid_until is ignored, as a server coming up after a reboot will have a
different valid_until value.
"""
return (self.labels == other.labels and
_all_eq(self.init_capacity, other.init_capacity) and
self.traits.is_same(other.traits))
def put(self, app):
"""Tries to put the app on the server.
"""
assert app.name not in self.apps
_LOGGER.debug('server.put: %s => %s', app.name, self.name)
if not self.check_app_lifetime(app):
return False
if not self.check_app_constraints(app):
return False
prev_capacity = self.free_capacity.copy()
self.free_capacity -= app.demand
self.apps[app.name] = app
self.increment_affinity([app.affinity.name])
app.server = self.name
if self.parent:
self.parent.adjust_capacity_down(prev_capacity)
if app.placement_expiry is None:
app.placement_expiry = time.time() + app.lease
return True
def restore(self, app, placement_expiry=None):
"""Put app back on the server, ignore app lifetime.
"""
_LOGGER.debug('server.restore: %s => %s (%s)',
app.name, self.name, placement_expiry)
lease = app.lease
# If no explicit expiry is given, keep the app's previous one.
if placement_expiry is None:
placement_expiry = app.placement_expiry
app.lease = 0
rc = self.put(app)
app.lease = lease
app.placement_expiry = placement_expiry
return rc
def renew(self, app):
"""Try to extend the placement for app lease.
"""
can_renew = self.check_app_lifetime(app)
if can_renew:
app.placement_expiry = time.time() + app.lease
return can_renew
def check_app_lifetime(self, app):
"""Check if the app lease fits until server is rebooted.
"""
# app with 0 lease can be placed anywhere (ignore potentially
# expired servers)
if not app.lease:
return True
return time.time() + app.lease < self.valid_until
def remove(self, app_name):
"""Removes app from the server.
"""
assert app_name in self.apps
app = self.apps[app_name]
del self.apps[app_name]
app.server = None
app.evicted = True
app.unschedule = False
app.placement_expiry = None
self.free_capacity += app.demand
self.decrement_affinity([app.affinity.name])
if self.parent:
self.parent.adjust_capacity_up(self.free_capacity)
def remove_all(self):
"""Remove all apps.
"""
# iterate over copy of the keys, as we are removing them in the loop.
for appname in list(self.apps):
self.remove(appname)
def size(self, label):
"""Return server capacity.
"""
if label not in self.labels:
return eps_capacity()
return self.init_capacity
def members(self):
"""Return set of all leaf node names.
"""
return {self.name: self}
def set_state(self, state, since):
"""Change host state.
"""
if self.state is state:
return
super(Server, self).set_state(state, since)
if state == State.up:
if self.parent:
self.parent.adjust_capacity_up(self.free_capacity)
elif state in (State.down, State.frozen):
if self.parent:
self.parent.adjust_capacity_down(self.free_capacity)
else:
raise Exception('Invalid state: %s' % state)
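# Illustrative sketch (hypothetical helper, not part of the original module):
# the minimal placement flow on a single server.  Assumes DIMENSION_COUNT has
# been set to 2 before any capacity vectors are created; the '_default' label
# and the 'proid.demo' affinity are made-up illustration values.
def _example_server_put():
    """Place an app directly on a server and observe the capacity change."""
    server = Server('srv1', [2.0, 2.0], up_since=time.time(),
                    valid_until=time.time() + DEFAULT_SERVER_UPTIME,
                    label='_default')
    app = Application('proid.demo#001', priority=1, demand=[1.0, 1.0],
                      affinity='proid.demo')
    placed = server.put(app)     # True: demand fits and the 0 lease always fits
    assert placed and app.server == 'srv1'
    return server.free_capacity  # array([1., 1.])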
class Allocation:
"""Allocation manages queue of apps sharing same reserved capacity.
In reality allocation is tied to grn via application proid.
Applications within the allocation are organized by application priority.
Allocations are ranked, and the rank is used to globally order applications
from different allocations into global queue.
Default allocation has rank 100. Defining an allocation with a lower rank
results in all of its applications being evaluated first, regardless of
utilization. This is used to model "system" applications that should
always be present regardless of utilization.
The allocation queue can be capped with the max_utilization parameter. If
set, it specifies the maximum utilization that will be considered for
scheduling.
"""
__slots__ = (
'reserved',
'rank',
'rank_adjustment',
'traits',
'label',
'max_utilization',
'apps',
'sub_allocations',
'path',
'constraints',
)
def __init__(self, reserved=None, rank=None, traits=None,
max_utilization=None, partition=None):
self.set_reserved(reserved)
self.rank = None
self.rank_adjustment = 0
self.traits = 0
self.label = partition
self.max_utilization = _MAX_UTILIZATION
self.reserved = zero_capacity()
self.set_max_utilization(max_utilization)
self.set_traits(traits)
self.update(reserved, rank, 0)
self.apps = dict()
self.sub_allocations = dict()
self.path = []
# Freeze shape constraints.
self.constraints = (self.label, self.traits,)
@property
def name(self):
"""Returns full allocation name.
"""
return '/'.join(self.path)
def set_reserved(self, reserved):
"""Update reserved capacity.
"""
if reserved is None:
self.reserved = zero_capacity()
elif isinstance(reserved, int):
assert reserved == 0
self.reserved = zero_capacity()
elif isinstance(reserved, float):
assert reserved == 0.0
self.reserved = zero_capacity()
elif isinstance(reserved, list):
assert len(reserved) == DIMENSION_COUNT
self.reserved = np.array(reserved, dtype=float)
elif isinstance(reserved, np.ndarray):
self.reserved = reserved
else:
assert False, 'Unsupported type: %r' % type(reserved)
def update(self, reserved, rank, rank_adjustment, max_utilization=None):
"""Updates allocation.
"""
if rank is not None:
self.rank = rank
else:
self.rank = DEFAULT_RANK
if rank_adjustment is not None:
self.rank_adjustment = rank_adjustment
self.set_reserved(reserved)
self.set_max_utilization(max_utilization)
def set_max_utilization(self, max_utilization):
"""Sets max_utilization, accounting for default None value.
"""
if max_utilization is not None:
self.max_utilization = max_utilization
else:
self.max_utilization = _MAX_UTILIZATION
def set_traits(self, traits):
"""Set traits, account for default None value.
"""
if not traits:
self.traits = 0
else:
self.traits = traits
def add(self, app):
"""Add application to the allocation queue.
Once added, the scheduler will make an attempt to place the app on one
of the cell nodes.
"""
# Check that there are no duplicate app names.
if app.name in self.apps:
_LOGGER.warning(
'Duplicate app on allocation queue: %s', app.name
)
return
app.allocation = self
self.apps[app.name] = app
def remove(self, name):
"""Remove application from the allocation queue.
"""
if name in self.apps:
self.apps[name].allocation = None
del self.apps[name]
def priv_utilization_queue(self):
"""Returns tuples for sorted by global utilization.
Apps in the queue are ordered by priority, insertion order.
Adding or removing maintains invariant that apps utilization
monotonically increases as well.
Returns local prioritization queue in a tuple where first element is
utilization ratio, so that this queue is suitable for merging into
global priority queue.
"""
def _app_key(app):
"""Compares apps by priority, state, global index
"""
return (-app.priority, 0 if app.server else 1,
app.global_order, app.name)
prio_queue = sorted(six.viewvalues(self.apps), key=_app_key)
acc_demand = zero_capacity()
available = self.reserved + np.finfo(float).eps
util_before = utilization(acc_demand, self.reserved, available)
for app in prio_queue:
acc_demand = acc_demand + app.demand
util_after = utilization(acc_demand, self.reserved, available)
# Priority 0 apps are treated specially - their utilization is set to
# _MAX_UTILIZATION (infinity).
#
# This ensures that they are at the end of all queues.
if app.priority == 0:
util_before = _MAX_UTILIZATION
util_after = _MAX_UTILIZATION
# All things equal, already scheduled applications have priority
# over pending.
pending = 0 if app.server else 1
if util_after <= self.max_utilization - 1:
rank = self.rank
if util_before < 0:
rank -= self.rank_adjustment
else:
rank = _UNPLACED_RANK
entry = (rank, util_before, util_after, pending, app.global_order,
app)
util_before = util_after
yield entry
def utilization_queue(self, free_capacity, visitor=None):
"""Returns utilization queue including the sub-allocs.
All app queues from self and sub-allocs are merged in standard order,
and then utilization is recalculated based on total reserved capacity
of this alloc and sub-allocs combined.
The function maintains the invariant that any app (in self or inside a
sub-alloc) with utilization < 1 will remain with utilization < 1.
"""
total_reserved = self.total_reserved()
queues = [
alloc.utilization_queue(free_capacity, visitor)
for alloc in six.itervalues(self.sub_allocations)
]
queues.append(self.priv_utilization_queue())
acc_demand = zero_capacity()
available = total_reserved + free_capacity + np.finfo(float).eps
util_before = utilization(acc_demand, total_reserved, available)
for item in heapq.merge(*queues):
rank, _u_before, _u_after, pending, order, app = item
acc_demand = acc_demand + app.demand
util_after = utilization(acc_demand, total_reserved, available)
if app.priority == 0:
util_before = _MAX_UTILIZATION
util_after = _MAX_UTILIZATION
# - lower rank allocations take precedence.
# - for same rank, utilization takes precedence
# - False < True, so for apps with same utilization we prefer
# those that are already running (False == not pending)
# - Global order
entry = (rank, util_before, util_after, pending, order, app)
if visitor:
visitor(self, entry, acc_demand)
util_before = util_after
yield entry
def total_reserved(self):
"""Total reserved capacity including sub-allocs.
"""
return six.moves.reduce(
lambda acc, alloc: acc + alloc.total_reserved(),
six.itervalues(self.sub_allocations),
self.reserved
)
def add_sub_alloc(self, name, alloc):
"""Add child allocation.
"""
self.sub_allocations[name] = alloc
assert not alloc.path
alloc.path = self.path + [name]
alloc.label = self.label
def remove_sub_alloc(self, name):
"""Remove chlid allocation.
"""
if name in self.sub_allocations:
del self.sub_allocations[name]
def get_sub_alloc(self, name):
"""Return sub allocation, create empty if it does not exist.
"""
if name not in self.sub_allocations:
self.add_sub_alloc(name, Allocation())
return self.sub_allocations[name]
def all_apps(self):
"""Return all apps in allocation and sub-allocations."""
all_apps = list(six.itervalues(self.apps))
for alloc in six.itervalues(self.sub_allocations):
all_apps.extend(alloc.all_apps())
return all_apps
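# Illustrative sketch (hypothetical helper, not part of the original module):
# the utilization queue yields (rank, util_before, util_after, pending, order,
# app) entries.  Assumes DIMENSION_COUNT has been set to 2; the proids and the
# '_default' partition label are made-up illustration values.
def _example_utilization_queue():
    """Rank two apps of different priority inside one allocation."""
    alloc = Allocation(reserved=[2.0, 2.0], rank=100, partition='_default')
    alloc.add(Application('proid.a#001', priority=10, demand=[1.0, 1.0],
                          affinity='proid.a'))
    alloc.add(Application('proid.b#001', priority=1, demand=[2.0, 2.0],
                          affinity='proid.b'))
    free = np.array([4.0, 4.0])
    # Entries come out ordered; the higher priority app is evaluated first.
    return [entry[-1].name for entry in alloc.utilization_queue(free)]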
class Partition:
"""Cell partition.
"""
__slots__ = (
'allocation',
'max_server_uptime',
'max_lease',
'threshold',
'label',
'_reboot_buckets',
'_reboot_dates',
'_reboot_last',
)
def __init__(self, max_server_uptime=None, max_lease=None, threshold=None,
label=None, reboot_schedule=None, now=None):
self.label = label
self.allocation = Allocation(partition=label)
# Fall back to defaults for any policy values not provided.
if not max_server_uptime:
max_server_uptime = DEFAULT_SERVER_UPTIME
if not max_lease:
max_lease = DEFAULT_MAX_APP_LEASE
if not threshold:
threshold = DEFAULT_THRESHOLD
self.max_server_uptime = max_server_uptime
self.max_lease = max_lease
self.threshold = threshold
if not reboot_schedule:
# reboot every day
reboot_schedule = {day: (23, 59, 59) for day in range(7)}
if not now:
now = time.time()
self._reboot_dates = reboot_dates(
reboot_schedule,
start_date=datetime.date.fromtimestamp(now)
)
self._reboot_buckets = []
self._reboot_last = now
self.tick(now)
def _find_bucket(self, timestamp):
"""Try to find bucket with given timestamp.
"""
for bucket in self._reboot_buckets:
if bucket.timestamp == timestamp:
return bucket
return None
def add(self, server, timestamp=None):
"""Add server.
"""
bucket = None
if timestamp:
bucket = self._find_bucket(timestamp)
# servers that have exceeded the max lifetime should be rebooted at
# the next opportunity
if (self._reboot_buckets[0].timestamp >
server.up_since + DEFAULT_SERVER_UPTIME):
bucket = self._reboot_buckets[0]
if not bucket:
bucket = min(reversed(self._reboot_buckets),
key=lambda b: b.cost(server))
bucket.add(server)
def remove(self, server):
"""Remove server.
"""
for bucket in self._reboot_buckets:
bucket.remove(server)
def tick(self, now):
"""Do per-tick-bookkeeping.
"""
while self._reboot_last <= now + DEFAULT_SERVER_UPTIME:
bucket = RebootBucket(next(self._reboot_dates))
self._reboot_buckets.append(bucket)
self._reboot_last = bucket.timestamp
while self._reboot_buckets[0].timestamp < now:
self._reboot_buckets.pop(0)
class PartitionDict(dict):
"""Dict that creates partitions on demand.
We use this instead of collections.defaultdict so that we can provide
the new partition with its label, to be propagated to its allocations.
"""
def __missing__(self, label):
"""Create a new partition, passing the label to its constructor.
"""
self[label] = Partition(label=label)
return self[label]
# pylint: disable=invalid-name
def reboot_dates(schedule, start_date=None):
"""Generate list of valid reboot dates.
"""
date = datetime.date.today()
if start_date:
date = start_date
while True:
weekday = date.weekday()
if weekday in schedule:
h, m, s = schedule[weekday]
yield time.mktime((date.year, date.month, date.day,
h, m, s, 0, 0, 0))
date += datetime.timedelta(days=1)
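# Illustrative sketch (hypothetical helper, not part of the original module):
# reboot_dates() is an infinite generator, so callers pull only as many reboot
# timestamps as they need.
def _example_reboot_dates():
    """Return the next three reboot times for a nightly schedule."""
    nightly = {day: (23, 59, 59) for day in range(7)}
    gen = reboot_dates(nightly, start_date=datetime.date(2024, 1, 1))
    return [next(gen) for _ in range(3)]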
class RebootBucket:
"""Bucket of servers to be rebooted at the same time.
"""
__slots__ = (
'timestamp',
'servers',
)
def __init__(self, timestamp):
self.timestamp = timestamp
self.servers = []
def add(self, server):
"""Add server to this bucket.
"""
self.servers.append(server)
server.valid_until = self.timestamp
_LOGGER.info('Setting valid until on server: %s %s',
server.name, server.valid_until)
def remove(self, server):
"""Remove server from this bucket.
"""
try:
self.servers.remove(server)
except ValueError:
pass
def cost(self, server):
"""The cost of adding server to this bucket.
"""
if self.timestamp > server.up_since + DEFAULT_SERVER_UPTIME:
return float('inf')
if self.timestamp < server.up_since + MIN_SERVER_UPTIME:
return float('inf')
return len(self.servers)
class PlacementFeasibilityTracker:
"""Tracks similar apps placement failures."""
def __init__(self):
self.recorder = dict()
def feasible(self, app):
"""Checks if it is feasible to satisfy demand."""
constraints, demand = app.shape()
if constraints in self.recorder:
# If demand is >= than recorded failure, placement is not feasible.
if _all_ge(demand, self.recorder[constraints]):
return False
return True
def adjust(self, app):
"""Adjust info about failed placement."""
constraints, demand = app.shape()
if constraints not in self.recorder:
self.recorder[constraints] = demand
else:
if _all_le(demand, self.recorder[constraints]):
self.recorder[constraints] = demand
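# Illustrative sketch (hypothetical helper, not part of the original module):
# once a placement failure is recorded for a shape, any app with the same
# constraints and a demand greater than or equal to the recorded one is
# skipped without retrying.
def _example_feasibility_tracker():
    """Record one failed placement and query two app shapes."""
    tracker = PlacementFeasibilityTracker()
    big = Application('proid.x#001', priority=1, demand=[4.0, 4.0],
                      affinity='proid.x')
    small = Application('proid.x#002', priority=1, demand=[1.0, 1.0],
                        affinity='proid.x')
    tracker.adjust(big)               # record the failed shape/demand
    assert not tracker.feasible(big)  # same or larger demand: still infeasible
    assert tracker.feasible(small)    # smaller demand: worth trying
    return tracker.recorder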
class Cell(Bucket):
"""Top level node.
"""
__slots__ = (
'partitions',
'next_event_at',
'apps',
'identity_groups',
)
def __init__(self, name):
super(Cell, self).__init__(name, traits=0, level='cell')
self.partitions = PartitionDict()
self.apps = dict()
self.identity_groups = collections.defaultdict(IdentityGroup)
self.next_event_at = np.inf
def add_app(self, allocation, app):
"""Adds application to the scheduled list.
"""
assert allocation is not None
if app.allocation:
app.allocation.remove(app.name)
allocation.add(app)
self.apps[app.name] = app
if app.identity_group:
app.identity_group_ref = self.identity_groups[app.identity_group]
def remove_app(self, appname):
"""Remove app from scheduled list.
"""
if appname not in self.apps:
return
app = self.apps[appname]
servers = self.members()
if app.server in servers:
servers[app.server].remove(app.name)
if app.allocation:
app.allocation.remove(app.name)
app.release_identity()
del self.apps[appname]
def configure_identity_group(self, name, count):
"""Add identity group to the cell.
"""
if name not in self.identity_groups:
self.identity_groups[name] = IdentityGroup(count)
else:
self.identity_groups[name].adjust(count)
def remove_identity_group(self, name):
"""Remove identity group.
"""
ident_group = self.identity_groups.get(name)
if ident_group:
in_use = False
for app in six.itervalues(self.apps):
if app.identity_group_ref == ident_group:
ident_group.adjust(0)
in_use = True
break
if not in_use:
del self.identity_groups[name]
def _fix_invalid_placements(self, queue, servers):
"""If app is placed on non-existent server, set server to None.
"""
for app in queue:
if app.server and app.server not in servers:
app.server = None
app.evicted = True
app.release_identity()
def _record_rank_and_util(self, queue):
"""Set final rank and utilization for all apps in the queue.
"""
for item in queue:
rank = item[0]
util = item[1]
app = item[-1]
app.final_rank = rank
app.final_util = util
def _fix_invalid_identities(self, queue, servers):
"""Check that app identity is valid for given identity group.
"""
for app in queue:
if app.identity is not None and app.identity_group_ref is not None:
# Can happen if identity group was adjusted to lower count.
if app.identity >= app.identity_group_ref.count:
# Can't release identity as it is invalid.
_LOGGER.info('Identity exceeds limit: %s - %s, limit %s',
app.name, app.identity,
app.identity_group_ref.count)
app.identity = None
# Invalidate any existing placement.
if app.server:
servers[app.server].remove(app.name)
def _handle_inactive_servers(self, servers):
"""Migrate apps from inactive servers.
"""
self.next_event_at = np.inf
for server in six.itervalues(servers):
state, since = server.get_state()
to_be_moved = []
if state == State.down:
_LOGGER.debug('Server state is down: %s', server.name)
for name, app in six.iteritems(server.apps):
if app.data_retention_timeout is None:
expires_at = 0
else:
expires_at = since + app.data_retention_timeout
if expires_at <= time.time():
_LOGGER.debug('Expired placement: %s', name)
app.release_identity()
to_be_moved.append(name)
else:
_LOGGER.debug('Keep placement: %s until %s',
name, expires_at)
self.next_event_at = min(expires_at,
self.next_event_at)
elif state == State.frozen:
_LOGGER.debug('Server state is frozen: %s', server.name)
to_be_moved = [
name for name, app in six.iteritems(server.apps)
if app.unschedule
]
for name in to_be_moved:
server.remove(name)
def _find_placements(self, queue, servers):
"""Run the queue and find placements.
"""
# TODO: refactor to get rid of warnings.
#
# pylint: disable=too-many-branches,too-many-statements
#
# At this point, if app.server is defined, it points to attached
# server.
evicted = dict()
reversed_queue = queue[::-1]
placement_tracker = PlacementFeasibilityTracker()
for app in queue:
_LOGGER.debug('scheduling %s', app.name)
if app.final_rank == _UNPLACED_RANK:
if app.server:
assert app.server in servers
assert app.has_identity()
servers[app.server].remove(app.name)
app.release_identity()
continue
restore = {}
if app.renew:
assert app.server
assert app.has_identity()
assert app.server in servers
server = servers[app.server]
if not server.renew(app):
# Save information that will be used to restore placement
# in case renewal fails.
_LOGGER.debug('Cannot renew app %s on server %s',
app.name, app.server)
restore['server'] = server
restore['placement_expiry'] = app.placement_expiry
server.remove(app.name)
# At this point app was either renewed on the same server, or
# temporarily removed from server if renew failed.
#
# If placement will be found, renew should remain False. If
# placement will not be found, renew will be set to True when
# placement is restored to the server it was running.
app.renew = False
if app.server:
assert app.server in servers
assert app.has_identity()
continue
assert app.server is None
if not app.acquire_identity():
_LOGGER.info('Unable to acquire identity: %s, %s', app.name,
app.identity_group)
continue
# If app was evicted before, try to restore to the same node.
if app in evicted:
assert app.has_identity()
evicted_from, app_expiry = evicted[app]
del evicted[app]
if evicted_from.restore(app, app_expiry):
app.evicted = False
continue
assert app.server is None
if app.schedule_once and app.evicted:
continue
# Check if placement is feasible.
if not placement_tracker.feasible(app):
_LOGGER.info(
'Placement not feasible: %s %r', app.name, app.shape()
)
continue
if not self.put(app):
# There is not enough capacity; starting from the end of the queue,
# evict apps to free capacity.
for evicted_app in reversed_queue:
# We reached the app we can't place
if evicted_app == app:
break
# The app is not yet placed, skip
if not evicted_app.server:
continue
assert evicted_app.server in servers
evicted_app_server = servers[evicted_app.server]
# Do not consider servers that are not up.
if evicted_app_server.state is not State.up:
continue
evicted[evicted_app] = (evicted_app_server,
evicted_app.placement_expiry)
evicted_app_server.remove(evicted_app.name)
# TODO: we need to check affinity limit constraints on
# each level, all the way to the top.
if evicted_app_server.put(app):
break
# Placement failed.
if not app.server:
# If renewal attempt failed, restore previous placement and
# expiry date.
if restore:
restore['server'].restore(app, restore['placement_expiry'])
app.renew = True
else:
app.release_identity()
placement_tracker.adjust(app)
def schedule_alloc(self, allocation, servers):
"""Run the scheduler for given allocation.
"""
begin = time.time()
size = self.size(allocation.label)
util_queue = list(allocation.utilization_queue(size))
self._record_rank_and_util(util_queue)
queue = [item[-1] for item in util_queue]
self._find_placements(queue, servers)
_LOGGER.info('Scheduled %s (%d) apps in %r',
allocation.label,
len(queue),
time.time() - begin)
def schedule(self):
"""Run the scheduler.
"""
begin = time.time()
all_apps = []
for label, partition in six.iteritems(self.partitions):
allocation = partition.allocation
all_apps.extend(allocation.all_apps())
before = [(app.name, app.server, app.placement_expiry)
for app in all_apps]
servers = self.members()
self._fix_invalid_placements(six.viewvalues(self.apps), servers)
self._handle_inactive_servers(servers)
self._fix_invalid_identities(six.viewvalues(self.apps), servers)
for label, partition in six.iteritems(self.partitions):
allocation = partition.allocation
allocation.label = label
self.schedule_alloc(allocation, servers)
after = [(app.server, app.placement_expiry)
for app in all_apps]
placement = [
tuple(itertools.chain(b, a))
for b, a in six.moves.zip(before, after)
]
for appname, s_before, exp_before, s_after, exp_after in placement:
if s_before != s_after:
_LOGGER.info('New placement: %s - %s => %s',
appname, s_before, s_after)
else:
if exp_before != exp_after:
_LOGGER.info('Renewed: %s [%s] - %s => %s',
appname, s_before, exp_before, exp_after)
_LOGGER.info('Total scheduler time for %s apps: %r (sec)',
len(all_apps),
time.time() - begin)
return placement
def resolve_reboot_conflicts(self):
"""Adjust server exipiration time to avoid conflicts.
"""
pass
def dumps(cell):
"""Serializes cell to string.
"""
del cell
return ''
def loads(data):
"""Loads scheduler from string.
"""
del data
assert False, 'not implemented.'
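# Illustrative end-to-end sketch (hypothetical helper, not part of the original
# module): wire a cell, one server and one app together and run a scheduling
# pass.  Assumes DIMENSION_COUNT has been set to 2; the '_default' partition
# label and the proids are made-up illustration values.
def _example_schedule_cycle():
    """Schedule a single app onto a single server."""
    cell = Cell('test-cell')
    server = Server('srv1', [4.0, 4.0], up_since=time.time(),
                    valid_until=time.time() + DEFAULT_SERVER_UPTIME,
                    label='_default')
    cell.add_node(server)
    partition = cell.partitions['_default']  # created on demand
    partition.add(server)                    # assigns a reboot bucket
    app = Application('proid.demo#001', priority=1, demand=[1.0, 1.0],
                      affinity='proid.demo')
    cell.add_app(partition.allocation, app)
    placement = cell.schedule()
    # Each placement row is (name, before_server, before_expiry,
    #                        after_server, after_expiry); app.server == 'srv1'.
    return placement, app.server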
| """Treadmill hierarchical scheduler.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import abc
import collections
import datetime
import heapq
import itertools
import logging
import operator
import sys
import time
import enum
import numpy as np
import six
_LOGGER = logging.getLogger(__name__)
MAX_PRIORITY = 100
DEFAULT_RANK = 100
_UNPLACED_RANK = sys.maxsize
DIMENSION_COUNT = None
_MAX_UTILIZATION = float('inf')
_GLOBAL_ORDER_BASE = time.mktime((2014, 1, 1, 0, 0, 0, 0, 0, 0))
# 21 day
DEFAULT_SERVER_UPTIME = 21 * 24 * 60 * 60
# 1 day
MIN_SERVER_UPTIME = 1 * 24 * 60 * 60
# 7 days
DEFAULT_MAX_APP_LEASE = 7 * 24 * 60 * 60
# Default partition threshold
DEFAULT_THRESHOLD = 0.9
# pylint: disable=C0302,too-many-lines
def _bit_count(value):
"""Returns number of bits set.
"""
count = 0
while value:
value &= value - 1
count += 1
return count
def zero_capacity():
"""Returns zero capacity vector.
"""
assert DIMENSION_COUNT is not None, 'Dimension count not set.'
return np.zeros(DIMENSION_COUNT)
def eps_capacity():
"""Returns eps capacity vector.
"""
assert DIMENSION_COUNT is not None, 'Dimension count not set.'
return np.array(
[np.finfo(float).eps for _x in range(0, DIMENSION_COUNT)]
)
def _global_order():
"""Use timestamp in nanoseconds, from Jan 1st 2014, to break tie in
scheduling conflicts for apps of the same priority, in a FIFO fashion.
"""
# Take the current EPOCH in nanosec
global_order = int(time.time() * 1000000) - _GLOBAL_ORDER_BASE
return global_order
def utilization(demand, allocated, available):
"""Calculates utilization score.
"""
return np.max(np.subtract(demand, allocated) / available)
def _all(oper, left, right):
"""Short circuit all for ndarray.
"""
return all(
oper(ai, bi)
for ai, bi in six.moves.zip(left, right)
)
def _any(oper, left, right):
"""Short circuit any for ndarray.
"""
return any(
oper(ai, bi)
for ai, bi in six.moves.zip(left, right)
)
def _any_eq(left, right):
"""Short circuit any eq for ndarray.
"""
return _any(operator.eq, left, right)
def _any_isclose(left, right):
"""Short circuit any isclose for ndarray.
"""
return _any(np.isclose, left, right)
def _any_lt(left, right):
"""Short circuit any lt for ndarray.
"""
return _any(operator.lt, left, right)
def _any_le(left, right):
"""Short circuit any le for ndarray.
"""
return _any(operator.le, left, right)
def _any_gt(left, right):
"""Short circuit any gt for ndarray.
"""
return _any(operator.gt, left, right)
def _any_ge(left, right):
"""Short circuit any ge for ndarray.
"""
return _any(operator.ge, left, right)
def _all_eq(left, right):
"""Short circuit all eq for ndarray.
"""
return _all(operator.eq, left, right)
def _all_isclose(left, right):
"""Short circuit all isclose for ndarray.
"""
return _all(np.isclose, left, right)
def _all_lt(left, right):
"""Short circuit all lt for ndarray.
"""
return _all(operator.lt, left, right)
def _all_le(left, right):
"""Short circuit all le for ndarray.
"""
return _all(operator.le, left, right)
def _all_gt(left, right):
"""Short circuit all gt for ndarray.
"""
return _all(operator.gt, left, right)
def _all_ge(left, right):
"""Short circuit all ge for ndarray.
"""
return _all(operator.ge, left, right)
class IdentityGroup:
"""Identity group.
"""
__slots__ = (
'available',
'count',
)
def __init__(self, count=0):
self.count = count
self.available = set(range(0, count))
def acquire(self):
"""Return next available identity or None.
"""
if self.available:
return self.available.pop()
else:
return None
def release(self, ident):
"""Mark identity as available.
"""
if ident < self.count:
self.available.add(ident)
def adjust(self, count):
"""Adjust identities with new count.
If count is larger, add additional identities to the set.
If count is lower, remove identities that are no longer valid.
All apps that have invalid identities will be adjusted in the
schedule cycle.
"""
if count >= self.count:
self.available ^= set(six.moves.xrange(self.count, count))
else:
self.available -= set(six.moves.xrange(count, self.count))
self.count = count
class State(enum.Enum):
"""Enumeration of node/server states.
"""
# Ready to accept new applications.
# TODO: Fix attribute name
up = 'up' # pylint: disable=invalid-name
# Applications need to be migrated.
down = 'down'
# Existing applications can stay, but will not accept new.
frozen = 'frozen'
class Affinity:
"""Model affinity and affinity limits.
"""
__slots__ = (
'name',
'limits',
'constraints',
)
def __init__(self, name, limits=None):
self.name = name
self.limits = collections.defaultdict(lambda: float('inf'))
if limits:
self.limits.update(limits)
# freeze affinity shape constraints.
self.constraints = tuple([self.name] + sorted(self.limits.values()))
class Application:
"""Application object.
"""
__slots__ = (
'global_order',
'name',
'demand',
'affinity',
'priority',
'allocation',
'data_retention_timeout',
'server',
'lease',
'identity',
'identity_group',
'identity_group_ref',
'schedule_once',
'evicted',
'placement_expiry',
'renew',
'unschedule',
'final_rank',
'final_util',
'constraints',
)
def __init__(self, name, priority, demand, affinity,
affinity_limits=None,
data_retention_timeout=0,
lease=0,
identity_group=None,
identity=None,
schedule_once=False):
self.global_order = _global_order()
self.allocation = None
self.server = None
self.name = name
self.affinity = Affinity(affinity, affinity_limits)
self.priority = priority
self.demand = np.array(demand, dtype=float)
self.data_retention_timeout = data_retention_timeout
self.lease = lease
self.identity_group = identity_group
self.identity = identity
self.identity_group_ref = None
self.schedule_once = schedule_once
self.evicted = False
self.unschedule = False
self.placement_expiry = None
self.renew = False
def shape(self):
"""Return tuple of application (constraints, demand).
Application shape is tuple of constraints that affect application
placement. Currently this includes affinity constraints and app lease
time.
"""
constraints = (self.affinity.constraints + (self.lease,))
if self.allocation:
constraints += self.allocation.constraints
return constraints, self.demand
def acquire_identity(self):
"""Try to acquire identity if belong to the group.
Returns True if successfull or if identity group is none.
"""
if not self.identity_group_ref:
return True
if self.identity is None:
self.identity = self.identity_group_ref.acquire()
_LOGGER.info('Acquired identity: %s: %s - %s',
self.name, self.identity_group, self.identity)
return self.identity is not None
def release_identity(self):
"""Release app identity.
"""
if self.identity_group_ref and self.identity is not None:
self.identity_group_ref.release(self.identity)
self.identity = None
def force_set_identity(self, identity):
"""Force identity of the app.
"""
if identity is not None:
assert self.identity_group_ref
self.identity = identity
self.identity_group_ref.available.discard(identity)
def has_identity(self):
"""Checks if app has identity if identity group is specified.
"""
return self.identity_group_ref is None or self.identity is not None
@property
def traits(self):
"""The app traits are derived from allocation.
"""
if self.allocation is None:
return 0
else:
return self.allocation.traits
@six.add_metaclass(abc.ABCMeta)
class Strategy:
"""Base class for all placement strategies.
"""
@abc.abstractmethod
def suggested_node(self):
"""Suggested node that should be tried first.
"""
pass
@abc.abstractmethod
def next_node(self):
"""Next node to try, if previous suggestion was rejected.
"""
pass
class SpreadStrategy(Strategy):
"""Spread strategy will suggest new node for each subsequent placement.
"""
__slots__ = (
'current_idx',
'node',
)
def __init__(self, node):
self.current_idx = 0
self.node = node
def suggested_node(self):
"""Suggest next node from the cycle.
"""
for _ in six.moves.xrange(0, len(self.node.children)):
if self.current_idx == len(self.node.children):
self.current_idx = 0
current = self.node.children[self.current_idx]
self.current_idx += 1
if current:
return current
# Not a single non-none node.
return None
def next_node(self):
"""Suggest next node from the cycle.
"""
return self.suggested_node()
class PackStrategy(Strategy):
"""Pack strategy will suggest same node until it is full.
"""
__slots__ = (
'current_idx',
'node',
)
def __init__(self, node):
self.current_idx = 0
self.node = node
def suggested_node(self):
"""Suggest same node as previous placement.
"""
for _ in six.moves.xrange(0, len(self.node.children)):
if self.current_idx == len(self.node.children):
self.current_idx = 0
node = self.node.children[self.current_idx]
if node:
return node
return None
def next_node(self):
"""Suggest next node from the cycle.
"""
self.current_idx += 1
return self.suggested_node()
class TraitSet:
"""Hierarchical set of traits.
"""
__slots__ = (
'self_traits',
'children_traits',
'traits',
)
def __init__(self, traits=0):
if not traits:
traits = 0
# Private traits.
assert isinstance(traits, six.integer_types)
self.self_traits = traits
# Union of all children traits.
self.children_traits = dict()
self._recalculate()
def _recalculate(self):
"""Calculate combined set of all traits.
"""
self.traits = self.self_traits
for trait in six.itervalues(self.children_traits):
self.traits |= trait
def has(self, traits):
"""Check if all traits are present.
"""
return (self.traits & traits) == traits
def add(self, child, traits):
"""Add a child with given traits.
"""
# Update children traits.
self.children_traits[child] = traits
self._recalculate()
def remove(self, child):
"""Remove child traits from the list.
"""
if child in self.children_traits:
del self.children_traits[child]
self._recalculate()
def is_same(self, other):
"""Compares own traits, ignore child.
"""
return self.self_traits == other.self_traits
class AffinityCounter:
"""Manages affinity count.
"""
__slots__ = (
'affinity_counter',
)
def __init__(self):
self.affinity_counter = collections.Counter()
class Node:
"""Abstract placement node.
"""
__slots__ = (
'name',
'level',
'free_capacity',
'parent',
'children',
'children_by_name',
'traits',
'labels',
'affinity_counters',
'valid_until',
'_state',
'_state_since',
)
def __init__(self, name, traits, level, valid_until=0):
self.name = name
self.level = level
self.free_capacity = zero_capacity()
self.parent = None
self.children = list()
self.children_by_name = dict()
self.traits = TraitSet(traits)
self.labels = set()
self.affinity_counters = collections.Counter()
self.valid_until = valid_until
self._state = State.up
self._state_since = time.time()
def empty(self):
"""Return true if there are no children.
"""
return not bool(self.children_by_name)
def children_iter(self):
"""Iterate over active children.
"""
for child in self.children:
if child:
yield child
def get_state(self):
"""Returns tuple of (state, since).
"""
return self. _state, self._state_since
def set_state(self, state, since):
"""Sets the state and time since.
"""
if self._state is not state:
self._state_since = since
self._state = state
_LOGGER.debug('state: %s - (%s, %s)',
self.name, self._state, self._state_since)
@property
def state(self):
"""Return current state.
"""
return self._state
@state.setter
def state(self, new_state):
"""Set node state and records time.
"""
self.set_state(new_state, time.time())
def add_child_traits(self, node):
"""Recursively add child traits up.
"""
self.traits.add(node.name, node.traits.traits)
if self.parent:
self.parent.remove_child_traits(self.name)
self.parent.add_child_traits(self)
def adjust_valid_until(self, child_valid_until):
"""Recursively adjust valid until time.
"""
if child_valid_until:
self.valid_until = max(self.valid_until, child_valid_until)
else:
if self.empty():
self.valid_until = 0
else:
self.valid_until = max([node.valid_until
for node in self.children_iter()])
if self.parent:
self.parent.adjust_valid_until(child_valid_until)
def remove_child_traits(self, node_name):
"""Recursively remove child traits up.
"""
self.traits.remove(node_name)
if self.parent:
self.parent.remove_child_traits(self.name)
self.parent.add_child_traits(self)
def reset_children(self):
"""Reset children to empty list.
"""
for child in self.children_iter():
child.parent = None
self.children = list()
self.children_by_name = dict()
def add_node(self, node):
"""Add child node, set the traits and propagate traits up.
"""
assert node.parent is None
assert node.name not in self.children_by_name
node.parent = self
self.children.append(node)
self.children_by_name[node.name] = node
self.add_child_traits(node)
self.increment_affinity(node.affinity_counters)
self.add_labels(node.labels)
self.adjust_valid_until(node.valid_until)
def add_labels(self, labels):
"""Recursively add labels to self and parents.
"""
self.labels.update(labels)
if self.parent:
self.parent.add_labels(self.labels)
def remove_node(self, node):
"""Remove child node and adjust the traits.
"""
assert node.name in self.children_by_name
del self.children_by_name[node.name]
for idx in six.moves.xrange(0, len(self.children)):
if self.children[idx] == node:
self.children[idx] = None
self.remove_child_traits(node.name)
self.decrement_affinity(node.affinity_counters)
self.adjust_valid_until(None)
node.parent = None
return node
def remove_node_by_name(self, nodename):
"""Removes node by name.
"""
assert nodename in self.children_by_name
return self.remove_node(self.children_by_name[nodename])
def check_app_constraints(self, app):
"""Find app placement on the node.
"""
if app.allocation is not None:
if app.allocation.label not in self.labels:
_LOGGER.info('Missing label: %s on %s', app.allocation.label,
self.name)
return False
if app.traits != 0 and not self.traits.has(app.traits):
_LOGGER.info('Missing traits: %s on %s', app.traits, self.name)
return False
if not self.check_app_affinity_limit(app):
return False
if _any_gt(app.demand, self.free_capacity):
_LOGGER.info('Not enough free capacity: %s', self.free_capacity)
return False
return True
def check_app_affinity_limit(self, app):
"""Check app affinity limits
"""
count = self.affinity_counters[app.affinity.name]
limit = app.affinity.limits[self.level]
return count < limit
def put(self, _app):
"""Abstract method, should never be called.
"""
raise Exception('Not implemented.')
def size(self, label):
"""Returns total capacity of the children.
"""
if self.empty() or label not in self.labels:
return eps_capacity()
return np.sum([
n.size(label) for n in self.children_iter()], 0)
def members(self):
"""Return set of all leaf node names.
"""
names = dict()
for node in self.children_iter():
names.update(node.members())
return names
def increment_affinity(self, counters):
"""Increment affinity counters recursively.
"""
self.affinity_counters.update(counters)
if self.parent:
self.parent.increment_affinity(counters)
def decrement_affinity(self, counters):
"""Decrement affinity counters recursively.
"""
self.affinity_counters.subtract(counters)
if self.parent:
self.parent.decrement_affinity(counters)
class Bucket(Node):
"""Collection of nodes/buckets.
"""
__slots__ = (
'affinity_strategies',
'traits',
)
_default_strategy_t = SpreadStrategy
def __init__(self, name, traits=0, level=None):
super(Bucket, self).__init__(name, traits, level)
self.affinity_strategies = dict()
self.traits = TraitSet(traits)
def set_affinity_strategy(self, affinity, strategy_t):
"""Initilaizes placement strategy for given affinity.
"""
self.affinity_strategies[affinity] = strategy_t(self)
def get_affinity_strategy(self, affinity):
"""Returns placement strategy for the affinity, defaults to spread.
"""
if affinity not in self.affinity_strategies:
self.set_affinity_strategy(affinity, Bucket._default_strategy_t)
return self.affinity_strategies[affinity]
def adjust_capacity_up(self, new_capacity):
"""Node can only increase capacity.
"""
self.free_capacity = np.maximum(self.free_capacity, new_capacity)
if self.parent:
self.parent.adjust_capacity_up(self.free_capacity)
def adjust_capacity_down(self, prev_capacity=None):
"""Called when capacity is decreased.
"""
if self.empty():
self.free_capacity = zero_capacity()
if self.parent:
self.parent.adjust_capacity_down()
else:
if prev_capacity is not None and _all_lt(prev_capacity,
self.free_capacity):
return
free_capacity = zero_capacity()
for child_node in self.children_iter():
if child_node.state is not State.up:
continue
free_capacity = np.maximum(free_capacity,
child_node.free_capacity)
# If resulting free_capacity is less the previous, we need to
# adjust the parent, otherwise, nothing needs to be done.
prev_capacity = self.free_capacity.copy()
if _any_lt(free_capacity, self.free_capacity):
self.free_capacity = free_capacity
if self.parent:
self.parent.adjust_capacity_down(prev_capacity)
def add_node(self, node):
"""Adds node to the bucket.
"""
super(Bucket, self).add_node(node)
self.adjust_capacity_up(node.free_capacity)
def remove_node(self, node):
"""Removes node from the bucket.
"""
super(Bucket, self).remove_node(node)
# if _any_isclose(self.free_capacity, node.free_capacity):
self.adjust_capacity_down(node.free_capacity)
return node
def put(self, app):
"""Try to put app on one of the nodes that belong to the bucket.
"""
# Check if it is feasible to put app on some node low in the
# hierarchy
_LOGGER.debug('bucket.put: %s => %s', app.name, self.name)
if not self.check_app_constraints(app):
return False
strategy = self.get_affinity_strategy(app.affinity.name)
node = strategy.suggested_node()
if node is None:
_LOGGER.debug('All nodes in the bucket deleted.')
return False
nodename0 = node.name
first = True
while True:
# End of iteration.
if not first and node.name == nodename0:
_LOGGER.debug('Finished iterating on: %s.', self.name)
break
first = False
_LOGGER.debug('Trying node: %s:', node.name)
if node.state is not State.up:
_LOGGER.debug('Node not up: %s, %s', node.name, node.state)
else:
if node.put(app):
return True
node = strategy.next_node()
return False
class Server(Node):
"""Server object, final app placement.
"""
__slots__ = (
'init_capacity',
'apps',
'up_since',
'presence_id',
)
def __init__(self, name, capacity, up_since=0, valid_until=0,
traits=0, label=None, presence_id=None):
super(Server, self).__init__(name, traits=traits, level='server',
valid_until=valid_until)
self.labels = set([label])
self.init_capacity = np.array(capacity, dtype=float)
self.free_capacity = self.init_capacity.copy()
self.apps = dict()
self.up_since = up_since
self.presence_id = presence_id
def __str__(self):
return 'server: %s %s' % (self.name, self.init_capacity)
def is_same(self, other):
"""Compares capacity and traits against another server.
valid_until is ignored, as server comes up after reboot will have
different valid_until value.
"""
return (self.labels == other.labels and
_all_eq(self.init_capacity, other.init_capacity) and
self.traits.is_same(other.traits))
def put(self, app):
"""Tries to put the app on the server.
"""
assert app.name not in self.apps
_LOGGER.debug('server.put: %s => %s', app.name, self.name)
if not self.check_app_lifetime(app):
return False
if not self.check_app_constraints(app):
return False
prev_capacity = self.free_capacity.copy()
self.free_capacity -= app.demand
self.apps[app.name] = app
self.increment_affinity([app.affinity.name])
app.server = self.name
if self.parent:
self.parent.adjust_capacity_down(prev_capacity)
if app.placement_expiry is None:
app.placement_expiry = time.time() + app.lease
return True
def restore(self, app, placement_expiry=None):
"""Put app back on the server, ignore app lifetime.
"""
_LOGGER.debug('server.restore: %s => %s (%s)',
app.name, self.name, placement_expiry)
lease = app.lease
# If not explicit
if placement_expiry is None:
placement_expiry = app.placement_expiry
app.lease = 0
rc = self.put(app)
app.lease = lease
app.placement_expiry = placement_expiry
return rc
def renew(self, app):
"""Try to extend the placement for app lease.
"""
can_renew = self.check_app_lifetime(app)
if can_renew:
app.placement_expiry = time.time() + app.lease
return can_renew
def check_app_lifetime(self, app):
"""Check if the app lease fits until server is rebooted.
"""
# app with 0 lease can be placed anywhere (ignore potentially
# expired servers)
if not app.lease:
return True
return time.time() + app.lease < self.valid_until
def remove(self, app_name):
"""Removes app from the server.
"""
assert app_name in self.apps
app = self.apps[app_name]
del self.apps[app_name]
app.server = None
app.evicted = True
app.unschedule = False
app.placement_expiry = None
self.free_capacity += app.demand
self.decrement_affinity([app.affinity.name])
if self.parent:
self.parent.adjust_capacity_up(self.free_capacity)
def remove_all(self):
"""Remove all apps.
"""
# iterate over copy of the keys, as we are removing them in the loop.
for appname in list(self.apps):
self.remove(appname)
def size(self, label):
"""Return server capacity.
"""
if label not in self.labels:
return eps_capacity()
return self.init_capacity
def members(self):
"""Return set of all leaf node names.
"""
return {self.name: self}
def set_state(self, state, since):
"""Change host state.
"""
if self.state is state:
return
super(Server, self).set_state(state, since)
if state == State.up:
if self.parent:
self.parent.adjust_capacity_up(self.free_capacity)
elif state in (State.down, State.frozen):
if self.parent:
self.parent.adjust_capacity_down(self.free_capacity)
else:
            raise Exception('Invalid state: %s' % state)
class Allocation:
"""Allocation manages queue of apps sharing same reserved capacity.
In reality allocation is tied to grn via application proid.
Applications within the allocation are organized by application priority.
Allocations are ranked, and the rank is used to globally order applications
from different allocations into global queue.
Default allocation has rank 100. Defining allocation with lower rank will
    result in all of its applications being evaluated first regardless of
utilization. This is used to model "system" applications that should be
always present regardless of utilization.
Allocation queue can be capped with max_utilization parameter. If set, it
will specify the max_utilization which will be considered for scheduling.
"""
__slots__ = (
'reserved',
'rank',
'rank_adjustment',
'traits',
'label',
'max_utilization',
'apps',
'sub_allocations',
'path',
'constraints',
)
def __init__(self, reserved=None, rank=None, traits=None,
max_utilization=None, partition=None):
self.set_reserved(reserved)
self.rank = None
self.rank_adjustment = 0
self.traits = 0
self.label = partition
self.max_utilization = _MAX_UTILIZATION
self.reserved = zero_capacity()
self.set_max_utilization(max_utilization)
self.set_traits(traits)
self.update(reserved, rank, 0)
self.apps = dict()
self.sub_allocations = dict()
self.path = []
        # Freeze shape constraints.
self.constraints = (self.label, self.traits,)
@property
def name(self):
"""Returns full allocation name.
"""
return '/'.join(self.path)
def set_reserved(self, reserved):
"""Update reserved capacity.
"""
if reserved is None:
self.reserved = zero_capacity()
elif isinstance(reserved, int):
assert reserved == 0
self.reserved = zero_capacity()
elif isinstance(reserved, float):
assert reserved == 0.0
self.reserved = zero_capacity()
elif isinstance(reserved, list):
assert len(reserved) == DIMENSION_COUNT
self.reserved = np.array(reserved, dtype=float)
elif isinstance(reserved, np.ndarray):
self.reserved = reserved
else:
            assert False, 'Unsupported type: %r' % type(reserved)
def update(self, reserved, rank, rank_adjustment, max_utilization=None):
"""Updates allocation.
"""
if rank is not None:
self.rank = rank
else:
self.rank = DEFAULT_RANK
if rank_adjustment is not None:
self.rank_adjustment = rank_adjustment
self.set_reserved(reserved)
self.set_max_utilization(max_utilization)
def set_max_utilization(self, max_utilization):
"""Sets max_utilization, accounting for default None value.
"""
if max_utilization is not None:
self.max_utilization = max_utilization
else:
self.max_utilization = _MAX_UTILIZATION
def set_traits(self, traits):
"""Set traits, account for default None value.
"""
if not traits:
self.traits = 0
else:
self.traits = traits
def add(self, app):
"""Add application to the allocation queue.
Once added, the scheduler will make an attempt to place the app on one
of the cell nodes.
"""
# Check that there are no duplicate app names.
if app.name in self.apps:
_LOGGER.warning(
                'Duplicate app on allocation queue: %s', app.name
)
return
app.allocation = self
self.apps[app.name] = app
def remove(self, name):
"""Remove application from the allocation queue.
"""
if name in self.apps:
self.apps[name].allocation = None
del self.apps[name]
def priv_utilization_queue(self):
"""Returns tuples for sorted by global utilization.
Apps in the queue are ordered by priority, insertion order.
Adding or removing maintains invariant that apps utilization
monotonically increases as well.
Returns local prioritization queue in a tuple where first element is
utilization ratio, so that this queue is suitable for merging into
global priority queue.
"""
def _app_key(app):
"""Compares apps by priority, state, global index
"""
return (-app.priority, 0 if app.server else 1,
app.global_order, app.name)
prio_queue = sorted(six.viewvalues(self.apps), key=_app_key)
acc_demand = zero_capacity()
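        # np.finfo(float).eps keeps 'available' strictly positive so the
        # utilization ratio is well defined even for zero reservations.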
available = self.reserved + np.finfo(float).eps
util_before = utilization(acc_demand, self.reserved, available)
for app in prio_queue:
acc_demand = acc_demand + app.demand
util_after = utilization(acc_demand, self.reserved, available)
# Priority 0 apps are treated specially - utilization is set to
# max float.
#
# This ensures that they are at the end of the all queues.
if app.priority == 0:
util_before = _MAX_UTILIZATION
util_after = _MAX_UTILIZATION
# All things equal, already scheduled applications have priority
# over pending.
pending = 0 if app.server else 1
if util_after <= self.max_utilization - 1:
rank = self.rank
if util_before < 0:
rank -= self.rank_adjustment
else:
rank = _UNPLACED_RANK
entry = (rank, util_before, util_after, pending, app.global_order,
app)
util_before = util_after
yield entry
def utilization_queue(self, free_capacity, visitor=None):
"""Returns utilization queue including the sub-allocs.
All app queues from self and sub-allocs are merged in standard order,
and then utilization is recalculated based on total reserved capacity
of this alloc and sub-allocs combined.
        The function maintains the invariant that any app (in self or inside a
        sub-alloc) with utilization < 1 will remain with utilization < 1.
"""
total_reserved = self.total_reserved()
queues = [
alloc.utilization_queue(free_capacity, visitor)
for alloc in six.itervalues(self.sub_allocations)
]
queues.append(self.priv_utilization_queue())
acc_demand = zero_capacity()
available = total_reserved + free_capacity + np.finfo(float).eps
util_before = utilization(acc_demand, total_reserved, available)
for item in heapq.merge(*queues):
rank, _u_before, _u_after, pending, order, app = item
acc_demand = acc_demand + app.demand
util_after = utilization(acc_demand, total_reserved, available)
if app.priority == 0:
util_before = _MAX_UTILIZATION
util_after = _MAX_UTILIZATION
# - lower rank allocations take precedence.
# - for same rank, utilization takes precedence
# - False < True, so for apps with same utilization we prefer
# those that already running (False == not pending)
# - Global order
entry = (rank, util_before, util_after, pending, order, app)
if visitor:
visitor(self, entry, acc_demand)
util_before = util_after
yield entry
def total_reserved(self):
"""Total reserved capacity including sub-allocs.
"""
return six.moves.reduce(
lambda acc, alloc: acc + alloc.total_reserved(),
six.itervalues(self.sub_allocations),
self.reserved
)
def add_sub_alloc(self, name, alloc):
"""Add child allocation.
"""
self.sub_allocations[name] = alloc
assert not alloc.path
alloc.path = self.path + [name]
alloc.label = self.label
def remove_sub_alloc(self, name):
"""Remove chlid allocation.
"""
if name in self.sub_allocations:
del self.sub_allocations[name]
def get_sub_alloc(self, name):
"""Return sub allocation, create empty if it does not exist.
"""
if name not in self.sub_allocations:
self.add_sub_alloc(name, Allocation())
return self.sub_allocations[name]
def all_apps(self):
"""Return all apps in allocation and sub-allocations."""
all_apps = list(six.itervalues(self.apps))
for alloc in six.itervalues(self.sub_allocations):
all_apps.extend(alloc.all_apps())
return all_apps
class Partition:
"""Cell partition.
"""
__slots__ = (
'allocation',
'max_server_uptime',
'max_lease',
'threshold',
'label',
'_reboot_buckets',
'_reboot_dates',
'_reboot_last',
)
def __init__(self, max_server_uptime=None, max_lease=None, threshold=None,
label=None, reboot_schedule=None, now=None):
self.label = label
self.allocation = Allocation(partition=label)
# Default -
if not max_server_uptime:
max_server_uptime = DEFAULT_SERVER_UPTIME
if not max_lease:
max_lease = DEFAULT_MAX_APP_LEASE
if not threshold:
threshold = DEFAULT_THRESHOLD
self.max_server_uptime = max_server_uptime
self.max_lease = max_lease
self.threshold = threshold
if not reboot_schedule:
# reboot every day
reboot_schedule = {day: (23, 59, 59) for day in range(7)}
if not now:
now = time.time()
self._reboot_dates = reboot_dates(
reboot_schedule,
start_date=datetime.date.fromtimestamp(now)
)
self._reboot_buckets = []
self._reboot_last = now
self.tick(now)
def _find_bucket(self, timestamp):
"""Try to find bucket with given timestamp.
"""
for bucket in self._reboot_buckets:
if bucket.timestamp == timestamp:
return bucket
return None
def add(self, server, timestamp=None):
"""Add server.
"""
bucket = None
if timestamp:
bucket = self._find_bucket(timestamp)
        # servers that have been up longer than the max lifetime should be
        # rebooted at the next opportunity
if (self._reboot_buckets[0].timestamp >
server.up_since + DEFAULT_SERVER_UPTIME):
bucket = self._reboot_buckets[0]
if not bucket:
bucket = min(reversed(self._reboot_buckets),
key=lambda b: b.cost(server))
bucket.add(server)
def remove(self, server):
"""Remove server.
"""
for bucket in self._reboot_buckets:
bucket.remove(server)
def tick(self, now):
"""Do per-tick-bookkeeping.
"""
while self._reboot_last <= now + DEFAULT_SERVER_UPTIME:
bucket = RebootBucket(next(self._reboot_dates))
self._reboot_buckets.append(bucket)
self._reboot_last = bucket.timestamp
while self._reboot_buckets[0].timestamp < now:
self._reboot_buckets.pop(0)
class PartitionDict(dict):
"""Dict that creates partitions on demand.
We use this instead of collections.defaultdict so that we can provide
the new partition with its label, to be propagated to its allocations.
"""
def __missing__(self, label):
"""Create a new partition, passing the label to its constructor.
"""
self[label] = Partition(label=label)
return self[label]
# pylint: disable=invalid-name
def reboot_dates(schedule, start_date=None):
"""Generate list of valid reboot dates.
"""
date = datetime.date.today()
if start_date:
date = start_date
while True:
weekday = date.weekday()
if weekday in schedule:
h, m, s = schedule[weekday]
yield time.mktime((date.year, date.month, date.day,
h, m, s, 0, 0, 0))
date += datetime.timedelta(days=1)
class RebootBucket:
"""Bucket of servers to be rebooted at the same time.
"""
__slots__ = (
'timestamp',
'servers',
)
def __init__(self, timestamp):
self.timestamp = timestamp
self.servers = []
def add(self, server):
"""Add server to this bucket.
"""
self.servers.append(server)
server.valid_until = self.timestamp
_LOGGER.info('Setting valid until on server: %s %s',
server.name, server.valid_until)
def remove(self, server):
"""Remove server from this bucket.
"""
try:
self.servers.remove(server)
except ValueError:
pass
def cost(self, server):
"""The cost of adding server to this bucket.
"""
if self.timestamp > server.up_since + DEFAULT_SERVER_UPTIME:
return float('inf')
if self.timestamp < server.up_since + MIN_SERVER_UPTIME:
return float('inf')
return len(self.servers)
class PlacementFeasibilityTracker:
"""Tracks similar apps placement failures."""
def __init__(self):
self.recorder = dict()
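        # Maps an app shape (constraints) to the smallest demand that failed to place.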
def feasible(self, app):
"""Checks if it is feasible to satisfy demand."""
constraints, demand = app.shape()
if constraints in self.recorder:
# If demand is >= than recorded failure, placement is not feasible.
if _all_ge(demand, self.recorder[constraints]):
return False
return True
def adjust(self, app):
"""Adjust info about failed placement."""
constraints, demand = app.shape()
if constraints not in self.recorder:
self.recorder[constraints] = demand
else:
if _all_le(demand, self.recorder[constraints]):
self.recorder[constraints] = demand
class Cell(Bucket):
"""Top level node.
"""
__slots__ = (
'partitions',
'next_event_at',
'apps',
'identity_groups',
)
def __init__(self, name):
super(Cell, self).__init__(name, traits=0, level='cell')
self.partitions = PartitionDict()
self.apps = dict()
self.identity_groups = collections.defaultdict(IdentityGroup)
self.next_event_at = np.inf
def add_app(self, allocation, app):
"""Adds application to the scheduled list.
"""
assert allocation is not None
if app.allocation:
app.allocation.remove(app.name)
allocation.add(app)
self.apps[app.name] = app
if app.identity_group:
app.identity_group_ref = self.identity_groups[app.identity_group]
def remove_app(self, appname):
"""Remove app from scheduled list.
"""
if appname not in self.apps:
return
app = self.apps[appname]
servers = self.members()
if app.server in servers:
servers[app.server].remove(app.name)
if app.allocation:
app.allocation.remove(app.name)
app.release_identity()
del self.apps[appname]
def configure_identity_group(self, name, count):
"""Add identity group to the cell.
"""
if name not in self.identity_groups:
self.identity_groups[name] = IdentityGroup(count)
else:
self.identity_groups[name].adjust(count)
def remove_identity_group(self, name):
"""Remove identity group.
"""
ident_group = self.identity_groups.get(name)
if ident_group:
in_use = False
for app in six.itervalues(self.apps):
if app.identity_group_ref == ident_group:
ident_group.adjust(0)
in_use = True
break
if not in_use:
del self.identity_groups[name]
def _fix_invalid_placements(self, queue, servers):
"""If app is placed on non-existent server, set server to None.
"""
for app in queue:
if app.server and app.server not in servers:
app.server = None
app.evicted = True
app.release_identity()
def _record_rank_and_util(self, queue):
"""Set final rank and utilization for all apps in the queue.
"""
for item in queue:
rank = item[0]
util = item[1]
app = item[-1]
app.final_rank = rank
app.final_util = util
def _fix_invalid_identities(self, queue, servers):
"""Check that app identity is valid for given identity group.
"""
for app in queue:
if app.identity is not None and app.identity_group_ref is not None:
# Can happen if identity group was adjusted to lower count.
if app.identity >= app.identity_group_ref.count:
# Can't release identity as it is invalid.
_LOGGER.info('Identity exceeds limit: %s - %s, limit %s',
app.name, app.identity,
app.identity_group_ref.count)
app.identity = None
# Invalidate any existing placement.
if app.server:
servers[app.server].remove(app.name)
def _handle_inactive_servers(self, servers):
"""Migrate apps from inactive servers.
"""
self.next_event_at = np.inf
for server in six.itervalues(servers):
state, since = server.get_state()
to_be_moved = []
if state == State.down:
_LOGGER.debug('Server state is down: %s', server.name)
for name, app in six.iteritems(server.apps):
if app.data_retention_timeout is None:
expires_at = 0
else:
expires_at = since + app.data_retention_timeout
if expires_at <= time.time():
_LOGGER.debug('Expired placement: %s', name)
app.release_identity()
to_be_moved.append(name)
else:
_LOGGER.debug('Keep placement: %s until %s',
name, expires_at)
self.next_event_at = min(expires_at,
self.next_event_at)
elif state == State.frozen:
_LOGGER.debug('Server state is frozen: %s', server.name)
to_be_moved = [
name for name, app in six.iteritems(server.apps)
if app.unschedule
]
for name in to_be_moved:
server.remove(name)
def _find_placements(self, queue, servers):
"""Run the queue and find placements.
"""
# TODO: refactor to get rid of warnings.
#
# pylint: disable=too-many-branches,too-many-statements
#
# At this point, if app.server is defined, it points to attached
# server.
evicted = dict()
reversed_queue = queue[::-1]
placement_tracker = PlacementFeasibilityTracker()
for app in queue:
_LOGGER.debug('scheduling %s', app.name)
if app.final_rank == _UNPLACED_RANK:
if app.server:
assert app.server in servers
assert app.has_identity()
servers[app.server].remove(app.name)
app.release_identity()
continue
restore = {}
if app.renew:
assert app.server
assert app.has_identity()
assert app.server in servers
server = servers[app.server]
if not server.renew(app):
# Save information that will be used to restore placement
# in case renewal fails.
_LOGGER.debug('Cannot renew app %s on server %s',
app.name, app.server)
restore['server'] = server
restore['placement_expiry'] = app.placement_expiry
server.remove(app.name)
# At this point app was either renewed on the same server, or
# temporarily removed from server if renew failed.
#
# If placement will be found, renew should remain False. If
# placement will not be found, renew will be set to True when
# placement is restored to the server it was running.
app.renew = False
if app.server:
assert app.server in servers
assert app.has_identity()
continue
assert app.server is None
if not app.acquire_identity():
_LOGGER.info('Unable to acquire identity: %s, %s', app.name,
app.identity_group)
continue
# If app was evicted before, try to restore to the same node.
if app in evicted:
assert app.has_identity()
evicted_from, app_expiry = evicted[app]
del evicted[app]
if evicted_from.restore(app, app_expiry):
app.evicted = False
continue
assert app.server is None
if app.schedule_once and app.evicted:
continue
# Check if placement is feasible.
if not placement_tracker.feasible(app):
_LOGGER.info(
'Placement not feasible: %s %r', app.name, app.shape()
)
continue
if not self.put(app):
# There is not enough capacity, from the end of the queue,
# evict apps, freeing capacity.
for evicted_app in reversed_queue:
# We reached the app we can't place
if evicted_app == app:
break
# The app is not yet placed, skip
if not evicted_app.server:
continue
assert evicted_app.server in servers
evicted_app_server = servers[evicted_app.server]
# Do not consider servers that are not up.
if evicted_app_server.state is not State.up:
continue
evicted[evicted_app] = (evicted_app_server,
evicted_app.placement_expiry)
evicted_app_server.remove(evicted_app.name)
# TODO: we need to check affinity limit constraints on
# each level, all the way to the top.
if evicted_app_server.put(app):
break
# Placement failed.
if not app.server:
# If renewal attempt failed, restore previous placement and
# expiry date.
if restore:
restore['server'].restore(app, restore['placement_expiry'])
app.renew = True
else:
app.release_identity()
placement_tracker.adjust(app)
def schedule_alloc(self, allocation, servers):
"""Run the scheduler for given allocation.
"""
begin = time.time()
size = self.size(allocation.label)
util_queue = list(allocation.utilization_queue(size))
self._record_rank_and_util(util_queue)
queue = [item[-1] for item in util_queue]
self._find_placements(queue, servers)
_LOGGER.info('Scheduled %s (%d) apps in %r',
allocation.label,
len(queue),
time.time() - begin)
def schedule(self):
"""Run the scheduler.
"""
begin = time.time()
all_apps = []
for label, partition in six.iteritems(self.partitions):
allocation = partition.allocation
all_apps.extend(allocation.all_apps())
before = [(app.name, app.server, app.placement_expiry)
for app in all_apps]
servers = self.members()
self._fix_invalid_placements(six.viewvalues(self.apps), servers)
self._handle_inactive_servers(servers)
self._fix_invalid_identities(six.viewvalues(self.apps), servers)
for label, partition in six.iteritems(self.partitions):
allocation = partition.allocation
allocation.label = label
self.schedule_alloc(allocation, servers)
after = [(app.server, app.placement_expiry)
for app in all_apps]
placement = [
tuple(itertools.chain(b, a))
for b, a in six.moves.zip(before, after)
]
for appname, s_before, exp_before, s_after, exp_after in placement:
if s_before != s_after:
_LOGGER.info('New placement: %s - %s => %s',
appname, s_before, s_after)
else:
if exp_before != exp_after:
_LOGGER.info('Renewed: %s [%s] - %s => %s',
appname, s_before, exp_before, exp_after)
_LOGGER.info('Total scheduler time for %s apps: %r (sec)',
len(all_apps),
time.time() - begin)
return placement
def resolve_reboot_conflicts(self):
"""Adjust server exipiration time to avoid conflicts.
"""
pass
def dumps(cell):
"""Serializes cell to string.
"""
del cell
return ''
def loads(data):
"""Loads scheduler from string.
"""
del data
    assert False, 'not implemented.'
banners/bannerRan.py | gothyyy/AIDungeon | 1 | 10264 | <filename>banners/bannerRan.py
import random
import sys
import time
import json
import os
import warnings
import numpy as np
import glob, os
stat_mini = 1
stat_max = 0
listBanners = []
#HOW TO USE IT:
#1 copy the opening.txt
#2 remove the graphic (but do keep top logo for consistency)
#3 add ASCII art that is 78 or less characters in width
#4 save txt file under a complete new name
class bannerRan:
def __init__(self):
banner_number = load_banner() #insert function to get random
self.banner_number = banner_number
def load_banner():
global stat_max
global stat_mini
global listBanners
hey = scanBanners() #load text and get proper numbers
choose_between = r(stat_mini, stat_max)
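    # NOTE: the banner is picked uniformly at random from listBanners;
    # choose_between is computed but not currently used.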
x = random.choice(listBanners)
return x
def r(x,y): # random, picks an integer between X and Y
    return random.randint(x, y)
def scanBanners():
global stat_max
global listBanners
dir_path = os.path.dirname(os.path.realpath(__file__)) # directory of banners path
#os.chdir("")
i = 0
for file in glob.glob("banners/*.txt"):
i+=1
listBanners.append(file)
#print(str(i), file)
stat_max = i
x = dir_path
    return x
BondMarket/app/theme_lib.py | Meith0717/BondMarket | 0 | 10265 | <reponame>Meith0717/BondMarket<filename>BondMarket/app/theme_lib.py
from dataclasses import dataclass
@dataclass
class theme:
name : str
bg_color : str
fg_color : str
lb_color : str
ttk_theme : str
LIGHT = theme(
name='LIGHT',
bg_color=None,
fg_color='black',
lb_color='#f0f0f0',
ttk_theme='xpnative'
)
DARK = theme(
name='DARK',
bg_color='#424242',
fg_color='white',
lb_color='#424242',
ttk_theme='black'
)
|
run.py | rimijoker/CA-MTL | 1 | 10266 | import os
import sys
import re
import json
import logging
import torch
from transformers import (
HfArgumentParser,
set_seed,
AutoTokenizer,
AutoConfig,
EvalPrediction,
)
from src.model.ca_mtl import CaMtl, CaMtlArguments
from src.utils.misc import MultiTaskDataArguments, Split
from src.mtl_trainer import MultiTaskTrainer, MultiTaskTrainingArguments
from src.data.mtl_dataset import MultiTaskDataset
from src.data.task_dataset import TaskDataset
logger = logging.getLogger(__name__)
def setup_logging(training_args):
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
)
logger.warning(
"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s",
training_args.local_rank,
training_args.device,
training_args.n_gpu,
bool(training_args.local_rank != -1),
training_args.fp16,
)
def parse_cmd_args():
parser = HfArgumentParser(
(
CaMtlArguments,
MultiTaskDataArguments,
MultiTaskTrainingArguments,
)
)
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
model_args, data_args, training_args = parser.parse_json_file(
json_file=os.path.abspath(sys.argv[1])
)
else:
(
model_args,
data_args,
training_args,
) = parser.parse_args_into_dataclasses()
logger.info("Training/evaluation parameters %s", training_args)
return model_args, data_args, training_args
def create_eval_datasets(mode, data_args, tokenizer):
eval_datasets = {}
for task_id, task_name in enumerate(data_args.tasks):
eval_datasets[task_name] = TaskDataset(
task_name, task_id, data_args, tokenizer, mode=mode
)
if task_name == "mnli":
# Loop to handle MNLI double evaluation (matched, mis-matched)
eval_datasets["mnli-mm"] = TaskDataset(
"mnli-mm", task_id, data_args, tokenizer, mode=mode
)
return eval_datasets
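# End-to-end flow: parse args, build the CA-MTL model and tokenizer, then train,
# evaluate and/or predict depending on the training argument flags.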
def main():
model_args, data_args, training_args = parse_cmd_args()
setup_logging(training_args)
set_seed(training_args.seed)
config = AutoConfig.from_pretrained(
CaMtl.get_base_model(model_args.model_name_or_path),
)
model = CaMtl.from_pretrained(
CaMtl.get_base_model(model_args.model_name_or_path),
model_args,
data_args,
config=config)
model.freeze_encoder_layers(model_args)
logger.info(model)
tokenizer = AutoTokenizer.from_pretrained(
CaMtl.get_base_model(model_args.model_name_or_path),
)
logger.info("Training tasks: %s", ", ".join([t for t in data_args.tasks]))
trainer = MultiTaskTrainer(
tokenizer,
data_args,
model=model,
args=training_args,
train_dataset=MultiTaskDataset(data_args, tokenizer, limit_length=50)
if training_args.do_train
else None,
eval_datasets=create_eval_datasets(Split.dev, data_args, tokenizer)
if training_args.do_eval or training_args.evaluate_during_training
else None,
test_datasets=create_eval_datasets(Split.test, data_args, tokenizer)
if training_args.do_predict
else None,
)
if training_args.do_train:
trainer.train(
model_path=model_args.model_name_or_path
if os.path.isdir(model_args.model_name_or_path)
else None
)
if training_args.do_eval:
trainer.evaluate()
if training_args.do_predict:
trainer.predict()
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
if __name__ == "__main__":
    main()
examples/readWebsocket.py | uadlq/PhyPiDAQ-PiOS11 | 0 | 10267 | #!/usr/bin/env python3
"""Read data in CSV format from websocket
"""
import sys
import asyncio
import websockets
# read url from command line
if len(sys.argv) >= 2:
uri = sys.argv[1]
else:
# host url and port
uri = "ws://localhost:8314"
print("*==* ", sys.argv[0], " Lese Daten von url ", uri)
async def read_ws():
"""asynchronous read from websocket
"""
async with websockets.connect(uri, ping_interval=None) as websocket:
# test connection
await websocket.send("req_connect")
answ = await websocket.recv()
if answ == "ack_connect":
print("** connected to websocket ", uri)
# get data
await websocket.send("getData")
while True:
inp = await websocket.recv()
if inp == '\n': # empty record, end
print("empty input - closing")
sys.exit(0)
else:
print('read: %s ' % inp, end='')
# run web client
asyncio.get_event_loop().run_until_complete(read_ws())
settings/__init__.py | arcana261/python-grpc-boilerplate | 0 | 10268 | <gh_stars>0
import os
import sys
import itertools
import json
_NONE = object()
class SettingManager:
_sentry = object()
def __init__(self):
self.env = os.getenv('ENV', 'prd')
try:
self._default = __import__('settings.default', fromlist=['*'])
except ModuleNotFoundError:
self._default = object()
try:
self._env = __import__('settings.{}'.format(self.env), fromlist=['*'])
except ModuleNotFoundError:
self._env = object()
self._loaded = []
def load(self, filename, fmt='json'):
filename = os.path.abspath(filename)
if fmt == 'json':
with open(filename) as f:
self._loaded.append((filename, json.load(f)))
def unload(self, filename):
filename = os.path.abspath(filename)
self._loaded = [(f, v) for f, v in self._loaded if f != filename]
def __getattr__(self, item):
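        # Resolution order: OS environment variable first, then the most
        # recently loaded settings file, then the ENV module, then defaults.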
result = SettingManager._sentry
for _, values in self._loaded:
if item in values:
result = values[item]
result = os.getenv(item, result)
if result is SettingManager._sentry:
result = getattr(self._env, item, getattr(self._default, item, SettingManager._sentry))
if result is SettingManager._sentry:
raise AttributeError
return result
def __contains__(self, item):
try:
self.__getattr__(item)
return True
except AttributeError:
return False
def get(self, item, default=_NONE):
try:
return self.__getattr__(item)
except AttributeError:
if default is not _NONE:
return default
raise AttributeError
def __iter__(self):
chained = itertools.chain(getattr(self._default, '__dict__', dict()).keys(),
getattr(self._env, '__dict__', dict()).keys())
for _, values in self._loaded:
chained = itertools.chain(chained, values.keys())
return iter(filter(lambda x: not x.startswith('_'), set(chained)))
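# Replace this module with a SettingManager instance so attribute access on
# `settings` (e.g. settings.SOME_KEY) goes through the resolution logic above.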
sys.modules[__name__] = SettingManager()
|
roblox/partials/partialgroup.py | speer-kinjo/ro.py | 28 | 10269 | """
This file contains partial objects related to Roblox groups.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from ..bases.basegroup import BaseGroup
from ..bases.baseuser import BaseUser
if TYPE_CHECKING:
from ..client import Client
class AssetPartialGroup(BaseGroup):
"""
Represents a partial group in the context of a Roblox asset.
Intended to parse the `data[0]["creator"]` data from https://games.roblox.com/v1/games.
Attributes:
_client: The Client object, which is passed to all objects this Client generates.
id: The group's name.
creator: The group's owner.
name: The group's name.
"""
def __init__(self, client: Client, data: dict):
"""
Arguments:
client: The Client.
data: The data from the endpoint.
"""
self._client: Client = client
self.creator: BaseUser = BaseUser(client=client, user_id=data["Id"])
self.id: int = data["CreatorTargetId"]
self.name: str = data["Name"]
super().__init__(client, self.id)
def __repr__(self):
return f"<{self.__class__.__name__} id={self.id} name={self.name!r}>"
class UniversePartialGroup(BaseGroup):
"""
Represents a partial group in the context of a Roblox universe.
Attributes:
_data: The data we get back from the endpoint.
_client: The client object, which is passed to all objects this client generates.
id: Id of the group
name: Name of the group
"""
def __init__(self, client: Client, data: dict):
"""
Arguments:
            client: The Client.
data: The data from the endpoint.
"""
self._client: Client = client
self.id = data["id"]
self.name: str = data["name"]
super().__init__(client, self.id)
def __repr__(self):
        return f"<{self.__class__.__name__} id={self.id} name={self.name!r}>"
services/UserService.py | erginbalta/FarmChain | 1 | 10270 | <filename>services/UserService.py
import mysql.connector
import socket
from contextlib import closing
import json
import random
packetType= ["INF","TRN","USR"]
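# Packet/id prefixes: "INF" marks user-info packets, "TRN" transaction packets/ids,
# and "USR" prefixes generated user ids.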
database = mysql.connector.connect(
host="localhost",
user="root",
port="3307",
passwd="<PASSWORD>",
database="farmchain"
)
def userIdCreator():
data = []
numericId = 0
id = ""
with open("/datas/userInformation.json",'r') as f:
user = json.load(f)
numericId = len(user) + 1
id = str(packetType[2])+str(numericId)
return id
def transactionIdCreator():
idKey = packetType[1]
numericId = random.randint(10000,99999)
id = idKey+str(numericId)
return id
def getUserConnectionInfo():
hst = socket.gethostname()
usrHost = socket.gethostbyname(hst)
usrPort = findFreePort()
return [usrHost,usrPort]
def findFreePort():
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
def checkOnlineMiners():
mycursor = database.cursor()
sql = "select * from miners where status = 1;"
mycursor.execute(sql)
result = mycursor.fetchall()
return result
def minerInfo():
result = checkOnlineMiners()
info = result[0]
host = result[1]
port = result[2]
return [host,port]
def userInfoPacket(password,name,surname,company,status):
info = getUserConnectionInfo()
userId = userIdCreator()
name = str(name).lower()
surname = str(surname).lower()
company = str(company).lower()
status = str(status).lower()
packet = [packetType[0],[userId,password,name,surname,company,status],info[0],info[1]]
return packet
def transactionPacketCreator(productId,productName,productNumber,fromPlace,toPlace,date):
info = getUserConnectionInfo()
transactionId = transactionIdCreator()
productName = str(productName).lower()
fromPlace = str(fromPlace).lower()
toPlace = str(toPlace).lower()
packet = [packetType[1],[transactionId,productId,productName,productNumber,fromPlace,toPlace,date],info[0],info[1]]
    return packet
|
tests/blackbox/access_settings/test_bb_access_settings.py | csanders-git/waflz | 1 | 10271 | #!/usr/bin/python
'''Test WAF Access settings'''
#TODO: make it so waflz_server only runs once and the tests can then post to it
# ------------------------------------------------------------------------------
# Imports
# ------------------------------------------------------------------------------
import pytest
import subprocess
import os
import sys
import json
from pprint import pprint
import time
import requests
# ------------------------------------------------------------------------------
# Constants
# ------------------------------------------------------------------------------
G_TEST_HOST = 'http://127.0.0.1:12345/'
# ------------------------------------------------------------------------------
# globals
# ------------------------------------------------------------------------------
g_server_pid = -1
# ------------------------------------------------------------------------------
#
# ------------------------------------------------------------------------------
def run_command(command):
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
return (p.returncode, stdout, stderr)
# ------------------------------------------------------------------------------
#setup_func
# ------------------------------------------------------------------------------
@pytest.fixture()
def setup_func():
global g_server_pid
l_cwd = os.getcwd()
l_file_path = os.path.dirname(os.path.abspath(__file__))
l_ruleset_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/ruleset'))
l_geoip2city_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/db/GeoLite2-City.mmdb'));
l_geoip2ISP_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/db/GeoLite2-ASN.mmdb'));
l_profile_path = os.path.realpath(os.path.join(l_file_path, 'test_bb_access_settings.waf.prof.json'))
l_waflz_server_path = os.path.abspath(os.path.join(l_file_path, '../../../build/util/waflz_server/waflz_server'))
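    # Start waflz_server with the test profile, bundled ruleset and GeoIP
    # city/ASN databases; the tests then send requests to it on port 12345.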
l_subproc = subprocess.Popen([l_waflz_server_path,
'-f', l_profile_path,
'-r', l_ruleset_path,
'-g', l_geoip2city_path,
'-s', l_geoip2ISP_path])
time.sleep(1)
g_server_pid = l_subproc.pid
time.sleep(1)
print 'setup g_server_pid: %d'%(g_server_pid)
time.sleep(1)
# ------------------------------------------------------------------------------
#teardown_func
# ------------------------------------------------------------------------------
def teardown_func():
global g_server_pid
time.sleep(.5)
print 'teardown g_server_pid: %d'%(g_server_pid)
if g_server_pid != -1:
l_code, l_out, l_err = run_command('kill -9 %d'%(g_server_pid))
time.sleep(.5)
# ------------------------------------------------------------------------------
# test_bb_modsecurity_ec_access_settings_ignore_args
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_01_block_not_in_ignore_args(setup_func):
#"ignore_query_args": ["ignore", "this", "crap"]
l_uri = G_TEST_HOST + '?' + 'arg1&arg2&arg3&arg4&arg5'
l_headers = {"host": "myhost.com"}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) > 0
print json.dumps(l_r_json,indent=4)
assert l_r_json['rule_intercept_status'] == 403
#assert 'modsecurity_crs_23_request_limits.conf' in l_r_json['sub_event'][0]['rule_file']
# ensure 403 because exceeded max_num_args
assert 'Too many arguments in' in l_r_json['rule_msg']
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_02_bypass_in_ignore_args
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_02_bypass_in_ignore_args():
#Test that passing ignore args lets it bypass
    #Max arg limit is 4, we pass 7
l_uri = G_TEST_HOST + '?' + 'arg1&arg2&arg3&arg4&ignore&this&crap'
l_headers = {"host": "myhost.com"}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_03_block_headers_not_in_ignore_header_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_03_block_headers_not_in_ignore_header_list():
    #"ignore_header": ["(?i)(benign-header)", "super-whatever-header", "^D.*"]
l_uri = G_TEST_HOST
l_headers = {"host": "myhost.com",
"kooky-Header" : "function () { doing this is kinda dumb"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
print l_r_json
#We got an event
assert len(l_r_json) > 0
# detect a bash shellshock
assert 'Bash shellshock attack detected' in l_r_json['sub_event'][0]['rule_msg']
assert 'REQUEST_HEADERS' in l_r_json['sub_event'][0]['matched_var']['name']
assert 'ZnVuY3Rpb24gKCkgeyBkb2luZyB0aGlzIGlzIGtpbmRhIGR1bWI=' in l_r_json['sub_event'][0]['matched_var']['value']
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_04_bypass_headers_in_ignore_header_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_04_bypass_headers_in_ignore_header_list():
#Test ignore headers are ignored
l_uri = G_TEST_HOST
l_headers = {"host": "myhost.com",
"Benign-Header" : "function () { doing this is kinda dumb",
"super-whatever-header" : "function () { doing this is kinda dumb"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# -------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_05_bypass_headers_in_ignore_header_list_regex
# -------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_05_bypass_headers_in_ignore_header_list_regex():
########################################
# Test regex "^D.*"
########################################
l_uri = G_TEST_HOST
#anything that starts with D should be ignored
l_headers = {"host": "myhost.com",
"Doopdoop" : "function () { doing this is kinda dumb",
"Duper-duper-deader" : "function () { doing this is kinda dumb"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_06_block_cookie_not_in_ignore_cookie_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_06_block_cookie_not_in_ignore_cookie_list():
#"ignore_cookie": ["(?i)(sketchy_origin)", "(?i)(yousocrazy)"]
l_uri = G_TEST_HOST
l_headers = {"host": "myhost.com",
"Cookie": "blahblah=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) > 0
# detect a bash shellshock
assert 'Bash shellshock attack detected' in l_r_json['sub_event'][0]['rule_msg']
assert 'REQUEST_HEADERS' in l_r_json['sub_event'][0]['matched_var']['name']
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_07_bypass_cookie_in_ignore_cookie_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_07_bypass_cookie_in_ignore_cookie_list():
#"ignore_cookie": ["(?i)(sketchy_origin)", "(?i)(yousocrazy)"]
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com",
"Cookie" : "SkeTchy_Origin=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
#We get no event
assert len(l_r_json) == 0
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com",
"Cookie" : "SkeTchy_Origin=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_08_ignore_cookie_in_ignore_cookie_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_08_bypass_cookie_in_ignore_cookie_list_regex():
########################################
# Test regex "^[0-9_].*$"
########################################
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com",
"Cookie" : "0_123_ADB__bloop=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_09_block_disallowed_http_method
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_09_block_disallowed_http_method():
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com"
}
l_r = requests.put(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) > 0
assert 'Method is not allowed by policy' in l_r_json['rule_msg']
teardown_func()
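# ------------------------------------------------------------------------------
# Illustrative alternative (not part of the original suite): a module-scoped,
# yield-based fixture would start waflz_server once and always clean it up,
# removing the need for the manual teardown_func() call above. Paths and flags
# simply mirror setup_func; treat them as assumptions, not a verified config.
# ------------------------------------------------------------------------------
@pytest.fixture(scope='module')
def waflz_server_proc():
    l_file_path = os.path.dirname(os.path.abspath(__file__))
    l_ruleset_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/ruleset'))
    l_geoip2city_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/db/GeoLite2-City.mmdb'))
    l_geoip2ISP_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/db/GeoLite2-ASN.mmdb'))
    l_profile_path = os.path.realpath(os.path.join(l_file_path, 'test_bb_access_settings.waf.prof.json'))
    l_server_path = os.path.abspath(os.path.join(l_file_path, '../../../build/util/waflz_server/waflz_server'))
    l_proc = subprocess.Popen([l_server_path,
                               '-f', l_profile_path,
                               '-r', l_ruleset_path,
                               '-g', l_geoip2city_path,
                               '-s', l_geoip2ISP_path])
    time.sleep(1)
    yield l_proc
    l_proc.kill()
    l_proc.wait()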
| #!/usr/bin/python
'''Test WAF Access settings'''
#TODO: make so waflz_server only runs once and then can post to it
# ------------------------------------------------------------------------------
# Imports
# ------------------------------------------------------------------------------
import pytest
import subprocess
import os
import sys
import json
from pprint import pprint
import time
import requests
# ------------------------------------------------------------------------------
# Constants
# ------------------------------------------------------------------------------
G_TEST_HOST = 'http://127.0.0.1:12345/'
# ------------------------------------------------------------------------------
# globals
# ------------------------------------------------------------------------------
g_server_pid = -1
# ------------------------------------------------------------------------------
#
# ------------------------------------------------------------------------------
def run_command(command):
p = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
return (p.returncode, stdout, stderr)
# ------------------------------------------------------------------------------
#setup_func
# ------------------------------------------------------------------------------
@pytest.fixture()
def setup_func():
global g_server_pid
l_cwd = os.getcwd()
l_file_path = os.path.dirname(os.path.abspath(__file__))
l_ruleset_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/ruleset'))
l_geoip2city_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/db/GeoLite2-City.mmdb'));
l_geoip2ISP_path = os.path.realpath(os.path.join(l_file_path, '../../data/waf/db/GeoLite2-ASN.mmdb'));
l_profile_path = os.path.realpath(os.path.join(l_file_path, 'test_bb_access_settings.waf.prof.json'))
l_waflz_server_path = os.path.abspath(os.path.join(l_file_path, '../../../build/util/waflz_server/waflz_server'))
l_subproc = subprocess.Popen([l_waflz_server_path,
'-f', l_profile_path,
'-r', l_ruleset_path,
'-g', l_geoip2city_path,
'-s', l_geoip2ISP_path])
time.sleep(1)
g_server_pid = l_subproc.pid
time.sleep(1)
print 'setup g_server_pid: %d'%(g_server_pid)
time.sleep(1)
# ------------------------------------------------------------------------------
#teardown_func
# ------------------------------------------------------------------------------
def teardown_func():
global g_server_pid
time.sleep(.5)
print 'teardown g_server_pid: %d'%(g_server_pid)
if g_server_pid != -1:
l_code, l_out, l_err = run_command('kill -9 %d'%(g_server_pid))
time.sleep(.5)
# ------------------------------------------------------------------------------
# test_bb_modsecurity_ec_access_settings_ignore_args
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_01_block_not_in_ignore_args(setup_func):
#"ignore_query_args": ["ignore", "this", "crap"]
l_uri = G_TEST_HOST + '?' + 'arg1&arg2&arg3&arg4&arg5'
l_headers = {"host": "myhost.com"}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) > 0
print json.dumps(l_r_json,indent=4)
assert l_r_json['rule_intercept_status'] == 403
#assert 'modsecurity_crs_23_request_limits.conf' in l_r_json['sub_event'][0]['rule_file']
# ensure 403 because exceeded max_num_args
assert 'Too many arguments in' in l_r_json['rule_msg']
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_02_bypass_in_ignore_args
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_02_bypass_in_ignore_args():
#Test that passing ignore args lets it bypass
    #Max arg limit is 4, we pass 7
l_uri = G_TEST_HOST + '?' + 'arg1&arg2&arg3&arg4&ignore&this&crap'
l_headers = {"host": "myhost.com"}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_03_block_headers_not_in_ignore_header_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_03_block_headers_not_in_ignore_header_list():
    #"ignore_header": ["(?i)(benign-header)", "super-whatever-header", "^D.*"]
l_uri = G_TEST_HOST
l_headers = {"host": "myhost.com",
"kooky-Header" : "function () { doing this is kinda dumb"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
print l_r_json
#We got an event
assert len(l_r_json) > 0
# detect a bash shellshock
assert 'Bash shellshock attack detected' in l_r_json['sub_event'][0]['rule_msg']
assert 'REQUEST_HEADERS' in l_r_json['sub_event'][0]['matched_var']['name']
assert 'ZnVuY3Rpb24gKCkgeyBkb2luZyB0aGlzIGlzIGtpbmRhIGR1bWI=' in l_r_json['sub_event'][0]['matched_var']['value']
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_04_bypass_headers_in_ignore_header_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_04_bypass_headers_in_ignore_header_list():
#Test ignore headers are ignored
l_uri = G_TEST_HOST
l_headers = {"host": "myhost.com",
"Benign-Header" : "function () { doing this is kinda dumb",
"super-whatever-header" : "function () { doing this is kinda dumb"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# -------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_05_bypass_headers_in_ignore_header_list_regex
# -------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_05_bypass_headers_in_ignore_header_list_regex():
########################################
# Test regex "^D.*"
########################################
l_uri = G_TEST_HOST
#anything that starts with D should be ignored
l_headers = {"host": "myhost.com",
"Doopdoop" : "function () { doing this is kinda dumb",
"Duper-duper-deader" : "function () { doing this is kinda dumb"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_06_block_cookie_not_in_ignore_cookie_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_06_block_cookie_not_in_ignore_cookie_list():
#"ignore_cookie": ["(?i)(sketchy_origin)", "(?i)(yousocrazy)"]
l_uri = G_TEST_HOST
l_headers = {"host": "myhost.com",
"Cookie": "blahblah=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) > 0
# detect a bash shellshock
assert 'Bash shellshock attack detected' in l_r_json['sub_event'][0]['rule_msg']
assert 'REQUEST_HEADERS' in l_r_json['sub_event'][0]['matched_var']['name']
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_07_bypass_cookie_in_ignore_cookie_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_07_bypass_cookie_in_ignore_cookie_list():
#"ignore_cookie": ["(?i)(sketchy_origin)", "(?i)(yousocrazy)"]
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com",
"Cookie" : "SkeTchy_Origin=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
#We get no event
assert len(l_r_json) == 0
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com",
"Cookie" : "SkeTchy_Origin=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_08_ignore_cookie_in_ignore_cookie_list
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_08_bypass_cookie_in_ignore_cookie_list_regex():
########################################
# Test regex "^[0-9_].*$"
########################################
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com",
"Cookie" : "0_123_ADB__bloop=function () { asdf asdf asdf"
}
l_r = requests.get(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) == 0
# ------------------------------------------------------------------------------
# test_bb_modsec_ec_access_settings_09_block_disallowed_http_method
# ------------------------------------------------------------------------------
def test_bb_modsec_ec_access_settings_09_block_disallowed_http_method():
l_uri = G_TEST_HOST
l_headers = {"host" : "myhost.com"
}
l_r = requests.put(l_uri, headers=l_headers)
assert l_r.status_code == 200
l_r_json = l_r.json()
assert len(l_r_json) > 0
assert 'Method is not allowed by policy' in l_r_json['rule_msg']
teardown_func()
| en | 0.232524 | #!/usr/bin/python Test WAF Access settings #TODO: make so waflz_server only runs once and then can post to it # ------------------------------------------------------------------------------ # Imports # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # Constants # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # globals # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ #setup_func # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ #teardown_func # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # test_bb_modsecurity_ec_access_settings_ignore_args # ------------------------------------------------------------------------------ #"ignore_query_args": ["ignore", "this", "crap"] #assert 'modsecurity_crs_23_request_limits.conf' in l_r_json['sub_event'][0]['rule_file'] # ensure 403 because exceeded max_num_args # ------------------------------------------------------------------------------ # test_bb_modsec_ec_access_settings_02_bypass_in_ignore_args # ------------------------------------------------------------------------------ #Test that passing ignore args lets it bypass #Max arg limit it 4, we pass 7 # ------------------------------------------------------------------------------ # test_bb_modsec_ec_access_settings_03_block_headers_not_in_ignore_header_list # ------------------------------------------------------------------------------ #ignore_header": ["(?i)(benign-header)", "super-whatever-header", "^D.*"] #We got an event # detect a bash shellshock # ------------------------------------------------------------------------------ # test_bb_modsec_ec_access_settings_04_bypass_headers_in_ignore_header_list # ------------------------------------------------------------------------------ #Test ignore headers are ignored # ------------------------------------------------------------------------------- # test_bb_modsec_ec_access_settings_05_bypass_headers_in_ignore_header_list_regex # ------------------------------------------------------------------------------- ######################################## # Test regex "^D.*" ######################################## #anything that starts with D should be ignored # ------------------------------------------------------------------------------ # test_bb_modsec_ec_access_settings_06_block_cookie_not_in_ignore_cookie_list # ------------------------------------------------------------------------------ #"ignore_cookie": ["(?i)(sketchy_origin)", "(?i)(yousocrazy)"] # detect a bash shellshock # ------------------------------------------------------------------------------ # test_bb_modsec_ec_access_settings_07_bypass_cookie_in_ignore_cookie_list # ------------------------------------------------------------------------------ #"ignore_cookie": ["(?i)(sketchy_origin)", "(?i)(yousocrazy)"] #We get no event # 
------------------------------------------------------------------------------ # test_bb_modsec_ec_access_settings_08_ignore_cookie_in_ignore_cookie_list # ------------------------------------------------------------------------------ ######################################## # Test regex "^[0-9_].*$" ######################################## # ------------------------------------------------------------------------------ # test_bb_modsec_ec_access_settings_09_block_disallowed_http_method # ------------------------------------------------------------------------------ | 1.7865 | 2 |
cosmosis/runtime/analytics.py | ktanidis2/Modified_CosmoSIS_for_galaxy_number_count_angular_power_spectra | 1 | 10272 | <gh_stars>1-10
#coding: utf-8
from __future__ import print_function
from builtins import zip
from builtins import object
from cosmosis import output as output_module
import numpy as np
import sys
import os
class Analytics(object):
def __init__(self, params, pool=None):
self.params = params
self.pool = pool
self.total_steps = 0
nparam = len(params)
self.means = np.zeros(nparam)
self.m2 = np.zeros(nparam)
self.cov_times_n = np.zeros((nparam,nparam))
def add_traces(self, traces):
if traces.shape[1] != len(self.params):
raise RuntimeError("The number of traces added to Analytics "
"does not match the number of varied "
"parameters!")
num = float(self.total_steps)
for x in traces:
num += 1.0
delta = x - self.means
old_means = self.means.copy()
self.means += delta/num
self.m2 += delta*(x - self.means)
self.cov_times_n += np.outer(x-self.means, x-old_means)
self.total_steps += traces.shape[0]
def trace_means(self):
if self.pool:
return np.array(self.pool.gather(self.means)).T
else:
return self.means
def trace_variances(self):
if self.total_steps > 1:
local_variance = self.m2 / float(self.total_steps-1)
if self.pool:
return np.array(self.pool.gather(local_variance)).T
else:
return local_variance
return None
def gelman_rubin(self, quiet=True):
        # takes the accumulated trace statistics and returns the per-parameter Gelman-Rubin R-hat
if self.pool is None or not self.pool.size > 1:
raise RuntimeError("Gelman-Rubin statistic is only "
"valid for multiple chains.")
if self.total_steps == 0:
raise RuntimeError("Gelman-Rubin statistic not "
"defined for 0-length chains.")
# gather trace statistics to master process
means = self.trace_means()
variances = self.trace_variances()
if self.pool.is_master():
B_over_n = np.var(means, ddof=1, axis=1)
B = B_over_n * self.total_steps
W = np.mean(variances, axis=1)
V = ((1. - 1./self.total_steps) * W +
(1. + 1./self.pool.size) * B_over_n)
# TODO: check for 0-values in W
Rhat = np.sqrt(V/W)
else:
Rhat = None
Rhat = self.pool.bcast(Rhat)
if not quiet and self.pool.is_master():
print()
print("Gelman-Rubin:")
for (p,R) in zip(self.params, Rhat):
print(" ", p, " ", R)
print("Worst = ", Rhat.max())
print()
return Rhat
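# --- Illustrative usage sketch (not part of the original module) ---
# Exercises the streaming mean/variance bookkeeping on random traces without
# an MPI pool; gelman_rubin() is skipped because it needs multiple chains.
if __name__ == '__main__':
    demo_params = ['omega_m', 'sigma_8']   # hypothetical parameter names
    analytics = Analytics(demo_params, pool=None)
    analytics.add_traces(np.random.normal(size=(100, len(demo_params))))
    analytics.add_traces(np.random.normal(size=(50, len(demo_params))))
    print('means:', analytics.trace_means())
    print('variances:', analytics.trace_variances())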
| #coding: utf-8
from __future__ import print_function
from builtins import zip
from builtins import object
from cosmosis import output as output_module
import numpy as np
import sys
import os
class Analytics(object):
def __init__(self, params, pool=None):
self.params = params
self.pool = pool
self.total_steps = 0
nparam = len(params)
self.means = np.zeros(nparam)
self.m2 = np.zeros(nparam)
self.cov_times_n = np.zeros((nparam,nparam))
def add_traces(self, traces):
if traces.shape[1] != len(self.params):
raise RuntimeError("The number of traces added to Analytics "
"does not match the number of varied "
"parameters!")
num = float(self.total_steps)
for x in traces:
num += 1.0
delta = x - self.means
old_means = self.means.copy()
self.means += delta/num
self.m2 += delta*(x - self.means)
self.cov_times_n += np.outer(x-self.means, x-old_means)
self.total_steps += traces.shape[0]
def trace_means(self):
if self.pool:
return np.array(self.pool.gather(self.means)).T
else:
return self.means
def trace_variances(self):
if self.total_steps > 1:
local_variance = self.m2 / float(self.total_steps-1)
if self.pool:
return np.array(self.pool.gather(local_variance)).T
else:
return local_variance
return None
def gelman_rubin(self, quiet=True):
        # takes the accumulated trace statistics and returns the per-parameter Gelman-Rubin R-hat
if self.pool is None or not self.pool.size > 1:
raise RuntimeError("Gelman-Rubin statistic is only "
"valid for multiple chains.")
if self.total_steps == 0:
raise RuntimeError("Gelman-Rubin statistic not "
"defined for 0-length chains.")
# gather trace statistics to master process
means = self.trace_means()
variances = self.trace_variances()
if self.pool.is_master():
B_over_n = np.var(means, ddof=1, axis=1)
B = B_over_n * self.total_steps
W = np.mean(variances, axis=1)
V = ((1. - 1./self.total_steps) * W +
(1. + 1./self.pool.size) * B_over_n)
# TODO: check for 0-values in W
Rhat = np.sqrt(V/W)
else:
Rhat = None
Rhat = self.pool.bcast(Rhat)
if not quiet and self.pool.is_master():
print()
print("Gelman-Rubin:")
for (p,R) in zip(self.params, Rhat):
print(" ", p, " ", R)
print("Worst = ", Rhat.max())
print()
return Rhat | en | 0.748833 | #coding: utf-8 # takes current traces and returns # gather trace statistics to master process # TODO: check for 0-values in W | 2.507523 | 3 |
sdcc2elf.py | Vector35/llil_transpiler | 14 | 10273 | <filename>sdcc2elf.py
#!/usr/bin/env python
#
# convert SDCC .rel files to 32-bit ELF relocatable
#
# resulting file is simple:
#
# ------------------------
# ELF header
# ------------------------
# .text section
# .shstrtab section
# .strtab section
# .symtab section
# ------------------------
# NULL elf32_shdr
# .text elf32_shdr
# .shstrtab elf32_shdr
# .symtab elf32_shdr
# .strtab elf32_shdr
# ------------------------
import os
import re
import sys
from struct import pack
#------------------------------------------------------------------------------
# ELF helpers
#------------------------------------------------------------------------------
(PF_X, PF_W, PF_R) = (1,2,4)
(SHT_NULL, SHT_PROGBITS, SHT_STRTAB) = (0,1,3)
sz_ehdr = 0x34
sz_shdr = 0x28
def align(fp, to=4, pad=b'\x00'):
while fp.tell() % to:
fp.write(pad)
#------------------------------------------------------------------------------
# read .map file for symbols
#------------------------------------------------------------------------------
fpath_map = sys.argv[2]
assert fpath_map.endswith('.map')
with open(fpath_map) as fp:
lines = fp.readlines()
(_CODE_ADDR, _CODE_SZ) = (None, None)
(i_code, i_header) = (None, None)
for (i, line) in enumerate(lines):
if line.startswith('_CODE'):
m = re.match(r'^_CODE\s+([A-F0-9]{8})\s+([A-F0-9]{8})', line)
(addr, size) = map(lambda x: int(x, 16), m.group(1,2))
if not i_code:
i_code = i
_CODE_ADDR = addr
_CODE_SZ = size
else:
if addr != _CODE_ADDR:
raise Exception('conflicting code segment addresses')
if size != _CODE_SZ:
raise Exception('conflicting code segment sizes')
if line.startswith('_HEADER0'):
i_header = i
break
assert i_code and i_header and i_code < i_header
syms = []
for line in lines[i_code:i_header]:
m = re.search(r'([A-F0-9]{8})\s+(_\w+)', line)
if m:
(addr, symname) = m.group(1, 2)
print('found %s: %s' % (addr, symname))
syms.append((symname, int(addr, 16)));
assert syms
print('_CODE [%08X, %08X)' % (_CODE_ADDR, _CODE_ADDR+_CODE_SZ))
print('_CODE symbols from')
for (name, addr) in syms:
print('%08X: %s' % (addr, name))
#------------------------------------------------------------------------------
# read .ihx file
#------------------------------------------------------------------------------
fpath_ihx = sys.argv[1]
assert fpath_ihx.endswith('.ihx')
code_area = [b'\x00'] * (_CODE_ADDR + _CODE_SZ)
with open(fpath_ihx) as fp:
for line in fp.readlines():
m = re.match(r'^:(..)(....)00(.*)(..)', line)
if m:
(count, addr, data, csum) = m.group(1,2,3,4)
count = int(count,16)
assert count == len(data)/2
addr = int(addr,16)
if not (addr >= _CODE_ADDR and addr < (_CODE_ADDR + _CODE_SZ)):
continue
print('%08X: ' % addr, end='')
for i in range(count):
byte_str = data[2*i]+data[2*i+1]
print('%s ' % byte_str, end='')
code_area[addr + i] = pack('B', int(byte_str, 16))
print('')
continue
m = re.match(r'^:00000001FF', line)
if m:
break
raise Exception('got unexpected IHX line: %s' % line)
assert code_area
#print(code_area)
#------------------------------------------------------------------------------
# write ELF
#------------------------------------------------------------------------------
# process symbols, build string table
syms = sorted(syms, key=lambda name_addr: name_addr[1])
func2size = {}
func2stroffs = {}
strtab = b'\x00'
for i in range(len(syms)):
(name, addr) = syms[i]
if i == len(syms)-1:
func2size[name] = len(code_area) - addr
else:
func2size[name] = syms[i+1][1] - addr
func2stroffs[name] = len(strtab)
strtab = strtab + name.encode('utf-8') + b'\x00'
print('%04X: %s size %X' % (addr, name, func2size[name]))
fp = open('tests.elf', 'wb')
# elf32_hdr (placeholder, we'll come back to fill in offsets)
print('elf32_hdr @ %X' % fp.tell())
fp.write(b'\x00' * sz_ehdr)
# .text section contents
o_text = fp.tell()
print('placing .text @ %X' % o_text)
for byte in code_area:
fp.write(byte)
sz_text = fp.tell() - o_text
# .shstrtab section contents
scn_shstrtab = b'\x00.text\x00.shstrtab\x00.symtab\x00.strtab\x00'
align(fp)
o_shstrtab = fp.tell()
print('placing .shstrtab @ %X' % o_shstrtab)
fp.write(scn_shstrtab)
sz_shstrtab = fp.tell() - o_shstrtab
# .symtab section contents
align(fp)
o_symtab = fp.tell()
print('placing .symtab @ %X' % o_symtab)
for (name, addr) in syms:
st_name = func2stroffs[name]
st_value = addr
st_size = func2size[name]
st_info = 0x12 # bind:1(GLOBAL) type:2(FUNC)
st_other = 0
st_shndx = 0x1 # section header index: 0'th: NULL 1'th: .text
Elf32_Sym = pack('<IIIBBH', st_name, st_value, st_size, st_info, st_other, st_shndx)
fp.write(Elf32_Sym)
sz_symtab = fp.tell() - o_symtab
# .strtab section contents
align(fp)
o_strtab = fp.tell()
print('placing .strtab @ %X' % o_strtab)
fp.write(strtab)
sz_strtab = fp.tell() - o_strtab
# null section header (index 0)
align(fp)
o_shdr_null = fp.tell()
print('placing shdr NULL @ %X' % o_shdr_null)
fp.write(b'\x00' * sz_shdr)
# .text section header (index 1)
o_shdr_text = fp.tell()
print('placing shdr .text @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.text')
sh_type = 1 # SHT_PROGBITS
sh_flags = 6 # ALLOC|EXECINSTR
sh_addr = 0
sh_offset = o_text
sh_size = sz_text
sh_link = 0
sh_info = 0
sh_addralign = 4
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# .shstrtab section header (index 2)
o_shdr_shstrtab = fp.tell()
print('placing shdr .shstrtab @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.shstrtab')
sh_type = 3 #SHT_STRTAB
sh_flags = 0
sh_addr = 0
sh_offset = o_shstrtab
sh_size = sz_shstrtab
sh_link = 0
sh_info = 0
sh_addralign = 1
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# .symtab section header (index 3)
o_shdr_symtab = fp.tell()
print('placing shdr .symtab @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.symtab')
sh_type = 2 #SHT_SYMTAB
sh_flags = 0
sh_addr = 0
sh_offset = o_symtab
sh_size = sz_symtab
sh_link = 4 # link to scn #4 (find strings in .strtab)
sh_info = 0
sh_addralign = 4
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# .strtab section header (index 4)
o_shdr_strtab = fp.tell()
print('placing shdr .strtab @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.strtab')
sh_type = 3 #SHT_STRTAB
sh_flags = 0
sh_addr = 0
sh_offset = o_strtab
sh_size = sz_strtab
sh_link = 0
sh_info = 0
sh_addralign = 1
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# seek back, write real elf header
hdr = b'\x7FELF'
hdr += b'\x01' # e_ident[EI_CLASS] 32-bit
hdr += b'\x01' # e_ident[EI_DATA] LSB (little-end)
hdr += b'\x01\x00\x00' # version, osabi, abiversion
hdr += b'\x00'*7
assert len(hdr) == 16
hdr += pack('<H', 1) # e_type = ET_REL
hdr += pack('<H', 220) # e_machine = EM_Z80
hdr += pack('<I', 1) # e_version = EV_CURRENT
hdr += pack('<I', 0) # e_entry
hdr += pack('<I', 0) # e_phoff
hdr += pack('<I', o_shdr_null) # e_shoff
hdr += pack('<I', 0) # e_flags
hdr += pack('<H', sz_ehdr) # e_ehsize
hdr += pack('<H', 0) # e_phentsize
hdr += pack('<H', 0) # e_phnum
hdr += pack('<H', sz_shdr) # e_shentsize
hdr += pack('<H', 5) # e_shnum
hdr += pack('<H', 2) # e_shstrndx = index of .shstrtab
assert len(hdr) == sz_ehdr
fp.seek(0, os.SEEK_SET)
fp.write(hdr)
# done!
fp.close()
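# ------------------------------------------------------------------------------
# optional sanity check (illustrative; not required by the converter):
# re-open the ELF just written and confirm the magic, e_shoff and e_shnum
# match the values packed above
# ------------------------------------------------------------------------------
from struct import unpack
with open('tests.elf', 'rb') as chk:
    blob = chk.read()
assert blob[0:4] == b'\x7fELF'
(e_shoff,) = unpack('<I', blob[0x20:0x24])   # e_shoff sits at offset 0x20 in an ELF32 header
(e_shnum,) = unpack('<H', blob[0x30:0x32])   # e_shnum sits at offset 0x30
assert e_shoff == o_shdr_null and e_shnum == 5
print('sanity check passed: %d section headers @ %X' % (e_shnum, e_shoff))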
| <filename>sdcc2elf.py
#!/usr/bin/env python
#
# convert SDCC .rel files to 32-bit ELF relocatable
#
# resulting file is simple:
#
# ------------------------
# ELF header
# ------------------------
# .text section
# .shstrtab section
# .strtab section
# .symtab section
# ------------------------
# NULL elf32_shdr
# .text elf32_shdr
# .shstrtab elf32_shdr
# .symtab elf32_shdr
# .strtab elf32_shdr
# ------------------------
import os
import re
import sys
from struct import pack
#------------------------------------------------------------------------------
# ELF helpers
#------------------------------------------------------------------------------
(PF_X, PF_W, PF_R) = (1,2,4)
(SHT_NULL, SHT_PROGBITS, SHT_STRTAB) = (0,1,3)
sz_ehdr = 0x34
sz_shdr = 0x28
def align(fp, to=4, pad=b'\x00'):
while fp.tell() % to:
fp.write(pad)
#------------------------------------------------------------------------------
# read .map file for symbols
#------------------------------------------------------------------------------
fpath_map = sys.argv[2]
assert fpath_map.endswith('.map')
with open(fpath_map) as fp:
lines = fp.readlines()
(_CODE_ADDR, _CODE_SZ) = (None, None)
(i_code, i_header) = (None, None)
for (i, line) in enumerate(lines):
if line.startswith('_CODE'):
m = re.match(r'^_CODE\s+([A-F0-9]{8})\s+([A-F0-9]{8})', line)
(addr, size) = map(lambda x: int(x, 16), m.group(1,2))
if not i_code:
i_code = i
_CODE_ADDR = addr
_CODE_SZ = size
else:
if addr != _CODE_ADDR:
raise Exception('conflicting code segment addresses')
if size != _CODE_SZ:
raise Exception('conflicting code segment sizes')
if line.startswith('_HEADER0'):
i_header = i
break
assert i_code and i_header and i_code < i_header
syms = []
for line in lines[i_code:i_header]:
m = re.search(r'([A-F0-9]{8})\s+(_\w+)', line)
if m:
(addr, symname) = m.group(1, 2)
print('found %s: %s' % (addr, symname))
syms.append((symname, int(addr, 16)));
assert syms
print('_CODE [%08X, %08X)' % (_CODE_ADDR, _CODE_ADDR+_CODE_SZ))
print('_CODE symbols from')
for (name, addr) in syms:
print('%08X: %s' % (addr, name))
#------------------------------------------------------------------------------
# read .ihx file
#------------------------------------------------------------------------------
fpath_ihx = sys.argv[1]
assert fpath_ihx.endswith('.ihx')
code_area = [b'\x00'] * (_CODE_ADDR + _CODE_SZ)
with open(fpath_ihx) as fp:
for line in fp.readlines():
m = re.match(r'^:(..)(....)00(.*)(..)', line)
if m:
(count, addr, data, csum) = m.group(1,2,3,4)
count = int(count,16)
assert count == len(data)/2
addr = int(addr,16)
if not (addr >= _CODE_ADDR and addr < (_CODE_ADDR + _CODE_SZ)):
continue
print('%08X: ' % addr, end='')
for i in range(count):
byte_str = data[2*i]+data[2*i+1]
print('%s ' % byte_str, end='')
code_area[addr + i] = pack('B', int(byte_str, 16))
print('')
continue
m = re.match(r'^:00000001FF', line)
if m:
break
raise Exception('got unexpected IHX line: %s' % line)
assert code_area
#print(code_area)
#------------------------------------------------------------------------------
# write ELF
#------------------------------------------------------------------------------
# process symbols, build string table
syms = sorted(syms, key=lambda name_addr: name_addr[1])
func2size = {}
func2stroffs = {}
strtab = b'\x00'
for i in range(len(syms)):
(name, addr) = syms[i]
if i == len(syms)-1:
func2size[name] = len(code_area) - addr
else:
func2size[name] = syms[i+1][1] - addr
func2stroffs[name] = len(strtab)
strtab = strtab + name.encode('utf-8') + b'\x00'
print('%04X: %s size %X' % (addr, name, func2size[name]))
fp = open('tests.elf', 'wb')
# elf32_hdr (placeholder, we'll come back to fill in offsets)
print('elf32_hdr @ %X' % fp.tell())
fp.write(b'\x00' * sz_ehdr)
# .text section contents
o_text = fp.tell()
print('placing .text @ %X' % o_text)
for byte in code_area:
fp.write(byte)
sz_text = fp.tell() - o_text
# .shstrtab section contents
scn_shstrtab = b'\x00.text\x00.shstrtab\x00.symtab\x00.strtab\x00'
align(fp)
o_shstrtab = fp.tell()
print('placing .shstrtab @ %X' % o_shstrtab)
fp.write(scn_shstrtab)
sz_shstrtab = fp.tell() - o_shstrtab
# .symtab section contents
align(fp)
o_symtab = fp.tell()
print('placing .symtab @ %X' % o_symtab)
for (name, addr) in syms:
st_name = func2stroffs[name]
st_value = addr
st_size = func2size[name]
st_info = 0x12 # bind:1(GLOBAL) type:2(FUNC)
st_other = 0
st_shndx = 0x1 # section header index: 0'th: NULL 1'th: .text
Elf32_Sym = pack('<IIIBBH', st_name, st_value, st_size, st_info, st_other, st_shndx)
fp.write(Elf32_Sym)
sz_symtab = fp.tell() - o_symtab
# .strtab section contents
align(fp)
o_strtab = fp.tell()
print('placing .strtab @ %X' % o_strtab)
fp.write(strtab)
sz_strtab = fp.tell() - o_strtab
# null section header (index 0)
align(fp)
o_shdr_null = fp.tell()
print('placing shdr NULL @ %X' % o_shdr_null)
fp.write(b'\x00' * sz_shdr)
# .text section header (index 1)
o_shdr_text = fp.tell()
print('placing shdr .text @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.text')
sh_type = 1 # SHT_PROGBITS
sh_flags = 6 # ALLOC|EXECINSTR
sh_addr = 0
sh_offset = o_text
sh_size = sz_text
sh_link = 0
sh_info = 0
sh_addralign = 4
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# .shstrtab section header (index 2)
o_shdr_shstrtab = fp.tell()
print('placing shdr .shstrtab @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.shstrtab')
sh_type = 3 #SHT_STRTAB
sh_flags = 0
sh_addr = 0
sh_offset = o_shstrtab
sh_size = sz_shstrtab
sh_link = 0
sh_info = 0
sh_addralign = 1
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# .symtab section header (index 3)
o_shdr_symtab = fp.tell()
print('placing shdr .symtab @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.symtab')
sh_type = 2 #SHT_SYMTAB
sh_flags = 0
sh_addr = 0
sh_offset = o_symtab
sh_size = sz_symtab
sh_link = 4 # link to scn #4 (find strings in .strtab)
sh_info = 0
sh_addralign = 4
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# .strtab section header (index 4)
o_shdr_strtab = fp.tell()
print('placing shdr .strtab @ %X' % fp.tell())
sh_name = scn_shstrtab.index(b'.strtab')
sh_type = 3 #SHT_STRTAB
sh_flags = 0
sh_addr = 0
sh_offset = o_strtab
sh_size = sz_strtab
sh_link = 0
sh_info = 0
sh_addralign = 1
sh_entsize = 0
tmp = pack('<IIIIIIIIII', \
sh_name, sh_type, sh_flags, sh_addr, sh_offset, sh_size, sh_link, sh_info, \
sh_addralign, sh_entsize)
fp.write(tmp)
# seek back, write real elf header
hdr = b'\x7FELF'
hdr += b'\x01' # e_ident[EI_CLASS] 32-bit
hdr += b'\x01' # e_ident[EI_DATA] LSB (little-end)
hdr += b'\x01\x00\x00' # version, osabi, abiversion
hdr += b'\x00'*7
assert len(hdr) == 16
hdr += pack('<H', 1) # e_type = ET_REL
hdr += pack('<H', 220) # e_machine = EM_Z80
hdr += pack('<I', 1) # e_version = EV_CURRENT
hdr += pack('<I', 0) # e_entry
hdr += pack('<I', 0) # e_phoff
hdr += pack('<I', o_shdr_null) # e_shoff
hdr += pack('<I', 0) # e_flags
hdr += pack('<H', sz_ehdr) # e_ehsize
hdr += pack('<H', 0) # e_phentsize
hdr += pack('<H', 0) # e_phnum
hdr += pack('<H', sz_shdr) # e_shentsize
hdr += pack('<H', 5) # e_shnum
hdr += pack('<H', 2) # e_shstrndx = index of .shstrtab
assert len(hdr) == sz_ehdr
fp.seek(0, os.SEEK_SET)
fp.write(hdr)
# done!
fp.close()
| en | 0.244853 | #!/usr/bin/env python # # convert SDCC .rel files to 32-bit ELF relocatable # # resulting file is simple: # # ------------------------ # ELF header # ------------------------ # .text section # .shstrtab section # .strtab section # .symtab section # ------------------------ # NULL elf32_shdr # .text elf32_shdr # .shstrtab elf32_shdr # .symtab elf32_shdr # .strtab elf32_shdr # ------------------------ #------------------------------------------------------------------------------ # ELF helpers #------------------------------------------------------------------------------ #------------------------------------------------------------------------------ # read .map file for symbols #------------------------------------------------------------------------------ #------------------------------------------------------------------------------ # read .ihx file #------------------------------------------------------------------------------ #print(code_area) #------------------------------------------------------------------------------ # write ELF #------------------------------------------------------------------------------ # process symbols, build string table # elf32_hdr (placeholder, we'll come back to fill in offsets) # .text section contents # .shstrtab section contents # .symtab section contents # bind:1(GLOBAL) type:2(FUNC) # section header index: 0'th: NULL 1'th: .text # .strtab section contents # null section header (index 0) # .text section header (index 1) # SHT_PROGBITS # ALLOC|EXECINSTR # .shstrtab section header (index 2) #SHT_STRTAB # .symtab section header (index 3) #SHT_SYMTAB # link to scn #4 (find strings in .strtab) # .strtab section header (index 4) #SHT_STRTAB # seek back, write real elf header # e_ident[EI_CLASS] 32-bit # e_ident[EI_DATA] LSB (little-end) # version, osabi, abiversion # e_type = ET_REL # e_machine = EM_Z80 # e_version = EV_CURRENT # e_entry # e_phoff # e_shoff # e_flags # e_ehsize # e_phentsize # e_phnum # e_shentsize # e_shnum # e_shstrndx = index of .shstrtab # done! | 2.546184 | 3 |
eval.py | nikinsta/deep-siamese-text-similarity-on-python-3 | 0 | 10274 | #! /usr/bin/env python
import tensorflow as tf
import numpy as np
import os
import time
import datetime
from tensorflow.contrib import learn
from input_helpers import InputHelper
# Parameters
# ==================================================
# Eval Parameters
tf.flags.DEFINE_integer("batch_size", 64, "Batch Size (default: 64)")
tf.flags.DEFINE_string("checkpoint_dir", "", "Checkpoint directory from training run")
tf.flags.DEFINE_string("eval_filepath", "match_valid.tsv", "Evaluate on this data (Default: None)")
tf.flags.DEFINE_string("vocab_filepath", "runs/1479874609/checkpoints/vocab", "Load training time vocabulary (Default: None)")
tf.flags.DEFINE_string("model", "runs/1479874609/checkpoints/model-32000", "Load trained model checkpoint (Default: None)")
# Misc Parameters
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
FLAGS = tf.flags.FLAGS
FLAGS._parse_flags()
print("\nParameters:")
for attr, value in sorted(FLAGS.__flags.items()):
print("{}={}".format(attr.upper(), value))
print("")
if FLAGS.eval_filepath==None or FLAGS.vocab_filepath==None or FLAGS.model==None :
print("Eval or Vocab filepaths are empty.")
exit()
# load data and map id-transform based on training time vocabulary
inpH = InputHelper()
x1_test,x2_test,y_test = inpH.getTestDataSet(FLAGS.eval_filepath, FLAGS.vocab_filepath, 30)
print("\nEvaluating...\n")
# Evaluation
# ==================================================
checkpoint_file = FLAGS.model
print(checkpoint_file)
graph = tf.Graph()
with graph.as_default():
session_conf = tf.ConfigProto(
allow_soft_placement=FLAGS.allow_soft_placement,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
# Load the saved meta graph and restore variables
saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
sess.run(tf.initialize_all_variables())
saver.restore(sess, checkpoint_file)
# Get the placeholders from the graph by name
input_x1 = graph.get_operation_by_name("input_x1").outputs[0]
input_x2 = graph.get_operation_by_name("input_x2").outputs[0]
input_y = graph.get_operation_by_name("input_y").outputs[0]
dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]
# Tensors we want to evaluate
predictions = graph.get_operation_by_name("output/distance").outputs[0]
accuracy = graph.get_operation_by_name("accuracy/accuracy").outputs[0]
sim = graph.get_operation_by_name("accuracy/temp_sim").outputs[0]
#emb = graph.get_operation_by_name("embedding/W").outputs[0]
#embedded_chars = tf.nn.embedding_lookup(emb,input_x)
# Generate batches for one epoch
batches = inpH.batch_iter(list(zip(x1_test,x2_test,y_test)), 2*FLAGS.batch_size, 1, shuffle=False)
# Collect the predictions here
all_predictions = []
all_d=[]
for db in batches:
x1_dev_b,x2_dev_b,y_dev_b = zip(*db)
        batch_predictions, batch_acc, batch_sim = sess.run([predictions, accuracy, sim], {input_x1: x1_dev_b, input_x2: x2_dev_b, input_y: y_dev_b, dropout_keep_prob: 1.0})  # use a separate name so the 'sim' tensor stays a valid fetch on later batches
all_predictions = np.concatenate([all_predictions, batch_predictions])
print(batch_predictions)
        all_d = np.concatenate([all_d, batch_sim])
print("DEV acc {}".format(batch_acc))
for ex in all_predictions:
print(ex)
correct_predictions = float(np.mean(all_d == y_test))
print("Accuracy: {:g}".format(correct_predictions))
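    # --- Illustrative extension (not part of the original script) ---
    # Assuming the thresholded similarity values in all_d and the labels in
    # y_test are both 0/1, a precision/recall breakdown follows directly with
    # plain numpy.
    all_d_arr = np.asarray(all_d)
    y_arr = np.asarray(y_test, dtype=all_d_arr.dtype)
    tp = float(np.sum((all_d_arr == 1) & (y_arr == 1)))
    fp = float(np.sum((all_d_arr == 1) & (y_arr == 0)))
    fn = float(np.sum((all_d_arr == 0) & (y_arr == 1)))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    print("Precision: {:g}  Recall: {:g}".format(precision, recall))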
| #! /usr/bin/env python
import tensorflow as tf
import numpy as np
import os
import time
import datetime
from tensorflow.contrib import learn
from input_helpers import InputHelper
# Parameters
# ==================================================
# Eval Parameters
tf.flags.DEFINE_integer("batch_size", 64, "Batch Size (default: 64)")
tf.flags.DEFINE_string("checkpoint_dir", "", "Checkpoint directory from training run")
tf.flags.DEFINE_string("eval_filepath", "match_valid.tsv", "Evaluate on this data (Default: None)")
tf.flags.DEFINE_string("vocab_filepath", "runs/1479874609/checkpoints/vocab", "Load training time vocabulary (Default: None)")
tf.flags.DEFINE_string("model", "runs/1479874609/checkpoints/model-32000", "Load trained model checkpoint (Default: None)")
# Misc Parameters
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
FLAGS = tf.flags.FLAGS
FLAGS._parse_flags()
print("\nParameters:")
for attr, value in sorted(FLAGS.__flags.items()):
print("{}={}".format(attr.upper(), value))
print("")
if FLAGS.eval_filepath==None or FLAGS.vocab_filepath==None or FLAGS.model==None :
print("Eval or Vocab filepaths are empty.")
exit()
# load data and map id-transform based on training time vocabulary
inpH = InputHelper()
x1_test,x2_test,y_test = inpH.getTestDataSet(FLAGS.eval_filepath, FLAGS.vocab_filepath, 30)
print("\nEvaluating...\n")
# Evaluation
# ==================================================
checkpoint_file = FLAGS.model
print(checkpoint_file)
graph = tf.Graph()
with graph.as_default():
session_conf = tf.ConfigProto(
allow_soft_placement=FLAGS.allow_soft_placement,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
# Load the saved meta graph and restore variables
saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
sess.run(tf.initialize_all_variables())
saver.restore(sess, checkpoint_file)
# Get the placeholders from the graph by name
input_x1 = graph.get_operation_by_name("input_x1").outputs[0]
input_x2 = graph.get_operation_by_name("input_x2").outputs[0]
input_y = graph.get_operation_by_name("input_y").outputs[0]
dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]
# Tensors we want to evaluate
predictions = graph.get_operation_by_name("output/distance").outputs[0]
accuracy = graph.get_operation_by_name("accuracy/accuracy").outputs[0]
sim = graph.get_operation_by_name("accuracy/temp_sim").outputs[0]
#emb = graph.get_operation_by_name("embedding/W").outputs[0]
#embedded_chars = tf.nn.embedding_lookup(emb,input_x)
# Generate batches for one epoch
batches = inpH.batch_iter(list(zip(x1_test,x2_test,y_test)), 2*FLAGS.batch_size, 1, shuffle=False)
# Collect the predictions here
all_predictions = []
all_d=[]
for db in batches:
x1_dev_b,x2_dev_b,y_dev_b = zip(*db)
        batch_predictions, batch_acc, batch_sim = sess.run([predictions, accuracy, sim], {input_x1: x1_dev_b, input_x2: x2_dev_b, input_y: y_dev_b, dropout_keep_prob: 1.0})  # use a separate name so the 'sim' tensor stays a valid fetch on later batches
all_predictions = np.concatenate([all_predictions, batch_predictions])
print(batch_predictions)
        all_d = np.concatenate([all_d, batch_sim])
print("DEV acc {}".format(batch_acc))
for ex in all_predictions:
print(ex)
correct_predictions = float(np.mean(all_d == y_test))
print("Accuracy: {:g}".format(correct_predictions))
| en | 0.594187 | #! /usr/bin/env python # Parameters # ================================================== # Eval Parameters # Misc Parameters # load data and map id-transform based on training time vocabulary # Evaluation # ================================================== # Load the saved meta graph and restore variables # Get the placeholders from the graph by name # Tensors we want to evaluate #emb = graph.get_operation_by_name("embedding/W").outputs[0] #embedded_chars = tf.nn.embedding_lookup(emb,input_x) # Generate batches for one epoch # Collect the predictions here | 2.301202 | 2 |
accounts/views.py | nikhiljohn10/django-auth | 0 | 10275 | <gh_stars>0
from django.urls import reverse
from django.conf import settings
from django.contrib import messages
from django.shortcuts import render, redirect
from django.core.mail import send_mail
from django.contrib.auth import login, logout, views, authenticate
from django.views.generic.edit import CreateView
from django.contrib.sessions.models import Session
from django.contrib.auth.decorators import login_required, permission_required
from accounts.tools import activater, mailer
from accounts.forms import SignUpForm, LoginForm
from accounts.models import User
@login_required
@permission_required("is_staff", login_url='/dashboard/')
def gmail(request):
request.session['oauth_state'] = mailer.auth_state
return redirect(mailer.auth_uri)
@login_required
@permission_required("is_staff", login_url='/dashboard/')
def gmail_verify(request):
code = request.GET.get('code','')
state = request.GET.get('state','')
if code and state == request.session['oauth_state']:
mailer.verify(code)
return redirect('dash:gmail')
class UserLogin(views.LoginView):
template_name = 'auth/login.html'
authentication_form = LoginForm
def form_valid(self, form):
user = form.get_user()
login(self.request, user)
if not self.request.POST.get('remember_me', None):
self.request.session.set_expiry(0)
messages.info(self.request, f"You are now logged in as {user}")
return redirect(self.get_success_url())
class SignUpView(CreateView):
form_class = SignUpForm
template_name = 'auth/signup.html'
def form_valid(self, form):
if mailer.activated:
user = form.save()
mailer.send_mail(
"Django Verification Code",
"Hi "+str(user)+",\nClick this link to activate: " +
reverse('auth:verify_email', args=(
user, activater.make_token(user))),
[user.email])
login(self.request, user)
else:
            messages.error(self.request,
                           "Gmail is not activated. Contact the site administrator.")
return redirect('auth:signup')
return redirect('core:home')
def user_manage_permission(user, username):
if not user.is_staff:
if user.username == username:
return True
else:
if user.username != username:
return True
return False
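# Behaviour summary (derived from the checks above): non-staff users may only
# manage their own account; staff users may manage any account except their own.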
@login_required
@permission_required("is_staff", login_url='/dashboard/')
def user_force_logout(request, username):
user = User.objects.get(username=username)
sessions = [s.delete() for s in Session.objects.all()
if s.get_decoded().get('_auth_user_id') == str(user.id)]
print(sessions)
return redirect('dash:users')
def user_verify_email(request, username, token):
user = User.objects.get(username=username)
if activater.check_token(user, token):
print(user, "is verified")
user.email_verified = True
user.save()
return redirect('dash:users')
@login_required
def user_disable(request, username):
if user_manage_permission(request.user, username):
user = User.objects.get(username=username)
user.is_active = False
user.save()
messages.error(request, 'Profile successfully disabled.')
else:
messages.error(
request, 'You are not allowed to perform this operation.')
if request.user.is_staff:
return redirect('dash:users')
else:
return redirect('dash:profile')
@login_required
def user_enable(request, username):
if user_manage_permission(request.user, username):
user = User.objects.get(username=username)
user.is_active = True
user.save()
messages.success(request, 'Profile successfully enabled.')
else:
messages.error(
request, 'You are not allowed to perform this operation.')
if request.user.is_staff:
return redirect('dash:users')
else:
return redirect('dash:profile')
@login_required
def user_delete(request, username):
if user_manage_permission(request.user, username):
user = User.objects.get(username=username)
user.delete()
messages.error(request, 'Profile successfully deleted.')
else:
messages.error(
request, 'You are not allowed to perform this operation.')
if request.user.is_staff:
return redirect('dash:users')
else:
return redirect('dash:profile')
user_login = UserLogin.as_view()
user_signup = SignUpView.as_view()
user_logout = views.LogoutView.as_view()
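# --- Illustrative URL wiring (hypothetical accounts/urls.py, not part of this module) ---
# The route names mirror the reverse() targets used above ('auth:signup',
# 'auth:verify_email', 'dash:users', ...); the exact path strings are assumptions.
#
# from django.urls import path
# from accounts import views
#
# app_name = 'auth'
# urlpatterns = [
#     path('login/', views.user_login, name='login'),
#     path('logout/', views.user_logout, name='logout'),
#     path('signup/', views.user_signup, name='signup'),
#     path('verify/<str:username>/<str:token>/', views.user_verify_email, name='verify_email'),
# ]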
| from django.urls import reverse
from django.conf import settings
from django.contrib import messages
from django.shortcuts import render, redirect
from django.core.mail import send_mail
from django.contrib.auth import login, logout, views, authenticate
from django.views.generic.edit import CreateView
from django.contrib.sessions.models import Session
from django.contrib.auth.decorators import login_required, permission_required
from accounts.tools import activater, mailer
from accounts.forms import SignUpForm, LoginForm
from accounts.models import User
@login_required
@permission_required("is_staff", login_url='/dashboard/')
def gmail(request):
request.session['oauth_state'] = mailer.auth_state
return redirect(mailer.auth_uri)
@login_required
@permission_required("is_staff", login_url='/dashboard/')
def gmail_verify(request):
code = request.GET.get('code','')
state = request.GET.get('state','')
if code and state == request.session['oauth_state']:
mailer.verify(code)
return redirect('dash:gmail')
class UserLogin(views.LoginView):
template_name = 'auth/login.html'
authentication_form = LoginForm
def form_valid(self, form):
user = form.get_user()
login(self.request, user)
if not self.request.POST.get('remember_me', None):
self.request.session.set_expiry(0)
messages.info(self.request, f"You are now logged in as {user}")
return redirect(self.get_success_url())
class SignUpView(CreateView):
form_class = SignUpForm
template_name = 'auth/signup.html'
def form_valid(self, form):
if mailer.activated:
user = form.save()
mailer.send_mail(
"Django Verification Code",
"Hi "+str(user)+",\nClick this link to activate: " +
reverse('auth:verify_email', args=(
user, activater.make_token(user))),
[user.email])
login(self.request, user)
else:
            messages.error(self.request,
                           "Gmail is not activated. Contact the site administrator.")
return redirect('auth:signup')
return redirect('core:home')
def user_manage_permission(user, username):
if not user.is_staff:
if user.username == username:
return True
else:
if user.username != username:
return True
return False
@login_required
@permission_required("is_staff", login_url='/dashboard/')
def user_force_logout(request, username):
user = User.objects.get(username=username)
sessions = [s.delete() for s in Session.objects.all()
if s.get_decoded().get('_auth_user_id') == str(user.id)]
print(sessions)
return redirect('dash:users')
def user_verify_email(request, username, token):
user = User.objects.get(username=username)
if activater.check_token(user, token):
print(user, "is verified")
user.email_verified = True
user.save()
return redirect('dash:users')
@login_required
def user_disable(request, username):
if user_manage_permission(request.user, username):
user = User.objects.get(username=username)
user.is_active = False
user.save()
messages.error(request, 'Profile successfully disabled.')
else:
messages.error(
request, 'You are not allowed to perform this operation.')
if request.user.is_staff:
return redirect('dash:users')
else:
return redirect('dash:profile')
@login_required
def user_enable(request, username):
if user_manage_permission(request.user, username):
user = User.objects.get(username=username)
user.is_active = True
user.save()
messages.success(request, 'Profile successfully enabled.')
else:
messages.error(
request, 'You are not allowed to perform this operation.')
if request.user.is_staff:
return redirect('dash:users')
else:
return redirect('dash:profile')
@login_required
def user_delete(request, username):
if user_manage_permission(request.user, username):
user = User.objects.get(username=username)
user.delete()
messages.error(request, 'Profile successfully deleted.')
else:
messages.error(
request, 'You are not allowed to perform this operation.')
if request.user.is_staff:
return redirect('dash:users')
else:
return redirect('dash:profile')
user_login = UserLogin.as_view()
user_signup = SignUpView.as_view()
user_logout = views.LogoutView.as_view() | none | 1 | 2.163409 | 2 |
|
python/testData/intentions/PyAnnotateVariableTypeIntentionTest/annotationTupleType.py | truthiswill/intellij-community | 2 | 10276 | v<caret>ar = (1, 'foo', None) | v<caret>ar = (1, 'foo', None) | none | 1 | 1.302547 | 1 |
|
bot/venv/lib/python3.7/site-packages/scipy/version.py | manaccac/sc2_bot | 76 | 10277 | <gh_stars>10-100
# THIS FILE IS GENERATED FROM SCIPY SETUP.PY
short_version = '1.5.4'
version = '1.5.4'
full_version = '1.5.4'
git_revision = '19acfed431060aafaa963f7e530c95e70cd4b85c'
release = True
if not release:
version = full_version
| # THIS FILE IS GENERATED FROM SCIPY SETUP.PY
short_version = '1.5.4'
version = '1.5.4'
full_version = '1.5.4'
git_revision = '19acfed431060aafaa963f7e530c95e70cd4b85c'
release = True
if not release:
version = full_version | en | 0.387835 | # THIS FILE IS GENERATED FROM SCIPY SETUP.PY | 0.960018 | 1 |
emrichen/input/__init__.py | jbek7/emrichen | 0 | 10278 | <reponame>jbek7/emrichen
from typing import TextIO, Union
from .json import load_json
from .yaml import load_yaml
PARSERS = {
'yaml': load_yaml,
'json': load_json,
}
def parse(data: Union[TextIO, str], format: str):
if format in PARSERS:
return PARSERS[format](data)
else:
raise ValueError('No parser for format {format}'.format(format=format))
| from typing import TextIO, Union
from .json import load_json
from .yaml import load_yaml
PARSERS = {
'yaml': load_yaml,
'json': load_json,
}
def parse(data: Union[TextIO, str], format: str):
if format in PARSERS:
return PARSERS[format](data)
else:
raise ValueError('No parser for format {format}'.format(format=format)) | none | 1 | 2.85197 | 3 |
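A short usage sketch for the parse helper above; the YAML snippet and the unsupported-format case are made up for illustration:
from emrichen.input import parse

doc = parse("name: example\ncount: 3\n", format='yaml')   # dispatched to load_yaml

try:
    parse("key = 'value'", format='toml')                  # no 'toml' entry in PARSERS
except ValueError as exc:
    print(exc)  # No parser for format toml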
|
sdk/python/pulumi_aws_native/workspaces/get_workspace.py | pulumi/pulumi-aws-native | 29 | 10279 | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'GetWorkspaceResult',
'AwaitableGetWorkspaceResult',
'get_workspace',
'get_workspace_output',
]
@pulumi.output_type
class GetWorkspaceResult:
def __init__(__self__, bundle_id=None, directory_id=None, id=None, root_volume_encryption_enabled=None, tags=None, user_volume_encryption_enabled=None, volume_encryption_key=None, workspace_properties=None):
if bundle_id and not isinstance(bundle_id, str):
raise TypeError("Expected argument 'bundle_id' to be a str")
pulumi.set(__self__, "bundle_id", bundle_id)
if directory_id and not isinstance(directory_id, str):
raise TypeError("Expected argument 'directory_id' to be a str")
pulumi.set(__self__, "directory_id", directory_id)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if root_volume_encryption_enabled and not isinstance(root_volume_encryption_enabled, bool):
raise TypeError("Expected argument 'root_volume_encryption_enabled' to be a bool")
pulumi.set(__self__, "root_volume_encryption_enabled", root_volume_encryption_enabled)
if tags and not isinstance(tags, list):
raise TypeError("Expected argument 'tags' to be a list")
pulumi.set(__self__, "tags", tags)
if user_volume_encryption_enabled and not isinstance(user_volume_encryption_enabled, bool):
raise TypeError("Expected argument 'user_volume_encryption_enabled' to be a bool")
pulumi.set(__self__, "user_volume_encryption_enabled", user_volume_encryption_enabled)
if volume_encryption_key and not isinstance(volume_encryption_key, str):
raise TypeError("Expected argument 'volume_encryption_key' to be a str")
pulumi.set(__self__, "volume_encryption_key", volume_encryption_key)
if workspace_properties and not isinstance(workspace_properties, dict):
raise TypeError("Expected argument 'workspace_properties' to be a dict")
pulumi.set(__self__, "workspace_properties", workspace_properties)
@property
@pulumi.getter(name="bundleId")
def bundle_id(self) -> Optional[str]:
return pulumi.get(self, "bundle_id")
@property
@pulumi.getter(name="directoryId")
def directory_id(self) -> Optional[str]:
return pulumi.get(self, "directory_id")
@property
@pulumi.getter
def id(self) -> Optional[str]:
return pulumi.get(self, "id")
@property
@pulumi.getter(name="rootVolumeEncryptionEnabled")
def root_volume_encryption_enabled(self) -> Optional[bool]:
return pulumi.get(self, "root_volume_encryption_enabled")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence['outputs.WorkspaceTag']]:
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="userVolumeEncryptionEnabled")
def user_volume_encryption_enabled(self) -> Optional[bool]:
return pulumi.get(self, "user_volume_encryption_enabled")
@property
@pulumi.getter(name="volumeEncryptionKey")
def volume_encryption_key(self) -> Optional[str]:
return pulumi.get(self, "volume_encryption_key")
@property
@pulumi.getter(name="workspaceProperties")
def workspace_properties(self) -> Optional['outputs.WorkspaceProperties']:
return pulumi.get(self, "workspace_properties")
class AwaitableGetWorkspaceResult(GetWorkspaceResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetWorkspaceResult(
bundle_id=self.bundle_id,
directory_id=self.directory_id,
id=self.id,
root_volume_encryption_enabled=self.root_volume_encryption_enabled,
tags=self.tags,
user_volume_encryption_enabled=self.user_volume_encryption_enabled,
volume_encryption_key=self.volume_encryption_key,
workspace_properties=self.workspace_properties)
def get_workspace(id: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetWorkspaceResult:
"""
Resource Type definition for AWS::WorkSpaces::Workspace
"""
__args__ = dict()
__args__['id'] = id
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('aws-native:workspaces:getWorkspace', __args__, opts=opts, typ=GetWorkspaceResult).value
return AwaitableGetWorkspaceResult(
bundle_id=__ret__.bundle_id,
directory_id=__ret__.directory_id,
id=__ret__.id,
root_volume_encryption_enabled=__ret__.root_volume_encryption_enabled,
tags=__ret__.tags,
user_volume_encryption_enabled=__ret__.user_volume_encryption_enabled,
volume_encryption_key=__ret__.volume_encryption_key,
workspace_properties=__ret__.workspace_properties)
@_utilities.lift_output_func(get_workspace)
def get_workspace_output(id: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetWorkspaceResult]:
"""
Resource Type definition for AWS::WorkSpaces::Workspace
"""
...
| # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'GetWorkspaceResult',
'AwaitableGetWorkspaceResult',
'get_workspace',
'get_workspace_output',
]
@pulumi.output_type
class GetWorkspaceResult:
def __init__(__self__, bundle_id=None, directory_id=None, id=None, root_volume_encryption_enabled=None, tags=None, user_volume_encryption_enabled=None, volume_encryption_key=None, workspace_properties=None):
if bundle_id and not isinstance(bundle_id, str):
raise TypeError("Expected argument 'bundle_id' to be a str")
pulumi.set(__self__, "bundle_id", bundle_id)
if directory_id and not isinstance(directory_id, str):
raise TypeError("Expected argument 'directory_id' to be a str")
pulumi.set(__self__, "directory_id", directory_id)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if root_volume_encryption_enabled and not isinstance(root_volume_encryption_enabled, bool):
raise TypeError("Expected argument 'root_volume_encryption_enabled' to be a bool")
pulumi.set(__self__, "root_volume_encryption_enabled", root_volume_encryption_enabled)
if tags and not isinstance(tags, list):
raise TypeError("Expected argument 'tags' to be a list")
pulumi.set(__self__, "tags", tags)
if user_volume_encryption_enabled and not isinstance(user_volume_encryption_enabled, bool):
raise TypeError("Expected argument 'user_volume_encryption_enabled' to be a bool")
pulumi.set(__self__, "user_volume_encryption_enabled", user_volume_encryption_enabled)
if volume_encryption_key and not isinstance(volume_encryption_key, str):
raise TypeError("Expected argument 'volume_encryption_key' to be a str")
pulumi.set(__self__, "volume_encryption_key", volume_encryption_key)
if workspace_properties and not isinstance(workspace_properties, dict):
raise TypeError("Expected argument 'workspace_properties' to be a dict")
pulumi.set(__self__, "workspace_properties", workspace_properties)
@property
@pulumi.getter(name="bundleId")
def bundle_id(self) -> Optional[str]:
return pulumi.get(self, "bundle_id")
@property
@pulumi.getter(name="directoryId")
def directory_id(self) -> Optional[str]:
return pulumi.get(self, "directory_id")
@property
@pulumi.getter
def id(self) -> Optional[str]:
return pulumi.get(self, "id")
@property
@pulumi.getter(name="rootVolumeEncryptionEnabled")
def root_volume_encryption_enabled(self) -> Optional[bool]:
return pulumi.get(self, "root_volume_encryption_enabled")
@property
@pulumi.getter
def tags(self) -> Optional[Sequence['outputs.WorkspaceTag']]:
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="userVolumeEncryptionEnabled")
def user_volume_encryption_enabled(self) -> Optional[bool]:
return pulumi.get(self, "user_volume_encryption_enabled")
@property
@pulumi.getter(name="volumeEncryptionKey")
def volume_encryption_key(self) -> Optional[str]:
return pulumi.get(self, "volume_encryption_key")
@property
@pulumi.getter(name="workspaceProperties")
def workspace_properties(self) -> Optional['outputs.WorkspaceProperties']:
return pulumi.get(self, "workspace_properties")
class AwaitableGetWorkspaceResult(GetWorkspaceResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetWorkspaceResult(
bundle_id=self.bundle_id,
directory_id=self.directory_id,
id=self.id,
root_volume_encryption_enabled=self.root_volume_encryption_enabled,
tags=self.tags,
user_volume_encryption_enabled=self.user_volume_encryption_enabled,
volume_encryption_key=self.volume_encryption_key,
workspace_properties=self.workspace_properties)
def get_workspace(id: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetWorkspaceResult:
"""
Resource Type definition for AWS::WorkSpaces::Workspace
"""
__args__ = dict()
__args__['id'] = id
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('aws-native:workspaces:getWorkspace', __args__, opts=opts, typ=GetWorkspaceResult).value
return AwaitableGetWorkspaceResult(
bundle_id=__ret__.bundle_id,
directory_id=__ret__.directory_id,
id=__ret__.id,
root_volume_encryption_enabled=__ret__.root_volume_encryption_enabled,
tags=__ret__.tags,
user_volume_encryption_enabled=__ret__.user_volume_encryption_enabled,
volume_encryption_key=__ret__.volume_encryption_key,
workspace_properties=__ret__.workspace_properties)
@_utilities.lift_output_func(get_workspace)
def get_workspace_output(id: Optional[pulumi.Input[str]] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> pulumi.Output[GetWorkspaceResult]:
"""
Resource Type definition for AWS::WorkSpaces::Workspace
"""
...
| en | 0.897218 | # coding=utf-8 # *** WARNING: this file was generated by the Pulumi SDK Generator. *** # *** Do not edit by hand unless you're certain you know what you are doing! *** # pylint: disable=using-constant-test Resource Type definition for AWS::WorkSpaces::Workspace Resource Type definition for AWS::WorkSpaces::Workspace | 1.825763 | 2 |
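A small Pulumi-program sketch using the lookup functions defined above; the package import name and the workspace ID are assumptions for illustration only:
import pulumi
import pulumi_aws_native as aws_native

# Placeholder ID; in a real stack this would come from another resource or config.
ws = aws_native.workspaces.get_workspace_output(id="ws-0123456789abcdef0")

pulumi.export("bundleId", ws.bundle_id)
pulumi.export("directoryId", ws.directory_id)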
testtools/__init__.py | afy2103/spambayes-9-10-Frozen | 0 | 10280 | __author__ = 'AlexYang'
| __author__ = 'AlexYang'
| none | 1 | 0.949948 | 1 |
|
tests/test_distance.py | mkclairhong/quail | 1 | 10281 | <reponame>mkclairhong/quail
# -*- coding: utf-8 -*-
from quail.distance import *
import numpy as np
import pytest
from scipy.spatial.distance import cdist
def test_match():
a = 'A'
b = 'B'
assert np.equal(match(a, b), 1)
def test_euclidean_list():
a = [0, 1, 0]
b = [0, 1, 0]
assert np.equal(euclidean(a, b), 0)
def test_euclidean_array():
a = np.array([0, 1, 0])
b = np.array([0, 1, 0])
assert np.equal(euclidean(a, b), 0)
def test_correlation_list():
a = [0, 1, 0]
b = [0, 1, 0]
assert np.equal(correlation(a, b), 1)
def test_correlation_array():
a = np.array([0, 1, 0])
b = np.array([0, 1, 0])
assert np.equal(correlation(a, b), 1)
| # -*- coding: utf-8 -*-
from quail.distance import *
import numpy as np
import pytest
from scipy.spatial.distance import cdist
def test_match():
a = 'A'
b = 'B'
assert np.equal(match(a, b), 1)
def test_euclidean_list():
a = [0, 1, 0]
b = [0, 1, 0]
assert np.equal(euclidean(a, b), 0)
def test_euclidean_array():
a = np.array([0, 1, 0])
b = np.array([0, 1, 0])
assert np.equal(euclidean(a, b), 0)
def test_correlation_list():
a = [0, 1, 0]
b = [0, 1, 0]
assert np.equal(correlation(a, b), 1)
def test_correlation_array():
a = np.array([0, 1, 0])
b = np.array([0, 1, 0])
assert np.equal(correlation(a, b), 1) | en | 0.769321 | # -*- coding: utf-8 -*- | 2.58452 | 3 |
utils/manisfestManager.py | ovitrac/pizza3 | 1 | 10282 | #!/usr/bin/env python
###############################################################################
# #
# manifestManager.py #
# #
# Work with online data manifests (creating / syncing / validating) #
# #
# Copyright (C) <NAME> #
# #
###############################################################################
# #
# This program is free software: you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation, either version 3 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program. If not, see <http://www.gnu.org/licenses/>. #
# #
###############################################################################
__author__ = "<NAME>"
__copyright__ = "Copyright 2014"
__credits__ = ["<NAME>"]
__license__ = "GPLv3"
__maintainer__ = "<NAME>"
__email__ = "<EMAIL>"
__version__ = "0.35"
###############################################################################
###############################################################################
###############################################################################
###############################################################################
__MANIFEST__ = ".dmanifest"
###############################################################################
###############################################################################
###############################################################################
###############################################################################
# system includes
import os
import hashlib
import urllib.request, urllib.error, urllib.parse
import urllib.request, urllib.parse, urllib.error
import shutil
import errno
# local includes
from fileEntity import FileEntity as FE
###############################################################################
###############################################################################
###############################################################################
###############################################################################
class ManifestManager(object):
"""Use this interface for storing and managing file and paths"""
def __init__(self, manType=None, timeout=30):
self.timeout = timeout
self.myExtensions = [".py",".sh"]
self.files = []
if manType is not None:
self.type = manType
else:
self.type = "generic"
def createManifest(self, path, manifestName=None):
"""inventory all files in path and create a manifest file"""
if manifestName is None:
manifestName = __MANIFEST__
# make the root file entity
root_path = os.path.abspath(path)
root_fe = FE('root', ".", None, "-", 0)
self.files.append(root_fe)
# now make all the ones below
parents = [root_fe]
dirs, files = self.listdir(path)[:2]
self.walk(parents, root_path, '', dirs, files, skipFile=manifestName)
with open(os.path.join(path, manifestName), 'w') as man_fh:
# print the header
man_fh.write("#\t::: %s ::: \tPizza3 manifest version %s\n\n" % (self.type, __version__))
for f in self.files:
if f.parent is not None:
man_fh.write("%s\n" % f)
def diffManifests(self,
localManifestLocation,
sourceManifestLocation,
localManifestName=None,
sourceManifestName=None,
printDiffs=False):
"""check for any differences between two manifests
if remote is true then sourceManifestLocation is a URL
returns a list of files that need to be updated
"""
if localManifestName is None:
localManifestName = __MANIFEST__
if sourceManifestName is None:
sourceManifestName = __MANIFEST__
# get the "type" of the local manifest
l_type = "generic"
with open(os.path.join(localManifestLocation, localManifestName)) as l_man:
for line in l_man:
if line[0] == "#":
l_type = self.getManType(line)
break
# load the source manifest
s_type = "generic"
source_man = {}
source = ""
# first we assume it is remote
try:
s_man = urllib.request.urlopen(sourceManifestLocation + "/" + sourceManifestName, None, self.timeout)
source = sourceManifestLocation + "/"
except ValueError:
# then it is probably a file
s_man = open(os.path.join(sourceManifestLocation, sourceManifestName))
source = os.path.join(sourceManifestLocation) + os.path.sep
except urllib.error.URLError:
# problems connecting to server, perhaps user is behind a proxy or firewall
print("Error: failed to connect to server.")
return (None, None, None, None, None)
first_line = True
for line in s_man:
if first_line:
first_line = False
if line[0] == "#":
# get the type of the manifest
s_type = self.getManType(line)
if s_type != l_type:
print("Error: type of source manifest (%s) does not match type of local manifest (%s)" % (s_type, l_type))
return (None, None, None, None, None)
else:
# no type specified
print("Error: type of source manifest is not specified. Is this a valid manifest file?")
return (None, None, None, None, None)
self.type = l_type
if line[0] != "#":
fields = line.rstrip().split("\t")
# set the dict up as {path => [hash, size, seenLocal]
source_man[fields[0]] = [fields[1], fields[2], False]
# keep lists of modifications
deleted = []
addedDirs = []
addedFiles = []
modified = []
with open(os.path.join(localManifestLocation, localManifestName)) as l_man:
for line in l_man:
if line[0] != "#":
fields = line.rstrip().split("\t")
try:
if source_man[fields[0]][0] != fields[1]:
# hashes don't match
modified.append(fields[0])
# seen this file
source_man[fields[0]][2] = True
except KeyError:
# this file has been deleted from the source manifest
deleted.append(fields[0])
# check for new files
for f in list(source_man.keys()):
if source_man[f][2] == False:
if source_man[f][0] == '-':
addedDirs.append(f)
else:
addedFiles.append(f)
if printDiffs:
new_size = 0
modified_size = 0
for f in addedFiles:
new_size += int(source_man[f][1])
for f in modified:
modified_size += int(source_man[f][1])
if len(addedFiles) > 0:
print("#------------------------------------------------------")
print("# Source contains %d new file(s) (%s)" % (len(addedFiles), self.formatData(new_size)))
for f in addedFiles:
print("\t".join([self.formatData(int(source_man[f][1])), f]))
if len(addedDirs) > 0:
print("#------------------------------------------------------")
print("# Source contains %d new folders(s)" % (len(addedDirs)))
for f in addedDirs:
print(f)
if len(modified) > 0:
print("#------------------------------------------------------")
print("# Source contains %d modified file(s) (%s)" % (len(modified), self.formatData(modified_size)))
for f in modified:
print(f)
if len(deleted) > 0:
print("#------------------------------------------------------")
print("# %d files have been deleted in the source:" % len(deleted))
for f in deleted:
print(f)
else:
return (source,
[(a, source_man[a]) for a in addedFiles],
[(a, source_man[a]) for a in addedDirs],
deleted,
[(m, source_man[m]) for m in modified])
def updateManifest(self,
localManifestLocation,
sourceManifestLocation,
localManifestName=None,
sourceManifestName=None,
prompt=True):
"""Update local files based on remote changes"""
# get the diffs
source, added_files, added_dirs, deleted, modified = self.diffManifests(localManifestLocation,
sourceManifestLocation,
localManifestName,
sourceManifestName)
# bail if the diff failed
if source is None:
return False
# no changes by default
do_down = False
if prompt:
total_size = 0
for f in added_files:
total_size += int(f[1][1])
for f in modified:
total_size += int(f[1][1])
if total_size != 0:
print("****************************************************************")
print("%d new file(s) to be downloaded from source" % len(added_files))
print("%d existing file(s) to be updated" % len(modified))
print("%s will need to be downloaded" % self.formatData(total_size))
do_down = self.promptUserDownload()
if not do_down:
print("Download aborted")
update_manifest = False
if do_down:
update_manifest = True
for add in added_dirs:
# make the dirs first
full_path = os.path.abspath(os.path.join(localManifestLocation, add[0]))
self.makeSurePathExists(full_path)
for add in added_files:
full_path = os.path.abspath(os.path.join(localManifestLocation, add[0]))
urllib.request.urlretrieve(source+add[0], full_path)
for modify in modified:
full_path = os.path.abspath(os.path.join(localManifestLocation, modify[0]))
urllib.request.urlretrieve(source+modify[0], full_path)
if update_manifest:
print("(re) creating manifest file (please be patient)")
self.createManifest(localManifestLocation, manifestName=localManifestName)
return True
def getManType(self, line):
"""Work out the manifest type from the first line of the file"""
return line.rstrip().split("##")[1]
def formatData(self, amount):
"""Pretty print file sizes"""
if amount < 1024*1024:
return "%d B" % amount
elif amount < 1024*1024*1024:
return "%0.2f MB" % (float(amount)/(1024.*1024.))
elif amount < 1024*1024*1024*1024:
return "%0.2f GB" % (float(amount)/(1024.*1024.*1024.))
elif amount < 1024*1024*1024*1024*1024:
return "%0.2f TB" % (float(amount)/(1024.*1024.*1024.*1024.))
#-----------------------------------------------------------------------------
# FS utilities
def makeSurePathExists(self, path):
try:
os.makedirs(path)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
def promptUserDownload(self):
"""Check that the user is OK with making changes"""
input_not_ok = True
minimal=False
valid_responses = {'Y':True,'N':False}
vrs = ",".join([x.lower() for x in list(valid_responses.keys())])
while(input_not_ok):
if(minimal):
option = input("Download? ("+vrs+") : ").upper()
else:
option = input("Confirm you want to download this data\n" \
"Changes *WILL* be permanent\n" \
"Continue? ("+vrs+") : ").upper()
if(option in valid_responses):
print("****************************************************************")
return valid_responses[option]
else:
print("ERROR: unrecognised choice '"+option+"'")
minimal = True
def walk(self, parents, full_path, rel_path, dirs, files, skipFile=__MANIFEST__):
"""recursive walk through directory tree"""
# first do files here
for f in files:
if (f != skipFile) and os.path.splitext(f)[1] in self.myExtensions:
path = os.path.join(full_path, f)
self.files.append(FE(f,
rel_path,
parents[-1],
self.hashfile(path),
os.path.getsize(path)
)
)
for d in dirs:
# the walk will go into these dirs first
tmp_fe = FE(d, rel_path, parents[-1], "-", 0)
self.files.append(tmp_fe)
parents.append(tmp_fe)
new_full_path = os.path.join(full_path, d)
new_rel_path = os.path.join(rel_path, d)
new_dirs, new_files = self.listdir(new_full_path)[:2]
self.walk(parents, new_full_path, new_rel_path, new_dirs, new_files)
parents.pop()
def listdir(self, path):
"""List dirs, files etc in path (one dir deep)"""
dirs, files, links = [], [], []
for name in os.listdir(path):
path_name = os.path.join(path, name)
if os.path.isdir(path_name):
dirs.append(name)
elif os.path.isfile(path_name):
files.append(name)
elif os.path.islink(path_name):
links.append(name)
return dirs, files, links
def hashfile(self, fileName, blocksize=65536):
"""Hash a file and return the digest"""
hasher = hashlib.sha256()
with open(fileName,"rb") as fh:
buf = fh.read(blocksize)
while len(buf) > 0:
hasher.update(buf.strip())
buf = fh.read(blocksize)
return hasher.hexdigest()
return "?"
###############################################################################
###############################################################################
###############################################################################
###############################################################################
# %% DEBUG
# ===================================================
# main()
# ===================================================
# for debugging purposes (code called as a script)
# the code is called from here
# ===================================================
if __name__ == '__main__':
man = ManifestManager()
man.createManifest("/home/olivi/billy/python",manifestName="Pizza3.manifest") | #!/usr/bin/env python
###############################################################################
# #
# manifestManager.py #
# #
# Work with online data manifests (creating / syncing / validating) #
# #
# Copyright (C) <NAME> #
# #
###############################################################################
# #
# This program is free software: you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation, either version 3 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program. If not, see <http://www.gnu.org/licenses/>. #
# #
###############################################################################
__author__ = "<NAME>"
__copyright__ = "Copyright 2014"
__credits__ = ["<NAME>"]
__license__ = "GPLv3"
__maintainer__ = "<NAME>"
__email__ = "<EMAIL>"
__version__ = "0.35"
###############################################################################
###############################################################################
###############################################################################
###############################################################################
__MANIFEST__ = ".dmanifest"
###############################################################################
###############################################################################
###############################################################################
###############################################################################
# system includes
import os
import hashlib
import urllib.request, urllib.error, urllib.parse
import urllib.request, urllib.parse, urllib.error
import shutil
import errno
# local includes
from fileEntity import FileEntity as FE
###############################################################################
###############################################################################
###############################################################################
###############################################################################
class ManifestManager(object):
"""Use this interface for storing and managing file and paths"""
def __init__(self, manType=None, timeout=30):
self.timeout = timeout
self.myExtensions = [".py",".sh"]
self.files = []
if manType is not None:
self.type = manType
else:
self.type = "generic"
def createManifest(self, path, manifestName=None):
"""inventory all files in path and create a manifest file"""
if manifestName is None:
manifestName = __MANIFEST__
# make the root file entity
root_path = os.path.abspath(path)
root_fe = FE('root', ".", None, "-", 0)
self.files.append(root_fe)
# now make all the ones below
parents = [root_fe]
dirs, files = self.listdir(path)[:2]
self.walk(parents, root_path, '', dirs, files, skipFile=manifestName)
with open(os.path.join(path, manifestName), 'w') as man_fh:
# print the header
man_fh.write("#\t::: %s ::: \tPizza3 manifest version %s\n\n" % (self.type, __version__))
for f in self.files:
if f.parent is not None:
man_fh.write("%s\n" % f)
def diffManifests(self,
localManifestLocation,
sourceManifestLocation,
localManifestName=None,
sourceManifestName=None,
printDiffs=False):
"""check for any differences between two manifests
if remote is true then sourceManifestLocation is a URL
returns a list of files that need to be updated
"""
if localManifestName is None:
localManifestName = __MANIFEST__
if sourceManifestName is None:
sourceManifestName = __MANIFEST__
# get the "type" of the local manifest
l_type = "generic"
with open(os.path.join(localManifestLocation, localManifestName)) as l_man:
for line in l_man:
if line[0] == "#":
l_type = self.getManType(line)
break
# load the source manifest
s_type = "generic"
source_man = {}
source = ""
# first we assume it is remote
try:
s_man = urllib.request.urlopen(sourceManifestLocation + "/" + sourceManifestName, None, self.timeout)
source = sourceManifestLocation + "/"
except ValueError:
# then it is probably a file
s_man = open(os.path.join(sourceManifestLocation, sourceManifestName))
source = os.path.join(sourceManifestLocation) + os.path.sep
except urllib.error.URLError:
# problems connecting to server, perhaps user is behind a proxy or firewall
print("Error: failed to connect to server.")
return (None, None, None, None, None)
first_line = True
for line in s_man:
if first_line:
first_line = False
if line[0] == "#":
# get the type of the manifest
s_type = self.getManType(line)
if s_type != l_type:
print("Error: type of source manifest (%s) does not match type of local manifest (%s)" % (s_type, l_type))
return (None, None, None, None, None)
else:
# no type specified
print("Error: type of source manifest is not specified. Is this a valid manifest file?")
return (None, None, None, None, None)
self.type = l_type
if line[0] != "#":
fields = line.rstrip().split("\t")
# set the dict up as {path => [hash, size, seenLocal]
source_man[fields[0]] = [fields[1], fields[2], False]
# keep lists of modifications
deleted = []
addedDirs = []
addedFiles = []
modified = []
with open(os.path.join(localManifestLocation, localManifestName)) as l_man:
for line in l_man:
if line[0] != "#":
fields = line.rstrip().split("\t")
try:
if source_man[fields[0]][0] != fields[1]:
# hashes don't match
modified.append(fields[0])
# seen this file
source_man[fields[0]][2] = True
except KeyError:
# this file has been deleted from the source manifest
deleted.append(fields[0])
# check for new files
for f in list(source_man.keys()):
if source_man[f][2] == False:
if source_man[f][0] == '-':
addedDirs.append(f)
else:
addedFiles.append(f)
if printDiffs:
new_size = 0
modified_size = 0
for f in addedFiles:
new_size += int(source_man[f][1])
for f in modified:
modified_size += int(source_man[f][1])
if len(addedFiles) > 0:
print("#------------------------------------------------------")
print("# Source contains %d new file(s) (%s)" % (len(addedFiles), self.formatData(new_size)))
for f in addedFiles:
print("\t".join([self.formatData(int(source_man[f][1])), f]))
if len(addedDirs) > 0:
print("#------------------------------------------------------")
print("# Source contains %d new folders(s)" % (len(addedDirs)))
for f in addedDirs:
print(f)
if len(modified) > 0:
print("#------------------------------------------------------")
print("# Source contains %d modified file(s) (%s)" % (len(modified), self.formatData(modified_size)))
for f in modified:
print(f)
if len(deleted) > 0:
print("#------------------------------------------------------")
print("# %d files have been deleted in the source:" % len(deleted))
for f in deleted:
print(f)
else:
return (source,
[(a, source_man[a]) for a in addedFiles],
[(a, source_man[a]) for a in addedDirs],
deleted,
[(m, source_man[m]) for m in modified])
def updateManifest(self,
localManifestLocation,
sourceManifestLocation,
localManifestName=None,
sourceManifestName=None,
prompt=True):
"""Update local files based on remote changes"""
# get the diffs
source, added_files, added_dirs, deleted, modified = self.diffManifests(localManifestLocation,
sourceManifestLocation,
localManifestName,
sourceManifestName)
# bail if the diff failed
if source is None:
return False
# no changes by default
do_down = False
if prompt:
total_size = 0
for f in added_files:
total_size += int(f[1][1])
for f in modified:
total_size += int(f[1][1])
if total_size != 0:
print("****************************************************************")
print("%d new file(s) to be downloaded from source" % len(added_files))
print("%d existing file(s) to be updated" % len(modified))
print("%s will need to be downloaded" % self.formatData(total_size))
do_down = self.promptUserDownload()
if not do_down:
print("Download aborted")
update_manifest = False
if do_down:
update_manifest = True
for add in added_dirs:
# make the dirs first
full_path = os.path.abspath(os.path.join(localManifestLocation, add[0]))
self.makeSurePathExists(full_path)
for add in added_files:
full_path = os.path.abspath(os.path.join(localManifestLocation, add[0]))
urllib.request.urlretrieve(source+add[0], full_path)
for modify in modified:
full_path = os.path.abspath(os.path.join(localManifestLocation, modify[0]))
urllib.request.urlretrieve(source+modify[0], full_path)
if update_manifest:
print("(re) creating manifest file (please be patient)")
self.createManifest(localManifestLocation, manifestName=localManifestName)
return True
def getManType(self, line):
"""Work out the manifest type from the first line of the file"""
return line.rstrip().split("##")[1]
def formatData(self, amount):
"""Pretty print file sizes"""
if amount < 1024*1024:
return "%d B" % amount
elif amount < 1024*1024*1024:
return "%0.2f MB" % (float(amount)/(1024.*1024.))
elif amount < 1024*1024*1024*1024:
return "%0.2f GB" % (float(amount)/(1024.*1024.*1024.))
elif amount < 1024*1024*1024*1024*1024:
return "%0.2f TB" % (float(amount)/(1024.*1024.*1024.*1024.))
#-----------------------------------------------------------------------------
# FS utilities
def makeSurePathExists(self, path):
try:
os.makedirs(path)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
def promptUserDownload(self):
"""Check that the user is OK with making changes"""
input_not_ok = True
minimal=False
valid_responses = {'Y':True,'N':False}
vrs = ",".join([x.lower() for x in list(valid_responses.keys())])
while(input_not_ok):
if(minimal):
option = input("Download? ("+vrs+") : ").upper()
else:
option = input("Confirm you want to download this data\n" \
"Changes *WILL* be permanent\n" \
"Continue? ("+vrs+") : ").upper()
if(option in valid_responses):
print("****************************************************************")
return valid_responses[option]
else:
print("ERROR: unrecognised choice '"+option+"'")
minimal = True
def walk(self, parents, full_path, rel_path, dirs, files, skipFile=__MANIFEST__):
"""recursive walk through directory tree"""
# first do files here
for f in files:
if (f != skipFile) and os.path.splitext(f)[1] in self.myExtensions:
path = os.path.join(full_path, f)
self.files.append(FE(f,
rel_path,
parents[-1],
self.hashfile(path),
os.path.getsize(path)
)
)
for d in dirs:
# the walk will go into these dirs first
tmp_fe = FE(d, rel_path, parents[-1], "-", 0)
self.files.append(tmp_fe)
parents.append(tmp_fe)
new_full_path = os.path.join(full_path, d)
new_rel_path = os.path.join(rel_path, d)
new_dirs, new_files = self.listdir(new_full_path)[:2]
self.walk(parents, new_full_path, new_rel_path, new_dirs, new_files)
parents.pop()
def listdir(self, path):
"""List dirs, files etc in path (one dir deep)"""
dirs, files, links = [], [], []
for name in os.listdir(path):
path_name = os.path.join(path, name)
if os.path.isdir(path_name):
dirs.append(name)
elif os.path.isfile(path_name):
files.append(name)
elif os.path.islink(path_name):
links.append(name)
return dirs, files, links
def hashfile(self, fileName, blocksize=65536):
"""Hash a file and return the digest"""
hasher = hashlib.sha256()
with open(fileName,"rb") as fh:
buf = fh.read(blocksize)
while len(buf) > 0:
hasher.update(buf.strip())
buf = fh.read(blocksize)
return hasher.hexdigest()
return "?"
###############################################################################
###############################################################################
###############################################################################
###############################################################################
# %% DEBUG
# ===================================================
# main()
# ===================================================
# for debugging purposes (code called as a script)
# the code is called from here
# ===================================================
if __name__ == '__main__':
man = ManifestManager()
man.createManifest("/home/olivi/billy/python",manifestName="Pizza3.manifest") | de | 0.35631 | #!/usr/bin/env python ############################################################################### # # # manifestManager.py # # # # Work with online data manifests (creating / syncing / validating) # # # # Copyright (C) <NAME> # # # ############################################################################### # # # This program is free software: you can redistribute it and/or modify # # it under the terms of the GNU General Public License as published by # # the Free Software Foundation, either version 3 of the License, or # # (at your option) any later version. # # # # This program is distributed in the hope that it will be useful, # # but WITHOUT ANY WARRANTY; without even the implied warranty of # # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # # GNU General Public License for more details. # # # # You should have received a copy of the GNU General Public License # # along with this program. If not, see <http://www.gnu.org/licenses/>. # # # ############################################################################### ############################################################################### ############################################################################### ############################################################################### ############################################################################### ############################################################################### ############################################################################### ############################################################################### ############################################################################### # system includes # local includes ############################################################################### ############################################################################### ############################################################################### ############################################################################### Use this interface for storing and managing file and paths inventory all files in path and create a manifest file # make the root file entity # now make all the ones below # print the header check for any differences between two manifests if remote is true then sourceManifestLocation is a URL returns a list of files that need to be updated # get the "type" of the local manifest # load the source manifest # first we assume it is remote # then it is probably a file # problems connecting to server, perhaps user is behind a proxy or firewall # get the type of the manifest # no type specified # set the dict up as {path => [hash, size, seenLocal] # keep lists of modifications # hashes don't match # seen this file # this file has been deleted from the source manifest # check for new files Update local files based on remote changes # get the diffs # bail if the diff failed # no changes by default # make the dirs first Work out the manifest type from the first line of the file #")[1] Pretty print file sizes #----------------------------------------------------------------------------- # FS utilities Check that the user is OK with making changes recursive walk through directory tree # first do files here # the walk will go into these dirs first List dirs, files etc in path (one dir deep) Hash a file and return the digest 
############################################################################### ############################################################################### ############################################################################### ############################################################################### # %% DEBUG # =================================================== # main() # =================================================== # for debugging purposes (code called as a script) # the code is called from here # =================================================== | 1.474562 | 1 |
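A brief usage sketch for the ManifestManager class above (the module file is spelled manisfestManager.py in this repo); the local path and source URL are placeholders:
from manisfestManager import ManifestManager

man = ManifestManager(manType="Pizza3")

# Inventory *.py / *.sh files under a local tree and write Pizza3.manifest there.
man.createManifest("/path/to/pizza3", manifestName="Pizza3.manifest")

# Report differences against a published copy of the tree (remote URL or local dir).
man.diffManifests("/path/to/pizza3",
                  "https://example.org/pizza3",
                  localManifestName="Pizza3.manifest",
                  sourceManifestName="Pizza3.manifest",
                  printDiffs=True)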
temperature.py | rhwlr/TEST_PRELIM_SKILLS_EXAM | 0 | 10283 | class Temperature:
def __init__(self, kelvin=None, celsius=None, fahrenheit=None):
values = [x for x in [kelvin, celsius, fahrenheit] if x is not None]  # 0 is a valid temperature
if len(values) < 1:
raise ValueError('Need argument')
if len(values) > 1:
raise ValueError('Only one argument')
if celsius is not None:
self.kelvin = celsius + 273.15
elif fahrenheit is not None:
self.kelvin = (fahrenheit - 32) * 5 / 9 + 273.15
else:
self.kelvin = kelvin
if self.kelvin < 0:
raise ValueError('Temperature in Kelvin cannot be negative')
def __str__(self):
return f'Temperature = {self.kelvin} Kelvins'
| class Temperature:
def __init__(self, kelvin=None, celsius=None, fahrenheit=None):
values = [x for x in [kelvin, celsius, fahrenheit] if x is not None]  # 0 is a valid temperature
if len(values) < 1:
raise ValueError('Need argument')
if len(values) > 1:
raise ValueError('Only one argument')
if celsius is not None:
self.kelvin = celsius + 273.15
elif fahrenheit is not None:
self.kelvin = (fahrenheit - 32) * 5 / 9 + 273.15
else:
self.kelvin = kelvin
if self.kelvin < 0:
raise ValueError('Temperature in Kelvin cannot be negative')
def __str__(self):
return f'Temperature = {self.kelvin} Kelvins'
| none | 1 | 3.667491 | 4 |
|
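The constructor above normalises everything to Kelvin via K = C + 273.15 and K = (F - 32) * 5/9 + 273.15. A quick usage sketch:
t = Temperature(celsius=25)
print(t)                                    # Temperature = 298.15 Kelvins

print(Temperature(fahrenheit=212).kelvin)   # 373.15 (boiling point of water)

try:
    Temperature(kelvin=-1.0)
except ValueError as exc:
    print(exc)                              # Temperature in Kelvin cannot be negative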
af/shovel/test_canning.py | mimi89999/pipeline | 0 | 10284 | <gh_stars>0
#!/usr/bin/env python2.7
import unittest
import canning
class TestNop(unittest.TestCase):
def test_nop(self):
canning.NopTeeFd.write("asdf")
class TestSlice(unittest.TestCase):
REPORT = "20130505T065614Z-VN-AS24173-dns_consistency-no_report_id-0.1.0-probe.yaml"
@staticmethod
def rpt(year):
assert year < 10000
return "{:04d}1231T065614Z-VN-AS24173-dns_consistency-no_report_id-0.1.0-probe.yaml".format(
year
)
def test_empty(self):
asis, tarfiles = canning.pack_bucket(tuple())
self.assertFalse(asis)
self.assertFalse(tarfiles)
def test_badname(self):
self.assertRaises(RuntimeError, canning.pack_bucket, [("foo", 42)])
self.assertRaises(
RuntimeError, canning.pack_bucket, [("2013-05-05/" + self.REPORT, 42)]
)
def test_single(self):
for sz in [0, 1, 65 * 1048576]:
asis, tarfiles = canning.pack_bucket([(self.REPORT, sz)])
self.assertEqual(asis, [self.REPORT])
self.assertFalse(tarfiles)
def test_packing(self):
asis, tarfiles = canning.pack_bucket(
[(self.rpt(0), 42), (self.rpt(1), 64), (self.rpt(2), 64 * 1048576)]
)
self.assertEqual(asis, [self.rpt(2)])
self.assertEqual(tarfiles, {"dns_consistency.0.tar": map(self.rpt, (0, 1))})
def test_stupid(self): # FIXME: is it really good behaviour?...
asis, tarfiles = canning.pack_bucket(
[(self.rpt(0), 42), (self.rpt(1), 64 * 1048576 - 1), (self.rpt(2), 64)]
)
self.assertEqual(asis, map(self.rpt, (0, 1, 2)))
self.assertEqual(tarfiles, {})
if __name__ == "__main__":
unittest.main()
| #!/usr/bin/env python2.7
import unittest
import canning
class TestNop(unittest.TestCase):
def test_nop(self):
canning.NopTeeFd.write("asdf")
class TestSlice(unittest.TestCase):
REPORT = "20130505T065614Z-VN-AS24173-dns_consistency-no_report_id-0.1.0-probe.yaml"
@staticmethod
def rpt(year):
assert year < 10000
return "{:04d}1231T065614Z-VN-AS24173-dns_consistency-no_report_id-0.1.0-probe.yaml".format(
year
)
def test_empty(self):
asis, tarfiles = canning.pack_bucket(tuple())
self.assertFalse(asis)
self.assertFalse(tarfiles)
def test_badname(self):
self.assertRaises(RuntimeError, canning.pack_bucket, [("foo", 42)])
self.assertRaises(
RuntimeError, canning.pack_bucket, [("2013-05-05/" + self.REPORT, 42)]
)
def test_single(self):
for sz in [0, 1, 65 * 1048576]:
asis, tarfiles = canning.pack_bucket([(self.REPORT, sz)])
self.assertEqual(asis, [self.REPORT])
self.assertFalse(tarfiles)
def test_packing(self):
asis, tarfiles = canning.pack_bucket(
[(self.rpt(0), 42), (self.rpt(1), 64), (self.rpt(2), 64 * 1048576)]
)
self.assertEqual(asis, [self.rpt(2)])
self.assertEqual(tarfiles, {"dns_consistency.0.tar": map(self.rpt, (0, 1))})
def test_stupid(self): # FIXME: is it really good behaviour?...
asis, tarfiles = canning.pack_bucket(
[(self.rpt(0), 42), (self.rpt(1), 64 * 1048576 - 1), (self.rpt(2), 64)]
)
self.assertEqual(asis, map(self.rpt, (0, 1, 2)))
self.assertEqual(tarfiles, {})
if __name__ == "__main__":
unittest.main() | en | 0.713974 | #!/usr/bin/env python2.7 # FIXME: is it really good behaviour?... | 2.742623 | 3 |
Exercicios/ex061.py | jlsmirandela/Curso_Python | 0 | 10285 | <reponame>jlsmirandela/Curso_Python
print('-+-' *10)
print(' <NAME> PA')
print('+-+' * 10)
c = 1
ter = int(input('Insira o primeiro termo - '))
rz = int(input('Insira a razão - '))
while c <= 10:
print(ter, ' → ', end=' ')
ter += rz
c += 1
print('FIM')
| print('-+-' *10)
print(' <NAME> PA')
print('+-+' * 10)
c = 1
ter = int(input('Insira o primeiro termo - '))
rz = int(input('Insira a razão - '))
while c <= 10:
print(ter, ' → ', end=' ')
ter += rz
c += 1
print('FIM') | none | 1 | 3.760039 | 4 |
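The exercise above prints the first ten terms of an arithmetic progression (PA): the prompts ask for the first term ('primeiro termo') and the common difference ('razão'), and 'FIM' means end. The same terms follow the closed form a_n = a_1 + (n - 1) * r, sketched here for comparison:
def pa_terms(primeiro, razao, n=10):
    # i-th term of an arithmetic progression: primeiro + i * razao, i = 0..n-1
    return [primeiro + i * razao for i in range(n)]

print(pa_terms(2, 3))   # [2, 5, 8, 11, 14, 17, 20, 23, 26, 29]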
|
gpytorch/models/approximate_gp.py | phumm/gpytorch | 1 | 10286 | <reponame>phumm/gpytorch
#!/usr/bin/env python3
from .gp import GP
from .pyro import _PyroMixin # This will only contain functions if Pyro is installed
class ApproximateGP(GP, _PyroMixin):
def __init__(self, variational_strategy):
super().__init__()
self.variational_strategy = variational_strategy
def forward(self, x):
"""
As in the exact GP setting, the user-defined forward method should return the GP prior mean and covariance
evaluated at input locations x.
"""
raise NotImplementedError
def pyro_guide(self, input, beta=1.0, name_prefix=""):
"""
(For Pyro integration only). The component of a `pyro.guide` that
corresponds to drawing samples from the latent GP function.
Args:
:attr:`input` (:obj:`torch.Tensor`)
The inputs :math:`\mathbf X`.
:attr:`beta` (float, default=1.)
How much to scale the :math:`\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]`
term by.
:attr:`name_prefix` (str, default="")
A name prefix to prepend to pyro sample sites.
"""
return super().pyro_guide(input, beta=beta, name_prefix=name_prefix)
def pyro_model(self, input, beta=1.0, name_prefix=""):
r"""
(For Pyro integration only). The component of a `pyro.model` that
corresponds to drawing samples from the latent GP function.
Args:
:attr:`input` (:obj:`torch.Tensor`)
The inputs :math:`\mathbf X`.
:attr:`beta` (float, default=1.)
How much to scale the :math:`\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]`
term by.
:attr:`name_prefix` (str, default="")
A name prefix to prepend to pyro sample sites.
Returns: :obj:`torch.Tensor` samples from :math:`q(\mathbf f)`
"""
return super().pyro_model(input, beta=beta, name_prefix=name_prefix)
def __call__(self, inputs, prior=False, **kwargs):
if inputs.dim() == 1:
inputs = inputs.unsqueeze(-1)
return self.variational_strategy(inputs, prior=prior)
| #!/usr/bin/env python3
from .gp import GP
from .pyro import _PyroMixin # This will only contain functions if Pyro is installed
class ApproximateGP(GP, _PyroMixin):
def __init__(self, variational_strategy):
super().__init__()
self.variational_strategy = variational_strategy
def forward(self, x):
"""
As in the exact GP setting, the user-defined forward method should return the GP prior mean and covariance
evaluated at input locations x.
"""
raise NotImplementedError
def pyro_guide(self, input, beta=1.0, name_prefix=""):
"""
(For Pyro integration only). The component of a `pyro.guide` that
corresponds to drawing samples from the latent GP function.
Args:
:attr:`input` (:obj:`torch.Tensor`)
The inputs :math:`\mathbf X`.
:attr:`beta` (float, default=1.)
How much to scale the :math:`\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]`
term by.
:attr:`name_prefix` (str, default="")
A name prefix to prepend to pyro sample sites.
"""
return super().pyro_guide(input, beta=beta, name_prefix=name_prefix)
def pyro_model(self, input, beta=1.0, name_prefix=""):
r"""
(For Pyro integration only). The component of a `pyro.model` that
corresponds to drawing samples from the latent GP function.
Args:
:attr:`input` (:obj:`torch.Tensor`)
The inputs :math:`\mathbf X`.
:attr:`beta` (float, default=1.)
How much to scale the :math:`\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]`
term by.
:attr:`name_prefix` (str, default="")
A name prefix to prepend to pyro sample sites.
Returns: :obj:`torch.Tensor` samples from :math:`q(\mathbf f)`
"""
return super().pyro_model(input, beta=beta, name_prefix=name_prefix)
def __call__(self, inputs, prior=False, **kwargs):
if inputs.dim() == 1:
inputs = inputs.unsqueeze(-1)
return self.variational_strategy(inputs, prior=prior) | en | 0.645391 | #!/usr/bin/env python3 # This will only contain functions if Pyro is installed As in the exact GP setting, the user-defined forward method should return the GP prior mean and covariance evaluated at input locations x. (For Pyro integration only). The component of a `pyro.guide` that corresponds to drawing samples from the latent GP function. Args: :attr:`input` (:obj:`torch.Tensor`) The inputs :math:`\mathbf X`. :attr:`beta` (float, default=1.) How much to scale the :math:`\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]` term by. :attr:`name_prefix` (str, default="") A name prefix to prepend to pyro sample sites. (For Pyro integration only). The component of a `pyro.model` that corresponds to drawing samples from the latent GP function. Args: :attr:`input` (:obj:`torch.Tensor`) The inputs :math:`\mathbf X`. :attr:`beta` (float, default=1.) How much to scale the :math:`\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]` term by. :attr:`name_prefix` (str, default="") A name prefix to prepend to pyro sample sites. Returns: :obj:`torch.Tensor` samples from :math:`q(\mathbf f)` | 2.455583 | 2 |
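ApproximateGP is meant to be subclassed together with a variational strategy. A minimal SVGP-style sketch following the standard gpytorch pattern; the kernel, mean and inducing-point choices are illustrative:
import torch
import gpytorch
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution, VariationalStrategy


class SVGPModel(ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = CholeskyVariationalDistribution(inducing_points.size(0))
        variational_strategy = VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        # Prior mean and covariance at the inputs, as required by ApproximateGP.forward
        return gpytorch.distributions.MultivariateNormal(self.mean_module(x), self.covar_module(x))


model = SVGPModel(inducing_points=torch.randn(16, 1))
output = model(torch.linspace(0, 1, 50))   # marginal q(f) at 50 test points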
tests/test_ping.py | d-wysocki/flask-resty | 86 | 10287 | import pytest
from flask_resty import Api
from flask_resty.testing import assert_response
# -----------------------------------------------------------------------------
@pytest.fixture(autouse=True)
def routes(app):
api = Api(app, "/api")
api.add_ping("/ping")
# -----------------------------------------------------------------------------
def test_ping(base_client):
response = base_client.get("/ping")
assert_response(response, 200)
assert response.get_data(as_text=True) == ""
| import pytest
from flask_resty import Api
from flask_resty.testing import assert_response
# -----------------------------------------------------------------------------
@pytest.fixture(autouse=True)
def routes(app):
api = Api(app, "/api")
api.add_ping("/ping")
# -----------------------------------------------------------------------------
def test_ping(base_client):
response = base_client.get("/ping")
assert_response(response, 200)
assert response.get_data(as_text=True) == ""
| en | 0.12172 | # ----------------------------------------------------------------------------- # ----------------------------------------------------------------------------- | 2.447855 | 2 |
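The ping test above relies on app and base_client pytest fixtures defined elsewhere in the test suite. A plausible conftest.py sketch providing them; the real project may define these differently:
import flask
import pytest


@pytest.fixture
def app():
    app = flask.Flask(__name__)
    app.testing = True
    return app


@pytest.fixture
def base_client(app):
    with app.test_client() as client:
        yield client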
tests/test_vetters.py | pllim/exovetter | 0 | 10288 | <filename>tests/test_vetters.py
from numpy.testing import assert_allclose
from astropy.io import ascii
from astropy import units as u
import lightkurve as lk
from exovetter import const as exo_const
from exovetter import vetters
from exovetter.tce import Tce
from astropy.utils.data import get_pkg_data_filename
def get_wasp18_tce():
tce = Tce(period=0.94124 * u.day,
epoch=58374.669883 * u.day,
epoch_offset=-2400000.5 * u.day,
depth=0.00990112 * exo_const.frac_amp,
duration=0.08932 * u.day,
event_name='WASP-18 b',
target_name='WASP-18',
snr=50)
return tce
def get_wasp18_lightcurve():
lc_file = get_pkg_data_filename("data/wasp18b_flat_lightcurve.csv")
lc_table = ascii.read(lc_file, data_start=1)
lc = lk.LightCurve(time=lc_table['col2'], flux=lc_table['col3'],
flux_err=lc_table['col4'], time_format="btjd")
return lc
def test_vetters():
tce = get_wasp18_tce()
lc = get_wasp18_lightcurve()
metrics = dict()
vetter_list = [vetters.Lpp(),
vetters.OddEven(),
vetters.TransitPhaseCoverage()]
for v in vetter_list:
vetter = v
_ = vetter.run(tce, lc)
metrics.update(vetter.__dict__)
assert_allclose(metrics['norm_lpp'], 7.93119, rtol=1e-3)
assert_allclose(metrics['tp_cover'], 1.0, rtol=1e-5)
assert_allclose(metrics['odd_depth'][0], 0.99, rtol=1e-1)
| <filename>tests/test_vetters.py
from numpy.testing import assert_allclose
from astropy.io import ascii
from astropy import units as u
import lightkurve as lk
from exovetter import const as exo_const
from exovetter import vetters
from exovetter.tce import Tce
from astropy.utils.data import get_pkg_data_filename
def get_wasp18_tce():
tce = Tce(period=0.94124 * u.day,
epoch=58374.669883 * u.day,
epoch_offset=-2400000.5 * u.day,
depth=0.00990112 * exo_const.frac_amp,
duration=0.08932 * u.day,
event_name='WASP-18 b',
target_name='WASP-18',
snr=50)
return tce
def get_wasp18_lightcurve():
lc_file = get_pkg_data_filename("data/wasp18b_flat_lightcurve.csv")
lc_table = ascii.read(lc_file, data_start=1)
lc = lk.LightCurve(time=lc_table['col2'], flux=lc_table['col3'],
flux_err=lc_table['col4'], time_format="btjd")
return lc
def test_vetters():
tce = get_wasp18_tce()
lc = get_wasp18_lightcurve()
metrics = dict()
vetter_list = [vetters.Lpp(),
vetters.OddEven(),
vetters.TransitPhaseCoverage()]
for v in vetter_list:
vetter = v
_ = vetter.run(tce, lc)
metrics.update(vetter.__dict__)
assert_allclose(metrics['norm_lpp'], 7.93119, rtol=1e-3)
assert_allclose(metrics['tp_cover'], 1.0, rtol=1e-5)
assert_allclose(metrics['odd_depth'][0], 0.99, rtol=1e-1)
| none | 1 | 2.232303 | 2 |
|
pyqtgraph/dockarea/DockDrop.py | hishizuka/pyqtgraph | 2,762 | 10289 | <reponame>hishizuka/pyqtgraph
# -*- coding: utf-8 -*-
from ..Qt import QtCore, QtGui
class DockDrop(object):
"""Provides dock-dropping methods"""
def __init__(self, allowedAreas=None):
object.__init__(self)
if allowedAreas is None:
allowedAreas = ['center', 'right', 'left', 'top', 'bottom']
self.allowedAreas = set(allowedAreas)
self.setAcceptDrops(True)
self.dropArea = None
self.overlay = DropAreaOverlay(self)
self.overlay.raise_()
def resizeOverlay(self, size):
self.overlay.resize(size)
def raiseOverlay(self):
self.overlay.raise_()
def dragEnterEvent(self, ev):
src = ev.source()
if hasattr(src, 'implements') and src.implements('dock'):
#print "drag enter accept"
ev.accept()
else:
#print "drag enter ignore"
ev.ignore()
def dragMoveEvent(self, ev):
#print "drag move"
# QDragMoveEvent inherits QDropEvent which provides posF()
# PyQt6 provides only position()
posF = ev.posF() if hasattr(ev, 'posF') else ev.position()
ld = posF.x()
rd = self.width() - ld
td = posF.y()
bd = self.height() - td
mn = min(ld, rd, td, bd)
if mn > 30:
self.dropArea = "center"
elif (ld == mn or td == mn) and mn > self.height()/3.:
self.dropArea = "center"
elif (rd == mn or ld == mn) and mn > self.width()/3.:
self.dropArea = "center"
elif rd == mn:
self.dropArea = "right"
elif ld == mn:
self.dropArea = "left"
elif td == mn:
self.dropArea = "top"
elif bd == mn:
self.dropArea = "bottom"
if ev.source() is self and self.dropArea == 'center':
#print " no self-center"
self.dropArea = None
ev.ignore()
elif self.dropArea not in self.allowedAreas:
#print " not allowed"
self.dropArea = None
ev.ignore()
else:
#print " ok"
ev.accept()
self.overlay.setDropArea(self.dropArea)
def dragLeaveEvent(self, ev):
self.dropArea = None
self.overlay.setDropArea(self.dropArea)
def dropEvent(self, ev):
area = self.dropArea
if area is None:
return
if area == 'center':
area = 'above'
self.area.moveDock(ev.source(), area, self)
self.dropArea = None
self.overlay.setDropArea(self.dropArea)
class DropAreaOverlay(QtGui.QWidget):
"""Overlay widget that draws drop areas during a drag-drop operation"""
def __init__(self, parent):
QtGui.QWidget.__init__(self, parent)
self.dropArea = None
self.hide()
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_TransparentForMouseEvents)
def setDropArea(self, area):
self.dropArea = area
if area is None:
self.hide()
else:
## Resize overlay to just the region where drop area should be displayed.
## This works around a Qt bug--can't display transparent widgets over QGLWidget
prgn = self.parent().rect()
rgn = QtCore.QRect(prgn)
w = min(30, prgn.width()/3.)
h = min(30, prgn.height()/3.)
if self.dropArea == 'left':
rgn.setWidth(w)
elif self.dropArea == 'right':
rgn.setLeft(rgn.left() + prgn.width() - w)
elif self.dropArea == 'top':
rgn.setHeight(h)
elif self.dropArea == 'bottom':
rgn.setTop(rgn.top() + prgn.height() - h)
elif self.dropArea == 'center':
rgn.adjust(w, h, -w, -h)
self.setGeometry(rgn)
self.show()
self.update()
def paintEvent(self, ev):
if self.dropArea is None:
return
p = QtGui.QPainter(self)
rgn = self.rect()
p.setBrush(QtGui.QBrush(QtGui.QColor(100, 100, 255, 50)))
p.setPen(QtGui.QPen(QtGui.QColor(50, 50, 150), 3))
p.drawRect(rgn)
| # -*- coding: utf-8 -*-
from ..Qt import QtCore, QtGui
class DockDrop(object):
"""Provides dock-dropping methods"""
def __init__(self, allowedAreas=None):
object.__init__(self)
if allowedAreas is None:
allowedAreas = ['center', 'right', 'left', 'top', 'bottom']
self.allowedAreas = set(allowedAreas)
self.setAcceptDrops(True)
self.dropArea = None
self.overlay = DropAreaOverlay(self)
self.overlay.raise_()
def resizeOverlay(self, size):
self.overlay.resize(size)
def raiseOverlay(self):
self.overlay.raise_()
def dragEnterEvent(self, ev):
src = ev.source()
if hasattr(src, 'implements') and src.implements('dock'):
#print "drag enter accept"
ev.accept()
else:
#print "drag enter ignore"
ev.ignore()
def dragMoveEvent(self, ev):
#print "drag move"
# QDragMoveEvent inherits QDropEvent which provides posF()
# PyQt6 provides only position()
posF = ev.posF() if hasattr(ev, 'posF') else ev.position()
ld = posF.x()
rd = self.width() - ld
td = posF.y()
bd = self.height() - td
mn = min(ld, rd, td, bd)
if mn > 30:
self.dropArea = "center"
elif (ld == mn or td == mn) and mn > self.height()/3.:
self.dropArea = "center"
elif (rd == mn or ld == mn) and mn > self.width()/3.:
self.dropArea = "center"
elif rd == mn:
self.dropArea = "right"
elif ld == mn:
self.dropArea = "left"
elif td == mn:
self.dropArea = "top"
elif bd == mn:
self.dropArea = "bottom"
if ev.source() is self and self.dropArea == 'center':
#print " no self-center"
self.dropArea = None
ev.ignore()
elif self.dropArea not in self.allowedAreas:
#print " not allowed"
self.dropArea = None
ev.ignore()
else:
#print " ok"
ev.accept()
self.overlay.setDropArea(self.dropArea)
def dragLeaveEvent(self, ev):
self.dropArea = None
self.overlay.setDropArea(self.dropArea)
def dropEvent(self, ev):
area = self.dropArea
if area is None:
return
if area == 'center':
area = 'above'
self.area.moveDock(ev.source(), area, self)
self.dropArea = None
self.overlay.setDropArea(self.dropArea)
class DropAreaOverlay(QtGui.QWidget):
"""Overlay widget that draws drop areas during a drag-drop operation"""
def __init__(self, parent):
QtGui.QWidget.__init__(self, parent)
self.dropArea = None
self.hide()
self.setAttribute(QtCore.Qt.WidgetAttribute.WA_TransparentForMouseEvents)
def setDropArea(self, area):
self.dropArea = area
if area is None:
self.hide()
else:
## Resize overlay to just the region where drop area should be displayed.
## This works around a Qt bug--can't display transparent widgets over QGLWidget
prgn = self.parent().rect()
rgn = QtCore.QRect(prgn)
w = min(30, prgn.width()/3.)
h = min(30, prgn.height()/3.)
if self.dropArea == 'left':
rgn.setWidth(w)
elif self.dropArea == 'right':
rgn.setLeft(rgn.left() + prgn.width() - w)
elif self.dropArea == 'top':
rgn.setHeight(h)
elif self.dropArea == 'bottom':
rgn.setTop(rgn.top() + prgn.height() - h)
elif self.dropArea == 'center':
rgn.adjust(w, h, -w, -h)
self.setGeometry(rgn)
self.show()
self.update()
def paintEvent(self, ev):
if self.dropArea is None:
return
p = QtGui.QPainter(self)
rgn = self.rect()
p.setBrush(QtGui.QBrush(QtGui.QColor(100, 100, 255, 50)))
p.setPen(QtGui.QPen(QtGui.QColor(50, 50, 150), 3))
p.drawRect(rgn) | en | 0.723234 | # -*- coding: utf-8 -*- Provides dock-dropping methods #print "drag enter accept" #print "drag enter ignore" #print "drag move" # QDragMoveEvent inherits QDropEvent which provides posF() # PyQt6 provides only position() #print " no self-center" #print " not allowed" #print " ok" Overlay widget that draws drop areas during a drag-drop operation ## Resize overlay to just the region where drop area should be displayed. ## This works around a Qt bug--can't display transparent widgets over QGLWidget | 2.803189 | 3 |
data/download.py | pyaf/google-ai-open-images-object-detection-track | 0 | 10290 | import os
from subprocess import call
files = ['000002b66c9c498e.jpg', '000002b97e5471a0.jpg', '000002c707c9895e.jpg', '0000048549557964.jpg', '000004f4400f6ec5.jpg', '0000071d71a0a6f6.jpg', '000013ba71c12506.jpg', '000018acd19b4ad3.jpg', '00001bc2c4027449.jpg', '00001bcc92282a38.jpg', '0000201cd362f303.jpg', '000020780ccee28d.jpg', '000023aa04ab09ed.jpg', '0000253ea4ecbf19.jpg', '000025ea48cab6fc.jpg', '0000271195f2c007.jpg', '0000286a5c6a3eb5.jpg', '00002b368e91b947.jpg', '00002f4ff380c64c.jpg', '0000313e5dccf13b.jpg', '000032046c3f8371.jpg', '00003223e04e2e66.jpg', '0000333f08ced1cd.jpg']
for file in files:
    # the entries in `files` already carry the .jpg extension, so don't append it again
    if not os.path.exists('train/' + file):
spath = "gs://open-images-dataset/train/%s " % file
call(["gsutil", "cp", spath, 'train/'])
print(file, 'done', 'count:')
else:
print(file, 'already downloaded')
| import os
from subprocess import call
files = ['000002b66c9c498e.jpg', '000002b97e5471a0.jpg', '000002c707c9895e.jpg', '0000048549557964.jpg', '000004f4400f6ec5.jpg', '0000071d71a0a6f6.jpg', '000013ba71c12506.jpg', '000018acd19b4ad3.jpg', '00001bc2c4027449.jpg', '00001bcc92282a38.jpg', '0000201cd362f303.jpg', '000020780ccee28d.jpg', '000023aa04ab09ed.jpg', '0000253ea4ecbf19.jpg', '000025ea48cab6fc.jpg', '0000271195f2c007.jpg', '0000286a5c6a3eb5.jpg', '00002b368e91b947.jpg', '00002f4ff380c64c.jpg', '0000313e5dccf13b.jpg', '000032046c3f8371.jpg', '00003223e04e2e66.jpg', '0000333f08ced1cd.jpg']
for file in files:
    # the entries in `files` already carry the .jpg extension, so don't append it again
    if not os.path.exists('train/' + file):
spath = "gs://open-images-dataset/train/%s " % file
call(["gsutil", "cp", spath, 'train/'])
print(file, 'done', 'count:')
else:
print(file, 'already downloaded')
| none | 1 | 2.214116 | 2 |
|
cisco-ios-xr/ydk/models/cisco_ios_xr/SNMP_FRAMEWORK_MIB.py | bopopescu/ACI | 0 | 10291 | """ SNMP_FRAMEWORK_MIB
"""
from collections import OrderedDict
from ydk.types import Entity, EntityPath, Identity, Enum, YType, YLeaf, YLeafList, YList, LeafDataList, Bits, Empty, Decimal64
from ydk.filters import YFilter
from ydk.errors import YError, YModelError
from ydk.errors.error_handler import handle_type_error as _handle_type_error
class SnmpSecurityLevel(Enum):
"""
SnmpSecurityLevel (Enum Class)
.. data:: noAuthNoPriv = 1
.. data:: authNoPriv = 2
.. data:: authPriv = 3
"""
noAuthNoPriv = Enum.YLeaf(1, "noAuthNoPriv")
authNoPriv = Enum.YLeaf(2, "authNoPriv")
authPriv = Enum.YLeaf(3, "authPriv")
class SNMPFRAMEWORKMIB(Entity):
"""
.. attribute:: snmpengine
**type**\: :py:class:`Snmpengine <ydk.models.cisco_ios_xr.SNMP_FRAMEWORK_MIB.SNMPFRAMEWORKMIB.Snmpengine>`
"""
_prefix = 'SNMP_FRAMEWORK_MIB'
_revision = '2002-10-14'
def __init__(self):
super(SNMPFRAMEWORKMIB, self).__init__()
self._top_entity = None
self.yang_name = "SNMP-FRAMEWORK-MIB"
self.yang_parent_name = "SNMP-FRAMEWORK-MIB"
self.is_top_level_class = True
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_container_classes = OrderedDict([("snmpEngine", ("snmpengine", SNMPFRAMEWORKMIB.Snmpengine))])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict()
self.snmpengine = SNMPFRAMEWORKMIB.Snmpengine()
self.snmpengine.parent = self
self._children_name_map["snmpengine"] = "snmpEngine"
self._children_yang_names.add("snmpEngine")
self._segment_path = lambda: "SNMP-FRAMEWORK-MIB:SNMP-FRAMEWORK-MIB"
class Snmpengine(Entity):
"""
.. attribute:: snmpengineid
**type**\: str
**pattern:** (([0\-9a\-fA\-F]){2}(\:([0\-9a\-fA\-F]){2})\*)?
.. attribute:: snmpengineboots
**type**\: int
**range:** 1..2147483647
.. attribute:: snmpenginetime
**type**\: int
**range:** 0..2147483647
.. attribute:: snmpenginemaxmessagesize
**type**\: int
**range:** 484..2147483647
"""
_prefix = 'SNMP_FRAMEWORK_MIB'
_revision = '2002-10-14'
def __init__(self):
super(SNMPFRAMEWORKMIB.Snmpengine, self).__init__()
self.yang_name = "snmpEngine"
self.yang_parent_name = "SNMP-FRAMEWORK-MIB"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_container_classes = OrderedDict([])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict([
('snmpengineid', YLeaf(YType.str, 'snmpEngineID')),
('snmpengineboots', YLeaf(YType.int32, 'snmpEngineBoots')),
('snmpenginetime', YLeaf(YType.int32, 'snmpEngineTime')),
('snmpenginemaxmessagesize', YLeaf(YType.int32, 'snmpEngineMaxMessageSize')),
])
self.snmpengineid = None
self.snmpengineboots = None
self.snmpenginetime = None
self.snmpenginemaxmessagesize = None
self._segment_path = lambda: "snmpEngine"
self._absolute_path = lambda: "SNMP-FRAMEWORK-MIB:SNMP-FRAMEWORK-MIB/%s" % self._segment_path()
def __setattr__(self, name, value):
self._perform_setattr(SNMPFRAMEWORKMIB.Snmpengine, ['snmpengineid', 'snmpengineboots', 'snmpenginetime', 'snmpenginemaxmessagesize'], name, value)
def clone_ptr(self):
self._top_entity = SNMPFRAMEWORKMIB()
return self._top_entity
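# A minimal usage sketch, added for illustration only; it is not part of the
# ydk-gen generated binding above. It assumes a working ydk runtime and merely
# shows how the snmpEngine container and its leafs are meant to be populated;
# the values are placeholders.
if __name__ == "__main__":
    snmp_mib = SNMPFRAMEWORKMIB()
    snmp_mib.snmpengine.snmpengineboots = 1              # range 1..2147483647
    snmp_mib.snmpengine.snmpenginetime = 0               # range 0..2147483647
    snmp_mib.snmpengine.snmpenginemaxmessagesize = 484   # range 484..2147483647
    print(snmp_mib.snmpengine.snmpengineboots)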
| """ SNMP_FRAMEWORK_MIB
"""
from collections import OrderedDict
from ydk.types import Entity, EntityPath, Identity, Enum, YType, YLeaf, YLeafList, YList, LeafDataList, Bits, Empty, Decimal64
from ydk.filters import YFilter
from ydk.errors import YError, YModelError
from ydk.errors.error_handler import handle_type_error as _handle_type_error
class SnmpSecurityLevel(Enum):
"""
SnmpSecurityLevel (Enum Class)
.. data:: noAuthNoPriv = 1
.. data:: authNoPriv = 2
.. data:: authPriv = 3
"""
noAuthNoPriv = Enum.YLeaf(1, "noAuthNoPriv")
authNoPriv = Enum.YLeaf(2, "authNoPriv")
authPriv = Enum.YLeaf(3, "authPriv")
class SNMPFRAMEWORKMIB(Entity):
"""
.. attribute:: snmpengine
**type**\: :py:class:`Snmpengine <ydk.models.cisco_ios_xr.SNMP_FRAMEWORK_MIB.SNMPFRAMEWORKMIB.Snmpengine>`
"""
_prefix = 'SNMP_FRAMEWORK_MIB'
_revision = '2002-10-14'
def __init__(self):
super(SNMPFRAMEWORKMIB, self).__init__()
self._top_entity = None
self.yang_name = "SNMP-FRAMEWORK-MIB"
self.yang_parent_name = "SNMP-FRAMEWORK-MIB"
self.is_top_level_class = True
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_container_classes = OrderedDict([("snmpEngine", ("snmpengine", SNMPFRAMEWORKMIB.Snmpengine))])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict()
self.snmpengine = SNMPFRAMEWORKMIB.Snmpengine()
self.snmpengine.parent = self
self._children_name_map["snmpengine"] = "snmpEngine"
self._children_yang_names.add("snmpEngine")
self._segment_path = lambda: "SNMP-FRAMEWORK-MIB:SNMP-FRAMEWORK-MIB"
class Snmpengine(Entity):
"""
.. attribute:: snmpengineid
**type**\: str
**pattern:** (([0\-9a\-fA\-F]){2}(\:([0\-9a\-fA\-F]){2})\*)?
.. attribute:: snmpengineboots
**type**\: int
**range:** 1..2147483647
.. attribute:: snmpenginetime
**type**\: int
**range:** 0..2147483647
.. attribute:: snmpenginemaxmessagesize
**type**\: int
**range:** 484..2147483647
"""
_prefix = 'SNMP_FRAMEWORK_MIB'
_revision = '2002-10-14'
def __init__(self):
super(SNMPFRAMEWORKMIB.Snmpengine, self).__init__()
self.yang_name = "snmpEngine"
self.yang_parent_name = "SNMP-FRAMEWORK-MIB"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_container_classes = OrderedDict([])
self._child_list_classes = OrderedDict([])
self._leafs = OrderedDict([
('snmpengineid', YLeaf(YType.str, 'snmpEngineID')),
('snmpengineboots', YLeaf(YType.int32, 'snmpEngineBoots')),
('snmpenginetime', YLeaf(YType.int32, 'snmpEngineTime')),
('snmpenginemaxmessagesize', YLeaf(YType.int32, 'snmpEngineMaxMessageSize')),
])
self.snmpengineid = None
self.snmpengineboots = None
self.snmpenginetime = None
self.snmpenginemaxmessagesize = None
self._segment_path = lambda: "snmpEngine"
self._absolute_path = lambda: "SNMP-FRAMEWORK-MIB:SNMP-FRAMEWORK-MIB/%s" % self._segment_path()
def __setattr__(self, name, value):
self._perform_setattr(SNMPFRAMEWORKMIB.Snmpengine, ['snmpengineid', 'snmpengineboots', 'snmpenginetime', 'snmpenginemaxmessagesize'], name, value)
def clone_ptr(self):
self._top_entity = SNMPFRAMEWORKMIB()
return self._top_entity
| en | 0.23202 | SNMP_FRAMEWORK_MIB SnmpSecurityLevel (Enum Class) .. data:: noAuthNoPriv = 1 .. data:: authNoPriv = 2 .. data:: authPriv = 3 .. attribute:: snmpengine **type**\: :py:class:`Snmpengine <ydk.models.cisco_ios_xr.SNMP_FRAMEWORK_MIB.SNMPFRAMEWORKMIB.Snmpengine>` .. attribute:: snmpengineid **type**\: str **pattern:** (([0\-9a\-fA\-F]){2}(\:([0\-9a\-fA\-F]){2})\*)? .. attribute:: snmpengineboots **type**\: int **range:** 1..2147483647 .. attribute:: snmpenginetime **type**\: int **range:** 0..2147483647 .. attribute:: snmpenginemaxmessagesize **type**\: int **range:** 484..2147483647 | 2.099786 | 2 |
handlers/redirects.py | Bainky/Ventify | 6 | 10292 | from aiogram.utils.markdown import hide_link
from aiogram.types import CallbackQuery
from loader import dp
from utils import (
get_object,
get_attributes_of_object
)
from keyboards import (
anime_choose_safe_category,
anime_sfw_categories,
anime_nsfw_categories,
animals_categories,
menu_with_categories,
control_buttons
)
@dp.callback_query_handler(text="menu")
async def call_menu_with_categories(call: CallbackQuery):
"""
Function for sending a menu,
with a selection of safe content
"""
await call.answer()
# Editing the message
await call.message.edit_text(
text=(
"<b>🔗 Select a category to get a picture.</b>"
),
reply_markup=menu_with_categories()
)
@dp.callback_query_handler(text="anime")
async def call_anime_categories(call: CallbackQuery):
"""
Redirect to select anime actions
"""
await call.answer()
# Editing the message
await call.message.edit_text(
text=(
"<b>⚜️ Choose what content you want to see.</b>"
),
reply_markup=anime_choose_safe_category()
)
@dp.callback_query_handler(text=["sfw", "nsfw"])
async def call_nsfw_categories(call: CallbackQuery):
"""
Redirect to anime content
"""
data = call.data.upper()
message = call.message
# Send answer
await call.answer()
if data == "SFW":
kb = anime_sfw_categories()
else:
kb = anime_nsfw_categories()
# Editing the message
await message.edit_text(
text=(
f"<b>🍿 You are in the {data} category.</b>"
),
reply_markup=kb
)
@dp.callback_query_handler(text="animals")
async def call_animals_categories(call: CallbackQuery):
"""
Redirect to animals content
"""
await call.answer()
# Editing the message
await call.message.edit_text(
text=(
"<b>🦄 You are in the category with animals.</b>"
),
reply_markup=animals_categories()
)
@dp.callback_query_handler()
async def call_get_photography(call: CallbackQuery):
"""
Function for sending photos
"""
message = call.message
data = call.data
# Get json document
api = get_attributes_of_object()
if data == "generate_new":
data = message.text.split("#")[1]
obj = api[data]["object"]
atr = api[data]["attribute"]
mark = api[data]["entity"]
if mark == "anime":
mark = api[data]["safe"]
if mark == "memes":
mark = "menu"
# We get a link to the preview photo
link = await get_object(obj, atr)
await call.answer()
# Editing the message
await message.edit_text(
text=(
f"{hide_link(link)} #{data}"
),
reply_markup=control_buttons(mark)
) | from aiogram.utils.markdown import hide_link
from aiogram.types import CallbackQuery
from loader import dp
from utils import (
get_object,
get_attributes_of_object
)
from keyboards import (
anime_choose_safe_category,
anime_sfw_categories,
anime_nsfw_categories,
animals_categories,
menu_with_categories,
control_buttons
)
@dp.callback_query_handler(text="menu")
async def call_menu_with_categories(call: CallbackQuery):
"""
Function for sending a menu,
with a selection of safe content
"""
await call.answer()
# Editing the message
await call.message.edit_text(
text=(
"<b>🔗 Select a category to get a picture.</b>"
),
reply_markup=menu_with_categories()
)
@dp.callback_query_handler(text="anime")
async def call_anime_categories(call: CallbackQuery):
"""
Redirect to select anime actions
"""
await call.answer()
# Editing the message
await call.message.edit_text(
text=(
"<b>⚜️ Choose what content you want to see.</b>"
),
reply_markup=anime_choose_safe_category()
)
@dp.callback_query_handler(text=["sfw", "nsfw"])
async def call_nsfw_categories(call: CallbackQuery):
"""
Redirect to anime content
"""
data = call.data.upper()
message = call.message
# Send answer
await call.answer()
if data == "SFW":
kb = anime_sfw_categories()
else:
kb = anime_nsfw_categories()
# Editing the message
await message.edit_text(
text=(
f"<b>🍿 You are in the {data} category.</b>"
),
reply_markup=kb
)
@dp.callback_query_handler(text="animals")
async def call_animals_categories(call: CallbackQuery):
"""
Redirect to animals content
"""
await call.answer()
# Editing the message
await call.message.edit_text(
text=(
"<b>🦄 You are in the category with animals.</b>"
),
reply_markup=animals_categories()
)
@dp.callback_query_handler()
async def call_get_photography(call: CallbackQuery):
"""
Function for sending photos
"""
message = call.message
data = call.data
# Get json document
api = get_attributes_of_object()
if data == "generate_new":
data = message.text.split("#")[1]
obj = api[data]["object"]
atr = api[data]["attribute"]
mark = api[data]["entity"]
if mark == "anime":
mark = api[data]["safe"]
if mark == "memes":
mark = "menu"
# We get a link to the preview photo
link = await get_object(obj, atr)
await call.answer()
# Editing the message
await message.edit_text(
text=(
f"{hide_link(link)} #{data}"
),
reply_markup=control_buttons(mark)
) | en | 0.634356 | Function for sending a menu,
with a selection of safe content # Editing the message Redirect to select anime actions # Editing the message Redirect to anime content # Send answer # Editing the message Redirect to animals content # Editing the message Function for sending photos # Get json document # We get a link to the preview photo # Editing the message #{data}" | 2.767742 | 3 |
1stRound/Medium/322-Coin Change/DP.py | ericchen12377/Leetcode-Algorithm-Python | 2 | 10293 | from typing import List
class Solution:
def coinChange(self, coins: List[int], amount: int) -> int:
M = float('inf')
# dynamic programming
dp = [0] + [M] * amount
for i in range(1, amount+1):
dp[i] = 1 + min([dp[i-c] for c in coins if i >= c] or [M])
return dp[-1] if dp[-1] < M else -1
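# A small self-check added as an illustration of the DP above; it is not part of
# the original LeetCode submission and the coin/amount values are arbitrary.
if __name__ == "__main__":
    # dp[i] ends up holding the fewest coins needed to build amount i, so for
    # coins [1, 2, 5] and amount 11 the answer is 3 (11 = 5 + 5 + 1).
    print(Solution().coinChange([1, 2, 5], 11))  # expected: 3
    print(Solution().coinChange([2], 3))         # expected: -1, since 3 cannot be formed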
| from typing import List
class Solution:
def coinChange(self, coins: List[int], amount: int) -> int:
M = float('inf')
# dynamic programming
dp = [0] + [M] * amount
for i in range(1, amount+1):
dp[i] = 1 + min([dp[i-c] for c in coins if i >= c] or [M])
return dp[-1] if dp[-1] < M else -1
| en | 0.760267 | # dynamic programming | 3.276278 | 3 |
segmentation_test/Scripts/medpy_graphcut_voxel.py | rominashirazi/SpineSegmentation | 0 | 10294 | #!c:\users\hooma\documents\github\spinesegmentation\segmentation_test\scripts\python.exe
"""
Execute a graph cut on a voxel image based on some foreground and background markers.
Copyright (C) 2013 <NAME>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
# build-in modules
from argparse import RawTextHelpFormatter
import argparse
import logging
import os
# third-party modules
import scipy
# path changes
# own modules
from medpy.core import ArgumentError, Logger
from medpy.io import load, save, header
from medpy import graphcut
from medpy.graphcut.wrapper import split_marker
# information
__author__ = "<NAME>"
__version__ = "r0.3.1, 2012-03-23"
__email__ = "<EMAIL>"
__status__ = "Release"
__description__ = """
Perform a binary graph cut using Boykov's max-flow/min-cut algorithm.
This implementation does only compute a boundary term and does not use
any regional term. The desired boundary term can be selected via the
--boundary argument. Depending on the selected term, an additional
image has to be supplied as badditional.
In the case of the difference of means, it is the original image.
Furthermore the algorithm requires a binary image with foreground
markers and a binary image with background markers.
Additionally a filename for the created binary mask marking foreground
and background has to be supplied.
Note that the input images must be of the same dimensionality,
otherwise an exception is thrown.
Note to take into account the input images orientation.
Note that the quality of the resulting segmentations depends also on
the quality of the supplied markers.
Copyright (C) 2013 <NAME>
This program comes with ABSOLUTELY NO WARRANTY; This is free software,
and you are welcome to redistribute it under certain conditions; see
the LICENSE file or <http://www.gnu.org/licenses/> for details.
"""
# code
def main():
# parse cmd arguments
parser = getParser()
parser.parse_args()
args = getArguments(parser)
# prepare logger
logger = Logger.getInstance()
if args.debug: logger.setLevel(logging.DEBUG)
elif args.verbose: logger.setLevel(logging.INFO)
# check if output image exists
if not args.force:
if os.path.exists(args.output):
logger.warning('The output image {} already exists. Exiting.'.format(args.output))
exit(-1)
# select boundary term
    # available boundary terms: diff_linear, diff_exp, diff_div, diff_pow, max_linear, max_exp, max_div, max_pow
if 'diff_linear' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_linear
logger.info('Selected boundary term: linear difference of intensities')
elif 'diff_exp' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_exponential
logger.info('Selected boundary term: exponential difference of intensities')
elif 'diff_div' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_division
logger.info('Selected boundary term: divided difference of intensities')
elif 'diff_pow' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_power
logger.info('Selected boundary term: power based / raised difference of intensities')
elif 'max_linear' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_linear
logger.info('Selected boundary term: linear maximum of intensities')
elif 'max_exp' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_exponential
logger.info('Selected boundary term: exponential maximum of intensities')
elif 'max_div' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_division
logger.info('Selected boundary term: divided maximum of intensities')
elif 'max_pow' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_power
logger.info('Selected boundary term: power based / raised maximum of intensities')
# load input images
badditional_image_data, reference_header = load(args.badditional)
markers_image_data, _ = load(args.markers)
# split marker image into fg and bg images
fgmarkers_image_data, bgmarkers_image_data = split_marker(markers_image_data)
# check if all images dimensions are the same
if not (badditional_image_data.shape == fgmarkers_image_data.shape == bgmarkers_image_data.shape):
logger.critical('Not all of the supplied images are of the same shape.')
raise ArgumentError('Not all of the supplied images are of the same shape.')
# extract spacing if required
if args.spacing:
spacing = header.get_pixel_spacing(reference_header)
logger.info('Taking spacing of {} into account.'.format(spacing))
else:
spacing = False
# generate graph
logger.info('Preparing BK_MFMC C++ graph...')
gcgraph = graphcut.graph_from_voxels(fgmarkers_image_data,
bgmarkers_image_data,
boundary_term = boundary_term,
boundary_term_args = (badditional_image_data, args.sigma, spacing))
# execute min-cut
logger.info('Executing min-cut...')
maxflow = gcgraph.maxflow()
logger.debug('Maxflow is {}'.format(maxflow))
# reshape results to form a valid mask
logger.info('Applying results...')
result_image_data = scipy.zeros(bgmarkers_image_data.size, dtype=scipy.bool_)
for idx in range(len(result_image_data)):
result_image_data[idx] = 0 if gcgraph.termtype.SINK == gcgraph.what_segment(idx) else 1
result_image_data = result_image_data.reshape(bgmarkers_image_data.shape)
# save resulting mask
save(result_image_data.astype(scipy.bool_), args.output, reference_header, args.force)
logger.info('Successfully terminated.')
def getArguments(parser):
"Provides additional validation of the arguments collected by argparse."
return parser.parse_args()
def getParser():
"Creates and returns the argparse parser object."
parser = argparse.ArgumentParser(description=__description__, formatter_class=RawTextHelpFormatter)
parser.add_argument('sigma', type=float, help='The sigma required for the boundary terms.')
parser.add_argument('badditional', help='The additional image required by the boundary term. See there for details.')
parser.add_argument('markers', help='Image containing the foreground (=1) and background (=2) markers.')
parser.add_argument('output', help='The output image containing the segmentation.')
parser.add_argument('--boundary', default='diff_exp', help='The boundary term to use. Note that the ones prefixed with diff_ require the original image, while the ones prefixed with max_ require the gradient image.', choices=['diff_linear', 'diff_exp', 'diff_div', 'diff_pow', 'max_linear', 'max_exp', 'max_div', 'max_pow'])
parser.add_argument('-s', dest='spacing', action='store_true', help='Set this flag to take the pixel spacing of the image into account. The spacing data will be extracted from the baddtional image.')
parser.add_argument('-f', dest='force', action='store_true', help='Set this flag to silently override files that exist.')
parser.add_argument('-v', dest='verbose', action='store_true', help='Display more information.')
parser.add_argument('-d', dest='debug', action='store_true', help='Display debug information.')
return parser
if __name__ == "__main__":
main() | #!c:\users\hooma\documents\github\spinesegmentation\segmentation_test\scripts\python.exe
"""
Execute a graph cut on a voxel image based on some foreground and background markers.
Copyright (C) 2013 <NAME>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
# build-in modules
from argparse import RawTextHelpFormatter
import argparse
import logging
import os
# third-party modules
import scipy
# path changes
# own modules
from medpy.core import ArgumentError, Logger
from medpy.io import load, save, header
from medpy import graphcut
from medpy.graphcut.wrapper import split_marker
# information
__author__ = "<NAME>"
__version__ = "r0.3.1, 2012-03-23"
__email__ = "<EMAIL>"
__status__ = "Release"
__description__ = """
Perform a binary graph cut using Boykov's max-flow/min-cut algorithm.
This implementation does only compute a boundary term and does not use
any regional term. The desired boundary term can be selected via the
--boundary argument. Depending on the selected term, an additional
image has to be supplied as badditional.
In the case of the difference of means, it is the original image.
Furthermore the algorithm requires a binary image with foreground
markers and a binary image with background markers.
Additionally a filename for the created binary mask marking foreground
and background has to be supplied.
Note that the input images must be of the same dimensionality,
otherwise an exception is thrown.
Note to take into account the input images orientation.
Note that the quality of the resulting segmentations depends also on
the quality of the supplied markers.
Copyright (C) 2013 <NAME>
This program comes with ABSOLUTELY NO WARRANTY; This is free software,
and you are welcome to redistribute it under certain conditions; see
the LICENSE file or <http://www.gnu.org/licenses/> for details.
"""
# code
def main():
# parse cmd arguments
parser = getParser()
parser.parse_args()
args = getArguments(parser)
# prepare logger
logger = Logger.getInstance()
if args.debug: logger.setLevel(logging.DEBUG)
elif args.verbose: logger.setLevel(logging.INFO)
# check if output image exists
if not args.force:
if os.path.exists(args.output):
logger.warning('The output image {} already exists. Exiting.'.format(args.output))
exit(-1)
# select boundary term
    # available boundary terms: diff_linear, diff_exp, diff_div, diff_pow, max_linear, max_exp, max_div, max_pow
if 'diff_linear' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_linear
logger.info('Selected boundary term: linear difference of intensities')
elif 'diff_exp' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_exponential
logger.info('Selected boundary term: exponential difference of intensities')
elif 'diff_div' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_division
logger.info('Selected boundary term: divided difference of intensities')
elif 'diff_pow' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_difference_power
logger.info('Selected boundary term: power based / raised difference of intensities')
elif 'max_linear' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_linear
logger.info('Selected boundary term: linear maximum of intensities')
elif 'max_exp' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_exponential
logger.info('Selected boundary term: exponential maximum of intensities')
elif 'max_div' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_division
logger.info('Selected boundary term: divided maximum of intensities')
elif 'max_pow' == args.boundary:
boundary_term = graphcut.energy_voxel.boundary_maximum_power
logger.info('Selected boundary term: power based / raised maximum of intensities')
# load input images
badditional_image_data, reference_header = load(args.badditional)
markers_image_data, _ = load(args.markers)
# split marker image into fg and bg images
fgmarkers_image_data, bgmarkers_image_data = split_marker(markers_image_data)
# check if all images dimensions are the same
if not (badditional_image_data.shape == fgmarkers_image_data.shape == bgmarkers_image_data.shape):
logger.critical('Not all of the supplied images are of the same shape.')
raise ArgumentError('Not all of the supplied images are of the same shape.')
# extract spacing if required
if args.spacing:
spacing = header.get_pixel_spacing(reference_header)
logger.info('Taking spacing of {} into account.'.format(spacing))
else:
spacing = False
# generate graph
logger.info('Preparing BK_MFMC C++ graph...')
gcgraph = graphcut.graph_from_voxels(fgmarkers_image_data,
bgmarkers_image_data,
boundary_term = boundary_term,
boundary_term_args = (badditional_image_data, args.sigma, spacing))
# execute min-cut
logger.info('Executing min-cut...')
maxflow = gcgraph.maxflow()
logger.debug('Maxflow is {}'.format(maxflow))
# reshape results to form a valid mask
logger.info('Applying results...')
result_image_data = scipy.zeros(bgmarkers_image_data.size, dtype=scipy.bool_)
for idx in range(len(result_image_data)):
result_image_data[idx] = 0 if gcgraph.termtype.SINK == gcgraph.what_segment(idx) else 1
result_image_data = result_image_data.reshape(bgmarkers_image_data.shape)
# save resulting mask
save(result_image_data.astype(scipy.bool_), args.output, reference_header, args.force)
logger.info('Successfully terminated.')
def getArguments(parser):
"Provides additional validation of the arguments collected by argparse."
return parser.parse_args()
def getParser():
"Creates and returns the argparse parser object."
parser = argparse.ArgumentParser(description=__description__, formatter_class=RawTextHelpFormatter)
parser.add_argument('sigma', type=float, help='The sigma required for the boundary terms.')
parser.add_argument('badditional', help='The additional image required by the boundary term. See there for details.')
parser.add_argument('markers', help='Image containing the foreground (=1) and background (=2) markers.')
parser.add_argument('output', help='The output image containing the segmentation.')
parser.add_argument('--boundary', default='diff_exp', help='The boundary term to use. Note that the ones prefixed with diff_ require the original image, while the ones prefixed with max_ require the gradient image.', choices=['diff_linear', 'diff_exp', 'diff_div', 'diff_pow', 'max_linear', 'max_exp', 'max_div', 'max_pow'])
parser.add_argument('-s', dest='spacing', action='store_true', help='Set this flag to take the pixel spacing of the image into account. The spacing data will be extracted from the baddtional image.')
parser.add_argument('-f', dest='force', action='store_true', help='Set this flag to silently override files that exist.')
parser.add_argument('-v', dest='verbose', action='store_true', help='Display more information.')
parser.add_argument('-d', dest='debug', action='store_true', help='Display debug information.')
return parser
if __name__ == "__main__":
main() | en | 0.843025 | #!c:\users\hooma\documents\github\spinesegmentation\segmentation_test\scripts\python.exe Execute a graph cut on a voxel image based on some foreground and background markers. Copyright (C) 2013 <NAME> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. # build-in modules # third-party modules # path changes # own modules # information Perform a binary graph cut using Boykov's max-flow/min-cut algorithm. This implementation does only compute a boundary term and does not use any regional term. The desired boundary term can be selected via the --boundary argument. Depending on the selected term, an additional image has to be supplied as badditional. In the case of the difference of means, it is the original image. Furthermore the algorithm requires a binary image with foreground markers and a binary image with background markers. Additionally a filename for the created binary mask marking foreground and background has to be supplied. Note that the input images must be of the same dimensionality, otherwise an exception is thrown. Note to take into account the input images orientation. Note that the quality of the resulting segmentations depends also on the quality of the supplied markers. Copyright (C) 2013 <NAME> This program comes with ABSOLUTELY NO WARRANTY; This is free software, and you are welcome to redistribute it under certain conditions; see the LICENSE file or <http://www.gnu.org/licenses/> for details. # code # parse cmd arguments # prepare logger # check if output image exists # select boundary term # load input images # split marker image into fg and bg images # check if all images dimensions are the same # extract spacing if required # generate graph # execute min-cut # reshape results to form a valid mask # save resulting mask | 3.006441 | 3 |
_notes/book/conf.py | AstroMatt/astronaut-training-en | 1 | 10295 | author = '<NAME>'
email = '<EMAIL>'
project = 'Astronaut Training Program'
description = 'Astronaut Training Program'
extensions = [
'sphinx.ext.todo',
'sphinx.ext.imgmath',
]
todo_emit_warnings = False
todo_include_todos = True
exclude_patterns = []
# -----------------------------------------------------------------------------
# Standard book config
# -----------------------------------------------------------------------------
import os
import re
import subprocess
import sys
from datetime import datetime
needs_sphinx = '2.2'
mathjax_path = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-MML-AM_CHTML'
mathjax_config = {
'extensions': ['tex2jax.js'],
'jax': ['input/TeX', 'output/HTML-CSS'],
}
html_theme = 'sphinx_rtd_theme'
exclude_patterns = exclude_patterns + [
'.*',
'venv*',
'virtualenv*',
'_extensions',
'_img',
'_slides',
'_static',
'_themes',
'_tmp',
'*/_template.rst',
'*/contrib/*',
'*/solution/*',
'*/solutions/*',
'**.ipynb_checkpoints',
'README.rst',
'TODO.rst',
]
numfig_format = {
'section': 'Sect. %s.',
'figure': 'Fig. %s.',
'table': 'Tab. %s.',
'code-block': 'Code Listing %s.',
}
language = 'en'
source_directory = '.'
master_doc = 'index'
highlight_language = 'python3'
pygments_style = 'borland'
numfig = True
templates_path = ['_templates']
source_suffix = ['.rst']
imgmath_image_format = 'svg'
today_fmt = '%Y-%m-%d'
project_slug = re.sub(r'[\W]+', '', project)
sha1 = subprocess.Popen('git log -1 --format="%h"', stdout=subprocess.PIPE, shell=True).stdout.read().decode().replace('\n', '')
now = datetime.now()
year = now.year
today = now.strftime('%Y-%m-%d')
version = f'#{sha1}, {today}'
release = f'#{sha1}, {today}'
copyright = f'{year}, {author} <{email}>'
extensions_dir = os.path.join(os.path.dirname(__file__), '', '_extensions')
sys.path.append(extensions_dir)
htmlhelp_basename = project
html_theme_path = ['_themes']
html_static_path = ['_static']
html_favicon = '_static/favicon.png'
html_sidebars = {'sidebar': ['localtoc.html', 'sourcelink.html', 'searchbox.html']}
html_show_sphinx = False
html_context = {
'css_files': [
'_static/theme-overrides.css',
],
}
latex_documents = [(master_doc, f'{project_slug}.tex', project, author, 'manual')]
latex_elements = {
'papersize': 'a4paper',
'pointsize': '10pt',
'figure_align': 'htbp',
# Fix for: LaTeX Backend Fails with Citations In Figure Captions
'preamble': r"""
\usepackage{etoolbox}
\AtBeginEnvironment{figure}{\renewcommand{\phantomsection}{}}
"""
}
epub_title = project
epub_author = author
epub_publisher = author
epub_copyright = copyright
epub_exclude_files = ['search.html']
man_pages = [
(master_doc, project_slug, project, [author], 1)
]
texinfo_documents = [
(master_doc, project_slug, project, author, project, '', 'Miscellaneous'),
]
| author = '<NAME>'
email = '<EMAIL>'
project = 'Astronaut Training Program'
description = 'Astronaut Training Program'
extensions = [
'sphinx.ext.todo',
'sphinx.ext.imgmath',
]
todo_emit_warnings = False
todo_include_todos = True
exclude_patterns = []
# -----------------------------------------------------------------------------
# Standard book config
# -----------------------------------------------------------------------------
import os
import re
import subprocess
import sys
from datetime import datetime
needs_sphinx = '2.2'
mathjax_path = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-MML-AM_CHTML'
mathjax_config = {
'extensions': ['tex2jax.js'],
'jax': ['input/TeX', 'output/HTML-CSS'],
}
html_theme = 'sphinx_rtd_theme'
exclude_patterns = exclude_patterns + [
'.*',
'venv*',
'virtualenv*',
'_extensions',
'_img',
'_slides',
'_static',
'_themes',
'_tmp',
'*/_template.rst',
'*/contrib/*',
'*/solution/*',
'*/solutions/*',
'**.ipynb_checkpoints',
'README.rst',
'TODO.rst',
]
numfig_format = {
'section': 'Sect. %s.',
'figure': 'Fig. %s.',
'table': 'Tab. %s.',
'code-block': 'Code Listing %s.',
}
language = 'en'
source_directory = '.'
master_doc = 'index'
highlight_language = 'python3'
pygments_style = 'borland'
numfig = True
templates_path = ['_templates']
source_suffix = ['.rst']
imgmath_image_format = 'svg'
today_fmt = '%Y-%m-%d'
project_slug = re.sub(r'[\W]+', '', project)
sha1 = subprocess.Popen('git log -1 --format="%h"', stdout=subprocess.PIPE, shell=True).stdout.read().decode().replace('\n', '')
now = datetime.now()
year = now.year
today = now.strftime('%Y-%m-%d')
version = f'#{sha1}, {today}'
release = f'#{sha1}, {today}'
copyright = f'{year}, {author} <{email}>'
extensions_dir = os.path.join(os.path.dirname(__file__), '', '_extensions')
sys.path.append(extensions_dir)
htmlhelp_basename = project
html_theme_path = ['_themes']
html_static_path = ['_static']
html_favicon = '_static/favicon.png'
html_sidebars = {'sidebar': ['localtoc.html', 'sourcelink.html', 'searchbox.html']}
html_show_sphinx = False
html_context = {
'css_files': [
'_static/theme-overrides.css',
],
}
latex_documents = [(master_doc, f'{project_slug}.tex', project, author, 'manual')]
latex_elements = {
'papersize': 'a4paper',
'pointsize': '10pt',
'figure_align': 'htbp',
# Fix for: LaTeX Backend Fails with Citations In Figure Captions
'preamble': r"""
\usepackage{etoolbox}
\AtBeginEnvironment{figure}{\renewcommand{\phantomsection}{}}
"""
}
epub_title = project
epub_author = author
epub_publisher = author
epub_copyright = copyright
epub_exclude_files = ['search.html']
man_pages = [
(master_doc, project_slug, project, [author], 1)
]
texinfo_documents = [
(master_doc, project_slug, project, author, project, '', 'Miscellaneous'),
]
| en | 0.307774 | # ----------------------------------------------------------------------------- # Standard book config # ----------------------------------------------------------------------------- # Fix for: LaTeX Backend Fails with Citations In Figure Captions \usepackage{etoolbox} \AtBeginEnvironment{figure}{\renewcommand{\phantomsection}{}} | 1.463351 | 1 |
tutorial_application/forms.py | yamasakih/django_rdkit_tutorial | 2 | 10296 | from django_rdkit import models
from django.forms.models import ModelForm
from .models import Compound
class SubstructureSearchForm(ModelForm):
class Meta:
model = Compound
fields = ('molecule', )
| from django_rdkit import models
from django.forms.models import ModelForm
from .models import Compound
class SubstructureSearchForm(ModelForm):
class Meta:
model = Compound
fields = ('molecule', )
| none | 1 | 1.472171 | 1 |
|
data_structures/trees/tree.py | onyonkaclifford/data-structures-and-algorithms | 0 | 10297 | from abc import ABC, abstractmethod
from typing import Any, Generator, Iterable, List, Union
class Empty(Exception):
pass
class Tree(ABC):
"""A tree is a hierarchical collection of nodes containing items, with each node having a unique parent and zero,
one or many children items. The topmost element in a non-empty tree, the root, has no parent. Tree vocabularies
include, but are not limited to:
1. Root - the topmost element in a non-empty tree, it has no parent
2. Leaf - a node with zero children
3. Siblings - nodes that share a parent node
4. Edge - a pair of nodes such the one is the parent of the other
5. Path - a collection of nodes such that any pair of adjacent nodes have a parent/child relationship
6. Height - number of edges between a node and it's furthest leaf
7. Depth - number of edges between a node and the root
8. Level - number of nodes in the path between a node and the root, inclusive of both the node itself and the root
9. Ordered tree - a tree with a meaningful organisation among its nodes such that its nodes can be arranged in a
linear manner from first to last
"""
class _Node:
def __init__(self, key, value, parent=None, children: Union[List, None] = None):
self.key = key
self.value = value
self.parent = parent
self.children = children if children is not None else []
class _Position:
"""A representation of the position of a node within a tree"""
def __init__(self, belongs_to, node):
self.__variables = {"belongs_to": belongs_to}
self.__node = node
def is_owned_by(self, owner):
"""Check whether position belongs to the tree, owner. Time complexity: O(1).
:param owner: object to check whether it's the owner of this position
            :returns: True if the position is owned by the object passed, else False
"""
return owner is self.__variables["belongs_to"]
def manipulate_variables(self, owner, method: str, *params):
"""Manipulate member variables of this position. Methods of the owner list are the only ones that can call
this method. Time complexity: O(1).
:param owner: tree object that owns this position
:param method: method name of tree object that will manipulate the member variables of this position
:param params: extra optional parameters to pass to the method
:returns: the return value of the tree method whose name is passed
"""
if not self.is_owned_by(owner):
raise ValueError("Position doesn't belong to the passed owner")
return getattr(owner, method)(self.__variables, *params)
def manipulate_node(self, owner, method: str, *params):
"""Manipulate the node held by this position. Methods of the owner list are the only ones that can call
this method. Time complexity: O(1).
:param owner: tree object that owns this position
:param method: method name of tree object that will manipulate the node contained in this position
:param params: extra optional parameters to pass to the method
:returns: the return value of the tree method whose name is passed
"""
if not self.is_owned_by(owner):
raise ValueError("Position doesn't belong to the passed owner")
return getattr(owner, method)(self.__node, *params)
def get_data(self):
"""Return the data stored by the node held by this position. Time complexity: O(1).
:returns: data stored in node contained in this position
"""
return self.__node.key, self.__node.value
def __init__(self):
self._root: Union[Tree._Node, None] = None
self._length = 0
self.__generator: Union[Generator, None] = None
def __len__(self) -> int:
"""Return total number of items in tree
:return: count of items in tree
"""
return self._length
def __repr__(self) -> str:
"""Return a string representation of the tree
:return: the string representation of the tree
"""
def helper(current_position):
children = self.get_children(current_position)
num_of_children = len(children)
last_child_idx = num_of_children - 1
data_dict["string_data"] += f"{current_position.get_data()[0]}"
for i, j in enumerate(children):
data_dict["string_data"] += "(" if i == 0 else ", "
helper(j)
data_dict["string_data"] += ")" if i == last_child_idx else ""
if self.is_empty():
return ""
data_dict = {"string_data": ""}
helper(Tree._Position(self, self._root))
return data_dict["string_data"]
def __iter__(self) -> Iterable:
"""Return a tree iterable
:return: tree iterable
"""
return self
def __next__(self) -> _Position:
"""Return next position of tree iterator, implemented based on level-order traversal
:return: next position
:raises StopIteration: when the cursor denoting the current position surpasses the last position of the tree
"""
if self.__generator is None:
self.__generator = self.traverse_tree_level_order()
try:
next_position = next(self.__generator)
except StopIteration as e:
self.__generator = None
raise e
return next_position
@staticmethod
def _validate_node(node):
"""Helper function to check if the node passed is a tree node. Returns the node passed if the validation
passes, else raises a TypeError. Time complexity: O(1).
:param node: node to validate
:returns: the node passed if it passes validation
:raises TypeError: if the validation fails
"""
if not isinstance(node, Tree._Node):
raise TypeError("Not a tree node")
return node
@staticmethod
def _invalidate_position(variables):
"""Helper function to set the belongs_to key of a dictionary to None. Used to revoke the ownership of a
position by this tree. Time complexity: O(1).
:returns: the dictionary passed, with the belongs_to key set to None
"""
variables["belongs_to"] = None
return variables
def is_empty(self) -> bool:
"""Return True if tree is empty, else False. Time complexity: O(1).
:returns: True if tree is empty, else False
"""
return self._root is None
def is_root(self, position: _Position) -> bool:
"""Check if the passed position contains the root node. Time complexity: O(1).
:returns: True if the passed position holds the root node, else False
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
return node.parent is None
def is_leaf(self, position: _Position) -> bool:
"""Check if the passed position contains a leaf. Time complexity: O(1).
:returns: True if the passed position holds a leaf node, else False
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
return len(self.get_children(position)) == 0
def get_root(self) -> Union[_Position, None]:
"""Return the root position. Time complexity: O(1).
:returns: the root position
"""
if self.is_empty():
return None
else:
return Tree._Position(self, self._root)
def get_parent(self, position: _Position) -> Union[_Position, None]:
"""Return the parent of the given position. Time complexity: O(1).
:param position: position containing the node whose parent is being sought
:returns: the position of parent of the node contained in the passed position. None if the position passed
contains the root node.
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
if self.is_root(Tree._Position(self, node)):
return None
else:
return Tree._Position(self, node.parent)
def get_children(self, position: _Position) -> Union[List[_Position], None]:
"""Return the children of the given position. Time complexity: O(1).
:param position: position containing the node whose children are being sought
        :returns: the positions of the children of the node contained in the passed position; an empty list if the
        node has no children
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
children = node.children
if children is None:
return None
else:
return [Tree._Position(self, i) for i in children if i is not None]
def get_siblings(self, position: _Position) -> Union[List[_Position], None]:
"""Return the siblings of the given position. Time complexity: O(1).
        :param position: position containing the node whose siblings are being sought
:returns: the positions of the siblings of the node contained in the passed position
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
parent = node.parent
if parent is None:
return []
return [Tree._Position(self, i) for i in parent.children if i is not node]
def get_height_of_node(self, position: _Position) -> int:
"""Return the number of edges between a node and the farthest leaf among its descendants. Time complexity:
O(n).
:param position: position containing the node whose height is being sought
:returns: the number of edges between a node and the farthest leaf among its descendants
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
if self.is_leaf(position):
return 0
return 1 + max(self.get_height_of_node(p) for p in self.get_children(position))
def get_height_of_tree(self) -> int:
"""Return the number of edges between the root node and the farthest leaf. Time complexity: O(n).
:returns: the number of edges between the root node and the farthest leaf
"""
if self.is_empty():
raise Empty("Tree is empty")
return self.get_height_of_node(Tree._Position(self, self._root))
def get_depth_of_node(self, position: _Position) -> int:
"""Return the number of edges between a node and the root. Time complexity: O(n).
:param position: position containing the node whose depth is being sought
:returns: the number of edges between a node and the root
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
if self.is_root(position):
return 0
return 1 + self.get_depth_of_node(self.get_parent(position))
def get_depth_of_tree(self) -> int:
"""Return the number of edges between the farthest leaf and the root. Time complexity: O(n).
:returns: the number of edges between the farthest leaf and the root
"""
return self.get_height_of_tree()
def get_level_of_node(self, position: _Position) -> int:
"""Return the number of nodes between a node and the root, inclusive of itself. Time complexity: O(n).
:param position: position containing the node whose level is being sought
:returns: the number of nodes between a node and the root, inclusive of itself
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
return 1 + self.get_depth_of_node(position)
def traverse_subtree_pre_order(self, position: _Position) -> Generator:
"""Pre-order traverse subtree whose root is the passed position and return a generator of the positions it
contains
:param position: position containing the node that's the root of the subtree to be traversed
:returns: a generator of the positions
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
yield position
for i in self.get_children(position):
for j in self.traverse_subtree_pre_order(i):
yield j
def traverse_tree_pre_order(self) -> Generator:
"""Pre-order traverse tree and return a generator of the positions it contains
:returns: a generator of the positions
"""
position = self.get_root()
if position is not None:
for i in self.traverse_subtree_pre_order(position):
yield i
def traverse_subtree_post_order(self, position: _Position) -> Generator:
"""Post-order traverse subtree whose root is the passed position and return a generator of the positions it
contains
:param position: position containing the node that's the root of the subtree to be traversed
:returns: a generator of the positions
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
for i in self.get_children(position):
for j in self.traverse_subtree_post_order(i):
yield j
yield position
def traverse_tree_post_order(self) -> Generator:
"""Post-order traverse tree and return a generator of the positions it contains
:returns: a generator of the positions
"""
position = self.get_root()
if position is not None:
for i in self.traverse_subtree_post_order(position):
yield i
def traverse_subtree_level_order(self, position: _Position) -> Generator:
"""Level-by-level traverse subtree whose root is the passed position and return a generator of the positions it
contains
:param position: position containing the node that's the root of the subtree to be traversed
:returns: a generator of the positions
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
def helper(root_node, level):
if root_node is not None:
if level == 1:
yield Tree._Position(self, root_node)
elif level > 1:
for child in root_node.children:
for k in helper(child, level - 1):
yield k
node = position.manipulate_node(self, "_validate_node")
number_of_levels = self.get_height_of_node(position) + 1
for i in range(1, number_of_levels + 1):
for j in helper(node, i):
yield j
def traverse_tree_level_order(self) -> Generator:
"""Level-by-level traverse tree and return a generator of the positions it contains
:returns: a generator of the positions
"""
position = self.get_root()
if position is not None:
for i in self.traverse_subtree_level_order(position):
yield i
def delete(self, position: _Position) -> None:
"""Delete a value from the tree
:param position: position containing the node to be removed from the tree
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
self._length -= 1
def insert_node(node_to_insert, is_node_left_child, parent_node):
if node_to_insert is not None:
node_to_insert.parent = parent_node
if is_node_left_child is not None:
if is_node_left_child:
parent_node.children[0] = node_to_insert
else:
parent_node.children[1] = node_to_insert
def delete_node(node_to_delete, is_root):
parent = node_to_delete.parent
left = node_to_delete.children[0]
right = node_to_delete.children[1]
is_left_child = None if parent is None else node_to_delete.key < parent.key
if left is None:
insert_node(right, is_left_child, parent)
if is_root:
self._root = right
else:
current_node = left
right_child = current_node.children[1]
if right_child is None:
current_node.children[1] = right
insert_node(current_node, is_left_child, parent)
if is_root:
self._root = current_node
else:
new_node = Tree._Node(
right_child.key,
right_child.value,
children=[current_node, right],
)
insert_node(new_node, is_left_child, parent)
if is_root:
self._root = new_node
delete_node(right_child, False)
node = position.manipulate_node(self, "_validate_node")
is_root_node = self.is_root(position)
_ = position.manipulate_variables(self, "_invalidate_position")
delete_node(node, is_root_node)
@abstractmethod
def insert(self, key: Any, value: Any) -> None:
"""Insert a value into the tree
:param key: unique identifier of the item to be added to the tree
:param value: item to be added to the tree
"""
self._length += 1
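# --- Illustrative sketch: a minimal concrete subclass ---
# Tree.insert() is abstract, and delete() above assumes a binary, key-ordered
# layout (children[0] = left, children[1] = right). The subclass below is a
# minimal sketch consistent with those assumptions; the name BinarySearchTree
# and the iterative descent are illustrative, not taken from the original module.
class BinarySearchTree(Tree):
    def insert(self, key: Any, value: Any) -> None:
        super().insert(key, value)  # base class bumps self._length
        new_node = Tree._Node(key, value, children=[None, None])
        if self._root is None:
            self._root = new_node
            return
        current = self._root
        while True:
            idx = 0 if key < current.key else 1
            if current.children[idx] is None:
                new_node.parent = current
                current.children[idx] = new_node
                return
            current = current.children[idx]
# Usage: bst = BinarySearchTree(); bst.insert(2, "b"); bst.insert(1, "a")
# repr(bst) then renders the keys as "2(1)".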
| from abc import ABC, abstractmethod
from typing import Any, Generator, Iterable, List, Union
class Empty(Exception):
pass
class Tree(ABC):
"""A tree is a hierarchical collection of nodes containing items, with each node having a unique parent and zero,
one or many children items. The topmost element in a non-empty tree, the root, has no parent. Tree vocabularies
include, but are not limited to:
1. Root - the topmost element in a non-empty tree, it has no parent
2. Leaf - a node with zero children
3. Siblings - nodes that share a parent node
4. Edge - a pair of nodes such that one is the parent of the other
5. Path - a collection of nodes such that any pair of adjacent nodes have a parent/child relationship
6. Height - number of edges between a node and its farthest leaf
7. Depth - number of edges between a node and the root
8. Level - number of nodes in the path between a node and the root, inclusive of both the node itself and the root
9. Ordered tree - a tree with a meaningful organisation among its nodes such that its nodes can be arranged in a
linear manner from first to last
"""
class _Node:
def __init__(self, key, value, parent=None, children: Union[List, None] = None):
self.key = key
self.value = value
self.parent = parent
self.children = children if children is not None else []
class _Position:
"""A representation of the position of a node within a tree"""
def __init__(self, belongs_to, node):
self.__variables = {"belongs_to": belongs_to}
self.__node = node
def is_owned_by(self, owner):
"""Check whether position belongs to the tree, owner. Time complexity: O(1).
:param owner: object to check whether it's the owner of this position
:returns: True if the position is owned by the object passed, else False
"""
return owner is self.__variables["belongs_to"]
def manipulate_variables(self, owner, method: str, *params):
"""Manipulate member variables of this position. Methods of the owner list are the only ones that can call
this method. Time complexity: O(1).
:param owner: tree object that owns this position
:param method: method name of tree object that will manipulate the member variables of this position
:param params: extra optional parameters to pass to the method
:returns: the return value of the tree method whose name is passed
"""
if not self.is_owned_by(owner):
raise ValueError("Position doesn't belong to the passed owner")
return getattr(owner, method)(self.__variables, *params)
def manipulate_node(self, owner, method: str, *params):
"""Manipulate the node held by this position. Methods of the owner list are the only ones that can call
this method. Time complexity: O(1).
:param owner: tree object that owns this position
:param method: method name of tree object that will manipulate the node contained in this position
:param params: extra optional parameters to pass to the method
:returns: the return value of the tree method whose name is passed
"""
if not self.is_owned_by(owner):
raise ValueError("Position doesn't belong to the passed owner")
return getattr(owner, method)(self.__node, *params)
def get_data(self):
"""Return the data stored by the node held by this position. Time complexity: O(1).
:returns: data stored in node contained in this position
"""
return self.__node.key, self.__node.value
def __init__(self):
self._root: Union[Tree._Node, None] = None
self._length = 0
self.__generator: Union[Generator, None] = None
def __len__(self) -> int:
"""Return total number of items in tree
:return: count of items in tree
"""
return self._length
def __repr__(self) -> str:
"""Return a string representation of the tree
:return: the string representation of the tree
"""
def helper(current_position):
children = self.get_children(current_position)
num_of_children = len(children)
last_child_idx = num_of_children - 1
data_dict["string_data"] += f"{current_position.get_data()[0]}"
for i, j in enumerate(children):
data_dict["string_data"] += "(" if i == 0 else ", "
helper(j)
data_dict["string_data"] += ")" if i == last_child_idx else ""
if self.is_empty():
return ""
data_dict = {"string_data": ""}
helper(Tree._Position(self, self._root))
return data_dict["string_data"]
def __iter__(self) -> Iterable:
"""Return a tree iterable
:return: tree iterable
"""
return self
def __next__(self) -> _Position:
"""Return next position of tree iterator, implemented based on level-order traversal
:return: next position
:raises StopIteration: when the cursor denoting the current position surpasses the last position of the tree
"""
if self.__generator is None:
self.__generator = self.traverse_tree_level_order()
try:
next_position = next(self.__generator)
except StopIteration as e:
self.__generator = None
raise e
return next_position
@staticmethod
def _validate_node(node):
"""Helper function to check if the node passed is a tree node. Returns the node passed if the validation
passes, else raises a TypeError. Time complexity: O(1).
:param node: node to validate
:returns: the node passed if it passes validation
:raises TypeError: if the validation fails
"""
if not isinstance(node, Tree._Node):
raise TypeError("Not a tree node")
return node
@staticmethod
def _invalidate_position(variables):
"""Helper function to set the belongs_to key of a dictionary to None. Used to revoke the ownership of a
position by this tree. Time complexity: O(1).
:returns: the dictionary passed, with the belongs_to key set to None
"""
variables["belongs_to"] = None
return variables
def is_empty(self) -> bool:
"""Return True if tree is empty, else False. Time complexity: O(1).
:returns: True if tree is empty, else False
"""
return self._root is None
def is_root(self, position: _Position) -> bool:
"""Check if the passed position contains the root node. Time complexity: O(1).
:returns: True if the passed position holds the root node, else False
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
return node.parent is None
def is_leaf(self, position: _Position) -> bool:
"""Check if the passed position contains a leaf. Time complexity: O(1).
:returns: True if the passed position holds a leaf node, else False
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
return len(self.get_children(position)) == 0
def get_root(self) -> Union[_Position, None]:
"""Return the root position. Time complexity: O(1).
:returns: the root position
"""
if self.is_empty():
return None
else:
return Tree._Position(self, self._root)
def get_parent(self, position: _Position) -> Union[_Position, None]:
"""Return the parent of the given position. Time complexity: O(1).
:param position: position containing the node whose parent is being sought
:returns: the position of parent of the node contained in the passed position. None if the position passed
contains the root node.
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
if self.is_root(Tree._Position(self, node)):
return None
else:
return Tree._Position(self, node.parent)
def get_children(self, position: _Position) -> Union[List[_Position], None]:
"""Return the children of the given position. Time complexity: O(1).
:param position: position containing the node whose children are being sought
:returns: the positions of the children of the node contained in the passed position. None if the position has
no children.
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
children = node.children
if children is None:
return None
else:
return [Tree._Position(self, i) for i in children if i is not None]
def get_siblings(self, position: _Position) -> Union[List[_Position], None]:
"""Return the siblings of the given position. Time complexity: O(1).
:param position: position containing the node whose siblings are being sought
:returns: the positions of the siblings of the node contained in the passed position
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
node = position.manipulate_node(self, "_validate_node")
parent = node.parent
if parent is None:
return []
return [Tree._Position(self, i) for i in parent.children if i is not node]
def get_height_of_node(self, position: _Position) -> int:
"""Return the number of edges between a node and the farthest leaf among its descendants. Time complexity:
O(n).
:param position: position containing the node whose height is being sought
:returns: the number of edges between a node and the farthest leaf among its descendants
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
if self.is_leaf(position):
return 0
return 1 + max(self.get_height_of_node(p) for p in self.get_children(position))
def get_height_of_tree(self) -> int:
"""Return the number of edges between the root node and the farthest leaf. Time complexity: O(n).
:returns: the number of edges between the root node and the farthest leaf
"""
if self.is_empty():
raise Empty("Tree is empty")
return self.get_height_of_node(Tree._Position(self, self._root))
def get_depth_of_node(self, position: _Position) -> int:
"""Return the number of edges between a node and the root. Time complexity: O(n).
:param position: position containing the node whose depth is being sought
:returns: the number of edges between a node and the root
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
if self.is_root(position):
return 0
return 1 + self.get_depth_of_node(self.get_parent(position))
def get_depth_of_tree(self) -> int:
"""Return the number of edges between the farthest leaf and the root. Time complexity: O(n).
:returns: the number of edges between the farthest leaf and the root
"""
return self.get_height_of_tree()
def get_level_of_node(self, position: _Position) -> int:
"""Return the number of nodes between a node and the root, inclusive of itself. Time complexity: O(n).
:param position: position containing the node whose level is being sought
:returns: the number of nodes between a node and the root, inclusive of itself
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
return 1 + self.get_depth_of_node(position)
def traverse_subtree_pre_order(self, position: _Position) -> Generator:
"""Pre-order traverse subtree whose root is the passed position and return a generator of the positions it
contains
:param position: position containing the node that's the root of the subtree to be traversed
:returns: a generator of the positions
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
yield position
for i in self.get_children(position):
for j in self.traverse_subtree_pre_order(i):
yield j
def traverse_tree_pre_order(self) -> Generator:
"""Pre-order traverse tree and return a generator of the positions it contains
:returns: a generator of the positions
"""
position = self.get_root()
if position is not None:
for i in self.traverse_subtree_pre_order(position):
yield i
def traverse_subtree_post_order(self, position: _Position) -> Generator:
"""Post-order traverse subtree whose root is the passed position and return a generator of the positions it
contains
:param position: position containing the node that's the root of the subtree to be traversed
:returns: a generator of the positions
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
for i in self.get_children(position):
for j in self.traverse_subtree_post_order(i):
yield j
yield position
def traverse_tree_post_order(self) -> Generator:
"""Post-order traverse tree and return a generator of the positions it contains
:returns: a generator of the positions
"""
position = self.get_root()
if position is not None:
for i in self.traverse_subtree_post_order(position):
yield i
def traverse_subtree_level_order(self, position: _Position) -> Generator:
"""Level-by-level traverse subtree whose root is the passed position and return a generator of the positions it
contains
:param position: position containing the node that's the root of the subtree to be traversed
:returns: a generator of the positions
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
def helper(root_node, level):
if root_node is not None:
if level == 1:
yield Tree._Position(self, root_node)
elif level > 1:
for child in root_node.children:
for k in helper(child, level - 1):
yield k
node = position.manipulate_node(self, "_validate_node")
number_of_levels = self.get_height_of_node(position) + 1
for i in range(1, number_of_levels + 1):
for j in helper(node, i):
yield j
def traverse_tree_level_order(self) -> Generator:
"""Level-by-level traverse tree and return a generator of the positions it contains
:returns: a generator of the positions
"""
position = self.get_root()
if position is not None:
for i in self.traverse_subtree_level_order(position):
yield i
def delete(self, position: _Position) -> None:
"""Delete a value from the tree
:param position: position containing the node to be removed from the tree
"""
if not position.is_owned_by(self):
raise ValueError("Position doesn't belong to this tree")
self._length -= 1
def insert_node(node_to_insert, is_node_left_child, parent_node):
if node_to_insert is not None:
node_to_insert.parent = parent_node
if is_node_left_child is not None:
if is_node_left_child:
parent_node.children[0] = node_to_insert
else:
parent_node.children[1] = node_to_insert
def delete_node(node_to_delete, is_root):
parent = node_to_delete.parent
left = node_to_delete.children[0]
right = node_to_delete.children[1]
is_left_child = None if parent is None else node_to_delete.key < parent.key
if left is None:
insert_node(right, is_left_child, parent)
if is_root:
self._root = right
else:
current_node = left
right_child = current_node.children[1]
if right_child is None:
current_node.children[1] = right
insert_node(current_node, is_left_child, parent)
if is_root:
self._root = current_node
else:
new_node = Tree._Node(
right_child.key,
right_child.value,
children=[current_node, right],
)
insert_node(new_node, is_left_child, parent)
if is_root:
self._root = new_node
delete_node(right_child, False)
node = position.manipulate_node(self, "_validate_node")
is_root_node = self.is_root(position)
_ = position.manipulate_variables(self, "_invalidate_position")
delete_node(node, is_root_node)
@abstractmethod
def insert(self, key: Any, value: Any) -> None:
"""Insert a value into the tree
:param key: unique identifier of the item to be added to the tree
:param value: item to be added to the tree
"""
self._length += 1
| en | 0.846048 | A tree is a hierarchical collection of nodes containing items, with each node having a unique parent and zero, one or many children items. The topmost element in a non-empty tree, the root, has no parent. Tree vocabularies include, but are not limited to: 1. Root - the topmost element in a non-empty tree, it has no parent 2. Leaf - a node with zero children 3. Siblings - nodes that share a parent node 4. Edge - a pair of nodes such the one is the parent of the other 5. Path - a collection of nodes such that any pair of adjacent nodes have a parent/child relationship 6. Height - number of edges between a node and it's furthest leaf 7. Depth - number of edges between a node and the root 8. Level - number of nodes in the path between a node and the root, inclusive of both the node itself and the root 9. Ordered tree - a tree with a meaningful organisation among its nodes such that its nodes can be arranged in a linear manner from first to last A representation of the position of a node within a tree Check whether position belongs to the tree, owner. Time complexity: O(1). :param owner: object to check whether it's the owner of this position :returns: True of the position is owned by the object passed, else False Manipulate member variables of this position. Methods of the owner list are the only ones that can call this method. Time complexity: O(1). :param owner: tree object that owns this position :param method: method name of tree object that will manipulate the member variables of this position :param params: extra optional parameters to pass to the method :returns: the return value of the tree method whose name is passed Manipulate the node held by this position. Methods of the owner list are the only ones that can call this method. Time complexity: O(1). :param owner: tree object that owns this position :param method: method name of tree object that will manipulate the node contained in this position :param params: extra optional parameters to pass to the method :returns: the return value of the tree method whose name is passed Return the data stored by the node held by this position. Time complexity: O(1). :returns: data stored in node contained in this position Return total number of items in tree :return: count of items in tree Return a string representation of the tree :return: the string representation of the tree Return a tree iterable :return: tree iterable Return next position of tree iterator, implemented based on level-order traversal :return: next position :raises StopIteration: when the cursor denoting the current position surpasses the last position of the tree Helper function to check if the node passed is a tree node. Returns the node passed if the validation passes, else raises a TypeError. Time complexity: O(1). :param node: node to validate :returns: the node passed if it passes validation :raises TypeError: if the validation fails Helper function to set the belongs_to key of a dictionary to None. Used to revoke the ownership of a position by this tree. Time complexity: O(1). :returns: the dictionary passed, with the belongs_to key set to None Return True if tree is empty, else False. Time complexity: O(1). :returns: True if tree is empty, else False Check if the passed position contains the root node. Time complexity: O(1). :returns: True if the passed position holds the root node, else False Check if the passed position contains a leaf. Time complexity: O(1). 
:returns: True if the passed position holds a leaf node, else False Return the root position. Time complexity: O(1). :returns: the root position Return the parent of the given position. Time complexity: O(1). :param position: position containing the node whose parent is being sought :returns: the position of parent of the node contained in the passed position. None if the position passed contains the root node. Return the children of the given position. Time complexity: O(1). :param position: position containing the node whose children are being sought :returns: the positions of the children of the node contained in the passed position. None if the position has no children. Return the siblings of the given position. Time complexity: O(1). :param position: position containing the node whose children are being sought :returns: the positions of the siblings of the node contained in the passed position Return the number of edges between a node and the farthest leaf among its descendants. Time complexity: O(n). :param position: position containing the node whose height is being sought :returns: the number of edges between a node and the farthest leaf among its descendants Return the number of edges between the root node and the farthest leaf. Time complexity: O(n). :returns: the number of edges between the root node and the farthest leaf Return the number of edges between a node and the root. Time complexity: O(n). :param position: position containing the node whose depth is being sought :returns: the number of edges between a node and the root Return the number of edges between the farthest leaf and the root. Time complexity: O(n). :returns: the number of edges between the farthest leaf and the root Return the number of nodes between a node and the root, inclusive of itself. Time complexity: O(n). :param position: position containing the node whose level is being sought :returns: the number of nodes between a node and the root, inclusive of itself Pre-order traverse subtree whose root is the passed position and return a generator of the positions it contains :param position: position containing the node that's the root of the subtree to be traversed :returns: a generator of the positions Pre-order traverse tree and return a generator of the positions it contains :returns: a generator of the positions Post-order traverse subtree whose root is the passed position and return a generator of the positions it contains :param position: position containing the node that's the root of the subtree to be traversed :returns: a generator of the positions Post-order traverse tree and return a generator of the positions it contains :returns: a generator of the positions Level-by-level traverse subtree whose root is the passed position and return a generator of the positions it contains :param position: position containing the node that's the root of the subtree to be traversed :returns: a generator of the positions Level-by-level traverse tree and return a generator of the positions it contains :returns: a generator of the positions Delete a value from the tree :param position: position containing the node to be removed from the tree Insert a value into the tree :param key: unique identifier of the item to be added to the tree :param value: item to be added to the tree | 4.014211 | 4 |
nodes/audio.py | sddhrthrt/COVFEFE | 0 | 10298 | <filename>nodes/audio.py
from abc import ABC, abstractmethod
import os
import logging
from nodes.helper import FileOutputNode
from utils import file_utils
from utils import signal_processing as sp
from utils.shell_run import shell_run
from config import OPENSMILE_HOME
class Mp3ToWav(FileOutputNode):
def run(self, mp3_file):
self.log(logging.INFO, "Starting %s" % (mp3_file))
if not mp3_file.endswith(".mp3"):
self.log(logging.ERROR,"Failed %s. Not mp3 file" % (mp3_file))
return
wav_file = self.derive_new_file_path(mp3_file, "wav")
if file_utils.should_run(mp3_file, wav_file):
res = shell_run(["lame", "--decode", mp3_file, wav_file])
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with lame error code %i" % (mp3_file, wav_file, res))
return
self.log(logging.INFO, "Done %s -> %s" % (mp3_file, wav_file))
self.emit(wav_file)
class ResampleWav(FileOutputNode):
def setup(self, new_sr):
self.new_sr = new_sr
def run(self, wav_file):
self.log(logging.INFO, "Starting %s" % (wav_file))
if not wav_file.endswith(".wav"):
self.log(logging.ERROR,"Failed %s. Not wav file" % (wav_file))
return
new_wav_file = self.derive_new_file_path(wav_file, "wav")
if file_utils.should_run(wav_file, new_wav_file):
res = shell_run(["sox", wav_file, "--rate", str(self.new_sr), new_wav_file])
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with lame error code %i" % (wav_file, new_wav_file, res))
return
self.log(logging.INFO, "Done %s -> %s" % (wav_file, new_wav_file))
self.emit(new_wav_file)
class ShellCommand(FileOutputNode):
"""
Take as input a format string representing a shell command that can accept an in_file and out_file.
For example "someCommand -i {in_file} -o {out_file}"
ext: Extension of the output file, ex. "wav", "csv"
"""
def setup(self, command, ext):
self.command = command
self.ext = ext
def run(self, in_file):
self.log(logging.INFO, "Starting %s" % (in_file))
out_file = self.derive_new_file_path(in_file, self.ext)
if file_utils.should_run(in_file, out_file):
cmd = self.command.format(in_file=in_file, out_file=out_file)
res = shell_run(cmd.split(" "))
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with error code %i. cmd: %s" % (in_file, out_file, res, cmd))
return
self.log(logging.INFO, "Done %s -> %s" % (in_file, out_file))
self.emit(out_file)
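# Illustrative sketch of the template mechanics described in ShellCommand's
# docstring: the format string is filled with in_file/out_file and split on
# single spaces before being handed to shell_run(), so paths containing spaces
# would be broken apart. The sox "trim" template and the helper name below are
# assumptions used only for illustration.
def _example_shell_command_split():
    template = "sox {in_file} {out_file} trim 0 30"
    cmd = template.format(in_file="in.wav", out_file="out.wav").split(" ")
    # cmd == ["sox", "in.wav", "out.wav", "trim", "0", "30"], as passed to shell_run(cmd)
    return cmd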
class OpenSmileRunner(FileOutputNode):
"""
conf_file: Either absolute path to an opensmile conf file or the name of a config file in opensmile's config folder
out_flag: Flag to use for the output file.
extra_flags: A string of extra flags to pass to SMILExtract.
out_ext: Extension of the output file
"""
def setup(self, conf_file, out_flag="-csvoutput", extra_flags="-nologfile -noconsoleoutput -appendcsv 0", out_ext="csv"):
self.conf_file = file_utils.locate_file(conf_file, [os.path.join(OPENSMILE_HOME, "config")])
self.extra_flags = extra_flags.split(" ")
self.out_flag = out_flag
self.out_ext = out_ext
self.opensmile_exec = file_utils.locate_file("SMILExtract", [OPENSMILE_HOME, os.path.join(OPENSMILE_HOME, "bin")], use_path=True)
def run(self, in_file):
self.log(logging.INFO, "Starting %s" % (in_file))
out_file = self.derive_new_file_path(in_file, self.out_ext)
if file_utils.should_run(in_file, out_file):
cmd = [self.opensmile_exec, "-C", self.conf_file, "-I", in_file, self.out_flag, out_file] + self.extra_flags
res = shell_run(cmd)
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with SmileExtract error code %i. cmd: %s" % (in_file, out_file, res, " ".join(cmd)))
return
self.log(logging.INFO, "Done %s -> %s" % (in_file, out_file))
self.emit([out_file])
class IS10_Paraling(OpenSmileRunner):
def get_conf_name(self):
return "IS10_paraling.conf"
def get_command(self, wav_file, out_file):
return [self.opensmile_exec, "-C", self.conf_file, "-I", wav_file, "-csvoutput", out_file, "-nologfile", "-noconsoleoutput", "-appendcsv", "0"]
class IS10_Paraling_lld(OpenSmileRunner):
def get_conf_name(self):
return "IS10_paraling.conf"
def get_command(self, wav_file, out_file):
return [self.opensmile_exec, "-C", self.conf_file, "-I", wav_file, "-lldcsvoutput", out_file, "-nologfile", "-noconsoleoutput", "-appendcsv", "0"]
class SplitSegments(FileOutputNode):
"""
segment_mapping_fn is a pointer to a function that takes as input a file and sample rate and returns a
list of all the segments in that file in the format [(start1, end1, segname1), (start2, end2, segname2), ...] where
start and end are given in samples. Each tuple in the list can also have a 4th item, which can be any string.
This string will get saved in segname.txt
This is useful for isolating events of interest in audio files. For example, if the segment mapping
function returns a list of where all speech occurs in the input audio, this will isolate all occurrences of
speech into individual files. The 4th item may contain the annotation of what was said in the segment.
"""
def setup(self, segment_mapping_fn):
self.segment_mapping_fn = segment_mapping_fn
def run(self, in_file):
self.log(logging.INFO, "Starting %s" % (in_file))
if not in_file.endswith(".wav"):
self.log(logging.ERROR, "Failed %s. Not wav file" % (in_file))
return
sr, original_data = sp.read_wave(in_file, first_channel=True)
segments = self.segment_mapping_fn(in_file, sr)
for segment in segments:
if len(segment) == 3:
start, end, seg_name = segment
extra_info = None
elif len(segment) == 4:
start, end, seg_name, extra_info = segment
else:
self.log(logging.ERROR, "Failed %s. Segment length must be 3 or 4" % (in_file))
return
seg_path = os.path.join(self.out_dir, "%s.wav" % seg_name)
sp.write_wav(seg_path, sr, original_data[start:end])
extra_path = None
if extra_info:
extra_path = os.path.join(self.out_dir, "%s.txt" % seg_name)
with open(extra_path, "w") as f:
f.write(extra_info)
self.emit([seg_path, extra_path])
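# Illustrative sketch of a segment_mapping_fn for SplitSegments, following the
# documented contract: it receives (in_file, sr) and returns a list of
# (start_sample, end_sample, segment_name) tuples. The chunking scheme and the
# function name are assumptions; it reuses this module's os and sp imports.
def one_second_chunks(in_file, sr):
    base = os.path.splitext(os.path.basename(in_file))[0]
    _, data = sp.read_wave(in_file, first_channel=True)
    segments = []
    for i, start in enumerate(range(0, len(data), sr)):
        end = min(start + sr, len(data))
        segments.append((start, end, "%s_%04d" % (base, i)))
    return segments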
| <filename>nodes/audio.py
from abc import ABC, abstractmethod
import os
import logging
from nodes.helper import FileOutputNode
from utils import file_utils
from utils import signal_processing as sp
from utils.shell_run import shell_run
from config import OPENSMILE_HOME
class Mp3ToWav(FileOutputNode):
def run(self, mp3_file):
self.log(logging.INFO, "Starting %s" % (mp3_file))
if not mp3_file.endswith(".mp3"):
self.log(logging.ERROR,"Failed %s. Not mp3 file" % (mp3_file))
return
wav_file = self.derive_new_file_path(mp3_file, "wav")
if file_utils.should_run(mp3_file, wav_file):
res = shell_run(["lame", "--decode", mp3_file, wav_file])
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with lame error code %i" % (mp3_file, wav_file, res))
return
self.log(logging.INFO, "Done %s -> %s" % (mp3_file, wav_file))
self.emit(wav_file)
class ResampleWav(FileOutputNode):
def setup(self, new_sr):
self.new_sr = new_sr
def run(self, wav_file):
self.log(logging.INFO, "Starting %s" % (wav_file))
if not wav_file.endswith(".wav"):
self.log(logging.ERROR,"Failed %s. Not wav file" % (wav_file))
return
new_wav_file = self.derive_new_file_path(wav_file, "wav")
if file_utils.should_run(wav_file, new_wav_file):
res = shell_run(["sox", wav_file, "--rate", str(self.new_sr), new_wav_file])
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with lame error code %i" % (wav_file, new_wav_file, res))
return
self.log(logging.INFO, "Done %s -> %s" % (wav_file, new_wav_file))
self.emit(new_wav_file)
class ShellCommand(FileOutputNode):
"""
Take as input a format string representing a shell command that can accept an in_file and out_file.
For example "someCommand -i {in_file} -o {out_file}"
ext: Extension of the output file, ex. "wav", "csv"
"""
def setup(self, command, ext):
self.command = command
self.ext = ext
def run(self, in_file):
self.log(logging.INFO, "Starting %s" % (in_file))
out_file = self.derive_new_file_path(in_file, self.ext)
if file_utils.should_run(in_file, out_file):
cmd = self.command.format(in_file=in_file, out_file=out_file)
res = shell_run(cmd.split(" "))
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with error code %i. cmd: %s" % (in_file, out_file, res, cmd))
return
self.log(logging.INFO, "Done %s -> %s" % (in_file, out_file))
self.emit(out_file)
class OpenSmileRunner(FileOutputNode):
"""
conf_file: Either absolute path to an opensmile conf file or the name of a config file in opensmile's config folder
out_flag: Flag to use for the output file.
extra_flags: A string of extra flags to pass to SMILExtract.
out_ext: Extension of the output file
"""
def setup(self, conf_file, out_flag="-csvoutput", extra_flags="-nologfile -noconsoleoutput -appendcsv 0", out_ext="csv"):
self.conf_file = file_utils.locate_file(conf_file, [os.path.join(OPENSMILE_HOME, "config")])
self.extra_flags = extra_flags.split(" ")
self.out_flag = out_flag
self.out_ext = out_ext
self.opensmile_exec = file_utils.locate_file("SMILExtract", [OPENSMILE_HOME, os.path.join(OPENSMILE_HOME, "bin")], use_path=True)
def run(self, in_file):
self.log(logging.INFO, "Starting %s" % (in_file))
out_file = self.derive_new_file_path(in_file, self.out_ext)
if file_utils.should_run(in_file, out_file):
cmd = [self.opensmile_exec, "-C", self.conf_file, "-I", in_file, self.out_flag, out_file] + self.extra_flags
res = shell_run(cmd)
if res != 0:
self.log(logging.ERROR,"Failed %s -> %s with SmileExtract error code %i. cmd: %s" % (in_file, out_file, res, " ".join(cmd)))
return
self.log(logging.INFO, "Done %s -> %s" % (in_file, out_file))
self.emit([out_file])
class IS10_Paraling(OpenSmileRunner):
def get_conf_name(self):
return "IS10_paraling.conf"
def get_command(self, wav_file, out_file):
return [self.opensmile_exec, "-C", self.conf_file, "-I", wav_file, "-csvoutput", out_file, "-nologfile", "-noconsoleoutput", "-appendcsv", "0"]
class IS10_Paraling_lld(OpenSmileRunner):
def get_conf_name(self):
return "IS10_paraling.conf"
def get_command(self, wav_file, out_file):
return [self.opensmile_exec, "-C", self.conf_file, "-I", wav_file, "-lldcsvoutput", out_file, "-nologfile", "-noconsoleoutput", "-appendcsv", "0"]
class SplitSegments(FileOutputNode):
"""
segment_mapping_fn is a pointer to a function that takes as input a file and sample rate and returns a
list of all the segments in that file in the format [(start1, end1, segname1), (start2, end2, segname2), ...] where
start and end are given in samples. Each tuple in the list can also have a 4th item, which can be any string.
This string will get saved in segname.txt
This is useful for isolating events of interest in audio files. For example, if the segment mapping
function returns a list of where all speech occurs in the input audio, this will isolate all occurrences of
speech into individual files. The 4th item may contain the annotation of what was said in the segment.
"""
def setup(self, segment_mapping_fn):
self.segment_mapping_fn = segment_mapping_fn
def run(self, in_file):
self.log(logging.INFO, "Starting %s" % (in_file))
if not in_file.endswith(".wav"):
self.log(logging.ERROR, "Failed %s. Not wav file" % (in_file))
return
sr, original_data = sp.read_wave(in_file, first_channel=True)
segments = self.segment_mapping_fn(in_file, sr)
for segment in segments:
if len(segment) == 3:
start, end, seg_name = segment
extra_info = None
elif len(segment) == 4:
start, end, seg_name, extra_info = segment
else:
self.log(logging.ERROR, "Failed %s. Segment length must be 3 or 4" % (in_file))
return
seg_path = os.path.join(self.out_dir, "%s.wav" % seg_name)
sp.write_wav(seg_path, sr, original_data[start:end])
extra_path = None
if extra_info:
extra_path = os.path.join(self.out_dir, "%s.txt" % seg_name)
with open(extra_path, "w") as f:
f.write(extra_info)
self.emit([seg_path, extra_path])
| en | 0.87316 | Take as input a format string representing a shell command that can accept an in_file and out_file. For example "someCommand -i {in_file} -o {out_file}" ext: Extension of the output file, ex. "wav", "csv" conf_file: Either absolute path to an opensmile conf file or the name of a config file in opensmile's config folder out_flag: Flag to use for the output file. extra_flags: A string of extra flags to pass to SMILExtract. out_ext: Extension of the output file segment_mapping_fn is a pointer to a function that takes as input a file and sample rate and returns a list of all the segments in that file in the format [(start1, end1, segname1), (start2, end2, segname2), ...] where start and end are in given in samples. Each tuple in the list can also have a 4th item, which can be any string. This string will get saved in segname.txt This is useful for isolating events of interest in audio files. For example, if the segment mapping function returns a list of where all speech occurs in the input audio, this will isolate all occurrences of speech into individual files. The 4th item may contain the annotation of what was said in the segment. | 2.441041 | 2 |
texar/torch/modules/pretrained/gpt2.py | VegB/VLN-Transformer | 19 | 10299 | <reponame>VegB/VLN-Transformer
# Copyright 2019 The Texar Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utils of GPT2 Modules.
"""
import json
import os
import warnings
from abc import ABC
from typing import Any, Dict
import torch
from texar.torch.modules.pretrained.pretrained_base import PretrainedMixin
__all__ = [
"PretrainedGPT2Mixin",
]
_GPT2_PATH = "https://storage.googleapis.com/gpt-2/models/"
_CHECKPOINT_FILES = [
"checkpoint", "encoder.json", "hparams.json", "vocab.bpe",
"model.ckpt.data-00000-of-00001", "model.ckpt.index", "model.ckpt.meta"]
class PretrainedGPT2Mixin(PretrainedMixin, ABC):
r"""A mixin class to support loading pre-trained checkpoints for modules
that implement the GPT2 model.
The GPT2 model was proposed in
`Language Models are Unsupervised Multitask Learners`_
by `Radford et al.` from OpenAI. It is a unidirectional Transformer model
pre-trained using the vanilla language modeling objective on a large corpus.
The available GPT2 models are as follows:
* ``gpt2-small``: Small version of GPT-2, 124M parameters.
* ``gpt2-medium``: Medium version of GPT-2, 355M parameters.
* ``gpt2-large``: Large version of GPT-2, 774M parameters.
We provide the following GPT2 classes:
* :class:`~texar.torch.modules.GPT2Encoder` for text encoding.
* :class:`~texar.torch.modules.GPT2Decoder` for text generation and
decoding.
* :class:`~texar.torch.modules.GPT2Classifier` for text classification and
sequence tagging.
.. _`Language Models are Unsupervised Multitask Learners`:
https://openai.com/blog/better-language-models/
"""
_MODEL_NAME = "GPT2"
_MODEL2URL = {
'gpt2-small': [_GPT2_PATH + f"124M/{file}"
for file in _CHECKPOINT_FILES],
'gpt2-medium': [_GPT2_PATH + f"355M/{file}"
for file in _CHECKPOINT_FILES],
'gpt2-large': [_GPT2_PATH + f"774M/{file}"
for file in _CHECKPOINT_FILES],
}
_IS_DECODE = False
# Raise warning for the deprecated pre-trained model names
class MyDict(dict):
def __contains__(self, key):
if key == '117M':
warnings.warn("Pre-trained model name '117M' is deprecated, "
"use 'gpt2-small' instead.", UserWarning)
return True
elif key == '345M':
warnings.warn("Pre-trained model name '345M' is deprecated, "
"use 'gpt2-medium' instead.", UserWarning)
return True
else:
return super().__contains__(key)
_DEPRECATED_MODEL2URL = {
'117M': [_GPT2_PATH + f"124M/{file}" for file in _CHECKPOINT_FILES],
'345M': [_GPT2_PATH + f"355M/{file}" for file in _CHECKPOINT_FILES],
}
_MODEL2URL.update(_DEPRECATED_MODEL2URL)
_MODEL2URL = MyDict(_MODEL2URL) # type: ignore
def _transform_config(self, pretrained_model_name: str, # type: ignore
cache_dir: str) -> Dict[str, Any]:
info = list(os.walk(cache_dir))
root, _, files = info[0]
config_path = None
for file in files:
if file.endswith('hparams.json'):
config_path = os.path.join(root, file)
if config_path is None:
raise ValueError(f"Cannot find the config file in {cache_dir}")
with open(config_path) as f:
config_gpt = json.loads(f.read())
hidden_dim = config_gpt["n_embd"]
configs = {
"vocab_size": config_gpt["n_vocab"],
"context_size": config_gpt["n_ctx"],
"embedding_size": config_gpt["n_embd"], "embed": {
"dim": hidden_dim,
},
"position_size": config_gpt["n_ctx"],
"position_embed": {
"dim": hidden_dim
}
}
module_name = 'decoder' if self._IS_DECODE else 'encoder'
configs.update({module_name: {
"dim": hidden_dim,
"num_blocks": config_gpt["n_layer"],
"embedding_dropout": 0,
"residual_dropout": 0,
"multihead_attention": {
"use_bias": True,
"num_units": hidden_dim,
"num_heads": config_gpt["n_head"],
"output_dim": hidden_dim,
},
"initializer": {
"type": "variance_scaling_initializer",
"kwargs": {
"factor": 1.0,
"mode": "FAN_AVG",
"uniform": True,
},
},
"poswise_feedforward": {
"layers": [
{
"type": "Linear",
"kwargs": {
"in_features": hidden_dim,
"out_features": hidden_dim * 4,
"bias": True,
}
},
{
"type": "GPTGELU",
"kwargs": {}
},
{
"type": "Linear",
"kwargs": {
"in_features": hidden_dim * 4,
"out_features": hidden_dim,
"bias": True,
}
}
],
"name": "ffn",
},
}})
if self._IS_DECODE:
configs[module_name].update({'use_gpt_config': True})
else:
configs[module_name].update({'use_bert_config': False})
return configs
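# Worked example (values assumed from the public GPT-2 release): for the
# gpt2-small (124M) checkpoint, hparams.json typically contains
#     {"n_vocab": 50257, "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12}
# so _transform_config() yields vocab_size=50257, position_size=1024, a hidden
# and embedding dim of 768, 12 attention heads, 12 transformer blocks, and a
# position-wise feed-forward inner size of 4 * 768 = 3072.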
def _init_from_checkpoint(self, pretrained_model_name: str,
cache_dir: str,
load_output_layer: bool = True, **kwargs):
r"""Initialize model parameters from weights stored in the pre-trained
checkpoint.
Args:
pretrained_model_name (str): Name of the pre-trained model.
cache_dir (str): Path to the cache directory.
load_output_layer (bool): If `False`, will not load weights of the
output layer. Set this argument to `False` when loading weights
into a GPT2 encoder. Defaults to `True`.
"""
try:
import numpy as np
import tensorflow as tf
except ImportError:
print("Loading TensorFlow models in PyTorch requires installing "
"TensorFlow. Please see https://www.tensorflow.org/install/ "
"for installation instructions.")
raise
module_name = 'decoder' if self._IS_DECODE else 'encoder'
global_tensor_map = {
"model/wte": "word_embedder.embedding",
"model/wpe": "position_embedder.embedding",
"model/ln_f/b": module_name + ".final_layer_norm.bias",
"model/ln_f/g": module_name + ".final_layer_norm.weight",
}
layer_tensor_map = {
"ln_1/b": module_name + ".self_attn_layer_norm.{}.bias",
"ln_1/g": module_name + ".self_attn_layer_norm.{}.weight",
"ln_2/b": module_name + ".poswise_layer_norm.{}.bias",
"ln_2/g": module_name + ".poswise_layer_norm.{}.weight",
"mlp/c_fc/b": module_name + ".poswise_networks.{}._layers.0.bias",
"mlp/c_proj/b": module_name + ".poswise_networks.{}._layers.2.bias",
"attn/c_proj/b": module_name + ".self_attns.{}.O_dense.bias",
}
layer_transpose_map = {
"mlp/c_fc/w": module_name + ".poswise_networks.{}._layers.0.weight",
"mlp/c_proj/w": module_name + ".poswise_networks.{}._layers.2."
"weight",
"attn/c_proj/w": module_name + ".self_attns.{}.O_dense.weight",
}
tf_path = os.path.abspath(os.path.join(cache_dir, 'model.ckpt'))
# Load weights from TF model
init_vars = tf.train.list_variables(tf_path)
names = []
arrays = []
for name, _ in init_vars:
array = tf.train.load_variable(tf_path, name)
names.append(name)
arrays.append(array.squeeze())
tensor_names = []
for name, _ in self.named_parameters():
tensor_names.append(name)
for name, array in zip(names, arrays):
if name in global_tensor_map:
v_name = global_tensor_map[name]
if name == "model/wte":
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
if load_output_layer:
output_pointer = self._name_to_variable(
"decoder._output_layer.weight")
assert output_pointer.shape == array.shape
output_pointer.data = torch.from_numpy(array)
elif name == "model/wpe":
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
else:
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
else:
name_tmp = name.split("/")
layer_no = name_tmp[1][1:]
name = "/".join(name_tmp[2:])
if name in layer_tensor_map:
v_name = layer_tensor_map[name].format(layer_no)
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
elif name in layer_transpose_map:
v_name = layer_transpose_map[name].format(layer_no)
pointer = self._name_to_variable(v_name)
array_t = np.transpose(array)
assert pointer.shape == array_t.shape
pointer.data = torch.from_numpy(array_t)
elif name == "attn/c_attn/w":
index_d = array.shape[-1] // 3
Q_w = np.transpose(array[:, :index_d])
K_w = np.transpose(array[:, index_d: 2 * index_d])
V_w = np.transpose(array[:, 2 * index_d:])
q_weight = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.Q_dense.weight")
k_weight = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.K_dense.weight")
v_weight = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.V_dense.weight")
assert q_weight.shape == Q_w.shape
assert k_weight.shape == K_w.shape
assert v_weight.shape == V_w.shape
q_weight.data = torch.from_numpy(Q_w)
k_weight.data = torch.from_numpy(K_w)
v_weight.data = torch.from_numpy(V_w)
elif name == "attn/c_attn/b":
d = array.shape[0]
Q_b = array[: d // 3]
K_b = array[d // 3: 2 * d // 3]
V_b = array[2 * d // 3:]
q_bias = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.Q_dense.bias")
k_bias = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.K_dense.bias")
v_bias = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.V_dense.bias")
assert q_bias.shape == Q_b.shape
assert k_bias.shape == K_b.shape
assert v_bias.shape == V_b.shape
q_bias.data = torch.from_numpy(Q_b)
k_bias.data = torch.from_numpy(K_b)
v_bias.data = torch.from_numpy(V_b)
else:
print("Name error", name)
raise Exception
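# Illustrative sketch of the "attn/c_attn/w" handling above: TensorFlow stores
# the query/key/value projections fused along the last axis as [hidden, 3 * hidden],
# which is sliced into three blocks and transposed into the [out, in] layout
# expected by torch.nn.Linear weights. Toy-sized standalone version (the helper
# name and sizes are assumptions for illustration):
def _example_split_fused_qkv():
    import numpy as np
    hidden = 4  # GPT-2 small uses 768
    fused = np.zeros((hidden, 3 * hidden), dtype=np.float32)
    d = fused.shape[-1] // 3
    Q_w = np.transpose(fused[:, :d])
    K_w = np.transpose(fused[:, d: 2 * d])
    V_w = np.transpose(fused[:, 2 * d:])
    assert Q_w.shape == K_w.shape == V_w.shape == (hidden, hidden)
    return Q_w, K_w, V_w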
| # Copyright 2019 The Texar Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Utils of GPT2 Modules.
"""
import json
import os
import warnings
from abc import ABC
from typing import Any, Dict
import torch
from texar.torch.modules.pretrained.pretrained_base import PretrainedMixin
__all__ = [
"PretrainedGPT2Mixin",
]
_GPT2_PATH = "https://storage.googleapis.com/gpt-2/models/"
_CHECKPOINT_FILES = [
"checkpoint", "encoder.json", "hparams.json", "vocab.bpe",
"model.ckpt.data-00000-of-00001", "model.ckpt.index", "model.ckpt.meta"]
class PretrainedGPT2Mixin(PretrainedMixin, ABC):
r"""A mixin class to support loading pre-trained checkpoints for modules
that implement the GPT2 model.
The GPT2 model was proposed in
`Language Models are Unsupervised Multitask Learners`_
by `Radford et al.` from OpenAI. It is a unidirectional Transformer model
pre-trained using the vanilla language modeling objective on a large corpus.
The available GPT2 models are as follows:
* ``gpt2-small``: Small version of GPT-2, 124M parameters.
* ``gpt2-medium``: Medium version of GPT-2, 355M parameters.
* ``gpt2-large``: Large version of GPT-2, 774M parameters.
We provide the following GPT2 classes:
* :class:`~texar.torch.modules.GPT2Encoder` for text encoding.
* :class:`~texar.torch.modules.GPT2Decoder` for text generation and
decoding.
* :class:`~texar.torch.modules.GPT2Classifier` for text classification and
sequence tagging.
.. _`Language Models are Unsupervised Multitask Learners`:
https://openai.com/blog/better-language-models/
"""
_MODEL_NAME = "GPT2"
_MODEL2URL = {
'gpt2-small': [_GPT2_PATH + f"124M/{file}"
for file in _CHECKPOINT_FILES],
'gpt2-medium': [_GPT2_PATH + f"355M/{file}"
for file in _CHECKPOINT_FILES],
'gpt2-large': [_GPT2_PATH + f"774M/{file}"
for file in _CHECKPOINT_FILES],
}
_IS_DECODE = False
# Raise warning for the deprecated pre-trained model names
class MyDict(dict):
def __contains__(self, key):
if key == '117M':
warnings.warn("Pre-trained model name '117M' is deprecated, "
"use 'gpt2-small' instead.", UserWarning)
return True
elif key == '345M':
warnings.warn("Pre-trained model name '345M' is deprecated, "
"use 'gpt2-medium' instead.", UserWarning)
return True
else:
return super().__contains__(key)
_DEPRECATED_MODEL2URL = {
'117M': [_GPT2_PATH + f"124M/{file}" for file in _CHECKPOINT_FILES],
'345M': [_GPT2_PATH + f"355M/{file}" for file in _CHECKPOINT_FILES],
}
_MODEL2URL.update(_DEPRECATED_MODEL2URL)
_MODEL2URL = MyDict(_MODEL2URL) # type: ignore
def _transform_config(self, pretrained_model_name: str, # type: ignore
cache_dir: str) -> Dict[str, Any]:
info = list(os.walk(cache_dir))
root, _, files = info[0]
config_path = None
for file in files:
if file.endswith('hparams.json'):
config_path = os.path.join(root, file)
if config_path is None:
raise ValueError(f"Cannot find the config file in {cache_dir}")
with open(config_path) as f:
config_gpt = json.loads(f.read())
hidden_dim = config_gpt["n_embd"]
configs = {
"vocab_size": config_gpt["n_vocab"],
"context_size": config_gpt["n_ctx"],
"embedding_size": config_gpt["n_embd"], "embed": {
"dim": hidden_dim,
},
"position_size": config_gpt["n_ctx"],
"position_embed": {
"dim": hidden_dim
}
}
module_name = 'decoder' if self._IS_DECODE else 'encoder'
configs.update({module_name: {
"dim": hidden_dim,
"num_blocks": config_gpt["n_layer"],
"embedding_dropout": 0,
"residual_dropout": 0,
"multihead_attention": {
"use_bias": True,
"num_units": hidden_dim,
"num_heads": config_gpt["n_head"],
"output_dim": hidden_dim,
},
"initializer": {
"type": "variance_scaling_initializer",
"kwargs": {
"factor": 1.0,
"mode": "FAN_AVG",
"uniform": True,
},
},
"poswise_feedforward": {
"layers": [
{
"type": "Linear",
"kwargs": {
"in_features": hidden_dim,
"out_features": hidden_dim * 4,
"bias": True,
}
},
{
"type": "GPTGELU",
"kwargs": {}
},
{
"type": "Linear",
"kwargs": {
"in_features": hidden_dim * 4,
"out_features": hidden_dim,
"bias": True,
}
}
],
"name": "ffn",
},
}})
if self._IS_DECODE:
configs[module_name].update({'use_gpt_config': True})
else:
configs[module_name].update({'use_bert_config': False})
return configs
def _init_from_checkpoint(self, pretrained_model_name: str,
cache_dir: str,
load_output_layer: bool = True, **kwargs):
r"""Initialize model parameters from weights stored in the pre-trained
checkpoint.
Args:
pretrained_model_name (str): Name of the pre-trained model.
cache_dir (str): Path to the cache directory.
load_output_layer (bool): If `False`, will not load weights of the
output layer. Set this argument to `False` when loading weights
into a GPT2 encoder. Defaults to `True`.
"""
try:
import numpy as np
import tensorflow as tf
except ImportError:
print("Loading TensorFlow models in PyTorch requires installing "
"TensorFlow. Please see https://www.tensorflow.org/install/ "
"for installation instructions.")
raise
module_name = 'decoder' if self._IS_DECODE else 'encoder'
global_tensor_map = {
"model/wte": "word_embedder.embedding",
"model/wpe": "position_embedder.embedding",
"model/ln_f/b": module_name + ".final_layer_norm.bias",
"model/ln_f/g": module_name + ".final_layer_norm.weight",
}
layer_tensor_map = {
"ln_1/b": module_name + ".self_attn_layer_norm.{}.bias",
"ln_1/g": module_name + ".self_attn_layer_norm.{}.weight",
"ln_2/b": module_name + ".poswise_layer_norm.{}.bias",
"ln_2/g": module_name + ".poswise_layer_norm.{}.weight",
"mlp/c_fc/b": module_name + ".poswise_networks.{}._layers.0.bias",
"mlp/c_proj/b": module_name + ".poswise_networks.{}._layers.2.bias",
"attn/c_proj/b": module_name + ".self_attns.{}.O_dense.bias",
}
layer_transpose_map = {
"mlp/c_fc/w": module_name + ".poswise_networks.{}._layers.0.weight",
"mlp/c_proj/w": module_name + ".poswise_networks.{}._layers.2."
"weight",
"attn/c_proj/w": module_name + ".self_attns.{}.O_dense.weight",
}
tf_path = os.path.abspath(os.path.join(cache_dir, 'model.ckpt'))
# Load weights from TF model
init_vars = tf.train.list_variables(tf_path)
names = []
arrays = []
for name, _ in init_vars:
array = tf.train.load_variable(tf_path, name)
names.append(name)
arrays.append(array.squeeze())
tensor_names = []
for name, _ in self.named_parameters():
tensor_names.append(name)
for name, array in zip(names, arrays):
if name in global_tensor_map:
v_name = global_tensor_map[name]
if name == "model/wte":
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
if load_output_layer:
output_pointer = self._name_to_variable(
"decoder._output_layer.weight")
assert output_pointer.shape == array.shape
output_pointer.data = torch.from_numpy(array)
elif name == "model/wpe":
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
else:
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
else:
name_tmp = name.split("/")
layer_no = name_tmp[1][1:]
name = "/".join(name_tmp[2:])
if name in layer_tensor_map:
v_name = layer_tensor_map[name].format(layer_no)
pointer = self._name_to_variable(v_name)
assert pointer.shape == array.shape
pointer.data = torch.from_numpy(array)
elif name in layer_transpose_map:
v_name = layer_transpose_map[name].format(layer_no)
pointer = self._name_to_variable(v_name)
array_t = np.transpose(array)
assert pointer.shape == array_t.shape
pointer.data = torch.from_numpy(array_t)
elif name == "attn/c_attn/w":
index_d = array.shape[-1] // 3
Q_w = np.transpose(array[:, :index_d])
K_w = np.transpose(array[:, index_d: 2 * index_d])
V_w = np.transpose(array[:, 2 * index_d:])
q_weight = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.Q_dense.weight")
k_weight = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.K_dense.weight")
v_weight = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.V_dense.weight")
assert q_weight.shape == Q_w.shape
assert k_weight.shape == K_w.shape
assert v_weight.shape == V_w.shape
q_weight.data = torch.from_numpy(Q_w)
k_weight.data = torch.from_numpy(K_w)
v_weight.data = torch.from_numpy(V_w)
elif name == "attn/c_attn/b":
d = array.shape[0]
Q_b = array[: d // 3]
K_b = array[d // 3: 2 * d // 3]
V_b = array[2 * d // 3:]
q_bias = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.Q_dense.bias")
k_bias = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.K_dense.bias")
v_bias = self._name_to_variable(
f"{module_name}.self_attns.{layer_no}.V_dense.bias")
assert q_bias.shape == Q_b.shape
assert k_bias.shape == K_b.shape
assert v_bias.shape == V_b.shape
q_bias.data = torch.from_numpy(Q_b)
k_bias.data = torch.from_numpy(K_b)
v_bias.data = torch.from_numpy(V_b)
else:
print("Name error", name)
raise Exception | en | 0.765635 | # Copyright 2019 The Texar Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Utils of GPT2 Modules. A mixin class to support loading pre-trained checkpoints for modules that implement the GPT2 model. The GPT2 model was proposed in `Language Models are Unsupervised Multitask Learners`_ by `Radford et al.` from OpenAI. It is a unidirectional Transformer model pre-trained using the vanilla language modeling objective on a large corpus. The available GPT2 models are as follows: * ``gpt2-small``: Small version of GPT-2, 124M parameters. * ``gpt2-medium``: Medium version of GPT-2, 355M parameters. * ``gpt2-large``: Large version of GPT-2, 774M parameters. We provide the following GPT2 classes: * :class:`~texar.torch.modules.GPT2Encoder` for text encoding. * :class:`~texar.torch.modules.GPT2Decoder` for text generation and decoding. * :class:`~texar.torch.modules.GPT2Classifier` for text classification and sequence tagging. .. _`Language Models are Unsupervised Multitask Learners`: https://openai.com/blog/better-language-models/ # Raise warning for the deprecated pre-trained model names # type: ignore # type: ignore Initialize model parameters from weights stored in the pre-trained checkpoint. Args: pretrained_model_name (str): Name of the pre-trained model. cache_dir (str): Path to the cache directory. load_output_layer (bool): If `False`, will not load weights of the output layer. Set this argument to `False` when loading weights into a GPT2 encoder. Defaults to `True`. # Load weights from TF model | 1.772229 | 2 |