# Simply Block
[](https://github.com/simplyblock-io/ultra/actions/workflows/docker-image.yml)
## Web API app
Please see this document:
[WebApp/README.md](../WebApp/README.md)
## Command Line Interface
```bash
$ sbcli --help
usage: sbcli [-h] [-d] {storage-node,cluster,lvol,mgmt,pool} ...
Ultra management CLI
positional arguments:
{storage-node,cluster,lvol,mgmt,pool}
storage-node Storage node commands
cluster Cluster commands
lvol lvol commands
mgmt Management node commands
pool Pool commands
optional arguments:
-h, --help show this help message and exit
-d, --debug Print debug messages
```
## Cluster commands
```bash
$ sbcli cluster -h
usage: sbcli cluster [-h] {init,status,suspend,unsuspend,add-dev-model,rm-dev-model,add-host-auth,rm-host-auth,get-capacity,get-io-stats,set-log-level,get-event-log} ...
Cluster commands
positional arguments:
{init,status,suspend,unsuspend,add-dev-model,rm-dev-model,add-host-auth,rm-host-auth,get-capacity,get-io-stats,set-log-level,get-event-log}
init Create an empty cluster
status Show cluster status
suspend Suspend cluster. The cluster will stop processing all IO. Attention! This will cause an "all paths down" event for nvmeof/iscsi volumes on all hosts connected to
the cluster.
unsuspend Unsuspend cluster. The cluster will start processing IO again.
add-dev-model Add a device to the white list by the device model id. When adding nodes to the cluster later on, all devices of the specified model-ids, which are present on the
node to be added to the cluster, are added to the storage node for the cluster. This does not apply to already added devices, but only affects devices on additional
nodes, which will be added to the cluster. It is always possible to also add devices present on a server to a node manually.
rm-dev-model Remove a device from the white list by the device model id. This does not apply to already added devices, but only affects devices on additional nodes, which will be
added to the cluster.
add-host-auth If the "authorized hosts only" security feature is turned on, hosts must be explicitly added to the cluster via their nqn before they can discover the subsystem and
initiate a connection.
rm-host-auth If the "authorized hosts only" security feature is turned on, this function removes hosts, which have been added to the cluster via their nqn, from the list of
authorized hosts. After a host has been removed, it cannot connect any longer to the subsystem and cluster.
get-capacity Returns the current total available capacity, utilized capacity (in percent and absolute) and provisioned capacity (in percent and absolute) in GB in the cluster.
get-io-stats Returns the io statistics. If --history is not selected, this is a monitor, which updates current statistics records every two seconds (similar to ping): read-iops
write-iops total-iops read-mbs write-mbs total-mbs
set-log-level Defines the detail of the log information collected and stored
get-event-log returns cluster event log in syslog format
optional arguments:
-h, --help show this help message and exit
```
### Initializing new cluster
```bash
$ sbcli cluster init -h
usage: sbcli cluster init [-h] --blk_size {512,4096} --page_size_in_blocks PAGE_SIZE_IN_BLOCKS --model_ids MODEL_IDS [MODEL_IDS ...] --ha_type {single,ha} --tls {on,off} --auth-hosts-only
{on,off} --dhchap {off,one-way,bi-direct} [--NQN NQN] [--iSCSI]
Create an empty cluster
optional arguments:
-h, --help show this help message and exit
--blk_size {512,4096}
The block size in bytes
--page_size_in_blocks PAGE_SIZE_IN_BLOCKS
The size of a data page in logical blocks
--model_ids MODEL_IDS [MODEL_IDS ...]
a list of supported NVMe device model-ids
--ha_type {single,ha}
Can be "single" for single node clusters or "HA", which requires at least 3 nodes
--tls {on,off} TCP/IP transport security can be turned on and off. If turned on, both hosts and storage nodes must authenticate the connection via TLS certificates
--auth-hosts-only {on,off}
if turned on, hosts must be explicitly added to the cluster to be able to connect to any NVMEoF subsystem in the cluster
--dhchap {off,one-way,bi-direct}
if set to "one-way", hosts must present a secret whenever a connection is initiated between a host and a logical volume on a NVMEoF subsystem in the cluster. If set
to "bi-directional", both host and subsystem must present a secret to authenticate each other
--NQN NQN cluster NQN subsystem
--iSCSI The cluster supports iscsi LUNs in addition to nvmeof volumes
```
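For illustration, creating a minimal single-node cluster might look like the following; the page size and model id below are placeholders, not recommended values:
```bash
$ sbcli cluster init --blk_size 4096 --page_size_in_blocks 256 \
    --model_ids "<nvme-model-id>" --ha_type single \
    --tls off --auth-hosts-only off --dhchap off
```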
### Show cluster status
```bash
$ sbcli cluster status -h
usage: sbcli cluster status [-h] cluster-id
Show cluster status
positional arguments:
cluster-id the cluster UUID
optional arguments:
-h, --help show this help message and exit
```
### Suspend cluster
```bash
$ sbcli cluster suspend -h
usage: sbcli cluster suspend [-h] cluster-id
Suspend cluster. The cluster will stop processing all IO.
Attention! This will cause an "all paths down" event for nvmeof/iscsi volumes on all hosts connected to the cluster.
positional arguments:
cluster-id the cluster UUID
optional arguments:
-h, --help show this help message and exit
```
### Unsuspend cluster
```bash
$ sbcli cluster unsuspend -h
usage: sbcli cluster unsuspend [-h] cluster-id
Unsuspend cluster. The cluster will start processing IO again.
positional arguments:
cluster-id the cluster UUID
optional arguments:
-h, --help show this help message and exit
```
### Add device model to the NVMe devices whitelist
```bash
$ sbcli cluster add-dev-model -h
usage: sbcli cluster add-dev-model [-h] cluster-id model-ids [model-ids ...]
Add a device to the white list by the device model id. When adding nodes to the cluster later on, all devices of the specified model-ids, which are present on the node to be added to the
cluster, are added to the storage node for the cluster. This does not apply to already added devices, but only affects devices on additional nodes, which will be added to the cluster. It
is always possible to also add devices present on a server to a node manually.
positional arguments:
cluster-id the cluster UUID
model-ids a list of supported NVMe device model-ids
optional arguments:
-h, --help show this help message and exit
```
### Remove device model from the NVMe devices whitelist
```bash
$ sbcli cluster rm-dev-model -h
usage: sbcli cluster rm-dev-model [-h] cluster-id model-ids [model-ids ...]
Remove a device from the white list by the device model id. This does not apply to already added devices, but only affects devices on additional nodes, which will be added to the cluster.
positional arguments:
cluster-id the cluster UUID
model-ids a list of NVMe device model-ids
optional arguments:
-h, --help show this help message and exit
```
### Add host auth
```bash
$ sbcli cluster add-host-auth -h
usage: sbcli cluster add-host-auth [-h] cluster-id host-nqn
If the "authorized hosts only" security feature is turned on, hosts must be explicitly added to the cluster via their nqn before they can discover the subsystem initiate a connection.
positional arguments:
cluster-id the cluster UUID
host-nqn NQN of the host to allow to discover and connect to the cluster
optional arguments:
-h, --help show this help message and exit
```
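For example, with a hypothetical cluster UUID and host NQN (both placeholders, shown only to illustrate the typical NQN format):
```bash
$ sbcli cluster add-host-auth <cluster-uuid> nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
```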
### Remove host auth
```bash
$ sbcli cluster rm-host-auth -h
usage: sbcli cluster rm-host-auth [-h] cluster-id host-nqn
If the "authorized hosts only" security feature is turned on, this function removes hosts, which have been added to the cluster via their nqn, from the list
of authorized hosts. After a host has been removed, it cannot connect any longer to the subsystem and cluster.
positional arguments:
cluster-id the cluster UUID
host-nqn NQN of the host to remove from the allowed hosts list
optional arguments:
-h, --help show this help message and exit
```
### Get total cluster capacity
```bash
$ sbcli cluster get-capacity -h
usage: sbcli cluster get-capacity [-h] [--history HISTORY] cluster-id
Returns the current total available capacity, utilized capacity (in percent and absolute) and provisioned capacity (in percent and absolute) in GB in the cluster.
positional arguments:
cluster-id the cluster UUID
optional arguments:
-h, --help show this help message and exit
--history HISTORY (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).
```
### Return io statistics of a cluster
```bash
$ sbcli cluster get-io-stats -h
usage: sbcli cluster get-io-stats [-h] [--history HISTORY] cluster-id
Returns the io statistics. If --history is not selected, this is a monitor, which updates current statistics records every two seconds (similar to ping): read-iops write-iops total-iops
read-mbs write-mbs total-mbs
positional arguments:
cluster-id the cluster UUID
optional arguments:
-h, --help show this help message and exit
--history HISTORY (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).
```
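For example (the cluster UUID is a placeholder):
```bash
# live monitor, refreshed every two seconds
$ sbcli cluster get-io-stats <cluster-uuid>

# one record per 15 minutes for the last 2 days and 12 hours
$ sbcli cluster get-io-stats --history 2d12h <cluster-uuid>
```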
### Set log level
```bash
$ sbcli cluster set-log-level -h
usage: sbcli cluster set-log-level [-h] cluster-id {debug,test,prod}
Defines the detail of the log information collected and stored
positional arguments:
cluster-id the cluster UUID
{debug,test,prod} Log level
optional arguments:
-h, --help show this help message and exit
```
### Get events log
```bash
$ sbcli cluster get-event-log -h
usage: sbcli cluster get-event-log [-h] [--from FROM] [--to TO] cluster-id
returns cluster event log in syslog format
positional arguments:
cluster-id the cluster UUID
optional arguments:
-h, --help show this help message and exit
--from FROM from time, format: dd-mm-yy hh:mm
--to TO to time, format: dd-mm-yy hh:mm
```
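For example, fetching a bounded time window (dates follow the dd-mm-yy hh:mm format above; the values and cluster UUID are placeholders):
```bash
$ sbcli cluster get-event-log --from "01-01-23 00:00" --to "02-01-23 12:00" <cluster-uuid>
```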
## Storage node commands
```bash
$ sbcli storage-node -h
usage: sbcli storage-node [-h]
{add,remove,list,restart,shutdown,suspend,resume,get-io-stats,list-devices,reset-device,run-smart,add-device,replace,remove-device,set-ro-device,set-failed-device,set-online-device,get-capacity-device,get-io-stats-device,get-event-log,get-log-page-device}
...
Storage node commands
positional arguments:
{add,remove,list,restart,shutdown,suspend,resume,get-io-stats,list-devices,reset-device,run-smart,add-device,replace,remove-device,set-ro-device,set-failed-device,set-online-device,get-capacity-device,get-io-stats-device,get-event-log,get-log-page-device}
add Add storage node
remove Remove storage node
list List storage nodes
restart Restart a storage node. All functions and device drivers will be reset. During restart, the node does not accept IO. In a high-availability setup, this will not
impact operations.
shutdown Shutdown a storage node. Once the command is issued, the node will stop accepting IO, but IO, which was previously received, will still be processed. In a high-
availability setup, this will not impact operations.
suspend Suspend a storage node. The node will stop accepting new IO, but will finish processing any IO, which has been received already.
resume Resume a storage node
get-io-stats Returns the current io statistics of a node
list-devices List storage devices
reset-device Reset storage device
run-smart Run tests against storage device
add-device Add a new storage device
replace Replace a storage node. This command is run on the new physical server, which is expected to replace the old server. Attention!!! All the nvme devices, which are
part of the cluster to which the node belongs, must be inserted into the new server before this command is run. The old node will be de-commissioned and cannot be
used any more.
remove-device Remove a storage device. The device will become unavailable, independently if it was physically removed from the server. This function can be used if auto-detection
of removal did not work or if the device must be maintained otherwise while remaining inserted into the server.
set-ro-device Set storage device read only
set-failed-device Set storage device to failed state. This command can be used if an administrator believes that the device must be changed, but its status and health state do not
lead to an automatic detection of the failure state. Attention!!! The failed state is final; all data on the device will be automatically recovered to other devices
in the cluster.
set-online-device Set storage device to online state
get-capacity-device
Returns the size, absolute and relative utilization of the device in bytes
get-io-stats-device
Returns the io statistics. If --history is not selected, this is a monitor, which updates current statistics records every two seconds (similar to ping): read-iops
write-iops total-iops read-mbs write-mbs total-mbs
get-event-log Returns storage node event log in syslog format. This includes events from the storage node itself, the network interfaces and all devices on the node, including
health status information and updates.
get-log-page-device
Get nvme log-page information from the device. Attention! The availability of particular log pages depends on the device model. For more information, see nvme
specification.
optional arguments:
-h, --help show this help message and exit
```
### Add new storage node
- must be run on the storage node itself
```bash
$ sbcli storage-node add -h
usage: sbcli storage-node add [-h] [--data-nics DATA_NICS [DATA_NICS ...]] [--distr] cluster-id ifname
Add storage node
positional arguments:
cluster-id UUID of the cluster to which the node will belong
ifname Management interface name
optional arguments:
-h, --help show this help message and exit
--data-nics DATA_NICS [DATA_NICS ...]
Data interface names
--distr Install the distrib spdk app instead of the default: ultra21
```
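A sketch of a typical invocation, run on the storage node itself; the cluster UUID and interface names are placeholders for your environment:
```bash
$ sbcli storage-node add --data-nics eth1 eth2 <cluster-uuid> eth0
```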
### Remove storage node
```bash
$ sbcli storage-node remove -h
usage: sbcli storage-node remove [-h] node-id
Remove storage node
positional arguments:
node-id node-id of storage node
optional arguments:
-h, --help show this help message and exit
```
### List storage nodes
```bash
$ sbcli storage-node list -h
usage: sbcli storage-node list [-h] [--json] cluster-id
List storage nodes
positional arguments:
cluster-id id of the cluster for which nodes are listed
optional arguments:
-h, --help show this help message and exit
--json Print outputs in json format
```
### Restart storage node
- must be run on the storage node itself
```bash
$ sbcli storage-node restart -h
usage: sbcli storage-node restart [-h] [-t] cluster-id
Restart a storage node. All functions and device drivers will be reset. During restart, the node does not accept IO. In a high-availability setup, this will not impact operations.
positional arguments:
cluster-id the cluster UUID to which the node belongs
optional arguments:
-h, --help show this help message and exit
-t, --test Run smart test on the NVMe devices
```
### Shutdown a storage node
```bash
$ sbcli storage-node shutdown -h
usage: sbcli storage-node shutdown [-h] cluster-id
Shutdown a storage node. Once the command is issued, the node will stop accepting IO, but IO, which was previously received, will still be processed. In a high-availability setup, this will
not impact operations.
positional arguments:
cluster-id the cluster UUID to which the node belongs
optional arguments:
-h, --help show this help message and exit
```
### Suspend a storage node
```bash
$ sbcli storage-node suspend -h
usage: sbcli storage-node suspend [-h] cluster-id
Suspend a storage node. The node will stop accepting new IO, but will finish processing any IO, which has been received already.
positional arguments:
cluster-id the cluster UUID to which the node belongs
optional arguments:
-h, --help show this help message and exit
```
### Resume a storage node
```bash
$ sbcli storage-node resume -h
usage: sbcli storage-node resume [-h] cluster-id
Resume a storage node
positional arguments:
cluster-id the cluster UUID to which the node belongs
optional arguments:
-h, --help show this help message and exit
```
### Returns the current io statistics of a node
```bash
$ sbcli storage-node get-io-stats -h
usage: sbcli storage-node get-io-stats [-h] cluster-id
Returns the current io statistics of a node
positional arguments:
cluster-id the cluster UUID
optional arguments:
-h, --help show this help message and exit
```
### List storage devices
```bash
$ sbcli storage-node list-devices -h
usage: sbcli storage-node list-devices [-h] [-a] [-s {node-seq,dev-seq,serial}] [--json] node-id
List storage devices
positional arguments:
node-id the node's UUID
optional arguments:
-h, --help show this help message and exit
-a, --all List all devices in the cluster
-s {node-seq,dev-seq,serial}, --sort {node-seq,dev-seq,serial}
Sort the outputs
--json Print outputs in json format
```
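For example, sorting by serial number and printing JSON (the node UUID is a placeholder):
```bash
$ sbcli storage-node list-devices --sort serial --json <node-uuid>
```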
### Reset a storage device
```bash
$ sbcli storage-node reset-device -h
usage: sbcli storage-node reset-device [-h] device-id
Reset storage device
positional arguments:
device-id the device's UUID
optional arguments:
-h, --help show this help message and exit
```
### Run smart tests against a storage device
```bash
$ sbcli storage-node run-smart -h
usage: sbcli storage-node run-smart [-h] device-id
Run tests against storage device
positional arguments:
device-id the device's UUID
optional arguments:
-h, --help show this help message and exit
```
### Add a new storage device
```bash
$ sbcli storage-node add-device -h
usage: sbcli storage-node add-device [-h] name
Add a new storage device
positional arguments:
name Storage device name (as listed in the operating system). The device will be detached from the operating system and attached to the storage node
optional arguments:
-h, --help show this help message and exit
```
### Replace a storage node
```bash
$ sbcli storage-node replace -h
usage: sbcli storage-node replace [-h] [--data-nics DATA_NICS [DATA_NICS ...]] node-id ifname
Replace a storage node. This command is run on the new physical server, which is expected to replace the old server. Attention!!! All the nvme devices, which are part of the cluster to
which the node belongs, must be inserted into the new server before this command is run. The old node will be de-commissioned and cannot be used any more.
positional arguments:
node-id UUID of the node to be replaced
ifname Management interface name
optional arguments:
-h, --help show this help message and exit
--data-nics DATA_NICS [DATA_NICS ...]
Data interface names
```
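For illustration, run on the new physical server (UUID and interface names are placeholders); note the warning above about moving the NVMe devices first:
```bash
$ sbcli storage-node replace --data-nics eth1 eth2 <node-uuid> eth0
```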
### Remove a storage device
```bash
$ sbcli storage-node remove-device -h
usage: sbcli storage-node remove-device [-h] device-id
Remove a storage device. The device will become unavailable, independently if it was physically removed from the server. This function can be used if auto-detection of removal did not work
or if the device must be maintained otherwise while remaining inserted into the server.
positional arguments:
device-id Storage device ID
optional arguments:
-h, --help show this help message and exit
```
### Set storage device to failed state
```bash
$ sbcli storage-node set-failed-device -h
usage: sbcli storage-node set-failed-device [-h] device-id
Set storage device to failed state. This command can be used if an administrator believes that the device must be changed, but its status and health state do not lead to an automatic
detection of the failure state. Attention!!! The failed state is final; all data on the device will be automatically recovered to other devices in the cluster.
positional arguments:
device-id Storage device ID
optional arguments:
-h, --help show this help message and exit
```
### Set storage device to online state
```bash
$ sbcli storage-node set-online-device -h
usage: sbcli storage-node set-online-device [-h] device-id
Set storage device to online state
positional arguments:
device-id Storage device ID
optional arguments:
-h, --help show this help message and exit
```
### Returns the size of a device
```bash
$ sbcli storage-node get-capacity-device -h
usage: sbcli storage-node get-capacity-device [-h] [--history HISTORY] device-id
Returns the size, absolute and relative utilization of the device in bytes
positional arguments:
device-id Storage device ID
optional arguments:
-h, --help show this help message and exit
--history HISTORY (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).
```
### Returns the io statistics of a device
```bash
$ sbcli storage-node get-io-stats-device -h
usage: sbcli storage-node get-io-stats-device [-h] [--history HISTORY] device-id
Returns the io statistics. If --history is not selected, this is a monitor, which updates current statistics records every two seconds (similar to ping): read-iops write-iops total-iops
read-mbs write-mbs total-mbs
positional arguments:
device-id Storage device ID
optional arguments:
-h, --help show this help message and exit
--history HISTORY (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).
```
### Returns storage node event log in syslog format
```bash
$ sbcli storage-node get-event-log -h
usage: sbcli storage-node get-event-log [-h] [--from FROM] [--to TO] node-id
Returns storage node event log in syslog format. This includes events from the storage node itself, the network interfaces and all devices on the node, including health status information
and updates.
positional arguments:
node-id Storage node ID
optional arguments:
-h, --help show this help message and exit
--from FROM from time, format: dd-mm-yy hh:mm
--to TO to time, format: dd-mm-yy hh:mm
```
### Get nvme log-page information from the device
```bash
$ sbcli storage-node get-log-page-device -h
usage: sbcli storage-node get-log-page-device [-h] device-id {error,smart,telemetry,dev-self-test,endurance,persistent-event}
Get nvme log-page information from the device. Attention! The availability of particular log pages depends on the device model. For more information, see nvme specification.
positional arguments:
device-id Storage device ID
{error,smart,telemetry,dev-self-test,endurance,persistent-event}
Can be [error, smart, telemetry, dev-self-test, endurance, persistent-event]
optional arguments:
-h, --help show this help message and exit
```
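For example, fetching the SMART log page of a device (the device UUID is a placeholder):
```bash
$ sbcli storage-node get-log-page-device <device-uuid> smart
```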
## LVol commands
```bash
$ sbcli lvol -h
usage: sbcli lvol [-h]
{add,qos-set,list,get,delete,connect,resize,set-read-only,create-snapshot,clone,get-host-secret,get-ctrl-secret,move,replicate,
inflate,get-capacity,get-io-stats} ...
LVol commands
positional arguments:
{add,qos-set,list,get,delete,connect,resize,set-read-only,create-snapshot,clone,get-host-secret,get-ctrl-secret,move,replicate,inflate,get-capacity,get-io-stats}
add Add LVol. If both --compress and --encrypt are used, then the compress bdev will be at the top
qos-set Change qos settings for an active logical volume
list List all LVols
get Get LVol details
delete Delete LVol
connect show connection string to LVol host
resize Resize LVol
set-read-only Set LVol Read-only
create-snapshot Create snapshot from LVol
clone create LVol based on a snapshot
get-host-secret Returns the auto-generated host secret required for the nvmeof connection between host and cluster
get-ctrl-secret Returns the auto-generated controller secret required for the nvmeof connection between host and cluster
move Moves a full copy of the logical volume between clusters
replicate Create a replication path between two volumes in two clusters
inflate Inflate a clone to "full" logical volume and disconnect it from its parent snapshot.
get-capacity Returns the current provisioned capacity
get-io-stats Returns either the current io statistics
optional arguments:
-h, --help show this help message and exit
```
### Add LVol
```bash
$ sbcli lvol add -h
usage: sbcli lvol add [-h] [--compress] [--encrypt] [--thick] [--iscsi] [--node-ha {0,1,2}] [--dev-redundancy {1,2}] [--max-w-iops MAX_W_IOPS] [--max-r-iops MAX_R_IOPS] [--max-r-mbytes MAX_R_MBYTES] [--max-w-mbytes MAX_W_MBYTES]
[--distr] [--distr-ndcs DISTR_NDCS] [--distr-npcs DISTR_NPCS] [--distr-alloc_names DISTR_ALLOC_NAMES]
name size pool hostname
Add LVol. If both --compress and --encrypt are used, then the compress bdev will be at the top
positional arguments:
name LVol name or id
size LVol size: 10M, 10G, 10(bytes)
pool Pool UUID or name
hostname Storage node hostname
optional arguments:
-h, --help show this help message and exit
--compress Use inline data compression and de-compression on the logical volume
--encrypt Use inline data encryption and de-cryption on the logical volume
--thick Deactivate thin provisioning
--iscsi Use ISCSI fabric type instead of NVMEoF; in this case, it is required to specify the cluster hosts to which the volume will be attached
--node-ha {0,1,2} The maximum number of concurrent node failures accepted without interruption of operations
--dev-redundancy {1,2} Minimal number of concurrent device failures supported without data loss
--max-w-iops MAX_W_IOPS
Maximum Write IO Per Second
--max-r-iops MAX_R_IOPS
Maximum Read IO Per Second
--max-r-mbytes MAX_R_MBYTES
Maximum Read Mega Bytes Per Second
--max-w-mbytes MAX_W_MBYTES
Maximum Write Mega Bytes Per Second
--distr Use the distrib bdev
```
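For illustration, creating an encrypted, write-limited volume; the name, size, limits, pool UUID, and hostname are placeholders:
```bash
$ sbcli lvol add --encrypt --max-w-iops 5000 --max-w-mbytes 200 \
    lvol01 500G <pool-uuid> <storage-node-hostname>
```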
### Set QOS options for LVol
```bash
$ sbcli lvol qos-set -h
usage: sbcli lvol qos-set [-h] [--max-w-iops MAX_W_IOPS] [--max-r-iops MAX_R_IOPS] [--max-r-mbytes MAX_R_MBYTES] [--max-w-mbytes MAX_W_MBYTES] id
Change qos settings for an active logical volume
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
--max-w-iops MAX_W_IOPS
Maximum Write IO Per Second
--max-r-iops MAX_R_IOPS
Maximum Read IO Per Second
--max-r-mbytes MAX_R_MBYTES
Maximum Read Mega Bytes Per Second
--max-w-mbytes MAX_W_MBYTES
Maximum Write Mega Bytes Per Second
```
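For example, tightening the read and write IOPS limits on an existing volume (values and UUID are placeholders):
```bash
$ sbcli lvol qos-set --max-r-iops 10000 --max-w-iops 5000 <lvol-uuid>
```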
### List LVols
```bash
$ sbcli lvol list -h
usage: sbcli lvol list [-h] [--cluster-id CLUSTER_ID] [--json]
List all LVols
optional arguments:
-h, --help show this help message and exit
--cluster-id CLUSTER_ID
List LVols in particular cluster
--json Print outputs in json format
```
### Get LVol details
```bash
$ sbcli lvol get -h
usage: sbcli lvol get [-h] [--json] id
Get LVol details
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
--json Print outputs in json format
```
### Delete LVol
```bash
$ sbcli lvol delete -h
usage: sbcli lvol delete [-h] [--force] id
Delete LVol. This is only possible, if no more snapshots and non-inflated clones of the volume exist. The volume must be suspended before it can be deleted.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
--force Force delete LVol from the cluster
```
### Show nvme-cli connection commands
```bash
$ sbcli lvol connect -h
usage: sbcli lvol connect [-h] id
show connection strings to cluster. Multiple connections to the cluster are always available for multi-pathing and high-availability.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Resize
```bash
$ sbcli lvol resize -h
usage: sbcli lvol resize [-h] id size
Resize LVol. The lvol cannot exceed the maximum size for lvols. It cannot exceed the total remaining provisioned space in the pool. It cannot drop below the current utilization.
positional arguments:
id LVol id
size New LVol size: 10M, 10G, 10(bytes)
optional arguments:
-h, --help show this help message and exit
```
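For example, growing a volume to 200G (the UUID is a placeholder):
```bash
$ sbcli lvol resize <lvol-uuid> 200G
```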
### Set read only
```bash
$ sbcli lvol set-read-only -h
usage: sbcli lvol set-read-only [-h] id
Set LVol Read-only. Current write IO in flight will still be processed, but for new IO, only read and unmap IO are possible.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Set read-write
```bash
$ sbcli lvol set-read-write -h
usage: sbcli lvol set-read-write [-h] id
Set LVol Read-Write.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Suspend
```bash
$ sbcli lvol suspend -h
usage: sbcli lvol suspend [-h] id
Suspend LVol. IO in flight will still be processed, but new IO is not accepted. Make sure that the volume is detached from all hosts before suspending it to avoid IO errors.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Unsuspend
```bash
$ sbcli lvol unsuspend -h
usage: sbcli lvol unsuspend [-h] id
Unsuspend LVol. IO may be resumed.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Create Snapshot from LVol
```bash
$ sbcli lvol create-snapshot -h
usage: sbcli lvol create-snapshot [-h] id name
Create snapshot from LVol
positional arguments:
id LVol id
name snapshot name
optional arguments:
-h, --help show this help message and exit
```
### Clone: Create LVol based on a snapshot
```bash
$ sbcli lvol clone -h
usage: sbcli lvol clone [-h] snapshot_id clone_name
Create LVol based on a snapshot
positional arguments:
snapshot_id snapshot UUID
clone_name clone name
optional arguments:
-h, --help show this help message and exit
```
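A typical snapshot-then-clone flow might look like the following; the names are placeholders, and the snapshot UUID can be looked up with `sbcli snapshot list`:
```bash
$ sbcli lvol create-snapshot <lvol-uuid> snap01
$ sbcli lvol clone <snapshot-uuid> lvol01-clone
```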
### Returns the host secret
```bash
$ sbcli lvol get-host-secret -h
usage: sbcli lvol get-host-secret [-h] id
Returns the auto-generated host secret required for the nvmeof connection between host and cluster
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Returns the controller secret
```bash
$ sbcli lvol get-ctrl-secret -h
usage: sbcli lvol get-ctrl-secret [-h] id
Returns the auto-generated controller secret required for the nvmeof connection between host and cluster
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Moves a full copy of the logical volume between clusters
```bash
$ sbcli lvol move -h
usage: sbcli lvol move [-h] id cluster-id node-id
Moves a full copy of the logical volume between clusters
positional arguments:
id LVol id
cluster-id Destination Cluster ID
node-id Destination Node ID
optional arguments:
-h, --help show this help message and exit
```
### Create a replication path between two volumes in two clusters
```bash
$ sbcli lvol replicate -h
usage: sbcli lvol replicate [-h] [--asynchronous] id cluster-a cluster-b
Create a replication path between two volumes in two clusters
positional arguments:
id LVol id
cluster-a A Cluster ID
cluster-b B Cluster ID
optional arguments:
-h, --help show this help message and exit
--asynchronous Replication may be performed synchronously (default) or asynchronously
```
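For illustration, setting up asynchronous replication (all UUIDs are placeholders):
```bash
$ sbcli lvol replicate --asynchronous <lvol-uuid> <cluster-a-uuid> <cluster-b-uuid>
```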
### Inflate a clone to "full" logical volume and disconnect it from its parent snapshot.
```bash
$ sbcli lvol inflate -h
usage: sbcli lvol inflate [-h] id
Inflate a clone to "full" logical volume and disconnect it from its parent snapshot.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
```
### Returns the current LVol provisioned capacity
```bash
$ sbcli lvol get-capacity -h
usage: sbcli lvol get-capacity [-h] [--history HISTORY] id
Returns the current (or historic) provisioned and utilized (in percent and absolute) capacity.
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
--history HISTORY (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).
```
### Returns either the current io statistics
```bash
$ sbcli lvol get-io-stats -h
usage: sbcli lvol get-io-stats [-h] [--history HISTORY] id
Returns either the current or historic io statistics (read-IO, write-IO, total-IO, read mbs, write mbs, total mbs).
positional arguments:
id LVol id
optional arguments:
-h, --help show this help message and exit
--history HISTORY (XXdYYh), list history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total).
```
## Management node commands
```bash
$ sbcli mgmt -h
usage: sbcli mgmt [-h] {add,list,remove,show,status} ...
Management node commands
positional arguments:
{add,list,remove,show,status}
add Add Management node to the cluster
list List Management nodes
remove Remove Management node
show List management nodes
status Show management cluster status
optional arguments:
-h, --help show this help message and exit
```
### Add management node
```bash
$ sbcli mgmt add -h
usage: sbcli mgmt add [-h] ip_port
Add Management node to the cluster
positional arguments:
ip_port Docker server IP:PORT
optional arguments:
-h, --help show this help message and exit
```
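For example, where the address and port are placeholders for the Docker server endpoint:
```bash
$ sbcli mgmt add 192.168.10.20:2375
```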
### List management nodes
```bash
$ sbcli mgmt list -h
usage: sbcli mgmt list [-h] [--json]
List Management nodes
optional arguments:
-h, --help show this help message and exit
--json Print outputs in json format
```
### Remove management node
```bash
$ sbcli mgmt remove -h
usage: sbcli mgmt remove [-h] hostname
Remove Management node
positional arguments:
hostname hostname
optional arguments:
-h, --help show this help message and exit
```
### Show management nodes
```bash
$ sbcli mgmt show -h
usage: sbcli mgmt show [-h]
List management nodes
optional arguments:
-h, --help show this help message and exit
```
### Show management cluster status
```bash
$ sbcli mgmt status -h
usage: sbcli mgmt status [-h]
Show management cluster status
optional arguments:
-h, --help show this help message and exit
```
## Pool commands
```bash
$ sbcli pool -h
usage: sbcli pool [-h] {add,set,list,get,delete,enable,disable,get-secret,set-secret} ...
Pool commands
positional arguments:
{add,set,list,get,delete,enable,disable,get-secret,set-secret}
add Add a new Pool
set Set pool attributes
list List pools
get get pool details
delete delete pool. It is only possible to delete a pool if it is empty (no provisioned logical volumes contained).
enable Set pool status to Active
disable Set pool status to Inactive. Attention! This will suspend all new IO to the pool! IO in flight processing will be completed.
get-secret Returns the auto-generated, 20-character secret.
set-secret Updates the secret (replaces the existing one with a new one) and returns the new one.
optional arguments:
-h, --help show this help message and exit
```
### Add pool
- QOS parameters are optional, but once they are set on a pool, new LVol
creation will require the corresponding QOS parameters.
- Both IOPS and MB/s QOS limits (e.g. --max-w-iops and --max-w-mbytes) can be
used together; whichever limit is reached first applies.
```bash
$ sbcli pool add -h
usage: sbcli pool add [-h] [--pool-max POOL_MAX] [--lvol-max LVOL_MAX] [--max-w-iops MAX_W_IOPS] [--max-r-iops MAX_R_IOPS] [--max-r-mbytes MAX_R_MBYTES] [--max-w-mbytes MAX_W_MBYTES]
[--has-secret]
name
Add a new Pool
positional arguments:
name Pool name
optional arguments:
-h, --help show this help message and exit
--pool-max POOL_MAX Pool maximum size: 20M, 20G, 0(default)
--lvol-max LVOL_MAX LVol maximum size: 20M, 20G, 0(default)
--max-w-iops MAX_W_IOPS
Maximum Write IO Per Second
--max-r-iops MAX_R_IOPS
Maximum Read IO Per Second
--max-r-mbytes MAX_R_MBYTES
Maximum Read Mega Bytes Per Second
--max-w-mbytes MAX_W_MBYTES
Maximum Write Mega Bytes Per Second
--has-secret Pool is created with a secret (all further API interactions with the pool and logical volumes in the pool require this secret)
```
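For illustration, creating a secret-protected pool with size and write-IOPS limits (name, sizes, and limits are placeholders):
```bash
$ sbcli pool add --pool-max 2000G --lvol-max 500G --max-w-iops 20000 --has-secret pool01
```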
### Set pool attributes
```bash
$ sbcli pool set -h
usage: sbcli pool set [-h] [--pool-max POOL_MAX] [--lvol-max LVOL_MAX] [--max-w-iops MAX_W_IOPS] [--max-r-iops MAX_R_IOPS] [--max-r-mbytes MAX_R_MBYTES] [--max-w-mbytes MAX_W_MBYTES] id
Set pool attributes
positional arguments:
id Pool UUID
optional arguments:
-h, --help show this help message and exit
--pool-max POOL_MAX Pool maximum size: 20M, 20G
--lvol-max LVOL_MAX LVol maximum size: 20M, 20G
--max-w-iops MAX_W_IOPS
Maximum Write IO Per Second
--max-r-iops MAX_R_IOPS
Maximum Read IO Per Second
--max-r-mbytes MAX_R_MBYTES
Maximum Read Mega Bytes Per Second
--max-w-mbytes MAX_W_MBYTES
Maximum Write Mega Bytes Per Second
```
### List pools
```bash
$ sbcli pool list -h
usage: sbcli pool list [-h] [--json] [--cluster-id CLUSTER_ID]
List pools
optional arguments:
-h, --help show this help message and exit
--json Print outputs in json format
--cluster-id CLUSTER_ID ID of the cluster
```
### Get pool details
```bash
$ sbcli pool get -h
usage: sbcli pool get [-h] [--json] id
get pool details
positional arguments:
id pool uuid
optional arguments:
-h, --help show this help message and exit
--json Print outputs in json format
```
### Delete pool
```bash
$ sbcli pool delete -h
usage: sbcli pool delete [-h] id
delete pool. It is only possible to delete a pool if it is empty (no provisioned logical volumes contained).
positional arguments:
id pool uuid
optional arguments:
-h, --help show this help message and exit
```
### Set pool status to Active
```bash
$ sbcli pool enable -h
usage: sbcli pool enable [-h] pool_id
Set pool status to Active
positional arguments:
pool_id pool uuid
optional arguments:
-h, --help show this help message and exit
```
### Set pool status to Inactive
```bash
$ sbcli pool disable -h
usage: sbcli pool disable [-h] pool_id
Set pool status to Inactive. Attention! This will suspend all new IO to the pool! IO in flight processing will be completed.
positional arguments:
pool_id pool uuid
optional arguments:
-h, --help show this help message and exit
```
### Get pool secret
```bash
$ sbcli pool get-secret -h
usage: sbcli pool get-secret [-h] pool_id
Returns the auto-generated, 20-character secret.
positional arguments:
pool_id pool uuid
optional arguments:
-h, --help show this help message and exit
```
### Update pool secret
```bash
$ sbcli pool set-secret -h
usage: sbcli pool set-secret [-h] pool_id
Updates the secret (replaces the existing one with a new one) and returns the new one.
positional arguments:
pool_id pool uuid
optional arguments:
-h, --help show this help message and exit
```
## Snapshot commands
```bash
$ sbcli snapshot -h
usage: sbcli snapshot [-h] {add,list,delete,clone} ...
Snapshot commands
positional arguments:
{add,list,delete,clone}
add Create new snapshot
list List snapshots
delete Delete a snapshot
clone Create LVol from snapshot
optional arguments:
-h, --help show this help message and exit
```
### Create snapshot
```bash
$ sbcli snapshot add -h
usage: sbcli snapshot add [-h] id name
Create new snapshot
positional arguments:
id LVol UUID
name snapshot name
optional arguments:
-h, --help show this help message and exit
```
### List snapshots
```bash
$ sbcli snapshot list -h
usage: sbcli snapshot list [-h]
List snapshots
optional arguments:
-h, --help show this help message and exit
```
### Delete snapshots
```bash
$ sbcli snapshot delete -h
usage: sbcli snapshot delete [-h] id
Delete a snapshot
positional arguments:
id snapshot UUID
optional arguments:
-h, --help show this help message and exit
```
### Clone snapshots
```bash
$ sbcli snapshot clone -h
usage: sbcli snapshot clone [-h] id lvol_name
Create LVol from snapshot
positional arguments:
id snapshot UUID
lvol_name LVol name
optional arguments:
-h, --help show this help message and exit
```
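Putting the snapshot commands together, a minimal flow might look like the following (names and UUIDs are placeholders):
```bash
$ sbcli snapshot add <lvol-uuid> snap01
$ sbcli snapshot list
$ sbcli snapshot clone <snapshot-uuid> lvol01-from-snap
```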
/sbcli-3.11.zip/sbcli-3.11/README.md
from typing import Mapping
from management.models.base_model import BaseModel
class PortStat(BaseModel):
attributes = {
"uuid": {"type": str, 'default': ""},
"node_id": {"type": str, 'default': ""},
"date": {"type": int, 'default': 0},
"bytes_sent": {"type": int, 'default': 0},
"bytes_received": {"type": int, 'default': 0},
"packets_sent": {"type": int, 'default': 0},
"packets_received": {"type": int, 'default': 0},
"errin": {"type": int, 'default': 0},
"errout": {"type": int, 'default': 0},
"dropin": {"type": int, 'default': 0},
"dropout": {"type": int, 'default': 0},
"out_speed": {"type": int, 'default': 0},
"in_speed": {"type": int, 'default': 0},
}
def __init__(self, data=None):
super(PortStat, self).__init__()
self.set_attrs(self.attributes, data)
self.object_type = "object"
def get_id(self):
return "%s/%s/%s" % (self.node_id, self.uuid, self.date)
class LVolStat(BaseModel):
attributes = {
"uuid": {"type": str, 'default': ""},
"node_id": {"type": str, 'default': ""},
"date": {"type": int, 'default': 0},
"read_bytes_per_sec": {"type": int, 'default': 0},
"read_iops": {"type": int, 'default': 0},
"write_bytes_per_sec": {"type": int, 'default': 0},
"write_iops": {"type": int, 'default': 0},
"unmapped_bytes_per_sec": {"type": int, 'default': 0},
"read_latency_ticks": {"type": int, 'default': 0}, # read access latency, but unclear how to interpret yet
"write_latency_ticks": {"type": int, 'default': 0}, # write access latency, but unclear how to interpret yet
"queue_depth": {"type": int, 'default': 0}, # queue depth, not included into CLI, but let us store now
"io_time": {"type": int, 'default': 0}, # this is important to calculate utilization of disk (io_time2 - io_time1)/elapsed_time = % utilization of disk
"weighted_io_timev": {"type": int, 'default': 0}, # still unclear how to use, but let us store now.
"stats": {"type": dict, 'default': {}},
# capacity attributes
"data_nr": {"type": int, 'default': 0},
"freepg_cnt": {"type": int, 'default': 0},
"pagesz": {"type": int, 'default': 0},
"blks_in_pg": {"type": int, 'default': 0},
}
def __init__(self, data=None):
super(LVolStat, self).__init__()
self.set_attrs(self.attributes, data)
self.object_type = "object"
def get_id(self):
return "%s/%s/%s" % (self.node_id, self.uuid, self.date)
class DeviceStat(LVolStat):
def __init__(self, data=None):
super(LVolStat, self).__init__()
self.set_attrs(self.attributes, data)
self.object_type = "object"
def get_id(self):
return "%s/%s/%s" % (self.node_id, self.uuid, self.date)
/sbcli-3.11.zip/sbcli-3.11/management/models/device_stat.py
import pprint
import json
from typing import Mapping
import fdb
class BaseModel(object):
def __init__(self):
self._attribute_map = {
"id": {"type": str, "default": ""},
"name": {"type": str, "default": self.__class__.__name__},
"object_type": {"type": str, "default": ""},
}
self.set_attrs({}, {})
def get_id(self):
return self.id
def set_attrs(self, attributes, data):
self._attribute_map.update(attributes)
self.from_dict(data)
def from_dict(self, data):
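        # Populate attributes from a dict: scalar types are coerced via their constructors,
        # typing.List/Mapping values holding models are rebuilt recursively via from_dict,
        # and attributes missing from the input fall back to their declared defaults.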
for attr in self._attribute_map:
if data is not None and attr in data:
dtype = self._attribute_map[attr]['type']
value = data[attr]
if dtype in [int, float, str, bool]:
value = self._attribute_map[attr]['type'](data[attr])
elif hasattr(dtype, '__origin__'):
if dtype.__origin__ == list:
if hasattr(dtype, "__args__") and hasattr(dtype.__args__[0], "from_dict"):
value = [dtype.__args__[0]().from_dict(item) for item in data[attr]]
else:
value = data[attr]
elif dtype.__origin__ == Mapping:
if hasattr(dtype, "__args__") and hasattr(dtype.__args__[1], "from_dict"):
value = {item: dtype.__args__[1]().from_dict(data[attr][item]) for item in data[attr]}
else:
value = self._attribute_map[attr]['type'](data[attr])
elif hasattr(dtype, '__origin__') and hasattr(dtype, "__args__"):
if hasattr(dtype.__args__[0], "from_dict"):
value = dtype.__args__[0]().from_dict(data[attr])
else:
value = self._attribute_map[attr]['type'](data[attr])
setattr(self, attr, value)
else:
setattr(self, attr, self._attribute_map[attr]['default'])
return self
def to_dict(self):
self.id = self.get_id()
result = {}
for attr in self._attribute_map:
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(lambda x: x.to_dict() if hasattr(x, "to_dict") else x, value))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def get_clean_dict(self):
data = self.to_dict()
for key in BaseModel()._attribute_map:
if key in data:
del data[key]
return data
def to_str(self):
"""Returns the string representation of the model
:rtype: str
"""
return pprint.pformat(self.to_dict())
def read_from_db(self, kv_store, id="", limit=0, reverse=False):
try:
objects = []
prefix = "%s/%s/%s" % (self.object_type, self.name, id)
for k, v in kv_store.db.get_range_startswith(prefix.encode('utf-8'), limit=limit, reverse=reverse):
objects.append(self.__class__().from_dict(json.loads(v)))
return objects
except fdb.impl.FDBError as e:
print("Error reading from FDB!")
print(f"Error code: {e.code}")
print(f"Error message: {e.description.decode('utf-8')}")
exit(1)
def get_last(self, kv_store):
id = "/".join(self.get_id().split("/")[:2])
objects = self.read_from_db(kv_store, id=id, limit=1, reverse=True)
if objects:
return objects[0]
return None
def write_to_db(self, kv_store):
try:
prefix = "%s/%s/%s" % (self.object_type, self.name, self.get_id())
st = json.dumps(self.to_dict())
kv_store.db.set(prefix.encode(), st.encode())
return True
except fdb.impl.FDBError as e:
print("Error Writing to FDB!")
print(e)
exit(1)
def remove(self, kv_store):
prefix = "%s/%s/%s" % (self.object_type, self.name, self.get_id())
return kv_store.db.clear(prefix.encode())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
return self.get_id() == other.get_id()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
/sbcli-3.11.zip/sbcli-3.11/management/models/base_model.py
from datetime import datetime
from typing import List
from management.models.base_model import BaseModel
from management.models.iface import IFace
from management.models.nvme_device import NVMeDevice
class LVol(BaseModel):
attributes = {
"lvol_name": {"type": str, 'default': ""},
"size": {"type": int, 'default': 0},
"uuid": {"type": str, 'default': ""},
"base_bdev": {"type": str, 'default': ""},
"lvol_bdev": {"type": str, 'default': ""},
"comp_bdev": {"type": str, 'default': ""},
"crypto_bdev": {"type": str, 'default': ""},
"nvme_dev": {"type": NVMeDevice, 'default': None},
"pool_uuid": {"type": str, 'default': ""},
"hostname": {"type": str, 'default': ""},
"node_id": {"type": str, 'default': ""},
"mode": {"type": str, 'default': "read-write"},
"lvol_type": {"type": str, 'default': "lvol"}, # lvol, compressed, crypto, dedup
"bdev_stack": {"type": List, 'default': []},
"crypto_key_name": {"type": str, 'default': ""},
"rw_ios_per_sec": {"type": int, 'default': 0},
"rw_mbytes_per_sec": {"type": int, 'default': 0},
"r_mbytes_per_sec": {"type": int, 'default': 0},
"w_mbytes_per_sec": {"type": int, 'default': 0},
}
def __init__(self, data=None):
super(LVol, self).__init__()
self.set_attrs(self.attributes, data)
self.object_type = "object"
def get_id(self):
return self.uuid
class StorageNode(BaseModel):
STATUS_ONLINE = 'online'
STATUS_OFFLINE = 'offline'
STATUS_ERROR = 'error'
STATUS_REPLACED = 'replaced'
STATUS_SUSPENDED = 'suspended'
STATUS_IN_CREATION = 'in_creation'
STATUS_IN_SHUTDOWN = 'in_shutdown'
STATUS_RESTARTING = 'restarting'
STATUS_REMOVED = 'removed'
attributes = {
"uuid": {"type": str, 'default': ""},
"baseboard_sn": {"type": str, 'default': ""},
"system_uuid": {"type": str, 'default': ""},
"hostname": {"type": str, 'default': ""},
"host_nqn": {"type": str, 'default': ""},
"subsystem": {"type": str, 'default': ""},
"nvme_devices": {"type": List[NVMeDevice], 'default': []},
"sequential_number": {"type": int, 'default': 0},
"partitions_count": {"type": int, 'default': 0},
"ib_devices": {"type": List[IFace], 'default': []},
"status": {"type": str, 'default': "in_creation"},
"updated_at": {"type": str, 'default': str(datetime.now())},
"create_dt": {"type": str, 'default': str(datetime.now())},
"remove_dt": {"type": str, 'default': str(datetime.now())},
"mgmt_ip": {"type": str, 'default': ""},
"rpc_port": {"type": int, 'default': -1},
"rpc_username": {"type": str, 'default': ""},
"rpc_password": {"type": str, 'default': ""},
"data_nics": {"type": List[IFace], 'default': []},
"lvols": {"type": List[str], 'default': []},
"services": {"type": List[str], 'default': []},
}
def __init__(self, data=None):
super(StorageNode, self).__init__()
self.set_attrs(self.attributes, data)
self.object_type = "object"
def get_id(self):
return self.uuid
/sbcli-3.11.zip/sbcli-3.11/management/models/storage_node.py
# SBcoyote: An Extensible Python-Based Reaction Editor and Viewer.
## Introduction
SBcoyote, initially called PyRKViewer or Coyote, is a cross-platform visualization tool for drawing reaction networks written with the
[wxPython](https://www.wxpython.org/) framework. It can draw reactants, products, reactions, and compartments, and its features include but are not limited to:
* Support for floating and boundary species.
* Reactions can be displayed using Bezier curves and straight lines.
* Plugin support, with some plugin examples: arrow designer, random network, auto layout, etc.
## Citing
If you are using any of the code, please cite the article (https://doi.org/10.1016/j.biosystems.2023.105001).
## Installing SBcoyote
* Install Python 3.8, 3.9 or 3.10 if not already in the system.
* Go to the command line and type `pip install SBcoyote`.
* If wxPython doesn't get installed automatically, please try to install wxPython 4.1.1 or 4.2.0 manually referring to https://wxpython.org/pages/downloads/index.html. Note wxPython 4.1.1 does not work with Python 3.10.
* To run the application, simply type in the command line `SBcoyote`.
## Documentation
The full documentation can be found at: https://sys-bio.github.io/SBcoyote/
## Visualization Example
Here is a visualization example by SBcoyote for the large-scale Escherichia coli core
metabolism network (King et al., 2015; Orth et al., 2010).
<img src="https://raw.githubusercontent.com/sys-bio/SBcoyote/main/examples/ecoli.png" width="500" height="400">
## Installation Options for Developers
### Installing with Poetry
1. If you do not have poetry installed on your computer, follow the quick steps shown [here](https://python-poetry.org/docs/).
2. Once you have poetry installed, you will download SBcoyote. Click the green button at the top of this page that says “Code” and choose “Download ZIP”, then unzip the folder to your desired directory. Make a note of the directory location as you will need it for the next step.
3. Open your terminal and navigate to the directory containing SBcoyote.
4. Once inside the main folder of the application you can install the dependencies. To install the base dependencies simply run `poetry install`. To install the optional ones as well, run `poetry install -E simulation`. Note that this step may take a while. To learn more about which set of dependencies is right for you, refer to the [Dependencies](#Dependencies) section below.
5. Finally, you will run the application with the command `poetry run SBcoyote`.
After you have completed all of these steps, you will not have to repeat them every time you want to run the application. Once the setup is done you will only need to open the terminal, navigate into the folder that contains your SBcoyote application, and run the command `poetry run SBcoyote`.
### Installing without Poetry
We strongly advise following the steps above as it makes the set-up process much faster and simpler. However, to install SBcoyote without Poetry, here is the process you will follow:
1. First, download SBcoyote. Click the green button at the top of this page that says “Code” and choose “Download ZIP”, then unzip the folder to your desired directory. Make a note of the directory location as you will need it for the next step.
2. Open your terminal and navigate to the directory containing SBcoyote.
3. To install the base set of dependencies, you will run `pip install -r requirements.txt`. Then if you want to install the optional dependencies as well, run `pip install -r requirements-simulation.txt`. To learn more about which set of dependencies is right for you, refer to the [Dependencies](#Dependencies) section below.
4. Finally, you will run the application with the command `python -m rkviewer.main`.
After you have completed all of these steps, you will not have to repeat them every time you want to run the application. Once the setup is done you will only need to open the terminal, navigate into the folder that contains your SBcoyote application, and run the command `python -m rkviewer.main`.
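For quick reference, the installation paths described above reduce to roughly the following sketch (package and module names as given in this README; see the notes above for wxPython and dependency-group details):
```bash
# released package via pip
pip install SBcoyote
SBcoyote

# from a source checkout without poetry
pip install -r requirements.txt
python -m rkviewer.main

# from a source checkout with poetry
poetry install -E simulation   # or `poetry install` for the base dependencies only
poetry run SBcoyote
```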
### Running
* If you have poetry, simply run `poetry run SBcoyote`.
* Otherwise, in your virtual environment, run `python -m rkviewer.main`.
* Then, check out the [documentation](#documentation).
## Development Setup
### Dependencies
We are using [poetry](https://python-poetry.org/) for dependency management. If you are just looking
to build and run, though, you can work solely with `pip` as well.
There are currently three dependency groups: "base", "development", and "simulation".
* "base" is the bare minimum requirements to run the application without any plugins.
* "development" includes the additional requirements for development, such as for documentation
and testing.
* "simulation" includes a large set of dependencies required for running simulation related plugins. (This is in addition to the base requirements).
The dependency groups are specified in `pyproject.toml` for `poetry`. There are additionally
`requirements*.txt` files generated by `poetry`, including `requirements.txt`, `requirements-dev.txt`,
and `requirements-simulation.txt`. If you do not have poetry, you can opt for those as well. If you are
using linux, extra work would need to be done on installing wxPython. Please refer to the
"Linux Notes" section below.
### Installing Dependencies
`poetry` is recommended for installing dependencies. Simply `poetry install` for the base
dependencies and `poetry install -E simulation` to install the optional ones as well.
If you don't have poetry, you can simply run `pip install -r <>` for any of the aforementioned
`requirements.txt` files.
### Running locally
* If you have poetry, simply `poetry run SBcoyote`.
* Otherwise, in your virtual environment, run `python -m rkviewer.main`.
## Development Distributing
* Use `poetry build` and `poetry publish`. Refer to [poetry docs](https://python-poetry.org/docs/)
for more detail.
* To re-generate the `requirements*.txt`, run `scripts/gen_requirements.py`.
### Bundling an Executable with PyInstaller
**NOTE: This section is obsolete for now, as we are currently distributing with pip.**
* Always run `pyinstaller rkviewer.spec` when `rkviewer.spec` is present.
* If somehow `rkviewer.spec` went missing or you want to regenerate the build specs, run `pyinstaller -F --windowed --add-data ext/Iodine.dll;. main.py` on Windows or `pyinstaller -F --windowed --add-data ext/Iodine.dll:. main.py` on Linux/Mac to generate a file named `main.spec`. Note that if a `main.spec` file is already present **it will be overwritten**.
## Development for Different Platforms
The python version for development was 3.7.7.
### Mac Notes
* Note that on MacOS, if you wish to use SBcoyote in a virtual environment, use `venv` instead of
`virtualenv`, due to the latter's issues with wxPython.
* pyinstaller and wxPython require a python built with `enable-framework` on. Therefore, one should do `env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.7.7` and
use that Python installation for building.
* If the text is blurry in the app bundled by `pyinstaller`, one needs to add an entry in the pyinstaller settings as described [here](https://stackoverflow.com/a/40676321).
### Linux Notes
* To install wxPython on linux, see https://wxpython.org/blog/2017-08-17-builds-for-linux-with-pip/index.html. `requirements-dev.txt` and `requirements.txt` assume the user is on Ubuntu 18.04 for readthedocs. If you have a different distro and have trouble using `requirements.txt`, just install wxPython manually using the previous link.
* Related to the last note, if readthedocs start having trouble building wxPython, understand that it might be because readthedocs updated its distro from Ubuntu 18.04. Go to `requirements-dev.txt` and change the line above `wxPython` to look in the appropriate link.
* i.e. `-f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-18.04/ \n wxPython==4.1.1`
## Future Development
### Testing and Profiling
* To run all tests, go to project root and run `python -m unittest discover`.
* To run a specific test suite, run e.g. `python -m unittest test.api.test_node`.
* Or even more specific: `python -m unittest test.api.test_node.TestNode.test_add_nodes`.
* To profile the application, run `python -m cProfile -o rkviewer.stat main.py`.
* To visualize the profile result, run `tuna rkviewer.stat`.
### Building Local Docs
* Run `sphinx-apidoc -f -o docs/source/rkviewer rkviewer rkviewer/plugin rkviewer/resources ` to regenerate the full reference doc source
code if new files were added to the package rkviewer.
* Run `sphinx-build -b html docs\source docs\build`.
### Note on Style
Usually snake_case is used for function names. However, to retain some degree of backwards
compatibility for wxPython, subclasses of wxPython classes use PascalCase for their methods, e.g. `Canvas::RegisterAllChildren`.
### TODOs
* ENHANCEMENT: Add support for multiple net IDs. Currently all net IDs are set to 0 by default.
### Shapes TODOs
* Events (NodeModified)
### Roadmap for Shape Engine
A shape "engine" allows the user to specify custom composite shapes for nodes and compartments.
Composite shapes are constructed out of primitives such as circles, (rounded) rectangles, polygons,
etc.
RKViewer provides a default list of (composite) shapes, but the user may also create their own
shapes out of primitives. A (composite) shape is formed out of one or many primitives, each
scaled, rotated, and translated by certain amounts. User-created shapes will be
associated with each model in the exported `.json` files.
A shape-creation plugin may be created in the future to facilitate the process of designing
complex shapes.
Here is the roadmap for the shape engine:
* Create preliminary list of primitives and a default list of shapes. Allow model loader/saver to
reference that list.
* Modify renderer to be able to render these default shapes.
* Modify inspector to allow the user to change the properties of the primitives in the shape, such
as colors, border thickness, etc.
* Modify model loader/saver to allow users to create custom shape lists manually.
* Write shape-creation plugin?
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/README.md
| 0.887692 | 0.668718 |
README.md
|
pypi
|
from collections import defaultdict
from dataclasses import dataclass, fields, is_dataclass
from typing import (
Any,
Callable,
DefaultDict,
Dict,
List,
Optional,
Set,
Tuple,
Type, Union,
)
import wx
from rkviewer.canvas.geometry import Vec2
# ------------------------------------------------------------
from inspect import getframeinfo, stack
# ------------------------------------------------------------
class CanvasEvent:
def to_tuple(self):
        assert is_dataclass(self), "to_tuple requires the CanvasEvent instance to be a dataclass!"
return tuple(getattr(self, f.name) for f in fields(self))
@dataclass
class SelectionDidUpdateEvent(CanvasEvent):
"""Called after the list of selected nodes and/or reactions has changed.
Attributes:
node_indices: The indices of the list of selected nodes.
reaction_indices: The indices of the list of selected reactions.
compartment_indices: The indices of the list of selected compartments.
"""
node_indices: Set[int]
reaction_indices: Set[int]
compartment_indices: Set[int]
@dataclass
class CanvasDidUpdateEvent(CanvasEvent):
"""Called after the canvas has been updated by the controller."""
pass
@dataclass
class DidNewNetworkEvent(CanvasEvent):
""" Called when the canvas is cleared by choosing "New".
"""
pass
@dataclass
class DidMoveNodesEvent(CanvasEvent):
"""Called after the position of a node changes but has not been committed to the model.
This event may be called many times, continuously as the user drags a group of nodes. Note that
    only after the drag operation has ended is the model notified of the move for undo purposes. See
    DidCommitDragEvent.
Attributes:
node_indices: The indices of the nodes that were moved.
offset: The position offset. If all nodes were moved by the same offset, then a single Vec2
is given; otherwise, a list of offsets are given, with each offset matching a node.
        dragged: Whether the move operation was done by the user dragging, and not, for example,
through the form.
by_user: Whether the event was performed by the user or through a plugin.
"""
node_indices: List[int]
offset: Union[Vec2, List[Vec2]]
dragged: bool
by_user: bool = True
@dataclass
class DidMoveCompartmentsEvent(CanvasEvent):
"""
Same as `DidMoveNodesEvent` but for compartments.
Attributes:
compartment_indices: The indices of the compartments that were moved.
offset: The position offset. If all compartments were moved by the same offset,
then a single Vec2 is given; otherwise, a list of offsets are given,
            with each offset matching a compartment.
        dragged: Whether the move operation was done by the user dragging, and not, for example,
through the form.
by_user: Whether the event was performed by the user or through a plugin.
"""
compartment_indices: List[int]
offset: Union[Vec2, List[Vec2]]
dragged: bool
by_user: bool = True
@dataclass
class DidResizeNodesEvent(CanvasEvent):
"""Called after the list of selected nodes has been resized.
Attributes:
node_indices: The indices of the list of resized nodes.
ratio: The resize ratio.
        dragged: Whether the resize operation was done by the user dragging, and not, for example,
through the form.
by_user: Whether the event was performed by the user or through a plugin.
"""
node_indices: List[int]
ratio: Vec2
dragged: bool
by_user: bool = True
@dataclass
class DidResizeCompartmentsEvent(CanvasEvent):
"""Called after the list of selected compartments has been resized.
Attributes:
compartment_indices: The indices of the list of resized compartments.
ratio: The resize ratio.
        dragged: Whether the resize operation was done by the user dragging, and not, for example,
through the form.
by_user: Whether the event was performed by the user or through a plugin.
"""
compartment_indices: List[int]
ratio: Union[Vec2, List[Vec2]]
dragged: bool
by_user: bool = True
@dataclass
class DidCommitDragEvent(CanvasEvent):
"""Dispatched after any continuously emitted dragging event has concluded.
    This is dispatched for any event that is posted at quick intervals while the left mouse
    button is held down and the mouse is moving, i.e. "dragging" events. This includes: DidMoveNodesEvent,
DidMoveCompartmentsEvent, DidResizeNodesEvent, DidResizeCompartmentsEvent,
and DidResizeMoveBezierHandlesEvent. This event is emitted after the left mouse button is
released, the model is notified of the change, and the action is complete.
"""
source: Any
@dataclass
class DidMoveBezierHandleEvent(CanvasEvent):
"""Dispatched after a Bezier handle is moved.
Attributes:
net_index: The network index.
reaction_index: The reaction index.
node_index: The index of the node whose Bezier handle moved. -1 if the source centroid
handle was moved, or -2 if the dest centroid handle was moved.
direct: Automatically true when by_user is False. Otherwise, True if the handle is
moved by the user dragging the handle directly, and False if the handle was moved
by the user dragging the node associated with that handle.
by_user: Whether the event was performed by the user or through a plugin.
"""
net_index: int
reaction_index: int
node_index: int
by_user: bool
direct: bool
@dataclass
class DidMoveReactionCenterEvent(CanvasEvent):
"""Dispatched after the reaction center is moved by the user.
Note that this is not triggered if the center moved automatically due to nodes moving.
Attributes:
net_index: The network index.
reaction_index: The reaction index.
offset: The amount moved.
dragged: Whether the center is moved by the user dragging (it could have been through the
form).
"""
net_index: int
reaction_index: int
offset: Vec2
dragged: bool
@dataclass
class DidAddNodeEvent(CanvasEvent):
"""Called after a node has been added.
Attributes:
node: The index of the node that was added.
Note:
This event triggers only if the user has performed a drag operation, and not, for example,
if the user moved a node in the edit panel.
TODO in the documentation that this event and related ones (and DidDelete-) are emitted before
controller.end_group() is called. As an alternative, maybe create a call_after() function
similar to wxPython? it should be called in OnIdle() or Refresh()
"""
node: int
@dataclass
class DidDeleteEvent(CanvasEvent):
"""Called after a node has been deleted.
Attributes:
        node_indices: The set of nodes (indices) that were deleted.
        reaction_indices: The set of reactions (indices) that were deleted.
        compartment_indices: The set of compartments (indices) that were deleted.
"""
node_indices: Set[int]
reaction_indices: Set[int]
compartment_indices: Set[int]
@dataclass
class DidAddReactionEvent(CanvasEvent):
"""Called after a reaction has been added.
Attributes:
        index: The index of the reaction that was added.
        sources: The indices of the source nodes of the reaction.
        targets: The indices of the target nodes of the reaction.
"""
index: int
sources: List[int]
targets: List[int]
@dataclass
class DidAddCompartmentEvent(CanvasEvent):
"""Called after a compartment has been added.
Attributes:
        index: The index of the compartment that was added.
"""
index: int
@dataclass
class DidChangeCompartmentOfNodesEvent(CanvasEvent):
"""Called after one or more nodes have been moved to a new compartment.
Attributes:
node_indices: The list of node indices that changed compartment.
old_compi: The old compartment index, -1 for base compartment.
new_compi: The new compartment index, -1 for base compartment.
by_user: Whether this event was triggered directly by a user action, as opposed to by a
plugin.
"""
node_indices: List[int]
old_compi: int
new_compi: int
by_user: bool = True
@dataclass
class DidModifyNodesEvent(CanvasEvent):
"""Called after a property of one or more nodes has been modified, excluding position or size.
For position and size events, see DidMove...Event() and DidResize...Event()
Attributes:
        indices: The indices of the nodes that were modified.
by_user: Whether this event was triggered directly by a user action and not, for example,
by a plugin.
"""
indices: List[int]
by_user: bool = True
@dataclass
class DidModifyReactionEvent(CanvasEvent):
"""Called after a property of one or more nodes has been modified, excluding position.
Attributes:
indices: The indices of the list of reactions that were modified.
by_user: Whether this event was triggered directly by a user action and not, for example,
by a plugin.
"""
indices: List[int]
by_user: bool = True
@dataclass
class DidModifyCompartmentsEvent(CanvasEvent):
"""Called after a property of one or more compartments has been modified, excluding position or size.
For position and size events, see DidMove...Event() and DidResize...Event()
Attributes:
        indices: The indices of the compartments that were modified.
"""
indices: List[int]
@dataclass
class DidUndoEvent(CanvasEvent):
"""Called after an undo action is done."""
by_user: bool = True
@dataclass
class DidRedoEvent(CanvasEvent):
"""Called after a redo action is done."""
by_user: bool = True
@dataclass
class DidPaintCanvasEvent(CanvasEvent):
"""Called after the canvas has been painted.
Attributes:
gc: The graphics context of the canvas.
"""
gc: wx.GraphicsContext
EventCallback = Callable[[CanvasEvent], None]
class HandlerNode:
next_: Optional['HandlerNode']
prev: Optional['HandlerNode']
handler: EventCallback
    def __init__(self, handler: EventCallback):
        self.handler = handler
        self.next_ = None
        self.prev = None  # must be initialized so that HandlerChain.remove() works on the head node
# Maps CanvasElement to a dict that maps events to handler nodes
handler_map: Dict[int, Tuple['HandlerChain', HandlerNode]] = dict()
# Maps event to a chain of handlers
event_chains: DefaultDict[Type[CanvasEvent], 'HandlerChain'] = defaultdict(lambda: HandlerChain())
handler_id = 0
class HandlerChain:
head: Optional[HandlerNode]
tail: Optional[HandlerNode]
def __init__(self):
self.head = None
self.tail = None
self.it_cur = None
def remove(self, node: HandlerNode):
if node.prev is not None:
node.prev.next_ = node.next_
else:
self.head = node.next_
if node.next_ is not None:
node.next_.prev = node.prev
else:
self.tail = node.prev
def __iter__(self):
self.it_cur = self.head
return self
def __next__(self):
if self.it_cur is None:
raise StopIteration()
ret = self.it_cur.handler
self.it_cur = self.it_cur.next_
return ret
def append(self, handler: EventCallback) -> HandlerNode:
node = HandlerNode(handler)
if self.head is None:
assert self.tail is None
self.head = self.tail = node
else:
assert self.tail is not None
node.prev = self.tail
self.tail.next_ = node
self.tail = node
return node
def bind_handler(evt_cls: Type[CanvasEvent], callback: EventCallback) -> int:
global handler_id
ret = handler_id
chain = event_chains[evt_cls]
hnode = chain.append(callback)
handler_map[ret] = (chain, hnode)
handler_id += 1
return ret
def unbind_handler(handler_id: int):
chain, hnode = handler_map[handler_id]
chain.remove(hnode)
del handler_map[handler_id]
def post_event(evt: CanvasEvent):
'''
# debugging
if not str(evt)[:14]=="DidPaintCanvas":
caller = getframeinfo(stack()[1][0])
print("%s:%d - %s" % (caller.filename, caller.lineno, str(evt)))
'''
for callback in iter(event_chains[type(evt)]):
callback(evt)
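# --- Illustrative usage sketch (not part of the original module) ---
# A minimal example of the bind/post API defined above, assuming this module is
# importable as rkviewer.events. The handler and the printed text are made up.
def _demo_event_usage() -> None:
    """Bind a handler, post an event, then unbind the handler."""
    def on_selection(evt: CanvasEvent) -> None:
        assert isinstance(evt, SelectionDidUpdateEvent)
        print('selected nodes:', evt.node_indices)

    hid = bind_handler(SelectionDidUpdateEvent, on_selection)
    post_event(SelectionDidUpdateEvent(node_indices={0, 1},
                                       reaction_indices=set(),
                                       compartment_indices=set()))
    unbind_handler(hid)  # handlers should be unbound when no longer needed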
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/rkviewer/events.py
| 0.896686 | 0.599339 |
events.py
|
pypi
|
from dataclasses import dataclass
from enum import Enum
from rkviewer.config import Color
import wx
import abc
import copy
from typing import Any, List, Optional, Set, Tuple
from .canvas.geometry import Vec2
from .canvas.data import Compartment, Node, Reaction, ModifierTipStyle, CompositeShape
class IController(abc.ABC):
"""The inteface class for a controller
The abc.ABC (Abstract Base Class) is used to enforce the MVC interface more
strictly.
    The methods with names beginning with Try- are usually called by the RKView after
some user input. If the action tried in such a method succeeds, the Controller
should request the view to be redrawn; otherwise, an error message might be shown.
"""
@abc.abstractmethod
def group_action(self) -> Any:
pass
@abc.abstractmethod
def undo(self) -> bool:
"""Try to undo last operation"""
pass
@abc.abstractmethod
def redo(self) -> bool:
"""Try to redo last undone operation"""
pass
@abc.abstractmethod
def clear_network(self, neti: int):
pass
@abc.abstractmethod
def add_node_g(self, neti: int, node: Node) -> int:
"""Try to add the given Node to the canvas. Return index of the node added."""
pass
@abc.abstractmethod
def add_compartment_g(self, neti: int, compartment: Compartment) -> int:
"""Try to add the given Compartment to the canvas. Return index of added comp."""
pass
@abc.abstractmethod
def add_alias_node(self, neti: int, original_index: int, pos: Vec2, size: Vec2) -> int:
pass
@abc.abstractmethod
def alias_for_reaction(self, neti: int, reai: int, nodei: int, pos: Vec2, size: Vec2):
"""See Iodine aliasForReaction for documentation"""
pass
@abc.abstractmethod
def move_node(self, neti: int, nodei: int, pos: Vec2, allowNegativeCoords: bool = False) -> bool:
"""Try to move the give node. TODO only accept node ID and new location"""
pass
@abc.abstractmethod
def set_node_size(self, neti: int, nodei: int, size: Vec2) -> bool:
"""Try to move the give node. TODO only accept node ID and new location"""
pass
@abc.abstractmethod
def rename_node(self, neti: int, nodei: int, new_id: str) -> bool:
pass
@abc.abstractmethod
def set_node_floating_status(self, neti: int, nodei: int, floatingStatus: bool):
pass
@abc.abstractmethod
def set_node_locked_status(self, neti: int, nodei: int, lockedNode: bool):
pass
@abc.abstractmethod
def set_node_fill_rgb(self, neti: int, nodei: int, color: wx.Colour) -> bool:
pass
@abc.abstractmethod
def set_node_fill_alpha(self, neti: int, nodei: int, alpha: int) -> bool:
pass
@abc.abstractmethod
def set_node_border_rgb(self, neti: int, nodei: int, color: wx.Colour) -> bool:
pass
@abc.abstractmethod
def set_node_border_alpha(self, neti: int, nodei: int, alpha: int) -> bool:
pass
@abc.abstractmethod
def set_node_border_width(self, neti: int, nodei: int, width: float) -> bool:
pass
@abc.abstractmethod
def rename_reaction(self, neti: int, reai: int, new_id: str) -> bool:
pass
@abc.abstractmethod
def set_reaction_line_thickness(self, neti: int, reai: int, thickness: float) -> bool:
pass
@abc.abstractmethod
def set_reaction_fill_rgb(self, neti: int, reai: int, color: wx.Colour) -> bool:
pass
@abc.abstractmethod
def set_reaction_fill_alpha(self, neti: int, reai: int, alpha: int) -> bool:
pass
@abc.abstractmethod
def set_reaction_ratelaw(self, neti: int, reai: int, ratelaw: str) -> bool:
pass
@abc.abstractmethod
def set_reaction_center(self, neti: int, reai: int, center_pos: Optional[Vec2]):
pass
@abc.abstractmethod
def set_reaction_modifiers(self, neti: int, reai: int, modifiers: List[int]):
pass
@abc.abstractmethod
def get_reaction_modifiers(self, neti: int, reai: int) -> List[int]:
pass
@abc.abstractmethod
def set_modifier_tip_style(self, neti: int, reai: int, style: ModifierTipStyle):
pass
@abc.abstractmethod
def get_modifier_tip_style(self, neti: int, reai: int) -> ModifierTipStyle:
pass
@abc.abstractmethod
def delete_node(self, neti: int, nodei: int) -> bool:
pass
@abc.abstractmethod
def delete_reaction(self, neti: int, reai: int) -> bool:
pass
@abc.abstractmethod
def delete_compartment(self, neti: int, compi: int) -> bool:
pass
@abc.abstractmethod
def set_src_node_stoich(self, neti: int, reai: int, nodei: int, stoich: float) -> bool:
pass
@abc.abstractmethod
def get_dest_node_stoich(self, neti: int, reai: int, nodei: int) -> float:
pass
@abc.abstractmethod
def set_dest_node_stoich(self, neti: int, reai: int, nodei: int, stoich: float) -> bool:
pass
@abc.abstractmethod
def get_src_node_stoich(self, neti: int, reai: int, nodei: int) -> float:
pass
@abc.abstractmethod
def set_src_node_handle(self, neti: int, reai: int, nodei: int, pos: Vec2):
pass
@abc.abstractmethod
def set_dest_node_handle(self, neti: int, reai: int, nodei: int, pos: Vec2):
pass
@abc.abstractmethod
def set_center_handle(self, neti: int, reai: int, pos: Vec2):
pass
@abc.abstractmethod
def get_src_node_handle(self, neti: int, reai: int, nodei: int) -> Vec2:
pass
@abc.abstractmethod
def get_dest_node_handle(self, neti: int, reai: int, nodei: int) -> Vec2:
pass
@abc.abstractmethod
def get_center_handle(self, neti: int, reai: int) -> Vec2:
pass
@abc.abstractmethod
def get_list_of_src_indices(self, neti: int, reai: int) -> List[int]:
pass
@abc.abstractmethod
def get_list_of_dest_indices(self, neti: int, reai: int) -> List[int]:
pass
@abc.abstractmethod
def get_reactions_as_reactant(self, neti: int, nodei: int) -> Set[int]:
pass
@abc.abstractmethod
def get_reactions_as_product(self, neti: int, nodei: int) -> Set[int]:
pass
@abc.abstractmethod
def get_list_of_node_ids(self, neti: int) -> List[str]:
"""Try getting the list of node IDs"""
pass
@abc.abstractmethod
def get_node_indices(self, neti: int) -> Set[int]:
pass
@abc.abstractmethod
def get_reaction_indices(self, neti: int) -> Set[int]:
pass
@abc.abstractmethod
def get_compartment_indices(self, neti: int) -> Set[int]:
pass
@abc.abstractmethod
def get_list_of_nodes(self, neti: int) -> List[Node]:
pass
@abc.abstractmethod
def get_list_of_reactions(self, neti: int) -> List[Reaction]:
pass
@abc.abstractmethod
def get_list_of_compartments(self, neti: int) -> List[Compartment]:
pass
@abc.abstractmethod
def rename_compartment(self, neti: int, compi: int, new_id: str):
pass
@abc.abstractmethod
def move_compartment(self, neti: int, compi: int, pos: Vec2):
pass
@abc.abstractmethod
def set_compartment_size(self, neti: int, compi: int, size: Vec2):
pass
@abc.abstractmethod
def set_compartment_fill(self, neti: int, compi: int, fill: wx.Colour):
pass
@abc.abstractmethod
def set_compartment_border(self, neti: int, compi: int, fill: wx.Colour):
pass
@abc.abstractmethod
def set_compartment_border_width(self, neti: int, compi: int, width: float):
pass
@abc.abstractmethod
def set_compartment_volume(self, neti: int, compi: int, volume: float):
pass
@abc.abstractmethod
def set_compartment_of_node(self, neti: int, nodei: int, compi: int):
pass
@abc.abstractmethod
def get_compartment_of_node(self, neti: int, nodei: int) -> int:
pass
@abc.abstractmethod
def get_nodes_in_compartment(self, neti: int, cmpi: int) -> List[int]:
pass
@abc.abstractmethod
def get_node_index(self, neti: int, node_id: str) -> int:
pass
@abc.abstractmethod
def get_node_id(self, neti: int, nodei: int) -> str:
pass
@abc.abstractmethod
def get_reaction_index(self, neti: int, rxn_id: str) -> int:
pass
@abc.abstractmethod
def set_reaction_bezier_curves(self, neti: int, reai: int, bezierCurves: bool) -> Reaction:
pass
@abc.abstractmethod
def add_reaction_g(self, neti: int, reaction: Reaction) -> int:
pass
@abc.abstractmethod
def get_node_by_index(self, neti: int, nodei: int) -> Node:
pass
@abc.abstractmethod
def get_reaction_by_index(self, neti: int, reai: int) -> Reaction:
pass
@abc.abstractmethod
def get_compartment_by_index(self, neti: int, compi: int) -> Compartment:
pass
@abc.abstractmethod
def update_view(self):
"""Immediately update the view with using latest model."""
pass
@abc.abstractmethod
def dump_network(self, neti: int):
pass
@abc.abstractmethod
def load_network(self, json_obj: Any) -> int:
pass
@abc.abstractmethod
def new_network(self):
"""Create a new network.
        Since there is only one tab for now, this merely clears the current network. Also, this
        does not clear the undo stack.
"""
pass
@abc.abstractmethod
def set_application_position(self, pos: wx.Point):
pass
@abc.abstractmethod
def get_application_position(self) -> wx.Point:
pass
@abc.abstractmethod
def get_composite_shape_list(self, neti: int) -> List[CompositeShape]:
pass
@abc.abstractmethod
def get_composite_shape_at(self, neti: int, shapei: int) -> List[CompositeShape]:
pass
@abc.abstractmethod
def get_node_shape(self, neti: int, nodei: int) -> CompositeShape:
pass
@abc.abstractmethod
def get_node_shape_index(self, neti: int, nodei: int) -> int:
pass
@abc.abstractmethod
def set_node_shape_index(self, neti: int, nodei: int, shapei: int):
pass
@abc.abstractmethod
def set_node_primitive_property(self, neti: int, nodei: int, primitive_index: int, prop_name: str, prop_value):
pass
class IView(abc.ABC):
"""The inteface class for a controller
The abc.ABC (Abstract Base Class) is used to enforce the MVC interface more
strictly.
"""
@abc.abstractmethod
def bind_controller(self, controller: IController):
"""Bind the controller. This needs to be called after a controller is
created and before any other method is called.
"""
pass
@abc.abstractmethod
def main_loop(self):
"""Run the main loop. This is blocking right now. This may be modified to
become non-blocking in the future if required.
"""
pass
@abc.abstractmethod
def update_all(self, nodes, reactions, compartments):
"""Update all the graph objects, and redraw everything at the end"""
pass
class ModelError(Exception):
"""Base class for other exceptions"""
pass
class IDNotFoundError(ModelError):
pass
class IDRepeatError(ModelError):
pass
class NodeNotFreeError(ModelError):
pass
class NetIndexError(ModelError):
pass
class ReactionIndexError(ModelError):
pass
class NodeIndexError(ModelError):
pass
class CompartmentIndexError(ModelError):
pass
class StoichError(ModelError):
pass
class StackEmptyError(ModelError):
pass
class JSONError(ModelError):
pass
class FileError(ModelError):
pass
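# --- Illustrative sketch (not part of the original module) ---
# Shows how view-side code might consume the IController interface; 'controller'
# is assumed to be a concrete implementation supplied by the application at runtime.
def _print_network_summary(controller: IController, neti: int = 0) -> None:
    """Print a one-line summary of a network using only IController methods."""
    nodes = controller.get_list_of_nodes(neti)
    reactions = controller.get_list_of_reactions(neti)
    compartments = controller.get_list_of_compartments(neti)
    print('network {}: {} nodes, {} reactions, {} compartments'.format(
        neti, len(nodes), len(reactions), len(compartments)))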
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/rkviewer/mvc.py
| 0.896614 | 0.340184 |
mvc.py
|
pypi
|
import wx
import abc
import math
from typing import Collection, Generic, List, Optional, Set, TypeVar, Callable
from .geometry import Rect, Vec2, rotate_unit
from .data import Node, Reaction
def get_nodes_by_idx(nodes: List[Node], indices: Collection[int]):
"""Simple helper that maps the given list of indices to their corresponding nodes."""
ret = [n for n in nodes if n.index in indices]
assert len(ret) == len(indices)
return ret
def get_rxns_by_idx(rxns: List[Reaction], indices: Collection[int]):
"""Simple helper that maps the given list of indices to their corresponding rxns."""
ret = [n for n in rxns if n.index in indices]
assert len(ret) == len(indices)
return ret
def get_nodes_by_ident(nodes: List[Node], ids: Collection[str]):
"""Simple helper that maps the given list of IDs to their corresponding nodes."""
ret = [n for n in nodes if n.id in ids]
assert len(ret) == len(ids)
return ret
def draw_rect(gc: wx.GraphicsContext, rect: Rect, *, fill: Optional[wx.Colour] = None,
border: Optional[wx.Colour] = None, border_width: float = 1,
fill_style=wx.BRUSHSTYLE_SOLID, border_style=wx.PENSTYLE_SOLID, corner_radius: float = 0):
"""Draw a rectangle with the given graphics context.
Either fill or border must be specified to avoid drawing an entirely transparent rectangle.
Args:
gc: The graphics context.
rect: The rectangle to draw.
fill: If specified, the fill color of the rectangle.
border: If specified, the border color of the rectangle.
border_width: The width of the borders. Defaults to 1. This cannot be 0 when border
is specified.
corner_radius: The corner radius of the rounded rectangle. Defaults to 0.
"""
assert not(fill is None and border is None), \
"Both 'fill' and 'border' are None, but at least one of them should be provided"
assert not (border is not None and border_width == 0), \
"'border_width' cannot be 0 when 'border' is specified"
x, y = rect.position
width, height = rect.size
pen: wx.Pen
brush: wx.Brush
# set up brush and pen if applicable
if fill is not None:
brush = gc.CreateBrush(wx.Brush(fill, fill_style))
else:
brush = wx.TRANSPARENT_BRUSH
if border is not None:
pen = gc.CreatePen(wx.GraphicsPenInfo(border).Width(border_width).Style(border_style))
else:
pen = wx.TRANSPARENT_PEN
gc.SetPen(pen)
gc.SetBrush(brush)
# draw rect
gc.DrawRoundedRectangle(x, y, width, height, corner_radius)
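# --- Illustrative usage sketch (not part of the original module) ---
# Assumes it is called from inside a wx.EVT_PAINT handler of some window; the
# colors and geometry below are arbitrary example values.
def _demo_draw_rect(window: wx.Window) -> None:
    """Draw a filled, bordered, rounded rectangle with draw_rect()."""
    dc = wx.PaintDC(window)
    gc = wx.GraphicsContext.Create(dc)
    draw_rect(gc,
              Rect(Vec2(10, 10), Vec2(80, 40)),
              fill=wx.Colour(200, 220, 255),
              border=wx.BLACK,
              border_width=2,
              corner_radius=4)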
"""Classes for the observer-Subject interface. See https://en.wikipedia.org/wiki/Observer_pattern
"""
T = TypeVar('T')
# TODO add SetObserver, which allows delaying callback and combining multiple notify calls.
# e.g. with group_action()
class Observer(abc.ABC, Generic[T]):
"""Observer abstract base class; encapsulates object of type T."""
def __init__(self, update_callback: Callable[[T], None]):
self.update = update_callback
class Subject(Generic[T]):
"""Subject abstract base class; encapsulates object of type T."""
_observers: List[Observer]
_item: T
def __init__(self, item):
self._observers = list()
self._item = item
def attach(self, observer: Observer):
"""Attach an observer."""
self._observers.append(observer)
def detach(self, observer: Observer):
"""Detach an observer."""
self._observers.remove(observer)
def notify(self) -> None:
"""Trigger an update in each Subject."""
for observer in self._observers:
observer.update(self._item)
class SetSubject(Subject[Set[T]]):
"""Subject class that encapsulates a set."""
def __init__(self, *args):
super().__init__(set(*args))
def item_copy(self) -> Set:
"""Return a copy of the encapsulated set."""
return set(self._item)
def contains(self, val: T) -> bool:
return val in self._item
def set_item(self, item: Set):
"""Update the value of the item, notifying observers if the new value differs from the old.
"""
equal = self._item == item
self._item = item
if not equal:
self.notify()
def remove(self, el: T):
"""Remove an element from the set, notifying observers if the set changed."""
equal = el not in self._item
self._item.remove(el)
if not equal:
self.notify()
def add(self, el: T):
"""Add an element from the set, notifying observers if the set changed."""
equal = el in self._item
self._item.add(el)
if not equal:
self.notify()
def union(self, other: Set[T]):
prev_len = len(self._item)
self._item |= other
if len(self._item) != prev_len:
self.notify()
def intersect(self, other: Set[T]):
prev_len = len(self._item)
self._item &= other
if len(self._item) != prev_len:
self.notify()
def __len__(self):
return len(self._item)
def __contains__(self, val: T):
return val in self._item
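# --- Illustrative usage sketch (not part of the original module) ---
# Demonstrates the Observer/SetSubject pair defined above; the printed text is made up.
def _demo_set_subject() -> None:
    """Attach an Observer to a SetSubject and get notified when the set actually changes."""
    selected = SetSubject({1, 2})
    selected.attach(Observer(lambda items: print('selection is now', items)))
    selected.add(3)     # set changed -> observers notified
    selected.add(3)     # no change -> no notification
    selected.remove(1)  # set changed -> observers notified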
# the higher the value, the closer the src handle is to the centroid. 1/2 for halfway in-between
# update also for prd handle
CENTER_RATIO = 2/3
DUPLICATE_RATIO = 3/4
DUPLICATE_ROT = -math.pi/3
def default_handle_positions(centroid: Vec2, reactants: List[Node], products: List[Node]):
src_handle_pos = reactants[0].rect.center_point * (1 - CENTER_RATIO) + centroid * CENTER_RATIO
handle_positions = [(n.rect.center_point + centroid) / 2 for n in reactants]
react_indices = [n.index for n in reactants]
for prod in products:
p_rect = prod.rect
if prod.index in react_indices:
# If also a reactant, shift the handle to not have the curves completely overlap
diff = centroid - p_rect.center_point
length = diff.norm * DUPLICATE_RATIO
new_dir = rotate_unit(diff, DUPLICATE_ROT)
handle_positions.append(p_rect.center_point + new_dir * length)
else:
#handle_positions.append((p_rect.center_point + centroid) / 2)
prd_handle_pos = p_rect.center_point*(1-CENTER_RATIO) + centroid*CENTER_RATIO
handle_positions.append(prd_handle_pos)
return [src_handle_pos] + handle_positions
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/rkviewer/canvas/utils.py
| 0.868841 | 0.558146 |
utils.py
|
pypi
|
# pylint: disable=maybe-no-member
from sortedcontainers.sortedlist import SortedKeyList
from rkviewer.canvas.elements import CanvasElement, CompartmentElt, NodeElement
from rkviewer.canvas.state import cstate
from rkviewer.config import Color
import wx
import abc
from typing import Callable, List, cast
from .geometry import Vec2, Rect, clamp_point, pt_in_rect
from .utils import draw_rect
class CanvasOverlay(abc.ABC):
"""Abstract class for a fixed-position overlay within the canvas.
Attributes:
        hovering: Used to set whether the mouse is currently hovering over the overlay.
Note:
Overlays use device positions since it makes the most sense for these static items.
"""
hovering: bool
_size: Vec2 #: Private attribute for the 'size' property.
_position: Vec2 #: Private attribute for the 'position' property.
@property
def size(self) -> Vec2:
"""Return the size (i.e. of a rectangle) of the overlay."""
return self._size
@property
def position(self) -> Vec2:
"""Return the position (i.e. of the top-left corner) of the overlay."""
return self._position
@position.setter
def position(self, val: Vec2):
self._position = val
@abc.abstractmethod
def DoPaint(self, gc: wx.GraphicsContext):
"""Re-paint the overlay."""
pass
@abc.abstractmethod
def OnLeftDown(self, device_pos: Vec2):
"""Trigger a mouse left button down event on the overlay."""
pass
@abc.abstractmethod
def OnLeftUp(self, device_pos: Vec2):
"""Trigger a mouse left button up event on the overlay."""
pass
@abc.abstractmethod
def OnMotion(self, device_pos: Vec2, is_down: bool):
"""Trigger a mouse motion event on the overlay."""
pass
# TODO refactor this as a CanvasElement and delete this file
class Minimap(CanvasOverlay):
"""The minimap class that derives from CanvasOverlay.
Attributes:
Callback: Type of the callback function called when the position of the minimap changes.
window_pos: Position of the canvas window, as updated by canvas.
window_size: Size of the canvas window, as updated by canvas.
device_pos: The device position (i.e. seen on screen) of the top left corner. Used for
determining user click/drag offset. It is important to use the device_pos,
since it does not change, whereas window_pos (logical position) changes based
on scrolling. This coupled with delays in update causes very noticeable jitters
when dragging.
elements: The list of elements updated by canvas.
"""
Callback = Callable[[Vec2], None]
window_pos: Vec2
window_size: Vec2
device_pos: Vec2
elements: SortedKeyList
_position: Vec2 #: Unscrolled, i.e. logical position of the minimap. This varies by scrolling.
_realsize: Vec2 #: Full size of the canvas
_width: int
_callback: Callback #: the function called when the minimap position changes
_dragging: bool
_drag_rel: Vec2
"""Position of the mouse relative to the top-left corner of the visible window handle on
minimap, the moment when dragging starts. We keep this relative distance invariant while
dragging. This is used because scrolling is discrete, so we cannot add relative distance
dragged since errors will accumulate.
"""
def __init__(self, *, pos: Vec2, device_pos: Vec2, width: int, realsize: Vec2, window_pos: Vec2 = Vec2(),
window_size: Vec2, pos_callback: Callback):
"""The constructor of the minimap
Args:
pos: The position of the minimap relative to the top-left corner of the canvas window.
            device_pos: The device (on-screen) position of the minimap's top-left corner; stays fixed.
            width: The width of the minimap. The height is set to preserve the canvas aspect ratio.
realsize: The actual, full size of the canvas.
window_pos: The starting position of the window.
window_size: The starting size of the window.
pos_callback: The callback function called when the minimap window changes position.
"""
self._position = pos
self.device_pos = device_pos # should stay fixed
self._width = width
self.realsize = realsize # use the setter to set the _size as well
self.window_pos = window_pos
self.window_size = window_size
self.elements = SortedKeyList()
self._callback = pos_callback
self._dragging = False
self._drag_rel = Vec2()
self.hovering = False
@property
def realsize(self):
"""The actual, full size of the canvas, including those not visible on screen."""
return self._realsize
@realsize.setter
def realsize(self, val: Vec2):
self._realsize = val
self._size = Vec2(self._width, self._width * val.y / val.x)
@property
def dragging(self):
"""Whether the user is current dragging on the minimap window."""
return self._dragging
def DoPaint(self, gc: wx.GraphicsContext):
# TODO move this somewhere else
BACKGROUND_USUAL = wx.Colour(155, 155, 155, 50)
FOREGROUND_USUAL = wx.Colour(255, 255, 255, 100)
BACKGROUND_FOCUS = wx.Colour(155, 155, 155, 80)
FOREGROUND_FOCUS = wx.Colour(255, 255, 255, 130)
FOREGROUND_DRAGGING = wx.Colour(255, 255, 255, 200)
background = BACKGROUND_FOCUS if (self.hovering or self._dragging) else BACKGROUND_USUAL
foreground = FOREGROUND_USUAL
if self._dragging:
foreground = FOREGROUND_DRAGGING
elif self.hovering:
foreground = FOREGROUND_FOCUS
scale = self._size.x / self._realsize.x
draw_rect(gc, Rect(self.position, self._size), fill=background)
my_botright = self.position + self._size
win_pos = self.window_pos * scale + self.position
win_size = self.window_size * scale
# clip window size
span = my_botright - win_pos
win_size = win_size.reduce2(min, span)
# draw visible rect
draw_rect(gc, Rect(win_pos, win_size), fill=foreground)
for el in self.elements:
pos: Vec2
size: Vec2
fc: wx.Colour
if isinstance(el, NodeElement):
el = cast(NodeElement, el)
pos = el.node.position * scale + self.position
size = el.node.size * scale
fc = (el.node.fill_color or Color(128, 128, 128)).to_wxcolour()
elif isinstance(el, CompartmentElt):
el = cast(CompartmentElt, el)
pos = el.compartment.position * scale + self.position
size = el.compartment.size * scale
fc = el.compartment.fill
else:
continue
color = wx.Colour(fc.Red(), fc.Green(), fc.Blue(), 100)
draw_rect(gc, Rect(pos, size), fill=color)
def OnLeftDown(self, device_pos: Vec2):
if not self._dragging:
scale = self._size.x / self._realsize.x
pos = device_pos - self.device_pos
if pt_in_rect(pos, Rect(self.window_pos * scale, self.window_size * scale)):
self._dragging = True
self._drag_rel = pos - self.window_pos * scale
else:
topleft = pos - self.window_size * scale / 2
self._callback(topleft / scale * cstate.scale)
def OnLeftUp(self, _: Vec2):
self._dragging = False
def OnMotion(self, device_pos: Vec2, is_down: bool):
scale = self._size.x / self._realsize.x
pos = device_pos - self.device_pos
pos = clamp_point(pos, Rect(Vec2(), self.size))
if is_down:
if not self._dragging:
topleft = pos - self.window_size * scale / 2
self._callback(topleft / scale * cstate.scale)
else:
actual_pos = pos - self._drag_rel
self._callback(actual_pos / scale * cstate.scale)
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/rkviewer/canvas/overlays.py
| 0.783864 | 0.406302 |
overlays.py
|
pypi
|
import abc
from dataclasses import dataclass
# pylint: disable=maybe-no-member
from enum import Enum, auto
from typing import List, Optional
import wx
from rkviewer.canvas.data import Node
from rkviewer.canvas.geometry import Vec2
from rkviewer.events import (DidAddCompartmentEvent, DidAddNodeEvent,
DidAddReactionEvent,
DidChangeCompartmentOfNodesEvent,
DidCommitDragEvent, DidDeleteEvent,
DidModifyCompartmentsEvent, DidModifyNodesEvent,
DidModifyReactionEvent, DidMoveBezierHandleEvent,
DidMoveCompartmentsEvent, DidMoveNodesEvent,
DidPaintCanvasEvent, DidRedoEvent,
DidResizeCompartmentsEvent, DidResizeNodesEvent,
DidUndoEvent, SelectionDidUpdateEvent)
class PluginCategory(Enum):
"""The category of a plugin. Determines in which tab the plugin is placed on the toolbar."""
# MAIN = 0
ANALYSIS = auto()
APPEARANCE = auto()
MATH = auto()
MODELS = auto()
UTILITIES = auto()
VISUALIZATION = auto()
MISC = auto()
CATEGORY_NAMES = {
PluginCategory.ANALYSIS: 'Analysis',
PluginCategory.APPEARANCE: 'Appearance',
PluginCategory.MATH: 'Math',
PluginCategory.MODELS: 'Models',
PluginCategory.UTILITIES: 'Utilities',
PluginCategory.VISUALIZATION: 'Visualization',
PluginCategory.MISC: 'Misc',
}
@dataclass
class PluginMetadata:
"""
Metadata for the plugin.
Attributes:
name: The full name of the plugin.
author: The author of the plugin.
version: The version string of the plugin.
short_desc: A short description of the plugin. This is displayed as a tooltip, introduction,
etc.
long_desc: A long, detailed description of the plugin. This is shown in the plugin details
page, as a comprehensive description of what this plugin does. This string will
be rendered as HTML.
category: The category of the plugin.
short_name: If specified, the abbreviated name for situations where the width is small (e.g.
the toolbar).
icon: The bitmap for the plugin's icon. Leave as None for a generic default icon.
"""
name: str
author: str
version: str
short_desc: str
long_desc: str
category: PluginCategory
short_name: Optional[str] = None
icon: Optional[wx.Bitmap] = None
class PluginType(Enum):
"""
Enumeration of plugin types, dictating how a plugin would appear in the application.
NULL: Null enumeration. There should not be a plugin instance with this type.
COMMAND: A command plugin. See CommandPlugin for more details.
WINDOWED: A windowed plugin. See WindowedPlugin for more details.
"""
NULL = 0
COMMAND = 1
WINDOWED = 2
class Plugin:
"""
The base class for a Plugin.
The user should not directly instantiate this but rather one of its subclasses,
e.g. CommandPlugin.
"""
metadata: PluginMetadata
ptype: PluginType
def __init__(self, ptype: PluginType):
"""
        Create a Plugin object.
        Args:
            ptype (PluginType): The type of plugin being created.
"""
self.ptype = ptype
def get_settings_schema(self):
"""Return the setting schema for this plugin.
TODO document
"""
return None
def on_did_add_node(self, evt: DidAddNodeEvent):
"""Called after a node is added."""
pass
def on_did_move_nodes(self, evt: DidMoveNodesEvent):
"""Called as nodes are moved.
Note:
            This is called many times, continuously as the nodes are being moved. Therefore, you
should not call any API functions that would modify the state of the model (including
api.group_action()), as otherwise you would have registered hundreds of actions in
the undo/redo stack.
"""
pass
def on_did_resize_nodes(self, evt: DidResizeNodesEvent):
"""Called after nodes are resized."""
pass
def on_did_add_compartment(self, evt: DidAddCompartmentEvent):
"""Called after a compartment is added."""
pass
def on_did_resize_compartments(self, evt: DidResizeCompartmentsEvent):
"""Called after compartments are resized."""
pass
def on_did_move_compartments(self, evt: DidMoveCompartmentsEvent):
"""Called as compartments are moved.
Note:
See on_did_move_nodes() for cautious notes on usage.
"""
pass
def on_did_add_reaction(self, evt: DidAddReactionEvent):
"""Called after a reaction is added."""
pass
def on_did_undo(self, evt: DidUndoEvent):
"""Called after an undo operation is performed."""
pass
def on_did_redo(self, evt: DidRedoEvent):
"""Called after an redo operation is performed."""
pass
def on_did_delete(self, evt: DidDeleteEvent):
"""Called after items (nodes, reactions, and/or compartments) are deleted."""
pass
def on_did_commit_drag(self, evt: DidCommitDragEvent):
"""Called after a dragging operation has been committed to the model."""
pass
def on_did_paint_canvas(self, evt: DidPaintCanvasEvent):
"""Called each time the canvas is painted."""
pass
def on_selection_did_change(self, evt: SelectionDidUpdateEvent):
"""Called after the set of selected items have changed."""
pass
def on_did_move_bezier_handle(self, evt: DidMoveBezierHandleEvent):
"""Called as the Bezier handles are being moved.
Note:
See on_did_move_nodes() for cautious notes on usage.
"""
pass
def on_did_modify_nodes(self, evt: DidModifyNodesEvent):
"""Called after properties of nodes (other than position/size) have been modified"""
pass
def on_did_modify_reactions(self, evt: DidModifyReactionEvent):
"""Called after properties of reactions have been modified."""
pass
def on_did_modify_compartments(self, evt: DidModifyCompartmentsEvent):
"""Called after properties of compartments (other than position/size) have been modified"""
pass
def on_did_change_compartment_of_nodes(self, evt: DidChangeCompartmentOfNodesEvent):
"""Called after the compartment that some nodes are in has changed."""
pass
class CommandPlugin(Plugin, abc.ABC):
"""Base class for simple plugins that is essentially one single command.
One may think of a CommandPlugin as (obviously) a command, or a sort of macro in the simpler
cases. The user may invoke the command defined when they click on the associated menu item
under the "Plugins" menu, or they may be able to use a keybaord shortcut, once that is
implemented. To subclass CommandPlugin one needs to override `run()`.
Attributes:
metadata (PluginMetadata): metadata information of plugin.
"""
def __init__(self):
super().__init__(PluginType.COMMAND)
@abc.abstractmethod
def run(self):
"""Called when the user invokes this command manually.
This should implement whatever action/macro that this Plugin claims to execute.
"""
pass
class WindowedPlugin(Plugin, abc.ABC):
"""Base class for plugins with an associated popup window.
When the user clicks the menu item of this plugin under the "Plugins" menu, a popup dialog
is created, which may display data, and which the user may interact with. This type of
plugin is suitable to more complex or visually-based plugins, such as that utilizing a
chart or an interactive form.
To implement a subclass of WindowedPlugin, one needs to override the method `create_window`.
Attributes:
dialog: The popup dialog window that this plugin is in.
metadata: metadata information of plugin.
"""
dialog: Optional[wx.Dialog]
def __init__(self):
super().__init__(PluginType.WINDOWED)
self.dialog = None
@abc.abstractmethod
def create_window(self, dialog: wx.Window) -> wx.Window:
"""Called when the user requests a dialog window from the plugin.
        When overriding this method, either create or reuse a `wx.Window` instance to display
        in the dialog. You will likely want to bind events to the controls inside the returned
        `wx.Window` to capture user input.
"""
pass
def on_did_create_dialog(self):
"""Called after the parent dialog has been created and initialized.
Here you may change the position, style, etc. of the dialog by accessing the `self.dialog`
member.
"""
pass
def on_will_close_window(self, evt):
"""TODO not implemented"""
evt.Skip()
def on_did_focus(self):
"""TODO not implemented"""
pass
def on_did_unfocus(self):
"""TODO not implemented"""
pass
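# --- Illustrative sketch (not shipped with the package) ---
# A minimal CommandPlugin subclass, mirroring the pattern described above. The class
# name and the printed text are made up; a real plugin would typically call functions
# from rkviewer.plugin.api inside run().
class _ExampleHelloPlugin(CommandPlugin):
    metadata = PluginMetadata(
        name='HelloWorld',
        author='Example Author',
        version='0.0.1',
        short_desc='Prints a greeting.',
        long_desc='A minimal CommandPlugin used purely as a usage illustration.',
        category=PluginCategory.MISC,
    )

    def run(self):
        # Invoked when the user selects this plugin from the "Plugins" menu.
        print('Hello from a CommandPlugin!')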
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/rkviewer/plugin/classes.py
| 0.818628 | 0.243879 |
classes.py
|
pypi
|
from rkviewer.utils import opacity_mul
from rkviewer.canvas.state import ArrowTip
import wx
from typing import List, Tuple
from rkviewer.plugin import api
from rkviewer.plugin.classes import PluginMetadata, WindowedPlugin, PluginCategory
from rkviewer.plugin.api import Vec2
class DesignerWindow(wx.Window):
"""
The arrow designer window.
"""
def __init__(self, parent, arrow_tip: ArrowTip):
"""
Initialize the arrow designer window with the given starting arrow tip.
Args:
parent: The parent window.
arrow_tip: ArrowTip object defining the arrow tip used.
"""
dim = Vec2(22, 16)
self.csize = 20
size = dim * self.csize + Vec2.repeat(1)
super().__init__(parent, size=size.as_tuple())
# add 1 to range end, so that the grid rectangle side will be included.
rows = [r for r in range(0, int(size.y), self.csize)]
cols = [c for c in range(0, int(size.x), self.csize)]
self.begin_points = list()
self.end_points = list()
for r in rows:
self.begin_points.append(wx.Point2D(0, r))
self.end_points.append(wx.Point2D(size.x - 1, r))
for c in cols:
self.begin_points.append(wx.Point2D(c, 0))
self.end_points.append(wx.Point2D(c, size.y - 1))
self.handle_c = api.get_theme('handle_color')
self.hl_handle_c = api.get_theme('highlighted_handle_color')
self.handle_pen = wx.Pen(self.handle_c)
self.hl_handle_pen = wx.Pen(self.hl_handle_c)
self.handle_brush = wx.Brush(self.handle_c)
self.hl_handle_brush = wx.Brush(self.hl_handle_c)
phantom_c = opacity_mul(self.handle_c, 0.5)
self.phantom_pen = wx.Pen(phantom_c)
self.phantom_brush = wx.Brush(phantom_c)
self.arrow_tip = arrow_tip
self.radius = 12
self.radius_sq = self.radius ** 2
self.hover_idx = -1
self.dragged_point = None
self.dragging = False
self.Bind(wx.EVT_PAINT, self.OnPaint)
self.Bind(wx.EVT_LEFT_DOWN, self.OnLeftDown)
self.Bind(wx.EVT_LEFT_UP, self.OnLeftUp)
self.Bind(wx.EVT_MOTION, self.OnMotion)
self.SetDoubleBuffered(True)
def OnPaint(self, evt):
"""
Overrides wx Paint event to draw the grid, arrow, etc. as if on a canvas.
Args:
            self: the DesignerWindow being painted.
            evt: the paint event.
"""
dc = wx.PaintDC(self)
gc = wx.GraphicsContext.Create(dc)
self.draw_background(gc)
self.draw_points(gc, self.arrow_tip.points, self.radius)
evt.Skip()
def OnLeftDown(self, evt: wx.MouseEvent):
"""
Handler for mouse left button down event.
"""
if self.hover_idx != -1:
self.dragging = True
def OnLeftUp(self, evt: wx.MouseEvent):
"""
Handler for mouse left button up event.
"""
if self.dragging:
assert self.dragged_point is not None
drop = self.projected_landing(self.dragged_point)
assert self.hover_idx != -1
self.dragging = False
self.arrow_tip.points[self.hover_idx] = Vec2(drop.x // self.csize, drop.y // self.csize)
self.update_hover_idx(Vec2(evt.GetPosition()))
self.Refresh()
def update_arrow_tip(self, arrow_tip: ArrowTip):
"""
Updating the current arrow tip in designer.
Args:
            self: the DesignerWindow whose arrow tip is updated.
            arrow_tip: the modified arrow tip.
"""
self.arrow_tip = arrow_tip
self.Refresh()
def projected_landing(self, point: Vec2) -> Vec2:
"""
Return the projected discrete landing point for the cursor.
This is to make sure the user sees where the dragged arrow tip point will be dropped on
the grid.
Args:
point: The cursor position relative to the window.
Returns:
Vec2 : projected point for landing.
"""
lx = point.x - point.x % self.csize
ly = point.y - point.y % self.csize
drop_x: float
drop_y: float
if point.x - lx < self.csize / 2:
drop_x = lx
else:
drop_x = lx + self.csize
if point.y - ly < self.csize / 2:
drop_y = ly
else:
drop_y = ly + self.csize
return Vec2(drop_x, drop_y)
def OnMotion(self, evt: wx.MouseEvent):
"""
Handler for mouse motion events.
Args:
            self: the DesignerWindow receiving the event.
            evt: the mouse motion event.
"""
pos = Vec2(evt.GetPosition())
if self.dragging:
self.dragged_point = pos
else:
self.update_hover_idx(pos)
evt.Skip()
self.Refresh()
def update_hover_idx(self, pos: Vec2):
"""
Helper to update the hovered arrow tip point index.
"""
self.hover_idx = -1
for i, pt in enumerate(self.arrow_tip.points):
pt *= self.csize
if (pos - pt).norm_sq <= self.radius_sq:
self.hover_idx = i
break
def draw_background(self, gc: wx.GraphicsContext):
"""
Drawing the gridlines background.
"""
gc.SetPen(wx.Pen(wx.BLACK))
gc.StrokeLineSegments(self.begin_points, self.end_points)
def draw_points(self, gc: wx.GraphicsContext, points: List[Vec2], radius: float):
"""
Drawing points for arrow.
Args:
gc: The Graphics context to modify.
points: The points to be drawn, in counterclockwise order, with the last point being
the tip.
radius: The radius of the points.
"""
gc.SetPen(wx.Pen(wx.BLACK, 2))
gc.SetBrush(wx.Brush(wx.BLACK, wx.BRUSHSTYLE_FDIAGONAL_HATCH))
plotted = [p * self.csize for p in points] # points to be plotted
if self.dragging:
assert self.hover_idx != -1 and self.dragged_point is not None
plotted[self.hover_idx] = self.dragged_point
gc.DrawLines([wx.Point2D(*p) for p in plotted] + [wx.Point2D(*plotted[0])])
for i, p in enumerate(plotted):
if self.dragging and i == self.hover_idx:
continue
if i == 3:
# the last point is the tip, so draw it in a different color
gc.SetPen(wx.BLACK_PEN)
gc.SetBrush(wx.BLACK_BRUSH)
else:
gc.SetPen(self.handle_pen)
gc.SetBrush(self.handle_brush)
self.draw_point(gc, p, radius)
# Draw the hover point
if self.hover_idx != -1:
point = self.dragged_point if self.dragging else plotted[self.hover_idx]
assert self.hover_idx >= 0 and self.hover_idx < 4
assert point is not None
gc.SetPen(self.hl_handle_pen)
gc.SetBrush(self.hl_handle_brush)
self.draw_point(gc, point, radius)
if self.dragging:
assert self.dragged_point is not None
drop_point = self.projected_landing(self.dragged_point)
gc.SetPen(self.phantom_pen)
gc.SetBrush(self.phantom_brush)
self.draw_point(gc, drop_point, radius)
def draw_point(self, gc: wx.GraphicsContext, point: Vec2, radius: float):
"""
Drawing a single point.
Args:
gc: Graphics context to modify.
point: Point to be drawn.
radius: Radius of the point.
"""
center = point - Vec2.repeat(radius / 2)
gc.DrawEllipse(center.x, center.y, radius, radius)
class ArrowDesigner(WindowedPlugin):
"""
The ArrowDesigner plugin that subclasses WindowedPlugin.
"""
metadata = PluginMetadata(
name='ArrowDesigner',
author='Gary Geng',
version='1.0.1',
short_desc='Arrow tip designer for reactions.',
long_desc='Arrow tip designer for reactions.',
category=PluginCategory.APPEARANCE,
)
def __init__(self):
super().__init__()
self.arrow_tip = api.get_arrow_tip()
def create_window(self, dialog):
"""
Called when creating a window. Create the designer window as well as control buttons.
"""
window = wx.Window(dialog, size=(500, 500))
sizer = wx.BoxSizer(wx.VERTICAL)
self.designer = DesignerWindow(window, self.arrow_tip)
save_btn = wx.Button(window, label='Save')
save_btn.Bind(wx.EVT_BUTTON, self.OnSave)
restore_btn = wx.Button(window, label='Restore default')
restore_btn.Bind(wx.EVT_BUTTON, self.OnRestore)
sizerflags = wx.SizerFlags().Align(wx.ALIGN_CENTER_HORIZONTAL).Border(wx.TOP, 20)
sizer.Add(self.designer, sizerflags)
sizer.Add(save_btn, sizerflags)
sizer.Add(restore_btn, sizerflags)
#dialog.SetSizer(sizer)
window.SetSizer(sizer)
return window
def OnSave(self, evt):
"""
Handler for the "save" button. Save the new arrow tip.
"""
api.set_arrow_tip(self.arrow_tip)
api.refresh_canvas()
def OnRestore(self, evt):
"""
Update the arrow point to be set to default values.
"""
default_tip = api.get_default_arrow_tip()
api.set_arrow_tip(default_tip)
self.designer.update_arrow_tip(default_tip)
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/rkviewer_plugins/arrow_designer.py
| 0.923113 | 0.303887 |
arrow_designer.py
|
pypi
|
import wx
from rkviewer.plugin.classes import PluginMetadata, WindowedPlugin, PluginCategory
from rkviewer.plugin import api
from rkviewer.plugin.api import Vec2
import numpy as np
import math
class AlignCircle(WindowedPlugin):
metadata = PluginMetadata(
name='AlignCircle',
author='Evan Yip and Jin Xu',
version='1.0.0',
short_desc='Align Circle',
long_desc='Aligns the nodes into a circle',
category=PluginCategory.VISUALIZATION
)
def __init__(self):
"""
Initialize the AlignCircle
Args:
self
"""
        # Initialize the WindowedPlugin base class
super().__init__()
def create_window(self, dialog):
"""
        Create a window for arranging the selected nodes in a circle.
Args:
self
dialog
"""
# Making the window
window = wx.Panel(dialog, pos=(5, 100), size=(300, 155))
wx.StaticText(window, -1, 'Select nodes to arrange in circle',
(15, 10))
# Use default radius
wx.StaticText(window, -1, 'Use default radius:', (15, 30))
self.defaultCheck = wx.CheckBox(window, -1, pos=(150, 30))
self.defaultCheck.SetValue(True)
# Making radius setable
wx.StaticText(window, -1, 'Input desired radius:', (15, 55))
self.radiusText = wx.TextCtrl(window, -1, '0', (150, 55),
size=(120, 22))
self.radiusText.SetInsertionPoint(0)
self.radiusText.Bind(wx.EVT_TEXT, self.OnText_radius) # binding test
self.radiusValue = float(self.radiusText.GetValue())
# Making the toggle button
apply_btn = wx.ToggleButton(window, -1, 'Apply', (100, 85),
size=(80, 22))
apply_btn.SetValue(False)
# Binding the method to the button
apply_btn.Bind(wx.EVT_TOGGLEBUTTON, self.Apply)
return window
def find_center(self, num_nodes, nodes):
"""
Takes in the number of nodes and list of node indices and
computes the optimal centerpoint for the circle
Parameters: num_nodes(int) - number of nodes,
                    nodes(list) - list of node indices
Returns: center(tuple)
"""
# Max dimension of the node size
max_dim = 0 # will consider this the diameter of a circle node
for i in nodes:
# Get node
node = api.get_node_by_index(0, i)
for dim in node.size:
if dim > max_dim:
max_dim = dim
# Approximate circumference estimate
spacing = max_dim / 4
circum_estimate = num_nodes * (max_dim + spacing)
# Computing radius
r = circum_estimate / (2*np.pi) + max_dim
center = (r, r)
return center
def cart(self, r, theta):
"""
Converts from polar coordinates to cartesian coordinates
Parameters: r(double), theta(double in radians)
Returns: (x,y) cartesian coordinate tuple
"""
x = r * math.cos(theta)
y = r * math.sin(theta)
return x, y
def get_new_position(self, node_index, r, theta):
"""
Takes in the node index and outputs the new position for that node
Parameters: r(double), theta(double in radians)
Returns: (x,y) cartesian coordinate tuple
"""
node = api.get_node_by_index(0, node_index)
size = node.size # (width, height)
# accounting for slight offset, converting to cartesian
nodeCenter = self.cart(r - size[0], theta)
# accounting for node position being specified by top left corner
x = r + nodeCenter[0] - size[0]/2
y = r + nodeCenter[1] + size[1]/2
return Vec2(x, y)
def OnText_radius(self, event):
"""
Catches exception if self.radiusText can not be converted
to a floating point number. Opens a window.
"""
update = event.GetString()
if update != '':
try:
self.radiusValue = float(self.radiusText.GetValue())
            except ValueError:
wx.MessageBox("Please enter a floating point number"
"for the desired radius", "Message",
wx.OK | wx.ICON_INFORMATION)
def CheckSelection(self, nodes):
"""
Verifies that there are selected nodes. Raises window if
no nodes are selected, but apply button is pressed
"""
if nodes == 0:
wx.MessageBox("Please select desired nodes to arrange"
"in circle", "Message", wx.OK | wx.ICON_INFORMATION)
return True
def Apply(self, event):
"""
If apply button is clicked, the nodes will be arranged in a circle
using either a default radius or a user input radius.
"""
def translate_nodes(node_indices, r, phi):
"""
Takes in list of node indices, desired radius, and phi
and moves the current node
to its new position
"""
# Iterate through the nodes and change their position.
node_num = 0
            for i in node_indices:
theta = node_num * phi # angle position for node
newPos = self.get_new_position(i, r, theta)
if newPos[0] < 0 or newPos[1] < 0:
wx.MessageBox("Please increase radius size",
"Message", wx.OK | wx.ICON_INFORMATION)
return
api.move_node(0, i, newPos, False)
node_num += 1 # updating node number
# Get button and checkbox state
btn_state = event.GetEventObject().GetValue()
# chk_state = self.OnDefaultCheck()
chk_state = self.defaultCheck.GetValue()
# If the button is pressed, arrange the nodes into a circle
if btn_state is True:
# Get number of nodes selected
node_len = len(api.selected_node_indices())
# get list of node indices
node_ind = api.selected_node_indices()
# If no nodes selected raise error
select = self.CheckSelection(node_len)
if select is True:
return
# If default radius is checked
if chk_state is True:
# Compute the expected center of the circle
center = self.find_center(node_len, node_ind)
# Compute the angle step between each node
phi = 2 * math.pi / node_len
r = center[0] # radius
translate_nodes(node_ind, r, phi)
else:
r = self.radiusValue
center = Vec2(r, r)
phi = 2 * math.pi / node_len # angle step between each node
translate_nodes(node_ind, r, phi) # translating nodes
# Making the reactions straight lines
rxns = api.get_reaction_indices(0)
for rxn in rxns:
api.update_reaction(net_index=0, reaction_index=rxn,
use_bezier=False)
# Setting button state back to False (unclicked)
event.GetEventObject().SetValue(False)
|
/sbcoyote-1.4.4.tar.gz/sbcoyote-1.4.4/rkviewer_plugins/align_circle.py
| 0.778776 | 0.331647 |
align_circle.py
|
pypi
|
[](https://travis-ci.org/tfgm/sbedecoder)
Python based Simple Binary Encoding (SBE) decoder
=================================================
Overview
--------
sbedecoder is a simple python package for parsing SBE encoded data.
sbedecoder dynamically generates an SBE parser from an xml description of the format. This is accomplished by
creating an instance of `SBESchema()` and calling its `parse()` method with a file name:
from sbedecoder import SBESchema
schema = SBESchema()
schema.parse('path/to/schema.xml')
The `SBESchema()` can be initialized with `include_message_size_header=True` if the messages being parsed
require an extra 2 byte (uint16) framing message_size_header field (i.e. for CME MDP 3.0).
By default, message names are derived from the "name" field of the message definition in the schema.
In some cases (i.e. CME MDP 3.0), the message "description" field of the message definition provides a
more friendly name for the message. To use message descriptions as the name of the message,
initialize your SBESchema with `use_description_as_message_name=True`.
For convenience, an `MDPSchema()` subclass of `SBESchema()` is provided with `include_message_size_header=True`
and `use_description_as_message_name=True`, specifically to handle CME Group MDP 3.0 schemas.
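For example (a short sketch using the options above; the schema path is a placeholder):

```
from sbedecoder import SBESchema, MDPSchema

# Explicitly:
schema = SBESchema(include_message_size_header=True,
                   use_description_as_message_name=True)

# ...or, for CME MDP 3.0, equivalently:
schema = MDPSchema()
schema.parse('path/to/templates_FixBinary.xml')
```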
Messages are parsed from any structure that looks like a buffer containing the raw binary
data (buffer, str, bytearray, etc.). To parse SBE encoded data into a message based on a
schema instance, just call `SBEMessage.parse_message()`:
```
from sbedecoder import SBEMessage

message = SBEMessage.parse_message(schema, msg_buffer, offset=0)
```
`offset` is an optional parameter that indicates where within the msg_buffer the message
starts (including the size header if the schema has `include_message_size_header` set).
A parsed message is represented as an instance of the `SBEMessage()` class. `SBEMessages()` are
comprised of zero or more `sbedecoder.message.SBEField()` instances and zero or more
`sbedecoder.message.SBERepeatingGroup()` instances. An `SBEField()` object can be one of a primitive
`TypeMessageField()`, a `SetMessageField()`, or an `EnumMessageField()`.
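For example, the concrete field type can be checked when deciding how to render a value (a minimal
sketch; it assumes these classes are importable from `sbedecoder.message` alongside `SBEField`, and it
reuses the `original_name`/`value`/`enumerant` attributes shown in the pretty-printer example further below):

```
from sbedecoder.message import TypeMessageField, SetMessageField, EnumMessageField

for field in message.fields:
    if isinstance(field, EnumMessageField):
        print(field.original_name, field.enumerant)   # symbolic enum value
    elif isinstance(field, (TypeMessageField, SetMessageField)):
        print(field.original_name, field.value)
```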
**Note:** Unless using code generation, you cannot store the messages for later processing.
You must process the messages on each iteration, because the messages re-use instances of
field objects, wrapping them around new values.
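A minimal sketch of working with that constraint (`SOME_DATASOURCE` and `handle_snapshot` are
placeholders; the `original_name`/`value` attributes follow the pretty-printer example further below):

```
decoded = []
for msg_buffer in SOME_DATASOURCE:   # placeholder source of raw buffers
    message = SBEMessage.parse_message(schema, msg_buffer, offset=0)
    # Copy the values out immediately; the same field objects are re-used
    # (wrapped around new values) by the next parse_message() call.
    decoded.append({f.original_name: f.value for f in message.fields})

for snapshot in decoded:
    handle_snapshot(snapshot)
```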
The CME Group sends MDP 3.0 messages in packets that include a 4 byte sequence number
and an 8 byte timestamp. In addition, there can be multiple messages in a single packet,
and each message is framed with a 2 byte (uint16) message size field as mentioned above.
To parse these messages, you can create a `MDPSchema()`, use that to create a
`MDPMessageFactory()` and then create a `SBEParser()` which can then iterate over the messages in
a packet like this:
```
from sbedecoder import MDPSchema
from sbedecoder import MDPMessageFactory
from sbedecoder import SBEParser

schema = MDPSchema()
schema.parse('path/to/schema.xml')

message_factory = MDPMessageFactory(schema)
message_parser = SBEParser(message_factory)

for packet in SOME_DATASOURCE:
    for message in message_parser.parse(packet, offset=12):
        process(message)
```
This "Message Factory" concept could easily be extended to new framing schemes by creating a new sub class of `SBEMessageFactory()`
For more information on SBE, see: http://www.fixtradingcommunity.org/pg/structure/tech-specs/simple-binary-encoding.
Install
-------
The sbedecoder project is available on PyPI:
pip install sbedecoder
If you are installing from source:
python setup.py install
**Note**: The SBE decoder has only been tested with python 2.7 and 3.6. On Windows, we typically use the
Anaconda python distribution. Anaconda does not distribute python's test code. If you have
issues with dpkt (ImportError: No module named test), you can either install the latest dpkt
from source (https://github.com/kbandla/dpkt) or just comment out the import (from test import
pystone) in ..\\Anaconda\\lib\\site-packages\\dpkt\\decorators.py. Newer versions of dpkt no
longer have this dependency.
mdp_decoder.py
--------------
mdp_decoder.py serves as an example of using the sbedecoder package. It is a full decoder for processing CME Group
MDP 3.0 (MDP3) messages from a pcap file. For help with using mdp_decoder.py:
mdp_decoder.py --help
An SBE template for CME Group MDP 3.0 market data can be found at
ftp://ftp.cmegroup.com/SBEFix/Production/Templates/templates_FixBinary.xml
Example output:
```
:packet - timestamp: 2015-06-25 09:45:01.924492 sequence_number: 93696727 sending_time: 1435243501924423666
::MDIncrementalRefreshVolume - transact_time: 1435243501923350056 match_event_indicator: LastVolumeMsg (2)
:::no_md_entries - num_groups: 1
::::md_entry_size: 4483 security_id: 559884 rpt_seq: 2666379 md_update_action: New (0) md_entry_type: e
::MDIncrementalRefreshBook - transact_time: 1435243501923350056 match_event_indicator: LastQuoteMsg (4)
:::no_md_entries - num_groups: 2
::::md_entry_px: 18792.0 ({'mantissa': 187920000000, 'exponent': -7}) md_entry_size: 1 security_id: 559884 rpt_seq: 2666380 number_of_orders: 1 md_price_level: 1 md_update_action: Delete (2) md_entry_type: Bid (0)
::::md_entry_px: 18746.0 ({'mantissa': 187460000000, 'exponent': -7}) md_entry_size: 6 security_id: 559884 rpt_seq: 2666381 number_of_orders: 1 md_price_level: 10 md_update_action: New (0) md_entry_type: Bid (0)
```
Example output (with `--pretty`):
```
packet - timestamp: 2016-03-10 15:33:21.301819 sequence_number: 76643046 sending_time: 1454679022595400091
Message 1 of 2: TID 32 (MDIncrementalRefreshBook) v6
TransactTime (60): 02/05/2016 07:30:22.595256135 (1454679022595256135)
MatchEventIndicator (5799): LastQuoteMsg
NoMDEntries (268): 1
Entry 1
MDEntryPx (270): 98890000000 (9889.0)
MDEntrySize (271): 296
SecurityID (48): 807004
RptSeq (83): 14273794
NumberOfOrders (346): 16
MDPriceLevel (1023): 2
MDUpdateAction (279): Change
MDEntryType (269): Offer
Message 2 of 2: TID 32 (MDIncrementalRefreshBook) v6
TransactTime (60): 02/05/2016 07:30:22.595256135 (1454679022595256135)
MatchEventIndicator (5799): LastImpliedMsg, EndOfEvent
NoMDEntries (268): 8
Entry 1
MDEntryPx (270): 475000000 (47.5)
MDEntrySize (271): 296
SecurityID (48): 817777
RptSeq (83): 1573080
NumberOfOrders (346): Null
MDPriceLevel (1023): 2
MDUpdateAction (279): Change
MDEntryType (269): ImpliedBid
Entry 2...
```
mdp_book_builder.py
-------------------
mdp_book_builder.py serves as an example of using the sbedecoder package to build limit orderbooks for a given contract.
For help with using mdp_book_builder.py:
mdp_book_builder.py --help
Versioning
----------
sbedecoder supports the `sinceVersion` attribute of fields, enumerants, groups, and so on, so it can
decode older (e.g. archived) binary data as long as the schema has evolved correctly to maintain support
for the old format.
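In practice, each field and group exposes a `since_version` attribute that can be compared against the
decoded message's version, as the bundled pretty-printer does; a minimal sketch (`handle` is a placeholder):

```
for field in message.fields:
    # Skip fields introduced in a schema version newer than this message's version
    if field.since_version > message.version.value:
        continue
    handle(field)
```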
Performance
-----------
sbedecoder itself isn't optimized for performance; however, it can be adequate for simple backtesting scenarios and
post trade analytics. Due to the amount of printing done by mdp_decoder.py, it can be quite slow to parse large
pcap files.
PyPy
----
For improved performance (4 to 5x), sbedecoder will run under PyPy. Assuming your pypy install is in /opt:
/opt/pypy/bin/pip install lxml
/opt/pypy/bin/pip install dpkt
/opt/pypy/bin/pypy setup.py install
Code Generation
---------------
An SBE class generator script is available to generate a Python file that contains the class definitions that match those
that are created dynamically via the SBESchema.parse method.
For help with using sbe_class_generator.py:
sbe_class_generator.py --help
An example usage (from the generator directory) would be:
./sbe_class_generator.py --schema schema.xml --output generated.py --template ./sbe_message.tmpl
This command will output a file called generated.py containing the class definitions that were dynamically created
while parsing the 'schema.xml' file. The template file used to generate the classes is contained in sbe_message.tmpl.
The generated.py file can simply be used for examining the class construction, or it can replace the contents of the
generated.py file in the sbedecoder core project. By replacing the generated.py file in the sbedecoder package, a
developer will get access to the class definitions in the IDE.
In order to make use of the standard parser functionality with the generated code, one should use the SBESchema.load
method instead of the parse method.
An example of how to do this is below and is contained in the mdp_book_builder.py script:
```
try:
    from sbedecoder.generated import __messages__ as generated_messages
    mdp_schema.load(generated_messages)
except:
    mdp_schema.parse(args.schema)
```
|
/sbedecoder-0.1.10.tar.gz/sbedecoder-0.1.10/README.md
| 0.420005 | 0.857768 |
README.md
|
pypi
|
from datetime import datetime
def mdp3time(t):
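    # t is an epoch timestamp in nanoseconds: format whole seconds, then append the 9-digit nanosecond remainder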
dt = datetime.fromtimestamp(t // 1000000000)
s = dt.strftime('%m/%d/%Y %H:%M:%S')
s += '.' + str(int(t % 1000000000)).zfill(9)
return s
def adjustField(field, secdef):
if field.semantic_type == 'UTCTimestamp':
return '{} ({})'.format(mdp3time(field.value), field.value)
# Add the symbol from the secdef file if we can
if secdef and field.id == '48':
security_id = field.value
symbol_info = secdef.lookup_security_id(security_id)
if symbol_info:
symbol = symbol_info[0]
return '{} [{}]'.format(security_id, symbol)
# Use enum name rather than description, to match MC
if hasattr(field, 'enumerant'):
value = field.enumerant
else:
value = field.value
# Make prices match MC (no decimal)
if field.semantic_type == 'Price':
if value is not None:
value = '{} ({})'.format(int(float(value) * 10000000), value)
value = '<Empty>' if value == '' else value
value = 'Null' if value is None else value
return value
def pretty_print(msg, i, n, secdef):
print(' Message %d of %d: TID %d (%s) v%d' % (i + 1, n, msg.template_id.value, msg.name, msg.version.value))
for field in [x for x in msg.fields if x.original_name[0].isupper()]:
if field.since_version > msg.version.value: # field is later version than msg
continue
value = adjustField(field, secdef)
if field.id:
print(' %s (%s): %s' % (field.original_name, field.id, value))
else:
print(' %s: %s' % (field.original_name, value))
for group_container in msg.groups:
if group_container.since_version > msg.version.value:
continue
print(' %s (%d): %d' % (
group_container.original_name, group_container.id, group_container.num_groups))
for i_instance, group_instance in enumerate(group_container):
print(' Entry %d' % (i_instance + 1))
for field in group_instance.fields:
if field.since_version > msg.version.value:
continue
value = adjustField(field, secdef)
print(' %s (%s): %s' % (field.original_name, field.id, value))
|
/sbedecoder-0.1.10.tar.gz/sbedecoder-0.1.10/mdp/prettyprinter.py
| 0.525856 | 0.266512 |
prettyprinter.py
|
pypi
|
from struct import unpack_from
from .orderbook import OrderBook
class PacketProcessor(object):
def __init__(self, mdp_parser, secdef, security_id_filter=None):
self.mdp_parser = mdp_parser
self.secdef = secdef
self.security_id_filter = security_id_filter
self.stream_sequence_number = -1 # Note: currently only handles a single stream
self.sending_time = None
self.orderbook_handler = None
# We only keep track of the base books, implieds aren't handled
self.base_orderbooks = {}
def handle_packet(self, received_time, mdp_packet):
sequence_number = unpack_from('<i', mdp_packet, offset=0)[0]
if sequence_number <= self.stream_sequence_number:
# already have seen this packet
return
if self.stream_sequence_number + 1 != sequence_number:
print('warning: stream sequence gap from {} to {}'.format(self.stream_sequence_number, sequence_number))
sending_time = unpack_from('<Q', mdp_packet, offset=4)[0]
self.stream_sequence_number = sequence_number
self.sending_time = sending_time
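        # offset=12 skips the packet preamble: 4 byte sequence number + 8 byte sending time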
for mdp_message in self.mdp_parser.parse(mdp_packet, offset=12):
self.handle_message(sequence_number, sending_time, received_time, mdp_message)
def handle_message(self, stream_sequence_number, sending_time, received_time, mdp_message):
# We only care about the incremental refresh book packets at this point
if mdp_message.template_id.value == 32:
self.handle_incremental_refresh_book(stream_sequence_number, sending_time, received_time, mdp_message)
elif mdp_message.template_id.value == 42:
self.handle_incremental_refresh_trade_summary(stream_sequence_number, sending_time, received_time, mdp_message)
def handle_incremental_refresh_book(self, stream_sequence_number, sending_time, received_time, incremental_message):
updated_books = set() # Note: we batch all the updates from a single packet into one update
for md_entry in incremental_message.no_md_entries:
security_id = md_entry.security_id.value
if self.security_id_filter and security_id not in self.security_id_filter:
continue
if security_id not in self.base_orderbooks:
security_info = self.secdef.lookup_security_id(security_id)
if security_info:
symbol, depth = security_info
ob = OrderBook(security_id, depth, symbol)
self.base_orderbooks[security_id] = ob
else:
# Can't properly handle an orderbook without knowing the depth
self.base_orderbooks[security_id] = None
orderbook = self.base_orderbooks[security_id]
            if not orderbook:
                # Depth unknown for this security; skip the entry but keep processing the rest
                continue
md_entry_price = md_entry.md_entry_px.value
md_entry_size = md_entry.md_entry_size.value
rpt_sequence = md_entry.rpt_seq.value
number_of_orders = md_entry.number_of_orders.value
md_price_level = md_entry.md_price_level.value
md_update_action = md_entry.md_update_action.value
md_entry_type = md_entry.md_entry_type.value
visible_updated = orderbook.handle_update(sending_time, received_time, stream_sequence_number, rpt_sequence,
md_price_level, md_entry_type, md_update_action, md_entry_price, md_entry_size, number_of_orders)
if visible_updated:
updated_books.add(orderbook)
        if self.orderbook_handler and getattr(self.orderbook_handler, 'on_orderbook', None):
for orderbook in updated_books:
self.orderbook_handler.on_orderbook(orderbook)
def handle_incremental_refresh_trade_summary(self, stream_sequence_number, sending_time, received_time, incremental_message):
for md_entry in incremental_message.no_md_entries:
security_id = md_entry.security_id.value
if self.security_id_filter and security_id not in self.security_id_filter:
continue
if security_id not in self.base_orderbooks:
security_info = self.secdef.lookup_security_id(security_id)
if security_info:
symbol, depth = security_info
self.base_orderbooks[security_id] = OrderBook(security_id, depth, symbol)
else:
# Can't properly handle an orderbook without knowing the depth
self.base_orderbooks[security_id] = None
orderbook = self.base_orderbooks[security_id]
            if not orderbook:
                # Depth unknown for this security; skip the entry but keep processing the rest
                continue
md_entry_price = md_entry.md_entry_px.value
md_entry_size = md_entry.md_entry_size.value
rpt_sequence = md_entry.rpt_seq.value
aggressor_side = md_entry.aggressor_side.value
number_of_orders = md_entry.number_of_orders.value
orderbook.handle_trade(sending_time, received_time, stream_sequence_number, rpt_sequence,
md_entry_price, md_entry_size, aggressor_side)
            if self.orderbook_handler and getattr(self.orderbook_handler, 'on_trade', None):
                self.orderbook_handler.on_trade(orderbook)
|
/sbedecoder-0.1.10.tar.gz/sbedecoder-0.1.10/mdp/orderbook/packet_processor.py
| 0.618204 | 0.169578 |
packet_processor.py
|
pypi
|
import numpy as np
import h5py
import argparse
import os
import plotting
def main(filename, save_location, iter_min, iter_max):
# Path to run-info file
fp_in = filename
# Path to output directory
fp_out = save_location + '/'
# Range to plot
iteration_range = [iter_min, iter_max]
# Plot height
y_limit = 10
# Make appropriate sub-directory if it doesn't exist already
# Don't overwrite data without user's permission
try:
os.mkdir(save_location)
print(f'Creating output directory...')
except FileExistsError:
while True:
response = input(f'Directory {save_location}/ already exists. You could be overwriting existing data, continue (Y/N)? ')
if response.lower() == 'y':
print(f'Continuing with plotting script...')
break
elif response.lower() == 'n':
print('Exiting plotting script...')
return
with h5py.File(fp_in, 'r') as f:
subregion_max = f['params']['num_subregions'][()]
ds_boundary_key = f'subregion-{subregion_max-1}-flux'
# Plot stream bed for iter_min to iter_max
# This method assumes plotting iteration 0 means plotting
# the model particles at the start of iteration 0. Since the
# model particles are stored (runpy) at the end of each iteration,
# we need to plot 1 index back.
print(f'Plotting stream bed...')
for iter in range(iteration_range[0], iteration_range[1]+1):
if iter == 0:
model_particles = np.array(f['initial_values']['model'])
else:
model_particles = np.array(f[f'iteration_{iter-1}']['model'])
plotting.stream(iter, np.array(f['initial_values']['bed']),
model_particles,
f['params']['x_max'][()],
y_limit,
fp_out)
print(f'Creating gif of stream bed...')
# Create a gif of the stream bed for iter_min to iter_max
plotting.stream_gif(iter_min, iter_max, fp_out)
print(f'Plotting flux and age plot...')
# Plot downstream boundary crossing histogram
plotting.crossing_info(f['final_metrics']['subregions'][ds_boundary_key][()],
f['params']['n_iterations'][()], 1, fp_out)
# Plot timeseries of particle age and downstream crossings
plotting.crossing_info2(f['final_metrics']['subregions'][ds_boundary_key][()],
np.array(f['final_metrics']['avg_age']),
f['params']['n_iterations'][()], 1, fp_out)
def parse_arguments():
parser = argparse.ArgumentParser(description='Plotting script using the Python shelf files created by a py_BeRCM model run')
parser.add_argument('path_to_file', help='Path to hdf5 file being plotted')
parser.add_argument('save_location', help='Path to plot output location')
parser.add_argument('iter_min', help='First iteration for stream plot')
parser.add_argument('iter_max', help='Final iteration for stream plot')
args = parser.parse_args()
return args.path_to_file, args.save_location, int(args.iter_min), int(args.iter_max)
if __name__ == '__main__':
# TODO: Replace save location param with the filename parsed of path and format
path_to_file, save_location, iter_min, iter_max = parse_arguments()
main(path_to_file, save_location, iter_min, iter_max)
|
/sbelt-1.0.1-py3-none-any.whl/plots/plot_maker.py
| 0.61878 | 0.224842 |
plot_maker.py
|
pypi
|
import logging
import requests
from .exceptions import ApiError, BadCredentialsException
class Client(object):
"""Performs requests to the SBER API."""
URL = 'https://3dsec.sberbank.ru/payment/rest/'
def __init__(self, username: str = None, password: str = None, token: str = None):
"""
:param username: Логин служебной учётной записи продавца. При передаче логина и пароля для аутентификации в
платёжном шлюзе параметр token передавать не нужно.
:param password: Пароль служебной учётной записи продавца. При передаче логина и пароля для аутентификации в
платёжном шлюзе параметр token передавать не нужно.
:param token: Значение, которое используется для аутентификации продавца при отправке запросов в платёжный шлюз.
При передаче этого параметра параметры userName и password передавать не нужно.
Чтобы получить открытый ключ, обратитесь в техническую поддержку.
"""
if username and password:
self.username = username
self.password = password
self.params = {
'userName': self.username,
'password': self.password,
}
elif token:
self.token = token
self.params = {
'token': self.token,
}
else:
            raise BadCredentialsException('Authenticate with either login/password or a token; choose exactly one')
self.default_headers = {
'Accept': 'application/json',
'Accept-Encoding': 'gzip,deflate,sdch',
'Cache-Control': 'no-cache',
'Content-Type': 'application/json',
}
self.logger = logging.getLogger('sber')
def _get(self, url: str, params: dict = None):
resp = requests.get(url, headers=self.default_headers, params=params)
if not resp.ok:
raise ApiError(resp.status_code, resp.text)
return resp.json()
def _post(self, url: str, params: dict = None):
resp = requests.post(url, headers=self.default_headers, params=params)
if not resp.ok:
raise ApiError(resp.status_code, resp.text)
return resp.json()
def register_order(self, order_number: str, amount: int, return_url: str, **kwargs: dict):
"""
Запрос регистрации заказа (register.do)
https://securepayments.sberbank.ru/wiki/doku.php/integration:api:rest:requests:register
:param order_number: Номер (идентификатор) заказа в системе магазина, уникален
для каждого магазина в пределах системы. Если номер заказа
генерируется на стороне платёжного шлюза, этот параметр
передавать необязательно.
:param amount: Сумма возврата в минимальных единицах валюты (в копейках или центах).
:param return_url: Адрес, на который требуется перенаправить пользователя в
случае успешной оплаты. Адрес должен быть указан полностью,
включая используемый протокол.
:param kwargs: Необязательные данные.
"""
url = f"{self.URL}register.do"
params = {
'orderNumber': order_number,
'amount': amount,
'returnUrl': return_url,
}
# py3.9: self.params |= params
return self._post(url, params={**self.params, **params})
def register_order_pre_auth(self, order_number: str, amount: str, return_url: str, **kwargs: dict):
"""
Запрос регистрации заказа с предавторизацией (registerPreAuth.do)
https://securepayments.sberbank.ru/wiki/doku.php/integration:api:rest:requests:registerpreauth
:param order_number: Номер (идентификатор) заказа в системе магазина, уникален
для каждого магазина в пределах системы. Если номер заказа
генерируется на стороне платёжного шлюза, этот параметр
передавать необязательно.
:param amount: Сумма возврата в минимальных единицах валюты (в копейках или центах).
:param return_url: Адрес, на который требуется перенаправить пользователя в
случае успешной оплаты. Адрес должен быть указан полностью,
включая используемый протокол.
:param kwargs: Необязательные данные.
"""
url = f"{self.URL}registerPreAuth.do"
params = {
'orderNumber': order_number,
'amount': amount,
'returnUrl': return_url,
}
if kwargs.get('description'):
params['description'] = kwargs.get('description')
# py3.9: self.params |= params
return self._post(url, params={**self.params, **params})
def deposit(self, order_id: str, amount: int = 0):
"""
Запрос завершения на полную сумму в деньгах (deposit.do)
https://securepayments.sberbank.ru/wiki/doku.php/integration:api:rest:requests:deposit
Для запроса завершения ранее пред авторизованного заказа используется запрос deposit.do.
:param order_id: Номер заказа в платежной системе. Уникален в пределах системы. Отсутствует если регистрация
заказа на удалась по причине ошибки, детализированной в ErrorCode.
:param amount: Сумма возврата в минимальных единицах валюты (в копейках или центах).
Внимание! Если в этом параметре указать 0, завершение произойдёт на всю предавторизованную сумму.
"""
url = f"{self.URL}deposit.do"
params = {
'orderId': order_id,
'amount': amount,
}
return self._post(url, params={**self.params, **params})
def reverse(self, order_id: str, amount: int = None, **kwargs: dict):
"""
Запрос отмены оплаты заказа (reverse.do)
https://securepayments.sberbank.ru/wiki/doku.php/integration:api:rest:requests:reverse
Для запроса отмены оплаты заказа используется запрос reverse.do. Функция отмены доступна в течение ограниченного
времени после оплаты, точные сроки необходимо уточнять в «Сбербанке». Нельзя проводить отмены и возвраты
по заказам, инициализирующим регулярные платежи, т. к. в этих случаях не происходит реального списания денег.
Операция отмены оплаты может быть совершена только один раз. Если она закончится ошибкой, то повторная операция
отмены платежа не пройдёт. Эта функция доступна магазинам по согласованию с банком. Для выполнения операции
отмены продавец должен обладать соответствующими правами.
При проведении частичной отмены (отмены части оплаты) сумма частичной отмены передается в необязательном
параметре amount. Частичная отмена возможна при наличии у магазина соответствующего разрешения в системе.
Частичная отмена невозможна для заказов с фискализацией, корзиной и лоялти.
:param order_id: Номер заказа в платежной системе. Уникален в пределах системы. Отсутствует если регистрация
заказа на удалась по причине ошибки, детализированной в ErrorCode.
:param amount: Сумма частичной отмены. Параметр, обязательный для частичной отмены.
"""
url = f"{self.URL}reverse.do"
params = {
'orderId': order_id,
'amount': amount,
}
return self._post(url, params={**self.params, **params})
def refund(self, order_id: str, amount: int = 0):
"""
Запрос возврата на полную сумму в деньгах (refund.do)
https://securepayments.sberbank.ru/wiki/doku.php/integration:api:rest:requests:refund
По этому запросу средства по указанному заказу будут возвращены плательщику. Запрос закончится ошибкой в случае,
если средства по этому заказу не были списаны. Система позволяет возвращать средства более одного раза,
но в общей сложности не более первоначальной суммы списания.
Для выполнения операции возврата необходимо наличие соответствующих права в системе.
:param order_id: Номер заказа в платежной системе. Уникален в пределах системы. Отсутствует если регистрация
заказа на удалась по причине ошибки, детализированной в ErrorCode.
:param amount: Сумма возврата в минимальных единицах валюты (в копейках или центах).
Внимание! Если в запросе указать amount=0, производится возврат на всю сумму заказа.
"""
url = f"{self.URL}refund.do"
params = {
'orderId': order_id,
'amount': amount,
}
return self._post(url, params={**self.params, **params})
def get_order_status(self, order_id: str):
"""
Расширенный запрос состояния заказа
https://securepayments.sberbank.ru/wiki/doku.php/integration:api:rest:requests:getorderstatusextended
:param order_id: Номер заказа в платежной системе. Уникален в пределах системы. Отсутствует если регистрация
заказа на удалась по причине ошибки, детализированной в ErrorCode.
"""
url = f"{self.URL}getOrderStatusExtended.do"
params = {
'orderId': order_id,
}
return self._post(url, params={**self.params, **params})
|
/sber-payments-0.3.0.tar.gz/sber-payments-0.3.0/src/sber_payments/client.py
| 0.506836 | 0.365683 |
client.py
|
pypi
|
import datetime
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID
def generate_keys():
private_key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
backend=default_backend()
)
public_key = private_key.public_key()
return private_key, public_key
def private_key_write_to_file(private_key, filename, password=None):
    # BestAvailableEncryption requires a non-empty bytes password;
    # fall back to an unencrypted PEM when no password is given.
    if password:
        encryption_algorithm = serialization.BestAvailableEncryption(password)
    else:
        encryption_algorithm = serialization.NoEncryption()
    with open(filename, "wb") as key_file:
        key_file.write(private_key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.TraditionalOpenSSL,
            encryption_algorithm=encryption_algorithm,
        ))
def public_key_write_to_file(public_key, filename):
# Save the Public key in PEM format
with open(filename, "wb") as key_file:
key_file.write(public_key.public_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PublicFormat.SubjectPublicKeyInfo,
))
def x509_certificate_write_to_file(certificate, filename):
with open(filename, "wb") as cert_file:
cert_file.write(certificate.public_bytes(serialization.Encoding.PEM))
def certificate_x509_build(private_key, public_key):
one_day = datetime.timedelta(1, 0, 0)
builder = x509.CertificateBuilder()
builder = builder.subject_name(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
]))
builder = builder.issuer_name(x509.Name([
x509.NameAttribute(NameOID.COMMON_NAME, u'cryptography.io'),
]))
builder = builder.not_valid_before(datetime.datetime.today() - one_day)
builder = builder.not_valid_after(datetime.datetime.today() + (one_day * 30))
builder = builder.serial_number(x509.random_serial_number())
builder = builder.public_key(public_key)
builder = builder.add_extension(
x509.SubjectAlternativeName([x509.DNSName(u'cryptography.io')]),
critical=False
)
builder = builder.add_extension(
x509.BasicConstraints(ca=False, path_length=None), critical=True,
)
return builder.sign(
private_key=private_key, algorithm=hashes.SHA256(),
backend=default_backend()
)
|
/sberbank_async_cryptography-1.0.0-py3-none-any.whl/sb_async_cryptography/keys_generation.py
| 0.600423 | 0.158923 |
keys_generation.py
|
pypi
|
from binascii import a2b_base64
from Cryptodome.Hash import SHA512
from Cryptodome.PublicKey import RSA
from Cryptodome.Signature import PKCS1_v1_5 as PKCS
from Cryptodome.Util.asn1 import DerSequence
def get_public_key_from_file(file_name):
with open(file_name) as f:
pem = f.read()
lines = pem.replace(" ", '').split()
der = a2b_base64(''.join(lines[1:-1]))
# Extract subjectPublicKeyInfo field from X.509 certificate (see RFC3280)
cert = DerSequence()
cert.decode(der)
tbsCertificate = DerSequence()
tbsCertificate.decode(cert[0])
subjectPublicKeyInfo = tbsCertificate[6]
# Initialize RSA key
publicKey = RSA.importKey(subjectPublicKeyInfo)
return publicKey.publickey()
def private_key_import_from_file(filename, password):
    with open(filename, 'rb') as key_file:
        return RSA.importKey(key_file.read(), passphrase=password)
def public_key_import_from_x509_certificate_file(file_name):
with open(file_name) as key_file:
certificate = key_file.read()
return public_key_import_from_x509_certificate_string(certificate)
def public_key_import_from_x509_certificate_string(certificate_string):
    # str.find() returns an index (0 is falsy, -1 is truthy), so use substring checks
    # to ensure the PEM header/footer are separated from the base64 body by newlines.
    if '-----BEGIN CERTIFICATE-----' in certificate_string \
            and '-----BEGIN CERTIFICATE-----\n' not in certificate_string \
            and '-----BEGIN CERTIFICATE-----\r\n' not in certificate_string:
        certificate_string = certificate_string.replace('-----BEGIN CERTIFICATE-----', '-----BEGIN CERTIFICATE-----\n')
    if '-----END CERTIFICATE-----' in certificate_string \
            and '\n-----END CERTIFICATE-----' not in certificate_string \
            and '\r\n-----END CERTIFICATE-----' not in certificate_string:
        certificate_string = certificate_string.replace('-----END CERTIFICATE-----', '\n-----END CERTIFICATE-----')
lines = certificate_string.replace(" ", '').split()
der = a2b_base64(''.join(lines[1:-1]))
# Extract subjectPublicKeyInfo field from X.509 certificate (see RFC3280)
cert = DerSequence()
cert.decode(der)
tbs_certificate = DerSequence()
tbs_certificate.decode(cert[0])
subject_public_key_info = tbs_certificate[6]
# Initialize RSA key
public_key = RSA.importKey(subject_public_key_info)
return public_key.publickey()
def private_key_sign_message(private_key, message):
# RSA Signature Generation
h = SHA512.new()
h.update(message)
signer = PKCS.new(private_key)
return signer.sign(h)
def public_key_verify_signature(public_key, signature, message):
# At the receiver side, verification can be done like using the public part of the RSA key:
# RSA Signature Verification
h = SHA512.new()
h.update(message)
verifier = PKCS.new(public_key)
    return verifier.verify(h, signature)
|
/sberbank_async_cryptography-1.0.0-py3-none-any.whl/sb_async_cryptography/signature.py
| 0.749637 | 0.196961 |
signature.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
import attr
from ..models.search_products_response_facets_item_options_item import SearchProductsResponseFacetsItemOptionsItem
from ..types import UNSET, Unset
T = TypeVar("T", bound="SearchProductsResponseFacetsItem")
@attr.s(auto_attribs=True)
class SearchProductsResponseFacetsItem:
"""
Attributes:
key (Union[Unset, str]):
name (Union[Unset, str]):
type (Union[Unset, str]):
options (Union[Unset, List[SearchProductsResponseFacetsItemOptionsItem]]):
"""
key: Union[Unset, str] = UNSET
name: Union[Unset, str] = UNSET
type: Union[Unset, str] = UNSET
options: Union[Unset, List[SearchProductsResponseFacetsItemOptionsItem]] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
key = self.key
name = self.name
type = self.type
options: Union[Unset, List[Dict[str, Any]]] = UNSET
if not isinstance(self.options, Unset):
options = []
for options_item_data in self.options:
options_item = options_item_data.to_dict()
options.append(options_item)
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if key is not UNSET:
field_dict["key"] = key
if name is not UNSET:
field_dict["name"] = name
if type is not UNSET:
field_dict["type"] = type
if options is not UNSET:
field_dict["options"] = options
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
key = d.pop("key", UNSET)
name = d.pop("name", UNSET)
type = d.pop("type", UNSET)
options = []
_options = d.pop("options", UNSET)
for options_item_data in _options or []:
options_item = SearchProductsResponseFacetsItemOptionsItem.from_dict(options_item_data)
options.append(options_item)
search_products_response_facets_item = cls(
key=key,
name=name,
type=type,
options=options,
)
search_products_response_facets_item.additional_properties = d
return search_products_response_facets_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/search_products_response_facets_item.py
| 0.83471 | 0.166608 |
search_products_response_facets_item.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreRetailerAppearance")
@attr.s(auto_attribs=True)
class StoreRetailerAppearance:
"""
Attributes:
background_color (Union[Unset, str]):
image_color (Union[Unset, str]):
black_theme (Union[Unset, bool]):
logo_image (Union[Unset, str]):
side_image (Union[Unset, str]):
mini_logo_image (Union[Unset, str]):
"""
background_color: Union[Unset, str] = UNSET
image_color: Union[Unset, str] = UNSET
black_theme: Union[Unset, bool] = UNSET
logo_image: Union[Unset, str] = UNSET
side_image: Union[Unset, str] = UNSET
mini_logo_image: Union[Unset, str] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
background_color = self.background_color
image_color = self.image_color
black_theme = self.black_theme
logo_image = self.logo_image
side_image = self.side_image
mini_logo_image = self.mini_logo_image
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if background_color is not UNSET:
field_dict["background_color"] = background_color
if image_color is not UNSET:
field_dict["image_color"] = image_color
if black_theme is not UNSET:
field_dict["black_theme"] = black_theme
if logo_image is not UNSET:
field_dict["logo_image"] = logo_image
if side_image is not UNSET:
field_dict["side_image"] = side_image
if mini_logo_image is not UNSET:
field_dict["mini_logo_image"] = mini_logo_image
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
background_color = d.pop("background_color", UNSET)
image_color = d.pop("image_color", UNSET)
black_theme = d.pop("black_theme", UNSET)
logo_image = d.pop("logo_image", UNSET)
side_image = d.pop("side_image", UNSET)
mini_logo_image = d.pop("mini_logo_image", UNSET)
store_retailer_appearance = cls(
background_color=background_color,
image_color=image_color,
black_theme=black_theme,
logo_image=logo_image,
side_image=side_image,
mini_logo_image=mini_logo_image,
)
store_retailer_appearance.additional_properties = d
return store_retailer_appearance
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_retailer_appearance.py
| 0.833155 | 0.284651 |
store_retailer_appearance.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StorePaymentMethodsStoresItemPaymentMethod")
@attr.s(auto_attribs=True)
class StorePaymentMethodsStoresItemPaymentMethod:
"""
Attributes:
id (Union[Unset, int]):
name (Union[Unset, str]):
environment (Union[Unset, str]):
key (Union[Unset, str]):
"""
id: Union[Unset, int] = UNSET
name: Union[Unset, str] = UNSET
environment: Union[Unset, str] = UNSET
key: Union[Unset, str] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
id = self.id
name = self.name
environment = self.environment
key = self.key
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if id is not UNSET:
field_dict["id"] = id
if name is not UNSET:
field_dict["name"] = name
if environment is not UNSET:
field_dict["environment"] = environment
if key is not UNSET:
field_dict["key"] = key
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
id = d.pop("id", UNSET)
name = d.pop("name", UNSET)
environment = d.pop("environment", UNSET)
key = d.pop("key", UNSET)
store_payment_methods_stores_item_payment_method = cls(
id=id,
name=name,
environment=environment,
key=key,
)
store_payment_methods_stores_item_payment_method.additional_properties = d
return store_payment_methods_stores_item_payment_method
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_payment_methods_stores_item_payment_method.py
| 0.850717 | 0.167287 |
store_payment_methods_stores_item_payment_method.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreCity")
@attr.s(auto_attribs=True)
class StoreCity:
"""
Attributes:
id (Union[Unset, int]):
name (Union[Unset, str]):
name_in (Union[Unset, str]):
name_from (Union[Unset, str]):
name_to (Union[Unset, str]):
slug (Union[Unset, str]):
"""
id: Union[Unset, int] = UNSET
name: Union[Unset, str] = UNSET
name_in: Union[Unset, str] = UNSET
name_from: Union[Unset, str] = UNSET
name_to: Union[Unset, str] = UNSET
slug: Union[Unset, str] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
id = self.id
name = self.name
name_in = self.name_in
name_from = self.name_from
name_to = self.name_to
slug = self.slug
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if id is not UNSET:
field_dict["id"] = id
if name is not UNSET:
field_dict["name"] = name
if name_in is not UNSET:
field_dict["name_in"] = name_in
if name_from is not UNSET:
field_dict["name_from"] = name_from
if name_to is not UNSET:
field_dict["name_to"] = name_to
if slug is not UNSET:
field_dict["slug"] = slug
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
id = d.pop("id", UNSET)
name = d.pop("name", UNSET)
name_in = d.pop("name_in", UNSET)
name_from = d.pop("name_from", UNSET)
name_to = d.pop("name_to", UNSET)
slug = d.pop("slug", UNSET)
store_city = cls(
id=id,
name=name,
name_in=name_in,
name_from=name_from,
name_to=name_to,
slug=slug,
)
store_city.additional_properties = d
return store_city
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_city.py
| 0.810666 | 0.18881 |
store_city.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="SearchProductsResponseRootCategoriesOptionsItem")
@attr.s(auto_attribs=True)
class SearchProductsResponseRootCategoriesOptionsItem:
"""
Attributes:
name (Union[Unset, str]):
permalink (Union[Unset, str]):
value (Union[Unset, int]):
count (Union[Unset, int]):
active (Union[Unset, bool]):
"""
name: Union[Unset, str] = UNSET
permalink: Union[Unset, str] = UNSET
value: Union[Unset, int] = UNSET
count: Union[Unset, int] = UNSET
active: Union[Unset, bool] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
name = self.name
permalink = self.permalink
value = self.value
count = self.count
active = self.active
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if name is not UNSET:
field_dict["name"] = name
if permalink is not UNSET:
field_dict["permalink"] = permalink
if value is not UNSET:
field_dict["value"] = value
if count is not UNSET:
field_dict["count"] = count
if active is not UNSET:
field_dict["active"] = active
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
name = d.pop("name", UNSET)
permalink = d.pop("permalink", UNSET)
value = d.pop("value", UNSET)
count = d.pop("count", UNSET)
active = d.pop("active", UNSET)
search_products_response_root_categories_options_item = cls(
name=name,
permalink=permalink,
value=value,
count=count,
active=active,
)
search_products_response_root_categories_options_item.additional_properties = d
return search_products_response_root_categories_options_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/search_products_response_root_categories_options_item.py
| 0.856317 | 0.174287 |
search_products_response_root_categories_options_item.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
import attr
from ..models.store_retailer_appearance import StoreRetailerAppearance
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreRetailer")
@attr.s(auto_attribs=True)
class StoreRetailer:
"""
Attributes:
id (Union[Unset, int]):
name (Union[Unset, str]):
color (Union[Unset, str]):
secondary_color (Union[Unset, str]):
logo (Union[Unset, str]):
logo_background_color (Union[Unset, str]):
slug (Union[Unset, str]):
description (Union[Unset, str]):
icon (Union[Unset, str]):
is_alcohol (Union[Unset, bool]):
is_agent_contract_types (Union[Unset, bool]):
home_page_departments_depth (Union[Unset, int]):
appearance (Union[Unset, StoreRetailerAppearance]):
services (Union[Unset, List[str]]):
"""
id: Union[Unset, int] = UNSET
name: Union[Unset, str] = UNSET
color: Union[Unset, str] = UNSET
secondary_color: Union[Unset, str] = UNSET
logo: Union[Unset, str] = UNSET
logo_background_color: Union[Unset, str] = UNSET
slug: Union[Unset, str] = UNSET
description: Union[Unset, str] = UNSET
icon: Union[Unset, str] = UNSET
is_alcohol: Union[Unset, bool] = UNSET
is_agent_contract_types: Union[Unset, bool] = UNSET
home_page_departments_depth: Union[Unset, int] = UNSET
appearance: Union[Unset, StoreRetailerAppearance] = UNSET
services: Union[Unset, List[str]] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
id = self.id
name = self.name
color = self.color
secondary_color = self.secondary_color
logo = self.logo
logo_background_color = self.logo_background_color
slug = self.slug
description = self.description
icon = self.icon
is_alcohol = self.is_alcohol
is_agent_contract_types = self.is_agent_contract_types
home_page_departments_depth = self.home_page_departments_depth
appearance: Union[Unset, Dict[str, Any]] = UNSET
if not isinstance(self.appearance, Unset):
appearance = self.appearance.to_dict()
services: Union[Unset, List[str]] = UNSET
if not isinstance(self.services, Unset):
services = self.services
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if id is not UNSET:
field_dict["id"] = id
if name is not UNSET:
field_dict["name"] = name
if color is not UNSET:
field_dict["color"] = color
if secondary_color is not UNSET:
field_dict["secondary_color"] = secondary_color
if logo is not UNSET:
field_dict["logo"] = logo
if logo_background_color is not UNSET:
field_dict["logo_background_color"] = logo_background_color
if slug is not UNSET:
field_dict["slug"] = slug
if description is not UNSET:
field_dict["description"] = description
if icon is not UNSET:
field_dict["icon"] = icon
if is_alcohol is not UNSET:
field_dict["is_alcohol"] = is_alcohol
if is_agent_contract_types is not UNSET:
field_dict["is_agent_contract_types"] = is_agent_contract_types
if home_page_departments_depth is not UNSET:
field_dict["home_page_departments_depth"] = home_page_departments_depth
if appearance is not UNSET:
field_dict["appearance"] = appearance
if services is not UNSET:
field_dict["services"] = services
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
id = d.pop("id", UNSET)
name = d.pop("name", UNSET)
color = d.pop("color", UNSET)
secondary_color = d.pop("secondary_color", UNSET)
logo = d.pop("logo", UNSET)
logo_background_color = d.pop("logo_background_color", UNSET)
slug = d.pop("slug", UNSET)
description = d.pop("description", UNSET)
icon = d.pop("icon", UNSET)
is_alcohol = d.pop("is_alcohol", UNSET)
is_agent_contract_types = d.pop("is_agent_contract_types", UNSET)
home_page_departments_depth = d.pop("home_page_departments_depth", UNSET)
_appearance = d.pop("appearance", UNSET)
appearance: Union[Unset, StoreRetailerAppearance]
if isinstance(_appearance, Unset):
appearance = UNSET
else:
appearance = StoreRetailerAppearance.from_dict(_appearance)
services = cast(List[str], d.pop("services", UNSET))
store_retailer = cls(
id=id,
name=name,
color=color,
secondary_color=secondary_color,
logo=logo,
logo_background_color=logo_background_color,
slug=slug,
description=description,
icon=icon,
is_alcohol=is_alcohol,
is_agent_contract_types=is_agent_contract_types,
home_page_departments_depth=home_page_departments_depth,
appearance=appearance,
services=services,
)
store_retailer.additional_properties = d
return store_retailer
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_retailer.py
| 0.770465 | 0.183667 |
store_retailer.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
import datetime
import attr
from dateutil.parser import isoparse
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreLicensesItem")
@attr.s(auto_attribs=True)
class StoreLicensesItem:
"""
Attributes:
kind (Union[Unset, str]):
number (Union[Unset, str]):
issue_date (Union[Unset, datetime.date]):
end_date (Union[Unset, datetime.date]):
"""
kind: Union[Unset, str] = UNSET
number: Union[Unset, str] = UNSET
issue_date: Union[Unset, datetime.date] = UNSET
end_date: Union[Unset, datetime.date] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
kind = self.kind
number = self.number
issue_date: Union[Unset, str] = UNSET
if not isinstance(self.issue_date, Unset):
issue_date = self.issue_date.isoformat()
end_date: Union[Unset, str] = UNSET
if not isinstance(self.end_date, Unset):
end_date = self.end_date.isoformat()
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if kind is not UNSET:
field_dict["kind"] = kind
if number is not UNSET:
field_dict["number"] = number
if issue_date is not UNSET:
field_dict["issue_date"] = issue_date
if end_date is not UNSET:
field_dict["end_date"] = end_date
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
kind = d.pop("kind", UNSET)
number = d.pop("number", UNSET)
_issue_date = d.pop("issue_date", UNSET)
issue_date: Union[Unset, datetime.date]
if isinstance(_issue_date, Unset):
issue_date = UNSET
else:
issue_date = isoparse(_issue_date).date()
_end_date = d.pop("end_date", UNSET)
end_date: Union[Unset, datetime.date]
if isinstance(_end_date, Unset):
end_date = UNSET
else:
end_date = isoparse(_end_date).date()
store_licenses_item = cls(
kind=kind,
number=number,
issue_date=issue_date,
end_date=end_date,
)
store_licenses_item.additional_properties = d
return store_licenses_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_licenses_item.py
| 0.838283 | 0.154376 |
store_licenses_item.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreStoreShippingMethodsItemShippingMethod")
@attr.s(auto_attribs=True)
class StoreStoreShippingMethodsItemShippingMethod:
"""
Attributes:
name (Union[Unset, str]):
kind (Union[Unset, str]):
id (Union[Unset, int]):
"""
name: Union[Unset, str] = UNSET
kind: Union[Unset, str] = UNSET
id: Union[Unset, int] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
name = self.name
kind = self.kind
id = self.id
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if name is not UNSET:
field_dict["name"] = name
if kind is not UNSET:
field_dict["kind"] = kind
if id is not UNSET:
field_dict["id"] = id
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
name = d.pop("name", UNSET)
kind = d.pop("kind", UNSET)
id = d.pop("id", UNSET)
store_store_shipping_methods_item_shipping_method = cls(
name=name,
kind=kind,
id=id,
)
store_store_shipping_methods_item_shipping_method.additional_properties = d
return store_store_shipping_methods_item_shipping_method
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
|
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_store_shipping_methods_item_shipping_method.py
| 0.853882 | 0.17006 |
store_store_shipping_methods_item_shipping_method.py
|
pypi
|
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreLocation")
@attr.s(auto_attribs=True)
class StoreLocation:
"""
Attributes:
id (Union[Unset, int]):
full_address (Union[Unset, str]):
city (Union[Unset, str]):
street (Union[Unset, str]):
building (Union[Unset, str]):
block (Union[Unset, str]):
floor (Union[Unset, str]):
apartment (Union[Unset, str]):
entrance (Union[Unset, str]):
elevator (Union[Unset, str]):
region (Union[Unset, str]):
comments (Union[Unset, str]):
phone (Union[Unset, str]):
area (Union[Unset, str]):
settlement (Union[Unset, str]):
lat (Union[Unset, float]):
lon (Union[Unset, float]):
city_kladr_id (Union[Unset, str]):
street_kladr_id (Union[Unset, str]):
user_id (Union[Unset, str]):
door_phone (Union[Unset, str]):
kind (Union[Unset, str]):
delivery_to_door (Union[Unset, bool]):
"""
id: Union[Unset, int] = UNSET
full_address: Union[Unset, str] = UNSET
city: Union[Unset, str] = UNSET
street: Union[Unset, str] = UNSET
building: Union[Unset, str] = UNSET
block: Union[Unset, str] = UNSET
floor: Union[Unset, str] = UNSET
apartment: Union[Unset, str] = UNSET
entrance: Union[Unset, str] = UNSET
elevator: Union[Unset, str] = UNSET
region: Union[Unset, str] = UNSET
comments: Union[Unset, str] = UNSET
phone: Union[Unset, str] = UNSET
area: Union[Unset, str] = UNSET
settlement: Union[Unset, str] = UNSET
lat: Union[Unset, float] = UNSET
lon: Union[Unset, float] = UNSET
city_kladr_id: Union[Unset, str] = UNSET
street_kladr_id: Union[Unset, str] = UNSET
user_id: Union[Unset, str] = UNSET
door_phone: Union[Unset, str] = UNSET
kind: Union[Unset, str] = UNSET
delivery_to_door: Union[Unset, bool] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
id = self.id
full_address = self.full_address
city = self.city
street = self.street
building = self.building
block = self.block
floor = self.floor
apartment = self.apartment
entrance = self.entrance
elevator = self.elevator
region = self.region
comments = self.comments
phone = self.phone
area = self.area
settlement = self.settlement
lat = self.lat
lon = self.lon
city_kladr_id = self.city_kladr_id
street_kladr_id = self.street_kladr_id
user_id = self.user_id
door_phone = self.door_phone
kind = self.kind
delivery_to_door = self.delivery_to_door
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if id is not UNSET:
field_dict["id"] = id
if full_address is not UNSET:
field_dict["full_address"] = full_address
if city is not UNSET:
field_dict["city"] = city
if street is not UNSET:
field_dict["street"] = street
if building is not UNSET:
field_dict["building"] = building
if block is not UNSET:
field_dict["block"] = block
if floor is not UNSET:
field_dict["floor"] = floor
if apartment is not UNSET:
field_dict["apartment"] = apartment
if entrance is not UNSET:
field_dict["entrance"] = entrance
if elevator is not UNSET:
field_dict["elevator"] = elevator
if region is not UNSET:
field_dict["region"] = region
if comments is not UNSET:
field_dict["comments"] = comments
if phone is not UNSET:
field_dict["phone"] = phone
if area is not UNSET:
field_dict["area"] = area
if settlement is not UNSET:
field_dict["settlement"] = settlement
if lat is not UNSET:
field_dict["lat"] = lat
if lon is not UNSET:
field_dict["lon"] = lon
if city_kladr_id is not UNSET:
field_dict["city_kladr_id"] = city_kladr_id
if street_kladr_id is not UNSET:
field_dict["street_kladr_id"] = street_kladr_id
if user_id is not UNSET:
field_dict["user_id"] = user_id
if door_phone is not UNSET:
field_dict["door_phone"] = door_phone
if kind is not UNSET:
field_dict["kind"] = kind
if delivery_to_door is not UNSET:
field_dict["delivery_to_door"] = delivery_to_door
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
id = d.pop("id", UNSET)
full_address = d.pop("full_address", UNSET)
city = d.pop("city", UNSET)
street = d.pop("street", UNSET)
building = d.pop("building", UNSET)
block = d.pop("block", UNSET)
floor = d.pop("floor", UNSET)
apartment = d.pop("apartment", UNSET)
entrance = d.pop("entrance", UNSET)
elevator = d.pop("elevator", UNSET)
region = d.pop("region", UNSET)
comments = d.pop("comments", UNSET)
phone = d.pop("phone", UNSET)
area = d.pop("area", UNSET)
settlement = d.pop("settlement", UNSET)
lat = d.pop("lat", UNSET)
lon = d.pop("lon", UNSET)
city_kladr_id = d.pop("city_kladr_id", UNSET)
street_kladr_id = d.pop("street_kladr_id", UNSET)
user_id = d.pop("user_id", UNSET)
door_phone = d.pop("door_phone", UNSET)
kind = d.pop("kind", UNSET)
delivery_to_door = d.pop("delivery_to_door", UNSET)
store_location = cls(
id=id,
full_address=full_address,
city=city,
street=street,
building=building,
block=block,
floor=floor,
apartment=apartment,
entrance=entrance,
elevator=elevator,
region=region,
comments=comments,
phone=phone,
area=area,
settlement=settlement,
lat=lat,
lon=lon,
city_kladr_id=city_kladr_id,
street_kladr_id=street_kladr_id,
user_id=user_id,
door_phone=door_phone,
kind=kind,
delivery_to_door=delivery_to_door,
)
store_location.additional_properties = d
return store_location
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_location.py
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreStoreZonesItem")
@attr.s(auto_attribs=True)
class StoreStoreZonesItem:
"""
Attributes:
id (Union[Unset, int]):
name (Union[Unset, str]):
area (Union[Unset, List[List[List[float]]]]):
"""
id: Union[Unset, int] = UNSET
name: Union[Unset, str] = UNSET
area: Union[Unset, List[List[List[float]]]] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
id = self.id
name = self.name
area: Union[Unset, List[List[List[float]]]] = UNSET
if not isinstance(self.area, Unset):
area = []
for area_item_data in self.area:
area_item = []
for area_item_item_data in area_item_data:
area_item_item = area_item_item_data
area_item.append(area_item_item)
area.append(area_item)
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if id is not UNSET:
field_dict["id"] = id
if name is not UNSET:
field_dict["name"] = name
if area is not UNSET:
field_dict["area"] = area
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
id = d.pop("id", UNSET)
name = d.pop("name", UNSET)
area = []
_area = d.pop("area", UNSET)
for area_item_data in _area or []:
area_item = []
_area_item = area_item_data
for area_item_item_data in _area_item:
area_item_item = cast(List[float], area_item_item_data)
area_item.append(area_item_item)
area.append(area_item)
store_store_zones_item = cls(
id=id,
name=name,
area=area,
)
store_store_zones_item.additional_properties = d
return store_store_zones_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
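# --- Illustrative usage (not part of the generated model; all values are made up) ---
# Round-trip a hypothetical payload: known keys populate typed attributes, while
# unknown keys such as "color" are kept in `additional_properties` and re-emitted
# by `to_dict()`.
if __name__ == "__main__":
    zone = StoreStoreZonesItem.from_dict(
        {
            "id": 1,
            "name": "Zone A",
            "area": [[[55.75, 37.61], [55.76, 37.62]]],
            "color": "#ff0000",
        }
    )
    assert zone.id == 1
    assert zone["color"] == "#ff0000"  # unknown key stored as an additional property
    assert zone.to_dict()["area"] == [[[55.75, 37.61], [55.76, 37.62]]]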
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_store_zones_item.py
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="SearchProductsResponseFacetsItemOptionsItem")
@attr.s(auto_attribs=True)
class SearchProductsResponseFacetsItemOptionsItem:
"""
Attributes:
value (Union[Unset, int]):
count (Union[Unset, int]):
active (Union[Unset, bool]):
"""
value: Union[Unset, int] = UNSET
count: Union[Unset, int] = UNSET
active: Union[Unset, bool] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
value = self.value
count = self.count
active = self.active
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if value is not UNSET:
field_dict["value"] = value
if count is not UNSET:
field_dict["count"] = count
if active is not UNSET:
field_dict["active"] = active
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
value = d.pop("value", UNSET)
count = d.pop("count", UNSET)
active = d.pop("active", UNSET)
search_products_response_facets_item_options_item = cls(
value=value,
count=count,
active=active,
)
search_products_response_facets_item_options_item.additional_properties = d
return search_products_response_facets_item_options_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/search_products_response_facets_item_options_item.py
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="SearchProductsResponseProductsItem")
@attr.s(auto_attribs=True)
class SearchProductsResponseProductsItem:
"""
Attributes:
id (Union[Unset, str]):
legacy_offer_id (Union[Unset, int]):
legacy_product_id (Union[Unset, int]):
sku (Union[Unset, str]):
retailer_sku (Union[Unset, str]):
name (Union[Unset, str]):
price (Union[Unset, int]):
original_price (Union[Unset, int]):
discount (Union[Unset, int]):
human_volume (Union[Unset, str]):
volume (Union[Unset, int]):
volume_type (Union[Unset, str]):
items_per_pack (Union[Unset, int]):
discount_ends_at (Union[Unset, str]):
price_type (Union[Unset, str]):
grams_per_unit (Union[Unset, int]):
unit_price (Union[Unset, int]):
original_unit_price (Union[Unset, int]):
slug (Union[Unset, str]):
max_select_quantity (Union[Unset, int]):
canonical_url (Union[Unset, str]):
available (Union[Unset, bool]):
vat_info (Union[Unset, str]):
bmpl_info (Union[Unset, str]):
promo_badge_ids (Union[Unset, List[str]]):
requirements (Union[Unset, List[str]]):
image_urls (Union[Unset, List[str]]):
"""
id: Union[Unset, str] = UNSET
legacy_offer_id: Union[Unset, int] = UNSET
legacy_product_id: Union[Unset, int] = UNSET
sku: Union[Unset, str] = UNSET
retailer_sku: Union[Unset, str] = UNSET
name: Union[Unset, str] = UNSET
price: Union[Unset, int] = UNSET
original_price: Union[Unset, int] = UNSET
discount: Union[Unset, int] = UNSET
human_volume: Union[Unset, str] = UNSET
volume: Union[Unset, int] = UNSET
volume_type: Union[Unset, str] = UNSET
items_per_pack: Union[Unset, int] = UNSET
discount_ends_at: Union[Unset, str] = UNSET
price_type: Union[Unset, str] = UNSET
grams_per_unit: Union[Unset, int] = UNSET
unit_price: Union[Unset, int] = UNSET
original_unit_price: Union[Unset, int] = UNSET
slug: Union[Unset, str] = UNSET
max_select_quantity: Union[Unset, int] = UNSET
canonical_url: Union[Unset, str] = UNSET
available: Union[Unset, bool] = UNSET
vat_info: Union[Unset, str] = UNSET
bmpl_info: Union[Unset, str] = UNSET
promo_badge_ids: Union[Unset, List[str]] = UNSET
requirements: Union[Unset, List[str]] = UNSET
image_urls: Union[Unset, List[str]] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
id = self.id
legacy_offer_id = self.legacy_offer_id
legacy_product_id = self.legacy_product_id
sku = self.sku
retailer_sku = self.retailer_sku
name = self.name
price = self.price
original_price = self.original_price
discount = self.discount
human_volume = self.human_volume
volume = self.volume
volume_type = self.volume_type
items_per_pack = self.items_per_pack
discount_ends_at = self.discount_ends_at
price_type = self.price_type
grams_per_unit = self.grams_per_unit
unit_price = self.unit_price
original_unit_price = self.original_unit_price
slug = self.slug
max_select_quantity = self.max_select_quantity
canonical_url = self.canonical_url
available = self.available
vat_info = self.vat_info
bmpl_info = self.bmpl_info
promo_badge_ids: Union[Unset, List[str]] = UNSET
if not isinstance(self.promo_badge_ids, Unset):
promo_badge_ids = self.promo_badge_ids
requirements: Union[Unset, List[str]] = UNSET
if not isinstance(self.requirements, Unset):
requirements = self.requirements
image_urls: Union[Unset, List[str]] = UNSET
if not isinstance(self.image_urls, Unset):
image_urls = self.image_urls
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if id is not UNSET:
field_dict["id"] = id
if legacy_offer_id is not UNSET:
field_dict["legacy_offer_id"] = legacy_offer_id
if legacy_product_id is not UNSET:
field_dict["legacy_product_id"] = legacy_product_id
if sku is not UNSET:
field_dict["sku"] = sku
if retailer_sku is not UNSET:
field_dict["retailer_sku"] = retailer_sku
if name is not UNSET:
field_dict["name"] = name
if price is not UNSET:
field_dict["price"] = price
if original_price is not UNSET:
field_dict["original_price"] = original_price
if discount is not UNSET:
field_dict["discount"] = discount
if human_volume is not UNSET:
field_dict["human_volume"] = human_volume
if volume is not UNSET:
field_dict["volume"] = volume
if volume_type is not UNSET:
field_dict["volume_type"] = volume_type
if items_per_pack is not UNSET:
field_dict["items_per_pack"] = items_per_pack
if discount_ends_at is not UNSET:
field_dict["discount_ends_at"] = discount_ends_at
if price_type is not UNSET:
field_dict["price_type"] = price_type
if grams_per_unit is not UNSET:
field_dict["grams_per_unit"] = grams_per_unit
if unit_price is not UNSET:
field_dict["unit_price"] = unit_price
if original_unit_price is not UNSET:
field_dict["original_unit_price"] = original_unit_price
if slug is not UNSET:
field_dict["slug"] = slug
if max_select_quantity is not UNSET:
field_dict["max_select_quantity"] = max_select_quantity
if canonical_url is not UNSET:
field_dict["canonical_url"] = canonical_url
if available is not UNSET:
field_dict["available"] = available
if vat_info is not UNSET:
field_dict["vat_info"] = vat_info
if bmpl_info is not UNSET:
field_dict["bmpl_info"] = bmpl_info
if promo_badge_ids is not UNSET:
field_dict["promo_badge_ids"] = promo_badge_ids
if requirements is not UNSET:
field_dict["requirements"] = requirements
if image_urls is not UNSET:
field_dict["image_urls"] = image_urls
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
id = d.pop("id", UNSET)
legacy_offer_id = d.pop("legacy_offer_id", UNSET)
legacy_product_id = d.pop("legacy_product_id", UNSET)
sku = d.pop("sku", UNSET)
retailer_sku = d.pop("retailer_sku", UNSET)
name = d.pop("name", UNSET)
price = d.pop("price", UNSET)
original_price = d.pop("original_price", UNSET)
discount = d.pop("discount", UNSET)
human_volume = d.pop("human_volume", UNSET)
volume = d.pop("volume", UNSET)
volume_type = d.pop("volume_type", UNSET)
items_per_pack = d.pop("items_per_pack", UNSET)
discount_ends_at = d.pop("discount_ends_at", UNSET)
price_type = d.pop("price_type", UNSET)
grams_per_unit = d.pop("grams_per_unit", UNSET)
unit_price = d.pop("unit_price", UNSET)
original_unit_price = d.pop("original_unit_price", UNSET)
slug = d.pop("slug", UNSET)
max_select_quantity = d.pop("max_select_quantity", UNSET)
canonical_url = d.pop("canonical_url", UNSET)
available = d.pop("available", UNSET)
vat_info = d.pop("vat_info", UNSET)
bmpl_info = d.pop("bmpl_info", UNSET)
promo_badge_ids = cast(List[str], d.pop("promo_badge_ids", UNSET))
requirements = cast(List[str], d.pop("requirements", UNSET))
image_urls = cast(List[str], d.pop("image_urls", UNSET))
search_products_response_products_item = cls(
id=id,
legacy_offer_id=legacy_offer_id,
legacy_product_id=legacy_product_id,
sku=sku,
retailer_sku=retailer_sku,
name=name,
price=price,
original_price=original_price,
discount=discount,
human_volume=human_volume,
volume=volume,
volume_type=volume_type,
items_per_pack=items_per_pack,
discount_ends_at=discount_ends_at,
price_type=price_type,
grams_per_unit=grams_per_unit,
unit_price=unit_price,
original_unit_price=original_unit_price,
slug=slug,
max_select_quantity=max_select_quantity,
canonical_url=canonical_url,
available=available,
vat_info=vat_info,
bmpl_info=bmpl_info,
promo_badge_ids=promo_badge_ids,
requirements=requirements,
image_urls=image_urls,
)
search_products_response_products_item.additional_properties = d
return search_products_response_products_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/search_products_response_products_item.py
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreTenantsItem")
@attr.s(auto_attribs=True)
class StoreTenantsItem:
"""
Attributes:
id (Union[Unset, str]):
name (Union[Unset, str]):
hostname (Union[Unset, str]):
preferred_card_payment_method (Union[Unset, str]):
"""
id: Union[Unset, str] = UNSET
name: Union[Unset, str] = UNSET
hostname: Union[Unset, str] = UNSET
preferred_card_payment_method: Union[Unset, str] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
id = self.id
name = self.name
hostname = self.hostname
preferred_card_payment_method = self.preferred_card_payment_method
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if id is not UNSET:
field_dict["id"] = id
if name is not UNSET:
field_dict["name"] = name
if hostname is not UNSET:
field_dict["hostname"] = hostname
if preferred_card_payment_method is not UNSET:
field_dict["preferred_card_payment_method"] = preferred_card_payment_method
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
id = d.pop("id", UNSET)
name = d.pop("name", UNSET)
hostname = d.pop("hostname", UNSET)
preferred_card_payment_method = d.pop("preferred_card_payment_method", UNSET)
store_tenants_item = cls(
id=id,
name=name,
hostname=hostname,
preferred_card_payment_method=preferred_card_payment_method,
)
store_tenants_item.additional_properties = d
return store_tenants_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
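# --- Illustrative usage (not part of the generated model; values are hypothetical) ---
# Fields absent from the payload stay UNSET and are omitted again by `to_dict()`.
if __name__ == "__main__":
    tenant = StoreTenantsItem.from_dict(
        {"id": "tenant-1", "name": "Example Tenant", "hostname": "example.org"}
    )
    assert tenant.hostname == "example.org"
    assert "preferred_card_payment_method" not in tenant.to_dict()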
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_tenants_item.py
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="SearchProductsResponseSortItem")
@attr.s(auto_attribs=True)
class SearchProductsResponseSortItem:
"""
Attributes:
key (Union[Unset, str]):
name (Union[Unset, str]):
order (Union[Unset, str]):
active (Union[Unset, bool]):
"""
key: Union[Unset, str] = UNSET
name: Union[Unset, str] = UNSET
order: Union[Unset, str] = UNSET
active: Union[Unset, bool] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
key = self.key
name = self.name
order = self.order
active = self.active
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if key is not UNSET:
field_dict["key"] = key
if name is not UNSET:
field_dict["name"] = name
if order is not UNSET:
field_dict["order"] = order
if active is not UNSET:
field_dict["active"] = active
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
key = d.pop("key", UNSET)
name = d.pop("name", UNSET)
order = d.pop("order", UNSET)
active = d.pop("active", UNSET)
search_products_response_sort_item = cls(
key=key,
name=name,
order=order,
active=active,
)
search_products_response_sort_item.additional_properties = d
return search_products_response_sort_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/search_products_response_sort_item.py
from typing import Any, BinaryIO, Dict, List, Optional, TextIO, Tuple, Type, TypeVar, Union, cast
import attr
from ..types import UNSET, Unset
T = TypeVar("T", bound="StoreStoreScheduleTemplateDeliveryTimesItem")
@attr.s(auto_attribs=True)
class StoreStoreScheduleTemplateDeliveryTimesItem:
"""
Attributes:
start (Union[Unset, str]):
end (Union[Unset, str]):
orders_limit (Union[Unset, int]):
surge_amount (Union[Unset, str]):
shipment_min_kilos (Union[Unset, str]):
shipment_max_kilos (Union[Unset, str]):
shipments_excess_kilos (Union[Unset, str]):
shipments_excess_items_count (Union[Unset, str]):
closing_time_gap (Union[Unset, int]):
kind (Union[Unset, str]):
store_zone_ids (Union[Unset, List[str]]):
"""
start: Union[Unset, str] = UNSET
end: Union[Unset, str] = UNSET
orders_limit: Union[Unset, int] = UNSET
surge_amount: Union[Unset, str] = UNSET
shipment_min_kilos: Union[Unset, str] = UNSET
shipment_max_kilos: Union[Unset, str] = UNSET
shipments_excess_kilos: Union[Unset, str] = UNSET
shipments_excess_items_count: Union[Unset, str] = UNSET
closing_time_gap: Union[Unset, int] = UNSET
kind: Union[Unset, str] = UNSET
store_zone_ids: Union[Unset, List[str]] = UNSET
additional_properties: Dict[str, Any] = attr.ib(init=False, factory=dict)
def to_dict(self) -> Dict[str, Any]:
start = self.start
end = self.end
orders_limit = self.orders_limit
surge_amount = self.surge_amount
shipment_min_kilos = self.shipment_min_kilos
shipment_max_kilos = self.shipment_max_kilos
shipments_excess_kilos = self.shipments_excess_kilos
shipments_excess_items_count = self.shipments_excess_items_count
closing_time_gap = self.closing_time_gap
kind = self.kind
store_zone_ids: Union[Unset, List[str]] = UNSET
if not isinstance(self.store_zone_ids, Unset):
store_zone_ids = self.store_zone_ids
field_dict: Dict[str, Any] = {}
field_dict.update(self.additional_properties)
field_dict.update({})
if start is not UNSET:
field_dict["start"] = start
if end is not UNSET:
field_dict["end"] = end
if orders_limit is not UNSET:
field_dict["orders_limit"] = orders_limit
if surge_amount is not UNSET:
field_dict["surge_amount"] = surge_amount
if shipment_min_kilos is not UNSET:
field_dict["shipment_min_kilos"] = shipment_min_kilos
if shipment_max_kilos is not UNSET:
field_dict["shipment_max_kilos"] = shipment_max_kilos
if shipments_excess_kilos is not UNSET:
field_dict["shipments_excess_kilos"] = shipments_excess_kilos
if shipments_excess_items_count is not UNSET:
field_dict["shipments_excess_items_count"] = shipments_excess_items_count
if closing_time_gap is not UNSET:
field_dict["closing_time_gap"] = closing_time_gap
if kind is not UNSET:
field_dict["kind"] = kind
if store_zone_ids is not UNSET:
field_dict["store_zone_ids"] = store_zone_ids
return field_dict
@classmethod
def from_dict(cls: Type[T], src_dict: Dict[str, Any]) -> T:
d = src_dict.copy()
start = d.pop("start", UNSET)
end = d.pop("end", UNSET)
orders_limit = d.pop("orders_limit", UNSET)
surge_amount = d.pop("surge_amount", UNSET)
shipment_min_kilos = d.pop("shipment_min_kilos", UNSET)
shipment_max_kilos = d.pop("shipment_max_kilos", UNSET)
shipments_excess_kilos = d.pop("shipments_excess_kilos", UNSET)
shipments_excess_items_count = d.pop("shipments_excess_items_count", UNSET)
closing_time_gap = d.pop("closing_time_gap", UNSET)
kind = d.pop("kind", UNSET)
store_zone_ids = cast(List[str], d.pop("store_zone_ids", UNSET))
store_store_schedule_template_delivery_times_item = cls(
start=start,
end=end,
orders_limit=orders_limit,
surge_amount=surge_amount,
shipment_min_kilos=shipment_min_kilos,
shipment_max_kilos=shipment_max_kilos,
shipments_excess_kilos=shipments_excess_kilos,
shipments_excess_items_count=shipments_excess_items_count,
closing_time_gap=closing_time_gap,
kind=kind,
store_zone_ids=store_zone_ids,
)
store_store_schedule_template_delivery_times_item.additional_properties = d
return store_store_schedule_template_delivery_times_item
@property
def additional_keys(self) -> List[str]:
return list(self.additional_properties.keys())
def __getitem__(self, key: str) -> Any:
return self.additional_properties[key]
def __setitem__(self, key: str, value: Any) -> None:
self.additional_properties[key] = value
def __delitem__(self, key: str) -> None:
del self.additional_properties[key]
def __contains__(self, key: str) -> bool:
return key in self.additional_properties
/sbermarket_api-0.0.3-py3-none-any.whl/sbermarket_api/models/store_store_schedule_template_delivery_times_item.py
# `sbi`: simulation-based inference
`sbi`: A Python toolbox for simulation-based inference.

Inference can be run in a single
line of code:
```python
posterior = infer(simulator, prior, method='SNPE', num_simulations=1000)
```
- To learn about the general motivation behind simulation-based inference, and the
inference methods included in `sbi`, read on below.
- For example applications to canonical problems in neuroscience, browse the recent
research article [Training deep neural density estimators to identify mechanistic models of neural dynamics](https://doi.org/10.7554/eLife.56261).
- If you want to get started using `sbi` on your own problem, jump to
[installation](install.md) and then check out the [tutorial](tutorial/00_getting_started.md).
## Motivation and approach
Many areas of science and engineering make extensive use of complex, stochastic,
numerical simulations to describe the structure and dynamics of the processes being
investigated.
A key challenge in simulation-based science is constraining these simulation models'
parameters, which are interpretable quantities, with observational data. Bayesian
inference provides a general and powerful framework to invert the simulators, i.e. to
identify the parameters which are consistent both with empirical data and prior
knowledge.
In the case of simulators, a key quantity required for statistical inference, the
likelihood of observed data given parameters, $\mathcal{L}(\theta) = p(x_o|\theta)$, is
typically intractable, rendering conventional statistical approaches inapplicable.
`sbi` implements powerful machine-learning methods that address this problem. Roughly,
these algorithms can be categorized as:
- Sequential Neural Posterior Estimation (SNPE),
- Sequential Neural Likelihood Estimation (SNLE), and
- Sequential Neural Ratio Estimation (SNRE).
Depending on the characteristics of the problem, e.g. the dimensionalities of the
parameter space and the observation space, one of the methods will be more suitable.

**Goal: Algorithmically identify mechanistic models which are consistent with data.**
Each of the methods above needs three inputs: A candidate mechanistic model, prior
knowledge or constraints on model parameters, and observational data (or summary statistics
thereof).
The methods then proceed by the following steps (a minimal code sketch is given after the list):
1. sampling parameters from the prior followed by simulating synthetic data from
these parameters,
2. learning the (probabilistic) association between data (or
data features) and underlying parameters, i.e. to learn statistical inference from
simulated data. The way in which this association is learned differs between the
above methods, but all use deep neural networks.
3. This learned neural network is then applied to empirical data to derive the full
space of parameters consistent with the data and the prior, i.e. the posterior
distribution. High posterior probability is assigned to parameters which are
consistent with both the data and the prior, low probability to inconsistent
parameters. While SNPE directly learns the posterior distribution, SNLE and SNRE need
an extra MCMC sampling step to construct a posterior.
4. If needed, an initial estimate of the posterior can be used to adaptively generate
additional informative simulations.
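The four steps above map directly onto `sbi`'s flexible interface. Below is a minimal sketch using SNPE as an example; it assumes that a `simulator` callable, a `prior` distribution, and an observation `x_o` are already defined, and the simulation budget of 1000 is an arbitrary placeholder. Step 4 (adaptively generating additional simulations) would repeat the simulate-and-train steps with the current posterior estimate as the proposal and is omitted here.

```python
from sbi.inference import SNPE, prepare_for_sbi, simulate_for_sbi

# Wrap the user-provided simulator and prior (assumed to exist) for use with sbi.
simulator, prior = prepare_for_sbi(simulator, prior)

# Step 1: sample parameters from the prior and simulate synthetic data.
theta, x = simulate_for_sbi(simulator, prior, num_simulations=1000)

# Step 2: learn the association between simulated data and parameters
# (SNPE learns the posterior directly).
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()

# Step 3: apply the trained network to the empirical observation x_o
# to obtain and sample from the posterior.
posterior = inference.build_posterior()
samples = posterior.sample((1000,), x=x_o)
```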
## Publications
See [Cranmer, Brehmer, Louppe (2020)](https://doi.org/10.1073/pnas.1912789117) for a recent
review on simulation-based inference.
The following papers offer additional details on the inference methods included in
`sbi`. You can find a tutorial on how to run each of these methods [here](https://www.mackelab.org/sbi/tutorial/16_implemented_algorithms/).
### Posterior estimation (SNPE)
- **Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation**<br> by Papamakarios & Murray (NeurIPS 2016) <br>[[PDF]](https://papers.nips.cc/paper/6084-fast-free-inference-of-simulation-models-with-bayesian-conditional-density-estimation.pdf) [[BibTeX]](https://papers.nips.cc/paper/6084-fast-free-inference-of-simulation-models-with-bayesian-conditional-density-estimation/bibtex)
- **Flexible statistical inference for mechanistic models of neural dynamics** <br> by Lueckmann, Goncalves, Bassetto, Öcal, Nonnenmacher & Macke (NeurIPS 2017) <br>[[PDF]](https://papers.nips.cc/paper/6728-flexible-statistical-inference-for-mechanistic-models-of-neural-dynamics.pdf) [[BibTeX]](https://papers.nips.cc/paper/6728-flexible-statistical-inference-for-mechanistic-models-of-neural-dynamics/bibtex)
- **Automatic posterior transformation for likelihood-free inference**<br>by Greenberg, Nonnenmacher & Macke (ICML 2019) <br>[[PDF]](http://proceedings.mlr.press/v97/greenberg19a/greenberg19a.pdf) [[BibTeX]](data:text/plain;charset=utf-8,%0A%0A%0A%0A%0A%0A%40InProceedings%7Bpmlr-v97-greenberg19a%2C%0A%20%20title%20%3D%20%09%20%7BAutomatic%20Posterior%20Transformation%20for%20Likelihood-Free%20Inference%7D%2C%0A%20%20author%20%3D%20%09%20%7BGreenberg%2C%20David%20and%20Nonnenmacher%2C%20Marcel%20and%20Macke%2C%20Jakob%7D%2C%0A%20%20booktitle%20%3D%20%09%20%7BProceedings%20of%20the%2036th%20International%20Conference%20on%20Machine%20Learning%7D%2C%0A%20%20pages%20%3D%20%09%20%7B2404--2414%7D%2C%0A%20%20year%20%3D%20%09%20%7B2019%7D%2C%0A%20%20editor%20%3D%20%09%20%7BChaudhuri%2C%20Kamalika%20and%20Salakhutdinov%2C%20Ruslan%7D%2C%0A%20%20volume%20%3D%20%09%20%7B97%7D%2C%0A%20%20series%20%3D%20%09%20%7BProceedings%20of%20Machine%20Learning%20Research%7D%2C%0A%20%20address%20%3D%20%09%20%7BLong%20Beach%2C%20California%2C%20USA%7D%2C%0A%20%20month%20%3D%20%09%20%7B09--15%20Jun%7D%2C%0A%20%20publisher%20%3D%20%09%20%7BPMLR%7D%2C%0A%20%20pdf%20%3D%20%09%20%7Bhttp%3A%2F%2Fproceedings.mlr.press%2Fv97%2Fgreenberg19a%2Fgreenberg19a.pdf%7D%2C%0A%20%20url%20%3D%20%09%20%7Bhttp%3A%2F%2Fproceedings.mlr.press%2Fv97%2Fgreenberg19a.html%7D%2C%0A%20%20abstract%20%3D%20%09%20%7BHow%20can%20one%20perform%20Bayesian%20inference%20on%20stochastic%20simulators%20with%20intractable%20likelihoods%3F%20A%20recent%20approach%20is%20to%20learn%20the%20posterior%20from%20adaptively%20proposed%20simulations%20using%20neural%20network-based%20conditional%20density%20estimators.%20However%2C%20existing%20methods%20are%20limited%20to%20a%20narrow%20range%20of%20proposal%20distributions%20or%20require%20importance%20weighting%20that%20can%20limit%20performance%20in%20practice.%20Here%20we%20present%20automatic%20posterior%20transformation%20(APT)%2C%20a%20new%20sequential%20neural%20posterior%20estimation%20method%20for%20simulation-based%20inference.%20APT%20can%20modify%20the%20posterior%20estimate%20using%20arbitrary%2C%20dynamically%20updated%20proposals%2C%20and%20is%20compatible%20with%20powerful%20flow-based%20density%20estimators.%20It%20is%20more%20flexible%2C%20scalable%20and%20efficient%20than%20previous%20simulation-based%20inference%20techniques.%20APT%20can%20operate%20directly%20on%20high-dimensional%20time%20series%20and%20image%20data%2C%20opening%20up%20new%20applications%20for%20likelihood-free%20inference.%7D%0A%7D%0A)
- **Truncated proposals for scalable and hassle-free simulation-based inference** <br> by Deistler, Goncalves & Macke (NeurIPS 2022) <br>[[Paper]](https://arxiv.org/abs/2210.04815)
### Likelihood-estimation (SNLE)
- **Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows**<br>by Papamakarios, Sterratt & Murray (AISTATS 2019) <br>[[PDF]](http://proceedings.mlr.press/v89/papamakarios19a/papamakarios19a.pdf) [[BibTeX]](https://gpapamak.github.io/bibtex/snl.bib)
- **Variational methods for simulation-based inference** <br> by Glöckler, Deistler, Macke (ICLR 2022) <br>[[Paper]](https://arxiv.org/abs/2203.04176)
- **Flexible and efficient simulation-based inference for models of decision-making** <br> by Boelts, Lueckmann, Gao, Macke (Elife 2022) <br>[[Paper]](https://elifesciences.org/articles/77220)
### Likelihood-ratio-estimation (SNRE)
- **Likelihood-free MCMC with Amortized Approximate Likelihood Ratios**<br>by Hermans, Begy & Louppe (ICML 2020) <br>[[PDF]](http://proceedings.mlr.press/v119/hermans20a/hermans20a.pdf)
- **On Contrastive Learning for Likelihood-free Inference**<br>Durkan, Murray & Papamakarios (ICML 2020) <br>[[PDF]](http://proceedings.mlr.press/v119/durkan20a/durkan20a.pdf)
- **Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation**<br>by Delaunoy, Hermans, Rozet, Wehenkel & Louppe (NeurIPS 2022) <br>[[PDF]](https://arxiv.org/pdf/2208.13624.pdf)
- **Contrastive Neural Ratio Estimation**<br>Benjamin Kurt Miller, Christoph Weniger, Patrick Forré (NeurIPS 2022) <br>[[PDF]](https://arxiv.org/pdf/2210.06170.pdf)
### Utilities
- **Restriction estimator**<br>by Deistler, Macke & Goncalves (PNAS 2022) <br>[[Paper]](https://www.pnas.org/doi/10.1073/pnas.2207632119)
- **Simulation-based calibration**<br>by Talts, Betancourt, Simpson, Vehtari, Gelman (arXiv 2018) <br>[[Paper]](https://arxiv.org/abs/1804.06788)
- **Expected coverage (sample-based)**<br>as computed in Deistler, Goncalves, Macke [[Paper]](https://arxiv.org/abs/2210.04815) and in Rozet, Louppe [[Paper]](https://matheo.uliege.be/handle/2268.2/12993)
/sbi-0.21.0.tar.gz/sbi-0.21.0/docs/docs/index.md
# Efficient handling of invalid simulation outputs
For many simulators, the output can be ill-defined or take non-sensical values. For example, in neuroscience models, if a specific parameter set does not produce a spike, features such as the spike shape cannot be computed. When using `sbi`, simulations that have `NaN` or `inf` in their output are discarded during neural network training. This can lead to inefficient use of the simulation budget: we carry out many simulations, but a potentially large fraction of them is discarded.
In this tutorial, we show how we can use `sbi` to learn the regions in parameter space that produce `valid` simulation outputs, and thereby improve sampling efficiency. The key idea of the method is to use a classifier to distinguish parameters that lead to `valid` simulations from parameters that lead to `invalid` simulations. After we have obtained the region in parameter space that produces `valid` simulation outputs, we train the deep neural density estimator used in `SNPE`. The method was originally proposed in [Lueckmann, Goncalves et al. 2017](https://arxiv.org/abs/1711.01861) and later used in [Deistler et al. 2021](https://www.biorxiv.org/content/10.1101/2021.07.30.454484v3.abstract).
## Main syntax
```python
from sbi.inference import SNPE
from sbi.utils import RestrictionEstimator
restriction_estimator = RestrictionEstimator(prior=prior)
proposals = [prior]
for r in range(num_rounds):
theta, x = simulate_for_sbi(simulator, proposals[-1], 1000)
restriction_estimator.append_simulations(theta, x)
if (
r < num_rounds - 1
): # training not needed in last round because classifier will not be used anymore.
classifier = restriction_estimator.train()
proposals.append(restriction_estimator.restrict_prior())
all_theta, all_x, _ = restriction_estimator.get_simulations()
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(all_theta, all_x).train()
posterior = inference.build_posterior()
```
## Further explanation in a toy example
```python
from sbi.inference import SNPE, simulate_for_sbi
from sbi.utils import RestrictionEstimator, BoxUniform
from sbi.analysis import pairplot
import torch
_ = torch.manual_seed(2)
```
We will define a simulator with two parameters and two simulation outputs. The simulator produces `NaN` whenever the first parameter is below `0.0`. If it is above `0.0` the simulator simply perturbs the parameter set with Gaussian noise:
```python
def simulator(theta):
perturbed_theta = theta + 0.5 * torch.randn(2)
perturbed_theta[theta[:, 0] < 0.0] = torch.as_tensor([float("nan"), float("nan")])
return perturbed_theta
```
The prior is a uniform distribution in [-2, 2]:
```python
prior = BoxUniform(-2 * torch.ones(2), 2 * torch.ones(2))
```
We then begin by drawing samples from the prior and simulating them. Looking at the simulation outputs, half of them contain `NaN`:
```python
theta, x = simulate_for_sbi(simulator, prior, 1000)
print("Simulation outputs: ", x)
```
Running 1000 simulations.: 0%| | 0/1000 [00:00<?, ?it/s]
Simulation outputs: tensor([[ 0.0411, -0.5656],
[ 0.0096, -1.0841],
[ 1.2937, 0.9448],
...,
[ nan, nan],
[ nan, nan],
[ 2.7940, 0.6461]])
The simulations that contain `NaN` are wasted, and we want to learn to "restrict" the prior such that it produces only `valid` simulation outputs. To do so, we set up the `RestrictionEstimator`:
```python
restriction_estimator = RestrictionEstimator(prior=prior)
```
The `RestrictionEstimator` trains a classifier to distinguish parameters that lead to `valid` simulation outputs from parameters that lead to `invalid` simulation outputs:
```python
restriction_estimator.append_simulations(theta, x)
classifier = restriction_estimator.train()
```
Training neural network. Epochs trained: 35
We can inspect the `restricted_prior`, i.e. the parameters that the classifier believes will lead to `valid` simulation outputs, with:
```python
restricted_prior = restriction_estimator.restrict_prior()
samples = restricted_prior.sample((10_000,))
_ = pairplot(samples, limits=[[-2, 2], [-2, 2]], fig_size=(4, 4))
```
The classifier rejected 51.6% of all samples. You will get a speed-up of 106.5%.

Indeed, parameter sets sampled from the `restricted_prior` always have a first parameter larger than `0.0`. These are the ones that produce `valid` simulation outputs (see our definition of the simulator above). We can then use the `restricted_prior` to generate more simulations. Almost all of them will have `valid` simulation outputs:
```python
new_theta, new_x = simulate_for_sbi(simulator, restricted_prior, 1000)
print("Simulation outputs: ", new_x)
```
The classifier rejected 50.9% of all samples. You will get a speed-up of 103.6%.
Running 1000 simulations.: 0%| | 0/1000 [00:00<?, ?it/s]
Simulation outputs: tensor([[ 0.6834, -0.2415],
[ 1.3459, 1.5373],
[ 2.1092, 1.9180],
...,
[ 0.8845, 0.4036],
[ 1.9111, 1.2526],
[ 0.8320, 2.3755]])
We can now use **all** simulations and run `SNPE` as always:
```python
restriction_estimator.append_simulations(
new_theta, new_x
) # Gather the new simulations in the `restriction_estimator`.
(
all_theta,
all_x,
_,
) = restriction_estimator.get_simulations() # Get all simulations run so far.
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(all_theta, all_x).train()
posterior = inference.build_posterior()
posterior_samples = posterior.sample((10_000,), x=torch.ones(2))
_ = pairplot(posterior_samples, limits=[[-2, 2], [-2, 2]], fig_size=(3, 3))
```
WARNING:root:Found 523 NaN simulations and 0 Inf simulations. They will be excluded from training.
Neural network successfully converged after 118 epochs.
Drawing 10000 posterior samples: 0%| | 0/10000 [00:00<?, ?it/s]

## Further options for tuning the algorithm
- the whole procedure can be repeated many times (see the loop shown in "Main syntax" in this tutorial)
- the classifier is trained to be relatively conservative, i.e. it will try to be very sure that a specific parameter set can indeed not produce `valid` simulation outputs. If you are OK with the restricted prior potentially ignoring a small fraction of parameter sets that might have produced `valid` data, you can use `restriction_estimator.restrict_prior(allowed_false_negatives=...)`. The argument `allowed_false_negatives` sets the fraction of potentially ignored parameter sets; a higher value will lead to more `valid` simulations (see the sketch below).
- By default, the algorithm considers simulations that have at least one `NaN` or `inf` as `invalid`. You can specify custom criteria with `RestrictionEstimator(decision_criterion=...)`
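As a sketch of the `allowed_false_negatives` option, the value `0.01` below is purely illustrative and should be tuned for your problem:

```python
# A slightly more permissive restricted prior: up to 1% of parameter sets that
# might in fact produce valid data may be (wrongly) rejected by the classifier.
restricted_prior = restriction_estimator.restrict_prior(allowed_false_negatives=0.01)
```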
/sbi-0.21.0.tar.gz/sbi-0.21.0/docs/docs/tutorial/08_restriction_estimator.md
# SBI with trial-based data and models of mixed data types
Trial-based data often has the property that the individual trials can be assumed to be independent and identically distributed (iid), i.e., they are assumed to have the same underlying model parameters. For example, in decision-making experiments, the experiment is often repeated over many trials with the same experimental settings and conditions. The corresponding set of trials is then assumed to be "iid".
### Amortization of neural network training with likelihood-based SBI
For some SBI variants the iid assumption can be exploited: when using a likelihood-based SBI method (`SNLE`, `SNRE`) one can train the density or ratio estimator on single-trial data, and then perform inference with `MCMC`. Crucially, because the data is iid and the estimator is trained on single-trial data, one can repeat the inference with a different `x_o` (a different set of trials, or a different number of trials) without having to retrain the density estimator. One can interpret this as amortization of the SBI training: we can obtain a neural likelihood or likelihood-ratio estimate for new `x_o`s without retraining, but we still have to run `MCMC` or `VI` to do inference.
In addition, one can not only change the number of trials of a new `x_o`, but also the entire inference setting. For example, one can apply hierarchical inference scenarios with changing hierarchical dependencies between the model parameters--all without having to retrain the density estimator, because it is based on estimating single-trial likelihoods.
Let us first have a look at how trial-based inference works in `sbi` before we discuss models with "mixed data types".
## SBI with trial-based data
For illustration we use a simple linear Gaussian simulator, as in previous tutorials. The simulator takes a single parameter (vector), the mean of the Gaussian; its variance is set to one. We define a Gaussian prior over the mean and perform inference. The observed data is again drawn from a Gaussian with some fixed "ground-truth" parameter $\theta_o$. Crucially, the observed data `x_o` can consist of multiple samples generated with the same ground-truth parameters, and these samples are then iid:
$$
\theta \sim \mathcal{N}(\mu_0,\; \Sigma_0) \\
x | \theta \sim \mathcal{N}(\theta,\; \Sigma=I) \\
\mathbf{x_o} = \{x_o^i\}_{i=1}^N \sim \mathcal{N}(\theta_o,\; \Sigma=I)
$$
For this toy problem the ground-truth posterior is well defined: it is again a Gaussian, centered on the mean of $\mathbf{x_o}$ and with variance scaled by the number of trials $N$, i.e., the more trials we observe, the more information about the underlying $\theta_o$ we have and the more concentrated the posterior becomes.
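For completeness, the standard conjugate-Gaussian result for the idealized model written above (identity noise covariance; the code below uses a different noise covariance, but the qualitative behaviour is the same) is

$$
\theta \mid \mathbf{x_o} \sim \mathcal{N}(\mu_N,\; \Sigma_N), \qquad
\Sigma_N = \left(\Sigma_0^{-1} + N I\right)^{-1}, \qquad
\mu_N = \Sigma_N \left(\Sigma_0^{-1}\mu_0 + \sum_{i=1}^{N} x_o^i\right),
$$

so the posterior covariance shrinks roughly like $1/N$ as more trials are observed.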
We will illustrate this below:
```python
import torch
import matplotlib.pyplot as plt
from torch import zeros, ones, eye
from torch.distributions import MultivariateNormal
from sbi.inference import SNLE, prepare_for_sbi, simulate_for_sbi
from sbi.analysis import pairplot
from sbi.utils.metrics import c2st
from sbi.simulators.linear_gaussian import (
linear_gaussian,
true_posterior_linear_gaussian_mvn_prior,
)
# Seeding
torch.manual_seed(1);
```
```python
# Gaussian simulator
theta_dim = 2
x_dim = theta_dim
# likelihood_mean will be likelihood_shift+theta
likelihood_shift = -1.0 * zeros(x_dim)
likelihood_cov = 0.3 * eye(x_dim)
prior_mean = zeros(theta_dim)
prior_cov = eye(theta_dim)
prior = MultivariateNormal(loc=prior_mean, covariance_matrix=prior_cov)
# Define Gaussian simulator
simulator, prior = prepare_for_sbi(
lambda theta: linear_gaussian(theta, likelihood_shift, likelihood_cov), prior
)
# Use built-in function to obtain ground-truth posterior given x_o
def get_true_posterior_samples(x_o, num_samples=1):
return true_posterior_linear_gaussian_mvn_prior(
x_o, likelihood_shift, likelihood_cov, prior_mean, prior_cov
).sample((num_samples,))
```
### The analytical posterior concentrates around true parameters with increasing number of IID trials
```python
num_trials = [1, 5, 15, 20]
theta_o = zeros(1, theta_dim)
# Generate multiple x_os with increasing number of trials.
xos = [theta_o.repeat(nt, 1) for nt in num_trials]
# Obtain analytical posterior samples for each of them.
ss = [get_true_posterior_samples(xo, 5000) for xo in xos]
```
```python
# Plot them in one pairplot as contours (obtained via KDE on the samples).
fig, ax = pairplot(
ss,
points=theta_o,
diag="kde",
upper="contour",
kde_offdiag=dict(bins=50),
kde_diag=dict(bins=100),
contour_offdiag=dict(levels=[0.95]),
points_colors=["k"],
points_offdiag=dict(marker="*", markersize=10),
)
plt.sca(ax[1, 1])
plt.legend(
[f"{nt} trials" if nt > 1 else f"{nt} trial" for nt in num_trials]
+ [r"$\theta_o$"],
frameon=False,
fontsize=12,
);
```
/home/janfb/qode/sbi/sbi/analysis/plot.py:425: UserWarning: No contour levels were found within the data range.
levels=opts["contour_offdiag"]["levels"],

Indeed, with increasing number of trials the posterior density concentrates around the true underlying parameter.
## Trial-based inference with NLE
(S)NLE can easily perform inference given multiple IID x because it is based on learning the likelihood. Once the likelihood is learned on single trials, i.e., a neural network that, given a single observation and a parameter, predicts the likelihood of that observation under that parameter, one can perform MCMC to obtain posterior samples.
MCMC relies on evaluating ratios of likelihoods of candidate parameters to either accept or reject them as posterior samples. When inferring the posterior given multiple IID observations, the relevant quantity is the joint likelihood of all IID observations given the current parameter candidate. Thus, given a neural likelihood from SNLE, we can calculate this joint likelihood and perform MCMC given IID data; we just have to multiply together (or add in log-space) the individual trial likelihoods (`sbi` takes care of that).
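Written out, the quantity evaluated during MCMC for an iid observation set is simply

$$
\log p(\mathbf{x_o} \mid \theta) = \sum_{i=1}^{N} \log p(x_o^i \mid \theta),
$$

where each term on the right-hand side comes from the single-trial neural likelihood estimate.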
```python
# Train SNLE.
inferer = SNLE(prior, show_progress_bars=True, density_estimator="mdn")
theta, x = simulate_for_sbi(simulator, prior, 10000, simulation_batch_size=1000)
inferer.append_simulations(theta, x).train(training_batch_size=100);
```
Running 10000 simulations.: 0%| | 0/10000 [00:00<?, ?it/s]
Neural network successfully converged after 40 epochs.
```python
# Obtain posterior samples for different number of iid xos.
samples = []
num_samples = 5000
mcmc_parameters = dict(
num_chains=50,
thin=10,
warmup_steps=50,
init_strategy="proposal",
)
mcmc_method = "slice_np_vectorized"
posterior = inferer.build_posterior(
mcmc_method=mcmc_method,
mcmc_parameters=mcmc_parameters,
)
# Generate samples with MCMC given the same set of x_os as above.
for xo in xos:
samples.append(posterior.sample(sample_shape=(num_samples,), x=xo))
```
MCMC init with proposal: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 13448.45it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 75000/75000 [00:34<00:00, 2173.97it/s]
/home/janfb/qode/sbi/sbi/utils/sbiutils.py:282: UserWarning: An x with a batch size of 5 was passed. It will be interpreted as a batch of independent and identically
distributed data X={x_1, ..., x_n}, i.e., data generated based on the
same underlying (unknown) parameter. The resulting posterior will be with
respect to entire batch, i.e,. p(theta | X).
respect to entire batch, i.e,. p(theta | X)."""
MCMC init with proposal: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 16636.14it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 75000/75000 [00:40<00:00, 1871.97it/s]
/home/janfb/qode/sbi/sbi/utils/sbiutils.py:282: UserWarning: An x with a batch size of 15 was passed. It will be interpreted as a batch of independent and identically
distributed data X={x_1, ..., x_n}, i.e., data generated based on the
same underlying (unknown) parameter. The resulting posterior will be with
respect to entire batch, i.e,. p(theta | X).
respect to entire batch, i.e,. p(theta | X)."""
MCMC init with proposal: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 16856.78it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 75000/75000 [01:00<00:00, 1234.76it/s]
/home/janfb/qode/sbi/sbi/utils/sbiutils.py:282: UserWarning: An x with a batch size of 20 was passed. It will be interpreted as a batch of independent and identically
distributed data X={x_1, ..., x_n}, i.e., data generated based on the
same underlying (unknown) parameter. The resulting posterior will be with
respect to entire batch, i.e,. p(theta | X).
respect to entire batch, i.e,. p(theta | X)."""
MCMC init with proposal: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 16580.90it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 75000/75000 [01:13<00:00, 1020.25it/s]
Note that `sbi` warns about `iid-x` with increasing number of trials here. We ignore the warning because that's exactly what we want to do.
```python
# Plot them in one pairplot as contours (obtained via KDE on the samples).
fig, ax = pairplot(
samples,
points=theta_o,
diag="kde",
upper="contour",
kde_offdiag=dict(bins=50),
kde_diag=dict(bins=100),
contour_offdiag=dict(levels=[0.95]),
points_colors=["k"],
points_offdiag=dict(marker="*", markersize=10),
)
plt.sca(ax[1, 1])
plt.legend(
[f"{nt} trials" if nt > 1 else f"{nt} trial" for nt in num_trials]
+ [r"$\theta_o$"],
frameon=False,
fontsize=12,
);
```
/home/janfb/qode/sbi/sbi/analysis/plot.py:425: UserWarning: No contour levels were found within the data range.
levels=opts["contour_offdiag"]["levels"],

The pairplot above already indicates that (S)NLE is well able to obtain accurate posterior samples also for an increasing number of trials (note that we trained the single-round version of SNLE so that we did not have to re-train it for new $x_o$).
Quantitatively, we can measure the accuracy of SNLE by calculating the `c2st` score between SNLE and the true posterior samples, where a score of `0.5` indicates that the two sets of samples cannot be told apart, i.e., a perfect match:
```python
cs = [c2st(torch.from_numpy(s1), torch.from_numpy(s2)) for s1, s2 in zip(ss, samples)]
for _ in range(len(num_trials)):
print(f"c2st score for num_trials={num_trials[_]}: {cs[_].item():.2f}")
```
c2st score for num_trials=1: 0.51
c2st score for num_trials=5: 0.50
c2st score for num_trials=15: 0.53
c2st score for num_trials=20: 0.55
This inference procedure would work similarly when using `SNRE`. However, note that it does not work for `SNPE` because in `SNPE` we are learning the posterior directly so that whenever `x_o` changes (in terms of the number of trials or the parameter dependencies) the posterior changes and `SNPE` needs to be trained again.
## Trial-based SBI with mixed data types
In some cases, models with trial-based data additionally return data of mixed types, e.g., continuous and discrete data. For example, most computational models of decision-making have continuous reaction times and discrete choices as output.
This can induce a problem when performing trial-based SBI that relies on learning a neural likelihood: it is challenging for most density estimators to handle both continuous and discrete data at the same time. A method has been developed to solve this problem, called __Mixed Neural Likelihood Estimation__ (MNLE). It works just like NLE, but with mixed data types. The trick is that it learns two separate density estimators, one for the discrete part of the data and one for the continuous part, and combines the two to obtain the final neural likelihood. Crucially, the continuous density estimator is trained conditioned on the output of the discrete one, such that statistical dependencies between the discrete and continuous data (e.g., between choices and reaction times) are modeled as well. The interested reader is referred to the original paper available [here](https://www.biorxiv.org/content/10.1101/2021.12.22.473472v2).
MNLE was recently added to `sbi` (see [PR](https://github.com/mackelab/sbi/pull/638)) and follows the same API as `SNLE`.
## Toy problem for `MNLE`
To illustrate `MNLE` we set up a toy simulator that outputs mixed data and for which we know the likelihood, such that we can obtain reference posterior samples via MCMC.
__Simulator__: To simulate mixed data we do the following
- Sample reaction time from `inverse Gamma`
- Sample choices from `Binomial`
- Return reaction time $rt \in (0, \infty)$ and choice index $c \in \{0, 1\}$
$$
c \sim \text{Binomial}(\rho) \\
rt \sim \text{InverseGamma}(\alpha=2, \beta) \\
$$
__Prior__: The priors of the two parameters $\rho$ and $\beta$ are independent. We define a `Beta` prior over the probability parameter $\rho$ of the `Binomial` used in the simulator and a `Gamma` prior over the parameter $\beta$ of the `inverse Gamma` used in the simulator:
$$
p(\beta, \rho) = p(\beta) \; p(\rho), \\
p(\beta) = \text{Gamma}(1, 0.5), \\
p(\rho) = \text{Beta}(2, 2)
$$
Because the `InverseGamma` and the `Binomial` likelihoods are well-defined we can perform MCMC on this problem and obtain reference-posterior samples.
```python
from sbi.inference import MNLE
from pyro.distributions import InverseGamma
from torch.distributions import Beta, Binomial, Gamma
from sbi.utils import MultipleIndependent
from sbi.inference import MCMCPosterior, VIPosterior, RejectionPosterior
from sbi.utils.torchutils import atleast_2d
from sbi.utils import mcmc_transform
from sbi.inference.potentials.base_potential import BasePotential
```
```python
# Toy simulator for mixed data
def mixed_simulator(theta):
beta, ps = theta[:, :1], theta[:, 1:]
choices = Binomial(probs=ps).sample()
rts = InverseGamma(concentration=2 * torch.ones_like(beta), rate=beta).sample()
return torch.cat((rts, choices), dim=1)
# Potential function to perform MCMC to obtain the reference posterior samples.
class PotentialFunctionProvider(BasePotential):
allow_iid_x = True # type: ignore
def __init__(self, prior, x_o, device="cpu"):
super().__init__(prior, x_o, device)
def __call__(self, theta, track_gradients: bool = True):
theta = atleast_2d(theta)
with torch.set_grad_enabled(track_gradients):
iid_ll = self.iid_likelihood(theta)
return iid_ll + self.prior.log_prob(theta)
def iid_likelihood(self, theta):
lp_choices = torch.stack(
[
Binomial(probs=th.reshape(1, -1)).log_prob(self.x_o[:, 1:])
for th in theta[:, 1:]
],
dim=1,
)
lp_rts = torch.stack(
[
InverseGamma(
concentration=2 * torch.ones_like(beta_i), rate=beta_i
).log_prob(self.x_o[:, :1])
for beta_i in theta[:, :1]
],
dim=1,
)
joint_likelihood = (lp_choices + lp_rts).squeeze()
assert joint_likelihood.shape == torch.Size([x_o.shape[0], theta.shape[0]])
return joint_likelihood.sum(0)
```
```python
# Define independent prior.
prior = MultipleIndependent(
[
Gamma(torch.tensor([1.0]), torch.tensor([0.5])),
Beta(torch.tensor([2.0]), torch.tensor([2.0])),
],
validate_args=False,
)
```
### Obtain reference-posterior samples via analytical likelihood and MCMC
```python
torch.manual_seed(42)
num_trials = 10
num_samples = 1000
theta_o = prior.sample((1,))
x_o = mixed_simulator(theta_o.repeat(num_trials, 1))
```
```python
true_posterior = MCMCPosterior(
potential_fn=PotentialFunctionProvider(prior, x_o),
proposal=prior,
method="slice_np_vectorized",
theta_transform=mcmc_transform(prior, enable_transform=True),
**mcmc_parameters,
)
true_samples = true_posterior.sample((num_samples,))
```
/home/janfb/qode/sbi/sbi/utils/sbiutils.py:282: UserWarning: An x with a batch size of 10 was passed. It will be interpreted as a batch of independent and identically
distributed data X={x_1, ..., x_n}, i.e., data generated based on the
same underlying (unknown) parameter. The resulting posterior will be with
respect to entire batch, i.e,. p(theta | X).
respect to entire batch, i.e,. p(theta | X)."""
MCMC init with proposal: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 4365.61it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 35000/35000 [02:39<00:00, 219.06it/s]
### Train MNLE and generate samples via MCMC
```python
# Training data
num_simulations = 5000
theta = prior.sample((num_simulations,))
x = mixed_simulator(theta)
# Train MNLE and obtain MCMC-based posterior.
trainer = MNLE()
estimator = trainer.append_simulations(theta, x).train()
```
Neural network successfully converged after 84 epochs.
```python
posterior = trainer.build_posterior(estimator, prior)
```
```python
# Training data
num_simulations = 5000
theta = prior.sample((num_simulations,))
x = mixed_simulator(theta)
# Train MNLE and obtain MCMC-based posterior.
trainer = MNLE(prior)
estimator = trainer.append_simulations(theta, x).train()
mnle_posterior = trainer.build_posterior(
mcmc_method="slice_np_vectorized", mcmc_parameters=mcmc_parameters
)
mnle_samples = mnle_posterior.sample((num_samples,), x=x_o)
```
/home/janfb/qode/sbi/sbi/neural_nets/mnle.py:64: UserWarning: The mixed neural likelihood estimator assumes that x contains
continuous data in the first n-1 columns (e.g., reaction times) and
categorical data in the last column (e.g., corresponding choices). If
this is not the case for the passed `x` do not use this function.
this is not the case for the passed `x` do not use this function."""
Neural network successfully converged after 35 epochs.
MCMC init with proposal: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 6125.93it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 35000/35000 [01:26<00:00, 404.04it/s]
### Compare MNLE and reference posterior
```python
# Plot them in one pairplot as contours (obtained via KDE on the samples).
fig, ax = pairplot(
[
prior.sample((1000,)),
true_samples,
mnle_samples,
],
points=theta_o,
diag="kde",
upper="contour",
kde_offdiag=dict(bins=50),
kde_diag=dict(bins=100),
contour_offdiag=dict(levels=[0.95]),
points_colors=["k"],
points_offdiag=dict(marker="*", markersize=10),
labels=[r"$\beta$", r"$\rho$"],
)
plt.sca(ax[1, 1])
plt.legend(
["Prior", "Reference", "MNLE", r"$\theta_o$"],
frameon=False,
fontsize=12,
);
```
/home/janfb/qode/sbi/sbi/analysis/plot.py:425: UserWarning: No contour levels were found within the data range.
levels=opts["contour_offdiag"]["levels"],

We see that the inferred `MNLE` posterior nicely matches the reference posterior, and that both are quite different from the prior.
Because MNLE training is amortized, we can obtain another posterior for a different observation, potentially with a different number of trials, just by running MCMC again (without re-training `MNLE`):
### Repeat inference with different `x_o` that has more trials
```python
num_trials = 100
x_o = mixed_simulator(theta_o.repeat(num_trials, 1))
true_samples = true_posterior.sample((num_samples,), x=x_o)
mnle_samples = mnle_posterior.sample((num_samples,), x=x_o)
```
/home/janfb/qode/sbi/sbi/utils/sbiutils.py:282: UserWarning: An x with a batch size of 100 was passed. It will be interpreted as a batch of independent and identically
distributed data X={x_1, ..., x_n}, i.e., data generated based on the
same underlying (unknown) parameter. The resulting posterior will be with
respect to entire batch, i.e,. p(theta | X).
respect to entire batch, i.e,. p(theta | X)."""
MCMC init with proposal: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 4685.01it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 35000/35000 [02:47<00:00, 209.25it/s]
MCMC init with proposal: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 6136.69it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 35000/35000 [08:23<00:00, 69.57it/s]
```python
# Plot them in one pairplot as contours (obtained via KDE on the samples).
fig, ax = pairplot(
[
prior.sample((1000,)),
true_samples,
mnle_samples,
],
points=theta_o,
diag="kde",
upper="contour",
kde_offdiag=dict(bins=50),
kde_diag=dict(bins=100),
contour_offdiag=dict(levels=[0.95]),
points_colors=["k"],
points_offdiag=dict(marker="*", markersize=10),
labels=[r"$\beta$", r"$\rho$"],
)
plt.sca(ax[1, 1])
plt.legend(
["Prior", "Reference", "MNLE", r"$\theta_o$"],
frameon=False,
fontsize=12,
);
```
/home/janfb/qode/sbi/sbi/analysis/plot.py:425: UserWarning: No contour levels were found within the data range.
levels=opts["contour_offdiag"]["levels"],

Again we can see that the posteriors match nicely. In addition, we observe that the posterior variance reduces as we increase the number of trials, similar to the illustration with the Gaussian example at the beginning of the tutorial.
A final note: `MNLE` is trained on single-trial data. Theoretically, density estimation is perfectly accurate only in the limit of infinite training data. Thus, training with a finite amount of training data naturally induces a small bias in the density estimator. As we observed above, this bias is so small that we don't really notice it, e.g., the `c2st` scores were close to 0.5. However, when we increase the number of trials in `x_o` dramatically (on the order of 1000s) the small bias can accumulate over the trials and inference with `MNLE` can become less accurate.
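As a quick, hedged sanity check (not part of the tutorial code above), one could quantify how closely the MNLE samples match the reference samples with a classifier two-sample test; the import path below assumes `c2st` is available from `sbi.utils.metrics`, which may differ between `sbi` versions.
```python
# Hedged sketch: classifier two-sample test between reference and MNLE samples.
# A score close to 0.5 means the two sets of samples are hard to tell apart.
from sbi.utils.metrics import c2st  # assumed import path

score = c2st(true_samples, mnle_samples)
print(f"c2st(reference, MNLE) = {score.item():.3f}")
```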
|
/sbi-0.21.0.tar.gz/sbi-0.21.0/docs/docs/tutorial/14_multi-trial-data-and-mixed-data-types.md
| 0.852353 | 0.987711 |
14_multi-trial-data-and-mixed-data-types.md
|
pypi
|
# Crafting summary statistics
Many simulators produce outputs that are high-dimensional. For example, a simulator might generate a time series or an image. In a [previous tutorial](https://www.mackelab.org/sbi/tutorial/05_embedding_net/), we discussed how neural networks can be used to learn summary statistics from such data. In this notebook, we will instead focus on hand-crafting summary statistics. We demonstrate that the choice of summary statistics can be crucial for the performance of the inference algorithm.
```python
import numpy as np
import torch
import matplotlib.pyplot as plt
import matplotlib as mpl
# sbi
import sbi.utils as utils
from sbi.inference.base import infer
from sbi.inference import SNPE, prepare_for_sbi, simulate_for_sbi
from sbi.utils.get_nn_models import posterior_nn
from sbi.analysis import pairplot
```
```python
# remove top and right axis from plots
mpl.rcParams["axes.spines.right"] = False
mpl.rcParams["axes.spines.top"] = False
```
This notebook is not intended to provide a one-size-fits-all approach. In fact, it argues against this: it argues for the user to carefully construct their summary statistics to (i) further help them understand their observed data, (ii) help them understand exactly what they want the model to recover from the observation, and (iii) help the inference framework itself.
# Example 1: The quadratic function
Assume we have a simulator that is given by a quadratic function:
$x(t) = a\cdot t^2 + b\cdot t + c + \epsilon$,
where $\epsilon$ is Gaussian observation noise and $\theta = \{a, b, c\}$ are the parameters. Given an observed quadratic function $x_o$, we would like to recover the posterior over parameters $a_o$, $b_o$ and $c_o$.
## 1.1 Prior over parameters
First we define a prior distribution over parameters $a$, $b$ and $c$. Here, we use a uniform prior for $a$, $b$ and $c$ to go from $-1$ to $1$.
```python
prior_min = [-1, -1, -1]
prior_max = [1, 1, 1]
prior = utils.torchutils.BoxUniform(
low=torch.as_tensor(prior_min), high=torch.as_tensor(prior_max)
)
```
## 1.2 Simulator
Defining some helper functions first:
```python
def create_t_x(theta, seed=None):
"""Return an t, x array for plotting based on params"""
if theta.ndim == 1:
theta = theta[np.newaxis, :]
if seed is not None:
rng = np.random.RandomState(seed)
else:
rng = np.random.RandomState()
t = np.linspace(-1, 1, 200)
ts = np.repeat(t[:, np.newaxis], theta.shape[0], axis=1)
x = (
theta[:, 0] * ts**2
+ theta[:, 1] * ts
+ theta[:, 2]
+ 0.01 * rng.randn(ts.shape[0], theta.shape[0])
)
return t, x
def eval(theta, t, seed=None):
"""Evaluate the quadratic function at `t`"""
if theta.ndim == 1:
theta = theta[np.newaxis, :]
if seed is not None:
rng = np.random.RandomState(seed)
else:
rng = np.random.RandomState()
return theta[:, 0] * t**2 + theta[:, 1] * t + theta[:, 2] + 0.01 * rng.randn(1)
```
In this example, we generate the observation $x_o$ from parameters $\theta_o=(a_o, b_o, c_o)=(0.3, -0.2, -0.1)$. The observation looks as follows.
```python
theta_o = np.array([0.3, -0.2, -0.1])
t, x = create_t_x(theta_o)
plt.plot(t, x, "k")
```
[<matplotlib.lines.Line2D at 0x7f828b191d60>]

## 1.3 Summary statistics
We will compare two methods for defining summary statistics. One method uses three summary statistics, namely function evaluations at three points in time. The other method uses a single summary statistic: the mean squared error (MSE) between the observed and the simulated trace. In the second case, one then tries to obtain the posterior $p(\theta \mid 0)$, i.e. the posterior given that the error is zero. These two methods are implemented below:
<br>
$\textbf{get_3_values()}$ returns 3 function evaluations at $x=-0.5, x=0$ and $x=0.75$.
<br>
$\textbf{get_MSE()}$ returns the mean squared error between the true quadratic function and a quadratic function corresponding to a sample from the prior distribution.
```python
def get_3_values(theta, seed=None):
"""
Return 3 'y' values corresponding to x=-0.5,0,0.75 as summary statistic vector
"""
return np.array(
[
eval(theta, -0.5, seed=seed),
eval(theta, 0, seed=seed),
eval(theta, 0.75, seed=seed),
]
).T
```
```python
def get_MSE(theta, theta_o, seed=None):
"""
    Return the mean-squared error (MSE) between the observed trace and a simulated trace
"""
_, y = create_t_x(theta_o, seed=seed) # truth
_, y_ = create_t_x(theta, seed=seed) # simulations
return np.mean(np.square(y_ - y), axis=0, keepdims=True).T # MSE
```
Let's draw a couple of samples from our prior and look at their summary statistics. Notice that these change slightly every time you rerun the code due to the observation noise, unless you set the seed.
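For example (a small illustration using the helpers defined above), one could inspect the summary statistics of a few prior samples directly:
```python
# Summary statistics for a few prior samples (values change across reruns
# because of the observation noise, unless a seed is passed).
theta_demo = prior.sample((3,)).numpy()
print("3-value statistics:\n", get_3_values(theta_demo))
print("MSE w.r.t. the observation:\n", get_MSE(theta_demo, theta_o))
```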
## 1.4 Simulating data
Let us see various plots of prior samples and their summary statistics versus the truth, i.e. our artificial observation.
```python
t, x_truth = create_t_x(theta_o)
plt.plot(t, x_truth, "k", zorder=1, label="truth")
n_samples = 100
theta = prior.sample((n_samples,))
t, x = create_t_x(theta.numpy())
plt.plot(t, x, "grey", zorder=0)
plt.legend()
```
<matplotlib.legend.Legend at 0x7f8289154eb0>

In summary, we defined two reasonable sets of summary statistics and, a priori, there is no apparent reason why one should be better than the other. When we do inference, we'd like our posterior to focus around parameter samples whose simulated MSE is very close to 0 (i.e. the MSE summary statistic of the truth) or whose 3 extracted $(t, x)$ coordinates match the truthful ones.
## 1.5 Inference
### 1.5.1 Using the MSE
Let's see if we can use the MSE to recover the true observation parameters $\theta_o=(a_0,b_0,c_0)$.
```python
theta = prior.sample((1000,))
x = get_MSE(theta.numpy(), theta_o)
theta = torch.as_tensor(theta, dtype=torch.float32)
x = torch.as_tensor(x, dtype=torch.float32)
```
```python
inference = SNPE(prior)
_ = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
```
Neural network successfully converged after 181 epochs.
Now that we've built the posterior, we can ask how likely it finds certain parameters given that we tell it we've observed a certain summary statistic (in this case the MSE). We can then sample from it.
```python
x_o = torch.as_tensor(
[
[
0.0,
]
]
)
theta_p = posterior.sample((10000,), x=x_o)
```
Drawing 10000 posterior samples: 0%| | 0/10000 [00:00<?, ?it/s]
```python
fig, axes = pairplot(
theta_p,
limits=list(zip(prior_min, prior_max)),
ticks=list(zip(prior_min, prior_max)),
figsize=(7, 7),
labels=["a", "b", "c"],
points_offdiag={"markersize": 6},
points_colors="r",
points=theta_o,
);
```

The posterior seems to be pretty broad: it is not very certain about the 'true' parameters (here shown in red).
```python
x_o_t, x_o_x = create_t_x(theta_o)
plt.plot(x_o_t, x_o_x, "k", zorder=1, label="truth")
theta_p = posterior.sample((10,), x=x_o)
x_t, x_x = create_t_x(theta_p.numpy())
plt.plot(x_t, x_x, "grey", zorder=0)
plt.legend()
```
Drawing 10 posterior samples: 0%| | 0/10 [00:00<?, ?it/s]
<matplotlib.legend.Legend at 0x7f82882cd670>

The functions are a bit closer to the observation than the prior samples, but many posterior samples generate activity that is very far off from the observation. We would expect `sbi` to do better on such a simple example. So what's going on? Do we need more simulations? Feel free to try, but below we will show that one can use the same number of simulations with different summary statistics and do much better.
### 1.5.2 Using 3 coordinates as summary statistics
```python
x = get_3_values(theta.numpy())
x = torch.as_tensor(x, dtype=torch.float32)
```
```python
inference = SNPE(prior)
_ = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
```
Neural network successfully converged after 127 epochs.
The observation is now given by the values of the observed trace at three different coordinates:
```python
x_o = torch.as_tensor(get_3_values(theta_o), dtype=float)
```
```python
theta_p = posterior.sample((10000,), x=x_o)
fig, axes = pairplot(
theta_p,
limits=list(zip(prior_min, prior_max)),
ticks=list(zip(prior_min, prior_max)),
figsize=(7, 7),
labels=["a", "b", "c"],
points_offdiag={"markersize": 6},
points_colors="r",
points=theta_o,
);
```
Drawing 10000 posterior samples: 0%| | 0/10000 [00:00<?, ?it/s]

```python
x_o_x, x_o_y = create_t_x(theta_o)
plt.plot(x_o_x, x_o_y, "k", zorder=1, label="truth")
theta_p = posterior.sample((100,), x=x_o)
ind_10_highest = np.argsort(np.array(posterior.log_prob(theta=theta_p, x=x_o)))[-10:]
theta_p_considered = theta_p[ind_10_highest, :]
x_x, x_y = create_t_x(theta_p_considered.numpy())
plt.plot(x_x, x_y, "grey", zorder=0)
plt.legend()
```
Drawing 100 posterior samples: 0%| | 0/100 [00:00<?, ?it/s]
<matplotlib.legend.Legend at 0x7f82885b4af0>

OK, this definitely seems to work! The posterior correctly focuses on the true parameters with greater confidence. You can experiment yourself with how this improves further with more training samples, or try to find out how few you actually need while still obtaining a satisfying posterior and posterior samples that simulate close to the observation.
So, what's up with the MSE? Why is it not informative enough to constrain the posterior? In 1.6, we'll see both the power and pitfalls of summary statistics.
## 1.6 Prior simulations' summary statistics vs observed summary statistics
Let's try to understand this. Let's look at a histogram of the four summary statistics we've experimented with, and see how they compare to our observed truth summary-statistic vector:
```python
stats = np.concatenate(
(get_3_values(theta.numpy()), get_MSE(theta.numpy(), theta_o)), axis=1
)
x_o = np.concatenate((get_3_values(theta_o), np.asarray([[0.0]])), axis=1)
features = ["y @ x=-0.5", "y @ x=0", "y @ x=0.75", "MSE"]
fig, axes = plt.subplots(1, 4, figsize=(10, 3))
xlabelfontsize = 10
for i, ax in enumerate(axes.reshape(-1)):
ax.hist(
stats[:, i],
color=["grey"],
alpha=0.5,
bins=30,
density=True,
histtype="stepfilled",
label=["simulations"],
)
ax.axvline(x_o[:, i], label="observation")
ax.set_xlabel(features[i], fontsize=xlabelfontsize)
if i == 3:
ax.legend()
plt.tight_layout()
```

We see that for the three coordinates (the three plots on the left), the simulations cover the observation, i.e. they cover it from both the left and the right in each case. For the MSE, the simulations never truly reach the observation $0.0$.
For the trained neural network, it is strongly preferable if the simulations cover the observation. In that case, the neural network can **interpolate** between simulated data. Contrary to that, for the MSE, the neural network has to **extrapolate**: it never observes a simulation that is to the left of the observation and has to extrapolate to the region of MSE=$0.0$. This seems like a technical point but, as we saw above, it makes a huge difference in performance.
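To make this concrete, here is a small check using the `stats`, `x_o` and `features` arrays computed above: values strictly between 0 and 1 indicate that the observation is covered from both sides, while a value of 0 (as for the MSE) means the network would have to extrapolate.
```python
# Fraction of simulations whose summary statistic lies below the observed value.
for i, name in enumerate(features):
    frac_below = np.mean(stats[:, i] < x_o[:, i])
    print(f"{name}: fraction of simulations below the observation = {frac_below:.2f}")
```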
## 1.7 Explicit recommendations
We give some explicit recommendations for using summary statistics:
- Visualize the histogram of each summary statistic and plot the value of the observation. If, for some summary statistics, the observation is not covered (or is at the very border, e.g. the MSE above), the trained neural network will struggle.
- Do not use an "error" as summary statistic. This is common in optimization (e.g. genetic algorithms), but it often leads to trouble in `sbi` due to the reason above.
- Only use summary statistics that are necessary. The fewer summary statistics you use, the less can go wrong with them. Of course, you have to ensure that the summary statistics describe the raw data sufficiently well.
|
/sbi-0.21.0.tar.gz/sbi-0.21.0/docs/docs/tutorial/10_crafting_summary_statistics.md
| 0.746324 | 0.992154 |
10_crafting_summary_statistics.md
|
pypi
|
# Active subspaces for sensitivity analysis
A standard method to analyse dynamical systems, such as models of neural dynamics, is sensitivity analysis. We can use the posterior obtained with `sbi` to perform such analyses.
## Main syntax
```python
from sbi.analysis import ActiveSubspace
sensitivity = ActiveSubspace(posterior.set_default_x(x_o))
e_vals, e_vecs = sensitivity.find_directions(posterior_log_prob_as_property=True)
projected_data = sensitivity.project(theta_project, num_dimensions=1)
```
## Example and further explanation
```python
import torch
from torch.distributions import MultivariateNormal
from sbi.analysis import ActiveSubspace, pairplot
from sbi.simulators import linear_gaussian
from sbi.inference import simulate_for_sbi, infer
_ = torch.manual_seed(0)
```
Let's define a simple Gaussian toy example:
```python
prior = MultivariateNormal(0.0 * torch.ones(2), 2 * torch.eye(2))
def simulator(theta):
return linear_gaussian(
theta, -0.8 * torch.ones(2), torch.tensor([[1.0, 0.98], [0.98, 1.0]])
)
posterior = infer(simulator, prior, num_simulations=2000, method="SNPE").set_default_x(
torch.zeros(2)
)
```
Running 2000 simulations.: 0%| | 0/2000 [00:00<?, ?it/s]
Neural network successfully converged after 117 epochs.
```python
posterior_samples = posterior.sample((2000,))
```
Drawing 2000 posterior samples: 0%| | 0/2000 [00:00<?, ?it/s]
```python
_ = pairplot(posterior_samples, limits=[[-3, 3], [-3, 3]], figsize=(4, 4))
```

When performing a sensitivity analysis on this model, we would expect that there is one direction that is less sensitive (from bottom left to top right, along the vector [1, 1]) and one direction that is more sensitive (from top left to bottom right, along [1, -1]). We can recover these directions with the `ActiveSubspace` module in `sbi`.
```python
sensitivity = ActiveSubspace(posterior)
e_vals, e_vecs = sensitivity.find_directions(posterior_log_prob_as_property=True)
```
Drawing 1000 posterior samples: 0%| | 0/1000 [00:00<?, ?it/s]
The method `.find_directions()` returns eigenvalues and the corresponding eigenvectors. It does so by computing the matrix:
$M = \mathbb{E}_{p(\theta|x_o)}\left[\nabla_{\theta}p(\theta|x_o)^T \nabla_{\theta}p(\theta|x_o)\right]$
It then does an eigendecomposition:
$M = Q \Lambda Q^{-1}$
A large eigenvalue indicates that the gradient of the posterior density is large along the direction of the corresponding eigenvector, i.e. the system output is sensitive to changes along this (`active`) direction. In the output below, the eigenvalue corresponding to the vector `[-0.71, 0.71]` (the `[1, -1]` direction) is much larger than the eigenvalue of `[-0.71, -0.71]` (the `[1, 1]` direction). This matches the intuition we developed above.
```python
print("Eigenvalues: \n", e_vals, "\n")
print("Eigenvectors: \n", e_vecs)
```
Eigenvalues:
tensor([2.3552e-06, 7.0754e-05])
Eigenvectors:
tensor([[-0.7066, -0.7076],
[-0.7076, 0.7066]])
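To pick out the most sensitive direction programmatically, one can select the eigenvector belonging to the largest eigenvalue; the sketch below assumes that, as with `torch.linalg.eigh`, eigenvalues are returned in ascending order and eigenvectors as columns.
```python
# Select the eigenvector corresponding to the largest eigenvalue
# (assumption: eigenvectors are stored as columns of e_vecs).
most_sensitive_direction = e_vecs[:, torch.argmax(e_vals)]
print("Most sensitive direction:", most_sensitive_direction)
```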
Lastly, we can project the data into the active dimensions. In this case, we will just use one active dimension:
```python
projected_data = sensitivity.project(posterior_samples, num_dimensions=1)
```
## Some technical details
- The gradients and thus the eigenvectors are computed in z-scored space. The mean and standard deviation are computed w.r.t. the prior distribution. Thus, the gradients (and thus the eigenvalues) reflect changes on the scale of the prior.
- The expected value used to compute the matrix $M$ is estimated using `1000` posterior samples. This number can be set via the `num_monte_carlo_samples=...` keyword argument of `.find_directions()` (see the sketch after this list).
- How does this relate to Principal Component Analysis (PCA)? In the example above, the results of PCA would be very similar. However, there are two main differences to PCA: First, PCA ignores local changes in the posterior, whereas the active subspace is sensitive to them (since it computes the gradient, which is a local quantity). Second, active subspaces can be used to characterize the sensitivity of any other quantity w.r.t. circuit parameters. This is outlined below:
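A minimal sketch of the second point in the list above, assuming the keyword is accepted by `.find_directions()` in the installed `sbi` version:
```python
# Increase the number of Monte-Carlo samples used to estimate the matrix M.
# (Keyword name taken from the note above; treat it as an assumption.)
e_vals, e_vecs = sensitivity.find_directions(
    posterior_log_prob_as_property=True,
    num_monte_carlo_samples=5000,
)
```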
## Computing the sensitivity of a specific summary statistic
Above, we have shown how to identify directions along which the posterior probability changes rapidly. Notably, the posterior probability reflects how consistent a specific parameter set is with **all** summary statistics, i.e. the entire $x_o$. Sometimes, we might be interested in investigating how a specific feature is influenced by the parameters. This feature could be one of the values of $x_o$, but it could also be a different property.
As a neuroscience example, in Deistler et al. 2021, we obtained the posterior distribution given burst durations and delays between bursts. After having obtained the posterior, we then wanted to analyse the sensitivity of metabolic cost w.r.t. circuit parameters. The framework we presented above can easily be extended to study such questions.
```python
prior = MultivariateNormal(0.0 * torch.ones(2), 2 * torch.eye(2))
def simulator(theta):
return linear_gaussian(theta, -0.8 * torch.ones(2), torch.eye(2))
posterior = infer(simulator, prior, num_simulations=2000, method="SNPE").set_default_x(
torch.zeros(2)
)
```
Running 2000 simulations.: 0%| | 0/2000 [00:00<?, ?it/s]
Neural network successfully converged after 139 epochs.
```python
_ = pairplot(posterior.sample((10_000,)), limits=[[-3, 3], [-3, 3]], figsize=(4, 4))
```
Drawing 10000 posterior samples: 0%| | 0/10000 [00:00<?, ?it/s]

```python
sensitivity = ActiveSubspace(posterior)
```
This time, we begin by drawing samples from the posterior and then computing the desired property for each of the samples (i.e. you will probably have to run simulations for each theta and extract the property from the simulation output). As an example, we assume that the property is just the cube of the first dimension of the simulation output:
```python
theta, x = simulate_for_sbi(simulator, posterior, 5000)
property_ = x[:, :1] ** 3 # E.g. metabolic cost.
```
Drawing 5000 posterior samples: 0%| | 0/5000 [00:00<?, ?it/s]
Running 5000 simulations.: 0%| | 0/5000 [00:00<?, ?it/s]
To investigate the sensitivity of a given parameter, we train a neural network to predict the `property_` from the parameters and then analyse the neural network as above:
$M = \mathbb{E}_{p(\theta|x_o)}\left[\nabla_{\theta}f(\theta)^T \nabla_{\theta}f(\theta)\right]$
where $f(\cdot)$ is the trained neural network.
```python
_ = sensitivity.add_property(theta, property_).train()
e_vals, e_vecs = sensitivity.find_directions()
```
Training neural network. Epochs trained: 24
Drawing 1000 posterior samples: 0%| | 0/1000 [00:00<?, ?it/s]
```python
print("Eigenvalues: \n", e_vals, "\n")
print("Eigenvectors: \n", e_vecs)
```
Eigenvalues:
tensor([2.8801e-06, 6.1131e-05])
Eigenvectors:
tensor([[ 0.0362, 0.9993],
[ 0.9993, -0.0362]])
As we can see, one of the eigenvalues is much smaller than the other. The larger eigenvalue corresponds (approximately) to the vector `[1.0, 0.0]`. This makes sense, because the `property_` is influenced only by the first output which, in turn, is influenced only by the first parameter.
|
/sbi-0.21.0.tar.gz/sbi-0.21.0/docs/docs/tutorial/09_sensitivity_analysis.md
| 0.810066 | 0.985186 |
09_sensitivity_analysis.md
|
pypi
|
# Inference on Hodgkin-Huxley model: tutorial
In this tutorial, we use `sbi` to do inference on a [Hodgkin-Huxley model](https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model) from neuroscience (Hodgkin and Huxley, 1952). We will learn two parameters ($\bar g_{Na}$,$\bar g_K$) based on a current-clamp recording that we generate synthetically (in practice, this would be an experimental observation).
Note, you find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/examples/00_HH_simulator.ipynb](https://github.com/mackelab/sbi/blob/main/examples/00_HH_simulator.ipynb) in the `sbi` repository.
First we are going to import basic packages.
```python
import numpy as np
import torch
# visualization
import matplotlib as mpl
import matplotlib.pyplot as plt
# sbi
from sbi import utils as utils
from sbi import analysis as analysis
from sbi.inference.base import infer
```
```python
# remove top and right axis from plots
mpl.rcParams["axes.spines.right"] = False
mpl.rcParams["axes.spines.top"] = False
```
## Different required components
Before running inference, let us define the different required components:
1. observed data
2. simulator
3. prior over model parameters
## 1. Observed data
Let us assume we current-clamped a neuron and recorded the following voltage trace:
<img src="https://raw.githubusercontent.com/mackelab/delfi/master/docs/docs/tutorials/observed_voltage_trace.png" width="480">
<br>
In fact, this voltage trace was not measured experimentally but synthetically generated by simulating a Hodgkin-Huxley model with particular parameters ($\bar g_{Na}$,$\bar g_K$). We will come back to this point later in the tutorial.
## 2. Simulator
We would like to infer the posterior over the two parameters ($\bar g_{Na}$,$\bar g_K$) of a Hodgkin-Huxley model, given the observed electrophysiological recording above. The model has channel kinetics as in [Pospischil et al. 2008](https://link.springer.com/article/10.1007/s00422-008-0263-8), and is defined by the following set of differential equations (parameters of interest highlighted in orange):
$$
\scriptsize
\begin{align}
C_m\frac{dV}{dt}&=g_{\text{l}}\left(E_{\text{l}}-V\right)+
\color{orange}{\bar{g}_{Na}}m^3h\left(E_{Na}-V\right)+
\color{orange}{\bar{g}_{K}}n^4\left(E_K-V\right)+
\bar{g}_Mp\left(E_K-V\right)+
I_{inj}+
\sigma\eta\left(t\right)\\
\frac{dq}{dt}&=\frac{q_\infty\left(V\right)-q}{\tau_q\left(V\right)},\;q\in\{m,h,n,p\}
\end{align}
$$
Above, $V$ represents the membrane potential, $C_m$ is the membrane capacitance, $g_{\text{l}}$ is the leak conductance, $E_{\text{l}}$ is the membrane reversal potential, $\bar{g}_c$ is the density of channels of type $c$ ($\text{Na}^+$, $\text{K}^+$, M), $E_c$ is the reversal potential of $c$, ($m$, $h$, $n$, $p$) are the respective channel gating kinetic variables, and $\sigma \eta(t)$ is the intrinsic neural noise. The right hand side of the voltage dynamics is composed of a leak current, a voltage-dependent $\text{Na}^+$ current, a delayed-rectifier $\text{K}^+$ current, a slow voltage-dependent $\text{K}^+$ current responsible for spike-frequency adaptation, and an injected current $I_{\text{inj}}$. Channel gating variables $q$ have dynamics fully characterized by the neuron membrane potential $V$, given the respective steady-state $q_{\infty}(V)$ and time constant $\tau_{q}(V)$ (details in Pospischil et al. 2008).
The input current $I_{\text{inj}}$ is defined as
```python
from HH_helper_functions import syn_current
I, t_on, t_off, dt, t, A_soma = syn_current()
```
The Hodgkin-Huxley simulator is given by:
```python
from HH_helper_functions import HHsimulator
```
Putting the input current and the simulator together:
```python
def run_HH_model(params):
params = np.asarray(params)
# input current, time step
I, t_on, t_off, dt, t, A_soma = syn_current()
t = np.arange(0, len(I), 1) * dt
# initial voltage
V0 = -70
states = HHsimulator(V0, params.reshape(1, -1), dt, t, I)
return dict(data=states.reshape(-1), time=t, dt=dt, I=I.reshape(-1))
```
To get an idea of the output of the Hodgkin-Huxley model, let us generate some voltage traces for different parameters ($\bar g_{Na}$,$\bar g_K$), given the input current $I_{\text{inj}}$:
```python
# three sets of (g_Na, g_K)
params = np.array([[50.0, 1.0], [4.0, 1.5], [20.0, 15.0]])
num_samples = len(params[:, 0])
sim_samples = np.zeros((num_samples, len(I)))
for i in range(num_samples):
sim_samples[i, :] = run_HH_model(params=params[i, :])["data"]
```
```python
# colors for traces
col_min = 2
num_colors = num_samples + col_min
cm1 = mpl.cm.Blues
col1 = [cm1(1.0 * i / num_colors) for i in range(col_min, num_colors)]
fig = plt.figure(figsize=(7, 5))
gs = mpl.gridspec.GridSpec(2, 1, height_ratios=[4, 1])
ax = plt.subplot(gs[0])
for i in range(num_samples):
plt.plot(t, sim_samples[i, :], color=col1[i], lw=2)
plt.ylabel("voltage (mV)")
ax.set_xticks([])
ax.set_yticks([-80, -20, 40])
ax = plt.subplot(gs[1])
plt.plot(t, I * A_soma * 1e3, "k", lw=2)
plt.xlabel("time (ms)")
plt.ylabel("input (nA)")
ax.set_xticks([0, max(t) / 2, max(t)])
ax.set_yticks([0, 1.1 * np.max(I * A_soma * 1e3)])
ax.yaxis.set_major_formatter(mpl.ticker.FormatStrFormatter("%.2f"))
plt.show()
```

As can be seen, the voltage traces can be quite diverse for different parameter values.
Often, we are not interested in matching the exact trace, but only in matching certain features thereof. In this example of the Hodgkin-Huxley model, the summary features are the number of spikes, the mean resting potential, the standard deviation of the resting potential, and the first four voltage moments: mean, standard deviation, skewness and kurtosis. Using the function `calculate_summary_statistics()` imported below, we obtain these statistics from the output of the Hodgkin Huxley simulator.
```python
from HH_helper_functions import calculate_summary_statistics
```
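For illustration, we could compute the summary statistics of one of the example traces simulated above (the exact feature vector of course depends on `calculate_summary_statistics`):
```python
# Summary statistics of the first example trace from above.
example_stats = calculate_summary_statistics(run_HH_model(params[0, :]))
print(example_stats)
```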
Lastly, we define a function that performs all of the above steps at once. The function `simulation_wrapper` takes in conductance values, runs the Hodgkin Huxley model and then returns the summary statistics.
```python
def simulation_wrapper(params):
"""
Returns summary statistics from conductance values in `params`.
Summarizes the output of the HH simulator and converts it to `torch.Tensor`.
"""
obs = run_HH_model(params)
summstats = torch.as_tensor(calculate_summary_statistics(obs))
return summstats
```
`sbi` takes any function as simulator. Thus, `sbi` also has the flexibility to use simulators that utilize external packages, e.g., Brian (http://briansimulator.org/), nest (https://www.nest-simulator.org/), or NEURON (https://neuron.yale.edu/neuron/). External simulators do not even need to be Python-based as long as they store simulation outputs in a format that can be read from Python. All that is necessary is to wrap your external simulator of choice into a Python callable that takes a parameter set and outputs a set of summary statistics we want to fit the parameters to.
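As a purely hypothetical sketch of this idea, an external command-line simulator could be wrapped like the following; the executable name, its file-based interface, and the shape of its output are all assumptions made for illustration:
```python
import subprocess

import numpy as np
import torch


def external_simulation_wrapper(params):
    """Run a hypothetical external simulator and return summary statistics."""
    # Write parameters to disk for the (assumed) external tool.
    np.savetxt("params.txt", np.asarray(params).reshape(1, -1))
    # The executable and its command-line interface are hypothetical.
    subprocess.run(["./my_external_simulator", "params.txt", "output.txt"], check=True)
    # Read the simulated voltage trace back and summarize it in Python.
    trace = np.loadtxt("output.txt")
    obs = dict(data=trace, time=t, dt=dt, I=I.reshape(-1))
    return torch.as_tensor(calculate_summary_statistics(obs))
```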
## 3. Prior over model parameters
Now that we have the simulator, we need to define a function with the prior over the model parameters ($\bar g_{Na}$,$\bar g_K$), which in this case is chosen to be a Uniform distribution:
```python
prior_min = [0.5, 1e-4]
prior_max = [80.0, 15.0]
prior = utils.torchutils.BoxUniform(
low=torch.as_tensor(prior_min), high=torch.as_tensor(prior_max)
)
```
## Inference
Now that we have all the required components, we can run inference with SNPE to identify parameters whose activity matches this trace.
```python
posterior = infer(
simulation_wrapper, prior, method="SNPE", num_simulations=300, num_workers=4
)
```
Running 300 simulations in 300 batches.
Neural network successfully converged after 233 epochs.
Note that `sbi` can parallelize your simulator. If you experience problems with parallelization, try setting `num_workers=1` and please give us an error report as a [GitHub issue](https://github.com/mackelab/sbi/issues).
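For instance, falling back to serial simulation only requires changing the `num_workers` argument of the call above:
```python
# Serial fallback in case parallel simulation causes problems on your system.
posterior = infer(
    simulation_wrapper, prior, method="SNPE", num_simulations=300, num_workers=1
)
```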
### Coming back to the observed data
As mentioned at the beginning of the tutorial, the observed data are generated by the Hodgkin-Huxley model with a set of known parameters ($\bar g_{Na}$,$\bar g_K$). To illustrate how to compute the summary statistics of the observed data, let us regenerate the observed data:
```python
# true parameters and respective labels
true_params = np.array([50.0, 5.0])
labels_params = [r"$g_{Na}$", r"$g_{K}$"]
```
```python
observation_trace = run_HH_model(true_params)
observation_summary_statistics = calculate_summary_statistics(observation_trace)
```
As already shown above, the observed voltage traces look as follows:
```python
fig = plt.figure(figsize=(7, 5))
gs = mpl.gridspec.GridSpec(2, 1, height_ratios=[4, 1])
ax = plt.subplot(gs[0])
plt.plot(observation_trace["time"], observation_trace["data"])
plt.ylabel("voltage (mV)")
plt.title("observed data")
plt.setp(ax, xticks=[], yticks=[-80, -20, 40])
ax = plt.subplot(gs[1])
plt.plot(observation_trace["time"], I * A_soma * 1e3, "k", lw=2)
plt.xlabel("time (ms)")
plt.ylabel("input (nA)")
ax.set_xticks([0, max(observation_trace["time"]) / 2, max(observation_trace["time"])])
ax.set_yticks([0, 1.1 * np.max(I * A_soma * 1e3)])
ax.yaxis.set_major_formatter(mpl.ticker.FormatStrFormatter("%.2f"))
```

## Analysis of the posterior given the observed data
After running the inference algorithm, let us inspect the inferred posterior distribution over the parameters ($\bar g_{Na}$,$\bar g_K$), given the observed trace. To do so, we first draw samples (i.e. consistent parameter sets) from the posterior:
```python
samples = posterior.sample((10000,), x=observation_summary_statistics)
```
Drawing 10000 posterior samples
```python
fig, axes = analysis.pairplot(
samples,
limits=[[0.5, 80], [1e-4, 15.0]],
ticks=[[0.5, 80], [1e-4, 15.0]],
figsize=(5, 5),
points=true_params,
points_offdiag={"markersize": 6},
points_colors="r",
);
```

As can be seen, the inferred posterior contains the ground-truth parameters (red) in a high-probability region. Now, let us sample parameters from the posterior distribution, simulate the Hodgkin-Huxley model for this parameter set and compare the simulations with the observed data:
```python
# Draw a sample from the posterior and convert to numpy for plotting.
posterior_sample = posterior.sample((1,), x=observation_summary_statistics).numpy()
```
Drawing 1 posterior samples
```python
fig = plt.figure(figsize=(7, 5))
# plot observation
t = observation_trace["time"]
y_obs = observation_trace["data"]
plt.plot(t, y_obs, lw=2, label="observation")
# simulate and plot samples
x = run_HH_model(posterior_sample)
plt.plot(t, x["data"], "--", lw=2, label="posterior sample")
plt.xlabel("time (ms)")
plt.ylabel("voltage (mV)")
ax = plt.gca()
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[::-1], labels[::-1], bbox_to_anchor=(1.3, 1), loc="upper right")
ax.set_xticks([0, 60, 120])
ax.set_yticks([-80, -20, 40]);
```

As can be seen, the sample from the inferred posterior leads to simulations that closely resemble the observed data, confirming that `SNPE` did a good job at capturing the observed data in this simple case.
## References
A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4):500–544, 1952.
M. Pospischil, M. Toledo-Rodriguez, C. Monier, Z. Piwkowska, T. Bal, Y. Frégnac, H. Markram, and A. Destexhe. Minimal Hodgkin-Huxley type models for different classes of cortical and thalamic neurons. Biological Cybernetics, 99(4-5), 2008.
|
/sbi-0.21.0.tar.gz/sbi-0.21.0/docs/docs/examples/00_HH_simulator.md
| 0.779616 | 0.984246 |
00_HH_simulator.md
|
pypi
|
[](https://pypi.org/project/sbibm/)  [](https://github.com/sbi-benchmark/sbibm/blob/master/CONTRIBUTING.md) [](https://github.com/psf/black)
# Simulation-Based Inference Benchmark
This repository contains a simulation-based inference benchmark framework, `sbibm`, which we describe in the [associated manuscript "Benchmarking Simulation-based Inference"](http://proceedings.mlr.press/v130/lueckmann21a.html). A short summary of the paper and interactive results can be found on the project website: https://sbi-benchmark.github.io
The benchmark framework includes tasks, reference posteriors, metrics, plotting, and integrations with SBI toolboxes. The framework is designed to be highly extensible and easily used in new research projects as we show below.
In order to emphasize that `sbibm` can be used independently of any particular analysis pipeline, we split the code for reproducing the experiments of the manuscript into a separate repository hosted at [github.com/sbi-benchmark/results/](https://github.com/sbi-benchmark/results/tree/main/benchmarking_sbi). Besides the pipeline to reproduce the manuscript's experiments, full results including dataframes for quick comparisons are hosted in that repository.
If you have questions or comments, please do not hesitate [to contact us](mailto:[email protected]) or [open an issue](https://github.com/sbi-benchmark/sbibm/issues). We [invite contributions](CONTRIBUTING.md), e.g., of new tasks, novel metrics, or wrappers for other SBI toolboxes.
## Installation
Assuming you have a working Python environment, simply install `sbibm` via `pip`:
```commandline
$ pip install sbibm
```
ODE based models (currently SIR and Lotka-Volterra models) use [Julia](https://julialang.org) via [`diffeqtorch`](https://github.com/sbi-benchmark/diffeqtorch). If you are planning to use these tasks, please additionally follow the [installation instructions of `diffeqtorch`](https://github.com/sbi-benchmark/diffeqtorch#installation). If you are not planning to simulate these tasks for now, you can skip this step.
## Quickstart
A quick demonstration of `sbibm`, see further below for more in-depth explanations:
```python
import sbibm
task = sbibm.get_task("two_moons") # See sbibm.get_available_tasks() for all tasks
prior = task.get_prior()
simulator = task.get_simulator()
observation = task.get_observation(num_observation=1) # 10 per task
# These objects can then be used for custom inference algorithms, e.g.
# we might want to generate simulations by sampling from prior:
thetas = prior(num_samples=10_000)
xs = simulator(thetas)
# Alternatively, we can import existing algorithms, e.g:
from sbibm.algorithms import rej_abc # See help(rej_abc) for keywords
posterior_samples, _, _ = rej_abc(task=task, num_samples=10_000, num_observation=1, num_simulations=100_000)
# Once we got samples from an approximate posterior, compare them to the reference:
from sbibm.metrics import c2st
reference_samples = task.get_reference_posterior_samples(num_observation=1)
c2st_accuracy = c2st(reference_samples, posterior_samples)
# Visualise both posteriors:
from sbibm.visualisation import fig_posterior
fig = fig_posterior(task_name="two_moons", observation=1, samples=[posterior_samples])
# Note: Use fig.show() or fig.save() to show or save the figure
# Get results from other algorithms for comparison:
from sbibm.visualisation import fig_metric
results_df = sbibm.get_results(dataset="main_paper.csv")
fig = fig_metric(results_df.query("task == 'two_moons'"), metric="C2ST")
```
## Tasks
You can then see the list of available tasks by calling `sbibm.get_available_tasks()`. If we wanted to use, say, the `two_moons` task, we can load it using `sbibm.get_task`, as in:
```python
import sbibm
task = sbibm.get_task("slcp")
```
Next, we might want to get `prior` and `simulator`:
```python
prior = task.get_prior()
simulator = task.get_simulator()
```
If we call `prior()` we get a single draw from the prior distribution. `num_samples` can be provided as an optional argument. The following would generate 100 samples from the simulator:
```python
thetas = prior(num_samples=100)
xs = simulator(thetas)
```
`xs` is a `torch.Tensor` with shape `(100, 8)`, since for SLCP the data is eight-dimensional. Note that if required, conversion to and from `torch.Tensor` is very easy: Convert to a numpy array using `.numpy()`, e.g., `xs.numpy()`. For the reverse, use `torch.from_numpy()` on a numpy array.
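For example, with the simulated data from above:
```python
import torch

xs_np = xs.numpy()                 # torch.Tensor -> numpy array
xs_back = torch.from_numpy(xs_np)  # numpy array -> torch.Tensor
assert torch.equal(xs, xs_back)
```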
Some algorithms might require evaluating the pdf of the prior distribution, which can be obtained as a [`torch.Distribution` instance](https://pytorch.org/docs/stable/distributions.html) using `task.get_prior_dist()`, which exposes `log_prob` and `sample` methods. The parameters of the prior can be obtained as a dictionary using `task.get_prior_params()`.
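A short sketch of the API just described:
```python
# Evaluate the prior density of previously drawn parameters and inspect
# the prior's parameters as a dictionary.
prior_dist = task.get_prior_dist()       # torch.Distribution instance
log_probs = prior_dist.log_prob(thetas)  # log-density of the samples from above
prior_params = task.get_prior_params()   # dict of prior parameters
```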
For each task, the benchmark contains 10 observations and respective reference posteriors samples. To fetch the first observation and respective reference posterior samples:
```python
observation = task.get_observation(num_observation=1)
reference_samples = task.get_reference_posterior_samples(num_observation=1)
```
Every task has a couple of informative attributes, including:
```python
task.dim_data # dimensionality data, here: 8
task.dim_parameters # dimensionality parameters, here: 5
task.num_observations # number of different observations x_o available, here: 10
task.name # name: slcp
task.name_display # name_display: SLCP
```
Finally, if you want to have a look at the source code of the task, take a look at `sbibm/tasks/slcp/task.py`. If you wanted to implement a new task, we would recommend modelling it after the existing ones. You will see that each task has a private `_setup` method that was used to generate the reference posterior samples.
## Algorithms
As mentioned in the intro, `sbibm` wraps a number of third-party packages to run various algorithms. We found it easiest to give each algorithm the same interface: in general, each algorithm specifies a `run` function that gets `task` and hyperparameters as arguments, and eventually returns the required `num_posterior_samples`. That way, one can simply import the `run` function of an algorithm, apply it to any given task, and compute metrics on the returned samples. Wrappers for external toolboxes implementing algorithms are in the subfolder `sbibm/algorithms`. Currently, integrations with [`sbi`](https://www.mackelab.org/sbi/), [`pyabc`](https://pyabc.readthedocs.io), [`pyabcranger`](https://github.com/diyabc/abcranger), as well as an experimental integration with [`elfi`](https://github.com/elfi-dev/elfi) are provided.
## Metrics
In order to compare algorithms on the benchmarks, a number of different metrics can be computed. Each task comes with reference samples for each observation. Depending on the benchmark, these are either obtained by making use of an analytic solution for the posterior or a customized likelihood-based approach.
A number of metrics can be computed by comparing algorithm samples to reference samples. In order to do so, a number of different two-sample tests can be computed (see `sbibm/metrics`). These tests follow a simple interface: they just require passing samples from the reference and from the algorithm.
For example, in order to compute C2ST:
```python
import torch
from sbibm.metrics.c2st import c2st
from sbibm.algorithms import rej_abc
reference_samples = task.get_reference_posterior_samples(num_observation=1)
algorithm_samples, _, _ = rej_abc(task=task, num_samples=10_000, num_simulations=100_000, num_observation=1)
c2st_accuracy = c2st(reference_samples, algorithm_samples)
```
For more info, see `help(c2st)`.
## Figures
`sbibm` includes code for plotting results, for instance, to plot metrics on a specific task:
```python
from sbibm.visualisation import fig_metric
results_df = sbibm.get_results(dataset="main_paper.csv")
results_subset = results_df.query("task == 'two_moons'")
fig = fig_metric(results_subset, metric="C2ST") # Use fig.show() or fig.save() to show or save the figure
```
It can also be used to plot posteriors, e.g., to compare the results of an inference algorithm against reference samples:
```python
from sbibm.visualisation import fig_posterior
fig = fig_posterior(task_name="two_moons", observation=1, samples=[algorithm_samples])
```
## Results and Experiments
We host results and the code for reproducing the experiments of the manuscript in a separate repository at [github.com/sbi-benchmark/results](https://github.com/sbi-benchmark/results/tree/main/benchmarking_sbi): this includes the pipeline to reproduce the manuscript's experiments as well as dataframes for new comparisons.
## Citation
The manuscript is [available through PMLR](http://proceedings.mlr.press/v130/lueckmann21a.html):
```bibtex
@InProceedings{lueckmann2021benchmarking,
title = {Benchmarking Simulation-Based Inference},
author = {Lueckmann, Jan-Matthis and Boelts, Jan and Greenberg, David and Goncalves, Pedro and Macke, Jakob},
booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
pages = {343--351},
year = {2021},
editor = {Banerjee, Arindam and Fukumizu, Kenji},
volume = {130},
series = {Proceedings of Machine Learning Research},
month = {13--15 Apr},
publisher = {PMLR}
}
```
## Support
This work was supported by the German Research Foundation (DFG; SFB 1233 PN 276693517, SFB 1089, SPP 2041, Germany’s Excellence Strategy – EXC number 2064/1 PN 390727645) and the German Federal Ministry of Education and Research (BMBF; project ’[ADIMEM](https://fit.uni-tuebingen.de/Project/Details?id=9199)’, FKZ 01IS18052 A-D).
## License
MIT
|
/sbibm-1.1.0.tar.gz/sbibm-1.1.0/README.md
| 0.811003 | 0.988142 |
README.md
|
pypi
|
from functools import partial
import distrax
import haiku as hk
import jax
import matplotlib.pyplot as plt
import optax
import seaborn as sns
from jax import numpy as jnp
from jax import random
from surjectors import (
Chain,
MaskedAutoregressive,
Permutation,
TransformedDistribution,
)
from surjectors.conditioners import MADE
from surjectors.util import unstack
from sbijax import SNL
from sbijax.mcmc import sample_with_slice
def prior_model_fns():
p = distrax.Independent(distrax.Normal(jnp.zeros(2), jnp.ones(2)), 1)
return p.sample, p.log_prob
def simulator_fn(seed, theta):
p = distrax.Normal(jnp.zeros_like(theta), 0.1)
y = theta + p.sample(seed=seed)
return y
def log_density_fn(theta, y):
prior = distrax.Independent(distrax.Normal(jnp.zeros(2), jnp.ones(2)), 1)
likelihood = distrax.MultivariateNormalDiag(
theta, 0.1 * jnp.ones_like(theta)
)
lp = jnp.sum(prior.log_prob(theta)) + jnp.sum(likelihood.log_prob(y))
return lp
def make_model(dim):
def _bijector_fn(params):
means, log_scales = unstack(params, -1)
return distrax.ScalarAffine(means, jnp.exp(log_scales))
def _flow(method, **kwargs):
layers = []
order = jnp.arange(dim)
for i in range(5):
layer = MaskedAutoregressive(
bijector_fn=_bijector_fn,
conditioner=MADE(
dim,
[50, dim * 2],
2,
w_init=hk.initializers.TruncatedNormal(0.001),
b_init=jnp.zeros,
activation=jax.nn.tanh,
),
)
order = order[::-1]
layers.append(layer)
layers.append(Permutation(order, 1))
chain = Chain(layers)
base_distribution = distrax.Independent(
distrax.Normal(jnp.zeros(dim), jnp.ones(dim)),
1,
)
td = TransformedDistribution(base_distribution, chain)
return td(method, **kwargs)
td = hk.transform(_flow)
td = hk.without_apply_rng(td)
return td
def run():
y_observed = jnp.array([-2.0, 1.0])
log_density_partial = partial(log_density_fn, y=y_observed)
log_density = lambda x: jax.vmap(log_density_partial)(x)
prior_simulator_fn, prior_logdensity_fn = prior_model_fns()
fns = (prior_simulator_fn, prior_logdensity_fn), simulator_fn
snl = SNL(fns, make_model(2))
optimizer = optax.adam(1e-3)
params, info = snl.fit(
random.PRNGKey(23),
y_observed,
optimizer=optimizer,
n_rounds=3,
max_n_iter=100,
batch_size=64,
n_early_stopping_patience=5,
sampler="slice",
)
slice_samples = sample_with_slice(
hk.PRNGSequence(0), log_density, 4, 2000, 1000, prior_simulator_fn
)
slice_samples = slice_samples.reshape(-1, 2)
snl_samples, _ = snl.sample_posterior(
params, 4, 2000, 1000, sampler="slice"
)
print(f"Took n={snl.n_total_simulations} simulations in total")
fig, axes = plt.subplots(2, 2)
for i in range(2):
sns.histplot(
slice_samples[:, i], color="darkgrey", ax=axes.flatten()[i]
)
sns.histplot(
snl_samples[:, i], color="darkblue", ax=axes.flatten()[i + 2]
)
axes.flatten()[i].set_title(rf"Sampled posterior $\theta_{i}$")
axes.flatten()[i + 2].set_title(rf"Approximated posterior $\theta_{i}$")
sns.despine()
plt.tight_layout()
plt.show()
if __name__ == "__main__":
run()
|
/sbijax-0.0.12.tar.gz/sbijax-0.0.12/examples/bivariate_gaussian_snl.py
| 0.732592 | 0.592843 |
bivariate_gaussian_snl.py
|
pypi
|
import distrax
import haiku as hk
import jax.nn
import matplotlib.pyplot as plt
import optax
import seaborn as sns
from jax import numpy as jnp
from jax import random
from surjectors import Chain, TransformedDistribution
from surjectors.bijectors.masked_autoregressive import MaskedAutoregressive
from surjectors.bijectors.permutation import Permutation
from surjectors.conditioners import MADE
from surjectors.util import unstack
from sbijax import SNP
def prior_model_fns():
p = distrax.Independent(distrax.Normal(jnp.zeros(2), jnp.ones(2)), 1)
return p.sample, p.log_prob
def simulator_fn(seed, theta):
p = distrax.Normal(jnp.zeros_like(theta), 0.1)
y = theta + p.sample(seed=seed)
return y
def make_model(dim):
def _bijector_fn(params):
means, log_scales = unstack(params, -1)
return distrax.ScalarAffine(means, jnp.exp(log_scales))
def _flow(method, **kwargs):
layers = []
order = jnp.arange(dim)
for i in range(5):
layer = MaskedAutoregressive(
bijector_fn=_bijector_fn,
conditioner=MADE(
dim,
[50, dim * 2],
2,
w_init=hk.initializers.TruncatedNormal(0.001),
b_init=jnp.zeros,
activation=jax.nn.tanh,
),
)
order = order[::-1]
layers.append(layer)
layers.append(Permutation(order, 1))
chain = Chain(layers)
base_distribution = distrax.Independent(
distrax.Normal(jnp.zeros(dim), jnp.ones(dim)),
1,
)
td = TransformedDistribution(base_distribution, chain)
return td(method, **kwargs)
td = hk.transform(_flow)
return td
def run():
y_observed = jnp.array([-2.0, 1.0])
prior_simulator_fn, prior_logdensity_fn = prior_model_fns()
fns = (prior_simulator_fn, prior_logdensity_fn), simulator_fn
optimizer = optax.adamw(1e-04)
snp = SNP(fns, make_model(2))
params, info = snp.fit(
random.PRNGKey(2),
y_observed,
n_rounds=3,
optimizer=optimizer,
n_early_stopping_patience=10,
batch_size=64,
n_atoms=10,
max_n_iter=100,
)
print(f"Took n={snp.n_total_simulations} simulations in total")
snp_samples, _ = snp.sample_posterior(params, 10000)
fig, axes = plt.subplots(2)
for i, ax in enumerate(axes):
sns.histplot(snp_samples[:, i], color="darkblue", ax=ax)
ax.set_xlim([-3.0, 3.0])
sns.despine()
plt.tight_layout()
plt.show()
if __name__ == "__main__":
run()
|
/sbijax-0.0.12.tar.gz/sbijax-0.0.12/examples/bivariate_gaussian_snp.py
| 0.637482 | 0.469095 |
bivariate_gaussian_snp.py
|
pypi
|
import argparse
from functools import partial
import distrax
import haiku as hk
import jax
import matplotlib.pyplot as plt
import numpy as np
import optax
import pandas as pd
import seaborn as sns
from jax import numpy as jnp
from jax import random
from jax import scipy as jsp
from jax import vmap
from surjectors import Chain, TransformedDistribution
from surjectors.bijectors.masked_autoregressive import MaskedAutoregressive
from surjectors.bijectors.permutation import Permutation
from surjectors.conditioners import MADE, mlp_conditioner
from surjectors.surjectors.affine_masked_autoregressive_inference_funnel import ( # type: ignore # noqa: E501
AffineMaskedAutoregressiveInferenceFunnel,
)
from surjectors.util import unstack
from sbijax import SNL
from sbijax.mcmc.slice import sample_with_slice
def prior_model_fns():
p = distrax.Independent(
distrax.Uniform(jnp.full(5, -3.0), jnp.full(5, 3.0)), 1
)
return p.sample, p.log_prob
def likelihood_fn(theta, y):
mu = jnp.tile(theta[:2], 4)
s1, s2 = theta[2] ** 2, theta[3] ** 2
corr = s1 * s2 * jnp.tanh(theta[4])
cov = jnp.array([[s1**2, corr], [corr, s2**2]])
cov = jsp.linalg.block_diag(*[cov for _ in range(4)])
p = distrax.MultivariateNormalFullCovariance(mu, cov)
return p.log_prob(y)
def simulator_fn(seed, theta):
orig_shape = theta.shape
if theta.ndim == 2:
theta = theta[:, None, :]
us_key, noise_key = random.split(seed)
def _unpack_params(ps):
m0 = ps[..., [0]]
m1 = ps[..., [1]]
s0 = ps[..., [2]] ** 2
s1 = ps[..., [3]] ** 2
r = np.tanh(ps[..., [4]])
return m0, m1, s0, s1, r
m0, m1, s0, s1, r = _unpack_params(theta)
us = distrax.Normal(0.0, 1.0).sample(
seed=us_key, sample_shape=(theta.shape[0], theta.shape[1], 4, 2)
)
xs = jnp.empty_like(us)
xs = xs.at[:, :, :, 0].set(s0 * us[:, :, :, 0] + m0)
y = xs.at[:, :, :, 1].set(
s1 * (r * us[:, :, :, 0] + np.sqrt(1.0 - r**2) * us[:, :, :, 1]) + m1
)
if len(orig_shape) == 2:
y = y.reshape((*theta.shape[:1], 8))
else:
y = y.reshape((*theta.shape[:2], 8))
return y
def make_model(dim, use_surjectors):
def _bijector_fn(params):
means, log_scales = unstack(params, -1)
return distrax.ScalarAffine(means, jnp.exp(log_scales))
def _decoder_fn(n_dim):
decoder_net = mlp_conditioner(
[50, n_dim * 2],
w_init=hk.initializers.TruncatedNormal(stddev=0.001),
)
def _fn(z):
params = decoder_net(z)
mu, log_scale = jnp.split(params, 2, -1)
return distrax.Independent(
distrax.Normal(mu, jnp.exp(log_scale)), 1
)
return _fn
def _flow(method, **kwargs):
layers = []
n_dimension = dim
order = jnp.arange(n_dimension)
for i in range(5):
if i == 2 and use_surjectors:
n_latent = 6
layer = AffineMaskedAutoregressiveInferenceFunnel(
n_latent,
_decoder_fn(n_dimension - n_latent),
conditioner=MADE(
n_latent,
[50, n_latent * 2],
2,
w_init=hk.initializers.TruncatedNormal(0.001),
b_init=jnp.zeros,
activation=jax.nn.tanh,
),
)
n_dimension = n_latent
order = order[::-1]
order = order[:n_dimension] - jnp.min(order[:n_dimension])
else:
layer = MaskedAutoregressive(
bijector_fn=_bijector_fn,
conditioner=MADE(
n_dimension,
[50, n_dimension * 2],
2,
w_init=hk.initializers.TruncatedNormal(0.001),
b_init=jnp.zeros,
activation=jax.nn.tanh,
),
)
order = order[::-1]
layers.append(layer)
layers.append(Permutation(order, 1))
chain = Chain(layers)
base_distribution = distrax.Independent(
distrax.Normal(jnp.zeros(n_dimension), jnp.ones(n_dimension)),
reinterpreted_batch_ndims=1,
)
td = TransformedDistribution(base_distribution, chain)
return td(method, **kwargs)
td = hk.transform(_flow)
td = hk.without_apply_rng(td)
return td
def run(use_surjectors):
len_theta = 5
# this is the thetas used in SNL
# thetas = jnp.array([-0.7, -2.9, -1.0, -0.9, 0.6])
y_observed = jnp.array(
[
[
-0.9707123,
-2.9461224,
-0.4494722,
-3.4231849,
-0.13285634,
-3.364017,
-0.85367596,
-2.4271638,
]
]
)
prior_simulator_fn, prior_fn = prior_model_fns()
fns = (prior_simulator_fn, prior_fn), simulator_fn
def log_density_fn(theta, y):
prior_lp = prior_fn(theta)
likelihood_lp = likelihood_fn(theta, y)
lp = jnp.sum(prior_lp) + jnp.sum(likelihood_lp)
return lp
log_density_partial = partial(log_density_fn, y=y_observed)
log_density = lambda x: vmap(log_density_partial)(x)
model = make_model(y_observed.shape[1], use_surjectors)
snl = SNL(fns, model)
optimizer = optax.adam(1e-3)
params, info = snl.fit(
random.PRNGKey(23),
y_observed,
optimizer,
n_rounds=5,
max_n_iter=100,
batch_size=64,
n_early_stopping_patience=5,
sampler="slice",
)
slice_samples = sample_with_slice(
hk.PRNGSequence(12), log_density, 4, 5000, 2500, prior_simulator_fn
)
slice_samples = slice_samples.reshape(-1, len_theta)
snl_samples, _ = snl.sample_posterior(params, 4, 5000, 2500)
g = sns.PairGrid(pd.DataFrame(slice_samples))
g.map_upper(sns.scatterplot, color="black", marker=".", edgecolor=None, s=2)
g.map_diag(plt.hist, color="black")
for ax in g.axes.flatten():
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
g.fig.set_figheight(5)
g.fig.set_figwidth(5)
plt.show()
fig, axes = plt.subplots(len_theta, 2)
for i in range(len_theta):
sns.histplot(slice_samples[:, i], color="darkgrey", ax=axes[i, 0])
sns.histplot(snl_samples[:, i], color="darkblue", ax=axes[i, 1])
axes[i, 0].set_title(rf"Sampled posterior $\theta_{i}$")
axes[i, 1].set_title(rf"Approximated posterior $\theta_{i}$")
for j in range(2):
axes[i, j].set_xlim(-5, 5)
sns.despine()
plt.tight_layout()
plt.show()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--use-surjectors", action="store_true", default=True)
args = parser.parse_args()
run(args.use_surjectors)
|
/sbijax-0.0.12.tar.gz/sbijax-0.0.12/examples/slcp_snl_masked_autoregressive.py
| 0.538983 | 0.489381 |
slcp_snl_masked_autoregressive.py
|
pypi
|
import argparse
from functools import partial
import distrax
import haiku as hk
import jax
import matplotlib.pyplot as plt
import numpy as np
import optax
import pandas as pd
import seaborn as sns
from jax import numpy as jnp
from jax import random
from jax import scipy as jsp
from surjectors import (
AffineMaskedCouplingInferenceFunnel,
Chain,
MaskedCoupling,
TransformedDistribution,
)
from surjectors.conditioners import mlp_conditioner
from surjectors.util import make_alternating_binary_mask
from sbijax import SNL
from sbijax.mcmc import sample_with_slice
def prior_model_fns():
p = distrax.Independent(
distrax.Uniform(jnp.full(5, -3.0), jnp.full(5, 3.0)), 1
)
return p.sample, p.log_prob
def likelihood_fn(theta, y):
mu = jnp.tile(theta[:2], 4)
s1, s2 = theta[2] ** 2, theta[3] ** 2
corr = s1 * s2 * jnp.tanh(theta[4])
cov = jnp.array([[s1**2, corr], [corr, s2**2]])
cov = jsp.linalg.block_diag(*[cov for _ in range(4)])
p = distrax.MultivariateNormalFullCovariance(mu, cov)
return p.log_prob(y)
def simulator_fn(seed, theta):
orig_shape = theta.shape
if theta.ndim == 2:
theta = theta[:, None, :]
us_key, noise_key = random.split(seed)
def _unpack_params(ps):
m0 = ps[..., [0]]
m1 = ps[..., [1]]
s0 = ps[..., [2]] ** 2
s1 = ps[..., [3]] ** 2
r = np.tanh(ps[..., [4]])
return m0, m1, s0, s1, r
m0, m1, s0, s1, r = _unpack_params(theta)
us = distrax.Normal(0.0, 1.0).sample(
seed=us_key, sample_shape=(theta.shape[0], theta.shape[1], 4, 2)
)
xs = jnp.empty_like(us)
xs = xs.at[:, :, :, 0].set(s0 * us[:, :, :, 0] + m0)
y = xs.at[:, :, :, 1].set(
s1 * (r * us[:, :, :, 0] + np.sqrt(1.0 - r**2) * us[:, :, :, 1]) + m1
)
if len(orig_shape) == 2:
y = y.reshape((*theta.shape[:1], 8))
else:
y = y.reshape((*theta.shape[:2], 8))
return y
def make_model(dim, use_surjectors):
def _bijector_fn(params):
means, log_scales = jnp.split(params, 2, -1)
return distrax.ScalarAffine(means, jnp.exp(log_scales))
def _conditional_fn(n_dim):
decoder_net = mlp_conditioner([32, 32, n_dim * 2])
def _fn(z):
params = decoder_net(z)
mu, log_scale = jnp.split(params, 2, -1)
return distrax.Independent(
distrax.Normal(mu, jnp.exp(log_scale)), 1
)
return _fn
def _flow(method, **kwargs):
layers = []
n_dimension = dim
for i in range(5):
mask = make_alternating_binary_mask(n_dimension, i % 2 == 0)
if i == 2 and use_surjectors:
n_latent = 6
layer = AffineMaskedCouplingInferenceFunnel(
n_latent,
_conditional_fn(n_dimension - n_latent),
mlp_conditioner([32, 32, n_dimension * 2]),
)
n_dimension = n_latent
else:
layer = MaskedCoupling(
mask=mask,
bijector=_bijector_fn,
conditioner=mlp_conditioner([32, 32, n_dimension * 2]),
)
layers.append(layer)
chain = Chain(layers)
base_distribution = distrax.Independent(
distrax.Normal(jnp.zeros(n_dimension), jnp.ones(n_dimension)),
reinterpreted_batch_ndims=1,
)
td = TransformedDistribution(base_distribution, chain)
return td(method, **kwargs)
td = hk.transform(_flow)
td = hk.without_apply_rng(td)
return td
def run(use_surjectors):
len_theta = 5
# these are the thetas used in SNL
# thetas = jnp.array([-0.7, -2.9, -1.0, -0.9, 0.6])
y_observed = jnp.array(
[
[
-0.9707123,
-2.9461224,
-0.4494722,
-3.4231849,
-0.13285634,
-3.364017,
-0.85367596,
-2.4271638,
]
]
)
prior_simulator_fn, prior_fn = prior_model_fns()
fns = (prior_simulator_fn, prior_fn), simulator_fn
def log_density_fn(theta, y):
prior_lp = prior_fn(theta)
likelihood_lp = likelihood_fn(theta, y)
lp = jnp.sum(prior_lp) + jnp.sum(likelihood_lp)
return lp
log_density_partial = partial(log_density_fn, y=y_observed)
log_density = lambda x: jax.vmap(log_density_partial)(x)
snl = SNL(fns, make_model(y_observed.shape[1], use_surjectors))
optimizer = optax.adam(1e-3)
params, info = snl.fit(
random.PRNGKey(23),
y_observed,
optimizer,
n_rounds=5,
max_n_iter=100,
batch_size=64,
n_early_stopping_patience=10,
sampler="slice",
)
slice_samples = sample_with_slice(
hk.PRNGSequence(12), log_density, 4, 5000, 2500, prior_simulator_fn
)
slice_samples = slice_samples.reshape(-1, len_theta)
snl_samples, _ = snl.sample_posterior(params, 4, 5000, 2500)
g = sns.PairGrid(pd.DataFrame(slice_samples))
g.map_upper(sns.scatterplot, color="black", marker=".", edgecolor=None, s=2)
g.map_diag(plt.hist, color="black")
for ax in g.axes.flatten():
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
g.fig.set_figheight(5)
g.fig.set_figwidth(5)
plt.show()
fig, axes = plt.subplots(len_theta, 2)
for i in range(len_theta):
sns.histplot(slice_samples[:, i], color="darkgrey", ax=axes[i, 0])
sns.histplot(snl_samples[:, i], color="darkblue", ax=axes[i, 1])
axes[i, 0].set_title(rf"Sampled posterior $\theta_{i}$")
axes[i, 1].set_title(rf"Approximated posterior $\theta_{i}$")
for j in range(2):
axes[i, j].set_xlim(-5, 5)
sns.despine()
plt.tight_layout()
plt.show()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--use-surjectors", action="store_true", default=True)
args = parser.parse_args()
run(args.use_surjectors)
|
/sbijax-0.0.12.tar.gz/sbijax-0.0.12/examples/slcp_snl_masked_coupling.py
| 0.593138 | 0.424412 |
slcp_snl_masked_coupling.py
|
pypi
|
from functools import partial
import distrax
import haiku as hk
import jax
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from jax import numpy as jnp
from jax import random
from jax import scipy as jsp
from jax import vmap
from sbijax import SMCABC
from sbijax.mcmc import sample_with_slice
def prior_model_fns():
p = distrax.Independent(
distrax.Uniform(jnp.full(5, -3.0), jnp.full(5, 3.0)), 1
)
return p.sample, p.log_prob
def likelihood_fn(theta, y):
mu = jnp.tile(theta[:2], 4)
s1, s2 = theta[2] ** 2, theta[3] ** 2
corr = s1 * s2 * jnp.tanh(theta[4])
cov = jnp.array([[s1**2, corr], [corr, s2**2]])
cov = jsp.linalg.block_diag(*[cov for _ in range(4)])
p = distrax.MultivariateNormalFullCovariance(mu, cov)
return p.log_prob(y)
def simulator_fn(seed, theta):
orig_shape = theta.shape
if theta.ndim == 2:
theta = theta[:, None, :]
us_key, noise_key = random.split(seed)
def _unpack_params(ps):
m0 = ps[..., [0]]
m1 = ps[..., [1]]
s0 = ps[..., [2]] ** 2
s1 = ps[..., [3]] ** 2
r = np.tanh(ps[..., [4]])
return m0, m1, s0, s1, r
m0, m1, s0, s1, r = _unpack_params(theta)
us = distrax.Normal(0.0, 1.0).sample(
seed=us_key, sample_shape=(theta.shape[0], theta.shape[1], 4, 2)
)
xs = jnp.empty_like(us)
xs = xs.at[:, :, :, 0].set(s0 * us[:, :, :, 0] + m0)
y = xs.at[:, :, :, 1].set(
s1 * (r * us[:, :, :, 0] + np.sqrt(1.0 - r**2) * us[:, :, :, 1]) + m1
)
if len(orig_shape) == 2:
y = y.reshape((*theta.shape[:1], 8))
else:
y = y.reshape((*theta.shape[:2], 8))
return y
def summary_fn(y):
if y.ndim == 2:
y = y[None, ...]
sumr = jnp.mean(y, axis=1, keepdims=True)
return sumr
def distance_fn(y_simulated, y_observed):
diff = y_simulated - y_observed
dist = jax.vmap(lambda el: jnp.linalg.norm(el))(diff)
return dist
def run():
len_theta = 5
# these are the thetas used in SNL
# thetas = jnp.array([-0.7, -2.9, -1.0, -0.9, 0.6])
y_observed = jnp.array(
[
[
-0.9707123,
-2.9461224,
-0.4494722,
-3.4231849,
-0.13285634,
-3.364017,
-0.85367596,
-2.4271638,
]
]
)
prior_simulator_fn, prior_logdensity_fn = prior_model_fns()
fns = (prior_simulator_fn, prior_logdensity_fn), simulator_fn
smc = SMCABC(fns, summary_fn, distance_fn)
smc.fit(23, y_observed)
smc_samples, _ = smc.sample_posterior(5, 1000, 10, 0.9, 500)
def log_density_fn(theta, y):
prior_lp = prior_logdensity_fn(theta)
likelihood_lp = likelihood_fn(theta, y)
lp = jnp.sum(prior_lp) + jnp.sum(likelihood_lp)
return lp
log_density_partial = partial(log_density_fn, y=y_observed)
log_density = lambda x: vmap(log_density_partial)(x)
slice_samples = sample_with_slice(
hk.PRNGSequence(12), log_density, 4, 10000, 5000, prior_simulator_fn
)
slice_samples = slice_samples.reshape(-1, len_theta)
g = sns.PairGrid(pd.DataFrame(slice_samples))
g.map_upper(sns.scatterplot, color="black", marker=".", edgecolor=None, s=2)
g.map_diag(plt.hist, color="black")
for ax in g.axes.flatten():
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
g.fig.set_figheight(5)
g.fig.set_figwidth(5)
plt.show()
fig, axes = plt.subplots(len_theta, 2)
for i in range(len_theta):
sns.histplot(slice_samples[:, i], color="darkgrey", ax=axes[i, 0])
sns.histplot(smc_samples[:, i], color="darkblue", ax=axes[i, 1])
axes[i, 0].set_title(rf"Sampled posterior $\theta_{i}$")
axes[i, 1].set_title(rf"Approximated posterior $\theta_{i}$")
for j in range(2):
axes[i, j].set_xlim(-5, 5)
sns.despine()
plt.tight_layout()
plt.show()
if __name__ == "__main__":
run()
|
/sbijax-0.0.12.tar.gz/sbijax-0.0.12/examples/slcp_smcabc.py
| 0.692018 | 0.727189 |
slcp_smcabc.py
|
pypi
|
from typing import Callable, Dict, Type, TypeVar
from pydantic import ValidationError
from bin.commands.command import Command
from bin.commands.errors import CommandFailedError, CommandNotFound, CommandParseError
from bin.env.errors import EnvProvisionError
from bin.process.process import Process
ExitCode = int
T = TypeVar("T", bound=Exception)
KeyType = Type[T]
OnErrorCallback = Callable[[T, Command, Process], ExitCode]
class ErrorHandler:
def __init__(self) -> None:
self.__handlers: Dict[KeyType, OnErrorCallback] = {} # type: ignore
def register(self, error: KeyType, callback: OnErrorCallback) -> None: # type: ignore
self.__handlers[error] = callback
def handle(self, exc: T, cmd: Command, process: Process) -> ExitCode:
if type(exc) in self.__handlers:
return self.__handlers[type(exc)](exc, cmd, process)
if Exception in self.__handlers:
return self.__handlers[Exception](exc, cmd, process)
return 1
def get_error_handler() -> ErrorHandler:
handler = ErrorHandler()
handler.register(CommandParseError, __command_parse_error)
handler.register(ValidationError, __validation_error)
handler.register(EnvProvisionError, __env_provision_error)
handler.register(CommandNotFound, __command_not_found)
handler.register(CommandFailedError, __command_failed)
handler.register(Exception, __exception)
return handler
def __command_parse_error(exc: CommandParseError, _: Command, process: Process) -> int:
return __log_exception(process, str(exc), "invalid bin file")
def __validation_error(_: ValidationError, __: Command, process: Process) -> int:
process.log_error("error: invalid bin file (exit code 1)")
return 1
def __env_provision_error(exc: EnvProvisionError, __: Command, process: Process) -> int:
return __log_exception(process, str(exc), "env loading failed")
def __command_not_found(exc: CommandNotFound, cmd: Command, process: Process) -> int:
try:
cmd.run(process, ["--help"])
process.log_error(f"\nerror: {str(exc)} (exit code 1)")
except Exception:
process.log_error(f"error: {str(exc)}\n\nunexpected error happened when running --help (exit code 1)")
return 1
def __command_failed(exc: CommandFailedError, _: Command, process: Process) -> int:
return __log_exception(process, str(exc), "command failed")
def __exception(exc: Exception, _: Command, process: Process) -> int:
return __log_exception(process, str(exc), "unhandled error")
def __log_exception(process: Process, details: str, error_message: str, exit_code: int = 1) -> int:
process.log(details)
process.log_error(f"\nerror: {error_message} (exit code {exit_code})")
return exit_code
|
/sbin-1.0.0.tar.gz/sbin-1.0.0/bin/error_handler.py
| 0.700997 | 0.221098 |
error_handler.py
|
pypi
|
from typing import Any, Dict, List, Optional, Union
from pydantic import root_validator, validator
from bin.commands.command import Command
from bin.commands.errors import CommandParseError
from bin.commands.internal.factory import InternalCommandFactory
from bin.custom_commands.basic_command import BasicCommand
from bin.custom_commands.command_tree import CommandTree
from bin.custom_commands.dtos.base_custom_command_dto import BaseCustomCommandDto
from bin.custom_commands.dtos.str_custom_command_dto import StrCustomCommandDto
from bin.env.env import Env
from bin.models.bin_base_model import BinBaseModel
from bin.models.str_command import StrCommand
class CommandTreeDto(BinBaseModel, BaseCustomCommandDto):
env: Env = Env({})
run: Optional[StrCommand]
alias: Optional[str]
aliases: List[str] = []
subcommands: Dict[str, Union["CommandTreeDto", StrCustomCommandDto]] = {}
@root_validator(pre=True)
def must_contain_at_least_a_command(cls, values: Dict[str, Any]) -> Dict[str, Any]:
run = values.get("run")
subcommands = values.get("subcommands", {})
if run is None and len(subcommands) == 0:
raise ValueError("Command tree must have at least a runnable command")
return values
@validator("aliases")
def aliases_must_be_unique(cls, values: List[str]) -> List[str]:
if len(values) != len(set(values)):
raise ValueError("command aliases are clashing")
return values
@validator("aliases", each_item=True)
def aliases_item_must_not_clash_with_subcommands(cls, value: str, values: Dict[str, Any]) -> str:
subcommands = values.get("subcommands", {})
if value in subcommands:
raise ValueError("command aliases are clashing")
if value == values.get("alias"):
raise ValueError("command aliases are clashing")
return value
def to_command(self, self_name: str) -> Dict[str, Command]:
if self.run is not None and len(self.subcommands) == 0:
return self.__to_command_dict(self_name, BasicCommand.no_help(self.run))
self_cmd = None
if self.run is not None:
self_cmd = BasicCommand.no_help(self.run)
cmd = CommandTree(self_cmd=self_cmd, subcommands_tree=self.__to_subcommands_dict(self_name))
return self.__to_command_dict(self_name, cmd)
def __to_command_dict(self, self_name: str, cmd: Command) -> Dict[str, Command]:
aliases = self.__aliases()
if self_name in aliases:
raise CommandParseError(f"conflicting subcommand names or aliases in '{self_name}'")
wrapped = InternalCommandFactory.wrap_with_env(self.env, cmd)
result = {self_name: wrapped}
for alias in aliases:
result[alias] = wrapped
return result
def __to_subcommands_dict(self, self_name: str) -> Dict[str, Command]:
result: Dict[str, Command] = {}
for name, cmd_dto in self.subcommands.items():
cmd_dict = cmd_dto.to_command(name)
result_candidate = {**result, **cmd_dict}
if len(result_candidate) != len(result) + len(cmd_dict):
raise CommandParseError(f"conflicting subcommand names or aliases in '{self_name}'")
result = result_candidate
return result
def __aliases(self) -> List[str]:
if self.alias is None:
return self.aliases
return self.aliases + [self.alias]
|
/sbin-1.0.0.tar.gz/sbin-1.0.0/bin/custom_commands/dtos/command_tree_dto.py
| 0.794823 | 0.291086 |
command_tree_dto.py
|
pypi
|
from typing import List
from bin.models.bin_base_model import BinBaseModel
from bin.process.helpable import Helpable
_PRE_CMD_SPACING = " " * 2
_DEFAULT_SPACING = " " * 4
_NAIVE_SPACING = " " * 2
class CommandSummary(BinBaseModel):
command: str
help: str
def cmd_part_length(self) -> int:
return len(f"{_PRE_CMD_SPACING}{self.command}{_DEFAULT_SPACING}")
def longest_single_word_line_length(self) -> int:
longest_word_length = max(len(word) for word in self.help.split(" "))
longest_word_dummy = "x" * longest_word_length
return len(f"{_PRE_CMD_SPACING}{self.command}{_DEFAULT_SPACING}{longest_word_dummy}\n")
def naive_format(self) -> str:
return f"{_PRE_CMD_SPACING}{self.command}{_NAIVE_SPACING}{self.help}\n"
def pretty_format(self, cmd_nb_cols: int, help_nb_cols: int) -> str:
current_line = ""
lines = []
for word in self.help.split(" "):
if len(current_line) == 0:
current_line = word
elif len(f"{current_line} {word}") > help_nb_cols:
lines.append(current_line)
current_line = word
else:
current_line += f" {word}"
lines.append(current_line)
line_format = f"{{:{cmd_nb_cols}}}{{}}\n"
text = line_format.format(f"{_PRE_CMD_SPACING}{self.command}", lines[0])
for line in lines[1:]:
text += line_format.format("", line)
return text
class CommandHelp(BinBaseModel, Helpable):
help: str
summaries: List[CommandSummary] = []
@staticmethod
def only_self(self_help: str) -> "CommandHelp":
return CommandHelp(help=self_help, summaries=[])
def format_help(self, max_cols: int) -> str:
message = f"usage:\n {self.help}"
if len(self.summaries) == 0:
return message
message += "\n\ncommands:\n"
longest_cmd_length = 0
longest_single_word_line_length = 0
for cmd_summary in self.summaries:
longest_cmd_length = max((cmd_summary.cmd_part_length()), longest_cmd_length)
longest_single_word_line_length = max(
longest_single_word_line_length, cmd_summary.longest_single_word_line_length()
)
if longest_single_word_line_length > max_cols:
message += self._naive_summaries()
else:
message += self._pretty_summaries(longest_cmd_length, max_cols - longest_cmd_length)
return message[:-1]
def _naive_summaries(self) -> str:
text = ""
for summary in self._sorted_summaries():
text += summary.naive_format()
return text
def _pretty_summaries(self, cmd_nb_cols: int, help_nb_cols: int) -> str:
text = ""
for summary in self._sorted_summaries():
text += summary.pretty_format(cmd_nb_cols, help_nb_cols)
return text
def _sorted_summaries(self) -> List[CommandSummary]:
return sorted(self.summaries, key=lambda el: el.command)
|
/sbin-1.0.0.tar.gz/sbin-1.0.0/bin/commands/command_help.py
| 0.621656 | 0.179728 |
command_help.py
|
pypi
|
import copy
import dataclasses
import hashlib
import inspect
import json
import re
import typing as tp
from collections.abc import Mapping, Sequence, Set
from datetime import date, datetime
import numpy as np
from playground.core.expcache.types_ import LazyObject, is_primitive_type
def function_source_hash(f: tp.Callable) -> str:
stripped_source_code = re.sub(r"(\s+|\n+)", "", inspect.getsource(f))
return hashlib.sha256(stripped_source_code.encode()).hexdigest()
def hash_arguments(arguments: tp.Dict[str, tp.Any]) -> str:
hashable_arguments = make_hashable(o=arguments)
string_arguments = json.dumps(hashable_arguments)
return hashlib.sha256(string_arguments.encode()).hexdigest()
def make_hashable(o: tp.Any, do_copy: bool = True) -> tp.Dict[tp.Any, tp.Any]:
if is_primitive_type(o):
return o # type: ignore
elif isinstance(o, Set):
return tuple([make_hashable(e) for e in sorted(o)]) # type: ignore
elif isinstance(o, Sequence):
return tuple([make_hashable(e) for e in o]) # type: ignore
elif isinstance(o, np.ndarray):
if o.ndim > 1:
return tuple([make_hashable(e) for e in o]) # type: ignore
return tuple(o.tolist()) # type: ignore
elif dataclasses.is_dataclass(o):
return make_hashable(dataclasses.asdict(o), do_copy=False) # type: ignore
elif o.__class__ in JSON_TYPE_CASTER:
return make_hashable(JSON_TYPE_CASTER[o.__class__](o)) # type: ignore
elif not isinstance(o, Mapping):
raise ValueError(
f"String converter is not registered for an argument of class {o.__class__}."
" Hash function will be unstable without it because of the default python behavior"
)
new_o = copy.deepcopy(o) if do_copy else o
for k, v in new_o.items():
new_o[k] = make_hashable(v) # type: ignore
return new_o # type: ignore
JSON_TYPE_CASTER = {
LazyObject: lambda lazy_object: repr(lazy_object),
datetime: lambda x: str(x),
date: lambda x: str(x),
}
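# --- Hedged usage sketch (editor's addition, not part of the original module) ---
# hash_arguments converts a (possibly nested) argument dict into a hashable,
# JSON-serializable structure via make_hashable and returns its sha256 digest.
#
#   args = {"threshold": 0.5, "tags": ["a", "b"], "ids": {3, 1, 2}}
#   key = hash_arguments(args)           # deterministic hex digest
#   key == hash_arguments(dict(args))    # True for equal arguments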
|
/sbm_exp_sdk-0.3.0-py3-none-any.whl/playground/core/expcache/hashing.py
| 0.587707 | 0.168857 |
hashing.py
|
pypi
|
import typing as tp
from collections.abc import Mapping, Sequence, Set
from dataclasses import is_dataclass
import numpy as np
import pandas as pd
import polars as pl
from scipy.sparse import coo_matrix, csc_matrix, csr_matrix
T = tp.TypeVar("T")
class LazyObject(tp.Generic[T]):
data: tp.Optional[T]
def __init__(self, path: str, _type: tp.Type) -> None:
self.path = path
self._type = _type
self.data = None
self.expcache_ref_cnt = 0
def __str__(self) -> str:
return self.__repr__()
def __repr__(self) -> str:
return f"LazyObject(path='{self.path}', type={self._type})"
class Artifact:
def __init__(self, obj: tp.Any) -> None:
self.obj = obj
if not is_registered_custom_type(obj.__class__):
raise ValueError(f"{obj.__class__} is not supported as an artifact")
def __str__(self) -> str:
return self.__repr__()
def __repr__(self) -> str:
return f"Artifact(obj={repr(self.obj)})"
supported_primitive_types = (int, str, bool, float, bytes)
supported_numpy_types = (np.int8, np.int16, np.int32, np.int64, np.float16, np.float32, np.float64)
supported_custom_types = [pd.DataFrame, pl.DataFrame, np.ndarray, csr_matrix, coo_matrix, csc_matrix]
def is_supported_obj_type(obj: tp.Any, include_custom: bool = True) -> bool:
condition = is_primitive_type(obj) or is_collection(obj) or is_dataclass(obj)
return (condition and is_registered_custom_type(obj.__class__)) if include_custom else condition
def is_collection(obj: tp.Type) -> bool:
return isinstance(obj, (Set, Sequence, Mapping))
def is_primitive_type(obj: tp.Type) -> bool:
return isinstance(obj, supported_primitive_types)
def is_numpy_primitive_type(_type: tp.Type) -> bool:
return any(_type == primitive_type for primitive_type in supported_numpy_types)
def is_registered_custom_type(_type: tp.Type) -> bool:
return any(_type == custom_type for custom_type in supported_custom_types)
|
/sbm_exp_sdk-0.3.0-py3-none-any.whl/playground/core/expcache/types_.py
| 0.787278 | 0.23206 |
types_.py
|
pypi
|
import inspect
import os
from typing import Any, Callable, Dict, List, Optional, Set
from diskcache import Cache
from playground.core.expcache.hashing import function_source_hash, hash_arguments
if os.getenv("CACHE_PATH") is None:
raise Exception("CACHE_PATH environment variable is not set")
cachedb = Cache(os.getenv("CACHE_PATH"))
def remove_function_from_cache(f: Callable) -> None:
source_codes: Set[str] = cachedb.get(f.__name__) or [] # type: ignore
if inspect.getsource(f) in source_codes:
source_codes.remove(inspect.getsource(f))
cachedb.set(f.__name__, list(source_codes))
if not source_codes:
fnames = get_cached_function_names()
fnames.remove(f.__name__)
cachedb.set("function_names", fnames)
f_hash = function_source_hash(f=f)
cached_arguments: List[Dict[str, Any]] = cachedb.get(f_hash) if f_hash in cachedb else []
for arguments in cached_arguments:
a_hash = hash_arguments(arguments=arguments)
full_hash = f_hash + a_hash
cachedb.delete(full_hash)
def get_cached_function_source_code_by_name(fname: str) -> Optional[List[str]]:
return cachedb.get(fname) # type: ignore
def clear_cache() -> None:
cachedb.clear()
def get_available_caches(f: Callable) -> List[Dict[str, Any]]:
f_hash = function_source_hash(f=f)
return cachedb.get(f_hash) if f_hash in cachedb else [] # type: ignore
def get_cached_function_names() -> List[str]:
return cachedb.get("function_names") or []
def is_result_available(f: Callable, arguments: Dict[str, Any]) -> bool:
f_hash = function_source_hash(f=f)
a_hash = hash_arguments(arguments=arguments)
full_hash = f_hash + a_hash
return full_hash in cachedb
def maybe_register_new_function(f: Callable) -> None:
functions_keys: Set[str] = set(cachedb.get("function_names") or [])
if f.__name__ not in functions_keys:
functions_keys.add(f.__name__)
cachedb.set("function_names", list(functions_keys))
def maybe_register_new_source_code(f: Callable) -> None:
available_source_codes: Set[str] = set(cachedb.get(f.__name__) or [])
if inspect.getsource(f) not in available_source_codes:
available_source_codes.add(inspect.getsource(f))
cachedb.set(f.__name__, list(available_source_codes))
def save_function_call_result(f: Callable, arguments: Dict[str, Any], result: Any) -> None:
maybe_register_new_function(f=f)
maybe_register_new_source_code(f=f)
f_hash = function_source_hash(f=f)
a_hash = hash_arguments(arguments=arguments)
full_hash = f_hash + a_hash
cached_arguments: List[Dict[str, Any]] = cachedb.get(f_hash) if f_hash in cachedb else []
if full_hash not in cachedb:
cached_arguments.append(arguments)
cachedb.set(f_hash, cached_arguments)
cachedb.set(full_hash, result)
def get_function_call_result(f: Callable, arguments: Dict[str, Any]) -> Optional[Any]:
f_hash = function_source_hash(f=f)
a_hash = hash_arguments(arguments=arguments)
full_hash = f_hash + a_hash
return cachedb.get(full_hash)
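# --- Hedged usage sketch (editor's addition, not part of the original module) ---
# Cache keys combine the sha256 of the function's stripped source code with the
# sha256 of its arguments; CACHE_PATH must point at a diskcache directory.
#
#   def add(a, b):
#       return a + b
#
#   args = {"a": 1, "b": 2}
#   if not is_result_available(add, args):
#       save_function_call_result(add, args, add(**args))
#   get_function_call_result(add, args)   # -> 3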
|
/sbm_exp_sdk-0.3.0-py3-none-any.whl/playground/core/expcache/db.py
| 0.75101 | 0.169956 |
db.py
|
pypi
|
import typing as tp
from collections.abc import Mapping, Sequence, Set
from dataclasses import is_dataclass
from playground.core.expcache.hashing import JSON_TYPE_CASTER
from playground.core.expcache.types_ import (
Artifact,
LazyObject,
is_primitive_type,
is_registered_custom_type,
is_supported_obj_type,
)
def analyze_inputs(value: tp.Any) -> tp.List[tp.Tuple[tp.Union[str, int, tp.Type]]]:
def _valid_obj(obj: tp.Any) -> bool:
return (
is_primitive_type(obj)
or obj.__class__ in JSON_TYPE_CASTER
or isinstance(obj, Set)
or isinstance(obj, LazyObject)
or isinstance(obj, Artifact)
or is_dataclass(obj)
)
if _valid_obj(obj=value):
return []
invalid_object_type_paths = []
if isinstance(value, Mapping):
for key in value:
if isinstance(value[key], (Mapping, Sequence)):
paths = analyze_inputs(value=value[key])
invalid_object_type_paths.extend([(key, *path) for path in paths])
elif not _valid_obj(obj=value[key]):
invalid_object_type_paths.append((key, value[key].__class__))
elif isinstance(value, Sequence):
for idx in range(len(value)):
if isinstance(value[idx], (Mapping, Sequence)):
paths = analyze_inputs(value=value[idx])
invalid_object_type_paths.extend([(idx, *path) for path in paths])
elif not _valid_obj(obj=value[idx]):
invalid_object_type_paths.append((idx, value[idx].__class__))
return invalid_object_type_paths # type: ignore
def analyze_complex_types_in_object(
value: tp.Any, only_artifacts: bool = False
) -> tp.List[tp.Tuple[tp.Union[str, int]]]:
if (
is_dataclass(value)
or is_primitive_type(value)
or isinstance(value, (Set, LazyObject))
or is_registered_custom_type(value.__class__)
):
return []
complex_objects_path = []
if isinstance(value, Mapping):
for key in value:
val = value[key]
if (only_artifacts and isinstance(val, Artifact)) or (
not only_artifacts
and not (is_supported_obj_type(obj=val, include_custom=False) or isinstance(val, LazyObject))
):
complex_objects_path.append((key,))
else:
paths = analyze_complex_types_in_object(value=val, only_artifacts=only_artifacts)
complex_objects_path.extend([(key, *path) for path in paths]) # type: ignore
elif isinstance(value, Sequence):
for idx in range(len(value)):
val = value[idx]
if (only_artifacts and isinstance(val, Artifact)) or (
not only_artifacts
and not (is_supported_obj_type(obj=val, include_custom=False) or isinstance(val, LazyObject))
):
complex_objects_path.append((idx,))
else:
paths = analyze_complex_types_in_object(value=value[idx], only_artifacts=only_artifacts)
complex_objects_path.extend([(idx, *path) for path in paths]) # type: ignore
return complex_objects_path
def analyze_lazy_objects(value: tp.Any) -> tp.List[str]:
if is_primitive_type(value) or isinstance(value, Set) or is_dataclass(value):
return []
metadata_paths = []
if isinstance(value, Mapping):
for key in value:
val = value[key]
if isinstance(val, LazyObject):
metadata_paths.append((key,))
else:
paths = analyze_lazy_objects(value=val)
metadata_paths.extend([(key, *path) for path in paths]) # type: ignore
elif isinstance(value, Sequence):
for idx in range(len(value)):
val = value[idx]
if isinstance(val, LazyObject):
metadata_paths.append((idx,))
else:
paths = analyze_lazy_objects(value=val)
metadata_paths.extend([(idx, *path) for path in paths]) # type: ignore
return metadata_paths # type: ignore
|
/sbm_exp_sdk-0.3.0-py3-none-any.whl/playground/core/expcache/analyzer.py
| 0.552298 | 0.165323 |
analyzer.py
|
pypi
|
from collections import defaultdict
from datetime import datetime
from typing import Any, Dict, Optional, Union
import pandas as pd
import polars as pl
from loguru import logger
from playground.core.data.dataset_base import Dataset
from playground.core.data.parquet_dataset import DatePartitionedParquetDataset, ParquetDataset
from playground.core.s3_source.ml_lake import s3source_mllake
from playground.core.s3_source.recsys import s3source_recsys
from pyarrow import parquet
class DataCatalog:
def __init__(
self,
datasets: Dict[str, Dict[str, Dataset]],
):
self.datasets = datasets
def add_dataset(self, layer: str, alias: str, dataset: Dataset) -> None:
if layer in self.datasets:
if alias in self.datasets[layer]:
logger.info(
f"dataset {alias} is already present in the catalog layers {layer},"
" replacing it with the new dataset"
)
self.datasets[layer][alias] = dataset
else:
self.datasets[layer] = {alias: dataset}
def load(
self,
layer: str,
alias: str,
polars: bool,
load_args: Dict[str, Any],
keys: Optional[Union[pl.DataFrame, pd.DataFrame]] = None,
) -> Union[pl.DataFrame, pd.DataFrame]:
if alias not in self.datasets[layer]:
raise ValueError(f"dataset with alias {alias} is not present in the catalog layer {layer}")
return self.datasets[layer][alias].load(polars=polars, load_args=load_args, keys=keys)
def get(self, layer: str, alias: str) -> Dataset:
if layer not in self.datasets:
raise ValueError(f"unknown layer={layer}")
if alias not in self.datasets[layer]:
raise ValueError(f"unknown alias={alias}")
return self.datasets[layer][alias]
def describe_ds(self, layer: str, alias: str, date: datetime) -> Dict[str, Any]:
dataset = self.get(layer=layer, alias=alias)
path = dataset.get_path(date=date)
parq = parquet.ParquetDataset(path, filesystem=dataset.fs)
nrows = 0
for piece in parq.pieces:
nrows += piece.get_metadata().num_rows
schema = {field.name: field.physical_type for field in parq.schema}
schema["n_rows"] = nrows
return schema
def load_datasets(
self,
polars: bool,
dataset_alias_to_load_config: Dict[str, Dict[str, Any]],
keys_dict: Optional[Dict[str, Dict[str, Union[pl.DataFrame, pd.DataFrame]]]] = None,
) -> Dict[str, Dict[str, Union[pl.DataFrame, pd.DataFrame]]]:
missing_aliases = defaultdict(list)
for layer in dataset_alias_to_load_config:
if layer not in self.datasets:
for alias in dataset_alias_to_load_config[layer]:
missing_aliases[layer].append(alias)
else:
for alias in dataset_alias_to_load_config[layer]:
if alias not in self.datasets[layer]:
missing_aliases[layer].append(alias)
if missing_aliases:
raise ValueError(f"datasets {missing_aliases} are not present in the catalog")
result: Dict[str, Dict[str, Union[pl.DataFrame, pd.DataFrame]]] = defaultdict(dict)
for layer in dataset_alias_to_load_config:
for alias, load_args in dataset_alias_to_load_config[layer].items():
logger.info(f"Loading {alias} dataset from the layer {layer}")
if keys_dict is None:
result[layer][alias] = self.load(layer=layer, alias=alias, polars=polars, load_args=load_args)
else:
result[layer][alias] = self.load(
layer=layer,
alias=alias,
polars=polars,
load_args=load_args,
keys=keys_dict.get(layer, {}).get(alias),
)
return result
CATALOG = DataCatalog(
datasets={
"raw": {
"line_items": DatePartitionedParquetDataset(
path="data/raw/instamart_db/line_items_completed", source=s3source_mllake
),
"registered_users": ParquetDataset(path="data/raw/instamart_db/registered_users", source=s3source_mllake),
"products": ParquetDataset(path="data/raw/instamart_db/products", source=s3source_mllake),
"prices": ParquetDataset(path="data/raw/instamart_db/prices", source=s3source_mllake),
"favorite_products": ParquetDataset(path="data/raw/instamart_db/favorite_products", source=s3source_mllake),
"shipments": DatePartitionedParquetDataset(path="data/raw/analytics_db/shipments", source=s3source_mllake),
"alcohol": ParquetDataset(path="data/raw/instamart_db/alcohol", source=s3source_mllake),
"master_categories": ParquetDataset(path="data/raw/instamart_db/master_categories", source=s3source_mllake),
"retailers": ParquetDataset(path="data/raw/instamart_db/retailers", source=s3source_mllake),
"stores": ParquetDataset(path="data/raw/instamart_db/stores", source=s3source_mllake),
},
"agg": {
"anonymous_user_unique_mapping": ParquetDataset(
path="data/agg/user_mappings/anonymous_user_unique_mapping", source=s3source_mllake
),
"user_product_interaction": DatePartitionedParquetDataset(
path="data/agg/user_product/user_product_interaction", source=s3source_mllake
),
},
"feat": {
"fasttext_products": ParquetDataset(path="data/feat/product/embedding_fasttext", source=s3source_mllake),
"sbert_products": ParquetDataset(path="data/feat/product/embedding_bert", source=s3source_mllake),
"product_price": DatePartitionedParquetDataset(
path="data/feat/product/price",
source=s3source_mllake,
key=["date", "store_id", "product_sku"],
),
"product_order_stats": DatePartitionedParquetDataset(
path="data/feat/product/order_stats",
source=s3source_mllake,
key=["date", "product_sku"],
),
"product_retailer_order": DatePartitionedParquetDataset(
path="data/feat/product/retailer_order",
source=s3source_mllake,
key=["date", "retailer_id", "product_sku"],
),
"product_brands": DatePartitionedParquetDataset(
path="data/feat/product/brands",
source=s3source_mllake,
key=["date", "product_sku"],
),
"retailer_order_stats": DatePartitionedParquetDataset(
path="data/feat/retailer/order_stats",
source=s3source_mllake,
key=["date", "retailer_id"],
),
"user_discount": DatePartitionedParquetDataset(
path="data/feat/user/discount",
source=s3source_mllake,
key=["date", "user_id"],
),
"user_discount_sku": DatePartitionedParquetDataset(
path="data/feat/user/discount_sku",
source=s3source_mllake,
key=["date", "user_id", "product_sku"],
),
"user_order_sku_stats": DatePartitionedParquetDataset(
path="data/feat/user/order_sku_stats",
source=s3source_mllake,
key=["date", "user_id", "product_sku"],
),
"user_product_counts": DatePartitionedParquetDataset(
path="data/feat/user_product/counts",
source=s3source_mllake,
key=["date", "user_id", "product_sku"],
),
"user_product_recency": DatePartitionedParquetDataset(
path="data/feat/user_product/recency",
source=s3source_mllake,
key=["date", "user_id", "product_sku"],
),
"user_relevance": DatePartitionedParquetDataset(
path="data/feat/user/user_relevance",
source=s3source_mllake,
key=["date", "user_id", "brand_id"],
),
"user_category_relevance": DatePartitionedParquetDataset(
path="data/feat/user/user_category_relevance",
source=s3source_mllake,
key=["date", "user_id", "master_category_id", "brand_id"],
),
"user_order_stats": DatePartitionedParquetDataset(
path="data/feat/user/order_stats",
source=s3source_mllake,
key=["date", "user_id"],
),
"user_store_stats_ctr": DatePartitionedParquetDataset(
path="data/feat/user_store/user_store_stats_ctr",
source=s3source_mllake,
key=["date", "user_id", "store_id"],
),
"store_stats_ctr": DatePartitionedParquetDataset(
path="data/feat/store/store_stats_ctr",
source=s3source_mllake,
key=["date", "store_id"],
),
"store_stats_aov": DatePartitionedParquetDataset(
path="data/feat/store/store_stats_aov",
source=s3source_mllake,
key=["date", "store_id"],
),
},
"recsys": {
"parrots": ParquetDataset(
path="recsys_2.0/prod/experiment/parrots/recsys_validation_dataset", source=s3source_recsys
),
},
}
)
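# --- Hedged usage sketch (editor's addition, not part of the original module) ---
# Datasets are addressed by (layer, alias); `load_args` must contain the keys
# declared in the dataset's __load_args__ ("date" for ParquetDataset,
# "from_date"/"to_date" for DatePartitionedParquetDataset). Loading hits the
# configured S3 sources, so this is illustration only.
#
#   from datetime import datetime
#
#   users = CATALOG.load(
#       layer="raw",
#       alias="registered_users",
#       polars=True,
#       load_args={"date": datetime(2023, 1, 1)},
#   )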
|
/sbm_exp_sdk-0.3.0-py3-none-any.whl/playground/core/data/catalog.py
| 0.801354 | 0.400691 |
catalog.py
|
pypi
|
from collections.abc import Mapping
from copy import deepcopy
from datetime import datetime
from typing import Any, Dict, Optional, Union, cast
import pandas as pd
import polars as pl
from playground.core.data.dataset_base import Dataset
from playground.core.data.io import load_parquet
class ParquetDataset(Dataset):
__load_args__ = ["date"]
def load(
self,
polars: bool,
load_args: Dict[str, Any],
keys: Optional[Union[pl.DataFrame, pd.DataFrame]] = None,
) -> Union[pl.DataFrame, pd.DataFrame]:
self._check_load_args(load_args=load_args)
load_date: datetime = load_args["date"]
load_path = self.get_path(date=load_date)
columns_arg = None
if "columns" in load_args and isinstance(load_args["columns"], Mapping):
load_args = deepcopy(load_args)
columns_arg = load_args["columns"]
load_args["columns"] = list(columns_arg.keys())
load_args = {key: value for key, value in load_args.items() if key not in self.__load_args__}
data = load_parquet(fs=self.fs, path=load_path, polars=polars, **load_args)
if "frac" in load_args:
data = data.sample(frac=load_args["frac"])
if columns_arg is not None:
if polars:
data = cast(pl.DataFrame, data)
data = data.select([pl.col(col).cast(dtype) for col, dtype in columns_arg.items()])
else:
data = cast(pd.DataFrame, data)
data = data.astype(columns_arg)
if keys is not None:
if polars:
data = cast(pl.DataFrame, data)
data = data.join(keys, on=keys.columns, how="inner")
else:
data = cast(pd.DataFrame, data)
data = data.merge(keys, on=keys.columns, how="inner")
return data
class DatePartitionedParquetDataset(Dataset):
__load_args__ = ["from_date", "to_date"]
__extra_args__ = ["n_partitions"]
def get_last_n_partition_dates(self, n_partitions, to_date):
root_path = self.path.replace("/date={date}", "")
part_list = list(
filter(
lambda x: x.split("/")[-1] != "_SUCCESS" and datetime.strptime(x.split("=")[-1], "%Y-%m-%d") <= to_date,
self.fs.listdir(root_path, detail=False),
)
)[-n_partitions:]
date_list = [datetime.strptime(x.split("=")[-1], self.date_format) for x in part_list]
return date_list
def load(
self,
polars: bool,
load_args: Dict[str, Any],
keys: Optional[Union[pl.DataFrame, pd.DataFrame]] = None,
) -> Union[pl.DataFrame, pd.DataFrame]:
self._check_load_args(load_args=load_args)
from_date: datetime = load_args["from_date"]
to_date: datetime = load_args["to_date"]
columns_arg = None
if "columns" in load_args and isinstance(load_args["columns"], Mapping):
load_args = deepcopy(load_args)
columns_arg = load_args["columns"]
load_args["columns"] = list(columns_arg.keys())
n_partitions: Optional[int] = load_args.get("n_partitions")
date_list = (
self.get_last_n_partition_dates(n_partitions=n_partitions, to_date=to_date)
if n_partitions
else pd.date_range(start=from_date, end=to_date)
)
filter_args = self.__load_args__ + self.__extra_args__
load_args = {key: value for key, value in load_args.items() if key not in filter_args}
dfs = []
for date in date_list:
load_path = self.get_path(date=date)
data = load_parquet(fs=self.fs, path=load_path, polars=polars, **load_args)
if polars:
data = cast(pl.DataFrame, data)
data = data.with_columns([pl.lit(date.date()).alias("date")])
if columns_arg is not None:
data = data.select([pl.col(col).cast(dtype) for col, dtype in columns_arg.items()])
if keys is not None:
keys = cast(pl.DataFrame, keys)
keys_subset = keys.filter(pl.col("date").cast(pl.Date) == date.date()).drop("date")
data = data.join(keys_subset, on=keys_subset.columns, how="inner") # type: ignore
else:
data = cast(pd.DataFrame, data)
data["date"] = date.date()
if columns_arg is not None:
data = data.astype(columns_arg)
if keys is not None:
keys = cast(pd.DataFrame, keys)
keys_subset = keys[keys["date"].dt.date == date.date()].drop("date", axis=1)
data = data.merge(keys_subset, on=keys_subset.columns.tolist(), how="inner") # type: ignore
if "frac" in load_args:
data = data.sample(frac=load_args["frac"])
dfs.append(data)
final_df = pl.concat(dfs) if polars else pd.concat(dfs)
return final_df
|
/sbm_exp_sdk-0.3.0-py3-none-any.whl/playground/core/data/parquet_dataset.py
| 0.744935 | 0.322513 |
parquet_dataset.py
|
pypi
|
from pickle import dump as pdump
from pickle import load as pload
from tempfile import TemporaryDirectory
from typing import Any, Dict, Optional, Sequence, Union
import pandas as pd
import polars as pl
from fsspec import AbstractFileSystem
from joblib import dump as jdump
from joblib import load as jload
from loguru import logger
from pyarrow import Table, parquet
from s3fs import S3FileSystem
def write_pickle(fs: AbstractFileSystem, obj: Any, path: str) -> None:
logger.info(f"Saving {obj.__class__} to {path}")
with fs.open(path, "wb") as f:
pdump(obj, f)
def load_pickle(fs: AbstractFileSystem, path: str) -> Any:
logger.info(f"Loading pickle object from {path}")
with fs.open(path, "rb") as s3file:
return pload(s3file)
def write_joblib(fs: AbstractFileSystem, obj: Any, path: str) -> None:
logger.info(f"Saving {obj.__class__} to {path}")
with fs.open(path, "wb") as f:
jdump(obj, f)
def load_joblib(fs: AbstractFileSystem, path: str) -> Any:
logger.info(f"Loading joblib object from {path}")
with fs.open(path, "rb") as s3file:
return jload(s3file)
def write_parquet(
fs: AbstractFileSystem,
df: Union[pd.DataFrame, pl.DataFrame],
path: str,
partition_cols: Optional[Sequence[str]] = None,
overwrite: bool = True,
**kwargs,
):
"""
Write a dataset in `Parquet` format using `pyarrow`.
Can be used to write a partitioned dataset:
either pass the `partition_cols` argument or provide the same key via `**kwargs`.
:param df: a table of type `pandas.DataFrame` or `polars.DataFrame`
:param path: an object path such as /path/to/file or s3://bucket/path/to/file
:param partition_cols: list of columns to partition by
:param overwrite: whether to delete existing files before writing.
Pass `False` when writing partitioned datasets to the same path multiple times.
:param kwargs: additional arguments for `pyarrow.parquet.write_to_dataset`
:return:
"""
table = df.to_arrow() if isinstance(df, pl.DataFrame) else Table.from_pandas(df)
kwargs["partition_cols"] = kwargs.get("partition_cols") or partition_cols
# this helps in the case of multiple saves to the same directory
# otherwise parquet files from different attempts can mix
if overwrite and fs.exists(path):
logger.warning(f"Deleting existing {path} because the overwrite argument is set to True")
fs.rm(path, recursive=True)
parquet.write_to_dataset(table, root_path=path, filesystem=fs, **kwargs)
logger.info(f"Written DataFrame {df.shape} to {path}")
def load_parquet(
fs: AbstractFileSystem,
path: str,
columns: Optional[Union[Dict[str, Any], Sequence[str]]] = None,
polars: bool = False,
) -> Union[pd.DataFrame, pl.DataFrame]:
"""
Read a dataset in `Parquet` format using `pyarrow`.
Returns a `polars.DataFrame` when `polars=True`, otherwise a `pandas.DataFrame` (a polars frame can be converted with `df.to_pandas()`).
:param path: an object path such as /path/to/file or s3://bucket/path/to/file
:param columns: columns to read, either a sequence of names or a mapping from column name to dtype (optional)
:param polars: if True, return a `polars.DataFrame` instead of a `pandas.DataFrame`
For reading only selected partitions or columns, see the `pyarrow` documentation: https://arrow.apache.org/docs/python/dataset.html#reading-partitioned-data
:return:
"""
if columns is not None:
column_names = columns.keys() if isinstance(columns, dict) else columns
if isinstance(fs, S3FileSystem):
with TemporaryDirectory() as temp_dir:
fs.get(path, temp_dir, recursive=True)
dataset = parquet.ParquetDataset(temp_dir)
table = dataset.read(columns=column_names) if columns is not None else dataset.read()
else:
dataset = parquet.ParquetDataset(path)
table = dataset.read(columns=column_names) if columns is not None else dataset.read()
df = pl.from_arrow(table) if polars else table.to_pandas()
if columns is not None and isinstance(columns, dict):
if polars:
df = df.select([pl.col(col).cast(col_type) for col, col_type in columns.items()])
else:
df = df.astype(columns)
logger.info(f"Loaded DataFrame {df.shape} from {path}")
return df
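# --- Hedged usage sketch (editor's addition, not part of the original module) ---
# Assumes a local fsspec filesystem and an existing DataFrame `df`; any
# AbstractFileSystem (e.g. S3FileSystem) works the same way.
#
#   from fsspec.implementations.local import LocalFileSystem
#
#   fs = LocalFileSystem()
#   write_parquet(fs, df, "/tmp/example_dataset", partition_cols=["date"])
#   df_back = load_parquet(fs, "/tmp/example_dataset", columns=["value"], polars=False)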
|
/sbm_exp_sdk-0.3.0-py3-none-any.whl/playground/core/data/io.py
| 0.726814 | 0.353847 |
io.py
|
pypi
|
import numpy as np
import random
import networkx as nx
def sbm_graph(n, k, a, b):
if n % k != 0:
raise ValueError('n % k != 0')
elif a <= b:
raise ValueError('a <= b')
sizes = [int(n/k) for _ in range(k)]
_p = np.log(n) * a / n
_q = np.log(n) * b / n
if _p > 1 or _q > 1:
raise ValueError('%f (probability) larger than 1' % _p)
p = np.diag(np.ones(k) * (_p - _q)) + _q * np.ones([k, k])
return nx.generators.community.stochastic_block_model(sizes, p)
class SIBM2:
'''SIBM with two community'''
def __init__(self, graph, _beta, _gamma):
self.G = graph
self._beta = _beta
self._gamma = _gamma
self.n = len(self.G.nodes)
# randomly initiate a configuration
self.sigma = [1 for i in range(self.n)]
nodes = list(self.G)
random.Random().shuffle(nodes)
k = 2
self.k = k
for node_state in range(k):
for i in range(self.n // k):
self.sigma[nodes[i * k + node_state]] = 2 * node_state - 1
# node state is +1 or -1
self.m = self.n / 2 # number of +1
self.mixed_param = self._gamma * np.log(self.n)
self.mixed_param /= self.n
def get_dH(self, trial_location):
_sum = 0
w_s = self.sigma[trial_location]
for i in self.G[trial_location]:
_sum += self.sigma[i]
_sum *= w_s * (1 + self.mixed_param)
_sum -= self.mixed_param * (w_s * (2 * self.m - self.n) - 1)
return _sum
def _metropolis_single(self):
# randomly select one position to inspect
r = random.randint(0, self.n - 1)
delta_H = self.get_dH(r)
if delta_H < 0: # lower energy: flip for sure
self.sigma[r] *= -1
self.m += self.sigma[r]
else: # Higher energy: flip sometimes
probability = np.exp(-1.0 * self._beta * delta_H)
if np.random.rand() < probability:
self.sigma[r] *= -1
self.m += self.sigma[r]
def metropolis(self, N=40):
# iterate given rounds
for _ in range(N):
for _ in range(self.n):
self._metropolis_single()
return self.sigma
class SIBMk:
'''SIBM with multiple community'''
def __init__(self, graph, _beta, _gamma, k=2):
self.G = graph
self._beta = _beta
self._gamma = _gamma
self.n = len(self.G.nodes)
# randomly initiate a configuration
self.sigma = [1 for i in range(self.n)]
nodes = list(self.G)
random.Random().shuffle(nodes)
self.k = k
for node_state in range(k):
for i in range(self.n // k):
self.sigma[nodes[i * k + node_state]] = node_state
# node state is 0, 1, \dots, k-1
self.m = [self.n / k for i in range(k)] # each number of w^s
self.mixed_param = _gamma * np.log(self.n)
self.mixed_param /= self.n
def get_dH(self, trial_location, w_s):
_sum = 0
sigma_r = self.sigma[trial_location]
w_s_sigma_r = (w_s + sigma_r) % self.k
for i in self.G[trial_location]:
if sigma_r == self.sigma[i]:
_sum += 1
elif w_s_sigma_r == self.sigma[i]:
_sum -= 1
_sum *= (1 + self.mixed_param)
_sum += self.mixed_param * (self.m[w_s_sigma_r] - self.m[sigma_r] + 1)
return _sum
def _metropolis_single(self):
# randomly select one position to inspect
r = random.randint(0, self.n - 1)
# randomly select one new flipping state to inspect
w_s = random.randint(1, self.k - 1)
delta_H = self.get_dH(r, w_s)
if delta_H < 0: # lower energy: flip for sure
self.m[self.sigma[r]] -= 1
self.sigma[r] = (w_s + self.sigma[r]) % self.k
self.m[self.sigma[r]] += 1
else: # Higher energy: flip sometimes
probability = np.exp(-1.0 * self._beta * delta_H)
if np.random.rand() < probability:
self.m[self.sigma[r]] -= 1
self.sigma[r] = (w_s + self.sigma[r]) % self.k
self.m[self.sigma[r]] += 1
def metropolis(self, N=40):
# iterate given rounds
for _ in range(N):
for _ in range(self.n):
self._metropolis_single()
return self.sigma
def _estimate_a_b(graph, k=2):
'''for multiple communities
'''
n = len(graph.nodes)
e = len(graph.edges)
t = 0
for _, v in nx.algorithms.cluster.triangles(graph).items():
t += v
t /= 3
eq_1 = e / (n * np.log(n))
eq_2 = t / (np.log(n) ** 3)
# solve b first
coeff_3 = -1 * (k - 1)
coeff_2 = 6 * (k - 1) * eq_1
coeff_1 = -12 * (k - 1) * (eq_1 ** 2)
coeff_0 = 8 * k * (eq_1 ** 3) - 6 * eq_2
coeff = [coeff_3, coeff_2, coeff_1, coeff_0]
b = -1
for r in np.roots(coeff):
if abs(np.imag(r)) < 1e-10:
b = np.real(r)
break
a = 2 * k * eq_1 - (k - 1) * b
if b < 0 or b > a:
raise ValueError('estimated parameters are invalid (b < 0 or b > a)')
return (a, b)
def SIBM(graph, k=2, max_iter=40, gamma=None, beta=None):
if beta is None:
try:
a, b = _estimate_a_b(graph, k)
except ValueError:
a, b = k, k
square_term = (a + b - k) ** 2 - 4 * a * b
if square_term > 0:
_beta_star = np.log((a + b - k - np.sqrt(square_term))/ (2 * b))
beta = 1.2 * _beta_star
else:
beta = 1.2
if gamma is None:
gamma = 6 * b
if k == 2:
sibm = SIBM2(graph, beta, gamma)
else:
sibm = SIBMk(graph, beta, gamma, k)
return sibm.metropolis(N=max_iter)
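# --- Hedged usage sketch (editor's addition, not part of the original module) ---
# Draw a graph from a symmetric stochastic block model and recover community
# labels with the Metropolis-based solver defined above.
#
#   graph = sbm_graph(n=100, k=2, a=16, b=4)
#   labels = SIBM(graph, k=2, max_iter=40)   # list of +1/-1 labels when k == 2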
|
/sbmising-0.1.tar.gz/sbmising-0.1/sbmising.py
| 0.568895 | 0.28914 |
sbmising.py
|
pypi
|
pypi-download-stats
========================
[](https://pypi.python.org/pypi/sbnltk/)
[](https://pypi.python.org/pypi/sbnltk/)
[](https://pypi.python.org/pypi/sbnltk/)
[](https://pypi.python.org/pypi/sbnltk/)
[](https://pypi.python.org/pypi/sbnltk/)
# SBNLTK
SUST-Bangla Natural Language Toolkit: a Python module for Bangla NLP tasks.\
Demo Version: 1.0 \
**Requires Python 3.6+! Use a virtual environment to avoid unnecessary issues!**
## INSTALLATION
### PYPI INSTALLATION
```commandline
pip3 install sbnltk
pip3 install simpletransformers
```
### MANUAL INSTALLATION FROM GITHUB
* Clone this project
* Install all the requirements
* Run setup.py from the terminal (see the sketch below)
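A rough sketch of these steps (the requirements file name and install command are assumptions and may differ in the repository):
```commandline
git clone https://github.com/Foysal87/sbnltk.git
cd sbnltk
pip3 install -r requirements.txt
python3 setup.py install
```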
## What will you get here?
* Bangla Text Preprocessor
* Bangla word dust, punctuation, and stop word removal
* Bangla word sorting according to the Bangla or English alphabet
* Bangla word normalization
* Bangla word stemmer
* Bangla sentiment analysis (LogisticRegression, LinearSVC, Multinomial Naive Bayes, Random Forest)
* Bangla sentiment analysis with BERT
* Bangla sentence POS tagger (static, sklearn)
* Bangla sentence POS tagger with BERT (Multilingual cased, Multilingual uncased)
* Bangla sentence NER (static, sklearn)
* Bangla sentence NER with BERT (BERT cased, Multilingual cased/uncased)
* Bangla word2vec (gensim, glove, fasttext)
* Bangla sentence embedding (contextual, Transformer/BERT)
* Bangla document summarization (feature based, contextual, semantic based)
* Bangla bi-lingual project (Bangla to English Google translator without blocking IP)
* Bangla document information extraction
**SEE THE CODE DOCS FOR USAGE!**
## TASKS, MODELS, ACCURACY, DATASET AND DOCS
| TASK | MODEL | ACCURACY | DATASET | About | Code DOCS |
|:-------------------------:|:-----------------------------------------------------------------:|:--------------:|:-----------------------:|:-----:|:---------:|
| Preprocessor | Punctuation, Stop Word, DUST removal Word normalization, others.. | ------ | ----- | |[docs](https://github.com/Foysal87/sbnltk/blob/main/docs/preprocessor.md) |
| Word tokenizers | basic tokenizers Customized tokenizers | ---- | ---- | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Tokenizer.md#word-tokenizer) |
| Sentence tokenizers | Basic tokenizers Customized tokenizers Sentence Cluster | ----- | ----- | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Tokenizer.md#sentence-tokenizer) |
| Stemmer | StemmerOP | 85.5% | ---- | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Stemmer.md) |
| Sentiment Analysis | logisticRegression | 88.5% | 20,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentiment%20Analyzer.md#logistic-regression) |
| | LinearSVC | 82.3% | 20,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentiment%20Analyzer.md#linear-svc) |
| | Multilnomial_naive_bayes | 84.1% | 20,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentiment%20Analyzer.md#multinomial-naive-bayes) |
| | Random Forest | 86.9% | 20,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentiment%20Analyzer.md#random-forest) |
| | BERT | 93.2% | 20,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentiment%20Analyzer.md#bert-sentiment-analysis) |
| POS tagger | Static method | 55.5% | 1,40,973 words | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Postag.md#static-postag) |
| | SK-LEARN classification | 81.2% | 6,000+ sentences | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Postag.md#sklearn-postag) |
| | BERT-Multilingual-Cased | 69.2% | 6,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Postag.md#bert-multilingual-cased-postag) |
| | BERT-Multilingual-Uncased | 78.7% | 6,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Postag.md#bert-multilingual-uncased-postag) |
| NER tagger | Static method | 65.3% | 4,08,837 Entity | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/NER.md#static-ner) |
| | SK-LEARN classification | 81.2% | 65,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/NER.md#sklearn-classification) |
| | BERT-Cased | 79.2% | 65,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/NER.md#bert-cased-ner) |
| | BERT-Mutilingual-Cased | 75.5% | 65,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/NER.md#bert-multilingual-cased-ner) |
| | BERT-Multilingual-Uncased | 90.5% | 65,000+ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/NER.md#bert-multilingual-uncased-ner) |
| Word Embedding | Gensim-word2vec-100D- 1,00,00,000+ tokens | - | 2,00,00,000+ sentences | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Word%20Embedding.md#gensim-word-embedding) |
| | Glove-word2vec-100D- 2,30,000+ tokens | - | 5,00,000 sentences | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Word%20Embedding.md#fasttext-word-embedding) |
| | fastext-word2vec-200D 3,00,000+ | - | 5,00,000 sentences | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Word%20Embedding.md#glove-word-embedding) |
| Sentence Embedding | Contextual sentence embedding | - | ----- | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentence%20Embedding.md#sentence-embedding-from-word2vec) |
| | Transformer embedding_hd | - | 3,00,000+ human data | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentence%20Embedding.md#sentence-embedding-transformer-human-translate-data) |
| | Transformer embedding_gd | - | 3,00,000+ google data | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Sentence%20Embedding.md#sentence-embedding-transformer-google-translate-data) |
| Extractive Summarization | Feature-based based | 70.0% f1 score | ------ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Summarization.md#feature-based-model) |
| | Transformer sentence sentiment Based | 67.0% | ------ | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Summarization.md#word2vec_based_model) |
| | Word2vec--sentences contextual Based | 60.0% | ----- | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Summarization.md#transformer_based_model) |
| Bi-lingual projects | google translator with large data detector | ---- | ---- | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/Bangla%20translator.md#using-google-translator) |
| Information Extraction | Static word features | - | | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/information%20extraction.md#feature-based-extraction) |
| | Semantic and contextual | - | | | [docs](https://github.com/Foysal87/sbnltk/blob/main/docs/information%20extraction.md#contextual-information-extraction) |
## Next releases after testing this demo
| Task | Version |
|:------------------------------:|:------------:|
| Coreference Resolution | v1.1 |
| Language translation | V1.1 |
| Masked Language model | V1.1 |
| Information retrieval Projects | V1.1 |
| Entity Segmentation | v1.3 |
| Factoid Question Answering | v1.2 |
| Question Classification | v1.2 |
| sentiment Word embedding | v1.3 |
| So many others features | --- |
## Package Installation
You have to install these packages manually if you get a module error (see the commands below).
* simpletransformers
* fasttext
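For example, with standard pip installs of the two packages named above:
```commandline
pip3 install simpletransformers
pip3 install fasttext
```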
## Models
Everything is automated here. When you call a model for the first time, it will be downloaded automatically.
## With GPU or Without GPU
* With GPU, you can run any model without getting any warnings.
* Without GPU, you will get some warnings, but this will not affect the results.
## Motivation
With approximately 228 million native speakers and another 37 million second-language speakers, Bengali is the fifth most-spoken native
language and the seventh most-spoken language by total number of speakers in the world. Yet it is still a low-resource language. Why?
## Dataset
For all sbnltk datasets and other existing datasets, see this link: [Bangla NLP Dataset](https://github.com/Foysal87/Bangla-NLP-Dataset)
## Trainer
For training, you can see this [Colab Trainer](https://github.com/Foysal87/NLP-colab-trainer). In the future, I will make a Trainer module!
## When will full version come?
Very soon. We are working on the paper and improving our modules. It will be released sequentially.
## About accuracy
Accuracy can vary across datasets. We measured our models on random but small-scale datasets, as the human resources for this project are limited.
## Contribute Here
* If you find any issue, please create an issue or contact me.
|
/sbnltk-1.0.7.tar.gz/sbnltk-1.0.7/README.md
| 0.654343 | 0.843315 |
README.md
|
pypi
|
from __future__ import unicode_literals
from io import StringIO
from subprocess import CalledProcessError, check_call, check_output, Popen,\
PIPE
import uuid
import threading
import time
import os
# Solution for detecting when the Selenium standalone server is ready to go by
# listening to its console output. Obtained from
# http://stackoverflow.com/questions/3076542/how-can-i-read-all-availably-data-from-subprocess-popen-stdout-non-blocking/3078292#3078292
class InputStreamChunker(threading.Thread):
"""
Threaded object / code that mediates reading output from a stream,
detects "separation markers" in the stream and spits out chunks
of original stream, split when ends of chunk are encountered.
Results are made available as a list of filled file-like objects
(your choice). Results are accessible either "asynchronously"
(you can poll at will for results in a non-blocking way) or
"synchronously" by exposing a "subscribe and wait" system based
on threading.Event flags.
Usage:
* instantiate this object
* give our input pipe as "stdout" to other subprocess and start it::
Popen(..., stdout = th.input, ...)
* (optional) subscribe to data_available event
* pull resulting file-like objects off .data (if you are "messing" with
.data from outside of the thread, be courteous and wrap the
thread-unsafe manipulations between::
obj.data_unoccupied.clear()
... mess with .data
obj.data_unoccupied.set()
The thread will not touch obj.data for the duration and will block
reading.)
License: Public domain
Absolutely no warranty provided
"""
def __init__(self, delimiter=None, outputObjConstructor=None):
"""
delimiter - the string that will be considered a delimiter for the stream
outputObjConstructor - instances of these will be attached to the
self.data array (instantiator_pointer, args, kw)
"""
super(InputStreamChunker, self).__init__()
self._data_available = threading.Event()
self._data_available.clear() # parent will .wait() on this for results
self._data = []
self._data_unoccupied = threading.Event()
self._data_unoccupied.set() # parent will set this to true when self.results is being changed from outside
self._r, self._w = os.pipe() # takes all inputs. self.input = public pipe in.
self._stop = False
if not delimiter:
delimiter = str(uuid.uuid1())
self._stream_delimiter = [l for l in delimiter]
self._stream_roll_back_len = (len(delimiter) - 1) * -1
if not outputObjConstructor:
self._obj = (StringIO, (), {})
else:
self._obj = outputObjConstructor
@property
def data_available(self):
"""returns a threading.Event instance pointer that is
True (and non-blocking to .wait() ) when we attached a
new IO obj to the .data array.
Code consuming the array may decide to set it back to False
if it's done with all chunks and wants to be blocked on .wait()"""
return self._data_available
@property
def data_unoccupied(self):
"""returns a threading.Event instance pointer that is normally
True (and non-blocking to .wait() ) Set it to False with .clear()
before you start non-thread-safe manipulations (changing) .data
array. Set it back to True with .set() when you are done"""
return self._data_unoccupied
@property
def data(self):
"""returns a list of input chunks (file-like objects) captured
so far. This is a "stack" of sorts. Code consuming the chunks
would be responsible for disposing of the file-like objects.
By default, the file-like objects are instances of StringIO"""
return self._data
@property
def input(self):
"""This is a file descriptor (not a file-like).
It's the input end of our pipe which you give to other process
to be used as stdout pipe for that process"""
return self._w
def flush(self):
"""Normally a read on a pipe is blocking.
To get things moving (i.e. make the subprocess yield the buffer),
we inject our chunk delimiter into self.input.
This is useful when the primary subprocess does not write anything
to our in pipe, but we need to make the internal pipe reader let go
of the pipe and move on with things.
"""
os.write(self._w, ''.join(self._stream_delimiter))
def stop(self):
self._stop = True
self.flush() # reader has its teeth on the pipe. This makes it let go for a sec.
os.close(self._w)
self._data_available.set()
def __del__(self):
try:
self.stop()
except:
pass
try:
del self._w
del self._r
del self._data
except:
pass
def run(self):
""" Plan:
* We read into a fresh instance of IO obj until marker encountered.
* When marker is detected, we attach that IO obj to "results" array
and signal the calling code (through threading.Event flag) that
results are available
* repeat until .stop() was called on the thread.
"""
marker = ['' for l in self._stream_delimiter] # '' is there on purpose
tf = self._obj[0](*self._obj[1], **self._obj[2])
while not self._stop:
l = os.read(self._r, 1)
marker.pop(0)
marker.append(l)
if marker != self._stream_delimiter:
tf.write(unicode(l))
else:
# chopping off the marker first
tf.seek(self._stream_roll_back_len, 2)
tf.truncate()
tf.seek(0)
self._data_unoccupied.wait(5) # seriously, how much time is needed to get your items off the stack?
self._data.append(tf)
self._data_available.set()
tf = self._obj[0](*self._obj[1], **self._obj[2])
os.close(self._r)
tf.close()
del tf
class OutputMonitor:
"""
Configure an output stream which can tell when a particular string has
appeared.
"""
def __init__(self):
self.stream = InputStreamChunker('\n')
self.stream.daemon = True
self.stream.start()
self.lines = []
def wait_for(self, text, seconds):
"""
Returns True when the specified text has appeared in a line of the
output, or False when the specified number of seconds have passed
without that occurring.
"""
found = False
stream = self.stream
start_time = time.time()
while not found:
if time.time() - start_time > seconds:
break
stream.data_available.wait(0.5)
stream.data_unoccupied.clear()
while stream.data:
line = stream.data.pop(0)
value = line.getvalue()
if text in value:
found = True
self.lines.append(value)
stream.data_available.clear()
stream.data_unoccupied.set()
if time.time() - start_time > seconds:
break
return found
class DockerSelenium:
"""
Configuration for a Docker-hosted standalone Selenium server which can be
started and stopped as desired. Assumes that the terminal is already
configured for command-line docker usage, and uses the images provided by
https://github.com/SeleniumHQ/docker-selenium .
"""
def __init__(self, browser='chrome', port=4444, tag='2.53.0', debug=False):
self.container_id = None
self.ip_address = None
self.port = port
shared_memory = '-v /dev/shm:/dev/shm' if browser == 'chrome' else ''
debug_suffix = '-debug' if debug else ''
image_name = 'selenium/standalone-{}{}:{}'.format(browser,
debug_suffix, tag)
self.command = 'docker run -d -p {}:4444 {} {}'.format(port,
shared_memory,
image_name)
def command_executor(self):
"""Get the appropriate command executor URL for the Selenium server
running in the Docker container."""
ip_address = self.ip_address if self.ip_address else '127.0.0.1'
return 'http://{}:{}/wd/hub'.format(ip_address, self.port)
def start(self):
"""Start the Docker container"""
if self.container_id is not None:
msg = 'The Docker container is already running with ID {}'
raise Exception(msg.format(self.container_id))
process = Popen(['docker ps | grep ":{}"'.format(self.port)],
shell=True, stdout=PIPE)
(grep_output, _grep_error) = process.communicate()
lines = grep_output.split('\n')
for line in lines:
if ':{}'.format(self.port) in line:
other_id = line.split()[0]
msg = 'Port {} is already being used by container {}'
raise Exception(msg.format(self.port, other_id))
self.container_id = check_output(self.command, shell=True).strip()
try:
self.ip_address = check_output(['docker-machine', 'ip']).strip()
except (CalledProcessError, OSError):
self.ip_address = '127.0.0.1'
output = OutputMonitor()
logs_process = Popen(['docker', 'logs', '-f', self.container_id],
stdout=output.stream.input,
stderr=open(os.devnull, 'w'))
ready_log_line = 'Selenium Server is up and running'
if not output.wait_for(ready_log_line, 10):
logs_process.kill()
msg = 'Timeout starting the Selenium server Docker container:\n'
msg += '\n'.join(output.lines)
raise Exception(msg)
logs_process.kill()
def stop(self):
"""Stop the Docker container"""
if self.container_id is None:
raise Exception('No Docker Selenium container was running')
check_call(['docker', 'stop', self.container_id])
self.container_id = None
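# --- Illustrative usage sketch (added for documentation; not part of the
# original module). Assumes Docker is installed and that the
# selenium/standalone-chrome image can be pulled.
if __name__ == '__main__':
    selenium_server = DockerSelenium(browser='chrome', port=4444)
    selenium_server.start()
    try:
        # Hand this URL to webdriver.Remote(command_executor=...) in a test run.
        print('Remote WebDriver URL: ' + selenium_server.command_executor())
    finally:
        selenium_server.stop()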
|
/sbo-selenium-0.7.2.tar.gz/sbo-selenium-0.7.2/sbo_selenium/utils.py
| 0.671578 | 0.203075 |
utils.py
|
pypi
|
import sbol3
import graphviz
import rdflib
import argparse
def graph_sbol(doc, outfile='out'):
g = doc.graph()
dot_master = graphviz.Digraph()
dot = graphviz.Digraph(name='cluster_toplevels')
for obj in doc.objects:
dot.graph_attr['style'] = 'invis'
# Graph TopLevel
obj_label = _get_node_label(g, obj.identity)
dot.node('Document')
dot.node(_strip_scheme(obj.identity))
dot.edge('Document', _strip_scheme(obj.identity))
dot_master.subgraph(dot)
for obj in doc.objects:
dot = graphviz.Digraph(name='cluster_%s' %_strip_scheme(obj.identity))
dot.graph_attr['style'] = 'invis'
# Graph owned objects
t = _visit_children(obj, [])
for start_node, edge, end_node in t:
start_label = _get_node_label(g, start_node)
end_label = _get_node_label(g, end_node)
dot.node(_strip_scheme(start_node), label=start_label)
dot.node(_strip_scheme(end_node), label=end_label)
dot.edge(_strip_scheme(start_node), _strip_scheme(end_node), label=edge, **composition_relationship)
dot_master.subgraph(dot)
for obj in doc.objects:
# Graph associations
t = _visit_associations(obj, [])
for triple in t:
start_node, edge, end_node = triple
start_label = _get_node_label(g, start_node)
end_label = _get_node_label(g, end_node)
dot_master.node(_strip_scheme(start_node), label=start_label)
dot_master.node(_strip_scheme(end_node), label=end_label)
# See https://stackoverflow.com/questions/2499032/subgraph-cluster-ranking-in-dot
# constraint=false commonly gives unnecessarily convoluted edges.
# It seems that weight=0 gives better results:
dot_master.edge(_strip_scheme(start_node), _strip_scheme(end_node), label=edge, weight='0', **association_relationship)
#print(dot_master.source)
dot_master.render(outfile, view=True)
def _get_node_label(graph, uri):
label = None
for name in graph.objects(rdflib.URIRef(uri), rdflib.URIRef('http://sbols.org/v3#name')):
return name
for display_id in graph.objects(rdflib.URIRef(uri), rdflib.URIRef('http://sbols.org/v3#displayId')):
return display_id
return uri.split('//')[-1]
def _strip_scheme(uri):
return uri.split('//')[-1]
def _visit_children(obj, triples=[]):
for property_name, sbol_property in obj.__dict__.items():
if isinstance(sbol_property, sbol3.ownedobject.OwnedObjectSingletonProperty):
child = sbol_property.get()
if child is not None:
_visit_children(child, triples)
triples.append((obj.identity,
property_name,
child.identity))
elif isinstance(sbol_property, sbol3.ownedobject.OwnedObjectListProperty):
for child in sbol_property:
_visit_children(child, triples)
triples.append((obj.identity,
property_name,
child.identity))
return triples
def _visit_associations(obj, triples=[]):
for property_name, sbol_property in obj.__dict__.items():
if isinstance(sbol_property, sbol3.refobj_property.ReferencedObjectSingleton):
referenced_object = sbol_property.get()
if referenced_object is not None:
triples.append((obj.identity,
property_name,
referenced_object))
elif isinstance(sbol_property, sbol3.refobj_property.ReferencedObjectList):
for referenced_object in sbol_property:
triples.append((obj.identity,
property_name,
referenced_object))
elif isinstance(sbol_property, sbol3.ownedobject.OwnedObjectSingletonProperty):
child = sbol_property.get()
if child is not None:
_visit_associations(child, triples)
elif isinstance(sbol_property, sbol3.ownedobject.OwnedObjectListProperty):
for child in sbol_property:
_visit_associations(child, triples)
return triples
association_relationship = {
'arrowtail' : 'odiamond',
'arrowhead' : 'vee',
'fontname' : 'Bitstream Vera Sans',
'fontsize' : '8',
'dir' : 'both'
}
composition_relationship = {
'arrowtail' : 'diamond',
'arrowhead' : 'vee',
'fontname' : 'Bitstream Vera Sans',
'fontsize' : '8',
'dir' : 'both'
}
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-i",
"--input",
dest="in_file",
help="Input PAML file",
)
args_dict = vars(parser.parse_args())
doc = sbol3.Document()
doc.read(args_dict['in_file'])
graph_sbol(doc, args_dict['in_file'].split('.')[0])
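# Example (illustrative, not part of the original module): calling graph_sbol
# directly from Python instead of through the CLI above. The file name
# 'design.nt' is a placeholder.
#
#   doc = sbol3.Document()
#   doc.read('design.nt')
#   graph_sbol(doc, outfile='design')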
|
/sbol-utilities-1.0a14.tar.gz/sbol-utilities-1.0a14/sbol_utilities/graph_sbol.py
| 0.427994 | 0.167797 |
graph_sbol.py
|
pypi
|
import argparse
import logging
import os
import sys
import time
from typing import Union, Tuple, Optional, Sequence
import rdflib.compare
import sbol3
def _load_rdf(fpath: Union[str, bytes, os.PathLike]) -> rdflib.Graph:
rdf_format = rdflib.util.guess_format(fpath)
graph1 = rdflib.Graph()
graph1.parse(fpath, format=rdf_format)
return graph1
def _diff_graphs(g1: rdflib.Graph, g2: rdflib.Graph) -> Tuple[rdflib.Graph, rdflib.Graph, rdflib.Graph]:
iso1 = rdflib.compare.to_isomorphic(g1)
iso2 = rdflib.compare.to_isomorphic(g2)
rdf_diff = rdflib.compare.graph_diff(iso1, iso2)
return rdf_diff
def _report_triples(header: Optional[str], graph: rdflib.Graph) -> None:
if header:
print(header)
for s, p, o in graph:
print(f'\t{s}, {p}, {o}')
return None
def _report_diffs(desc1: str, in1: rdflib.Graph,
desc2: str, in2: rdflib.Graph) -> None:
if in1:
header = f'Triples in {desc1}, not in {desc2}:'
_report_triples(header, in1)
if in2:
header = f'Triples in {desc2}, not in {desc1}:'
_report_triples(header, in2)
def _diff_rdf(desc1: str, g1: rdflib.Graph, desc2: str, g2: rdflib.Graph,
silent: bool = False) -> int:
_, in1, in2 = _diff_graphs(g1, g2)
if not in1 and not in2:
return 0
else:
if not silent:
_report_diffs(desc1, in1, desc2, in2)
return 1
def file_diff(fpath1: str, fpath2: str, silent: bool = False) -> int:
"""
Compute and report the difference between two SBOL3 files
:param fpath1: path to the first SBOL3 file
:param fpath2: path to the second SBOL3 file
:param silent: whether to report differences to stdout
:return: 1 if there are differences, 0 if they are the same
"""
return _diff_rdf(fpath1, _load_rdf(fpath1), fpath2, _load_rdf(fpath2),
silent=silent)
def doc_diff(doc1: sbol3.Document, doc2: sbol3.Document,
silent: bool = False) -> int:
"""
Compute and report the difference between two SBOL3 documents
:param doc1: the first SBOL3 document
:param doc2: the second SBOL3 document
:param silent: whether to report differences to stdout
:return: 1 if there are differences, 0 if they are the same
"""
return _diff_rdf('Document 1', doc1.graph(), 'Document 2', doc2.graph(),
silent=silent)
def _init_logging(debug=False):
msg_format = "%(asctime)s.%(msecs)03dZ:%(levelname)s:%(message)s"
date_format = "%Y-%m-%dT%H:%M:%S"
level = logging.INFO
if debug:
level = logging.DEBUG
logging.basicConfig(format=msg_format, datefmt=date_format, level=level)
logging.Formatter.converter = time.gmtime
def _parse_args(args: Optional[Sequence[str]] = None):
parser = argparse.ArgumentParser()
parser.add_argument('file1', metavar='FILE1',
help='First Input File')
parser.add_argument('file2', metavar='FILE2',
help='Second Input File')
parser.add_argument('-s', '--silent', action='store_true',
help='Generate no output, only status')
parser.add_argument('--debug', action='store_true',
help='Enable debug logging (default: disabled)')
args = parser.parse_args(args)
return args
def main(argv: Optional[Sequence[str]] = None) -> int:
"""
Command line interface to sbol_diff
@param argv: command line arguments
@return: 1 if there are differences, 0 if they are the same
"""
args = _parse_args(argv)
_init_logging(args.debug)
return file_diff(args.file1, args.file2, silent=args.silent)
if __name__ == '__main__':
sys.exit(main())
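# Example (illustrative, not part of the original module): comparing two
# in-memory documents with doc_diff instead of the file-based CLI above.
# The file names are placeholders.
#
#   doc1 = sbol3.Document(); doc1.read('design_a.ttl')
#   doc2 = sbol3.Document(); doc2.read('design_b.ttl')
#   if doc_diff(doc1, doc2, silent=True):
#       print('The documents differ')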
|
/sbol-utilities-1.0a14.tar.gz/sbol-utilities-1.0a14/sbol_utilities/sbol_diff.py
| 0.631026 | 0.3465 |
sbol_diff.py
|
pypi
|
# Introduction
1. Create an account on SynBioHub
2. Make sure you've downloaded `parts.xml` and it is placed somewhere convenient on your computer.
3. Make sure you've downloaded `results.txt` and it is placed somewhere convenient on your computer.
4. Install SBOL library in language of choice
# Getting a Device from an SBOL Compliant XML
```
import sbol2
# Set the default namespace (e.g. “http://my_namespace.org”)
sbol2.setHomespace('http://sys-bio.org')
# Create a new SBOL document
doc = sbol2.Document()
# Load some generic parts from `parts.xml` into another Document
doc.read('../test/resources/tutorial/parts.xml')
# Inspect the Document
for obj in doc:
print(obj.displayId, obj.type)
```
# Getting a Device from Synbiohub
```
# Start an interface to igem’s public part shop on SynBioHub. Located at `https://synbiohub.org/public/igem`
igem = sbol2.PartShop('https://synbiohub.org/public/igem')
# Search the part shop for promoter parts using the search term `promoter`
records = igem.search('promoter')
for r in records:
print(r.identity)
# Import the medium strength device into your document
igem.pull('https://synbiohub.org/public/igem/BBa_R0040/1', doc)
for cd in doc.componentDefinitions:
print(cd)
```
# Extracting ComponentDefinitions from a Pre-existing Device
```
# Extract the medium strength promoter `Medium_2016Interlab` from your document.
promoter = doc.componentDefinitions['https://synbiohub.org/public/igem/BBa_R0040/1']
# Extract the ribosomal binding site (rbs) `Q2` from your document.
rbs = doc.componentDefinitions['Q2']
# Extract the coding region (cds) `LuxR` from your document.
cds = doc.componentDefinitions['LuxR']
# Extract the terminator `ECK120010818` from your document.
term = doc.componentDefinitions['ECK120010818']
```
# Creating a New Device
```
# Create a new empty device named `my_device`
my_device = doc.componentDefinitions.create('my_device')
# Assemble the new device from the promoter, rbs, cds, and terminator from above.
my_device.assemblePrimaryStructure([promoter, rbs, cds, term], sbol2.IGEM_STANDARD_ASSEMBLY)
# Inspect the primary structure
for cd in my_device.getPrimaryStructure():
print(cd.displayId)
# Compile the sequence for the new device
nucleotides = my_device.compile()
seq = my_device.sequence
print(seq.elements)
# Set the role of the device with the Sequence Ontology term `gene`
my_device.roles = [sbol2.SO_GENE]
print(my_device.roles)
my_device.roles = [sbol2.SO + '0000444']
my_device.roles
```
|
/sbol2-1.4.1.tar.gz/sbol2-1.4.1/examples/tutorial.ipynb
| 0.550849 | 0.838481 |
tutorial.ipynb
|
pypi
|
# Introduction
1. Create an account on SynBioHub
2. Make sure you've downloaded `parts.xml` and it is placed somewhere convenient on your computer.
3. Make sure you've downloaded `results.txt` and it is placed somewhere convenient on your computer.
4. Install SBOL library in language of choice
# Getting a Device from an SBOL Compliant XML
```
import sbol2
# Set the default namespace (e.g. “http://my_namespace.org”)
sbol2.setHomespace('http://sys-bio.org')
# Create a new SBOL document
doc = sbol2.Document()
# Load some generic parts from `parts.xml` into another Document
doc.read('../test/resources/tutorial/parts.xml')
# Inspect the Document
for obj in doc:
print(obj.displayId, obj.type)
```
# Getting a Device from Synbiohub
```
# Start an interface to igem’s public part shop on SynBioHub. Located at `https://synbiohub.org/public/igem`
igem = sbol2.PartShop('https://synbiohub.org/public/igem')
# Search the part shop for promoter parts using the search term `promoter`
records = igem.search('promoter')
for r in records:
print(r.identity)
# Import the medium strength device into your document
igem.pull('https://synbiohub.org/public/igem/BBa_R0040/1', doc)
for cd in doc.componentDefinitions:
print(cd)
```
# Extracting ComponentDefinitions from a Pre-existing Device
```
# Extract the medium strength promoter `Medium_2016Interlab` from your document.
promoter = doc.componentDefinitions['https://synbiohub.org/public/igem/BBa_R0040/1']
# Extract the ribosomal binding site (rbs) `Q2` from your document.
rbs = doc.componentDefinitions['Q2']
# Extract the coding region (cds) `LuxR` from your document.
cds = doc.componentDefinitions['LuxR']
# Extract the terminator `ECK120010818` from your document.
term = doc.componentDefinitions['ECK120010818']
```
# Creating a New Device
```
# Create a new empty device named `my_device`
my_device = doc.componentDefinitions.create('my_device')
# Assemble the new device from the promoter, rbs, cds, and terminator from above.
my_device.assemblePrimaryStructure([promoter, rbs, cds, term], sbol2.IGEM_STANDARD_ASSEMBLY)
# Inspect the primary structure
for cd in my_device.getPrimaryStructure():
print(cd.displayId)
# Compile the sequence for the new device
nucleotides = my_device.compile()
seq = my_device.sequence
print(seq.elements)
# Set the role of the device with the Sequence Ontology term `gene`
my_device.roles = [sbol2.SO_GENE]
print(my_device.roles)
my_device.roles = [sbol2.SO + '0000444']
my_device.roles
```
|
/sbol2-1.4.1.tar.gz/sbol2-1.4.1/examples/.ipynb_checkpoints/tutorial-checkpoint.ipynb
| 0.550849 | 0.838481 |
tutorial-checkpoint.ipynb
|
pypi
|
from .optimizer_iteration import OptimizerIteration
import numpy as np
import qiskit.algorithms.optimizers.optimizer as qiskitopt
from typing import Dict, Optional, Union, Callable, Tuple, List
POINT = Union[float, np.ndarray]
# qiskit documentation for base Optimizer class:
# https://qiskit.org/documentation/_modules/qiskit/algorithms/optimizers/optimizer.html
# example implementation of SPSA class derived from Optimizer:
# https://qiskit.org/documentation/_modules/qiskit/algorithms/optimizers/spsa.html
class Optimizer(qiskitopt.Optimizer):
'''
Implements surrogate-based optimization using a Gaussian kernel.
Parameters (notation from paper)
maxiter: number of optimization iterations (M)
patch_size: length of sampling hypercube sides (ℓ)
npoints_per_patch: sample points per iteration (𝜏)
epsilon_i: initial fraction of patch to exclude for optimization
region (ε_i)
epsilon_int: fraction of patch to exclude for edge effects on each
iteration (ε_int)
epsilon_f: final fraction of patch to include when performing final
averaging (ε_f)
nfev_final_avg: number of function evaluations to perform to calculate
final function value (if nfev_final_avg == 0, then
no final function value will be calculated)
'''
def __init__(
self,
maxiter: int = 100,
patch_size: float = 0.1,
npoints_per_patch: int = 20,
epsilon_i: float = 0.0,
epsilon_int: float = 0.05,
epsilon_f: float = 0.5,
nfev_final_avg: int = 0,
) -> None:
super().__init__()
# general optimizer arguments
self.maxiter = maxiter
self.patch_size = patch_size
self.npoints_per_patch = npoints_per_patch
self.epsilon_i = epsilon_i
self.epsilon_int = epsilon_int
self.epsilon_f = epsilon_f
self.nfev_final_avg = nfev_final_avg
def get_support_level(self) -> Dict:
"""Get the support level dictionary."""
return {
"initial_point": qiskitopt.OptimizerSupportLevel.required,
"gradient": qiskitopt.OptimizerSupportLevel.ignored,
"bounds": qiskitopt.OptimizerSupportLevel.ignored,
}
def minimize(
self,
fun: Callable[[POINT], float],
x0: POINT,
jac: Optional[Callable[[POINT], POINT]] = None,
bounds: Optional[List[Tuple[float, float]]] = None,
) -> qiskitopt.OptimizerResult:
"""Minimize the scalar function.
Args:
fun: The (possibly noisy) scalar function to minimize.
x0: The initial point for the minimization.
jac: The gradient of the scalar function ``fun``. Ignored.
bounds: Bounds for the variables of ``fun``. Ignored.
Returns:
The result of the optimization, containing e.g. the result
as attribute ``x``.
"""
optimizer_iteration = OptimizerIteration()
current_x = x0
local_minima_found = []
for i in range(self.maxiter):
optimize_bounds_size = (
self.patch_size
* (1.0 - self.epsilon_i)
* (1.0 - i / self.maxiter)
)
res = optimizer_iteration.minimize_kde(
fun,
current_x,
self.patch_size,
optimize_bounds_size,
self.npoints_per_patch,
)
new_x = res.x
distance = np.linalg.norm(new_x - current_x, ord=np.inf)
current_x = new_x
if distance < (self.patch_size / 2) * (1 - self.epsilon_int):
# local minimum found within this patch area
local_minima_found.append(new_x)
# use all nearby local minima to calculate the optimal x
local_minima_near_current_x = [
local_minimum
for local_minimum in local_minima_found
if (
np.linalg.norm(local_minimum - current_x, ord=np.inf)
< (self.patch_size / 2) * self.epsilon_f
)
]
optimal_x = (
np.mean(local_minima_near_current_x, axis=0)
if local_minima_near_current_x
else current_x
)
result = qiskitopt.OptimizerResult()
result.nfev = (
(self.maxiter * self.npoints_per_patch)
+ self.nfev_final_avg
)
result.nit = self.maxiter
result.x = optimal_x
if self.nfev_final_avg > 0:
result.fun = np.mean(
[fun(optimal_x) for _ in range(self.nfev_final_avg)]
)
else:
result.fun = (
'final function value not evaluated '
+ 'because nfev_final_avg == 0'
)
return result
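# --- Illustrative usage sketch (added for documentation; not part of the
# original module). Because of the relative import at the top of this file,
# it only runs when the package is importable (e.g. via ``python -m``).
# The objective function and hyper-parameter values below are placeholders.
if __name__ == '__main__':
    def noisy_quadratic(x):
        x = np.asarray(x)
        return float(np.sum((x - 0.3) ** 2) + 0.01 * np.random.randn())

    opt = Optimizer(maxiter=20, patch_size=0.2, npoints_per_patch=10,
                    nfev_final_avg=5)
    result = opt.minimize(fun=noisy_quadratic, x0=np.array([0.0, 0.0]))
    print('optimal x:', result.x)
    print('final fun:', result.fun)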
|
/optimizer/optimizer.py
| 0.966244 | 0.589657 |
optimizer.py
|
pypi
|
import numpy as np
from scipy import optimize
from typing import Callable, Tuple, Any
def scott_bandwidth(n: int, d: int) -> float:
'''
Scott's Rule per D.W. Scott,
"Multivariate Density Estimation: Theory, Practice, and Visualization",
John Wiley & Sons, New York, Chicester, 1992
'''
return n ** (-1. / (d + 4))
class OptimizerIteration:
'''
Implements a single iteration of surrogate-based optimization using
a Gaussian kernel.
'''
def __init__(self) -> None:
pass
def get_conditional_expectation_with_gradient(
self,
training_data: np.ndarray,
x: np.ndarray,
bandwidth_function: Callable = scott_bandwidth,
) -> Tuple[np.ndarray, np.ndarray]:
# normalize training data coordinates
training_x = training_data[:, :-1]
training_x_mean = np.mean(training_x, axis=0)
training_x_std = np.std(training_x, axis=0)
training_x_std[training_x_std == 0.0] = 1.0
training_x_normalized = (training_x - training_x_mean) / training_x_std
# normalize input coordinates
x_normalized = (x - training_x_mean) / training_x_std
# normalize training data z-values
training_z = training_data[:, -1]
training_z_mean = np.mean(training_z)
training_z_std = np.std(training_z) or 1.0
training_z_normalized = (training_z - training_z_mean) / training_z_std
# get the normalized conditional expectation in z
bandwidth = bandwidth_function(*training_x.shape)
gaussians = np.exp(
-1 / (2 * bandwidth**2)
* np.linalg.norm((training_x_normalized - x_normalized), axis=1)**2
)
exp_z_normalized = (
np.sum(training_z_normalized * gaussians) / np.sum(gaussians)
)
# calculate the gradients along each x coordinate
grad_gaussians = np.array([
(1 / (bandwidth**2))
* (training_x_normalized[:, i] - x_normalized[i]) * gaussians
for i in range(len(x_normalized))
])
grad_exp_z_normalized = np.array([(
np.sum(gaussians)
* np.sum(training_z_normalized * grad_gaussians[i])
- np.sum(training_z_normalized * gaussians)
* np.sum(grad_gaussians[i])
) / (np.sum(gaussians)**2) for i in range(len(grad_gaussians))])
# undo the normalization and return the expectation value and gradients
exp_z = training_z_mean + training_z_std * exp_z_normalized
grad_exp_z = training_z_std * grad_exp_z_normalized
return exp_z, grad_exp_z
def minimize_kde(
self,
f: Callable,
patch_center_x: np.ndarray,
patch_size: float,
optimize_bounds_size: float,
npoints_per_patch: int,
) -> Any:
training_point_angles = self._generate_x_coords(
patch_center_x, patch_size, npoints_per_patch)
measured_values = np.atleast_2d(
[f(x) for x in training_point_angles]
).T
return self._minimize_kde(
training_point_angles,
measured_values,
patch_center_x,
optimize_bounds_size,
)
def _minimize_kde(
self,
angles: np.ndarray,
values: np.ndarray,
patch_center_x: np.ndarray,
optimize_bounds_size: float,
) -> Any:
training = np.concatenate((angles, values), axis=1)
num_angles = len(patch_center_x)
bounds_limits = np.array([
[
patch_center_x[angle] - optimize_bounds_size / 2,
patch_center_x[angle] + optimize_bounds_size / 2
]
for angle in range(num_angles)
])
bounds = optimize.Bounds(
lb=bounds_limits[:, 0],
ub=bounds_limits[:, 1],
keep_feasible=True
)
return optimize.minimize(
fun=lambda x:
self.get_conditional_expectation_with_gradient(training, x),
jac=True,
x0=patch_center_x,
bounds=bounds,
method="L-BFGS-B",
)
def _generate_x_coords(
self,
center: np.ndarray,
patch_size: float,
num_points: int = 40,
) -> np.ndarray:
'''
Generate num_points sample coordinates using Latin hypercube sampling.
_lhsclassic copied from:
https://github.com/tisimst/pyDOE/blob/master/pyDOE/doe_lhs.py#L123-L141
'''
def _lhsclassic(n: int, samples: int) -> np.ndarray:
# Generate the intervals
cut = np.linspace(0, 1, samples + 1)
# Fill points uniformly in each interval
u = np.random.rand(samples, n)
a = cut[:samples]
b = cut[1:samples + 1]
rdpoints = np.zeros_like(u)
for j in range(n):
rdpoints[:, j] = u[:, j] * (b - a) + a
# Make the random pairings
H = np.zeros_like(rdpoints)
for j in range(n):
order = np.random.permutation(range(samples))
H[:, j] = rdpoints[order, j]
return H
n_dim = len(center)
lhs_points = _lhsclassic(n_dim, num_points)
return np.array([
((point - 0.5) * 2 * patch_size) + center
for point in lhs_points
])
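# --- Illustrative demo (added for documentation; not part of the original
# module). Fits the Gaussian-kernel surrogate to noisy samples of a 2-D
# quadratic and runs a single minimize_kde iteration; all numbers are
# placeholders.
if __name__ == '__main__':
    rng = np.random.default_rng(0)

    def objective(x):
        return float(np.sum((np.asarray(x) - 0.1) ** 2) + 0.01 * rng.standard_normal())

    iteration = OptimizerIteration()
    res = iteration.minimize_kde(
        f=objective,
        patch_center_x=np.zeros(2),
        patch_size=0.5,
        optimize_bounds_size=0.4,
        npoints_per_patch=30,
    )
    print('estimated minimizer (true minimum at [0.1, 0.1]):', res.x)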
|
/optimizer/optimizer_iteration.py
| 0.9271 | 0.775817 |
optimizer_iteration.py
|
pypi
|
import math
import numpy as np
import scipy
from SALib.sample import (
saltelli,
sobol_sequence,
latin,
finite_diff,
fast_sampler,
)
#: the size of bucket to draw random samples
NUM_DATA_POINTS = 10000
SUPPORTED_RANDOM_METHODS_TITLES = {
"pseudo_random": "Pseudo Random",
"sobol_sequence": "Sobol sequence",
"saltelli": "Saltelli's extension of Sobol sequence",
"latin_hypercube": "Latin hypercube",
"finite_differences": "Finite differences",
"fast": "Fourier Amplitude Sensitivity Test (FAST)",
}
SUPPORTED_RANDOM_METHODS = tuple(t for t in SUPPORTED_RANDOM_METHODS_TITLES)
"""Supported methods to draw random numbers in samplers. Supported random methods are
as follows:
* ``pseudo_random`` from numpy
**Pseudo Random**
* ``sobol_sequence`` from *SALib*
**Sobol sequence**
* ``saltelli`` from *SALib*
**Saltelli's extension of Sobol sequence**
* ``latin_hypercube`` from *SALib*
**Latin hypercube**
* ``finite_differences`` from *SALib*
**Finite differences**
* ``fast`` from *SALib*
**Fourier Amplitude Sensitivity Test (FAST)**
"""
class NormalRandomnessManager:
"""
Randomness manager that draws a bucket of **normally distributed** random numbers
and refills the bucket on demand when it is exhausted.
"""
def __init__(self):
# draws of normal distribution
self.normal_draws_reserve = None
# draws of half normal distribution
self.half_normal_draws_reserve = None
def redraw_normal(self, kappa, sigma, use_vonmises=True):
r"""Redraw the bucket of normally distributed random numbers
:param kappa: the kappa :math:`\kappa` parameter to be used to draw from a
von Mises distribution
.. math::
x \sim f_\text{VonMises}( x | \mu, \kappa)
:param sigma: the sigma :math:`\sigma` parameter to be used to draw from a
normal distribution
.. math::
x \sim \mathcal{N}( x | \mu, \sigma^2)
:param use_vonmises: (Default value = True)
"""
if use_vonmises:
assert kappa is not None
dist = np.random.vonmises(0, kappa, NUM_DATA_POINTS)
else:
assert sigma is not None
dist = np.random.normal(0, sigma, NUM_DATA_POINTS)
self.normal_draws_reserve = dist
def draw_normal(
self,
origin: np.ndarray,
use_vonmises: bool = True,
kappa: float = 1,
sigma: float = math.pi / 4,
) -> np.ndarray:
r"""Draw from the normal distribution
:param origin: the origin (mean) for the random number, which will be used to
shift the number that this function returns
:param use_vonmises: use the von Mises distribution
:param kappa: the :math:`\kappa` value
:param sigma: the :math:`\sigma` value
"""
if self.normal_draws_reserve is None or self.normal_draws_reserve.size < 1:
self.redraw_normal(
use_vonmises=use_vonmises, kappa=kappa, sigma=sigma
)
# draw from samples
draw = self.normal_draws_reserve[-1]
self.normal_draws_reserve = self.normal_draws_reserve[:-1]
# shift location
return draw + origin
def redraw_half_normal(self, start_at: np.ndarray, scale: float):
"""Re-draw from a half normal distribution
:param start_at: the origin
:param scale: the scale of the half normal
"""
dist = scipy.stats.halfnorm.rvs(loc=start_at, scale=scale, size=NUM_DATA_POINTS)
self.half_normal_draws_reserve = dist
def draw_half_normal(self, start_at: np.ndarray, scale: float = 1):
"""Draw from a half normal distribution
:param start_at: the origin
:param scale: the scale of the half normal
"""
if (
self.half_normal_draws_reserve is None
or self.half_normal_draws_reserve.size < 1
):
self.redraw_half_normal(start_at, scale)
# draw from samples
draw = self.half_normal_draws_reserve[-1]
self.half_normal_draws_reserve = self.half_normal_draws_reserve[:-1]
return draw
class RandomnessManager:
"""
A random number manager that draws a bucket of random numbers at a time and
redraws when the bucket is exhausted.
"""
def __init__(self, num_dim: int, bucket_size: int = NUM_DATA_POINTS):
"""
:param num_dim: the number of dimension
"""
# draws of random numbers
self.random_draws = {}
self.num_dim = num_dim
self.bucket_size = bucket_size
def redraw(self, random_method: str):
"""Redraw the random number with the given method
:param random_method: the random method to use
"""
problem = {
"num_vars": self.num_dim,
"names": list(range(self.num_dim)),
"bounds": [[0, 1]] * self.num_dim,
}
if random_method == "pseudo_random":
seq = np.random.random((self.bucket_size, self.num_dim))
elif random_method == "sobol_sequence":
seq = sobol_sequence.sample(self.bucket_size, self.num_dim)
elif random_method == "saltelli":
seq = saltelli.sample(problem, self.bucket_size, calc_second_order=False)
elif random_method == "latin_hypercube":
seq = latin.sample(problem, self.bucket_size)
elif random_method == "finite_differences":
seq = finite_diff.sample(problem, self.bucket_size)
elif random_method == "fast":
seq = fast_sampler.sample(problem, self.bucket_size, M=45)
else:
raise ValueError(f"Unknown random method {random_method}")
self.random_draws[random_method] = seq
def get_random(self, random_method):
"""Get one sample of random number :math:`r` where :math:`0 \le r \le 1`
:param random_method: The kind of random number method to use, must be one of
the choice in :data:`SUPPORTED_RANDOM_METHODS`
"""
if (
random_method not in self.random_draws
or self.random_draws[random_method].size < 1
):
self.redraw(random_method)
last = self.random_draws[random_method][-1]
self.random_draws[random_method] = self.random_draws[random_method][:-1]
return last
if __name__ == "__main__":
# demo: plot draws from the different quasi-random sampling methods
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
def show_fig(x, y, title=None):
figure(num=1, figsize=(8, 6), dpi=200)
plt.title(title)
plt.plot(x, y, "r.")
plt.show()
random_numbers = RandomnessManager(num_dim=2)
for _ in range(1):
for m in SUPPORTED_RANDOM_METHODS:
title = SUPPORTED_RANDOM_METHODS_TITLES[m]
random_numbers.redraw(m)
seq = random_numbers.random_draws[m][:NUM_DATA_POINTS]
show_fig(seq.T[0], seq.T[1], title)
|
/sbp_env-2.0.2-py3-none-any.whl/sbp_env/randomness.py
| 0.761716 | 0.521045 |
randomness.py
|
pypi
|
import logging
import math
import random
import time
import re
from typing import Optional, List, Union
import numpy as np
from tqdm import tqdm
from . import engine
from .utils.common import Node, PlanningOptions, Stats
from .utils.csv_stats_logger import setup_csv_stats_logger, get_non_existing_filename
from .visualiser import VisualiserSwitcher
from . import collisionChecker
LOGGER = logging.getLogger(__name__)
class Env:
"""Represents the planning environment.
The main planning loop happens inside this class.
"""
def __init__(self, args: PlanningOptions, fixed_seed: int = None):
"""
:param args: the dictionary of arguments used to configure the planning problem
:param fixed_seed: if given, fix the random seed
"""
self.started = False
if fixed_seed is not None:
np.random.seed(fixed_seed)
random.seed(fixed_seed)
print(f"Fixed random seed: {fixed_seed}")
# initialize and prepare screen
self.args = args
# setup visualiser
if self.args.no_display:
# use pass-through visualiser
VisualiserSwitcher.choose_visualiser("base")
args.sampler = args.sampler_data_pack.sampler_class(**args.as_dict())
def parse_input_pt(pt_as_str):
if pt_as_str is None:
return None
pt = pt_as_str.split(",")
if len(pt) != self.args.engine.get_dimension():
raise RuntimeError(
f"Expected to have number of dimension = {self.args.engine.get_dimension()}, "
f"but "
f"was n={len(pt)} from input '{pt_as_str}'"
)
return tuple(map(float, pt))
if type(self.args.start_pt) is str:
self.args.start_pt = parse_input_pt(self.args.start_pt)
if type(self.args.goal_pt) is str:
self.args.goal_pt = parse_input_pt(self.args.goal_pt)
self.args.planner = self.args.planner_data_pack.planner_class(self.args)
self.planner = self.args.planner
self.planner.args.env = self
self.visualiser = VisualiserSwitcher.env_clname(env_instance=self)
self.visualiser.visualiser_init(no_display=self.args.no_display)
start_pt, goal_pt = self.visualiser.set_start_goal_points(
start=self.args.start_pt, goal=self.args.goal_pt
)
if not self.args.engine.cc.feasible(start_pt):
raise ValueError(f"The given start conf. is not feasible {start_pt}.")
if not self.args.engine.cc.feasible(goal_pt):
raise ValueError(f"The given goal conf. is not feasible {goal_pt}.")
self.start_pt = self.goal_pt = None
if start_pt is not None:
self.start_pt = Node(start_pt)
if goal_pt is not None:
self.goal_pt = Node(goal_pt)
self.start_pt.is_start = True
self.goal_pt.is_goal = True
self.planner.add_newnode(self.start_pt)
self.visualiser.update_screen(update_all=True)
# update the string pt to object
self.args.start_pt = self.start_pt
self.args.goal_pt = self.goal_pt
self.planner.init(env=self, args=self.args)
if isinstance(self.args.engine, engine.KlamptEngine):
self.args.sampler.set_use_radian(True)
if self.args.epsilon > 1:
import warnings
warnings.warn(
f"Epsilon value is very high at {self.args.epsilon} ("
f">than 1.0). It might not work well as klampt uses "
f"radian for joints value"
)
if self.args.radius > 2:
import warnings
warnings.warn(
f"Radius value is very high at {self.args.radius} ("
f">than 2.0). It might not work well as klampt uses "
f"radian for joints value"
)
def __getattr__(self, attr):
"""This is called what self.attr doesn't exist.
Forward the call to the visualiser instance
"""
return object.__getattribute__(self.visualiser, attr)
@property
def sampler(self):
"""Pass through attribute access to sampler."""
return self.args.sampler
@staticmethod
def radian_dist(p1: np.ndarray, p2: np.ndarray):
"""Return the (possibly wrapped) distance between two vector of angles in
radians.
:param p1: first configuration :math:`q_1`
:param p2: second configuration :math:`q_2`
"""
# https://stackoverflow.com/questions/28036652/finding-the-shortest-distance-between-two-angles/28037434
diff = (p2 - p1 + np.pi) % (2 * np.pi) - np.pi
diff = np.where(diff < -np.pi, diff + (2 * np.pi), diff)
return np.linalg.norm(diff)
def step_from_to(self, p1: np.ndarray, p2: np.ndarray):
"""Get a new point from p1 to p2, according to step size.
:param p1: first configuration :math:`q_1`
:param p2: second configuration :math:`q_2`
"""
if self.args.ignore_step_size:
return p2
if np.isclose(p1, p2).all():
# p1 is at the same point as p2
return p2
unit_vector = p2 - p1
unit_vector = unit_vector / np.linalg.norm(unit_vector)
step_size = self.args.engine.dist(p1, p2)
step_size = min(step_size, self.args.epsilon)
return p1 + step_size * unit_vector
def run(self):
"""Run until we reached the specified max nodes"""
self.started = True
Stats.clear_instance()
stats = Stats.build_instance(showSampledPoint=self.args.showSampledPoint)
if self.args.save_output:
setup_csv_stats_logger(
get_non_existing_filename(
self.args.output_dir + "/%Y-%m-%d_%H-%M{}.csv"
)
)
csv_logger = logging.getLogger("CSV_STATS")
csv_logger.info(
[
"nodes",
"time",
"cc_feasibility",
"cc_visibility",
"invalid_feasibility",
"invalid_visibility",
"c_max",
]
)
start_time = time.time()
with tqdm(
total=self.args.max_number_nodes, desc=self.args.sampler.name
) as pbar:
while stats.valid_sample < self.args.max_number_nodes:
self.visualiser.update_screen()
self.planner.run_once()
pbar.n = stats.valid_sample
pbar.set_postfix(
{
"cc_fe": stats.feasible_cnt,
"cc_vi": stats.visible_cnt,
"fe": stats.invalid_samples_obstacles,
"vi": stats.invalid_samples_connections,
"c_max": self.planner.c_max,
}
)
if self.args.save_output:
csv_logger = logging.getLogger("CSV_STATS")
csv_logger.info(
[
stats.valid_sample,
time.time() - start_time,
stats.feasible_cnt,
stats.visible_cnt,
stats.invalid_samples_obstacles,
stats.invalid_samples_connections,
self.planner.c_max,
]
)
pbar.refresh()
if self.args.first_solution and self.planner.c_max < float("inf"):
break
self.visualiser.terminates_hook()
def get_solution_path(
self, as_array: bool = False
) -> Optional[Union[np.ndarray, List[Node]]]:
if self.planner.c_max >= float("inf"):
return None
nn = self.planner.goal_pt
path = []
while True:
path.append(nn)
if nn.is_start:
break
nn = nn.parent
path = reversed(path)
if as_array:
path = np.array([n.pos for n in path])
else:
path = list(path)
return path
|
/sbp_env-2.0.2-py3-none-any.whl/sbp_env/env.py
| 0.862685 | 0.205635 |
env.py
|
pypi
|
import random
from overrides import overrides
from ..samplers.baseSampler import Sampler
from ..samplers.randomPolicySampler import RandomPolicySampler
from ..utils import planner_registry
# noinspection PyAttributeOutsideInit
class BiRRTSampler(Sampler):
r"""
The sampler that is used internally by :class:`~planners.birrtPlanner.BiRRTPlanner`.
Internally, :class:`~samplers.birrtSampler.BiRRTSampler`
uses :class:`~samplers.randomPolicySampler.RandomPolicySampler` to draw from its
supported random methods.
The main difference lies in the epsilon biasing when
.. math::
p \sim \mathcal{U}(0,1) < \epsilon,
where the sampler will bias towards the correct **start** or **goal** tree
depending on the current tree :math:`\mathcal{T}_\text{current}` that
:class:`~samplers.birrtSampler.BiRRTSampler`
is currently planning for (in contrast to always biasing towards the goal tree).
That is, :math:`p \sim \mathcal{U}(0,1)` is first drawn, then :math:`q_\text{new}`
is given by
.. math::
q_\text{new} =
\begin{cases}
q \sim \mathcal{U}(0,1)^d & \text{if } p < \epsilon\\
q_\text{target} & \text{if } \mathcal{T}_\text{current} \equiv \mathcal{
T}_{start}\\
q_\text{start} & \text{if } \mathcal{T}_\text{current} \equiv \mathcal{
T}_{target}\text{.}
\end{cases}
"""
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.random_method = (
kwargs["random_method"] if "random_method" in kwargs else "pseudo_random"
)
@overrides
def init(self, **kwargs):
"""The delayed **initialisation** method"""
super().init(**kwargs)
self.randomSampler = RandomPolicySampler(random_method=self.random_method)
self.randomSampler.init(use_radian=self.use_radian, **kwargs)
def set_use_radian(self, value=True):
"""Overrides the super class method such that the value will be passed to the
internal :class:`samplers.randomPolicySampler.RandomPolicySampler`
:param value: whether to use radian
"""
self.use_radian = value
self.randomSampler.use_radian = value
@overrides
def get_next_pos(self) -> Sampler.GetNextPosReturnType:
"""Get next sampled position"""
# Random path
while True:
if random.random() < self.args.goalBias:
# init/goal bias
if self.args.planner.goal_tree_turn:
p = self.start_pos
else:
p = self.goal_pos
else:
p = self.randomSampler.get_next_pos()[0]
return p, self.report_success, self.report_fail
# start register
planner_registry.register_sampler(
"birrt_sampler",
sampler_class=BiRRTSampler,
)
# finish register
|
/sbp_env-2.0.2-py3-none-any.whl/sbp_env/samplers/birrtSampler.py
| 0.757974 | 0.506164 |
birrtSampler.py
|
pypi
|
import random
import numpy as np
from overrides import overrides
from ..randomness import SUPPORTED_RANDOM_METHODS, RandomnessManager
from ..samplers.baseSampler import Sampler
from ..utils import planner_registry
class RandomPolicySampler(Sampler):
r"""Uniformly and randomly samples configurations across :math:`d` where
:math:`d` is
the dimensionality of the *C-Space*.
:class:`~samplers.randomPolicySampler.RandomPolicySampler` samples configuration
:math:`q \in \mathbb{R}^d` across each dimension uniformly with an
:math:`0 \le \epsilon < 1` bias towards the goal configuration.
A random number :math:`p \sim \mathcal{U}(0,1)` is first drawn, then the
configuration :math:`q_\text{new}` that this function returns is given by
.. math::
q_\text{new} =
\begin{cases}
q_\text{target} & \text{if } p < \epsilon\\
q \sim \mathcal{U}(0,1)^d & \text{otherwise.}
\end{cases}
:py:const:`CONSTANT`
"""
@overrides
def __init__(self, random_method: str = "pseudo_random", **kwargs):
"""
:param random_method: the kind of random method to use. Must be a choice from
:data:`randomness.SUPPORTED_RANDOM_METHODS`.
:param kwargs: pass through to super class
"""
super().__init__(**kwargs)
if random_method not in SUPPORTED_RANDOM_METHODS:
raise ValueError(
"Given random_method is not valid! Valid options includes:\n"
"{}".format(
"\n".join((" - {}".format(m) for m in SUPPORTED_RANDOM_METHODS))
)
)
self.random_method = random_method
self.random = None
@overrides
def init(self, **kwargs):
"""The delayed **initialisation** method"""
super().init(**kwargs)
self.random = RandomnessManager(num_dim=self.args.engine.get_dimension())
self.use_original_method = False
@overrides
def get_next_pos(self) -> Sampler.GetNextPosReturnType:
# Random path
if random.random() < self.args.goalBias:
# goal bias
p = self.goal_pos
else:
p = self.random.get_random(self.random_method)
p = self.args.engine.transform(p)
return p, self.report_success, self.report_fail
# start register
sampler_id = "random"
planner_registry.register_sampler(sampler_id, sampler_class=RandomPolicySampler)
# finish register
|
/sbp_env-2.0.2-py3-none-any.whl/sbp_env/samplers/randomPolicySampler.py
| 0.76366 | 0.460895 |
randomPolicySampler.py
|
pypi
|
from typing import List
import networkx as nx
import numpy as np
from overrides import overrides
import tqdm
from ..utils.common import Node
from ..planners.rrtPlanner import RRTPlanner
from ..samplers import prmSampler
from ..utils import planner_registry
from ..utils.common import Colour, Stats
volume_of_unit_ball = {
1: 2,
2: 3.142,
3: 4.189,
4: 4.935,
5: 5.264,
6: 5.168,
7: 4.725,
8: 4.059,
9: 3.299,
10: 2.550,
}
def nearest_neighbours(
nodes: List[Node], poses: np.ndarray, pos: np.ndarray, radius: float
):
"""A helper function to find the nearest neighbours from a roadmap
:param nodes: the list of nodes to search against
:param poses: array of positions
:param pos: the position of interest
:param radius: the maximum radius of distance
"""
distances = np.linalg.norm(poses[: len(nodes)] - pos, axis=1)
neighbours = []
for i, d in enumerate(distances):
if d < radius:
neighbours.append(nodes[i])
return neighbours
class PRMPlanner(RRTPlanner):
"""
Probabilistic Roadmap motion planner, the multi-query sampling-based planner.
"""
@overrides
def init(self, *argv, **kwargs):
super().init(*argv, **kwargs)
self.d_threshold = self.args.epsilon
self.gamma = (
1
+ np.power(2, self.args.engine.get_dimension())
* (1 + 1.0 / self.args.engine.get_dimension())
* 10000
)
self.graph = nx.DiGraph()
self.graph.add_node(self.args.env.start_pt)
self.args.env.end_state = None
@overrides
def run_once(self):
rand_pos, _, _ = self.args.sampler.get_valid_next_pos()
Stats.get_instance().add_free()
self.add_newnode(Node(rand_pos))
def clear_graph(self):
"""Clear the current roadmap graph"""
self.graph = nx.DiGraph()
self.graph.add_node(self.args.env.start_pt)
self.args.env.end_state = None
def build_graph(self):
"""Performs the graph building process where
.. math::
G = (V, E).
"""
n = len(self.nodes)
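# Shrinking connection radius in the spirit of the PRM* rule
# r = gamma * (log(n) / n)^(1 / d), which keeps the roadmap sparse as it grows.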
radius = self.gamma * np.power(
np.log(n + 1) / (n + 1), 1 / self.args.engine.get_dimension()
)
for v in tqdm.tqdm(self.nodes, desc="Building graph"):
m_near = nearest_neighbours(self.nodes, self.poses, v.pos, radius)
for m_g in m_near:
if m_g is v:
continue
# check if the path between (m_g, v) defined by the motion model is collision-free
if not self.args.engine.cc.visible(m_g.pos, v.pos):
continue
self.graph.add_weighted_edges_from(
[(m_g, v, self.args.engine.dist(m_g.pos, v.pos))]
)
def get_nearest_free(self, node: Node, neighbours: List[Node]):
"""Internal method to get the closest existing node that is free to connects
to the given node.
:param node: the node of interest
:param neighbours: the list of nodes to search against
"""
nn = None
min_cost = 999999
for n in neighbours:
if n is self.args.env.start_pt or n is self.args.env.goal_pt or n is node:
continue
if not self.args.engine.cc.visible(node.pos, n.pos):
continue
if nn is None:
nn = n
min_cost = self.args.engine.dist(node.pos, n.pos)
else:
_cost = self.args.engine.dist(node.pos, n.pos)
if _cost < min_cost:
min_cost = _cost
nn = n
return nn
def get_solution(self):
"""Build the solution path"""
# get the two nodes closest to start/goal that have collision-free connections
m_near = nearest_neighbours(
self.nodes, self.poses, self.args.sampler.start_pos, self.args.epsilon
)
start = self.get_nearest_free(self.args.env.start_pt, m_near)
m_near = nearest_neighbours(
self.nodes, self.poses, self.args.sampler.goal_pos, self.args.epsilon
)
goal = self.get_nearest_free(self.args.env.goal_pt, m_near)
if start is None or goal is None or not nx.has_path(self.graph, start, goal):
return float("inf")
solution_path = nx.shortest_path(self.graph, start, goal)
solution_path[0].cost = self.args.engine.dist(solution_path[0].pos, start.pos)
for i in range(1, len(solution_path)):
solution_path[i].parent = solution_path[i - 1]
solution_path[i].cost = solution_path[i - 1].cost + self.args.engine.dist(
solution_path[i].pos, solution_path[i - 1].pos
)
self.c_max = goal.cost
self.args.env.goal_pt.parent = goal
start.parent = self.args.env.start_pt
self.visualiser.draw_solution_path()
return self.c_max
def pygame_prm_planner_paint(planner):
"""Visualisation function to paint for planner
:param planner: the planner to visualise
"""
for n in planner.nodes:
planner.args.env.draw_circle(
pos=n.pos,
colour=(0, 0, 255),
radius=1.4,
layer=planner.args.env.path_layers,
)
def pygame_prm_planner_paint_when_terminate(planner):
"""Visualisation function to paint for planner when termiante
:param planner: the planner to visualise
"""
planner.build_graph()
# draw all edges
for n1, n2 in planner.graph.edges():
planner.args.env.draw_path(n1, n2, Colour.path_blue)
planner.get_solution()
planner.args.env.update_screen()
input("\nPress Enter to quit...")
def klampt_prm_paint(planner) -> None:
"""Visualiser paint function for PRM
:param planner: the planner to be visualised
"""
colour = (1, 0, 0, 1)
for n in planner.nodes:
planner.args.env.draw_node(
planner.args.engine.cc.get_eef_world_pos(n.pos), colour=colour
)
for n1, n2 in planner.graph.edges:
planner.args.env.draw_path(
planner.args.engine.cc.get_eef_world_pos(n1.pos),
planner.args.engine.cc.get_eef_world_pos(n2.pos),
colour=colour,
)
def klampt_prm_planner_paint_when_terminate(planner):
"""Visualisation function to paint for planner when termiante
:param planner: the planner to visualise
"""
planner.build_graph()
# draw all edges
for n1, n2 in planner.graph.edges():
planner.args.env.draw_path(n1, n2, Colour.path_blue)
planner.get_solution()
planner.args.env.update_screen()
input("\nPress Enter to quit...")
# start register
planner_registry.register_planner(
"prm",
planner_class=PRMPlanner,
visualise_pygame_paint=pygame_prm_planner_paint,
visualise_pygame_paint_terminate=pygame_prm_planner_paint_when_terminate,
visualise_klampt_paint=klampt_prm_paint,
visualise_klampt_paint_terminate=klampt_prm_planner_paint_when_terminate,
sampler_id=prmSampler.sampler_id,
)
# finish register
|
/sbp_env-2.0.2-py3-none-any.whl/sbp_env/planners/prmPlanner.py
| 0.850096 | 0.60288 |
prmPlanner.py
|
pypi
|
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional, Callable, Dict, Type, TYPE_CHECKING
if TYPE_CHECKING:
from ..planners.basePlanner import Planner
from ..samplers.baseSampler import Sampler
@dataclass
class BaseDataPack:
"""
Base data pack that is common to sampler and planner, which
includes visualisation method for pygame
"""
#: an unique id to identify this data-pack
name: str
visualise_pygame_paint_init: Optional[Callable]
visualise_pygame_paint: Optional[Callable]
visualise_pygame_paint_terminate: Optional[Callable]
visualise_klampt_paint_init: Optional[Callable]
visualise_klampt_paint: Optional[Callable]
visualise_klampt_paint_terminate: Optional[Callable]
@dataclass
class PlannerDataPack(BaseDataPack):
"""
Data pack that stores the planner class and the sampler with `sampler_id` to use
"""
#: the class to construct the planner
planner_class: Type[Planner]
#: the sampler id to associate to this planner
sampler_id: str
@dataclass
class SamplerDataPack(BaseDataPack):
"""
Data pack to store sampler class
"""
#: the class to construct the sampler
sampler_class: Type[Sampler]
#: the registry to store a dictionary to map a friendly name to a planner data pack
PLANNERS: Dict[str, PlannerDataPack] = {}
#: the registry to store a dictionary to map a friendly name to a sampler data pack
SAMPLERS: Dict[str, SamplerDataPack] = {}
def register_planner(
planner_id: str,
planner_class: Type["Planner"],
sampler_id: str,
visualise_pygame_paint_init: Optional[Callable] = None,
visualise_pygame_paint: Optional[Callable] = None,
visualise_pygame_paint_terminate: Optional[Callable] = None,
visualise_klampt_paint_init: Optional[Callable] = None,
visualise_klampt_paint: Optional[Callable] = None,
visualise_klampt_paint_terminate: Optional[Callable] = None,
) -> None:
"""Register a planner to make it available for planning.
:param planner_id: the unique planner id for this registering planner
:param planner_class: the planner class
:param sampler_id: sampler id to construct
:param visualise_pygame_paint_init: the paint function for pygame,
during initialisation
:param visualise_pygame_paint: the paint function for pygame
:param visualise_pygame_paint_terminate: the paint function for pygame,
when terminating
:param visualise_klampt_paint_init: the paint function for klampt,
during initialisation
:param visualise_klampt_paint: the paint function for klampt
:param visualise_klampt_paint_terminate: the paint function for klampt,
when terminating
"""
from ..planners.basePlanner import Planner
if planner_id in PLANNERS:
raise ValueError(f"A planner with name '{planner_id}' already exists!")
if not issubclass(planner_class, Planner):
raise TypeError(
f"The given class '{planner_class}' must derived from base "
f"type '{Planner.__name__}'!"
)
PLANNERS[planner_id] = PlannerDataPack(
name=planner_id,
visualise_pygame_paint_init=visualise_pygame_paint_init,
visualise_pygame_paint=visualise_pygame_paint,
visualise_pygame_paint_terminate=visualise_pygame_paint_terminate,
visualise_klampt_paint_init=visualise_klampt_paint_init,
visualise_klampt_paint=visualise_klampt_paint,
visualise_klampt_paint_terminate=visualise_klampt_paint_terminate,
planner_class=planner_class,
sampler_id=sampler_id,
)
def register_sampler(
sampler_id: str,
sampler_class: Type["Sampler"],
visualise_pygame_paint_init: Optional[Callable] = None,
visualise_pygame_paint: Optional[Callable] = None,
visualise_pygame_paint_terminate: Optional[Callable] = None,
visualise_klampt_paint_init: Optional[Callable] = None,
visualise_klampt_paint: Optional[Callable] = None,
visualise_klampt_paint_terminate: Optional[Callable] = None,
) -> None:
"""Register a sampler to make it available for planning.
:param sampler_id: the unique id for this sampler
:param sampler_class: the class to construct this sampler
:param visualise_pygame_paint_init: the paint function for pygame,
during initialisation
:param visualise_pygame_paint: the paint function for pygame
:param visualise_pygame_paint_terminate: the paint function for pygame,
when terminating
:param visualise_klampt_paint_init: the paint function for klampt,
during initialisation
:param visualise_klampt_paint: the paint function for klampt
:param visualise_klampt_paint_terminate: the paint function for klampt,
when terminating
"""
from ..samplers.baseSampler import Sampler
if sampler_id in SAMPLERS:
raise ValueError(f"A sampler with name '{sampler_id}' already exists!")
if not issubclass(sampler_class, Sampler):
raise TypeError(
f"The given class '{sampler_class}' must derived from base "
f"type '{Sampler.__name__}'!"
)
SAMPLERS[sampler_id] = SamplerDataPack(
name=sampler_id,
visualise_pygame_paint_init=visualise_pygame_paint_init,
visualise_pygame_paint=visualise_pygame_paint,
visualise_pygame_paint_terminate=visualise_pygame_paint_terminate,
visualise_klampt_paint_init=visualise_klampt_paint_init,
visualise_klampt_paint=visualise_klampt_paint,
visualise_klampt_paint_terminate=visualise_klampt_paint_terminate,
sampler_class=sampler_class,
)
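# Example (illustrative, not part of the original module): registering a custom
# planner/sampler pair. ``MyPlanner`` and ``MySampler`` are hypothetical
# subclasses of ``Planner`` and ``Sampler``.
#
#   register_sampler("my_sampler", sampler_class=MySampler)
#   register_planner("my_planner", planner_class=MyPlanner, sampler_id="my_sampler")
#   planner_cls = PLANNERS["my_planner"].planner_class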
|
/sbp_env-2.0.2-py3-none-any.whl/sbp_env/utils/planner_registry.py
| 0.911756 | 0.424591 |
planner_registry.py
|
pypi
|
0.4.0 (2023-06-30)
==================
* Updated minimum supported versions:
- Python 3.8
- `numpy` 1.18
- `astropy` 4.3
- `synphot` 1.1
- `astroquery` 0.4.5
New Features
------------
sbpy.activity
^^^^^^^^^^^^^
- Added ``VectorialModel.binned_production`` constructor for compatibility with
time-dependent production implemented in the original FORTRAN vectorial model
code by Festou. [#336]
- Added ``VMResult``, ``VMFragmentSputterPolar``, ``VMParams``,
``VMGridParams``, ``VMFragment``, and ``VMParent`` dataclasses to expose
details of ``VectorialModel`` results that may be of interest. [#336]
sbpy.calib
^^^^^^^^^^
- Added a model spectrum of the Sun from STScI's CALSPEC database (Bohlin et al.
2014, PASP 126, 711, DOI:10.1086/677655). [#371]
sbpy.data
^^^^^^^^^
- Added ``Orbit.tisserand`` to calculate the Tisserand parameter of small body's
orbits with respect to planets. [#325]
- Added ``Orbit.D_criterion`` to evaluate the D-criterion between two sets of
orbital elements. [#325]
- Added ``DataClass.__contains__`` to enable `in` operator for ``DataClass``
objects. [#357]
- Added ``DataClass.add_row``, ``DataClass.vstack`` methods. [#367]
sbpy.photometry
^^^^^^^^^^^^^^^
- Added parameter constraints to the IAU disk-integrated phase function models,
such as ``HG``, ``HG1G2``, ``HG12``, and ``HG12_Pen16``. [#366]
Documentation
^^^^^^^^^^^^^
- Index page has been reorganized. [#337]
API Changes
-----------
sbpy.activity
^^^^^^^^^^^^^
- ``VectorialModel`` now no longer takes an ``angular_substeps`` parameter. [#336]
sbpy.data
^^^^^^^^^
- IAU HG series functions moved from `sbpy.photometry.core` to `sbpy.photometry.iau`. [#354]
sbpy.photometry
^^^^^^^^^^^^^^^
- Replaced ``NonmonotonicPhaseFunctionWarning`` with
``InvalidPhaseFunctionWarning``. [#366]
Bug Fixes
---------
sbpy.calib
^^^^^^^^^^
- Updated STScI URLs for solar spectra (Castelli and Kurucz models). [#345]
sbpy.data
^^^^^^^^^
- Cometary magnitudes obtained via ``Phys.from_sbdb`` (i.e., M1 and M2) now have
appropriate units. [#349]
- Asteroids with A/ designations (e.g., A/2019 G2) are correctly identified by
``Names`` as asteroids. Improved handling of interstellar object (I/)
designations: they do not parse as cometary or asteroidal. [#334, #340]
0.3.0 (2022-04-28)
==================
New Features
------------
sbpy.activity
^^^^^^^^^^^^^
- New ``VectorialModel`` to implement the Festou (1981) model of the same name.
The code reproduces tests based on the literature within 20%, but the causes
of the differences are unknown. Help testing this new feature is appreciated.
[#278, #305]
sbpy.data
^^^^^^^^^
- ``DataClass`` fields are now checked for physically consistent units (e.g.,
heliocentric distance in units of length), or that they are ``Time`` objects,
as appropriate. [#275]
sbpy.photometry
^^^^^^^^^^^^^^^
- Add ATLAS c and o bandpasses to ``bandpass``. [#258]
sbpy.spectroscopy
^^^^^^^^^^^^^^^^^
- Add the ability to redden ``SpectralSource`` (such as the ``Sun`` model in
``sbpy.calib``) with a new ``.redden()`` method. [#289]
Bug Fixes
---------
sbpy.activity
^^^^^^^^^^^^^
- Allow apertures to be astropy ``Quantity`` objects in ``GasComa`` models,
e.g., ``Haser``. [#306]
sbpy.data
^^^^^^^^^
- Corrected ``Orbit.oo_propagate`` units on angles from degrees to radians.
[#262]
- Corrected ``Orbit`` fields from openorb to use ``'Tp'`` for perihelion date
epochs as astropy ``Time`` objects, instead of ``'Tp_jd'``. [#262]
- Corrected ``Name.from_packed`` which could not unpack strings including "j".
[#271]
- Remove hard-coded URL for JPL Horizons and astroquery's ``Horizons`` objects.
[#295]
- NaNs no longer crash ``Phys.from_sbdb``. [#297]
- When units are not included in the ``Phys.from_sbdb`` results returned from
NASA JPL, return unit-less values (and any description of the units, such as
``'density_sig'``) to the user. [#297]
- ``Names.parse_comet`` now correctly parses Pan-STARRS if included in a comet
name string, and corrected the label for fragment names in C/ objects:
``'fragm'`` --> ``'fragment'``. [#279]
- Preserve the order of the user's requested epochs in ``Ephem.from_horizons``.
[#318]
sbpy.photometry
^^^^^^^^^^^^^^^
- Corrected PS1 filter wavelength units in ``bandpass`` from Å to nm. [#258]
- Fix ``HG1G2`` to respect the units on phase angle ``ph`` or else assume
radians. [#288]
API Changes
-----------
sbpy.data
^^^^^^^^^
- ``DataClass.field_names`` now returns a list of field names rather than a list
of internal column names. [#275]
Other Changes and Additions
---------------------------
- Improved compatibility with Python 3.8 [#259]
- Added support for astropy 4.0, drop support for astropy 3. [#260, #322]
- Infrastructure updated to use contemporary astropy project standards. [#284]
- Tests may be run in parallel with pytest, e.g., using ``-n auto``. [#297]
0.2.2 (2020-04-27)
==================
New Features
------------
None
Bug Fixes
---------
sbpy.activity
^^^^^^^^^^^^^
- Fix exception from ``Haser`` when ``CircularAperture`` in linear units is
used. [#240]
sbpy.data
^^^^^^^^^
- ``DataClass.__getitem__`` now always returns a new object of the same
class, unless a single field name is provided in which case an
astropy.Table.Column (no units provided) or astropy.units.Quantity
(units provided) is returned. [#238]
- Fixed ``Ephem.from_horizons`` to skip adding units to the ``'siderealtime'``
field if it is missing. Now, the only required field is ``'epoch'``. [#242]
- ``Ephem.from_horizons`` no longer modifies the ``epochs`` parameter in-place.
[#247]
sbpy.photometry
^^^^^^^^^^^^^^^
- Fixed ``HG12_Pen16`` calculations, which were using the 2010 G1 and G2
definitions. [#233]
- Use "Partner" under NASA logo. [#249]
API Changes
-----------
None
Other Changes and Additions
---------------------------
sbpy.activity
^^^^^^^^^^^^^
- Test ``Haser.column_density`` output for angular apertures << lengthscale.
[#243]
website
-------
- Use HTTPS everywhere. [#244]
0.2.1
=====
This version was not released.
Notes
=====
This changelog tracks changes to sbpy starting with version v0.2. Recommended
subsection titles: New Features, Bug Fixes, API Changes, and Other Changes and
Additions. Recommended sub-sub-section titles: sbpy submodules, in alphabetical
order.
|
/sbpy-0.4.0.tar.gz/sbpy-0.4.0/CHANGES.rst
| 0.902334 | 0.663505 |
CHANGES.rst
|
pypi
|
import sys
from io import TextIOWrapper, BytesIO
from numpy import array
from sbpy.data import Conf
from astropy.table import Table
from astropy.io import ascii
out = """
.. _field name list:
sbpy Field Names
================
The following table lists field names that are recognized by `sbpy`
when accessing `~sbpy.data.DataClass` objects, i.e.,
`~sbpy.data.Ephem`, `~sbpy.data.Orbit`, or `~sbpy.data.Phys`
objects. Each row of the following table represents one property; for
each property it lists its description, acceptable field names,
provenance (which `~sbpy.data.DataClass` class should be used to store
this object so that `sbpy` uses it properly), and its physical
dimension (if any).
How do I use this Table?
------------------------
As an example, imagine you are interested in storing an object's right
ascension into a `~sbpy.data.DataClass` object. The field names table
tells you that you should name the field either ``ra`` or ``RA``, that
you should use either a `~sbpy.data.Ephem` or `~sbpy.data.Obs` object
to store the data in, and that the field data should be expressed as
angles. Based on this information, we can create a `~sbpy.data.Obs`
object (presuming that the data were derived from observations):
>>> from sbpy.data import Obs
>>> import astropy.units as u
>>> obs = Obs.from_dict({'ra': [12.345, 12.346, 12.347]*u.deg})
>>> obs['ra'] # doctest: +SKIP
<Quantity [12.345, 12.346, 12.347] deg>
Since RA requires an angle as dimension, we use degrees, but we might
as well use radians - `sbpy` will convert the units where necessary.
Since RA has an alternative field name (``'RA'``), we can use that name,
too, in order to retrieve the data:
>>> obs['RA'] # doctest: +SKIP
<Quantity [12.345, 12.346, 12.347] deg>
The field name list is always up to date, but it might not be
complete. If you think an important alternative name is missing,
please suggest it by opening an issue. However, keep in mind that each
alternative field name has to be **unique** and **unambiguous**. The
source list is located as ``sbpy.data.Conf.fieldnames`` in
``sbpy/data/__init__.py``.
Special Case: Epoch
-------------------
Please note that epochs generally have to be provided as
`~astropy.time.Time` objects. The advantage of using such objects is
that they can be readily transformed into a wide range of formats
(e.g., ISO, Julian Date, etc.) and time scales (e.g., UTC, TT, TDB,
etc.) Hence, `sbpy` requires that all fields referring to a point in
time be provided as `~astropy.time.Time` objects.
Field Name List
---------------
"""
# build table
data = []
for p in Conf.fieldnames_info:
data.append(['**'+p['description']+'**',
', '.join(['``'+str(f)+'``' for f in p['fieldnames']]),
', '.join([{'orbit': '`~sbpy.data.Orbit`',
'ephem': '`~sbpy.data.Ephem`',
'obs': '`~sbpy.data.Obs`',
'phys': '`~sbpy.data.Phys`'}[
m.replace(',', '')] for m in p['provenance']]),
str(p['dimension'])])
data = Table(array(data), names=('Description',
'Field Names',
'Provenance',
'Dimension'))
# redirect stdout
sys.stdout = TextIOWrapper(BytesIO(), sys.stdout.encoding)
# ascii.write will write to stdout
datarst = ascii.write(data, format='rst')
# read formatted table data
sys.stdout.seek(0)
rsttab = sys.stdout.read()
# add formatted table data to out
out += rsttab
# write fieldnames.rst
with open('sbpy/data/fieldnames.rst', 'w') as outf:
outf.write(out)
sys.stdout.close()
|
/sbpy-0.4.0.tar.gz/sbpy-0.4.0/docs/compile_fieldnames.py
| 0.722527 | 0.578448 |
compile_fieldnames.py
|
pypi
|
About sbpy
==========
What is sbpy?
-------------
`sbpy` is an `~astropy` affiliated package for small-body planetary
astronomy. It is meant to supplement functionality provided by
`~astropy` with functions and methods that are frequently used in the
context of planetary astronomy with a clear focus on asteroids and
comets.
As such, `sbpy` is open source and freely available to everyone. The development
of `sbpy` is funded through NASA Planetary Data Archiving, Restoration, and
Tools (PDART) Grant Numbers 80NSSC18K0987 and 80NSSC22K0143, but contributions
are welcome from everyone!
Why sbpy?
---------
In our interpretation, `sbpy` means *Python for Small Bodies* - it's
the simplest acronym that we came up with that would neither favor
asteroids nor comets. That's because we equally like both!
`sbpy` is motivated by the idea to provide a basis of well-tested and
well-documented methods to planetary astronomers in order to boost
productivity and reproducibility. Python has been chosen as the
language of choice as it is highly popular especially among
early-career researchers and it enables the integration of `sbpy` into
the `~astropy` ecosystem.
What is implemented in sbpy?
----------------------------
`sbpy` will provide the following functionality once the development
has been completed:
* observation planning tools tailored to moving objects
* photometry models for resolved and unresolved observations
* wrappers and tools for astrometry and orbit fitting
* spectroscopy analysis tools and models for reflected solar light and
emission from gas
* cometary gas and dust coma simulation and analysis tools
* asteroid thermal models for flux estimation and size/albedo estimation
* image enhancement tools for comet comae and PSF subtraction tools
* lightcurve and shape analysis tools
* access tools for various databases for orbital and physical data, as
well as ephemerides services
The development is expected to be completed in 2024. For an overview
of the progress of development, please have a look at the :ref:`status
page`.
Additional functionality may be implemented. If you are interested in
contributing to `sbpy`, please have a look at the :ref:`contribution guidelines <contributing>`.
Module Structure
----------------
`sbpy` consists of a number of sub-modules, each of which provides
functionality that is tailored to individual aspects of asteroid and
comet research. The general module design is shown in the following
sketch.
.. figure:: static/structure.png
:alt: sbpy module structure
`sbpy` design schematic. Modules are shown as rectangular boxes,
important classes as rounded colored boxes. The left-hand side of
the schematic is mainly populated with support modules that act as
data containers and query functions. The right-hand side of the
schematic shows modules that focus on small body-related
functionality. Colored symbols match the colors and symbols of
classes and modules they are using.
The functionality of version 1.0, which will be finalized in 2024, is
detailed below. Please refer to the :ref:`status page` to check the
current status of each module.
`sbpy.data`
~~~~~~~~~~~
The `~sbpy.data` module provides data containers used throughout
`sbpy` for orbital elements (`~sbpy.data.Orbit`), ephemerides
(`~sbpy.data.Ephem`), observations (`~sbpy.data.Obs`), physical
properties (`~sbpy.data.Phys`), and target names
(`~sbpy.data.Names`). Instances of these classes are used as input to and
output for a wide range of top-level functions throughout `sbpy`,
guaranteeing a consistent and user-friendly API. All classes in
`~sbpy.data` provide query functions to obtain relevant information
from web-based services such as `JPL Horizons`_, `Minor Planet
Center`_ (MPC), `IMCCE`_, and `Lowell Observatory`_, providing orbital
elements at different epochs, ephemerides, physical properties,
observations reported to the MPC, (alternative) target identifiers
etc.
Additional functionality of `~sbpy.data` includes an interface to the
orbit fitting software `OpenOrb`_ and an interface to SPICE for offline
ephemerides calculations using `SpiceyPy`_, for which we will provide
utilities tailored to the needs of the small body community. Examples for how to use `sbpy` with ephemerides calculation package `PyEphem`_ and orbital integrator `REBOUND`_ (Rein and Liu 2012) will be provided as notebooks.
`~sbpy.data` also provides a range of other useful module-level
functions: `~sbpy.data.image_search`
queries the `Solar System Object Image Search function of the
Canadian Astronomy Data Centre`_, providing a table of images that
may contain the target based on its ephemerides. `~sbpy.data.sb_search` uses
IMCCE’s `Skybot`_; given a registered FITS image, the function will
search for small bodies that might be present in the image based on
their ephemerides. `~sbpy.data.pds_ferret` queries the `Small Bodies Data
Ferret`_ at the Planetary Data Systems Small Bodies Node for all
existing information on a specific small body in the PDS.
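As a minimal sketch of the query interface (the target name and epochs
below are arbitrary placeholders, and the call requires network access),
an ephemeris could be requested from JPL Horizons like this:
>>> from astropy.time import Time
>>> from sbpy.data import Ephem
>>> epochs = Time(['2023-08-01', '2023-08-02'])
>>> eph = Ephem.from_horizons('Ceres', epochs=epochs)  # doctest: +SKIP
>>> eph['ra']  # doctest: +SKIP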
`sbpy.activity`
~~~~~~~~~~~~~~~
The `~sbpy.activity` module provides classes for modeling cometary comae, tails, and ice sublimation. We have implemented the Haser gas coma model (`~sbpy.activity.Haser`, Haser 1957), and a Vectorial model is planned (`~sbpy.activity.Vectorial`, Festou 1981). Some parameters for commonly observed molecules (e.g., H\ :sub:`2`\ O, CO\ :sub:`2`\ , CO, OH, CN, C\ :sub:`2`\ ), such as photo-dissociation timescale and fluorescence band strength, are included. The gas coma classes can be used to generate aperture photometry or a synthetic image of the comet.
For dust, we have simple photometric models based on the *Afρ* and *εfρ* quantities (A'Hearn et al. 1984; Kelley et al. 2013). A syndyne/synchrone model (`~sbpy.activity.Syndynes`, Finson & Probstein 1968; Kelley et al. 2013) is planned.
The activity module includes LTE and non-LTE radiative transfer models used to determine production rates and excitation parameters, such as the temperature in the coma. In the inner regions of the coma collisions dominate molecular excitation and the resulting rotational level population is close to LTE. Beyond the LTE inner region, the level populations start to depart from the equilibrium distribution because the gas density is not high enough to reach thermodynamic equilibrium through collisions with neutrals. The inclusion of all relevant excitation processes in cometary atmospheres in a complex 3-dimensional outgassing geometry represents a state-of-the-art coma model which will provide a baseline for interpretation of cometary spectroscopy observations.
The Cowan & A'Hearn (1979) ice sublimation model (`~sbpy.activity.sublimation`), used to describe comet activity, and common parameters will also be added.
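As a brief, illustrative sketch of the gas coma classes (the production
rate, outflow velocity, and parent length scale below are arbitrary
placeholder values, not recommended parameters):
>>> import astropy.units as u
>>> from sbpy.activity import Haser
>>> coma = Haser(1e28 / u.s, 0.85 * u.km / u.s, 2.4e4 * u.km)
>>> coma.column_density(1000 * u.km)  # doctest: +SKIP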
`sbpy.photometry`
~~~~~~~~~~~~~~~~~
The `~sbpy.photometry` module implements a number of light scattering
models for asteroidal surfaces and cometary coma dust. The goal of
this module is to provide a facility to fit light scattering models to
observed brightness data of asteroids, and to estimate the brightness
of asteroids and cometary comae under specified geometry based on
scattering models. Specifically, we include a number of
disk-integrated phase function models for asteroids, bidirectional
reflectance (I/F) models of particulate surfaces, and phase functions
of dust grains in cometary comae. The disk-integrated phase function
models of asteroids include the IAU adopted (H, G1 , G2) system
(Muinonen et al. 2010), the simplified (H, G12) system (Muinonen et
al. 2010) and the revised (H, G12) system (Penttila et al. 2016), as
well as the classic IAU (H, G) system. The
disk-resolved bidirectional reflectance model includes a number of
models that have been widely used in the small bodies community, such
as the Lommel-Seeliger model, Lambert model, Lunar-Lambert model,
etc. Surface facet geometries used in the different models can be
derived with methods in `~sbpy.shape`. We also include the most
commonly used 5-parameter version of the Hapke scattering
model. Empirical cometary dust phase functions are implemented, too
(Marcus 2007; Schleicher & Bair 2011,
https://asteroid.lowell.edu/comet/dustphase.html). Some
single-scattering phase functions such as the Henyey-Greenstein
function will also be implemented.
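As an illustrative sketch (the absolute magnitude and slope parameter below
are placeholders, not a fit to real data; see the module documentation for
the exact call signature), the classic IAU (H, G) model might be evaluated
at a given phase angle like this:
>>> import astropy.units as u
>>> from sbpy.photometry import HG
>>> hg = HG(H=7.0 * u.mag, G=0.15)
>>> hg(10 * u.deg)  # doctest: +SKIP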
`sbpy.shape`
~~~~~~~~~~~~
The `~sbpy.shape` module provides tools for the use of 3D shape models
of small bodies and the analysis of lightcurve observations. The user
can load asteroid shapes saved in a number of common formats, such as
VRML, OBJ, into `~sbpy.shape.Kaasalainen`, and then calculate the geometry
of illumination and view for its surface facets, and manipulate
it. Furthermore, `~sbpy.shape.Kaasalainen` will provide methods for
lightcurve inversion. `~sbpy.shape` will provide an interface to use
shape models for functions in `~sbpy.photometry`.
In addition to the shape model methods, `~sbpy.shape` also provides
methods for the analysis and simulation of simple lightcurve data. The
`~sbpy.shape.Lightcurve` class provides routines to fit rotational period
(based on Lomb-Scargle routines implemented in `~astropy.stats` and other
frequency tools), Fourier coefficients, and spin pole axis
orientation. The class will also be able to simulate a lightcurve at
specified epochs with a shape model class and the associated
information such as pole orientation, illumination and viewing
geometry as provided by the `~sbpy.data.Phys` class, and a scattering model
provided through classes defined in the `~sbpy.photometry` module.
`sbpy.spectroscopy`
~~~~~~~~~~~~~~~~~~~
As part of `~sbpy.spectroscopy`, we provide routines for fitting
measured spectra, as well as simulating synthetic spectra over a wide
range of the electromagnetic spectrum. The spectral models include
emission lines relevant to observations of comet comae, as well as
reflectance spectra of asteroid and comet surfaces. The module
provides functions to fit and remove baselines or slopes, as well as
to fit emission lines or reflectance spectra.
In addition to the aforementioned functionality, we provide a class
`~sbpy.spectroscopy.Hapke` that implements Hapke spectral mixing
functionality.
This module also provides spectrophotometry methods as part of `~sbpy.spectroscopy.Spectrophotometry`. This functionality includes the transmission of spectra (empirical, generated, or from the literature) through common photometric filters, and the derivation of photometric colors from spectral slopes with `~sbpy.spectroscopy.SpectralGradient`.
`sbpy.thermal`
~~~~~~~~~~~~~~
Thermal modeling capabilities for asteroids are available through the
`~sbpy.thermal` module. The module provides implementations of the
Standard Thermal Model (`~sbpy.thermal.STM`, Morrison & Lebofsky
1979), the Fast-Rotating Model (`~sbpy.thermal.FRM`, Lebofsky &
Spencer 1989), and the popular Near-Earth Asteroid Thermal Model
(`~sbpy.thermal.NEATM`, Harris 1998) which can all be used in the same
way for estimating fluxes or fitting model solutions to observational
data.
`sbpy.imageanalysis`
~~~~~~~~~~~~~~~~~~~~
The `~sbpy.imageanalysis` module will focus on the analysis of
telescopic images. `~sbpy.imageanalysis.Centroid` provides a range of
centroiding methods, including a dedicated comet centroiding technique
that mitigates coma and tail biases (Tholen & Chesley 2004). Code
will also be developed to incorporate ephemerides into FITS image
headers to facilitate image reprojection in the rest frame of the
moving target (`~imageanalysis.moving_wcs`) for image co-addition,
e.g., using SWARP (Bertin 2002). We will modify and integrate cometary
coma enhancement code from collaborator Samarasinha
(`~imageanalysis.CometaryEnhancements`; Samarasinha & Larson 2014;
Martin et al. 2015). The coma enhancements will be coded into a plugin
for the `Ginga Image Viewer`_.
`~sbpy.imageanalysis` will also provide PSF subtraction functionality
that is utilizing and extending the Astropy affiliated package
`photutils`_; this class will provide wrappers for photutils to
simplify the application for moving object observations. Results of
imageanalysis.PSFSubtraction routines can be used directly in
imageanalysis.CometaryEnhancements for further analysis.
`sbpy.obsutil`
~~~~~~~~~~~~~~
The `~sbpy.obsutil` module enables the user to conveniently check
observability of moving targets and to plan future observations. Using
`~sbpy.data.Ephem` functionality, `~sbpy.obsutil` provides tools to
identify peak observability over a range of time based on different
criteria, create observing scripts, plot quantities like airmass as a
function of time, and create finder charts for an individual
target. These functions and plots will be easily customizable and will
work identically for individual targets and large numbers of
targets. Finder charts will be produced from online sky survey data,
providing information on the target's track across the sky, its
positional uncertainty, background stars with known magnitudes for
calibration purposes, and other moving objects.
`sbpy.bib`
~~~~~~~~~~
`~sbpy.bib` provides an innovative feature that simplifies the
acknowledgment of methods and code utilized by the user. After
activating the bibliography tracker in `~sbpy.bib`, references and
citations of all functions used by the user are tracked in the
background. The user can request a list of references that should be
cited, based on the sbpy functionality that was used, at any time, as
plain text or in LaTeX BibTeX format.
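A minimal sketch of this workflow (the query requires network access, and
the exact reference list depends on which functions are actually called):
>>> from sbpy import bib
>>> from sbpy.data import Ephem
>>> bib.track()
>>> eph = Ephem.from_horizons('Ceres')  # doctest: +SKIP
>>> print(bib.to_text())  # doctest: +SKIP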
`sbpy.calib`
~~~~~~~~~~~~
`sbpy.calib` provides calibration methods, including the photometric
calibration of various broad-band filters relative to the Sun's or
Vega's spectrum.
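As a short sketch, the built-in default solar spectrum can be loaded like
this (retrieving the spectrum may require network access on first use):
>>> from sbpy.calib import Sun
>>> sun = Sun.from_default()  # doctest: +SKIP
>>> print(sun)  # doctest: +SKIP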
.. _user_zen:
Design Principles - The Zen of sbpy
-----------------------------------
In the design of `sbpy`, a few decisions have been made to provide a
highly flexible but still easy-to-use API. These decisions are
summarized in the :ref:`design principles`, or, the *Zen of sbpy*.
Some of these decisions affect the user directly and might be
considered unnecessarily complicated by some. Here, we review and
discuss some of these principles for the interested user.
Physical parameters are quantities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`sbpy` requires every parameter with a physical dimension (e.g.,
length, mass, velocity, etc.) to be an `astropy.units.Quantity`
object. Only dimensionless parameters (e.g., eccentricity, infrared beaming
parameter, etc.) are allowed to be dimensionless data types such as floats.
The reason for this decision is simple: every `astropy.units.Quantity`
object comes with a physical unit. Consider the following example: we
define a `~sbpy.data.Phys` object with a diameter for asteroid Ceres:
>>> from sbpy.data import Phys
>>> ceres = Phys.from_dict({'targetname': 'Ceres',
... 'diameter': 945})
Traceback (most recent call last):
...
sbpy.data.core.FieldError: Field diameter is not an instance of <class 'astropy.units.quantity.Quantity'>
`Phys.from_dict` raised an exception (`FieldError`) on 'diameter' because it was
not an `astropy.units.Quantity` object, i.e., it did not have units of length.
Of course, we know that Ceres' diameter is 945 km, but it was not clear from our
definition. Any functionality in `sbpy` would have to presume that diameters
are always given in km. This makes sense for large objects - but what about
meter-sized objects like near-Earth asteroids?
Following the
`Zen of Python <https://peps.python.org/pep-0020/>`_ (explicit
is better than implicit), we require that units are explicitly
defined:
>>> import astropy.units as u
>>> ceres = Phys.from_dict({'targetname': 'Ceres',
... 'diameter': 945*u.km})
>>> ceres
<QTable length=1>
targetname diameter
km
str5 float64
---------- --------
Ceres 945.0
This way, units and dimensions are always available where they make sense and we
can easily convert between different units:
>>> print(ceres['diameter'].to('m'))
[945000.] m
Epochs must be Time objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The same point in time can be described by a human-readable ISO time
string (``'2019-08-08 17:11:19.196'``) or a Julian Date
(``2458704.216194403``), as well as other formats. Furthermore, these
time formats return different results for different time scales: UT
ISO time ``'2019-08-08 17:11:19.196'`` converts to ``'2019-08-08
17:12:28.379'`` using the TDB time scale.
In order to minimize confusion introduced by different time formats
and time scales, `sbpy` requires that epochs and points in time are
defined as `~astropy.time.Time` objects, which resolve this confusion:
>>> from sbpy.data import Obs
>>> from astropy.time import Time
>>> obs = Obs.from_dict({'epoch': Time(['2018-01-12', '2018-01-13']),
... 'mag': [12.3, 12.6]*u.mag})
>>> obs['epoch']
<Time object: scale='utc' format='iso' value=['2018-01-12 00:00:00.000' '2018-01-13 00:00:00.000']>
`~astropy.time.Time` objects can be readily converted into other formats:
>>> obs['epoch'].jd
array([2458130.5, 2458131.5])
>>> obs['epoch'].mjd
array([58130., 58131.])
>>> obs['epoch'].decimalyear
array([2018.03013699, 2018.03287671])
>>> obs['epoch'].iso
array(['2018-01-12 00:00:00.000', '2018-01-13 00:00:00.000'], dtype='<U23')
as well as other time scales:
>>> obs['epoch'].utc.iso
array(['2018-01-12 00:00:00.000', '2018-01-13 00:00:00.000'], dtype='<U23')
>>> obs['epoch'].tdb.iso
array(['2018-01-12 00:01:09.184', '2018-01-13 00:01:09.184'], dtype='<U23')
>>> obs['epoch'].tai.iso
array(['2018-01-12 00:00:37.000', '2018-01-13 00:00:37.000'], dtype='<U23')
See :ref:`epochs` and `~astropy.time.Time` for additional information.
Use sbpy ``DataClass`` objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Finally, we require that topically similar parameters are bundled in
`~sbpy.data.DataClass` objects, which serve as data containers (see
:ref:`this page <data containers>` for an introduction).
This containerization makes it possible to keep data neatly formatted
and to minimize the number of input parameters for functions and
methods.
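As a short sketch of this principle (the field values below are arbitrary
illustration values), topically related quantities travel together in one
container instead of being passed around as separate arguments:
>>> import astropy.units as u
>>> from sbpy.data import Ephem
>>> eph = Ephem.from_dict({'rh': 1.5 * u.au, 'delta': 0.8 * u.au,
...                        'phase': 30 * u.deg})
>>> eph['rh']  # doctest: +SKIP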
.. _JPL Horizons: https://ssd.jpl.nasa.gov/horizons/
.. _Minor Planet Center: https://minorplanetcenter.net/
.. _IMCCE: http://vo.imcce.fr/webservices/miriade/
.. _Lowell Observatory: https://asteroid.lowell.edu/gui/
.. _PyEphem: https://rhodesmill.org/pyephem
.. _REBOUND: https://github.com/hannorein/rebound
.. _OpenOrb: https://github.com/oorb/oorb
.. _SpiceyPy: https://github.com/AndrewAnnex/SpiceyPy
.. _web-API: https://minorplanetcenter.net/search_db
.. _Solar System Object Image Search function of the Canadian Astronomy Data Centre: https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/ssois/
.. _skybot: http://vo.imcce.fr/webservices/skybot/
.. _small bodies data ferret: https://sbnapps.psi.edu/ferret
.. _github wiki: https://github.com/mommermi/sbpy/wiki
.. _Ginga Image Viewer: https://ejeschke.github.io/ginga/
.. _photutils: https://github.com/astropy/photutils
|
/sbpy-0.4.0.tar.gz/sbpy-0.4.0/docs/about.rst
| 0.932546 | 0.893309 |
about.rst
|
pypi
|
# Haser model
Reproduce Fig. 1 of Combi et al. (2004): column density versus projected distance for a family of Haser model profiles of a daughter species.
```
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from sbpy.activity import Haser
%matplotlib notebook
Q = 1e28 / u.s
v = 1 * u.km / u.s
parent = 1e5 * u.km
fig = plt.figure(1)
fig.clear()
ax = fig.add_subplot(111)
rho = np.logspace(-3, 1) * parent
norm = None
for ratio in [1.01, 1.6, 4, 10, 25, 63, 158, 1e5]:
daughter = parent / ratio
coma = Haser(Q, v, parent, daughter)
N = coma.column_density(rho)
norm = 10 * coma.column_density(parent)
plt.plot(rho / parent, N / norm, label=ratio)
plt.setp(ax, xlim=[1e-4, 1e2], xscale='log', xlabel='Log (projected distance)',
ylim=[1e-7, 1e3], yscale='log', ylabel='Log (relative column density)')
plt.legend()
plt.draw()
```
Reproduce Newburn and Johnson (1978) CN, C2, and C3 production rates.
```
eph = {'rh': 1.07 * u.au, 'delta': 0.363 * u.au}
aper = 12.5 * u.arcsec
Nobs = { # total number observed
'CN': 6.4e26,
'C3': 8.3e28,
'C2': 7.8e27,
}
gamma = { # length scales: parent, daughter
'CN': (1.4e4, 1.7e5) * u.km,
'C3': (1.0, 4.6e4) * u.km,
'C2': (1.0e4, 7.6e4) * u.km,
}
Q = 1 / u.s
v = 1 * u.km / u.s
Q_v = []
print('Retrieved Q/v:')
for species in Nobs:
coma = Haser(Q, v, gamma[species][0], gamma[species][1])
N = coma.total_number(aper, eph)
Q_v.append(Nobs[species] * Q / v / N)
print(' {} = {}'.format(species, Q_v[-1]))
```
|
/sbpy-0.4.0.tar.gz/sbpy-0.4.0/examples/activity/haser-model.ipynb
| 0.480722 | 0.9455 |
haser-model.ipynb
|
pypi
|
# Estimating Coma Continuum
```
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
import astropy.constants as const
from sbpy.activity import Afrho, Efrho
from mskpy.calib import solar_flux
%matplotlib notebook
afrho = Afrho(5000 * u.cm)
efrho = Efrho(afrho * 3.5)
# eph = Ephem.from_horizons(...)
eph = {'rh': 2.6 * u.au, 'delta': 3.0 * u.au, 'phase': 54 * u.deg}
wave = np.logspace(-0.3, 1.5, 1000) * u.um
nu = const.c / wave
aper = 1 * u.arcsec
S = solar_flux(wave, unit='Jy', smooth=False)
fsca = afrho.fluxd(nu, aper, eph, phasecor=True, S=S)
fth = efrho.fluxd(nu, aper, eph)
plt.clf()
plt.plot(wave, fsca, label='Scattered')
plt.plot(wave, fth, label='Thermal')
plt.plot(wave, (fsca + fth), label='Total')
plt.setp(plt.gca(), xlabel='Wavelength (μm)', ylabel='$F_ν$ (Jy)', xscale='log', yscale='log', ylim=[1e-4, 1])
plt.tight_layout()
plt.draw()
```
|
/sbpy-0.4.0.tar.gz/sbpy-0.4.0/examples/activity/estimating-coma-continuum.ipynb
| 0.667148 | 0.893263 |
estimating-coma-continuum.ipynb
|
pypi
|
def reverse_num(num):
"""Returns an integer
Reverse of a number passed as argument
"""
rev = 0
while num > 0:
rev = (rev * 10) + (num % 10)
num //= 10
return rev
def sum_of_digits(num):
"""Returns an integer
Sum of the digits of a number passed as argument
"""
s = 0
while num > 0:
s += num % 10
num //= 10
return s
def is_prime(num):
"""Returns a boolean
Checks whether the number passed as argument is a prime or composite number
"""
if num == 0 or num == 1:
return False
i = 2
    while i*i <= num:
if num % i == 0:
return False
i += 1
return True
def generate_primes(num1, num2):
    """Returns a list
    Prime numbers generated in the given range (num1, num2), inclusive
    """
    if num1 > num2:
        raise Exception(
            "num1 can't be greater than num2. Specify the correct range.")
    if num1 <= 0 or num2 <= 0:
        raise Exception("Specify the correct range.")
    # Sieve of Eratosthenes over 0..num2, then keep only the slice [num1, num2]
    primes = [True for i in range(num2 + 1)]
    primes[0] = False
    if num2 >= 1:
        primes[1] = False
    inc_value = 2
    while inc_value * inc_value <= num2:
        if primes[inc_value]:
            for i in range(inc_value * inc_value, num2 + 1, inc_value):
                primes[i] = False
        inc_value += 1
    return [n for n in range(num1, num2 + 1) if primes[n]]
def gcd(num1, num2):
"""Returns an integer
Greatest common divisor of the two numbers passed as arguments
"""
if num2 == 0:
return num1
return gcd(num2, num1 % num2)
def lcm(num1, num2):
"""Returns an integer
Least common multiple of the two numbers passed as arguments
"""
return num1 * num2 // gcd(num1, num2)
def get_factors(num):
"""Returns a list
Factors of the number passed as argument
"""
factors = []
inc_value = 1
while inc_value * inc_value <= num:
if num % inc_value == 0:
if num//inc_value == inc_value:
factors.append(inc_value)
else:
factors.append(inc_value)
factors.append(num//inc_value)
inc_value += 1
return factors
def factorial(num):
"""Returns an integer
Factorial of the number passed as argument
"""
fact = 1
for i in range(1, num+1):
fact *= i
return fact
def fibonacci(n):
    """Returns an integer
    Nth fibonacci number (1-indexed: fibonacci(1) == 0, fibonacci(2) == 1)
    """
    a = 0
    b = 1
    if n == 1:
        return a
    for i in range(n - 2):
        a, b = b, a + b
    return b
def number_of_digits(num):
    """Returns an integer
    Number of digits of the number passed as argument
    """
    count = 0
    while num > 0:
        count += 1
        num //= 10
    return count
def is_armstrong(num):
    """Returns a boolean
    Checks whether the number passed as argument is an Armstrong number
    """
    original_num = num
    formed_num = 0
    nod = number_of_digits(num)
    while num > 0:
        dig = num % 10
        formed_num += dig ** nod
        num //= 10
    return formed_num == original_num
|
/sbpyp-0.0.1.tar.gz/sbpyp-0.0.1/src/number_utility.py
| 0.780035 | 0.813535 |
number_utility.py
|
pypi
|
sbs-gob-pe-helper
==============================
Note: We recommend reviewing the [Legal notice](docs/NotaLegal.md) before using the library.
**sbs-gob-pe-helper** offers a Pythonic way to download market data from the [Superintendencia de Banca y Seguros del Perú](https://www.sbs.gob.pe/) via web scraping.
-----------------
## Features
The features covered by this package are listed below:
- Zero-coupon curve:
    - Data extraction from the SBS.
    - Curve plotting.
    - Linear interpolation of the curve.
- Fixed-income price vector:
    - Data extraction from the SBS.
- Corporate spread index:
    - Data extraction from the SBS.
---
## Installation
Install `sbs-gob-pe-helper` using `pip`:
``` {.sourceCode .bash}
$ pip install sbs-gob-pe-helper
```
---
# Quick Start
## The CuponCero module
The `CuponCero` module provides access to SBS zero-coupon bond data in a more Pythonic way:
## sbs_gob_pe_helper.CuponCero
| Parameter | Description |
| ------ | ------ |
|fechaProceso| Processing date|
|tipoCurva| Curve type|
| tipoCurva | Description |
| ------ | ------ |
| CBCRS | Curva Cupon Cero CD BCRP|
| CCSDF | Curva Cupon Cero Dólares CP|
| CSBCRD | Curva Cupon Cero Dólares Sintetica|
| CCINFS | Curva Cupon Cero Inflacion Soles BCRP|
| CCCLD | Curva Cupon Cero Libor Dolares|
| CCPEDS | Curva Cupon Cero Peru Exterior Dolares - Soberana|
| CCPSS | Curva Cupon Cero Peru Soles Soberana|
| CCPVS | Curva Cupon Cero Peru Vac Soberana|
### Example
```python
import sbs_gob_pe_helper.CuponCero as cc
tp_curva = 'CCPSS'
fec_proceso = '31/07/2023'
# get all zero-coupon curve data for a given processing date
df_cup= cc.get_curva_cupon_cero(tipoCurva=tp_curva, fechaProceso=fec_proceso)
```

If you want to plot the zero-coupon curve, follow these steps:
```python
cc.plot_curva(df_cup)
```

To linearly interpolate interest rates, do the following:
```python
import pandas as pd
# If you need rates for tenors that are not present in the zero-coupon data,
# a linear interpolation is performed using the existing tenor values.
# In the following example, the rates for tenors of 30, 60 and 120 days are computed.
data = {
"dias": [0, 30, 60 , 90 , 120],
}
df_test = pd.DataFrame(data)
df_test['tasas'] = df_test['dias'].apply(cc.get_tasa_interes_por_dias, args=(df_cup,))
df_test.head()
```

## The VectorPrecioRentaFija module
The `VectorPrecioRentaFija` module provides access to the SBS fixed-income price vector data in a more Pythonic way:
## sbs_gob_pe_helper.VectorPrecioRentaFija
| Parameter | Description |
| ------ | ------ |
|fechaProceso| Processing date|
|cboEmisor| Issuer|
|cboMoneda| Currency type|
|cboRating| Issue rating|
| cboEmisor | Description |
| ------ | ------ |
|1000| GOB.CENTRAL|
|2011| ALICORP S.A.|
|0087| BANCO FALABELLA|
|0088| BCO RIPLEY|
|0001| BCRP|
|0011| CONTINENTAL|
|0042| CREDISCOTIA|
|0003| INTERBANK|
*The available issuers are listed on the following page: https://www.sbs.gob.pe/app/pu/CCID/Paginas/vp_rentafija.aspx
| cboMoneda | Description |
| ------ | ------ |
| 1 | Soles|
| 2 | Soles VAC|
| 3 | Dolares|
| cboRating |
| ------ |
|A|
|A+|
|A-|
|AA|
|AA+|
|AA-|
|AAA|
|B-|
|BB|
|CP-1|
|CP-1+|
|CP-1-|
|CP-2|
|CP-2+|
### Example
```python
import sbs_gob_pe_helper.VectorPrecioRentaFija as vp
fechaProceso = '21/07/2023'
# Get the price vector of the fixed-income instruments available from the SBS for a specific processing date:
df_vector = vp.get_vector_precios(fechaProceso=fechaProceso)
df_vector.columns.tolist()
```
```json
['Nemónico',
'ISIN/Identif.',
'Emisor',
'Moneda',
'P. Limpio (%)',
'TIR %',
'Origen(*)',
'Spreads',
'P. Limpio (monto)',
'P. Sucio (monto)',
'I.C. (monto)',
'F. Emisión',
'F. Vencimiento',
'Cupón (%)',
'Margen Libor (%)',
'TIR % S/Opc',
'Rating Emisión',
'Ult. Cupón',
'Prox. Cupón',
'Duración',
'Var PLimpio',
'Var PSucio',
'Var Tir']
```
```python
# Show the first 5 records of the price vector:
df_vector.head()
```

---
## The IndiceSpreadsCorporativo module
The `IndiceSpreadsCorporativo` module provides access to the SBS corporate spread index via Python:
## sbs_gob_pe_helper.IndiceSpreadsCorporativo
| Parameter | Description |
| ------ | ------ |
|fechaInicial| Start date|
|fechaFinal| End date|
|tipoCurva| Curve type|
| tipoCurva | Description |
| ------ | ------ |
| CBCRS | Curva Cupon Cero CD BCRP|
| CCSDF | Curva Cupon Cero Dólares CP|
| CSBCRD | Curva Cupon Cero Dólares Sintetica|
| CCINFS | Curva Cupon Cero Inflacion Soles BCRP|
| CCCLD | Curva Cupon Cero Libor Dolares|
| CCPEDS | Curva Cupon Cero Peru Exterior Dolares - Soberana|
| CCPSS | Curva Cupon Cero Peru Soles Soberana|
| CCPVS | Curva Cupon Cero Peru Vac Soberana|
### Example
```python
import sbs_gob_pe_helper.VectorPrecioRentaFija as vp
import sbs_gob_pe_helper.IndiceSpreadsCorporativo as isc
tpCurva = 'CCPSS'
fInicial = '04/08/2023'
fFinal = '04/08/2023'
# Get the corporate spread index for the given curve type and date range:
df_isc = isc.get_indice_spreads_corporativo(tipoCurva=tpCurva, fechaInicial=fInicial, fechaFinal=fFinal)
df_isc.head()
```

---
## Feedback
The best way to send feedback is to open an issue at https://github.com/ecandela/sbs-gob-pe-helper/issues.
If you are proposing a feature:
- Explain in detail how it would work.
- Keep the scope as narrow as possible, to make it easier to implement.
- Remember that this is a volunteer-driven project and that contributions are welcome :)
|
/sbs-gob-pe-helper-0.0.2.tar.gz/sbs-gob-pe-helper-0.0.2/README.md
| 0.704567 | 0.873862 |
README.md
|
pypi
|
from .base import SolutionByTextAPI
class Message(SolutionByTextAPI):
"""
Send sms to subscriber.
"""
service_end_point = 'MessageRSService.svc/SendMessage'
def __init__(self, security_token:str, org_code:str, stage:str, phone:str, message:str, custom_data=None):
super().__init__(
security_token, org_code, stage
)
        assert len(phone) == 10, 'Invalid phone number: must be exactly 10 digits'
if custom_data:
assert isinstance(custom_data, dict), 'Custom data should be dictionary'
self.phone = phone
self.message = message
self.custom_data = custom_data
    def send(self):
        """
        Build the recipient payload and post it to the SendMessage endpoint.
        """
recipient_list = []
recipient = {
"sendTo": self.phone,
"CustomFields": None
}
if self.custom_data:
custom_field_list = []
for key, value in self.custom_data.items():
custom_field = {}
custom_field['Key'] = key
custom_field['Value'] = value
custom_field_list.append(custom_field)
recipient["CustomFields"] = custom_field_list
recipient_list.append(recipient)
arg = {
"sendTo": recipient_list,
"message": self.message
}
return super().post(**arg)
class TemplateMessage(SolutionByTextAPI):
"""
Send template sms to subscriber.
"""
service_end_point = 'MessageRSService.svc/SendTemplateMessage'
def __init__(self, security_token:str, org_code:str, stage:str, phone:str, message:str, template_id:int, custom_data=None):
super().__init__(
security_token, org_code, stage
)
assert isinstance(template_id, int),'template id should be integer value.'
        assert len(phone) == 10, 'Invalid phone number: must be exactly 10 digits'
if custom_data:
assert isinstance(custom_data, dict), 'Custom data should be dictionary'
self.phone = phone
self.message = message
self.template_id = template_id
self.custom_data = custom_data
    def send(self):
        """
        Build the recipient payload and post it to the SendTemplateMessage endpoint.
        """
recipient_list = []
recipient = {
"sendTo": self.phone,
"CustomFields": None
}
if self.custom_data:
custom_field_list = []
for key, value in self.custom_data.items():
custom_field = {}
custom_field['Key'] = key
custom_field['Value'] = value
custom_field_list.append(custom_field)
recipient["CustomFields"] = custom_field_list
recipient_list.append(recipient)
arg = {
"sendTo": recipient_list,
"templateID": self.template_id
}
return super().post(**arg)
|
/sbt-python-client-1.0.8.tar.gz/sbt-python-client-1.0.8/solutions_by_text/message.py
| 0.70202 | 0.285755 |
message.py
|
pypi
|
from abc import ABC
from enum import Enum
from typing import Type
import pandas as pd
from .configs import PortfolioAggregationConfig, ColumnsConfig
from .interfaces import EScope
class PortfolioAggregationMethod(Enum):
"""
The portfolio aggregation method determines how the temperature scores for the individual companies are aggregated
into a single portfolio score.
"""
WATS = "WATS"
TETS = "TETS"
MOTS = "MOTS"
EOTS = "EOTS"
ECOTS = "ECOTS"
AOTS = "AOTS"
ROTS = "ROTS"
@staticmethod
def is_emissions_based(method: "PortfolioAggregationMethod") -> bool:
"""
Check whether a given method is emissions-based (i.e. it uses the emissions to calculate the aggregation).
:param method: The method to check
:return:
"""
return (
method == PortfolioAggregationMethod.MOTS
or method == PortfolioAggregationMethod.EOTS
or method == PortfolioAggregationMethod.ECOTS
or method == PortfolioAggregationMethod.AOTS
or method == PortfolioAggregationMethod.ROTS
)
@staticmethod
def get_value_column(
method: "PortfolioAggregationMethod", column_config: Type[ColumnsConfig]
    ) -> str:
        """
        Map an emissions-based aggregation method to the column holding the
        company value that is used as the weight (defaults to market cap).
        """
        map_value_column = {
PortfolioAggregationMethod.MOTS: column_config.MARKET_CAP,
PortfolioAggregationMethod.EOTS: column_config.COMPANY_ENTERPRISE_VALUE,
PortfolioAggregationMethod.ECOTS: column_config.COMPANY_EV_PLUS_CASH,
PortfolioAggregationMethod.AOTS: column_config.COMPANY_TOTAL_ASSETS,
PortfolioAggregationMethod.ROTS: column_config.COMPANY_REVENUE,
}
return map_value_column.get(method, column_config.MARKET_CAP)
class PortfolioAggregation(ABC):
"""
This class is a base class that provides portfolio aggregation calculation.
:param config: A class defining the constants that are used throughout this class. This parameter is only required
if you'd like to overwrite a constant. This can be done by extending the PortfolioAggregationConfig
class and overwriting one of the parameters.
"""
def __init__(
self, config: Type[PortfolioAggregationConfig] = PortfolioAggregationConfig
):
self.c = config
def _check_column(self, data: pd.DataFrame, column: str):
"""
Check if a certain column is filled for all companies. If not throw an error.
:param data: The data to check
:param column: The column to check
:return:
"""
missing_data = data[pd.isnull(data[column])][self.c.COLS.COMPANY_NAME].unique()
if len(missing_data):
if column == self.c.COLS.GHG_SCOPE12 or column == self.c.COLS.GHG_SCOPE3:
raise ValueError(
"A value for {} is needed for all aggregation methods except for WATS. \nSo please try to estimate appropriate values or remove these companies from the aggregation calculation: {}".format(
column, ", ".join(missing_data)
)
)
else:
raise ValueError(
"The value for {} is missing for the following companies: {}".format(
column, ", ".join(missing_data)
)
)
def _calculate_aggregate_score(
self,
data: pd.DataFrame,
input_column: str,
portfolio_aggregation_method: PortfolioAggregationMethod,
) -> pd.Series:
"""
Aggregate the scores in a given column based on a certain portfolio aggregation method.
:param data: The data to run the calculations on
:param input_column: The input column (containing the scores)
:param portfolio_aggregation_method: The method to use
:return: The aggregates score
"""
if portfolio_aggregation_method == PortfolioAggregationMethod.WATS:
total_investment_weight = data[self.c.COLS.INVESTMENT_VALUE].sum()
try:
return data.apply(
lambda row: (row[self.c.COLS.INVESTMENT_VALUE] * row[input_column])
/ total_investment_weight,
axis=1,
)
except ZeroDivisionError:
raise ValueError("The portfolio weight is not allowed to be zero")
# Total emissions weighted temperature score (TETS)
elif portfolio_aggregation_method == PortfolioAggregationMethod.TETS:
use_S1S2 = (data[self.c.COLS.SCOPE] == EScope.S1S2) | (
data[self.c.COLS.SCOPE] == EScope.S1S2S3
)
use_S3 = (data[self.c.COLS.SCOPE] == EScope.S3) | (
data[self.c.COLS.SCOPE] == EScope.S1S2S3
)
if use_S3.any():
self._check_column(data, self.c.COLS.GHG_SCOPE3)
if use_S1S2.any():
self._check_column(data, self.c.COLS.GHG_SCOPE12)
# Calculate the total emissions of all companies
emissions = (use_S1S2 * data[self.c.COLS.GHG_SCOPE12]).sum() + (
use_S3 * data[self.c.COLS.GHG_SCOPE3]
).sum()
try:
return (
(
use_S1S2 * data[self.c.COLS.GHG_SCOPE12]
+ use_S3 * data[self.c.COLS.GHG_SCOPE3]
)
/ emissions
* data[input_column]
)
except ZeroDivisionError:
raise ValueError("The total emissions should be higher than zero")
elif PortfolioAggregationMethod.is_emissions_based(
portfolio_aggregation_method
):
# These four methods only differ in the way the company is valued.
if portfolio_aggregation_method == PortfolioAggregationMethod.ECOTS:
self._check_column(data, self.c.COLS.COMPANY_ENTERPRISE_VALUE)
self._check_column(data, self.c.COLS.CASH_EQUIVALENTS)
data[self.c.COLS.COMPANY_EV_PLUS_CASH] = (
data[self.c.COLS.COMPANY_ENTERPRISE_VALUE]
+ data[self.c.COLS.CASH_EQUIVALENTS]
)
value_column = PortfolioAggregationMethod.get_value_column(
portfolio_aggregation_method, self.c.COLS
)
# Calculate the total owned emissions of all companies
try:
self._check_column(data, self.c.COLS.INVESTMENT_VALUE)
self._check_column(data, value_column)
use_S1S2 = (data[self.c.COLS.SCOPE] == EScope.S1S2) | (
data[self.c.COLS.SCOPE] == EScope.S1S2S3
)
use_S3 = (data[self.c.COLS.SCOPE] == EScope.S3) | (
data[self.c.COLS.SCOPE] == EScope.S1S2S3
)
if use_S1S2.any():
self._check_column(data, self.c.COLS.GHG_SCOPE12)
if use_S3.any():
self._check_column(data, self.c.COLS.GHG_SCOPE3)
data[self.c.COLS.OWNED_EMISSIONS] = (
data[self.c.COLS.INVESTMENT_VALUE] / data[value_column]
) * (
use_S1S2 * data[self.c.COLS.GHG_SCOPE12]
+ use_S3 * data[self.c.COLS.GHG_SCOPE3]
)
except ZeroDivisionError:
raise ValueError(
"To calculate the aggregation, the {} column may not be zero".format(
value_column
)
)
owned_emissions = data[self.c.COLS.OWNED_EMISSIONS].sum()
try:
# Calculate the MOTS value per company
return data.apply(
lambda row: (row[self.c.COLS.OWNED_EMISSIONS] / owned_emissions)
* row[input_column],
axis=1,
)
except ZeroDivisionError:
raise ValueError("The total owned emissions can not be zero")
else:
raise ValueError("The specified portfolio aggregation method is invalid")
|
/sbti_finance_tool-1.0.9-py3-none-any.whl/SBTi/portfolio_aggregation.py
| 0.871728 | 0.32742 |
portfolio_aggregation.py
|
pypi
|
from enum import Enum
from typing import Optional, Dict, List
import pandas as pd
from pydantic import BaseModel, validator, Field
class AggregationContribution(BaseModel):
company_name: str
company_id: str
temperature_score: float
contribution_relative: Optional[float]
contribution: Optional[float]
def __getitem__(self, item):
return getattr(self, item)
class Aggregation(BaseModel):
score: float
proportion: float
contributions: List[AggregationContribution]
def __getitem__(self, item):
return getattr(self, item)
class ScoreAggregation(BaseModel):
all: Aggregation
influence_percentage: float
grouped: Dict[str, Aggregation]
def __getitem__(self, item):
return getattr(self, item)
class ScoreAggregationScopes(BaseModel):
S1S2: Optional[ScoreAggregation]
S3: Optional[ScoreAggregation]
S1S2S3: Optional[ScoreAggregation]
def __getitem__(self, item):
return getattr(self, item)
class ScoreAggregations(BaseModel):
short: Optional[ScoreAggregationScopes]
mid: Optional[ScoreAggregationScopes]
long: Optional[ScoreAggregationScopes]
def __getitem__(self, item):
return getattr(self, item)
class ScenarioInterface(BaseModel):
number: int
engagement_type: Optional[str]
class PortfolioCompany(BaseModel):
company_name: str
company_id: str
company_isin: Optional[str]
company_lei: Optional[str] = 'nan'
investment_value: float
engagement_target: Optional[bool] = False
user_fields: Optional[dict]
class IDataProviderCompany(BaseModel):
company_name: str
company_id: str
isic: str
ghg_s1s2: Optional[float]
ghg_s3: Optional[float]
country: Optional[str]
region: Optional[str]
sector: Optional[str]
industry_level_1: Optional[str]
industry_level_2: Optional[str]
industry_level_3: Optional[str]
industry_level_4: Optional[str]
company_revenue: Optional[float]
company_market_cap: Optional[float]
company_enterprise_value: Optional[float]
company_total_assets: Optional[float]
company_cash_equivalents: Optional[float]
sbti_validated: bool = Field(
False,
description='True if the SBTi target status is "Target set", false otherwise',
)
class SortableEnum(Enum):
def __str__(self):
return self.name
def __ge__(self, other):
if self.__class__ is other.__class__:
order = list(self.__class__)
return order.index(self) >= order.index(other)
return NotImplemented
def __gt__(self, other):
if self.__class__ is other.__class__:
order = list(self.__class__)
return order.index(self) > order.index(other)
return NotImplemented
def __le__(self, other):
if self.__class__ is other.__class__:
order = list(self.__class__)
return order.index(self) <= order.index(other)
return NotImplemented
def __lt__(self, other):
if self.__class__ is other.__class__:
order = list(self.__class__)
return order.index(self) < order.index(other)
return NotImplemented
class EScope(SortableEnum):
S1 = "S1"
S2 = "S2"
S3 = "S3"
S1S2 = "S1+S2"
S1S2S3 = "S1+S2+S3"
@classmethod
def get_result_scopes(cls) -> List["EScope"]:
"""
Get a list of scopes that should be calculated if the user leaves it open.
:return: A list of EScope objects
"""
return [cls.S1S2, cls.S3, cls.S1S2S3]
class ETimeFrames(SortableEnum):
SHORT = "short"
MID = "mid"
LONG = "long"
class IDataProviderTarget(BaseModel):
company_id: str
target_type: str
intensity_metric: Optional[str]
scope: EScope
coverage_s1: float
coverage_s2: float
coverage_s3: float
reduction_ambition: float
base_year: int
base_year_ghg_s1: float
base_year_ghg_s2: float
base_year_ghg_s3: float
start_year: Optional[int]
end_year: int
time_frame: Optional[ETimeFrames]
achieved_reduction: Optional[float] = 0
@validator("start_year", pre=True, always=False)
def validate_e(cls, val):
if val == "" or val == "nan" or pd.isnull(val):
return None
return val
|
/sbti_finance_tool-1.0.9-py3-none-any.whl/SBTi/interfaces.py
| 0.92698 | 0.224895 |
interfaces.py
|
pypi
|
from typing import Optional, List
import requests
from SBTi.data.data_provider import DataProvider
from SBTi.interfaces import IDataProviderTarget, IDataProviderCompany
class Bloomberg(DataProvider):
"""
Data provider skeleton for Bloomberg.
"""
def _request(self, endpoint: str, data: dict) -> Optional[object]:
"""
Request data from the server.
Note: This request does in no way reflect the actual implementation, this is only a stub to show what a
potential API request COULD look like.
:param endpoint: The endpoint of the API
:param data: The data to send as a body
:return: The returned data, None in case of an error.
"""
try:
headers = {"Authorization": "Basic: {}:{}".format("username", "password")}
r = requests.post(
"{}{}".format("host", endpoint), json=data, headers=headers
)
if r.status_code == 200:
return r.json()
except Exception as e:
return None
return None
def get_targets(self, company_ids: List[str]) -> List[IDataProviderTarget]:
"""
Get all relevant targets for a list of company ids (ISIN). This method should return a list of
IDataProviderTarget instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the targets
"""
# TODO: Make an API request
# TODO: Transform the result into a dataframe
# TODO: Make sure the columns align with those defined in the docstring
raise NotImplementedError
def get_company_data(self, company_ids: List[str]) -> List[IDataProviderCompany]:
"""
Get all relevant data for a list of company ids (ISIN). This method should return a list of IDataProviderCompany
instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the company data
"""
# TODO: Make an API request
# TODO: Transform the result into a dataframe
# TODO: Make sure the columns align with those defined in the docstring
raise NotImplementedError
def get_sbti_targets(self, companies: list) -> list:
"""
For each of the companies, get the status of their target (Target set, Committed or No target) as it's known to
the SBTi.
:param companies: A list of companies. Each company should be a dict with a "company_name" and "company_id"
field.
:return: The original list, enriched with a field called "sbti_target_status"
"""
# TODO: Make an API request
# TODO: Extract the SBTi target status from the response
# TODO: Enrich the original list with this data
raise NotImplementedError
|
/sbti_finance_tool-1.0.9-py3-none-any.whl/SBTi/data/bloomberg.py
| 0.599251 | 0.498047 |
bloomberg.py
|
pypi
|
from typing import List, Type
import requests
import pandas as pd
import warnings
from SBTi.configs import PortfolioCoverageTVPConfig
from SBTi.interfaces import IDataProviderCompany
class SBTi:
"""
Data provider skeleton for SBTi. This class only provides the sbti_validated field for existing companies.
"""
def __init__(
self, config: Type[PortfolioCoverageTVPConfig] = PortfolioCoverageTVPConfig
):
self.c = config
# Fetch CTA file from SBTi website
resp = requests.get(self.c.CTA_FILE_URL)
# If status code == 200 then Write CTA file to disk
if resp.status_code == 200:
with open(self.c.FILE_TARGETS, 'wb') as output:
output.write(resp.content)
print(f'Status code from fetching the CTA file: {resp.status_code}, 200 = OK')
# Read CTA file into pandas dataframe
# Suppress warning about openpyxl - check if this is still needed in the released version.
else:
print('Could not fetch the CTA file from the SBTi website')
print('Will read older file from this package version')
warnings.filterwarnings('ignore', category=UserWarning, module='openpyxl')
self.targets = pd.read_excel(self.c.FILE_TARGETS)
def filter_cta_file(self, targets):
"""
        Filter the CTA file to create a dataframe that has one row per company
with the columns "Action" and "Target".
If Action = Target then only keep the rows where Target = Near-term.
"""
# Create a new dataframe with only the columns "Action" and "Target"
# and the columns that are needed for identifying the company
targets = targets[
[
self.c.COL_COMPANY_NAME,
self.c.COL_COMPANY_ISIN,
self.c.COL_COMPANY_LEI,
self.c.COL_ACTION,
self.c.COL_TARGET
]
]
# Keep rows where Action = Target and Target = Near-term
df_nt_targets = targets[
(targets[self.c.COL_ACTION] == self.c.VALUE_ACTION_TARGET) &
(targets[self.c.COL_TARGET] == self.c.VALUE_TARGET_SET)]
# Drop duplicates in the dataframe by waterfall.
# Do company name last due to risk of misspelled names
# First drop duplicates on LEI, then on ISIN, then on company name
df_nt_targets = pd.concat([
df_nt_targets[~df_nt_targets[self.c.COL_COMPANY_LEI].isnull()].drop_duplicates(
subset=self.c.COL_COMPANY_LEI, keep='first'
),
df_nt_targets[df_nt_targets[self.c.COL_COMPANY_LEI].isnull()]
])
df_nt_targets = pd.concat([
df_nt_targets[~df_nt_targets[self.c.COL_COMPANY_ISIN].isnull()].drop_duplicates(
subset=self.c.COL_COMPANY_ISIN, keep='first'
),
df_nt_targets[df_nt_targets[self.c.COL_COMPANY_ISIN].isnull()]
])
df_nt_targets.drop_duplicates(subset=self.c.COL_COMPANY_NAME, inplace=True)
return df_nt_targets
def get_sbti_targets(
self, companies: List[IDataProviderCompany], id_map: dict
) -> List[IDataProviderCompany]:
"""
Check for each company if they have an SBTi validated target, first using the company LEI,
if available, and then using the ISIN.
:param companies: A list of IDataProviderCompany instances
:param id_map: A map from company id to a tuple of (ISIN, LEI)
:return: A list of IDataProviderCompany instances, supplemented with the SBTi information
"""
# Filter out information about targets
self.targets = self.filter_cta_file(self.targets)
for company in companies:
isin, lei = id_map.get(company.company_id)
# Check lei and length of lei to avoid zeros
if not lei.lower() == 'nan' and len(lei) > 3:
targets = self.targets[
self.targets[self.c.COL_COMPANY_LEI] == lei
]
elif not isin.lower() == 'nan':
targets = self.targets[
self.targets[self.c.COL_COMPANY_ISIN] == isin
]
else:
continue
if len(targets) > 0:
company.sbti_validated = (
self.c.VALUE_TARGET_SET in targets[self.c.COL_TARGET].values
)
return companies
|
/sbti_finance_tool-1.0.9-py3-none-any.whl/SBTi/data/sbti.py
| 0.707203 | 0.2296 |
sbti.py
|
pypi
|
from typing import List
from SBTi.data.data_provider import DataProvider
from SBTi.interfaces import IDataProviderCompany, IDataProviderTarget
class Trucost(DataProvider):
"""
Data provider skeleton for Trucost.
"""
def get_targets(self, company_ids: List[str]) -> List[IDataProviderTarget]:
"""
Get all relevant targets for a list of company ids (ISIN). This method should return a list of
IDataProviderTarget instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the targets
"""
# TODO: Make an API request
# TODO: Transform the result into a dataframe
# TODO: Make sure the columns align with those defined in the docstring
raise NotImplementedError
def get_company_data(self, company_ids: List[str]) -> List[IDataProviderCompany]:
"""
Get all relevant data for a list of company ids (ISIN). This method should return a list of IDataProviderCompany
instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the company data
"""
# TODO: Make an API request
# TODO: Transform the result into a dataframe
# TODO: Make sure the columns align with those defined in the docstring
raise NotImplementedError
def get_sbti_targets(self, companies: list) -> list:
"""
For each of the companies, get the status of their target (Target set, Committed or No target) as it's known to
the SBTi.
:param companies: A list of companies. Each company should be a dict with a "company_name" and "company_id"
field.
:return: The original list, enriched with a field called "sbti_target_status"
"""
# TODO: Make an API request
# TODO: Extract the SBTi target status from the response
# TODO: Enrich the original list with this data
raise NotImplementedError
/sbti_finance_tool-1.0.9-py3-none-any.whl/SBTi/data/trucost.py (pypi)
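The TODOs above only outline the intended flow. Below is a minimal sketch of what `get_sbti_targets` could look like for an API-backed provider; the endpoint URL, the response format, and the use of `requests` are all hypothetical and not part of the Trucost skeleton:

```python
import requests  # hypothetical dependency, not used by the skeleton above


def get_sbti_targets(companies: list) -> list:
    # Placeholder endpoint and response shape, purely for illustration.
    response = requests.get(
        "https://example.com/api/sbti-status",
        params={"ids": ",".join(company["company_id"] for company in companies)},
        timeout=30,
    )
    response.raise_for_status()
    # Assume the made-up API returns {"<company_id>": "Target set" | "Committed" | "No target"}.
    status_by_id = response.json()
    for company in companies:
        company["sbti_target_status"] = status_by_id.get(company["company_id"], "No target")
    return companies
```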
from typing import List, Type
from pydantic import ValidationError
import logging
import pandas as pd
from SBTi.configs import ColumnsConfig
from SBTi.data.data_provider import DataProvider
from SBTi.interfaces import IDataProviderCompany, IDataProviderTarget
class CSVProvider(DataProvider):
    """
    Data provider skeleton for CSV files. This class is intended primarily for testing purposes.
    :param path: Path to the CSV file with the company (fundamental) data
    :param path_targets: Path to the CSV file with the target data
    :param encoding: Encoding of both CSV files (defaults to utf-8)
    :param config: A ColumnsConfig class holding the column-name constants
    """
    def __init__(self, path: str, path_targets: str, encoding: str = "utf-8",
                 config: Type[ColumnsConfig] = ColumnsConfig):
        super().__init__()
        # Column-name constants are needed by _target_df_to_model's warning message below.
        self.c = config
        self.data = pd.read_csv(path, encoding=encoding)
        self.data_targets = pd.read_csv(path_targets, encoding=encoding)
def get_targets(self, company_ids: list) -> List[IDataProviderTarget]:
"""
Get all relevant targets for a list of company ids (ISIN). This method should return a list of
IDataProviderTarget instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the targets
"""
model_targets = self._target_df_to_model(self.data_targets)
model_targets = [
target for target in model_targets if target.company_id in company_ids
]
return model_targets
def _target_df_to_model(self, df_targets):
"""
        Transform a target DataFrame into a list of IDataProviderTarget instances.
        :param df_targets: pandas DataFrame with targets
:return: A list containing the targets
"""
logger = logging.getLogger(__name__)
targets = df_targets.to_dict(orient="records")
model_targets: List[IDataProviderTarget] = []
for target in targets:
try:
model_targets.append(IDataProviderTarget.parse_obj(target))
except ValidationError as e:
logger.warning(
"(one of) the target(s) of company %s is invalid and will be skipped"
% target[self.c.COMPANY_NAME]
)
pass
return model_targets
def get_company_data(self, company_ids: list) -> List[IDataProviderCompany]:
"""
Get all relevant data for a list of company ids (ISIN). This method should return a list of IDataProviderCompany
instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the company data
"""
companies = self.data.to_dict(orient="records")
model_companies: List[IDataProviderCompany] = [
IDataProviderCompany.parse_obj(company) for company in companies
]
model_companies = [
target for target in model_companies if target.company_id in company_ids
]
return model_companies
def get_sbti_targets(self, companies: list) -> list:
"""
For each of the companies, get the status of their target (Target set, Committed or No target) as it's known to
the SBTi.
:param companies: A list of companies. Each company should be a dict with a "company_name" and "company_id"
field.
:return: The original list, enriched with a field called "sbti_target_status"
"""
return self.data[
(
self.data["company_id"].isin(
[company["company_id"] for company in companies]
)
& self.data["company_id"].notnull()
)
].copy()
/sbti_finance_tool-1.0.9-py3-none-any.whl/SBTi/data/csv.py (pypi)
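A minimal usage sketch for the CSV provider above; the file names and ISINs are placeholders, and both files must already follow the column layout expected by `IDataProviderCompany` and `IDataProviderTarget`:

```python
from SBTi.data.csv import CSVProvider

provider = CSVProvider(path="company_data.csv", path_targets="target_data.csv")  # placeholder paths

company_ids = ["US0000000001", "NL0000000002"]  # placeholder ISINs
companies = provider.get_company_data(company_ids)
targets = provider.get_targets(company_ids)
print(f"{len(companies)} companies, {len(targets)} targets")
```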
from typing import Type, List
from pydantic import ValidationError
import logging
import pandas as pd
from SBTi.data.data_provider import DataProvider
from SBTi.configs import ColumnsConfig
from SBTi.interfaces import IDataProviderCompany, IDataProviderTarget
class ExcelProvider(DataProvider):
"""
    Data provider skeleton for Excel files. This class is intended primarily for testing purposes.
    :param path: Path to the Excel workbook; it must contain "fundamental_data" and "target_data" sheets
    :param config: A ColumnsConfig class holding the column-name constants
"""
def __init__(self, path: str, config: Type[ColumnsConfig] = ColumnsConfig):
super().__init__()
self.data = pd.read_excel(path, sheet_name=None, skiprows=0)
self.c = config
def get_targets(self, company_ids: List[str]) -> List[IDataProviderTarget]:
"""
Get all relevant targets for a list of company ids (ISIN). This method should return a list of
IDataProviderTarget instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the targets
"""
model_targets = self._target_df_to_model(self.data["target_data"])
model_targets = [
target for target in model_targets if target.company_id in company_ids
]
return model_targets
def _target_df_to_model(self, df_targets):
"""
        Transform a target DataFrame into a list of IDataProviderTarget instances.
        :param df_targets: pandas DataFrame with targets
:return: A list containing the targets
"""
logger = logging.getLogger(__name__)
targets = df_targets.to_dict(orient="records")
model_targets: List[IDataProviderTarget] = []
for target in targets:
try:
model_targets.append(IDataProviderTarget.parse_obj(target))
except ValidationError as e:
logger.warning(
"(one of) the target(s) of company %s is invalid and will be skipped"
% target[self.c.COMPANY_NAME]
)
pass
return model_targets
def get_company_data(self, company_ids: List[str]) -> List[IDataProviderCompany]:
"""
Get all relevant data for a list of company ids (ISIN). This method should return a list of IDataProviderCompany
instances.
:param company_ids: A list of company IDs (ISINs)
:return: A list containing the company data
"""
data_company = self.data["fundamental_data"]
companies = data_company.to_dict(orient="records")
model_companies: List[IDataProviderCompany] = [
IDataProviderCompany.parse_obj(company) for company in companies
]
model_companies = [
target for target in model_companies if target.company_id in company_ids
]
return model_companies
def get_sbti_targets(self, companies: list) -> list:
"""
For each of the companies, get the status of their target (Target set, Committed or No target) as it's known to
the SBTi.
:param companies: A list of companies. Each company should be a dict with a "company_name" and "company_id"
field.
:return: The original list, enriched with a field called "sbti_target_status"
"""
raise NotImplementedError
/sbti_finance_tool-1.0.9-py3-none-any.whl/SBTi/data/excel.py (pypi)
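The Excel provider above loads every sheet (`sheet_name=None`) and then reads from the "fundamental_data" and "target_data" sheets. A minimal usage sketch with a placeholder workbook name:

```python
from SBTi.data.excel import ExcelProvider

provider = ExcelProvider(path="data_provider_example.xlsx")  # placeholder workbook
company_ids = ["US0000000001"]  # placeholder ISIN
companies = provider.get_company_data(company_ids)
targets = provider.get_targets(company_ids)
```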
from typing import Any, Callable, Dict, List, Optional, Sequence, Union
import flax.linen as nn
import jax
import jax.numpy as jnp
import numpy as np
import optax
import tensorflow_probability
from flax.training.train_state import TrainState
from gymnasium import spaces
from stable_baselines3.common.type_aliases import Schedule
from sbx.common.distributions import TanhTransformedDistribution
from sbx.common.policies import BaseJaxPolicy
from sbx.common.type_aliases import RLTrainState
tfp = tensorflow_probability.substrates.jax
tfd = tfp.distributions
class Critic(nn.Module):
net_arch: Sequence[int]
use_layer_norm: bool = False
dropout_rate: Optional[float] = None
@nn.compact
def __call__(self, x: jnp.ndarray, action: jnp.ndarray) -> jnp.ndarray:
x = jnp.concatenate([x, action], -1)
for n_units in self.net_arch:
x = nn.Dense(n_units)(x)
if self.dropout_rate is not None and self.dropout_rate > 0:
x = nn.Dropout(rate=self.dropout_rate)(x, deterministic=False)
if self.use_layer_norm:
x = nn.LayerNorm()(x)
x = nn.relu(x)
x = nn.Dense(1)(x)
return x
class VectorCritic(nn.Module):
net_arch: Sequence[int]
use_layer_norm: bool = False
dropout_rate: Optional[float] = None
n_critics: int = 2
@nn.compact
def __call__(self, obs: jnp.ndarray, action: jnp.ndarray):
# Idea taken from https://github.com/perrin-isir/xpag
# Similar to https://github.com/tinkoff-ai/CORL for PyTorch
vmap_critic = nn.vmap(
Critic,
variable_axes={"params": 0}, # parameters not shared between the critics
split_rngs={"params": True, "dropout": True}, # different initializations
in_axes=None,
out_axes=0,
axis_size=self.n_critics,
)
q_values = vmap_critic(
use_layer_norm=self.use_layer_norm,
dropout_rate=self.dropout_rate,
net_arch=self.net_arch,
)(obs, action)
return q_values
class Actor(nn.Module):
net_arch: Sequence[int]
action_dim: int
log_std_min: float = -20
log_std_max: float = 2
def get_std(self):
# Make it work with gSDE
return jnp.array(0.0)
@nn.compact
def __call__(self, x: jnp.ndarray) -> tfd.Distribution: # type: ignore[name-defined]
for n_units in self.net_arch:
x = nn.Dense(n_units)(x)
x = nn.relu(x)
mean = nn.Dense(self.action_dim)(x)
log_std = nn.Dense(self.action_dim)(x)
log_std = jnp.clip(log_std, self.log_std_min, self.log_std_max)
dist = TanhTransformedDistribution(
tfd.MultivariateNormalDiag(loc=mean, scale_diag=jnp.exp(log_std)),
)
return dist
class SACPolicy(BaseJaxPolicy):
action_space: spaces.Box # type: ignore[assignment]
def __init__(
self,
observation_space: spaces.Space,
action_space: spaces.Box,
lr_schedule: Schedule,
net_arch: Optional[Union[List[int], Dict[str, List[int]]]] = None,
dropout_rate: float = 0.0,
layer_norm: bool = False,
# activation_fn: Type[nn.Module] = nn.ReLU,
use_sde: bool = False,
# Note: most gSDE parameters are not used
# this is to keep API consistent with SB3
log_std_init: float = -3,
use_expln: bool = False,
clip_mean: float = 2.0,
features_extractor_class=None,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Callable[..., optax.GradientTransformation] = optax.adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
n_critics: int = 2,
share_features_extractor: bool = False,
):
super().__init__(
observation_space,
action_space,
features_extractor_class,
features_extractor_kwargs,
optimizer_class=optimizer_class,
optimizer_kwargs=optimizer_kwargs,
squash_output=True,
)
self.dropout_rate = dropout_rate
self.layer_norm = layer_norm
if net_arch is not None:
if isinstance(net_arch, list):
self.net_arch_pi = self.net_arch_qf = net_arch
else:
self.net_arch_pi = net_arch["pi"]
self.net_arch_qf = net_arch["qf"]
else:
self.net_arch_pi = self.net_arch_qf = [256, 256]
self.n_critics = n_critics
self.use_sde = use_sde
self.key = self.noise_key = jax.random.PRNGKey(0)
def build(self, key: jax.random.KeyArray, lr_schedule: Schedule, qf_learning_rate: float) -> jax.random.KeyArray:
key, actor_key, qf_key, dropout_key = jax.random.split(key, 4)
# Keep a key for the actor
key, self.key = jax.random.split(key, 2)
# Initialize noise
self.reset_noise()
if isinstance(self.observation_space, spaces.Dict):
obs = jnp.array([spaces.flatten(self.observation_space, self.observation_space.sample())])
else:
obs = jnp.array([self.observation_space.sample()])
action = jnp.array([self.action_space.sample()])
self.actor = Actor(
action_dim=int(np.prod(self.action_space.shape)),
net_arch=self.net_arch_pi,
)
# Hack to make gSDE work without modifying internal SB3 code
self.actor.reset_noise = self.reset_noise
self.actor_state = TrainState.create(
apply_fn=self.actor.apply,
params=self.actor.init(actor_key, obs),
tx=self.optimizer_class(
learning_rate=lr_schedule(1), # type: ignore[call-arg]
**self.optimizer_kwargs,
),
)
self.qf = VectorCritic(
dropout_rate=self.dropout_rate,
use_layer_norm=self.layer_norm,
net_arch=self.net_arch_qf,
n_critics=self.n_critics,
)
self.qf_state = RLTrainState.create(
apply_fn=self.qf.apply,
params=self.qf.init(
{"params": qf_key, "dropout": dropout_key},
obs,
action,
),
target_params=self.qf.init(
{"params": qf_key, "dropout": dropout_key},
obs,
action,
),
tx=self.optimizer_class(
learning_rate=qf_learning_rate, # type: ignore[call-arg]
**self.optimizer_kwargs,
),
)
self.actor.apply = jax.jit(self.actor.apply) # type: ignore[method-assign]
self.qf.apply = jax.jit( # type: ignore[method-assign]
self.qf.apply,
static_argnames=("dropout_rate", "use_layer_norm"),
)
return key
def reset_noise(self, batch_size: int = 1) -> None:
"""
Sample new weights for the exploration matrix, when using gSDE.
"""
self.key, self.noise_key = jax.random.split(self.key, 2)
def forward(self, obs: np.ndarray, deterministic: bool = False) -> np.ndarray:
return self._predict(obs, deterministic=deterministic)
def _predict(self, observation: np.ndarray, deterministic: bool = False) -> np.ndarray: # type: ignore[override]
if deterministic:
return BaseJaxPolicy.select_action(self.actor_state, observation)
# Trick to use gSDE: repeat sampled noise by using the same noise key
if not self.use_sde:
self.reset_noise()
return BaseJaxPolicy.sample_action(self.actor_state, observation, self.noise_key)
/sbx_rl-0.7.0-py3-none-any.whl/sbx/sac/policies.py (pypi)
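The `VectorCritic` above relies on `nn.vmap` to build an ensemble of critics with independent parameters from a single module definition. A stripped-down sketch of that trick, using a hypothetical two-layer MLP instead of the actual critic:

```python
import flax.linen as nn
import jax
import jax.numpy as jnp


class TinyMLP(nn.Module):
    @nn.compact
    def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
        x = nn.relu(nn.Dense(32)(x))
        return nn.Dense(1)(x)


# Ensemble of two independently initialized copies, mirroring VectorCritic.
Ensemble = nn.vmap(
    TinyMLP,
    variable_axes={"params": 0},  # parameters are not shared between members
    split_rngs={"params": True},  # each member gets its own init RNG
    in_axes=None,                 # the same input is broadcast to every member
    out_axes=0,
    axis_size=2,
)

model = Ensemble()
x = jnp.ones((4, 8))  # batch of 4 dummy inputs with 8 features
params = model.init(jax.random.PRNGKey(0), x)
out = model.apply(params, x)
print(out.shape)  # (2, 4, 1): one stacked output per ensemble member
```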
from functools import partial
from typing import Any, Dict, Optional, Tuple, Type, Union
import flax
import flax.linen as nn
import jax
import jax.numpy as jnp
import numpy as np
import optax
from flax.training.train_state import TrainState
from gymnasium import spaces
from stable_baselines3.common.buffers import ReplayBuffer
from stable_baselines3.common.noise import ActionNoise
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from sbx.common.off_policy_algorithm import OffPolicyAlgorithmJax
from sbx.common.type_aliases import ReplayBufferSamplesNp, RLTrainState
from sbx.sac.policies import SACPolicy
class EntropyCoef(nn.Module):
ent_coef_init: float = 1.0
@nn.compact
def __call__(self) -> jnp.ndarray:
log_ent_coef = self.param("log_ent_coef", init_fn=lambda key: jnp.full((), jnp.log(self.ent_coef_init)))
return jnp.exp(log_ent_coef)
class ConstantEntropyCoef(nn.Module):
ent_coef_init: float = 1.0
@nn.compact
def __call__(self) -> float:
# Hack to not optimize the entropy coefficient while not having to use if/else for the jit
# TODO: add parameter in train to remove that hack
self.param("dummy_param", init_fn=lambda key: jnp.full((), self.ent_coef_init))
return self.ent_coef_init
class SAC(OffPolicyAlgorithmJax):
policy_aliases: Dict[str, Type[SACPolicy]] = { # type: ignore[assignment]
"MlpPolicy": SACPolicy,
# Minimal dict support using flatten()
"MultiInputPolicy": SACPolicy,
}
policy: SACPolicy
action_space: spaces.Box # type: ignore[assignment]
def __init__(
self,
policy,
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule] = 3e-4,
qf_learning_rate: Optional[float] = None,
buffer_size: int = 1_000_000, # 1e6
learning_starts: int = 100,
batch_size: int = 256,
tau: float = 0.005,
gamma: float = 0.99,
train_freq: Union[int, Tuple[int, str]] = 1,
gradient_steps: int = 1,
policy_delay: int = 1,
action_noise: Optional[ActionNoise] = None,
replay_buffer_class: Optional[Type[ReplayBuffer]] = None,
replay_buffer_kwargs: Optional[Dict[str, Any]] = None,
ent_coef: Union[str, float] = "auto",
use_sde: bool = False,
sde_sample_freq: int = -1,
use_sde_at_warmup: bool = False,
tensorboard_log: Optional[str] = None,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: str = "auto",
_init_setup_model: bool = True,
) -> None:
super().__init__(
policy=policy,
env=env,
learning_rate=learning_rate,
qf_learning_rate=qf_learning_rate,
buffer_size=buffer_size,
learning_starts=learning_starts,
batch_size=batch_size,
tau=tau,
gamma=gamma,
train_freq=train_freq,
gradient_steps=gradient_steps,
action_noise=action_noise,
replay_buffer_class=replay_buffer_class,
replay_buffer_kwargs=replay_buffer_kwargs,
use_sde=use_sde,
sde_sample_freq=sde_sample_freq,
use_sde_at_warmup=use_sde_at_warmup,
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
seed=seed,
supported_action_spaces=(spaces.Box,),
support_multi_env=True,
)
self.policy_delay = policy_delay
self.ent_coef_init = ent_coef
if _init_setup_model:
self._setup_model()
def _setup_model(self) -> None:
super()._setup_model()
if not hasattr(self, "policy") or self.policy is None:
# pytype: disable=not-instantiable
self.policy = self.policy_class( # type: ignore[assignment]
self.observation_space,
self.action_space,
self.lr_schedule,
**self.policy_kwargs,
)
# pytype: enable=not-instantiable
assert isinstance(self.qf_learning_rate, float)
self.key = self.policy.build(self.key, self.lr_schedule, self.qf_learning_rate)
self.key, ent_key = jax.random.split(self.key, 2)
self.actor = self.policy.actor # type: ignore[assignment]
self.qf = self.policy.qf # type: ignore[assignment]
# The entropy coefficient or entropy can be learned automatically
# see Automating Entropy Adjustment for Maximum Entropy RL section
# of https://arxiv.org/abs/1812.05905
if isinstance(self.ent_coef_init, str) and self.ent_coef_init.startswith("auto"):
# Default initial value of ent_coef when learned
ent_coef_init = 1.0
if "_" in self.ent_coef_init:
ent_coef_init = float(self.ent_coef_init.split("_")[1])
assert ent_coef_init > 0.0, "The initial value of ent_coef must be greater than 0"
# Note: we optimize the log of the entropy coeff which is slightly different from the paper
# as discussed in https://github.com/rail-berkeley/softlearning/issues/37
self.ent_coef = EntropyCoef(ent_coef_init)
else:
# This will throw an error if a malformed string (different from 'auto') is passed
assert isinstance(
self.ent_coef_init, float
), f"Entropy coef must be float when not equal to 'auto', actual: {self.ent_coef_init}"
self.ent_coef = ConstantEntropyCoef(self.ent_coef_init) # type: ignore[assignment]
self.ent_coef_state = TrainState.create(
apply_fn=self.ent_coef.apply,
params=self.ent_coef.init(ent_key)["params"],
tx=optax.adam(
learning_rate=self.learning_rate,
),
)
# automatically set target entropy if needed
self.target_entropy = -np.prod(self.action_space.shape).astype(np.float32)
def learn(
self,
total_timesteps: int,
callback: MaybeCallback = None,
log_interval: int = 4,
tb_log_name: str = "SAC",
reset_num_timesteps: bool = True,
progress_bar: bool = False,
):
return super().learn(
total_timesteps=total_timesteps,
callback=callback,
log_interval=log_interval,
tb_log_name=tb_log_name,
reset_num_timesteps=reset_num_timesteps,
progress_bar=progress_bar,
)
def train(self, batch_size, gradient_steps):
# Sample all at once for efficiency (so we can jit the for loop)
data = self.replay_buffer.sample(batch_size * gradient_steps, env=self._vec_normalize_env)
# Pre-compute the indices where we need to update the actor
# This is a hack in order to jit the train loop
# It will compile once per value of policy_delay_indices
policy_delay_indices = {i: True for i in range(gradient_steps) if ((self._n_updates + i + 1) % self.policy_delay) == 0}
policy_delay_indices = flax.core.FrozenDict(policy_delay_indices)
if isinstance(data.observations, dict):
keys = list(self.observation_space.keys())
obs = np.concatenate([data.observations[key].numpy() for key in keys], axis=1)
next_obs = np.concatenate([data.next_observations[key].numpy() for key in keys], axis=1)
else:
obs = data.observations.numpy()
next_obs = data.next_observations.numpy()
# Convert to numpy
data = ReplayBufferSamplesNp(
obs,
data.actions.numpy(),
next_obs,
data.dones.numpy().flatten(),
data.rewards.numpy().flatten(),
)
(
self.policy.qf_state,
self.policy.actor_state,
self.ent_coef_state,
self.key,
(actor_loss_value, qf_loss_value, ent_coef_value),
) = self._train(
self.gamma,
self.tau,
self.target_entropy,
gradient_steps,
data,
policy_delay_indices,
self.policy.qf_state,
self.policy.actor_state,
self.ent_coef_state,
self.key,
)
self._n_updates += gradient_steps
self.logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
self.logger.record("train/actor_loss", actor_loss_value.item())
self.logger.record("train/critic_loss", qf_loss_value.item())
self.logger.record("train/ent_coef", ent_coef_value.item())
@staticmethod
@jax.jit
def update_critic(
gamma: float,
actor_state: TrainState,
qf_state: RLTrainState,
ent_coef_state: TrainState,
observations: np.ndarray,
actions: np.ndarray,
next_observations: np.ndarray,
rewards: np.ndarray,
dones: np.ndarray,
key: jax.random.KeyArray,
):
key, noise_key, dropout_key_target, dropout_key_current = jax.random.split(key, 4)
# sample action from the actor
dist = actor_state.apply_fn(actor_state.params, next_observations)
next_state_actions = dist.sample(seed=noise_key)
next_log_prob = dist.log_prob(next_state_actions)
ent_coef_value = ent_coef_state.apply_fn({"params": ent_coef_state.params})
qf_next_values = qf_state.apply_fn(
qf_state.target_params,
next_observations,
next_state_actions,
rngs={"dropout": dropout_key_target},
)
next_q_values = jnp.min(qf_next_values, axis=0)
# td error + entropy term
next_q_values = next_q_values - ent_coef_value * next_log_prob.reshape(-1, 1)
# shape is (batch_size, 1)
target_q_values = rewards.reshape(-1, 1) + (1 - dones.reshape(-1, 1)) * gamma * next_q_values
def mse_loss(params, dropout_key):
# shape is (n_critics, batch_size, 1)
current_q_values = qf_state.apply_fn(params, observations, actions, rngs={"dropout": dropout_key})
return 0.5 * ((target_q_values - current_q_values) ** 2).mean(axis=1).sum()
qf_loss_value, grads = jax.value_and_grad(mse_loss, has_aux=False)(qf_state.params, dropout_key_current)
qf_state = qf_state.apply_gradients(grads=grads)
return (
qf_state,
(qf_loss_value, ent_coef_value),
key,
)
@staticmethod
@jax.jit
def update_actor(
actor_state: RLTrainState,
qf_state: RLTrainState,
ent_coef_state: TrainState,
observations: np.ndarray,
key: jax.random.KeyArray,
):
key, dropout_key, noise_key = jax.random.split(key, 3)
def actor_loss(params):
dist = actor_state.apply_fn(params, observations)
actor_actions = dist.sample(seed=noise_key)
log_prob = dist.log_prob(actor_actions).reshape(-1, 1)
qf_pi = qf_state.apply_fn(
qf_state.params,
observations,
actor_actions,
rngs={"dropout": dropout_key},
)
# Take min among all critics (mean for droq)
min_qf_pi = jnp.min(qf_pi, axis=0)
ent_coef_value = ent_coef_state.apply_fn({"params": ent_coef_state.params})
actor_loss = (ent_coef_value * log_prob - min_qf_pi).mean()
return actor_loss, -log_prob.mean()
(actor_loss_value, entropy), grads = jax.value_and_grad(actor_loss, has_aux=True)(actor_state.params)
actor_state = actor_state.apply_gradients(grads=grads)
return actor_state, qf_state, actor_loss_value, key, entropy
@staticmethod
@jax.jit
def soft_update(tau: float, qf_state: RLTrainState):
qf_state = qf_state.replace(target_params=optax.incremental_update(qf_state.params, qf_state.target_params, tau))
return qf_state
@staticmethod
@jax.jit
def update_temperature(target_entropy: np.ndarray, ent_coef_state: TrainState, entropy: float):
def temperature_loss(temp_params):
ent_coef_value = ent_coef_state.apply_fn({"params": temp_params})
ent_coef_loss = ent_coef_value * (entropy - target_entropy).mean()
return ent_coef_loss
ent_coef_loss, grads = jax.value_and_grad(temperature_loss)(ent_coef_state.params)
ent_coef_state = ent_coef_state.apply_gradients(grads=grads)
return ent_coef_state, ent_coef_loss
@classmethod
@partial(jax.jit, static_argnames=["cls", "gradient_steps"])
def _train(
cls,
gamma: float,
tau: float,
target_entropy: np.ndarray,
gradient_steps: int,
data: ReplayBufferSamplesNp,
policy_delay_indices: flax.core.FrozenDict,
qf_state: RLTrainState,
actor_state: TrainState,
ent_coef_state: TrainState,
key,
):
actor_loss_value = jnp.array(0)
for i in range(gradient_steps):
def slice(x, step=i):
assert x.shape[0] % gradient_steps == 0
batch_size = x.shape[0] // gradient_steps
return x[batch_size * step : batch_size * (step + 1)]
(
qf_state,
(qf_loss_value, ent_coef_value),
key,
) = SAC.update_critic(
gamma,
actor_state,
qf_state,
ent_coef_state,
slice(data.observations),
slice(data.actions),
slice(data.next_observations),
slice(data.rewards),
slice(data.dones),
key,
)
qf_state = SAC.soft_update(tau, qf_state)
# hack to be able to jit (n_updates % policy_delay == 0)
if i in policy_delay_indices:
(actor_state, qf_state, actor_loss_value, key, entropy) = cls.update_actor(
actor_state,
qf_state,
ent_coef_state,
slice(data.observations),
key,
)
ent_coef_state, _ = SAC.update_temperature(target_entropy, ent_coef_state, entropy)
return (
qf_state,
actor_state,
ent_coef_state,
key,
(actor_loss_value, qf_loss_value, ent_coef_value),
)
/sbx_rl-0.7.0-py3-none-any.whl/sbx/sac/sac.py (pypi)
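A minimal end-to-end sketch of training the jitted SAC above, assuming the sbx package exposes `SAC` at the top level and that gymnasium's Pendulum-v1 is available:

```python
import gymnasium as gym
from sbx import SAC

env = gym.make("Pendulum-v1")
model = SAC("MlpPolicy", env, learning_rate=3e-4, verbose=1)
model.learn(total_timesteps=5_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
print(action)
```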
from typing import Any, Callable, Dict, List, Optional, Union
import flax.linen as nn
import jax
import jax.numpy as jnp
import numpy as np
import optax
from gymnasium import spaces
from stable_baselines3.common.type_aliases import Schedule
from sbx.common.policies import BaseJaxPolicy
from sbx.common.type_aliases import RLTrainState
class QNetwork(nn.Module):
n_actions: int
n_units: int = 256
@nn.compact
def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
x = nn.Dense(self.n_units)(x)
x = nn.relu(x)
x = nn.Dense(self.n_units)(x)
x = nn.relu(x)
x = nn.Dense(self.n_actions)(x)
return x
class DQNPolicy(BaseJaxPolicy):
action_space: spaces.Discrete # type: ignore[assignment]
def __init__(
self,
observation_space: spaces.Space,
action_space: spaces.Discrete,
lr_schedule: Schedule,
net_arch: Optional[Union[List[int], Dict[str, List[int]]]] = None,
features_extractor_class=None,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Callable[..., optax.GradientTransformation] = optax.adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
):
super().__init__(
observation_space,
action_space,
features_extractor_class,
features_extractor_kwargs,
optimizer_class=optimizer_class,
optimizer_kwargs=optimizer_kwargs,
)
if net_arch is not None:
assert isinstance(net_arch, list)
self.n_units = net_arch[0]
else:
self.n_units = 256
def build(self, key: jax.random.KeyArray, lr_schedule: Schedule) -> jax.random.KeyArray:
key, qf_key = jax.random.split(key, 2)
obs = jnp.array([self.observation_space.sample()])
self.qf = QNetwork(n_actions=int(self.action_space.n), n_units=self.n_units)
self.qf_state = RLTrainState.create(
apply_fn=self.qf.apply,
params=self.qf.init({"params": qf_key}, obs),
target_params=self.qf.init({"params": qf_key}, obs),
tx=self.optimizer_class(
learning_rate=lr_schedule(1), # type: ignore[call-arg]
**self.optimizer_kwargs,
),
)
# TODO: jit qf.apply_fn too?
self.qf.apply = jax.jit(self.qf.apply) # type: ignore[method-assign]
return key
def forward(self, obs: np.ndarray, deterministic: bool = False) -> np.ndarray:
return self._predict(obs, deterministic=deterministic)
@staticmethod
@jax.jit
def select_action(qf_state, observations):
qf_values = qf_state.apply_fn(qf_state.params, observations)
return jnp.argmax(qf_values, axis=1).reshape(-1)
def _predict(self, observation: np.ndarray, deterministic: bool = True) -> np.ndarray: # type: ignore[override]
return DQNPolicy.select_action(self.qf_state, observation)
/sbx_rl-0.7.0-py3-none-any.whl/sbx/dqn/policies.py (pypi)
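A short sketch of a forward pass through the `QNetwork` above, with made-up observation and action dimensions; greedy action selection is just an argmax over the Q-values, as in `DQNPolicy.select_action`:

```python
import jax
import jax.numpy as jnp
from sbx.dqn.policies import QNetwork

qnet = QNetwork(n_actions=4, n_units=64)       # 4 discrete actions (placeholder)
obs = jnp.zeros((3, 8))                        # batch of 3 dummy observations, 8 features each
params = qnet.init(jax.random.PRNGKey(0), obs)
q_values = qnet.apply(params, obs)             # shape (3, 4)
greedy_actions = jnp.argmax(q_values, axis=1)  # shape (3,)
print(q_values.shape, greedy_actions)
```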
import warnings
from typing import Any, Dict, Optional, Tuple, Type, Union
import gymnasium as gym
import jax
import jax.numpy as jnp
import numpy as np
import optax
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from stable_baselines3.common.utils import get_linear_fn
from sbx.common.off_policy_algorithm import OffPolicyAlgorithmJax
from sbx.common.type_aliases import ReplayBufferSamplesNp, RLTrainState
from sbx.dqn.policies import DQNPolicy
class DQN(OffPolicyAlgorithmJax):
policy_aliases: Dict[str, Type[DQNPolicy]] = { # type: ignore[assignment]
"MlpPolicy": DQNPolicy,
}
# Linear schedule will be defined in `_setup_model()`
exploration_schedule: Schedule
policy: DQNPolicy
def __init__(
self,
policy,
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule] = 1e-4,
buffer_size: int = 1_000_000, # 1e6
learning_starts: int = 100,
batch_size: int = 32,
tau: float = 1.0,
gamma: float = 0.99,
target_update_interval: int = 1000,
exploration_fraction: float = 0.1,
exploration_initial_eps: float = 1.0,
exploration_final_eps: float = 0.05,
# max_grad_norm: float = 10,
train_freq: Union[int, Tuple[int, str]] = 4,
gradient_steps: int = 1,
tensorboard_log: Optional[str] = None,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: str = "auto",
_init_setup_model: bool = True,
) -> None:
super().__init__(
policy=policy,
env=env,
learning_rate=learning_rate,
buffer_size=buffer_size,
learning_starts=learning_starts,
batch_size=batch_size,
tau=tau,
gamma=gamma,
train_freq=train_freq,
gradient_steps=gradient_steps,
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
seed=seed,
sde_support=False,
supported_action_spaces=(gym.spaces.Discrete,),
support_multi_env=True,
)
self.exploration_initial_eps = exploration_initial_eps
self.exploration_final_eps = exploration_final_eps
self.exploration_fraction = exploration_fraction
self.target_update_interval = target_update_interval
# For updating the target network with multiple envs:
self._n_calls = 0
# "epsilon" for the epsilon-greedy exploration
self.exploration_rate = 0.0
if _init_setup_model:
self._setup_model()
def _setup_model(self) -> None:
super()._setup_model()
self.exploration_schedule = get_linear_fn(
self.exploration_initial_eps,
self.exploration_final_eps,
self.exploration_fraction,
)
# Account for multiple environments
# each call to step() corresponds to n_envs transitions
if self.n_envs > 1:
if self.n_envs > self.target_update_interval:
warnings.warn(
"The number of environments used is greater than the target network "
f"update interval ({self.n_envs} > {self.target_update_interval}), "
"therefore the target network will be updated after each call to env.step() "
f"which corresponds to {self.n_envs} steps."
)
self.target_update_interval = max(self.target_update_interval // self.n_envs, 1)
if not hasattr(self, "policy") or self.policy is None:
# pytype:disable=not-instantiable
self.policy = self.policy_class( # type: ignore[assignment]
self.observation_space,
self.action_space,
self.lr_schedule,
**self.policy_kwargs,
)
# pytype:enable=not-instantiable
self.key = self.policy.build(self.key, self.lr_schedule)
self.qf = self.policy.qf
def learn(
self,
total_timesteps: int,
callback: MaybeCallback = None,
log_interval: int = 4,
tb_log_name: str = "DQN",
reset_num_timesteps: bool = True,
progress_bar: bool = False,
):
return super().learn(
total_timesteps=total_timesteps,
callback=callback,
log_interval=log_interval,
tb_log_name=tb_log_name,
reset_num_timesteps=reset_num_timesteps,
progress_bar=progress_bar,
)
def train(self, batch_size, gradient_steps):
# Sample all at once for efficiency (so we can jit the for loop)
data = self.replay_buffer.sample(batch_size * gradient_steps, env=self._vec_normalize_env)
# Convert to numpy
data = ReplayBufferSamplesNp(
data.observations.numpy(),
# Convert to int64
data.actions.long().numpy(),
data.next_observations.numpy(),
data.dones.numpy().flatten(),
data.rewards.numpy().flatten(),
)
# Pre compute the slice indices
# otherwise jax will complain
indices = jnp.arange(len(data.dones)).reshape(gradient_steps, batch_size)
update_carry = {
"qf_state": self.policy.qf_state,
"gamma": self.gamma,
"data": data,
"indices": indices,
"info": {
"critic_loss": jnp.array([0.0]),
"qf_mean_value": jnp.array([0.0]),
},
}
# jit the loop similar to https://github.com/Howuhh/sac-n-jax
# we use scan to be able to play with unroll parameter
update_carry, _ = jax.lax.scan(
self._train,
update_carry,
indices,
unroll=1,
)
self.policy.qf_state = update_carry["qf_state"]
qf_loss_value = update_carry["info"]["critic_loss"]
qf_mean_value = update_carry["info"]["qf_mean_value"] / gradient_steps
self._n_updates += gradient_steps
self.logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
self.logger.record("train/critic_loss", qf_loss_value.item())
self.logger.record("train/qf_mean_value", qf_mean_value.item())
@staticmethod
@jax.jit
def update_qnetwork(
gamma: float,
qf_state: RLTrainState,
observations: np.ndarray,
replay_actions: np.ndarray,
next_observations: np.ndarray,
rewards: np.ndarray,
dones: np.ndarray,
):
# Compute the next Q-values using the target network
qf_next_values = qf_state.apply_fn(qf_state.target_params, next_observations)
# Follow greedy policy: use the one with the highest value
next_q_values = qf_next_values.max(axis=1)
# Avoid potential broadcast issue
next_q_values = next_q_values.reshape(-1, 1)
# shape is (batch_size, 1)
target_q_values = rewards.reshape(-1, 1) + (1 - dones.reshape(-1, 1)) * gamma * next_q_values
def huber_loss(params):
# Get current Q-values estimates
current_q_values = qf_state.apply_fn(params, observations)
# Retrieve the q-values for the actions from the replay buffer
current_q_values = jnp.take_along_axis(current_q_values, replay_actions, axis=1)
# Compute Huber loss (less sensitive to outliers)
return optax.huber_loss(current_q_values, target_q_values).mean(), current_q_values.mean()
(qf_loss_value, qf_mean_value), grads = jax.value_and_grad(huber_loss, has_aux=True)(qf_state.params)
qf_state = qf_state.apply_gradients(grads=grads)
return qf_state, (qf_loss_value, qf_mean_value)
@staticmethod
@jax.jit
def soft_update(tau: float, qf_state: RLTrainState):
qf_state = qf_state.replace(target_params=optax.incremental_update(qf_state.params, qf_state.target_params, tau))
return qf_state
def _on_step(self) -> None:
"""
Update the exploration rate and target network if needed.
This method is called in ``collect_rollouts()`` after each step in the environment.
"""
self._n_calls += 1
if self._n_calls % self.target_update_interval == 0:
self.policy.qf_state = DQN.soft_update(self.tau, self.policy.qf_state)
self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
self.logger.record("rollout/exploration_rate", self.exploration_rate)
def predict(
self,
observation: Union[np.ndarray, Dict[str, np.ndarray]],
state: Optional[Tuple[np.ndarray, ...]] = None,
episode_start: Optional[np.ndarray] = None,
deterministic: bool = False,
) -> Tuple[np.ndarray, Optional[Tuple[np.ndarray, ...]]]:
"""
Overrides the base_class predict function to include epsilon-greedy exploration.
:param observation: the input observation
:param state: The last states (can be None, used in recurrent policies)
:param episode_start: The last masks (can be None, used in recurrent policies)
:param deterministic: Whether or not to return deterministic actions.
:return: the model's action and the next state
(used in recurrent policies)
"""
if not deterministic and np.random.rand() < self.exploration_rate:
if self.policy.is_vectorized_observation(observation):
if isinstance(observation, dict):
n_batch = observation[list(observation.keys())[0]].shape[0]
else:
n_batch = observation.shape[0]
action = np.array([self.action_space.sample() for _ in range(n_batch)])
else:
action = np.array(self.action_space.sample())
else:
action, state = self.policy.predict(observation, state, episode_start, deterministic)
return action, state
@staticmethod
@jax.jit
def _train(carry, indices):
data = carry["data"]
qf_state, (qf_loss_value, qf_mean_value) = DQN.update_qnetwork(
carry["gamma"],
carry["qf_state"],
observations=data.observations[indices],
replay_actions=data.actions[indices],
next_observations=data.next_observations[indices],
rewards=data.rewards[indices],
dones=data.dones[indices],
)
carry["qf_state"] = qf_state
carry["info"]["critic_loss"] += qf_loss_value
carry["info"]["qf_mean_value"] += qf_mean_value
return carry, None
/sbx_rl-0.7.0-py3-none-any.whl/sbx/dqn/dqn.py (pypi)
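A minimal usage sketch for the DQN implementation above, assuming sbx exposes `DQN` at the top level and CartPole-v1 is available in gymnasium:

```python
import gymnasium as gym
from sbx import DQN

env = gym.make("CartPole-v1")
model = DQN("MlpPolicy", env, learning_rate=1e-4, verbose=1)
model.learn(total_timesteps=10_000)

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)  # greedy action, no epsilon exploration
print(action)
```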
from typing import Any, Dict, Optional, Tuple, Type, Union
from stable_baselines3.common.buffers import ReplayBuffer
from stable_baselines3.common.noise import ActionNoise
from stable_baselines3.common.type_aliases import GymEnv, Schedule
from sbx.tqc.policies import TQCPolicy
from sbx.tqc.tqc import TQC
class DroQ(TQC):
policy_aliases: Dict[str, Type[TQCPolicy]] = {
"MlpPolicy": TQCPolicy,
}
def __init__(
self,
policy,
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule] = 3e-4,
qf_learning_rate: Optional[float] = None,
buffer_size: int = 1_000_000, # 1e6
learning_starts: int = 100,
batch_size: int = 256,
tau: float = 0.005,
gamma: float = 0.99,
train_freq: Union[int, Tuple[int, str]] = 1,
gradient_steps: int = 2,
# policy_delay = gradient_steps to follow original implementation
policy_delay: int = 2,
top_quantiles_to_drop_per_net: int = 2,
dropout_rate: float = 0.01,
layer_norm: bool = True,
action_noise: Optional[ActionNoise] = None,
replay_buffer_class: Optional[Type[ReplayBuffer]] = None,
replay_buffer_kwargs: Optional[Dict[str, Any]] = None,
ent_coef: Union[str, float] = "auto",
use_sde: bool = False,
sde_sample_freq: int = -1,
use_sde_at_warmup: bool = False,
tensorboard_log: Optional[str] = None,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: str = "auto",
_init_setup_model: bool = True,
) -> None:
super().__init__(
policy=policy,
env=env,
learning_rate=learning_rate,
qf_learning_rate=qf_learning_rate,
buffer_size=buffer_size,
learning_starts=learning_starts,
batch_size=batch_size,
tau=tau,
gamma=gamma,
train_freq=train_freq,
gradient_steps=gradient_steps,
policy_delay=policy_delay,
action_noise=action_noise,
replay_buffer_class=replay_buffer_class,
replay_buffer_kwargs=replay_buffer_kwargs,
use_sde=use_sde,
sde_sample_freq=sde_sample_freq,
use_sde_at_warmup=use_sde_at_warmup,
top_quantiles_to_drop_per_net=top_quantiles_to_drop_per_net,
ent_coef=ent_coef,
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
seed=seed,
_init_setup_model=False,
)
self.policy_kwargs["dropout_rate"] = dropout_rate
self.policy_kwargs["layer_norm"] = layer_norm
if _init_setup_model:
self._setup_model()
/sbx_rl-0.7.0-py3-none-any.whl/sbx/droq/droq.py (pypi)
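DroQ above is TQC with dropout and layer norm in the critics plus more gradient steps per environment step. A minimal usage sketch for this sbx version, assuming `DroQ` is exposed at the package level:

```python
import gymnasium as gym
from sbx import DroQ

env = gym.make("Pendulum-v1")
model = DroQ(
    "MlpPolicy",
    env,
    dropout_rate=0.01,   # defaults shown in the constructor above
    layer_norm=True,
    gradient_steps=2,
    policy_delay=2,
)
model.learn(total_timesteps=5_000)
```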
from typing import Any, Callable, Dict, List, Optional, Union
import flax.linen as nn
import gymnasium as gym
import jax
import jax.numpy as jnp
import numpy as np
import optax
import tensorflow_probability
from flax.linen.initializers import constant
from flax.training.train_state import TrainState
from gymnasium import spaces
from stable_baselines3.common.type_aliases import Schedule
from sbx.common.policies import BaseJaxPolicy
tfp = tensorflow_probability.substrates.jax
tfd = tfp.distributions
class Critic(nn.Module):
n_units: int = 256
activation_fn: Callable = nn.tanh
@nn.compact
def __call__(self, x: jnp.ndarray) -> jnp.ndarray:
x = nn.Dense(self.n_units)(x)
x = self.activation_fn(x)
x = nn.Dense(self.n_units)(x)
x = self.activation_fn(x)
x = nn.Dense(1)(x)
return x
class Actor(nn.Module):
action_dim: int
n_units: int = 256
log_std_init: float = 0.0
continuous: bool = True
activation_fn: Callable = nn.tanh
def get_std(self):
# Make it work with gSDE
return jnp.array(0.0)
@nn.compact
def __call__(self, x: jnp.ndarray) -> tfd.Distribution: # type: ignore[name-defined]
x = nn.Dense(self.n_units)(x)
x = self.activation_fn(x)
x = nn.Dense(self.n_units)(x)
x = self.activation_fn(x)
mean = nn.Dense(self.action_dim)(x)
if self.continuous:
log_std = self.param("log_std", constant(self.log_std_init), (self.action_dim,))
dist = tfd.MultivariateNormalDiag(loc=mean, scale_diag=jnp.exp(log_std))
else:
dist = tfd.Categorical(logits=mean)
return dist
class PPOPolicy(BaseJaxPolicy):
def __init__(
self,
observation_space: gym.spaces.Space,
action_space: gym.spaces.Space,
lr_schedule: Schedule,
net_arch: Optional[Union[List[int], Dict[str, List[int]]]] = None,
ortho_init: bool = False,
log_std_init: float = 0.0,
activation_fn=nn.tanh,
use_sde: bool = False,
# Note: most gSDE parameters are not used
# this is to keep API consistent with SB3
use_expln: bool = False,
clip_mean: float = 2.0,
features_extractor_class=None,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Callable[..., optax.GradientTransformation] = optax.adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
share_features_extractor: bool = False,
):
if optimizer_kwargs is None:
# Small values to avoid NaN in Adam optimizer
optimizer_kwargs = {}
if optimizer_class == optax.adam:
optimizer_kwargs["eps"] = 1e-5
super().__init__(
observation_space,
action_space,
features_extractor_class,
features_extractor_kwargs,
optimizer_class=optimizer_class,
optimizer_kwargs=optimizer_kwargs,
squash_output=True,
)
self.log_std_init = log_std_init
self.activation_fn = activation_fn
if net_arch is not None:
if isinstance(net_arch, list):
self.n_units = net_arch[0]
else:
assert isinstance(net_arch, dict)
self.n_units = net_arch["pi"][0]
else:
self.n_units = 64
self.use_sde = use_sde
self.key = self.noise_key = jax.random.PRNGKey(0)
def build(self, key: jax.random.KeyArray, lr_schedule: Schedule, max_grad_norm: float) -> jax.random.KeyArray:
key, actor_key, vf_key = jax.random.split(key, 3)
# Keep a key for the actor
key, self.key = jax.random.split(key, 2)
# Initialize noise
self.reset_noise()
obs = jnp.array([self.observation_space.sample()])
if isinstance(self.action_space, spaces.Box):
actor_kwargs = {
"action_dim": int(np.prod(self.action_space.shape)),
"continuous": True,
}
elif isinstance(self.action_space, spaces.Discrete):
actor_kwargs = {
"action_dim": int(self.action_space.n),
"continuous": False,
}
else:
raise NotImplementedError(f"{self.action_space}")
self.actor = Actor(
n_units=self.n_units,
log_std_init=self.log_std_init,
activation_fn=self.activation_fn,
**actor_kwargs, # type: ignore[arg-type]
)
# Hack to make gSDE work without modifying internal SB3 code
self.actor.reset_noise = self.reset_noise
self.actor_state = TrainState.create(
apply_fn=self.actor.apply,
params=self.actor.init(actor_key, obs),
tx=optax.chain(
optax.clip_by_global_norm(max_grad_norm),
self.optimizer_class(
learning_rate=lr_schedule(1), # type: ignore[call-arg]
**self.optimizer_kwargs, # , eps=1e-5
),
),
)
self.vf = Critic(n_units=self.n_units, activation_fn=self.activation_fn)
self.vf_state = TrainState.create(
apply_fn=self.vf.apply,
params=self.vf.init({"params": vf_key}, obs),
tx=optax.chain(
optax.clip_by_global_norm(max_grad_norm),
self.optimizer_class(
learning_rate=lr_schedule(1), # type: ignore[call-arg]
**self.optimizer_kwargs, # , eps=1e-5
),
),
)
self.actor.apply = jax.jit(self.actor.apply) # type: ignore[method-assign]
self.vf.apply = jax.jit(self.vf.apply) # type: ignore[method-assign]
return key
def reset_noise(self, batch_size: int = 1) -> None:
"""
Sample new weights for the exploration matrix, when using gSDE.
"""
self.key, self.noise_key = jax.random.split(self.key, 2)
def forward(self, obs: np.ndarray, deterministic: bool = False) -> np.ndarray:
return self._predict(obs, deterministic=deterministic)
def _predict(self, observation: np.ndarray, deterministic: bool = False) -> np.ndarray: # type: ignore[override]
if deterministic:
return BaseJaxPolicy.select_action(self.actor_state, observation)
# Trick to use gSDE: repeat sampled noise by using the same noise key
if not self.use_sde:
self.reset_noise()
return BaseJaxPolicy.sample_action(self.actor_state, observation, self.noise_key)
def predict_all(self, observation: np.ndarray, key: jax.random.KeyArray) -> np.ndarray:
return self._predict_all(self.actor_state, self.vf_state, observation, key)
@staticmethod
@jax.jit
    def _predict_all(actor_state, vf_state, observations, key):
        dist = actor_state.apply_fn(actor_state.params, observations)
        actions = dist.sample(seed=key)
        log_probs = dist.log_prob(actions)
        values = vf_state.apply_fn(vf_state.params, observations).flatten()
return actions, log_probs, values
/sbx_rl-0.7.0-py3-none-any.whl/sbx/ppo/policies.py (pypi)
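The PPO actor above returns a tfp-on-JAX distribution: a `MultivariateNormalDiag` with a state-independent learned `log_std` for Box action spaces, or a `Categorical` over logits for Discrete ones. A small sketch of how sampling and log-probabilities work for both heads, with made-up shapes:

```python
import jax
import jax.numpy as jnp
import tensorflow_probability

tfd = tensorflow_probability.substrates.jax.distributions

# Continuous head: state-independent log_std, as in the Actor above.
mean = jnp.zeros((1, 2))       # network output for a 2-dimensional action space
log_std = jnp.zeros((2,))      # learned parameter, shared across states
dist = tfd.MultivariateNormalDiag(loc=mean, scale_diag=jnp.exp(log_std))

action = dist.sample(seed=jax.random.PRNGKey(0))
log_prob = dist.log_prob(action)
print(action.shape, log_prob.shape)  # (1, 2) (1,)

# Discrete head: logits straight from the last Dense layer.
cat = tfd.Categorical(logits=jnp.zeros((1, 3)))
print(cat.sample(seed=jax.random.PRNGKey(1)).shape)  # (1,)
```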
from typing import Any, Callable, Dict, List, Optional, Sequence, Union
import flax.linen as nn
import jax
import jax.numpy as jnp
import numpy as np
import optax
import tensorflow_probability
from flax.training.train_state import TrainState
from gymnasium import spaces
from stable_baselines3.common.type_aliases import Schedule
from sbx.common.distributions import TanhTransformedDistribution
from sbx.common.policies import BaseJaxPolicy
from sbx.common.type_aliases import RLTrainState
tfp = tensorflow_probability.substrates.jax
tfd = tfp.distributions
class Critic(nn.Module):
net_arch: Sequence[int]
use_layer_norm: bool = False
dropout_rate: Optional[float] = None
n_quantiles: int = 25
@nn.compact
def __call__(self, x: jnp.ndarray, a: jnp.ndarray, training: bool = False) -> jnp.ndarray:
x = jnp.concatenate([x, a], -1)
for n_units in self.net_arch:
x = nn.Dense(n_units)(x)
if self.dropout_rate is not None and self.dropout_rate > 0:
x = nn.Dropout(rate=self.dropout_rate)(x, deterministic=False)
if self.use_layer_norm:
x = nn.LayerNorm()(x)
x = nn.relu(x)
x = nn.Dense(self.n_quantiles)(x)
return x
class Actor(nn.Module):
net_arch: Sequence[int]
action_dim: int
log_std_min: float = -20
log_std_max: float = 2
def get_std(self):
# Make it work with gSDE
return jnp.array(0.0)
@nn.compact
def __call__(self, x: jnp.ndarray) -> tfd.Distribution: # type: ignore[name-defined]
for n_units in self.net_arch:
x = nn.Dense(n_units)(x)
x = nn.relu(x)
mean = nn.Dense(self.action_dim)(x)
log_std = nn.Dense(self.action_dim)(x)
log_std = jnp.clip(log_std, self.log_std_min, self.log_std_max)
dist = TanhTransformedDistribution(
tfd.MultivariateNormalDiag(loc=mean, scale_diag=jnp.exp(log_std)),
)
return dist
class TQCPolicy(BaseJaxPolicy):
action_space: spaces.Box # type: ignore[assignment]
def __init__(
self,
observation_space: spaces.Space,
action_space: spaces.Box,
lr_schedule: Schedule,
net_arch: Optional[Union[List[int], Dict[str, List[int]]]] = None,
dropout_rate: float = 0.0,
layer_norm: bool = False,
top_quantiles_to_drop_per_net: int = 2,
n_quantiles: int = 25,
# activation_fn: Type[nn.Module] = nn.ReLU,
use_sde: bool = False,
# Note: most gSDE parameters are not used
# this is to keep API consistent with SB3
log_std_init: float = -3,
use_expln: bool = False,
clip_mean: float = 2.0,
features_extractor_class=None,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Callable[..., optax.GradientTransformation] = optax.adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
n_critics: int = 2,
share_features_extractor: bool = False,
):
super().__init__(
observation_space,
action_space,
features_extractor_class,
features_extractor_kwargs,
optimizer_class=optimizer_class,
optimizer_kwargs=optimizer_kwargs,
squash_output=True,
)
self.dropout_rate = dropout_rate
self.layer_norm = layer_norm
if net_arch is not None:
if isinstance(net_arch, list):
self.net_arch_pi = self.net_arch_qf = net_arch
else:
self.net_arch_pi = net_arch["pi"]
self.net_arch_qf = net_arch["qf"]
else:
self.net_arch_pi = self.net_arch_qf = [256, 256]
self.n_quantiles = n_quantiles
self.n_critics = n_critics
self.top_quantiles_to_drop_per_net = top_quantiles_to_drop_per_net
# Sort and drop top k quantiles to control overestimation.
quantiles_total = self.n_quantiles * self.n_critics
top_quantiles_to_drop_per_net = self.top_quantiles_to_drop_per_net
self.n_target_quantiles = quantiles_total - top_quantiles_to_drop_per_net * self.n_critics
self.use_sde = use_sde
self.key = self.noise_key = jax.random.PRNGKey(0)
def build(self, key: jax.random.KeyArray, lr_schedule: Schedule, qf_learning_rate: float) -> jax.random.KeyArray:
key, actor_key, qf1_key, qf2_key = jax.random.split(key, 4)
key, dropout_key1, dropout_key2, self.key = jax.random.split(key, 4)
# Initialize noise
self.reset_noise()
if isinstance(self.observation_space, spaces.Dict):
obs = jnp.array([spaces.flatten(self.observation_space, self.observation_space.sample())])
else:
obs = jnp.array([self.observation_space.sample()])
action = jnp.array([self.action_space.sample()])
self.actor = Actor(
action_dim=int(np.prod(self.action_space.shape)),
net_arch=self.net_arch_pi,
)
# Hack to make gSDE work without modifying internal SB3 code
self.actor.reset_noise = self.reset_noise
self.actor_state = TrainState.create(
apply_fn=self.actor.apply,
params=self.actor.init(actor_key, obs),
tx=self.optimizer_class(
learning_rate=lr_schedule(1), # type: ignore[call-arg]
**self.optimizer_kwargs,
),
)
self.qf = Critic(
dropout_rate=self.dropout_rate,
use_layer_norm=self.layer_norm,
net_arch=self.net_arch_qf,
n_quantiles=self.n_quantiles,
)
self.qf1_state = RLTrainState.create(
apply_fn=self.qf.apply,
params=self.qf.init(
{"params": qf1_key, "dropout": dropout_key1},
obs,
action,
),
target_params=self.qf.init(
{"params": qf1_key, "dropout": dropout_key1},
obs,
action,
),
tx=optax.adam(learning_rate=qf_learning_rate), # type: ignore[call-arg]
)
self.qf2_state = RLTrainState.create(
apply_fn=self.qf.apply,
params=self.qf.init(
{"params": qf2_key, "dropout": dropout_key2},
obs,
action,
),
target_params=self.qf.init(
{"params": qf2_key, "dropout": dropout_key2},
obs,
action,
),
tx=self.optimizer_class(
learning_rate=qf_learning_rate, # type: ignore[call-arg]
**self.optimizer_kwargs,
),
)
self.actor.apply = jax.jit(self.actor.apply) # type: ignore[method-assign]
self.qf.apply = jax.jit( # type: ignore[method-assign]
self.qf.apply,
static_argnames=("dropout_rate", "use_layer_norm"),
)
return key
def reset_noise(self, batch_size: int = 1) -> None:
"""
Sample new weights for the exploration matrix, when using gSDE.
"""
self.key, self.noise_key = jax.random.split(self.key, 2)
def forward(self, obs: np.ndarray, deterministic: bool = False) -> np.ndarray:
return self._predict(obs, deterministic=deterministic)
def _predict(self, observation: np.ndarray, deterministic: bool = False) -> np.ndarray: # type: ignore[override]
if deterministic:
return BaseJaxPolicy.select_action(self.actor_state, observation)
# Trick to use gSDE: repeat sampled noise by using the same noise key
if not self.use_sde:
self.reset_noise()
return BaseJaxPolicy.sample_action(self.actor_state, observation, self.noise_key)
/sbx_rl-0.7.0-py3-none-any.whl/sbx/tqc/policies.py (pypi)
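With the defaults above (`n_quantiles=25`, `n_critics=2`, `top_quantiles_to_drop_per_net=2`), the constructor's quantile bookkeeping works out as follows; a small check of that arithmetic:

```python
n_quantiles = 25
n_critics = 2
top_quantiles_to_drop_per_net = 2

quantiles_total = n_quantiles * n_critics  # 25 * 2 = 50 quantiles across both critics
n_target_quantiles = quantiles_total - top_quantiles_to_drop_per_net * n_critics  # 50 - 4 = 46
print(quantiles_total, n_target_quantiles)  # 50 46
```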
from functools import partial
from typing import Any, Dict, Optional, Tuple, Type, Union
import flax
import flax.linen as nn
import jax
import jax.numpy as jnp
import numpy as np
import optax
from flax.training.train_state import TrainState
from gymnasium import spaces
from stable_baselines3.common.buffers import ReplayBuffer
from stable_baselines3.common.noise import ActionNoise
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from sbx.common.off_policy_algorithm import OffPolicyAlgorithmJax
from sbx.common.type_aliases import ReplayBufferSamplesNp, RLTrainState
from sbx.tqc.policies import TQCPolicy
class EntropyCoef(nn.Module):
ent_coef_init: float = 1.0
@nn.compact
def __call__(self) -> jnp.ndarray:
log_ent_coef = self.param("log_ent_coef", init_fn=lambda key: jnp.full((), jnp.log(self.ent_coef_init)))
return jnp.exp(log_ent_coef)
class ConstantEntropyCoef(nn.Module):
ent_coef_init: float = 1.0
@nn.compact
def __call__(self) -> float:
# Hack to not optimize the entropy coefficient while not having to use if/else for the jit
# TODO: add parameter in train to remove that hack
self.param("dummy_param", init_fn=lambda key: jnp.full((), self.ent_coef_init))
return self.ent_coef_init
class TQC(OffPolicyAlgorithmJax):
policy_aliases: Dict[str, Type[TQCPolicy]] = { # type: ignore[assignment]
"MlpPolicy": TQCPolicy,
# Minimal dict support using flatten()
"MultiInputPolicy": TQCPolicy,
}
policy: TQCPolicy
action_space: spaces.Box # type: ignore[assignment]
def __init__(
self,
policy,
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule] = 3e-4,
qf_learning_rate: Optional[float] = None,
buffer_size: int = 1_000_000, # 1e6
learning_starts: int = 100,
batch_size: int = 256,
tau: float = 0.005,
gamma: float = 0.99,
train_freq: Union[int, Tuple[int, str]] = 1,
gradient_steps: int = 1,
policy_delay: int = 1,
top_quantiles_to_drop_per_net: int = 2,
action_noise: Optional[ActionNoise] = None,
replay_buffer_class: Optional[Type[ReplayBuffer]] = None,
replay_buffer_kwargs: Optional[Dict[str, Any]] = None,
ent_coef: Union[str, float] = "auto",
use_sde: bool = False,
sde_sample_freq: int = -1,
use_sde_at_warmup: bool = False,
tensorboard_log: Optional[str] = None,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: str = "auto",
_init_setup_model: bool = True,
) -> None:
super().__init__(
policy=policy,
env=env,
learning_rate=learning_rate,
qf_learning_rate=qf_learning_rate,
buffer_size=buffer_size,
learning_starts=learning_starts,
batch_size=batch_size,
tau=tau,
gamma=gamma,
train_freq=train_freq,
gradient_steps=gradient_steps,
action_noise=action_noise,
replay_buffer_class=replay_buffer_class,
replay_buffer_kwargs=replay_buffer_kwargs,
use_sde=use_sde,
sde_sample_freq=sde_sample_freq,
use_sde_at_warmup=use_sde_at_warmup,
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
seed=seed,
supported_action_spaces=(spaces.Box,),
support_multi_env=True,
)
self.policy_delay = policy_delay
self.ent_coef_init = ent_coef
self.policy_kwargs["top_quantiles_to_drop_per_net"] = top_quantiles_to_drop_per_net
if _init_setup_model:
self._setup_model()
def _setup_model(self) -> None:
super()._setup_model()
if not hasattr(self, "policy") or self.policy is None:
# pytype: disable=not-instantiable
self.policy = self.policy_class( # type: ignore[assignment]
self.observation_space,
self.action_space,
self.lr_schedule,
**self.policy_kwargs,
)
# pytype: enable=not-instantiable
assert isinstance(self.qf_learning_rate, float)
self.key = self.policy.build(self.key, self.lr_schedule, self.qf_learning_rate)
self.key, ent_key = jax.random.split(self.key, 2)
self.actor = self.policy.actor # type: ignore[assignment]
self.qf = self.policy.qf
# The entropy coefficient or entropy can be learned automatically
# see Automating Entropy Adjustment for Maximum Entropy RL section
# of https://arxiv.org/abs/1812.05905
if isinstance(self.ent_coef_init, str) and self.ent_coef_init.startswith("auto"):
# Default initial value of ent_coef when learned
ent_coef_init = 1.0
if "_" in self.ent_coef_init:
ent_coef_init = float(self.ent_coef_init.split("_")[1])
assert ent_coef_init > 0.0, "The initial value of ent_coef must be greater than 0"
# Note: we optimize the log of the entropy coeff which is slightly different from the paper
# as discussed in https://github.com/rail-berkeley/softlearning/issues/37
self.ent_coef = EntropyCoef(ent_coef_init)
else:
# This will throw an error if a malformed string (different from 'auto') is passed
assert isinstance(
self.ent_coef_init, float
), f"Entropy coef must be float when not equal to 'auto', actual: {self.ent_coef_init}"
self.ent_coef = ConstantEntropyCoef(self.ent_coef_init) # type: ignore[assignment]
self.ent_coef_state = TrainState.create(
apply_fn=self.ent_coef.apply,
params=self.ent_coef.init(ent_key)["params"],
tx=optax.adam(
learning_rate=self.learning_rate,
),
)
# automatically set target entropy if needed
self.target_entropy = -np.prod(self.action_space.shape).astype(np.float32)
def learn(
self,
total_timesteps: int,
callback: MaybeCallback = None,
log_interval: int = 4,
tb_log_name: str = "TQC",
reset_num_timesteps: bool = True,
progress_bar: bool = False,
):
return super().learn(
total_timesteps=total_timesteps,
callback=callback,
log_interval=log_interval,
tb_log_name=tb_log_name,
reset_num_timesteps=reset_num_timesteps,
progress_bar=progress_bar,
)
def train(self, batch_size, gradient_steps):
# Sample all at once for efficiency (so we can jit the for loop)
data = self.replay_buffer.sample(batch_size * gradient_steps, env=self._vec_normalize_env)
# Pre-compute the indices where we need to update the actor
# This is a hack in order to jit the train loop
# It will compile once per value of policy_delay_indices
policy_delay_indices = {i: True for i in range(gradient_steps) if ((self._n_updates + i + 1) % self.policy_delay) == 0}
policy_delay_indices = flax.core.FrozenDict(policy_delay_indices)
if isinstance(data.observations, dict):
keys = list(self.observation_space.keys())
obs = np.concatenate([data.observations[key].numpy() for key in keys], axis=1)
next_obs = np.concatenate([data.next_observations[key].numpy() for key in keys], axis=1)
else:
obs = data.observations.numpy()
next_obs = data.next_observations.numpy()
# Convert to numpy
data = ReplayBufferSamplesNp(
obs,
data.actions.numpy(),
next_obs,
data.dones.numpy().flatten(),
data.rewards.numpy().flatten(),
)
(
self.policy.qf1_state,
self.policy.qf2_state,
self.policy.actor_state,
self.ent_coef_state,
self.key,
(qf1_loss_value, qf2_loss_value, actor_loss_value, ent_coef_value),
) = self._train(
self.gamma,
self.tau,
self.target_entropy,
gradient_steps,
self.policy.n_target_quantiles,
data,
policy_delay_indices,
self.policy.qf1_state,
self.policy.qf2_state,
self.policy.actor_state,
self.ent_coef_state,
self.key,
)
self._n_updates += gradient_steps
self.logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
self.logger.record("train/actor_loss", actor_loss_value.item())
self.logger.record("train/critic_loss", qf1_loss_value.item())
self.logger.record("train/ent_coef", ent_coef_value.item())
@staticmethod
@partial(jax.jit, static_argnames=["n_target_quantiles"])
def update_critic(
gamma: float,
n_target_quantiles: int,
actor_state: TrainState,
qf1_state: RLTrainState,
qf2_state: RLTrainState,
ent_coef_state: TrainState,
observations: np.ndarray,
actions: np.ndarray,
next_observations: np.ndarray,
rewards: np.ndarray,
dones: np.ndarray,
key: jax.random.KeyArray,
):
key, noise_key, dropout_key_1, dropout_key_2 = jax.random.split(key, 4)
key, dropout_key_3, dropout_key_4 = jax.random.split(key, 3)
# sample action from the actor
dist = actor_state.apply_fn(actor_state.params, next_observations)
next_state_actions = dist.sample(seed=noise_key)
next_log_prob = dist.log_prob(next_state_actions)
ent_coef_value = ent_coef_state.apply_fn({"params": ent_coef_state.params})
qf1_next_quantiles = qf1_state.apply_fn(
qf1_state.target_params,
next_observations,
next_state_actions,
True,
rngs={"dropout": dropout_key_1},
)
qf2_next_quantiles = qf1_state.apply_fn(
qf2_state.target_params,
next_observations,
next_state_actions,
True,
rngs={"dropout": dropout_key_2},
)
# Concatenate quantiles from both critics to get a single tensor
# batch x quantiles
qf_next_quantiles = jnp.concatenate((qf1_next_quantiles, qf2_next_quantiles), axis=1)
# sort next quantiles with jax
next_quantiles = jnp.sort(qf_next_quantiles)
# Keep only the quantiles we need
next_target_quantiles = next_quantiles[:, :n_target_quantiles]
# td error + entropy term
next_target_quantiles = next_target_quantiles - ent_coef_value * next_log_prob.reshape(-1, 1)
target_quantiles = rewards.reshape(-1, 1) + (1 - dones.reshape(-1, 1)) * gamma * next_target_quantiles
# Make target_quantiles broadcastable to (batch_size, n_quantiles, n_target_quantiles).
target_quantiles = jnp.expand_dims(target_quantiles, axis=1)
def huber_quantile_loss(params, noise_key):
# Compute huber quantile loss
current_quantiles = qf1_state.apply_fn(params, observations, actions, True, rngs={"dropout": noise_key})
# convert to shape: (batch_size, n_quantiles, 1) for broadcast
current_quantiles = jnp.expand_dims(current_quantiles, axis=-1)
n_quantiles = current_quantiles.shape[1]
# Cumulative probabilities to calculate quantiles.
# shape: (n_quantiles,)
cum_prob = (jnp.arange(n_quantiles, dtype=jnp.float32) + 0.5) / n_quantiles
# convert to shape: (1, n_quantiles, 1) for broadcast
cum_prob = jnp.expand_dims(cum_prob, axis=(0, -1))
# pairwise_delta: (batch_size, n_quantiles, n_target_quantiles)
pairwise_delta = target_quantiles - current_quantiles
abs_pairwise_delta = jnp.abs(pairwise_delta)
huber_loss = jnp.where(abs_pairwise_delta > 1, abs_pairwise_delta - 0.5, pairwise_delta**2 * 0.5)
loss = jnp.abs(cum_prob - (pairwise_delta < 0).astype(jnp.float32)) * huber_loss
return loss.mean()
qf1_loss_value, grads1 = jax.value_and_grad(huber_quantile_loss, has_aux=False)(qf1_state.params, dropout_key_3)
qf2_loss_value, grads2 = jax.value_and_grad(huber_quantile_loss, has_aux=False)(qf2_state.params, dropout_key_4)
qf1_state = qf1_state.apply_gradients(grads=grads1)
qf2_state = qf2_state.apply_gradients(grads=grads2)
return (
(qf1_state, qf2_state),
(qf1_loss_value, qf2_loss_value, ent_coef_value),
key,
)
@staticmethod
@jax.jit
def update_actor(
actor_state: RLTrainState,
qf1_state: RLTrainState,
qf2_state: RLTrainState,
ent_coef_state: TrainState,
observations: np.ndarray,
key: jax.random.KeyArray,
):
key, dropout_key_1, dropout_key_2, noise_key = jax.random.split(key, 4)
def actor_loss(params):
dist = actor_state.apply_fn(params, observations)
actor_actions = dist.sample(seed=noise_key)
log_prob = dist.log_prob(actor_actions).reshape(-1, 1)
qf1_pi = qf1_state.apply_fn(
qf1_state.params,
observations,
actor_actions,
True,
rngs={"dropout": dropout_key_1},
)
qf2_pi = qf1_state.apply_fn(
qf2_state.params,
observations,
actor_actions,
True,
rngs={"dropout": dropout_key_2},
)
qf1_pi = jnp.expand_dims(qf1_pi, axis=-1)
qf2_pi = jnp.expand_dims(qf2_pi, axis=-1)
# Concatenate quantiles from both critics
# (batch, n_quantiles, n_critics)
qf_pi = jnp.concatenate((qf1_pi, qf2_pi), axis=1)
qf_pi = qf_pi.mean(axis=2).mean(axis=1, keepdims=True)
ent_coef_value = ent_coef_state.apply_fn({"params": ent_coef_state.params})
return (ent_coef_value * log_prob - qf_pi).mean(), -log_prob.mean()
(actor_loss_value, entropy), grads = jax.value_and_grad(actor_loss, has_aux=True)(actor_state.params)
actor_state = actor_state.apply_gradients(grads=grads)
return actor_state, (qf1_state, qf2_state), actor_loss_value, key, entropy
@staticmethod
@jax.jit
def soft_update(tau: float, qf1_state: RLTrainState, qf2_state: RLTrainState):
qf1_state = qf1_state.replace(target_params=optax.incremental_update(qf1_state.params, qf1_state.target_params, tau))
qf2_state = qf2_state.replace(target_params=optax.incremental_update(qf2_state.params, qf2_state.target_params, tau))
return qf1_state, qf2_state
@staticmethod
@jax.jit
def update_temperature(target_entropy: np.ndarray, ent_coef_state: TrainState, entropy: float):
def temperature_loss(temp_params):
ent_coef_value = ent_coef_state.apply_fn({"params": temp_params})
# ent_coef_loss = (jnp.log(ent_coef_value) * (entropy - target_entropy)).mean()
ent_coef_loss = ent_coef_value * (entropy - target_entropy).mean()
return ent_coef_loss
ent_coef_loss, grads = jax.value_and_grad(temperature_loss)(ent_coef_state.params)
ent_coef_state = ent_coef_state.apply_gradients(grads=grads)
return ent_coef_state, ent_coef_loss
@staticmethod
@partial(jax.jit, static_argnames=["gradient_steps", "n_target_quantiles"])
def _train(
gamma: float,
tau: float,
target_entropy: np.ndarray,
gradient_steps: int,
n_target_quantiles: int,
data: ReplayBufferSamplesNp,
policy_delay_indices: flax.core.FrozenDict,
qf1_state: RLTrainState,
qf2_state: RLTrainState,
actor_state: TrainState,
ent_coef_state: TrainState,
key,
):
actor_loss_value = jnp.array(0)
for i in range(gradient_steps):
def slice(x, step=i):
assert x.shape[0] % gradient_steps == 0
batch_size = x.shape[0] // gradient_steps
return x[batch_size * step : batch_size * (step + 1)]
(
(qf1_state, qf2_state),
(qf1_loss_value, qf2_loss_value, ent_coef_value),
key,
) = TQC.update_critic(
gamma,
n_target_quantiles,
actor_state,
qf1_state,
qf2_state,
ent_coef_state,
slice(data.observations),
slice(data.actions),
slice(data.next_observations),
slice(data.rewards),
slice(data.dones),
key,
)
qf1_state, qf2_state = TQC.soft_update(tau, qf1_state, qf2_state)
# hack to be able to jit (n_updates % policy_delay == 0)
if i in policy_delay_indices:
(actor_state, (qf1_state, qf2_state), actor_loss_value, key, entropy) = TQC.update_actor(
actor_state,
qf1_state,
qf2_state,
ent_coef_state,
slice(data.observations),
key,
)
ent_coef_state, _ = TQC.update_temperature(target_entropy, ent_coef_state, entropy)
return (
qf1_state,
qf2_state,
actor_state,
ent_coef_state,
key,
(qf1_loss_value, qf2_loss_value, actor_loss_value, ent_coef_value),
)
|
/sbx_rl-0.7.0-py3-none-any.whl/sbx/tqc/tqc.py
| 0.850996 | 0.258066 |
tqc.py
|
pypi
|
from typing import Any, Dict, List, Optional, Tuple, Type, Union
import jax
import numpy as np
from gymnasium import spaces
from stable_baselines3 import HerReplayBuffer
from stable_baselines3.common.buffers import DictReplayBuffer, ReplayBuffer
from stable_baselines3.common.noise import ActionNoise
from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
from stable_baselines3.common.policies import BasePolicy
from stable_baselines3.common.type_aliases import GymEnv, Schedule
class OffPolicyAlgorithmJax(OffPolicyAlgorithm):
def __init__(
self,
policy: Type[BasePolicy],
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule],
qf_learning_rate: Optional[float] = None,
buffer_size: int = 1_000_000, # 1e6
learning_starts: int = 100,
batch_size: int = 256,
tau: float = 0.005,
gamma: float = 0.99,
train_freq: Union[int, Tuple[int, str]] = (1, "step"),
gradient_steps: int = 1,
action_noise: Optional[ActionNoise] = None,
replay_buffer_class: Optional[Type[ReplayBuffer]] = None,
replay_buffer_kwargs: Optional[Dict[str, Any]] = None,
optimize_memory_usage: bool = False,
policy_kwargs: Optional[Dict[str, Any]] = None,
tensorboard_log: Optional[str] = None,
verbose: int = 0,
device: str = "auto",
support_multi_env: bool = False,
monitor_wrapper: bool = True,
seed: Optional[int] = None,
use_sde: bool = False,
sde_sample_freq: int = -1,
use_sde_at_warmup: bool = False,
sde_support: bool = True,
supported_action_spaces: Optional[Tuple[Type[spaces.Space], ...]] = None,
):
super().__init__(
policy=policy,
env=env,
learning_rate=learning_rate,
buffer_size=buffer_size,
learning_starts=learning_starts,
batch_size=batch_size,
tau=tau,
gamma=gamma,
train_freq=train_freq,
gradient_steps=gradient_steps,
replay_buffer_class=replay_buffer_class,
replay_buffer_kwargs=replay_buffer_kwargs,
action_noise=action_noise,
use_sde=use_sde,
sde_sample_freq=sde_sample_freq,
use_sde_at_warmup=use_sde_at_warmup,
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
seed=seed,
sde_support=sde_support,
supported_action_spaces=supported_action_spaces,
support_multi_env=support_multi_env,
)
# Will be updated later
self.key = jax.random.PRNGKey(0)
# Note: we do not allow schedule for it
self.qf_learning_rate = qf_learning_rate
def _get_torch_save_params(self):
return [], []
def _excluded_save_params(self) -> List[str]:
excluded = super()._excluded_save_params()
excluded.remove("policy")
return excluded
def set_random_seed(self, seed: Optional[int]) -> None: # type: ignore[override]
super().set_random_seed(seed)
if seed is None:
# Sample random seed
seed = np.random.randint(2**14)
self.key = jax.random.PRNGKey(seed)
def _setup_model(self) -> None:
if self.replay_buffer_class is None: # type: ignore[has-type]
if isinstance(self.observation_space, spaces.Dict):
self.replay_buffer_class = DictReplayBuffer
else:
self.replay_buffer_class = ReplayBuffer
self._setup_lr_schedule()
# By default qf_learning_rate = pi_learning_rate
self.qf_learning_rate = self.qf_learning_rate or self.lr_schedule(1)
self.set_random_seed(self.seed)
# Make a local copy as we should not pickle
# the environment when using HerReplayBuffer
replay_buffer_kwargs = self.replay_buffer_kwargs.copy()
if issubclass(self.replay_buffer_class, HerReplayBuffer): # type: ignore[arg-type]
assert self.env is not None, "You must pass an environment when using `HerReplayBuffer`"
replay_buffer_kwargs["env"] = self.env
self.replay_buffer = self.replay_buffer_class( # type: ignore[misc]
self.buffer_size,
self.observation_space,
self.action_space,
device="cpu",  # force cpu device to ease torch -> numpy conversion
n_envs=self.n_envs,
optimize_memory_usage=self.optimize_memory_usage,
**replay_buffer_kwargs,
)
# Convert train freq parameter to TrainFreq object
self._convert_train_freq()
|
/sbx_rl-0.7.0-py3-none-any.whl/sbx/common/off_policy_algorithm.py
| 0.922805 | 0.25497 |
off_policy_algorithm.py
|
pypi
|
from typing import Any, Dict, List, Optional, Tuple, Type, TypeVar, Union
import gymnasium as gym
import jax
import numpy as np
import torch as th
from gymnasium import spaces
from stable_baselines3.common.buffers import RolloutBuffer
from stable_baselines3.common.callbacks import BaseCallback
from stable_baselines3.common.on_policy_algorithm import OnPolicyAlgorithm
from stable_baselines3.common.policies import BasePolicy
from stable_baselines3.common.type_aliases import GymEnv, Schedule
from stable_baselines3.common.vec_env import VecEnv
from sbx.ppo.policies import Actor, Critic, PPOPolicy
OnPolicyAlgorithmSelf = TypeVar("OnPolicyAlgorithmSelf", bound="OnPolicyAlgorithmJax")
class OnPolicyAlgorithmJax(OnPolicyAlgorithm):
policy: PPOPolicy # type: ignore[assignment]
actor: Actor
vf: Critic
def __init__(
self,
policy: Union[str, Type[BasePolicy]],
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule],
n_steps: int,
gamma: float,
gae_lambda: float,
ent_coef: float,
vf_coef: float,
max_grad_norm: float,
use_sde: bool,
sde_sample_freq: int,
tensorboard_log: Optional[str] = None,
monitor_wrapper: bool = True,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: str = "auto",
_init_setup_model: bool = True,
supported_action_spaces: Optional[Tuple[Type[spaces.Space], ...]] = None,
):
super().__init__(
policy=policy, # type: ignore[arg-type]
env=env,
learning_rate=learning_rate,
n_steps=n_steps,
gamma=gamma,
gae_lambda=gae_lambda,
ent_coef=ent_coef,
vf_coef=vf_coef,
max_grad_norm=max_grad_norm,
use_sde=use_sde,
sde_sample_freq=sde_sample_freq,
monitor_wrapper=monitor_wrapper,
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
seed=seed,
supported_action_spaces=supported_action_spaces,
_init_setup_model=_init_setup_model,
)
# Will be updated later
self.key = jax.random.PRNGKey(0)
def _get_torch_save_params(self):
return [], []
def _excluded_save_params(self) -> List[str]:
excluded = super()._excluded_save_params()
excluded.remove("policy")
return excluded
def set_random_seed(self, seed: Optional[int]) -> None: # type: ignore[override]
super().set_random_seed(seed)
if seed is None:
# Sample random seed
seed = np.random.randint(2**14)
self.key = jax.random.PRNGKey(seed)
def _setup_model(self) -> None:
self._setup_lr_schedule()
self.set_random_seed(self.seed)
self.rollout_buffer = RolloutBuffer(
self.n_steps,
self.observation_space,
self.action_space,
gamma=self.gamma,
gae_lambda=self.gae_lambda,
n_envs=self.n_envs,
device="cpu",  # force cpu device to ease torch -> numpy conversion
)
def collect_rollouts(
self,
env: VecEnv,
callback: BaseCallback,
rollout_buffer: RolloutBuffer,
n_rollout_steps: int,
) -> bool:
"""
Collect experiences using the current policy and fill a ``RolloutBuffer``.
The term rollout here refers to the model-free notion and should not
be used with the concept of rollout used in model-based RL or planning.
:param env: The training environment
:param callback: Callback that will be called at each step
(and at the beginning and end of the rollout)
:param rollout_buffer: Buffer to fill with rollouts
:param n_rollout_steps: Number of experiences to collect per environment
:return: True if function returned with at least `n_rollout_steps`
collected, False if callback terminated rollout prematurely.
"""
assert self._last_obs is not None, "No previous observation was provided" # type: ignore[has-type]
# Switch to eval mode (this affects batch norm / dropout)
n_steps = 0
rollout_buffer.reset()
# Sample new weights for the state dependent exploration
if self.use_sde:
self.policy.reset_noise()
callback.on_rollout_start()
while n_steps < n_rollout_steps:
if self.use_sde and self.sde_sample_freq > 0 and n_steps % self.sde_sample_freq == 0:
# Sample a new noise matrix
self.policy.reset_noise()
if not self.use_sde or isinstance(self.action_space, gym.spaces.Discrete):
# Always sample new stochastic action
self.policy.reset_noise()
obs_tensor, _ = self.policy.prepare_obs(self._last_obs) # type: ignore[has-type]
actions, log_probs, values = self.policy.predict_all(obs_tensor, self.policy.noise_key)
actions = np.array(actions)
log_probs = np.array(log_probs)
values = np.array(values)
# Rescale and perform action
clipped_actions = actions
# Clip the actions to avoid out of bound error
if isinstance(self.action_space, gym.spaces.Box):
clipped_actions = np.clip(actions, self.action_space.low, self.action_space.high)
new_obs, rewards, dones, infos = env.step(clipped_actions)
self.num_timesteps += env.num_envs
# Give access to local variables
callback.update_locals(locals())
if callback.on_step() is False:
return False
self._update_info_buffer(infos)
n_steps += 1
if isinstance(self.action_space, gym.spaces.Discrete):
# Reshape in case of discrete action
actions = actions.reshape(-1, 1)
# Handle timeout by bootstraping with value function
# see GitHub issue #633
for idx, done in enumerate(dones):
if (
done
and infos[idx].get("terminal_observation") is not None
and infos[idx].get("TimeLimit.truncated", False)
):
terminal_obs = self.policy.prepare_obs(infos[idx]["terminal_observation"])[0]
terminal_value = np.array(
self.vf.apply( # type: ignore[union-attr]
self.policy.vf_state.params,
terminal_obs,
).flatten()
)
rewards[idx] += self.gamma * terminal_value
rollout_buffer.add(
self._last_obs, # type: ignore
actions,
rewards,
self._last_episode_starts, # type: ignore
th.as_tensor(values),
th.as_tensor(log_probs),
)
self._last_obs = new_obs # type: ignore[assignment]
self._last_episode_starts = dones
values = np.array(
self.vf.apply( # type: ignore[union-attr]
self.policy.vf_state.params,
self.policy.prepare_obs(new_obs)[0], # type: ignore[arg-type]
).flatten()
)
rollout_buffer.compute_returns_and_advantage(last_values=th.as_tensor(values), dones=dones)
callback.on_rollout_end()
return True
|
/sbx_rl-0.7.0-py3-none-any.whl/sbx/common/on_policy_algorithm.py
| 0.936952 | 0.257088 |
on_policy_algorithm.py
|
pypi
|
[](https://github.com/GuignardLab/sc3D/raw/main/LICENSE)
[](https://pypi.org/project/sc-3D)
[](https://python.org)
[](https://github.com/GuignardLab/sc3D/actions)
[](https://codecov.io/gh/GuignardLab/sc3D)
# sc3D
__sc3D__ is a Python library to handle 3D spatial transcriptomic datasets.
__To access the 3D viewer for sc3D datasets, you can go to the following link: [GuignardLab/napari-sc3D-viewer](https://github.com/GuignardLab/napari-sc3D-viewer)__
You can find it on the Guignard Lab GitHub page: [GuignardLab/sc3D](https://github.com/GuignardLab/sc3D). There you will find Jupyter notebooks with examples of how to use the datasets.
This code was developed in the context of the following study:
[**Spatial transcriptomic maps of whole mouse embryos.**](https://www.nature.com/articles/s41588-023-01435-6) *Abhishek Sampath Kumar, Luyi Tian, Adriano Bolondi et al.*
The __sc3D__ code is based on the [anndata](https://anndata.readthedocs.io/en/latest/) and [Scanpy](https://scanpy.readthedocs.io/en/stable/) libraries and allows one to read data, register arrays and compute 3D differential expression.
The datasets necessary to run the tests and explore the results can be downloaded [here](https://figshare.com/s/9c73df7fd39e3ca5422d) (unregistered dataset, to test the provided algorithms) and [here](https://figshare.com/s/1c29d867bc8b90d754d2) (already registered atlas, to browse with our visualiser). The visualiser itself is available [here](https://www.github.com/guignardlab/napari-sc3d-viewer).
## Description of the GitHub repository
- data: a folder containing examples for the tissue color and tissue name files
- src: a folder containing the source code
- txt: a folder containing the text describing the method (LaTeX, pdf and docx formats are available)
- README.md: this file
- notebooks/Test-embryo.ipynb: Basic read and write examples (many different ways of writing)
- notebooks/Spatial-differential-expression.ipynb: a jupyter notebook with some examples on how to perform the spatial differential expression
- notebooks/Embryo-registration.ipynb: a jupyter notebook with an example on how to do the array registration
- setup.py: Setup file to install the library
## Installation
We strongly advise using a virtual environment to install this package, for example with conda or miniconda:
```shell
conda create -n sc-3D
conda activate sc-3D
```
If necessary, install `pip`:
```shell
conda install pip
```
Then, to install the latest stable version:
```shell
pip install sc-3D
```
or to install the latest version from the GitHub repository:
```shell
git clone https://github.com/GuignardLab/sc3D.git
cd sc3D
pip install .
```
### Troubleshooting for the latest Apple M1 chips (macOS)
If you are working on an M1 chip, some of the necessary libraries may not yet be available from the usual channels.
To overcome this issue, we recommend manually installing the latest GitHub version of __sc3D__ using [miniforge](https://github.com/conda-forge/miniforge) instead of anaconda or miniconda.
Once miniforge is installed and working, you can run the following commands:
```shell
conda create -n sc-3D
conda activate sc-3D
```
to create your environment, then:
```shell
git clone https://github.com/GuignardLab/sc3D.git
cd sc3D
conda install pip scipy numpy matplotlib pandas seaborn anndata napari
pip install .
```
If the previous commands still do not work, you may need to install the `pkg-config` package. You can find some information on how to do so here: [install pkg-config](https://gist.github.com/jl/9e5ebbc9ccf44f3c804e)
## Basic usage
Once installed, the library can be called the following way:
```python
from sc3D import Embryo
```
To import some data:
**Note: for the time being, the following conventions are expected:**
- **the x-y coordinates are stored in `data.obsm['X_spatial']`**
- **the array number should be stored in `data.obs['orig.ident']` in the format `".*_[0-9]*"` where the digits after the underscore (`_`) are the id of the array**
- **the tissue type has to be stored in `data.obs['predicted.id']`**
- **the gene names have to be stored as indices or in `data.var['feature_name']`**
Since version `0.1.2`, one can specify the names of the columns where the necessary information is stored using the following parameters (see the illustrative sketch after the usage example below):
- `tissue_id` to specify the tissue id column
- `array_id` to specify the array/puck/slice id column
- `pos_id` to specify the position column (an `x, y` position is expected in this column)
- `gene_name_id` to specify the gene name column
- `pos_reg_id` to specify the registered position column (an `x, y, z` position is expected in this column)
```python
# To read the data
embryo = Embryo('path/to/data.h5ad')
# To remove potential spatial outliers
embryo.removing_spatial_outliers(th=outlier_threshold)
# To register the arrays and compute the
# spline interpolations
embryo.reconstruct_intermediate(embryo, th_d=th_d,
genes=genes_of_interest)
# To save the dataset as a registered dataset (to then look at it in the 3D visualizer)
embryo.save_anndata('path/to/out/registered.h5ad')
# To compute the 3D differential expression for selected tissues
tissues_to_process = [5, 10, 12, 18, 21, 24, 30, 31, 33, 34, 39]
th_vol = .025
_ = embryo.get_3D_differential_expression(tissues_to_process, th_vol)
```
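If your AnnData object does not follow the default conventions, the column names can be passed when loading the data. The following is a minimal sketch rather than verbatim sc3D documentation: the column names (`cell_type`, `slice_id`, `xy_position`, `gene_name`, `xyz_registered`) are hypothetical placeholders, and the keyword arguments are assumed to be the parameters listed above.
```python
from sc3D import Embryo

# Illustrative sketch: replace the placeholder column names below with the
# columns actually present in your AnnData object (.obs / .obsm / .var).
embryo = Embryo(
    'path/to/data.h5ad',
    tissue_id='cell_type',         # tissue id column
    array_id='slice_id',           # array/puck/slice id column
    pos_id='xy_position',          # x, y position column
    gene_name_id='gene_name',      # gene name column
    pos_reg_id='xyz_registered',   # registered x, y, z position column, if present
)
```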
The dataset used for the project this code was developed for can be downloaded [here](https://cellxgene.cziscience.com/collections/d74b6979-efba-47cd-990a-9d80ccf29055/private) (under the name `mouse_embryo_E8.5_merged_data`).
Many other functions are available; examples of their usage can be found in the provided Jupyter notebooks.
## Running the notebooks
Two example notebooks are provided.
To run them, first install Jupyter:
```shell
conda install jupyter
```
or
```shell
pip install jupyter
```
The notebooks can then be started from a terminal in the folder containing the `.ipynb` files with the following command:
```shell
jupyter notebook
```
The notebooks should be self-contained.
Note that the test dataset is not included in this repository but can be downloaded from [here](https://cellxgene.cziscience.com/collections/d74b6979-efba-47cd-990a-9d80ccf29055/private).
|
/sc-3D-1.1.0.tar.gz/sc-3D-1.1.0/README.md
| 0.693161 | 0.989511 |
README.md
|
pypi
|
import numpy as np
import math
class transformations:
_EPS = np.finfo(float).eps * 4.0
@classmethod
def unit_vector(clf, data, axis=None, out=None):
"""Return ndarray normalized by length, i.e. Euclidean norm, along axis.
>>> v0 = numpy.random.random(3)
>>> v1 = unit_vector(v0)
>>> numpy.allclose(v1, v0 / numpy.linalg.norm(v0))
True
>>> v0 = numpy.random.rand(5, 4, 3)
>>> v1 = unit_vector(v0, axis=-1)
>>> v2 = v0 / numpy.expand_dims(numpy.sqrt(numpy.sum(v0*v0, axis=2)), 2)
>>> numpy.allclose(v1, v2)
True
>>> v1 = unit_vector(v0, axis=1)
>>> v2 = v0 / numpy.expand_dims(numpy.sqrt(numpy.sum(v0*v0, axis=1)), 1)
>>> numpy.allclose(v1, v2)
True
>>> v1 = numpy.empty((5, 4, 3))
>>> unit_vector(v0, axis=1, out=v1)
>>> numpy.allclose(v1, v2)
True
>>> list(unit_vector([]))
[]
>>> list(unit_vector([1]))
[1.0]
"""
if out is None:
data = np.array(data, dtype=np.float64, copy=True)
if data.ndim == 1:
data /= math.sqrt(np.dot(data, data))
return data
else:
if out is not data:
out[:] = np.array(data, copy=False)
data = out
length = np.atleast_1d(np.sum(data * data, axis))
np.sqrt(length, length)
if axis is not None:
length = np.expand_dims(length, axis)
data /= length
if out is None:
return data
return None
@classmethod
def rotation_matrix(clf, angle, direction, point=None):
"""Return matrix to rotate about axis defined by point and direction.
>>> R = rotation_matrix(math.pi/2, [0, 0, 1], [1, 0, 0])
>>> numpy.allclose(numpy.dot(R, [0, 0, 0, 1]), [1, -1, 0, 1])
True
>>> angle = (random.random() - 0.5) * (2*math.pi)
>>> direc = numpy.random.random(3) - 0.5
>>> point = numpy.random.random(3) - 0.5
>>> R0 = rotation_matrix(angle, direc, point)
>>> R1 = rotation_matrix(angle-2*math.pi, direc, point)
>>> is_same_transform(R0, R1)
True
>>> R0 = rotation_matrix(angle, direc, point)
>>> R1 = rotation_matrix(-angle, -direc, point)
>>> is_same_transform(R0, R1)
True
>>> I = numpy.identity(4, numpy.float64)
>>> numpy.allclose(I, rotation_matrix(math.pi*2, direc))
True
>>> numpy.allclose(2, numpy.trace(rotation_matrix(math.pi/2,
... direc, point)))
True
"""
import math
sina = math.sin(angle)
cosa = math.cos(angle)
direction = clf.unit_vector(direction[:3])
# rotation matrix around unit vector
R = np.diag([cosa, cosa, cosa])
R += np.outer(direction, direction) * (1.0 - cosa)
direction *= sina
R += np.array(
[
[0.0, -direction[2], direction[1]],
[direction[2], 0.0, -direction[0]],
[-direction[1], direction[0], 0.0],
]
)
M = np.identity(4)
M[:3, :3] = R
if point is not None:
# rotation not around origin
point = np.array(point[:3], dtype=np.float64, copy=False)
M[:3, 3] = point - np.dot(R, point)
return M
@classmethod
def vector_norm(clf, data, axis=None, out=None):
"""Return length, i.e. Euclidean norm, of ndarray along axis.
>>> v = numpy.random.random(3)
>>> n = vector_norm(v)
>>> numpy.allclose(n, numpy.linalg.norm(v))
True
>>> v = numpy.random.rand(6, 5, 3)
>>> n = vector_norm(v, axis=-1)
>>> numpy.allclose(n, numpy.sqrt(numpy.sum(v*v, axis=2)))
True
>>> n = vector_norm(v, axis=1)
>>> numpy.allclose(n, numpy.sqrt(numpy.sum(v*v, axis=1)))
True
>>> v = numpy.random.rand(5, 4, 3)
>>> n = numpy.empty((5, 3))
>>> vector_norm(v, axis=1, out=n)
>>> numpy.allclose(n, numpy.sqrt(numpy.sum(v*v, axis=1)))
True
>>> vector_norm([])
0.0
>>> vector_norm([1])
1.0
"""
data = np.array(data, dtype=np.float64, copy=True)
if out is None:
if data.ndim == 1:
return math.sqrt(np.dot(data, data))
data *= data
out = np.atleast_1d(np.sum(data, axis=axis))
np.sqrt(out, out)
return out
data *= data
np.sum(data, axis=axis, out=out)
np.sqrt(out, out)
return None
@classmethod
def quaternion_matrix(clf, quaternion):
"""Return homogeneous rotation matrix from quaternion.
>>> M = quaternion_matrix([0.99810947, 0.06146124, 0, 0])
>>> numpy.allclose(M, rotation_matrix(0.123, [1, 0, 0]))
True
>>> M = quaternion_matrix([1, 0, 0, 0])
>>> numpy.allclose(M, numpy.identity(4))
True
>>> M = quaternion_matrix([0, 1, 0, 0])
>>> numpy.allclose(M, numpy.diag([1, -1, -1, 1]))
True
"""
q = np.array(quaternion, dtype=np.float64, copy=True)
n = np.dot(q, q)
if n < clf._EPS:
return np.identity(4)
q *= math.sqrt(2.0 / n)
q = np.outer(q, q)
return np.array(
[
[
1.0 - q[2, 2] - q[3, 3],
q[1, 2] - q[3, 0],
q[1, 3] + q[2, 0],
0.0,
],
[
q[1, 2] + q[3, 0],
1.0 - q[1, 1] - q[3, 3],
q[2, 3] - q[1, 0],
0.0,
],
[
q[1, 3] - q[2, 0],
q[2, 3] + q[1, 0],
1.0 - q[1, 1] - q[2, 2],
0.0,
],
[0.0, 0.0, 0.0, 1.0],
]
)
@classmethod
def affine_matrix_from_points(
clf, v0, v1, shear=True, scale=True, usesvd=True
):
"""Return affine transform matrix to register two point sets.
v0 and v1 are shape (ndims, -1) arrays of at least ndims non-homogeneous
coordinates, where ndims is the dimensionality of the coordinate space.
If shear is False, a similarity transformation matrix is returned.
If also scale is False, a rigid/Euclidean transformation matrix
is returned.
By default the algorithm by Hartley and Zissermann [15] is used.
If usesvd is True, similarity and Euclidean transformation matrices
are calculated by minimizing the weighted sum of squared deviations
(RMSD) according to the algorithm by Kabsch [8].
Otherwise, and if ndims is 3, the quaternion based algorithm by Horn [9]
is used, which is slower when using this Python implementation.
The returned matrix performs rotation, translation and uniform scaling
(if specified).
>>> v0 = [[0, 1031, 1031, 0], [0, 0, 1600, 1600]]
>>> v1 = [[675, 826, 826, 677], [55, 52, 281, 277]]
>>> affine_matrix_from_points(v0, v1)
array([[ 0.14549, 0.00062, 675.50008],
[ 0.00048, 0.14094, 53.24971],
[ 0. , 0. , 1. ]])
>>> T = translation_matrix(numpy.random.random(3)-0.5)
>>> R = random_rotation_matrix(numpy.random.random(3))
>>> S = scale_matrix(random.random())
>>> M = concatenate_matrices(T, R, S)
>>> v0 = (numpy.random.rand(4, 100) - 0.5) * 20
>>> v0[3] = 1
>>> v1 = numpy.dot(M, v0)
>>> v0[:3] += numpy.random.normal(0, 1e-8, 300).reshape(3, -1)
>>> M = affine_matrix_from_points(v0[:3], v1[:3])
>>> numpy.allclose(v1, numpy.dot(M, v0))
True
More examples in superimposition_matrix()
"""
v0 = np.array(v0, dtype=np.float64, copy=True)
v1 = np.array(v1, dtype=np.float64, copy=True)
ndims = v0.shape[0]
if ndims < 2 or v0.shape[1] < ndims or v0.shape != v1.shape:
raise ValueError("input arrays are of wrong shape or type")
# move centroids to origin
t0 = -np.mean(v0, axis=1)
M0 = np.identity(ndims + 1)
M0[:ndims, ndims] = t0
v0 += t0.reshape(ndims, 1)
t1 = -np.mean(v1, axis=1)
M1 = np.identity(ndims + 1)
M1[:ndims, ndims] = t1
v1 += t1.reshape(ndims, 1)
if shear:
# Affine transformation
A = np.concatenate((v0, v1), axis=0)
u, s, vh = np.linalg.svd(A.T)
vh = vh[:ndims].T
B = vh[:ndims]
C = vh[ndims : 2 * ndims]
t = np.dot(C, np.linalg.pinv(B))
t = np.concatenate((t, np.zeros((ndims, 1))), axis=1)
M = np.vstack((t, ((0.0,) * ndims) + (1.0,)))
elif usesvd or ndims != 3:
# Rigid transformation via SVD of covariance matrix
u, s, vh = np.linalg.svd(np.dot(v1, v0.T))
# rotation matrix from SVD orthonormal bases
R = np.dot(u, vh)
if np.linalg.det(R) < 0.0:
# R does not constitute right handed system
R -= np.outer(u[:, ndims - 1], vh[ndims - 1, :] * 2.0)
s[-1] *= -1.0
# homogeneous transformation matrix
M = np.identity(ndims + 1)
M[:ndims, :ndims] = R
else:
# Rigid transformation matrix via quaternion
# compute symmetric matrix N
xx, yy, zz = np.sum(v0 * v1, axis=1)
xy, yz, zx = np.sum(v0 * np.roll(v1, -1, axis=0), axis=1)
xz, yx, zy = np.sum(v0 * np.roll(v1, -2, axis=0), axis=1)
N = [
[xx + yy + zz, 0.0, 0.0, 0.0],
[yz - zy, xx - yy - zz, 0.0, 0.0],
[zx - xz, xy + yx, yy - xx - zz, 0.0],
[xy - yx, zx + xz, yz + zy, zz - xx - yy],
]
# quaternion: eigenvector corresponding to most positive eigenvalue
w, V = np.linalg.eigh(N)
q = V[:, np.argmax(w)]
q /= clf.vector_norm(q) # unit quaternion
# homogeneous transformation matrix
M = clf.quaternion_matrix(q)
if scale and not shear:
# Affine transformation; scale is ratio of RMS deviations from centroid
v0 *= v0
v1 *= v1
M[:ndims, :ndims] *= math.sqrt(np.sum(v1) / np.sum(v0))
# move centroids back
M = np.dot(np.linalg.inv(M1), np.dot(M, M0))
M /= M[ndims, ndims]
return M
|
/sc-3D-1.1.0.tar.gz/sc-3D-1.1.0/src/sc3D/transformations.py
| 0.878255 | 0.624064 |
transformations.py
|
pypi
|
import numpy as np
from sklearn.preprocessing import LabelEncoder
from anndata import AnnData
from typing import Optional
from time import time
from sc_FlowGrid._sc_FlowGrid import *
def calinski_harabasz_score(X, labels):
"""Compute the Calinski and Harabasz score.
It is also known as the Variance Ratio Criterion.
The score is defined as ratio between the within-cluster dispersion and
the between-cluster dispersion.
Read more in the :ref:`User Guide <calinski_harabasz_index>`.
Parameters
----------
X : array-like, shape (``n_samples``, ``n_features``)
List of ``n_features``-dimensional data points. Each row corresponds
to a single data point.
labels : array-like, shape (``n_samples``,)
Predicted labels for each sample.
Returns
-------
score : float
The resulting Calinski-Harabasz score.
References
----------
.. [1] `T. Calinski and J. Harabasz, 1974. "A dendrite method for cluster
analysis". Communications in Statistics
<https://www.tandfonline.com/doi/abs/10.1080/03610927408827101>`_
"""
#X, labels = check_X_y(X, labels)
le = LabelEncoder()
labels = le.fit_transform(labels)
n_samples, _ = X.shape
n_labels = len(le.classes_)
#check_number_of_labels(n_labels, n_samples)
extra_disp, intra_disp = 0., 0.
mean = np.mean(X, axis=0)
for k in range(n_labels):
cluster_k = X[labels == k]
mean_k = np.mean(cluster_k, axis=0)
extra_disp += len(cluster_k) * np.sum((mean_k - mean) ** 2)
intra_disp += np.sum((cluster_k - mean_k) ** 2)
return (1. if intra_disp == 0. else
extra_disp * (n_samples - n_labels) /
(intra_disp * (n_labels - 1.)))
set_n = 5
def sc_autoFlowGrid(
adata: AnnData,
set_n: int = set_n,
Bin_n: Optional[list] = None,
Eps: Optional[list] = None,
copy: bool = False,
chunked: bool = False,
chunk_size: Optional[int] = None,
) -> Optional[AnnData]:
t0=time()
adata = adata.copy() if copy else adata
Bin_n = Bin_n if Bin_n else [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]
Eps = Eps if Eps else [1.2, 1.6, 1.9, 2.1, 2.3, 2.7]
feature_data = adata.obsm['X_pca'].astype('float64')
CHixNobs_values = {}
for bin_n in Bin_n:
for eps in Eps:
#print("0 runing time: "+ str(round(time()-t1,3)))
sc_FlowGrid(adata,bin_n,eps)
#print("1 runing time: "+ str(round(time()-t1,3)))
label_data = adata.obs['binN_'+str(bin_n)+'_eps_'+ str(eps)+'_FlowGrid'].tolist()
#print("2 runing time: "+ str(round(time()-t1,3)))
CHixNobs_values['binN_'+str(bin_n)+'_eps_'+str(eps)+'_FlowGrid'] = calinski_harabasz_score(feature_data, label_data)\
* len(set(label_data))
maxn_CHixNobs_values = sorted(CHixNobs_values, key=CHixNobs_values.get, reverse=True)[:set_n]
print(str(set_n)+ " sets of parameters are recommended.\n" +"Suggestion completed in : "+ str(round(time()-t0,3)) + " seconds.")
return maxn_CHixNobs_values
|
/sc_FlowGrid-0.1.tar.gz/sc_FlowGrid-0.1/sc_FlowGrid/_sc_autoFlowGrid.py
| 0.867345 | 0.73338 |
_sc_autoFlowGrid.py
|
pypi
|
import pandas as pd
import os
def get_data(data_name='adj_close_price',
columns=None,
start_time='2016-01-01', end_time='2021-01-01',
frequency=1):
base_data_names = ['adj_close_price', 'adj_open_price', 'adj_high_price', 'adj_low_price',
'volume', 'value', 'openint']
base_columns = ['IF', 'IC', 'IH']
if columns is None:
columns = base_columns
if data_name not in base_data_names:
return
if not isinstance(columns, list):
return
for c in columns:
if c not in base_columns:
print(f'No {c} Data')
return
if 240 % frequency != 0:
print(F'240%frequency should be 0 and frequency should be smaller than or equal to 240')
return
dirname, _ = os.path.split(os.path.abspath(__file__))
path = os.path.join(dirname, 'dataset', f'_{data_name}.csv')
output = pd.read_csv(path, index_col=0, header=0)
output.index = pd.to_datetime(output.index)
output = output.loc[pd.to_datetime(start_time):pd.to_datetime(end_time), columns]
if frequency <= 240 and frequency != 1:
if data_name in ['adj_close_price', 'openint']:
output = output.resample(f'{int(frequency)}T', label='right', closed='right').last().dropna()
elif data_name in ['adj_open_price']:
output = output.resample(f'{int(frequency)}T', label='right', closed='right').first().dropna()
elif data_name in ['adj_high_price']:
output = output.resample(f'{int(frequency)}T', label='right', closed='right').max().dropna()
elif data_name in ['adj_low_price']:
output = output.resample(f'{int(frequency)}T', label='right', closed='right').min().dropna()
elif data_name in ['volume', 'value']:
output = output.resample(f'{int(frequency)}T', label='right', closed='right').sum().dropna()
if frequency == 240:
output.index = pd.to_datetime(output.index.date)
return output
|
/sc-backtest-0.1.14.tar.gz/sc-backtest-0.1.14/sc_backtest/dataset.py
| 0.400046 | 0.284399 |
dataset.py
|
pypi
|