repo (string, 7-60 chars) | instance_id (string, 11-64 chars) | base_commit (string, 40 chars) | patch (string, 237-114k chars) | test_patch (string, 1 distinct value) | problem_statement (string, 20-58k chars) | hints_text (string, 0-67.7k chars) | created_at (timestamp[ns], 2015-08-08 06:08:58 to 2024-12-12 22:07:22) | environment_setup_commit (string, 1 distinct value) | version (string, 1 distinct value) | FAIL_TO_PASS (sequence, length 0) | PASS_TO_PASS (sequence, length 0) |
---|---|---|---|---|---|---|---|---|---|---|---|
Rishit-dagli/Conformer | Rishit-dagli__Conformer-9 | 1fd223ea8e3610266141d7826fadf115065fd1a7 | diff --git a/conformer_tf/attention.py b/conformer_tf/attention.py
index d7db91f..127e6c0 100644
--- a/conformer_tf/attention.py
+++ b/conformer_tf/attention.py
@@ -13,9 +13,9 @@ def __init__(
self.heads = heads
self.scale = dim_head ** -0.5
- self.to_q = tf.keras.layers.Dense(inner_dim, input_dim=dim, use_bias=False)
- self.to_kv = tf.keras.layers.Dense(inner_dim * 2, input_dim=dim, use_bias=False)
- self.to_out = tf.keras.layers.Dense(dim, input_dim=inner_dim)
+ self.to_q = tf.keras.layers.Dense(inner_dim, use_bias=False)
+ self.to_kv = tf.keras.layers.Dense(inner_dim * 2, use_bias=False)
+ self.to_out = tf.keras.layers.Dense(dim)
self.max_pos_emb = max_pos_emb
self.rel_pos_emb = tf.keras.layers.Embedding(2 * max_pos_emb + 1, dim_head)
@@ -32,8 +32,12 @@ def call(self, inputs, context=None, mask=None, context_mask=None):
else:
has_context = True
- q, k, v = tf.split((self.to_q(inputs), *self.to_kv(context)), 2, axis=-1)
- q, k, v = map(lambda t: rearrange(t, "b n (h d) -> b h n d", h=h), (q, k, v))
+ kv = tf.split(self.to_kv(context), num_or_size_splits=2, axis=-1)
+ q, k, v = (self.to_q(inputs), *kv)
+
+ q, k, v = map(
+ lambda t: rearrange(t, "b n (h d) -> b h n d", h=heads), (q, k, v)
+ )
dots = tf.einsum("b h i d, b h j d -> b h i j", q, k) * self.scale
seq = tf.range(n)
diff --git a/conformer_tf/conformer_tf.py b/conformer_tf/conformer_tf.py
index 2650cff..d750334 100644
--- a/conformer_tf/conformer_tf.py
+++ b/conformer_tf/conformer_tf.py
@@ -1,11 +1,3 @@
-import einops
-import tensorflow as tf
-from einops import rearrange
-from einops.layers.tensorflow import Rearrange
-
-from .attention import Attention
-
-
class Swish(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(Swish, self).__init__(**kwargs)
@@ -68,7 +60,7 @@ def __init__(self, dim, mult=4, dropout=0.0, **kwargs):
super(FeedForward, self).__init__(**kwargs)
self.net = tf.keras.Sequential(
[
- tf.keras.layers.Dense(dim * mult, activation=Swish(), input_dim=dim),
+ tf.keras.layers.Dense(dim * mult, activation=Swish()),
tf.keras.layers.Dropout(dropout),
tf.keras.layers.Dense(dim, input_dim=dim * mult),
tf.keras.layers.Dropout(dropout),
@@ -162,7 +154,7 @@ def __init__(
self.post_norm = tf.keras.layers.LayerNormalization(axis=-1)
- def forward(self, inputs, mask=None):
+ def call(self, inputs, mask=None):
inputs = self.ff1(inputs) + inputs
inputs = self.attn(inputs, mask=mask) + inputs
inputs = self.conv(inputs) + inputs
diff --git a/example/conformer_example.ipynb b/example/conformer_example.ipynb
index 767d500..5df3ff2 100644
--- a/example/conformer_example.ipynb
+++ b/example/conformer_example.ipynb
@@ -1,159 +1,159 @@
{
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
- "colab": {
- "name": "conformer-example.ipynb",
- "provenance": [],
- "authorship_tag": "ABX9TyM7dXN69UApQcqmFIiwdh63",
- "include_colab_link": true
- },
- "kernelspec": {
- "name": "python3",
- "display_name": "Python 3"
- },
- "language_info": {
- "name": "python"
- }
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "name": "conformer-example.ipynb",
+ "provenance": [],
+ "authorship_tag": "ABX9TyM7dXN69UApQcqmFIiwdh63",
+ "include_colab_link": true
},
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "view-in-github",
- "colab_type": "text"
- },
- "source": [
- "<a href=\"https://colab.research.google.com/github/Rishit-dagli/Conformer/blob/main/example/conformer_example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "# Conformer Example\n",
- "\n",
- "This notebook shows the the process of using the [conformer-tf](https://pypi.org/project/conformer-tf/) Python package. This repo implements [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100) by Gulati et al. in TensorFlow. _**Conformer**_ achieves the best of both worlds (transformers for content-based global interactions and CNNs to exploit local features) by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way.\n",
- "\n",
- "If you find this useful please consider giving a â to [the repo](https://github.com/Rishit-dagli/Conformer). "
- ],
- "metadata": {
- "id": "lwCN6b-WWKJu"
- }
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Install the package"
- ],
- "metadata": {
- "id": "wfK31aQjWdzy"
- }
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "dOyZmWyWWCOq"
- },
- "outputs": [],
- "source": [
- "!pip install conformer-tf"
- ]
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Setup"
- ],
- "metadata": {
- "id": "DGJG1iwTWhNH"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "import tensorflow as tf\n",
- "from conformer_tf import ConformerConvModule\n",
- "from conformer_tf import ConformerBlock"
- ],
- "metadata": {
- "id": "jJghBPbMWkPh"
- },
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Create a Convolutional Module"
- ],
- "metadata": {
- "id": "EieWqqdFWxaH"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "layer = ConformerConvModule(\n",
- " dim = 512,\n",
- " causal = False, # whether it is auto-regressive\n",
- " expansion_factor = 2, # what multiple of the dimension to expand for the depthwise convolution\n",
- " kernel_size = 31,\n",
- " dropout = 0.\n",
- ")\n",
- "\n",
- "x = tf.random.normal([1, 1024, 512])\n",
- "x = layer(x) + x # (1, 1024, 512)"
- ],
- "metadata": {
- "id": "D2Nq8WstW1du"
- },
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "code",
- "source": [
- "x.shape"
- ],
- "metadata": {
- "id": "Dsfn8jOrW5Ye"
- },
- "execution_count": null,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "source": [
- "## Create a Conformer Block"
- ],
- "metadata": {
- "id": "_mLXo2zcW6Pn"
- }
- },
- {
- "cell_type": "code",
- "source": [
- "conformer_block = ConformerBlock(\n",
- " dim = 512,\n",
- " dim_head = 64,\n",
- " heads = 8,\n",
- " ff_mult = 4,\n",
- " conv_expansion_factor = 2,\n",
- " conv_kernel_size = 31,\n",
- " attn_dropout = 0.,\n",
- " ff_dropout = 0.,\n",
- " conv_dropout = 0.\n",
- ")\n",
- "\n",
- "x = tf.random.normal([1, 1024, 512])\n",
- "conformer_block(x) # (1, 1024, 512)"
- ],
- "metadata": {
- "id": "5QeHGwiaW_tX"
- },
- "execution_count": null,
- "outputs": []
- }
- ]
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ "<a href=\"https://colab.research.google.com/github/Rishit-dagli/Conformer/blob/main/example/conformer_example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Conformer Example\n",
+ "\n",
+ "This notebook shows the the process of using the [conformer-tf](https://pypi.org/project/conformer-tf/) Python package. This repo implements [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100) by Gulati et al. in TensorFlow. _**Conformer**_ achieves the best of both worlds (transformers for content-based global interactions and CNNs to exploit local features) by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way.\n",
+ "\n",
+ "If you find this useful please consider giving a â to [the repo](https://github.com/Rishit-dagli/Conformer). "
+ ],
+ "metadata": {
+ "id": "lwCN6b-WWKJu"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Install the package"
+ ],
+ "metadata": {
+ "id": "wfK31aQjWdzy"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "dOyZmWyWWCOq"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install conformer-tf"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Setup"
+ ],
+ "metadata": {
+ "id": "DGJG1iwTWhNH"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import tensorflow as tf\n",
+ "from conformer_tf import ConformerConvModule\n",
+ "from conformer_tf import ConformerBlock"
+ ],
+ "metadata": {
+ "id": "jJghBPbMWkPh"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Create a Convolutional Module"
+ ],
+ "metadata": {
+ "id": "EieWqqdFWxaH"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "layer = ConformerConvModule(\n",
+ " dim=512,\n",
+ " causal=False, # whether it is auto-regressive\n",
+ " expansion_factor=2, # what multiple of the dimension to expand for the depthwise convolution\n",
+ " kernel_size=31,\n",
+ " dropout=0.0,\n",
+ ")\n",
+ "\n",
+ "x = tf.random.normal([1, 1024, 512])\n",
+ "x = layer(x) + x # (1, 1024, 512)"
+ ],
+ "metadata": {
+ "id": "D2Nq8WstW1du"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "x.shape"
+ ],
+ "metadata": {
+ "id": "Dsfn8jOrW5Ye"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Create a Conformer Block"
+ ],
+ "metadata": {
+ "id": "_mLXo2zcW6Pn"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "conformer_block = ConformerBlock(\n",
+ " dim=512,\n",
+ " dim_head=64,\n",
+ " heads=8,\n",
+ " ff_mult=4,\n",
+ " conv_expansion_factor=2,\n",
+ " conv_kernel_size=31,\n",
+ " attn_dropout=0.0,\n",
+ " ff_dropout=0.0,\n",
+ " conv_dropout=0.0,\n",
+ ")\n",
+ "\n",
+ "x = tf.random.normal([1, 1024, 512])\n",
+ "conformer_block(x) # (1, 1024, 512)"
+ ],
+ "metadata": {
+ "id": "5QeHGwiaW_tX"
+ },
+ "execution_count": null,
+ "outputs": []
+ }
+ ]
}
\ No newline at end of file
| Complete ConformerBlock implementation
The `ConformerBlock` class is currently WIP with some mistakes in the `Attention` class
| 2022-01-20T09:44:09 | 0.0 | [] | [] |
|||
mborgerson/fatx | mborgerson__fatx-34 | d132e7bd5de9a1c16e85ed1fcebaf9611f38d90c | diff --git a/CMakeLists.txt b/CMakeLists.txt
index 237a3f2..112e2d0 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -26,10 +26,11 @@ set(FATX_SOURCES "")
set(FATXFS_SOURCES "")
add_subdirectory(${CMAKE_SOURCE_DIR}/src)
+set(CMAKE_POSITION_INDEPENDENT_CODE ON)
add_definitions("-ansi -Wall --std=c99 -pedantic")
add_library(fatx ${FATX_SOURCES})
-if(FUSE_MODULE_NAME)
+if(FUSE_FOUND)
add_executable(fatxfs ${FATXFS_SOURCES})
target_link_libraries(fatxfs fatx ${FUSE_LDFLAGS})
target_include_directories(fatxfs PUBLIC ${FUSE_INCLUDE_DIRS})
diff --git a/build_cffi.py b/build_cffi.py
index 64a9db5..fbdf298 100644
--- a/build_cffi.py
+++ b/build_cffi.py
@@ -88,7 +88,7 @@ def ffibuilder():
struct fatx_fs *pyfatx_open_helper(void);
- int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size);
+ int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size, size_t sectors_per_cluster);
int fatx_close_device(struct fatx_fs *fs);
int fatx_open_dir(struct fatx_fs *fs, char const *path, struct fatx_dir *dir);
int fatx_read_dir(struct fatx_fs *fs, struct fatx_dir *dir, struct fatx_dirent *entry, struct fatx_attr *attr, struct fatx_dirent **result);
diff --git a/pyfatx/__init__.py b/pyfatx/__init__.py
index e90f03e..e1c7f09 100644
--- a/pyfatx/__init__.py
+++ b/pyfatx/__init__.py
@@ -60,12 +60,12 @@ def __init__(self, path: str, offset: Optional[int] = None, size: Optional[int]
'y': (0x2ee80000, 0x02ee00000),
'z': (0x5dc80000, 0x02ee00000),
'c': (0x8ca80000, 0x01f400000),
- 'e': (0xabe80000, 0x131f00000),
+ 'e': (0xabe80000, 0x1312d6000),
}
offset, size = partitions[drive]
if isinstance(path, str):
path = path.encode('utf-8')
- s = fatx_open_device(self.fs, path, offset, size, sector_size)
+ s = fatx_open_device(self.fs, path, offset, size, sector_size, 0)
if s != 0:
self.fs = None
assert s == 0
diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index 011a2a5..c425cff 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -21,6 +21,7 @@ set(FATX_SOURCES ${FATX_SOURCES}
${CMAKE_CURRENT_SOURCE_DIR}/fatx_attr.c
${CMAKE_CURRENT_SOURCE_DIR}/fatx_dev.c
${CMAKE_CURRENT_SOURCE_DIR}/fatx_dir.c
+ ${CMAKE_CURRENT_SOURCE_DIR}/fatx_disk.c
${CMAKE_CURRENT_SOURCE_DIR}/fatx_fat.c
${CMAKE_CURRENT_SOURCE_DIR}/fatx_file.c
${CMAKE_CURRENT_SOURCE_DIR}/fatx_internal.h
diff --git a/src/fatx.c b/src/fatx.c
index 29be6f3..815fbf9 100644
--- a/src/fatx.c
+++ b/src/fatx.c
@@ -27,7 +27,7 @@
/*
* Open a device.
*/
-int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size)
+int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size, size_t sectors_per_cluster)
{
int retval = 0;
size_t cluster_limit = 0;
@@ -35,20 +35,30 @@ int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t
if (sector_size != 512 && sector_size != 4096)
{
fatx_error(fs, "expected sector size to be 512 or 4096, got %d\n", sector_size);
- return -1;
+ return FATX_STATUS_ERROR;
}
- /* Sanity check partition offset and size. */
if (offset % sector_size)
{
- fatx_error(fs, "specified partition offset does not reside on sector boundary (%d bytes)\n", sector_size);
- return -1;
+ fatx_error(fs, "specified partition offset (0x%zx) does not reside on sector boundary (%d bytes)\n", offset, sector_size);
+ return FATX_STATUS_ERROR;
+ }
+
+ /* Compute partition size using remaining disk space and align down to nearest sector */
+ if (size == -1)
+ {
+ if (fatx_disk_size_remaining(path, offset, &size))
+ {
+ fatx_error(fs, "failed to resolve partition size");
+ return FATX_STATUS_ERROR;
+ }
+ size &= ~(sector_size - 1);
}
if (size % sector_size)
{
fatx_error(fs, "specified partition size does not reside on sector boundary (%d bytes)\n", sector_size);
- return -1;
+ return FATX_STATUS_ERROR;
}
fs->device_path = path;
@@ -60,38 +70,52 @@ int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t
if (!fs->device)
{
fatx_error(fs, "failed to open %s for reading and writing\n", path);
- return -1;
+ return FATX_STATUS_ERROR;
}
- /* Check signature. */
- if (fatx_check_partition_signature(fs))
+ if (fatx_init_superblock(fs, sectors_per_cluster))
{
- retval = -1;
- goto cleanup;
+ return FATX_STATUS_ERROR;
}
- /* Process superblock. */
- if (fatx_process_superblock(fs))
+ /* Validate that an acceptable cluster+sector combo was configured */
+ switch (fs->sectors_per_cluster)
{
- retval = -1;
- goto cleanup;
+ case 1:
+ case 2:
+ case 4:
+ case 8:
+ case 16:
+ case 32:
+ case 64:
+ case 128: // retail max
+ case 256:
+ case 512:
+ case 1024:
+ break;
+
+ default:
+ fatx_error(fs, "invalid sectors per cluster %d\n", fs->sectors_per_cluster);
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
}
fs->num_sectors = fs->partition_size / fs->sector_size;
fs->num_clusters = fs->num_sectors / fs->sectors_per_cluster;
fs->bytes_per_cluster = fs->sectors_per_cluster * fs->sector_size;
- fs->fat_offset = fs->partition_offset+FATX_FAT_OFFSET;
+ fs->fat_offset = fs->partition_offset + FATX_FAT_OFFSET;
cluster_limit = fs->num_clusters + FATX_FAT_RESERVED_ENTRIES_COUNT;
if (fs->root_cluster >= cluster_limit)
{
fatx_error(fs, "root cluster %d exceeds cluster limit\n", fs->root_cluster);
- retval = -1;
+ retval = FATX_STATUS_ERROR;
goto cleanup;
}
- if (fs->num_clusters < 65525)
+ /* NOTE: this *MUST* be kept below the Cluster Reserved marker for FAT16 */
+ if (fs->num_clusters < 0xfff0)
{
fs->fat_type = FATX_FAT_TYPE_16;
fs->fat_size = cluster_limit*2;
@@ -117,9 +141,10 @@ int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t
fatx_info(fs, " Partition Size: 0x%zx bytes\n", fs->partition_size);
fatx_info(fs, " Volume Id: %.8x\n", fs->volume_id);
fatx_info(fs, " Bytes per Sector: %d\n", fs->sector_size);
- fatx_info(fs, " # of Sectors: %d\n", fs->num_sectors);
+ fatx_info(fs, " # of Sectors: %llu\n", fs->num_sectors);
fatx_info(fs, " Sectors per Cluster: %d\n", fs->sectors_per_cluster);
fatx_info(fs, " # of Clusters: %d\n", fs->num_clusters);
+ fatx_info(fs, " Bytes per Cluster: %d\n", fs->bytes_per_cluster);
fatx_info(fs, " FAT Offset: 0x%zx bytes\n", fs->fat_offset);
fatx_info(fs, " FAT Size: 0x%zx bytes\n", fs->fat_size);
fatx_info(fs, " FAT Type: %s\n", fs->fat_type == FATX_FAT_TYPE_16 ? "16" : "32");
diff --git a/src/fatx.h b/src/fatx.h
index 6c6aea2..f02c6f9 100644
--- a/src/fatx.h
+++ b/src/fatx.h
@@ -23,6 +23,7 @@
#define _XOPEN_SOURCE 500
#include <stdint.h>
+#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
@@ -42,6 +43,23 @@
#define FATX_STATUS_FILE_DELETED 1
#define FATX_STATUS_END_OF_DIR 2
+#define FATX_RETAIL_CLUSTER_SIZE (16 * 1024)
+#define FATX_RETAIL_PARTITION_COUNT 5
+
+/*
+ * This define should always be passed to fatx_open_device(...) as the
+ * sectors_per_cluster argument when opening existing FATX filesystems.
+ *
+ * It is a special value designed to tell fatx_open_device(...) that it
+ * must read the superblock of the FATX partition being opened to determine
+ * how many sectors_per_cluster the partition was formatted with.
+ *
+ * The cases where this define is NOT passed to fatx_open_device(...) is
+ * when formatting a new disk. In this case, the caller is expected to pass
+ * pass a valid non-zero sectors_per_cluster to format the partition with.
+ */
+#define FATX_READ_FROM_SUPERBLOCK 0
+
struct fatx_fs {
char const *device_path;
FILE *device;
@@ -49,7 +67,7 @@ struct fatx_fs {
size_t partition_offset;
size_t partition_size;
uint32_t volume_id;
- uint32_t num_sectors;
+ uint64_t num_sectors;
uint32_t num_clusters;
uint32_t sectors_per_cluster;
uint8_t fat_type;
@@ -90,7 +108,24 @@ struct fatx_attr {
struct fatx_ts accessed;
};
-int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size);
+/*
+ * Xbox Harddisk Partition Map
+ */
+
+struct fatx_partition_map_entry {
+ char letter;
+ size_t offset;
+ size_t size;
+};
+
+enum fatx_format {
+ FATX_FORMAT_INVALID,
+ FATX_FORMAT_RETAIL,
+ FATX_FORMAT_F_TAKES_ALL
+};
+
+/* FATX Functions */
+int fatx_open_device(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size, size_t sectors_per_cluster);
int fatx_close_device(struct fatx_fs *fs);
int fatx_open_dir(struct fatx_fs *fs, char const *path, struct fatx_dir *dir);
int fatx_read_dir(struct fatx_fs *fs, struct fatx_dir *dir, struct fatx_dirent *entry, struct fatx_attr *attr, struct fatx_dirent **result);
@@ -113,4 +148,12 @@ int fatx_rename(struct fatx_fs *fs, char const *from, char const *to);
void fatx_time_t_to_fatx_ts(const time_t in, struct fatx_ts *out);
time_t fatx_ts_to_time_t(struct fatx_ts *in);
+/* Disk Functions */
+int fatx_disk_size(char const *path, size_t *size);
+int fatx_disk_size_remaining(char const *path, size_t offset, size_t *size);
+int fatx_disk_format(struct fatx_fs *fs, char const *path, size_t sector_size, enum fatx_format format_type, size_t sectors_per_cluster);
+int fatx_disk_format_partition(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size, size_t sectors_per_cluster);
+int fatx_drive_to_offset_size(char drive_letter, size_t *offset, size_t *size);
+int fatx_disk_write_refurb_info(char const *path, uint32_t number_of_boots, uint64_t first_power_on);
+
#endif
diff --git a/src/fatx_disk.c b/src/fatx_disk.c
new file mode 100644
index 0000000..9991369
--- /dev/null
+++ b/src/fatx_disk.c
@@ -0,0 +1,249 @@
+/*
+ * FATX Filesystem Library
+ *
+ * Copyright (C) 2015 Matt Borgerson
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "fatx_internal.h"
+
+struct fatx_partition_map_entry const fatx_partition_map[] = {
+
+ /* Retail partitions */
+ { .letter = 'x', .offset = 0x00080000, .size = 0x02ee00000 },
+ { .letter = 'y', .offset = 0x2ee80000, .size = 0x02ee00000 },
+ { .letter = 'z', .offset = 0x5dc80000, .size = 0x02ee00000 },
+ { .letter = 'c', .offset = 0x8ca80000, .size = 0x01f400000 },
+ { .letter = 'e', .offset = 0xabe80000, .size = 0x1312d6000 },
+
+ /* Extended (non-retail) partitions commonly used in homebrew */
+ { .letter = 'f', .offset = 0x1dd156000, .size = -1 },
+
+};
+
+/*
+ * Given a drive letter, determine partition offset and size (in bytes).
+ */
+int fatx_drive_to_offset_size(char drive_letter, size_t *offset, size_t *size)
+{
+ struct fatx_partition_map_entry const *pi;
+
+ for (int i = 0; i < ARRAY_SIZE(fatx_partition_map); i++)
+ {
+ pi = &fatx_partition_map[i];
+
+ if (pi->letter == drive_letter)
+ {
+ *offset = pi->offset;
+ *size = pi->size;
+ return FATX_STATUS_SUCCESS;
+ }
+ }
+
+ return FATX_STATUS_ERROR;
+}
+
+/*
+ * Determine the disk size (in bytes).
+ */
+int fatx_disk_size(char const *path, size_t *size)
+{
+ FILE * device;
+ int retval;
+
+ device = fopen(path, "r");
+ if (!device)
+ {
+ fprintf(stderr, "failed to open %s for size query\n", path);
+ return FATX_STATUS_ERROR;
+ }
+
+ if (fseek(device, 0, SEEK_END))
+ {
+ fprintf(stderr, "failed to seek to end of disk\n");
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+ *size = ftell(device);
+ retval = FATX_STATUS_SUCCESS;
+
+cleanup:
+ fclose(device);
+ return retval;
+}
+
+/*
+ * Determine the remaining disk size (in bytes) from disk offset.
+ */
+int fatx_disk_size_remaining(char const *path, size_t offset, size_t *remaining_size)
+{
+ size_t disk_size;
+
+ if (fatx_disk_size(path, &disk_size))
+ {
+ return FATX_STATUS_ERROR;
+ }
+
+ if (offset > disk_size)
+ {
+ fprintf(stderr, "invalid disk offset\n");
+ return FATX_STATUS_ERROR;
+ }
+
+ *remaining_size = disk_size - offset;
+ return FATX_STATUS_SUCCESS;
+}
+
+/*
+ * Reformat a disk as FATX.
+ */
+int fatx_disk_format(struct fatx_fs *fs, char const *path, size_t sector_size, enum fatx_format format_type, size_t sectors_per_cluster)
+{
+ struct fatx_partition_map_entry const *pi;
+ size_t f_offset, f_size;
+
+ if (format_type == FATX_FORMAT_INVALID)
+ {
+ return FATX_STATUS_ERROR;
+ }
+
+ fatx_info(fs, "Writing refurb info...\n");
+ if (fatx_disk_write_refurb_info(path, 0, 0))
+ {
+ return FATX_STATUS_ERROR;
+ }
+
+ for (int i = 0; i < FATX_RETAIL_PARTITION_COUNT; i++)
+ {
+ pi = &fatx_partition_map[i];
+
+ fatx_info(fs, "-------------------------------------------\n");
+ fatx_info(fs, "Formatting partition %d (%c drive) ...\n", i, pi->letter);
+
+ /*
+ * Xapi initialization validates that the cluster size of retail
+ * partitions is 16kb when a game begins loading.
+ *
+ * For this reason, it is imperative that we do not let users
+ * configure the cluster size on retail partitions or many games
+ * will not load. Adjusting sector sizes, however, is okay.
+ */
+ if (fatx_disk_format_partition(fs, path, pi->offset, pi->size, sector_size, FATX_RETAIL_CLUSTER_SIZE / sector_size))
+ {
+ fatx_error(fs, " - failed to format partition %d\n", i);
+ return FATX_STATUS_ERROR;
+ }
+ }
+
+ if (format_type == FATX_FORMAT_RETAIL)
+ {
+ return FATX_STATUS_SUCCESS;
+ }
+ else if (format_type == FATX_FORMAT_F_TAKES_ALL)
+ {
+ fatx_drive_to_offset_size('f', &f_offset, &f_size);
+
+ fatx_info(fs, "-------------------------------------------\n");
+ fatx_info(fs, "Formatting partition %d (%c drive) ...\n", 5, 'f');
+
+ if (fatx_disk_format_partition(fs, path, f_offset, f_size, sector_size, sectors_per_cluster))
+ {
+ fatx_error(fs, " - failed to format partition %d (f-takes-all)\n", 5);
+ return FATX_STATUS_ERROR;
+ }
+ }
+
+ return FATX_STATUS_SUCCESS;
+}
+
+/*
+ * Format partition.
+ */
+int fatx_disk_format_partition(struct fatx_fs *fs, char const *path, size_t offset, size_t size, size_t sector_size, size_t sectors_per_cluster)
+{
+ int retval;
+
+ if (fatx_open_device(fs, path, offset, size, sector_size, sectors_per_cluster))
+ {
+ return FATX_STATUS_ERROR;
+ }
+
+ if (fatx_write_superblock(fs))
+ {
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+ if (fatx_init_fat(fs))
+ {
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+ if (fatx_init_root(fs))
+ {
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+ retval = FATX_STATUS_SUCCESS;
+
+cleanup:
+ fatx_close_device(fs);
+ return retval;
+}
+
+/*
+ * Write refurb sector.
+ */
+int fatx_disk_write_refurb_info(char const *path, uint32_t number_of_boots, uint64_t first_power_on)
+{
+ struct fatx_refurb_info refurb_info;
+ FILE * device;
+ int retval;
+
+ device = fopen(path, "r+b");
+ if (!device)
+ {
+ fprintf(stderr, "failed to open %s to write refurb info\n", path);
+ return FATX_STATUS_ERROR;
+ }
+
+ if (fseek(device, FATX_REFURB_OFFSET, SEEK_CUR))
+ {
+ fprintf(stderr, "failed to seek to the refurb info (offset 0x%x)\n", FATX_REFURB_OFFSET);
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+ memset(&refurb_info, 0, sizeof(struct fatx_refurb_info));
+ refurb_info.signature = FATX_REFURB_SIGNATURE;
+ refurb_info.number_of_boots = number_of_boots;
+ refurb_info.first_power_on = first_power_on;
+
+ if (fwrite(&refurb_info, sizeof(struct fatx_refurb_info), 1, device) != 1)
+ {
+ fprintf(stderr, "failed to write refurb info\n");
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+ retval = FATX_STATUS_SUCCESS;
+
+cleanup:
+ fclose(device);
+ return retval;
+}
\ No newline at end of file
diff --git a/src/fatx_fat.c b/src/fatx_fat.c
index 55f8efc..507ae87 100644
--- a/src/fatx_fat.c
+++ b/src/fatx_fat.c
@@ -22,10 +22,94 @@
static bool fatx_cluster_valid(struct fatx_fs *fs, size_t cluster)
{
- return (cluster >= FATX_FAT_RESERVED_ENTRIES_COUNT) &&
+ return (cluster >= 0) &&
(cluster < fs->num_clusters + FATX_FAT_RESERVED_ENTRIES_COUNT);
}
+/*
+ * Initialize a blank FAT.
+ */
+int fatx_init_fat(struct fatx_fs *fs)
+{
+ int64_t bytes_remaining;
+ uint8_t *chunk;
+ size_t chunk_size;
+ int retval = FATX_STATUS_SUCCESS;
+
+ if (fatx_dev_seek(fs, fs->fat_offset))
+ {
+ fatx_error(fs, "failed to seek to FAT start (offset 0x%zx)\n", fs->fat_offset);
+ return FATX_STATUS_ERROR;
+ }
+
+ /*
+ * A FAT could span multiple gigabytes with a very large partition (tbs)
+ * using small clusters, so we want to create a relatively large zero
+ * buffer in those cases while still capping how much memory we allocate
+ * for the chunked writes to zero out the FAT table.
+ */
+ chunk_size = MAX(0x4000, (fs->fat_size >> 8));
+ chunk = malloc(chunk_size);
+ if (!chunk)
+ {
+ fatx_error(fs, "failed to initialize memory for FAT wipe\n");
+ return FATX_STATUS_ERROR;
+ }
+ memset(chunk, 0x00, chunk_size);
+
+ bytes_remaining = fs->fat_size;
+ while (bytes_remaining > 0)
+ {
+ size_t bytes_to_write = MIN(chunk_size, bytes_remaining);
+ if (fatx_dev_write(fs, chunk, bytes_to_write, 1) != 1)
+ {
+ fatx_error(fs, "failed to clear FAT chunk (offset 0x%zx)\n", fs->fat_offset + (bytes_remaining - fs->fat_size));
+ retval = FATX_STATUS_ERROR;
+ break;
+ }
+ bytes_remaining -= bytes_to_write;
+ }
+
+ free(chunk);
+ return retval;
+}
+
+/*
+ * Initialize the root directory.
+ */
+int fatx_init_root(struct fatx_fs *fs)
+{
+ uint8_t *chunk;
+ int retval = FATX_STATUS_SUCCESS;
+
+ if (fatx_write_fat(fs, 0, 0xfffffff8) || fatx_mark_cluster_end(fs, fs->root_cluster))
+ {
+ fatx_error(fs, "failed to initialize FAT with root entries\n");
+ return FATX_STATUS_ERROR;
+ }
+
+ chunk = malloc(fs->bytes_per_cluster);
+ memset(chunk, FATX_END_OF_DIR_MARKER, fs->bytes_per_cluster);
+
+ if (fatx_dev_seek(fs, fs->cluster_offset))
+ {
+ fatx_error(fs, " - failed to seek to root directory cluster\n");
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+ if (fatx_dev_write(fs, chunk, fs->bytes_per_cluster, 1) != 1)
+ {
+ fatx_error(fs, " - failed to initialize root cluster\n");
+ retval = FATX_STATUS_ERROR;
+ goto cleanup;
+ }
+
+cleanup:
+ free(chunk);
+ return retval;
+}
+
/*
* Read from the FAT.
*/
@@ -100,43 +184,33 @@ int fatx_write_fat(struct fatx_fs *fs, size_t index, fatx_fat_entry entry)
*/
int fatx_get_fat_entry_type(struct fatx_fs *fs, fatx_fat_entry entry)
{
+ /*
+ * Sign-extend a 16bit FATX entry to 32bit so that the same marker
+ * checking logic can be used in the switch table below.
+ * eg. 0xFFF8 --> 0xFFFFFFF8
+ */
if (fs->fat_type == FATX_FAT_TYPE_16)
{
- entry &= FATX_FAT16_ENTRY_MASK;
+ entry = (int32_t)((int16_t)entry);
+ }
- if (entry == 0)
+ switch (entry)
+ {
+ case 0x00000000:
return FATX_CLUSTER_AVAILABLE;
-
- if (entry == 1)
+ case 0xfffffff0:
return FATX_CLUSTER_RESERVED;
-
- if (entry >= FATX_FAT_RESERVED_ENTRIES_COUNT && entry <= 0xfff6)
- return FATX_CLUSTER_DATA;
-
- if (entry == 0xfff7)
+ case 0xfffffff7:
return FATX_CLUSTER_BAD;
-
- if (entry >= 0xfff8 && entry <= 0xffff)
+ case 0xfffffff8:
+ return FATX_CLUSTER_MEDIA;
+ case 0xffffffff:
return FATX_CLUSTER_END;
}
- else
- {
- entry &= FATX_FAT32_ENTRY_MASK;
- if (entry == 0)
- return FATX_CLUSTER_AVAILABLE;
-
- if (entry == 1)
- return FATX_CLUSTER_RESERVED;
-
- if (entry >= FATX_FAT_RESERVED_ENTRIES_COUNT && entry <= 0x0ffffff6)
- return FATX_CLUSTER_DATA;
-
- if (entry == 0x0ffffff7)
- return FATX_CLUSTER_BAD;
-
- if (entry >= 0x0ffffff8 && entry <= 0x0fffffff)
- return FATX_CLUSTER_END;
+ if (entry < 0xfffffff0)
+ {
+ return FATX_CLUSTER_DATA;
}
return FATX_CLUSTER_INVALID;
@@ -208,7 +282,7 @@ int fatx_mark_cluster_end(struct fatx_fs *fs, size_t cluster)
}
else
{
- fatx_write_fat(fs, cluster, 0x0fffffff);
+ fatx_write_fat(fs, cluster, 0xffffffff);
}
return FATX_STATUS_SUCCESS;
}
diff --git a/src/fatx_internal.h b/src/fatx_internal.h
index d67a4a3..fe9c28a 100644
--- a/src/fatx_internal.h
+++ b/src/fatx_internal.h
@@ -22,15 +22,15 @@
#include "fatx.h"
-/* Offset of the filesystem signature, in bytes. */
-#define FATX_SIGNATURE_OFFSET 0
+/* FATX refurb info signature ('RFRB') */
+#define FATX_REFURB_SIGNATURE 0x42524652
+
+/* Offset of the refurb info on the physical disk */
+#define FATX_REFURB_OFFSET 0x600
/* FATX filesystem signature ('FATX') */
#define FATX_SIGNATURE 0x58544146
-/* Offset of the superblock, in bytes. */
-#define FATX_SUPERBLOCK_OFFSET 4
-
/* Size of the superblock, in bytes. */
#define FATX_SUPERBLOCK_SIZE 4096
@@ -44,10 +44,6 @@
/* Number of reserved entries in the FAT. */
#define FATX_FAT_RESERVED_ENTRIES_COUNT 1
-/* Mask to be applied when reading FAT entry values. */
-#define FATX_FAT16_ENTRY_MASK 0x0000ffff
-#define FATX_FAT32_ENTRY_MASK 0x0fffffff
-
/* Markers used in the filename_size field of the directory entry. */
#define FATX_DELETED_FILE_MARKER 0xe5
#define FATX_END_OF_DIR_MARKER 0xff
@@ -74,26 +70,44 @@
/* FAT entry types (not the actual value of the entry). */
#define FATX_CLUSTER_AVAILABLE 0
-#define FATX_CLUSTER_RESERVED 1
-#define FATX_CLUSTER_DATA 2
+#define FATX_CLUSTER_DATA 1
+#define FATX_CLUSTER_RESERVED 2
#define FATX_CLUSTER_BAD 3
-#define FATX_CLUSTER_END 4
-#define FATX_CLUSTER_INVALID 5
+#define FATX_CLUSTER_MEDIA 4
+#define FATX_CLUSTER_END 5
+#define FATX_CLUSTER_INVALID 6
/* Helpful macros. */
#define MIN(a,b) ( ( (a) <= (b) ) ? (a) : (b) )
+#define MAX(a,b) ( ( (a) >= (b) ) ? (a) : (b) )
+#define ARRAY_SIZE(array) \
+ (sizeof(array) / sizeof(array[0]))
+
+/*
+ * The refurb info as it appears on disk.
+ */
+#pragma pack(1)
+struct fatx_refurb_info {
+ uint32_t signature;
+ uint32_t number_of_boots;
+ uint64_t first_power_on;
+};
+#pragma pack()
/*
- * The superblock, as it appears on disk.
+ * The FATX superblock as it appears on disk.
*/
#pragma pack(1)
struct fatx_superblock {
+ uint32_t signature;
uint32_t volume_id;
uint32_t sectors_per_cluster;
- uint16_t root_cluster;
- uint32_t unknown1;
+ uint32_t root_cluster;
+ uint16_t unknown1;
+ uint8_t padding[4078];
};
#pragma pack()
+_Static_assert(sizeof(struct fatx_superblock) == 4096, "fatx_superblock struct *must* be 4096 bytes");
/*
* The directory entry as it appears on disk.
@@ -118,7 +132,9 @@ typedef uint32_t fatx_fat_entry;
/* Partition Functions */
int fatx_check_partition_signature(struct fatx_fs *fs);
-int fatx_process_superblock(struct fatx_fs *fs);
+int fatx_init_superblock(struct fatx_fs *fs, size_t sectors_per_cluster);
+int fatx_read_superblock(struct fatx_fs *fs);
+int fatx_write_superblock(struct fatx_fs *fs);
/* Device Functions */
int fatx_dev_seek(struct fatx_fs *fs, off_t offset);
@@ -127,6 +143,8 @@ size_t fatx_dev_read(struct fatx_fs *fs, void *buf, size_t size, size_t items);
size_t fatx_dev_write(struct fatx_fs *fs, const void *buf, size_t size, size_t items);
/* FAT Functions */
+int fatx_init_fat(struct fatx_fs *fs);
+int fatx_init_root(struct fatx_fs *fs);
int fatx_read_fat(struct fatx_fs *fs, size_t index, fatx_fat_entry *entry);
int fatx_write_fat(struct fatx_fs *fs, size_t index, fatx_fat_entry entry);
int fatx_cluster_number_to_byte_offset(struct fatx_fs *fs, size_t cluster, size_t *offset);
diff --git a/src/fatx_partition.c b/src/fatx_partition.c
index a414036..549b75a 100644
--- a/src/fatx_partition.c
+++ b/src/fatx_partition.c
@@ -17,6 +17,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
+#include <sys/time.h>
#include "fatx_internal.h"
/*
@@ -25,26 +26,51 @@
int fatx_check_partition_signature(struct fatx_fs *fs)
{
uint32_t signature;
- size_t read;
- if (fatx_dev_seek(fs, fs->partition_offset+FATX_SIGNATURE_OFFSET))
+ if (fatx_dev_seek(fs, fs->partition_offset))
{
fatx_error(fs, "failed to seek to signature\n");
- return -1;
+ return FATX_STATUS_ERROR;
}
- read = fatx_dev_read(fs, &signature, sizeof(uint32_t), 1);
-
- if (read != 1)
+ if (fatx_dev_read(fs, &signature, sizeof(uint32_t), 1) != 1)
{
fatx_error(fs, "failed to read signature from device\n");
- return -1;
+ return FATX_STATUS_ERROR;
}
if (signature != FATX_SIGNATURE)
{
fatx_error(fs, "invalid signature\n");
- return -1;
+ return FATX_STATUS_ERROR;
+ }
+
+ return FATX_STATUS_SUCCESS;
+}
+
+/*
+ * Initialize the partition with a new superblock.
+ */
+int fatx_init_superblock(struct fatx_fs *fs, size_t sectors_per_cluster)
+{
+ struct timeval time;
+
+ /* Initialize device with existing FATX superblock. */
+ if (sectors_per_cluster == FATX_READ_FROM_SUPERBLOCK)
+ {
+ if (fatx_check_partition_signature(fs) || fatx_read_superblock(fs))
+ {
+ return FATX_STATUS_ERROR;
+ }
+ }
+
+ /* Initialize device with a new FATX superblock. */
+ else
+ {
+ gettimeofday(&time, NULL);
+ fs->volume_id = time.tv_usec;
+ fs->root_cluster = 1;
+ fs->sectors_per_cluster = sectors_per_cluster;
}
return FATX_STATUS_SUCCESS;
@@ -53,22 +79,26 @@ int fatx_check_partition_signature(struct fatx_fs *fs)
/*
* Process the partition superblock.
*/
-int fatx_process_superblock(struct fatx_fs *fs)
+int fatx_read_superblock(struct fatx_fs *fs)
{
struct fatx_superblock superblock;
- size_t read;
- if (fatx_dev_seek(fs, fs->partition_offset+FATX_SUPERBLOCK_OFFSET))
+ if (fatx_dev_seek(fs, fs->partition_offset))
{
fatx_error(fs, "failed to seek to superblock\n");
- return -1;
+ return FATX_STATUS_ERROR;
}
- read = fatx_dev_read(fs, &superblock, sizeof(struct fatx_superblock), 1);
- if (read != 1)
+ if (fatx_dev_read(fs, &superblock, sizeof(struct fatx_superblock), 1) != 1)
{
fatx_error(fs, "failed to read superblock\n");
- return -1;
+ return FATX_STATUS_ERROR;
+ }
+
+ if (superblock.signature != FATX_SIGNATURE)
+ {
+ fatx_error(fs, "invalid signature\n");
+ return FATX_STATUS_ERROR;
}
fs->volume_id = superblock.volume_id;
@@ -77,3 +107,33 @@ int fatx_process_superblock(struct fatx_fs *fs)
return FATX_STATUS_SUCCESS;
}
+
+/*
+ * Write the partition superblock.
+ */
+int fatx_write_superblock(struct fatx_fs *fs)
+{
+ struct fatx_superblock superblock;
+
+ if (fatx_dev_seek(fs, fs->partition_offset))
+ {
+ fatx_error(fs, "failed to seek to superblock\n");
+ return FATX_STATUS_ERROR;
+ }
+
+ memset(&superblock, 0xFF, sizeof(struct fatx_superblock));
+
+ superblock.signature = FATX_SIGNATURE;
+ superblock.sectors_per_cluster = fs->sectors_per_cluster;
+ superblock.volume_id = fs->volume_id;
+ superblock.root_cluster = fs->root_cluster;
+ superblock.unknown1 = 0;
+
+ if (fatx_dev_write(fs, &superblock, sizeof(struct fatx_superblock), 1) != 1)
+ {
+ fatx_error(fs, "failed to write superblock\n");
+ return FATX_STATUS_ERROR;
+ }
+
+ return FATX_STATUS_SUCCESS;
+}
diff --git a/src/fatxfs_fuse.c b/src/fatxfs_fuse.c
index 3f7267c..7f01bd3 100644
--- a/src/fatxfs_fuse.c
+++ b/src/fatxfs_fuse.c
@@ -40,38 +40,26 @@ enum {
FATX_FUSE_OPT_KEY_OFFSET,
FATX_FUSE_OPT_KEY_SIZE,
FATX_FUSE_OPT_KEY_SECTOR_SIZE,
+ FATX_FUSE_OPT_KEY_SECTORS_PER_CLUSTER,
+ FATX_FUSE_OPT_KEY_FORMAT,
+ FATX_FUSE_OPT_KEY_DESTROY_DATA,
FATX_FUSE_OPT_KEY_LOG,
FATX_FUSE_OPT_KEY_LOGLEVEL,
};
-/*
- * Xbox Harddisk Partition Map
- */
-struct fatx_fuse_partition_map_entry {
- char letter;
- size_t offset;
- size_t size;
-};
-
-struct fatx_fuse_partition_map_entry const fatx_fuse_partition_map[] = {
- { .letter = 'x', .offset = 0x00080000, .size = 0x02ee00000 },
- { .letter = 'y', .offset = 0x2ee80000, .size = 0x02ee00000 },
- { .letter = 'z', .offset = 0x5dc80000, .size = 0x02ee00000 },
- { .letter = 'c', .offset = 0x8ca80000, .size = 0x01f400000 },
- { .letter = 'e', .offset = 0xabe80000, .size = 0x131f00000 },
- { .letter = '\x00', .offset = 0x00000000, .size = 0x000000000 },
-};
-
struct fatx_fuse_private_data {
- struct fatx_fs *fs;
- char const *device_path;
- char const *log_path;
- char mount_partition_drive;
- size_t mount_partition_offset;
- size_t mount_partition_size;
- size_t device_sector_size;
- FILE *log_handle;
- int log_level;
+ struct fatx_fs *fs;
+ char const *device_path;
+ char const *log_path;
+ char mount_partition_drive;
+ size_t mount_partition_offset;
+ size_t mount_partition_size;
+ size_t device_sector_size;
+ size_t device_sectors_per_cluster;
+ FILE *log_handle;
+ int log_level;
+ enum fatx_format format;
+ int format_confirm;
};
/*
@@ -102,7 +90,6 @@ int fatx_fuse_opt_proc(void *data, const char *arg, int key, struct fuse_args *o
* Helper functions.
*/
struct fatx_fuse_private_data *fatx_fuse_get_private_data(void);
-int fatx_fuse_drive_to_offset_size(char drive_letter, size_t *offset, size_t *size);
void fatx_fuse_print_usage(void);
void fatx_fuse_print_version(void);
@@ -135,26 +122,6 @@ struct fatx_fuse_private_data *fatx_fuse_get_private_data(void)
return (struct fatx_fuse_private_data *)(context->private_data);
}
-/*
- * Given a drive letter, determine partition offset and size (in bytes).
- */
-int fatx_fuse_drive_to_offset_size(char drive_letter, size_t *offset, size_t *size)
-{
- struct fatx_fuse_partition_map_entry const *pi;
-
- for (pi = &fatx_fuse_partition_map[0]; pi->letter != '\x00'; pi++)
- {
- if (pi->letter == drive_letter)
- {
- *offset = pi->offset;
- *size = pi->size;
- return 0;
- }
- }
-
- return -1;
-}
-
/*
* Initialize the filesystem
*/
@@ -578,6 +545,32 @@ int fatx_fuse_opt_proc(void *data, const char *arg, int key, struct fuse_args *o
pd->device_sector_size = strtol(arg, NULL, 0);
return 0;
+ case FATX_FUSE_OPT_KEY_SECTORS_PER_CLUSTER:
+ arg = fatx_fuse_opt_consume_key(arg);
+ pd->device_sectors_per_cluster = strtol(arg, NULL, 0);
+ return 0;
+
+ case FATX_FUSE_OPT_KEY_FORMAT:
+ arg = fatx_fuse_opt_consume_key(arg);
+ if (!strcmp(arg, "retail"))
+ {
+ pd->format = FATX_FORMAT_RETAIL;
+ }
+ else if (!strcmp(arg, "f-takes-all"))
+ {
+ pd->format = FATX_FORMAT_F_TAKES_ALL;
+ }
+ else
+ {
+ fprintf(stderr, "invalid format '%s' specified\n", arg);
+ return -1;
+ }
+ return 0;
+
+ case FATX_FUSE_OPT_KEY_DESTROY_DATA:
+ pd->format_confirm = 1;
+ return 0;
+
case FATX_FUSE_OPT_KEY_LOG:
pd->log_path = fatx_fuse_opt_consume_key(arg);
return 0;
@@ -617,19 +610,23 @@ void fatx_fuse_print_usage(void)
/* Print basic usage */
fprintf(stderr, "FATXFS - Userspace FATX Filesystem Driver\n\n");
fprintf(stderr, "Usage: %s <device> <mountpoint> [<options>]\n", prog_short_name);
- fprintf(stderr, " or: %s <device> <mountpoint> --drive=c|e|x|y|z [<options>]\n", prog_short_name);
+ fprintf(stderr, " or: %s <device> <mountpoint> --drive=c|e|x|y|z|f [<options>]\n", prog_short_name);
fprintf(stderr, " or: %s <device> <mountpoint> --offset=<offset> --size=<size> [<options>]\n\n", prog_short_name);
- fprintf(stderr, "General Options:\n"
- " -o opt, [opt...] mount options\n"
- " -h --help print help\n"
- " -V --version print version\n\n"
+ fprintf(stderr, "General options:\n"
+ " -o opt, [opt...] mount options\n"
+ " -h --help print help\n"
+ " -V --version print version\n\n"
"FATXFS options:\n"
- " --drive=<letter> mount a partition by its drive letter\n"
- " --offset=<offset> specify the offset (in bytes) of a partition manually\n"
- " --size=<size> specify the size (in bytes) of a partition manually\n"
- " --sector-size=<size> specify the size (in bytes) of a device sector (default is 512)\n"
- " --log=<log path> enable fatxfs logging\n"
- " --loglevel=<level> control the log output level (a higher value yields more output)\n\n");
+ " --drive=<letter> mount a partition by its drive letter\n"
+ " --offset=<offset> specify the offset (in bytes) of a partition manually\n"
+ " --size=<size> specify the size (in bytes) of a partition manually\n"
+ " --sector-size=<size> specify the size (in bytes) of a device sector (default is 512)\n"
+ " --log=<log path> enable fatxfs logging\n"
+ " --loglevel=<level> control the log output level (a higher value yields more output)\n\n"
+ "Disk formatting options:\n"
+ " --format=<format> specify the format (retail, f-takes-all) to initialize the device to\n"
+ " --sectors-per-cluster=<size> specify the sectors per cluster when initializing non-retail partitions (default is 128)\n"
+ " --destroy-all-existing-data acknowledge that device formatting will destroy all existing data\n\n");
/* Print FUSE options */
argv[0] = prog_short_name;
@@ -650,25 +647,29 @@ int main(int argc, char *argv[])
/* Define command line options. */
struct fuse_opt const opts [] = {
- FUSE_OPT_KEY("-h", FATX_FUSE_OPT_KEY_HELP),
- FUSE_OPT_KEY("--help", FATX_FUSE_OPT_KEY_HELP),
- FUSE_OPT_KEY("-V", FATX_FUSE_OPT_KEY_VERSION),
- FUSE_OPT_KEY("--version", FATX_FUSE_OPT_KEY_VERSION),
- FUSE_OPT_KEY("--drive=", FATX_FUSE_OPT_KEY_DRIVE),
- FUSE_OPT_KEY("--offset=", FATX_FUSE_OPT_KEY_OFFSET),
- FUSE_OPT_KEY("--size=", FATX_FUSE_OPT_KEY_SIZE),
- FUSE_OPT_KEY("--sector-size=", FATX_FUSE_OPT_KEY_SECTOR_SIZE),
- FUSE_OPT_KEY("--log=", FATX_FUSE_OPT_KEY_LOG),
- FUSE_OPT_KEY("--loglevel=", FATX_FUSE_OPT_KEY_LOGLEVEL),
+ FUSE_OPT_KEY("-h", FATX_FUSE_OPT_KEY_HELP),
+ FUSE_OPT_KEY("--help", FATX_FUSE_OPT_KEY_HELP),
+ FUSE_OPT_KEY("-V", FATX_FUSE_OPT_KEY_VERSION),
+ FUSE_OPT_KEY("--version", FATX_FUSE_OPT_KEY_VERSION),
+ FUSE_OPT_KEY("--drive=", FATX_FUSE_OPT_KEY_DRIVE),
+ FUSE_OPT_KEY("--offset=", FATX_FUSE_OPT_KEY_OFFSET),
+ FUSE_OPT_KEY("--size=", FATX_FUSE_OPT_KEY_SIZE),
+ FUSE_OPT_KEY("--sector-size=", FATX_FUSE_OPT_KEY_SECTOR_SIZE),
+ FUSE_OPT_KEY("--format=" , FATX_FUSE_OPT_KEY_FORMAT),
+ FUSE_OPT_KEY("--sectors-per-cluster=" , FATX_FUSE_OPT_KEY_SECTORS_PER_CLUSTER),
+ FUSE_OPT_KEY("--destroy-all-existing-data", FATX_FUSE_OPT_KEY_DESTROY_DATA),
+ FUSE_OPT_KEY("--log=", FATX_FUSE_OPT_KEY_LOG),
+ FUSE_OPT_KEY("--loglevel=", FATX_FUSE_OPT_KEY_LOGLEVEL),
FUSE_OPT_END,
};
/* Initialize private data. */
memset(&pd, 0, sizeof(struct fatx_fuse_private_data));
- pd.mount_partition_size = -1;
- pd.mount_partition_offset = -1;
- pd.device_sector_size = 512;
- pd.log_level = FATX_LOG_LEVEL_INFO;
+ pd.mount_partition_size = -1;
+ pd.mount_partition_offset = -1;
+ pd.device_sector_size = 512;
+ pd.device_sectors_per_cluster = 128;
+ pd.log_level = FATX_LOG_LEVEL_INFO;
/* Parse command line arguments. */
if (fuse_opt_parse(&args, &pd, opts, &fatx_fuse_opt_proc) != 0)
@@ -713,9 +714,9 @@ int main(int argc, char *argv[])
pd.mount_partition_drive = 'c';
}
- status = fatx_fuse_drive_to_offset_size(pd.mount_partition_drive,
- &pd.mount_partition_offset,
- &pd.mount_partition_size);
+ status = fatx_drive_to_offset_size(pd.mount_partition_drive,
+ &pd.mount_partition_offset,
+ &pd.mount_partition_size);
if (status)
{
fprintf(stderr, "unknown drive letter '%c'\n", pd.mount_partition_drive);
@@ -746,12 +747,32 @@ int main(int argc, char *argv[])
fatx_log_init(pd.fs, pd.log_handle, pd.log_level);
}
+ /* Reformat the drive (if desired) */
+ if (pd.format)
+ {
+ if (pd.format_confirm)
+ {
+ return fatx_disk_format(pd.fs, pd.device_path, pd.device_sector_size, pd.format, pd.device_sectors_per_cluster);
+ }
+ else
+ {
+ fprintf(stderr, "please specify --destroy-all-existing-data to perform device formatting\n");
+ return -1;
+ }
+ }
+ else if (pd.format_confirm)
+ {
+ fprintf(stderr, "--destroy-all-existing-data can only be used with --format\n");
+ return -1;
+ }
+
/* Open the device */
status = fatx_open_device(pd.fs,
pd.device_path,
pd.mount_partition_offset,
pd.mount_partition_size,
- pd.device_sector_size);
+ pd.device_sector_size,
+ FATX_READ_FROM_SUPERBLOCK);
if (status)
{
fprintf(stderr, "failed to initialize the filesystem\n");
| Support disk formatting, filesystem init
Should support disk formatting and creating initial filesystems
Feature request: support for the F and G partitions.
The write capability is amazing, but the lack of F and G partition support makes `fatxfs` somewhat limited, as these partitions are where we store our games.
|
Support is already there; you just need to specify the offset and size.
The problem is, I don't know what the offsets or sizes of my disk's partitions are.
`fdisk -l` didn't work as it doesn't know what FATX is.
The only thing I can think of is using XBPartitioner on the OG Xbox to see the disk partition sizes.
Partitions are hardcoded now, but we can take a look at automatically discovering partition sizes based on how other tools do so
Here's xbpartitioner source for reference: https://github.com/opcow/XBpartitioner/blob/main/PartInfo.cpp
`testdisk` can list the OG Xbox disk partitions.
Here is a 1 TB disk with the F partition taking all the space:
```bash
$ sudo testdisk /list /dev/sdc
TestDisk 7.1, Data Recovery Utility, July 2019
Christophe GRENIER <[email protected]>
https://www.cgsecurity.org
Please wait...
Disk /dev/sdc - 1000 GB / 931 GiB - CHS 121601 255 63
Sector size:512
Model: TOSHIBA MQ01ABD100
Disk /dev/sdc - 1000 GB / 931 GiB - CHS 121601 255 63
Partition Start End Size in sectors
1 P FATX 0 16 17 95 172 13 1536000
FATX
2 P FATX 95 172 14 191 73 10 1536000
FATX
3 P FATX 191 73 11 286 229 7 1536000
FATX
4 P FATX 286 229 8 350 163 5 1024000
FATX
5 P FATX 350 163 6 121601 80 63 1947892144
FATX
```
Can I use that info to manually mount the partition?
That almost looks broken to me. You have 5 partitions by default; what's odd is the size of your last one, which should be `Z:\`, a cache partition. But anyway, I think that's only checking the known offsets; hacked kernels don't follow this with custom partitions.
<https://github.com/cgsecurity/testdisk/blob/b2a0d41da609e62865beec1ed6faddd12bb5bdb8/src/partxbox.c#L165>
All the partitions are there except E.
But if I mount it (using `--drive` or `offset/size`), it mounts successfully.
CMIIW, the offset for F would be `0x1ddd80000`?
```bash
$ sudo fatxfs -o allow_other /dev/sdc ogxhdd --offset=0xddd80000 --size=0xe834f36000
failed to initialize the filesystem
```
Still no dice. But if the offset is correct, all I need to do is calculate the size of F?
EDIT: Found something interesting:
1. https://github.com/ldotsfan/fatx/commit/c4606b48343c202a9f4ad3a7baec3f7afdcdddd7
1. [Parsing Of Xbpartitioner Style Partition Table From Linux](https://forums.xboxscene.org/index.php/topic,94365.msg779698.html#msg779698)
Found a solution.
Based on [EatonZ's suggestion](https://www.reddit.com/r/originalxbox/comments/sp1xvo/how_to_get_partitions_offset_and_size_of/hwe47u5/), all I need to do is multiply `LBAStart` and `LBASize` by 512.
1. Read the first sector (dump disk's partition table)
```bash
dd if=/dev/sdc bs=512 count=1 > ogxboxhdd
```
2. Search for active partition (hex value `0000 8000`)
3. Multiply `LBAStart` and `LBASize` by 512.
Below is a bash script based on [ldotsfan's tip](https://forums.xboxscene.org/index.php/topic,94365.msg779698.html#msg779698):
```bash
# table header
printf ' %-12s %-12s %-12s\n' "OFFSET (Hex)" "SIZE (Hex)" "SIZE (Dec)"
while read -r PSTART PEND; do
# print table
printf ' %s%-10x | %s%-10x | %-10s\n' 0x $((0x$PSTART * 0x200)) 0x $((0x$PEND * 0x200)) $((0x$PEND * 0x200))
  # get active partition from boot sector dump
done < <(hexdump ogxboxhdd | awk '/0000 8000/{ print $5 $4" "$7 $6; }')
```
Result:
```bash
OFFSET (Hex) SIZE (Hex) SIZE (Dec)
0xabe80000 | 0x1312d6000 | 5120024576
0x8ca80000 | 0x1f400000 | 524288000
0x80000 | 0x2ee00000 | 786432000
0x2ee80000 | 0x2ee00000 | 786432000
0x5dc80000 | 0x2ee00000 | 786432000
0x1dd156000 | 0x7381e30000 | 496100376576
0x755ef86000 | 0x7381e30000 | 496100376576
```
Now, I can successfully mount F:
```bash
sudo ./fatxfs -o allow_other /dev/sdc ogxhdd --offset=0x1dd156000 --size=0x7381e30000
```
Or G:
```bash
sudo ./fatxfs -o allow_other /dev/sdc ogxhdd --offset=0x755ef86000 --size=0x7381e30000
```
Thanx @mborgerson and @GXTX for the assistance.
Not all kernels use the "partition table"; it's a scene-made thing, and some kernels have hard-set offsets for F/G.
Ah, yes.
As I've read, the original HDD doesn't use a partition table; instead, the hard-coded partition offsets and sizes are stored in its BIOS.
[The "partition table" rising after `XBPartitioner` use](https://www.xbmc4xbox.org.uk/forum/viewtopic.php?p=16715&sid=0a86db134a5bf67531081986725c9413#p16715). | 2022-02-14T09:57:06 | 0.0 | [] | [] |
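For reference, here is a minimal Python sketch of the same table-walking approach described in this thread: read the first-sector dump and multiply the 32-bit `LBAStart`/`LBASize` values by 512. The entry layout assumed here (an in-use flag of `0x80000000` immediately followed by little-endian LBA start and LBA size) is inferred from the hexdump/awk script above rather than from any documented format, and the helper name `list_fatx_partitions` is made up for illustration.

```python
import struct

SECTOR = 512
IN_USE_FLAG = 0x80000000  # assumed "active partition" flag, per the `0000 8000` hexdump pattern above


def list_fatx_partitions(dump_path: str):
    """Scan a first-sector dump (e.g. from `dd bs=512 count=1`) for partition entries.

    Assumes each entry stores a 32-bit in-use flag immediately followed by
    little-endian 32-bit LBAStart and LBASize fields, as the awk script implies.
    """
    with open(dump_path, "rb") as f:
        sector = f.read(SECTOR)

    results = []
    for pos in range(0, len(sector) - 11, 4):
        flag, lba_start, lba_size = struct.unpack_from("<III", sector, pos)
        if flag == IN_USE_FLAG and lba_start and lba_size:
            # Convert sector counts to byte offsets/sizes.
            results.append((lba_start * SECTOR, lba_size * SECTOR))
    return results


if __name__ == "__main__":
    for offset, size in list_fatx_partitions("ogxboxhdd"):
        print(f"--offset={offset:#x} --size={size:#x}  ({size} bytes)")
```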
||
ValueRaider/yfinance-cache | ValueRaider__yfinance-cache-55 | c63da6afc424037806aadceb0b7e8a31deda2cb8 | diff --git a/yfinance_cache/yfc_financials_manager.py b/yfinance_cache/yfc_financials_manager.py
index 06d9ccf..77d911b 100644
--- a/yfinance_cache/yfc_financials_manager.py
+++ b/yfinance_cache/yfc_financials_manager.py
@@ -416,15 +416,19 @@ def _get_interval_from_table(self, tbl):
interval = None
intervals = [(dates[i-1] - dates[i]).days for i in range(1,len(dates))]
+ intervals = np.array(intervals)
sdm_thresold = 0.1
if len(intervals) == 1:
interval = intervals[0]
else:
- avg = mean(intervals)
- sdm = stdev(intervals) / avg
+ # First, discard impossibly small intervals:
+ f_too_small = intervals < 60
+ if f_too_small.any():
+ intervals = intervals[~f_too_small]
+ avg = np.mean(intervals)
+ sdm = np.std(intervals) / avg
if sdm > sdm_thresold:
# Discard large outliers
- intervals = np.array(intervals)
intervals = intervals[intervals<avg]
if len(intervals) == 1:
interval = intervals[0]
@@ -1167,6 +1171,11 @@ def _check_release_dates(self, releases, finType, period, refresh):
# else:
# interval_td = self._get_interval(finType, refresh)
+ # Ignore releases with no date:
+ # - can happen with nonsense financials dates from Yahoo that
+ # even my prune function couldn't safely remove
+ releases = [r for r in releases if r.release_date is not None]
+
for i0 in range(len(releases)-1):
r0 = releases[i0]
r0rd = r0.release_date
| Exception: RCI-B.TO: interval = 31 doesn't fit standard intervals
```
import yfinance_cache as yfc
dat = yfc.Ticker('RCI-B.TO')
dat.get_release_dates(check=True)
```
> Exception: RCI-B.TO: interval = 31 doesn't fit standard intervals
| The cause is a nonsense date in the quarterly financials:
```
>>> dat.quarterly_incomestmt.columns
DatetimeIndex(['2023-12-31', '2023-09-30', '2023-06-30', '2023-03-31',
'2023-02-28'],
dtype='datetime64[ns]', freq=None)
```
SEDAR doesn't have anything for `2023-02-28` | 2024-04-16T20:39:45 | 0.0 | [] | [] |
||
scverse/scvi-tools | scverse__scvi-tools-3009 | 4985bb37f1330257ebfe89605942f482008260e5 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index cb781b50a6..45ccc2f743 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,20 @@ to [Semantic Versioning]. Full commit history is available in the
## Version 1.2
+### 1.2.1 (2024-XX-XX)
+
+#### Added
+
+#### Fixed
+
+- Breaking Change: Fix `get_outlier_cell_sample_pairs` function in {class}`scvi.external.MRVI`
+ to correctly compute the maxmimum log-density across in-sample cells rather than the
+ aggregated posterior log-density {pr}`3007`.
+
+#### Changed
+
+#### Removed
+
### 1.2.0 (2024-09-26)
#### Added
diff --git a/src/scvi/external/mrvi/_model.py b/src/scvi/external/mrvi/_model.py
index 4f3dabfbc4..853e1a4a63 100644
--- a/src/scvi/external/mrvi/_model.py
+++ b/src/scvi/external/mrvi/_model.py
@@ -14,6 +14,7 @@
from scvi.data import AnnDataManager, fields
from scvi.external.mrvi._module import MRVAE
from scvi.external.mrvi._types import MRVIReduction
+from scvi.external.mrvi._utils import rowwise_max_excluding_diagonal
from scvi.model.base import BaseModelClass, JaxTrainingMixin
from scvi.utils import setup_anndata_dsp
from scvi.utils._docstrings import devices_dsp
@@ -745,7 +746,10 @@ def get_aggregated_posterior(
indices: npt.ArrayLike | None = None,
batch_size: int = 256,
) -> Distribution:
- """Compute the aggregated posterior over the ``u`` latent representations.
+ """Computes the aggregated posterior over the ``u`` latent representations.
+
+ For the specified samples, it computes the aggregated posterior over the ``u`` latent
+ representations. Returns a NumPyro MixtureSameFamily distribution.
Parameters
----------
@@ -959,12 +963,13 @@ def get_outlier_cell_sample_pairs(
admissibility_threshold: float = 0.0,
batch_size: int = 256,
) -> xr.Dataset:
- """Compute outlier cell-sample pairs.
+ """Compute admissibility scores for cell-sample pairs.
- This function fits a GMM for each sample based on the latent representation of the cells in
- the sample or computes an approximate aggregated posterior for each sample. Then, for every
- cell, it computes the log-probability of the cell under the approximated posterior of each
- sample as a measure of admissibility.
+ This function computes the posterior distribution for u for each cell. Then, for every
+ cell, it computes the log-probability of the cell under the posterior of each cell
+ each sample and takes the maximum value for a given sample as a measure of admissibility
+ for that sample. Additionally, it computes a threshold that determines if
+ a cell-sample pair is admissible based on the within-sample admissibility scores.
Parameters
----------
@@ -995,21 +1000,34 @@ def get_outlier_cell_sample_pairs(
adata_s = adata[sample_idxs]
ap = self.get_aggregated_posterior(adata=adata, indices=sample_idxs)
- log_probs_s = jnp.quantile(
- ap.log_prob(adata_s.obsm["U"]).sum(axis=1), q=quantile_threshold
- )
- n_splits = adata.n_obs // batch_size
+ in_max_comp_log_probs = ap.component_distribution.log_prob(
+ np.expand_dims(adata_s.obsm["U"], ap.mixture_dim)
+ ).sum(axis=1)
+ log_probs_s = rowwise_max_excluding_diagonal(in_max_comp_log_probs)
+
log_probs_ = []
+ n_splits = adata.n_obs // batch_size
for u_rep in np.array_split(adata.obsm["U"], n_splits):
- log_probs_.append(jax.device_get(ap.log_prob(u_rep).sum(-1, keepdims=True)))
+ log_probs_.append(
+ jax.device_get(
+ ap.component_distribution.log_prob(
+ np.expand_dims(u_rep, ap.mixture_dim)
+ ) # (n_cells_batch, n_cells_ap, n_latent_dim)
+ .sum(axis=1) # (n_cells_batch, n_latent_dim)
+ .max(axis=1, keepdims=True) # (n_cells_batch, 1)
+ )
+ )
log_probs_ = np.concatenate(log_probs_, axis=0) # (n_cells, 1)
threshs.append(np.array(log_probs_s))
log_probs.append(np.array(log_probs_))
+ threshs_all = np.concatenate(threshs)
+ global_thresh = np.quantile(threshs_all, q=quantile_threshold)
+ threshs = np.array(len(log_probs) * [global_thresh])
+
log_probs = np.concatenate(log_probs, 1)
- threshs = np.array(threshs)
log_ratios = log_probs - threshs
coords = {
| Change outlier detection for MRVI to ball admissibility calculation
The outlier detection was mistakenly set to the "ap" (aggregated posterior) thresholding setting when it should have been the "ball" thresholding calculation.
The only changes in this PR are in `get_outlier_cell_sample_pairs`; everything else is auto-linted.
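For readers unfamiliar with the helper imported in the patch, here is a minimal NumPy sketch of what a function like `rowwise_max_excluding_diagonal` is assumed to do (the actual scvi-tools implementation may differ): mask the diagonal of the per-cell log-probability matrix so a cell cannot vouch for itself, then take the row-wise maximum.
```python
import numpy as np

def rowwise_max_excluding_diagonal(mat: np.ndarray) -> np.ndarray:
    """Row-wise max of a square matrix, ignoring the diagonal entries."""
    masked = mat.astype(float)            # work on a float copy so -inf is representable
    np.fill_diagonal(masked, -np.inf)     # a cell's own component is excluded from the max
    return masked.max(axis=1)

# toy check: the diagonal values never win
scores = rowwise_max_excluding_diagonal(np.array([[9.0, 1.0], [2.0, 9.0]]))
assert np.allclose(scores, [1.0, 2.0])
```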
| 2024-10-02T23:15:04 | 0.0 | [] | [] |
|||
tldr-pages/tldr-python-client | tldr-pages__tldr-python-client-222 | 15242e81e833f89a28a605cfb04ffed6a32af20c | diff --git a/tldr.py b/tldr.py
index bfae548..fb2682d 100755
--- a/tldr.py
+++ b/tldr.py
@@ -504,7 +504,7 @@ def main() -> None:
parser.print_help(sys.stderr)
sys.exit(1)
if options.list:
- print(get_commands(options.platform))
+ print('\n'.join(get_commands(options.platform)))
elif options.render:
for command in options.command:
if os.path.exists(command):
| The reason of returning `list` when `tldr -l`
Hi there!
Thanks for this app!
Let me ask why it returns a `list` when executing `tldr -l`?
https://github.com/tldr-pages/tldr-python-client/blob/15242e81e833f89a28a605cfb04ffed6a32af20c/tldr.py#L304C1-L315C20
I want to use `fzf` to search through all pages, not using `tldr --search term`
i.e. `tldr $(tldr -l | fzf)`
By just returning a string joined with `\n` we could get this effect: `return '\n'.join(commands)`
```
def get_commands(platforms: Optional[List[str]] = None) -> str:
if platforms is None:
platforms = get_platform_list()
commands = []
if os.path.exists(get_cache_dir()):
for platform in platforms:
path = os.path.join(get_cache_dir(), 'pages', platform)
if not os.path.exists(path):
continue
commands += [file[:-3] for file in os.listdir(path) if file.endswith(".md")]
return '\n'.join(commands)
```
| 2023-10-29T06:25:00 | 0.0 | [] | [] |
|||
iKernels/transformers-lightning | iKernels__transformers-lightning-70 | 989a22969807919f667757d46347c9a5144278bf | diff --git a/.version.json b/.version.json
index 5713036..501658f 100644
--- a/.version.json
+++ b/.version.json
@@ -1,5 +1,5 @@
{
"major": 0,
"minor": 5,
- "patch": 0
+ "patch": 1
}
diff --git a/transformers_lightning/optimizers/adamw_electra.py b/transformers_lightning/optimizers/adamw_electra.py
index d105c40..d384d98 100644
--- a/transformers_lightning/optimizers/adamw_electra.py
+++ b/transformers_lightning/optimizers/adamw_electra.py
@@ -85,7 +85,7 @@ def step(self, closure=None):
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
- exp_avg_sq.mul_(beta2).add_(grad.sqrt(), value=1 - beta2) # <-- fix is here
+ exp_avg_sq.mul_(beta2).add_(grad.sqrt(), alpha=1 - beta2) # <-- fix is here
if amsgrad:
max_exp_avg_sq = state['max_exp_avg_sq']
diff --git a/transformers_lightning/schedulers/layerwise_decay_scheduler.py b/transformers_lightning/schedulers/layerwise_decay_scheduler.py
index a6c0e94..88299b6 100644
--- a/transformers_lightning/schedulers/layerwise_decay_scheduler.py
+++ b/transformers_lightning/schedulers/layerwise_decay_scheduler.py
@@ -90,7 +90,7 @@ def __init__(
self.layerwise_lr_decay_power = layerwise_lr_decay_power
# retrieve depth for each params group
- self.depths = [group['depth'] for group in self.optimizer.param_groups]
+ self.depths = [group['depth'] for group in optimizer.param_groups]
super().__init__(optimizer, last_epoch=last_epoch, verbose=verbose)
| Fixed typos in the new ELECTRA scheduler and optimizer
| 2021-02-15T20:38:52 | 0.0 | [] | [] |
|||
space-physics/pyzenodo3 | space-physics__pyzenodo3-3 | 5352036bdae3a3114d1f17700ac9c1d17501abfe | diff --git a/pyzenodo3/upload.py b/pyzenodo3/upload.py
index 316b903..85d49f7 100644
--- a/pyzenodo3/upload.py
+++ b/pyzenodo3/upload.py
@@ -50,11 +50,11 @@ def meta(inifn: Path) -> Path:
return outfn
-def check_token(token: str):
+def check_token(token: str, base_url: str):
if not isinstance(token, str) or not token:
raise TypeError("Token need to be a string")
- r = requests.get("https://zenodo.org/api/deposit/depositions", params={"access_token": token})
+ r = requests.get(f"{base_url}/deposit/depositions", params={"access_token": token})
if r.status_code != 200:
raise requests.HTTPError(f"Token accept error, status code: {r.status_code} {r.json()['message']}")
@@ -92,10 +92,10 @@ def upload_meta(token: str, metafn: Path, depid: str):
raise requests.HTTPError(f"Error in metadata upload, status code: {r.status_code} {r.json()['message']}")
-def upload_data(token: str, datafn: Path, depid: str):
+def upload_data(token: str, datafn: Path, depid: str, base_url: str):
r = requests.post(
- f"{BASE_URL}/deposit/depositions/{depid}/files",
+ f"{base_url}/deposit/depositions/{depid}/files",
params={"access_token": token},
data={"filename": str(datafn)},
files={"file": datafn.open("rb")},
@@ -107,9 +107,9 @@ def upload_data(token: str, datafn: Path, depid: str):
print(f"{datafn} ID = {depid} (DOI: 10.5281/zenodo.{depid})")
-def create(token: str) -> str:
+def create(token: str, base_url: str) -> str:
- r = requests.post(f"{BASE_URL}/deposit/depositions", params={"access_token": token}, json={}, headers=HDR)
+ r = requests.post(f"{base_url}/deposit/depositions", params={"access_token": token}, json={}, headers=HDR)
if r.status_code != 201:
raise requests.HTTPError(f"Error in creation, status code: {r.status_code} {r.json()['message']}")
@@ -117,7 +117,7 @@ def create(token: str) -> str:
return r.json()["id"]
-def upload(metafn: Path, datafn: Path, token: Union[str, Path]):
+def upload(metafn: Path, datafn: Path, token: Union[str, Path], base_url=BASE_URL):
"""takes metadata and file and uploads to Zenodo"""
datafn = Path(datafn).expanduser()
@@ -126,11 +126,11 @@ def upload(metafn: Path, datafn: Path, token: Union[str, Path]):
# %% token check
token = get_token(token)
- check_token(token)
+ check_token(token, base_url)
# %% Create new submission
- depid = create(token)
+ depid = create(token, base_url)
# %% Upload data
- upload_data(token, datafn, depid)
+ upload_data(token, datafn, depid, base_url)
# %% add metadata
diff --git a/upload_zenodo.py b/upload_zenodo.py
index 8bc1ef6..8336a0a 100755
--- a/upload_zenodo.py
+++ b/upload_zenodo.py
@@ -4,6 +4,7 @@
"""
from argparse import ArgumentParser, Namespace
import pyzenodo3.upload as zup
+from pyzenodo3.base import BASE_URL
def cmdparse() -> Namespace:
@@ -11,6 +12,7 @@ def cmdparse() -> Namespace:
p.add_argument("apikey", help="Zenodo API key", nargs="?")
p.add_argument("inifn", help="mymeta.ini file with author, title, etc.")
p.add_argument("path", help="directory or file to upload to Zenodo", nargs="?")
+ p.add_argument("--use-sandbox", help="Use sandbox.zenodo.org instead of the real site.", action='store_true')
return p.parse_args()
@@ -18,7 +20,13 @@ def main():
p = cmdparse()
metafn = zup.meta(p.inifn)
- zup.upload(metafn, p.path, p.apikey)
+
+ if p.use_sandbox:
+ base_url = "https://sandbox.zenodo.org/api"
+ else:
+ base_url = BASE_URL
+
+ zup.upload(metafn, p.path, p.apikey, base_url=base_url)
if __name__ == "__main__":
| Support using sandbox API
Hello, thanks for this lib.
Zenodo has a sandbox site and API at https://sandbox.zenodo.org. It would be useful to be able to switch to that while testing things out.
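With the change in this PR, switching to the sandbox becomes a single extra argument. A hypothetical usage sketch (the ini/data file names and the token are placeholders):
```python
from pathlib import Path
import pyzenodo3.upload as zup

metafn = zup.meta(Path("mymeta.ini"))   # build the metadata file from the ini
zup.upload(metafn, Path("data.csv"), "SANDBOX_TOKEN",
           base_url="https://sandbox.zenodo.org/api")
```
From the command line the same is achieved with the new flag: `python upload_zenodo.py APIKEY mymeta.ini data.csv --use-sandbox`.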
| 2020-04-10T11:43:49 | 0.0 | [] | [] |
|||
precice/fenics-adapter | precice__fenics-adapter-129 | 87e03f8ec169fad7c4d19f73a9a6bd3db9244e66 | diff --git a/fenicsprecice/fenicsprecice.py b/fenicsprecice/fenicsprecice.py
index 8574009f..9c939e59 100644
--- a/fenicsprecice/fenicsprecice.py
+++ b/fenicsprecice/fenicsprecice.py
@@ -61,7 +61,6 @@ def __init__(self, adapter_config_filename='precice-adapter-config.json'):
self._read_function_space = None # initialized later
self._write_function_space = None # initialized later
self._dofmap = None # initialized later using function space provided by user
- self._coupling_subdomain = None
# coupling mesh related quantities
self._owned_vertices = Vertices(VertexType.OWNED)
@@ -166,9 +165,9 @@ def get_point_sources(self, data):
Returns
-------
x_forces : list
- List containing X component of forces with reference to respective point sources on the coupling interface.
+ List containing X component of forces with reference to respective point sources on the coupling subdomain.
y_forces : list
- List containing Y component of forces with reference to respective point sources on the coupling interface.
+ List containing Y component of forces with reference to respective point sources on the coupling subdomain.
"""
assert (self._read_function_type is FunctionType.VECTOR), \
"PointSources only supported for vector valued read data."
@@ -260,7 +259,7 @@ def write_data(self, write_function):
def initialize(self, coupling_subdomain, read_function_space=None, write_object=None, fixed_boundary=None):
"""
- Initializes the coupling interface and sets up the mesh in preCICE.
+ Initializes the coupling and sets up the mesh where coupling happens in preCICE.
Parameters
----------
@@ -355,7 +354,7 @@ def initialize(self, coupling_subdomain, read_function_space=None, write_object=
if fenics_dimensions != self._interface.get_dimensions():
raise Exception("Dimension of preCICE setup and FEniCS do not match")
- # Set vertices on the coupling interface for this rank
+ # Set vertices on the coupling subdomain for this rank
set_fenics_vertices(function_space, coupling_subdomain, self._fenics_vertices)
set_owned_vertices(function_space, coupling_subdomain, self._owned_vertices)
set_unowned_vertices(function_space, coupling_subdomain, self._unowned_vertices)
@@ -472,7 +471,7 @@ def advance(self, dt):
def finalize(self):
"""
- Completes the coupling interface execution. To be called at the end of the simulation.
+ Finalizes the coupling via preCICE and the adapter. To be called at the end of the simulation.
Notes
-----
| Support multiple coupling subdomains
https://github.com/precice/fenics-adapter/blob/7fec6f3fee30a942562145d7f5f9c1a7b68edfde/fenicsadapter/fenicsadapter.py#L89-L97
By having the `configure()` directly inside the `Adapter` class, we assume that the adapter can only work with one coupling interface. By separating the adapter class (that also handles the time control etc) from the interface itself, we could support multiple coupling interfaces (for example, for Fluid-Solid-Fluid coupling, as in the shell-and-tubes heat exchanger tutorial).
Then, the `configure()` method would not be called in the adapter itself, but in each coupling interface. The same for the methods to read/write data etc.
| The intended use of the adapter is one interface. In certain situations more than one interface might be needed, but I don't really see the need for this feature now. As a workaround I can imagine that creating two instances of the adapter object might even be sufficient. From a developer's perspective this is also the preferred solution, since supporting N coupling interfaces in the adapter would make the code considerably more complicated for an edge case.
I will close this issue. Feel free to reopen it or comment. Especially related:
* Do you have a use case for the fenics adapter with several coupling interfaces?
* Does the approach mentioned above (creating two instances of the adapter) work?
Going with "only supporting one interface" is perfectly fine, it is a design decision.
> Do you have a use case for the fenics adapter with several coupling interfaces?
Use cases I can think of:
- Heat exchanger (one solid, two fluids)
- Separating membrane (one solid, two fluids)
- Beams connected (multiple solids)
- (if you add support for flows) FSI with two flaps (one fluid, two solids)
> Does the approach mentioned above (creating two instances of the adapter) work?
I don't know anything about the FEniCS adapter code, but in the OpenFOAM adapter we have it like this and it works.
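A minimal sketch of that two-instances workaround, assuming the public class is `fenicsprecice.Adapter` and using hypothetical config file names:
```python
import fenicsprecice

# one adapter object per coupling subdomain, each with its own preCICE config file
adapter_top = fenicsprecice.Adapter(adapter_config_filename="precice-adapter-config-top.json")
adapter_bottom = fenicsprecice.Adapter(adapter_config_filename="precice-adapter-config-bottom.json")

# each instance is then initialized with its own coupling subdomain, e.g.
# adapter_top.initialize(coupling_subdomain_top, read_function_space=V, write_object=u)
# adapter_bottom.initialize(coupling_subdomain_bottom, read_function_space=V, write_object=u)
```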
I totally misunderstood the issue. We are talking about `coupling_subdomains`, not about what we call the `interface` in the adapter:
https://github.com/precice/fenics-adapter/blob/87e03f8ec169fad7c4d19f73a9a6bd3db9244e66/fenicsprecice/fenicsprecice.py#L267-L268
https://github.com/precice/fenics-adapter/blob/87e03f8ec169fad7c4d19f73a9a6bd3db9244e66/fenicsprecice/fenicsprecice.py#L57
This is, of course, a totally valid use-case. We already have several tutorials dealing with this and I can also imagine extending the fenics adapter into this direction, if needed.
*note:* Naming with interface vs. subdomain is in some places inconsistent and confusing. I will open a PR in a moment to clean this up. | 2021-04-20T09:23:29 | 0.0 | [] | [] |
||
jnothman/UpSetPlot | jnothman__UpSetPlot-249 | 7a6387fa0dc98156c5e639c66315400de0f3777e | diff --git a/upsetplot/reformat.py b/upsetplot/reformat.py
index 05234d9..1ae253a 100644
--- a/upsetplot/reformat.py
+++ b/upsetplot/reformat.py
@@ -163,7 +163,7 @@ class QueryResult:
category_totals : Series
Total size of each category, regardless of selection.
total : number
- Total number of samples / sum of value
+ Total number of samples, or sum of sum_over value.
"""
def __init__(self, data, subset_sizes, category_totals, total):
| Checking the percentages calculation by setting the "min_subset_size" parameter
Hi,
Thank you for this very useful package.
I have a report regarding the calculation of the percentages of the various intersections: when applying a filter on the minimum size of the subsets to be displayed the calculation of the percentage is done on the total of the filtered elements and not on the total of the elements of each unfiltered set.
As you can see in the following 2 examples: in the unfiltered case (min_subset_size=0) the total percentages come out at 100% (and the sum of the percentages on the bars is correctly 100%), while in the filtered case (min_subset_size=3) the total percentages can exceed 100% (and all the percentage values on the bars are rescaled so that their sum is still 100%).
I think it is more correct to always display the percentages referring to the total of each set and apply the filter to control the number of intersections to be displayed while keeping the original percentages.
```
from upsetplot import UpSet, from_contents
from random import seed, sample
seed(1)
data = {
'A': [sample(range(1, 40), 20)],
'B': [sample(range(1, 21), 10)],
'C': [sample(range(1, 21), 5)]
}
df = from_contents(data)
# upset unfiltered
us1 = UpSet(df, subset_size='count', sort_by = 'cardinality',
show_counts=True, show_percentages=True,
element_size=None,
min_subset_size=0).plot()
```
### Upset unfiltered

```
# upset filtered
us2 = UpSet(df, subset_size='count', sort_by = 'cardinality',
show_counts=True, show_percentages=True,
element_size=None,
min_subset_size=3).plot()
```
### Upset filtered

------------------------------------------
### Here is what it should look like:

Kind regards
| 2023-12-28T12:08:33 | 0.0 | [] | [] |
|||
AkariGroup/akari_software | AkariGroup__akari_software-9 | 61d8a95c03639d04c7e14ca0d13e42512075df1c | diff --git a/samples/akari_sample/script/3a_gpio_control.py b/samples/akari_sample/script/3a_gpio_control.py
index f20aefd1..99803465 100644
--- a/samples/akari_sample/script/3a_gpio_control.py
+++ b/samples/akari_sample/script/3a_gpio_control.py
@@ -87,7 +87,7 @@ def main() -> None:
print("STEP5. Reset all output")
# execute reset_allout
m5.reset_allout()
- print(f"-> Reset")
+ print("-> Reset")
# pause for 2 seconds
time.sleep(2)
print()
@@ -119,7 +119,7 @@ def main() -> None:
# set the output value of pwmout0 via pwmout0_val
pwmout0_val = 200
# execute set_allout
- m5.set_allout(dout0_val, dout1_val, pwmout0_val)
+ m5.set_allout(dout0=dout0_val, dout1=dout1_val, pwmout0=pwmout0_val)
print("-> Set")
# pause for 2 seconds
time.sleep(2)
diff --git a/sdk/akari_controller/src/akari_controller/m5serial_server_py.py b/sdk/akari_controller/src/akari_controller/m5serial_server_py.py
index 4d4a0f87..f1490cbd 100644
--- a/sdk/akari_controller/src/akari_controller/m5serial_server_py.py
+++ b/sdk/akari_controller/src/akari_controller/m5serial_server_py.py
@@ -97,11 +97,16 @@ def set_pwmout(self, pin_id: int, value: int, sync: bool = True) -> None:
self._write_pin_out(sync=sync)
def set_allout(
- self, dout0_val: bool, dout1_val: bool, pwmout0_val: int, sync: bool = True
+ self,
+ *,
+ dout0: bool,
+ dout1: bool,
+ pwmout0: int,
+ sync: bool = True,
) -> None:
- self._pin_out.dout0 = dout0_val
- self._pin_out.dout1 = dout1_val
- self._pin_out.pwmout0 = pwmout0_val
+ self._pin_out.dout0 = dout0
+ self._pin_out.dout1 = dout1
+ self._pin_out.pwmout0 = pwmout0
self._write_pin_out(sync=sync)
| Create a class to represent a color
Merge #7 first
Added a class to represent a color.
I also implemented conversion to 16-bit Color along the way, so that arbitrary colors beyond the presets can be specified.
@kazyam53 @takuya-ikeda-tri
| 2022-07-16T10:19:39 | 0.0 | [] | [] |
|||
arcee-ai/mergekit | arcee-ai__mergekit-430 | 852291726650c8dd6ac78721c2c4d0fbdafc8e3d | diff --git a/mergekit/tokenizer/build.py b/mergekit/tokenizer/build.py
index fb9f9d9c..3cefed91 100644
--- a/mergekit/tokenizer/build.py
+++ b/mergekit/tokenizer/build.py
@@ -90,7 +90,12 @@ def get_stripped_tokenizer(
del tok_dict["model"]["vocab"][tok]
def _keep_merge(m):
- toks = m.split(" ")
+ if isinstance(m, str) and m.count(" ") == 1:
+ toks = m.split(" ")
+ elif isinstance(m, list):
+ toks = m
+ else:
+ raise RuntimeError(f"Unexpected merge format: {repr(m)} ({type(m)})")
for tok in toks:
if tok in unused_toks:
return False
| Broken tokenizer in Yi-34B merge
I've been trying to merge two Yi-34B based builds using Arcee's hosted mergekit. The merge seems to be successful, with no errors shown, but no matter what tokenizer source I use, the result seems broken and I'm unable to convert to GGUF. I know there used to be a bug related to this, but I thought it was fixed.
This is the most recent YAML I used:
```
base_model: TeeZee/Kyllene-34B-v1.1
chat_template: auto
dtype: float16
merge_method: ties
models:
- model: TeeZee/Kyllene-34B-v1.1
parameters:
density: 0.5
weight: 0.5
- model: Doctor-Shotgun/Nous-Capybara-limarpv3-34B
parameters:
density: 0.5
weight: 0.5
parameters:
int8_mask: true
normalize: false
tokenizer_source: base
```
| Hi! What do you mean by a broken tokenizer? I'm not sure if my tokenizer is broken. I got tokens like "<|unused115|>", "<|unused026|>" in my messages after merging the model.
> Hi! What do you mean by a broken tokenizer? I'm not sure if my tokenizer is broken. I got tokens like "<|unused115|>", "<|unused026|>" in my messages after merging the model.
I was unable to convert the model to GGUF and quantize because of an error about token ids being out of range. There were tokens numbered 64000 & 64001 when the max was 63999.
I was finally able to fix this problem by redoing the merge with the added parameter "embed_slerp=true".
I too see a lot of unused tokens in the config, but I don't know if that's anything to worry about. So far, I haven't seen these show up in generated text. | 2024-10-05T19:51:36 | 0.0 | [] | [] |
||
obsidian-html/obsidian-html | obsidian-html__obsidian-html-485 | 75202f14e79e0628c6200a7f88bb6dffdf07e029 | diff --git a/obsidianhtml/ConvertVault.py b/obsidianhtml/ConvertVault.py
index 3d4788bb..11dcad79 100644
--- a/obsidianhtml/ConvertVault.py
+++ b/obsidianhtml/ConvertVault.py
@@ -994,7 +994,7 @@ def ConvertMarkdownPageToHtmlPage(fo:'OH_File', pb, backlinkNode=None, log_level
# ------------------------------------------------------------------
extensions = [
'abbr', 'attr_list', 'def_list',
- 'fenced_code',
+ 'fenced_code', 'tables',
'md_in_html', FootnoteExtension(), FormattingExtension(),
'codehilite',
CustomTocExtension(), MermaidExtension(), CallOutExtension(), 'pymdownx.arithmatex']
| markdown tables to html?
Hey bro,
Markdown tables aren't being converted to html or something. V 3.3.1

My current fix is to just put it in a `markdown code block` or convert it to html using this [table generator](https://www.tablesgenerator.com/html_tables).
(btw, I love your plugin, putting up an issue no hate all love)
| just noticed that the tables on your site aren't working either. Something happened!
https://obsidian-html.github.io/Configurations/Features/Features.html
I uninstalled and reinstalled using `pip install obsidianhtml`. With this version V3.3.0, my tables are back in working order!
So the problem is a change that happened between V3.3.1 and V3.3.0.

| 2022-12-07T13:50:41 | 0.0 | [] | [] |
||
IBM/lale | IBM__lale-597 | 5219ceb020dc3affd248672d1446225d2b6b463b | diff --git a/lale/lib/aif360/util.py b/lale/lib/aif360/util.py
index 37c3dba21..3e7376b5a 100644
--- a/lale/lib/aif360/util.py
+++ b/lale/lib/aif360/util.py
@@ -780,7 +780,7 @@ def label_for_stratification(row):
def fair_stratified_train_test_split(
- X, y, favorable_labels, protected_attributes, test_size=0.25
+ X, y, favorable_labels, protected_attributes, test_size=0.25, random_state=42
):
"""
Splits X and y into random train and test subsets stratified by labels and protected attributes.
@@ -807,6 +807,28 @@ def fair_stratified_train_test_split(
Features for which fairness is desired.
+ test_size : float or int, default=0.25
+
+ If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split.
+ If int, represents the absolute number of test samples.
+
+ random_state : int, RandomState instance or None, default=42
+
+ Controls the shuffling applied to the data before applying the split.
+ Pass an integer for reproducible output across multiple function calls.
+
+ - None
+
+ RandomState used by numpy.random
+
+ - numpy.random.RandomState
+
+ Use the provided random state, only affecting other users of that same random state instance.
+
+ - integer
+
+ Explicit seed.
+
Returns
-------
result : tuple
@@ -821,7 +843,7 @@ def fair_stratified_train_test_split(
"""
stratify = column_for_stratification(X, y, favorable_labels, protected_attributes)
train_X, test_X, train_y, test_y = sklearn.model_selection.train_test_split(
- X, y, test_size=test_size, random_state=42, stratify=stratify
+ X, y, test_size=test_size, random_state=random_state, stratify=stratify
)
if hasattr(X, "json_schema"):
train_X = add_schema_adjusting_n_rows(train_X, X.json_schema)
| Allow the user to set the random state in fair_stratified_train_test_split
In the current implementation of [`fair_stratified_train_test_split`](https://github.com/IBM/lale/blob/master/lale/lib/aif360/util.py#L782) the user cannot supply the `random_state` argument as is allowed for scikit-learn's `train_test_split`. This would be useful.
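With the change above, a reproducible split would look roughly like this (a sketch; `X`, `y` and the fairness info are placeholders for your own dataset):
```python
from lale.lib.aif360.util import fair_stratified_train_test_split

train_X, test_X, train_y, test_y = fair_stratified_train_test_split(
    X, y,
    favorable_labels=favorable_labels,          # as defined for your dataset
    protected_attributes=protected_attributes,  # as defined for your dataset
    test_size=0.25,
    random_state=0,                             # int, RandomState instance, or None
)
```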
| 2021-02-04T19:03:29 | 0.0 | [] | [] |
|||
columnflow/columnflow | columnflow__columnflow-194 | 8dc0f9e9f2ffb9ec53ba96125cb78be2f8ac14a3 | diff --git a/columnflow/config_util.py b/columnflow/config_util.py
index 200c28481..3bfd7e3b5 100644
--- a/columnflow/config_util.py
+++ b/columnflow/config_util.py
@@ -87,7 +87,7 @@ def create_category_combinations(
config: od.Config,
categories: dict[str, list[od.Categories]],
name_fn: Callable[[Any], str],
- kwargs_fn: Callable[[Any], dict],
+ kwargs_fn: Callable[[Any], dict] | None = None,
skip_existing: bool = True,
) -> int:
"""
@@ -102,7 +102,9 @@ def create_category_combinations(
Each newly created category is instantiated with this name as well as arbitrary keyword
arguments as returned by *kwargs_fn*. This function is called with the categories (in a
dictionary, mapped to the sequence names as given in *categories*) that contribute to the newly
- created category and should return a dictionary with at least one field ``"id"``.
+ created category and should return a dictionary. If the fields ``"id"`` and ``"selection"`` are
+ missing, they are filled with reasonable defaults leading to an auto-incremented id and a list
+ of all parent selection statements.
If the name of a new category is already known to *config* it skipped unless *skip_existing* is
*False*.
@@ -125,6 +127,7 @@ def name_fn(lepton=None, n_jets=None, n_tags=None):
def kwargs_fn(categories):
# return arguments that are forwarded to the category init
# (use id "+" here which simply increments the last taken id, see order.Category)
+ # (note that this is also the default)
return {"id": "+"}
create_category_combinations(cfg, categories, name_fn, kwargs_fn)
@@ -137,6 +140,12 @@ def kwargs_fn(categories):
if n_groups < 2:
return n_created_categories
+ # check functions
+ if not callable(name_fn):
+ raise TypeError(f"name_fn must be a function, but got {name_fn}")
+ if kwargs_fn and not callable(kwargs_fn):
+ raise TypeError(f"when set, kwargs_fn must be a function, but got {kwargs_fn}")
+
# start combining, considering one additional groups for combinatorics at a time
for _n_groups in range(2, n_groups + 1):
@@ -157,8 +166,13 @@ def kwargs_fn(categories):
if skip_existing and config.has_category(cat_name, deep=True):
continue
+ # create arguments for the new category
+ kwargs = kwargs_fn(root_cats) if callable(kwargs_fn) else {}
+ kwargs.setdefault("id", "+")
+ kwargs.setdefault("selection", [c.selection for c in root_cats.values()])
+
# create the new category
- cat = od.Category(name=cat_name, **kwargs_fn(root_cats))
+ cat = od.Category(name=cat_name, **kwargs)
n_created_categories += 1
# find direct parents and connect them
diff --git a/columnflow/production/categories.py b/columnflow/production/categories.py
index 58891452a..0abb28ecc 100644
--- a/columnflow/production/categories.py
+++ b/columnflow/production/categories.py
@@ -4,6 +4,8 @@
Column production methods related defining categories.
"""
+from collections import defaultdict
+
import law
from columnflow.selection import Selector
@@ -27,16 +29,24 @@ def category_ids(self: Producer, events: ak.Array, **kwargs) -> ak.Array:
# TODO: we maybe don't want / need to loop through all leaf categories
for cat_inst in self.config_inst.get_leaf_categories():
- # get the selector class
- selector = self.category_to_selector[cat_inst]
+ # start with a true mask
+ cat_mask = np.ones(len(events)) > 0
+
+ # loop through selectors
+ for selector in self.category_to_selectors[cat_inst]:
+ # run the selector for events that still match the mask, then AND concat
+ _cat_mask = self[selector](events[cat_mask], **kwargs)
+ cat_mask[cat_mask] &= np.asarray(_cat_mask == 1)
- # get the category mask
- cat_mask = self[selector](events, **kwargs)
+ # stop if no events are left
+ if not ak.any(cat_mask):
+ break
# covert to nullable array with the category ids or none, then apply ak.singletons
ids = ak.where(cat_mask, np.float32(cat_inst.id), np.float32(np.nan))
category_ids.append(ak.singletons(ak.nan_to_none(ids)))
+ # combine and save
category_ids = ak.concatenate(category_ids, axis=1)
events = set_ak_column(events, "category_ids", category_ids, value_type=np.int32)
@@ -45,32 +55,33 @@ def category_ids(self: Producer, events: ak.Array, **kwargs) -> ak.Array:
@category_ids.init
def category_ids_init(self: Producer) -> None:
- # store a mapping from leaf category to selector class for faster lookup
- self.category_to_selector = {}
+ # store a mapping from leaf category to selector classes for faster lookup
+ self.category_to_selectors = defaultdict(list)
# add all selectors obtained from leaf category selection expressions to the used columns
for cat_inst in self.config_inst.get_leaf_categories():
- sel = cat_inst.selection
- if Selector.derived_by(sel):
- selector = sel
- elif Selector.has_cls(sel):
- selector = Selector.get_cls(sel)
- else:
- raise Exception(
- f"selection '{sel}' of category '{cat_inst.name}' cannot be resolved to a existing "
- "Selector object",
- )
-
- # variables should refer to unexposed selectors as they should usually not
- # return SelectionResult's but a flat per-event mask
- if selector.exposed:
- logger.warning(
- f"selection of category {cat_inst.name} seems to refer to an exposed selector "
- "whose return value is most likely incompatible with category masks",
- )
-
- # update dependency sets
- self.uses.add(selector)
- self.produces.add(selector)
-
- self.category_to_selector[cat_inst] = selector
+ # treat all selections as lists
+ for sel in law.util.make_list(cat_inst.selection):
+ if Selector.derived_by(sel):
+ selector = sel
+ elif Selector.has_cls(sel):
+ selector = Selector.get_cls(sel)
+ else:
+ raise Exception(
+ f"selection '{sel}' of category '{cat_inst.name}' cannot be resolved to an "
+ "existing Selector object",
+ )
+
+ # variables should refer to unexposed selectors as they should usually not
+ # return SelectionResult's but a flat per-event mask
+ if selector.exposed:
+ logger.warning(
+ f"selection of category {cat_inst.name} seems to refer to an exposed selector "
+ "whose return value is most likely incompatible with category masks",
+ )
+
+ # update dependency sets
+ self.uses.add(selector)
+ self.produces.add(selector)
+
+ self.category_to_selectors[cat_inst].append(selector)
| Make category_id producer aware of list of selectors
The producer in [production/categories.py](https://github.com/uhh-cms/columnflow/blob/3213c1447b43ade9aa45d3b64db043ea5c26ec00/columnflow/production/categories.py) should account for category definitions that store a list of expressions, apply all of them sequentially (perhaps with an "early stopping" check though) and construct the logical AND between all returned masks.
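A rough standalone sketch of that combination logic (NumPy only; `evaluate` stands in for calling a selector and is hypothetical):
```python
import numpy as np

def combine_selector_masks(events, selectors, evaluate):
    """AND per-event masks from several selectors, skipping the rest once nothing survives."""
    mask = np.ones(len(events), dtype=bool)
    for sel in selectors:
        # evaluate only the events that are still selected, then fold the result back in
        mask[mask] &= np.asarray(evaluate(sel, events[mask])) == 1
        if not mask.any():   # early stopping: an all-False mask cannot recover
            break
    return mask
```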
| 2023-02-23T09:34:48 | 0.0 | [] | [] |
|||
cmu-rss-lab/roswire | cmu-rss-lab__roswire-487 | 2b0c90b5b4d000468d04e9af00766cb28210ef50 | diff --git a/setup.cfg b/setup.cfg
index e31ac654..ee80e71f 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -28,6 +28,7 @@ install_requires =
loguru ~= 0.5.3
psutil ~= 5.7.0
pyyaml ~= 5.1
+ pyparsing ~= 2.4.7
package_dir =
=src
packages = find:
diff --git a/src/roswire/common/msg.py b/src/roswire/common/msg.py
index 7dfab8f2..2e5af5fb 100644
--- a/src/roswire/common/msg.py
+++ b/src/roswire/common/msg.py
@@ -310,16 +310,13 @@ def from_string(cls, package: str, name: str, text: str) -> "MsgFormat":
@staticmethod
def sections_from_string(text: str) -> t.List[str]:
- sections: t.List[str] = []
+ sections: t.List[str] = [""]
section_index = 0
for line in (ss.strip() for ss in text.split("\n")):
if line.startswith("---"):
section_index += 1
+ sections.append("")
else:
- if len(sections) < section_index + 1:
- if len(sections) < section_index:
- sections.append("")
- sections.append("")
sections[section_index] += f"{line}\n"
return sections
| Fix corner case handling for empty action files
It turns out the action files can contain three entirely empty sections [https://github.com/ros-teleop/twist_mux_msgs/blob/melodic-devel/action/JoyPriority.action]:
```
---
---
```
This causes our current action file parser to fail.
```
File "/home/chris/experiments/rosdiscover-evaluation/deps/roswire/src/roswire/common/action.py", line 72, in from_file
return cls.from_string(package, name, contents)
│ │ │ │ └ '---\n---\n'
│ │ │ └ 'JoyPriority'
│ │ └ 'twist_mux_msgs'
│ └ <classmethod object at 0x7f1f3cad41c0>
└ <class 'roswire.ros1.action.ROS1ActionFormat'>
File "/home/chris/experiments/rosdiscover-evaluation/deps/roswire/src/roswire/ros1/action.py", line 36, in from_string
sections = ROS1MsgFormat.sections_from_string(s)
│ │ └ '---\n---\n'
│ └ <staticmethod object at 0x7f1f3d08c6a0>
└ <class 'roswire.ros1.msg.ROS1MsgFormat'>
File "/home/chris/experiments/rosdiscover-evaluation/deps/roswire/src/roswire/common/msg.py", line 323, in sections_from_string
sections[section_index] += f"{line}\n"
│ └ 2
└ ['', '']
```
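For reference, the patched parser handles this input without indexing errors; a standalone rendering of the new logic:
```python
def sections_from_string(text):
    sections = [""]                 # start with one (possibly empty) section
    section_index = 0
    for line in (ss.strip() for ss in text.split("\n")):
        if line.startswith("---"):
            section_index += 1
            sections.append("")     # every '---' opens a new, initially empty section
        else:
            sections[section_index] += f"{line}\n"
    return sections

print(sections_from_string("---\n---\n"))   # ['', '', '\n'] -- three sections, all effectively empty
```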
| 2021-10-12T20:06:16 | 0.0 | [] | [] |
|||
theherk/pinkopy | theherk__pinkopy-31 | 074361c2f41238647971a6716d517d350697b4cc | diff --git a/pinkopy/base_session.py b/pinkopy/base_session.py
index 2dcff8c..22032fd 100644
--- a/pinkopy/base_session.py
+++ b/pinkopy/base_session.py
@@ -2,7 +2,11 @@
import inspect
import logging
import time
-from urllib.parse import urlencode, urljoin
+try:
+ from urllib.parse import urlencode, urljoin
+except ImportError:
+ from urllib import urlencode
+ from urlparse import urljoin
import xmltodict
from cachetools.func import ttl_cache
diff --git a/pinkopy/clients.py b/pinkopy/clients.py
index 9757625..9e370f8 100644
--- a/pinkopy/clients.py
+++ b/pinkopy/clients.py
@@ -13,7 +13,7 @@ def __init__(self, cache_methods=None, *args, **kwargs):
cache_methods = cache_methods or ['get_client',
'get_client_properties',
'get_clients']
- super().__init__(cache_methods=cache_methods, *args, **kwargs)
+ super(ClientSession, self).__init__(cache_methods=cache_methods, *args, **kwargs)
def get_client(self, client_id):
"""Get client.
diff --git a/pinkopy/commvault.py b/pinkopy/commvault.py
index 4546bcc..bccc754 100644
--- a/pinkopy/commvault.py
+++ b/pinkopy/commvault.py
@@ -17,7 +17,7 @@ class CommvaultSession(BaseSession):
"""
def __init__(self, *args, **kwargs):
"""Initialize route classes and shim."""
- super().__init__(*args, **kwargs)
+ super(CommvaultSession, self).__init__(*args, **kwargs)
self.clients = ClientSession(token=self.headers['Authtoken'], *args, **kwargs)
self.subclients = SubclientSession(token=self.headers['Authtoken'], *args, **kwargs)
diff --git a/pinkopy/jobs.py b/pinkopy/jobs.py
index 1aaf0dc..5831c2e 100644
--- a/pinkopy/jobs.py
+++ b/pinkopy/jobs.py
@@ -11,7 +11,7 @@ class JobSession(BaseSession):
def __init__(self, cache_methods=None, *args, **kwargs):
cache_methods = cache_methods or ['get_job_details',
'get_jobs']
- super().__init__(cache_methods=cache_methods, *args, **kwargs)
+ super(JobSession, self).__init__(cache_methods=cache_methods, *args, **kwargs)
def get_jobs(self, client_id, job_filter=None, last=None):
"""Get jobs.
diff --git a/pinkopy/subclients.py b/pinkopy/subclients.py
index ef18dd0..8f4c985 100644
--- a/pinkopy/subclients.py
+++ b/pinkopy/subclients.py
@@ -10,7 +10,7 @@ class SubclientSession(BaseSession):
"""Methods for subclients."""
def __init__(self, cache_methods=None, *args, **kwargs):
cache_methods = cache_methods or ['get_subclients']
- super().__init__(cache_methods=cache_methods, *args, **kwargs)
+ super(SubclientSession, self).__init__(cache_methods=cache_methods, *args, **kwargs)
def get_subclients(self, client_id):
"""Get subclients.
| Python 2.x support
I'm working on a project for which Python 2.x support from this package would be super helpful. I'm currently diving down the rabbit hole of changes that would be necessary to make this work with Python 2.x.
In the event that I'm successful in getting it working, would there be interest in having those changes contributed?
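The main syntactic obstacle is `super()`: the zero-argument form only exists in Python 3, so a 2/3-compatible version has to name the class explicitly. A small sketch of the pattern, mirroring the classes in this package:
```python
class BaseSession(object):
    def __init__(self, *args, **kwargs):
        pass

class ClientSession(BaseSession):
    def __init__(self, *args, **kwargs):
        # Python 3 only:   super().__init__(*args, **kwargs)
        # Python 2 and 3:
        super(ClientSession, self).__init__(*args, **kwargs)
```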
| Most definitely. We are moving away from Commvault, but this package is a great start to a full api wrapper for it. As long as your contributions are written neatly, documented, Python 3 first 2 second, and tests pass, I will absolutely take the PR.
@theherk The specific use case involves using pinkopy in conjunction with a Salt runner to orchestrate post-client install configuration via the Commvault API. I notice that on the site linked from your GitHub profile, you mention working with Saltstack. Have you attempted to use pinkopy in the same context as Salt? If so, how have you gotten around the issue of using pinkopy with Salt when Salt does not support Python 3?
Oh interesting. We use them separately. We only use the Commvault api to verify the status of backups. We do use Saltstack to configure the client, but we do it via states that do not use the api. You idea sounds like a good one.
| 2016-09-22T16:03:25 | 0.0 | [] | [] |
||
WenjieDu/PyPOTS | WenjieDu__PyPOTS-444 | d5631642c0d5905c154a2fb367561696488bbbc2 | diff --git a/README.md b/README.md
index 60b5c5a7..587594a0 100644
--- a/README.md
+++ b/README.md
@@ -161,7 +161,7 @@ And what else? Please read on ;-)
Time series datasets are taken as coffee beans at PyPOTS, and POTS datasets are incomplete coffee beans with missing parts that have their own meanings.
To make various public time-series datasets readily available to users,
<i>Time Series Data Beans (TSDB)</i> is created to make loading time-series datasets super easy!
-Visit [TSDB](https://github.com/WenjieDu/TSDB) right now to know more about this handy tool, and it now supports a total of 169 open-source datasets!
+Visit [TSDB](https://github.com/WenjieDu/TSDB) right now to know more about this handy tool, and it now supports a total of 170 open-source datasets!
<a href="https://github.com/WenjieDu/PyGrinder">
<img src="https://pypots.com/figs/pypots_logos/PyGrinder/logo_FFBG.svg" align="right" width="140" alt="PyGrinder logo"/>
@@ -185,8 +185,8 @@ POTS algorithms on various tasks.
<img src="https://pypots.com/figs/pypots_logos/BrewPOTS/logo_FFBG.svg" align="right" width="140" alt="BrewPOTS logo"/>
</a>
-Now we have the beans, the grinder, and the pot, how to brew us a cup of coffee? Tutorials are necessary!
-Considering the future workload, PyPOTS tutorials are released in a single repo,
+Now the beans, grinder, and pot are ready, please have a seat on the bench and let's think about how to brew us a cup of coffee.
+Tutorials are necessary! Considering the future workload, PyPOTS tutorials are released in a single repo,
and you can find them in [BrewPOTS](https://github.com/WenjieDu/BrewPOTS).
Take a look at it now, and learn how to brew your POTS datasets!
@@ -264,39 +264,38 @@ saits.load("save_it_here/saits_physionet2012.pypots") # reload the serialized m
## ❖ Citing PyPOTS
> [!TIP]
-> **[Updates in Feb 2024]** Our survey paper [Deep Learning for Multivariate Time Series Imputation: A Survey](https://arxiv.org/abs/2402.04059) has been released on arXiv.
-The code is open source in the GitHub repo [Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation).
+> **[Updates in Jun 2024]** The 1st comprehensive time-seres imputation benchmark paper
+[TSI-Bench: Benchmarking Time Series Imputation](https://arxiv.org/abs/2406.12747) now is public available.
+The code is open source in the repo [Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation).
+With nearly 35,000 experiments, we provide a comprehensive benchmarking study on 28 imputation methods, 3 missing patterns (points, sequences, blocks),
+various missing rates, and 8 real-world datasets.
+>
+> **[Updates in Feb 2024]** Our survey paper [Deep Learning for Multivariate Time Series Imputation: A Survey](https://arxiv.org/abs/2402.04059) has been released on arXiv.
We comprehensively review the literature of the state-of-the-art deep-learning imputation methods for time series,
provide a taxonomy for them, and discuss the challenges and future directions in this field.
->
-> **[Updates in Jun 2023]** A short version of the PyPOTS paper is accepted by the 9th SIGKDD international workshop on
-Mining and Learning from Time Series ([MiLeTS'23](https://kdd-milets.github.io/milets2023/))).
-**Additionally**, PyPOTS has been included as a [PyTorch Ecosystem](https://pytorch.org/ecosystem/) project.
-The paper introducing PyPOTS is available on arXiv at [this URL](https://arxiv.org/abs/2305.18811),
-and we are pursuing to publish it in prestigious academic venues, e.g. JMLR (track for
+The paper introducing PyPOTS is available [on arXiv](https://arxiv.org/abs/2305.18811),
+A short version of it is accepted by the 9th SIGKDD international workshop on Mining and Learning from Time Series ([MiLeTS'23](https://kdd-milets.github.io/milets2023/))).
+**Additionally**, PyPOTS has been included as a [PyTorch Ecosystem](https://pytorch.org/ecosystem/) project.
+We are pursuing to publish it in prestigious academic venues, e.g. JMLR (track for
[Machine Learning Open Source Software](https://www.jmlr.org/mloss/)). If you use PyPOTS in your work,
please cite it as below and star this repository to make others notice this library.
There are scientific research projects using PyPOTS and referencing in their papers.
-Here is [an incomplete list of them](https://scholar.google.com/scholar?as_ylo=2022&q=%E2%80%9CPyPOTS%E2%80%9D&hl=en>).
+Here is [an incomplete list of them](https://scholar.google.com/scholar?as_ylo=2022&q=%E2%80%9CPyPOTS%E2%80%9D&hl=en).
``` bibtex
@article{du2023pypots,
title={{PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series}},
author={Wenjie Du},
+journal={arXiv preprint arXiv:2305.18811},
year={2023},
-eprint={2305.18811},
-archivePrefix={arXiv},
-primaryClass={cs.LG},
-url={https://arxiv.org/abs/2305.18811},
-doi={10.48550/arXiv.2305.18811},
}
```
or
> Wenjie Du. (2023).
> PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series.
-> arXiv, abs/2305.18811.https://arxiv.org/abs/2305.18811
+> arXiv, abs/2305.18811. https://arxiv.org/abs/2305.18811
## ❖ Contribution
@@ -340,7 +339,7 @@ We care about the feedback from our users, so we're building PyPOTS community on
- [Slack](https://join.slack.com/t/pypots-org/shared_invite/zt-1gq6ufwsi-p0OZdW~e9UW_IA4_f1OfxA). General discussion, Q&A, and our development team are here;
- [LinkedIn](https://www.linkedin.com/company/pypots). Official announcements and news are here;
-- [WeChat (微信公众号)](https://mp.weixin.qq.com/s/sNgZmgAyxDn2sZxXoWJYMA). We also run a group chat on WeChat,
+- [WeChat (微信公众号)](https://mp.weixin.qq.com/s/X3ukIgL1QpNH8ZEXq1YifA). We also run a group chat on WeChat,
and you can get the QR code from the official account after following it;
If you have any suggestions or want to contribute ideas or share time-series related papers, join us and tell.
diff --git a/README_zh.md b/README_zh.md
index 90624552..f9ab4913 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -145,7 +145,7 @@ PyPOTS currently supports imputation, forecasting, classification, and clustering of multivariate POTS data, as well as
In PyPOTS, data can be seen as coffee beans, and POTS data with missing values are incomplete coffee beans.
To let users easily work with various open-source time-series datasets, we created Time Series Data Beans (TSDB), a repository of open-source time-series datasets (think of it as a coffee-bean warehouse!):
-TSDB makes loading open-source time-series datasets super easy! Visit [TSDB](https://github.com/WenjieDu/TSDB) to learn more about it; it currently supports a total of 169 open-source datasets!
+TSDB makes loading open-source time-series datasets super easy! Visit [TSDB](https://github.com/WenjieDu/TSDB) to learn more about it; it currently supports a total of 170 open-source datasets!
<a href="https://github.com/WenjieDu/PyGrinder">
<img src="https://pypots.com/figs/pypots_logos/PyGrinder/logo_FFBG.svg" align="right" width="140" alt="PyGrinder logo"/>
@@ -167,8 +167,8 @@ PyGrinder supports all of the missing patterns below and provides other missingness-related utility functions
<img src="https://pypots.com/figs/pypots_logos/BrewPOTS/logo_FFBG.svg" align="right" width="140" alt="BrewPOTS logo"/>
</a>
-Now that we have the coffee beans, the grinder, and the pot, how do we brew a cup of coffee? A brewing tutorial is indispensable!
-Considering the future workload, the PyPOTS tutorials are released in a separate repository, [BrewPOTS](https://github.com/WenjieDu/BrewPOTS).
+Now that we have the coffee beans (beans), the grinder, and the pot, let's sit down on the bench and think about how to brew a cup of coffee!
+Tutorials are indispensable! Considering the future workload, the PyPOTS tutorials are released in a separate repository, [BrewPOTS](https://github.com/WenjieDu/BrewPOTS).
Visit it to check the tutorials and learn how to brew your POTS data!
<p align="center">
@@ -244,16 +244,16 @@ saits.load("save_it_here/saits_physionet2012.pypots") # you can reload the saved model at any time
## ❖ Citing PyPOTS
> [!TIP]
-> **[Updates in Feb 2024]** Our survey paper [Deep Learning for Multivariate Time Series Imputation: A Survey](https://arxiv.org/abs/2402.04059)
-> has been released on arXiv, and the code is open source in the GitHub project [Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation).
-> We comprehensively review and summarize the latest literature on deep-learning-based time-series imputation methods, provide a taxonomy for them, and also discuss the current challenges and future directions in this field.
+> **[Updates in Jun 2024]** The first comprehensive time-series imputation benchmark paper [TSI-Bench: Benchmarking Time Series Imputation](https://arxiv.org/abs/2406.12747) is now available.
+> All code is open source in the [Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation) repository. With nearly 35,000 experiments, we provide a comprehensive benchmarking study on 28 imputation methods, 3 missing patterns (points, sequences, blocks), various missing rates, and 8 real-world datasets.
>
-> **[Updates in Jun 2023]** The 5-page short version of the PyPOTS paper has been accepted by the 9th SIGKDD international workshop on
-Mining and Learning from Time Series ([MiLeTS'23](https://kdd-milets.github.io/milets2023/)).
-> In addition, PyPOTS has also been included in the [PyTorch Ecosystem](https://pytorch.org/ecosystem/).
+> **[Updates in Feb 2024]** Our survey paper [Deep Learning for Multivariate Time Series Imputation: A Survey](https://arxiv.org/abs/2402.04059)
+> has been released on arXiv. We comprehensively review and summarize the latest literature on deep-learning-based time-series imputation methods, provide a taxonomy for them, and also discuss the current challenges and future directions in this field.
-The paper introducing PyPOTS is available on arXiv via [this link](https://arxiv.org/abs/2305.18811), and we are working to publish it in more influential academic venues,
-such as JMLR (track for [Machine Learning Open Source Software](https://www.jmlr.org/mloss/)).
+The PyPOTS paper is [available on arXiv](https://arxiv.org/abs/2305.18811); its 5-page short version has been accepted by the 9th SIGKDD international workshop on
+Mining and Learning from Time Series ([MiLeTS'23](https://kdd-milets.github.io/milets2023/)); meanwhile,
+PyPOTS has also been included in the [PyTorch Ecosystem](https://pytorch.org/ecosystem/). We are working to publish it in more influential academic venues,
+such as JMLR (track for [Machine Learning Open Source Software](https://www.jmlr.org/mloss/)).
If you use PyPOTS in your work, please cite our paper as below and star the project so that more people notice it; we are deeply grateful.
According to incomplete statistics, this [list](https://scholar.google.com/scholar?as_ylo=2022&q=%E2%80%9CPyPOTS%E2%80%9D&hl=en>) covers the scientific research projects that use PyPOTS and cite it in their papers
@@ -262,18 +262,14 @@
@article{du2023pypots,
title={{PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series}},
author={Wenjie Du},
+journal={arXiv preprint arXiv:2305.18811},
year={2023},
-eprint={2305.18811},
-archivePrefix={arXiv},
-primaryClass={cs.LG},
-url={https://arxiv.org/abs/2305.18811},
-doi={10.48550/arXiv.2305.18811},
}
```
or
> Wenjie Du. (2023).
> PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series.
-> arXiv, abs/2305.18811.https://arxiv.org/abs/2305.18811
+> arXiv, abs/2305.18811. https://arxiv.org/abs/2305.18811
## ❖ Contribution Statement
@@ -315,7 +311,7 @@ doi={10.48550/arXiv.2305.18811},
- [Slack](https://join.slack.com/t/pypots-org/shared_invite/zt-1gq6ufwsi-p0OZdW~e9UW_IA4_f1OfxA): daily discussion, Q&A, and exchanges with our development team happen here;
- [LinkedIn](https://www.linkedin.com/company/pypots): official announcements and news are posted here;
-- [WeChat official account (微信公众号)](https://mp.weixin.qq.com/s/sNgZmgAyxDn2sZxXoWJYMA): follow the official account and join the WeChat group chat to take part in discussions and get the latest updates;
+- [WeChat official account (微信公众号)](https://mp.weixin.qq.com/s/X3ukIgL1QpNH8ZEXq1YifA): follow the official account and join the WeChat group chat to take part in discussions and get the latest updates;
If you have any suggestions or ideas, or want to share time-series related papers, welcome to join us!
The PyPOTS community is an open, transparent, and friendly community; let's work together to build and improve PyPOTS!
diff --git a/docs/index.rst b/docs/index.rst
index 9bb2ffa2..baaf6c5e 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -220,7 +220,7 @@ And what else? Please read on ;-)
Time series datasets are taken as coffee beans at PyPOTS, and POTS datasets are incomplete coffee beans with missing parts that have their own meanings.
To make various public time-series datasets readily available to users,
*Time Series Data Beans (TSDB)* is created to make loading time-series datasets super easy!
-Visit `TSDB <https://github.com/WenjieDu/TSDB>`_ right now to know more about this handy tool, and it now supports a total of 168 open-source datasets!
+Visit `TSDB <https://github.com/WenjieDu/TSDB>`_ right now to know more about this handy tool, and it now supports a total of 170 open-source datasets!
.. image:: https://pypots.com/figs/pypots_logos/PyGrinder/logo_FFBG.svg
:width: 150
@@ -250,8 +250,8 @@ POTS algorithms on various tasks.
:align: right
:target: https://github.com/WenjieDu/BrewPOTS
-Now we have the beans, the grinder, and the pot, how to brew us a cup of coffee? Tutorials are necessary!
-Considering the future workload, PyPOTS tutorials is released in a single repo,
+Now the beans, grinder, and pot are ready, please have a seat on the bench and let's think about how to brew us a cup of coffee.
+Tutorials are necessary! Considering the future workload, PyPOTS tutorials is released in a single repo,
and you can find them in `BrewPOTS <https://github.com/WenjieDu/BrewPOTS>`_.
Take a look at it now, and learn how to brew your POTS datasets!
@@ -351,7 +351,7 @@ We care about the feedback from our users, so we're building PyPOTS community on
- `Slack <https://join.slack.com/t/pypots-org/shared_invite/zt-1gq6ufwsi-p0OZdW~e9UW_IA4_f1OfxA>`_. General discussion, Q&A, and our development team are here;
- `LinkedIn <https://www.linkedin.com/company/pypots>`_. Official announcements and news are here;
-- `WeChat (微信公众号) <https://mp.weixin.qq.com/s/sNgZmgAyxDn2sZxXoWJYMA>`_. We also run a group chat on WeChat,
+- `WeChat (微信公众号) <https://mp.weixin.qq.com/s/X3ukIgL1QpNH8ZEXq1YifA>`_. We also run a group chat on WeChat,
and you can get the QR code from the official account after following it;
If you have any suggestions or want to contribute ideas or share time-series related papers, join us and tell.
diff --git a/docs/milestones.rst b/docs/milestones.rst
index 10463664..036b7ca2 100644
--- a/docs/milestones.rst
+++ b/docs/milestones.rst
@@ -15,16 +15,12 @@ please cite it as below and star `PyPOTS repository <https://github.com/Wenj
.. code-block:: bibtex
:linenos:
- @article{du2023pypots,
- title={{PyPOTS: A Python Toolbox for Data Mining on Partially-Observed Time Series}},
- author={Wenjie Du},
- year={2023},
- eprint={2305.18811},
- archivePrefix={arXiv},
- primaryClass={cs.LG},
- url={https://arxiv.org/abs/2305.18811},
- doi={10.48550/arXiv.2305.18811},
- }
+ @article{du2023pypots,
+ title={{PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series}},
+ author={Wenjie Du},
+ journal={arXiv preprint arXiv:2305.18811},
+ year={2023},
+ }
or
@@ -43,10 +39,13 @@ Project Milestones
^^^^^^^^^^^^^^^^^^
- 2022-03: `PyPOTS project <https://github.com/WenjieDu/PyPOTS>`_ is initiated;
- 2022-04: PyPOTS v0.0.1 is released;
-- 2022-09: PyPOTS achieves its first 100 stars on GitHub;
+- 2022-09: PyPOTS achieves its first 100 stars âï¸ on GitHub;
- 2023-03: PyPOTS is `published on Conda-Forge <https://anaconda.org/conda-forge/pypots>`_, and users can install it via Anaconda;
- 2023-04: `PyPOTS website <https://pypots.com>`_ is launched, and PyPOTS achieves its first 10K downloads on PyPI;
- 2023-05: PyPOTS v0.1 is released, and `the preprint paper <https://arxiv.org/abs/2305.18811>`_ is published on arXiv;
- 2023-06: A short version of PyPOTS paper is accepted by the 9th SIGKDD International
Workshop on Mining and Learning from Time Series (`MiLeTS'23 <https://kdd-milets.github.io/milets2023/>`_);
- 2023-07: PyPOTS has been accepted as a `PyTorch Ecosystem <https://pytorch.org/ecosystem/>`_ project;
+- 2023-12: PyPOTS achieves its first 500 stars ð;
+- 2024-02: PyPOTS Research releases its imputation survey paper `Deep Learning for Multivariate Time Series Imputation: A Survey <https://arxiv.org/abs/2402.04059>`_;
+- 2024-06: PyPOTS Research releases the 1st comprehensive time-series imputation benchmark paper `TSI-Bench: Benchmarking Time Series Imputation <https://arxiv.org/abs/2406.12747>`_;
diff --git a/docs/references.bib b/docs/references.bib
index 8255628c..73c5c08f 100644
--- a/docs/references.bib
+++ b/docs/references.bib
@@ -737,3 +737,11 @@ @inproceedings{cao2020stemgnn
volume = {33},
year = {2020}
}
+
+@inproceedings{xu2024fits,
+title={{FITS}: Modeling Time Series with \$10k\$ Parameters},
+author={Zhijian Xu and Ailing Zeng and Qiang Xu},
+booktitle={The Twelfth International Conference on Learning Representations},
+year={2024},
+url={https://openreview.net/forum?id=bWcnvZ3qMb}
+}
diff --git a/pypots/cli/tuning.py b/pypots/cli/tuning.py
index 6d6307c8..2b80ad8c 100644
--- a/pypots/cli/tuning.py
+++ b/pypots/cli/tuning.py
@@ -6,6 +6,7 @@
# License: BSD-3-Clause
+import inspect
import os
from argparse import ArgumentParser, Namespace
@@ -237,7 +238,7 @@ def run(self):
lr = tuner_params.pop("lr")
# check if hyperparameters match
- model_all_arguments = model_class.__init__.__annotations__.keys()
+ model_all_arguments = inspect.signature(model_class).parameters.keys()
tuner_params_set = set(tuner_params.keys())
model_arguments_set = set(model_all_arguments)
if_hyperparameter_match = tuner_params_set.issubset(model_arguments_set)
diff --git a/pypots/imputation/tide/__init__.py b/pypots/imputation/tide/__init__.py
index a448168e..27d1b166 100644
--- a/pypots/imputation/tide/__init__.py
+++ b/pypots/imputation/tide/__init__.py
@@ -10,7 +10,7 @@
Notes
-----
This implementation is inspired by the official one
-https://github.com/google-research/google-research/blob/master/tide and
+https://github.com/google-research/google-research/blob/master/tide and https://github.com/lich99/TiDE
"""
diff --git a/pypots/imputation/tide/model.py b/pypots/imputation/tide/model.py
index 4bd29728..6e2bb3e1 100644
--- a/pypots/imputation/tide/model.py
+++ b/pypots/imputation/tide/model.py
@@ -23,7 +23,7 @@
class TiDE(BaseNNImputer):
"""The PyTorch implementation of the TiDE model.
- TiDE is originally proposed by Wu et al. in :cite:`wu2021TiDE`.
+ TiDE is originally proposed by Das et al. in :cite:`das2023tide`.
Parameters
----------
@@ -39,17 +39,14 @@ class TiDE(BaseNNImputer):
d_model :
The dimension of the model.
- n_heads :
- The number of heads in each layer of TiDE.
+ d_hidden :
+ The dimension of the hidden layer in the model.
- d_ffn :
- The dimension of the feed-forward network.
+ d_feature_encode :
+ The dimension of the feature encoder.
- factor :
- The factor of the auto correlation mechanism for the TiDE model.
-
- moving_avg_window_size :
- The window size of moving average.
+ d_temporal_decoder_hidden :
+ The dimension of the hidden layer in the temporal decoder.
dropout :
The dropout rate for the model.
diff --git a/setup.py b/setup.py
index 8f7b0a7b..13172658 100644
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,6 @@
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
| Failed to read all of an external model's arguments during tuning
### 1. System Info
PyPOTS v0.6
### 2. Information
- [X] The official example scripts
- [ ] My own created scripts
### 3. Reproduction
https://github.com/WenjieDu/PyPOTS/blob/d5631642c0d5905c154a2fb367561696488bbbc2/pypots/cli/tuning.py#L240
### 4. Expected behavior
Can only read the parent class' arguments.
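The difference between the two lookups in the patch can be seen on a small hypothetical model class: `__annotations__` only lists parameters that carry a type annotation, whereas `inspect.signature` reports every parameter (and resolves an inherited `__init__` through the MRO when the subclass does not define one).
```python
import inspect

class MyModel:                                  # hypothetical external model class
    def __init__(self, n_steps: int, n_features, lr=1e-3):
        ...

print(MyModel.__init__.__annotations__.keys())        # dict_keys(['n_steps'])
print(inspect.signature(MyModel).parameters.keys())   # 'n_steps', 'n_features', 'lr'
```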
| 2024-06-21T17:19:49 | 0.0 | [] | [] |
|||
ParthJadhav/Tkinter-Designer | ParthJadhav__Tkinter-Designer-108 | 9b3967e8328eaa54a093858ae8a7504c5444f00c | diff --git a/tkdesigner/figma/frame.py b/tkdesigner/figma/frame.py
index c31bd592..43162c4d 100644
--- a/tkdesigner/figma/frame.py
+++ b/tkdesigner/figma/frame.py
@@ -104,8 +104,8 @@ def color(self) -> str:
"""
try:
color = self.node["fills"][0]["color"]
- rgba = [int(color.get(i, 0) * 255) for i in "rgba"]
- return f"#{rgba[0]:0X}{rgba[1]:0X}{rgba[2]:0X}"
+ r, g, b, *_ = [int(color.get(i, 0) * 255) for i in "rgba"]
+ return f"#{r:02X}{g:02X}{b:02X}"
except Exception:
return "#FFFFFF"
diff --git a/tkdesigner/figma/vector_elements.py b/tkdesigner/figma/vector_elements.py
index 086df009..f579a953 100644
--- a/tkdesigner/figma/vector_elements.py
+++ b/tkdesigner/figma/vector_elements.py
@@ -5,12 +5,13 @@ class Vector(Node):
def __init__(self, node):
super().__init__(node)
- def color(self):
+ def color(self) -> str:
+ """Returns HEX form of element RGB color (str)
+ """
try:
color = self.node["fills"][0]["color"]
- rgba = [int(color.get(i, 0) * 255) for i in "rgba"]
- # return tuple(rgba)
- return f"#{rgba[0]:0X}{rgba[1]:0X}{rgba[2]:0X}"
+ r, g, b, *_ = [int(color.get(i, 0) * 255) for i in "rgba"]
+ return f"#{r:02X}{g:02X}{b:02X}"
except Exception:
return "#FFFFFF"
| Invalid Color Name Error
Get a strange error trying to convert this Figma file. Seems to be the Frame background color:
https://www.figma.com/file/hNzdLAhQDcUSxMlq6JtZs3/Untitled?node-id=0%3A1
```
Creating Element { name: Text, type: TEXT }
Creating Element { name: Text, type: TEXT }
Traceback (most recent call last):
File "/Users/neilbaldwin/Documents/Projects/Tkinter-Designer/output/build/gui.py", line 22, in <module>
window.configure(bg = "#C8BB")
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/tkinter/__init__.py", line 1646, in configure
return self._configure('configure', cnf, kw)
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/tkinter/__init__.py", line 1636, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: invalid color name "#C8BB"
```
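For reference, a small standalone demonstration of why the old format spec can yield an invalid Tk color (the channel values below are just one combination that reproduces the reported `#C8BB`):
```python
# ":0X" has no field width, so single-digit channels lose their leading zero;
# ":02X" always emits two hex digits per channel.
r, g, b = 200, 11, 11
print(f"#{r:0X}{g:0X}{b:0X}")     # -> #C8BB   (5 chars, rejected by tkinter)
print(f"#{r:02X}{g:02X}{b:02X}")  # -> #C80B0B (always a valid 7-char color)
```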
| 2021-08-01T02:19:04 | 0.0 | [] | [] |
|||
jazzband/wagtailmenus | jazzband__wagtailmenus-453 | 4e7899050b0d2ec1ae8b38327a162e3058aa1a79 | diff --git a/wagtailmenus/apps.py b/wagtailmenus/apps.py
index 45ae7d85..f60d4198 100644
--- a/wagtailmenus/apps.py
+++ b/wagtailmenus/apps.py
@@ -4,3 +4,4 @@
class WagtailMenusConfig(AppConfig):
name = 'wagtailmenus'
verbose_name = 'WagtailMenus'
+ default_auto_field = "django.db.models.AutoField"
diff --git a/wagtailmenus/templatetags/menu_tags.py b/wagtailmenus/templatetags/menu_tags.py
index 26443ddb..a98073c4 100644
--- a/wagtailmenus/templatetags/menu_tags.py
+++ b/wagtailmenus/templatetags/menu_tags.py
@@ -44,7 +44,7 @@ def main_menu(
@register.simple_tag(takes_context=True)
def flat_menu(
- context, handle, max_levels=None, show_menu_heading=False,
+ context, handle, max_levels=None, show_menu_heading=True,
apply_active_classes=False, allow_repeating_parents=True,
show_multiple_levels=True, template='', sub_menu_template='',
sub_menu_templates=None, fall_back_to_default_site_menus=None,
| No migration file! So makemigrations breaks
Migrations for 'wagtailmenus':
/usr/local/lib/python3.9/site-packages/wagtailmenus/migrations/0024_alter_flatmenu_id_alter_flatmenuitem_id_and_more.py
- Alter field id on flatmenu
- Alter field id on flatmenuitem
- Alter field id on mainmenu
- Alter field id on mainmenuitem
Traceback (most recent call last):
File "/app/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 96, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/makemigrations.py", line 239, in handle
self.write_migration_files(changes)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/makemigrations.py", line 279, in write_migration_files
with open(writer.path, "w", encoding="utf-8") as fh:
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.9/site-packages/wagtailmenus/migrations/0024_alter_flatmenu_id_alter_flatmenuitem_id_and_more.py'
DEFAULT_AUTO_FIELD introduced in Django 3.2 behaviour
I'm in the process of upgrading a project to Wagtail 4 and Django 4.1. This is a project created with Django 3.1 but later upgraded to 3.2 and now 4.1.
I have been using the `DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'` for a few months without problems.
After the upgrade, when I run `makemigrations`, Django detects changes in `wagtailmenus` models that do not have migrations and creates a migration file in the site-packages directory:
```
Migrations for 'wagtailmenus':
/usr/local/lib/python3.10/site-packages/wagtailmenus/migrations/0024_alter_flatmenu_id_alter_flatmenuitem_id_and_more.py
- Alter field id on flatmenu
- Alter field id on flatmenuitem
- Alter field id on mainmenu
- Alter field id on mainmenuitem
```
I'm not sure if this is expected behaviour; I find it kind of strange that my project is creating migration files in third-party package folders.
Maybe wagtailmenus needs to explicitly set the auto field for the models?
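One way to do that, and essentially what the patch above ends up doing, is to pin the auto field on the package's `AppConfig`, so the host project's `DEFAULT_AUTO_FIELD` no longer makes Django want new migrations for wagtailmenus models:
```python
# wagtailmenus/apps.py (as in the patch): keep the historical AutoField
# regardless of what DEFAULT_AUTO_FIELD the host project uses.
from django.apps import AppConfig

class WagtailMenusConfig(AppConfig):
    name = 'wagtailmenus'
    verbose_name = 'WagtailMenus'
    default_auto_field = "django.db.models.AutoField"
```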
| Hi @konstantinoschristomanos , could you please share the Django version you're using?
> Hi @konstantinoschristomanos , could you please share the Django version you're using?
Sure !
Django 4.1.8
Thanks! I'm giving it a look!
Ok, the problem is likely due to having `DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"` in your `settings.py`. This is the new default since Django 3.2.
There are 2 possible fixes (and also a workaround for you @konstantinoschristomanos).
@schlich, as a fellow maintainer, which fix do you prefer? (oﾟvﾟ)ノ
1. To create a migration file that changes the AUTO_FIELD to BigAutoField. This is unlikely to break anything since Django 3.2 is already our minimal Django version
2. To add `default_auto_field = "django.db.models.AutoField"` to the `WagtailMenusConfig`. This makes the db id for menus to be an Int and not a BigInt - which I also think is fine (it's unlikely that any menu database with more than 2,147,483,647 entries haha!)
Tell me which you prefer and I create the PR.
---
@konstantinoschristomanos , in the meanwhile, I see that you are trying to install wagtailmenus in your global python. Since your global python is installed globally and your user doesn't have permission to alter it, your makemigrations process is failing.
To bypass this, you could either install a local version of python or use a virtual environment in a directory your user has 'write' permission. That way you can create the missing migration files yourself.
>
Thank you @MrCordeiro !
I am using Docker for development, so it's the container's global Python!
Yes, sure! I can't imagine a website with more entities than that, and since an easy `default_auto_field = "django.db.models.AutoField"` will do the work, we can go with solution 2.
Thanks again for the help, I will wait for the pr !
| 2023-05-08T13:24:22 | 0.0 | [] | [] |
||
InfrastructureAsCode-ch/nornir_rich | InfrastructureAsCode-ch__nornir_rich-6 | 54782b0cae57430b30e387c7e52afc33fdd1c778 | diff --git a/nornir_rich/progress_bar.py b/nornir_rich/progress_bar.py
index ee61d66..8b9d410 100644
--- a/nornir_rich/progress_bar.py
+++ b/nornir_rich/progress_bar.py
@@ -45,22 +45,21 @@ def __init__(
self.overall_progress_table = Table.grid()
self.overall_progress_table.add_row(self.progress_total)
self.progress_table = Table.grid()
- self.progress_table.add_row(
- Panel.fit(
- self.overall_progress_table,
- title="Overall Progress",
- border_style="green",
- padding=(0, 2),
- )
+
+ self.overall_panel = Panel.fit(
+ self.overall_progress_table,
+ title="Overall Progress",
+ border_style="green",
+ padding=(0, 2),
)
- self.progress_table.add_row(
- Panel.fit(
- self.progress_status,
- title="Task Exit Status",
- border_style="yellow",
- padding=(0, 2),
- )
+ self.exit_status_panel = Panel.fit(
+ self.progress_status,
+ border_style="yellow",
+ padding=(0, 2),
)
+
+ self.progress_table.add_row(self.overall_panel)
+ self.progress_table.add_row(self.exit_status_panel)
self.progress_live = Live(self.progress_table, refresh_per_second=10)
def task_started(self, task: Task) -> None:
@@ -74,6 +73,8 @@ def task_started(self, task: Task) -> None:
self.concurrent_count = workers
self.total_hosts = hosts
+ self.exit_status_panel.title = task.name
+
self.successful = self.progress_status.add_task(
"[green]Successful:", total=self.total_hosts
)
| Task Description on Progress Bar
**Feature Request**
I'd like to see the title: Overall Progress be substituted with the current task. This would help me see which task is being worked on.
Thank you for raising this feature request. To be sure I understand correctly: in this example, you want to see the title as "random_sleep"?
```python
nr_with_processors = nr.with_processors([RichProgressBar()])
result = nr_with_processors.run(task=random_sleep)
```
@ubaumann Yes, that's right! I think passing our own description for the task being run would be effective. That way we can freely re-use a task in different contexts with the progress bars.
If we move some of the init tasks to start we could read task.name and use that value. This way it can be set simply by normal usage of nornir.run(name="some task name", task=sometask) | 2022-09-28T22:34:49 | 0.0 | [] | [] |
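A rough usage sketch of that idea (assuming a standard Nornir inventory; `config.yaml` and the task body are placeholders): with the change above, the exit-status panel picks up whatever `name=` is passed to `run()`.
```python
import time
from nornir import InitNornir
from nornir_rich.progress_bar import RichProgressBar

def random_sleep(task):
    time.sleep(1)

nr = InitNornir(config_file="config.yaml")
nr_rich = nr.with_processors([RichProgressBar()])
# The panel title now reads "random_sleep" instead of a fixed string.
nr_rich.run(name="random_sleep", task=random_sleep)
```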
||
obsidian-html/obsidian-html | obsidian-html__obsidian-html-52 | 4b529add8bafb371a684d31ccde26b9cc492874f | diff --git a/obsidianhtml/__init__.py b/obsidianhtml/__init__.py
index f11d5305..8dfbf3e0 100644
--- a/obsidianhtml/__init__.py
+++ b/obsidianhtml/__init__.py
@@ -233,7 +233,7 @@ def ConvertMarkdownPageToHtmlPage(page_path_str, pb, backlinkNode=None):
# This shows the "Show Graph" button, and adds the js code to handle showing the graph
if config['toggles']['features']['graph']['enabled']:
graph_template = OpenIncludedFile('graph_template.html')
- graph_template = graph_template.replace('{id}', str(uuid.uuid4()).replace('-',''))\
+ graph_template = graph_template.replace('{id}', simpleHash(html_body))\
.replace('{pinnedNode}', node['id'])\
.replace('{html_url_prefix}', config['html_url_prefix'])\
.replace('{graph_coalesce_force}', config['toggles']['features']['graph']['coalesce_force'])
@@ -329,6 +329,12 @@ def recurseTagList(tagtree, tagpath, pb, level):
# Return link of this page, to be used by caller for building its page
return rel_dst_path_as_posix
+def simpleHash(text:str):
+ hash=0
+ for ch in text:
+ hash = ( hash*281 ^ ord(ch)*997) & 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
+ return str(hash)
+
def main():
# Show help text
| when storing the resultant html in git, too many changes as a result of the uuid.uuid4() replacement in graph_template
should use a hash that doesn't change every single time the html is generated
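The point of the fix is determinism: the id is derived from the page body itself, so regenerating an unchanged page produces an unchanged id. A quick check using the helper added in the patch:
```python
def simpleHash(text: str):
    hash = 0
    for ch in text:
        hash = (hash * 281 ^ ord(ch) * 997) & 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    return str(hash)

html_body = "<p>same page content</p>"
# Same input -> same id on every build, so the generated html stops churning in git.
assert simpleHash(html_body) == simpleHash(html_body)
```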
| 2022-02-07T17:39:24 | 0.0 | [] | [] |
|||
lm-sys/FastChat | lm-sys__FastChat-3356 | 25062a1f317564057fd786b710d83ae66997a397 | diff --git a/docker/Dockerfile b/docker/Dockerfile
index 159d4abd0..4fc41918b 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -4,4 +4,4 @@ RUN apt-get update -y && apt-get install -y python3.9 python3.9-distutils curl
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.9 get-pip.py
RUN pip3 install fschat
-RUN pip3 install fschat[model_worker,webui] pydantic==1.10.13
\ No newline at end of file
+RUN pip3 install fschat[model_worker,webui]
\ No newline at end of file
diff --git a/fastchat/serve/openai_api_server.py b/fastchat/serve/openai_api_server.py
index e18225fc8..a6ffee96b 100644
--- a/fastchat/serve/openai_api_server.py
+++ b/fastchat/serve/openai_api_server.py
@@ -22,10 +22,7 @@
from fastapi.security.http import HTTPAuthorizationCredentials, HTTPBearer
import httpx
-try:
- from pydantic.v1 import BaseSettings
-except ImportError:
- from pydantic import BaseSettings
+from pydantic_settings import BaseSettings
import shortuuid
import tiktoken
import uvicorn
@@ -133,7 +130,7 @@ async def check_api_key(
def create_error_response(code: int, message: str) -> JSONResponse:
return JSONResponse(
- ErrorResponse(message=message, code=code).dict(), status_code=400
+ ErrorResponse(message=message, code=code).model_dump(), status_code=400
)
@@ -479,8 +476,8 @@ async def create_chat_completion(request: ChatCompletionRequest):
)
)
if "usage" in content:
- task_usage = UsageInfo.parse_obj(content["usage"])
- for usage_key, usage_value in task_usage.dict().items():
+ task_usage = UsageInfo.model_validate(content["usage"])
+ for usage_key, usage_value in task_usage.model_dump().items():
setattr(usage, usage_key, getattr(usage, usage_key) + usage_value)
return ChatCompletionResponse(model=request.model, choices=choices, usage=usage)
@@ -505,7 +502,7 @@ async def chat_completion_stream_generator(
chunk = ChatCompletionStreamResponse(
id=id, choices=[choice_data], model=model_name
)
- yield f"data: {chunk.json(exclude_unset=True, ensure_ascii=False)}\n\n"
+ yield f"data: {chunk.model_dump_json(exclude_unset=True)}\n\n"
previous_text = ""
async for content in generate_completion_stream(gen_params, worker_addr):
@@ -535,10 +532,10 @@ async def chat_completion_stream_generator(
if content.get("finish_reason", None) is not None:
finish_stream_events.append(chunk)
continue
- yield f"data: {chunk.json(exclude_unset=True, ensure_ascii=False)}\n\n"
+ yield f"data: {chunk.model_dump_json(exclude_unset=True)}\n\n"
# There is not "content" field in the last delta message, so exclude_none to exclude field "content".
for finish_chunk in finish_stream_events:
- yield f"data: {finish_chunk.json(exclude_none=True, ensure_ascii=False)}\n\n"
+ yield f"data: {finish_chunk.model_dump_json(exclude_none=True)}\n\n"
yield "data: [DONE]\n\n"
@@ -612,12 +609,12 @@ async def create_completion(request: CompletionRequest):
finish_reason=content.get("finish_reason", "stop"),
)
)
- task_usage = UsageInfo.parse_obj(content["usage"])
- for usage_key, usage_value in task_usage.dict().items():
+ task_usage = UsageInfo.model_validate(content["usage"])
+ for usage_key, usage_value in task_usage.model_dump().items():
setattr(usage, usage_key, getattr(usage, usage_key) + usage_value)
return CompletionResponse(
- model=request.model, choices=choices, usage=UsageInfo.parse_obj(usage)
+ model=request.model, choices=choices, usage=UsageInfo.model_validate(usage)
)
@@ -673,10 +670,10 @@ async def generate_completion_stream_generator(
if content.get("finish_reason", None) is not None:
finish_stream_events.append(chunk)
continue
- yield f"data: {chunk.json(exclude_unset=True, ensure_ascii=False)}\n\n"
+ yield f"data: {chunk.model_dump_json(exclude_unset=True)}\n\n"
# There is not "content" field in the last delta message, so exclude_none to exclude field "content".
for finish_chunk in finish_stream_events:
- yield f"data: {finish_chunk.json(exclude_unset=True, ensure_ascii=False)}\n\n"
+ yield f"data: {finish_chunk.model_dump_json(exclude_unset=True)}\n\n"
yield "data: [DONE]\n\n"
@@ -751,7 +748,7 @@ async def create_embeddings(request: EmbeddingsRequest, model_name: str = None):
total_tokens=token_num,
completion_tokens=None,
),
- ).dict(exclude_none=True)
+ ).model_dump(exclude_none=True)
async def get_embedding(payload: Dict[str, Any]):
@@ -868,8 +865,8 @@ async def create_chat_completion(request: APIChatCompletionRequest):
finish_reason=content.get("finish_reason", "stop"),
)
)
- task_usage = UsageInfo.parse_obj(content["usage"])
- for usage_key, usage_value in task_usage.dict().items():
+ task_usage = UsageInfo.model_validate(content["usage"])
+ for usage_key, usage_value in task_usage.model_dump().items():
setattr(usage, usage_key, getattr(usage, usage_key) + usage_value)
return ChatCompletionResponse(model=request.model, choices=choices, usage=usage)
diff --git a/pyproject.toml b/pyproject.toml
index dd0b80a6b..fedd9e2dc 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -14,7 +14,7 @@ classifiers = [
]
dependencies = [
"aiohttp", "fastapi", "httpx", "markdown2[all]", "nh3", "numpy",
- "prompt_toolkit>=3.0.0", "pydantic", "psutil", "requests", "rich>=10.0.0",
+ "prompt_toolkit>=3.0.0", "pydantic<3,>=2.0.0", "pydantic-settings", "psutil", "requests", "rich>=10.0.0",
"shortuuid", "tiktoken", "uvicorn",
]
| TypeError: `dumps_kwargs` keyword arguments are no longer supported.
Starting from pydantic version 1.8.0, `dumps_kwargs` keyword arguments are no longer supported, but the `chunk.json()` method is called with the two keyword arguments `exclude_unset=True` and `ensure_ascii=False`.
`yield "{}".format(chunk.json(exclude_unset=True, ensure_ascii=False))`
should be changed to
yield f"data: {json.dumps(chunk.model_dump(exclude_unset=True), ensure_ascii=False)}\n\n"
Problem code:
openai_api_server.py, lines 506, 536, 539
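A minimal sketch of the v1-vs-v2 serialization difference behind the error (the model here is illustrative; the real response models live in FastChat's protocol module). The merged patch uses `model_dump_json`, which emits UTF-8 JSON without needing `ensure_ascii=False`:
```python
from typing import Optional
from pydantic import BaseModel

class Chunk(BaseModel):
    id: str
    content: Optional[str] = None

chunk = Chunk(id="chatcmpl-123", content="你好")

# pydantic v1 style -- on pydantic v2 this raises
# "TypeError: `dumps_kwargs` keyword arguments are no longer supported.":
#   chunk.json(exclude_unset=True, ensure_ascii=False)

# pydantic v2 style, as used in the patch:
print(chunk.model_dump_json(exclude_unset=True))
```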
openai_api_server can't stream answers when using vllm
2024-02-13 20:19:44 | ERROR | stderr | | await func()
2024-02-13 20:19:44 | ERROR | stderr | | File "/home/ubuntu/miniconda3/envs/xinfer/lib/python3.11/site-packages/starlette/responses.py", line 249, in stream_response
2024-02-13 20:19:44 | ERROR | stderr | | async for chunk in self.body_iterator:
2024-02-13 20:19:44 | ERROR | stderr | | File "/model/code/FastChat/fastchat/serve/openai_api_server.py", line 506, in chat_completion_stream_generator
2024-02-13 20:19:44 | ERROR | stderr | | yield f"data: {chunk.json(exclude_unset=True, ensure_ascii=False)}\n\n"
2024-02-13 20:19:44 | ERROR | stderr | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-13 20:19:44 | ERROR | stderr | | File "/home/ubuntu/miniconda3/envs/xinfer/lib/python3.11/site-packages/pydantic/main.py", line 1056, in json
2024-02-13 20:19:44 | ERROR | stderr | | raise TypeError('`dumps_kwargs` keyword arguments are no longer supported.')
2024-02-13 20:19:44 | ERROR | stderr | | TypeError: `dumps_kwargs` keyword arguments are no longer supported.
2024-02-13 20:19:44 | ERROR | stderr | +------------------------------------
fastchat 0.2.36 pytorch 2.1.2 pydantic 2.6.1 vllm 0.3.0
When `pip install pydantic==1.10.14` is used instead, vllm streaming is OK.
| This is likely caused by Pydantic v2. I fixed mine by following the workaround described here: https://github.com/vllm-project/vllm/issues/423
Check your version of pydantic using
pip show pydantic
if version >2.0
pip uninstall pydantic
pip install pydantic==1.10.11
Please close the issue if the fix works for you.
@rochak-chadha
Updating pydantic to v1.10.11 fixes the problem, but it will break the gradio app.
Thank you for your advice. I encountered the same issue when using model_worker. It was resolved by using pip install pydantic==1.10.14. | 2024-05-22T07:30:33 | 0.0 | [] | [] |
||
proplot-dev/proplot | proplot-dev__proplot-317 | d0bc9c0857d9295b380b8613ef9aba81d50a067c | diff --git a/WHATSNEW.rst b/WHATSNEW.rst
index acc5d1542..2490954c0 100644
--- a/WHATSNEW.rst
+++ b/WHATSNEW.rst
@@ -63,7 +63,7 @@ Style changes
Features
--------
-
+* Adding custom abc and titles (:pr:`294`) by `Pratiman Patel`.
* Significantly improve "tight layout" performance in geographic plots by skipping
artists with clipping paths/boxes set to the subplot bounds (:commit:`f891e4f0`).
* Add modifiable `proplot.figure.Figure.tight` property to retrieve/change the
diff --git a/proplot/axes/base.py b/proplot/axes/base.py
index 9283206f3..69e04327d 100644
--- a/proplot/axes/base.py
+++ b/proplot/axes/base.py
@@ -298,7 +298,7 @@
_axes_format_docstring = """
title : str, optional
The axes title.
-abc : bool or str, default: :rc:`abc`
+abc : bool or str or tuple, default: :rc:`abc`
The "a-b-c" subplot label style. Must contain the character ``a`` or ``A``,
for example ``'a.'``, or ``'A'``. If ``True`` then the default style of
``'a'`` is used. The ``a`` or ``A`` is replaced with the alphabetic character
@@ -345,10 +345,10 @@
The horizontal padding between a-b-c labels and titles in the same location.
%(units.pt)s
ltitle, ctitle, rtitle, ultitle, uctitle, urtitle, lltitle, lctitle, lrtitle \
-: str, optional
+: str, tuple, optional
Shorthands for the below keywords.
lefttitle, centertitle, righttitle, upperlefttitle, uppercentertitle, upperrighttitle, \
-lowerlefttitle, lowercentertitle, lowerrighttitle : str, optional
+lowerlefttitle, lowercentertitle, lowerrighttitle : str, tuple, optional
Additional titles in specific positions. This works as an alternative
to the ``ax.format(title='Title', titleloc=loc)`` workflow and permits
adding more than one title-like label for a single axes.
@@ -2450,9 +2450,11 @@ def _update_abc(self, **kwargs):
abc = rc.find('abc', context=True) # 1st run, or changed
if abc is True:
abc = 'a'
- if abc and (not isinstance(abc, str) or 'a' not in abc and 'A' not in abc):
- raise ValueError(f'Invalid style {abc!r}. Must include letter "a" or "A".')
- if abc and self.number is not None:
+ if isinstance(abc, tuple):
+ kw['text'] = abc[self.number - 1]
+ elif abc and (not isinstance(abc, str) or 'a' not in abc and 'A' not in abc):
+ raise ValueError(f'Invalid style {abc!r}. Must include letter "a" or "A"')
+ if isinstance(abc, str) and self.number is not None:
nabc, iabc = divmod(self.number - 1, 26)
old = re.search('[aA]', abc).group() # return the *first* 'a' or 'A'
new = (nabc + 1) * ABC_STRING[iabc]
@@ -2531,7 +2533,9 @@ def _update_title(self, loc, title=None, **kwargs):
# necesssary. For inner panels, use the border and bbox settings.
if loc not in ('left', 'right', 'center'):
kw.update(self._title_border_kwargs)
- if title is not None:
+ if isinstance(title, tuple):
+ kw['text'] = title[self.number - 1]
+ elif title is not None:
kw['text'] = title
kw.update(kwargs)
self._title_dict[loc].update(kw)
diff --git a/proplot/internals/rcsetup.py b/proplot/internals/rcsetup.py
index 7275a5c2e..eba7abedd 100644
--- a/proplot/internals/rcsetup.py
+++ b/proplot/internals/rcsetup.py
@@ -181,12 +181,15 @@ def _validate_abc(value):
Validate a-b-c setting.
"""
try:
- return _validate_bool(value)
+ if type(value) is not tuple:
+ return _validate_bool(value)
except ValueError:
pass
if isinstance(value, str) and 'a' in value.lower():
return value
- raise ValueError("A-b-c specification must be string containing 'a' or 'A'.")
+ if isinstance(value, tuple):
+ return value
+ raise ValueError("A-b-c specification must be string containing 'a' or 'A'")
def _validate_belongs(*options):
| Format function to consider custom abc and multiple titles
### Description
Currently, `abc` does not support custom text; a provision can be made to incorporate that. Multiple titles can also be incorporated in `format`. I know there are ways in which these functions may not even be needed; however, this is for the sake of consistency and for keeping the user-end code simple.
### Steps to reproduce `abc`
**Current Code**:
```python
import proplot as pplt
fig = pplt.figure(space=0, refwidth='10em')
axs = fig.subplots(nrows=3, ncols=2)
axs.format(
abc='A.', abcloc='ul',
xticks='null', yticks='null', facecolor='gray5',
xlabel='x axis', ylabel='y axis',
)
```
**Current behavior**:

**Modified Code**:
```python
import proplot as pplt
fig = pplt.figure(space=0, refwidth='10em')
axs = fig.subplots(nrows=3, ncols=2)
axs.format(
abc=('A','B','1','2','a','b'), abcloc='ul',
xticks='null', yticks='null', facecolor='gray5',
xlabel='x axis', ylabel='y axis',
)
```
It is expected to use the custom `abc`.
### Steps to reproduce `title`
**Current Code**:
```python
import proplot as pplt
fig = pplt.figure(space=0, refwidth='10em')
axs = fig.subplots(nrows=3, ncols=2)
axs.format(
ultitle=('Title 1'),
xticks='null', yticks='null', facecolor='gray5',
xlabel='x axis', ylabel='y axis',
)
```
**Current behavior**:

**Modified Code**:
```python
import proplot as pplt
fig = pplt.figure(space=0, refwidth='10em')
axs = fig.subplots(nrows=3, ncols=2)
axs.format(
ultitle=('Title 1', 'Title 2', 'Test 1', 'Test2', 'Exp1', 'Exp2'),
xticks='null', yticks='null', facecolor='gray5',
xlabel='x axis', ylabel='y axis',
)
```
It is expected to use the `ultitle` for different subplots. Similarly with other titles.
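A tiny sketch of the dispatch the patch implements for both `abc` and the title keywords: each axes simply picks its own entry from the tuple via its 1-based subplot number.
```python
# Mirrors `kw['text'] = abc[self.number - 1]` from the patch.
labels = ('Title 1', 'Title 2', 'Test 1', 'Test2', 'Exp1', 'Exp2')
for number in range(1, 7):   # subplot numbers in a 3x2 grid
    print(number, labels[number - 1])
```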

| 2022-01-02T10:11:38 | 0.0 | [] | [] |
|||
Asthestarsfalll/ExCore | Asthestarsfalll__ExCore-9 | 24901c2fb70f43318983fd3ed78b4074209544ce | diff --git a/example/prepare.py b/example/prepare.py
deleted file mode 100644
index 42d3524..0000000
--- a/example/prepare.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from excore import Registry
-from excore.logger import logger
-
-MODELS = Registry("Model", extra_field=["is_pretrained", "is_backbone"])
-DATAS = Registry("Data")
-TRANSFORMS = Registry("Transform")
-OPTIM = Registry("Optimizer")
-LOSSES = Registry("Loss")
-HOOKS = Registry("Hook")
-LRCHE = Registry("LRSche")
-
-
[email protected]()
-class AddModelParams:
- __HookType__ = "every_build"
- __LifeSpan__ = 1
- __CallInter__ = 1
-
- def __call__(self, cfg, module_dict, isolate_dict):
- if "Model" in module_dict:
- assert 'Optimizer' not in module_dict
- model = module_dict["Model"]
- cfg.Optimizer.add_params(params=model[0].parameters())
- logger.info("AddModelParams: add model params to optimizer")
- logger.info(cfg.Optimizer.get('params'))
- return True
- return False
-
-
[email protected](is_pretrained=True, is_backbone=True)
-class ResNet:
- def __init__(self, **kwargs):
- pass
-
-
[email protected](name="SimpleHead", is_pretrained=False, is_backbone=False)
-class Head:
- def __init__(self, **kwargs):
- pass
-
-
[email protected](is_pretrained=True)
-class FCN:
- def __init__(self, **kwargs):
- pass
-
- def parameters(self):
- return "parameters of FCN"
-
-
[email protected]()
-class AdamW:
- def __init__(self, **kwargs):
- logger.debug("AdamW kwargs: {}", kwargs)
-
-
[email protected]()
-class CityScapes:
- def __init__(self, **kwargs):
- pass
-
-
[email protected]()
-class RandomResize:
- def __init__(self, **kwargs):
- pass
-
-
[email protected]()
-class Normalize:
- def __init__(self, **kwargs):
- pass
-
-
[email protected]()
-class RandomFlip:
- def __init__(self, **kwargs):
- pass
-
[email protected]()
-class Pad:
- def __init__(self, **kwargs):
- pass
-
-
[email protected]()
-class OHEM:
- def __init__(self, **kwargs):
- pass
-
-
[email protected]()
-class CrossEntropyLoss:
- def __init__(self, **kwargs):
- pass
-
-
[email protected]()
-class BoundaryLoss:
- def __init__(self, **kwargs):
- pass
-
-
[email protected]()
-class CosDecay:
- def __init__(self, **kwargs):
- pass
diff --git a/example/register_all.py b/example/register_all.py
new file mode 100644
index 0000000..e95a751
--- /dev/null
+++ b/example/register_all.py
@@ -0,0 +1,5 @@
+from excore.registry import auto_register
+
+from src.hook.hooks import
+
+auto_register('./src')
diff --git a/example/run.py b/example/run.py
index 12c0810..c788175 100644
--- a/example/run.py
+++ b/example/run.py
@@ -1,7 +1,8 @@
-from prepare import MODELS
-
from excore import Registry, config
-from excore.logger import logger, add_logger
+from excore.logger import add_logger, logger
+
+Registry.load()
+MODELS = Registry.get_registry("Model")
def _check_func(values):
@@ -13,9 +14,9 @@ def _check_func(values):
add_logger("file_log", "./run.log")
-# Print info for all registered modules and is_pretrained
+# Print info for dynamically imported modules and is_pretrained
logger.info(MODELS.module_table(select_info=["is_pretrained"]))
-# Print info for all registered modules and is_backbone
+# Print info for dynamically imported modules and is_backbone
logger.info(MODELS.module_table(select_info=["is_backbone"]))
# Print is_backbone info for modules whose is_pretrained is true
filtered_module_name = MODELS.filter("is_pretrained")
@@ -29,13 +30,21 @@ def _check_func(values):
)
logger.info(Registry.registry_table())
-target_module = ["Model", "Backbone", "Optimizer", "Loss", "TrainData", "LRSche", "TestData"]
+target_module = [
+ "Model",
+ "Backbone",
+ "Optimizer",
+ "Loss",
+ "TrainData",
+ "LRSche",
+ "TestData",
+]
config.set_target_modules(target_module)
# config.silent()
cfg = config.load("./configs/run.toml", target_module)
# Check whether these are the same instance
-assert cfg.Optimizer == cfg.LRSche.CosDecay['optimizer']
-assert cfg.Model.FCN['backbone'] == cfg.Backbone
+assert cfg.Optimizer == cfg.LRSche.CosDecay["optimizer"]
+assert cfg.Model.FCN["backbone"] == cfg.Backbone
modules_dict, cfg_dict = config.build_all(cfg)
logger.debug(modules_dict)
logger.debug(cfg_dict)
diff --git a/example/src/__init__.py b/example/src/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/example/src/dataset/__init__.py b/example/src/dataset/__init__.py
new file mode 100644
index 0000000..091b4e8
--- /dev/null
+++ b/example/src/dataset/__init__.py
@@ -0,0 +1,3 @@
+from excore import Registry
+
+DATA = Registry("Data")
diff --git a/example/src/dataset/dataset.py b/example/src/dataset/dataset.py
new file mode 100644
index 0000000..b77a1b8
--- /dev/null
+++ b/example/src/dataset/dataset.py
@@ -0,0 +1,7 @@
+from . import DATA
+
+
[email protected]()
+class CityScapes:
+ def __init__(self, **kwargs):
+ pass
diff --git a/example/src/hook/__init__.py b/example/src/hook/__init__.py
new file mode 100644
index 0000000..708be38
--- /dev/null
+++ b/example/src/hook/__init__.py
@@ -0,0 +1,3 @@
+from excore.registry import Registry
+
+HOOKS = Registry("Hook")
diff --git a/example/src/hook/hooks.py b/example/src/hook/hooks.py
new file mode 100644
index 0000000..f19e660
--- /dev/null
+++ b/example/src/hook/hooks.py
@@ -0,0 +1,19 @@
+from excore.logger import logger
+from . import HOOKS
+
+
[email protected]()
+class AddModelParams:
+ __HookType__ = "every_build"
+ __LifeSpan__ = 1
+ __CallInter__ = 1
+
+ def __call__(self, cfg, module_dict, isolate_dict):
+ if "Model" in module_dict:
+ assert "Optimizer" not in module_dict
+ model = module_dict["Model"]
+ cfg.Optimizer.add_params(params=model[0].parameters())
+ logger.info("AddModelParams: add model params to optimizer")
+ logger.info(cfg.Optimizer.get("params"))
+ return True
+ return False
diff --git a/example/src/loss/__init__.py b/example/src/loss/__init__.py
new file mode 100644
index 0000000..de099ae
--- /dev/null
+++ b/example/src/loss/__init__.py
@@ -0,0 +1,3 @@
+from excore import Registry
+
+LOSSES = Registry("Loss")
diff --git a/example/src/loss/losses.py b/example/src/loss/losses.py
new file mode 100644
index 0000000..86f718c
--- /dev/null
+++ b/example/src/loss/losses.py
@@ -0,0 +1,19 @@
+from . import LOSSES
+
+
[email protected]()
+class OHEM:
+ def __init__(self, **kwargs):
+ pass
+
+
[email protected]()
+class CrossEntropyLoss:
+ def __init__(self, **kwargs):
+ pass
+
+
[email protected]()
+class BoundaryLoss:
+ def __init__(self, **kwargs):
+ pass
diff --git a/example/src/lrche/__init__.py b/example/src/lrche/__init__.py
new file mode 100644
index 0000000..2951dd8
--- /dev/null
+++ b/example/src/lrche/__init__.py
@@ -0,0 +1,3 @@
+from excore import Registry
+
+LRCHE = Registry("LRSche")
diff --git a/example/src/lrche/lrcheduler.py b/example/src/lrche/lrcheduler.py
new file mode 100644
index 0000000..dbfc163
--- /dev/null
+++ b/example/src/lrche/lrcheduler.py
@@ -0,0 +1,7 @@
+from . import LRCHE
+
+
[email protected]()
+class CosDecay:
+ def __init__(self, **kwargs):
+ pass
diff --git a/example/src/model/__init__.py b/example/src/model/__init__.py
new file mode 100644
index 0000000..bdd77b0
--- /dev/null
+++ b/example/src/model/__init__.py
@@ -0,0 +1,3 @@
+from excore import Registry
+
+MODELS = Registry("Model", extra_field=["is_pretrained", "is_backbone"])
diff --git a/example/src/model/backbone/__init__.py b/example/src/model/backbone/__init__.py
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/example/src/model/backbone/__init__.py
@@ -0,0 +1,1 @@
+
diff --git a/example/src/model/backbone/resnet.py b/example/src/model/backbone/resnet.py
new file mode 100644
index 0000000..29079d3
--- /dev/null
+++ b/example/src/model/backbone/resnet.py
@@ -0,0 +1,7 @@
+from .. import MODELS
+
+
[email protected](is_pretrained=True, is_backbone=True)
+class ResNet:
+ def __init__(self, **kwargs):
+ pass
diff --git a/example/src/model/fcn.py b/example/src/model/fcn.py
new file mode 100644
index 0000000..5d98e3e
--- /dev/null
+++ b/example/src/model/fcn.py
@@ -0,0 +1,10 @@
+from . import MODELS
+
+
[email protected](is_pretrained=True)
+class FCN:
+ def __init__(self, **kwargs):
+ pass
+
+ def parameters(self):
+ return "parameters of FCN"
diff --git a/example/src/model/head/__init__.py b/example/src/model/head/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/example/src/model/head/simple_head.py b/example/src/model/head/simple_head.py
new file mode 100644
index 0000000..55d5387
--- /dev/null
+++ b/example/src/model/head/simple_head.py
@@ -0,0 +1,7 @@
+from .. import MODELS
+
+
[email protected](name="SimpleHead", is_pretrained=False, is_backbone=False)
+class Head:
+ def __init__(self, **kwargs):
+ pass
diff --git a/example/src/optim/__init__.py b/example/src/optim/__init__.py
new file mode 100644
index 0000000..1cf9c79
--- /dev/null
+++ b/example/src/optim/__init__.py
@@ -0,0 +1,3 @@
+from excore import Registry
+
+OPTIM = Registry("Optimizer")
diff --git a/example/src/optim/op.py b/example/src/optim/op.py
new file mode 100644
index 0000000..67710c4
--- /dev/null
+++ b/example/src/optim/op.py
@@ -0,0 +1,8 @@
+from excore import logger
+from . import OPTIM
+
+
[email protected]()
+class AdamW:
+ def __init__(self, **kwargs):
+ logger.debug("AdamW kwargs: {}", kwargs)
diff --git a/example/src/transform/__init__.py b/example/src/transform/__init__.py
new file mode 100644
index 0000000..2e558f3
--- /dev/null
+++ b/example/src/transform/__init__.py
@@ -0,0 +1,3 @@
+from excore import Registry
+
+TRANSFORMS = Registry("Transform")
diff --git a/example/src/transform/transforms.py b/example/src/transform/transforms.py
new file mode 100644
index 0000000..4b8b1c4
--- /dev/null
+++ b/example/src/transform/transforms.py
@@ -0,0 +1,25 @@
+from . import TRANSFORMS
+
+
[email protected]()
+class RandomResize:
+ def __init__(self, **kwargs):
+ pass
+
+
[email protected]()
+class Normalize:
+ def __init__(self, **kwargs):
+ pass
+
+
[email protected]()
+class RandomFlip:
+ def __init__(self, **kwargs):
+ pass
+
+
[email protected]()
+class Pad:
+ def __init__(self, **kwargs):
+ pass
diff --git a/excore/__init__.py b/excore/__init__.py
index 1a8d07f..9a63f35 100644
--- a/excore/__init__.py
+++ b/excore/__init__.py
@@ -2,7 +2,7 @@
from ._constants import __author__, __version__, _cache_dir
from .config import build_all, load_config
from .logger import add_logger, logger, remove_logger
-from .registry import Registry
+from .registry import Registry, auto_register
@logger.catch
diff --git a/excore/_constants.py b/excore/_constants.py
index 1171b09..125ee05 100644
--- a/excore/_constants.py
+++ b/excore/_constants.py
@@ -3,8 +3,12 @@
__author__ = "Asthestarsfalll"
__version__ = "0.1.3"
-_cache_dir = os.path.expanduser("~/.cache/core/")
+_cache_dir = os.path.expanduser("~/.cache/excore/")
+# TODO(Asthestarsfalll): add worskspace
_workspace_cache_dir = None
+_registry_cache_file = "registry_cache.pkl"
+
+os.makedirs(_cache_dir, exist_ok=True)
LOGOS = [
r"""
diff --git a/excore/config.py b/excore/config.py
index 33b3c35..1844e5e 100644
--- a/excore/config.py
+++ b/excore/config.py
@@ -1,7 +1,8 @@
+import importlib
import os
import time
from dataclasses import dataclass
-from sys import exc_info
+from sys import exc_info, exit
from typing import Any, Dict, List, Tuple, Union
import toml
@@ -28,8 +29,8 @@
def _is_special(k: str) -> Tuple[str, str]:
"""
Determine if the given string begin with target special character.
- `!` denotes reuse module, which will only be built once and cache out.
- `@` denotes intermediate module, which will be built from scratch if need.
+ `@` denotes reuse module, which will only be built once and cache out.
+ `!` denotes intermediate module, which will be built from scratch if need.
Args:
k (str): The input string to check.
@@ -91,6 +92,19 @@ def _dict2list(v, return_list=False):
return v
+# FIXME(Asthestarsfalll): need to handle more situations.
+def _str_to_target(module_name):
+ module_name = module_name.split(".")
+ target_name = module_name.pop(-1)
+ try:
+ module = importlib.import_module(".".join(module_name))
+ module = getattr(module, target_name)
+ except ModuleNotFoundError:
+ logger.critical(f"Can not import such module: {module_name}")
+ exit(0)
+ return module
+
+
@dataclass
class ModuleNode(dict):
name: str
@@ -133,12 +147,13 @@ def _get_params(self):
def __call__(self):
try:
- cls = Registry.get_registry(self.base)[self.name]
+ cls_name = Registry.get_registry(self.base)[self.name]
except Exception as exc:
logger.critical(exc_info())
raise ModuleBuildError(
f"Failed to find the registered module {self.name} with base registry {self.base}"
) from exc
+ cls = _str_to_target(cls_name)
params = self._get_params()
try:
module = cls(**params)
@@ -280,7 +295,7 @@ def _parse_target_modules(self):
v = _dict2list(v)
self[name] = ModuleWrapper(v)
- def _parse_isolated_registerd_module(self, name):
+ def _parse_isolated_registered_module(self, name):
v = _attr2module(self.pop(name), name)
v = _dict2list(v, True)
for i in v:
@@ -305,7 +320,7 @@ def _parse_isolated_obj(self):
modules = self[name]
if isinstance(modules, dict):
if name in self.registered_modules:
- self._parse_isolated_registerd_module(name)
+ self._parse_isolated_registered_module(name)
else:
self._parse_isolated_module(name)
@@ -320,7 +335,7 @@ def _contain_module(self, name):
return True
return False
- # FIXME(Asthestarsfalll): ReuseNode should firstly search in hidden modules
+ # FIXME(Asthestarsfalll): Maybe reuseNode should firstly search in hidden modules?
def _parse_single_param(self, name, params):
ModuleType = _dispatch_module_node[params.base]
if name in self:
@@ -350,7 +365,8 @@ def _parse_single_param(self, name, params):
# FIXME(Asthestarsfalll): fix this
if not isinstance(converted, ModuleType):
raise CoreConfigParseError(
- f"Error when parse params {params.name}, target_type is {ModuleType}, but got {type(converted)}"
+ f"Error when parse params {params.name}, \
+ target_type is {ModuleType}, but got {type(converted)}"
)
return converted
@@ -402,6 +418,7 @@ def parse(self):
# TODO(Asthestarsfalll): automatically generate pyi file
# according to config files for type hinting. high priority.
+# TODO(Asthestarsfalll): Add dump method to generate toml config files.
class LazyConfig:
globals: Registry
hook_key = "ConfigHook"
@@ -481,6 +498,15 @@ def load_config(filename: str, base_key: str = "__base__") -> AttrNode:
def load(
filename: str, target_modules: List[str], base_key: str = BASE_CONFIG_KEY
) -> LazyConfig:
+ Registry.load()
+ # We'd better to lock register to prevent
+ # the inconsistency between the twice registration.
+ Registry.lock_register()
+ if not Registry._registry_pool:
+ raise RuntimeError(
+ "No module has been registered, \
+ you may need to call `excore.registry.auto_registry` first"
+ )
st = time.time()
AttrNode.set_key_fields(target_modules)
config = load_config(filename, base_key)
diff --git a/excore/registry.py b/excore/registry.py
index 1219c49..e78a408 100644
--- a/excore/registry.py
+++ b/excore/registry.py
@@ -1,18 +1,25 @@
import fnmatch
import functools
+import importlib
import inspect
import json
+import os
import re
from typing import Any, Callable, Dict, List, Optional, Sequence, Union
from tabulate import tabulate
+from ._constants import _cache_dir, _registry_cache_file
from .logger import logger
+from .utils import FileLock
_name_re = re.compile(r"^[A-Za-z0-9_]+$")
_private_flag: str = "__"
-__all__ = ["Registry"]
+__all__ = ["Registry", "auto_register"]
+
+
+# TODO(Asthestarsfalll): Maybe some methods need to be cleared.
def _is_pure_ascii(name: str):
@@ -73,8 +80,12 @@ def __call__(cls, name, **kwargs) -> "Registry":
return instance
+# Maybe someday we can get rid of Registry?
class Registry(dict, metaclass=RegistryMeta):
_globals: Optional["Registry"] = None
+ _registry_dir = "registry"
+ # just a workaround for twice registry
+ _prevent_register = False
"""A registry that stores functions and classes by name.
Attributes:
@@ -100,6 +111,38 @@ def __init__(
)
self.extra_info = dict()
+ @classmethod
+ def dump(cls):
+ file_path = os.path.join(_cache_dir, cls._registry_dir, _registry_cache_file)
+ os.makedirs(os.path.join(_cache_dir, cls._registry_dir), exist_ok=True)
+ import pickle # pylint: disable=import-outside-toplevel
+
+ with FileLock(file_path):
+ with open(file_path, "wb") as f:
+ pickle.dump(cls._registry_pool.items(), f)
+
+ @classmethod
+ def load(cls):
+ file_path = os.path.join(_cache_dir, cls._registry_dir, _registry_cache_file)
+ if not os.path.exists(file_path):
+ # shall we need to be silent? Or raise error?
+ logger.warning("Registry cache file do not exist!")
+ return
+ import pickle # pylint: disable=import-outside-toplevel
+
+ with FileLock(file_path):
+ with open(file_path, "rb") as f:
+ data = pickle.load(f)
+ cls._registry_pool.update(data)
+
+ @classmethod
+ def lock_register(cls):
+ cls._prevent_register = True
+
+ @classmethod
+ def unlock_register(cls):
+ cls._prevent_register = False
+
@classmethod
def get_registry(cls, name: str, default: Any = None) -> Any:
"""
@@ -153,12 +196,14 @@ def _register(
name: Optional[str] = None,
**extra_info,
) -> Callable:
+ if Registry._prevent_register:
+ return module
if not (inspect.isfunction(module) or inspect.isclass(module)):
raise TypeError(
"Only support function or class, but got {}".format(type(module))
)
- name = name or module.__name__
+ name = name or module.__qualname__
if not force and name in self:
if not self[name] == module:
raise ValueError("The name {} exists".format(name))
@@ -179,7 +224,8 @@ def _register(
elif hasattr(self, "extra_field"):
self.extra_info[name] = [None] * len(self.extra_field)
- self[name] = module
+ # NOTE(Asthestarsfalll): this methods only suit for local files
+ self[name] = ".".join([module.__module__, module.__qualname__])
# update to globals
if Registry._globals is not None and name.startswith(_private_flag):
@@ -338,3 +384,27 @@ def registry_table(cls, **table_kwargs) -> Any:
)
table = "\n" + table
return table
+
+
+def _get_default_module_name(target_dir):
+ assert os.path.isdir(target_dir)
+ full_path = os.path.abspath(target_dir)
+ return full_path.split(os.sep)[-1]
+
+
+def _auto_register(target_dir, module_name):
+ for file_name in os.listdir(target_dir):
+ full_path = os.path.join(target_dir, file_name)
+ if os.path.isdir(full_path):
+ _auto_register(full_path, module_name + "." + file_name)
+ elif file_name.endswith(".py") and file_name != "__init__.py":
+ import_name = module_name + "." + file_name[:-3]
+ print(import_name)
+ importlib.import_module(import_name)
+
+
+def auto_register(target_dir, module_name=None):
+ if module_name is None:
+ module_name = _get_default_module_name(target_dir)
+ _auto_register(target_dir, module_name)
+ Registry.dump()
diff --git a/excore/utils.py b/excore/utils.py
index 99334ff..8b648cc 100644
--- a/excore/utils.py
+++ b/excore/utils.py
@@ -1,12 +1,32 @@
import functools
+import threading
+import time
class CacheOut:
def __call__(self, func):
@functools.wraps(func)
def _cache(self):
- if not hasattr(self, "elem"):
- setattr(self, "elem", func(self))
- return getattr(self, "elem")
+ if not hasattr(self, "cached_elem"):
+ setattr(self, "cached_elem", func(self))
+ return getattr(self, "cached_elem")
return _cache
+
+
+class FileLock:
+ def __init__(self, file_path, timeout=15):
+ self.file_path = file_path
+ self.timeout = timeout
+ self.lock = threading.Lock()
+
+ def __enter__(self):
+ start_time = time.time()
+ while not self.lock.acquire(False):
+ if time.time() - start_time >= self.timeout:
+ raise TimeoutError("Failed to acquire lock on file")
+ time.sleep(0.1)
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.lock.release()
| Reduce unnecessary imports
`Registry` is a great design, but as the project grows larger and larger, such a static design requires us to import all the Python files which contain a `Registry`.
So this is a compromise solution, which I call `LazyRegistry`.
The `LazyRegistry` only registers all modules the first time; it no longer maps a string to an object, but instead maps a string to a string, such as `ResNet` -> `src.modeling.bacbone.ResNet`.
It will dump all `Registry` instances to a binary cache file in the default cache_dir.
Before parsing configs, the cached file will be loaded.
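A minimal sketch of the string-to-object resolution this relies on, mirroring the `_str_to_target` helper added in the patch (the example path is illustrative):
```python
import importlib

def str_to_target(qualified_name: str):
    # "src.model.backbone.resnet.ResNet" -> the ResNet class object
    module_path, _, target = qualified_name.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, target)

# The registry now stores only the string; the import happens on first build, e.g.:
# ResNet = str_to_target("src.model.backbone.resnet.ResNet")
```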
| 2023-05-26T11:42:08 | 0.0 | [] | [] |
|||
CybercentreCanada/assemblyline-ui | CybercentreCanada__assemblyline-ui-699 | accfc2ca4861866d61719d83801fb7df7380f7b7 | diff --git a/assemblyline_ui/api/base.py b/assemblyline_ui/api/base.py
index 2ed84bdc..c202c931 100644
--- a/assemblyline_ui/api/base.py
+++ b/assemblyline_ui/api/base.py
@@ -1,4 +1,5 @@
+from urllib.parse import quote
import elasticapm
import functools
import hashlib
@@ -217,10 +218,13 @@ def make_file_response(data, name, size, status_code=200, content_type="applicat
if quota_user and quota_set:
QUOTA_TRACKER.end(quota_user)
+ filename = f"UTF-8''{quote(safe_str(name), safe='')}"
+
response = make_response(data, status_code)
response.headers["Content-Type"] = content_type
response.headers["Content-Length"] = size
- response.headers["Content-Disposition"] = 'attachment; filename="%s"' % safe_str(name)
+ response.headers["Content-Disposition"] = f"attachment; filename=file.bin; filename*={filename}"
+
return response
@@ -241,9 +245,10 @@ def generate():
yield data
reader.close()
- headers = {"Content-Type": 'application/octet-stream',
- "Content-Length": size,
- "Content-Disposition": 'attachment; filename="%s"' % safe_str(name)}
+ filename = f"UTF-8''{quote(safe_str(name), safe='')}"
+
+ headers = {"Content-Type": 'application/octet-stream', "Content-Length": size,
+ "Content-Disposition": f"attachment; filename=file.bin; filename*={filename}"}
return Response(generate(), status=status_code, headers=headers)
diff --git a/assemblyline_ui/api/v4/alert.py b/assemblyline_ui/api/v4/alert.py
index 0c95415a..1cae5675 100644
--- a/assemblyline_ui/api/v4/alert.py
+++ b/assemblyline_ui/api/v4/alert.py
@@ -441,10 +441,12 @@ def run_workflow(**kwargs):
user = kwargs['user']
try:
labels = set(request.json.get('labels', []))
- priority = request.json.get('priority', "").upper()
+ priority = request.json.get('priority')
+ priority = priority.upper() if priority else None
if priority not in PRIORITIES:
raise ValueError(f"Priority {priority} not in priorities")
- status = request.json.get('status', "").upper()
+ status = request.json.get('status')
+ status = status.upper() if status else None
if status not in STATUSES:
raise ValueError(f"Status '{status}' not in statuses")
except ValueError as e:
@@ -498,10 +500,12 @@ def run_workflow_by_batch(**kwargs):
user = kwargs['user']
try:
labels = set(request.json.get('labels', []))
- priority = request.json.get('priority', "").upper()
+ priority = request.json.get('priority')
+ priority = priority.upper() if priority else None
if priority not in PRIORITIES:
raise ValueError(f"Priority {priority} not in priorities")
- status = request.json.get('status', "").upper()
+ status = request.json.get('status')
+ status = status.upper() if status else None
if status not in STATUSES:
raise ValueError(f"Status '{status}' not in statuses")
except ValueError as e:
diff --git a/assemblyline_ui/api/v4/authentication.py b/assemblyline_ui/api/v4/authentication.py
index 926e2322..80b0e5f5 100644
--- a/assemblyline_ui/api/v4/authentication.py
+++ b/assemblyline_ui/api/v4/authentication.py
@@ -11,7 +11,7 @@
from io import BytesIO
from passlib.hash import bcrypt
from urllib.parse import urlparse
-from werkzeug.exceptions import BadRequest
+from werkzeug.exceptions import BadRequest, UnsupportedMediaType
from assemblyline.common.comms import send_reset_email, send_signup_email
from assemblyline.common.isotime import now
@@ -295,7 +295,7 @@ def get_reset_link(**_):
try:
data = request.json
- except BadRequest:
+ except (BadRequest, UnsupportedMediaType):
data = request.values
email = data.get('email', None)
@@ -343,7 +343,7 @@ def login(**_):
"""
try:
data = request.json
- except BadRequest:
+ except (BadRequest, UnsupportedMediaType):
data = request.values
user = data.get('user', None)
@@ -563,63 +563,68 @@ def oauth_validate(**_):
if user_data:
data = parse_profile(user_data, oauth_provider_config)
has_access = data.pop('access', False)
- if has_access and data['email'] is not None:
- oauth_avatar = data.pop('avatar', None)
-
- # Find if user already exists
- users = STORAGE.user.search(f"email:{data['email']}", fl="*", as_obj=False)['items']
- if users:
- cur_user = users[0]
- # Do not update username and password from the current user
- data['uname'] = cur_user.get('uname', data['uname'])
- data['password'] = cur_user.get('password', data['password'])
- else:
- if data['uname'] != data['email']:
- # Username was computed using a regular expression, lets make sure we don't
- # assign the same username to two users
- res = STORAGE.user.search(f"uname:{data['uname']}", rows=0, as_obj=False)
- if res['total'] > 0:
- cnt = res['total']
+
+ if data['email'] is None:
+ return make_api_response({"err_code": 4}, err="Could not find an email address for the user",
+ status_code=403)
+
+ if not has_access:
+ return make_api_response({"err_code": 2}, err="This user is not allowed access to the system",
+ status_code=403)
+
+ oauth_avatar = data.pop('avatar', None)
+
+ # Find if user already exists
+ users = STORAGE.user.search(f"email:{data['email']}", fl="*", as_obj=False)['items']
+ if users:
+ cur_user = users[0]
+ # Do not update username and password from the current user
+ data['uname'] = cur_user.get('uname', data['uname'])
+ data['password'] = cur_user.get('password', data['password'])
+ else:
+ if data['uname'] != data['email']:
+ # Username was computed using a regular expression, lets make sure we don't
+ # assign the same username to two users
+ res = STORAGE.user.search(f"uname:{data['uname']}", rows=0, as_obj=False)
+ if res['total'] > 0:
+ cnt = res['total']
+ new_uname = f"{data['uname']}{cnt}"
+ while STORAGE.user.get(new_uname) is not None:
+ cnt += 1
new_uname = f"{data['uname']}{cnt}"
- while STORAGE.user.get(new_uname) is not None:
- cnt += 1
- new_uname = f"{data['uname']}{cnt}"
- data['uname'] = new_uname
- cur_user = {}
-
- username = data['uname']
- email_adr = data['email']
-
- # Add add dynamic classification group
- data['classification'] = get_dynamic_classification(data['classification'], data)
-
- # Make sure the user exists in AL and is in sync
- if (not cur_user and oauth_provider_config.auto_create) or \
- (cur_user and oauth_provider_config.auto_sync):
-
- # Update the current user
- cur_user.update(data)
-
- # Save avatar
- if oauth_avatar:
- avatar = fetch_avatar(oauth_avatar, provider, oauth_provider_config)
- if avatar:
- STORAGE.user_avatar.save(username, avatar)
-
- # Save updated user
- STORAGE.user.save(username, cur_user)
-
- if cur_user:
- if avatar is None:
- avatar = STORAGE.user_avatar.get(username) or "/static/images/user_default.png"
- oauth_token_id = hashlib.sha256(str(token).encode("utf-8", errors='replace')).hexdigest()
- get_token_store(username).add(oauth_token_id)
- else:
- return make_api_response({"err_code": 3},
- err="User auto-creation is disabled",
- status_code=403)
+ data['uname'] = new_uname
+ cur_user = {}
+
+ username = data['uname']
+ email_adr = data['email']
+
+ # Add add dynamic classification group
+ data['classification'] = get_dynamic_classification(data['classification'], data)
+
+ # Make sure the user exists in AL and is in sync
+ if (not cur_user and oauth_provider_config.auto_create) or \
+ (cur_user and oauth_provider_config.auto_sync):
+
+ # Update the current user
+ cur_user.update(data)
+
+ # Save avatar
+ if oauth_avatar:
+ avatar = fetch_avatar(oauth_avatar, provider, oauth_provider_config)
+ if avatar:
+ STORAGE.user_avatar.save(username, avatar)
+
+ # Save updated user
+ STORAGE.user.save(username, cur_user)
+
+ if cur_user:
+ if avatar is None:
+ avatar = STORAGE.user_avatar.get(username) or "/static/images/user_default.png"
+ oauth_token_id = hashlib.sha256(str(token).encode("utf-8", errors='replace')).hexdigest()
+ get_token_store(username).add(oauth_token_id)
else:
- return make_api_response({"err_code": 2}, err="This user is not allowed access to the system",
+ return make_api_response({"err_code": 3},
+ err="User auto-creation is disabled",
status_code=403)
except OAuthError as err:
@@ -671,7 +676,7 @@ def reset_pwd(**_):
try:
data = request.json
- except BadRequest:
+ except (BadRequest, UnsupportedMediaType):
data = request.values
reset_id = data.get('reset_id', None)
@@ -786,7 +791,7 @@ def signup(**_):
try:
data = request.json
- except BadRequest:
+ except (BadRequest, UnsupportedMediaType):
data = request.values
uname = data.get('user', None)
@@ -879,7 +884,7 @@ def signup_validate(**_):
try:
data = request.json
- except BadRequest:
+ except (BadRequest, UnsupportedMediaType):
data = request.values
registration_key = data.get('registration_key', None)
diff --git a/assemblyline_ui/error.py b/assemblyline_ui/error.py
index da93024e..a6a617e4 100644
--- a/assemblyline_ui/error.py
+++ b/assemblyline_ui/error.py
@@ -88,6 +88,11 @@ def handle_404(_):
return make_api_response("", "Api does not exist (%s)" % request.path, 404)
[email protected]_errorhandler(415)
+def handle_415(e):
+ return make_api_response("", str(e))
+
+
@errors.app_errorhandler(500)
def handle_500(e):
if isinstance(e.original_exception, AccessDeniedException):
diff --git a/assemblyline_ui/helper/oauth.py b/assemblyline_ui/helper/oauth.py
index d4534019..f1274a38 100644
--- a/assemblyline_ui/helper/oauth.py
+++ b/assemblyline_ui/helper/oauth.py
@@ -20,7 +20,7 @@ def reorder_name(name):
def parse_profile(profile, provider):
# Find email address and normalize it for further processing
email_adr = None
- for email_key in ['email', 'emails', 'extension_selectedEmailAddress', 'otherMails', 'preferred_username', 'upn']:
+ for email_key in provider.email_fields:
email_adr = profile.get(email_key, None)
if email_adr:
break
@@ -42,7 +42,7 @@ def parse_profile(profile, provider):
uname = random_user(digits=provider.uid_randomize_digits, delimiter=provider.uid_randomize_delimiter)
else:
# Generate it from email address
- uname = profile.get('uname', email_adr)
+ uname = profile.get(provider.username_field, email_adr)
# 1. Use provided regex matcher
if uname is not None and uname == email_adr and provider.uid_regex:
diff --git a/assemblyline_ui/helper/user.py b/assemblyline_ui/helper/user.py
index dc7b0213..7e463481 100644
--- a/assemblyline_ui/helper/user.py
+++ b/assemblyline_ui/helper/user.py
@@ -129,7 +129,7 @@ def save_user_account(username, data, user):
def get_dynamic_classification(current_c12n, user_info):
- new_c12n = Classification.normalize_classification(current_c12n, get_dynamic_groups=False)
+ new_c12n = Classification.normalize_classification(current_c12n, skip_auto_select=True, get_dynamic_groups=False)
if Classification.dynamic_groups:
email = user_info.get('email', None)
diff --git a/assemblyline_ui/security/authenticator.py b/assemblyline_ui/security/authenticator.py
index 3d88dd56..33588a00 100644
--- a/assemblyline_ui/security/authenticator.py
+++ b/assemblyline_ui/security/authenticator.py
@@ -78,8 +78,8 @@ def get_logged_in_user(self):
session_id = flsk_session.get("session_id", None)
if not session_id:
- if current_app.session_cookie_name in request.cookies:
- session = request.cookies.get(current_app.session_cookie_name)
+ if current_app.config['SESSION_COOKIE_NAME'] in request.cookies:
+ session = request.cookies.get(current_app.config['SESSION_COOKIE_NAME'])
# Try to load the session by hand to check why is rejected
try:
diff --git a/setup.py b/setup.py
index 44340ff3..f24e323f 100644
--- a/setup.py
+++ b/setup.py
@@ -39,8 +39,8 @@
install_requires=[
'assemblyline',
'assemblyline-core',
- 'werkzeug==2.2.3',
- 'flask==2.2.3',
+ 'werkzeug',
+ 'flask',
'pyqrcode',
'markdown',
'python-ldap',
| get_dynamic_classification should skip auto selecting groups (master)
When normalizing dynamic classifications for users, `skip_auto_select` should be set so that users are not given groups they should not have.
| 2023-07-31T14:09:24 | 0.0 | [] | [] |
|||
we-like-parsers/pegen | we-like-parsers__pegen-106 | 26d533d747ee157f0ac460b9aff006a80c35b6dc | diff --git a/src/pegen/python_generator.py b/src/pegen/python_generator.py
index 05c9959..640bb9c 100644
--- a/src/pegen/python_generator.py
+++ b/src/pegen/python_generator.py
@@ -286,7 +286,7 @@ def alts_uses_locations(self, alts: Sequence[Alt]) -> bool:
def add_return(self, ret_val: str) -> None:
for stmt in self.cleanup_statements:
self.print(stmt)
- self.print(f"return {ret_val};")
+ self.print(f"return {ret_val}")
def visit_Rule(self, node: Rule) -> None:
is_loop = node.is_loop()
| Redundant semicolon in python_generator.py?
https://github.com/we-like-parsers/pegen/commit/e28fe4fb1972c55af5ddb6a7bdd9cba4ea072b81#r149797779
I don't know if this `;` has any special meaning here. From the generated parser script and from the commit diff, it looks like a typo.
| cc @MatthieuDartiailh
It's probably a typo. @yf-yang Feel free to open a PR to fix this if you want. | 2024-12-04T01:48:48 | 0.0 | [] | [] |
||
zlatko-minev/pyEPR | zlatko-minev__pyEPR-132 | 0ac3fa2213d2ef6dacd1d2f487ef345cebee235b | diff --git a/pyEPR/core_distributed_analysis.py b/pyEPR/core_distributed_analysis.py
index 12154f5..5df07a9 100644
--- a/pyEPR/core_distributed_analysis.py
+++ b/pyEPR/core_distributed_analysis.py
@@ -657,12 +657,13 @@ def calc_energy_magnetic(self,
def calc_p_electric_volume(self,
name_dielectric3D,
relative_to='AllObjects',
+ variation=None,
E_total=None
):
r'''
- Calculate the dielectric energy-participatio ratio
+ Calculate the dielectric energy-participation ratio
of a 3D object (one that has volume) relative to the dielectric energy of
- a list of object objects.
+ a list of objects.
This is as a function relative to another object or all objects.
@@ -670,18 +671,17 @@ def calc_p_electric_volume(self,
that might be stored in any lumped elements or lumped capacitors.
Returns:
- ---------
ℰ_object/ℰ_total, (ℰ_object, ℰ_total)
'''
if E_total is None:
logger.debug('Calculating ℰ_total')
- ℰ_total = self.calc_energy_electric(obj=relative_to)
+ ℰ_total = self.calc_energy_electric(obj=relative_to, variation=variation)
else:
ℰ_total = E_total
logger.debug('Calculating ℰ_object')
- ℰ_object = self.calc_energy_electric(obj=name_dielectric3D)
+ ℰ_object = self.calc_energy_electric(obj=name_dielectric3D, variation=variation)
return ℰ_object/ℰ_total, (ℰ_object, ℰ_total)
| `calc_p_electric_volume` does not support variations
Currently, [`DistributedAnalysis.calc_p_electric_volume`](https://pyepr-docs.readthedocs.io/en/latest/api/pyEPR.core_distributed_analysis.html#pyEPR.core_distributed_analysis.DistributedAnalysis.calc_p_electric_volume) does not support specifying different variations. The underlying `calc_energy_electric` supports different variations so it should be easy to implement this.
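A hedged sketch of how the `variation` argument added in the patch above could be used; the project, design, and object names are placeholders, not taken from the issue:

```python
# Illustrative only: project/design/object names are made up.
import pyEPR as epr

pinfo = epr.ProjectInfo(project_path=".", project_name="my_project", design_name="my_design")
eprd = epr.DistributedAnalysis(pinfo)

for variation in eprd.variations:  # e.g. ["0", "1", ...]
    # dielectric participation of the "substrate" object for this variation
    p_dielectric, (E_obj, E_total) = eprd.calc_p_electric_volume(
        "substrate",
        relative_to="AllObjects",
        variation=variation,       # forwarded to calc_energy_electric
    )
    print(variation, p_dielectric)
```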
| 2022-08-04T08:10:53 | 0.0 | [] | [] |
|||
Accenture/AmpliGraph | Accenture__AmpliGraph-244 | e690576ff90aa74d0ce011c6094e072a6e064727 | diff --git a/ampligraph/latent_features/models/ComplEx.py b/ampligraph/latent_features/models/ComplEx.py
index a6122374..8e6848d0 100644
--- a/ampligraph/latent_features/models/ComplEx.py
+++ b/ampligraph/latent_features/models/ComplEx.py
@@ -255,7 +255,7 @@ def _fn(self, e_s, e_p, e_o):
tf.reduce_sum(e_p_img * e_s_real * e_o_img, axis=1) - \
tf.reduce_sum(e_p_img * e_s_img * e_o_real, axis=1)
- def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None):
+ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None):
"""Train a ComplEx model.
The model is trained on a training set X using the training protocol
@@ -321,7 +321,7 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
One can also think about assigning numeric values by looking at the distribution of it per predicate.
"""
- super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values)
+ super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values, tensorboard_logs_path=tensorboard_logs_path)
def predict(self, X, from_idx=False):
__doc__ = super().predict.__doc__ # NOQA
diff --git a/ampligraph/latent_features/models/ConvKB.py b/ampligraph/latent_features/models/ConvKB.py
index 2f1006d1..3a89d303 100644
--- a/ampligraph/latent_features/models/ConvKB.py
+++ b/ampligraph/latent_features/models/ConvKB.py
@@ -418,7 +418,7 @@ def _fn(self, e_s, e_p, e_o):
return tf.squeeze(self.scores)
- def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None):
+ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None):
"""Train a ConvKB model (with optional early stopping).
The model is trained on a training set X using the training protocol described in :cite:`trouillon2016complex`.
@@ -493,4 +493,4 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
One can also think about assigning numeric values by looking at the distribution of it per predicate.
"""
- super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values)
+ super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values, tensorboard_logs_path=tensorboard_logs_path)
diff --git a/ampligraph/latent_features/models/DistMult.py b/ampligraph/latent_features/models/DistMult.py
index 2944ea9e..c5cd6380 100644
--- a/ampligraph/latent_features/models/DistMult.py
+++ b/ampligraph/latent_features/models/DistMult.py
@@ -201,7 +201,7 @@ def _fn(self, e_s, e_p, e_o):
return tf.reduce_sum(e_s * e_p * e_o, axis=1)
- def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None):
+ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None):
"""Train an DistMult.
The model is trained on a training set X using the training protocol
@@ -267,7 +267,7 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
One can also think about assigning numeric values by looking at the distribution of it per predicate.
"""
- super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values)
+ super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values, tensorboard_logs_path=tensorboard_logs_path)
def predict(self, X, from_idx=False):
__doc__ = super().predict.__doc__ # NOQA
diff --git a/ampligraph/latent_features/models/EmbeddingModel.py b/ampligraph/latent_features/models/EmbeddingModel.py
index 36543ffa..d4ae4e67 100644
--- a/ampligraph/latent_features/models/EmbeddingModel.py
+++ b/ampligraph/latent_features/models/EmbeddingModel.py
@@ -828,6 +828,10 @@ def _perform_early_stopping_test(self, epoch):
current_test_value = hits_at_n_score(ranks, 1)
elif self.early_stopping_criteria == 'mrr':
current_test_value = mrr_score(ranks)
+ tag = "Early stopping {} current value".format(self.early_stopping_criteria)
+ summary = tf.Summary(value=[tf.Summary.Value(tag=tag,
+ simple_value=current_test_value)])
+ self.writer.add_summary(summary, epoch)
if self.early_stopping_best_value is None: # First validation iteration
self.early_stopping_best_value = current_test_value
@@ -944,7 +948,7 @@ def _training_data_generator(self):
else:
yield np.squeeze(out_triples), unique_entities, entity_embeddings
- def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None):
+ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None):
"""Train an EmbeddingModel (with optional early stopping).
The model is trained on a training set X using the training protocol
@@ -983,6 +987,11 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
If the numeric value is unknown pass a NaN weight. The model will uniformly randomly assign a numeric value.
One can also think about assigning numeric values by looking at the distribution of it per predicate.
+ tensorboard_logs_path: str or None
+ Path to store tensorboard logs, e.g. average training loss tracking per epoch (default: ``None`` indicating no
+ logs will be collected). When provided it will create a folder under provided path and save tensorboard files there.
+ To then view the loss in the terminal run: ``tensorboard --logdir <tensorboard_logs_path>``.
+
"""
self.train_dataset_handle = None
# try-except block is mainly to handle clean up in case of exception or manual stop in jupyter notebook
@@ -1067,7 +1076,8 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
tf.random.set_random_seed(self.seed)
self.sess_train = tf.Session(config=self.tf_config)
-
+ if tensorboard_logs_path is not None:
+ self.writer = tf.summary.FileWriter(tensorboard_logs_path, self.sess_train.graph)
batch_size = int(np.ceil(self.train_dataset_handle.get_size("train") / self.batches_count))
# dataset = tf.data.Dataset.from_tensor_slices(X).repeat().batch(batch_size).prefetch(2)
@@ -1146,7 +1156,11 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
losses.append(loss_batch)
if self.embedding_model_params.get('normalize_ent_emb', constants.DEFAULT_NORMALIZE_EMBEDDINGS):
self.sess_train.run(normalize_ent_emb_op)
-
+ if tensorboard_logs_path is not None:
+ avg_loss = sum(losses)/(batch_size * self.batches_count)
+ summary = tf.Summary(value=[tf.Summary.Value(tag="Average Loss",
+ simple_value=avg_loss)])
+ self.writer.add_summary(summary, epoch)
if self.verbose:
focusE = ''
if self.use_focusE:
@@ -1169,6 +1183,9 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
pass
if self._perform_early_stopping_test(epoch):
+ if tensorboard_logs_path is not None:
+ self.writer.flush()
+ self.writer.close()
self._end_training()
return
@@ -1176,6 +1193,9 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
self.sess_train.run(self.set_training_true)
except AttributeError:
pass
+ if tensorboard_logs_path is not None:
+ self.writer.flush()
+ self.writer.close()
self._save_trained_params()
self._end_training()
diff --git a/ampligraph/latent_features/models/HolE.py b/ampligraph/latent_features/models/HolE.py
index 4ba91d4b..1b9f728b 100644
--- a/ampligraph/latent_features/models/HolE.py
+++ b/ampligraph/latent_features/models/HolE.py
@@ -183,7 +183,7 @@ def _fn(self, e_s, e_p, e_o):
"""
return (2 / self.k) * (super()._fn(e_s, e_p, e_o))
- def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None):
+ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None):
"""Train a HolE model.
The model is trained on a training set X using the training protocol
@@ -249,7 +249,7 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
One can also think about assigning numeric values by looking at the distribution of it per predicate.
"""
- super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values)
+ super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values, tensorboard_logs_path=tensorboard_logs_path)
def predict(self, X, from_idx=False):
__doc__ = super().predict.__doc__ # NOQA
diff --git a/ampligraph/latent_features/models/RandomBaseline.py b/ampligraph/latent_features/models/RandomBaseline.py
index ffd36ebb..554f332c 100644
--- a/ampligraph/latent_features/models/RandomBaseline.py
+++ b/ampligraph/latent_features/models/RandomBaseline.py
@@ -79,7 +79,7 @@ def _fn(self, e_s, e_p, e_o):
else:
return tf.random_uniform((tf.size(e_s),), minval=0, maxval=1)
- def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None):
+ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None):
"""Train the random model.
There is no actual training involved in practice and the early stopping parameters won't have any effect.
@@ -143,7 +143,7 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
If the numeric value is unknown pass a NaN weight. The model will uniformly randomly assign a numeric value.
One can also think about assigning numeric values by looking at the distribution of it per predicate.
"""
- super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values)
+ super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values, tensorboard_logs_path=tensorboard_logs_path)
def predict(self, X, from_idx=False):
__doc__ = super().predict.__doc__ # NOQA
diff --git a/ampligraph/latent_features/models/TransE.py b/ampligraph/latent_features/models/TransE.py
index 7a68d330..4d3d3387 100644
--- a/ampligraph/latent_features/models/TransE.py
+++ b/ampligraph/latent_features/models/TransE.py
@@ -209,7 +209,7 @@ def _fn(self, e_s, e_p, e_o):
tf.norm(e_s + e_p - e_o, ord=self.embedding_model_params.get('norm', constants.DEFAULT_NORM_TRANSE),
axis=1))
- def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None):
+ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_edge_values=None, tensorboard_logs_path=None):
"""Train an Translating Embeddings model.
The model is trained on a training set X using the training protocol
@@ -275,7 +275,7 @@ def fit(self, X, early_stopping=False, early_stopping_params={}, focusE_numeric_
One can also think about assigning numeric values by looking at the distribution of it per predicate.
"""
- super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values)
+ super().fit(X, early_stopping, early_stopping_params, focusE_numeric_edge_values, tensorboard_logs_path=tensorboard_logs_path)
def predict(self, X, from_idx=False):
__doc__ = super().predict.__doc__ # NOQA
| Plotting Loss Values for each Epoch
Hello!
I am training a knowledge graph model using ampligraph 1.3.2. Could you please let me know how I can save the average loss values for each epoch and plot them in a graph?
Thanks in advance.
| Hey @sourav1312,
Although the current codebase does not have that functionality baked in, [you can have a go with TensorBoard for tf 1.x.](https://github.com/tensorflow/tensorboard/blob/master/docs/r1/summaries.md)
@lukostaz For tracking loss and other metrics using TensorBoard or WandB, the appropriate variables need to be passed as parameters. I could not find any such variable in the model.fit() function. How can I refer to such variables for the model that is being trained, like val_loss, train_loss, etc.?
@sourav1312 Currently this is not supported, but we are working on this, and it will be added soon | 2021-05-20T20:58:27 | 0.0 | [] | [] |
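For context, a hedged sketch of how the `tensorboard_logs_path` argument added by the patch above can be used to get the per-epoch average loss into TensorBoard; the toy triples and hyperparameters are illustrative only:

```python
import numpy as np
from ampligraph.latent_features import ComplEx

X = np.array([
    ["a", "likes", "b"],
    ["b", "likes", "c"],
    ["a", "knows", "c"],
])

model = ComplEx(batches_count=1, epochs=50, k=10, verbose=True)
# The average training loss per epoch is written as TensorBoard summaries.
model.fit(X, tensorboard_logs_path="./tb_logs")
# Then inspect the curve with:  tensorboard --logdir ./tb_logs
```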
||
Nixtla/utilsforecast | Nixtla__utilsforecast-99 | b2a037cf9aa47b6103d23db466bbd8d214b9ab31 | diff --git a/nbs/evaluation.ipynb b/nbs/evaluation.ipynb
index 1466b0f..f317959 100644
--- a/nbs/evaluation.ipynb
+++ b/nbs/evaluation.ipynb
@@ -98,6 +98,7 @@
" id_col: str = 'unique_id',\n",
" time_col: str = 'ds',\n",
" target_col: str = 'y',\n",
+ " agg_fn: Optional[str] = None,\n",
") -> DataFrame:\n",
" \"\"\"Evaluate forecast using different metrics.\n",
" \n",
@@ -121,11 +122,14 @@
" Column that identifies each timestep, its values can be timestamps or integers.\n",
" target_col : str (default='y')\n",
" Column that contains the target.\n",
+ " agg_fn : str, optional (default=None)\n",
+ " Statistic to compute on the scores by id to reduce them to a single number.\n",
"\n",
" Returns\n",
" -------\n",
" pandas or polars DataFrame\n",
" Metrics with one row per (id, metric) combination and one column per model.\n",
+ " If `agg_fn` is not `None`, there is only one row per metric.\n",
" \"\"\"\n",
" if models is None:\n",
" model_cols = [\n",
@@ -225,12 +229,18 @@
" results_per_metric.append(result)\n",
" if isinstance(df, pd.DataFrame):\n",
" df = pd.concat(results_per_metric).reset_index(drop=True)\n",
- " out_cols = [c for c in df.columns if c not in (id_col, 'metric')]\n",
- " df = df[[id_col, 'metric', *out_cols]]\n",
" else:\n",
- " df = pl.concat(results_per_metric, how='diagonal')\n",
- " out_cols = [c for c in df.columns if c not in (id_col, 'metric')]\n",
- " df = df.select([id_col, 'metric', *out_cols])\n",
+ " df = pl.concat(results_per_metric, how=\"diagonal\")\n",
+ " id_cols = [id_col, \"metric\"]\n",
+ " model_cols = [c for c in df.columns if c not in id_cols]\n",
+ " df = df[id_cols + model_cols]\n",
+ " if agg_fn is not None:\n",
+ " df = ufp.group_by_agg(\n",
+ " df,\n",
+ " by='metric',\n",
+ " aggs={m: agg_fn for m in model_cols},\n",
+ " maintain_order=True,\n",
+ " )\n",
" return df"
]
},
@@ -266,7 +276,8 @@
"> models:Optional[List[str]]=None, train_df:Union[pandas.core.fra\n",
"> me.DataFrame,polars.dataframe.frame.DataFrame,NoneType]=None,\n",
"> level:Optional[List[int]]=None, id_col:str='unique_id',\n",
- "> time_col:str='ds', target_col:str='y')\n",
+ "> time_col:str='ds', target_col:str='y',\n",
+ "> agg_fn:Optional[str]=None)\n",
"\n",
"*Evaluate forecast using different metrics.*\n",
"\n",
@@ -280,7 +291,8 @@
"| id_col | str | unique_id | Column that identifies each serie. |\n",
"| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
"| target_col | str | y | Column that contains the target. |\n",
- "| **Returns** | **Union** | | **Metrics with one row per (id, metric) combination and one column per model.** |"
+ "| agg_fn | Optional | None | Statistic to compute on the scores by id to reduce them to a single number. |\n",
+ "| **Returns** | **Union** | | **Metrics with one row per (id, metric) combination and one column per model.<br>If `agg_fn` is not `None`, there is only one row per metric.** |"
],
"text/plain": [
"---\n",
@@ -295,7 +307,8 @@
"> models:Optional[List[str]]=None, train_df:Union[pandas.core.fra\n",
"> me.DataFrame,polars.dataframe.frame.DataFrame,NoneType]=None,\n",
"> level:Optional[List[int]]=None, id_col:str='unique_id',\n",
- "> time_col:str='ds', target_col:str='y')\n",
+ "> time_col:str='ds', target_col:str='y',\n",
+ "> agg_fn:Optional[str]=None)\n",
"\n",
"*Evaluate forecast using different metrics.*\n",
"\n",
@@ -309,7 +322,8 @@
"| id_col | str | unique_id | Column that identifies each serie. |\n",
"| time_col | str | ds | Column that identifies each timestep, its values can be timestamps or integers. |\n",
"| target_col | str | y | Column that contains the target. |\n",
- "| **Returns** | **Union** | | **Metrics with one row per (id, metric) combination and one column per model.** |"
+ "| agg_fn | Optional | None | Statistic to compute on the scores by id to reduce them to a single number. |\n",
+ "| **Returns** | **Union** | | **Metrics with one row per (id, metric) combination and one column per model.<br>If `agg_fn` is not `None`, there is only one row per metric.** |"
]
},
"execution_count": null,
@@ -706,6 +720,18 @@
"summary"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "76454324-4e83-4c67-9704-779a5ce3dfed",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#| hide\n",
+ "#| polars\n",
+ "import polars.testing"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -724,10 +750,16 @@
" level=[80, 95],\n",
" ).drop('unique_id')\n",
")\n",
- "pl_evaluation = ufp.group_by(pl_evaluation, 'metric').mean()\n",
+ "pl_summary = ufp.group_by(pl_evaluation, 'metric').mean()\n",
"pd.testing.assert_frame_equal(\n",
" summary.sort_values('metric'),\n",
- " pl_evaluation.sort('metric').to_pandas(),\n",
+ " pl_summary.sort('metric').to_pandas(),\n",
+ ")\n",
+ "pl.testing.assert_frame_equal(\n",
+ " evaluate(\n",
+ " series_pl, metrics=metrics, train_df=series_pl, level=[80, 95], agg_fn='mean'\n",
+ " ).sort('metric'),\n",
+ " pl_summary.sort('metric'),\n",
")"
]
},
@@ -756,39 +788,57 @@
"def daily_mase(y, y_hat, y_train):\n",
" return ds_losses.mase(y, y_hat, y_train, seasonality=7)\n",
"\n",
- "uf_res = evaluation.drop(columns='unique_id').groupby('metric', as_index=False).mean()\n",
- "ds_res = ds_evaluate(\n",
- " series,\n",
- " metrics=[\n",
- " ds_losses.mae,\n",
- " ds_losses.mse,\n",
- " ds_losses.rmse,\n",
- " ds_losses.mape,\n",
- " daily_mase,\n",
- " ds_losses.smape,\n",
- " ds_losses.quantile_loss, \n",
- " ds_losses.mqloss,\n",
- " ds_losses.coverage, \n",
- " ds_losses.calibration,\n",
- " ds_losses.scaled_crps,\n",
- " ],\n",
- " level=[80, 95],\n",
- " Y_df=series,\n",
- ")\n",
- "ds_res['metric'] = ds_res['metric'].str.replace('-', '_')\n",
- "ds_res['metric'] = ds_res['metric'].str.replace('q_', 'q')\n",
- "ds_res['metric'] = ds_res['metric'].str.replace('lv_', 'level')\n",
- "ds_res['metric'] = ds_res['metric'].str.replace('daily_mase', 'mase')\n",
- "# utils doesn't multiply pct metrics by 100\n",
- "ds_res.loc[ds_res['metric'].str.startswith('coverage'), ['model0', 'model1']] /= 100\n",
- "ds_res.loc[ds_res['metric'].eq('mape'), ['model0', 'model1']] /= 100\n",
- "# we report smape between 0 and 1 instead of 0-200\n",
- "ds_res.loc[ds_res['metric'].eq('smape'), ['model0', 'model1']] /= 200\n",
+ "for agg_fn in [None, 'mean']:\n",
+ " uf_res = evaluate(\n",
+ " series,\n",
+ " metrics=metrics,\n",
+ " models=models,\n",
+ " train_df=series,\n",
+ " level=[80, 95],\n",
+ " agg_fn=agg_fn,\n",
+ " )\n",
+ " agg_by = None if agg_fn == 'mean' else ['unique_id']\n",
+ " ds_res = ds_evaluate(\n",
+ " series,\n",
+ " metrics=[\n",
+ " ds_losses.mae,\n",
+ " ds_losses.mse,\n",
+ " ds_losses.rmse,\n",
+ " ds_losses.mape,\n",
+ " daily_mase,\n",
+ " ds_losses.smape,\n",
+ " ds_losses.quantile_loss, \n",
+ " ds_losses.mqloss,\n",
+ " ds_losses.coverage, \n",
+ " ds_losses.calibration,\n",
+ " ds_losses.scaled_crps,\n",
+ " ],\n",
+ " level=[80, 95],\n",
+ " Y_df=series,\n",
+ " agg_by=agg_by,\n",
+ " )\n",
+ " ds_res['metric'] = ds_res['metric'].str.replace('-', '_')\n",
+ " ds_res['metric'] = ds_res['metric'].str.replace('q_', 'q')\n",
+ " ds_res['metric'] = ds_res['metric'].str.replace('lv_', 'level')\n",
+ " ds_res['metric'] = ds_res['metric'].str.replace('daily_mase', 'mase')\n",
+ " # utils doesn't multiply pct metrics by 100\n",
+ " ds_res.loc[ds_res['metric'].str.startswith('coverage'), ['model0', 'model1']] /= 100\n",
+ " ds_res.loc[ds_res['metric'].eq('mape'), ['model0', 'model1']] /= 100\n",
+ " # we report smape between 0 and 1 instead of 0-200\n",
+ " ds_res.loc[ds_res['metric'].eq('smape'), ['model0', 'model1']] /= 200\n",
"\n",
- "pd.testing.assert_frame_equal(\n",
- " uf_res.sort_values('metric').reset_index(drop=True),\n",
- " ds_res.sort_values('metric').reset_index(drop=True),\n",
- ")"
+ " ds_res = ds_res[uf_res.columns]\n",
+ " if agg_fn is None:\n",
+ " ds_res = ds_res.sort_values(['unique_id', 'metric'])\n",
+ " uf_res = uf_res.sort_values(['unique_id', 'metric'])\n",
+ " else:\n",
+ " ds_res = ds_res.sort_values('metric')\n",
+ " uf_res = uf_res.sort_values('metric')\n",
+ " \n",
+ " pd.testing.assert_frame_equal(\n",
+ " uf_res.reset_index(drop=True),\n",
+ " ds_res.reset_index(drop=True),\n",
+ " )"
]
}
],
diff --git a/utilsforecast/evaluation.py b/utilsforecast/evaluation.py
index 8160429..65b2bb6 100644
--- a/utilsforecast/evaluation.py
+++ b/utilsforecast/evaluation.py
@@ -49,6 +49,7 @@ def evaluate(
id_col: str = "unique_id",
time_col: str = "ds",
target_col: str = "y",
+ agg_fn: Optional[str] = None,
) -> DataFrame:
"""Evaluate forecast using different metrics.
@@ -72,11 +73,14 @@ def evaluate(
Column that identifies each timestep, its values can be timestamps or integers.
target_col : str (default='y')
Column that contains the target.
+ agg_fn : str, optional (default=None)
+ Statistic to compute on the scores by id to reduce them to a single number.
Returns
-------
pandas or polars DataFrame
Metrics with one row per (id, metric) combination and one column per model.
+ If `agg_fn` is not `None`, there is only one row per metric.
"""
if models is None:
model_cols = [
@@ -184,10 +188,16 @@ def evaluate(
results_per_metric.append(result)
if isinstance(df, pd.DataFrame):
df = pd.concat(results_per_metric).reset_index(drop=True)
- out_cols = [c for c in df.columns if c not in (id_col, "metric")]
- df = df[[id_col, "metric", *out_cols]]
else:
df = pl.concat(results_per_metric, how="diagonal")
- out_cols = [c for c in df.columns if c not in (id_col, "metric")]
- df = df.select([id_col, "metric", *out_cols])
+ id_cols = [id_col, "metric"]
+ model_cols = [c for c in df.columns if c not in id_cols]
+ df = df[id_cols + model_cols]
+ if agg_fn is not None:
+ df = ufp.group_by_agg(
+ df,
+ by="metric",
+ aggs={m: agg_fn for m in model_cols},
+ maintain_order=True,
+ )
return df
| Evaluate() to return aggregated metric evaluation across unique_ids.
### Description
It would be a nice feature to be able to calculate metrics for global models, instead of only by unique_id.
For example, if we have ML models (say Catboost and LGB) with multiple forecasts for each unique_id, I would like to know the MAE for Catboost vs LGB across all unique_ids.
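A hedged sketch of what this looks like with the `agg_fn` argument introduced in the patch above; the tiny frame and model columns are made up for illustration:

```python
import pandas as pd
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import mae

df = pd.DataFrame({
    "unique_id": ["id1", "id1", "id2", "id2"],
    "ds": [1, 2, 1, 2],
    "y": [1.0, 2.0, 3.0, 4.0],
    "catboost": [1.1, 1.9, 2.8, 4.2],
    "lgb": [0.9, 2.1, 3.3, 3.9],
})

per_series = evaluate(df, metrics=[mae], models=["catboost", "lgb"])              # one row per (id, metric)
overall = evaluate(df, metrics=[mae], models=["catboost", "lgb"], agg_fn="mean")  # one row per metric
```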
### Use case
_No response_
| 2024-07-02T23:53:13 | 0.0 | [] | [] |
|||
PolicyEngine/policyengine-uk | PolicyEngine__policyengine-uk-989 | b5667a042ddec198160784e26ef297081279561f | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 4a56b9b83..e302f772f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [2.14.0] - 2024-10-28 12:09:01
+
+### Fixed
+
+- Bugs affecting household app calculations.
+
## [2.13.2] - 2024-10-28 10:46:29
### Fixed
@@ -1573,6 +1579,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
+[2.14.0]: https://github.com/PolicyEngine/openfisca-uk/compare/2.13.2...2.14.0
[2.13.2]: https://github.com/PolicyEngine/openfisca-uk/compare/2.13.1...2.13.2
[2.13.1]: https://github.com/PolicyEngine/openfisca-uk/compare/2.13.0...2.13.1
[2.13.0]: https://github.com/PolicyEngine/openfisca-uk/compare/2.12.0...2.13.0
diff --git a/changelog.yaml b/changelog.yaml
index cc39a6a0a..484e02314 100644
--- a/changelog.yaml
+++ b/changelog.yaml
@@ -1320,3 +1320,8 @@
fixed:
- Threshold freeze for ST extended to 2027.
date: 2024-10-28 10:46:29
+- bump: minor
+ changes:
+ fixed:
+ - Bugs affecting household app calculations.
+ date: 2024-10-28 12:09:01
diff --git a/policyengine_uk/parameters/gov/hmrc/national_insurance/class_1/rates/employer.yaml b/policyengine_uk/parameters/gov/hmrc/national_insurance/class_1/rates/employer.yaml
index 4a0083b1e..c3b9208bc 100644
--- a/policyengine_uk/parameters/gov/hmrc/national_insurance/class_1/rates/employer.yaml
+++ b/policyengine_uk/parameters/gov/hmrc/national_insurance/class_1/rates/employer.yaml
@@ -3,7 +3,7 @@ description: National Insurance contribution rate by employers on earnings above
metadata:
label: NI Employer rate
propagate_metadata_to_children: true
+ unit: /1
reference: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/882271/Table-a4.pdf
- unit: marginal-rate
values:
2015-06-01: 0.138
diff --git a/policyengine_uk/variables/contrib/labour/private_school_vat.py b/policyengine_uk/variables/contrib/labour/private_school_vat.py
index fae34a3fb..755cfb699 100644
--- a/policyengine_uk/variables/contrib/labour/private_school_vat.py
+++ b/policyengine_uk/variables/contrib/labour/private_school_vat.py
@@ -9,6 +9,8 @@ class attends_private_school(Variable):
value_type = bool
def formula(person, period, parameters):
+ if not hasattr(person.simulation, "dataset"):
+ return 0
household = person.household
# To ensure that our model matches
# total number of students actually enrolled
@@ -37,6 +39,9 @@ def formula(person, period, parameters):
household_weight = household("household_weight", period)
weighted_income = MicroSeries(net_income, weights=household_weight)
+ if household_weight.sum() < 1e6:
+ return 0
+
percentile = np.zeros_like(weighted_income).astype(numpy.int64)
mask = household_weight > 0
@@ -60,7 +65,9 @@ def formula(person, period, parameters):
* is_child
)
- return random(person) < p_attends_private_school
+ value = random(person) < p_attends_private_school
+
+ return value
class private_school_vat(Variable):
diff --git a/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py b/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py
index 784842b83..43c271c28 100644
--- a/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py
+++ b/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py
@@ -159,12 +159,15 @@ def formula(person, period, parameters):
total_consumption = (consumption * person_weight).sum()
share_of_total_consumption = consumption / total_consumption
- return (
+ value = (
amount_paid_by_employers
* share_of_total_consumption
* consumer_incidence
)
+ if total_consumption == 0:
+ return 0
+
class employer_ni_response_capital_incidence(Variable):
label = "capital response to employer NI reform"
@@ -199,12 +202,15 @@ def formula(person, period, parameters):
total_wealth = (wealth * person_weight).sum()
share_of_total_wealth = wealth / total_wealth
- return (
+ value = (
amount_paid_by_employers
* share_of_total_wealth
* capital_incidence
)
+ if total_wealth == 0:
+ return 0
+
class employer_ni_fixed_employer_cost_change(Variable):
label = "employer NI reform incidence"
diff --git a/setup.py b/setup.py
index bbfd87ac0..ef3e9969a 100644
--- a/setup.py
+++ b/setup.py
@@ -4,7 +4,7 @@
setup(
name="PolicyEngine-UK",
- version="2.13.2",
+ version="2.14.0",
author="PolicyEngine",
author_email="[email protected]",
classifiers=[
| Hard-code secondary threshold freeze to 2027
Per [OBR](https://obr.uk/box/the-impact-of-frozen-or-reduced-personal-tax-thresholds/#:~:text=The%20freeze%20was%20extended%20to,across%20our%20entire%20forecast%20period.)
| 2024-10-28T12:10:22 | 0.0 | [] | [] |
|||
deadc0de6/catcli | deadc0de6__catcli-11 | a519d0a36c20ac289a0162582ebd5c53366b7c48 | diff --git a/catcli/walker.py b/catcli/walker.py
index 80e99b3..37b24fb 100644
--- a/catcli/walker.py
+++ b/catcli/walker.py
@@ -21,27 +21,45 @@ def __init__(self, noder, nohash=False, debug=False):
self.debug = debug
def index(self, path, parent, name, storagepath=''):
- '''index a directory and store in tree'''
+ '''
+ index a directory and store in tree
+ @path: path to index
+ @parent: parent node
+ @name: this stoarge name
+ '''
self._debug('indexing starting at {}'.format(path))
if not parent:
parent = self.noder.dir_node(name, path, parent)
+ if os.path.islink(path):
+ rel = os.readlink(path)
+ ab = os.path.join(path, rel)
+ if os.path.isdir(ab):
+ return parent, 0
+
cnt = 0
for (root, dirs, files) in os.walk(path):
for f in files:
self._debug('found file {} under {}'.format(f, path))
sub = os.path.join(root, f)
+ if not os.path.exists(sub):
+ continue
self._log(f)
self._debug('index file {}'.format(sub))
- self.noder.file_node(os.path.basename(f), sub,
- parent, storagepath)
- cnt += 1
+ n = self.noder.file_node(os.path.basename(f), sub,
+ parent, storagepath)
+ if n:
+ cnt += 1
for d in dirs:
self._debug('found dir {} under {}'.format(d, path))
base = os.path.basename(d)
sub = os.path.join(root, d)
self._debug('index directory {}'.format(sub))
+ if not os.path.exists(sub):
+ continue
dummy = self.noder.dir_node(base, sub, parent, storagepath)
+ if not dummy:
+ continue
cnt += 1
nstoragepath = os.sep.join([storagepath, base])
if not storagepath:
@@ -67,8 +85,8 @@ def _reindex(self, path, parent, top, storagepath):
self._debug('found file {} under {}'.format(f, path))
sub = os.path.join(root, f)
maccess = os.path.getmtime(sub)
- reindex, n = self._need_reindex(parent, f, maccess)
- if not reindex:
+ need_reindex, n = self._need_reindex(parent, f, maccess)
+ if not need_reindex:
self._debug('\tignore file {}'.format(sub))
self.noder.flag(n)
continue
@@ -83,8 +101,8 @@ def _reindex(self, path, parent, top, storagepath):
base = os.path.basename(d)
sub = os.path.join(root, d)
maccess = os.path.getmtime(sub)
- reindex, dummy = self._need_reindex(parent, base, maccess)
- if reindex:
+ need_reindex, dummy = self._need_reindex(parent, base, maccess)
+ if need_reindex:
self._debug('\tre-index directory {}'.format(sub))
dummy = self.noder.dir_node(base, sub, parent, storagepath)
cnt += 1
| Recursion!

Issue is pretty self-explanatory.
| no it's not, please elaborate
When a symlink points to another symlink in a recursive manner, catcli can get trapped, resulting in path a/b/b/b/b... where /b is the symlink location.
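Not catcli's exact fix (the patch simply skips symlinks that resolve to directories before walking), but a minimal sketch of the general guard against such symlink loops:

```python
import os

def walk_without_symlink_loops(path):
    """Walk `path`, pruning directories whose real path was already visited."""
    seen = set()
    for root, dirs, files in os.walk(path, followlinks=True):
        real = os.path.realpath(root)
        if real in seen:
            dirs[:] = []          # already visited through another link: stop descending
            continue
        seen.add(real)
        for name in files:
            sub = os.path.join(root, name)
            if os.path.exists(sub):   # skip broken symlinks
                print("file:", sub)
```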
| 2020-03-27T09:09:26 | 0.0 | [] | [] |
||
cucapra/turnt | cucapra__turnt-24 | 38b168cd928db391fcca895998253f4f36ab546a | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0f3bbbf..52f2d8a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,11 @@
Turnt Changelog
===============
+1.10.0 (in development)
+-----------------------
+
+- The diff output, when `--diff` is enabled, now goes to stderr instead of stdout. This makes it possible to redirect diffs separately from the test results.
+
1.9.0 (2022-11-02)
------------------
diff --git a/README.md b/README.md
index 6810f55..1ffecd5 100644
--- a/README.md
+++ b/README.md
@@ -197,6 +197,7 @@ The most common `turnt` command-line options you'll need while running and updat
- `--save`: Bless the current output from each test as the "correct" output, saving it to the output file that you'll want to check into version control.
- `--diff`: Show diffs between the actual and expected output for each test.
+ (The diff goes to stderr while the [TAP][] results remain on stdout.)
You also might enjoy:
diff --git a/turnt/__init__.py b/turnt/__init__.py
index f78445b..a9ae7ae 100644
--- a/turnt/__init__.py
+++ b/turnt/__init__.py
@@ -3,4 +3,4 @@
"""
from .__main__ import turnt # noqa
-__version__ = '1.9.0'
+__version__ = '1.10.0'
diff --git a/turnt/run.py b/turnt/run.py
index d76d269..fc030b0 100644
--- a/turnt/run.py
+++ b/turnt/run.py
@@ -48,7 +48,8 @@ def check_result(cfg: Config, test: Test,
for saved_file, output_file in test.out_files.items():
# Diff the actual & expected output.
if cfg.diff:
- subprocess.run(test.diff_cmd + [saved_file, output_file])
+ subprocess.run(test.diff_cmd + [saved_file, output_file],
+ stdout=sys.stderr.buffer)
# Read actual & expected output.
with open(output_file) as f:
| Write diff to stderr
It would be nice to write the diff output from `--diff` to `stderr` so it can be kept separate from the test result
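A small, hedged sketch of the redirection the fix uses: the external diff's stdout is pointed at our own stderr, so the TAP results on stdout stay machine-readable. The diff command below is only an example.

```python
import subprocess
import sys

def show_diff(expected_path: str, actual_path: str) -> None:
    # diff output goes to stderr; our stdout carries only the test results
    subprocess.run(["diff", "--unified", expected_path, actual_path],
                   stdout=sys.stderr.buffer)

print("ok 1 - example test")          # TAP line on stdout
show_diff("expected.out", "actual.out")
```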
| 2023-03-03T18:49:18 | 0.0 | [] | [] |
|||
gnuboard/g6 | gnuboard__g6-635 | 87e4ac873c15d51f8169f2c25700e4127bd8ccaf | diff --git a/api/v1/models/response.py b/api/v1/models/response.py
index 85875c7c0..f87bf98da 100644
--- a/api/v1/models/response.py
+++ b/api/v1/models/response.py
@@ -56,6 +56,13 @@ class MessageResponse(BaseModel):
}
}
+response_429 = {
+ status.HTTP_429_TOO_MANY_REQUESTS: {
+ "model": MessageResponse,
+ "description": "Request limit exceeded"
+ }
+}
+
response_500 = {
status.HTTP_500_INTERNAL_SERVER_ERROR: {
"description": "ìë² ì¤ë¥"
diff --git a/api/v1/routers/board.py b/api/v1/routers/board.py
index 088d46bfa..8baf577b3 100644
--- a/api/v1/routers/board.py
+++ b/api/v1/routers/board.py
@@ -7,9 +7,10 @@
from core.database import db_session
from lib.common import get_paging_info
-from lib.board_lib import insert_board_new, set_write_delay, get_list_thumbnail
+from lib.board_lib import insert_board_new, get_list_thumbnail
from api.v1.models.response import (
- response_401, response_403, response_404, response_422
+ response_401, response_403, response_404, response_422,
+ response_429
)
from api.v1.dependencies.board import arange_file_data
from api.v1.models.board import (
@@ -104,7 +105,7 @@ async def api_read_post(
})
content.update(additional_content)
service.validate_secret()
- service.validate_repeat()
+ service.validate_repeat_with_slowapi()
service.block_read_comment()
service.validate_read_level()
service.check_scrap()
@@ -151,7 +152,7 @@ async def api_read_post(
"nogood": ajax_good_data["nogood"],
})
content.update(additional_content)
- service.validate_repeat()
+ service.validate_repeat_with_slowapi()
service.block_read_comment()
service.check_scrap()
service.check_is_good()
@@ -162,7 +163,8 @@ async def api_read_post(
@router.post("/{bo_table}/writes",
summary="ê²ìí ê¸ ìì±",
responses={**response_401, **response_403,
- **response_404, **response_422}
+ **response_404, **response_422,
+ **response_429}
)
async def api_create_post(
db: db_session,
@@ -195,13 +197,13 @@ async def api_create_post(
service.validate_post_content(wr_data.wr_content)
service.validate_write_level()
service.arrange_data(wr_data, wr_data.secret, wr_data.html, wr_data.mail)
+ service.validate_write_delay_with_slowapi()
write = service.save_write(wr_data.parent_id, wr_data)
insert_board_new(service.bo_table, write)
service.add_point(write)
parent_write = service.get_parent_post(wr_data.parent_id)
service.send_write_mail_(write, parent_write)
service.set_notice(write.wr_id, wr_data.notice)
- set_write_delay(service.request)
service.delete_cache()
db.commit()
return {"result": "created", "wr_id": write.wr_id}
@@ -421,6 +423,8 @@ async def api_create_comment(
service.validate_comment_level()
service.validate_point()
service.validate_post_content(comment_data.wr_content)
+ service.validate_comment_password(comment_data.wr_password)
+ service.validate_write_delay_with_slowapi()
comment = service.save_comment(comment_data, parent_write)
service.add_point(comment)
service.send_write_mail_(comment, parent_write)
diff --git a/api/v1/routers/scrap.py b/api/v1/routers/scrap.py
index 9ced3f5b5..ad51f8403 100644
--- a/api/v1/routers/scrap.py
+++ b/api/v1/routers/scrap.py
@@ -105,6 +105,7 @@ async def create_member_scrap(
comment_service.validate_comment_level()
comment_service.validate_point()
comment_service.validate_post_content(data.wr_content)
+ comment_service.validate_comment_password(form.wr_password)
comment = comment_service.save_comment(form, write)
comment_service.add_point(comment)
comment_service.send_write_mail_(comment, write)
diff --git a/bbs/board.py b/bbs/board.py
index efb058e7f..10a545458 100644
--- a/bbs/board.py
+++ b/bbs/board.py
@@ -466,6 +466,7 @@ async def write_comment_update(
service.validate_comment_level()
service.validate_point()
service.validate_post_content(form.wr_content)
+ service.validate_comment_password(form.wr_password)
comment = service.save_comment(form, write)
service.add_point(comment)
service.send_write_mail_(comment, write)
diff --git a/bbs/scrap.py b/bbs/scrap.py
index 2baa3940c..6e550f088 100644
--- a/bbs/scrap.py
+++ b/bbs/scrap.py
@@ -74,6 +74,7 @@ async def scrap_form_update(
comment_service.validate_comment_level()
comment_service.validate_point()
comment_service.validate_post_content(wr_content)
+ comment_service.validate_comment_password(form.wr_password)
comment = comment_service.save_comment(form, write)
comment_service.add_point(comment)
comment_service.send_write_mail_(comment, write)
diff --git a/core/exception.py b/core/exception.py
index be6a920fa..0f38ba06f 100644
--- a/core/exception.py
+++ b/core/exception.py
@@ -6,6 +6,8 @@
from fastapi.templating import Jinja2Templates
from starlette.templating import _TemplateResponse
+from slowapi.errors import RateLimitExceeded
+
class AlertException(HTTPException):
"""ì¤í¬ë¦½í¸ ê²½ê³ ì°½ ì¶ë ¥ì ìí ìì¸ í´ëì¤
@@ -110,6 +112,14 @@ async def redirect_exception_handler(request: Request, exc: RedirectException):
"""Register the RedirectException handler"""
return RedirectResponse(url=exc.url, status_code=exc.status_code)
+ @app.exception_handler(RateLimitExceeded)
+ async def rate_limit_exception_handler(request: Request, exc: RateLimitExceeded):
+ """Register the RateLimitExceeded exception handler"""
+ return JSONResponse(
+ status_code=exc.status_code,
+ content={"message": exc.detail}
+ )
+
def template_response(
template_html: str,
diff --git a/lib/slowapi/__init__.py b/lib/slowapi/__init__.py
new file mode 100644
index 000000000..2eebd6abc
--- /dev/null
+++ b/lib/slowapi/__init__.py
@@ -0,0 +1,57 @@
+import inspect
+from typing import List
+from fastapi import Request
+from slowapi import Limiter
+from slowapi.wrappers import Limit
+from slowapi.errors import RateLimitExceeded
+
+
+class LimiterNoWarning(Limiter):
+ """
+ A Limiter subclass that overrides the __evaluate_limits method so that the
+ warning message the parent class logs there is not emitted.
+ """
+ def __init__(self, key_func):
+ super().__init__(key_func=key_func)
+
+ def _Limiter__evaluate_limits(
+ self, request: Request, endpoint: str, limits: List[Limit]
+ ) -> None:
+ failed_limit = None
+ limit_for_header = None
+ for lim in limits:
+ limit_scope = lim.scope or endpoint
+ if lim.is_exempt:
+ continue
+ if lim.methods is not None and request.method.lower() not in lim.methods:
+ continue
+ if lim.per_method:
+ limit_scope += ":%s" % request.method
+
+ if "request" in inspect.signature(lim.key_func).parameters.keys():
+ limit_key = lim.key_func(request)
+ else:
+ limit_key = lim.key_func()
+
+ args = [limit_key, limit_scope]
+ if all(args):
+ if self._key_prefix:
+ args = [self._key_prefix] + args
+ if not limit_for_header or lim.limit < limit_for_header[0]:
+ limit_for_header = (lim.limit, args)
+
+ cost = lim.cost(request) if callable(lim.cost) else lim.cost
+ if not self.limiter.hit(lim.limit, *args, cost=cost):
+ failed_limit = lim
+ limit_for_header = (lim.limit, args)
+ break
+ else:
+ self.logger.error(
+ "Skipping limit: %s. Empty value found in parameters.", lim.limit
+ )
+ continue
+ # keep track of which limit was hit, to be picked up for the response header
+ request.state.view_rate_limit = limit_for_header
+
+ if failed_limit:
+ raise RateLimitExceeded(failed_limit)
\ No newline at end of file
diff --git a/lib/slowapi/create_post_limit/limiter.py b/lib/slowapi/create_post_limit/limiter.py
new file mode 100644
index 000000000..d90874712
--- /dev/null
+++ b/lib/slowapi/create_post_limit/limiter.py
@@ -0,0 +1,102 @@
+from typing import Annotated, Optional
+from fastapi import Request, Depends
+from fastapi.security.utils import get_authorization_scheme_param
+from sqlalchemy import select
+from slowapi.util import get_remote_address
+
+from core.database import DBConnect
+from core.models import Config, Member
+from api.settings import api_settings
+from api.v1.auth import oauth2_scheme
+from api.v1.auth.jwt import JWT
+from api.v1.service.member import MemberServiceAPI
+from api.v1.models.auth import TokenPayload
+from lib.slowapi import LimiterNoWarning
+
+
+def get_request_member(
+ token: Annotated[str, Depends(oauth2_scheme)],
+ member_service: Annotated[MemberServiceAPI, Depends()]
+) -> Optional[Member]:
+ """
+ Get the member information from the JWT token of a REST API request.
+
+ Args:
+ token (Annotated[str, Depends(oauth2_scheme)]): JWT token
+ member_service (Annotated[MemberServiceAPI, Depends()]): member information service
+
+ Returns:
+ Member: member information object, or None
+ """
+ payload: TokenPayload = JWT.decode_token(
+ token,
+ api_settings.ACCESS_TOKEN_SECRET_KEY
+ )
+
+ mb_id: str = payload.sub
+ if mb_id is None:
+ return None
+
+ member = member_service.get_member(mb_id)
+ return member
+
+
+def limiter_key_func(request: Request) -> Optional[str]:
+ """
+ Function passed as the key_func argument when creating the Limiter instance.
+ IP addresses for which None is returned (the admin's IP) are not rate limited.
+
+ Args:
+ request (Request): FastAPI Request object
+
+ Returns:
+ Optional[str]: IP address to rate limit, or None
+ """
+ authorization = request.headers.get("Authorization")
+ scheme, token = get_authorization_scheme_param(authorization)
+
+ if not authorization or scheme.lower() != "bearer":
+ return get_remote_address(request)
+
+ with DBConnect().sessionLocal() as db:
+ member_service = MemberServiceAPI(request, db)
+ cf_admin = db.scalar(select(Config)).cf_admin
+
+ member = get_request_member(token, member_service)
+ if member.mb_id == cf_admin:
+ return None
+
+ return get_remote_address(request)
+
+
+def get_cf_delay_sec_from_db():
+ """
+ Get the cf_delay_sec value from the database and return it in the rate-limit
+ string notation used when creating the Limiter instance.
+ Returned in "n/t time" format:
+ - allows n requests during a window of t time units
+ - time: second, minute, hour, day, month, year
+ - documentation: https://limits.readthedocs.io/en/stable/quickstart.html#rate-limit-string-notation
+ """
+ with DBConnect().sessionLocal() as db:
+ cf_delay_sec = db.scalar(select(Config)).cf_delay_sec
+ limiter_expr = f"1/{cf_delay_sec} second"
+ return limiter_expr
+
+
+# Create the rate-limiting Limiter instance
+limiter = LimiterNoWarning(key_func=limiter_key_func)
+
+
[email protected](
+ get_cf_delay_sec_from_db,
+ error_message="You cannot post again within such a short time.",
+)
+def validate_slowapi_create_post(request: Request):
+ """
+ Validate the rate-limit interval of the post-creation API through slowapi's Limiter.
+
+ Args:
+ request (Request): FastAPI Request object
+ """
+ pass
\ No newline at end of file
diff --git a/lib/slowapi/read_count_limit/limiter.py b/lib/slowapi/read_count_limit/limiter.py
new file mode 100644
index 000000000..16fd7097e
--- /dev/null
+++ b/lib/slowapi/read_count_limit/limiter.py
@@ -0,0 +1,16 @@
+from fastapi import Request
+from slowapi.util import get_remote_address
+
+from lib.slowapi import LimiterNoWarning
+
+
+read_count_limiter = LimiterNoWarning(key_func=get_remote_address)
+
+
+@read_count_limiter.limit("1 per day")
+def limit_read_count(request: Request):
+ """
+ Allow the read count of a post to be increased only once per day
+ when someone other than the author views it.
+ """
+ pass
\ No newline at end of file
diff --git a/requirements.txt b/requirements.txt
index bbda3d5a9..0189cb46c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -61,4 +61,8 @@ APScheduler>=3.10.0
pymysql==1.1.1
psycopg2-binary==2.9.9
lxml==5.1.0
-PyJWT==2.8.0
\ No newline at end of file
+PyJWT==2.8.0
+Deprecated==1.2.14
+limits==3.12.0
+slowapi==0.1.9
+wrapt==1.16.0
\ No newline at end of file
diff --git a/service/board/board.py b/service/board/board.py
index b8725ed42..d5cb53174 100644
--- a/service/board/board.py
+++ b/service/board/board.py
@@ -17,6 +17,7 @@
remove_query_params, set_url_query_params
)
from lib.html_sanitizer import content_sanitizer
+from lib.slowapi.create_post_limit.limiter import validate_slowapi_create_post
from lib.pbkdf2 import create_hash, validate_password
from service import BaseService
from service.board_file_service import BoardFileService
@@ -112,6 +113,10 @@ def validate_write_delay(self):
if not is_write_delay(self.request):
self.raise_exception(status_code=400, detail="You cannot post again within such a short time.")
+ def validate_write_delay_with_slowapi(self):
+ """Validate the posting interval (slowapi)"""
+ validate_slowapi_create_post(self.request)
+
def validate_anonymous_password(self, data):
"""ë¹íì ê¸ì°ê¸°ì ë¹ë°ë²í¸ ê²ì¦"""
if not self.member and not data.wr_password:
diff --git a/service/board/read_post.py b/service/board/read_post.py
index 3328e607d..adf79c72a 100644
--- a/service/board/read_post.py
+++ b/service/board/read_post.py
@@ -9,6 +9,7 @@
from lib.common import set_url_query_params
from lib.board_lib import is_owner, cut_name
from lib.template_filters import number_format
+from lib.slowapi.read_count_limit.limiter import limit_read_count
from service.board_file_service import BoardFileService
from service.point_service import PointService
from . import BoardService
@@ -126,8 +127,12 @@ def validate_secret_with_session(self):
self.validate_secret(redirect_password_view=True)
self.request.session[session_name] = True
- def validate_repeat(self):
- """ê²ìê¸ ìì±ìë ì¡°íì, í¬ì¸í¸ ì²ë¦¬ë¥¼ íì§ ìëë¤."""
+ def _validate_repeat(self):
+ """
+ ê²ìê¸ ìì±ìë ì¡°íì, í¬ì¸í¸ ì²ë¦¬ë¥¼ íì§ ìëë¤.
+ sessionì íµí ê²ì¦(validate_repeat_with_session)ê³¼
+ slowapi를 íµí ê²ì¦(validate_repeat_with_slowapi)ì ì¬ì©
+ """
if self.member.mb_id == self.write.mb_id:
return
@@ -150,16 +155,28 @@ def validate_repeat(self):
def validate_repeat_with_session(self):
"""
- ê²ìê¸ ìì±ì íì¸(validate_repeat())ê³¼ ì¸ì
ì¬ë¶ë¥¼ íì¸íì¬
+ ê²ìê¸ ìì±ì íì¸(_validate_repeat())ê³¼ ì¸ì
ì¬ë¶ë¥¼ íì¸íì¬
íë² ì½ì ê²ìê¸ì ì¡°íì, í¬ì¸í¸ ì²ë¦¬ë¥¼ íì§ ìëë¤.
"""
session_name = f"ss_view_{self.bo_table}_{self.wr_id}"
if self.request.session.get(session_name):
return
- self.validate_repeat()
+ self._validate_repeat()
self.request.session[session_name] = True
+ def validate_repeat_with_slowapi(self):
+ """
+ ê²ìê¸ ìì±ì íì¸(_validate_repeat())ê³¼
+ slowapi를 íµí ip íì¸ì íµí´
+ íë² ì½ì ê²ìê¸ì ì¡°íì, í¬ì¸í¸ ì²ë¦¬ë¥¼ íì§ ìëë¤.
+ """
+ try:
+ limit_read_count(self.request)
+ self._validate_repeat()
+ except:
+ pass
+
def check_scrap(self):
"""ì¤í¬ë© ì¬ë¶ íì¸"""
if not self.member:
diff --git a/service/board/update_post.py b/service/board/update_post.py
index aa89f1297..d6ac0a290 100644
--- a/service/board/update_post.py
+++ b/service/board/update_post.py
@@ -120,6 +120,12 @@ def validate_write_comment_paring(self, wr_parent: int):
if self.wr_id != wr_parent:
self.raise_exception(detail="ìì±íë ¤ë ëëê¸ì ëê¸ì´, ë¶ëª¨ê¸ì ëê¸ì´ ìëëë¤.", status_code=403)
+ def validate_comment_password(self, wr_password: str):
+ """ë¹íì ëê¸ ìì± ì ë¹ë°ë²í¸ ê²ì¦"""
+ mb_id = getattr(self.member, "mb_id", "")
+ if not mb_id and not wr_password:
+ self.raise_exception(detail="ë¹íì ëê¸ ìì± ì ë¹ë°ë²í¸ë íìì
ëë¤.", status_code=403)
+
def save_comment(
self, data: Union[WriteCommentForm, CommentModel], write: WriteBaseModel
) -> WriteBaseModel:
@@ -150,8 +156,6 @@ def save_comment(
comment.wr_is_comment = 1
comment.wr_content = content_sanitizer.get_cleaned_data(data.wr_content)
comment.mb_id = getattr(self.member, "mb_id", "")
- if not comment.mb_id and not data.wr_password:
- self.raise_exception(detail="ë¹íì ëê¸ ìì± ì ë¹ë°ë²í¸ë íìì
ëë¤.", status_code=403)
comment.wr_password = create_hash(data.wr_password) if data.wr_password else ""
comment.wr_name = self.set_wr_name(self.member, data.wr_name)
self.validate_anonymous_password(data)
| Issue: the view count goes up repeatedly when viewing a post
I remember this being reported as fixed before, but it came up again during testing.
This is on the current latest version, and the view count increases repeatedly.


| 2024-07-10T03:45:04 | 0.0 | [] | [] |
|||
stratis-storage/stratis-cli | stratis-storage__stratis-cli-812 | e5e7735609eec3194dc8367d214edd3e40ce07f5 | diff --git a/setup.py b/setup.py
index af78c2097..5453bdfee 100644
--- a/setup.py
+++ b/setup.py
@@ -49,7 +49,6 @@ def local_file(name):
"dbus-client-gen>=0.4",
"dbus-python-client-gen>=0.7",
"justbytes>=0.14",
- "psutil",
"python-dateutil",
"semantic_version",
"wcwidth",
diff --git a/src/stratis_cli/_error_reporting.py b/src/stratis_cli/_error_reporting.py
index b3a5623fd..36a2a9fe9 100644
--- a/src/stratis_cli/_error_reporting.py
+++ b/src/stratis_cli/_error_reporting.py
@@ -20,7 +20,6 @@
# isort: THIRDPARTY
import dbus
-import psutil
# isort: FIRSTPARTY
from dbus_client_gen import (
@@ -114,20 +113,8 @@ def _interpret_errors_0(error):
"Most likely stratis has insufficient permissions for the action requested."
)
- # We have observed two causes of this problem. The first is that
- # stratisd is not running at all. The second is that stratisd has not
- # yet established its D-Bus service. While coverage of the first condition
- # can be replicated by simply not starting stratisd, the second condition
- # is difficult to replicate through testing.
if error.get_dbus_name() == "org.freedesktop.DBus.Error.NameHasNoOwner":
- for proc in psutil.process_iter():
- try:
- if proc.name() == "stratisd": # pragma: no cover
- return "Most likely stratis is unable to connect to the stratis D-Bus service."
- except psutil.NoSuchProcess: # pragma: no cover
- pass
-
- return "It appears that there is no stratisd process running."
+ return "Most likely there is no stratisd process running."
# Due to the uncertain behavior with which libdbus
# treats a timeout value of 0, it proves difficult to test this case,
| Drop dependency on psutil
This is easily done, I just await the command to do it.
| 2021-07-15T15:49:01 | 0.0 | [] | [] |
|||
linien-org/linien | linien-org__linien-404 | b8bc8958fa42d01f50350a65284554b9f1c89d57 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1a5595fb..e4829888 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,7 +9,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Added
* Handle corrupted json files by @bleykauf in https://github.com/linien-org/linien/pull/399
-* Show differences when local and remote paramters do not match @bleykauf in https://github.com/linien-org/linien/pull/400
+* Show differences when local and remote parameters do not match by @bleykauf in https://github.com/linien-org/linien/pull/400
+* Show voltage on the x-axis when sweeping by @bleykauf in https://github.com/linien-org/linien/pull/404
## [2.0.2] - 2024-05-14
diff --git a/linien-gui/linien_gui/ui/plot_widget.py b/linien-gui/linien_gui/ui/plot_widget.py
index ee7d7b16..e8fdcb7a 100644
--- a/linien-gui/linien_gui/ui/plot_widget.py
+++ b/linien-gui/linien_gui/ui/plot_widget.py
@@ -73,31 +73,31 @@ def on_connection_established(self):
QtCore.QTimer.singleShot(100, self.listen_to_parameter_changes)
def listen_to_parameter_changes(self):
- self.parent.parameters.sweep_speed.add_callback(self.force_repaint_tick_strings)
- self.parent.parameters.lock.add_callback(self.force_repaint_tick_strings)
- self.force_repaint_tick_strings()
+ self.parent.parameters.sweep_center.add_callback(self.repaint_tick_strings)
+ self.parent.parameters.sweep_amplitude.add_callback(self.repaint_tick_strings)
+ self.parent.parameters.lock.add_callback(self.repaint_tick_strings)
+ self.repaint_tick_strings()
- def force_repaint_tick_strings(self, *args):
+ def repaint_tick_strings(self, *args):
self.picture = None
self.update()
- def tickStrings(self, values, scale, spacing):
- locked = self.parent.parameters.lock.value
- sweep_speed = self.parent.parameters.sweep_speed.value if not locked else 0
- time_between_points = (1 / 125e6) * 2 ** (sweep_speed) * DECIMATION
- values = [v * time_between_points for v in values]
- spacing *= time_between_points
-
- places = max(0, np.ceil(-np.log10(spacing * scale)))
- strings = []
- for v in values:
- vs = v * scale
- if abs(vs) < 0.001 or abs(vs) >= 10000:
- vstr = "%g" % vs
- else:
- vstr = ("%%0.%df" % places) % vs
- strings.append(vstr)
- return strings
+ def tickStrings(self, values, scale, spacing) -> list[str]:
+ if self.parent.parameters.lock.value:
+ # use µs for the x axis
+ spacing = DECIMATION / 125e6
+ values = [1e6 * scale * v * spacing for v in values]
+ precision_specifier = 1
+ else:
+ # use voltage for the x axis
+ center = self.parent.parameters.sweep_center.value
+ amplitude = self.parent.parameters.sweep_amplitude.value
+ min_ = center - amplitude
+ max_ = center + amplitude
+ spacing = abs(max_ - min_) / (N_POINTS - 1)
+ values = [scale * (v * spacing + min_) for v in values]
+ precision_specifier = 2
+ return [f"{v:.{precision_specifier}f}" for v in values]
class PlotWidget(pg.PlotWidget):
@@ -116,7 +116,6 @@ def __init__(self, *args, **kwargs):
self.getAxis("bottom").enableAutoSIPrefix(False)
self.showGrid(x=True, y=True)
- self.setLabel("bottom", "time", units="s")
# Causes auto-scale button (âAâ in lower-left corner) to be hidden for this
# PlotItem
@@ -140,7 +139,6 @@ def __init__(self, *args, **kwargs):
# OpenGL is enabled in the beginning of this file.
# NOTE: OpenGL has a bug that causes the plot to be way too small. Therefore,
# self.resize() is called below.
-
self.crosshair = pg.InfiniteLine(pos=N_POINTS / 2, pen=pg.mkPen("w", width=1))
self.addItem(self.crosshair)
@@ -210,7 +208,7 @@ def on_connection_established(self):
self.parameters = self.app.parameters
self.control = self.app.control
- def set_pens(*args):
+ def on_plot_settings_changed(*args):
pen_width = self.app.settings.plot_line_width.value
for curve, color in {
@@ -229,16 +227,18 @@ def set_pens(*args):
curve.setPen(pg.mkPen((r, g, b, a), width=pen_width))
for color_idx in range(N_COLORS):
- getattr(self.app.settings, f"plot_color_{color_idx}").add_callback(set_pens)
- self.app.settings.plot_line_width.add_callback(set_pens)
- self.app.settings.plot_line_opacity.add_callback(set_pens)
+ getattr(self.app.settings, f"plot_color_{color_idx}").add_callback(
+ on_plot_settings_changed
+ )
+ self.app.settings.plot_line_width.add_callback(on_plot_settings_changed)
+ self.app.settings.plot_line_opacity.add_callback(on_plot_settings_changed)
self.control_signal_history_data = self.parameters.control_signal_history.value
self.monitor_signal_history_data = self.parameters.monitor_signal_history.value
self.parameters.to_plot.add_callback(self.replot)
- def autolock_selection_changed(value):
+ def on_autolock_selection_changed(value):
if value:
self.parameters.optimization_selection.value = False
self.enable_area_selection(selectable_width=0.99)
@@ -247,9 +247,9 @@ def autolock_selection_changed(value):
self.disable_area_selection()
self.resume_plot_and_clear_cache()
- self.parameters.autolock_selection.add_callback(autolock_selection_changed)
+ self.parameters.autolock_selection.add_callback(on_autolock_selection_changed)
- def optimization_selection_changed(value):
+ def on_optimization_selection_changed(value):
if value:
self.parameters.autolock_selection.value = False
self.enable_area_selection(selectable_width=0.75)
@@ -259,14 +259,22 @@ def optimization_selection_changed(value):
self.resume_plot_and_clear_cache()
self.parameters.optimization_selection.add_callback(
- optimization_selection_changed
+ on_optimization_selection_changed
)
- def show_or_hide_crosshair(automatic_mode):
+ def show_or_hide_crosshair(automatic_mode: bool) -> None:
self.crosshair.setVisible(not automatic_mode)
self.parameters.automatic_mode.add_callback(show_or_hide_crosshair)
+ def set_xaxis_label(lock: bool) -> None:
+ if not lock:
+ self.setLabel("bottom", "sweep voltage", units="V")
+ else:
+ self.setLabel("bottom", "time", units="µs")
+
+ self.parameters.lock.add_callback(set_xaxis_label)
+
def _to_data_coords(self, event):
pos = self.plotItem.vb.mapSceneToView(event.pos())
x, y = pos.x(), pos.y()
@@ -595,7 +603,7 @@ def plot_signal_strength(
q -= int(round(channel_offset))
signal_strength = get_signal_strength_from_i_q(i, q)
- r, g, b, *stuff = color
+ r, g, b, *_ = color
x = list(range(len(signal_strength)))
signal_strength_scaled = signal_strength / V
@@ -668,14 +676,13 @@ def update_signal_history(self, to_plot):
)
if self.parameters.lock.value:
- x_axis_length = N_POINTS
def scale(arr):
timescale = self.parameters.control_signal_history_length.value
if arr:
arr = np.array(arr)
arr -= arr[0]
- arr *= 1 / timescale * x_axis_length
+ arr *= 1 / timescale * N_POINTS
return arr
history = self.control_signal_history_data["values"]
@@ -704,14 +711,13 @@ def enable_area_selection(self, selectable_width=0.5):
# there are some glitches if the width of the overlay is exactly right.
# Therefore we make it a little wider.
extra_width = N_POINTS / 100
- x_axis_length = N_POINTS
- boundary_width = (x_axis_length * (1 - selectable_width)) / 2.0
+ boundary_width = (N_POINTS * (1 - selectable_width)) / 2.0
- self.selection_boundaries = (boundary_width, x_axis_length - boundary_width)
+ self.selection_boundaries = (boundary_width, N_POINTS - boundary_width)
self.boundary_overlays[0].setRegion((-extra_width, boundary_width))
self.boundary_overlays[1].setRegion(
- (x_axis_length - boundary_width, x_axis_length + extra_width)
+ (N_POINTS - boundary_width, N_POINTS + extra_width)
)
for overlay in self.boundary_overlays:
@@ -724,9 +730,10 @@ def disable_area_selection(self):
overlay.setVisible(False)
def pause_plot_and_cache_data(self):
- """This function pauses plot updates. All incoming data is cached though.
- This is useful for letting the user select a line that is then used in
- the autolock."""
+ """
+ This function pauses plot updates. All incoming data is cached though. This is
+ useful for letting the user select a line that is then used in the autolock.
+ """
self._plot_paused = True
def resume_plot_and_clear_cache(self):
diff --git a/linien-server/linien_server/parameters.py b/linien-server/linien_server/parameters.py
index 97b425a9..329c9ab5 100644
--- a/linien-server/linien_server/parameters.py
+++ b/linien-server/linien_server/parameters.py
@@ -285,7 +285,7 @@ def __init__(self):
self.sweep_amplitude = Parameter(min_=0.001, max_=1, start=1, loggable=True)
"""
Amplitude of the sweep in units of 0.5 * Vpp of the output (2 V for fast outputs
- (range +/- 1 V) and 0.9 V for slow outputs (range 0 V to 1.8 V). That means an
+ (range +/- 1 V) and 0.9 V for slow outputs (range 0 V to 1.8 V)). That means an
amplitude of 1.0 corresponds to the full sweep range in both cases.
"""
| Should the x axis show voltage instead of time during ramp/sweep?
Might be easier to zero in on a specific locking feature this way.
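For illustration, a minimal sketch of the conversion the patch above performs, mapping a sample index to sweep voltage when unlocked and to microseconds when locked. The parameter names mirror the diff; the concrete values for N_POINTS and DECIMATION are assumptions.

```python
# Sketch: label x-axis ticks in sweep volts (unlocked) or µs (locked),
# mirroring the tickStrings() logic in the patch above.
N_POINTS = 2048      # assumed trace length
DECIMATION = 8       # assumed decimation factor

def tick_strings(values, scale, locked, center, amplitude):
    if locked:
        # time axis: one sample lasts DECIMATION / 125 MHz seconds
        spacing = DECIMATION / 125e6
        labels = [1e6 * scale * v * spacing for v in values]  # in µs
        precision = 1
    else:
        # voltage axis: map sample index 0..N_POINTS-1 onto the sweep range
        min_, max_ = center - amplitude, center + amplitude
        spacing = (max_ - min_) / (N_POINTS - 1)
        labels = [scale * (v * spacing + min_) for v in values]  # in V
        precision = 2
    return [f"{v:.{precision}f}" for v in labels]

print(tick_strings([0, 1024, 2047], 1.0, locked=False, center=0.0, amplitude=1.0))
```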
| 2024-05-27T12:57:22 | 0.0 | [] | [] |
|||
roeniss/dhlottery-api | roeniss__dhlottery-api-18 | 998fe1cbb81ceec5dd590f5201b407b770cb7403 | diff --git a/..pylintrc.un~ b/..pylintrc.un~
deleted file mode 100644
index fc9ac19..0000000
Binary files a/..pylintrc.un~ and /dev/null differ
diff --git a/.gitignore b/.gitignore
index e942718..bed91eb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,3 +8,4 @@ build/
dist/
*.spec
*~
+.idea/
diff --git a/.idea/.gitignore b/.idea/.gitignore
deleted file mode 100644
index 13566b8..0000000
--- a/.idea/.gitignore
+++ /dev/null
@@ -1,8 +0,0 @@
-# Default ignored files
-/shelf/
-/workspace.xml
-# Editor-based HTTP Client requests
-/httpRequests/
-# Datasource local storage ignored files
-/dataSources/
-/dataSources.local.xml
diff --git a/.idea/ClojureProjectResolveSettings.xml b/.idea/ClojureProjectResolveSettings.xml
deleted file mode 100644
index df470b1..0000000
--- a/.idea/ClojureProjectResolveSettings.xml
+++ /dev/null
@@ -1,6 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
- <component name="ClojureProjectResolveSettings">
- <currentScheme>IDE</currentScheme>
- </component>
-</project>
\ No newline at end of file
diff --git a/.idea/dhlottery-api.iml b/.idea/dhlottery-api.iml
deleted file mode 100644
index d47de51..0000000
--- a/.idea/dhlottery-api.iml
+++ /dev/null
@@ -1,18 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<module type="JAVA_MODULE" version="4">
- <component name="NewModuleRootManager" inherit-compiler-output="true">
- <exclude-output />
- <content url="file://$MODULE_DIR$">
- <sourceFolder url="file://$MODULE_DIR$/src" isTestSource="false" />
- <excludeFolder url="file://$MODULE_DIR$/intellijvenv" />
- <excludeFolder url="file://$MODULE_DIR$/build" />
- <excludeFolder url="file://$MODULE_DIR$/dist" />
- </content>
- <orderEntry type="inheritedJdk" />
- <orderEntry type="sourceFolder" forTests="false" />
- </component>
- <component name="PackageRequirementsSettings">
- <option name="removeUnused" value="true" />
- <option name="modifyBaseFiles" value="true" />
- </component>
-</module>
\ No newline at end of file
diff --git a/.idea/misc.xml b/.idea/misc.xml
deleted file mode 100644
index 0e7a885..0000000
--- a/.idea/misc.xml
+++ /dev/null
@@ -1,6 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
- <component name="ProjectRootManager" version="2" languageLevel="JDK_20" project-jdk-name="Python 3.9 (dhlottery-api)" project-jdk-type="Python SDK">
- <output url="file://$PROJECT_DIR$/out" />
- </component>
-</project>
\ No newline at end of file
diff --git a/.idea/modules.xml b/.idea/modules.xml
deleted file mode 100644
index adf4153..0000000
--- a/.idea/modules.xml
+++ /dev/null
@@ -1,8 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
- <component name="ProjectModuleManager">
- <modules>
- <module fileurl="file://$PROJECT_DIR$/.idea/dhlottery-api.iml" filepath="$PROJECT_DIR$/.idea/dhlottery-api.iml" />
- </modules>
- </component>
-</project>
\ No newline at end of file
diff --git a/.idea/vcs.xml b/.idea/vcs.xml
deleted file mode 100644
index 35eb1dd..0000000
--- a/.idea/vcs.xml
+++ /dev/null
@@ -1,6 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project version="4">
- <component name="VcsDirectoryMappings">
- <mapping directory="" vcs="Git" />
- </component>
-</project>
\ No newline at end of file
diff --git a/.pylintrc~ b/.pylintrc~
deleted file mode 100644
index 6a35ee9..0000000
--- a/.pylintrc~
+++ /dev/null
@@ -1,13 +0,0 @@
-[MASTER]
-disable=
- C0114, # no docstring
- C0116, # no docstring
- C0115, # no docstring
- W0108, # Lambda may not be necessary
- R1724, # no elif
- W0511, # no TODO
- R1705, # no else
- R0903, # too few public methods
-fail-under=11
-[FORMAT]
-max-line-length=180
diff --git a/.pyproject.toml.un~ b/.pyproject.toml.un~
deleted file mode 100644
index f991c33..0000000
Binary files a/.pyproject.toml.un~ and /dev/null differ
diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md
index bb5a054..44f34b5 100644
--- a/docs/CONTRIBUTING.md
+++ b/docs/CONTRIBUTING.md
@@ -65,7 +65,7 @@ data = {
"resultMsg": "SUCCESS",
"buyRound": "950",
"arrGameChoiceNum": [
- "A|nn|nn|nn|nn|nn|nn3", # ...'3'?
+ "A|nn|nn|nn|nn|nn|nn3", # Manual : 1, Combine : 2, Automatic : 3
"B|nn|nn|nn|nn|nn|nn3",
"C|nn|nn|nn|nn|nn|nn3"
],
diff --git a/pyproject.toml~ b/pyproject.toml~
deleted file mode 100644
index 4af86c7..0000000
--- a/pyproject.toml~
+++ /dev/null
@@ -1,3 +0,0 @@
-[tool.black]
-line-length = 180
-include = '\.pyi?$'
diff --git a/requirements.txt b/requirements.txt
index b08beb4..7a3f9e8 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -2,3 +2,4 @@ requests~=2.28.2
bs4~=0.0.1
beautifulsoup4~=4.12.0
setuptools==67.6.1
+html5lib~=1.1
diff --git a/src/dhapi/client/lottery_client.py b/src/dhapi/client/lottery_client.py
index 6fe92cc..49da9c4 100644
--- a/src/dhapi/client/lottery_client.py
+++ b/src/dhapi/client/lottery_client.py
@@ -50,7 +50,7 @@ def _set_default_session(self):
raise KeyError("JSESSIONID cookie is not set in response")
def login(self, user_id: str, user_pw: str):
- requests.post(
+ resp = requests.post(
LotteryClient._login_request_url,
headers=self._headers,
data={
@@ -62,9 +62,9 @@ def login(self, user_id: str, user_pw: str):
},
timeout=10,
)
-        # TODO: 로그인 실패해도 여기서 확인하지 않고 (ex. 마이페이지 방문 X),
-        # 이후 로또 구매를 시도할 때나 알게 됨
-        # 로그인 성공 실패를 이 페이즈에서 아는게 바람직하다고 생각함
+ soup = BeautifulSoup(resp.text, "html5lib")
+ if soup.find("a", {"class": "btn_common"}) is not None:
+            raise ValueError("로그인에 실패했습니다. \n아이디 또는 비밀번호를 확인해주세요.\n아이디는 대소문자를 구분합니다.")
def _get_round(self):
resp = requests.get(self._round_info_url, timeout=10)
diff --git a/src/dhapi/domain_object/lotto645_buy_request.py b/src/dhapi/domain_object/lotto645_buy_request.py
index b180047..1368753 100644
--- a/src/dhapi/domain_object/lotto645_buy_request.py
+++ b/src/dhapi/domain_object/lotto645_buy_request.py
@@ -4,9 +4,9 @@
class Lotto645BuyRequest:
def __init__(self, games=None):
"""
-        :param 게임은 다섯개의 list 로 이루어져 있다. 각 게임은 여섯 칸 짜리 list 로 이루어져 있다. 각 칸은 1~45 의 숫자 또는 '자동'을 의미하는 "*" 를 사용한다.
+        :param 게임은 다섯 개의 list 로 이루어져 있다. 각 게임은 여섯 칸 짜리 list 로 이루어져 있다. 각 칸은 1~45 의 숫자 또는 '자동'을 의미하는 "*" 를 사용한다.
Asterisk("*") can be to indicate auto mode for that position.
- e.g. [["*", "*", "*", "*", "*"], ["*", "*", "*", "*", "*"], [1, 2, 3, 4, 15, 45], None, [3, 5, "*", "*", "*"]]
+ e.g. [["*", "*", "*", "*", "*"], ["*"], [1, 2, 3, 4, 15, 45], None, [3, 5, "*", "*", "*"]]
- This example shows two auto games, one manual game, one half-auto game and forth game is not used.
"""
self._games = games
@@ -20,7 +20,7 @@ def _is_correct_games(self, games):
def _is_correct_game(self, game):
return game is None or (
isinstance(game, list)
- and len(game) == 6
+ and (len(game) == 6 or len(game) == 1)
and (len(set(filter(lambda x: x != "*", game))) == len(list(filter(lambda x: x != "*", game))))
and all(map(lambda x: x == "*" or 1 <= x <= 45, game))
)
@@ -35,7 +35,7 @@ def has_half_auto_game(self):
return any(filter(lambda game: self._is_half_auto_game(game), self._filter_used_games()))
def _is_half_auto_game(self, game):
- return 0 < self._get_auto_count_in_game(game) < 6
+ return 0 < self._get_auto_count_in_game(game) < 6 and (self._get_auto_count_in_game(game) != 1)
def has_manual_game(self):
return any(filter(lambda game: self._is_manual_game(game), self._filter_used_games()))
diff --git a/src/dhapi/purchase/lotto645_controller.py b/src/dhapi/purchase/lotto645_controller.py
index 1ccaf80..1dd7d78 100644
--- a/src/dhapi/purchase/lotto645_controller.py
+++ b/src/dhapi/purchase/lotto645_controller.py
@@ -22,23 +22,19 @@ def buy(self, req: Lotto645BuyRequest, quiet: bool):
def _confirm_purchase(self, req, quiet):
print(
f"""{req.format()}
-위와 같이 구매하시겠습니까? [Y/n] """,
+위와 같이 구매하시겠습니까? [[Y]/n] """,
end="",
)
if quiet:
-            print("quiet 플래그가 주어져 자동으로 구매를 진행합니다.")
+            print("\nquiet 플래그가 주어져 자동으로 구매를 진행합니다.")
return True
else:
answer = input().strip().lower()
return answer in ["y", "yes", ""]
-        # TODO: 이쪽은 전혀 보지 못함
+        # ID가 다른 경우 loginYn이 N으로 나옴
def _show_result(self, body: dict) -> None:
- if body.get("loginYn") != "Y":
- print("Fail to purchase (reason: not logged in)")
- return
-
result = body.get("result", {})
if result.get("resultMsg", "FAILURE").upper() != "SUCCESS":
print(f'Fail to purchase (reason: {result.get("resultMsg", f"Unknown (resultMsg field is empty. full response: {body})")})')
@@ -57,6 +53,7 @@ def _show_result(self, body: dict) -> None:
)
def _format_lotto_numbers(self, numbers: list) -> None:
- tabbed_numbers = ["\t\t" + number for number in numbers] # TODO: what is trailing '3' in each number?
+ # Manual : 1, Combine : 2, Automatic : 3
+ tabbed_numbers = ["\t\t" + number for number in numbers]
linebroken_tabbed_numbers = "\n".join(tabbed_numbers)
return linebroken_tabbed_numbers
diff --git a/src/dhapi/router/arg_parser.py b/src/dhapi/router/arg_parser.py
index cdd47b2..9d4d44c 100644
--- a/src/dhapi/router/arg_parser.py
+++ b/src/dhapi/router/arg_parser.py
@@ -28,7 +28,8 @@ def __init__(self):
dhapi buy_lotto645 -u $USER_ID -q # 확인 절차 없이 자동으로 5장 구매
dhapi buy_lotto645 -u $USER_ID -p $USER_PW -g *,*,*,*,*,* # 자동으로 1장 구매
-dhapi buy_lotto645 -u $USER_ID -g 1,2,3,4,5,6 -g 11,12,13,16,17,18 -g 5,6,7,*,*,* -g *,*,*,*,*,* # 2장 수동, 1장 반자동, 1장 자동 - 총 4장 구매
+dhapi buy_lotto645 -u $USER_ID -p $USER_PW -g * # 자동으로 1장 구매 (단축형)
+dhapi buy_lotto645 -u $USER_ID -g 1,2,3,4,5,6 -g 5,6,7,*,*,* -g *,*,*,*,*,* -g * # 1장 수동, 1장 반자동, 2장 자동 - 총 4장 구매
""",
)
@@ -51,7 +52,7 @@ def __init__(self):
help="""
구매í ë²í¸ 6ê°ë¥¼ 콤ë§ë¡ 구ë¶í´ ì
ë ¥í©ëë¤.
ìµì
ì ì¬ë¬ë² ì¬ì©íì¬ ì¬ë¬ ê²ìì 구매í ì ììµëë¤ (매주 ìµë 5 ê²ì).
-'-g *,*,*,*,*,*' ííë¡ ì ìíë©´ í´ë¹ ê²ìì 모ë ë²í¸ë¥¼ ìëì¼ë¡ ì íí©ëë¤ (ìë 모ë).
+'-g *,*,*,*,*,*' ëë '-g *' ííë¡ ì ìíë©´ í´ë¹ ê²ìì 모ë ë²í¸ë¥¼ ìëì¼ë¡ ì íí©ëë¤ (ìë 모ë).
í¹ì ì«ì ëì '*'를 ì
ë ¥í´ í´ë¹ ê°ë§ ìëì¼ë¡ 구매í ì ììµëë¤ (ë°ìë 모ë).
ìµì
ì ìì ì
ë ¥íì§ ìì¼ë©´ 'ìëì¼ë¡ 5ì¥ êµ¬ë§¤'를 ìíí©ëë¤.""",
)
diff --git a/src/dhapi/router/router.py b/src/dhapi/router/router.py
index 6be894b..b100814 100644
--- a/src/dhapi/router/router.py
+++ b/src/dhapi/router/router.py
@@ -1,8 +1,10 @@
+import sys
from dhapi.purchase.lotto645_controller import Lotto645Controller
from dhapi.router.arg_parser import ArgParser
def entrypoint():
+ sys.tracebacklimit = 0
arg_parser = ArgParser()
if arg_parser.is_buylotto645():
| The results screen looks strange
A "3" is appended to the end of each game in the purchase results.
Actual purchased numbers:
A auto 24 26 29 36 37 45
B auto 1 5 17 21 39 45
C auto 5 18 32 40 42 43
D auto 12 14 28 33 39 45
E auto 1 7 20 40 41 42
CLI output:
Numbers:
A|24|26|29|36|37|453
B|01|05|17|21|39|453
C|05|18|32|40|42|433
D|12|14|28|33|39|453
E|01|07|20|40|41|423
global error handling
Errors are currently handled by simply raising XXXError("asdasd"); I'd like to add a global intercept so that only the message ("asdasd") is printed (not whole stacktrace).
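Given the mode legend from the patch above (1 = manual, 2 = combined/semi-auto, 3 = automatic), a small sketch of how the CLI output could strip that trailing digit before printing; the string format is inferred from the sample output above, not from the Donghaeng Lottery API.

```python
# Sketch: split the mode digit off strings like "A|24|26|29|36|37|453",
# where the last character of the final field encodes the selection mode.
MODES = {"1": "manual", "2": "semi-auto", "3": "auto"}

def parse_game(raw: str):
    slot, *numbers = raw.split("|")
    mode = MODES.get(numbers[-1][-1], "unknown")   # trailing digit of the last field
    numbers[-1] = numbers[-1][:-1]                 # drop the mode digit
    return slot, mode, [int(n) for n in numbers]

for line in ["A|24|26|29|36|37|453", "B|01|05|17|21|39|453"]:
    print(parse_game(line))
```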
| Thanks for the info. It seems the Donghaeng Lottery server just returns it that way... I have no idea what it means. I'll fix it when I have time. ~~Would you like to fix it yourself? hehe~~
```
Fail to purchase (reason: 현재 시간은 판매시간이 아닙니다.)
```
This error message apparently comes from something other than a raise... the way errors are thrown should be unified as well. | 2023-04-19T10:12:59 | 0.0 | [] | [] |
||
lm-sys/FastChat | lm-sys__FastChat-1625 | e365af782e2f99dd674d021087f8ecfa3840adff | diff --git a/fastchat/model/chatglm_model.py b/fastchat/model/chatglm_model.py
index 59f310d3d..82109647a 100644
--- a/fastchat/model/chatglm_model.py
+++ b/fastchat/model/chatglm_model.py
@@ -39,9 +39,13 @@ def chatglm_generate_stream(
gen_kwargs["temperature"] = temperature
hist = []
- for i in range(0, len(messages) - 2, 2):
- hist.append((messages[i][1], messages[i + 1][1]))
- query = messages[-2][1]
+ query = ""
+ if type(messages) is list:
+ for i in range(0, len(messages) - 2, 2):
+ hist.append((messages[i][1], messages[i + 1][1]))
+ query = messages[-2][1]
+ elif type(messages) is str:
+ query = messages
input_echo_len = stream_chat_token_num(tokenizer, query, hist)
| There is a problem when using chatglm-6b to run the worker and langchain to access the API
https://github.com/lm-sys/FastChat/blame/7c8aaa85e759b02c4737a7086c1fe5c26f39b54e/fastchat/model/chatglm_model.py#L43
IndexError: string index out of range
```
for i in range(0, len(messages) - 2, 2):
hist.append((messages[i][1], messages[i + 1][1]))
```
After checking, my 'messages' turns out to be a plain string rather than a list. I don't know why this special treatment is needed here; any help is appreciated.
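A minimal sketch of the type handling the patch introduces, building the (query, history) pair whether the prompt arrives as ChatGLM-style message tuples or as a bare string. The helper name is illustrative and not part of FastChat.

```python
# Sketch: ChatGLM expects a current query plus (user, assistant) history pairs.
# Accept either a list of (role, text) tuples or a bare prompt string.
def split_query_and_history(messages):
    history = []
    if isinstance(messages, list):
        # pairs of (user, assistant) turns; the pending query is second-to-last
        for i in range(0, len(messages) - 2, 2):
            history.append((messages[i][1], messages[i + 1][1]))
        query = messages[-2][1]
    elif isinstance(messages, str):
        query = messages
    else:
        raise TypeError(f"unsupported prompt type: {type(messages)!r}")
    return query, history

print(split_query_and_history("What is the capital of France?"))
```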
| you can see my merge request for this error #1492 | 2023-06-07T12:04:11 | 0.0 | [] | [] |
||
nautobot/nautobot-plugin-ssot-infoblox | nautobot__nautobot-plugin-ssot-infoblox-93 | 450c095f64a7bfb0f294d00518670b6c5974754c | diff --git a/nautobot_ssot_infoblox/diffsync/adapters/infoblox.py b/nautobot_ssot_infoblox/diffsync/adapters/infoblox.py
index 7ea1cfa..b29481f 100644
--- a/nautobot_ssot_infoblox/diffsync/adapters/infoblox.py
+++ b/nautobot_ssot_infoblox/diffsync/adapters/infoblox.py
@@ -6,7 +6,7 @@
from diffsync.enum import DiffSyncFlags
from nautobot.extras.plugins.exceptions import PluginImproperlyConfigured
from nautobot_ssot_infoblox.utils.client import InfobloxApi
-from nautobot_ssot_infoblox.utils.diffsync import get_ext_attr_dict
+from nautobot_ssot_infoblox.utils.diffsync import get_ext_attr_dict, build_vlan_map
from nautobot_ssot_infoblox.diffsync.models.infoblox import (
InfobloxAggregate,
InfobloxIPAddress,
@@ -24,7 +24,7 @@ class InfobloxAdapter(DiffSync):
vlangroup = InfobloxVLANView
vlan = InfobloxVLAN
- top_level = ["prefix", "ipaddress", "vlangroup", "vlan"]
+ top_level = ["vlangroup", "vlan", "prefix", "ipaddress"]
def __init__(self, *args, job=None, sync=None, conn=None, **kwargs):
"""Initialize Infoblox.
@@ -59,6 +59,7 @@ def load_prefixes(self):
description=_pf.get("comment", ""),
status=_pf.get("status", "active"),
ext_attrs=get_ext_attr_dict(extattrs=_pf.get("extattrs", {})),
+ vlans=build_vlan_map(vlans=_pf["vlans"]) if _pf.get("vlans") else {},
)
self.add(new_pf)
diff --git a/nautobot_ssot_infoblox/diffsync/adapters/nautobot.py b/nautobot_ssot_infoblox/diffsync/adapters/nautobot.py
index a90daa5..2f4cfbb 100644
--- a/nautobot_ssot_infoblox/diffsync/adapters/nautobot.py
+++ b/nautobot_ssot_infoblox/diffsync/adapters/nautobot.py
@@ -84,7 +84,7 @@ class NautobotAdapter(NautobotMixin, DiffSync):
vlangroup = NautobotVlanGroup
vlan = NautobotVlan
- top_level = ["prefix", "ipaddress", "vlangroup", "vlan"]
+ top_level = ["vlangroup", "vlan", "prefix", "ipaddress"]
def __init__(self, *args, job=None, sync=None, **kwargs):
"""Initialize Nautobot.
@@ -115,6 +115,7 @@ def load_prefixes(self):
description=prefix.description,
status=prefix.status.slug if hasattr(prefix, "status") else "container",
ext_attrs=prefix.custom_field_data,
+ vlans={prefix.vlan.vid: prefix.vlan.name} if hasattr(prefix, "vlan") else {},
pk=prefix.id,
)
try:
diff --git a/nautobot_ssot_infoblox/diffsync/models/base.py b/nautobot_ssot_infoblox/diffsync/models/base.py
index 33240ca..cedc275 100644
--- a/nautobot_ssot_infoblox/diffsync/models/base.py
+++ b/nautobot_ssot_infoblox/diffsync/models/base.py
@@ -9,12 +9,13 @@ class Network(DiffSyncModel):
_modelname = "prefix"
_identifiers = ("network",)
- _attributes = ("description", "status", "ext_attrs")
+ _attributes = ("description", "status", "ext_attrs", "vlans")
network: str
description: Optional[str]
status: Optional[str]
ext_attrs: Optional[dict]
+ vlans: Optional[dict]
pk: Optional[uuid.UUID] = None
diff --git a/nautobot_ssot_infoblox/diffsync/models/nautobot.py b/nautobot_ssot_infoblox/diffsync/models/nautobot.py
index 35dc708..cef2428 100644
--- a/nautobot_ssot_infoblox/diffsync/models/nautobot.py
+++ b/nautobot_ssot_infoblox/diffsync/models/nautobot.py
@@ -3,6 +3,9 @@
from django.core.exceptions import ValidationError
from django.utils.text import slugify
from nautobot.extras.choices import CustomFieldTypeChoices
+from nautobot.extras.choices import RelationshipTypeChoices
+from nautobot.extras.models import Relationship as OrmRelationship
+from nautobot.extras.models import RelationshipAssociation as OrmRelationshipAssociation
from nautobot.extras.models import CustomField as OrmCF
from nautobot.extras.models import Status as OrmStatus
from nautobot.dcim.models import Site as OrmSite
@@ -87,6 +90,39 @@ def create(cls, diffsync, ids, attrs):
status=status,
description=attrs.get("description", ""),
)
+ if attrs.get("vlans"):
+ relationship_dict = {
+ "name": "Prefix -> VLAN",
+ "slug": "prefix_to_vlan",
+ "type": RelationshipTypeChoices.TYPE_ONE_TO_MANY,
+ "source_type": ContentType.objects.get_for_model(OrmPrefix),
+ "source_label": "Prefix",
+ "destination_type": ContentType.objects.get_for_model(OrmVlan),
+ "destination_label": "VLAN",
+ }
+ relation, _ = OrmRelationship.objects.get_or_create(
+ name=relationship_dict["name"], defaults=relationship_dict
+ )
+ for _, _vlan in attrs["vlans"].items():
+ index = 0
+ try:
+ found_vlan = OrmVlan.objects.get(vid=_vlan["vid"], name=_vlan["name"], group__name=_vlan["group"])
+ if found_vlan:
+ if index == 0:
+ _prefix.vlan = found_vlan
+ OrmRelationshipAssociation.objects.get_or_create(
+ relationship_id=relation.id,
+ source_type=ContentType.objects.get_for_model(OrmPrefix),
+ source_id=_prefix.id,
+ destination_type=ContentType.objects.get_for_model(OrmVlan),
+ destination_id=found_vlan.id,
+ )
+ index += 1
+ except OrmVlan.DoesNotExist as err:
+ diffsync.job.log_warning(
+ message=f"Unable to find VLAN {_vlan['vid']} {_vlan['name']} in {_vlan['group']}. {err}"
+ )
+
if attrs.get("ext_attrs"):
process_ext_attrs(diffsync=diffsync, obj=_prefix, extattrs=attrs["ext_attrs"])
_prefix.tags.add(create_tag_sync_from_infoblox())
diff --git a/nautobot_ssot_infoblox/utils/client.py b/nautobot_ssot_infoblox/utils/client.py
index ff32476..14c9b28 100644
--- a/nautobot_ssot_infoblox/utils/client.py
+++ b/nautobot_ssot_infoblox/utils/client.py
@@ -669,6 +669,7 @@ def get_all_subnets(self, prefix: str = None):
"network": "10.223.0.0/21",
"network_view": "default",
"rir": "NONE",
+ "vlans": [],
},
{
"_ref": "network/ZG5zLm5ldHdvcmskMTAuMjIwLjY0LjAvMjEvMA:10.220.64.0/21/default",
@@ -676,13 +677,14 @@ def get_all_subnets(self, prefix: str = None):
"network": "10.220.64.0/21",
"network_view": "default",
"rir": "NONE",
+ "vlans": [],
},
]
"""
url_path = "network"
params = {
"_return_as_object": 1,
- "_return_fields": "network,network_view,comment,extattrs,rir_organization,rir",
+ "_return_fields": "network,network_view,comment,extattrs,rir_organization,rir,vlans",
"_max_results": 10000,
}
diff --git a/nautobot_ssot_infoblox/utils/diffsync.py b/nautobot_ssot_infoblox/utils/diffsync.py
index 2e3a4e5..71e8036 100644
--- a/nautobot_ssot_infoblox/utils/diffsync.py
+++ b/nautobot_ssot_infoblox/utils/diffsync.py
@@ -60,3 +60,18 @@ def get_ext_attr_dict(extattrs: dict):
for key, value in extattrs.items():
fixed_dict[slugify(key)] = value["value"]
return fixed_dict
+
+
+def build_vlan_map(vlans: list):
+ """Build map of VLAN ID to VLAN name.
+
+ Args:
+ vlans (list): List of VLANs assigned to
+
+ Returns:
+ dict: Dictionary mapping VLAN ID to VLAN name, VLAN ID, and VLAN View (group).
+ """
+ vlan_map = {}
+ for vlan in vlans:
+ vlan_map[vlan["id"]] = {"vid": vlan["id"], "name": vlan["name"], "group": get_vlan_view_name(vlan["vlan"])}
+ return vlan_map
diff --git a/pyproject.toml b/pyproject.toml
index 7d743bb..7100ac6 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "nautobot-ssot-infoblox"
-version = "0.6.1"
+version = "0.7.0"
description = "Nautobot SSoT Infoblox"
authors = ["Network to Code, LLC <[email protected]>"]
license = "Apache-2.0"
| Pair Network with VLAN
### Environment
* Nautobot version: 1.3.8
* nautobot-ssot-infoblox version: 0.5.2
### Proposed Functionality
As Infoblox has the ability to pair networks with VLANs and Nautobot does too, it would be good to replicate this functionality.
### Use Case
When end-users have a network attached to a VLAN it'd be useful to have that information replicated in Nautobot for automation purposes.
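For illustration, a standalone sketch of the mapping the new `build_vlan_map()` helper produces from Infoblox's per-network `vlans` field. The input shape follows the WAPI fields used in the patch, but the reference-string layout parsed here is an assumption; the real code calls `get_vlan_view_name()` instead.

```python
# Sketch: turn Infoblox's per-network "vlans" list into a {vid: {...}} map,
# similar to build_vlan_map() in the patch.
def vlan_view_from_ref(ref: str) -> str:
    # e.g. "vlan/ZG5z...:default/VL100/100" -> "default" (assumed layout)
    return ref.split(":")[-1].split("/")[0]

def build_vlan_map(vlans: list) -> dict:
    return {
        v["id"]: {"vid": v["id"], "name": v["name"], "group": vlan_view_from_ref(v["vlan"])}
        for v in vlans
    }

sample = [{"id": 100, "name": "VL100", "vlan": "vlan/ZG5z...:default/VL100/100"}]
print(build_vlan_map(sample))
```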
| 2022-08-02T20:56:34 | 0.0 | [] | [] |
|||
OpenVoiceOS/ovos-personal-backend | OpenVoiceOS__ovos-personal-backend-36 | d49f98cddc7221c631790cef3144d5ba33ea992d | diff --git a/ovos_local_backend/backend/auth.py b/ovos_local_backend/backend/auth.py
index 4a5b0b8..61e62ca 100644
--- a/ovos_local_backend/backend/auth.py
+++ b/ovos_local_backend/backend/auth.py
@@ -11,16 +11,23 @@
# limitations under the License.
#
+import os
+import time
+
+import requests
+from flask import request
+from oauthlib.oauth2 import WebApplicationClient
+
from ovos_local_backend.backend import API_VERSION
-from ovos_local_backend.utils import nice_json
from ovos_local_backend.backend.decorators import noindex, requires_auth
-from flask import request
-import time
+from ovos_local_backend.database.oauth import OAuthTokenDatabase, OAuthApplicationDatabase
+from ovos_local_backend.utils import nice_json
+os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
-def get_auth_routes(app):
- @app.route("/" + API_VERSION + "/auth/token", methods=['GET'])
+def get_auth_routes(app):
+ @app.route(f"/{API_VERSION}/auth/token", methods=['GET'])
@requires_auth
@noindex
def token():
@@ -36,4 +43,73 @@ def token():
"refreshToken": token}
return nice_json(device)
+ @app.route(f"/{API_VERSION}/auth/<oauth_id>/auth_url", methods=['GET'])
+ @requires_auth
+ @noindex
+ def oauth_url(oauth_id):
+ """ send auth url to user to confirm authorization,
+ once user opens it callback is triggered
+ """
+ params = dict(request.args)
+ params["callback_endpoint"] = request.base_url + f"/{API_VERSION}/auth/callback/{oauth_id}"
+
+ client = WebApplicationClient(params["client_id"])
+ request_uri = client.prepare_request_uri(
+ params["auth_endpoint"],
+ redirect_uri=params["callback_endpoint"],
+ scope=params["scope"],
+ )
+ with OAuthApplicationDatabase() as db:
+ db.add_application(oauth_id,
+ params["client_id"],
+ params["client_secret"],
+ params["auth_endpoint"],
+ params["token_endpoint"],
+ params["refresh_endpoint"],
+ params["callback_endpoint"],
+ params["scope"])
+
+ return request_uri, 200
+
+ @app.route(f"/{API_VERSION}/auth/callback/<oauth_id>", methods=['GET'])
+ @noindex
+ def oauth_callback(oauth_id):
+ """ user completed oauth, save token to db
+ """
+ params = dict(request.args)
+ code = params["code"]
+
+ data = OAuthApplicationDatabase()[oauth_id]
+ client_id = data["client_id"]
+ client_secret = data["client_secret"]
+ token_endpoint = data["token_endpoint"]
+
+ # Prepare and send a request to get tokens! Yay tokens!
+ client = WebApplicationClient(client_id)
+ token_url, headers, body = client.prepare_token_request(
+ token_endpoint,
+ authorization_response=request.url,
+ redirect_url=request.base_url,
+ code=code
+ )
+ token_response = requests.post(
+ token_url,
+ headers=headers,
+ data=body,
+ auth=(client_id, client_secret),
+ ).json()
+
+ with OAuthTokenDatabase() as db:
+ db.add_token(oauth_id, token_response)
+
+ return nice_json(params)
+
+ @app.route(f"/{API_VERSION}/device/<uuid>/token/<oauth_id>", methods=['GET'])
+ @requires_auth
+ @noindex
+ def oauth_token(uuid, oauth_id):
+ """a device is requesting a token for a previously approved OAuth app"""
+ data = OAuthTokenDatabase().get(oauth_id) or {}
+ return nice_json(data)
+
return app
diff --git a/ovos_local_backend/database/oauth.py b/ovos_local_backend/database/oauth.py
new file mode 100644
index 0000000..175cafd
--- /dev/null
+++ b/ovos_local_backend/database/oauth.py
@@ -0,0 +1,33 @@
+from json_database import JsonStorageXDG
+
+
+class OAuthTokenDatabase(JsonStorageXDG):
+ def __init__(self):
+ super().__init__("ovos_oauth")
+
+ def add_token(self, oauth_service, token_data):
+ self[oauth_service] = token_data
+
+ def total_tokens(self):
+ return len(self)
+
+
+class OAuthApplicationDatabase(JsonStorageXDG):
+ def __init__(self):
+ super().__init__("ovos_oauth_apps")
+
+ def add_application(self, oauth_service,
+ client_id, client_secret,
+ auth_endpoint, token_endpoint, refresh_endpoint,
+ callback_endpoint, scope):
+ self[oauth_service] = {"oauth_service": oauth_service,
+ "client_id": client_id,
+ "client_secret": client_secret,
+ "auth_endpoint": auth_endpoint,
+ "token_endpoint": token_endpoint,
+ "refresh_endpoint": refresh_endpoint,
+ "callback_endpoint": callback_endpoint,
+ "scope": scope}
+
+ def total_apps(self):
+ return len(self)
diff --git a/requirements/requirements.txt b/requirements/requirements.txt
index 9c2fca1..d16214e 100644
--- a/requirements/requirements.txt
+++ b/requirements/requirements.txt
@@ -8,4 +8,5 @@ ovos-stt-plugin-server
geocoder
timezonefinder
requests_cache
-selene_api>=0.0.2
\ No newline at end of file
+selene_api>=0.0.2
+oauthlib~=3.0
\ No newline at end of file
| implement oauth endpoints
OAuth functionality should be implemented; we first need to research how the existing API works.
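A rough sketch of the flow the patch above wires up with `oauthlib`: build an authorization URL for the user, then turn the callback code into a token request. Endpoints, client credentials and scopes below are placeholders.

```python
# Sketch: prepare an OAuth2 authorization URL and a token request with
# oauthlib's WebApplicationClient, as the new /auth_url and /callback
# handlers do. All values below are placeholders, not a real provider.
from oauthlib.oauth2 import WebApplicationClient

client = WebApplicationClient("my-client-id")

# Step 1: URL the user must visit to grant access
auth_url = client.prepare_request_uri(
    "https://provider.example/oauth/authorize",
    redirect_uri="https://backend.example/v1/auth/callback/spotify",
    scope=["user-read-playback-state"],
)
print(auth_url)

# Step 2: after the provider redirects back with ?code=..., build the token request
token_url, headers, body = client.prepare_token_request(
    "https://provider.example/oauth/token",
    redirect_url="https://backend.example/v1/auth/callback/spotify",
    code="code-from-callback",
)
print(token_url, body)
```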
| 2022-10-03T22:09:54 | 0.0 | [] | [] |
|||
joecummings/gitmine | joecummings__gitmine-84 | 73beac0206b6859a8819803b9e85e4fb351a48ee | diff --git a/gitmine/commands/get.py b/gitmine/commands/get.py
index 1a41253..a413b66 100644
--- a/gitmine/commands/get.py
+++ b/gitmine/commands/get.py
@@ -45,7 +45,9 @@ def get_prs(ctx: click.Context, color: bool, headers: Mapping[str, str]) -> Repo
return repositories
-def get_unassigned_issues(asc: bool, color: bool, headers: Mapping[str, str]) -> RepoDict:
+def get_unassigned_issues(
+ asc: bool, color: bool, repo_name: str, headers: Mapping[str, str]
+) -> RepoDict:
"""Get all Github Issues that are unnassigned from the repos in which user is a collaborator."""
def get_collaborator_repos() -> Any:
@@ -58,6 +60,8 @@ def get_collaborator_repos() -> Any:
return response.json()
collaborator_repos = get_collaborator_repos()
+ if repo_name:
+ collaborator_repos = filter(lambda x: x["full_name"] == repo_name, collaborator_repos)
params = {"direction": "asc" if asc else "desc", "assignee": "none"}
def get_issues_by_repo(repo: Mapping[str, Any]) -> Repository:
@@ -99,16 +103,22 @@ def get_issues(
if unassigned:
click.echo("Hang on, getting unassigned issues for you...")
- return get_unassigned_issues(asc, color, headers)
+ return get_unassigned_issues(asc, color, repo_name, headers)
+
+ if repo_name:
+ url = REPOS_ENDPOINT.copy()
+ url.path = url.path / repo_name / "issues"
+ else:
+ url = ISSUES_ENDPOINT.copy()
params = {"direction": "asc" if asc else "desc"}
logger.debug("Fetching issues from github.com \n")
- response = requests.get(ISSUES_ENDPOINT, headers=headers, params=params)
+ response = requests.get(url, headers=headers, params=params)
catch_bad_responses(response, get="issues")
repositories = RepoDict()
for issue in response.json():
- repo_name = issue["repository"]["full_name"]
+ repo_name = repo_name or issue["repository"]["full_name"]
repositories[repo_name].add_issue(
GithubElement.from_dict(issue, elem_type=ISSUE, color_coded=color)
)
| Get unassigned issues command doesn't work with repo specified
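A tiny sketch of the filter the patch adds: when a repo name is passed, only that repository's unassigned issues should be fetched. The data here is inlined rather than coming from the GitHub API.

```python
# Sketch: restrict the collaborator-repo list to one repo when a name is given,
# mirroring the filter added in get_unassigned_issues().
def filter_repos(collaborator_repos, repo_name=None):
    if repo_name:
        return [r for r in collaborator_repos if r["full_name"] == repo_name]
    return list(collaborator_repos)

repos = [{"full_name": "joecummings/gitmine"}, {"full_name": "octocat/hello-world"}]
print(filter_repos(repos, "joecummings/gitmine"))
```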
| 2021-08-26T18:26:52 | 0.0 | [] | [] |
|||
MyreMylar/pygame_gui | MyreMylar__pygame_gui-649 | 45de6bd2c765ce8033a69ae1e36c20661b38f91b | diff --git a/pygame_gui/elements/ui_panel.py b/pygame_gui/elements/ui_panel.py
index 842d6d67..4f98b5c0 100644
--- a/pygame_gui/elements/ui_panel.py
+++ b/pygame_gui/elements/ui_panel.py
@@ -86,8 +86,8 @@ def __init__(self,
else:
self.container_margins = margins
- container_rect = pygame.Rect(self.relative_rect.left,
- self.relative_rect.top,
+ container_rect = pygame.Rect(self.relative_rect.left + self.container_margins['left'],
+ self.relative_rect.top + self.container_margins['top'],
self.relative_rect.width - (self.container_margins['left'] +
self.container_margins['right']),
self.relative_rect.height - (self.container_margins['top'] +
| Version 0.6.12 - margins don't work
UIPanel(temp_rect, manager=self.ui_manager, object_id=ObjectID(class_id='MainPanel'), margins=margin_dict)
The margins attribute doesn't work, but it did in the previous version.
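For reference, a hedged sketch of the rectangle arithmetic the fix restores: the inner container has to be offset by the left and top margins as well as shrunk by the totals. Plain `pygame.Rect` is used so the numbers can be checked without a running `UIManager`.

```python
# Sketch: compute a panel's inner container rect from its outer rect and a
# margins dict, as the fixed UIPanel.__init__ now does.
import pygame

def container_rect(outer: pygame.Rect, margins: dict) -> pygame.Rect:
    return pygame.Rect(
        outer.left + margins['left'],            # offset, not just shrink
        outer.top + margins['top'],
        outer.width - (margins['left'] + margins['right']),
        outer.height - (margins['top'] + margins['bottom']),
    )

outer = pygame.Rect(10, 10, 300, 200)
print(container_rect(outer, {'left': 5, 'right': 5, 'top': 8, 'bottom': 8}))
```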
| 2024-11-03T16:05:42 | 0.0 | [] | [] |
|||
cylc/cylc-uiserver | cylc__cylc-uiserver-262 | b4f67114ac47817b5491b8ecbb83b999bfbe4d01 | diff --git a/cylc/uiserver/app.py b/cylc/uiserver/app.py
index a8a3a9b2..e87a3825 100644
--- a/cylc/uiserver/app.py
+++ b/cylc/uiserver/app.py
@@ -40,7 +40,7 @@
)
from cylc.uiserver import (
- __file__ as uis_pkg
+ __file__ as uis_pkg,
)
from cylc.uiserver.data_store_mgr import DataStoreMgr
from cylc.uiserver.handlers import (
@@ -56,6 +56,15 @@
from cylc.uiserver.workflows_mgr import WorkflowsManager
+# UIS configuration dirs
+USER_CONF_ROOT: Path = Path('~/.cylc/hub').expanduser()
+SITE_CONF_ROOT: Path = Path(
+ os.getenv('CYLC_SITE_CONF_PATH')
+ or GlobalConfig.DEFAULT_SITE_CONF_PATH,
+ 'hub'
+)
+
+
class PathType(TraitType):
"""A pathlib traitlet type which allows string and undefined values."""
@@ -90,13 +99,9 @@ class CylcUIServer(ExtensionApp):
str,
[
# user configuration
- Path('~/.cylc/hub').expanduser(),
+ USER_CONF_ROOT,
# site configuration
- Path(
- os.getenv('CYLC_SITE_CONF_PATH')
- or GlobalConfig.DEFAULT_SITE_CONF_PATH,
- 'hub'
- ),
+ SITE_CONF_ROOT,
# base configuration - always used
Path(uis_pkg).parent,
]
diff --git a/cylc/uiserver/jupyter_config.py b/cylc/uiserver/jupyter_config.py
index 74c6754b..f293a7c9 100644
--- a/cylc/uiserver/jupyter_config.py
+++ b/cylc/uiserver/jupyter_config.py
@@ -23,6 +23,7 @@
__file__ as uis_pkg,
getenv,
)
+from cylc.uiserver.app import USER_CONF_ROOT
# the command the hub should spawn (i.e. the cylc uiserver itself)
@@ -57,7 +58,7 @@
]
# store the JupyterHub runtime files in ~/.cylc/hub
-RUNTIME_PATH = Path('~/.cylc/hub').expanduser()
-c.JupyterHub.cookie_secret_file = f'{RUNTIME_PATH / "cookie_secret"}'
-c.JupyterHub.db_url = f'{RUNTIME_PATH / "jupyterhub.sqlite"}'
-c.ConfigurableHTTPProxy.pid_file = f'{RUNTIME_PATH / "jupyterhub-proxy.pid"}'
+USER_CONF_ROOT.mkdir(parents=True, exist_ok=True)
+c.JupyterHub.cookie_secret_file = f'{USER_CONF_ROOT / "cookie_secret"}'
+c.JupyterHub.db_url = f'{USER_CONF_ROOT / "jupyterhub.sqlite"}'
+c.ConfigurableHTTPProxy.pid_file = f'{USER_CONF_ROOT / "jupyterhub-proxy.pid"}'
diff --git a/cylc/uiserver/jupyterhub_config.py b/cylc/uiserver/jupyterhub_config.py
index 0fadb39a..b7a0e175 100644
--- a/cylc/uiserver/jupyterhub_config.py
+++ b/cylc/uiserver/jupyterhub_config.py
@@ -24,16 +24,22 @@
import os
from pathlib import Path
-from cylc.uiserver import __file__ as uis_pkg
+from cylc.uiserver import (
+ __file__ as uis_pkg,
+)
+from cylc.uiserver.app import (
+ SITE_CONF_ROOT,
+ USER_CONF_ROOT,
+)
LOG = logging.getLogger(__name__)
# base configuration - always used
DEFAULT_CONF_PATH: Path = Path(uis_pkg).parent / 'jupyter_config.py'
# site configuration
-SITE_CONF_PATH: Path = Path('/etc/cylc/hub/jupyter_config.py')
+SITE_CONF_PATH: Path = SITE_CONF_ROOT / 'jupyter_config.py'
# user configuration
-USER_CONF_PATH: Path = Path('~/.cylc/hub/jupyter_config.py').expanduser()
+USER_CONF_PATH: Path = USER_CONF_ROOT / 'jupyter_config.py'
def _load(path):
| create ~/.cylc/hub/ automatically?
Running `cylc hub` for the first time fails if the `~/.cylc/hub/` directory does not exist.
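A minimal sketch of the behaviour the fix adds: make sure the runtime directory exists before pointing the JupyterHub cookie, database and proxy files at it.

```python
# Sketch: create the UIS runtime directory up front so first-run file paths
# (cookie secret, sqlite db, proxy pid) have somewhere to live.
from pathlib import Path

USER_CONF_ROOT = Path('~/.cylc/hub').expanduser()
USER_CONF_ROOT.mkdir(parents=True, exist_ok=True)  # no-op if it already exists

cookie_secret_file = USER_CONF_ROOT / 'cookie_secret'
db_url = USER_CONF_ROOT / 'jupyterhub.sqlite'
print(cookie_secret_file, db_url)
```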
| 2021-10-19T11:04:49 | 0.0 | [] | [] |
|||
CybercentreCanada/assemblyline-ui | CybercentreCanada__assemblyline-ui-742 | ee685c2de61943372f67c7a102561805983fa8a7 | diff --git a/assemblyline_ui/api/v4/ingest.py b/assemblyline_ui/api/v4/ingest.py
index d03d0fc6..f30d51df 100644
--- a/assemblyline_ui/api/v4/ingest.py
+++ b/assemblyline_ui/api/v4/ingest.py
@@ -279,9 +279,8 @@ def ingest_single_file(**kwargs):
# File is not found still, and we have external sources
dl_from = None
available_sources = [x for x in config.submission.sha256_sources
- if Classification.is_accessible(user['classification'],
- x.classification) and
- x.name in default_external_sources]
+ if Classification.is_accessible(user['classification'], x.classification)
+ and x.name in default_external_sources]
try:
for source in available_sources:
src_url = source.url.replace(source.replace_pattern, sha256)
@@ -384,6 +383,9 @@ def ingest_single_file(**kwargs):
if extracted_path:
out_file = extracted_path
+ if fileinfo["type"].startswith("uri/") and "uri_info" in fileinfo and "uri" in fileinfo["uri_info"]:
+ al_meta["name"] = fileinfo["uri_info"]["uri"]
+
# Alter filename and classification based on CaRT output
meta_classification = al_meta.pop('classification', s_params['classification'])
if meta_classification != s_params['classification']:
@@ -429,6 +431,8 @@ def ingest_single_file(**kwargs):
metadata.update(extra_meta)
# Set description if it does not exists
+ if fileinfo["type"].startswith("uri/") and "uri_info" in fileinfo and "uri" in fileinfo["uri_info"]:
+ default_description = f"Inspection of URL: {fileinfo['uri_info']['uri']}"
s_params['description'] = s_params['description'] or f"[{s_params['type']}] {default_description}"
# Create submission object
try:
diff --git a/assemblyline_ui/api/v4/submit.py b/assemblyline_ui/api/v4/submit.py
index c47182bd..db7298d2 100644
--- a/assemblyline_ui/api/v4/submit.py
+++ b/assemblyline_ui/api/v4/submit.py
@@ -67,8 +67,6 @@ def resubmit_for_dynamic(sha256, *args, **kwargs):
metadata = {}
try:
copy_sid = request.args.get('copy_sid', None)
- name = safe_str(request.args.get('name', sha256))
-
if copy_sid:
submission = STORAGE.submission.get(copy_sid, as_obj=False)
else:
@@ -106,13 +104,19 @@ def resubmit_for_dynamic(sha256, *args, **kwargs):
return make_api_response({}, "File %s cannot be found on the server therefore it cannot be resubmitted."
% sha256, status_code=404)
+ if (file_info["type"].startswith("uri/") and "uri_info" in file_info and "uri" in file_info["uri_info"]):
+ name = safe_str(file_info["uri_info"]["uri"])
+ submission_params['description'] = f"Resubmit {file_info['uri_info']['uri']} for Dynamic Analysis"
+ else:
+ name = safe_str(request.args.get('name', sha256))
+ submission_params['description'] = f"Resubmit {name} for Dynamic Analysis"
+
files = [{'name': name, 'sha256': sha256, 'size': file_info['size']}]
submission_params['submitter'] = user['uname']
submission_params['quota_item'] = True
if 'priority' not in submission_params:
submission_params['priority'] = 500
- submission_params['description'] = "Resubmit %s for Dynamic Analysis" % name
if "Dynamic Analysis" not in submission_params['services']['selected']:
submission_params['services']['selected'].append("Dynamic Analysis")
@@ -318,9 +322,6 @@ def submit(**kwargs):
s_params['quota_item'] = True
s_params['submitter'] = user['uname']
- if not s_params['description']:
- s_params['description'] = default_description
-
# Set max extracted/supplementary if missing from request
s_params['max_extracted'] = s_params.get('max_extracted', config.submission.default_max_extracted)
s_params['max_supplementary'] = s_params.get('max_supplementary', config.submission.default_max_supplementary)
@@ -366,13 +367,19 @@ def submit(**kwargs):
s_params['classification'] = Classification.max_classification(s_params['classification'],
fileinfo['classification'])
+ if (
+ fileinfo["type"].startswith("uri/")
+ and "uri_info" in fileinfo
+ and "uri" in fileinfo["uri_info"]
+ ):
+ default_description = f"Inspection of URL: {fileinfo['uri_info']['uri']}"
+
if not found and default_external_sources:
# File is not found still, and we have external sources
dl_from = None
available_sources = [x for x in config.submission.sha256_sources
- if Classification.is_accessible(user['classification'],
- x.classification) and
- x.name in default_external_sources]
+ if Classification.is_accessible(user['classification'], x.classification)
+ and x.name in default_external_sources]
try:
for source in available_sources:
src_url = source.url.replace(source.replace_pattern, sha256)
@@ -432,6 +439,9 @@ def submit(**kwargs):
with open(out_file, "wb") as my_file:
my_file.write(binary.read())
+ if not s_params['description']:
+ s_params['description'] = default_description
+
try:
metadata = flatten(data.get('metadata', {}))
metadata.update(extra_meta)
| Ignore unused parts when getting the dynamic classification
| 2023-09-26T13:55:34 | 0.0 | [] | [] |
|||
EnableSecurity/wafw00f | EnableSecurity__wafw00f-154 | 3257c48d45ffb2f6504629aa3c5d529f1b886c1b | diff --git a/wafw00f/main.py b/wafw00f/main.py
index ba3e7a68..c53a4f8a 100644
--- a/wafw00f/main.py
+++ b/wafw00f/main.py
@@ -92,7 +92,7 @@ def genericdetect(self):
if 'User-Agent' in self.headers:
self.headers.pop('User-Agent') # Deleting the user-agent key from object not dict.
resp3 = self.customRequest(headers=self.headers)
- if resp3 is not None:
+ if resp3 is not None and resp1 is not None:
if resp1.status_code != resp3.status_code:
self.log.info('Server returned a different response when request didn\'t contain the User-Agent header.')
reason = reasons[4]
| AttributeError: 'NoneType' object has no attribute 'status_code'
**Describe the bug**
The program stops and exits with some inputs (unfortunately not always reproducible).
**To Reproduce**
`wafw00f https://ccc-dr-contingent.comcast.com` (not always reproducible)
**Desktop (please complete the following information):**
- OS: Linux
- OS version, distribution: `Linux 5.13.0-30-generic #33-Ubuntu SMP`
- Python version: 3.9
- Wafw00f version: 2.1.0
**Debug output**
```
[*] Checking https://ccc-dr-contingent.comcast.com
[+] Generic Detection results:
ERROR:wafw00f:Something went wrong ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
Traceback (most recent call last):
File "/usr/local/bin/wafw00f", line 4, in <module>
__import__('pkg_resources').run_script('wafw00f==2.1.0', 'wafw00f')
File "/home/edoardottt/.local/lib/python3.9/site-packages/pkg_resources/__init__.py", line 662, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/home/edoardottt/.local/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1459, in run_script
exec(code, namespace, namespace)
File "/usr/local/lib/python3.9/dist-packages/wafw00f-2.1.0-py3.9.egg/EGG-INFO/scripts/wafw00f", line 8, in <module>
main.main()
File "/usr/local/lib/python3.9/dist-packages/wafw00f-2.1.0-py3.9.egg/wafw00f/main.py", line 451, in main
if attacker.genericdetect():
File "/usr/local/lib/python3.9/dist-packages/wafw00f-2.1.0-py3.9.egg/wafw00f/main.py", line 95, in genericdetect
if resp1.status_code != resp3.status_code:
AttributeError: 'NoneType' object has no attribute 'status_code'
```
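The crash boils down to comparing attributes on a request that never completed; a small sketch of the guard the fix adds, with a stand-in for wafw00f's `customRequest()`.

```python
# Sketch: only compare status codes when both responses actually exist,
# as the genericdetect() fix does. fetch() stands in for customRequest().
def fetch(url, drop_user_agent=False):
    # pretend the server resets the connection for one variant
    return None if drop_user_agent else type("Resp", (), {"status_code": 200})()

resp1 = fetch("https://example.com")
resp3 = fetch("https://example.com", drop_user_agent=True)

if resp1 is not None and resp3 is not None:
    if resp1.status_code != resp3.status_code:
        print("different response without User-Agent header")
else:
    print("skipping check: one of the requests failed")
```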
| 2022-03-08T14:28:52 | 0.0 | [] | [] |
|||
4Catalyzer/fourmat | 4Catalyzer__fourmat-28 | 6da7333df8e3d17181d7f29940601d5ffff3173e | diff --git a/.isort.cfg b/.isort.cfg
deleted file mode 100644
index 3c04ba3..0000000
--- a/.isort.cfg
+++ /dev/null
@@ -1,5 +0,0 @@
-[settings]
-profile = black
-combine_as_imports = True
-line_length = 79
-src_paths = .
diff --git a/fourmat/assets/.isort.cfg b/fourmat/assets/.isort.cfg
deleted file mode 100644
index 3c04ba3..0000000
--- a/fourmat/assets/.isort.cfg
+++ /dev/null
@@ -1,5 +0,0 @@
-[settings]
-profile = black
-combine_as_imports = True
-line_length = 79
-src_paths = .
diff --git a/fourmat/assets/pyproject.toml b/fourmat/assets/pyproject.toml
index 6f24567..254c610 100644
--- a/fourmat/assets/pyproject.toml
+++ b/fourmat/assets/pyproject.toml
@@ -1,3 +1,9 @@
[tool.black]
line-length = 79
target-version = ['py36', 'py37', 'py38']
+
+[tool.isort]
+profile = "black"
+combine_as_imports = true
+line_length = 79
+src_paths = ["."]
diff --git a/fourmat/lint.py b/fourmat/lint.py
index f70328b..35de2d5 100644
--- a/fourmat/lint.py
+++ b/fourmat/lint.py
@@ -1,10 +1,9 @@
-import fnmatch
import shutil
import subprocess
import sys
from contextlib import contextmanager
from pathlib import Path
-from subprocess import PIPE, CalledProcessError
+from subprocess import CalledProcessError
import click
@@ -16,7 +15,7 @@
SNAPSHOT_GLOB = "*/snapshots/snap_*.py"
SNAPSHOT_REGEX = r".*\/snapshots\/snap_.*\.py"
-CONFIGURATION_FILES = (".flake8", ".isort.cfg", "pyproject.toml")
+CONFIGURATION_FILES = (".flake8", "pyproject.toml")
# -----------------------------------------------------------------------------
@@ -25,32 +24,6 @@ def get_project_paths():
return CONFIG_FILE.read_text().split()
-def get_dirty_filenames(paths, *, staged=False):
- filenames = subprocess.run(
- (
- "git",
- "diff-index",
- "--name-only",
- "--diff-filter",
- "ACM",
- *(("--cached",) if staged else ()),
- "HEAD",
- "--",
- *paths,
- ),
- check=True,
- stdout=PIPE,
- encoding=sys.getdefaultencoding(),
- ).stdout.split()
-
- return tuple(
- filename
- for filename in filenames
- if Path(filename).suffix == ".py"
- and not fnmatch.fnmatch(filename, SNAPSHOT_GLOB)
- )
-
-
# -----------------------------------------------------------------------------
diff --git a/pyproject.toml b/pyproject.toml
index 6f24567..254c610 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,3 +1,9 @@
[tool.black]
line-length = 79
target-version = ['py36', 'py37', 'py38']
+
+[tool.isort]
+profile = "black"
+combine_as_imports = true
+line_length = 79
+src_paths = ["."]
| Fourmat overrides .isort.cfg when project specifies [tool.isort] configuration in pyproject.toml
Modern versions of `isort` allow developers to specify configuration using a `[tool.isort]` section in `pyproject.toml`. This is preferable to prevent an over-proliferation of config file types, but `fourmat` will only see that an `.isort.cfg` file is missing and supplant its own default configuration, which takes precedence over the `pyproject.toml` configuration.
| that's great! I didn't know about this. when we built fourmat, isort didn't use the pyproject.toml. I'll try to find some time to work on this tomorrow | 2021-07-01T20:58:45 | 0.0 | [] | [] |
||
katajakasa/aiohttp-spyne | katajakasa__aiohttp-spyne-4 | 88d237c8e5dd95a73d5e9bb62df1f73b2b7dacc0 | diff --git a/README.rst b/README.rst
index 34e256e..7d1f737 100644
--- a/README.rst
+++ b/README.rst
@@ -11,7 +11,7 @@ Requirements:
* Python >= 3.7
* Aiohttp >= 3.7.0
-* Spyne >= 2.13.16
+* Spyne >= 2.14.0
Spyne alpha versions should also work.
diff --git a/aiohttp_spyne/aio_base.py b/aiohttp_spyne/aio_base.py
index 48c4699..82102a9 100644
--- a/aiohttp_spyne/aio_base.py
+++ b/aiohttp_spyne/aio_base.py
@@ -9,7 +9,7 @@
from spyne import Application
from spyne.application import get_fault_string_from_exception
from spyne.auxproc import process_contexts
-from spyne.interface import AllYourInterfaceDocuments
+from spyne.interface import InterfaceDocuments
from spyne.model.fault import Fault
from spyne.protocol.http import HttpRpc
from spyne.server import ServerBase
@@ -103,7 +103,7 @@ async def handle_error(
def _generate_wsdl(self, req: web.Request) -> bytes:
"""Requests spyne to generate a new WSDL document"""
actual_url = urlunparse([req.scheme, req.host, req.path, "", "", ""])
- doc = AllYourInterfaceDocuments(self.app.interface)
+ doc = InterfaceDocuments(self.app.interface)
doc.wsdl11.build_interface_document(actual_url)
return doc.wsdl11.get_interface_document()
diff --git a/requirements.txt b/requirements.txt
index fa2ef6f..bf36f2f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,9 +1,9 @@
-aiohttp>=3.0.0,<4.0.0
-spyne==2.13.16
-pytest==6.2.5
+aiohttp>=3.7.0,<4.0.0
+spyne==2.14.0
+pytest==7.1.0
zeep[async]==4.1.0
flake8==4.0.1
-black==21.10b0
-mypy==0.910
-wheel==0.37.0
-twine==3.6.0
+black==22.1.0
+mypy==0.941
+wheel==0.37.1
+twine==3.8.0
diff --git a/setup.py b/setup.py
index 1d7faf9..0e3a377 100644
--- a/setup.py
+++ b/setup.py
@@ -31,5 +31,5 @@
"Framework :: AsyncIO",
],
packages=["aiohttp_spyne"],
- install_requires=["aiohttp>=3.0.0,<4.0.0", "spyne>=2.13.16"],
+ install_requires=["aiohttp>=3.7.0,<4.0.0", "spyne>=2.14.0"],
)
| ImportError with spyne 2.14.0
Hello.
In spyne 2.14.0, the [`AllYourInterfaceDocuments`](https://github.com/arskom/spyne/blob/da03ccef1de503d36c75cacb97cc0673787509a9/spyne/interface/_base.py#L517) class ceased to exist and was replaced by [new ones](https://github.com/arskom/spyne/blob/master/spyne/interface/_base.py), which broke the current version of aiohttp-spyne.
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/proj/flc/api/venv/lib/python3.7/site-packages/aiohttp_spyne/__init__.py", line 2, in <module>
from .aio_spyne import AIOSpyne # noqa: F401
File "/home/user/proj/flc/api/venv/lib/python3.7/site-packages/aiohttp_spyne/aio_spyne.py", line 6, in <module>
from .aio_base import AioBase
File "/home/user/proj/flc/api/venv/lib/python3.7/site-packages/aiohttp_spyne/aio_base.py", line 12, in <module>
from spyne.interface import AllYourInterfaceDocuments
ImportError: cannot import name 'AllYourInterfaceDocuments' from 'spyne.interface'
```
Can you fix this error or do you mind if I make a PR?
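One way to keep supporting both spyne lines would be a conditional import; this is only a sketch, since the actual fix simply raises the dependency floor to spyne >= 2.14.0.

```python
# Sketch: import the interface-documents class under whichever name the
# installed spyne provides (renamed in 2.14.0).
try:
    from spyne.interface import InterfaceDocuments          # spyne >= 2.14.0
except ImportError:
    from spyne.interface import AllYourInterfaceDocuments as InterfaceDocuments  # spyne < 2.14.0

def build_wsdl(app, url: str) -> bytes:
    # mirrors _generate_wsdl() in aio_base.py
    doc = InterfaceDocuments(app.interface)
    doc.wsdl11.build_interface_document(url)
    return doc.wsdl11.get_interface_document()
```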
| Hello,
Oh, new spyne version. I had not noticed yet :) Yes, if you have some time, please make a PR. I don't currently have enough time to work on this, but I can definitely check a PR. | 2022-03-15T14:14:44 | 0.0 | [] | [] |
||
open-traffic-generator/openapiart | open-traffic-generator__openapiart-134 | bfef85895975d293763049ea10d1f87d0e994c2d | diff --git a/openapiart/openapiartgo.py b/openapiart/openapiartgo.py
index 570b33a8..42479243 100644
--- a/openapiart/openapiartgo.py
+++ b/openapiart/openapiartgo.py
@@ -574,7 +574,7 @@ def _write_field_getter(self, new, field):
{body}
}}
""".format(
- fieldname=field.name,
+ fieldname=self._get_external_struct_name(field.name),
struct=new.struct,
getter_method=field.getter_method,
body=body,
@@ -647,13 +647,13 @@ def _write_field_setter(self, new, field):
return obj
}}
""".format(
- fieldname=field.name,
+ fieldname=self._get_external_struct_name(field.name),
newstruct=new.struct,
setter_method=field.setter_method,
body=body,
description=field.description,
fieldtype=field.type,
- fieldstruct=field.external_struct,
+ fieldstruct=new.interface,
set_choice=set_choice,
)
)
@@ -712,7 +712,7 @@ def _build_setters_getters(self, fluent_new):
return
choice_enums = self._get_parser("$..choice..enum").find(fluent_new.schema_object["properties"])
for property_name, property_schema in fluent_new.schema_object["properties"].items():
- if len(self._get_parser("$..enum").find(property_schema)) > 0 and property_schema["type"] == "array": # temporary
+ if len(self._get_parser("$.items.enum").find(property_schema)) > 0: # temporary
continue
field = FluentField()
field.schema = property_schema
@@ -726,6 +726,7 @@ def _build_setters_getters(self, fluent_new):
field.isEnum = len(self._get_parser("$..enum").find(property_schema)) > 0
if field.isEnum:
field.enums = self._get_parser("$..enum").find(property_schema)[0].value
+ field.enums.remove("unspecified")
field.isOptional = fluent_new.isOptional(property_name)
field.isPointer = False if field.type.startswith("[") else field.isOptional
if field.isEnum:
| Users shouldn't be allowed to set enums with UNSPECIFIED value
The UNSPECIFIED value is not documented by the OTG spec and hence shouldn't be exposed to users when setting enums.
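A trivial sketch of the idea: filter the implicit `unspecified` sentinel out of the choices offered in generated setters (the generator change removes it from the enum list in place).

```python
# Sketch: drop the internal "unspecified" sentinel before exposing enum
# choices to users, as the generator change does.
def user_facing_enums(enums):
    return [e for e in enums if e != "unspecified"]

print(user_facing_enums(["unspecified", "ipv4", "ipv6"]))
```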
| 2021-08-24T16:08:25 | 0.0 | [] | [] |
|||
flatpak/flatpak-builder-tools | flatpak__flatpak-builder-tools-213 | 18d57e24b02282b5c1cb9c8fdc5f1af1f417e2f8 | diff --git a/cargo/flatpak-cargo-generator.py b/cargo/flatpak-cargo-generator.py
index 6f9cf782..8a1ceb26 100755
--- a/cargo/flatpak-cargo-generator.py
+++ b/cargo/flatpak-cargo-generator.py
@@ -20,9 +20,9 @@
VENDORED_SOURCES = 'vendored-sources'
COMMIT_LEN = 7
+
def canonical_url(url):
'Converts a string to a Cargo Canonical URL, as per https://github.com/rust-lang/cargo/blob/35c55a93200c84a4de4627f1770f76a8ad268a39/src/cargo/util/canonical_url.rs#L19'
- logging.debug('canonicalising %s', url)
# Hrm. The upstream cargo does not replace those URLs, but if we don't then it doesn't work too well :(
url = url.replace('git+https://', 'https://')
u = urlparse(url)
@@ -39,6 +39,7 @@ def canonical_url(url):
return u
+
def get_git_tarball(repo_url, commit):
url = canonical_url(repo_url)
path = url.path.split('/')[1:]
@@ -58,6 +59,7 @@ def get_git_tarball(repo_url, commit):
else:
raise ValueError(f'Don\'t know how to get tarball for {repo_url}')
+
async def get_remote_sha256(url):
logging.info(f"started sha256({url})")
sha256 = hashlib.sha256()
@@ -71,11 +73,13 @@ async def get_remote_sha256(url):
logging.info(f"done sha256({url})")
return sha256.hexdigest()
+
def load_toml(tomlfile='Cargo.lock'):
with open(tomlfile, 'r') as f:
toml_data = toml.load(f)
return toml_data
+
def fetch_git_repo(git_url, commit):
repo_dir = git_url.replace('://', '_').replace('/', '_')
cache_dir = os.environ.get('XDG_CACHE_HOME', os.path.expanduser('~/.cache'))
@@ -90,25 +94,50 @@ def fetch_git_repo(git_url, commit):
subprocess.run(['git', 'checkout', commit], cwd=clone_dir, check=True)
return clone_dir
+
async def get_git_cargo_packages(git_url, commit):
- logging.info(f'Loading packages from git {git_url}')
+ logging.info('Loading packages from %s', git_url)
git_repo_dir = fetch_git_repo(git_url, commit)
- with open(os.path.join(git_repo_dir, 'Cargo.toml'), 'r') as r:
- root_toml = toml.loads(r.read())
+ root_toml = load_toml(os.path.join(git_repo_dir, 'Cargo.toml'))
assert 'package' in root_toml or 'workspace' in root_toml
packages = {}
+
+ async def get_dep_packages(entry, toml_dir):
+ # https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html
+ if 'dependencies' in entry:
+ for dep_name, dep in entry['dependencies'].items():
+ if 'package' in dep:
+ dep_name = dep['package']
+ if 'path' not in dep:
+ continue
+ if dep_name in packages:
+ continue
+ dep_dir = os.path.normpath(os.path.join(toml_dir, dep['path']))
+ logging.debug("Loading dependency %s from %s in %s", dep_name, dep_dir, git_url)
+ dep_toml = load_toml(os.path.join(git_repo_dir, dep_dir, 'Cargo.toml'))
+ assert dep_toml['package']['name'] == dep_name, (git_url, toml_dir)
+ await get_dep_packages(dep_toml, dep_dir)
+ packages[dep_name] = dep_dir
+ if 'target' in entry:
+ for _, target in entry['target'].items():
+ await get_dep_packages(target, toml_dir)
+
if 'package' in root_toml:
+ await get_dep_packages(root_toml, '.')
packages[root_toml['package']['name']] = '.'
+
if 'workspace' in root_toml:
for member in root_toml['workspace']['members']:
for subpkg_toml in glob.glob(os.path.join(git_repo_dir, member, 'Cargo.toml')):
subpkg = os.path.relpath(os.path.dirname(subpkg_toml), git_repo_dir)
- with open(subpkg_toml, 'r') as s:
- pkg_toml = toml.loads(s.read())
+ pkg_toml = load_toml(subpkg_toml)
+ await get_dep_packages(pkg_toml, subpkg)
packages[pkg_toml['package']['name']] = subpkg
- logging.debug(f'Packages in repo: {packages}')
+
+ logging.debug('Packages in %s:\n%s', git_url, json.dumps(packages, indent=4))
return packages
+
async def get_git_sources(package, tarball=False):
name = package['name']
source = package['source']
@@ -152,6 +181,7 @@ async def get_git_sources(package, tarball=False):
'commit': commit,
'dest': f'{CARGO_CRATES}/{name}',
}]
+ logging.info("Loading %s from %s", name, repo_url)
git_cargo_packages = await get_git_cargo_packages(repo_url, commit)
pkg_subpath = git_cargo_packages[name]
if pkg_subpath != '.':
@@ -176,14 +206,14 @@ async def get_git_sources(package, tarball=False):
return (git_sources, cargo_vendored_entry)
+
async def get_package_sources(package, cargo_lock, git_tarballs=False):
metadata = cargo_lock.get('metadata')
name = package['name']
version = package['version']
if 'source' not in package:
- logging.warning(f'{name} has no source')
- logging.debug(f'Package for {name}: {package}')
+ logging.debug('%s has no source', name)
return
source = package['source']
@@ -200,11 +230,11 @@ async def get_package_sources(package, cargo_lock, git_tarballs=False):
return
crate_sources = [
{
- 'type': 'file',
+ 'type': 'archive',
+ 'archive-type': 'tar-gzip',
'url': f'{CRATES_IO}/{name}/{name}-{version}.crate',
'sha256': checksum,
- 'dest': CARGO_CRATES,
- 'dest-filename': f'{name}-{version}.crate'
+ 'dest': f'{CARGO_CRATES}/{name}-{version}',
},
{
'type': 'file',
@@ -215,6 +245,7 @@ async def get_package_sources(package, cargo_lock, git_tarballs=False):
]
return (crate_sources, {'crates-io': {'replace-with': VENDORED_SOURCES}})
+
async def generate_sources(cargo_lock, git_tarballs=False):
sources = []
cargo_vendored_sources = {
@@ -229,15 +260,7 @@ async def generate_sources(cargo_lock, git_tarballs=False):
sources += pkg_sources
cargo_vendored_sources.update(cargo_vendored_entry)
- sources.append({
- 'type': 'shell',
- 'dest': CARGO_CRATES,
- 'commands': [
- 'for c in *.crate; do tar -xf $c; done'
- ]
- })
-
- logging.debug(f'Vendored sources: {cargo_vendored_sources}')
+ logging.debug('Vendored sources:\n%s', json.dumps(cargo_vendored_sources, indent=4))
sources.append({
'type': 'file',
'url': 'data:' + urlquote(toml.dumps({
@@ -248,6 +271,7 @@ async def generate_sources(cargo_lock, git_tarballs=False):
})
return sources
+
def main():
parser = argparse.ArgumentParser()
parser.add_argument('cargo_lock', help='Path to the Cargo.lock file')
@@ -270,5 +294,6 @@ def main():
with open(outfile, 'w') as out:
json.dump(generated_sources, out, indent=4, sort_keys=False)
+
if __name__ == '__main__':
main()
| [cargo] Issue with Git versions of GTK-rs
Cargo.toml dependencies:
```toml
[dependencies]
actix-rt = "1.1.1"
bincode = "1.3.2"
cairo-rs = { git = "https://github.com/gtk-rs/gtk-rs-core.git", features = ["png"] }
chrono = { version = "0.4.19", features = ["unstable-locales"] }
cid = "0.6.1"
directories-next = "2.0.0"
futures = "0.3.13"
gdk = { git = "https://github.com/gtk-rs/gtk3-rs.git" }
gdk-pixbuf = { git = "https://github.com/gtk-rs/gtk-rs-core.git" }
gio = { git = "https://github.com/gtk-rs/gtk-rs-core.git" }
glib = { git = "https://github.com/gtk-rs/gtk-rs-core.git" }
gtk = { git = "https://github.com/gtk-rs/gtk3-rs.git", features = ["v3_24_9"], default-features = false }
ipfs = { git = "https://github.com/Christian7573/rust-ipfs.git", branch = "7573" }
ipfs-api = { version = "0.11.0", features = ["actix"], default-features = false }
lazy_static = "1.4.0"
notify-rust = { version = "4.3.0", features = ["images", "z"], default-features = false }
pango = { git = "https://github.com/gtk-rs/gtk-rs-core.git" }
url = "2.2.1"
urlencoding = "1.1.1"
reqwest = { version = "0.11.1", features = ["blocking"] }
tokio = { version = "0.2", features = ["full"] }
webkit2gtk = { git = "https://github.com/EmmyGH/webkit2gtk-rs.git", features = ["v2_30"] }
```
Error:
```python3
Traceback (most recent call last):
File "/home/emil/Documents/Projects/oku/./flatpak-cargo-generator.py", line 274, in <module>
main()
File "/home/emil/Documents/Projects/oku/./flatpak-cargo-generator.py", line 268, in main
generated_sources = asyncio.run(generate_sources(load_toml(args.cargo_lock),
File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/emil/Documents/Projects/oku/./flatpak-cargo-generator.py", line 224, in generate_sources
for pkg in await asyncio.gather(*pkg_coros):
File "/home/emil/Documents/Projects/oku/./flatpak-cargo-generator.py", line 191, in get_package_sources
return await get_git_sources(package, tarball=git_tarballs)
File "/home/emil/Documents/Projects/oku/./flatpak-cargo-generator.py", line 156, in get_git_sources
pkg_subpath = git_cargo_packages[name]
KeyError: 'javascriptcore-rs-sys'
```
[`javascriptcore-rs`](https://github.com/gtk-rs/javascriptcore-rs) is a dependency of `webkit2gtk-rs`.
DO NOT MERGE: quick-and-dirty fix
This is because the blurmac package in https://github.com/servo/devices is just stored in-tree, while we currently only handle workspaces in git repos.
Closes: https://github.com/flatpak/flatpak-builder-tools/issues/181
Tested-by: David Heidelberg <[email protected]>
| What project are you trying to build, and how do you run `flatpak-cargo-generator.py`?
Ok, this looks somewhat similar to #181. Can you please try the generator script with this patch?
```patch
diff --git a/cargo/flatpak-cargo-generator.py b/cargo/flatpak-cargo-generator.py
index 6f9cf78..bfcc15d 100755
--- a/cargo/flatpak-cargo-generator.py
+++ b/cargo/flatpak-cargo-generator.py
@@ -106,6 +106,13 @@ async def get_git_cargo_packages(git_url, commit):
with open(subpkg_toml, 'r') as s:
pkg_toml = toml.loads(s.read())
packages[pkg_toml['package']['name']] = subpkg
+ if 'dependencies' in root_toml:
+ for _, dep in root_toml['dependencies'].items():
+ if 'path' not in dep:
+ continue
+ dep_dir = os.path.normpath(dep['path'])
+ dep_toml = load_toml(os.path.join(git_repo_dir, dep_dir, 'Cargo.toml'))
+ packages[dep_toml['package']['name']] = dep_dir
logging.debug(f'Packages in repo: {packages}')
return packages
```
If it works for you, I guess I'll need to figure out a common solution for both cases.
| 2021-07-16T12:43:14 | 0.0 | [] | [] |
||
yanyongyu/githubkit | yanyongyu__githubkit-48 | fd80590ffd0efbed97398421f5e614075b740f75 | diff --git a/githubkit/utils.py b/githubkit/utils.py
index 9e30edb90..cc6946cbf 100644
--- a/githubkit/utils.py
+++ b/githubkit/utils.py
@@ -39,7 +39,9 @@ def _validate(cls, value: Any):
UNSET = Unset._UNSET
-Missing: TypeAlias = Union[Literal[UNSET], T]
+# if the property is not required, we allow it to have the value null.
+# See https://github.com/yanyongyu/githubkit/issues/47
+Missing: TypeAlias = Union[Literal[UNSET], T, None]
def exclude_unset(data: Any) -> Any:
| Should `Missing` type implicitly allow `None`?
The GitHub API schema is not always consistent about required and nullable fields. Currently, this is solved by maintaining the [`schema_overrides` table](https://github.com/yanyongyu/githubkit/blob/fd80590ffd0efbed97398421f5e614075b740f75/pyproject.toml#L81) and reporting them to GitHub (hoping they'll solve it).
Unfortunately, this is fragile and can break code if GitHub suddenly starts returning null values for such fields in its API response. For example, this was working well some days ago:
```py
from githubkit import GitHub
github = GitHub()
r = github.rest.orgs.get("frankie567-test-org-renamed")
r.parsed_data
```
But not anymore:
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/Users/fvoron/Development/githubkit/githubkit/response.py", line 50, in parsed_data
> return parse_raw_as(self._data_model, self._response.content)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "pydantic/tools.py", line 82, in pydantic.tools.parse_raw_as
> File "pydantic/tools.py", line 38, in pydantic.tools.parse_obj_as
> File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
> pydantic.error_wrappers.ValidationError: 2 validation errors for ParsingModel[OrganizationFull]
> __root__ -> name
> none is not an allowed value (type=type_error.none.not_allowed)
> __root__ -> blog
> none is not an allowed value (type=type_error.none.not_allowed)
---
That's why I was wondering if we could tweak the `Missing` type so it implicitly allows `None`. So basically this part:
https://github.com/yanyongyu/githubkit/blob/fd80590ffd0efbed97398421f5e614075b740f75/githubkit/utils.py#L42
becomes:
```py
Missing: TypeAlias = Union[Literal[UNSET], T, None]
```
Admittedly, this is not perfectly accurate with respect to the schema, but it seems more bullet-proof while GitHub makes up their mind about this.
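For illustration only, here is a minimal, self-contained sketch (not githubkit's actual `Unset` implementation) of how the widened alias would behave under pydantic v1: an omitted field keeps the `UNSET` sentinel, while an explicit `null` no longer raises a validation error.
```py
from enum import Enum
from typing import Literal, TypeVar, Union

from pydantic import BaseModel  # pydantic v1 API, as in the traceback above
from typing_extensions import TypeAlias

T = TypeVar("T")


class Unset(Enum):
    UNSET = "UNSET"


UNSET = Unset.UNSET

# Widened alias: a field may be omitted (UNSET), set to a value, or be null.
Missing: TypeAlias = Union[Literal[Unset.UNSET], T, None]


class Org(BaseModel):
    name: Missing[str] = UNSET


print(Org())                 # name=<Unset.UNSET: 'UNSET'> (field omitted)
print(Org(name="octo-org"))  # name='octo-org'
print(Org(name=None))        # name=None (no longer a validation error)
```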
| 2023-09-26T09:02:59 | 0.0 | [] | [] |
|||
python/pyperformance | python__pyperformance-149 | 4ba066e3985b5713a73ef60a4a5b4031b8e4dbbd | diff --git a/pyperformance/_pythoninfo.py b/pyperformance/_pythoninfo.py
index 69205b98..2ebd9710 100644
--- a/pyperformance/_pythoninfo.py
+++ b/pyperformance/_pythoninfo.py
@@ -92,16 +92,45 @@ def inspect_python_install(python=sys.executable):
MAGIC_NUMBER = _imp.get_magic()
+def _is_dev_stdlib(stdlib_dir):
+ if os.path.basename(stdlib_dir) != 'Lib':
+ return False
+ srcdir = os.path.dirname(stdlib_dir)
+ for filename in [
+ os.path.join(srcdir, 'Python', 'pylifecycle.c'),
+ os.path.join(srcdir, 'PCbuild', 'pythoncore.vcxproj'),
+ ]:
+ if not os.path.exists(filename):
+ return False
+ return True
+
+
+def _is_dev_executable(executable, stdlib_dir):
+ builddir = os.path.dirname(executable)
+ if os.path.exists(os.path.join(builddir, 'pybuilddir.txt')):
+ return True
+ if os.name == 'nt':
+ pcbuild = os.path.dirname(os.path.dirname(executable))
+ if os.path.basename(pcbuild) == 'PCbuild':
+ srcdir = os.path.dirname(pcbuild)
+ if os.path.join(srcdir, 'Lib') == stdlib_dir:
+ return True
+ return False
+
+
def _inspect_python_install(executable, prefix, base_prefix,
platlibdir, stdlib_dir,
version_info, platform, implementation_name,
**_ignored):
is_venv = prefix != base_prefix
- if os.path.basename(stdlib_dir) == 'Lib':
- base_executable = os.path.join(os.path.dirname(stdlib_dir), 'python')
- if not os.path.exists(base_executable):
- raise NotImplementedError(base_executable)
+ if (_is_dev_stdlib(stdlib_dir) and
+ _is_dev_executable(executable, stdlib_dir)):
+ # XXX What about venv?
+ try:
+ base_executable = sys._base_executable
+ except AttributeError:
+ base_executable = executable
is_dev = True
else:
major, minor = version_info[:2]
@@ -162,5 +191,7 @@ def _get_raw_info():
if __name__ == '__main__':
info = _get_raw_info()
+ (info['_base_executable'], info['_is_dev'], info['_is_venv'],
+ ) = _inspect_python_install(**info)
json.dump(info, sys.stdout, indent=4)
print()
| Running on Windows from cpython repo is broken (and on Mac)
I have a repo where I'm developing changes to CPython and I want to run PyPerformance using the python executable I've built here. That executable is named `$REPO\PCbuild\amd64\python.exe` (the `amd64` name is a configuration option IIUC). But the code in `_pythoninfo.py` in `_inspect_python_install` assumes it is `$REPO/python`.
As a workaround I am using
```
if os.name == "nt":
base_executable = os.path.join(
os.path.dirname(stdlib_dir),
'PCbuild\\amd64\\python.exe',
)
```
Note that on macOS the assumption is also broken -- there the executable is `$REPO/python.exe`.
| Note that @zooba tells me that on Windows, stdlib *always* ends in `Lib`, so that test is also wrong. :-)
And a better way to get the path on Windows is
```
base_executable = sys._base_executable
```
(According to @zooba, that variable doesn't have a meaning on Linux before 3.11, so beware.)
The right way to check for a dev build is `sysconfig.is_python_build()` (poorly named but that's what it returns). | 2022-03-15T01:25:25 | 0.0 | [] | [] |
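Putting the two suggestions together, a hedged sketch of the detection logic (an illustration only, not the actual pyperformance fix) could look like this:
```python
import os
import sys
import sysconfig


def resolve_base_executable():
    """Best-effort guess at the interpreter binary to benchmark."""
    if sysconfig.is_python_build():
        # In-tree (uninstalled) CPython build: on Windows this is e.g.
        # <repo>\PCbuild\amd64\python.exe, on macOS <repo>/python.exe.
        # NB: per the comment above, sys._base_executable may not be
        # meaningful on Linux before 3.11, hence the fallback.
        base = getattr(sys, "_base_executable", None) or sys.executable
    else:
        base = sys.executable
    return os.path.realpath(base)


print(resolve_base_executable())
```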
||
scikit-hep/uproot5 | scikit-hep__uproot5-1333 | 9420c52de24a20cc9823a9b81105c440ffa79ade | diff --git a/src/uproot/source/fsspec.py b/src/uproot/source/fsspec.py
index b11663b14..59d220149 100644
--- a/src/uproot/source/fsspec.py
+++ b/src/uproot/source/fsspec.py
@@ -92,6 +92,9 @@ def chunk(self, start: int, stop: int) -> uproot.source.chunk.Chunk:
Request a byte range of data from the file as a
:doc:`uproot.source.chunk.Chunk`.
"""
+ if self.closed:
+ raise OSError(f"file {self._file_path!r} is closed")
+
self._num_requests += 1
self._num_requested_chunks += 1
self._num_requested_bytes += stop - start
@@ -129,6 +132,9 @@ def chunks(
it is triggered by already-filled chunks, rather than waiting for
chunks to be filled.
"""
+ if self.closed:
+ raise OSError(f"file {self._file_path!r} is closed")
+
self._num_requests += 1
self._num_requested_chunks += len(ranges)
self._num_requested_bytes += sum(stop - start for start, stop in ranges)
@@ -184,7 +190,7 @@ def closed(self) -> bool:
True if the associated file/connection/thread pool is closed; False
otherwise.
"""
- return False
+ return self._file.closed
class FSSpecLoopExecutor(uproot.source.futures.Executor):
| fsspec Source (the local file one, at least) does not close files
I saw this in @giedrius2020's talk.
```python
>>> import uproot
>>> import skhep_testdata
>>> with uproot.open(skhep_testdata.data_path("uproot-Zmumu.root")) as file:
... tree = file["events"]
...
>>> tree["px1"].array()
<Array [-41.2, 35.1, 35.1, 34.1, ..., 32.4, 32.4, 32.5] type='2304 * float64'>
>>> tree.file.closed
False
```
The `with` statement was supposed to close the file, making it impossible to later read the arrays from it. (Without this, the `with` statement is useless: its purpose is to prevent file-handle leaks, and nothing here is closing the file-handle.)
This is where the "`file`" object (a ReadOnlyDirectory) has an `__exit__` method that gets called at the end of the `with` statement:
https://github.com/scikit-hep/uproot5/blob/dc19ce911caf45b3239d47aa61e1732f79417b29/src/uproot/reading.py#L1515-L1516
and here is where that gets propagated to an FSSpecSource:
https://github.com/scikit-hep/uproot5/blob/dc19ce911caf45b3239d47aa61e1732f79417b29/src/uproot/source/fsspec.py#L81-L83
and here's the pre-fsspec Source (MemmapSource):
https://github.com/scikit-hep/uproot5/blob/dc19ce911caf45b3239d47aa61e1732f79417b29/src/uproot/source/file.py#L216-L223
Pre-fsspec file-handles got closed:
```python
>>> import uproot
>>> import skhep_testdata
>>> with uproot.open(skhep_testdata.data_path("uproot-Zmumu.root"), handler=uproot.MemmapSource) as file:
... tree = file["events"]
...
>>> tree["px1"].array()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jpivarski/irishep/uproot5/src/uproot/behaviors/TBranch.py", line 1825, in array
_ranges_or_baskets_to_arrays(
File "/home/jpivarski/irishep/uproot5/src/uproot/behaviors/TBranch.py", line 3023, in _ranges_or_baskets_to_arrays
hasbranches._file.source.chunks(ranges, notifications=notifications)
File "/home/jpivarski/irishep/uproot5/src/uproot/source/file.py", line 163, in chunks
raise OSError(f"memmap is closed for file {self._file_path}")
OSError: memmap is closed for file /home/jpivarski/.local/skhepdata/uproot-Zmumu.root
>>> tree.file.closed
True
```
Now I wonder why pytest is not complaining about the leaking file-handles. I know that it checks for unclosed files because I've seen that error. Here it is, reproduced on the terminal, and our test suite never raised an error. I don't understand that.
Socially, this could be a problem because the fsspec-based file-handles have been available for almost a year (Uproot 5) and users might have gotten used to opening the file in a `with` statement (uselessly) and later, outside the `with`, initiating more file-reading with TTree, RNTuple, and TDirectory objects. After fixing this, code that relied on this error will break. It will need to be a new minor version number, 5.4.0, at least.
| I was curious about this, so here's what I found out.
The `fsspec` implementation is set up to always return `False` for the closed property.
https://github.com/scikit-hep/uproot5/blob/53f917c82daac2f07e9a600226a1c358783a1653/src/uproot/source/fsspec.py#L193-L201
But if you check the underlying file `tree.file.source._file.f.closed`, then you find that it was closed. It is weird that it is still able to read the file since it's supposed to be closed.
I feel like the comment above explains the existence of the property, but not why it always returns `False`, so I'm guessing it was just a silly mistake. I can submit a PR to simply replace it with `return self._file.f.closed` (when possible) if you agree.
The other issue is that `FSSpecSource.chunks` doesn't check if the file is closed.
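For illustration, a minimal sketch (toy code, not uproot's real `FSSpecSource`) of what those two changes amount to, delegating `closed` to the underlying fsspec file object and guarding reads:
```python
class SketchSource:
    """Toy stand-in for an fsspec-backed source, showing only the two fixes."""

    def __init__(self, file_obj, file_path):
        self._file = file_obj        # object returned by fsspec's open()
        self._file_path = file_path

    @property
    def closed(self):
        # Report the real state of the underlying handle instead of
        # hard-coding False.
        return self._file.closed

    def chunk(self, start, stop):
        # Guard reads so a closed file fails loudly, like MemmapSource does.
        if self.closed:
            raise OSError(f"file {self._file_path!r} is closed")
        self._file.seek(start)
        return self._file.read(stop - start)
```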
Also, maybe now it makes sense to drop support for Python 3.8 and clean up that method a bit? It would probably make sense to do both for a minor release.
What we can do is make final releases of Awkward and Uproot with Python 3.8 support, and then immediately afterward make a _minor_ release without Python 3.8 support, so that the change in Awkward version number only changes that level of support.
The question then becomes, which PRs do you want to be sure are included in the last with-Python-3.8 release?
>The question then becomes, which PRs do you want to be sure are included in the last with-Python-3.8 release?
I think it would make sense to at least wait for the fix/perf PRs that are ready (or almost ready). There's no reason to rush to drop Python 3.8, but I did notice that things started to break for 3.8 (see e.g https://github.com/scikit-hep/scikit-hep-testdata/pull/162). | 2024-11-07T19:34:27 | 0.0 | [] | [] |
||
KissPeter/APIFuzzer | KissPeter__APIFuzzer-54 | fb9ac6b0a5ece1e5a111f554bb1afa19f0a6ef32 | diff --git a/APIFuzzer b/APIFuzzer
index 950e064..3087cee 100755
--- a/APIFuzzer
+++ b/APIFuzzer
@@ -4,6 +4,7 @@ import argparse
import signal
import sys
import tempfile
+import traceback
from logging import _nameToLevel as levelNames
from apifuzzer.fuzz_utils import FailedToParseFileException
@@ -95,7 +96,8 @@ if __name__ == '__main__':
print('Failed to parse API definition')
exit(1)
except Exception as e:
- print(f'Unexpected exception happened during fuzz test preparation: {e}. Feel free to report the issue')
+ print(f'Unexpected exception happened during fuzz test preparation: {traceback.print_stack(*sys.exc_info())}.\n'
+ f' Feel free to report the issue',)
exit(1)
signal.signal(signal.SIGINT, signal_handler)
prog.run()
diff --git a/apifuzzer/fuzz_utils.py b/apifuzzer/fuzz_utils.py
index b52cf68..fcd6cda 100644
--- a/apifuzzer/fuzz_utils.py
+++ b/apifuzzer/fuzz_utils.py
@@ -14,7 +14,7 @@
from apifuzzer.utils import download_file, secure_randint
-def get_sample_data_by_type(param_type):
+def _get_sample_data_by_type(param_type):
types = {
"name": "012",
"string": "asd",
diff --git a/apifuzzer/fuzzer_target/fuzz_request_sender.py b/apifuzzer/fuzzer_target/fuzz_request_sender.py
index f631cdd..6851b6e 100644
--- a/apifuzzer/fuzzer_target/fuzz_request_sender.py
+++ b/apifuzzer/fuzzer_target/fuzz_request_sender.py
@@ -61,9 +61,11 @@ def transmit(self, **kwargs):
try:
_req_url = list()
for url_part in self.base_url, kwargs["url"]:
- if isinstance(url_part, Bits):
+ if not url_part:
+ continue
+ elif isinstance(url_part, Bits):
url_part = url_part.tobytes()
- if isinstance(url_part, bytes):
+ elif isinstance(url_part, bytes):
url_part = url_part.decode()
_req_url.append(url_part.strip("/"))
kwargs.pop("url")
diff --git a/apifuzzer/openapi_template_generator.py b/apifuzzer/openapi_template_generator.py
index 688a007..6921d2f 100644
--- a/apifuzzer/openapi_template_generator.py
+++ b/apifuzzer/openapi_template_generator.py
@@ -4,7 +4,7 @@
from json_ref_dict import materialize, RefDict
from apifuzzer.base_template import BaseTemplate
-from apifuzzer.fuzz_utils import get_sample_data_by_type, get_fuzz_type_by_param_type
+from apifuzzer.fuzz_utils import _get_sample_data_by_type, get_fuzz_type_by_param_type
from apifuzzer.move_json_parts import JsonSectionAbove
from apifuzzer.template_generator_base import TemplateGenerator
from apifuzzer.utils import transform_data_to_bytes, pretty_print, get_logger
@@ -182,7 +182,7 @@ def _process_api_resources(self, paths=None, existing_template=None):
elif param.get("default"):
sample_data = param.get("default")
else:
- sample_data = get_sample_data_by_type(param.get("type"))
+ sample_data = _get_sample_data_by_type(param.get("type"))
parameter_place_in_request = param.get("in")
parameters = list()
@@ -201,40 +201,14 @@ def _process_api_resources(self, paths=None, existing_template=None):
self.logger.debug(
f"Adding property: {param_name} with type: {parameter_data_type}"
)
- _additional_param = {
- "name": param_name,
- "type": parameter_data_type,
- "default": param.get("properties", {})
- .get(_param)
- .get("default"),
- "example": param.get("properties", {})
- .get(_param)
- .get("example"),
- "enum": param.get("properties", {}).get(_param).get("enum"),
- }
- parameters.append(_additional_param)
+ parameters.append(self._get_additional_parameters(_param, param, param_name,
+ parameter_data_type))
for _parameter in parameters:
param_name = _parameter.get("name")
parameter_data_type = _parameter.get("type")
-
- if _parameter.get("enum"):
- fuzzer_type = "enum"
- elif param_format is not None:
- fuzzer_type = param_format.lower()
- elif parameter_data_type is not None:
- fuzzer_type = parameter_data_type.lower()
- else:
- fuzzer_type = None
+ fuzzer_type = self._get_fuzzer_type(_parameter, param_format, parameter_data_type)
fuzz_type = get_fuzz_type_by_param_type(fuzzer_type)
-
- if _parameter.get("enum") and hasattr(
- fuzz_type, "accept_list_as_value"
- ):
- sample_data = _parameter.get("enum")
- elif _parameter.get("example"):
- sample_data = _parameter.get("example")
- elif _parameter.get("default"):
- sample_data = _parameter.get("default")
+ sample_data = self._get_sample_data(_parameter, fuzz_type, sample_data)
self.logger.info(
f"Resource: {resource} Method: {method} \n Parameter: {param} \n"
@@ -243,49 +217,92 @@ def _process_api_resources(self, paths=None, existing_template=None):
f"fuzzer: {fuzz_type.__name__}"
)
- if parameter_place_in_request == ParamTypes.PATH:
- template.path_variables.add(
- fuzz_type(name=param_name, value=str(sample_data))
- )
- elif parameter_place_in_request == ParamTypes.HEADER:
- template.headers.add(
- fuzz_type(
- name=param_name,
- value=transform_data_to_bytes(sample_data),
- )
- )
- elif parameter_place_in_request == ParamTypes.COOKIE:
- template.cookies.add(
- fuzz_type(name=param_name, value=sample_data)
- )
- elif parameter_place_in_request == ParamTypes.QUERY:
- template.params.add(
- fuzz_type(name=param_name, value=str(sample_data))
- )
- elif parameter_place_in_request == ParamTypes.BODY:
- if hasattr(fuzz_type, "accept_list_as_value"):
- template.data.add(
- fuzz_type(name=param_name, value=sample_data)
- )
- else:
- template.data.add(
- fuzz_type(
- name=param_name,
- value=transform_data_to_bytes(sample_data),
- )
- )
- elif parameter_place_in_request == ParamTypes.FORM_DATA:
- template.params.add(
- fuzz_type(name=param_name, value=str(sample_data))
- )
- else:
- self.logger.warning(
- f"Can not parse a definition ({parameter_place_in_request}): "
- f"{pretty_print(param)}"
- )
+ self._add_field_to_param(fuzz_type, param, param_name, parameter_place_in_request, sample_data,
+ template)
if template.get_stat() > 0:
self._save_template(template)
+ @staticmethod
+ def _get_additional_parameters(_param, param, param_name, parameter_data_type):
+ _additional_param = {
+ "name": param_name,
+ "type": parameter_data_type,
+ "default": param.get("properties", {})
+ .get(_param)
+ .get("default"),
+ "example": param.get("properties", {})
+ .get(_param)
+ .get("example"),
+ "enum": param.get("properties", {}).get(_param).get("enum"),
+ }
+ return _additional_param
+
+ def _add_field_to_param(self, fuzz_type, param, param_name, parameter_place_in_request, sample_data, template):
+ if parameter_place_in_request == ParamTypes.PATH:
+ template.path_variables.add(
+ fuzz_type(name=param_name, value=str(sample_data))
+ )
+ elif parameter_place_in_request == ParamTypes.HEADER:
+ template.headers.add(
+ fuzz_type(
+ name=param_name,
+ value=transform_data_to_bytes(sample_data),
+ )
+ )
+ elif parameter_place_in_request == ParamTypes.COOKIE:
+ template.cookies.add(
+ fuzz_type(name=param_name, value=sample_data)
+ )
+ elif parameter_place_in_request == ParamTypes.QUERY:
+ template.params.add(
+ fuzz_type(name=param_name, value=str(sample_data))
+ )
+ elif parameter_place_in_request == ParamTypes.BODY:
+ if hasattr(fuzz_type, "accept_list_as_value"):
+ template.data.add(
+ fuzz_type(name=param_name, value=sample_data)
+ )
+ else:
+ template.data.add(
+ fuzz_type(
+ name=param_name,
+ value=transform_data_to_bytes(sample_data),
+ )
+ )
+ elif parameter_place_in_request == ParamTypes.FORM_DATA:
+ template.params.add(
+ fuzz_type(name=param_name, value=str(sample_data))
+ )
+ else:
+ self.logger.warning(
+ f"Can not parse a definition ({parameter_place_in_request}): "
+ f"{pretty_print(param)}"
+ )
+
+ @staticmethod
+ def _get_sample_data(_parameter, fuzz_type, sample_data):
+ if _parameter.get("enum") and hasattr(
+ fuzz_type, "accept_list_as_value"
+ ):
+ sample_data = _parameter.get("enum")
+ elif _parameter.get("example"):
+ sample_data = _parameter.get("example")
+ elif _parameter.get("default"):
+ sample_data = _parameter.get("default")
+ return sample_data
+
+ @staticmethod
+ def _get_fuzzer_type(_parameter, param_format, parameter_data_type):
+ if _parameter.get("enum"):
+ fuzzer_type = "enum"
+ elif param_format is not None:
+ fuzzer_type = param_format.lower()
+ elif parameter_data_type is not None:
+ fuzzer_type = parameter_data_type.lower()
+ else:
+ fuzzer_type = None
+ return fuzzer_type
+
def _compile_base_url_for_swagger(self, alternate_url):
if alternate_url:
_base_url = "/".join(
@@ -305,14 +322,14 @@ def _compile_base_url_for_swagger(self, alternate_url):
return _base_url
def _compile_base_url_for_openapi(self, alternate_url):
- if self.api_resources.get("servers"):
- uri = urlparse(self.api_resources.get("servers", [])[0].get("url"))
+ if len(self.api_resources.get("servers", [])) > 0:
+ uri = urlparse(self.api_resources.get("servers", [{}])[0].get("url"))
else:
uri = urlparse(alternate_url)
if alternate_url:
_base_url = "/".join([alternate_url.strip("/"), uri.path.strip("/")])
else:
- _base_url = self.api_resources.get("servers", [])[0].get("url")
+ _base_url = self.api_resources.get("servers", [{}])[0].get("url")
return _base_url
def compile_base_url(self, alternate_url):
diff --git a/apifuzzer/utils.py b/apifuzzer/utils.py
index a68014a..61199b3 100644
--- a/apifuzzer/utils.py
+++ b/apifuzzer/utils.py
@@ -89,7 +89,10 @@ def transform_data_to_bytes(data_in):
elif isinstance(data_in, Bits):
return data_in.tobytes()
elif isinstance(data_in, list):
- return bytes(",".join(data_in), "utf-16")
+ tmp_data = []
+ for data in data_in:
+ tmp_data.append(transform_data_to_bytes(data).decode("utf-16"))
+ return bytes(",".join(tmp_data), "utf-16")
else:
return bytes(data_in)
| Unexpected exception happened during fuzz test preparation: sequence item 0: expected str instance, list found.
**Describe the bug**
For a list of lists property with default `[[]]` (`alternatives` in the schema attached), you get this error:
`Unexpected exception happened during fuzz test preparation: sequence item 0: expected str instance, list found.`
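The underlying failure can be reproduced outside APIFuzzer with a stripped-down copy of the old list branch of `transform_data_to_bytes` (see the diff above): `",".join()` cannot handle a nested list.
```python
def transform_data_to_bytes(data_in):
    # Simplified copy of the pre-fix list handling from apifuzzer/utils.py.
    if isinstance(data_in, list):
        return bytes(",".join(data_in), "utf-16")  # breaks on non-str items
    return bytes(str(data_in), "utf-16")


transform_data_to_bytes([[]])
# TypeError: sequence item 0: expected str instance, list found
```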
**APIFuzzer debug log**
[apifuzzer.log](https://github.com/KissPeter/APIFuzzer/files/8473616/apifuzzer.log)
**Related API definition**
[api_schema.yaml.txt](https://github.com/KissPeter/APIFuzzer/files/8473629/api_schema.yaml.txt)
**Software environment (please complete the following information):**
- OS: Ubuntu 20.04
- Python version: 3.8
- APIFuzzer Version: Latest from GitHub
| 2022-06-25T09:28:30 | 0.0 | [] | [] |
|||
SneaksAndData/adapta | SneaksAndData__adapta-43 | d44bff519935673b7123f864b8264922ea121026 | diff --git a/poetry.lock b/poetry.lock
index 5e7ad903..067bbebe 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -94,6 +94,22 @@ azure-common = ">=1.1,<2.0"
azure-mgmt-core = ">=1.3.0,<2.0.0"
msrest = ">=0.6.21"
+[[package]]
+name = "azure-servicebus"
+version = "7.3.4"
+description = "Microsoft Azure Service Bus Client Library for Python"
+category = "main"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+azure-common = ">=1.1,<2.0"
+azure-core = ">=1.14.0,<2.0.0"
+isodate = ">=0.6.0"
+msrest = ">=0.6.17,<2.0.0"
+six = ">=1.11.0"
+uamqp = ">=1.4.3,<2.0.0"
+
[[package]]
name = "azure-storage-blob"
version = "12.10.0"
@@ -591,6 +607,18 @@ category = "main"
optional = false
python-versions = ">=3.7"
+[[package]]
+name = "uamqp"
+version = "1.5.3"
+description = "AMQP 1.0 Client Library for Python"
+category = "main"
+optional = false
+python-versions = ">=3.6"
+
+[package.dependencies]
+certifi = ">=2017.4.17"
+six = ">=1.0,<2.0"
+
[[package]]
name = "urllib3"
version = "1.26.9"
@@ -615,7 +643,7 @@ python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[metadata]
lock-version = "1.1"
python-versions = "^3.8"
-content-hash = "a5670eca87d681003da9bbc587c5daab4fac8f7f3b52d71aa9e026db50132082"
+content-hash = "486e783d908c81a89dbf7eb5e6793bf5e55110ba193df74ea0e3e0a67d31727d"
[metadata.files]
astroid = [
@@ -650,6 +678,10 @@ azure-mgmt-storage = [
{file = "azure-mgmt-storage-19.1.0.zip", hash = "sha256:49ea22f00e0965a3550af34a41c1a1d3a481690f6500c78e85408802f56d7416"},
{file = "azure_mgmt_storage-19.1.0-py3-none-any.whl", hash = "sha256:61c7a55395e7410a24bfc8def353429eb772a105dd8268dce91f5ee38e4fc04e"},
]
+azure-servicebus = [
+ {file = "azure-servicebus-7.3.4.zip", hash = "sha256:b85cbbbbdfe49ac7ee832df5ec8d29b97810a71d5055918e2a45bd9cec11ce8d"},
+ {file = "azure_servicebus-7.3.4-py2.py3-none-any.whl", hash = "sha256:2b995c25b9a00fdab8a48537e6bf8c99e085f461c6ec8096a7e832ea9872d227"},
+]
azure-storage-blob = [
{file = "azure-storage-blob-12.10.0.zip", hash = "sha256:3c7dc2c93e7ff2a731acd66a36a1f0a6266072b4154deba4894dab891285ea3a"},
{file = "azure_storage_blob-12.10.0-py3-none-any.whl", hash = "sha256:a70995c4f9310eb704594f30505d1499286b4caac5543a2ebfe84431c4a38b0b"},
@@ -1032,6 +1064,33 @@ typing-extensions = [
{file = "typing_extensions-4.2.0-py3-none-any.whl", hash = "sha256:6657594ee297170d19f67d55c05852a874e7eb634f4f753dbd667855e07c1708"},
{file = "typing_extensions-4.2.0.tar.gz", hash = "sha256:f1c24655a0da0d1b67f07e17a5e6b2a105894e6824b92096378bb3668ef02376"},
]
+uamqp = [
+ {file = "uamqp-1.5.3-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ddc0bea1c8d8a7f23acc82b53dc5250638bc7b5c115dac175a67885bb65b04f2"},
+ {file = "uamqp-1.5.3-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f20c8f4598990c93c2ce509e3628ac558de94c5205ed458deb93a4f4d7b0672a"},
+ {file = "uamqp-1.5.3-cp310-cp310-win32.whl", hash = "sha256:dccd10ee8d08daa38464dd0437a676be3877cec597e51cea585f80aee437db84"},
+ {file = "uamqp-1.5.3-cp310-cp310-win_amd64.whl", hash = "sha256:80fe843c1da9bca6f06ca7aff49412451b083100ffa8a856f46773f0b6dd009a"},
+ {file = "uamqp-1.5.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:9b2876fd0df6b1bba6de4844dff9146d5f140b09f537678868d92466b1399f52"},
+ {file = "uamqp-1.5.3-cp36-cp36m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6c9420361df8a9ac3953df22d6645c5335a36147f2108c21cd08f047625405f6"},
+ {file = "uamqp-1.5.3-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:54ce94c200731cab1065981ece5da66edd203a16e982eb6c0e23877145bf2e58"},
+ {file = "uamqp-1.5.3-cp36-cp36m-win32.whl", hash = "sha256:b2fa093ed1b01b19cf6537a0707470c6a897877f9a7edbfcdf7353ebadd7367a"},
+ {file = "uamqp-1.5.3-cp36-cp36m-win_amd64.whl", hash = "sha256:0231853fe4beb7775e825bfa5a8dbcacb2fe5df9a196c7fc2b1646483b586dd9"},
+ {file = "uamqp-1.5.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:c7ac33b40dae123f9834682fa04b0311a5786ce4761bbef157e3d0597a65f8d6"},
+ {file = "uamqp-1.5.3-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:07abceeca0c1c3f28c9a20f7cdf985af981efdcbb6fa2bfd6a20c450082c9d56"},
+ {file = "uamqp-1.5.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:eb412690fd0e9537882d335f5b52741af8d2e7891309882eb07213e735755d56"},
+ {file = "uamqp-1.5.3-cp37-cp37m-win32.whl", hash = "sha256:9c06c831870b9f325f8ab157d5c2871db235fff70fa13cac078d96515a7d4d01"},
+ {file = "uamqp-1.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:09b31d935b8d5a128ec0f1e9455a73e0165652f654bfa4432a44a2698bb81cc3"},
+ {file = "uamqp-1.5.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:bb3245ed98e2eb4076955e3aa5271c9cc8f4269d6732051ae643f0c86f650e2d"},
+ {file = "uamqp-1.5.3-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:b2394ca17b722bb6cfddb0794699b9b427e0b3e93ed5ca42b0bee344506fe11f"},
+ {file = "uamqp-1.5.3-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:d39c68c6167f1c228cf0b376a487617b6204f4dbaa50aab654df90dbb549e8e2"},
+ {file = "uamqp-1.5.3-cp38-cp38-win32.whl", hash = "sha256:482080ec4cd873c35d67608b7ec7e1468f2636de233739328a87ad97604784bc"},
+ {file = "uamqp-1.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:f736af8bb213ad99219a03db5bae4001fdd86b5137d2f5ad1b0d5cec7b33a04e"},
+ {file = "uamqp-1.5.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9240730e62447e6c3f793e5773e5875903f264674edf0db64ed4cefd9c8e2790"},
+ {file = "uamqp-1.5.3-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:8eb2b060d4915847af2be24d82f5ec35772f98ec95f7cca546b246d113b17aa7"},
+ {file = "uamqp-1.5.3-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:dea5fc73cabebda56d334f92ec0d560551789559a8741813ebcd1f2e2b7416b8"},
+ {file = "uamqp-1.5.3-cp39-cp39-win32.whl", hash = "sha256:c46f731a993668b88b9b425619c8d774e3c4a315f71b51969840772d52e3ddfc"},
+ {file = "uamqp-1.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:546f9dcdd7610c0ee4a59aaf9124245988db44fbea417cbc495a18e1528f8437"},
+ {file = "uamqp-1.5.3.tar.gz", hash = "sha256:82e85f38cbdd742e04fa83a6be9786f06b1711fa8b7121d576d8f7c3c7c5d95b"},
+]
urllib3 = [
{file = "urllib3-1.26.9-py2.py3-none-any.whl", hash = "sha256:44ece4d53fb1706f667c9bd1c648f5469a2ec925fcf3a776667042d645472c14"},
{file = "urllib3-1.26.9.tar.gz", hash = "sha256:aabaf16477806a5e1dd19aa41f8c2b7950dd3c746362d7e3223dbe6de6ac448e"},
diff --git a/proteus/connectors/crystal/_connector.py b/proteus/connectors/crystal/_connector.py
index 55942f1d..d2b7fa4c 100644
--- a/proteus/connectors/crystal/_connector.py
+++ b/proteus/connectors/crystal/_connector.py
@@ -23,6 +23,12 @@ def __init__(self, *, base_url: str, user: Optional[str] = None, password: Optio
password = password if password is not None else os.environ.get('CRYSTAL_PASSWORD')
self.http.auth = HTTPBasicAuth(user, password)
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.dispose()
+
def create_run(self, algorithm: str, payload: Dict, api_version: str = "v1.1") -> str:
"""
Creates a Crystal job run against the latest API version.
@@ -91,3 +97,9 @@ def submit_result(self, result: AlgorithmRunResult, url: str) -> None:
# raise if not successful
run_response.raise_for_status()
+
+ def dispose(self) -> None:
+ """
+ Gracefully dispose object.
+ """
+ self.http.close()
diff --git a/proteus/connectors/service_bus/__init__.py b/proteus/connectors/service_bus/__init__.py
new file mode 100644
index 00000000..0c87d676
--- /dev/null
+++ b/proteus/connectors/service_bus/__init__.py
@@ -0,0 +1,3 @@
+"""init file"""
+
+from proteus.connectors.service_bus._connector import AzureServiceBusConnector
diff --git a/proteus/connectors/service_bus/_connector.py b/proteus/connectors/service_bus/_connector.py
new file mode 100644
index 00000000..b763dd11
--- /dev/null
+++ b/proteus/connectors/service_bus/_connector.py
@@ -0,0 +1,40 @@
+"""
+ Connector for Azure Service Bus.
+"""
+from typing import Optional
+import os
+from azure.servicebus import ServiceBusSender, ServiceBusClient, TransportType, ServiceBusMessage
+
+
+class AzureServiceBusConnector:
+ """
+ Connector for Azure Service Bus.
+ """
+ def __init__(self, conn_str: Optional[str] = None, queue_name: Optional[str] = None):
+ self.service_bus_client: ServiceBusClient = ServiceBusClient.from_connection_string(
+ conn_str=conn_str if conn_str is not None else os.environ['SERVICE_BUS_CONNECTION_STRING'],
+ transport_type=TransportType.Amqp,
+ )
+ self.sender: ServiceBusSender = self.service_bus_client.get_queue_sender(
+ queue_name=queue_name if queue_name is not None else os.environ['SERVICE_BUS_QUEUE']
+ )
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.dispose()
+
+ def send_message(self, message: str) -> None:
+ """
+ Send string message to service bus
+ """
+ sb_message = ServiceBusMessage(message)
+ self.sender.send_messages(sb_message)
+
+ def dispose(self) -> None:
+ """
+ Gracefully dispose object.
+ """
+ self.sender.close()
+ self.service_bus_client.close()
diff --git a/pyproject.toml b/pyproject.toml
index 3403796a..c67c4661 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -18,6 +18,7 @@ azure-storage-blob = ">12.7.0,<=12.10.0"
azure-mgmt-storage = "~19.1.0"
pandas = ">1.3,<1.5"
pyarrow = ">=7.0"
+azure-servicebus = "~7.3"
[tool.poetry.dev-dependencies]
pytest = "~6.2"
| [FEATURE] ServiceBusConnector
We will need a ServiceBusConnector using Azure Service Bus. We currently use the following code:
```python
from azure.servicebus import ServiceBusSender, ServiceBusClient, TransportType, ServiceBusMessage
def instantiate_service_bus_connection() -> ServiceBusSender:
"""
Instantiates service bus connection.
:return: Service bus connection instance.
"""
service_bus_client = ServiceBusClient.from_connection_string(
conn_str=os.environ['SERVICE_BUS_CONNECTION_STRING'],
transport_type=TransportType.Amqp,
)
sender = service_bus_client.get_queue_sender(queue_name=os.environ['SERVICE_BUS_QUEUE'])
return sender
def send_service_bus_response(input_: Schedule, sender: ServiceBusSender, service_bus_sas_uri: str, order_id: str) -> None:
"""
Sends SAS URI and order id as response to the service bus.
:param input_: Original input to the optimization.
:param sender: Service bus connection instance.
:param service_bus_sas_uri: URI pointing to the result of the optimization to be sent on the service bus.
:param order_id: The id of the order.
:return: None
"""
trigger_details = copy.deepcopy(input_.__dict__)
trigger_details.pop('network_id')
trigger_details.pop('assortment_id')
output = {
"orderId": order_id,
"networkId": input_.network_id,
"assortmentId": input_.assortment_id,
"triggerDetails": trigger_details,
"sasUri": service_bus_sas_uri
}
message = ServiceBusMessage(json.dumps(output))
sender.send_messages(message)
```
The latter function needs to be more generic; it is shown here mainly as a hint of the functionality we expect.
FYI @george-zubrienko
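For reference, once the connector from the diff above exists, usage could look roughly like this (illustrative sketch only; it assumes `SERVICE_BUS_CONNECTION_STRING` and `SERVICE_BUS_QUEUE` are set, or that `conn_str`/`queue_name` are passed explicitly):
```python
import json

from proteus.connectors.service_bus import AzureServiceBusConnector

# Example payload; field names and values here are placeholders.
payload = {"orderId": "1234", "sasUri": "https://example.invalid/result"}

# The connector implements the context-manager protocol, so the sender and
# client are disposed automatically on exit.
with AzureServiceBusConnector() as connector:
    connector.send_message(json.dumps(payload))
```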
| 2022-04-22T13:46:42 | 0.0 | [] | [] |
|||
NREL/mappymatch | NREL__mappymatch-86 | ca74b422c8e963790f04d3957b3bfee3988124c5 | diff --git a/mappymatch/matchers/lcss/lcss.py b/mappymatch/matchers/lcss/lcss.py
index db66a23..a950ddb 100644
--- a/mappymatch/matchers/lcss/lcss.py
+++ b/mappymatch/matchers/lcss/lcss.py
@@ -15,7 +15,12 @@
drop_stationary_points,
add_matches_for_stationary_points,
)
-from mappymatch.matchers.matcher_interface import *
+from mappymatch.matchers.matcher_interface import (
+ MatcherInterface,
+ MatchResult,
+ Trace,
+ List,
+)
from mappymatch.utils.crs import XY_CRS
log = logging.getLogger(__name__)
@@ -26,8 +31,10 @@ class LCSSMatcher(MatcherInterface):
A map matcher based on the paper:
Zhu, Lei, Jacob R. Holden, and Jeffrey D. Gonder.
- "Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data."
- Transportation Research Record: Journal of the Transportation Research Board 2645 (2017): 67-75.
+ "Trajectory Segmentation Map-Matching Approach for Large-Scale,
+ High-Resolution GPS Data."
+ Transportation Research Record: Journal of the Transportation Research
+ Board 2645 (2017): 67-75.
"""
@@ -141,7 +148,8 @@ def match_trace_batch(
processes: int = 1,
) -> List[MatchResult]:
"""
- match traces in batches; useful for large traces as the computational complexity of the scoring
+ match traces in batches; useful for large traces as the computational
+ complexity of the scoring
function is O(N^2)
:param trace_batch:
diff --git a/mappymatch/matchers/lcss/ops.py b/mappymatch/matchers/lcss/ops.py
index aa7c348..f964442 100644
--- a/mappymatch/matchers/lcss/ops.py
+++ b/mappymatch/matchers/lcss/ops.py
@@ -72,13 +72,12 @@ def new_path(
distance_epsilon: float,
) -> List[Road]:
"""
- Computes a shortest time and shortest distance path and returns the path that
- most closely matches the trace.
+ Computes a shortest time and shortest distance path and returns the path
+ that most closely matches the trace.
:param road_map:
:param trace:
:param distance_epsilon:
-
:return:
"""
if len(trace.coords) < 1:
@@ -124,7 +123,8 @@ def split_trajectory_segment(
:param trajectory_segment: the trajectory segment to split
:param distance_epsilon: the distance epsilon
- :return: a list of split segments or the original segment if it can't be split
+ :return: a list of split segments or the original segment if it can't be
+ split
"""
trace = trajectory_segment.trace
cutting_points = trajectory_segment.cutting_points
@@ -141,9 +141,6 @@ def _short_segment(ts: TrajectorySegment):
# no points to cut
return [trajectory_segment]
- o = trace.coords[0]
- d = trace.coords[-1]
-
new_paths = []
new_traces = []
@@ -266,8 +263,7 @@ def add_matches_for_stationary_points(
stationary_index: List[StationaryIndex],
) -> MatchResult:
"""
- takes a set of matches and adds duplicate match entries for stationary points
-
+ takes a set of matches and adds duplicate match entries for stationary
:param matches:
:param stationary_index:
| Apply Flake8 and resolve any issues
We currently have a CI check to run Flake8, but we need to go through the codebase and fix all the errors.
| 2022-05-03T18:23:09 | 0.0 | [] | [] |
|||
ViCCo-Group/thingsvision | ViCCo-Group__thingsvision-163 | 09e8a2ff9ed4d9c27496edb107dea85047b5c0bd | diff --git a/thingsvision/thingsvision.py b/thingsvision/thingsvision.py
index aea1226..4e440d0 100644
--- a/thingsvision/thingsvision.py
+++ b/thingsvision/thingsvision.py
@@ -45,6 +45,12 @@ def get_parsers():
help="""Device to use for the extractor. Options are 'cpu', 'cuda', or 'cuda:x',
where x is the GPU index. (default: cuda)""",
)
+ common_parser.add_argument(
+ "--model-parameters",
+ type=str,
+ default=None,
+ help="For CLIP, OpenCLIP, DreamSIM, and DINO models a model_parameters dictionary has to be defined. (default: None)",
+ )
subparsers = parent_parser.add_subparsers(
title="Subcommands",
@@ -93,12 +99,6 @@ def get_parsers():
parser_extract.add_argument(
"--flatten-acts", action="store_true", help="Flatten activations before saving them to disk."
)
- parser_extract.add_argument(
- "--model-parameters",
- type=str,
- default=None,
- help="For CLIP, OpenCLIP, DreamSIM, and DINO models a model_parameters dictionary has to be defined. (default: None)",
- )
parser_extract.add_argument(
"--module-name",
type=str,
@@ -159,7 +159,7 @@ def main():
)
if args.command == "show-model":
- extractor.show_model()
+ print(extractor.show_model())
sys.exit(1)
if args.command == "extract-features":
| small issues with CLI
When trying the commands from ["getting started"](https://thingsvision.github.io/GettingStarted.html) in the documentation, it looks to me like the parser does not properly find `model_parameters` when `show-model` is called.

Also, I think there is a small typo in the example calls, where "extract_features" should be "extract-features".
Best, Jannis
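For context on the fix in the diff above: moving shared options onto a common parent parser is what lets every subcommand, including `show-model`, accept `--model-parameters`. A minimal argparse sketch of that pattern (not the actual thingsvision CLI; the option value is made up):
```python
import argparse

# Options shared by all subcommands live on a parent parser.
common = argparse.ArgumentParser(add_help=False)
common.add_argument("--model-parameters", type=str, default=None)

parser = argparse.ArgumentParser(prog="thingsvision")
subparsers = parser.add_subparsers(dest="command")
subparsers.add_parser("show-model", parents=[common])
subparsers.add_parser("extract-features", parents=[common])

args = parser.parse_args(["show-model", "--model-parameters", '{"variant": "ViT-B/32"}'])
print(args.command, args.model_parameters)
```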
| 2024-04-15T02:21:55 | 0.0 | [] | [] |
|||
django-cms/djangocms-attributes-field | django-cms__djangocms-attributes-field-48 | ab462c027071bfbc4f4732e3e181b6bc386dbfa8 | diff --git a/djangocms_attributes_field/widgets.py b/djangocms_attributes_field/widgets.py
index e9f890d..2dad209 100644
--- a/djangocms_attributes_field/widgets.py
+++ b/djangocms_attributes_field/widgets.py
@@ -1,7 +1,7 @@
from django.forms import Widget
from django.forms.utils import flatatt
from django.utils.html import escape, mark_safe, strip_spaces_between_tags
-from django.utils.translation import ugettext as _
+from django.utils.translation import gettext as _
class AttributesWidget(Widget):
@@ -13,10 +13,11 @@ class AttributesWidget(Widget):
# https://www.huyng.com/posts/django-custom-form-widget-for-dictionary-and-tuple-key-value-pairs
def __init__(self, *args, **kwargs):
"""
- Supports additional kwargs: `key_attr` and `val_attr`.
+ Supports additional kwargs: `key_attr`, `val_attr`, `sorted`.
"""
self.key_attrs = kwargs.pop('key_attrs', {})
self.val_attrs = kwargs.pop('val_attrs', {})
+ self.sorted = sorted if kwargs.pop('sorted', True) else lambda x: x
super().__init__(*args, **kwargs)
def _render_row(self, key, value, field_name, key_attrs, val_attrs):
@@ -69,7 +70,7 @@ def render(self, name, value, attrs=None, renderer=None):
output = '<div class="djangocms-attributes-field">'
if value and isinstance(value, dict) and len(value) > 0:
- for key in sorted(value):
+ for key in self.sorted(value):
output += self._render_row(key, value[key], name, flatatt(self.key_attrs), flatatt(self.val_attrs))
# Add empty template
@@ -122,6 +123,9 @@ def render(self, name, value, attrs=None, renderer=None):
width: 75% !important;
float: none !important;
}
+ body:not(.djangocms-admin-style) .attributes-pair .field-box:first-child input {
+ width: calc(100% - 1.3em);
+ }
.djangocms-attributes-field .attributes-pair .attributes-value {
width: 60% !important;
width: -webkit-calc(100% - 54px) !important;
| Feat: Django 4 compatibility
No change in functionality; deprecated functions are replaced, e.g. `force_text` by `force_str` and `ugettext` by `gettext`.
To test against Django 4, the django CMS develop branch is pulled (which includes patches for Django 4).
| 2022-03-23T09:54:46 | 0.0 | [] | [] |
|||
ASPP/pelita | ASPP__pelita-807 | af6d069eb4b44fdc6a0b1bc3f6c87d17ade5333d | diff --git a/pelita/scripts/pelita_server.py b/pelita/scripts/pelita_server.py
index b1f9cdc6f..70ead7403 100755
--- a/pelita/scripts/pelita_server.py
+++ b/pelita/scripts/pelita_server.py
@@ -280,7 +280,8 @@ def handle_new_connection(self, dealer_id, message, progress):
avaliable_teams = {}
for team_info in self.team_infos:
# we construct the url from the url that reached us
- full_url = f"{requested_url.scheme}://{requested_url.hostname}/{team_info.server_path}"
+ port = "" if requested_url.port == PELITA_PORT or requested_url.port is None else f":{requested_url.port}"
+ full_url = f"{requested_url.scheme}://{requested_url.hostname}{port}/{team_info.server_path}"
avaliable_teams[full_url] = team_info.team_name
avaliable_teams_json = json.dumps(avaliable_teams).encode("utf8")
| Server does not return port with the team spec
When using a non-standard port for `pelita://`, the scan does not work because the port is not returned from the server.
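To illustrate what the patch above does, a small standalone sketch of the URL construction (the default-port value here is a made-up placeholder, not pelita's actual constant):
```python
from urllib.parse import urlparse

PELITA_PORT = 41736  # placeholder default port for this example


def full_url(requested, server_path):
    u = urlparse(requested)
    port = "" if u.port in (PELITA_PORT, None) else f":{u.port}"
    return f"{u.scheme}://{u.hostname}{port}/{server_path}"


print(full_url("pelita://example.org:12345/", "groupA"))
# pelita://example.org:12345/groupA  -- non-standard port is now preserved
```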
| 2024-07-05T08:19:22 | 0.0 | [] | [] |
|||
popsim-consortium/demes-python | popsim-consortium__demes-python-453 | bd4c588d74805f6089766fff8f84db9f3ca7d331 | diff --git a/demes/__main__.py b/demes/__main__.py
index eea81ec..062eeab 100644
--- a/demes/__main__.py
+++ b/demes/__main__.py
@@ -39,7 +39,7 @@ def __init__(self, subparsers):
help=(
"Output ms command line arguments, using the given reference "
"population size (N0) to translate into coalescent units "
- "(see the 'ms' subcommand for interpretation of this value)."
+ "(see the 'ms' subcommand for interpretation of this value). "
"The sampling configuration in the output will need editing "
"prior to simulation. The order of deme IDs matches the "
"order of demes in the input model. "
@@ -61,9 +61,8 @@ def __init__(self, subparsers):
"and how simplification is performed, may change over time. "
"Thus users should not rely on details of the output such as "
"presence or absence of specific fields, or other details "
- "that do not alter how the model is resolved into a "
- "fully-qualified model. "
- "A fully-qualified model is output by default."
+ "that do not alter how the model is resolved. "
+ "A fully-resolved model is output by default."
),
)
parser.add_argument(
diff --git a/demes/demes.py b/demes/demes.py
index 3693074..87d0d34 100644
--- a/demes/demes.py
+++ b/demes/demes.py
@@ -1,3 +1,4 @@
+from __future__ import annotations
import copy
import collections
import itertools
@@ -176,18 +177,29 @@ def insert_defaults(data, defaults):
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class Epoch:
"""
- Population size parameters for a deme in a specified time period.
- Times follow the forwards-in-time convention (time values increase
- from the present towards the past). The start time of the epoch is
- the more ancient time, and the end time is more recent, so that the
- start time must be greater than the end time
-
- :ivar float start_time: The start time of the epoch.
- :ivar float ~.end_time: The end time of the epoch (must be specified).
- :ivar float start_size: Population size at ``start_time``.
- :ivar float end_size: Population size at ``end_time``.
+ Population parameters for a deme in a specified time interval.
+
+ An epoch spans the open-closed time interval ``(start_time, end_time]``,
+ where ``start_time`` is the more ancient time,
+ and ``end_time`` is more recent.
+ Time values increase from the present towards the past,
+ and ``start_time`` is strictly greater than ``end_time``.
+
+ Epoch objects are not intended to be constructed directly.
+
+ :ivar float start_time:
+ The start time of the epoch.
+ This value is greater than zero and may be infinity.
+ :ivar float end_time:
+ The end time of the epoch.
+ This value is greater than or equal to zero and finite.
+ :ivar float start_size:
+ Population size at ``start_time``.
+ :ivar float end_size:
+ Population size at ``end_time``.
If ``start_size != end_size``, the population size changes
- monotonically between the start and end times.
+ between the start and end times according to the
+ ``size_function``.
:ivar str size_function: The size change function. This is either
``constant``, ``exponential`` or ``linear``, though it is possible
that additional values will be added in the future.
@@ -243,6 +255,8 @@ def __attrs_post_init__(self):
def time_span(self):
"""
The time span of the epoch.
+
+ :rtype: float
"""
return self.start_time - self.end_time
@@ -335,15 +349,18 @@ def isclose(
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class AsymmetricMigration:
"""
- Parameters for continuous asymmetric migration.
+ Continuous asymmetric migration.
+
The source and destination demes follow the forwards-in-time convention,
where migrants are born in the source deme and (potentially) have children
in the dest deme.
+ AsymmetricMigration objects are not intended to be constructed directly.
+
:ivar str source: The source deme for asymmetric migration.
:ivar str dest: The destination deme for asymmetric migration.
:ivar float start_time: The time at which the migration rate is activated.
- :ivar float ~.end_time: The time at which the migration rate is deactivated.
+ :ivar float end_time: The time at which the migration rate is deactivated.
:ivar float rate: The rate of migration per generation.
"""
@@ -430,15 +447,19 @@ def isclose(
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class Pulse:
"""
- Parameters for a pulse of migration from one deme to another.
+ An instantaneous pulse of migration from one deme to another.
+
Source and destination demes follow the forwards-in-time convention,
- of migrations born in the source deme having children in the dest
- deme. If more than one source is given, migration is concurrent, so that
- the sum of the migrant proportions must sum to less than one.
+ where migrants are born in a source deme and (potentially) have children
+ in the dest deme.
+ If more than one source is given, migration is concurrent,
+ and the sum of the migrant proportions sums to less than or equal to one.
+
+ Pulse objects are not intended to be constructed directly.
:ivar list(str) sources: The source deme(s).
:ivar str dest: The destination deme.
- :ivar float ~.time: The time of migration.
+ :ivar float time: The time of migration.
:ivar list(float) proportions: Immediately following migration, the proportion(s)
of individuals in the destination deme made up of migrant individuals or
having parents from the source deme(s).
@@ -557,14 +578,16 @@ def isclose(
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class Split:
"""
- Parameters for a split event, in which a deme ends at a given time and
+ A split event, in which a deme ends at a given time and
contributes ancestry to an arbitrary number of descendant demes. Note
that there could be just a single descendant deme, in which case ``split``
- is a bit of a misnomer...
+ is a bit of a misnomer.
+
+ Split objects are not intended to be constructed directly.
:ivar str parent: The parental deme.
:ivar list[str] children: A list of descendant demes.
- :ivar float ~.time: The split time.
+ :ivar float time: The split time.
"""
parent: Name = attr.ib(
@@ -655,12 +678,14 @@ def isclose(
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class Branch:
"""
- Parameters for a branch event, where a new deme branches off from a parental
+ A branch event, where a new deme branches off from a parental
deme. The parental deme need not end at that time.
+ Branch objects are not intended to be constructed directly.
+
:ivar str parent: The parental deme.
:ivar str child: The descendant deme.
- :ivar float ~.time: The branch time.
+ :ivar float time: The branch time.
"""
parent: Name = attr.ib(
@@ -738,14 +763,16 @@ def isclose(
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class Merge:
"""
- Parameters for a merge event, in which two or more demes end at some time and
+ A merge event, in which two or more demes end at some time and
contribute to a descendant deme.
+ Merge objects are not intended to be constructed directly.
+
:ivar list[str] parents: A list of parental demes.
:ivar list[float] proportions: A list of ancestry proportions,
in order of ``parents``.
:ivar str child: The descendant deme.
- :ivar float ~.time: The merge time.
+ :ivar float time: The merge time.
"""
parents: List[Name] = attr.ib(
@@ -859,14 +886,16 @@ def isclose(
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class Admix:
"""
- Parameters for an admixture event, where two or more demes contribute ancestry
+ An admixture event, where two or more demes contribute ancestry
to a new deme.
+ Admix objects are not intended to be constructed directly.
+
:ivar list[str] parents: A list of source demes.
:ivar list[float] proportions: A list of ancestry proportions,
in order of ``parents``.
:ivar str child: The admixed deme.
- :ivar float ~.time: The admixture time.
+ :ivar float time: The admixture time.
"""
parents: List[Name] = attr.ib(
@@ -984,16 +1013,28 @@ class Deme:
"""
A collection of individuals that have a common set of population parameters.
- :ivar str name: A concise string that identifies the deme.
- :ivar str description: A description of the deme.
- :ivar float start_time: The time at which the deme begins to exist.
- :ivar list[str] ancestors: List of deme names for the deme's ancestors.
- This may be ``None``, indicating the deme has no ancestors.
- :ivar list[float] proportions: If ``ancestors`` is not ``None``,
- this indicates the proportions of ancestry from each ancestor.
- This list has the same length as ``ancestors``, and must sum to 1.
- :ivar list[Epoch] epochs: A list of epochs, which define the population
- size(s) of the deme. The deme must be created with all epochs listed.
+
+ Deme objects are not intended to be constructed directly.
+
+ :ivar str name:
+ A concise string that identifies the deme.
+ :ivar str description:
+ A description of the deme.
+ :ivar float start_time:
+ The time at which the deme begins to exist.
+ :ivar list[str] ancestors:
+ List of deme names for the deme's ancestors.
+ :ivar list[float] proportions:
+ The proportions of ancestry from each ancestor,
+ ordered to correspond with the same order as the ancestors
+ list.
+ If there are one or more ancestors, the proportions sum to 1.
+ :ivar list[Epoch] epochs:
+ A list of epochs that span the time interval over which the
+ deme exists. Epoch time intervals are non-overlapping,
+ completely cover the deme's existence time interval,
+ and are listed in time-descending order (from the past
+ towards the present).
"""
name: Name = attr.ib(validator=[attr.validators.instance_of(str), valid_deme_name])
@@ -1188,6 +1229,8 @@ def isclose(
def end_time(self):
"""
The end time of the deme's existence.
+
+ :rtype: float
"""
return self.epochs[-1].end_time
@@ -1195,6 +1238,8 @@ def end_time(self):
def time_span(self):
"""
The time span over which the deme exists.
+
+ :rtype: float
"""
return self.start_time - self.end_time
@@ -1239,29 +1284,45 @@ def size_at(self, time: float) -> float:
@attr.s(auto_attribs=True, kw_only=True, slots=True)
class Graph:
"""
- The Graph class provides a high-level API for working with a demographic
- model. A Graph object matches Demes' data model, with a small number of
+ The Graph class is a resolved and validated representation of a
+ demographic model.
+
+ A Graph object matches Demes' :ref:`spec:sec_spec_mdm`, with a small number of
additional redundant attributes that make the Graph a more convenient
object to use when inspecting a model's properties.
-
- :ivar str description: A human readable description of the demography.
- :ivar str time_units: The units of time used for the demography. This is
+ Graph objects are not intended to be constructed directly---demographic
+ models should instead be :func:`loaded from a YAML document <demes.load>`,
+ or constructed programmatically using the :class:`Builder API <demes.Builder>`.
+
+ A demographic model can be thought of as an acyclic directed graph,
+ where each deme is a vertex and each ancestor/descendant relationship
+ is a directed edge. See the :meth:`predecessors` and :meth:`successors`
+ methods for conversion to the `NetworkX <https://networkx.org/>`_
+ graphical representation.
+
+ :ivar str description:
+ A human readable description of the demography.
+ :ivar str time_units:
+ The units of time used for the demography. This is
commonly ``years`` or ``generations``, but can be any string.
This field is intended to be useful for documenting a demography,
but the actual value provided here should not be relied upon.
- :ivar float generation_time: The generation time of demes, in units given
+ :ivar float generation_time:
+ The generation time of demes, in units given
by the ``time_units`` parameter. Concretely, dividing all times
by ``generation_time`` will convert the graph to have time
- units in generations. If ``generation_time`` is ``None``, the units
- are assumed to be in generations already.
+ units in generations.
See also: :meth:`.in_generations`.
- :ivar list[str] doi: If the graph describes a published demography,
- the DOI(s) should be be given here as a list.
- :ivar dict metadata: A dictionary of arbitrary additional data.
- :ivar list[Deme] demes: The demes in the demography.
- :ivar list[AsymmetricMigration] migrations: The continuous migrations for
- the demographic model.
- :ivar list[Pulse] pulses: The migration pulses for the demography.
+ :ivar list[str] doi:
+ A list of publications that describe the demographic model.
+ :ivar dict metadata:
+ A dictionary of arbitrary additional data.
+ :ivar list[Deme] demes:
+ The demes in the demographic model.
+ :ivar list[AsymmetricMigration] migrations:
+ The continuous asymmetric migrations for the demographic model.
+ :ivar list[Pulse] pulses:
+ The instantaneous pulse migrations for the demographic model.
"""
description: str = attr.ib(default="", validator=attr.validators.instance_of(str))
@@ -1912,9 +1973,13 @@ def discrete_demographic_events(self) -> Dict[str, List[Any]]:
)
return demo_events
- def in_generations(self) -> "Graph":
+ def in_generations(self) -> Graph:
"""
Return a copy of the graph with times in units of generations.
+
+ :return:
+ A demographic model with ``time_units`` in `"generations"`.
+ :rtype: Graph
"""
graph = copy.deepcopy(self)
assert graph.generation_time is not None
@@ -1933,9 +1998,16 @@ def in_generations(self) -> "Graph":
return graph
@classmethod
- def fromdict(cls, data: MutableMapping[str, Any]) -> "Graph":
+ def fromdict(cls, data: MutableMapping[str, Any]) -> Graph:
"""
- Return a graph from a dict representation. The inverse of :meth:`.asdict`.
+ Return a graph from a data dictionary.
+
+ :param dict data:
+ A data dictionary following either the
+ :ref:`spec:sec_spec_hdm` or the :ref:`spec:sec_spec_mdm`.
+ :return:
+ A resolved and validated demographic model.
+ :rtype: Graph
"""
if not isinstance(data, MutableMapping):
raise TypeError("data is not a dictionary")
@@ -2253,6 +2325,10 @@ def fromdict(cls, data: MutableMapping[str, Any]) -> "Graph":
def asdict(self, keep_empty_fields=True) -> MutableMapping[str, Any]:
"""
Return a fully-resolved dict representation of the graph.
+
+ :return:
+ A data dictionary following the :ref:`spec:sec_spec_mdm`.
+ :rtype: dict
"""
def filt(attrib, value):
@@ -2285,6 +2361,10 @@ def coerce_numbers(inst, attribute, value):
def asdict_simplified(self) -> MutableMapping[str, Any]:
"""
Return a simplified dict representation of the graph.
+
+ :return:
+ A data dictionary following the :ref:`spec:sec_spec_hdm`.
+ :rtype: dict
"""
def simplify_epochs(data):
@@ -2417,18 +2497,24 @@ def collapse_demes(pairs):
class Builder:
"""
- The Builder provides a set of convenient methods for incrementally
- constructing a deme graph. The state of the graph is stored internally as a
- dictionary of objects following Demes' data model, and may be converted
- into a fully-resolved :class:`Graph` object using the :meth:`.resolve()` method.
+ The Builder class provides a set of convenient methods for
+ incrementally constructing a demographic model.
+
+ The state of the demographic model is stored internally as a dictionary
+ of objects following Demes' :ref:`spec:sec_spec_hdm`.
+ The content of this dictionary is *not* resolved and is *not* verified.
+ The Builder object may be converted into a resolved and validated
+ :class:`Graph` object using the :meth:`.resolve()` method.
- :ivar dict data: The data dictionary of the graph's current state.
- The objects nested within this dictionary follow Demes' data model,
- as described in the :ref:`spec:sec_spec`.
+ :ivar dict data:
+ The data dictionary of the demographic model's current state.
+ The objects nested within this dictionary should follow
+ Demes' data model, as described in the :ref:`spec:sec_spec_hdm` schema.
.. note::
- Users may freely modify the data dictionary, as long as the data
- model is not violated.
+ Users may freely modify the data dictionary, such as temporarily
+ adding or deleting fields, as long as the :ref:`spec:sec_spec_hdm`
+ is not violated when the :meth:`.resolve` method is called.
"""
def __init__(
@@ -2442,19 +2528,22 @@ def __init__(
metadata: dict = None,
):
"""
- :param str description: A human readable description of the demography.
- May be ``None``.
- :param str time_units: The units of time used for the demography. This is
+ :param str description:
+ A human readable description of the demography.
+ :param str time_units:
+ The units of time used for the demography. This is
commonly ``years`` or ``generations``, but can be any string.
- :param float generation_time: The generation time of demes, in units given
- by the ``time_units`` parameter. Concretely, dividing all times
- by ``generation_time`` will convert the graph to have time
- units in generations. If ``generation_time`` is ``None``, the units
- are assumed to be in generations already.
- :param doi: If the graph describes a published demography, the DOI(s)
+ :param float generation_time:
+ The generation time of demes, in units given
+ by the ``time_units`` parameter.
+ :param list[str] doi:
+ If the graph describes a published demography, the DOI(s)
should be be given here as a list.
- :type doi: list[str]
- :param dict metadata: A dictionary of arbitrary additional data.
+ :param dict defaults:
+ A dictionary of default values, following the
+ :ref:`spec:sec_spec_hdm` schema for defaults.
+ :param dict metadata:
+ A dictionary of arbitrary additional data.
"""
self.data: MutableMapping[str, Any] = dict(time_units=time_units)
if description is not None:
@@ -2480,18 +2569,24 @@ def add_deme(
defaults: dict = None,
):
"""
- Add a deme. Ancestor demes must be added before their children.
+ Append a deme to the "demes" list field of the data dictionary.
+
+ If the data dictionary doesn't contain the "demes" field,
+ it will be added.
:param str name: A string identifier for the deme.
- :param str description: A description of the deme. May be ``None``.
+ :param str description: A description of the deme.
:param list[str] ancestors: List of deme names for the deme's ancestors.
- This may be ``None``, indicating the deme has no ancestors.
- :param list[float] proportions: If ``ancestors`` is not ``None``,
- this indicates the proportions of ancestry from each ancestor.
+ :param list[float] proportions:
+ The proportions of ancestry from each ancestor.
This list has the same length as ``ancestors``, and must sum to 1.
:param float start_time: The deme's start time.
- :param list[dict] epochs: List of epoch dictionaries. Each dictionary
- follows the data model for an epoch.
+ :param list[dict] epochs:
+ List of epoch dictionaries. Each dictionary
+ follows the :ref:`spec:sec_spec_hdm` schema for an epoch object.
+ :param dict defaults:
+ A dictionary of default deme values, following the
+ :ref:`spec:sec_spec_hdm` schema for deme defaults.
"""
deme: MutableMapping[str, Any] = dict(name=name)
if description is not None:
@@ -2527,25 +2622,35 @@ def add_migration(
end_time: float = None,
):
"""
- Add continuous symmetric migrations between all pairs of demes in a list,
- or alternately, add asymmetric migration from one deme to another.
- Source and destination demes follow the forwards-in-time convention,
- so that the migration rate refers to the movement of individuals from
- the ``source`` deme to the ``dest`` deme.
-
- :param list[str] demes: list of deme names. Migration is symmetric
- between all pairs of demes in this list. If not specified,
- migration will be asymmetric (and ``source`` and ``dest`` must
- be given).
- :param str source: The name of the source deme.
- :param str dest: The name of the destination deme.
- :param float rate: The rate of migration per generation.
- :param float start_time: The time at which the migration rate is enabled.
- If ``None``, the start time is defined by the earliest time at
- which the demes coexist.
- :param float end_time: The time at which the migration rate is disabled.
- If ``None``, the end time is defined by the latest time at which
- the demes coexist.
+ Append a period of continuous migration to the "migrations" list field
+ of the data dictionary.
+
+ If the data dictionary doesn't contain the "migrations" field,
+ it will be added.
+ Continuous migrations may be either symmetric or asymmetric.
+ For symmetric migrations, a list of deme names must be provided in the
+ ``demes`` field, and the ``source`` and ``dest`` fields must not
+ be used.
+ For asymmetric migrations, the ``source`` and ``dest`` fields must
+ be provided and the ``demes`` field must not be used.
+ Source and destination demes refer to individuals migrating
+ forwards in time.
+
+ :param list[str] demes:
+ List of deme names. If specified, migration is symmetric
+ between all pairs of demes in this list.
+ :param str source:
+ The name of the source deme. If specified, migration is asymmetric
+ from this deme.
+ :param str dest:
+ The name of the destination deme. If specified, migration is
+ asymmetric into this deme.
+ :param float rate:
+ The rate of migration per generation.
+ :param float start_time:
+ The time at which the migration rate is enabled.
+ :param float end_time:
+ The time at which the migration rate is disabled.
"""
migration: MutableMapping[str, Any] = dict()
if rate is not None:
@@ -2576,15 +2681,24 @@ def add_pulse(
time: float = None,
):
"""
- Add a pulse of migration at a fixed time.
- Source and destination demes follow the forwards-in-time convention.
-
- :param list(str) source: The name of the source deme(s).
- :param str dest: The name of the destination deme.
- :param list(float) proportion: At the instant after migration, this is the
- expected proportion(s) of individuals in the destination deme made up
+ Append a pulse of migration at a fixed time to the "pulses" list
+ field of the data dictionary.
+
+ If the data dictionary doesn't contain the "pulses" field,
+ it will be added.
+ Source and destination demes refer to individuals migrating
+ forwards in time.
+
+ :param list(str) sources:
+ A list of names of the source deme(s).
+ :param str dest:
+ The name of the destination deme.
+ :param list(float) proportion:
+ At the instant after migration, this is the expected proportion(s)
+ of individuals in the destination deme made up
of individuals from the source deme(s).
- :param float time: The time at which migrations occur.
+ :param float time:
+ The time at which migrations occur.
"""
pulse: MutableMapping[str, Any] = dict()
if sources is not None:
@@ -2602,7 +2716,7 @@ def add_pulse(
def resolve(self):
"""
- Resolve the data dictionary into a Graph.
+ Resolve the Builder's data dictionary into a Graph.
:return: The fully-resolved Graph.
:rtype: Graph
@@ -2614,9 +2728,11 @@ def fromdict(cls, data: MutableMapping[str, Any]) -> "Builder":
"""
Make a Builder object from an existing data dictionary.
- :param MutableMapping data: The data dictionary to initialise the
- graph's state. The objects nested within this dictionary must
- follow Demes' data model, as described in the :ref:`spec:sec_spec`.
+ :param MutableMapping data:
+ The data dictionary to initialise the Builder's state.
+ The objects nested within this dictionary should follow
+ Demes' :ref:`spec:sec_spec_hdm`, but see the note for
+ :attr:`.Builder.data`.
:return: The new Builder object.
:rtype: Builder
diff --git a/demes/load_dump.py b/demes/load_dump.py
index adcb052..31ec356 100644
--- a/demes/load_dump.py
+++ b/demes/load_dump.py
@@ -1,6 +1,7 @@
"""
Functions to load and dump graphs in YAML and JSON formats.
"""
+from __future__ import annotations
import contextlib
import json
import io
@@ -146,8 +147,11 @@ def assert_no_nulls(d):
def loads_asdict(string, *, format="yaml") -> MutableMapping[str, Any]:
"""
Load a YAML or JSON string into a dictionary of nested objects.
- The keywords and structure of the input are defined by the
- :ref:`spec:sec_spec`.
+
+ The input is *not* resolved, and is *not* validated.
+ The returned object may be converted into a :class:`Builder`
+ using :meth:`Builder.fromdict` or converted into a :class:`Graph`
+ using :meth:`Graph.fromdict`.
:param str string: The string to be loaded.
:param str format: The format of the input string. Either "yaml" or "json".
@@ -162,8 +166,11 @@ def loads_asdict(string, *, format="yaml") -> MutableMapping[str, Any]:
def load_asdict(filename, *, format="yaml") -> MutableMapping[str, Any]:
"""
Load a YAML or JSON file into a dictionary of nested objects.
- The keywords and structure of the input are defined by the
- :ref:`spec:sec_spec`.
+
+ The input is *not* resolved, and is *not* validated.
+ The returned object may be converted into a :class:`Builder`
+ using :meth:`Builder.fromdict` or converted into a :class:`Graph`
+ using :meth:`Graph.fromdict`.
:param filename: The path to the file to be loaded, or a file-like object
with a ``read()`` method.
@@ -191,7 +198,7 @@ def load_asdict(filename, *, format="yaml") -> MutableMapping[str, Any]:
return data
-def loads(string, *, format="yaml") -> "demes.Graph":
+def loads(string, *, format="yaml") -> demes.Graph:
"""
Load a graph from a YAML or JSON string.
The keywords and structure of the input are defined by the
@@ -199,14 +206,14 @@ def loads(string, *, format="yaml") -> "demes.Graph":
:param str string: The string to be loaded.
:param str format: The format of the input string. Either "yaml" or "json".
- :return: A graph.
- :rtype: .Graph
+ :return: A resolved and validated demographic model.
+ :rtype: demes.Graph
"""
data = loads_asdict(string, format=format)
return demes.Graph.fromdict(data)
-def load(filename, *, format="yaml") -> "demes.Graph":
+def load(filename, *, format="yaml") -> demes.Graph:
"""
Load a graph from a YAML or JSON file.
The keywords and structure of the input are defined by the
@@ -216,14 +223,14 @@ def load(filename, *, format="yaml") -> "demes.Graph":
with a ``read()`` method.
:type filename: Union[str, os.PathLike, FileLike]
:param str format: The format of the input file. Either "yaml" or "json".
- :return: A graph.
- :rtype: .Graph
+ :return: A resolved and validated demographic model.
+ :rtype: demes.Graph
"""
data = load_asdict(filename, format=format)
return demes.Graph.fromdict(data)
-def load_all(filename) -> Generator["demes.Graph", None, None]:
+def load_all(filename) -> Generator[demes.Graph, None, None]:
"""
Generate graphs from a YAML document stream. Documents must be separated by
the YAML document start indicator, ``---``.
@@ -233,7 +240,7 @@ def load_all(filename) -> Generator["demes.Graph", None, None]:
:param filename: The path to the file to be loaded, or a file-like object
with a ``read()`` method.
:type filename: Union[str, os.PathLike, FileLike]
- :return: A generator of graphs.
+ :return: A generator of resolved and validated demographic models.
:rtype: Generator[demes.Graph, None, None]
"""
with _open_file_polymorph(filename) as f:
@@ -255,10 +262,13 @@ def dumps(graph, *, format="yaml", simplified=True) -> str:
The keywords and structure of the output are defined by the
:ref:`spec:sec_spec`.
- :param .Graph graph: The graph to dump.
+ :param demes.Graph graph: The graph to dump.
:param str format: The format of the output file. Either "yaml" or "json".
- :param bool simplified: If True, returns a simplified graph. If False, returns
- a fully-qualified graph.
+ :param bool simplified:
+ If True, returns a string following the :ref:`spec:sec_spec_hdm`,
+ which has many fields omitted and is thus more compact.
+ If False, returns a string that is fully-resolved following the
+ :ref:`spec:sec_spec_mdm`.
:return: The YAML or JSON string.
:rtype: str
"""
@@ -274,13 +284,16 @@ def dump(graph, filename, *, format="yaml", simplified=True) -> None:
The keywords and structure of the output are defined by the
:ref:`spec:sec_spec`.
- :param .Graph graph: The graph to dump.
+ :param demes.Graph graph: The graph to dump.
:param filename: Path to the output file, or a file-like object with a
``write()`` method.
:type filename: Union[str, os.PathLike, FileLike]
:param str format: The format of the output file. Either "yaml" or "json".
- :param bool simplified: If True, outputs a simplified graph. If False, outputs
- a fully-qualified graph.
+ :param bool simplified:
+ If True, the output file follows the :ref:`spec:sec_spec_hdm`,
+ which has many fields omitted and is thus more compact.
+ If False, the output file is fully-resolved and follows the
+ :ref:`spec:sec_spec_mdm`.
"""
if simplified:
data = graph.asdict_simplified()
@@ -306,8 +319,11 @@ def dump_all(graphs, filename, *, simplified=True) -> None:
:param filename: Path to the output file, or a file-like object with a
``write()`` method.
:type filename: Union[str, os.PathLike, FileLike]
- :param bool simplified: If True, outputs simplified graphs. If False, outputs
- fully-qualified graphs.
+ :param bool simplified:
+ If True, the output file follows the :ref:`spec:sec_spec_hdm`,
+ which has many fields omitted and is thus more compact.
+ If False, the output file is fully-resolved and follows the
+ :ref:`spec:sec_spec_mdm`.
"""
with _open_file_polymorph(filename, "w") as f:
for graph in graphs:
diff --git a/demes/ms.py b/demes/ms.py
index ec49493..b43522d 100644
--- a/demes/ms.py
+++ b/demes/ms.py
@@ -861,6 +861,20 @@ def from_ms(
\\text{migration rate (per generation)} &= \\frac{M}{4 N_0}
+
+ .. warning::
+
+ Several programs exist that implement an ms or ms-like
+ command line interface. But discrepancies do exist
+ between ms and its clones for some command line arguments,
+ and such discrepancies alter the interpretation of the
+ demographic model.
+ This function implements the behaviour as described in
+ Hudson's ms manual. Command lines constructed for use with
+ other programs may work as desired, but users are cautioned to
+ carefully check the appropriate documentation to identify where
+ discrepancies may exist.
+
:param str command: The ms command line.
:param float N0:
The reference population size (:math:`N_0`) used to translate
@@ -903,10 +917,12 @@ def to_ms(graph: demes.Graph, *, N0, samples=None) -> str:
The reference population size used to translate into coalescent units.
See :func:`from_ms` for details.
:param list(int) samples:
- Sampling scheme that will be used with the '-I' option. This is ignored
- for graphs with only one deme. If not specified, the sampling
- configuration in the returned string will need editing prior to
- simulation.
+ A list of haploid sample numbers, one for each deme, in the same
+ order as the order of demes in the graph.
+ The parameter is ignored for demographic models with only one deme.
+ The given sampling scheme will be used with the ``-I`` argument.
+ If not specified, the sampling configuration in the returned string
+ will need editing prior to use.
:return: The ms command line.
:rtype: str
"""
diff --git a/docs/api.md b/docs/api.md
index a7dc602..16a62c4 100644
--- a/docs/api.md
+++ b/docs/api.md
@@ -19,7 +19,9 @@
```{eval-rst}
.. autoclass:: demes.Builder
- :members:
+ :members:
+
+ .. automethod:: __init__
```
## Working with Demes graphs
diff --git a/docs/introduction.md b/docs/introduction.md
index 6ee6c52..857c0e3 100644
--- a/docs/introduction.md
+++ b/docs/introduction.md
@@ -69,5 +69,5 @@ import demes
import demesdraw
graph = demes.load("../examples/two_epoch.yaml")
-demesdraw.tubes(graph, inf_ratio=0.4);
+demesdraw.tubes(graph)
```
diff --git a/docs/quickstart.md b/docs/quickstart.md
index 504d9bf..a828fb6 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -164,10 +164,7 @@ a useful way to check that the model is what we expect.
```{code-cell}
import demesdraw
-w = 2.0 * demesdraw.utils.size_max(ring_graph)
-# Manually set x-axis coordinates for each deme, to improve spacing.
-positions = {deme.name: j * w for j, deme in enumerate(ring_graph.demes)}
-demesdraw.tubes(ring_graph, positions=positions);
+demesdraw.tubes(ring_graph)
```
## Saving a Demes graph
| Graph docs should tell users not to instantiate a Graph directly.
The Graph docs should be clear that users can inspect a Graph, but should construct one by loading a YAML document with demes.load() or by using demes.Builder().
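As a rough sketch of the workflow the docs could point users to (only `demes.load`, `demes.Builder`, `add_deme`, and `resolve` are taken from the patch above; the toy deme names, sizes, and the `start_size` epoch key are illustrative assumptions, not repository code):

```python
# Minimal sketch: build a Graph with demes.Builder and resolve() it,
# or load an already-written YAML model with demes.load.
import demes

b = demes.Builder(description="toy model", time_units="generations")
b.add_deme("ancestral", epochs=[dict(start_size=1000)])  # epoch dict per the HDM schema (assumed)
b.add_deme(
    "derived",
    ancestors=["ancestral"],
    start_time=500,
    epochs=[dict(start_size=200)],
)
graph = b.resolve()  # a resolved and validated demes.Graph

# Equivalent route for an existing model file (hypothetical filename):
# graph = demes.load("model.yaml")
```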
| 2022-05-24T13:51:09 | 0.0 | [] | [] |
|||
theriverman/django-minio-backend | theriverman__django-minio-backend-31 | 3e34707e468b307745fcb136c02f90450eb5261f | diff --git a/django_minio_backend/models.py b/django_minio_backend/models.py
index 55f25cf..52f5065 100644
--- a/django_minio_backend/models.py
+++ b/django_minio_backend/models.py
@@ -89,7 +89,7 @@ def __init__(self,
self.__MINIO_EXTERNAL_ENDPOINT: str = get_setting("MINIO_EXTERNAL_ENDPOINT", self.__MINIO_ENDPOINT)
self.__MINIO_ACCESS_KEY: str = get_setting("MINIO_ACCESS_KEY")
self.__MINIO_SECRET_KEY: str = get_setting("MINIO_SECRET_KEY")
- self.__MINIO_USE_HTTPS: bool = get_setting("MINIO_USE_HTTPS")
+ self.__MINIO_USE_HTTPS: bool = get_setting("MINIO_USE_HTTPS", False)
self.__MINIO_EXTERNAL_ENDPOINT_USE_HTTPS: bool = get_setting("MINIO_EXTERNAL_ENDPOINT_USE_HTTPS", self.__MINIO_USE_HTTPS)
self.__MINIO_BUCKET_CHECK_ON_SAVE: bool = get_setting("MINIO_BUCKET_CHECK_ON_SAVE", False)
@@ -419,7 +419,7 @@ def validate_settings(self):
"""
validate_settings raises a ConfigurationError exception when one of the following conditions is met:
* Neither MINIO_PRIVATE_BUCKETS nor MINIO_PUBLIC_BUCKETS have been declared and configured with at least 1 bucket
- * A mandatory parameter (ENDPOINT, ACCESS_KEY, SECRET_KEY or USE_HTTP) hasn't been declared and configured properly
+ * A mandatory parameter (ENDPOINT, ACCESS_KEY or SECRET_KEY) hasn't been declared and configured properly
"""
# minimum 1 bucket has to be declared
if not (get_setting("MINIO_PRIVATE_BUCKETS") or get_setting("MINIO_PUBLIC_BUCKETS")):
@@ -431,10 +431,10 @@ def validate_settings(self):
'must be configured in your settings.py (can be both)'
)
# mandatory parameters must be configured
- mandatory_parameters = (self.__MINIO_ENDPOINT, self.__MINIO_ACCESS_KEY, self.__MINIO_SECRET_KEY, self.__MINIO_USE_HTTPS)
+ mandatory_parameters = (self.__MINIO_ENDPOINT, self.__MINIO_ACCESS_KEY, self.__MINIO_SECRET_KEY)
if any([bool(x) is False for x in mandatory_parameters]):
raise ConfigurationError(
- "A mandatory parameter (ENDPOINT, ACCESS_KEY, SECRET_KEY or USE_HTTP) hasn't been configured properly"
+ "A mandatory parameter (ENDPOINT, ACCESS_KEY, or SECRET_KEY) hasn't been configured properly"
)
| Merging #28 and #29 from develop
Returning content bytes on MinioBackend._save
# Info
I was facing an issue where the content length did not match the content size, because the content was being interpreted as empty:
```
OSError at /my-fancy-endpoint
stream having not enough data;expected: 666, got: 0 bytes
```
- The issue occurs when uploading ISO, text, PDF, and other file types.
- Minio SDK Version: 7.1.0
This MR is also related to the closed issue #21.
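For context, below is a rough sketch of the idea named in the PR title ("returning content bytes on `MinioBackend._save`"). It is an illustration only, not the code merged in #28/#29; `client` is assumed to be a `minio.Minio` instance and `content` a Django `File` object.

```python
# Illustration only - NOT the merged fix. The general idea is to rewind and
# fully read the uploaded file, so the length reported to MinIO matches the
# bytes that are actually sent.
import io

def save_to_minio(client, bucket_name: str, object_name: str, content) -> str:
    content.seek(0)           # the stream may already have been consumed
    payload = content.read()  # the actual content bytes
    client.put_object(
        bucket_name,
        object_name,
        data=io.BytesIO(payload),
        length=len(payload),  # avoids "expected: N, got: 0 bytes"
    )
    return object_name
```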
Why is it necessary to have both public and private buckets?
### Discussed in https://github.com/theriverman/django-minio-backend/discussions/20
Originally posted by **toabi**, March 22, 2021:

> If an application only uses one type of bucket it seems weird to have to define both.
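Based on the `validate_settings()` check in the patch above, declaring just one of the two bucket lists should be enough. A hedged sketch of such a configuration follows; the setting names come from the `get_setting()` calls in the patch, while the endpoint, credentials, bucket names, and the list-of-names format for `MINIO_PRIVATE_BUCKETS` are placeholders/assumptions.

```python
# settings.py sketch: only private buckets are declared.
MINIO_ENDPOINT = "minio.example.com:9000"
MINIO_EXTERNAL_ENDPOINT = "cdn.example.com"   # optional, falls back to MINIO_ENDPOINT
MINIO_ACCESS_KEY = "my-access-key"
MINIO_SECRET_KEY = "my-secret-key"
MINIO_USE_HTTPS = False                       # optional after this patch (defaults to False)
MINIO_PRIVATE_BUCKETS = ["app-private"]       # MINIO_PUBLIC_BUCKETS can be omitted
```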
|
Thanks for the fix, @MarcelFox!
It's looking good, so I'm merging it into develop. I'll soon release a new version including your fix, but I need a little more time for verification.
| 2021-11-02T15:00:27 | 0.0 | [] | [] |
||
churchmanlab/genewalk | churchmanlab__genewalk-47 | a81e827a4a48c9618ab4d02f03b2de739dddd8bb | diff --git a/setup.py b/setup.py
index 1cfade3..2d83dff 100755
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,7 @@
def main():
- install_list = ['numpy', 'pandas', 'networkx>=2.1', 'gensim', 'goatools',
+ install_list = ['numpy', 'pandas', 'networkx>=2.1', 'gensim<4', 'goatools',
'scipy>=1.3.0', 'matplotlib', 'seaborn', 'plotly>=4.0.0']
setup(name='genewalk',
| Error when calling word2vec: unexpected keyword argument 'size'
GeneWalk needs updating because of the Gensim 4.0.0 release.
For users running into the following error:
```
 File "/lib64/python3.6/site-packages/genewalk/deepwalk.py", line 138, in word2vec
    sample=sample)
TypeError: __init__() got an unexpected keyword argument 'size'
```
Immediate fix: downgrade gensim to the previous version before running GeneWalk:
`pip install --upgrade gensim==3.8.3`
Long-term solution that I will implement very soon: make GeneWalk compatible with gensim 4.0.0.
More info on the Gensim migration: https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4
In Word2Vec, the `size` constructor parameter is now consistently named `vector_size`.
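For reference, a sketch of what a version-compatible call could look like; this is an illustration, not GeneWalk's actual `deepwalk.py` (the merged patch above simply pins `gensim<4` in `setup.py`), and the function name, walk data, and parameter values are placeholders.

```python
# Pick the right keyword (`size` vs `vector_size`) based on the installed gensim.
import gensim
from gensim.models import Word2Vec

def train_embeddings(walks, dim=8, window=5):
    kwargs = dict(window=window, min_count=1, workers=4)
    if int(gensim.__version__.split(".")[0]) >= 4:
        kwargs["vector_size"] = dim  # gensim >= 4.0 keyword
    else:
        kwargs["size"] = dim         # gensim 3.x keyword
    return Word2Vec(walks, **kwargs)

# Example usage with a dummy random walk:
# model = train_embeddings([["geneA", "geneB", "GO:0008150"]], dim=8)
```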
| Temporarily we can set `gensim==3.8.3` in `setup.py` (and potentially make a minor release) to solve this issue. | 2021-03-29T13:13:44 | 0.0 | [] | [] |
||
sudoer-Huatao/calc_ultra | sudoer-Huatao__calc_ultra-7 | c31a296ea76d2de6be5b3af69069a6115cda461b | diff --git a/.DS_Store b/.DS_Store
new file mode 100644
index 0000000..b5cb5db
Binary files /dev/null and b/.DS_Store differ
diff --git a/README.md b/README.md
index 4f0b483..4fec516 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,19 @@
# calc-ultra
-[](https://opensource.org/license/mit/) [](https://github.com/sudoer-Huatao/Calc-ULTRA_Calculus-Calculator)
+[](https://opensource.org/license/mit/) [](https://github.com/sudoer-Huatao/calc_ultra)
> **Calculus made easy**
-(Turn on dark mode for a better aesthetic)
+(Turn on dark mode for a better aesthetic) ð²
-The Calc-Ultra calculus calculator, but as a module!
-
-- Little Python background knowledge is needed!
+Calc-Ultra is a multi-functional calculator that uses command line interfaces. Little Python background knowledge is needed to use the calculator!
Supports:
- Derivatives
- Partials
- Implicit differentiation
+- Limits
- Antiderivatives
- Definite integrals
- Improper integrals
@@ -23,47 +22,58 @@ Supports:
- Vector/matrix operations
- **A perfect interface to do calculations!**
-## Note
+## Chinese version
-This is the module package of the Calc-Ultra calculator. For the Python script of this package, visit <https://github.com/sudoer-Huatao/Calc-ULTRA> (**unmaintained**).
+Want to check out the Chinese version? Visit <https://github.com/sudoer-Huatao/calc_ultra-chinese>! 🇨🇳
## Installation and Running
> Run the calculus calculator with a single line of code
-Command line: `pip3 install calc-ultra`.
-Due to Python import identifiers restrictions, please import Calc-Ultra as "calc_ultra" and not "calc-ultra" when you need to use the calculator.
+Use the following command to download Calc-Ultra (pip should be installed):
+
+`pip3 install calc-ultra`
+
+Due to Python import identifiers restrictions, please import Calc-Ultra as "calc_ultra" and not "calc-ultra".
-To run the calculator, import Calc-ULTRA as `calc_ultra` like so:
+Import Calc-Ultra like so to use:
`from calc_ultra import main`
-Make sure you have the latest version installed. To update calc-ultra, run `pip3 install --upgrade calc-ultra`.
+Please make sure you have the latest version installed. To update calc-ultra, run the following command:
-## Requirements
+`pip3 install --upgrade calc-ultra`
-This program requires the `sympy`, `numpy`, `rich`, `matplotlib`, and `scipy` modules installed. Other required modules are built into most Python IDEs.
+## Requirements
-## Warnings
+Calc-Ultra requires these modules/packages:
-### Function limitations
+- `sympy`
+- `numpy`
+- `matplotlib`
+- `scipy`
+- `rich`
+- `prompt-toolkit`
-Due to limitations of the SymPy module, **some functions cannot be integrated**. The Error Function `erf(x)` can be integrated in both indefinite integral and definite integral calculation, but the Absolute Value and Factorial functions are only available to definite integral calculations. Integration of composed functions is also limited due to SymPy limitations. While some composed functions work, others don't. ð
+If you do not have them installed, there is no need to worry. The modules needed should be installed automatically if you don't have them. Other required modules are built into most Python IDEs as well.
## Test PYPI
-Previous test versions of this project are on Test PYPI. View on <https://test.pypi.org/project/calc-ultra/>.
+Previous test versions of this project are on Test PYPI. View on <https://test.pypi.org/project/calc-ultra/>. ð¾
## Acknowledgements
> Without them, this would be impossible
-A general thank-you to all GitHub users who gave feedback and/or starred this repository. ⭐️
-And... a SPECIAL THANK-YOU to @Haobot for troubleshooting and feedback! 🙏❤️
+A big thank-you to all GitHub users who gave feedback and/or starred this repository. ⭐️ Your encouragement is our motivation.
+The following contributors deserve a SPECIAL THANK-YOU 🙏❤️:
+
+- @Haobot for troubleshooting and feedback!
+- Fanbo for feedback and ideas for improvement!
-This program was made using SymPy and Scipy for calculation and Matplotlib and NumPy for graphing.
+This program was made using `sympy` for calculation and `numpy`, `scipy`, and `matplotlib` for graphing.
-## Gallery
+## Gallery (Demos)
DerivaCalc derivative with graph demo:

diff --git a/dist/calc_ultra-1.3.4-py3-none-any.whl b/dist/calc_ultra-1.3.4-py3-none-any.whl
deleted file mode 100644
index f830132..0000000
Binary files a/dist/calc_ultra-1.3.4-py3-none-any.whl and /dev/null differ
diff --git a/dist/calc_ultra-1.3.4.tar.gz b/dist/calc_ultra-1.3.4.tar.gz
deleted file mode 100644
index 8b1429f..0000000
Binary files a/dist/calc_ultra-1.3.4.tar.gz and /dev/null differ
diff --git a/dist/calc_ultra-1.3.5-py3-none-any.whl b/dist/calc_ultra-1.3.5-py3-none-any.whl
new file mode 100644
index 0000000..06cda2c
Binary files /dev/null and b/dist/calc_ultra-1.3.5-py3-none-any.whl differ
diff --git a/dist/calc_ultra-1.3.5.tar.gz b/dist/calc_ultra-1.3.5.tar.gz
new file mode 100644
index 0000000..56130ec
Binary files /dev/null and b/dist/calc_ultra-1.3.5.tar.gz differ
diff --git a/pyproject.toml b/pyproject.toml
index 739fae3..b3c7b15 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -7,13 +7,14 @@ supported_platforms = ["macos"]
[project]
name = "calc_ultra"
-version = "1.3.4"
+version = "1.3.5"
dependencies = [
"sympy >= 1.12",
"numpy >= 1.26.2",
- "matplotlib >= 3.8.3",
+ "matplotlib >= 3.8.4",
"rich >= 13.7.1",
- "scipy >= 1.12.0"
+ "scipy >= 1.13.0",
+ "prompt_toolkit >= 3.0.43"
]
requires-python = ">=3.8"
authors = [
diff --git a/src/.DS_Store b/src/.DS_Store
new file mode 100644
index 0000000..3230d3e
Binary files /dev/null and b/src/.DS_Store differ
diff --git a/src/calc_ultra.egg-info/PKG-INFO b/src/calc_ultra.egg-info/PKG-INFO
index 6e42e93..4971ea7 100644
--- a/src/calc_ultra.egg-info/PKG-INFO
+++ b/src/calc_ultra.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
Metadata-Version: 2.1
Name: calc_ultra
-Version: 1.3.4
+Version: 1.3.5
Summary: A calculus calculator with a menu-based interface.
Author-email: Huatao <[email protected]>
Maintainer-email: Huatao <[email protected]>
@@ -15,27 +15,27 @@ Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: sympy>=1.12
Requires-Dist: numpy>=1.26.2
-Requires-Dist: matplotlib>=3.8.3
+Requires-Dist: matplotlib>=3.8.4
Requires-Dist: rich>=13.7.1
-Requires-Dist: scipy>=1.12.0
+Requires-Dist: scipy>=1.13.0
+Requires-Dist: prompt_toolkit>=3.0.43
# calc-ultra
-[](https://opensource.org/license/mit/) [](https://github.com/sudoer-Huatao/Calc-ULTRA_Calculus-Calculator)
+[](https://opensource.org/license/mit/) [](https://github.com/sudoer-Huatao/calc_ultra)
> **Calculus made easy**
-(Turn on dark mode for a better aesthetic)
+(Turn on dark mode for a better aesthetic) ð²
-The Calc-Ultra calculus calculator, but as a module!
-
-- Little Python background knowledge is needed!
+Calc-Ultra is a multi-functional calculator that uses command line interfaces. Little Python background knowledge is needed to use the calculator!
Supports:
- Derivatives
- Partials
- Implicit differentiation
+- Limits
- Antiderivatives
- Definite integrals
- Improper integrals
@@ -44,47 +44,58 @@ Supports:
- Vector/matrix operations
- **A perfect interface to do calculations!**
-## Note
+## Chinese version
-This is the module package of the Calc-Ultra calculator. For the Python script of this package, visit <https://github.com/sudoer-Huatao/Calc-ULTRA> (**unmaintained**).
+Want to check out the Chinese version? Visit <https://github.com/sudoer-Huatao/calc_ultra-chinese>! 🇨🇳
## Installation and Running
> Run the calculus calculator with a single line of code
-Command line: `pip3 install calc-ultra`.
-Due to Python import identifiers restrictions, please import Calc-Ultra as "calc_ultra" and not "calc-ultra" when you need to use the calculator.
+Use the following command to download Calc-Ultra (pip should be installed):
+
+`pip3 install calc-ultra`
+
+Due to Python import identifiers restrictions, please import Calc-Ultra as "calc_ultra" and not "calc-ultra".
-To run the calculator, import Calc-ULTRA as `calc_ultra` like so:
+Import Calc-Ultra like so to use:
`from calc_ultra import main`
-Make sure you have the latest version installed. To update calc-ultra, run `pip3 install --upgrade calc-ultra`.
+Please make sure you have the latest version installed. To update calc-ultra, run the following command:
-## Requirements
+`pip3 install --upgrade calc-ultra`
-This program requires the `sympy`, `numpy`, `rich`, `matplotlib`, and `scipy` modules installed. Other required modules are built into most Python IDEs.
+## Requirements
-## Warnings
+Calc-Ultra requires these modules/packages:
-### Function limitations
+- `sympy`
+- `numpy`
+- `matplotlib`
+- `scipy`
+- `rich`
+- `prompt-toolkit`
-Due to limitations of the SymPy module, **some functions cannot be integrated**. The Error Function `erf(x)` can be integrated in both indefinite integral and definite integral calculation, but the Absolute Value and Factorial functions are only available to definite integral calculations. Integration of composed functions is also limited due to SymPy limitations. While some composed functions work, others don't. ð
+If you do not have them installed, there is no need to worry. The modules needed should be installed automatically if you don't have them. Other required modules are built into most Python IDEs as well.
## Test PYPI
-Previous test versions of this project are on Test PYPI. View on <https://test.pypi.org/project/calc-ultra/>.
+Previous test versions of this project are on Test PYPI. View on <https://test.pypi.org/project/calc-ultra/>. ð¾
## Acknowledgements
> Without them, this would be impossible
-A general thank-you to all GitHub users who gave feedback and/or starred this repository. ⭐️
-And... a SPECIAL THANK-YOU to @Haobot for troubleshooting and feedback! 🙏❤️
+A big thank-you to all GitHub users who gave feedback and/or starred this repository. ⭐️ Your encouragement is our motivation.
+The following contributors deserve a SPECIAL THANK-YOU 🙏❤️:
+
+- @Haobot for troubleshooting and feedback!
+- Fanbo for feedback and ideas for improvement!
-This program was made using SymPy and Scipy for calculation and Matplotlib and NumPy for graphing.
+This program was made using `sympy` for calculation and `numpy`, `scipy`, and `matplotlib` for graphing.
-## Gallery
+## Gallery (Demos)
DerivaCalc derivative with graph demo:

diff --git a/src/calc_ultra.egg-info/requires.txt b/src/calc_ultra.egg-info/requires.txt
index c8487bc..1f103ee 100644
--- a/src/calc_ultra.egg-info/requires.txt
+++ b/src/calc_ultra.egg-info/requires.txt
@@ -1,5 +1,6 @@
sympy>=1.12
numpy>=1.26.2
-matplotlib>=3.8.3
+matplotlib>=3.8.4
rich>=13.7.1
-scipy>=1.12.0
+scipy>=1.13.0
+prompt_toolkit>=3.0.43
diff --git a/src/calc_ultra/.DS_Store b/src/calc_ultra/.DS_Store
new file mode 100644
index 0000000..f15d37c
Binary files /dev/null and b/src/calc_ultra/.DS_Store differ
diff --git a/src/calc_ultra/main.py b/src/calc_ultra/main.py
index b851512..ead15b7 100644
--- a/src/calc_ultra/main.py
+++ b/src/calc_ultra/main.py
@@ -1,7 +1,12 @@
+###########
+# Imports #
+###########
+
from sympy.core.numbers import pi, E, oo
from math import floor, ceil
import math as mt
-from scipy.special import gamma, polygamma, erf
+from scipy.special import polygamma, gammainc, gammaincc, erf, erfc
+import scipy.special as ss
from rich import print
from rich.progress import (
Progress,
@@ -11,6 +16,8 @@
TimeElapsedColumn,
TextColumn,
)
+from prompt_toolkit import prompt
+from prompt_toolkit.key_binding import KeyBindings
from numpy import (
linspace,
exp,
@@ -33,10 +40,6 @@
cross,
)
from numpy.linalg import norm, det
-import numpy as np
-
-# Sympy uses symbolic Pi and e, which cannot be graphed by matplotlib
-# so later np.pi np.e are explicitly used instead.
from sympy import (
diff,
idiff,
@@ -48,15 +51,107 @@
pprint,
simplify,
symbols,
+ gamma,
+ lowergamma,
+ uppergamma,
+ zeta,
)
import matplotlib.pyplot as plt
-import datetime, logging, random, os, time, warnings
+import datetime, logging, random, readline, os, time, warnings
-# Disable auto Python warnings
+###########
+# Presets #
+###########
warnings.filterwarnings("ignore")
+readline.parse_and_bind("tab: complete")
+
+x, y, z = symbols("x, y, z")
+
+history = [] # Stores calculation history
+
+input = prompt
+
+key_bindings = KeyBindings()
+
+idx = 1
+
+expr_replace = [
+ ("^", "**"),
+ ("E1**", "exp"),
+ ("E1xp", "exp"),
+ ("E1rf", "erf"),
+ ("pE1r", "per"),
+ ("wE1r", "wer"),
+ ("ln", "log"),
+ ("arc", "a"),
+ ("abs", "Abs"),
+ ("pi", "3.141592653589793"),
+ ("E1", "2.718281828459045"),
+]
+
+graph_replace = [
+ ("asin", "arcsin"),
+ ("acos", "arccos"),
+ ("atan", "arctan"),
+ ("asinh", "arcsinh"),
+ ("acosh", "arccosh"),
+ ("atanh", "arctanh"),
+ ("csc", "1/sin"),
+ ("sec", "1/cos"),
+ ("cot", "1/tan"),
+ ("Abs", "fabs"),
+ ("gamma", "ss.gamma"),
+ ("polyss.gamma", "polygamma"),
+ ("lowergamma", "gammainc"),
+ ("uppergamma", "gammaincc"),
+]
+
+
+# Custom up/down key bindings
+
+
+@key_bindings.add("up")
+def _(event):
+ global idx
+
+ if idx != -1:
+ idx -= 1
+ buffer = event.app.current_buffer
+ buffer.text = history[idx]
+ buffer.cursor_position = len(buffer.text)
+ idx = idx % len(history)
+ else:
+ idx = 0
+ buffer = event.app.current_buffer
+ buffer.text = history[idx]
+ buffer.cursor_position = len(buffer.text)
+
+
+@key_bindings.add("down")
+def _(event):
+ global idx
+
+ if idx != len(history):
+ idx += 1
+ idx = idx % len(history)
+ buffer = event.app.current_buffer
+ buffer.text = history[idx]
+ buffer.cursor_position = len(buffer.text)
+
+ else:
+ idx = len(history) - 1
+ buffer = event.app.current_buffer
+ buffer.text = history[idx]
+ buffer.cursor_position = len(buffer.text)
+
+
+#####################
+# Calculation funcs #
+#####################
+
def simp():
print(
@@ -64,13 +159,14 @@ def simp():
)
while True:
- expr = input()
+ expr = input(key_bindings=key_bindings)
try:
if expr != "q":
# Easy to exit unlike VIM ;)
- result = eval(trig_rep(expr))
- print("\n[bright_yellow]Result: [/bright_yellow]", end="")
+ result = eval(replace_graph(expr))
+ history.append(str(result))
+ print("\n[bright_yellow]Result:[/bright_yellow]", end="")
pprint(result)
print()
@@ -79,105 +175,138 @@ def simp():
except:
print()
- logging.error(f'Could not parse: "{expr}"\n')
+ logging.error(f'Could not parse: "{expr}".\n')
def derive(function: str, order: str):
+ """Calculates the derivative of a function.
+
+ Takes a function and an order of differentiation and
+ returns the derivative of the function with the given order.
+ Graph option available.
+ """
+
calc = replace_expr(function)
- if check_order(order) is True:
- global df
- df = diff(calc, x, order)
+ check_order(order)
- print(
- f"\nDerivative of [bright_magenta]{trig_rep(function)}[/bright_magenta] with order {order} is:\n"
- )
- print_expr(df)
+ global df
+ df = diff(calc, x, order)
- check_simp(df)
+ print(
+ f"\nDerivative of [bright_magenta]{replace_graph(function)}[/bright_magenta] with order {order} is:\n"
+ )
+ print_expr(df)
- df = trig_rep(str(df))
+ df = replace_expr(str(df))
+ check_simp(df)
+ history.append(df)
- print("\n[bright_yellow]Show graph of area? (y/n)[/bright_yellow]")
- show = input("(Exit the graph window when you are finished to continue) ")
+ df = replace_graph(df)
- if show == "y":
- try:
- print()
- with Progress(
- SpinnerColumn(finished_text="[bright_green]â[/bright_green]"),
- TextColumn("[bright_yellow]Loading graph...[/bright_yellow]"),
- BarColumn(),
- TimeElapsedColumn(),
- transient=False,
- ) as progress:
- task = progress.add_task("", total=100)
-
- while not progress.finished:
- progress.update(task, advance=2)
- time.sleep(random.randint(2, 5) / 1000)
-
- x_array = linspace(-50, 50, 200000)
-
- def f(x):
- return eval(trig_rep(calc))
-
- def dif(x):
- return eval(df)
-
- title = "Function (red) and derivative (blue)"
- plt.title(title)
- plt.xlabel("x", weight="bold")
- plt.ylabel("y", rotation=0, weight="bold")
- plt.plot(x_array, f(x_array), color="red", label="Function")
- plt.plot(x_array, dif(x_array), color="blue", label="Derivative")
- plt.axis([-7.5, 7.5, -7.5, 7.5])
- plt.legend(loc="lower left")
- plt.grid()
- plt.show()
+ print("\n[bright_yellow]Show graph of area? (y/n)[/bright_yellow]")
+ show = input("(Exit the graph window when you are finished to continue) ")
- print("\nExited graph.\n")
+ if show == "y":
+ try:
+ print()
+ with Progress(
+ SpinnerColumn(finished_text="[bright_green]â[/bright_green]"),
+ TextColumn("[bright_yellow]Loading graph...[/bright_yellow]"),
+ BarColumn(),
+ TimeElapsedColumn(),
+ transient=False,
+ ) as progress:
+ task = progress.add_task("", total=100)
- except:
- plt.close()
- print("\n")
- logging.warning("Could not graph function.")
- print("\nExited graph.")
+ while not progress.finished:
+ progress.update(task, advance=2)
+ time.sleep(random.randint(2, 5) / 1000)
+
+ x_arr = linspace(-100, 100, 200000)
+
+ def f(x: array):
+ return eval(replace_graph(calc))
+
+ def dif(x: array):
+ return eval(df)
+
+ title = "Function (red) and derivative (blue)"
+ plt.title(title)
+ plt.xlabel("x", weight="bold")
+ plt.ylabel("y", rotation=0, weight="bold")
+ plt.plot(x_arr, f(x_arr), color="red", label="Function")
+ plt.plot(x_arr, dif(x_arr), color="blue", label="Derivative")
+ plt.axis([-7.5, 7.5, -7.5, 7.5])
+ plt.legend(loc="lower left")
+ plt.grid()
+ plt.show()
+
+ print("\nExited graph.\n")
+
+ except:
+ plt.close()
+ print("\n")
+ logging.warning("Could not graph function.")
+ print("\nExited graph.")
def partial_derive(function: str, var: str, order: str):
+ """Calculates the partial derivative of a function.
+
+ Takes a function, a variable, and an order of differentiation and
+ returns the partial derivative of the function in respect
+ to the variable with the order.
+ """
+
calc = replace_expr(function)
- if check_order(order) is True:
- df = diff(calc, var, order)
+ check_order(order)
- print(
- f"\nPartial derivative of [bright_magenta]{trig_rep(function)}[/bright_magenta] in respect to {var} of order {order} is:\n"
- )
- print_expr(df)
+ df = diff(calc, var, order)
+
+ print(
+ f"\nPartial derivative of [bright_magenta]{replace_graph(function)}[/bright_magenta] in respect to {var} of order {order} is:\n"
+ )
+ print_expr(df)
- check_simp(df)
+ df = replace_expr(str(df))
+ check_simp(df)
+ history.append(df)
def implicit_derive(circ: str, order: str):
+ """Calculates the implicit derivative of an equation.
+
+ Takes an equation and an order of differentiation and
+ returns the implicit derivative of an equation with the given order.
+ """
+
calc = replace_expr(circ)
left = eval(calc[: calc.find("=")])
right = eval(calc[calc.find("=") + 1 :])
- if check_order(order) is True:
- df = idiff(left - right, y, x, order)
+ check_order(order)
- print(
- f"\nDerivative of [bright_magenta]{circ}[/bright_magenta] with order {order} is:\n"
- )
- print_expr(df)
+ df = idiff(left - right, y, x, order)
- if str(simplify(df, evaluate=False)) != str(df):
- print("\nSimplify/rewrite:\n")
- print_expr(simplify(df, evaluate=False))
+ print(
+ f"\nDerivative of [bright_magenta]{circ}[/bright_magenta] with order {order} is:\n"
+ )
+ print_expr(df)
+
+ df = replace_expr(str(df))
+ check_simp(df)
+ history.append(df)
def antiderive(function: str):
+ """Calculates the antiderivative of a function.
+
+ Takes a function and returns the antiderivative of the function.
+ Graph option available.
+ """
+
calc = replace_expr(function)
F = Integral(calc, x).doit()
@@ -186,13 +315,15 @@ def antiderive(function: str):
return ""
print(
- f"\nAntiderivative of [bright_magenta]{trig_rep(function)}[/bright_magenta] is:\n"
+ f"\nAntiderivative of [bright_magenta]{replace_graph(function)}[/bright_magenta] is:\n"
)
print_expr(F)
+ F = replace_expr(str(F))
check_simp(F)
+ history.append(F)
- F = trig_rep(str(F))
+ F = replace_graph(F)
print("\n[bold]Don't forget to add a constant![/bold]\n")
@@ -215,20 +346,20 @@ def antiderive(function: str):
progress.update(task, advance=2)
time.sleep(random.randint(2, 5) / 1000)
- x_array = linspace(-100, 100, 200000)
+ x_arr = linspace(-100, 100, 200000)
- def f(x):
- return eval(trig_rep(calc))
+ def f(x: array):
+ return eval(replace_graph(calc))
- def af(x):
+ def af(x: array):
return eval(F)
title = "Function (red) and antiderivative (blue, C = 0)"
plt.title(title)
plt.xlabel("x", weight="bold")
plt.ylabel("y", rotation=0, weight="bold")
- plt.plot(x_array, f(x_array), color="red", label="Function")
- plt.plot(x_array, af(x_array), color="blue", label="Antiderivative")
+ plt.plot(x_arr, f(x_arr), color="red", label="Function")
+ plt.plot(x_arr, af(x_arr), color="blue", label="Antiderivative")
plt.axis([-7.5, 7.5, -7.5, 7.5])
plt.legend(loc="lower left")
@@ -245,29 +376,36 @@ def af(x):
def def_int(function: str, low: str, up: str):
- x = symbols("x")
+ """Calculates the definite integral of a function.
+
+ Takes a function and a lower and upper bound and
+ returns the definite integral from the lower bound to
+ the upper bound of the function.
+ Graph option available.
+ """
+
calc = replace_expr(function)
- check_bound(low)
+ check_num(low)
- clow = eval(replace_expr(low))
+ calc_low = eval(replace_expr(low))
- check_bound(up)
+ check_num(up)
- cup = eval(replace_expr(up))
+ calc_up = eval(replace_expr(up))
- result = integrate(calc, (x, clow, cup))
+ result = integrate(calc, (x, calc_low, calc_up))
- gup = eval(replace_bound(str(cup)))
- glow = eval(replace_bound(str(clow)))
+ graph_up = eval(replace_expr(str(calc_up)))
+ graph_low = eval(replace_expr(str(calc_low)))
- num_result = integrate(calc, (x, glow, gup)).evalf()
+ num_result = integrate(calc, (x, graph_low, graph_up)).evalf()
# Composite functions usually do not have primitive antiderivatives
# so calc-ultra is equipped with both symbolic and numerical answers.
if (
(str(result) == "nan")
- or ("I" in str(result))
+ or ("i" in str(result))
and ("Integral" not in str(result))
):
logging.warning("Cannot compute integral because integral does not converge.")
@@ -275,9 +413,11 @@ def def_int(function: str, low: str, up: str):
if "Integral" not in str(result):
print(
- f"\nCalculated integral of [bright_magenta]{trig_rep(function)}[/bright_magenta] from {low} to {up}. Final area is:\n"
+ f"\nCalculated integral of [bright_magenta]{replace_graph(function)}[/bright_magenta] from {low} to {up}. Final area is:\n"
)
print_expr(result)
+ result = replace_expr(str(result))
+ history.append(result)
else:
print("\nCannot express result symbolically.")
@@ -305,38 +445,38 @@ def def_int(function: str, low: str, up: str):
progress.update(task, advance=2)
time.sleep(random.randint(2, 5) / 1000)
- x_array = linspace(-100, 100, 200000)
+ x_arr = linspace(-100, 100, 200000)
if "log" in function:
- x_array = linspace(
- 0.00000001,
+ x_arr = linspace(
+ 0.0001,
100,
200000,
)
+ # The factorial is just the gamma function shifted one unit left
+ # Of course, the factorial is only defined for x >= 0
+
if "factorial" in function:
- x_array = linspace(
+ x_arr = linspace(
0,
100,
200000,
)
- # This is just the gamma function shifted one unit left
- # Of course, the factorial is only defined for x >= 0
-
- def f(x):
- return eval(trig_rep(calc))
+ def f(x: array):
+ return eval(replace_graph(calc))
title = "Shaded area beneath function"
plt.title(title)
plt.xlabel("x", weight="bold")
plt.ylabel("y", rotation=0, weight="bold")
- plt.plot(x_array, f(x_array), color="red", label="Function")
+ plt.plot(x_arr, f(x_arr), color="red", label="Function")
plt.fill_between(
- x_array,
- f(x_array),
- where=[(x_array > clow) and (x_array < cup) for x_array in x_array],
+ x_arr,
+ f(x_arr),
+ where=[(x_arr > calc_low) and (x_arr < calc_up) for x_arr in x_arr],
color="blue",
)
@@ -347,23 +487,36 @@ def f(x):
elif graph_option == "a":
# Adjusted graph view is sometimes better for
# large graphs with large bounds.
- if (float(f(glow)) != 0) and (float(f(gup)) != 0):
+ if (float(f(graph_low)) != 0) and (float(f(graph_up)) != 0):
plt.axis(
[
- glow - 5,
- gup + 5,
- float(f(round(glow)))
- - (float(f(round(glow))) + float(f(round(gup)))) / 2
+ graph_low - 5,
+ graph_up + 5,
+ float(f(round(graph_low)))
+ - (
+ float(f(round(graph_low)))
+ + float(f(round(graph_up)))
+ )
+ / 2
- 1,
- float(f(round(gup)))
- + (float(f(round(glow))) + float(f(round(gup)))) / 2
+ float(f(round(graph_up)))
+ + (
+ float(f(round(graph_low)))
+ + float(f(round(graph_up)))
+ )
+ / 2
+ 1,
]
)
- elif (float(f(glow)) == 0) or (float(f(gup)) == 0):
+ elif (float(f(graph_low)) == 0) or (float(f(graph_up)) == 0):
plt.axis(
- [glow - 5, gup + 5, -(gup - glow) / 2, (gup + glow) / 2]
+ [
+ graph_low - 5,
+ graph_up + 5,
+ -(graph_up - graph_low) / 2,
+ (graph_up + graph_low) / 2,
+ ]
)
except:
@@ -386,53 +539,73 @@ def f(x):
def improp_int(function: str, low: str, up: str):
+ """Calculates the improper integral of a function.
+
+ Takes a function and a lower and upper bound (can be inf)
+ and returns the improper integral from the lower bound to
+ the upper bound of the function. Uses Cauchy principal value
+ for integrals with infinite bounds and standard integration
+ for integrals with singularities.
+ """
+
function = replace_expr(function)
- check_bound(low)
+ check_num(low)
- clow = eval(replace_expr(low))
+ calc_low = eval(replace_expr(low))
- check_bound(up)
+ check_num(up)
- cup = eval(replace_expr(up))
+ calc_up = eval(replace_expr(up))
try:
improper_area = Integral(
- function, (x, clow, cup)
+ function, (x, calc_low, calc_up)
).principal_value() # Cauchy Principal Value
- if "Integral" not in str(improper_area):
- print(
- f"\nCalculated improper integral of [bright_magenta]{function}[/bright_magenta] from {low} to {up}. Final area is:\n"
- )
- print_expr(improper_area)
+ if "Integral" in str(improper_area):
+ logging.warning("Cannot compute improper integral.\n")
+ return ""
- else:
- print("Cannot compute improper integral.")
+ print(
+ f"\nCalculated improper integral of [bright_magenta]{function}[/bright_magenta] from {low} to {up}. Final area is:\n"
+ )
+ print_expr(improper_area)
+ improper_area = replace_expr(str(improper_area))
+ history.append(improper_area)
except ValueError:
- improper_area = integrate(function, (x, clow, cup))
+ improper_area = integrate(function, (x, calc_low, calc_up))
- if "Integral" not in str(improper_area):
- print(
- f"\nCalculated improper integral of [bright_magenta]{function}[/bright_magenta] from {low} to {up}. Final area is:\n"
- )
- print_expr(improper_area)
+ if "Integral" in str(improper_area):
+ logging.warning("Cannot compute improper integral.\n")
+ return ""
- else:
- print("Cannot compute improper integral.")
+ print(
+ f"\nCalculated improper integral of [bright_magenta]{function}[/bright_magenta] from {low} to {up}. Final area is:\n"
+ )
+ print_expr(improper_area)
+ improper_area = replace_expr(str(improper_area))
+ history.append(improper_area)
print()
def double_int(function: str, out_low: str, out_up: str, in_low: str, in_up: str):
+ """Calculates the double integral of a function over region R.
+
+ Takes a function and outer lower and upper and
+ inner lower and upper bounds (that define region R)
+ and returns the integral of the function over R.
+ """
+
function = replace_expr(function)
- check_bound(out_low)
+ check_num(out_low)
out_low = eval(replace_expr(out_low))
- check_bound(out_up)
+ check_num(out_up)
out_up = eval(replace_expr(out_up))
@@ -440,10 +613,10 @@ def double_int(function: str, out_low: str, out_up: str, in_low: str, in_up: str
in_up = eval(replace_expr(in_up))
- out_up = eval(replace_bound(str(out_up)))
- out_low = eval(replace_bound(str(out_low)))
- in_up = eval(replace_bound(str(in_up)))
- in_low = eval(replace_bound(str(in_low)))
+ out_up = eval(replace_expr(str(out_up)))
+ out_low = eval(replace_expr(str(out_low)))
+ in_up = eval(replace_expr(str(in_up)))
+ in_low = eval(replace_expr(str(in_low)))
result = integrate(function, (y, in_low, in_up), (x, out_low, out_up))
@@ -455,46 +628,58 @@ def double_int(function: str, out_low: str, out_up: str, in_low: str, in_up: str
f"\nDouble integral of [bright_magenta]{function}[/bright_magenta] with inner bounds [bright_cyan]{in_low}[/bright_cyan] and [bright_cyan]{in_up}[/bright_cyan] and outer bounds {out_low} and {out_up} is:\n"
)
print_expr(result)
+ result = replace_expr(str(result))
+ history.append(result)
print("")
- check_simp(result)
+def lim(function: str, value: str):
+ """Calculates limit of a function at a value.
+
+ Takes a function and a value and
+ returns the limit of the function at the point.
+ """
-def lim(expr: str, value: str):
- expr = replace_expr(expr)
+ function = replace_expr(function)
- check_bound(value)
+ check_num(value)
- value = float(eval(replace_bound(value)))
+ value = float(eval(replace_expr(value)))
- l = limit(expr, x, value)
+ l = limit(function, x, value)
if "Limit" in str(l):
logging.warning("Cannot compute limit.")
return ""
- if limit(expr, x, value, "+") != limit(expr, x, value, "-"):
- print(
- "\nThe limit does not exist (i.e., the limit approaching from the right does not equal the limit approaching from the left)."
+ if limit(function, x, value, "+") != limit(function, x, value, "-"):
+ logging.warning(
+ "\nThe limit does not exist (the limit approaching from the right does not equal the limit approaching from the left)."
)
print(
- "Use the one-side limit calculation of LimCalc to calculate the limit at one side.\n"
+ "[bright_red]Use one-side limit calculation to calculate the limit from one direction.[/bright_red]\n"
)
+ return ""
+
+ print(
+ f"\nLimit of [bright_magenta]{function}[/bright_magenta] as x approaches {value} is:\n"
+ )
+ print_expr(l)
+ l = replace_expr(str(l))
+ history.append(l)
- else:
- print(
- f"\nLimit of [bright_magenta]{expr}[/bright_magenta] as x approaches {value} is:\n"
- )
- print_expr(l)
- check_simp(l)
+def side_lim(function: str, value: str, direction: str):
+ """Calculates limit of a function at a value.
-def side_lim(expr: str, value: str, direction: str):
- expr = replace_expr(expr)
+ Takes a function and a value and
+ returns the limit of the function at the point.
+ """
+ function = replace_expr(function)
- check_bound(value)
+ check_num(value)
- value = float(eval(replace_bound(value)))
+ value = float(eval(replace_expr(value)))
if direction == "left" or direction == "Left":
direction_sign = "-"
@@ -504,39 +689,48 @@ def side_lim(expr: str, value: str, direction: str):
else:
print()
- logging.error("\nTypeError: Direction is neither right or left.")
+ logging.error("\nDirection is neither right or left.")
return ""
- l = limit(expr, x, value, dir=direction_sign)
+ l = limit(function, x, value, dir=direction_sign)
if "Limit" in str(l):
logging.warning("\nCannot compute limit.")
return ""
print(
- f"\nLimit of [bright_magenta]{expr}[/bright_magenta] as x approaches {value} from the [bright_cyan]{direction}[/bright_cyan] is:\n"
+ f"\nLimit of [bright_magenta]{function}[/bright_magenta] as x approaches {value} from the [bright_cyan]{direction}[/bright_cyan] is:\n"
)
print_expr(l)
+ l = replace_expr(str(l))
+ history.append(l)
+
+
+def eq_solve(mode: str):
+ """Solves an/a set of equation(s).
+ Takes a mode to determine the number of equations
+ to solve (1 to 3) and returns the values of the variables.
+ """
-def eq_solve(mode: int):
eq_list = []
- if mode == 1:
+ if mode == "1":
eq1 = input("\nEnter equation: ")
left = eval(eq1[: eq1.find("=")])
right = eval(eq1[eq1.find("=") + 1 :])
eq_set = solve(left - right)
if len(eq_set) == 0:
print()
- logging.error("UnknownError: Cannot solve equation")
+ logging.error("Cannot solve equation")
else:
print("\nx:\n")
for i in range(0, len(eq_set)):
pprint(eq_set[i])
+ history.append(replace_expr(str(eq_set[i])))
print()
- elif mode == 2:
+ elif mode == "2":
eq1 = input("Enter first equation: ")
left1 = eval(eq1[: eq1.find("=")])
right1 = eval(eq1[eq1.find("=") + 1 :])
@@ -554,7 +748,7 @@ def eq_solve(mode: int):
print("\nx, y:\n")
pprint(result)
- elif mode == 3:
+ elif mode == "3":
eq1 = input("Enter equation 1: ")
left1 = eval(eq1[: eq1.find("=")])
right1 = eval(eq1[eq1.find("=") + 1 :])
@@ -581,8 +775,22 @@ def eq_solve(mode: int):
pprint(result)
-def check_simp(expr) -> bool:
- if str(simplify(expr, evaluate=False)) != str(expr):
+################
+# Helper funcs #
+################
+
+
+# These assistant functions sanatize inputs and prevent errors
+
+
+def check_simp(expr: str) -> bool:
+ """Check for simplification of an expression.
+
+ Takes an expression and simplifies/rewrites it
+ if possible. Otherwise returns boolean value False.
+ """
+
+ if str(simplify(expr, evaluate=False)) != expr:
print("\nSimplify/rewrite:\n")
print_expr(simplify(expr, evaluate=False))
@@ -590,128 +798,94 @@ def check_simp(expr) -> bool:
return False
-def check_order(order: str) -> bool:
+def check_order(order: str):
+ """Check if the order of differentiation is valid.
+
+ Takes the order of differentiation and gives an error
+ if it is invalid.
+ """
+
if ("." in order) or (order.isnumeric() == False):
print()
- logging.error(
- "OrderError: Order of derivative calculation is not a valid number."
- )
- return False
+ logging.error("Invalid order of differentiation.")
- else:
- return True
+def check_num(num: str):
+ """Checks if a string is numerical (valid).
+
+ Takes a numerical valid string and gives an error
+ if it is invalid.
+ """
-def check_bound(bound: str):
if (
- (bound.isnumeric() is False)
- and ("pi" not in bound)
- and ("e" not in bound)
- and ("E" not in bound)
- and ("-" not in bound)
- and ("." not in bound)
- and ("sqrt" not in bound)
- and ("oo" not in bound)
- and ("/" not in bound)
+ (num.isnumeric() is False)
+ and ("pi" not in num)
+ and ("e" not in num)
+ and ("E" not in num)
+ and ("-" not in num)
+ and ("." not in num)
+ and ("sqrt" not in num)
+ and ("oo" not in num)
+ and ("/" not in num)
):
print()
- logging.error("TypeError: Integration bound is not a number.")
+ logging.error("Invalid integration bound/numerical expression.")
def replace_expr(expr: str) -> str:
- expr = expr.strip(" ")
+ """Sanatizes an expression from input.
- if "^" in expr:
- expr = expr.replace("^", "**")
+ Takes a string and sanatizes input for calculation.
+ """
- if "e" in expr:
- expr = expr.replace("e", "E")
+ expr = expr.strip(" ")
- if "E**" in expr:
- expr = expr.replace("E**", "exp")
+ # The extra 1 at the end is to recognize whether the "e"
+ # is part of an expression or a constant on it's own.
- if "ln" in expr:
- expr = expr.replace("ln", "log")
+ if "E" in expr:
+ expr = expr.replace("E", "E1")
- if "arc" in expr:
- expr = expr.replace("arc", "a")
+ if "e" in expr:
+ expr = expr.replace("e", "E1")
- if "abs" in expr:
- expr = expr.replace("abs", "Abs")
+ for r in expr_replace:
+ expr = expr.replace(*r)
return expr
-def replace_bound(bound: str) -> str:
- if "pi" in bound:
- bound = bound.replace("pi", str(np.pi))
-
- if "E" in bound:
- bound = bound.replace("E", str(np.e))
-
- return bound
-
-
-def factorial(x):
- return gamma(x + 1)
-
-
-def trig_rep(function: str) -> str:
- # Sympy and Numpy trig functions are vastly different
- # and uses different prefixes. Thus this replacement
- # algorithm is needed.
- if "asin" in function:
- function = function.replace("asin", "arcsin")
-
- if "acos" in function:
- function = function.replace("acos", "arccos")
-
- if "atan" in function:
- function = function.replace("atan", "arctan")
-
- if "asinh" in function:
- function = function.replace("asinh", "arcsinh")
-
- if "acosh" in function:
- function = function.replace("acosh", "arccosh")
+def replace_graph(function: str) -> str:
+ """Replaces an expression (function) for graphing.
- if "atanh" in function:
- function = function.replace("atanh", "arctanh")
-
- if "csc" in function:
- function = function.replace("csc", "1/sin")
-
- # Unfortunately, Numpy does not have an implemented csc,
- # sec, cot, csch, etc. so we have to implement our own.
-
- if "sec" in function:
- function = function.replace("sec", "1/cos")
-
- if "cot" in function:
- function = function.replace("cot", "1/tan")
-
- if "csch" in function:
- function = function.replace("csch", "1/sinh")
-
- if "sech" in function:
- function = function.replace("sech", "1/cosh")
-
- if "coth" in function:
- function = function.replace("coth", "1/tanh")
-
- if "Abs" in function:
- function = function.replace("Abs", "fabs")
+ Takes a string and sanatizes it for graphing.
+ This replacement includes changing prefixes of trig
+ functions (since Sympy and Numpy trig functions uses
+ different prefixes) and an implementation of csc, sec,
+ cot, etc. (since Numpy does not provide them).
+ """
if "log" in function and ",x)" in function:
function = function.replace("log", "mt.log")
+ # Graph constant functions
+
if "x" not in function:
function = "0 * x + " + function
+ for r in graph_replace:
+ function = function.replace(*r)
+
return function
def print_expr(text: str):
+ """Selects printing method.
+
+ Linked to the printing settings option in settings.
+ Chooses either normal print or Sympy pretty print.
+ """
+
printing_methods = {"p": lambda t: pprint(text), "n": lambda t: print(text)}
try:
@@ -721,11 +895,23 @@ def print_expr(text: str):
printing_methods["p"](text)
+def factorial(x):
+ """Simple implementation of a factorial function."""
+
+ # Define our own factorial
+ return ss.gamma(x + 1)
+
+
def nprint(text: str):
print(text)
time.sleep(0.04)
+####################
+# Driving programs #
+####################
+
+
def main():
with Progress(
SpinnerColumn(finished_text="[bright_green]â[/bright_green]"),
@@ -741,8 +927,7 @@ def main():
progress.update(task, advance=1)
time.sleep(random.randint(2, 5) / 100)
- global x, y, z
- x, y, z = symbols("x, y, z")
+ global x, y, z, history, expr_replace, graph_replace
while True:
instruct_path = (
@@ -768,7 +953,7 @@ def main():
print(f"\n(Time now is: {now})\n")
print("[bold bright_green](Current Screen: Main Screen)[/bold bright_green]\n")
- print("[bright_magenta]Enter Command: [/bright_magenta]", end="")
+ print("[purple]Enter Command: [/purple]", end="")
cmd = input()
if cmd == "1":
@@ -794,6 +979,7 @@ def main():
break
else:
+ print("\n")
logging.warning(f'Invalid command:"{cmd}"\n')
@@ -818,15 +1004,17 @@ def derivacalc():
nprint(
"\n[bold bright_green](Current Screen: DerivaCalc Main Screen)[/bold bright_green]\n"
)
- print("[bright_magenta]Enter Command: [/bright_magenta]", end="")
+ print("[purple]Enter Command: [/purple]", end="")
cmd = input()
if cmd == "1":
nprint(
"\n[bold bright_green](Current Screen: Derivative Screen)[/bold bright_green]\n"
)
- function = input("Enter a function: ")
- order = input("Enter order of derivative calculation: ")
+ function = input("Enter a function: ", key_bindings=key_bindings)
+ order = input(
+ "Enter order of derivative calculation: ", key_bindings=key_bindings
+ )
derive(function, order)
elif cmd == "2":
@@ -834,22 +1022,34 @@ def derivacalc():
"\n[bold bright_green](Current Screen: Partial Derivative Screen)[/bold bright_green]\n"
)
function = input(
- "Enter a function containing x and y or x and y and z: "
+ "Enter a function containing x and y or x and y and z: ",
+ key_bindings=key_bindings,
+ )
+ var = input(
+ "Enter variable to differentiate in respect to: ",
+ key_bindings=key_bindings,
)
- var = input("Enter variable to differentiate in respect to: ")
if var != "x" and var != "y" and var != "z":
print()
- logging.error("Variable to differentite in respect to is invalid.")
+ logging.error("Invalid variable to differentite in respect to.")
else:
- order = input("Enter the order of partial derivative calculation: ")
+ order = input(
+ "Enter the order of partial derivative calculation: ",
+ key_bindings=key_bindings,
+ )
partial_derive(function, var, order)
elif cmd == "3":
nprint(
"\n[bold bright_green](Current Screen: Implicit Derivative Screen)[/bold bright_green]\n"
)
- circ = input("Enter an equation containing x and y:")
- order = input("Enter order of implicit derivative calculation: ")
+ circ = input(
+ "Enter an equation containing x and y:", key_bindings=key_bindings
+ )
+ order = input(
+ "Enter order of implicit derivative calculation: ",
+ key_bindings=key_bindings,
+ )
implicit_derive(circ, order)
elif cmd == "4":
@@ -857,10 +1057,11 @@ def derivacalc():
break
else:
+ print("\n")
logging.warning(f'Invalid command:"{cmd}"')
except:
print("\n")
- logging.error("UnknownError: An unknown error occured.")
+ logging.error("An unknown error occured.")
nprint("[bold bright_red]Check if your input is valid.[/bold bright_red]\n")
@@ -880,32 +1081,40 @@ def intecalc():
nprint(
"[bold bright_green](Current Screen: InteCalc Main Screen)[/bold bright_green]\n"
)
- print("[bright_magenta]Enter Command: [/bright_magenta]", end="")
+ print("[purple]Enter Command: [/purple]", end="")
cmd = input()
if cmd == "1":
nprint(
"\n[bold bright_green](Current Screen: Antiderivative Screen)[/bold bright_green]\n"
)
- function = input("Enter a function: ")
+ function = input("Enter a function: ", key_bindings=key_bindings)
antiderive(function)
elif cmd == "2":
nprint(
"\n[bold bright_green](Current Screen: Definite Integral Screen)[/bold bright_green]\n"
)
- function = input("Enter a function: ")
- lower_bound = input("\nEnter the lower bound: ")
- upper_bound = input("Enter the upper bound: ")
+ function = input("Enter a function: ", key_bindings=key_bindings)
+ lower_bound = input(
+ "\nEnter the lower bound: ", key_bindings=key_bindings
+ )
+ upper_bound = input(
+ "Enter the upper bound: ", key_bindings=key_bindings
+ )
print(def_int(function, lower_bound, upper_bound))
elif cmd == "3":
nprint(
"\n[bold bright_green](Current Screen: Improper Integral Screen)[/bold bright_green]\n"
)
- function = input("Enter a function: ")
- lower_bound = input("\nEnter the lower bound: ")
- upper_bound = input("Enter the upper bound: ")
+ function = input("Enter a function: ", key_bindings=key_bindings)
+ lower_bound = input(
+ "\nEnter the lower bound: ", key_bindings=key_bindings
+ )
+ upper_bound = input(
+ "Enter the upper bound: ", key_bindings=key_bindings
+ )
improp_int(function, lower_bound, upper_bound)
elif cmd == "4":
@@ -913,10 +1122,18 @@ def intecalc():
"\n[bold bright_green](Current Screen: Double Integral Screen)[/bold bright_green]\n"
)
function = input("Enter a function: ")
- outer_low = input("\nEnter the lower outer bound: ")
- outer_up = input("Enter the upper outer bound: ")
- inner_low = input("\nEnter the lower inner bound: ")
- inner_up = input("Enter the upper inner bound: ")
+ outer_low = input(
+ "\nEnter the lower outer bound: ", key_bindings=key_bindings
+ )
+ outer_up = input(
+ "Enter the upper outer bound: ", key_bindings=key_bindings
+ )
+ inner_low = input(
+ "\nEnter the lower inner bound: ", key_bindings=key_bindings
+ )
+ inner_up = input(
+ "Enter the upper inner bound: ", key_bindings=key_bindings
+ )
double_int(function, outer_low, outer_up, inner_low, inner_up)
elif cmd == "5":
@@ -924,10 +1141,11 @@ def intecalc():
break
else:
+ print("\n")
logging.warning(f'Invalid command: "{cmd}"')
except:
print("\n")
- logging.error("UnknownError: An unknown error occured.")
+ logging.error("An unknown error occured.")
nprint("[bold bright_red]Check if your input is valid.[/bold bright_red]\n")
@@ -954,16 +1172,16 @@ def limcalc():
nprint(
"\n[bold bright_green](Current screen: Limit Screen)[/bold bright_green]\n"
)
- expr = input("Enter an expression: ")
- value = input("Enter point of evaluation: ")
+ expr = input("Enter an expression: ", key_bindings=key_bindings)
+ value = input("Enter point of evaluation: ", key_bindings=key_bindings)
lim(expr, value)
elif cmd == "2":
nprint(
"\n[bold bright_green](Current screen: One-sided Limit Screen)[/bold bright_green]\n"
)
- expr = input("Enter an expression: ")
- value = input("Enter point of evaluation: ")
+ expr = input("Enter an expression: ", key_bindings=key_bindings)
+ value = input("Enter point of evaluation: ", key_bindings=key_bindings)
direction = input("Enter direction of limit ('left' or 'right'): ")
side_lim(expr, value, direction)
@@ -972,11 +1190,12 @@ def limcalc():
break
else:
+ print("\n")
logging.warning(f'Invalid command: "{cmd}"')
except:
print("\n")
- logging.error("UnknownError: An unknown error occured.")
+ logging.error("An unknown error occured.")
nprint("[bold bright_red]Check if your input is valid.[/bold bright_red]\n")
@@ -1003,12 +1222,15 @@ def algcalc():
nprint(
"\n[bold bright_green](Current screen: Equation Solver Screen)[/bold bright_green]\n"
)
- print(
- "Enter mode: 1 for one set equation, 2 for two, and 3 for three: ",
- end="",
- )
- mode = int(input())
- eq_solve(mode)
+ mode = ""
+
+ while mode != "q":
+ print(
+ 'Enter mode: 1 for one set equation, 2 for two, and 3 for three ("q" to quit): ',
+ end="",
+ )
+ mode = input()
+ eq_solve(mode)
elif cmd == "2":
nprint(
@@ -1026,7 +1248,7 @@ def algcalc():
print('Enter any expression to start ("q" to quit):\n')
while True:
- expr = input()
+ expr = input(key_bindings=key_bindings)
try:
if expr != "q":
expr = (
@@ -1038,7 +1260,7 @@ def algcalc():
).replace("<", "norm(")
).replace(">", ")")
).strip(" ")
- result = eval(trig_rep(expr))
+ result = eval(replace_graph(expr))
print("\n[bright_yellow]Result: [/bright_yellow]", end="")
pprint(result)
print()
@@ -1065,7 +1287,7 @@ def algcalc():
print('Enter any expression to start ("q" to quit):\n')
while True:
- expr = input()
+ expr = input(key_bindings=key_bindings)
try:
if expr != "q":
expr = (
@@ -1077,7 +1299,7 @@ def algcalc():
).replace("|a", "det(a")
).replace(")|", "))")
).strip(" ")
- result = eval(trig_rep(expr))
+ result = eval(replace_graph(expr))
print("\n[bright_yellow]Result: [/bright_yellow]\n")
pprint(result)
print()
@@ -1093,11 +1315,12 @@ def algcalc():
break
else:
+ print("\n")
logging.warning(f'Invalid command: "{cmd}"')
except:
print("\n")
- logging.error("UnknownError: An unknown error occured.")
+ logging.error("An unknown error occured.")
nprint("[bold bright_red]Check if your input is valid.[/bold bright_red]\n")
@@ -1115,7 +1338,7 @@ def settings():
else:
nprint(line)
print("\n[bold green](Current Screen: Settings Screen)[/bold green]\n")
- print("[bright_magenta]Enter Command: [/bright_magenta]", end="")
+ print("[purple]Enter Command: [/purple]", end="")
cmd = input()
if cmd == "print":
@@ -1173,7 +1396,9 @@ def settings():
break
else:
+ print("\n")
logging.warning(f'Invalid command:"{cmd}"')
main()
+# You've reached the end of the file!
diff --git a/src/calc_ultra/texts/main_screen.txt b/src/calc_ultra/texts/main_screen.txt
index 400f19b..c5fa6e8 100644
--- a/src/calc_ultra/texts/main_screen.txt
+++ b/src/calc_ultra/texts/main_screen.txt
@@ -8,19 +8,6 @@
+---- / \ +----- +---- +---+ +---- | | \ / \
----------------------------------------------------------------------
- -----------------------------------
- | The Ultimate Calculus Calculator! |
- -----------------------------------
-
-Calc-Ultra supports:
-
- - Special numbers including pi and e;
- - Trigonometric functions;
- - Exponents (for base e, input as "exp(x)");
- - The Natural Logarithm ("log(x)");
- - Inverse Trigonometric Functions;
- - Hyperbolic Functions and Inverse Hyperbolic Functions;
- - Other Functions such as the Factorial factorial(x), the Error Function erf(x) and more!
Commands:
| Merge dev to main (sync pulls)
| 2024-05-02T10:35:22 | 0.0 | [] | [] |
|||
PyAbel/PyAbel | PyAbel__PyAbel-322 | 7412efe505c83be87a2cd6bd229cecca82f9618a | diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 98cc5b2e..78974186 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -8,7 +8,7 @@ Unreleased
angular_integration() and average_radial_intensity(), which had incorrect or
nonintuitive behavior (PR #318, PR #319).
-v0.8.4 (2020-04-15)
+v0.8.4 (2021-04-15)
-------------------
* Added odd angular orders to tools.vmi.Distributions (PR #266).
* Important! Some "center" functions/parameters are renamed to "origin" or
diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst
index 19a9dc9f..c9ec45a1 100644
--- a/CONTRIBUTING.rst
+++ b/CONTRIBUTING.rst
@@ -2,7 +2,7 @@ Contributing to PyAbel
======================
-PyAbel is an open-source project, and we welcome improvements! Please let us know about any issues with the software, even if's just a typo. The easiest way to get started is to open a `new issue <https://github.com/PyAbel/PyAbel/issues>`_.
+PyAbel is an open-source project, and we welcome improvements! Please let us know about any issues with the software, even if's just a typo. The easiest way to get started is to open a `new issue <https://github.com/PyAbel/PyAbel/issues>`__.
If you would like to make a Pull Request, the following information may be useful.
@@ -10,7 +10,7 @@ If you would like to make a Pull Request, the following information may be usefu
Change Log
----------
-If the change is significant (more than just a typo-fix), please leave a short note about the change in `CHANGELOG.rst <https://github.com/PyAbel/PyAbel/blob/master/CHANGELOG.rst>`_
+If the change is significant (more than just a typo-fix), please leave a short note about the change in `CHANGELOG.rst <https://github.com/PyAbel/PyAbel/blob/master/CHANGELOG.rst>`__
Unit tests
@@ -24,7 +24,7 @@ For more detailed information, the following can be used::
pytest abel/ -v --cov=abel
-Note that this requires that you have `pytest <https://docs.pytest.org/en/latest/>`_ and (optionally) `pytest-cov <https://pytest-cov.readthedocs.io/en/latest/>`_ installed. You can install these with ::
+Note that this requires that you have `pytest <https://docs.pytest.org/en/latest/>`__ and (optionally) `pytest-cov <https://pytest-cov.readthedocs.io/en/latest/>`__ installed. You can install these with ::
pip install pytest pytest-cov
@@ -32,7 +32,7 @@ Note that this requires that you have `pytest <https://docs.pytest.org/en/latest
Documentation
-------------
-PyAbel uses Sphinx and `Napoleon <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/index.html>`_ to process Numpy-style docstrings and is synchronized to `pyabel.readthedocs.io <http://pyabel.readthedocs.io>`_. To build the documentation locally, you will need `Sphinx <http://www.sphinx-doc.org/>`_, the `recommonmark <https://github.com/rtfd/recommonmark>`_ package, and the `sphinx_rtd_theme <https://github.com/snide/sphinx_rtd_theme/>`_. You can install them using ::
+PyAbel uses Sphinx and `Napoleon <http://sphinxcontrib-napoleon.readthedocs.io/en/latest/index.html>`__ to process Numpy-style docstrings and is synchronized to `pyabel.readthedocs.io <http://pyabel.readthedocs.io>`__. To build the documentation locally, you will need `Sphinx <http://www.sphinx-doc.org/>`__, the `recommonmark <https://github.com/rtfd/recommonmark>`__ package, and the `sphinx_rtd_theme <https://github.com/snide/sphinx_rtd_theme/>`__. You can install them using ::
pip install sphinx
pip install recommonmark
@@ -50,7 +50,7 @@ Then you can open ``doc/_build/hmtl/index.html`` to look at the documentation. S
to clear out the old documentation and get things to re-build properly.
-When you get tired of typing ``make html`` every time you make a change to the documentation, it's nice to use use `sphix-autobuild <https://pypi.python.org/pypi/sphinx-autobuild>`_ to automatically update the documentation in your browser for you. So, install sphinx-autobuild using ::
+When you get tired of typing ``make html`` every time you make a change to the documentation, it's nice to use use `sphix-autobuild <https://pypi.python.org/pypi/sphinx-autobuild>`__ to automatically update the documentation in your browser for you. So, install sphinx-autobuild using ::
pip install sphinx-autobuild
@@ -61,7 +61,7 @@ Now you should be able to ::
which should launch a browser window displaying the docs. When you save a change to any of the docs, the re-build should happen automatically and the docs should update in a matter of a few seconds.
-Alternatively, `restview <https://pypi.python.org/pypi/restview>`_ is a nice way to preview the ``.rst`` files.
+Alternatively, `restview <https://pypi.python.org/pypi/restview>`__ is a nice way to preview the ``.rst`` files.
Code Style
@@ -69,13 +69,13 @@ Code Style
We hope that the PyAbel code will be understandable, hackable, and maintainable for many years to come. So, please use good coding style, include plenty of comments, use docstrings for functions, and pick informative variable names.
-PyAbel attempts to follow `PEP8 <https://www.python.org/dev/peps/pep-0008/>`_ style whenever possible, since the PEP8 recommendations typically produces code that is easier to read. You can check your code using `pycodestyle <https://pypi.org/project/pycodestyle/>`_, which can be called from the command line or incorporated right into most text editors. Also, PyAbel is using automated pycodestyle checking of all Pull Requests using `pep8speaks <https://pep8speaks.com/>`_. However, `producing readable code <https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds>`_ is the primary goal, so please go ahead and break the rules of PEP8 when doing so improves readability. For example, if a section of your code is easier to read with lines slightly longer than 79 characters, then use the longer lines.
+PyAbel attempts to follow `PEP8 <https://www.python.org/dev/peps/pep-0008/>`__ style whenever possible, since the PEP8 recommendations typically produces code that is easier to read. You can check your code using `pycodestyle <https://pypi.org/project/pycodestyle/>`__, which can be called from the command line or incorporated right into most text editors. Also, PyAbel is using automated pycodestyle checking of all Pull Requests using `pep8speaks <https://pep8speaks.com/>`__. However, `producing readable code <https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds>`__ is the primary goal, so please go ahead and break the rules of PEP8 when doing so improves readability. For example, if a section of your code is easier to read with lines slightly longer than 79 characters, then use the longer lines.
Before merging
--------------
-If possible, before merging your pull request please rebase your fork on the last master on PyAbel. This could be done `as explained in this post <https://stackoverflow.com/questions/7244321/how-to-update-a-github-forked-repository>`_::
+If possible, before merging your pull request please rebase your fork on the last master on PyAbel. This could be done `as explained in this post <https://stackoverflow.com/questions/7244321/how-to-update-a-github-forked-repository>`__::
# Add the remote, call it "upstream" (only the fist time)
git remote add upstream https://github.com/PyAbel/PyAbel.git
@@ -100,7 +100,7 @@ If possible, before merging your pull request please rebase your fork on the las
git push -f
-See `this wiki <https://github.com/edx/edx-platform/wiki/How-to-Rebase-a-Pull-Request>`_ for more information.
+See `this wiki <https://github.com/edx/edx-platform/wiki/How-to-Rebase-a-Pull-Request>`__ for more information.
Adding a new forward or inverse Abel implementation
@@ -142,14 +142,14 @@ See ``abel/tests/test_basex.py`` for a concrete example.
Dependencies
------------
-The current list of dependencies can be found in `setup.py <https://github.com/PyAbel/PyAbel/blob/master/setup.py>`_. Please refrain from adding new dependencies, unless it cannot be avoided.
+The current list of dependencies can be found in `setup.py <https://github.com/PyAbel/PyAbel/blob/master/setup.py>`__. Please refrain from adding new dependencies, unless it cannot be avoided.
Releasing on PyPi
-----------------
-PyAbel should be automatically released on PyPi (see `PR #161 <https://github.com/PyAbel/PyAbel/pull/161>`_) whenever a new release is drafted on GitHub via the "Draft New Release" button on the `Releases page <https://github.com/PyAbel/PyAbel/releases>`_. But first, make a Pull Request that does the following:
+PyAbel should be automatically released on PyPi (see `PR #161 <https://github.com/PyAbel/PyAbel/pull/161>`__) whenever a new release is drafted on GitHub via the "Draft New Release" button on the `Releases page <https://github.com/PyAbel/PyAbel/releases>`__. But first, make a Pull Request that does the following:
- Increment the version number in abel/_version.py.
- Modify CHANGELOG.rst to include the new changes in the new version.
diff --git a/README.rst b/README.rst
index ae1f8b83..12922bc6 100644
--- a/README.rst
+++ b/README.rst
@@ -6,13 +6,13 @@ PyAbel README
.. image:: https://ci.appveyor.com/api/projects/status/g1rj5f0g7nohcuuo
:target: https://ci.appveyor.com/project/PyAbel/PyAbel
-**Note:** This readme is best viewed as part of the `PyAbel Documentation <https://pyabel.readthedocs.io/en/latest/readme_link.html>`_.
+**Note:** This readme is best viewed as part of the `PyAbel Documentation <https://pyabel.readthedocs.io/en/latest/readme_link.html>`__.
Introduction
------------
-``PyAbel`` is a Python package that provides functions for the forward and inverse `Abel transforms <https://en.wikipedia.org/wiki/Abel_transform>`_. The forward Abel transform takes a slice of a cylindrically symmetric 3D object and provides the 2D projection of that object. The inverse Abel transform takes a 2D projection and reconstructs a slice of the cylindrically symmetric 3D distribution.
+``PyAbel`` is a Python package that provides functions for the forward and inverse `Abel transforms <https://en.wikipedia.org/wiki/Abel_transform>`__. The forward Abel transform takes a slice of a cylindrically symmetric 3D object and provides the 2D projection of that object. The inverse Abel transform takes a 2D projection and reconstructs a slice of the cylindrically symmetric 3D distribution.
.. image:: https://user-images.githubusercontent.com/1107796/48970223-1b477b80-efc7-11e8-9feb-c614d6cadab6.png
:width: 500px
@@ -27,7 +27,7 @@ PyAbel provides efficient implementations of several Abel transform algorithms,
Transform Methods
-----------------
-The outcome of the numerical Abel transform depends on the exact method used. So far, PyAbel includes the following `transform methods <https://pyabel.readthedocs.io/en/latest/transform_methods.html>`_:
+The outcome of the numerical Abel transform depends on the exact method used. So far, PyAbel includes the following `transform methods <https://pyabel.readthedocs.io/en/latest/transform_methods.html>`__:
1. ``basex`` - Gaussian basis set expansion of Dribinski and co-workers.
@@ -51,7 +51,7 @@ The outcome of the numerical Abel transform depends on the exact method used. So
Installation
------------
-PyAbel requires Python 3.5-3.9. (Note: PyAbel is also currently tested to work with Python 2.7, but Python 2 support will be removed soon.) `NumPy <https://www.numpy.org/>`_ and `SciPy <https://www.scipy.org/>`_ are also required, and `Matplotlib <https://matplotlib.org/>`_ is required to run the examples. If you don't already have Python, we recommend an "all in one" Python package such as the `Anaconda Python Distribution <https://www.continuum.io/downloads>`_, which is available for free.
+PyAbel requires Python 3.5-3.9. (Note: PyAbel is also currently tested to work with Python 2.7, but Python 2 support will be removed soon.) `NumPy <https://www.numpy.org/>`__ and `SciPy <https://www.scipy.org/>`__ are also required, and `Matplotlib <https://matplotlib.org/>`__ is required to run the examples. If you don't already have Python, we recommend an "all in one" Python package such as the `Anaconda Python Distribution <https://www.anaconda.com/products/individual>`__, which is available for free.
With pip
~~~~~~~~
@@ -112,7 +112,7 @@ Output:
:width: 400px
:alt: example abel transform
-.. note:: Additional examples can be viewed on the `PyAbel examples <https://pyabel.readthedocs.io/en/latest/examples.html>`_ page and even more are found in the `PyAbel/examples <https://github.com/PyAbel/PyAbel/tree/master/examples>`_ directory.
+.. note:: Additional examples can be viewed on the `PyAbel examples <https://pyabel.readthedocs.io/en/latest/examples.html>`__ page and even more are found in the `PyAbel/examples <https://github.com/PyAbel/PyAbel/tree/master/examples>`__ directory.
Documentation
@@ -159,13 +159,13 @@ The PyAbel code adheres to the following conventions:
**Image origin:** Fundamentally, the forward and inverse Abel transforms in PyAbel consider the origin of the image to be located in the center of a pixel. This means that, for a symmetric image, the image will have a width that is an odd number of pixels. (The central pixel is effectively "shared" between both halves of the image.) In most situations, the image origin is specified using the ``origin`` keyword in ``abel.Transform`` (or directly using ``abel.center.center_image`` to find the origin (the center of symmetry) of your image). This processing step takes care of shifting the origin of the image to the middle of the central pixel. However, if the individual Abel transforms methods are used directly, care must be taken to supply a properly centered image. Some methods also provide low-level functions for transforming only the right half of the image (with the origin located in the middle of a 0th-column pixel).
-
- **Intensity:** The pixel intensities can have any value (within the floating-point range). However, the intensity scale must be linear. Keep in mind that cameras and common image formats often use `gamma correction <https://en.wikipedia.org/wiki/Gamma_correction>`_ and thus provide data with nonlinear intensity encoding. Thus, if possible, it is recommended to disable the gamma correction on cameras used to record images that will be inverse Abel-transformed. If this is not possible, then it is necessary to apply the appropriate intensity transformations before the analysis. Most PyAbel methods also assume intensities to be floating-point numbers, and when applied to integer types, can return inappropriately rounded results. The ``abel.Transform`` class recasts the input image to ``float64`` by default, but if you wish to call the transform methods directly or use other tools, you might need to perform the conversion yourself (as ``IM.astype(float)``, for example).
+ **Intensity:** The pixel intensities can have any value (within the floating-point range). However, the intensity scale must be linear. Keep in mind that cameras and common image formats often use `gamma correction <https://en.wikipedia.org/wiki/Gamma_correction>`__ and thus provide data with nonlinear intensity encoding. Thus, if possible, it is recommended to disable the gamma correction on cameras used to record images that will be inverse Abel-transformed. If this is not possible, then it is necessary to apply the appropriate intensity transformations before the analysis. Most PyAbel methods also assume intensities to be floating-point numbers, and when applied to integer types, can return inappropriately rounded results. The ``abel.Transform`` class recasts the input image to ``float64`` by default, but if you wish to call the transform methods directly or use other tools, you might need to perform the conversion yourself (as ``IM.astype(float)``, for example).
Support
-------
-If you have a question or suggestion about PyAbel, the best way to contact the PyAbel Developers Team is to `open a new issue <https://github.com/PyAbel/PyAbel/issues>`_.
+If you have a question or suggestion about PyAbel, the best way to contact the PyAbel Developers Team is to `open a new issue <https://github.com/PyAbel/PyAbel/issues>`__.
Contributing
@@ -173,15 +173,15 @@ Contributing
We welcome suggestions for improvement, together with any interesting images that demonstrate application of PyAbel.
-Either open a new `Issue <https://github.com/PyAbel/PyAbel/issues>`_ or make a `Pull Request <https://github.com/PyAbel/PyAbel/pulls>`_.
+Either open a new `Issue <https://github.com/PyAbel/PyAbel/issues>`__ or make a `Pull Request <https://github.com/PyAbel/PyAbel/pulls>`__.
-`CONTRIBUTING.rst <https://github.com/PyAbel/PyAbel/blob/master/CONTRIBUTING.rst>`_ has more information on how to contribute, such as how to run the unit tests and how to build the documentation.
+`CONTRIBUTING.rst <https://github.com/PyAbel/PyAbel/blob/master/CONTRIBUTING.rst>`__ has more information on how to contribute, such as how to run the unit tests and how to build the documentation.
License
-------
-PyAble is licensed under the `MIT license <https://github.com/PyAbel/PyAbel/blob/master/LICENSE.txt>`_, so it can be used for pretty much whatever you want! Of course, it is provided "as is" with absolutely no warranty.
+PyAble is licensed under the `MIT license <https://github.com/PyAbel/PyAbel/blob/master/LICENSE.txt>`__, so it can be used for pretty much whatever you want! Of course, it is provided "as is" with absolutely no warranty.
.. _READMEcitation:
@@ -191,12 +191,12 @@ Citation
First and foremost, please cite the paper(s) corresponding to the implementation of the Abel transform that you use in your work. The references can be found at the links above.
-If you find PyAbel useful in you work, it would bring us great joy if you would cite the project. You can find the DOI for the lastest verison `here <https://dx.doi.org/10.5281/zenodo.594858>`_
+If you find PyAbel useful in you work, it would bring us great joy if you would cite the project. You can find the DOI for the lastest verison `here <https://dx.doi.org/10.5281/zenodo.594858>`__
.. image:: https://zenodo.org/badge/30170345.svg
:target: https://zenodo.org/badge/latestdoi/30170345
-Additionally, we have written a scientific paper comparing various Abel transform methods. You can find the manuscript at the Review of Scientific Instruments (DOI: `doi.org/10.1063/1.5092635 <https://doi.org/10.1063/1.5092635>`_) or on arxiv (`arxiv.org/abs/1902.09007 <https://arxiv.org/abs/1902.09007>`_).
+Additionally, we have written a scientific paper comparing various Abel transform methods. You can find the manuscript at the Review of Scientific Instruments (DOI: `doi.org/10.1063/1.5092635 <https://doi.org/10.1063/1.5092635>`__) or on arxiv (`arxiv.org/abs/1902.09007 <https://arxiv.org/abs/1902.09007>`__).
**Have fun!**
diff --git a/abel/benchmark.py b/abel/benchmark.py
index 39257f3c..e5b61b51 100644
--- a/abel/benchmark.py
+++ b/abel/benchmark.py
@@ -757,7 +757,7 @@ def is_symmetric(arr, i_sym=True, j_sym=True):
checked for polar symmetry.
See `issue #34 comment
- <https://github.com/PyAbel/PyAbel/issues/34#issuecomment-160344809>`_
+ <https://github.com/PyAbel/PyAbel/issues/34#issuecomment-160344809>`__
for the defintion of a center of the image.
"""
diff --git a/abel/dasch.py b/abel/dasch.py
index d3db31ad..c6460f5d 100644
--- a/abel/dasch.py
+++ b/abel/dasch.py
@@ -33,7 +33,7 @@
"One-dimensional tomography: a comparison of Abel, onion-peeling, and
filtered backprojection methods",
    `Appl. Opt. 31, 1146–1152 (1992)
- <https://doi.org/10.1364/AO.31.001146>`_.
+ <https://doi.org/10.1364/AO.31.001146>`__.
Parameters
----------
diff --git a/abel/hansenlaw.py b/abel/hansenlaw.py
index 24583d68..47360fee 100644
--- a/abel/hansenlaw.py
+++ b/abel/hansenlaw.py
@@ -60,14 +60,14 @@ def hansenlaw_transform(image, dr=1, direction='inverse', hold_order=0,
E. W. Hansen,
"Fast Hankel transform algorithm",
    `IEEE Trans. Acoust. Speech Signal Proc. 33, 666–671 (1985)
- <https://dx.doi.org/10.1109/TASSP.1985.1164579>`_
+ <https://dx.doi.org/10.1109/TASSP.1985.1164579>`__
and
E. W. Hansen, P.-L. Law,
"Recursive methods for computing the Abel transform and its inverse",
    `J. Opt. Soc. Am. A 2, 510–520 (1985)
- <https://dx.doi.org/10.1364/JOSAA.2.000510>`_.
+ <https://dx.doi.org/10.1364/JOSAA.2.000510>`__.
    This function performs the Hansen–Law transform on only one "right-side"
image::
diff --git a/abel/linbasex.py b/abel/linbasex.py
index f994b192..b833b6ec 100644
--- a/abel/linbasex.py
+++ b/abel/linbasex.py
@@ -37,7 +37,7 @@
"Charged particle velocity map image reconstruction with one-dimensional
projections of spherical functions",
`Rev. Sci. Instrum. 84, 033101 (2013)
- <https://doi.org/10.1063/1.4793404>`_.
+ <https://doi.org/10.1063/1.4793404>`__.
``linbasex`` models the image using a sum of Legendre polynomials at each
radial pixel, As such, it should only be applied to situations that can
diff --git a/abel/onion_bordas.py b/abel/onion_bordas.py
index 5414c568..0e4012b1 100644
--- a/abel/onion_bordas.py
+++ b/abel/onion_bordas.py
@@ -72,14 +72,14 @@ def onion_bordas_transform(IM, dr=1, direction="inverse", shift_grid=True,
"Incorporating real time velocity map image reconstruction into closed-loop
coherent control",
`Rev. Sci. Instrum. 85, 113105 (2014)
- <https://doi.org/10.1063/1.4899267>`_.
+ <https://doi.org/10.1063/1.4899267>`__.
The algorithm actually originates from
C. Bordas, F. Paulig,
"Photoelectron imaging spectrometry: Principle and inversion method",
    `Rev. Sci. Instrum. 67, 2257–2268 (1996)
- <https://doi.org/10.1063/1.1147044>`_.
+ <https://doi.org/10.1063/1.1147044>`__.
This function operates on the "right side" of an image. i.e. it works on
just half of a cylindrically symmetric image. Unlike the other transforms,
diff --git a/abel/transform.py b/abel/transform.py
index 2fbd7c9b..64546a88 100644
--- a/abel/transform.py
+++ b/abel/transform.py
@@ -127,7 +127,7 @@ class Transform(object):
J. P. Shaffer,
"Multiple scattering and the density distribution of a Cs MOT",
        `Optics Express 13, 9672–9682 (2005)
- <https://dx.doi.org/10.1364/OPEX.13.009672>`_.
+ <https://doi.org/10.1364/OPEX.13.009672>`__.
angular_integration : bool
Integrate the image over angle to give the radial (speed) intensity
@@ -229,7 +229,7 @@ class Transform(object):
"Reconstruction of Abel-transformable images: The Gaussian basis-set
expansion Abel transform method",
        `Rev. Sci. Instrum. 73, 2634–2642 (2002)
- <https://dx.doi.org/10.1063/1.1482156>`_.
+ <https://doi.org/10.1063/1.1482156>`__.
``direct``
This method attempts a direct integration of the Abel-transform
@@ -248,7 +248,7 @@ class Transform(object):
E. W. Hansen, P.-L. Law,
"Recursive methods for computing the Abel transform and its inverse",
        `J. Opt. Soc. Am. A 2, 510–520 (1985)
- <https://dx.doi.org/10.1364/JOSAA.2.000510>`_.
+ <https://doi.org/10.1364/JOSAA.2.000510>`__.
``linbasex`` *
Velocity-mapping images are composed of projected Newton spheres with
@@ -263,7 +263,7 @@ class Transform(object):
"Charged particle velocity map image reconstruction with
one-dimensional projections of spherical functions",
`Rev. Sci. Instrum. 84, 033101 (2013)
- <https://doi.org/10.1063/1.4793404>`_.
+ <https://doi.org/10.1063/1.4793404>`__.
``onion_bordas``
The onion peeling method, also known as "back projection", originates
@@ -271,7 +271,7 @@ class Transform(object):
C. Bordas, F. Paulig,
"Photoelectron imaging spectrometry: Principle and inversion method",
        `Rev. Sci. Instrum. 67, 2257–2268 (1996)
- <https://doi.org/10.1063/1.1147044>`_.
+ <https://doi.org/10.1063/1.1147044>`__.
The algorithm was subsequently coded in MatLab by
C. E. Rallis, T. G. Burwitz, P. R. Andrews, M. Zohrabi, R. Averin,
@@ -281,16 +281,16 @@ class Transform(object):
"Incorporating real time velocity map image reconstruction into
closed-loop coherent control",
`Rev. Sci. Instrum. 85, 113105 (2014)
- <https://doi.org/10.1063/1.4899267>`_,
+ <https://doi.org/10.1063/1.4899267>`__,
which was used as the basis of this Python port. See `issue #56
- <https://github.com/PyAbel/PyAbel/issues/56>`_.
+ <https://github.com/PyAbel/PyAbel/issues/56>`__.
``onion_peeling`` *
This is one of the most compact and fast algorithms, with the inverse
Abel transform achieved in one Python code-line, `PR #155
- <https://github.com/PyAbel/PyAbel/pull/155>`_. See also ``three_point``
- is the onion peeling algorithm as described by Dasch (1992), reference
- below.
+ <https://github.com/PyAbel/PyAbel/pull/155>`__. See also
+ ``three_point`` is the onion peeling algorithm as described by Dasch
+ (1992), reference below.
``rbasex`` *
The pBasex method by
@@ -298,7 +298,7 @@ class Transform(object):
        “Two-dimensional charged particle image inversion using a polar basis
        function expansion”,
`Rev. Sci. Instrum. 75, 4989â2996 (2004)
- <http://dx.doi.org/10.1063/1.1807578>`_
+ <http://doi.org/10.1063/1.1807578>`__
adapts the BASEX ("basis set expansion") method to the specific case of
velocity-mapping images by using a basis of 2D functions in polar
coordinates, such that the reconstructed radial distributions are
@@ -313,9 +313,9 @@ class Transform(object):
dynamics of hydroxymethyl radical (CH\ :sub:`2`\ OH and
        CD\ :sub:`2`\ OH)”,
Ph.D. dissertation, University of Southern California, 2012
- (`ProQuest <https://search.proquest.com/docview/1289069738>`_,
+ (`ProQuest <https://search.proquest.com/docview/1289069738>`__,
`USC <http://digitallibrary.usc.edu/cdm/ref/collection/p15799coll3/id/
- 112619>`_).
+ 112619>`__).
``three_point`` *
The "Three Point" Abel transform method exploits the observation that
@@ -328,7 +328,7 @@ class Transform(object):
"One-dimensional tomography: a comparison of Abel, onion-peeling, and
filtered backprojection methods",
`Appl. Opt. 31, 1146â1152 (1992)
- <https://doi.org/10.1364/AO.31.001146>`_.
+ <https://doi.org/10.1364/AO.31.001146>`__.
``two_point`` *
Another Dasch method. Simple, and fast, but not as accurate as the
diff --git a/doc/circularize_image.rst b/doc/circularize_image.rst
index 5202dd24..e9f88a05 100644
--- a/doc/circularize_image.rst
+++ b/doc/circularize_image.rst
@@ -29,7 +29,7 @@ J. R. Gascooke, S. T. Gibson, W. D. Lawrance,
"A 'circularisation' method to repair deformations and determine the centre of
velocity map images",
`J. Chem. Phys. 147, 013924 (2017)
-<https://dx.doi.org/10.1063/1.4981024>`_.
+<https://dx.doi.org/10.1063/1.4981024>`__.
Implementation
diff --git a/doc/conf.py b/doc/conf.py
index 12ecbff7..a41f1e94 100755
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -80,7 +80,7 @@ def __getattr__(cls, name):
# General information about the project.
project = 'PyAbel'
-copyright = u'2016–2020, PyAbel team'
+copyright = u'2016–2021, PyAbel team'
author = 'PyAbel team'
# The version info for the project you're documenting, acts as replacement for
diff --git a/doc/transform_methods/rbasex-coord.py b/doc/transform_methods/rbasex-coord.py
index 2fb6981b..4d13a09b 100644
--- a/doc/transform_methods/rbasex-coord.py
+++ b/doc/transform_methods/rbasex-coord.py
@@ -86,7 +86,7 @@
ax.text(0, x_[anc] + lo / 3, y_[anc] + lo, '$\\theta$', **sl)
-plt.tight_layout(pad=0, rect=(-0.1, -0.2, 1.1, 1))
+plt.tight_layout(pad=0, rect=(-0.06, -0.1, 1.04, 1))
#plt.savefig('rbasex-coord.svg')
#plt.show()
diff --git a/examples/README.txt b/examples/README.txt
new file mode 100644
index 00000000..448030f0
--- /dev/null
+++ b/examples/README.txt
@@ -0,0 +1,5 @@
+Most example scripts provided here must be run from this directory, since they
+need the "data" subdirectory with the example data files and/or the "bases"
+subdirectory for saving/loading the basis sets. If you wish to copy or run such
+scripts elsewhere, please make sure that these paths are available or modify
+them accordingly.
| Bug in module abel.basex
The following error occurs when running the example `examples/example_basex_step.py`:
```
(py38) ubuntu:PyAbel$ python examples/example_basex_step.py
Traceback (most recent call last):
File "examples/example_basex_step.py", line 30, in <module>
recon_right = basex_transform(
File "/tmp/PyAbel/build/lib/abel/basex.py", line 153, in basex_transform
A = get_bs_cached(n, sigma=sigma, reg=reg, correction=correction,
File "/tmp/PyAbel/build/lib/abel/basex.py", line 323, in get_bs_cached
for f in listdir(basis_dir):
FileNotFoundError: [Errno 2] No such file or directory: 'bases'
```
This error occurs both with pyabel installed via pip and built from source.
| As I understand, this “bug” is not in the `abel.basex` module but in the example. It seems that you are running it in a directory which does not have a `bases` subdirectory to store basis files. Try to go inside the `examples` directory first (it must already have the `bases` subdirectory) and run this example from there. Or create a `bases` subdirectory in the working directory where you want to run the code.
Thanks for the explanation. I didn't notice the `basis_dir` parameter, and assumed that the examples could be run from the repo root. A README in the examples directory would be helpful.
Yes, the examples probably need some more documentation...
Another thing that could be improved is to check whether the `bases` directory exists, and if not, offer to run (or just complain and run) without storing the basis. This was addressed in #277 for the GUI example, where it was causing real problems, but can be done (more easily) for other examples as well.
From a user perspective, the most useful behaviour would probably be for `abel.basex.basex_transform` to default to reading and writing from something like `$HOME/.cache/pyabel`.
This was also to some extent discussed [in #247](https://github.com/PyAbel/PyAbel/issues/247#issuecomment-474986998). :-) But we need to find a good portable way to do so. | 2021-06-24T18:07:05 | 0.0 | [] | []
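
A rough sketch of the per-user cache idea discussed above: resolve a cache directory, create it if it is missing (which avoids the FileNotFoundError in the report), and pass it to `basex_transform` through the `basis_dir` parameter mentioned in this thread. The cache path and the random half-image are illustrative assumptions, not PyAbel defaults.

```python
import os
import numpy as np
from abel.basex import basex_transform

# Hypothetical per-user cache location; PyAbel itself does not define this default.
basis_dir = os.path.join(os.path.expanduser("~"), ".cache", "pyabel")
os.makedirs(basis_dir, exist_ok=True)  # avoids the missing-directory FileNotFoundError

half_image = np.random.random((101, 51))  # stand-in for the right half of a centered image
recon = basex_transform(half_image, basis_dir=basis_dir)
```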
||
boudinfl/pke | boudinfl__pke-225 | a22a86df7d3141c8d53ba2653374c9be28d11ca6 | diff --git a/pke/base.py b/pke/base.py
index 14d9df32..60745372 100644
--- a/pke/base.py
+++ b/pke/base.py
@@ -78,10 +78,15 @@ def load_document(self, input, language=None, stoplist=None,
self.normalization = normalization
# initialize the stoplist
- if stoplist:
+ if stoplist is not None:
self.stoplist = stoplist
else:
- self.stoplist = stopwords.get(self.language)
+ try:
+ self.stoplist = stopwords[self.language]
+ except KeyError:
+ logging.warning('No stoplist available in pke for \'{}\' language.'.format(self.language))
+ # logging.warning('Set `stoplist` to `[]` or a custom stoplist to suppress this warning.')
+ self.stoplist = []
# check whether input is a spacy doc object instance
if isinstance(input, spacy.tokens.doc.Doc):
@@ -96,9 +101,7 @@ def load_document(self, input, language=None, stoplist=None,
parser = PreprocessedReader()
sents = parser.read(list_of_sentence_tuples=input)
else:
- logging.error('Cannot process input. It is neither a spacy doc or a string: {}'.format(type(input)))
- # TODO raise TypeError('Cannot process input. It is neither a spacy doc, a string or a list of tuple: {}'.format(type(input)))) ?
- return
+ raise TypeError('Cannot process input. It is neither a spacy doc, a string or a list of list of tuple: {}'.format(type(input)))
# populate the sentences
self.sentences = sents
@@ -112,7 +115,7 @@ def load_document(self, input, language=None, stoplist=None,
langcode = 'porter'
stemmer = SnowballStemmer(langcode)
except ValueError:
- logging.error('No stemmer available for \'{}\' language -> fall back to porter.'.format(self.language))
+ logging.warning('No stemmer available in pke for \'{}\' language -> falling back to porter stemmer.'.format(self.language))
stemmer = SnowballStemmer("porter")
# populate Sentence.stems
diff --git a/pke/lang.py b/pke/lang.py
index cb83e162..dc24a2c5 100644
--- a/pke/lang.py
+++ b/pke/lang.py
@@ -12,23 +12,30 @@
import importlib
+# This dictionnary holds only languages supported by `pke`.
+# Supported languages need a stemmer and a spacy model.
+
+# This dictionnary maps spacy's langcode to stemmer language
+# (ordered by language name).
+# The list of languages was obtained using:
+# `nltk.stem.SnowballStemmer.languages`
+
langcodes = {
- "ar": "arabic",
- "da": "danish",
- "du": "dutch",
- "en": "english",
- "fi": "finnish",
- "fr": "french",
- "ge": "german",
- "hu": "hungarian",
- "it": "italian",
- "no": "norwegian",
- "pt": "portuguese",
- "ro": "romanian",
- "ru": "russian",
- "sp": "spanish",
- "sw": "swedish",
- "ja": "japanese"
+ # "ar": "arabic", # no spacy model yet ;)
+ "da": "danish",
+ "nl": "dutch",
+ "en": "english",
+ "fi": "finnish",
+ "fr": "french",
+ "de": "german",
+ # "hu": "hungarian", # no spacy model yet ;)
+ "it": "italian",
+ "nb": "norwegian",
+ "pt": "portuguese",
+ "ro": "romanian",
+ "ru": "russian",
+ "es": "spanish",
+ "sv": "swedish",
}
stopwords = {}
diff --git a/pke/readers.py b/pke/readers.py
index e78b6d66..aa468493 100644
--- a/pke/readers.py
+++ b/pke/readers.py
@@ -54,6 +54,9 @@ def __init__(self, language=None):
if language is None:
self.language = 'en'
+ if len(self.language) != 2:
+ raise ValueError('`language` is \'{}\', but should be an iso2 language code (\'en\' instead of \'english\')'.format(self.language))
+
def read(self, text, spacy_model=None):
"""Read the input file and use spacy to pre-process.
@@ -83,9 +86,11 @@ def read(self, text, spacy_model=None):
# stop execution is no model is available
else:
- logging.error('No spacy model for \'{}\' language.'.format(self.language))
- logging.error('A list of available spacy models is available at https://spacy.io/models.')
- return
+ excp_msg = 'No downloaded spacy model for \'{}\' language.'.format(self.language)
+ excp_msg += '\nA list of downloadable spacy models is available at https://spacy.io/models.'
+ excp_msg += '\nAlternatively, preprocess your document as a list of sentence tuple (word, pos), such as:'
+ excp_msg += "\n\t[[('The', 'DET'), ('brown', 'ADJ'), ('fox', 'NOUN'), ('.', 'PUNCT')]]"
+ raise Exception(excp_msg)
# add the sentence splitter
nlp.add_pipe('sentencizer')
| Add "zh": "chinese"
Add "zh": "chinese" in lang.py to support automatically loading the Chinese stopword list, and handle the case at line 421 of candidate_filtering where stoplist is None.
| 2023-03-15T10:48:57 | 0.0 | [] | [] |
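
For context on what this patch enables, here is a hypothetical workaround for an unsupported language such as Chinese: pass a pre-tokenized document (the fallback input format named in the readers.py error message above) together with an explicit stoplist, so the missing built-in stoplist is never needed. The extractor class, sample tokens and stopword list below are placeholders, not pke recommendations.

```python
import pke

# One pre-tokenized sentence as (word, POS) tuples, so no Chinese spacy model is required.
doc = [[("机器", "NOUN"), ("学习", "NOUN"), ("的", "PART"), ("应用", "NOUN")]]

extractor = pke.unsupervised.TopicRank()
extractor.load_document(
    input=doc,
    language="zh",               # not in pke's langcodes, so stemming falls back to porter
    stoplist=["的", "了", "是"],  # explicit stoplist instead of a missing built-in one
)
extractor.candidate_selection()
# candidate_weighting() and get_n_best() then run as for any supported language.
```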
|||
PhoenixDL/rising | PhoenixDL__rising-128 | 297274d3f9387e2dcefdffeed837c409a36ac1bb | diff --git a/rising/transforms/functional/crop.py b/rising/transforms/functional/crop.py
index 68033340..f13ab177 100644
--- a/rising/transforms/functional/crop.py
+++ b/rising/transforms/functional/crop.py
@@ -60,7 +60,6 @@ def random_crop(data: torch.Tensor, size: Union[int, Sequence[int]],
Returns:
torch.Tensor: cropped output
- List[int]: top left corner used for crop
"""
if check_scalar(dist):
dist = [dist] * (data.ndim - 2)
| [Bug] Docstring for random_crop claims it returns crops corner but it doesn't
**Description**
When calling [`rising.transforms.functional.crop.random_crop`](https://github.com/PhoenixDL/rising/blob/7da711c27ae23eda3b27b2168aac9505c30e928f/rising/transforms/functional/crop.py#L51-L79) the docstring says it returns the crop corner. It doesn't.
**Environment**
* OS: Windows 10
* Python version: Python 3.8.5
* `rising` version: 0.2.0post0
* How did you install `rising`? pip
**Reproduction**
```python
import torch
from rising.transforms.functional import random_crop
x = torch.zeros(1, 1, 10, 10)
print(random_crop(x, (3, 3))) # Should have returned both crop and corner
```
| 2022-12-28T17:41:55 | 0.0 | [] | [] |
|||
obsidian-html/obsidian-html | obsidian-html__obsidian-html-304 | d5ee822fbd06653fa173c2ae6e8959affd6fe2a3 | diff --git a/obsidianhtml/HeaderTree.py b/obsidianhtml/HeaderTree.py
index d2c79330..d9d630fb 100644
--- a/obsidianhtml/HeaderTree.py
+++ b/obsidianhtml/HeaderTree.py
@@ -30,6 +30,46 @@ def PrintHeaderTree(root_element):
page.append(element)
return '\n'.join(page)
+def FindHeaderTreeKey(key_list, key):
+ # this code will find a key in the key list that is the same as the provided key
+ # with the option for one or more '-' at any location in the provided key relative to
+ # the keys in the keylist
+
+ if key in key_list:
+ return key
+
+ # first try to match keys without -
+ naive_matches = []
+ skey = key.replace('-', '')
+ for k in key_list:
+ if k.replace('-', '') == skey:
+ naive_matches.append(k)
+
+ if len(naive_matches) == 1:
+ return naive_matches[0]
+ if len(naive_matches) == 0:
+ raise Exception(f"Header {key} not found in list of {key_list}")
+
+ # more than one match found
+ # wifi-2-4-vs-5-0 wifi-24-vs-50
+ c = 0
+ for k in naive_matches:
+ for char in k:
+ if char == key[c]:
+ c += 1
+ if c == len(key):
+ return k
+ elif key[c] == '-':
+ c += 1
+ if char == key[c]:
+ c += 1
+ if c == len(key):
+ return k
+ else:
+ continue
+ raise Exception(f"Header {key} not found in list of {key_list}")
+
+
def ConvertMarkdownToHeaderTree(code):
lines = code.split('\n')
current_element = _newElement()
diff --git a/obsidianhtml/MarkdownPage.py b/obsidianhtml/MarkdownPage.py
index 9991b028..55a3c66b 100644
--- a/obsidianhtml/MarkdownPage.py
+++ b/obsidianhtml/MarkdownPage.py
@@ -5,7 +5,7 @@
import urllib.parse # convert link characters like %
import warnings
from .lib import DuplicateFileNameInRoot, GetObsidianFilePath, ConvertTitleToMarkdownId, MalformedTags, OpenIncludedFile
-from .HeaderTree import PrintHeaderTree, ConvertMarkdownToHeaderTree, GetReferencedBlock
+from .HeaderTree import PrintHeaderTree, ConvertMarkdownToHeaderTree, GetReferencedBlock, FindHeaderTreeKey
from .FileFinder import FindFile
class MarkdownPage:
@@ -354,7 +354,8 @@ def ConvertObsidianPageToMarkdownPage(self, origin:'OH_file'=None, include_depth
else:
header_id = ConvertTitleToMarkdownId(header)
header_dict, root_element = ConvertMarkdownToHeaderTree(included_page.page)
- included_page.page = PrintHeaderTree(header_dict[header_id])
+ header_dict_key = FindHeaderTreeKey(header_dict.keys(), header_id)
+ included_page.page = PrintHeaderTree(header_dict[header_dict_key])
# Wrap up
included_page.RestoreCodeSections()
diff --git a/obsidianhtml/lib.py b/obsidianhtml/lib.py
index 8b55ad07..67398a13 100644
--- a/obsidianhtml/lib.py
+++ b/obsidianhtml/lib.py
@@ -100,13 +100,14 @@ def ConvertTitleToMarkdownId(title):
# remove whitespace and lowercase
idstr = title.lower().strip()
+ # remove special characters "hi-hello-'bye!'" --> "hi-hello-bye"
+ idstr = "".join([ch for ch in idstr if ch in (ascii_letters + digits + ' -_')])
+
# convert "hi hello - 'bye!'" --> "hi-hello---'bye!'" --> "hi-hello-'bye!'"
idstr = idstr.replace(' ', '-')
while '--' in idstr:
idstr = idstr.replace('--', '-')
- # remove special characters "hi-hello-'bye!'" --> "hi-hello-bye"
- idstr = "".join([ch for ch in idstr if ch in (ascii_letters + digits + ' -_')])
return idstr
diff --git a/obsidianhtml/src/js/obsidian_core.js b/obsidianhtml/src/js/obsidian_core.js
index c6a11941..3f8739e6 100644
--- a/obsidianhtml/src/js/obsidian_core.js
+++ b/obsidianhtml/src/js/obsidian_core.js
@@ -107,19 +107,25 @@ function load_page() {
if (l.getAttribute("href").includes('#')) {
l.onclick = function () {
+ // remove current url from the link
let current_url = document.URL
+ console.log(current_url);
current_url = decodeURI(current_url.replace(window.location.protocol + '//', '').replace(window.location.host, ''))
+ current_url = current_url.split('#')[0];
+
let link = this.getAttribute("href")
link = link.replace(current_url, '')
+ // if we are left with something like: '#blabla' then we have an anchor link
+ // otherwise, we just go to the page:
if (link[0] != '#') {
link = this.getAttribute("href").replace('#', '#!')
window.location.href = link;
return false;
}
- console.log(link, document.getElementsByClassName("container")[0]);
-
+ // we scroll to the anchor
+ // we do this manually because scrolling divs suck
let levelcont = document.getElementsByClassName("container")[0];
var el = levelcont.querySelectorAll(link)[0];
if (el) {
@@ -285,11 +291,11 @@ function SetHeaders(container) {
els[i].innerHTML = '<a id="' + anchor_id + '" class="anchor" href="' + href + '"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>\n' + els[i].innerHTML
// body onload is not called when staying within the page
- // we need to call the LoadPage() function manually
+ // we need to call the load_page() function manually
let href_el = document.getElementById(anchor_id);
href_el.onclick = function () {
window.location.replace(this.href);
- LoadPage(0);
+ load_page(0);
}
// Show/hide link icon
| [Bug] Note inclusion based on header anchor does not work with special chars in the header
@dwrolvink Hello, it already worked perfectly for me.
But I found some bugs in the links to blocks. For example, using certain symbols makes the build fail.
```md
%%% note1
# wifi 2.4 vs 5.0
---
# TCP/IP Model
---
# CI/CD && DevOps
---
%%% note2
Obsidian replaces these symbols with spaces
[[note1#wifi 2 4 vs 5 0]] // fail
[[note1#TCP IP Model]] // fail
[[note1#CI CD DevOps]] // fail
I had to change all those references for the build to work
```
_Originally posted by @GiancarloAparicio in https://github.com/obsidian-html/obsidian-html/issues/296#issuecomment-1197641425_
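For illustration, a hedged sketch of how the fuzzy lookup added in this patch bridges the two id styles. The header list and link id below are hypothetical values derived from the example note above, and it assumes the patched `obsidianhtml.HeaderTree` module is importable:
```python
from obsidianhtml.HeaderTree import FindHeaderTreeKey  # added by this patch

# ids generated from the actual headers ("wifi 2.4 vs 5.0", "TCP/IP Model", "CI/CD && DevOps")
header_keys = ["wifi-24-vs-50", "tcpip-model", "cicd-devops"]
# id derived from the Obsidian link "[[note1#wifi 2 4 vs 5 0]]" (dots replaced by spaces)
link_id = "wifi-2-4-vs-5-0"

# both sides collapse to "wifi24vs50" once dashes are ignored, so a single naive match is found
print(FindHeaderTreeKey(header_keys, link_id))  # expected: "wifi-24-vs-50"
```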
| 2022-07-28T20:49:13 | 0.0 | [] | [] |
|||
sappelhoff/pyprep | sappelhoff__pyprep-153 | 82390acff4c15a8e8ca92ed92c46f7a50d72e049 | diff --git a/CITATION.cff b/CITATION.cff
index 1b6ee06..d17cba2 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -48,6 +48,9 @@ authors:
- given-names: Nabil
family-names: Alibou
affiliation: 'Biotrial, Neuroscience Department, Rennes, France'
+ - given-names: Ayush
+ family-names: Agarwal
+ affiliation: 'Techno India University, India'
type: software
repository-code: 'https://github.com/sappelhoff/pyprep'
license: MIT
diff --git a/docs/whats_new.rst b/docs/whats_new.rst
index e8ce05d..2106264 100644
--- a/docs/whats_new.rst
+++ b/docs/whats_new.rst
@@ -27,6 +27,7 @@ Version 0.5.0 (Unreleased)
Changelog
~~~~~~~~~
- :meth:`~pyprep.NoisyChannels.find_bad_by_nan_flat` now accepts a ``flat_threshold`` argument, by `Nabil Alibou`_ (:gh:`144`)
+- changed _mad function in utils.py to use median_abs_deviation from the sciPy module, by `Ayush Agarwal`_ (:gh:`153`).
Bug
~~~
@@ -234,3 +235,4 @@ Changelog
.. _Mathieu Scheltienne: https://github.com/mscheltienne
.. _Ole Bialas: https://github.com/OleBialas
.. _Nabil Alibou: https://github.com/nabilalibou
+.. _Ayush Agarwal: https://github.com/Ayush-Devs
diff --git a/pyprep/utils.py b/pyprep/utils.py
index ae29678..c32f016 100644
--- a/pyprep/utils.py
+++ b/pyprep/utils.py
@@ -10,6 +10,7 @@
from psutil import virtual_memory
from scipy import linalg
from scipy.signal import firwin, lfilter, lfilter_zi
+from scipy.stats import median_abs_deviation
def _union(list1, list2):
@@ -487,11 +488,7 @@ def _mad(x, axis=None):
raise ValueError(e.format(x.ndim))
# Calculate the median absolute deviation from the median
- med = np.median(x, axis=axis)
- if axis == 1:
- med = med.reshape(-1, 1) # Transposes array to allow subtraction below
- mad = np.median(np.abs(x - med), axis=axis)
-
+ mad = median_abs_deviation(x, axis=axis)
return mad
| We should replace our own implementation of MAD by the one now available in scipy
https://github.com/sappelhoff/pyprep/blob/82390acff4c15a8e8ca92ed92c46f7a50d72e049/pyprep/utils.py#L464-L495
versus
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.median_abs_deviation.html
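For reference, a minimal check (not part of the issue; the test matrix is an arbitrary assumption) that `scipy.stats.median_abs_deviation` with its default `scale=1.0` reproduces the hand-rolled row-wise computation:
```python
import numpy as np
from scipy.stats import median_abs_deviation

x = np.random.default_rng(0).normal(size=(4, 100))

# hand-rolled MAD along rows, mirroring the current _mad helper
med = np.median(x, axis=1).reshape(-1, 1)
manual = np.median(np.abs(x - med), axis=1)

# SciPy's implementation; the default scale=1.0 applies no normal-consistency factor
assert np.allclose(manual, median_abs_deviation(x, axis=1))
```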
| 2024-08-20T13:44:57 | 0.0 | [] | [] |
|||
CybercentreCanada/assemblyline-ui | CybercentreCanada__assemblyline-ui-648 | ab5106ef7864ffd91c3ce3a6839981a65f589859 | diff --git a/assemblyline_ui/api/v4/service.py b/assemblyline_ui/api/v4/service.py
index 0d84f452..fe24e940 100644
--- a/assemblyline_ui/api/v4/service.py
+++ b/assemblyline_ui/api/v4/service.py
@@ -695,6 +695,7 @@ def list_all_services(**_):
'category': x.get('category', None),
'description': x.get('description', None),
'enabled': x.get('enabled', False),
+ 'is_external': x.get('is_external', False),
'name': x.get('name', None),
'privileged': x.get('privileged', False),
'rejects': x.get('rejects', None),
| Allow the system to change the state of rules not set by a user
| 2023-05-23T16:51:28 | 0.0 | [] | [] |
|||
morganpartee/codegpt | morganpartee__codegpt-14 | cadd49a64d5132f1763ffcdb8518c293e1be1e37 | diff --git a/.gitignore b/.gitignore
index 12a1e13..4d7b922 100644
--- a/.gitignore
+++ b/.gitignore
@@ -5,3 +5,4 @@
/dist
**/__pycache__/**
.oai
+.venv
\ No newline at end of file
diff --git a/codegpt/gpt_interface.py b/codegpt/gpt_interface.py
index b0d47bc..27801f5 100644
--- a/codegpt/gpt_interface.py
+++ b/codegpt/gpt_interface.py
@@ -20,7 +20,7 @@ def confirm_send(prompt: str, max_tokens=4000, yes=False, silent=False):
)
else:
typer.confirm(
- f"This prompt is {len(tokens)}ish tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {max_tokens}ish.",
+ f"This prompt is {tokens if type(tokens) == int else len(tokens)}ish tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {max_tokens}ish.",
default=True,
abort=True,
)
diff --git a/example/unsafe edit/codegpt.py b/example/unsafe edit/codegpt.py
index c91024c..75e8673 100644
--- a/example/unsafe edit/codegpt.py
+++ b/example/unsafe edit/codegpt.py
@@ -45,7 +45,7 @@ def _refactor_or_edit(
max_tokens = (round(4097 - (7 / 4) * len(tokens)),)
typer.confirm(
- f"This prompt is {len(tokens)} tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {max_tokens}.",
+ f"This prompt is {tokens if type(tokens) == int else len(tokens)} tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {max_tokens}.",
default=True,
abort=True,
)
diff --git a/example/unsafe edit/codegpt.py.old b/example/unsafe edit/codegpt.py.old
index 829be99..d3e78a9 100644
--- a/example/unsafe edit/codegpt.py.old
+++ b/example/unsafe edit/codegpt.py.old
@@ -54,7 +54,7 @@ def refactor(
tokens = nltk.word_tokenize(prompt)
typer.confirm(
- f"This prompt is {len(tokens)} tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {4097 - len(tokens)}.",
+ f"This prompt is {tokens if type(tokens) == int else len(tokens)} tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {4097 - len(tokens)}.",
default=True,
abort=True,
)
@@ -133,7 +133,7 @@ def edit(
tokens = nltk.word_tokenize(prompt)
typer.confirm(
- f"This prompt is {len(tokens)} tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {4097 - len(tokens)}.",
+ f"This prompt is {tokens if type(tokens) == int else len(tokens)} tokens, are you sure you want to continue?\nThe most GPT-3 can return in response is {4097 - len(tokens)}.",
abort=True,
default=True,
)
| Formatting bug for error string
Reproduce:
- clone this repo
- codegpt docs codegpt
Error:
```
Working... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:--Error: object of type 'int' has no len()
Working... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00
Done!
```
This seems to be breaking; there might be a deeper bug that causes it.
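A minimal sketch of the type mismatch behind the traceback, with hypothetical values: some call sites hand the confirmation prompt a precomputed token count (an `int`), others the token list itself, which is what the `type(tokens) == int` guard in the patch accounts for:
```python
tokens = 1234                        # hypothetical: a precomputed token count (int)
# tokens = ["some", "token", "list"] # ...or the tokenized prompt itself
n_tokens = tokens if type(tokens) == int else len(tokens)  # guard used in the patch
print(f"This prompt is {n_tokens}ish tokens")
```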
| 2023-02-16T08:18:47 | 0.0 | [] | [] |
|||
nuxeo/nuxeo-python-client | nuxeo__nuxeo-python-client-257 | 0019d3f4045869cca3db430717ee49c6cb8e33f6 | diff --git a/nuxeo/uploads.py b/nuxeo/uploads.py
index d09a0ac2..ba0f957f 100644
--- a/nuxeo/uploads.py
+++ b/nuxeo/uploads.py
@@ -74,12 +74,11 @@ def post(self, handler="", **kwargs):
if handler:
handler = handler.lower()
handlers = self.handlers()
- if handler in handlers:
- if handler != "default":
- endpoint = f"{endpoint}/new/{handler}"
- else:
+ if handler not in handlers:
raise InvalidUploadHandler(handler, handlers)
+ if handler != "default":
+ endpoint = f"{endpoint}/new/{handler}"
data = self.client.request("POST", endpoint, **kwargs).json()
# Set a uniq ID for that batch, it will be used by third-party upload handlers
data["key"] = str(uuid4())
@@ -333,11 +332,10 @@ def get_uploader(
from .handlers.s3 import ChunkUploaderS3 as cls
else:
from .handlers.s3 import UploaderS3 as cls
+ elif chunked:
+ from .handlers.default import ChunkUploader as cls
else:
- if chunked:
- from .handlers.default import ChunkUploader as cls
- else:
- from .handlers.default import Uploader as cls
+ from .handlers.default import Uploader as cls
return cls(self, batch, blob, chunk_size, callback, **kwargs)
| NXPY-232: Allow specifying the upload provider type in the batch details
Co-authored-by: Romain Grasland <[email protected]>
| View issue in JIRA: [NXPY-232](https://jira.nuxeo.com/browse/NXPY-232): Allow specifying the upload provider type in the batch details | 2021-07-01T14:33:33 | 0.0 | [] | [] |
||
scholarly-python-package/scholarly | scholarly-python-package__scholarly-457 | 5efed39495e096bb029a79dee3d19f5bf373e458 | diff --git a/scholarly/_proxy_generator.py b/scholarly/_proxy_generator.py
index 43cc78d..a9bf9f2 100644
--- a/scholarly/_proxy_generator.py
+++ b/scholarly/_proxy_generator.py
@@ -433,6 +433,7 @@ def _handle_captcha2(self, url):
for cookie in self._get_webdriver().get_cookies():
cookie.pop("httpOnly", None)
cookie.pop("expiry", None)
+ cookie.pop("sameSite", None)
self._session.cookies.set(**cookie)
return self._session
| scholarly:Exception TypeError while fetching page: ("create_cookie() got unexpected keyword arguments: ['sameSite']",)
I was using scholarly on Windows 10. I installed Firefox and geckodriver, and can initiate the Firefox session successfully, yet after I get the captcha right, the program keeps creating a new session. I looked into the scholarly log and found this below:
INFO:scholarly:Getting https://scholar.google.com/scholar?hl=en&q=Spatial-Temporal%20Synchronous%20Graph%20Convolutional%20Networks%3A%20A%20New%20Framework%20for%20Spatial-Temporal%20Network%20Data%20Forecasting&as_vis=0&as_sdt=0,33
INFO:scholarly:Session proxy config is {}
INFO:scholarly:Got a captcha request.
INFO:scholarly:Solving the captcha took already 10 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 20 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 30 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 40 seconds (of maximum 604800 s).
INFO:scholarly:Solved captcha in less than 50 seconds.
INFO:scholarly:Exception TypeError while fetching page: ("create_cookie() got unexpected keyword arguments: ['sameSite']",)
INFO:scholarly:Retrying with a new session.
INFO:scholarly:Session proxy config is {}
INFO:scholarly:Got a captcha request.
INFO:scholarly:Solving the captcha took already 10 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 20 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 30 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 40 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 50 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 60 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 70 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 80 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 90 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 100 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 110 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 120 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 130 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 140 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 150 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 160 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 170 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 180 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 190 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 200 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 210 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 220 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 230 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 240 seconds (of maximum 604800 s).
INFO:scholarly:Getting https://scholar.google.com/scholar?hl=en&q=Spatial-Temporal%20Synchronous%20Graph%20Convolutional%20Networks%3A%20A%20New%20Framework%20for%20Spatial-Temporal%20Network%20Data%20Forecasting&as_vis=0&as_sdt=0,33
INFO:scholarly:Session proxy config is {}
INFO:scholarly:Got a captcha request.
INFO:scholarly:Solving the captcha took already 10 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 20 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 30 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 40 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 50 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 60 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 70 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 80 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 90 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 100 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 110 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 120 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 130 seconds (of maximum 604800 s).
INFO:scholarly:Solved captcha in less than 140 seconds.
INFO:scholarly:Exception TypeError while fetching page: ("create_cookie() got unexpected keyword arguments: ['sameSite']",)
INFO:scholarly:Retrying with a new session.
INFO:scholarly:Session proxy config is {}
INFO:scholarly:Got a captcha request.
INFO:scholarly:Solving the captcha took already 10 seconds (of maximum 604800 s).
INFO:scholarly:Solving the captcha took already 20 seconds (of maximum 604800 s).
I think it's due to the misuse of create_cookie(). I'm using Python 3.8.3.
| To reproduce the issue, use the `scholarly.search_pubs` method.
I encountered the same problem. It seems the `requests.cookies.create_cookie()` method does not accept the field `'sameSite'`, which is set by Google after solving the captcha. I found a quick workaround, which is to pop that field from the cookie object before setting it, in the file `_proxy_generator.py`.
In essence, add the line
`cookie.pop("sameSite", None)`
below line 352 in `_proxy_generator.py`.
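To make the failure concrete, here is a small self-contained sketch (the cookie values are made up) of why that `pop` helps when handing Selenium-style cookie dicts to `requests`:
```python
import requests

session = requests.Session()
# a cookie dict shaped like what Selenium's get_cookies() returns (values are hypothetical)
cookie = {"name": "GSP", "value": "A=abc", "domain": ".google.com",
          "path": "/", "secure": True, "sameSite": "None"}

cookie.pop("sameSite", None)   # requests' create_cookie() raises TypeError on unknown kwargs
session.cookies.set(**cookie)  # works once the unsupported field is removed
```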
Merging this with https://github.com/scholarly-python-package/scholarly/issues/89
It seems like this fix was never merged! I still have to manually pop the sameSite cookie!
Please proceed with a pull request with the fix. | 2022-11-17T16:44:34 | 0.0 | [] | [] |
||
MrOlm/drep | MrOlm__drep-141 | fc0896affa666205605d790c30d2623291152c68 | diff --git a/drep/d_filter.py b/drep/d_filter.py
index c5d57ed..bd3e71a 100644
--- a/drep/d_filter.py
+++ b/drep/d_filter.py
@@ -576,6 +576,7 @@ def run_checkM(genome_folder_whole, checkm_outf_whole, **kwargs):
# Load table
try:
chdb = pd.read_table(desired_file,sep='\t')
+ chdb['Bin Id'] = chdb['Bin Id'].astype(str)
except:
logging.error("!!! checkM failed !!!\nSee https://drep.readthedocs.io/en/latest/advanced_use.html#troubleshooting-checkm for help troubleshooting")
sys.exit()
| Error "New checkM db needs to be made" if the genome filenames can be interpreted as integer
Hi,
I found this bug while running the tool on Galaxy; it was a bit hard to track down the origin of the error.
I opened a PR with a quick fix, but I did not test it.
Bérénice
----
**Bug**
dRep fails with the error `New checkM db needs to be made` if the genome filenames can be interpreted as integers (e.g., `001`).
**Version**: 2.5.4 and 3.2.2
**To Reproduce**
Steps to reproduce the behavior:
1. Create input genomes named `001`, `002`, etc
2. Run `dRep dereplicate outdir --checkM_method taxonomy_wf -g '001' '002' '003' '004' '005' '006' '007' '008' '009' '010' --debug`
3. See error in `outdir/log/logger.log`
```
02-04 13:54 DEBUG Running CheckM with command: ['/home/bebatut/miniconda3/envs/drep/bin/checkm', 'taxonomy_wf', 'domain', 'Bacteria', '/home/beb
atut/tmp/valerie-drep/data3/outdir/data/prodigal/', '/home/bebatut/tmp/valerie-drep/data3/outdir/data/checkM/checkM_outdir/', '-f', '/home/bebatut/
tmp/valerie-drep/data3/outdir/data/checkM/checkM_outdir//results.tsv', '--tab_table', '-t', '6', '-g', '-x', 'faa']
02-04 13:57 DEBUG Running CheckM with command: ['/home/bebatut/miniconda3/envs/drep/bin/checkm', 'qa', '/home/bebatut/tmp/valerie-drep/data3/out
dir/data/checkM/checkM_outdir/Bacteria.ms', '/home/bebatut/tmp/valerie-drep/data3/outdir/data/checkM/checkM_outdir/', '-f', '/home/bebatut/tmp/vale
rie-drep/data3/outdir/data/checkM/checkM_outdir/Chdb.tsv', '-t', '6', '--tab_table', '-o', '2']
02-04 13:57 ERROR 001 is not in checkM db
02-04 13:57 ERROR 002 is not in checkM db
02-04 13:57 ERROR 003 is not in checkM db
02-04 13:57 ERROR 004 is not in checkM db
02-04 13:57 ERROR 005 is not in checkM db
02-04 13:57 ERROR 006 is not in checkM db
02-04 13:57 ERROR 007 is not in checkM db
02-04 13:57 ERROR 008 is not in checkM db
02-04 13:57 ERROR 009 is not in checkM db
02-04 13:57 ERROR 010 is not in checkM db
02-04 13:57 ERROR New checkM db needs to be made
```
**Bug tracking**
- Error raised in function [`validate_chdb`](https://github.com/MrOlm/drep/blob/fc0896affa666205605d790c30d2623291152c68/drep/d_filter.py#L339)
- Printing `Chdb['Bin Id']` shows a list of integers rather than a list of strings as in `bdb['genome']`
```
Will filter the genome list
Calculating genome info of genomes
100.00% of genomes passed length filtering
Running prodigal
Running checkM
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
001 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
002 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
003 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
004 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
005 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
006 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
007 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
008 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
009 is not in checkM db
-- [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ---
010 is not in checkM db
New checkM db needs to be made
```
**Proposed solution**: Transform the column `Chdb['Bin Id']` to string when [reading the dataframe](https://github.com/MrOlm/drep/blob/fc0896affa666205605d790c30d2623291152c68/drep/d_filter.py#L578)
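A minimal illustration of that cast (the table below is fabricated): numeric-looking bin IDs come back from `pd.read_table` as integers, so comparing them against the string genome names fails until they are cast back:
```python
import pandas as pd
from io import StringIO

tsv = "Bin Id\tCompleteness\n7\t98.5\n12\t91.2\n"   # fabricated checkM results
chdb = pd.read_table(StringIO(tsv), sep="\t")
print(chdb["Bin Id"].tolist())                      # [7, 12] -- integers

chdb["Bin Id"] = chdb["Bin Id"].astype(str)         # proposed fix
print(chdb["Bin Id"].tolist())                      # ['7', '12'] -- comparable to genome names
```
Note that leading zeros (as in `001`) are already lost once the column is parsed as integers, so passing `dtype={'Bin Id': str}` to `read_table` may be worth considering as a stricter variant.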
| 2022-02-04T15:38:49 | 0.0 | [] | [] |
|||
PolicyEngine/policyengine-uk | PolicyEngine__policyengine-uk-969 | 03b1c8baa69912d2f497d9f8410a95e34f325954 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0e219086f..3d44da30e 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [2.7.0] - 2024-10-21 10:09:01
+
+### Added
+
+- Automatic allocation of post-employee-incidence employee to households (consumers/capital).
+
## [2.6.0] - 2024-10-19 19:58:09
### Fixed
@@ -1518,6 +1524,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
+[2.7.0]: https://github.com/PolicyEngine/openfisca-uk/compare/2.6.0...2.7.0
[2.6.0]: https://github.com/PolicyEngine/openfisca-uk/compare/2.5.0...2.6.0
[2.5.0]: https://github.com/PolicyEngine/openfisca-uk/compare/2.4.0...2.5.0
[2.4.0]: https://github.com/PolicyEngine/openfisca-uk/compare/2.3.0...2.4.0
diff --git a/changelog.yaml b/changelog.yaml
index 795f08abc..be3fad4ed 100644
--- a/changelog.yaml
+++ b/changelog.yaml
@@ -1274,3 +1274,8 @@
fixed:
- Bug in budget change reforms.
date: 2024-10-19 19:58:09
+- bump: minor
+ changes:
+ added:
+ - Automatic allocation of post-employee-incidence employee to households (consumers/capital).
+ date: 2024-10-21 10:09:01
diff --git a/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/capital_incidence.yaml b/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/capital_incidence.yaml
new file mode 100644
index 000000000..edb1a2701
--- /dev/null
+++ b/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/capital_incidence.yaml
@@ -0,0 +1,6 @@
+description: Fraction of (remaining after employee incidence) employer NI that is borne by capital.
+values:
+ 2010-01-01: 0.5
+metadata:
+ label: Employer NI capital incidence
+ unit: /1
diff --git a/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/consumer_incidence.yaml b/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/consumer_incidence.yaml
new file mode 100644
index 000000000..765298c71
--- /dev/null
+++ b/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/consumer_incidence.yaml
@@ -0,0 +1,6 @@
+description: Fraction of (remaining after employee incidence) employer NI that is borne by consumers.
+values:
+ 2010-01-01: 0.5
+metadata:
+ label: Employer NI consumer incidence
+ unit: /1
diff --git a/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/employee_incidence.yaml b/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/employee_incidence.yaml
index e31153a34..b0aaf7efe 100644
--- a/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/employee_incidence.yaml
+++ b/policyengine_uk/parameters/gov/contrib/policyengine/employer_ni/employee_incidence.yaml
@@ -1,4 +1,4 @@
-description: Fraction of employer NI that is borne by employees. The remaining is passed onto prices.
+description: Fraction of employer NI that is borne by employees.
values:
2010-01-01: 1
metadata:
diff --git a/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py b/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py
index 53d57560c..784842b83 100644
--- a/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py
+++ b/policyengine_uk/variables/contrib/policyengine/employer_ni/employer_ni_fixed_employer_cost_change.py
@@ -9,8 +9,200 @@ class employer_cost(Variable):
unit = GBP
def formula(person, period, parameters):
- return person("employment_income", period) + person(
- "ni_employer", period
+ benefits = add(
+ person,
+ period,
+ [
+ "household_statutory_sick_pay",
+ "household_statutory_maternity_pay",
+ "household_statutory_paternity_pay",
+ ],
+ )
+ return (
+ person("employment_income", period)
+ + person("ni_employer", period)
+ + person("employer_pension_contributions", period)
+ + benefits
+ )
+
+
+class baseline_employer_cost(Variable):
+ label = "baseline employer cost"
+ entity = Person
+ definition_period = YEAR
+ value_type = float
+ unit = GBP
+
+ def formula(person, period, parameters):
+ prior_employment_income = person(
+ "employment_income_before_lsr", period
+ )
+ employment_income_behavioral_response = person(
+ "employment_income_behavioral_response", period
+ )
+ benefits = add(
+ person,
+ period,
+ [
+ "household_statutory_sick_pay",
+ "household_statutory_maternity_pay",
+ "household_statutory_paternity_pay",
+ ],
+ )
+ employer_pension_contributions = person(
+ "employer_pension_contributions", period
+ )
+ ni_class_1_income = (
+ prior_employment_income
+ + employment_income_behavioral_response
+ + benefits
+ + employer_pension_contributions
+ )
+
+ # Calculate baseline employer cost
+ baseline_parameters = parameters(period).baseline
+ baseline_class_1 = (
+ baseline_parameters.gov.hmrc.national_insurance.class_1
+ )
+ r_b = baseline_class_1.rates.employer
+ t_b = baseline_class_1.thresholds.secondary_threshold * WEEKS_IN_YEAR
+ p_b = (
+ baseline_parameters.gov.contrib.policyengine.employer_ni.exempt_employer_pension_contributions
+ )
+ pen_con_subtracted_b = employer_pension_contributions
+ if p_b:
+ pen_con_subtracted_b = employer_pension_contributions
+ else:
+ pen_con_subtracted_b = 0
+
+ baseline_employer_ni = r_b * max_(
+ 0, ni_class_1_income - pen_con_subtracted_b - t_b
+ )
+ return ni_class_1_income + baseline_employer_ni
+
+
+class adjusted_employer_cost(Variable):
+ label = "employer cost"
+ entity = Person
+ definition_period = YEAR
+ value_type = float
+ unit = GBP
+
+ def formula(person, period, parameters):
+ employment_income = person("employment_income", period)
+ benefits = add(
+ person,
+ period,
+ [
+ "household_statutory_sick_pay",
+ "household_statutory_maternity_pay",
+ "household_statutory_paternity_pay",
+ ],
+ )
+ employer_pension_contributions = person(
+ "employer_pension_contributions", period
+ )
+ ni_class_1_income = (
+ employment_income + benefits + employer_pension_contributions
+ )
+
+ # Calculate employer cost
+ parameters = parameters(period)
+ class_1 = parameters.gov.hmrc.national_insurance.class_1
+ r_r = class_1.rates.employer
+ t_r = class_1.thresholds.secondary_threshold * WEEKS_IN_YEAR
+ p_r = (
+ parameters.gov.contrib.policyengine.employer_ni.exempt_employer_pension_contributions
+ )
+ pen_con_subtracted_r = employer_pension_contributions
+ if p_r:
+ pen_con_subtracted_r = employer_pension_contributions
+ else:
+ pen_con_subtracted_r = 0
+
+ employer_ni = r_r * max_(
+ 0, ni_class_1_income - pen_con_subtracted_r - t_r
+ )
+ return ni_class_1_income + employer_ni
+
+
+class employer_ni_response_consumer_incidence(Variable):
+ label = "price response to employer NI reform"
+ entity = Person
+ definition_period = YEAR
+ value_type = float
+ unit = GBP
+
+ def formula(person, period, parameters):
+ consumer_incidence = parameters(
+ period
+ ).gov.contrib.policyengine.employer_ni.consumer_incidence
+ if consumer_incidence == 0:
+ return 0
+
+ if not hasattr(person.simulation, "dataset"):
+ # In single-household simulations, we can't automatically put revenue into price increases because we don't know the revenue.
+ return 0
+
+ person_weight = person("person_weight", period)
+ baseline_employer_cost = person("baseline_employer_cost", period)
+ employer_cost = person("adjusted_employer_cost", period)
+ change_in_employer_cost = employer_cost - baseline_employer_cost
+ amount_paid_by_employers = (
+ person_weight * change_in_employer_cost
+ ).sum()
+
+ consumption = (
+ person.household("consumption", period)
+ / person.household.nb_persons()
+ )
+ total_consumption = (consumption * person_weight).sum()
+ share_of_total_consumption = consumption / total_consumption
+
+ return (
+ amount_paid_by_employers
+ * share_of_total_consumption
+ * consumer_incidence
+ )
+
+
+class employer_ni_response_capital_incidence(Variable):
+ label = "capital response to employer NI reform"
+ entity = Person
+ definition_period = YEAR
+ value_type = float
+ unit = GBP
+
+ def formula(person, period, parameters):
+ capital_incidence = parameters(
+ period
+ ).gov.contrib.policyengine.employer_ni.capital_incidence
+ if capital_incidence == 0:
+ return 0
+
+ if not hasattr(person.simulation, "dataset"):
+ # In single-household simulations, we can't automatically put revenue into price increases because we don't know the revenue.
+ return 0
+
+ person_weight = person("person_weight", period)
+ baseline_employer_cost = person("baseline_employer_cost", period)
+ employer_cost = person("adjusted_employer_cost", period)
+ change_in_employer_cost = employer_cost - baseline_employer_cost
+ amount_paid_by_employers = (
+ person_weight * change_in_employer_cost
+ ).sum()
+
+ wealth = (
+ person.household("corporate_wealth", period)
+ / person.household.nb_persons()
+ )
+ total_wealth = (wealth * person_weight).sum()
+ share_of_total_wealth = wealth / total_wealth
+
+ return (
+ amount_paid_by_employers
+ * share_of_total_wealth
+ * capital_incidence
)
diff --git a/policyengine_uk/variables/gov/gov_balance.py b/policyengine_uk/variables/gov/gov_balance.py
new file mode 100644
index 000000000..03cc04494
--- /dev/null
+++ b/policyengine_uk/variables/gov/gov_balance.py
@@ -0,0 +1,12 @@
+from policyengine_uk.model_api import *
+
+
+class gov_balance(Variable):
+ label = "government balance"
+ documentation = "Government deficit impact in respect of this household."
+ entity = Household
+ definition_period = YEAR
+ value_type = float
+ unit = GBP
+ adds = ["gov_tax"]
+ subtracts = ["gov_spending"]
diff --git a/policyengine_uk/variables/gov/gov_spending.py b/policyengine_uk/variables/gov/gov_spending.py
new file mode 100644
index 000000000..27e86fb95
--- /dev/null
+++ b/policyengine_uk/variables/gov/gov_spending.py
@@ -0,0 +1,46 @@
+from policyengine_uk.model_api import *
+
+
+class gov_spending(Variable):
+ label = "government spending"
+ documentation = "Government spending impact in respect of this household."
+ entity = Household
+ definition_period = YEAR
+ value_type = float
+ unit = GBP
+ adds = [
+ "child_benefit",
+ "esa_income",
+ "esa_contrib",
+ "housing_benefit",
+ "income_support",
+ "jsa_income",
+ "jsa_contrib",
+ "pension_credit",
+ "universal_credit",
+ "working_tax_credit",
+ "child_tax_credit",
+ "attendance_allowance",
+ "afcs",
+ "bsp",
+ "carers_allowance",
+ "dla",
+ "iidb",
+ "incapacity_benefit",
+ "jsa_contrib",
+ "pip",
+ "sda",
+ "state_pension",
+ "maternity_allowance",
+ "statutory_sick_pay",
+ "statutory_maternity_pay",
+ "ssmg",
+ "basic_income",
+ "epg_subsidy",
+ "cost_of_living_support_payment",
+ "energy_bills_rebate",
+ "winter_fuel_allowance",
+ "nhs_budget_change",
+ "education_budget_change",
+ "other_public_spending_budget_change",
+ ]
diff --git a/policyengine_uk/variables/gov/gov_tax.py b/policyengine_uk/variables/gov/gov_tax.py
new file mode 100644
index 000000000..20908642f
--- /dev/null
+++ b/policyengine_uk/variables/gov/gov_tax.py
@@ -0,0 +1,36 @@
+from policyengine_uk.model_api import *
+
+
+class gov_tax(Variable):
+ label = "government tax revenue"
+ documentation = (
+ "Government tax revenue impact in respect of this household."
+ )
+ entity = Household
+ definition_period = YEAR
+ value_type = float
+ unit = GBP
+ adds = [
+ "expected_sdlt",
+ "expected_ltt",
+ "expected_lbtt",
+ "corporate_sdlt",
+ "business_rates",
+ "council_tax",
+ "domestic_rates",
+ "fuel_duty",
+ "tv_licence",
+ "wealth_tax",
+ "non_primary_residence_wealth_tax",
+ "income_tax",
+ "national_insurance",
+ "LVT",
+ "carbon_tax",
+ "vat_change",
+ "capital_gains_tax",
+ "private_school_vat",
+ "corporate_incident_tax_revenue_change",
+ "consumer_incident_tax_revenue_change",
+ "high_income_incident_tax_change",
+ "ni_employer",
+ ]
diff --git a/policyengine_uk/variables/gov/hmrc/tax.py b/policyengine_uk/variables/gov/hmrc/tax.py
index c239dd0ce..3df50c968 100644
--- a/policyengine_uk/variables/gov/hmrc/tax.py
+++ b/policyengine_uk/variables/gov/hmrc/tax.py
@@ -40,6 +40,8 @@ class household_tax(Variable):
"corporate_incident_tax_revenue_change",
"consumer_incident_tax_revenue_change",
"high_income_incident_tax_change",
+ "employer_ni_response_capital_incidence",
+ "employer_ni_response_consumer_incidence",
]
def formula(household, period, parameters):
diff --git a/policyengine_uk/variables/household/demographic/household.py b/policyengine_uk/variables/household/demographic/household.py
index c3cb38167..ffbfe4b7a 100644
--- a/policyengine_uk/variables/household/demographic/household.py
+++ b/policyengine_uk/variables/household/demographic/household.py
@@ -37,6 +37,7 @@ class household_weight(Variable):
label = "Weight factor for the household"
definition_period = YEAR
uprating = "gov.ons.population"
+ default_value = 1
class Country(Enum):
diff --git a/policyengine_uk/variables/household/income/income.py b/policyengine_uk/variables/household/income/income.py
index 53c1b420d..d7ff291a5 100644
--- a/policyengine_uk/variables/household/income/income.py
+++ b/policyengine_uk/variables/household/income/income.py
@@ -165,7 +165,6 @@ class hbai_household_net_income(Variable):
"statutory_maternity_pay",
"ssmg",
"basic_income",
- "epg_subsidy",
"cost_of_living_support_payment",
"winter_fuel_allowance",
]
diff --git a/setup.py b/setup.py
index d7fa35af4..63ade09df 100644
--- a/setup.py
+++ b/setup.py
@@ -4,7 +4,7 @@
setup(
name="PolicyEngine-UK",
- version="2.6.0",
+ version="2.7.0",
author="PolicyEngine",
author_email="[email protected]",
classifiers=[
| Automatically apportion remaining employer NI response to consumption
Rather than having the user do this manually.
| 2024-10-21T09:10:23 | 0.0 | [] | [] |
|||
CATIA-Systems/FMPy | CATIA-Systems__FMPy-658 | 04aa5fd1aa86af25caa37b7f9a710d240890ca4e | diff --git a/fmpy/gui/MainWindow.py b/fmpy/gui/MainWindow.py
index 572bb9ba..d0d93481 100644
--- a/fmpy/gui/MainWindow.py
+++ b/fmpy/gui/MainWindow.py
@@ -1132,7 +1132,7 @@ def createGraphics(self):
def variableColor(variable):
if variable.type.startswith(('Float', 'Real')):
return QColor.fromRgb(26, 77, 179)
- elif variable.type.startswith(('Enumeration', 'Int', 'UInt')):
+ elif variable.type.startswith(('Enumeration', 'Int', 'UInt', 'Clock')):
return QColor.fromRgb(179, 77, 26)
elif variable.type == 'Boolean':
return QColor.fromRgb(255, 0, 255)
diff --git a/fmpy/gui/model.py b/fmpy/gui/model.py
index a6a5e499..7f03234a 100644
--- a/fmpy/gui/model.py
+++ b/fmpy/gui/model.py
@@ -68,7 +68,7 @@ def columnData(self, v, column, role):
if type.startswith(('float', 'real')):
type = 'float'
- if type.startswith(('enumeration', 'int', 'uint')):
+ if type.startswith(('enumeration', 'int', 'uint', 'clock')):
type = 'integer'
return QIcon(f':/icons/light/{type}_{causality}.svg')
@@ -195,8 +195,6 @@ def index(self, row, column, parent):
variable = self.modelDescription.modelVariables[row]
return self.createIndex(row, column, variable)
- return QModelIndex()
-
def parent(self, index):
return QModelIndex()
| Display icons for Clock variables in GUI
| 2024-03-25T09:28:39 | 0.0 | [] | [] |
|||
Noble-Lab/casanovo | Noble-Lab__casanovo-63 | c0c9fdb6ca10a85a900a378adde68299ea5bcab9 | diff --git a/README.md b/README.md
index 96a30f2d..8ed3207f 100644
--- a/README.md
+++ b/README.md
@@ -68,6 +68,7 @@ See [`casanovo/config.yaml`](https://github.com/Noble-Lab/casanovo/blob/main/cas
casanovo --mode=denovo --model=path/to/pretrained.ckpt --peak_path=path/to/predict/spectra.mgf --config=path/to/config.yaml --output=path/to/output
```
+Casanovo can predict peptide sequences for MS/MS data in mzML, mzXML, and MGF files.
This will write peptide predictions for the given MS/MS spectra to the specified output file in a tab-separated format (extension: .csv).
- To evaluate _de novo_ sequencing performance based on known spectrum annotations:
@@ -76,7 +77,7 @@ This will write peptide predictions for the given MS/MS spectra to the specified
casanovo --mode=eval --model=path/to/pretrained.ckpt --peak_path=path/to/test/annotated_spectra.mgf --config=path/to/config.yaml
```
-Note that to evaluate the peptide predictions, ground truth peptide labels in an annotated MGF file need to be present.
+To evaluate the peptide predictions, ground truth peptide labels need to be provided as an annotated MGF file.
- To train a model from scratch:
@@ -84,6 +85,8 @@ Note that to evaluate the peptide predictions, ground truth peptide labels in an
casanovo --mode=train --peak_path=path/to/train/annotated_spectra.mgf --peak_path_val=path/to/validation/annotated_spectra.mgf --config=path/to/config.yaml
```
+Training and validation MS/MS data need to be provided as annotated MGF files.
+
If a training is continued for a previously trained model, specify the starting model weights using `--model`.
### Example job
diff --git a/casanovo/denovo/model_runner.py b/casanovo/denovo/model_runner.py
index 2e63b8b3..d62f89aa 100644
--- a/casanovo/denovo/model_runner.py
+++ b/casanovo/denovo/model_runner.py
@@ -5,7 +5,7 @@
import os
import tempfile
import uuid
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, Iterable, List, Optional
import pytorch_lightning as pl
from depthcharge.data import AnnotatedSpectrumIndex, SpectrumIndex
@@ -106,11 +106,24 @@ def _execute_existing(
out_filename=out_filename,
)
# Read the MS/MS spectra for which to predict peptide sequences.
- if len(peak_filenames := _get_peak_filenames(peak_path)) == 0:
+ if annotated:
+ peak_ext = (".mgf", ".h5", "hdf5")
+ else:
+ peak_ext = (".mgf", ".mzml", ".mzxml", ".h5", "hdf5")
+ if len(peak_filenames := _get_peak_filenames(peak_path, peak_ext)) == 0:
logger.error("Could not find peak files from %s", peak_path)
raise FileNotFoundError("Could not find peak files")
+ peak_is_index = any(
+ [os.path.splitext(fn)[1] in (".h5", ".hdf5") for fn in peak_filenames]
+ )
+ if peak_is_index and len(peak_filenames) > 1:
+ logger.error("Multiple HDF5 spectrum indexes specified")
+ raise ValueError("Multiple HDF5 spectrum indexes specified")
tmp_dir = tempfile.TemporaryDirectory()
- idx_filename = os.path.join(tmp_dir.name, f"{uuid.uuid4().hex}.hdf5")
+ if peak_is_index:
+ idx_filename, peak_filenames = peak_filenames[0], None
+ else:
+ idx_filename = os.path.join(tmp_dir.name, f"{uuid.uuid4().hex}.hdf5")
if annotated:
index = AnnotatedSpectrumIndex(idx_filename, peak_filenames)
else:
@@ -170,22 +183,41 @@ def train(
The configuration options.
"""
# Read the MS/MS spectra to use for training and validation.
- if len(train_filenames := _get_peak_filenames(peak_path)) == 0:
+ ext = (".mgf", ".h5", ".hdf5")
+ if len(train_filenames := _get_peak_filenames(peak_path, ext)) == 0:
logger.error("Could not find training peak files from %s", peak_path)
raise FileNotFoundError("Could not find training peak files")
+ train_is_index = any(
+ [os.path.splitext(fn)[1] in (".h5", ".hdf5") for fn in train_filenames]
+ )
+ if train_is_index and len(train_filenames) > 1:
+ logger.error("Multiple training HDF5 spectrum indexes specified")
+ raise ValueError("Multiple training HDF5 spectrum indexes specified")
if (
peak_path_val is None
- or len(val_filenames := _get_peak_filenames(peak_path_val)) == 0
+ or len(val_filenames := _get_peak_filenames(peak_path_val, ext)) == 0
):
logger.error(
"Could not find validation peak files from %s", peak_path_val
)
raise FileNotFoundError("Could not find validation peak files")
+ val_is_index = any(
+ [os.path.splitext(fn)[1] in (".h5", ".hdf5") for fn in val_filenames]
+ )
+ if val_is_index and len(val_filenames) > 1:
+ logger.error("Multiple validation HDF5 spectrum indexes specified")
+ raise ValueError("Multiple validation HDF5 spectrum indexes specified")
tmp_dir = tempfile.TemporaryDirectory()
- train_idx_filename = os.path.join(tmp_dir.name, f"{uuid.uuid4().hex}.hdf5")
- val_idx_filename = os.path.join(tmp_dir.name, f"{uuid.uuid4().hex}.hdf5")
- train_index = AnnotatedSpectrumIndex(train_idx_filename, train_filenames)
- val_index = AnnotatedSpectrumIndex(val_idx_filename, val_filenames)
+ if train_is_index:
+ train_idx_fn, train_filenames = train_filenames[0], None
+ else:
+ train_idx_fn = os.path.join(tmp_dir.name, f"{uuid.uuid4().hex}.hdf5")
+ train_index = AnnotatedSpectrumIndex(train_idx_fn, train_filenames)
+ if val_is_index:
+ val_idx_fn, val_filenames = val_filenames[0], None
+ else:
+ val_idx_fn = os.path.join(tmp_dir.name, f"{uuid.uuid4().hex}.hdf5")
+ val_index = AnnotatedSpectrumIndex(val_idx_fn, val_filenames)
# Initialize the data loaders.
dataloader_params = dict(
n_peaks=config["n_peaks"],
@@ -264,7 +296,9 @@ def train(
tmp_dir.cleanup()
-def _get_peak_filenames(path: str) -> List[str]:
+def _get_peak_filenames(
+ path: str, supported_ext: Iterable[str] = (".mgf",)
+) -> List[str]:
"""
Get all matching peak file names from the path pattern.
@@ -275,6 +309,8 @@ def _get_peak_filenames(path: str) -> List[str]:
----------
path : str
The path pattern.
+ supported_ext : Iterable[str]
+ Extensions of supported peak file formats. Default: MGF.
Returns
-------
@@ -286,5 +322,5 @@ def _get_peak_filenames(path: str) -> List[str]:
return [
fn
for fn in glob.glob(path, recursive=True)
- if os.path.splitext(fn.lower())[1] == ".mgf"
+ if os.path.splitext(fn.lower())[1] in supported_ext
]
diff --git a/setup.cfg b/setup.cfg
index e2cb8284..18e5de6c 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -22,7 +22,7 @@ packages = find:
python_requires = >=3.8
install_requires =
click
- depthcharge-ms @ git+https://github.com/wfondrie/depthcharge@75fad7df9869572290ec79ec6a67d5d329b7a6c7
+ depthcharge-ms @ git+https://github.com/wfondrie/depthcharge@3dda29053ab479fa31fa1309f65f285d04481c1e
numpy
pandas
pytorch-lightning
| Check if input files in the mzML and mzXML formats work
Implement support if not
| 2022-08-06T20:31:14 | 0.0 | [] | [] |
|||
ufal/factgenie | ufal__factgenie-129 | 905328fbf9d1ba8f558b24d0bcb75b06f7cfbc47 | diff --git a/factgenie/static/css/custom.css b/factgenie/static/css/custom.css
index aa407817..2b4b7fe3 100755
--- a/factgenie/static/css/custom.css
+++ b/factgenie/static/css/custom.css
@@ -900,7 +900,6 @@ a:hover {
}
.slider-crowdsourcing {
- position: absolute;
width: 100%;
}
diff --git a/factgenie/static/js/factgenie.js b/factgenie/static/js/factgenie.js
index 1438df35..c32ae389 100644
--- a/factgenie/static/js/factgenie.js
+++ b/factgenie/static/js/factgenie.js
@@ -1,6 +1,6 @@
// var url_prefix = window.location.href.split(/[?#]/)[0];
var url_prefix = window.url_prefix;
-var example_idx = 0;
+var current_example_idx = 0;
var total_examples = 1;
var datasets = window.datasets;
var metadata = window.metadata;
@@ -21,7 +21,14 @@ if (mode == "annotate") {
if (mode == "annotate" || mode == "browse") {
// the draggable divider between the main area and the right panel
var splitInstance = Split(['#centerpanel', '#rightpanel'], {
- sizes: sizes, gutterSize: 1
+ sizes: sizes, gutterSize: 1,
+ onDrag: function () {
+ // trigger a resize update on slider inputs when the handle is dragged
+ // not a perfect solution, but it works
+ $('.slider-crowdsourcing').each(function () {
+ $(this).css('width', "80%");
+ });
+ }
});
}
@@ -34,11 +41,11 @@ function mod(n, m) {
}
function nextBtn() {
- goToPage(example_idx + 1);
+ goToPage(current_example_idx + 1);
}
function prevBtn() {
- goToPage(example_idx - 1);
+ goToPage(current_example_idx - 1);
}
function startBtn() {
@@ -53,30 +60,26 @@ function randomBtn() {
goToPage(randInt(total_examples - 1));
}
-function goToAnnotation(page) {
- $(".page-link").removeClass("bg-active");
- $(`#page-link-${page}`).addClass("bg-active");
- saveCurrentAnnotations();
- showAnnotation();
-}
-
function goToView(page) {
const dataset = $('#dataset-select').val();
const split = $('#split-select').val();
- fetchExample(dataset, split, example_idx);
+ fetchExample(dataset, split, page);
- $("#page-input").val(example_idx);
+ $("#page-input").val(page);
}
function goToPage(page) {
- example_idx = page;
- example_idx = mod(example_idx, total_examples);
+ const example_idx = current_example_idx;
+
+ current_example_idx = page;
+ current_example_idx = mod(current_example_idx, total_examples);
if (mode == "annotate") {
- goToAnnotation(example_idx);
+ saveCurrentAnnotations(example_idx);
+ goToAnnotation(current_example_idx);
} else {
- goToView(example_idx);
+ goToView(current_example_idx);
}
}
@@ -125,6 +128,7 @@ function loadAnnotations() {
}
Promise.all(promises)
.then(() => {
+
YPet.addInitializer(function (options) {
/* Configure the # and colors of Annotation types (minimum 1 required) */
YPet.AnnotationTypes = new AnnotationTypeList(annotation_span_categories);
@@ -158,15 +162,17 @@ function loadAnnotations() {
collection = [];
}
});
- goToAnnotation(example_idx);
}
+ goToAnnotation(0);
});
YPet.start();
-
})
- .catch(() => {
+ .catch((e) => {
// Handle errors if any request fails
console.error("One or more requests failed.");
+ // Log the error
+ console.error(e);
+
})
.finally(() => {
// This block will be executed regardless of success or failure
@@ -211,9 +217,8 @@ function submitAnnotations(campaign_id) {
}
function collectFlags() {
- // collect values of all checkboxes within divs of class `flag-checkbox`, save values sequentially (for each id)
const flags = [];
- $(".flag-checkbox").each(function () {
+ $(".crowdsourcing-flag").each(function () {
const label = $(this).find("label").text().trim();
const value = $(this).find("input[type='checkbox']").prop("checked");
flags.push({
@@ -227,10 +232,6 @@ function collectFlags() {
function collectOptions() {
const options = [];
- // for all ".crowdsourcing-option" div's, see whether is has the "option-select" or "option-slider" class
- // and collect the value of the select or the slider, respectively
- // save it along with the value of the label
-
$(".crowdsourcing-option").each(function (x) {
if ($(this).hasClass("option-select")) {
const type = "select";
@@ -261,27 +262,35 @@ function collectTextFields() {
}
-function saveCurrentAnnotations() {
+function saveCurrentAnnotations(example_idx) {
var collection = YPet[`p${example_idx}`].currentView.collection.parentDocument.get('annotations').toJSON();
annotation_set[example_idx]["annotations"] = collection;
annotation_set[example_idx]["flags"] = collectFlags();
annotation_set[example_idx]["options"] = collectOptions();
annotation_set[example_idx]["textFields"] = collectTextFields();
-
- console.log(example_idx);
- console.log(annotation_set[example_idx]);
}
+function clearExampleLevelFields() {
+ // uncheck all checkboxes
+ $(".crowdsourcing-flag input[type='checkbox']").prop("checked", false);
-function markAnnotationAsComplete() {
- saveCurrentAnnotations();
+ // reset all options to the first value
+ $(".crowdsourcing-option select").val(0);
+ $(".crowdsourcing-option input[type='range']").val(0);
+
+ // clear the values in text inputs
+ $(".crowdsourcing-text input[type='text']").val("");
+}
- $('#page-link-' + example_idx).removeClass("bg-incomplete");
- $('#page-link-' + example_idx).addClass("bg-complete");
+function markAnnotationAsComplete() {
+ $('#page-link-' + current_example_idx).removeClass("bg-incomplete");
+ $('#page-link-' + current_example_idx).addClass("bg-complete");
// if all the examples are annotated, post the annotations
if ($(".bg-incomplete").length == 0) {
+ saveCurrentAnnotations(current_example_idx);
+
// show the `submit` button
$("#submit-annotations-btn").show();
@@ -290,16 +299,8 @@ function markAnnotationAsComplete() {
scrollTop: $("#submit-annotations-btn").offset().top
}, 500);
- } else if (example_idx < total_examples - 1) {
- // uncheck all checkboxes
- $(".flag-checkbox input[type='checkbox']").prop("checked", false);
-
- // reset all options to the first value
- $(".crowdsourcing-option select").val(0);
-
- // clear the values in text inputs
- $(".crowdsourcing-text input[type='text']").val("");
-
+ } else if (current_example_idx < total_examples - 1) {
+ // annotations will be saved automatically
nextBtn();
}
}
@@ -315,30 +316,31 @@ function showRawData(data) {
$("#rawarea").html(`<pre>${rawDataStr}</pre>`);
}
-function showAnnotation() {
+function goToAnnotation(example_idx) {
+ $(".page-link").removeClass("bg-active");
+ $(`#page-link-${example_idx}`).addClass("bg-active");
+
$(".annotate-box").hide();
$(`#out-text-${example_idx}`).show();
const data = examples_cached[example_idx];
+ $("#examplearea").html(data.html);
const flags = annotation_set[example_idx].flags;
const options = annotation_set[example_idx].options;
const textFields = annotation_set[example_idx].textFields;
+ clearExampleLevelFields();
+
if (flags !== undefined) {
- // flags are an array
- $(".flag-checkbox").each(function (i) {
+ $(".crowdsourcing-flag").each(function (i) {
$(this).find("input[type='checkbox']").prop("checked", flags[i]["value"]);
});
- } else {
- // uncheck all checkboxes
- $(".flag-checkbox input[type='checkbox']").prop("checked", false);
}
if (options !== undefined) {
- // options is an array of objects
for (const [i, option] of Object.entries(options)) {
- const div = $(`#options div:eq(${i})`);
+ const div = $(`.crowdsourcing-option:eq(${i})`);
// we can have either a select or a slider (we can tell by `type`)
// we need to set the option defined by `index`
if (option.type == "select") {
@@ -346,28 +348,16 @@ function showAnnotation() {
}
if (option.type == "slider") {
div.find("input[type='range']").val(option.index);
- div.find("output").val(option.value);
}
}
- } else {
- // clear all options
- $("#options").empty();
}
if (textFields !== undefined) {
- // textFields is an array of objects
for (const [i, textField] of Object.entries(textFields)) {
- const div = $(`#textFields div:eq(${i})`);
- div.find("input[type='text']").val(textField.value);
+ $(`.crowdsourcing-text input:eq(${i})`).val(textField.value);
}
- } else {
- // clear all textFields
- $("#textFields").empty();
}
-
- $("#examplearea").html(data.html);
- // $(".text-type").html(`${type}`);
}
function permalinkBtn() {
@@ -376,7 +366,7 @@ function permalinkBtn() {
const url_prefix = window.location.href.split(/[?#]/)[0];
- let permalink = `${url_prefix}?dataset=${dataset}&split=${split}&example_idx=${example_idx}`;
+ let permalink = `${url_prefix}?dataset=${dataset}&split=${split}&example_idx=${current_example_idx}`;
popover = bootstrap.Popover.getOrCreateInstance("#permalink-btn", options = { html: true });
popover.setContent({
@@ -408,18 +398,18 @@ function changeDataset() {
}
const split = $('#split-select').val();
- example_idx = 0;
- fetchExample(dataset, split, example_idx);
- $("#page-input").val(example_idx);
+ current_example_idx = 0;
+ fetchExample(dataset, split, current_example_idx);
+ $("#page-input").val(current_example_idx);
}
function changeSplit() {
$("#dataset-spinner").show();
const dataset = $('#dataset-select').val();
const split = $('#split-select').val();
- example_idx = 0;
- fetchExample(dataset, split, example_idx);
- $("#page-input").val(example_idx);
+ current_example_idx = 0;
+ fetchExample(dataset, split, current_example_idx);
+ $("#page-input").val(current_example_idx);
}
function changeExample(dataset, split, example_idx) {
@@ -431,7 +421,7 @@ function changeExample(dataset, split, example_idx) {
$('#split-select').append(`<option value="${split}">${split}</option>`);
}
$('#split-select').val(split);
- example_idx = example_idx;
+ current_example_idx = example_idx;
fetchExample(dataset, split, example_idx);
$("#page-input").val(example_idx);
}
@@ -1618,7 +1608,7 @@ $(document).ready(function () {
$("#dataset-select").val(
$("#dataset-select option:first").val()
).trigger("change");
- $("#page-input").val(example_idx);
+ $("#page-input").val(current_example_idx);
}
}
diff --git a/factgenie/static/js/manage.js b/factgenie/static/js/manage.js
index e7d974bb..edc189d9 100644
--- a/factgenie/static/js/manage.js
+++ b/factgenie/static/js/manage.js
@@ -294,7 +294,6 @@ $(document).ready(function () {
});
- // $("#page-input").val(example_idx);
enableTooltips();
});
diff --git a/factgenie/templates/crowdsourcing_new.html b/factgenie/templates/crowdsourcing_new.html
index 4877ba4e..8e81ab09 100755
--- a/factgenie/templates/crowdsourcing_new.html
+++ b/factgenie/templates/crowdsourcing_new.html
@@ -197,7 +197,7 @@ <h2 class="accordion-header" id="headingAdvanced">
<span style="margin-left: 5px; margin-right: 10px;">List of options</span>
<div class="mb-2">
<small class="form-text text-muted">Options that the annotators can choose from for each
- example.</small>
+ example. <b>Note that sliders are currently not working in the Safari browser.</b></small>
</div>
</div>
<div id="options"></div>
diff --git a/factgenie/utils.py b/factgenie/utils.py
index b41c8627..92f2c472 100644
--- a/factgenie/utils.py
+++ b/factgenie/utils.py
@@ -1068,14 +1068,14 @@ def parse_crowdsourcing_config(config):
return config
-def generate_checkboxes(flags):
+def generate_flags(flags):
if not flags:
return ""
flags_segment = "<div class='mb-4'><p><b>Please check if you agree with any of the following statements:</b></p>"
for i, flag in enumerate(flags):
flags_segment += f"""
- <div class="form-check flag-checkbox">
+ <div class="form-check crowdsourcing-flag">
<input class="form-check-input" type="checkbox" value="{i}" id="checkbox-{i}">
<label class="form-check-label" for="checkbox-{i}">
{flag}
@@ -1168,7 +1168,7 @@ def create_crowdsourcing_page(campaign_id, config):
instructions=instructions_html,
annotation_span_categories=config.get("annotation_span_categories", []),
has_display_overlay='style="display: none"' if not has_display_overlay else "",
- flags=generate_checkboxes(config.get("flags", [])),
+ flags=generate_flags(config.get("flags", [])),
options=generate_options(config.get("options", [])),
text_fields=generate_text_fields(config.get("text_fields", [])),
)
| Sliders do not correctly show the selected value
Bug report:
> When you move between the examples, the slider positions are not restored to the previously selected values. However, it seems that they are saved correctly in the final JSON file. In short, it's better to annotate examples from the first one to the last one, without returning to previously annotated examples.
| 2024-10-16T15:22:29 | 0.0 | [] | [] |
|||
libffcv/ffcv | libffcv__ffcv-293 | 9be04091844828c312afd96e74b7e3f8b85b1023 | diff --git a/README.md b/README.md
index f21db238..7c07605a 100644
--- a/README.md
+++ b/README.md
@@ -92,7 +92,7 @@ writer = DatasetWriter(write_path, {
})
# Write dataset
-writer.from_indexed_dataset(ds)
+writer.from_indexed_dataset(my_dataset)
```
Then replace your old loader with the `ffcv` loader at train time (in PyTorch,
no other changes required!):
diff --git a/examples/cifar/train_cifar.py b/examples/cifar/train_cifar.py
index 3160e706..c46b1a3e 100644
--- a/examples/cifar/train_cifar.py
+++ b/examples/cifar/train_cifar.py
@@ -185,7 +185,7 @@ def evaluate(model, loaders, lr_tta=False):
with autocast():
out = model(ims)
if lr_tta:
- out += model(ch.fliplr(ims))
+ out += model(ims.flip(-1))
total_correct += out.argmax(1).eq(labs).sum().cpu().item()
total_num += ims.shape[0]
print(f'{name} accuracy: {total_correct / total_num * 100:.1f}%')
diff --git a/ffcv/loader/epoch_iterator.py b/ffcv/loader/epoch_iterator.py
index 02e04ef5..09160141 100644
--- a/ffcv/loader/epoch_iterator.py
+++ b/ffcv/loader/epoch_iterator.py
@@ -82,6 +82,7 @@ def run(self):
self.current_batch_slot = (
slot + 1) % (self.loader.batches_ahead + 2)
result = self.run_pipeline(b_ix, ixes, slot, events[slot])
+ # print("RES", b_ix, "ready")
to_output = (slot, result)
while True:
try:
@@ -93,13 +94,15 @@ def run(self):
if self.terminate_event.is_set():
return
if IS_CUDA:
+ # print("SUB", b_ix)
# We were able to submit this batch
# Therefore it means that the user must have entered the for loop for
# (batch_slot - batch_ahead + 1) % (batches ahead + 2)
# Therefore batch_slot - batch_ahead must have all it's work submitted
# We will record an event of all the work submitted on the main stream
# and make sure no one overwrite the data until they are done
- just_finished_slot = (slot - self.loader.batches_ahead) % (self.loader.batches_ahead + 2)
+ just_finished_slot = (slot - self.loader.batches_ahead - 1) % (self.loader.batches_ahead + 2)
+ # print("JFS", just_finished_slot)
event = ch.cuda.Event()
event.record(self.current_stream)
events[just_finished_slot] = event
diff --git a/ffcv/memory_allocator.py b/ffcv/memory_allocator.py
index c869f684..6faf55a6 100644
--- a/ffcv/memory_allocator.py
+++ b/ffcv/memory_allocator.py
@@ -37,7 +37,7 @@ def malloc(self, size):
# print(f"Allocating {size} bytes")
if size > self.page_size:
raise ValueError(f"Tried allocating {size} but" +
- " page size is {self.page_size}")
+ f" page size is {self.page_size}")
if size > self.space_left_in_page:
self.flush_page()
diff --git a/setup.py b/setup.py
index 82f3065e..b101db9b 100644
--- a/setup.py
+++ b/setup.py
@@ -71,7 +71,7 @@ def pkgconfig(package, kw):
output = subprocess.getoutput(
'pkg-config --cflags --libs {}'.format(package))
if 'not found' in output:
- raise Exception(f"Could not find required package: {package}.")
+ raise RuntimeError(f"Could not find required package: {package}.")
for token in output.strip().split():
kw.setdefault(flag_map.get(token[:2]), []).append(token[2:])
return kw
@@ -89,7 +89,11 @@ def pkgconfig(package, kw):
extension_kwargs = pkgconfig_windows('pthread', extension_kwargs)
else:
- extension_kwargs = pkgconfig('opencv4', extension_kwargs)
+ try:
+ extension_kwargs = pkgconfig('opencv4', extension_kwargs)
+ except RuntimeError:
+ pass # Fallback to opencv package
+ extension_kwargs = pkgconfig('opencv', extension_kwargs)
extension_kwargs = pkgconfig('libturbojpeg', extension_kwargs)
extension_kwargs['libraries'].append('pthread')
@@ -113,9 +117,10 @@ def pkgconfig(package, kw):
'terminaltables',
'pytorch_pfn_extras',
'fastargs',
- 'cv2',
+ 'opencv-python',
'assertpy',
'tqdm',
'psutil',
+ 'numba',
]
)
| [Bug] Synchronization issue on GPU
I'm using the v0.0.4 version from this branch: https://github.com/libffcv/ffcv/tree/v0.0.4
There's a (possibly major) bug where two models will not receive the same inputs from the FFCV dataloader, unless `torch.cuda.synchronize` is explicitly called. Below is a simple code snippet to reproduce this issue:
```python
import torch
from torchvision.models import resnet18
from tqdm import tqdm
from copy import deepcopy
dataloader = create_ffcv_dataloader() # Your own custom dataloader factory
model1 = resnet18(pretrained=False).cuda()
model2 = deepcopy(model1)
with torch.no_grad():
for it, (imgs, *_) in enumerate(tqdm(dataloader)):
model1(imgs)
model2(imgs)
# Uncommenting the following line will pass the assertion at the bottom, while leaving it commented will trigger assertion error
# torch.cuda.synchronize()
if it == 20:
break
assert model1.bn1.running_mean.allclose(model2.bn1.running_mean)
```
BatchNorm tracks running stats, which can be used to check whether two identical models received the same inputs on the forward pass. Without `torch.cuda.synchronize()`, the above code will trigger an assertion error, since the two models received different inputs at some point. With `torch.cuda.synchronize()`, no assertion error will be triggered.
Also, I have noticed that this behavior does not necessarily happen with larger models, where the forward pass takes a longer time.
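For context on the kind of fix applied in `epoch_iterator.py` above, here is a minimal, self-contained sketch (buffer and stream names are hypothetical, not ffcv internals) of the event-guarded buffer-reuse pattern: a producer must not refill a slot until the consumer's queued GPU work on that slot has completed.
```python
import torch

if torch.cuda.is_available():
    buf = torch.empty(1024, device="cuda")      # one reusable batch slot
    consumer_done = torch.cuda.Event()

    # Consumer: queue work that reads `buf`, then record an event after it.
    out = buf * 2
    consumer_done.record(torch.cuda.current_stream())

    # Producer: make its stream wait on that event before overwriting `buf`
    # (without this ordering, the refill can race ahead of the consumer).
    producer_stream = torch.cuda.Stream()
    producer_stream.wait_event(consumer_done)
    with torch.cuda.stream(producer_stream):
        buf.normal_()                            # safe to reuse the slot now
```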
| 2023-02-27T14:47:53 | 0.0 | [] | [] |
|||
augerai/a2ml | augerai__a2ml-557 | 40df0b2d51e4530ce53bae007eaf0a0d405c63ee | diff --git a/a2ml/__init__.py b/a2ml/__init__.py
index ad2cafa4..dce8b34b 100644
--- a/a2ml/__init__.py
+++ b/a2ml/__init__.py
@@ -1,1 +1,1 @@
-__version__ = '1.0.18'
+__version__ = '1.0.19'
diff --git a/a2ml/api/auger/impl/cloud/experiment_session.py b/a2ml/api/auger/impl/cloud/experiment_session.py
index c5c0873f..7b6a474d 100644
--- a/a2ml/api/auger/impl/cloud/experiment_session.py
+++ b/a2ml/api/auger/impl/cloud/experiment_session.py
@@ -91,6 +91,10 @@ def get_leaderboard(self):
def update_settings(self):
from .experiment import AugerExperimentApi
+ if not self.ctx.config.get('experiment'):
+ self.ctx.log("Config does not contain experiment section. Do not update Review retrain options")
+ return
+
session_props = self.properties()
props = session_props.get('model_settings',{}).get('evaluation_options')
config_props = AugerExperimentApi.get_experiment_options(self.ctx.config)
| WIP: Move api to auger.ai repo
Moving all aunderlying auger api code to auger.ai repo
| 2021-05-22T21:18:19 | 0.0 | [] | [] |
|||
adafruit/Adafruit_CircuitPython_IS31FL3731 | adafruit__Adafruit_CircuitPython_IS31FL3731-53 | 97eb72a132f0a1349b2276cbc39539725f90515f | diff --git a/.gitignore b/.gitignore
index db3d538..546a827 100644
--- a/.gitignore
+++ b/.gitignore
@@ -38,6 +38,7 @@ _build
# Virtual environment-specific files
.env
.venv
+venv
# MacOS-specific files
*.DS_Store
diff --git a/adafruit_is31fl3731/__init__.py b/adafruit_is31fl3731/__init__.py
index c8cd30c..49290a4 100644
--- a/adafruit_is31fl3731/__init__.py
+++ b/adafruit_is31fl3731/__init__.py
@@ -10,7 +10,7 @@
Base library.
-* Author(s): Tony DiCola, Melissa LeBlanc-Williams, David Glaude
+* Author(s): Tony DiCola, Melissa LeBlanc-Williams, David Glaude, E. A. Graham Jr.
Implementation Notes
--------------------
@@ -202,13 +202,18 @@ def fade(self, fade_in=None, fade_out=None, pause=0):
"""
if fade_in is None and fade_out is None:
self._register(_CONFIG_BANK, _BREATH2_REGISTER, 0)
- elif fade_in is None:
+ return
+ if fade_in is None:
fade_in = fade_out
elif fade_out is None:
fade_out = fade_in
- fade_in = int(math.log(fade_in / 26, 2))
- fade_out = int(math.log(fade_out / 26, 2))
- pause = int(math.log(pause / 26, 2))
+
+ if fade_in != 0:
+ fade_in = int(math.log(fade_in / 26, 2))
+ if fade_out != 0:
+ fade_out = int(math.log(fade_out / 26, 2))
+ if pause != 0:
+ pause = int(math.log(pause / 26, 2))
if not 0 <= fade_in <= 7:
raise ValueError("Fade in out of range")
if not 0 <= fade_out <= 7:
diff --git a/examples/is31fl3731_ledshim_fade.py b/examples/is31fl3731_ledshim_fade.py
new file mode 100644
index 0000000..eb7ecda
--- /dev/null
+++ b/examples/is31fl3731_ledshim_fade.py
@@ -0,0 +1,24 @@
+# SPDX-FileCopyrightText: 2023 E. A. Graham, Jr.
+# SPDX-License-Identifier: MIT
+
+import time
+import board
+import busio
+from adafruit_is31fl3731.led_shim import LedShim as Display
+
+i2c = busio.I2C(board.SCL, board.SDA)
+
+# initial display if you are using Pimoroni LED SHIM
+display = Display(i2c)
+
+y = 1
+for x in range(28):
+ display.pixel(x, y, 255)
+
+display.fade(fade_in=104, pause=250)
+
+try:
+ while True:
+ time.sleep(10)
+except KeyboardInterrupt:
+ display.sleep(True)
| "fade" method throws Exception when both params are None
https://github.com/adafruit/Adafruit_CircuitPython_IS31FL3731/blob/97eb72a132f0a1349b2276cbc39539725f90515f/adafruit_is31fl3731/__init__.py#L193
It seems there's a missing `return` that should occur when both parameters are `None`. If that does **not** occur, line 209 will (obviously) fail.
| Would you mind making a PR? I think you are right what the fix is. https://learn.adafruit.com/contribute-to-circuitpython-with-git-and-github | 2023-02-17T17:11:32 | 0.0 | [] | [] |
||
theislab/cellrank | theislab__cellrank-1175 | 3b8e3144ae14610b099dc71f047102b8e37b787e | diff --git a/src/cellrank/models/_pygam_model.py b/src/cellrank/models/_pygam_model.py
index c986b149f..f7e179cac 100644
--- a/src/cellrank/models/_pygam_model.py
+++ b/src/cellrank/models/_pygam_model.py
@@ -34,7 +34,7 @@ class GamLinkFunction(ModeEnum):
LOGIT = enum.auto()
INVERSE = enum.auto()
LOG = enum.auto()
- INV_SQUARED = "inverse-squared"
+ INV_SQUARED = enum.auto()
class GamDistribution(ModeEnum):
@@ -95,7 +95,7 @@ def __init__(
n_knots: Optional[int] = 6,
spline_order: int = 3,
distribution: Literal["normal", "binomial", "poisson", "gamma", "gaussian", "inv_gauss"] = "gamma",
- link: Literal["identity", "logit", "inverse", "log", "inverse-squared"] = "log",
+ link: Literal["identity", "logit", "inverse", "log", "inv_squared"] = "log",
max_iter: int = 2000,
expectile: Optional[float] = None,
grid: Optional[Union[str, Mapping[str, Any]]] = None,
| Inverse-squared link error
Hello,
Thanks for the amazing work,
I am trying to create a model :
```python
modelGamaInv = cr.models.GAM(SL2_ddc, distribution = "inv_gauss", link = "inverse-squared",n_knots = 10)
```
When I try to use this model I face an error with CellRank :
```python
cr.pl.gene_trends(
SL2_ddc,
model=modelGamaInv,
lineages="BEC_Adult",
data_key="magic_imputed_data",
genes=gene_list,
same_plot=True,
ncols=2,
time_key="slingshot_lineage2",
cell_color="new_labelling",
hide_cells=False,
weight_threshold=(1e-3, 1e-3),
size =5
)
```
```
Traceback (most recent call last):
File "/miniconda3/lib/python3.9/site-packages/cellrank/models/_pygam_model.py", line 203, in fit
self.model.fit(self.x, self.y, weights=self.w, **kwargs)
File "/miniconda3/lib/python3.9/site-packages/pygam/pygam.py", line 886, in fit
self._validate_params()
File "/miniconda3/lib/python3.9/site-packages/pygam/pygam.py", line 265, in _validate_params
raise ValueError('unsupported link {}'.format(self.link))
ValueError: unsupported link inverse-squared
```
Do you have an idea of what could lead to this error?
I use CellRank version 2.0.0 and pygam version 0.8.0
Thanks
Loïc
| @michalk8 might know - I don't think we have tested this extensively with different link functions.
Hi @loicguille , seems like there's a typo in the docs, can you try `link="inv_squared"`?
Hi, when I use the command you suggest :
```python
modelSL2 = cr.models.GAM(SL2_ddc, distribution = "inv_gauss", link = "inv_squared",n_knots = 10)
```
I get another error :
```
ValueError: Invalid option `'inv_squared'` for `GamLinkFunction`. Valid options are: `['identity', 'logit', 'inverse', 'log', 'inverse-squared']`.
```
Thanks for the reply.
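A small standalone sketch of the mismatch (the lowercasing str-enum stand-in below is an assumption about CellRank's `ModeEnum`; only the option values come from the traceback and the patch): with `enum.auto()` the member value becomes `"inv_squared"`, which matches what pygam accepts, whereas the old hard-coded `"inverse-squared"` did not.
```python
import enum

class GamLinkFunctionSketch(str, enum.Enum):
    # assumed behaviour: auto() values are the lowercased member names
    def _generate_next_value_(name, start, count, last_values):
        return name.lower()

    LOG = enum.auto()
    INV_SQUARED = enum.auto()           # -> "inv_squared", accepted by pygam
    # INV_SQUARED = "inverse-squared"   # old value, rejected by pygam

print(GamLinkFunctionSketch.INV_SQUARED.value)   # inv_squared
```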
| 2024-03-05T07:57:35 | 0.0 | [] | [] |
||
DeepMReye/DeepMReye | DeepMReye__DeepMReye-9 | 0ad3db9296a55b29c5bf5d712ad4a6b82d20594c | diff --git a/deepmreye/preprocess.py b/deepmreye/preprocess.py
index 8f40cea..1c10836 100644
--- a/deepmreye/preprocess.py
+++ b/deepmreye/preprocess.py
@@ -92,7 +92,7 @@ def run_participant(fp_func, dme_template, eyemask_big, eyemask_small, x_edges,
# --------------------------------------------------------------------------------
# --------------------------MASKING-----------------------------------------------
# --------------------------------------------------------------------------------
-def get_masks(data_path='../deepmreye/masks/'):
+def get_masks(data_path=''):
"""Loads masks for whole brain, big eye mask and small eye mask
Parameters
@@ -117,6 +117,10 @@ def get_masks(data_path='../deepmreye/masks/'):
z_edges : list
Edges of mask in z-dimension
"""
+
+ if data_path == "":
+ data_path = os.path.abspath(os.path.join(__file__, "..", "masks"))
+
eyemask_small = ants.image_read(os.path.join(data_path, 'eyemask_small.nii'))
eyemask_big = ants.image_read(os.path.join(data_path, 'eyemask_big.nii'))
dme_template = ants.image_read(os.path.join(data_path, 'dme_template.nii'))
| masks can't be found
After a
```
pip install git+https://github.com/DeepMReye/DeepMReye.git
```
I get this when trying to load the masks.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/tmp/ipykernel_31249/7865041.py in <module>
1 # Preload masks to save time within participant loop
----> 2 (eyemask_small, eyemask_big, dme_template, mask, x_edges, y_edges, z_edges) = preprocess.get_masks()
~/github/CPP_deepMReye/code/env/lib/python3.8/site-packages/deepmreye/preprocess.py in get_masks(data_path)
118 Edges of mask in z-dimension
119 """
--> 120 eyemask_small = ants.image_read(data_path + 'eyemask_small.nii')
121 eyemask_big = ants.image_read(data_path + 'eyemask_big.nii')
122 dme_template = ants.image_read(data_path + 'dme_template.nii')
~/github/CPP_deepMReye/code/env/lib/python3.8/site-packages/ants/core/ants_image_io.py in image_read(filename, dimension, pixeltype, reorient)
513 filename = os.path.expanduser(filename)
514 if not os.path.exists(filename):
--> 515 raise ValueError("File %s does not exist!" % filename)
516
517 hinfo = image_header_info(filename)
ValueError: File ../deepmreye/masks/eyemask_small.nii does not exist!
```
Given the default of `get_masks`, I am not sure that the masks get "installed" at all by the `pip install`:
```
get_masks(data_path='../deepmreye/masks/'):
```
Passing the path to the masks in the repo on my local machine as a workaround will work, but there may be a smarter way of doing this.
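For what it's worth, a minimal sketch of the module-relative lookup used by the fix above (the paths here are illustrative): resolve the masks folder next to the installed module file instead of relative to the caller's working directory.
```python
import os

def default_masks_dir(module_file):
    # "<package>/preprocess.py" -> "<package>/masks"
    return os.path.abspath(os.path.join(module_file, "..", "masks"))

print(default_masks_dir("/site-packages/deepmreye/preprocess.py"))
# -> /site-packages/deepmreye/masks
```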
| better use `os.path.join` in here rather than string concatenation.
```
eyemask_small = ants.image_read(data_path + 'eyemask_small.nii')
eyemask_big = ants.image_read(data_path + 'eyemask_big.nii')
dme_template = ants.image_read(data_path + 'dme_template.nii')
```
as I am getting the following because of a missing `/`
```
File /home/remi/github/CPP_deepMReye/code/lib/deepMReye/deepmreye/maskseyemask_small.nii does not exist!
```
and I wonder if this default value won't break on windows: `'../deepmreye/masks/'`
Minimal working example of the bug:
### install
```
conda create --name deepmreye python=3.7
conda activate deepmreye
pip install git+https://github.com/DeepMReye/DeepMReye.git
```
### launch iPython
```
ipython
```
### try to get the masks
```
from deepmreye import preprocess
(
eyemask_small,
eyemask_big,
dme_template,
mask,
x_edges,
y_edges,
z_edges,
) = preprocess.get_masks()
```
### stack trace
```---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-7b7bf6d546b8> in <module>
7 y_edges,
8 z_edges,
----> 9 ) = preprocess.get_masks()
~/github/DeepMReye/deepmreye/preprocess.py in get_masks(data_path)
118 Edges of mask in z-dimension
119 """
--> 120 eyemask_small = ants.image_read(os.path.join(data_path, 'eyemask_small.nii'))
121 eyemask_big = ants.image_read(os.path.join(data_path, 'eyemask_big.nii'))
122 dme_template = ants.image_read(os.path.join(data_path, 'dme_template.nii'))
~/miniconda3/envs/deepmreye/lib/python3.7/site-packages/ants/core/ants_image_io.py in image_read(filename, dimension, pixeltype, reorient)
513 filename = os.path.expanduser(filename)
514 if not os.path.exists(filename):
--> 515 raise ValueError("File %s does not exist!" % filename)
516
517 hinfo = image_header_info(filename)
ValueError: File ../deepmreye/masks/eyemask_small.nii does not exist!
``` | 2021-12-09T12:19:40 | 0.0 | [] | [] |
||
ewels/rich-codex | ewels__rich-codex-38 | 751422b0412e25e413e2daa06f5dc040e857e80b | diff --git a/src/rich_codex/codex_search.py b/src/rich_codex/codex_search.py
index bf42ff1..18b2954 100644
--- a/src/rich_codex/codex_search.py
+++ b/src/rich_codex/codex_search.py
@@ -309,7 +309,7 @@ def parse_configs(self):
log.info(f"Found {len(configs)} config file{'s' if len(configs) > 1 else ''}")
for config in configs:
with config.open() as fh:
- self.parse_config(config_fn, yaml.safe_load(fh))
+ self.parse_config(config, yaml.safe_load(fh))
def parse_config(self, config_fn, config):
"""Parse a single rich-codex config file."""
| Output for config file use is wrong
When using a config file, the output will be wrong, pointing to an incorrect config file. The exception is when the config file used also happens to be the last one specified in the `self.configs` list.
To reproduce this issue, create a config file at one of either the first two "known" locations:
```python
self.configs = [
# Create a config here
".rich-codex.yml",
# ...or here...
".github/rich-codex.yml",
# ...but not here
"docs/img/rich-codex.yml"
]
```
For example, this config file, at `.rich-codex.yml`:
```yaml
---
outputs:
- command: "python -m pip list"
fake_command: "pip list"
title: Installed Packages
img_paths:
- docs/img/pip_list.svg
```
results in output like this:
```
⯠rich-codex -v --skip-git-checks
INFO rich-codex ⚡️📖⚡️ version 1.2.4
DEBUG Popen(['git', 'diff', '--cached', '--abbrev=40', '--full-index', '--raw'], cwd=/Users/maxrake/dev/delme/rich-codex, universal_newlines=False, shell=None,
istream=None)
DEBUG Popen(['git', 'diff', '--abbrev=40', '--full-index', '--raw'], cwd=/Users/maxrake/dev/delme/rich-codex, universal_newlines=False, shell=None, istream=None)
DEBUG Popen(['git', 'diff', '--abbrev=40', '--full-index', '-M', '--raw', '-z', '--no-color'], cwd=/Users/maxrake/dev/delme/rich-codex, universal_newlines=False,
shell=None, istream=None)
DEBUG Popen(['git', 'cat-file', '--batch-check'], cwd=/Users/maxrake/dev/delme/rich-codex, universal_newlines=False, shell=None, istream=<valid stream>)
DEBUG Popen(['git', 'status', '--porcelain', '--untracked-files'], cwd=/Users/maxrake/dev/delme/rich-codex, universal_newlines=False, shell=None, istream=None)
DEBUG Git status check: Found uncommitted changes: ['README.md', 'docs/config/colours.md', 'docs/config/command_setup.md', 'docs/config/ignoring_changes.md',
'docs/config/tweaks.md', 'docs/index.md', 'docs/inputs/markdown.md', 'docs/troubleshooting.md', 'docs/usage/cli.md', 'src/rich_codex/codex_search.py',
'.rich-codex.yml', 'docs/img/main_header.svg', 'docs/img/pip_list.svg'] (skip_git_checks: True)
DEBUG Trimming terminal output down to a minimum of 80
DEBUG Appending contents of .gitignore to 'SEARCH_EXCLUDE'
DEBUG Found config '.rich-codex.yml'
DEBUG Couldn't find '.github/rich-codex.yml'
DEBUG Couldn't find 'docs/img/rich-codex.yml'
INFO Found 1 config file
DEBUG Found valid output in 'docs/img/rich-codex.yml': {'command': 'python -m pip list', 'fake_command': 'pip list', 'title': 'Installed Packages', 'img_paths':
['docs/img/pip_list.svg']}
INFO Searching 21 files
---TRIMMED---
DEBUG Collapsing 1 image requests to 1 deduplicated
╭─────────────────────┬──────────────────────────╮
│ Commands to run:    │ Source                   │
├─────────────────────┼──────────────────────────┤
│ python -m pip list  │ docs/img/rich-codex.yml  │
╰─────────────────────┴──────────────────────────╯
```
Notice that the correct config (`.rich-codex.yml`) was found and identified, but then the incorrect config (`docs/img/rich-codex.yml`) was referenced after that.
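The one-line patch above fixes a classic stale-variable bug; here is a hypothetical, self-contained sketch of the failure mode (the names mirror the patch, the values are made up):
```python
configs = [".rich-codex.yml", "docs/img/rich-codex.yml"]

config_fn = configs[-1]               # leftover from the earlier discovery loop
for config in configs:
    # Bug: reporting `config_fn` instead of the loop variable `config`
    # attributes every parsed config to the last known path.
    print(f"parsing {config!r}, reported as {config_fn!r}")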
| 2022-08-23T20:07:16 | 0.0 | [] | [] |
|||
jg-rp/liquid | jg-rp__liquid-120 | 1bbf092f8381f5231d431f5cd9514fc708219ebb | diff --git a/liquid/builtin/filters/array.py b/liquid/builtin/filters/array.py
index 16ffa22..a35b3c0 100644
--- a/liquid/builtin/filters/array.py
+++ b/liquid/builtin/filters/array.py
@@ -125,7 +125,7 @@ def concat(sequence: ArrayT, second_array: ArrayT) -> ArrayT:
return list(chain(sequence, second_array))
-@liquid_filter
+@sequence_filter
def map_(sequence: ArrayT, key: object) -> List[object]:
"""Create an array of values from a map."""
try:
diff --git a/liquid/golden/map_filter.py b/liquid/golden/map_filter.py
index b4bea48..b341ac9 100644
--- a/liquid/golden/map_filter.py
+++ b/liquid/golden/map_filter.py
@@ -35,4 +35,16 @@
expect="#",
globals={"a": [{"title": "foo"}, {"title": "bar"}]},
),
+ Case(
+ description="nested arrays get flattened",
+ template=r"{{ a | map: 'title' | join: '#' }}",
+ expect="foo#bar#baz",
+ globals={"a": [{"title": "foo"}, [{"title": "bar"}, {"title": "baz"}]]},
+ ),
+ Case(
+ description="input is a hash",
+ template=r"{{ a | map: 'title' | join: '#' }}",
+ expect="foo",
+ globals={"a": {"title": "foo", "some": "thing"}},
+ ),
]
| The `map` filter should flatten its input sequence
The reference implementation of the `map` filter uses [`InputIterator`](https://github.com/Shopify/liquid/blob/master/lib/liquid/standardfilters.rb#L932) on its input value, causing the input to be flattened if it's a nested array, or be a single element array if it's a hash.
We have similar behaviour for other filters, using our `sequence_filter` decorator, but we do not apply it to `map`.
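The golden cases added in the patch describe the intended behaviour; as a usage sketch (assuming python-liquid's documented `Template`/`render` API), a nested input should flatten before mapping once `map` uses `sequence_filter`:
```python
from liquid import Template

template = Template("{{ a | map: 'title' | join: '#' }}")
print(template.render(a=[{"title": "foo"}, [{"title": "bar"}, {"title": "baz"}]]))
# expected after the fix: foo#bar#baz
```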
| 2023-06-20T06:45:02 | 0.0 | [] | [] |
|||
BobBuildTool/bob | BobBuildTool__bob-455 | bd03a97e4b9efac6f938a1f8111250cab411417c | diff --git a/doc/manpages/bob-project.rst b/doc/manpages/bob-project.rst
index 570e48ae3..fcade9171 100644
--- a/doc/manpages/bob-project.rst
+++ b/doc/manpages/bob-project.rst
@@ -89,7 +89,7 @@ passed on the command line *after* the package name.
Packages will be marked as 'exclude from build' in eclipse. Usefull if indexer runs OOM.
``-I ADDITIONAL_INCLUDES``
- Additional include directories. (added recursive starting from this directory)
+ Additional include directories.
``--name NAME``
Name of project. Default is complete_path_to_package
@@ -138,7 +138,7 @@ to be passed on the command line *after* the package name.
wtih foobar-* but excludes the foobar-baz package.
``-I ADDITIONAL_INCLUDES``
- Additional include directories. (added recursive starting from this directory)
+ Additional include directories.
``--kit KIT``
Name of the kit to use for this project.
diff --git a/pym/bob/generators/EclipseCdtGenerator.py b/pym/bob/generators/EclipseCdtGenerator.py
index a08e4e9aa..1afa06eac 100644
--- a/pym/bob/generators/EclipseCdtGenerator.py
+++ b/pym/bob/generators/EclipseCdtGenerator.py
@@ -231,8 +231,7 @@ def eclipseCdtGenerator(package, argv, extra, bobRoot):
# find additional include dirs
for i in args.additional_includes:
if os.path.exists(i):
- for root, directories, filenames in os.walk(i):
- includeDirs.append(os.path.join(i,root))
+ includeDirs.append(i)
except re.error as e:
raise ParseError("Invalid regular expression '{}': {}".format(e.pattern), e)
diff --git a/pym/bob/generators/QtCreatorGenerator.py b/pym/bob/generators/QtCreatorGenerator.py
index fbbcfa2c9..da9505e45 100644
--- a/pym/bob/generators/QtCreatorGenerator.py
+++ b/pym/bob/generators/QtCreatorGenerator.py
@@ -301,8 +301,7 @@ def qtProjectGenerator(package, argv, extra, bobRoot):
for i in args.additional_includes:
for e in parseArgumentLine(i):
if os.path.exists(e):
- for root, directories, filenames in os.walk(e):
- hList.append(os.path.join(e,root))
+ hList.append(e)
# compose start includes
for i in args.start_includes:
diff --git a/pym/bob/generators/common.py b/pym/bob/generators/common.py
index 6abf4f7de..75c0b33a7 100644
--- a/pym/bob/generators/common.py
+++ b/pym/bob/generators/common.py
@@ -324,9 +324,7 @@ def configure(self, package, argv):
for i in self.args.additional_includes:
for e in parseArgumentLine(i):
if os.path.exists(e):
- for root, directories, filenames in os.walk(e):
- filterDirs(directories)
- self.appendIncludeDirectories.append(os.path.join(e,root))
+ self.appendIncludeDirectories.append(e)
def generate(self):
if self.args.overwrite:
| wrong path joining in common generator
https://github.com/BobBuildTool/bob/blob/bd03a97e4b9efac6f938a1f8111250cab411417c/pym/bob/generators/common.py#L329
Hey, I tested this line with absolute and relative paths on Linux and MSYS2.
On both platforms there are scenarios where this line fails, e.g. with the input
`-I ./dev/build/fdt`
e.g. result:
```
e=./dev/build/fdt
root=./dev/build/fdt/ros/datalogging-tgt/x86_64-cross-linux-gnu/1/workspace/build/fdt.ros.datalogging/CMakeFiles/fdt.ros.datalogging.dir/src
join=./dev/build/fdt/./dev/build/fdt/ros/datalogging-tgt/x86_64-cross-linux-gnu/1/workspace/build/fdt.ros.datalogging/CMakeFiles/fdt.ros.datalogging.dir/src
```
The same happens if I use `-I dev/build/fdt`, so relative paths always fail.
Absolute paths on Linux are okay, e.g. `-I $PWD/dev/build/fdt`.
result:
```
e=/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt
root=/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/ros/datalogging-tgt/x86_64-cross-linux-gnu/1/workspace/build/fdt.ros.datalogging/CMakeFiles/fdt.ros.datalogging.dir/src
join=/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/ros/datalogging-tgt/x86_64-cross-linux-gnu/1/workspace/build/fdt.ros.datalogging/CMakeFiles/fdt.ros.datalogging.dir/src
```
But the join here is also useless, because the result is always the same as root.
On Windows, absolute paths all fail, for all path variants (unix, windows, mixed),
e.g.:
```
e=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt
root=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar
join=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar
```
As I tested, it is important that the path is always an absolute one!
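A standalone illustration of the reported behaviour (the paths are made up): because `os.walk(e)` already yields `root` values that start with `e`, joining them again duplicates the prefix for relative inputs, while `os.path.abspath(root)` gives a usable path.
```python
import os

e = "./dev/build/fdt"
root = "./dev/build/fdt/sub/include"   # a typical value yielded by os.walk(e)

print(os.path.join(e, root))   # ./dev/build/fdt/./dev/build/fdt/sub/include
print(os.path.abspath(root))   # <cwd>/dev/build/fdt/sub/include
```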
| https://github.com/BobBuildTool/bob/blob/bd03a97e4b9efac6f938a1f8111250cab411417c/pym/bob/generators/QtCreatorGenerator.py#L305
the same "issues" here, if we use qt-creator for MSYS!
```
e=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt
root=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar
join=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar
```
A solution for me would be to replace `os.path.join(e,root)` with `os.path.abspath(root)`.
For Linux, absolute and relative paths then work; for MSYS2 the user has to use `-I $(pwd -W)/dev/build/...` (works for Qt Creator and VS2019).
The line is just wrong for what it tries to accomplish. Changing it to `os.path.abspath(root)` looks like the correct solution. Interestingly the code in question was introduced silently in e9ec42ac50. Before this commit the include directory was added as-is.
I'm wondering why we would ever need to recursively add these additional include directories. @rhubert do you remember the purpose of this recursive approach? It somewhat feels wrong...
Not exactly... I think Qt Creator needs to know all sub-directories for the indexer to be happy. Not sure if this is still the case for an up-to-date version. The commit is >5 years old...
Before e9ec42a the include-dir list was built from all directories containing a header file within the checkout dirs. With `-I` you can specify a directory outside of the bob-known world which you want to have in the index too, and this is also added recursively. Not sure what the use case of this was. Maybe some host builds where you want to have `/usr/include`.
@mahaase why do you need to add the build directory? All packaged dep headers should be found?
I think it's simply not supported to add directories relative to the bob-workspace. Adding them absolute `-I $(pwd)/dev/build/fdt` should work - at least for unix. I don't know what the Windows issue is, as I don't know what
>```
>e=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt
>root=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar
>join=D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar
>```
is at all...
If qt-creator still needs all directories, the solution might be:
```python
os.path.abspath(os.path.join(e,root))
```
?
> @mahaase why do you need to add the build directory? All packaged dep headers should be found?
Use case 1: with https://cmake.org/cmake/help/latest/module/GenerateExportHeader.html, a `*_export.h` file is generated inside the build directory for the export-header handling of shared libraries.
Use case 2: the source step only checks out IDL files; in the build step a generator creates the header files, which should also be resolved in the IDE.
Especially for VS2019 projects use case 1 is important, because IntelliSense will declare the header file defective if `class UNKNOWN_DEFINE Name {` exists.
@rhubert the `os.path.join(e,root)` is a problem for MSYS2. As I debugged and wrote, if the `e` value is `D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt` and the `root` value is `D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar`, then Python will join them to this result:
`D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/D:/msys64/home/mhaase/ws/fdt.bobrecipes/dev/build/fdt/vcar-dev/x86_64-pc-win32/1/workspace/install/usr/lib/cmake/fdt.vcar`
that path is of course not working.
So, this issue was only found because of missing functionality in the IDE generators, namely that the build directories should be checked for header files, too. Would that be another ticket?
I don't think it's a good idea to add build dirs to the IDE. ATM we add only the `src` dir of the current package and the `dist` dirs of the dependencies.
I'd assume you have many duplicated headers if you also add _all_ the build dirs (`rsync $1/ .`). For downloaded deps you don't have a build dir? The only valid build dir is IMO the build dir of the component the project is generated for.
If you really need this in your setup `-I ./dev/build` is the way to go..
```
for root, directories, filenames in os.walk(e):
filterDirs(directories)
self.appendIncludeDirectories.append(os.path.join(e,root))
```
is just broken. It doesn't make sense to filter the directory list and ignore the result.
```
self.appendIncludeDirectories.append(os.path.abspath(directory))
``` | 2022-01-25T20:49:13 | 0.0 | [] | [] |
||
sourmash-bio/sourmash | sourmash-bio__sourmash-2953 | eee9bdb9dbce7d2bb53af54604be1bec6f3d283b | diff --git a/doc/classifying-signatures.md b/doc/classifying-signatures.md
index 089793c731..eb05de58ff 100644
--- a/doc/classifying-signatures.md
+++ b/doc/classifying-signatures.md
@@ -190,15 +190,17 @@ first as "abundance projection" and the second as "angular similarity".
`sourmash gather` can report approximate abundance information for
containment queries against genome databases. This will give you
-numbers that (approximately) match what you get from counting mapped
-reads.
+numbers that (approximately) match what you get from counting the coverage
+of each contig by mapping reads.
-If you create your input signatures with `-p abund`,
-`sourmash gather` will use that information
-to calculate an abundance-weighted result. This will weight
-each match to a hash value by the multiplicity of the hash value in
+If you create your query signature with `-p abund`,
+`sourmash gather` will use the resulting k-mer multiplicity information
+to calculate an abundance-weighted result, weighting
+each hash value match by the multiplicity of the hash value in
the query signature. You can turn off this behavior with
-`--ignore-abundance`.
+`--ignore-abundance`. The abundance is reported as column `avg_abund`
+in the console output, and columns `average_abund`, `median_abund`, and
+`std_abund` in the CSV output.
For example, if you have a metagenome composed of two equal sized genomes
A and B, with A present at 10 times the abundance of B, `gather` on
@@ -347,7 +349,7 @@ First, we make some synthetic data sets:
then we make signature s10-s11 with r1 and r3, i.e. 1:1 abundance, and
make signature s10x10-s11 with r2 and r3, i.e. 10:1 abundance.
-### A first experiment: 1:1 abundance.
+### A first experiment: 1:1 abundance ratio.
When we project r1+r3, 1:1 abundance, onto s10, s11, and s12 genomes
with gather:
@@ -367,10 +369,10 @@ overlap p_query p_match avg_abund
* approximately 50% of each query matching (first column, `p_query`)
* approximately 80% of subject genome's contents being matched (this is due to the low coverage of 2 used to build queries; `p_match`)
-* approximately 2.0 abundance (third column, `avg_abund`)
+* approximately 2.0 coverage (third column, `avg_abund`)
* no match to genome s12.
-### A second experiment: 10:1 abundance.
+### A second experiment: 10:1 abundance ratio.
When we project r2+r3, 10:1 abundance, onto s10, s11, and s12 genomes
with gather:
@@ -391,11 +393,11 @@ overlap p_query p_match avg_abund
* approximately 91% of s10 matching
* approximately 9% of s11 matching
* approximately 100% of the high coverage genome being matched, with only 80% of the low coverage genome
-* approximately 2.0 abundance (third column, avg_abund) for s11, and (very) approximately 20x abundance for genome s10.
+* approximately 2.0 coverage (third column, avg_abund) for s11, and (very) approximately 20x coverage for genome s10.
The cause of the poor approximation for genome s10 is unclear; it
could be due to low coverage, or small genome size, or something
-else. More investigation needed.
+else. More investigation is needed.
## Appendix C: sourmash gather output examples
diff --git a/doc/command-line.md b/doc/command-line.md
index 99eb7d0d4a..ee5421a6be 100644
--- a/doc/command-line.md
+++ b/doc/command-line.md
@@ -373,9 +373,9 @@ the recovered matches hit 45.6% of the query k-mers (unweighted).
```
For each match,
-* 'overlap', the first column, is the estimated number of k-mers shared between the match and the query.
-* 'p_query' is the _percentage_ of the query that overlaps with the match; it is the amount of the metagenome "explained" by this match.
-* 'p_match' is the percentage of the _match_ that overlaps with the query; it is the "detection" of the match in the metagenome.
+* 'overlap', the first column, is the estimated number of base pairs shared between the match and the query, based on the number of shared hashes.
+* 'p_query' is the _percentage_ of the query that overlaps with the match; it is the amount of the metagenome "explained" by this match. It is typically a lower bound on the percent of metagenomes reads that will map to this genome.
+* 'p_match' is the percentage of the _match_ that overlaps with the query; it is the "detection" of the match in the metagenome. It is typically a lower bound on the number of base pairs that will be covered by read mapping.
Quite a bit more information per match row is available in the CSV
output saved with `-o`; for details, see
diff --git a/src/sourmash/cli/gather.py b/src/sourmash/cli/gather.py
index 732fef52bf..0b0115efd2 100644
--- a/src/sourmash/cli/gather.py
+++ b/src/sourmash/cli/gather.py
@@ -24,7 +24,7 @@
sourmash gather query.sig [ list of signatures or SBTs ]
```
-Example output:
+Example output for an unweighted/noabund query:
```
overlap p_query p_match
--------- ------- --------
@@ -35,6 +35,17 @@
0.7 Mbp 5.3%% 17.6%% AE017285.1 Desulfovibrio vulgaris sub...
```
+Example output for a weighted query:
+```
+overlap p_query p_match avg_abund
+--------- ------- ------- ---------
+9.3 Mbp 0.8% 97.5% 6.7 NC_007951.1 Burkholderia xenovorans ...
+7.3 Mbp 2.3% 99.9% 23.9 NC_003272.1 Nostoc sp. PCC 7120 DNA,...
+7.0 Mbp 8.9% 100.0% 94.5 BX119912.1 Rhodopirellula baltica SH...
+6.6 Mbp 1.4% 100.0% 16.3 NC_009972.1 Herpetosiphon aurantiacu...
+...
+```
+
The command line option `--threshold-bp` sets the threshold below
which matches are no longer reported; by default, this is set to
50kb. see the Appendix in Classifying Signatures [1] for details.
diff --git a/src/sourmash/search.py b/src/sourmash/search.py
index a019c2bbc2..7b2db8008f 100644
--- a/src/sourmash/search.py
+++ b/src/sourmash/search.py
@@ -433,18 +433,27 @@ class GatherResult(PrefetchResult):
sum_weighted_found: int = None
total_weighted_hashes: int = None
- gather_write_cols = ['intersect_bp', 'f_orig_query', 'f_match', 'f_unique_to_query',
- 'f_unique_weighted','average_abund', 'median_abund', 'std_abund', 'filename', # here we use 'filename'
- 'name', 'md5', 'f_match_orig', 'unique_intersect_bp', 'gather_result_rank',
- 'remaining_bp', 'query_filename', 'query_name', 'query_md5', 'query_bp', 'ksize',
- 'moltype', 'scaled', 'query_n_hashes', 'query_abundance', 'query_containment_ani',
- 'match_containment_ani', 'average_containment_ani', 'max_containment_ani',
+ gather_write_cols = ['intersect_bp', 'f_orig_query', 'f_match',
+ 'f_unique_to_query',
+ 'f_unique_weighted','average_abund',
+ 'median_abund', 'std_abund', 'filename',
+ 'name', 'md5',
+ 'f_match_orig', 'unique_intersect_bp',
+ 'gather_result_rank', 'remaining_bp',
+ 'query_filename', 'query_name', 'query_md5',
+ 'query_bp', 'ksize', 'moltype', 'scaled',
+ 'query_n_hashes', 'query_abundance',
+ 'query_containment_ani',
+ 'match_containment_ani',
+ 'average_containment_ani',
+ 'max_containment_ani',
'potential_false_negative',
- 'n_unique_weighted_found', 'sum_weighted_found',
+ 'n_unique_weighted_found',
+ 'sum_weighted_found',
'total_weighted_hashes']
ci_cols = ["query_containment_ani_low", "query_containment_ani_high",
- "match_containment_ani_low", "match_containment_ani_high"]
+ "match_containment_ani_low", "match_containment_ani_high"]
gather_write_cols_ci = gather_write_cols + ci_cols
@@ -671,9 +680,10 @@ def __init__(self, query, counters, *,
# do we pay attention to abundances?
query_mh = query.minhash
query_hashes = query_mh.hashes
- orig_query_abunds = { k: 1 for k in query_hashes }
if track_abundance:
orig_query_abunds = query_hashes
+ else:
+ orig_query_abunds = { k: 1 for k in query_hashes }
# adjust for not found...
if noident_mh is None: # create empty
@@ -721,9 +731,9 @@ def _update_scaled(self, scaled):
orig_query_abunds = self.orig_query_abunds
self.noident_query_sum_abunds = sum(( orig_query_abunds[k] \
for k in self.noident_mh.hashes ))
- self.sum_abunds = sum(( orig_query_abunds[k] \
+ self.total_weighted_hashes = sum(( orig_query_abunds[k] \
for k in self.orig_query_mh.hashes ))
- self.sum_abunds += self.noident_query_sum_abunds
+ self.total_weighted_hashes += self.noident_query_sum_abunds
if max_scaled != scaled:
return max_scaled
@@ -746,7 +756,6 @@ def __next__(self):
cmp_scaled = self.cmp_scaled
# will not be changed::
- track_abundance = self.track_abundance
threshold_bp = self.threshold_bp
orig_query_abunds = self.orig_query_abunds
@@ -770,7 +779,7 @@ def __next__(self):
# retrieve various saved things, after potential downsampling
orig_query_mh = self.orig_query_mh
- sum_abunds = self.sum_abunds
+ total_weighted_hashes = self.total_weighted_hashes
noident_mh = self.noident_mh
orig_query_len = len(orig_query_mh) + len(noident_mh)
@@ -784,10 +793,10 @@ def __next__(self):
new_query = SourmashSignature(new_query_mh)
# compute weighted information for remaining query hashes
- query_hashes = set(query_mh.hashes) - set(found_mh.hashes)
+ query_hashes = set(new_query_mh.hashes)
n_weighted_missed = sum((orig_query_abunds[k] for k in query_hashes))
n_weighted_missed += self.noident_query_sum_abunds
- sum_weighted_found = sum_abunds - n_weighted_missed
+ sum_weighted_found = total_weighted_hashes - n_weighted_missed
# build a GatherResult
result = GatherResult(self.orig_query, best_match,
@@ -795,18 +804,17 @@ def __next__(self):
filename=filename,
gather_result_rank=self.result_n,
gather_querymh=query.minhash,
- ignore_abundance= not track_abundance,
+ ignore_abundance=not self.track_abundance,
threshold_bp=threshold_bp,
orig_query_len=orig_query_len,
- orig_query_abunds = self.orig_query_abunds,
+ orig_query_abunds=self.orig_query_abunds,
estimate_ani_ci=self.estimate_ani_ci,
sum_weighted_found=sum_weighted_found,
- total_weighted_hashes=sum_abunds,
+ total_weighted_hashes=total_weighted_hashes,
)
self.result_n += 1
self.query = new_query
- self.orig_query_mh = orig_query_mh
return result
| adjust sourmash gather "overlap" documentation description
under gather docs: https://sourmash.readthedocs.io/en/latest/command-line.html#sourmash-gather-find-metagenome-members
“‘overlap’, the first column, is the estimated number of k-mers shared between the match and the query.”
I think this should instead say something like: the estimated number of base pairs, based on the k-mers shared between match and query.
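The arithmetic behind the requested wording is simple (a sketch with an illustrative `scaled` value): the overlap column scales the number of shared hashes back up to an estimated number of base pairs.
```python
shared_hashes = 9_300
scaled = 1_000                      # illustrative FracMinHash scaled value
estimated_overlap_bp = shared_hashes * scaled
print(f"{estimated_overlap_bp / 1e6:.1f} Mbp")   # 9.3 Mbp
```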
| 2024-01-29T03:28:01 | 0.0 | [] | [] |
|||
Lightning-Universe/lightning-transformers | Lightning-Universe__lightning-transformers-16 | 487ceb107c01dae7e409220f124f8c5f2ae81cd1 | diff --git a/conf/scheduler/linear_schedule_with_warmup.yaml b/conf/scheduler/linear_schedule_with_warmup.yaml
index 9cbdf617..f73e4704 100644
--- a/conf/scheduler/linear_schedule_with_warmup.yaml
+++ b/conf/scheduler/linear_schedule_with_warmup.yaml
@@ -1,4 +1,4 @@
# @package _group_
_target_: transformers.get_linear_schedule_with_warmup
-num_warmup_steps: 10
-num_training_steps: 10
\ No newline at end of file
+num_warmup_steps: 0.1 # float values determines percentage of training steps to use as warmup
+num_training_steps: -1 # -1 specifies to infer number of training steps
\ No newline at end of file
diff --git a/lightning_transformers/core/model.py b/lightning_transformers/core/model.py
index b9641ad9..307c8161 100644
--- a/lightning_transformers/core/model.py
+++ b/lightning_transformers/core/model.py
@@ -1,6 +1,8 @@
+import math
from typing import Optional, Any
import pytorch_lightning as pl
+from pytorch_lightning import _logger as log
from hydra.utils import get_class, instantiate
from omegaconf import DictConfig
@@ -36,10 +38,44 @@ def configure_optimizers(self):
},
]
optimizer = instantiate(self.optim, optimizer_grouped_parameters)
- scheduler = instantiate(self.scheduler, optimizer)
+
+ if self.scheduler.num_training_steps < 0:
+ # less than 0 specifies to infer number of training steps
+ self.scheduler.num_training_steps = self.num_training_steps
+ log.info(f"Inferring number of training steps, set to {self.scheduler.num_training_steps}")
+
+ if isinstance(self.scheduler.num_warmup_steps, float):
+ # Convert float values to percentage of training steps to use as warmup
+ warmup_ratio = self.scheduler.num_warmup_steps
+ self.scheduler.num_warmup_steps = self.scheduler.num_training_steps * warmup_ratio
+ log.info(f"Inferring number of warmup steps from ratio, set to {self.scheduler.num_warmup_steps}")
+
+ scheduler = instantiate(
+ config=self.scheduler,
+ optimizer=optimizer
+ )
scheduler = {'scheduler': scheduler, 'interval': 'step', 'frequency': 1}
return [optimizer], [scheduler]
+ @property
+ def num_training_steps(self) -> int:
+ """Total training steps inferred from datamodule and devices."""
+ if self.trainer.max_steps:
+ return self.trainer.max_steps
+ batch_size = self.trainer.datamodule.batch_size
+
+ if self.trainer.limit_train_batches != 0:
+ dataset_size = self.trainer.limit_train_batches
+ else:
+ dataset_size = len(self.trainer.datamodule.train_dataloader())
+
+ num_devices = max(1, self.trainer.num_gpus, self.trainer.num_processes)
+ if self.trainer.tpu_cores:
+ num_devices = max(num_devices, self.trainer.tpu_cores)
+
+ effective_batch_size = batch_size * self.trainer.accumulate_grad_batches * num_devices
+ return math.ceil(dataset_size / effective_batch_size) * self.trainer.max_epochs
+
class TaskTransformer(LitTransformer):
"""
| Infer max steps for schedulers
Many schedulers require total_steps to be defined: https://github.com/PyTorchLightning/lightning-transformers/blob/master/conf/scheduler/linear_schedule_with_warmup.yaml#L3-L4
We should define a function within this repo to determine the total number of steps from the data module, appropriately considering limit_train_batches/num_processes etc. This is similar to what has been done in NeMo and I have some pseudocode here:
```python
def total_training_steps(self,
max_epochs,
max_steps,
accumulate_grad_batches,
limit_train_batches,
num_distributed) -> int:
if max_steps > 0:
return max_steps
# Compute effective num max_steps
num_samples = len(train_dataloader.dataset)
batch_size = train_dataloader.batch_size
drop_last = train_dataloader.drop_last
return self.compute_max_steps(
max_epochs=max_epochs,
accumulate_grad_batches=accumulate_grad_batches,
limit_train_batches=limit_train_batches,
num_distributed=num_distributed,
num_samples=num_samples,
batch_size=batch_size,
drop_last=drop_last,
)
def compute_max_steps(
self,
max_epochs,
accumulate_grad_batches,
limit_train_batches,
num_distributed,
num_samples,
batch_size,
drop_last):
_round = math.floor if drop_last else math.ceil
sampler_num_samples = math.ceil(num_samples / num_distributed)
steps_per_epoch = _round(sampler_num_samples / batch_size)
if isinstance(limit_train_batches, int) or limit_train_batches == 0.0:
steps_per_epoch = min(steps_per_epoch, int(limit_train_batches))
elif steps_per_epoch != float('inf'):
# limit_train_batches is a percentage of batches per epoch
steps_per_epoch = int(steps_per_epoch * limit_train_batches)
if accumulate_grad_batches == 1:
steps_per_epoch = max(steps_per_epoch, 1)
return math.ceil(steps_per_epoch / accumulate_grad_batches) * max_epochs
```
Eventually I feel like this code should go into lightning as a useful function, there are already relevant issues in PL around this.
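Boiled down, the inference requested here is just this arithmetic (a standalone sketch with illustrative numbers, not the repo's API):
```python
import math

def total_training_steps(dataset_size, batch_size, accumulate_grad_batches,
                         num_devices, max_epochs, max_steps=None):
    if max_steps:                       # an explicit max_steps wins
        return max_steps
    effective_batch_size = batch_size * accumulate_grad_batches * num_devices
    return math.ceil(dataset_size / effective_batch_size) * max_epochs

print(total_training_steps(10_000, 32, 2, 1, 3))   # 471
```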
| 2021-01-07T11:09:21 | 0.0 | [] | [] |
|||
hactar-is/wagtail-meilisearch | hactar-is__wagtail-meilisearch-5 | e0080f1015981fefe1135844663183cfa39a0d5d | diff --git a/pyproject.toml b/pyproject.toml
index 863642e..3e6ab61 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -29,7 +29,7 @@ include = [
python = "^3.8"
arrow = "^1.2.3"
-meilisearch = "^0.25.0"
+meilisearch = "^0.30.0"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
diff --git a/wagtail_meilisearch/backend.py b/wagtail_meilisearch/backend.py
index 40ffd3c..aa0fb00 100644
--- a/wagtail_meilisearch/backend.py
+++ b/wagtail_meilisearch/backend.py
@@ -17,6 +17,7 @@
from wagtail.search.backends.base import (
BaseSearchBackend, BaseSearchResults, EmptySearchResults, BaseSearchQueryCompiler
)
+from wagtail.search.query import PlainText, Phrase, Fuzzy
try:
from django.utils.encoding import force_text
@@ -46,6 +47,10 @@ def _get_field_mapping(field):
return field.field_name
+def get_index_label(model):
+ return model._meta.label.replace('.', '-')
+
+
class MeiliSearchModelIndex:
"""Creats a working index for each model sent to it.
@@ -61,15 +66,9 @@ def __init__(self, backend, model):
"""
self.backend = backend
self.client = backend.client
- self.query_limit = backend.query_limit
self.model = model
self.name = model._meta.label
self.index = self._set_index(model)
- self.search_params = {
- 'limit': self.query_limit,
- 'attributesToRetrieve': ['id', ],
- 'showMatchesPosition': True
- }
self.update_strategy = backend.update_strategy
self.update_delta = backend.update_delta
self.delta_fields = [
@@ -87,7 +86,7 @@ def _update_stop_words(self, label):
sys.stdout.write(f'WARN: Failed to update stop words on {label}\n')
def _set_index(self, model):
- label = self._get_label(model)
+ label = get_index_label(model)
# if index doesn't exist, create
try:
self.client.get_index(label).get_settings()
@@ -99,10 +98,6 @@ def _set_index(self, model):
return index
- def _get_label(self, model):
- label = model._meta.label.replace('.', '-')
- return label
-
def _rebuild(self):
self.index.delete()
self._set_index(self.model)
@@ -294,7 +289,7 @@ def delete_item(self, obj):
self.index.delete_document(obj.id)
def search(self, query):
- return self.index.search(query, self.search_params)
+ return self.index.search(query, self.backend.search_params)
def __str__(self):
return self.name
@@ -317,7 +312,7 @@ def add_items(self, model, chunk):
class MeiliSearchRebuilder:
def __init__(self, model_index):
self.index = model_index
- self.uid = self.index._get_label(self.index.model)
+ self.uid = get_index_label(self.index.model)
self.dummy_index = DummyModelIndex()
def start(self):
@@ -408,22 +403,41 @@ def _get_field_boosts(self, model):
return boosts
- def _do_search(self):
- results = []
+ @property
+ def models(self):
+ return get_descendant_models(self.query_compiler.queryset.model)
- qc = self.query_compiler
- model = qc.queryset.model
- models = get_descendant_models(model)
- terms = qc.query.query_string
-
- for m in models:
- index = self.backend.get_index_for_model(m)
- result = index.search(terms)
- boosts = self._get_field_boosts(m)
- for item in result['hits']:
- if item not in results:
- item['boosts'] = boosts
- results.append(item)
+ @property
+ def query_string(self):
+ query = self.query_compiler.query
+ if isinstance(query, (PlainText, Phrase, Fuzzy)):
+ return query.query_string
+ return ''
+
+ def _do_search(self):
+ models = self.models
+ terms = self.query_string
+
+ models_boosts = {}
+ for model in models:
+ label = get_index_label(model)
+ models_boosts[label] = self._get_field_boosts(model)
+
+ results = [
+ {
+ **item,
+ 'boosts': models_boosts[items['indexUid']]
+ }
+ for items in self.backend.client.multi_search([
+ {
+ 'indexUid': index_uid,
+ 'q': terms,
+ **self.backend.search_params,
+ }
+ for index_uid in models_boosts
+ ])['results']
+ for item in items['hits']
+ ]
"""At this point we have a list of results that each look something like this
(with various fields snipped)...
@@ -480,20 +494,38 @@ def _do_search(self):
sorted_results = sorted(results, key=itemgetter('score'), reverse=True)
sorted_ids = [_['id'] for _ in sorted_results]
+ qc = self.query_compiler
+ window_sorted_ids = sorted_ids[self.start:self.stop]
+ results = qc.queryset.filter(pk__in=window_sorted_ids)
+
# This piece of utter genius is borrowed wholesale from wagtail-whoosh after I spent
# several hours trying and failing to work out how to do this.
if qc.order_by_relevance:
# Retrieve the results from the db, but preserve the order by score
- preserved_order = Case(*[When(pk=pk, then=pos) for pos, pk in enumerate(sorted_ids)])
- results = qc.queryset.filter(pk__in=sorted_ids).order_by(preserved_order)
- else:
- results = qc.queryset.filter(pk__in=sorted_ids)
- results = results.distinct()[self.start:self.stop]
+ preserved_order = Case(*[When(pk=pk, then=pos) for pos, pk in enumerate(window_sorted_ids)])
+ results = results.order_by(preserved_order)
- return results
+ return results.distinct()
def _do_count(self):
- return len(self._do_search())
+ models = self.models
+ terms = self.query_string
+ indexes_uids = [
+ get_index_label(model)
+ for model in models
+ ]
+ return sum([
+ results['totalHits']
+ for results in self.backend.client.multi_search([
+ {
+ 'indexUid': index_uid,
+ 'q': terms,
+ 'attributesToRetrieve': [],
+ 'hitsPerPage': 0,
+ }
+ for index_uid in indexes_uids
+ ])['results']
+ ])
class MeiliSearchBackend(BaseSearchBackend):
@@ -517,6 +549,11 @@ def __init__(self, params):
self.skip_models = params.get('SKIP_MODELS', [])
self.update_strategy = params.get('UPDATE_STRATEGY', 'soft')
self.query_limit = params.get('QUERY_LIMIT', 999999)
+ self.search_params = {
+ 'limit': self.query_limit,
+ 'attributesToRetrieve': ['id'],
+ 'showMatchesPosition': True
+ }
self.update_delta = None
if self.update_strategy == 'delta':
self.update_delta = params.get('UPDATE_DELTA', {'weeks': -1})
| Use `/multi-search` for better performance
Currently, `wagtail-meilisearch` queries each meilisearch index one by one and gathers results:
https://github.com/hactar-is/wagtail-meilisearch/blob/49ce3814a62e3f128bcfd79d544aaa9d80cf2444/wagtail_meilisearch/backend.py#L419-L426
On a website with dozens of page types, searching with meilisearch on the global admin search can take more than a minute.
Would it be possible to use [the `/multi-search` endpoint](https://www.meilisearch.com/docs/reference/api/multi_search) to send all these "subqueries" at once? I did a test using 3 indexes, the latency is the same as with 1 index, so we can probably expect much better performance with this approach.
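A usage sketch of the single round-trip being asked for (the server URL, key and index names are placeholders; the query shape mirrors the patch above and the meilisearch-python `multi_search` client method):
```python
import meilisearch

client = meilisearch.Client("http://127.0.0.1:7700", "masterKey")

results = client.multi_search([
    {"indexUid": "blog-BlogPage", "q": "lighthouse", "attributesToRetrieve": ["id"]},
    {"indexUid": "news-NewsPage", "q": "lighthouse", "attributesToRetrieve": ["id"]},
])["results"]

for per_index in results:
    print(per_index["indexUid"], len(per_index["hits"]))
```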
| 2024-02-15T12:16:17 | 0.0 | [] | [] |
|||
mu-editor/pup | mu-editor__pup-227 | 343af147d94ca7c21ee262459ab9b8022b0d13d9 | diff --git a/src/pup/plugins/download.py b/src/pup/plugins/download.py
index 3309dd9..7edeebc 100644
--- a/src/pup/plugins/download.py
+++ b/src/pup/plugins/download.py
@@ -8,6 +8,8 @@
import httpx
+_log = logging.getLogger(__name__)
+
class Step:
@@ -44,8 +46,10 @@ def _download(self, url, dirname, filename):
with open(file.with_suffix('.url'), 'wt', encoding='utf8') as f:
f.write(url)
- with open(file, 'wb') as f, httpx.stream('GET', url) as r:
+ _log.info(f'Downloading {url!r}...')
+ with open(file, 'wb') as f, httpx.stream('GET', url, follow_redirects=True) as r:
for chunk in r.iter_bytes():
f.write(chunk)
+ _log.info(f'Cached in {str(file)!r}.')
return file
| Dowload plugin should follow redirects
As of this writing, an HTTP GET of `https://github.com/indygreg/python-build-standalone/releases/download/20220528/cpython-3.10.4+20220528-x86_64-apple-darwin-pgo-full.tar.zst` returns a 302.
Setting `PUP_PBS_URL` to it fails, thus!
| 2022-06-21T21:18:58 | 0.0 | [] | [] |
|||
GeoscienceAustralia/GeodePy | GeoscienceAustralia__GeodePy-126 | 82c059a39b831f30035107577ba695703aab99bf | diff --git a/Standalone/ntv2reader.py b/Standalone/ntv2reader.py
deleted file mode 100644
index ce7bcbe..0000000
--- a/Standalone/ntv2reader.py
+++ /dev/null
@@ -1,621 +0,0 @@
-import struct
-from datetime import datetime as dt
-import numpy as np
-import pandas as pd
-
-'''
-Created 12/11/2018
-
-@Author: Jaimie Dodd
-
-This script defines classes for reading the AUSGeoid2020 binary NTv2 file, and interpolating the Ellipsoid to AHD
-Separation, Deflection in the Prime Meridian and Deflection in the Prime Vertical of a given latitude and longitude via
-bilinear or bicubic interpolation (user specified input or bicubic by default).
-
-Values have been tested against the output of DynAdjust.
-'''
-
-class NTv2ReaderBinary(object):
- def __init__(self):
- self.numOrec = 0
- self.numSrec = 0
- self.numFile = 0
- self.gsType = ''
- self.version = ''
- self.systemF = ''
- self.systemT = ''
- self.majorF = 0
- self.minorF = 0
- self.majorT = 0
- self.minorT = 0
- self.subgrids = pd.DataFrame(columns=['CONTAINS', 'SUB_NAME', 'PARENT', 'CREATED', 'UPDATED', 'S_LAT', 'N_LAT',
- 'E_LONG', 'W_LONG', 'LAT_INC', 'LONG_INC', 'GS_COUNT',
- 'ELLIPSOID_TO_AHD_SEPARATION', 'DEFLECTION_PRIME_MERIDIAN',
- 'DEFLECTION_PRIME_VERTICAL'])
- self.ellps2ahdsep = 0
- self.deflectionprimemeridian = 0
- self.deflectionprimevertical = 0
-
- def ntv2reader(self, file, lat, lon, interpolation_method='bicubic'):
- """
- Function for reading in the binary NTv2 file and
-
- :param file: NTv2 file (full location as string)
- :param lat: latitude of point of interest (decimal degrees)
- :param lon: longitude of point of interest (decimal degrees)
- :param interpolation_method: 'bilinear' or 'bicubic' ('bicubic' by default)
- :return: tuple containing ellipsoid to AHD separation, deflection in prime meridian and deflection in prime
- vertical.
-
- """
-
- # If interpolation method specified is something other than bilinear or bicubic, then raise ValueError
- interpolation_options = ['bilinear', 'bicubic']
- if interpolation_method not in interpolation_options:
- raise ValueError("Invalid Interpolation method. Expected one of: %s" % interpolation_options)
-
- # Read the NTv2 file as binary
- f = open(file, 'rb')
-
- # NUM_OREC
- f.seek(8, 1)
-
- byte = f.read(4)
- self.numOrec = int.from_bytes(byte, byteorder='little')
-
- f.seek(4, 1)
-
- # NUM_SREC
- f.seek(8, 1)
-
- byte = f.read(4)
- self.numSrec = int.from_bytes(byte, byteorder='little')
-
- f.seek(4, 1)
-
- # NUM_FILE
- f.seek(8, 1)
-
- byte = f.read(4)
- self.numFile = int.from_bytes(byte, byteorder='little')
-
- f.seek(4, 1)
-
- # GS_TYPE
- f.seek(8, 1)
-
- byte = f.read(8)
- self.gsType = byte.decode('utf8').strip('\x00').strip()
-
- # VERSION
- f.seek(8, 1)
-
- byte = f.read(8)
- self.version = byte.decode('utf8').strip('\x00').strip()
-
- # SYSTEM_F
- f.seek(8, 1)
-
- byte = f.read(8)
- self.systemF = byte.decode('utf8').strip('\x00').strip()
-
- # SYSTEM_T
- f.seek(8, 1)
-
- byte = f.read(8)
- self.systemT = byte.decode('utf8').strip('\x00').strip()
-
- # MAJOR_F
- f.seek(8, 1)
-
- byte = f.read(8)
- self.majorF = struct.unpack('d', byte)[0]
-
- # MINOR_F
- f.seek(8, 1)
-
- byte = f.read(8)
- self.minorF = struct.unpack('d', byte)[0]
-
- # MAJOR_T
- f.seek(8, 1)
-
- byte = f.read(8)
- self.majorT = struct.unpack('d', byte)[0]
-
- # MINOR_T
- f.seek(8, 1)
-
- byte = f.read(8)
- self.minorT = struct.unpack('d', byte)[0]
-
- # Convert lat and lon to seconds
- if self.gsType == 'SECONDS':
- lat = lat * 3600
- lon = lon * -3600
-
- # Sub Grids
- for i in range(0, self.numFile):
- self.subgrids = self.subgrids.append(SubGrid().findsubgrid(f, lat, lon, interpolation_method),
- ignore_index=True)
-
- # Close the NTv2 file
- f.close()
-
- # Filter subgrids dataframe so that only subgrids with values remain
- self.subgrids = self.subgrids[self.subgrids['CONTAINS']]
-
- # If more than one subgrid which contains coordinates, then filter dataframe so subgrid with smallest gridwidth
- # remains
- if len(self.subgrids) > 1:
- self.subgrids = self.subgrids.loc[[self.subgrids.loc[:, 'LAT_INC'].idxmin()]]
-
- # If subgrids dataframe is not empty, then return values for ellipsoid to AHD separation, deflection of prime
- # meridian and deflection of prime vertical
- if not self.subgrids.empty:
- self.ellps2ahdsep = self.subgrids.iloc[0, -3]
- self.deflectionprimemeridian = self.subgrids.iloc[0, -2]
- self.deflectionprimevertical = self.subgrids.iloc[0, -1]
- return self.ellps2ahdsep, self.deflectionprimemeridian, self.deflectionprimevertical
- # If no subgrids contain coordinates, then raise ValueError
- else:
- raise ValueError("The coordinates supplied are outside the extents of the grid.")
-
-
-class SubGrid(object):
-
- def __init__(self):
- self.subName = ''
- self.parent = ''
- self.created = ''
- self.updated = ''
- self.sLat = 0
- self.nLat = 0
- self.eLong = 0
- self.wLong = 0
- self.latInc = 0
- self.longInc = 0
- self.gsCount = 0
- self.Nval = 0
- self.Xi = 0
- self.Eta = 0
- self.contains = False
-
- def findsubgrid(self, f, lat, lon, interpolation_method='bicubic'):
- """
- Function to pull out metadata of a subgrid in the AUSGeoid2020 NTv2 file
-
- :param f: variable for open file NTv2 being read as binary
- :param lat: latitude of the point of interest (seconds)
- :param lon: longitude of the point of interest (negative seconds)
- :param interpolation_method: interpolation method as specified in ntv2reader() -
- 'bilinear' or 'bicubic' ('bicubic' by default)
-
- :return: subgrid metadata in form of a dictionary or results from bilinear() or bicubic() (depending on
- interpolation method specified) if point lies within subgrid.
- """
- # SUB_NAME
- f.seek(8, 1)
-
- byte = f.read(8)
- self.subName = byte.decode('utf').strip('\x00').strip()
-
- # PARENT
- f.seek(8, 1)
-
- byte = f.read(8)
- self.parent = byte.decode('utf').strip('\x00').strip()
-
- # CREATED
- f.seek(8, 1)
-
- byte = f.read(8)
- self.created = dt.strptime(byte.decode('utf').strip('\x00').strip(), '%d%m%Y').strftime('%d/%m/%Y')
-
- # UPDATED
- f.seek(8, 1)
-
- byte = f.read(8)
- self.updated = dt.strptime(byte.decode('utf').strip('\x00').strip(), '%d%m%Y').strftime('%d/%m/%Y')
-
- # S_LAT
- f.seek(8, 1)
-
- byte = f.read(8)
- self.sLat = round(struct.unpack('d', byte)[0], 3)
-
- # N_LAT
- f.seek(8, 1)
-
- byte = f.read(8)
- self.nLat = round(struct.unpack('d', byte)[0], 3)
-
- # E_LONG
- f.seek(8, 1)
-
- byte = f.read(8)
- self.eLong = round(struct.unpack('d', byte)[0], 3)
-
- # W_LONG
- f.seek(8, 1)
-
- byte = f.read(8)
- self.wLong = round(struct.unpack('d', byte)[0], 3)
-
- # LAT_INC
- f.seek(8, 1)
-
- byte = f.read(8)
- self.latInc = round(struct.unpack('d', byte)[0], 6)
-
- # LONG_INC
- f.seek(8, 1)
-
- byte = f.read(8)
- self.longInc = round(struct.unpack('d', byte)[0], 6)
-
- # GS_COUNT
- f.seek(8, 1)
-
- byte = f.read(4)
- self.gsCount = int.from_bytes(byte, byteorder='little')
-
- f.seek(4, 1)
-
- # Check if coordinates fall within subgrid. If yes, return self.contains = True
- if self.sLat <= lat < self.nLat and self.eLong <= lon < self.wLong:
- self.contains = True
-
- # If self.contains is True, determine number of columns in grid, and row and column of node to bottom right of
- # point of interest, then call relevant interpolation method function
- if self.contains is True:
- # Determine number of columns
- numcols = 1 + int((self.wLong - self.eLong) / self.longInc)
-
- # Determine row and col numbers of node below right of point
- row = int((lat - self.sLat) / self.latInc)
- col = int((lon - self.eLong) / self.longInc)
-
- # If interpolation_method == 'bilinear', call bilinear function
- if interpolation_method == 'bilinear':
- return self.bilinear(f, lat, lon, numcols, row, col)
- # If interpolation_method == 'bicubic', call bicubic function
- elif interpolation_method == 'bicubic':
- return self.bicubic(f, lat, lon, numcols, row, col)
-
- # If point is not in subgrid, skip to end of subgrid and return sub grid metadata in form of dictionary
- else:
- f.seek(16 * self.gsCount, 1)
-
- return {'CONTAINS': self.contains, 'SUB_NAME': self.subName, 'PARENT': self.parent, 'CREATED': self.created,
- 'UPDATED': self.updated, 'S_LAT': self.sLat, 'N_LAT': self.nLat, 'E_LONG': self.eLong,
- 'W_LONG': self.wLong, 'LAT_INC': self.latInc, 'LONG_INC': self.longInc, 'GS_COUNT': self.gsCount,
- 'ELLIPSOID_TO_AHD_SEPARATION': np.nan, 'DEFLECTION_PRIME_MERIDIAN': np.nan,
- 'DEFLECTION_PRIME_VERTICAL': np.nan}
-
- def bilinear(self, f, lat, lon, numcols, row, col):
-
- """
- Function to perform bilinear interpolatoin of the Ellipsoid to AHD Separation, Deflection in Prime Meridian and
- Deflection in Prime Vertical of a point of interest.
-
- :param f: variable for open file NTv2 being read as binary
- :param lat: latitude of the point of interest (seconds)
- :param lon: longitude of the point of interest (negative seconds)
- :param numcols: number of columns in grid
- :param row: row number of point to the bottom right of the point of interest
- :param col: column number of point to the bottom right of the point of interest
-
- :return: dictionary of subgrid metadata and values for Ellipsoid to AHD Separation, Deflection in Prime Meridian
- and Deflection in Prime Vertical of point of interest found via bilinear interpolation.
- """
-
- # | |
- # --o-----o--
- # |4 |3
- # | *P |
- # --o-----o--
- # |2 |1
- #
- # o - Node position.
-
- # Determine position in the file of the four surrounding nodes
- pos1 = row * numcols + col
- pos2 = pos1 + 1
- pos3 = pos1 + numcols
- pos4 = pos3 + 1
-
- # Navigate to start of posA node
- f.seek(16 * pos1, 1)
-
- # Read in values for nodes A and B
- (pos1nval, pos1xi, pos1eta) = self.read_node(f)
- (pos2nval, pos2xi, pos2eta) = self.read_node(f)
-
- # Navigate to beginning of node C
- f.seek(16*(pos3 - pos2 - 1), 1)
-
- # Read in values for nodes C and D
- (pos3nval, pos3xi, pos3eta) = self.read_node(f)
- (pos4nval, pos4xi, pos4eta) = self.read_node(f)
-
- # Determine latitude and longitude of node A
- lat1 = self.sLat + row * self.latInc
- long1 = self.eLong + col * self.longInc
-
- # Determine interpolation scale factors
- x = round((lon - long1) / self.longInc, 6)
- y = round((lat - lat1) / self.latInc, 6)
-
- # Call bilinear interpolation function to determine N value, Xi and Eta
- # (Ellipsoid to AHD separation, deflection in prime meridian and deflection in prime vertical).
- self.Nval = round(bilinear_interpolation(pos1nval, pos2nval, pos3nval, pos4nval, x, y), 3)
- self.Xi = round(bilinear_interpolation(pos1xi, pos2xi, pos3xi, pos4xi, x, y), 2)
- self.Eta = round(bilinear_interpolation(pos1eta, pos2eta, pos3eta, pos4eta, x, y), 2)
-
- # Navigate to the end of the subgrid
- f.seek(16 * (self.gsCount - pos4 - 1), 1)
-
- # Return dictionary of information
- return {'CONTAINS': self.contains, 'SUB_NAME': self.subName, 'PARENT': self.parent, 'CREATED': self.created,
- 'UPDATED': self.updated, 'S_LAT': self.sLat, 'N_LAT': self.nLat, 'E_LONG': self.eLong,
- 'W_LONG': self.wLong, 'LAT_INC': self.latInc, 'LONG_INC': self.longInc, 'GS_COUNT': self.gsCount,
- 'ELLIPSOID_TO_AHD_SEPARATION': self.Nval, 'DEFLECTION_PRIME_MERIDIAN': self.Xi,
- 'DEFLECTION_PRIME_VERTICAL': self.Eta}
-
- def bicubic(self, f, lat, lon, numcols, row, col):
- """
- Function to perform bicubic interpolatoin of the Ellipsoid to AHD Separation, Deflection in Prime Meridian and
- Deflection in Prime Vertical of a point of interest.
-
- :param f: variable for open file NTv2 being read as binary
- :param lat: latitude of the point of interest (seconds)
- :param lon: longitude of the point of interest (negative seconds)
- :param numcols: number of columns in grid
- :param row: row number of point to the bottom right of the point of interest
- :param col: column number of point to the bottom right of the point of interest
-
- :return: dictionary of subgrid metadata and values for Ellipsoid to AHD Separation, Deflection in Prime Meridian
- and Deflection in Prime Vertical of point of interest found via bicubic interpolation.
-
- """
-
- # | | | |
- # --o-----o-----o-----o--
- # |11 |12 |13 |14
- # | | | |
- # --o-----o-----o-----o--
- # |10 |3 |4 |15
- # | | *P | |
- # --o-----o-----o-----o--
- # |9 |2 |1 |16
- # | | | |
- # --o-----o-----o-----o--
- # |8 |7 |6 |5
- #
- # o - Node position.
-
- # Determine position in the file of the sixteen surrounding nodes
- pos1 = row * numcols + col
- pos2 = pos1 + 1
- pos3 = pos2 + numcols
- pos4 = pos3 - 1
- pos5 = pos4 - 2 * numcols - 1
- pos6 = pos5 + 1
- pos7 = pos6 + 1
- pos8 = pos7 + 1
- pos9 = pos8 + numcols
- pos10 = pos9 + numcols
- pos11 = pos10 + numcols
- pos12 = pos11 - 1
- pos13 = pos12 - 1
- pos14 = pos13 - 1
- pos15 = pos14 - numcols
- pos16 = pos15 - numcols
-
- # Navigate to start of posA node
- f.seek(16 * pos5, 1)
-
- # Read in values for nodes 5-8
- (pos5nval, pos5xi, pos5eta) = self.read_node(f)
- (pos6nval, pos6xi, pos6eta) = self.read_node(f)
- (pos7nval, pos7xi, pos7eta) = self.read_node(f)
- (pos8nval, pos8xi, pos8eta) = self.read_node(f)
-
- # Navigate to start of pos16 node
- f.seek(16 * (pos16 - pos8 - 1), 1)
-
- # Read in values for nodes 16, 1, 2, and 9
- (pos16nval, pos16xi, pos16eta) = self.read_node(f)
- (pos1nval, pos1xi, pos1eta) = self.read_node(f)
- (pos2nval, pos2xi, pos2eta) = self.read_node(f)
- (pos9nval, pos9xi, pos9eta) = self.read_node(f)
-
- # Navigate to start of pos15 node
- f.seek(16 * (pos15 - pos9 - 1), 1)
-
- # Read in values for nodes 15, 3, 4 and 10
- (pos15nval, pos15xi, pos15eta) = self.read_node(f)
- (pos4nval, pos4xi, pos4eta) = self.read_node(f)
- (pos3nval, pos3xi, pos3eta) = self.read_node(f)
- (pos10nval, pos10xi, pos10eta) = self.read_node(f)
-
- # Navigate to start of pos14 node
- f.seek(16 * (pos14 - pos10 - 1), 1)
-
- # Read in values for nodes 11, 12, 13 and 14
- (pos14nval, pos14xi, pos14eta) = self.read_node(f)
- (pos13nval, pos13xi, pos13eta) = self.read_node(f)
- (pos12nval, pos12xi, pos12eta) = self.read_node(f)
- (pos11nval, pos11xi, pos11eta) = self.read_node(f)
-
- # Determine latitude and longitude of node A
- lat1 = self.sLat + row * self.latInc
- long1 = self.eLong + col * self.longInc
-
- # Determine interpolation scale factors
- x = round((lon - long1) / self.longInc, 6)
- y = round((lat - lat1) / self.latInc, 6)
-
- # Call bicubic_interpolation to determine nVal, Xi and Eta
- # (Ellipsoid to AHD separation, deflection in prime meridian and deflection in prime vertical).
- self.Nval = round(bicubic_interpolation(pos1nval, pos2nval, pos3nval, pos4nval, pos5nval, pos6nval, pos7nval,
- pos8nval, pos9nval, pos10nval, pos11nval, pos12nval, pos13nval,
- pos14nval, pos15nval, pos16nval, x, y), 3)
- self.Xi = round(bicubic_interpolation(pos1xi, pos2xi, pos3xi, pos4xi, pos5xi, pos6xi, pos7xi, pos8xi, pos9xi,
- pos10xi, pos11xi, pos12xi, pos13xi, pos14xi, pos15xi, pos16xi, x, y), 2)
- self.Eta = round(bicubic_interpolation(pos1eta, pos2eta, pos3eta, pos4eta, pos5eta, pos6eta, pos7eta, pos8eta,
- pos9eta, pos10eta, pos11eta, pos12eta, pos13eta, pos14eta, pos15eta,
- pos16eta, x, y), 2)
-
- # Navigate to the end of the subgrid
- f.seek(16 * (self.gsCount - pos11 - 1), 1)
-
- # Return dictionary of information
- return {'CONTAINS': self.contains, 'SUB_NAME': self.subName, 'PARENT': self.parent, 'CREATED': self.created,
- 'UPDATED': self.updated, 'S_LAT': self.sLat, 'N_LAT': self.nLat, 'E_LONG': self.eLong,
- 'W_LONG': self.wLong, 'LAT_INC': self.latInc, 'LONG_INC': self.longInc, 'GS_COUNT': self.gsCount,
- 'ELLIPSOID_TO_AHD_SEPARATION': self.Nval, 'DEFLECTION_PRIME_MERIDIAN': self.Xi,
- 'DEFLECTION_PRIME_VERTICAL': self.Eta}
-
- # Define function to read in node values
- @staticmethod
- def read_node(f):
- """
- Function to read in values of nodes
-
- :param f: variable for open file NTv2 being read as binary
- :return: Ellipsoid to AHD Separation, Deflection in Prime Meridian and Deflection in Prime Vertical for node.
-
- """
- # Read ellipsoid to AHD separation value (N)
- byte = f.read(4)
- nval = round(struct.unpack('f', byte)[0], 3)
- # Read deflection in prime meridian value (Xi)
- byte = f.read(4)
- xi = round(struct.unpack('f', byte)[0], 3)
- # Read deflection in prime vertical value (Eta)
- byte = f.read(4)
- eta = round(struct.unpack('f', byte)[0], 3)
- # Skip to beginning of next node
- f.seek(4, 1)
- # Return values
- return nval, xi, eta
-
-
-# Define function for bilinear interpolation
-def bilinear_interpolation(n1, n2, n3, n4, x, y):
- """
- Bilinear interpolation of value for point of interest (P).
-
- :param n1: value at node 1
- :param n2: value at node 2
- :param n3: value at node 3
- :param n4: value at node 4
- :param x: interpolation scale factor for x axis
- :param y: interpolation scale factor for y axis
-
- :return: value at node P
-
- """
-
- a0 = n1
- a1 = round(n2 - n1, 3)
- a2 = round(n3 - n1, 3)
- a3 = round(n1 + n4 - n2 - n3, 3)
- p = a0 + (a1 * x) + (a2 * y) + (a3 * x * y)
- return p
-
-
-# Define function for performing bicubic interpolation
-def bicubic_interpolation(n1, n2, n3, n4, n5, n6, n7, n8, n9, n10, n11, n12, n13, n14, n15, n16, x, y):
- """
- Bicubic interpolation of value for point of interest (P).
-
- :param n1: value at node 1
- :param n2: value at node 2
- :param n3: value at node 3
- :param n4: value at node 4
- :param n5: value at node 5
- :param n6: value at node 6
- :param n7: value at node 7
- :param n8: value at node 8
- :param n9: value at node 9
- :param n10: value at node 10
- :param n11: value at node 11
- :param n12: value at node 12
- :param n13: value at node 13
- :param n14: value at node 14
- :param n15: value at node 16
- :param n16: value at node 17
- :param x: interpolation scale factor for x axis
- :param y: interpolation scale factor for y axis
-
- :return: value at node P
- """
-
- # Define the inverse of the coefficient matrix
- cinv = np.array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
- [-3, 0, 0, 3, 0, 0, 0, 0, -2, 0, 0, -1, 0, 0, 0, 0],
- [2, 0, 0, -2, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
- [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
- [0, 0, 0, 0, -3, 0, 0, 3, 0, 0, 0, 0, -2, 0, 0, -1],
- [0, 0, 0, 0, 2, 0, 0, -2, 0, 0, 0, 0, 1, 0, 0, 1],
- [-3, 3, 0, 0, -2, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, -3, 3, 0, 0, -2, -1, 0, 0],
- [9, -9, 9, -9, 6, 3, -3, -6, 6, -6, -3, 3, 4, 2, 1, 2],
- [-6, 6, -6, 6, -4, -2, 2, 4, -3, 3, 3, -3, -2, -1, -1, -2],
- [2, -2, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 2, -2, 0, 0, 1, 1, 0, 0],
- [-6, 6, -6, 6, -3, -3, 3, 3, -4, 4, 2, -2, -2, -2, -1, -1],
- [4, -4, 4, -4, 2, 2, -2, -2, 2, -2, -2, 2, 1, 1, 1, 1]])
-
- # Define x parameters
- # Function values
- x1 = n1
- x2 = n2
- x3 = n3
- x4 = n4
- # X Derivatives
- x5 = round((n2 - n16) / 2, 4)
- x6 = round((n9 - n1) / 2, 4)
- x7 = round((n10 - n4) / 2, 4)
- x8 = round((n3 - n15) / 2, 4)
- # Y Derivatives
- x9 = round((n4 - n6) / 2, 4)
- x10 = round((n3 - n7) / 2, 4)
- x11 = round((n12 - n2) / 2, 4)
- x12 = round((n13 - n1) / 2, 4)
- # Cross Derivatives (XY)
- x13 = round((n3 - n7 - n15 + n5) / 4, 4)
- x14 = round((n10 - n8 - n4 + n6) / 4, 4)
- x15 = round((n11 - n9 - n13 + n1) / 4, 4)
- x16 = round((n12 - n2 - n14 + n16) / 4, 4)
-
- # Create array from x parameters
- xarr = np.array([x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15, x16])
-
- # Multiply the inverse of the coefficient matrix by the array of x values to give array of alpha values
- alpha = np.matmul(cinv, xarr)
-
- # Calculate value at the point of interest
- n_p = 0
- for i in range(0, 4):
- for j in range(0, 4):
- n_p = n_p + alpha[i * 4 + j] * x ** i * y ** j
-
- # Return the value
- return n_p
-
-
-# TEST OF SCRIPT
-# Specify AUSGeoid2020 binary NTv2 file location
-ntv2file = "C://Git/Python/NTv2.git/AUSGeoid2020_20180201.gsb"
-# Assign class to variable
-grids = NTv2ReaderBinary()
-# Call ntv2reader to determine values at specific point
-values = grids.ntv2reader(ntv2file, -26.948643, 145.6548721, 'bilinear')
-# Print values
-print(values)
diff --git a/geodepy/ntv2reader.py b/geodepy/ntv2reader.py
new file mode 100644
index 0000000..6a152ef
--- /dev/null
+++ b/geodepy/ntv2reader.py
@@ -0,0 +1,583 @@
+"""
+GeodePy - Python Geodesy Package
+ntv2reader Module
+
+Adapted from Jaimie Dodd's ntv2reader.py for AUSGeoid ntv2 files (2018-11-12).
+Expanded (2021-03-21) for the more general case to include horizontal transformation grids
+"""
+
+import struct
+from datetime import datetime as dt
+import numpy as np
+
+
+class NTv2Grid(object):
+ def __init__(self, num_orec, num_srec, num_file, gs_type, version, system_f, system_t, major_f, minor_f, major_t,
+ minor_t, file_path):
+ self.num_orec = num_orec # Number of header identifiers
+ self.num_srec = num_srec # Number of sub-header idents
+ self.num_file = num_file # Number of subgrids in file
+ self.gs_type = gs_type # grid shift type
+ self.version = version # grid file version
+ self.system_f = system_f # reference system
+ self.system_t = system_t # reference system
+ self.major_f = major_f # semi major of from system
+ self.minor_f = minor_f # semi minor of from system
+ self.major_t = major_t # semi major of to system
+ self.minor_t = minor_t # semi minor of to system
+ self.subgrids = {}
+ self.file_path = file_path
+
+
+class SubGrid(object):
+ def __init__(self, sub_name, parent, created, updated, s_lat, n_lat, e_long, w_long, lat_inc, long_inc, gs_count):
+ self.sub_name = sub_name # subgrid name
+ self.parent = parent # parent name
+ self.created = created # date created
+ self.updated = updated # date modified
+ self.s_lat = s_lat # south latitude extents
+ self.n_lat = n_lat # north latitude extents
+ self.e_long = e_long # east longitude extents
+ self.w_long = w_long # west longitude extents
+ self.lat_inc = lat_inc # latitude increment
+ self.long_inc = long_inc # longitude increment
+ self.gs_count = gs_count # total nodes in subgrid
+
+ def ntv2_bilinear(self, lat, lon, num_cols, row, col, f, start_byte):
+ """
+ Function to perform bicubic interpolation of the four ntv2 fields at a point of interest.
+
+ :param lat: latitude of the point of interest (arc-seconds)
+ :param lon: longitude of the point of interest (negative arc-seconds)
+ :param num_cols: number of columns in grid
+ :param row: row number of point to the bottom right of the point of interest
+ :param col: column number of point to the bottom right of the point of interest
+ :param f: variable for open file NTv2 being read as binary
+ :param start_byte: start index of subgrid
+
+ :return: four field tuple of interpolation results at point of interest.
+ """
+
+ # | |
+ # --o-----o--
+ # |4 |3
+ # | *P |
+ # --o-----o--
+ # |2 |1
+ #
+ # o - Node position.
+
+ # Determine position in the file of the four surrounding nodes
+ pos1 = row * num_cols + col
+ pos2 = pos1 + 1
+ pos3 = pos1 + num_cols
+ pos4 = pos3 + 1
+
+ # Navigate to start of subgrid
+ f.seek(start_byte, 1)
+ # Navigate to start of posA node
+ f.seek(16 * pos1, 1)
+
+ # Read in values for nodes 1 and 2
+ node_1 = read_node(f)
+ node_2 = read_node(f)
+
+ # Navigate to beginning of node 3
+ f.seek(16*(pos3 - pos2 - 1), 1)
+
+ # Read in values for nodes 3 and 4
+ node_3 = read_node(f)
+ node_4 = read_node(f)
+
+ # Determine latitude and longitude of node 1
+ lat1 = self.s_lat + row * self.lat_inc
+ long1 = self.e_long + col * self.long_inc
+
+ # Determine interpolation scale factors
+ x = (lon - long1) / self.long_inc
+ y = (lat - lat1) / self.lat_inc
+
+ field_1 = round(bilinear_interpolation(node_1[0], node_2[0], node_3[0], node_4[0], x, y), 6)
+ field_2 = round(bilinear_interpolation(node_1[1], node_2[1], node_3[1], node_4[1], x, y), 6)
+ field_3 = round(bilinear_interpolation(node_1[2], node_2[2], node_3[2], node_4[2], x, y), 6)
+ field_4 = round(bilinear_interpolation(node_1[3], node_2[3], node_3[3], node_4[3], x, y), 6)
+
+ return field_1, field_2, field_3, field_4
+
+ def ntv2_bicubic(self, lat, lon, num_cols, row, col, f, start_byte):
+ """
+ Function to perform bicubic interpolation of the four ntv2 fields at a point of interest.
+
+ :param lat: latitude of the point of interest (arc-seconds)
+ :param lon: longitude of the point of interest (negative arc-seconds)
+ :param num_cols: number of columns in grid
+ :param row: row number of point to the bottom right of the point of interest
+ :param col: column number of point to the bottom right of the point of interest
+ :param f: variable for open file NTv2 being read as binary
+ :param start_byte: start index of subgrid
+
+ :return: four field tuple of interpolation results at point of interest.
+
+ """
+
+ # | | | |
+ # --o-----o-----o-----o--
+ # |11 |12 |13 |14
+ # | | | |
+ # --o-----o-----o-----o--
+ # |10 |3 |4 |15
+ # | | *P | |
+ # --o-----o-----o-----o--
+ # |9 |2 |1 |16
+ # | | | |
+ # --o-----o-----o-----o--
+ # |8 |7 |6 |5
+ #
+ # o - Node position.
+
+ # Determine position in the file of the sixteen surrounding nodes
+ pos1 = row * num_cols + col
+ pos2 = pos1 + 1
+ pos3 = pos2 + num_cols
+ pos4 = pos3 - 1
+ pos5 = pos4 - 2 * num_cols - 1
+ pos6 = pos5 + 1
+ pos7 = pos6 + 1
+ pos8 = pos7 + 1
+ pos9 = pos8 + num_cols
+ pos10 = pos9 + num_cols
+ pos11 = pos10 + num_cols
+ pos12 = pos11 - 1
+ pos13 = pos12 - 1
+ pos14 = pos13 - 1
+ pos15 = pos14 - num_cols
+ pos16 = pos15 - num_cols
+
+ # Navigate to start of subgrid
+ f.seek(start_byte, 1)
+ # Navigate to start of pos1 node
+ f.seek(16 * pos5, 1)
+
+ # Read in values for nodes 5-8
+ node_5 = read_node(f)
+ node_6 = read_node(f)
+ node_7 = read_node(f)
+ node_8 = read_node(f)
+
+ # Navigate to start of pos16 node
+ f.seek(16 * (pos16 - pos8 - 1), 1)
+
+ # Read in values for nodes 16, 1, 2, and 9
+ node_16 = read_node(f)
+ node_1 = read_node(f)
+ node_2 = read_node(f)
+ node_9 = read_node(f)
+
+ # Navigate to start of pos15 node
+ f.seek(16 * (pos15 - pos9 - 1), 1)
+
+ # Read in values for nodes 15, 3, 4 and 10
+ node_15 = read_node(f)
+ node_4 = read_node(f)
+ node_3 = read_node(f)
+ node_10 = read_node(f)
+
+ # Navigate to start of pos14 node
+ f.seek(16 * (pos14 - pos10 - 1), 1)
+
+ # Read in values for nodes 11, 12, 13 and 14
+ node_14 = read_node(f)
+ node_13 = read_node(f)
+ node_12 = read_node(f)
+ node_11 = read_node(f)
+
+ # Determine latitude and longitude of node 1
+ lat1 = self.s_lat + row * self.lat_inc
+ long1 = self.e_long + col * self.long_inc
+
+ # Determine interpolation scale factors
+ x = round((lon - long1) / self.long_inc, 6)
+ y = round((lat - lat1) / self.lat_inc, 6)
+
+ # Call bicubic interpolation function to interpolate ntv2 fields.
+ field_1 = round(bicubic_interpolation(node_1[0], node_2[0], node_3[0], node_4[0],
+ node_5[0], node_6[0], node_7[0], node_8[0],
+ node_9[0], node_10[0], node_11[0], node_12[0],
+ node_13[0], node_14[0], node_15[0], node_16[0],
+ x, y), 6)
+ field_2 = round(bicubic_interpolation(node_1[1], node_2[1], node_3[1], node_4[1],
+ node_5[1], node_6[1], node_7[1], node_8[1],
+ node_9[1], node_10[1], node_11[1], node_12[1],
+ node_13[1], node_14[1], node_15[1], node_16[1],
+ x, y), 6)
+ field_3 = round(bicubic_interpolation(node_1[2], node_2[2], node_3[2], node_4[2],
+ node_5[2], node_6[2], node_7[2], node_8[2],
+ node_9[2], node_10[2], node_11[2], node_12[2],
+ node_13[2], node_14[2], node_15[2], node_16[2],
+ x, y), 6)
+ field_4 = round(bicubic_interpolation(node_1[3], node_2[3], node_3[3], node_4[3],
+ node_5[3], node_6[3], node_7[3], node_8[3],
+ node_9[3], node_10[3], node_11[3], node_12[3],
+ node_13[3], node_14[3], node_15[3], node_16[3],
+ x, y), 6)
+
+ return field_1, field_2, field_3, field_4
+
+
+def read_node(f):
+ """
+ Function to read in values of nodes
+
+ :param f: variable for open file NTv2 being read as binary
+ :return: tuple containing the four ntv2 fields at the grid node.
+ """
+
+ # field_1: shift lat / geoid separation (N)
+ byte = f.read(4)
+ field_1 = struct.unpack('f', byte)[0]
+ # field 2: shift lon / deflection in prime meridian value (Xi)
+ byte = f.read(4)
+ field_2 = struct.unpack('f', byte)[0]
+ # field 3: reliability lat / deflection in prime vertical value (Eta)
+ byte = f.read(4)
+ field_3 = struct.unpack('f', byte)[0]
+ # field 4: reliability lon / NA
+ byte = f.read(4)
+ field_4 = struct.unpack('f', byte)[0]
+
+ return field_1, field_2, field_3, field_4
+
+
+def bilinear_interpolation(n1, n2, n3, n4, x, y):
+ """
+ Bilinear interpolation of value for point of interest (P).
+
+ :param n1: value at node 1
+ :param n2: value at node 2
+ :param n3: value at node 3
+ :param n4: value at node 4
+ :param x: interpolation scale factor for x axis
+ :param y: interpolation scale factor for y axis
+
+ :return: value at node P
+ """
+
+ a0 = n1
+ a1 = n2 - n1
+ a2 = n3 - n1
+ a3 = n1 + n4 - n2 - n3
+ p = a0 + (a1 * x) + (a2 * y) + (a3 * x * y)
+ return p
+
+
+def bicubic_interpolation(n1, n2, n3, n4, n5, n6, n7, n8, n9, n10, n11, n12, n13, n14, n15, n16, x, y):
+ """
+ Bicubic interpolation of value for point of interest (P).
+
+ :param n1: value at node 1
+ :param n2: value at node 2
+ :param n3: value at node 3
+ :param n4: value at node 4
+ :param n5: value at node 5
+ :param n6: value at node 6
+ :param n7: value at node 7
+ :param n8: value at node 8
+ :param n9: value at node 9
+ :param n10: value at node 10
+ :param n11: value at node 11
+ :param n12: value at node 12
+ :param n13: value at node 13
+ :param n14: value at node 14
+ :param n15: value at node 16
+ :param n16: value at node 17
+ :param x: interpolation scale factor for x axis
+ :param y: interpolation scale factor for y axis
+
+ :return: value at node P
+ """
+
+ # Define the inverse of the coefficient matrix
+ cinv = np.array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
+ [-3, 0, 0, 3, 0, 0, 0, 0, -2, 0, 0, -1, 0, 0, 0, 0],
+ [2, 0, 0, -2, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0],
+ [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
+ [0, 0, 0, 0, -3, 0, 0, 3, 0, 0, 0, 0, -2, 0, 0, -1],
+ [0, 0, 0, 0, 2, 0, 0, -2, 0, 0, 0, 0, 1, 0, 0, 1],
+ [-3, 3, 0, 0, -2, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, -3, 3, 0, 0, -2, -1, 0, 0],
+ [9, -9, 9, -9, 6, 3, -3, -6, 6, -6, -3, 3, 4, 2, 1, 2],
+ [-6, 6, -6, 6, -4, -2, 2, 4, -3, 3, 3, -3, -2, -1, -1, -2],
+ [2, -2, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [0, 0, 0, 0, 0, 0, 0, 0, 2, -2, 0, 0, 1, 1, 0, 0],
+ [-6, 6, -6, 6, -3, -3, 3, 3, -4, 4, 2, -2, -2, -2, -1, -1],
+ [4, -4, 4, -4, 2, 2, -2, -2, 2, -2, -2, 2, 1, 1, 1, 1]])
+
+ # Define x parameters
+ # Function values
+ x1 = n1
+ x2 = n2
+ x3 = n3
+ x4 = n4
+ # X Derivatives
+ x5 = (n2 - n16) / 2
+ x6 = (n9 - n1) / 2
+ x7 = (n10 - n4) / 2
+ x8 = (n3 - n15) / 2
+ # Y Derivatives
+ x9 = (n4 - n6) / 2
+ x10 = (n3 - n7) / 2
+ x11 = (n12 - n2) / 2
+ x12 = (n13 - n1) / 2
+ # Cross Derivatives (XY)
+ x13 = (n3 - n7 - n15 + n5) / 4
+ x14 = (n10 - n8 - n4 + n6) / 4
+ x15 = (n11 - n9 - n13 + n1) / 4
+ x16 = (n12 - n2 - n14 + n16) / 4
+
+ # Create array from x parameters
+ xarr = np.array([x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15, x16])
+
+ # Multiply the inverse of the coefficient matrix by the array of x values to give array of alpha values
+ alpha = np.matmul(cinv, xarr)
+
+ # Calculate value at the point of interest
+ n_p = 0
+ for i in range(0, 4):
+ for j in range(0, 4):
+ n_p = n_p + alpha[i * 4 + j] * x ** i * y ** j
+
+ return n_p
+
+
+def read_ntv2_file(ntv2_gsb_file):
+ """
+ Function to read an ntv2 gsb file and create grid & subgrid objects
+
+ :param ntv2_gsb_file: full path to ntv2 file
+ :return: ntv2 grid object
+ """
+
+ with open(ntv2_gsb_file, 'rb') as f:
+ # NUM_OREC
+ f.seek(8, 1)
+ byte = f.read(4)
+ num_orec = int.from_bytes(byte, byteorder='little')
+
+ f.seek(4, 1)
+
+ # NUM_SREC
+ f.seek(8, 1)
+ byte = f.read(4)
+ num_srec = int.from_bytes(byte, byteorder='little')
+
+ f.seek(4, 1)
+ # NUM_FILE
+ f.seek(8, 1)
+ byte = f.read(4)
+ num_file = int.from_bytes(byte, byteorder='little')
+
+ f.seek(4, 1)
+ # GS_TYPE
+ f.seek(8, 1)
+ byte = f.read(8)
+ gs_type = byte.decode('utf8').strip('\x00').strip()
+
+ # VERSION
+ f.seek(8, 1)
+ byte = f.read(8)
+ version = byte.decode('utf8').strip('\x00').strip()
+
+ # SYSTEM_F
+ f.seek(8, 1)
+ byte = f.read(8)
+ system_f = byte.decode('utf8').strip('\x00').strip()
+
+ # SYSTEM_T
+ f.seek(8, 1)
+ byte = f.read(8)
+ system_t = byte.decode('utf8').strip('\x00').strip()
+
+ # MAJOR_F
+ f.seek(8, 1)
+ byte = f.read(8)
+ major_f = struct.unpack('d', byte)[0]
+
+ # MINOR_F
+ f.seek(8, 1)
+ byte = f.read(8)
+ minor_f = struct.unpack('d', byte)[0]
+
+ # MAJOR_T
+ f.seek(8, 1)
+ byte = f.read(8)
+ major_t = struct.unpack('d', byte)[0]
+
+ # MINOR_T
+ f.seek(8, 1)
+ byte = f.read(8)
+ minor_t = struct.unpack('d', byte)[0]
+
+ grid = NTv2Grid(
+ num_orec=num_orec,
+ num_srec=num_srec,
+ num_file=num_file,
+ gs_type=gs_type,
+ version=version,
+ system_f=system_f,
+ system_t=system_t,
+ major_f=major_f,
+ minor_f=minor_f,
+ major_t=major_t,
+ minor_t=minor_t,
+ file_path=ntv2_gsb_file
+ )
+
+ # read subgrids
+ for i in range(0, grid.num_file):
+ # SUB_NAME
+ f.seek(8, 1)
+ byte = f.read(8)
+ sub_name = byte.decode('utf').strip('\x00').strip()
+
+ # PARENT
+ f.seek(8, 1)
+ byte = f.read(8)
+ parent = byte.decode('utf').strip('\x00').strip()
+
+ # CREATED
+ f.seek(8, 1)
+ byte = f.read(8)
+ created = dt.strptime(byte.decode('utf').strip('\x00').strip(), '%d%m%Y').strftime('%d/%m/%Y')
+
+ # UPDATED
+ f.seek(8, 1)
+ byte = f.read(8)
+ updated = dt.strptime(byte.decode('utf').strip('\x00').strip(), '%d%m%Y').strftime('%d/%m/%Y')
+
+ # S_LAT
+ f.seek(8, 1)
+ byte = f.read(8)
+ s_lat = round(struct.unpack('d', byte)[0], 3)
+
+ # N_LAT
+ f.seek(8, 1)
+ byte = f.read(8)
+ n_lat = round(struct.unpack('d', byte)[0], 3)
+
+ # E_LONG
+ f.seek(8, 1)
+ byte = f.read(8)
+ e_long = round(struct.unpack('d', byte)[0], 3)
+
+ # W_LONG
+ f.seek(8, 1)
+ byte = f.read(8)
+ w_long = round(struct.unpack('d', byte)[0], 3)
+
+ # LAT_INC
+ f.seek(8, 1)
+ byte = f.read(8)
+ lat_inc = round(struct.unpack('d', byte)[0], 6)
+
+ # LONG_INC
+ f.seek(8, 1)
+ byte = f.read(8)
+ long_inc = round(struct.unpack('d', byte)[0], 6)
+
+ # GS_COUNT
+ f.seek(8, 1)
+ byte = f.read(4)
+ gs_count = int.from_bytes(byte, byteorder='little')
+
+ f.seek(4, 1)
+ # skip ahead to next subgrid
+ f.seek(16 * gs_count, 1)
+
+ grid.subgrids[sub_name] = SubGrid(
+ sub_name=sub_name,
+ parent=parent,
+ created=created,
+ updated=updated,
+ s_lat=s_lat,
+ n_lat=n_lat,
+ e_long=e_long,
+ w_long=w_long,
+ lat_inc=lat_inc,
+ long_inc=long_inc,
+ gs_count=gs_count
+ )
+
+ return grid
+
+
+def interpolate_ntv2(grid_object, lat, lon, method='bicubic'):
+ """
+ Function to interpolate Ntv2Grid objects
+ :param grid_object: Ntv2Grid object
+ :param lat: latitude (decimal degrees)
+ :param lon: longitude (decimal degrees)
+ :param method: interpolation strategy, bicubic or bilinear
+
+ :return: tuple of four ntv2 fields
+ """
+
+ interpolation_methods = {
+ 'bicubic',
+ 'bilinear'
+ }
+ if method not in interpolation_methods:
+ raise ValueError(f'interpolation method "{method}" not supported')
+
+ # convert decimal degrees to arc-seconds
+ lat *= 3600
+ lon *= -3600
+
+ # determine subgrid for point of interest
+ in_subgrids = set()
+ for sg in grid_object.subgrids.values():
+ if sg.s_lat <= lat < sg.n_lat and sg.e_long <= lon < sg.w_long:
+ in_subgrids.add(sg.sub_name)
+
+ # return null fields if no subgrid found, else choose subgrid with finest resolution
+ if len(in_subgrids) == 0:
+ return None, None, None, None
+ else:
+ inc = None
+ in_grid = None
+ for sg in in_subgrids:
+ if not inc:
+ inc = grid_object.subgrids[sg].lat_inc
+ in_grid = grid_object.subgrids[sg]
+ else:
+ if grid_object.subgrids[sg].lat_inc < inc:
+ inc = grid_object.subgrids[sg].lat_inc
+ in_grid = grid_object.subgrids[sg]
+
+ # Determine number of columns in grid, and row and column of node to bottom right of
+ # point of interest, then call relevant interpolation method function
+
+ # determine number of columns
+ num_cols = 1 + int((in_grid.w_long - in_grid.e_long) / in_grid.long_inc)
+
+ # determine row and col numbers of node below right of point
+ row = int((lat - in_grid.s_lat) / in_grid.lat_inc)
+ col = int((lon - in_grid.e_long) / in_grid.long_inc)
+
+ # locate data in gsb_file
+ skip_bytes = 176 # grid header length
+
+ with open(grid_object.file_path, 'rb') as f:
+ for sg in grid_object.subgrids.values():
+
+ skip_bytes += 176 # subgrid header length
+ if sg.sub_name == in_grid.sub_name:
+ if method == 'bilinear':
+ results = in_grid.ntv2_bilinear(lat, lon, num_cols, row, col, f, skip_bytes)
+ elif method == 'bicubic':
+ results = in_grid.ntv2_bicubic(lat, lon, num_cols, row, col, f, skip_bytes)
+ else:
+ skip_bytes += sg.gs_count * 16
+
+ return results
diff --git a/geodepy/transform.py b/geodepy/transform.py
index 552d0aa..f60d65e 100644
--- a/geodepy/transform.py
+++ b/geodepy/transform.py
@@ -19,6 +19,7 @@
from geodepy.statistics import vcv_local2cart, vcv_cart2local
from geodepy.convert import hp2dec, geo2grid, \
grid2geo, xyz2llh, llh2xyz
+from geodepy.ntv2reader import NTv2Grid, interpolate_ntv2
def conform7(x, y, z, trans, vcv=None):
@@ -232,3 +233,40 @@ def gda2020_to_atrf2014(x, y, z, epoch_to, vcv=None):
:return: Cartesian X, Y, Z Coordinates and vcv matrix in terms of ATRF at the specified Epoch
"""
return conform14(x, y, z, epoch_to, -atrf_gda2020, vcv=vcv)
+
+
+def ntv2_2d(ntv2_grid, lat, lon, forward_tf=True, method='bicubic'):
+ """
+ Performs a 2D transformation based on ntv2 grid shifts.
+ :param ntv2_grid: Ntv2Grid object (create with read_ntv2_file() function in geodepy.ntv2reader module)
+ :param lat: latitude in decimal degrees
+ :param lon: longitude in decimal degrees
+ :param forward_tf: True/False:
+ - True applies the shifts in the direction given in the grid.
+ - False applies the shifts in the opposite direction of the grid
+ :param method: Interpolation strategy - either 'bicubic' or 'bilinear'
+ :return: Transformed latitude and longitude
+ """
+
+ # validate input data
+ if not isinstance(ntv2_grid, NTv2Grid):
+ raise TypeError('ntv2_grid must be Ntv2Grid object')
+ if method != 'bicubic' and method != 'bilinear':
+ raise ValueError(f'interpolation strategy "{method}" not supported')
+
+ # interrogate grid
+ shifts = interpolate_ntv2(ntv2_grid, lat, lon, method=method)
+
+ # null results are outside of grid extents.
+ if shifts[0] is None:
+ raise ValueError('Coordinate outside of grid extents')
+
+ if forward_tf:
+ tf_lat = lat + shifts[0] / 3600
+ tf_lon = lon - shifts[1] / 3600
+ else:
+ tf_lat = lat - shifts[0] / 3600
+ tf_lon = lon + shifts[1] / 3600
+
+ return tf_lat, tf_lon
+
| Testing
Time to pull in all of the most recent changes into master!
| 2021-03-26T02:55:47 | 0.0 | [] | [] |
|||
FEniCS/ufl | FEniCS__ufl-175 | 79ef1d4ac04388e629fbcc3c5b8935d5cbd6ceff | diff --git a/setup.cfg b/setup.cfg
index 1a0e9d878..256c55d64 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -43,7 +43,6 @@ setup_requires =
wheel
install_requires =
numpy
- typing_extensions; python_version < "3.11"
[options.extras_require]
docs = sphinx; sphinx_rtd_theme
diff --git a/ufl/cell.py b/ufl/cell.py
index 7982490f9..61c8e69e8 100644
--- a/ufl/cell.py
+++ b/ufl/cell.py
@@ -15,12 +15,6 @@
from ufl.core.ufl_type import UFLObject
from abc import abstractmethod
-try:
- from typing import Self
-except ImportError:
- # This alternative is needed pre Python 3.11
- # Delete this after 04 Oct 2026 (Python 3.10 end of life)
- from typing_extensions import Self
__all_classes__ = ["AbstractCell", "Cell", "TensorProductCell"]
@@ -44,7 +38,7 @@ def has_simplex_facets(self) -> bool:
"""Return True if all the facets of this cell are simplex cells."""
@abstractmethod
- def _lt(self, other: Self) -> bool:
+ def _lt(self, other) -> bool:
"""Define an arbitrarily chosen but fixed sort order for all instances of this type with the same dimensions."""
@abstractmethod
@@ -276,7 +270,7 @@ def sub_entity_types(self, dim: int) -> typing.Tuple[AbstractCell, ...]:
except IndexError:
return ()
- def _lt(self, other: Self) -> bool:
+ def _lt(self, other) -> bool:
return self._cellname < other._cellname
def cellname(self) -> str:
@@ -385,7 +379,7 @@ def sub_entity_types(self, dim: int) -> typing.Tuple[AbstractCell, ...]:
return [self]
raise NotImplementedError(f"TensorProductCell.sub_entities({dim}) is not implemented.")
- def _lt(self, other: Self) -> bool:
+ def _lt(self, other) -> bool:
return self._ufl_hash_data_() < other._ufl_hash_data_()
def cellname(self) -> str:
| `typing_extensions` breaks Spack build
See https://github.com/FEniCS/dolfinx/actions/runs/5407842569/jobs/9826291336.
Is it essential?
 | I suppose not; it's a backport package that provides new type annotation features on older Python versions. Is this happening because the Spack build doesn't install dependencies in the same way? Or because we forgot to add it as a dependency in pyproject?
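For readers unfamiliar with the backport, the guarded import removed in the `ufl/cell.py` hunk above follows roughly the pattern sketched below; on Python older than 3.11 the `except` branch is the only reason the extra dependency existed. This is a minimal sketch (the real `AbstractCell` is abstract and much larger), not the full class:

```python
try:
    from typing import Self  # added to the standard library in Python 3.11
except ImportError:
    # pre-3.11 fallback; this import is what pulled in typing_extensions
    from typing_extensions import Self


class AbstractCell:
    def _lt(self, other: Self) -> bool:
        """Arbitrary but fixed sort order among cells of the same type."""
        raise NotImplementedError
```

Dropping the `Self` annotation, as the patch does, removes the need for either import; the docstring can state the expected type instead.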
Dependencies need to be listed explicitly with Spack; see https://spack.readthedocs.io/en/2021.08.19/build_systems/pythonpackage.html#python-dependencies for the reasons.
`typing_extensions` is used to type hint very few 'private' methods, probably not justifying the introduction of a dependency. Easiest would be to remove. Docs can explain the type. | 2023-06-29T08:57:11 | 0.0 | [] | [] |
||
OpShin/pluthon | OpShin__pluthon-5 | 9517bd40a3f97227b70a182110202dc294b2141b | diff --git a/README.md b/README.md
index b73478a..81c0014 100644
--- a/README.md
+++ b/README.md
@@ -14,4 +14,11 @@ This is an programming language that compiles down to Untyped Plutus Language Co
programming language.
It is used as an intermediate step when compiling a pythonic smart contract language down to UPLC.
+
+## Contributing
+
Contributions are very welcome.
+
+## Notes
+
+- Some higher level functions defined in pluthon use UPLC builtin variables. In order to avoid naming conflicts, all variables assigned start with "_" and end with "#".
diff --git a/pluthon/pluthon_functional_data.py b/pluthon/pluthon_functional_data.py
index a9a0602..3b8fdc0 100644
--- a/pluthon/pluthon_functional_data.py
+++ b/pluthon/pluthon_functional_data.py
@@ -8,6 +8,7 @@
"""
identity = lambda x: x
+
_BUILTIN_TYPE_MAP = {
bytes: (ByteString, identity),
str: (ByteString, lambda x: x.encode()),
@@ -29,17 +30,17 @@ def FunctionalMapExtend(
) -> "FunctionalMap":
additional_compares = Apply(
old_statemonad,
- Var("x"),
- Var("def"),
+ PVar("x"),
+ PVar("def"),
)
for name, value in zip(names, values):
keytype, transform = _BUILTIN_TYPE_MAP[type(name)]
additional_compares = Ite(
- _EQUALS_MAP[keytype](Var("x"), keytype(transform(name))),
+ _EQUALS_MAP[keytype](PVar("x"), keytype(transform(name))),
Delay(value),
additional_compares,
)
- return Lambda(
+ return PLambda(
["x", "def"],
additional_compares,
)
@@ -51,7 +52,7 @@ class FunctionalMap(AST):
def __new__(
cls, kv: typing.Optional[typing.Dict[typing.Any, AST]] = None
) -> "FunctionalMap":
- res = Lambda(["x", "def"], Var("def"))
+ res = PLambda(["x", "def"], PVar("def"))
if kv is not None:
res = FunctionalMapExtend(res, kv.keys(), kv.values())
return res
@@ -83,8 +84,8 @@ def __new__(cls, *vs: AST) -> "FunctionalTuple":
# idea: just construct a nested if/else comparison
if not vs:
return Unit()
- param = Var("__f__")
- return Lambda([param.name], Apply(param, *map(Delay, vs)))
+ param_name = "__f__"
+ return PLambda([param_name], Apply(PVar(param_name), *map(Delay, vs)))
class FunctionalTupleAccess(AST):
@@ -92,5 +93,5 @@ def __new__(cls, tuple: AST, index: int, size: int):
if size == 0:
raise ValueError("Can not access elements of an empty tuple")
return Apply(
- tuple, Lambda([f"v{i}" for i in range(size)], Force(Var(f"v{index}")))
+ tuple, PLambda([f"v{i}" for i in range(size)], Force(PVar(f"v{index}")))
)
diff --git a/pluthon/pluthon_sugar.py b/pluthon/pluthon_sugar.py
index 71a77d0..92951c2 100644
--- a/pluthon/pluthon_sugar.py
+++ b/pluthon/pluthon_sugar.py
@@ -3,10 +3,37 @@
########## Pluto Abstractions that simplify handling complex structures ####################
+def name_scheme_compatible_varname(x: str):
+ return f"_{x}#"
+
+
+def PVar(x: str):
+ """
+ A simple wrapper around Var to ensure adherence to the naming scheme
+ :param x: name of the variable
+ :return: variable for use with pluthon, adhering to the naming scheme
+ """
+ return Var(name_scheme_compatible_varname(x))
+
+
+def PLambda(vars: typing.List[str], term: AST):
+ """
+ A simple wrapper around Lambda to ensure adherence to the naming scheme
+ """
+ return Lambda(list(map(name_scheme_compatible_varname, vars)), term)
+
+
+def PLet(bindings: typing.List[typing.Tuple[str, AST]], term: AST):
+ """
+ A simple wrapper around Let to ensure adherence to the naming scheme
+ """
+ return Let([(name_scheme_compatible_varname(x), y) for x, y in bindings], term)
+
+
def RecFun(x: AST):
- return Let(
+ return PLet(
[("g", x)],
- Apply(Var("g"), Var("g")),
+ Apply(PVar("g"), PVar("g")),
)
@@ -15,7 +42,7 @@ def Not(x: AST):
def Iff(x: AST, y: AST):
- return Let([("y", y)], Ite(x, Var("y"), Not(Var("y"))))
+ return PLet([("y", y)], Ite(x, PVar("y"), Not(PVar("y"))))
def And(x: AST, y: AST):
@@ -27,7 +54,7 @@ def Or(x: AST, y: AST):
def Xor(x: AST, y: AST):
- return Let([("y", y)], Ite(x, Not(Var("y")), Var("y")))
+ return PLet([("y", y)], Ite(x, Not(PVar("y")), PVar("y")))
def Implies(x: AST, y: AST):
@@ -266,19 +293,19 @@ def SingleDataPairList(x: AST):
def FoldList(l: AST, f: AST, a: AST):
"""Left fold over a list l operator f: accumulator -> list_elem -> accumulator with initial value a"""
return Apply(
- Lambda(
+ PLambda(
["op"],
RecFun(
- Lambda(
+ PLambda(
["fold", "xs", "a"],
IteNullList(
- Var("xs"),
- Var("a"),
+ PVar("xs"),
+ PVar("a"),
Apply(
- Var("fold"),
- Var("fold"),
- TailList(Var("xs")),
- Apply(Var("op"), Var("a"), HeadList(Var("xs"))),
+ PVar("fold"),
+ PVar("fold"),
+ TailList(PVar("xs")),
+ Apply(PVar("op"), PVar("a"), HeadList(PVar("xs"))),
),
),
),
@@ -293,23 +320,23 @@ def FoldList(l: AST, f: AST, a: AST):
def RFoldList(l: AST, f: AST, a: AST):
"""Right fold over a list l operator f: accumulator -> list_elem -> accumulator with initial value a"""
return Apply(
- Lambda(
+ PLambda(
["op"],
RecFun(
- Lambda(
+ PLambda(
["fold", "xs", "a"],
IteNullList(
- Var("xs"),
- Var("a"),
+ PVar("xs"),
+ PVar("a"),
Apply(
- Var("op"),
+ PVar("op"),
Apply(
- Var("fold"),
- Var("fold"),
- TailList(Var("xs")),
- Var("a"),
+ PVar("fold"),
+ PVar("fold"),
+ TailList(PVar("xs")),
+ PVar("a"),
),
- HeadList(Var("xs")),
+ HeadList(PVar("xs")),
),
),
),
@@ -324,19 +351,19 @@ def RFoldList(l: AST, f: AST, a: AST):
def IndexAccessList(l: AST, i: AST):
return Apply(
RecFun(
- Lambda(
+ PLambda(
["f", "i", "xs"],
IteNullList(
- Var("xs"),
+ PVar("xs"),
TraceError("IndexError"),
Ite(
- EqualsInteger(Var("i"), Integer(0)),
- HeadList(Var("xs")),
+ EqualsInteger(PVar("i"), Integer(0)),
+ HeadList(PVar("xs")),
Apply(
- Var("f"),
- Var("f"),
- SubtractInteger(Var("i"), Integer(1)),
- TailList(Var("xs")),
+ PVar("f"),
+ PVar("f"),
+ SubtractInteger(PVar("i"), Integer(1)),
+ TailList(PVar("xs")),
),
),
),
@@ -349,19 +376,19 @@ def IndexAccessList(l: AST, i: AST):
def Range(limit: AST, start: AST = Integer(0), step: AST = Integer(1)):
return Apply(
- Lambda(
+ PLambda(
["limit", "step"],
RecFun(
- Lambda(
+ PLambda(
["f", "cur"],
Ite(
- LessThanInteger(Var("cur"), Var("limit")),
+ LessThanInteger(PVar("cur"), PVar("limit")),
PrependList(
- Var("cur"),
+ PVar("cur"),
Apply(
- Var("f"),
- Var("f"),
- AddInteger(Var("cur"), Var("step")),
+ PVar("f"),
+ PVar("f"),
+ AddInteger(PVar("cur"), PVar("step")),
),
),
EmptyIntegerList(),
@@ -375,23 +402,23 @@ def Range(limit: AST, start: AST = Integer(0), step: AST = Integer(1)):
)
-def MapList(l: AST, m: AST = Lambda(["x"], Var("x")), empty_list=EmptyDataList()):
+def MapList(l: AST, m: AST = PLambda(["x"], PVar("x")), empty_list=EmptyDataList()):
"""Apply a map function on each element in a list"""
return Apply(
- Lambda(
+ PLambda(
["op"],
RecFun(
- Lambda(
+ PLambda(
["map", "xs"],
IteNullList(
- Var("xs"),
+ PVar("xs"),
empty_list,
PrependList(
- Apply(Var("op"), HeadList(Var("xs"))),
+ Apply(PVar("op"), HeadList(PVar("xs"))),
Apply(
- Var("map"),
- Var("map"),
- TailList(Var("xs")),
+ PVar("map"),
+ PVar("map"),
+ TailList(PVar("xs")),
),
),
),
@@ -406,21 +433,21 @@ def MapList(l: AST, m: AST = Lambda(["x"], Var("x")), empty_list=EmptyDataList()
def FindList(l: AST, key: AST, default: AST):
"""Returns the first element in the list where key evaluates to true - otherwise returns default"""
return Apply(
- Lambda(
+ PLambda(
["op"],
RecFun(
- Lambda(
+ PLambda(
["f", "xs"],
IteNullList(
- Var("xs"),
+ PVar("xs"),
default,
Ite(
- Apply(Var("op"), HeadList(Var("xs"))),
- HeadList(Var("xs")),
+ Apply(PVar("op"), HeadList(PVar("xs"))),
+ HeadList(PVar("xs")),
Apply(
- Var("f"),
- Var("f"),
- TailList(Var("xs")),
+ PVar("f"),
+ PVar("f"),
+ TailList(PVar("xs")),
),
),
),
@@ -435,30 +462,30 @@ def FindList(l: AST, key: AST, default: AST):
def FilterList(l: AST, k: AST, empty_list=EmptyDataList()):
"""Apply a filter function on each element in a list (throws out all that evaluate to false)"""
return Apply(
- Lambda(
+ PLambda(
["op"],
RecFun(
- Lambda(
+ PLambda(
["filter", "xs"],
IteNullList(
- Var("xs"),
+ PVar("xs"),
empty_list,
- Let(
- [("head", HeadList(Var("xs")))],
+ PLet(
+ [("head", HeadList(PVar("xs")))],
Ite(
- Apply(Var("op"), Var("head")),
+ Apply(PVar("op"), PVar("head")),
PrependList(
- Var("head"),
+ PVar("head"),
Apply(
- Var("filter"),
- Var("filter"),
- TailList(Var("xs")),
+ PVar("filter"),
+ PVar("filter"),
+ TailList(PVar("xs")),
),
),
Apply(
- Var("filter"),
- Var("filter"),
- TailList(Var("xs")),
+ PVar("filter"),
+ PVar("filter"),
+ TailList(PVar("xs")),
),
),
),
@@ -477,30 +504,30 @@ def MapFilterList(l: AST, filter_op: AST, map_op: AST, empty_list=EmptyDataList(
Performs only a single pass and is hence much more efficient than filter + map
"""
return Apply(
- Lambda(
+ PLambda(
["filter", "map"],
RecFun(
- Lambda(
+ PLambda(
["filtermap", "xs"],
IteNullList(
- Var("xs"),
+ PVar("xs"),
empty_list,
- Let(
- [("head", HeadList(Var("xs")))],
+ PLet(
+ [("head", HeadList(PVar("xs")))],
Ite(
- Apply(Var("filter"), Var("head")),
+ Apply(PVar("filter"), PVar("head")),
PrependList(
- Apply(Var("map"), Var("head")),
+ Apply(PVar("map"), PVar("head")),
Apply(
- Var("filtermap"),
- Var("filtermap"),
- TailList(Var("xs")),
+ PVar("filtermap"),
+ PVar("filtermap"),
+ TailList(PVar("xs")),
),
),
Apply(
- Var("filtermap"),
- Var("filtermap"),
- TailList(Var("xs")),
+ PVar("filtermap"),
+ PVar("filtermap"),
+ TailList(PVar("xs")),
),
),
),
@@ -515,7 +542,9 @@ def MapFilterList(l: AST, filter_op: AST, map_op: AST, empty_list=EmptyDataList(
def LengthList(l: AST):
- return FoldList(l, Lambda(["a", "_"], AddInteger(Var("a"), Integer(1))), Integer(0))
+ return FoldList(
+ l, PLambda(["a", "_"], AddInteger(PVar("a"), Integer(1))), Integer(0)
+ )
# Data Utils
| Introduce naming schema to avoid name conflicts with lower/upper layers
Tools like hebi build on top of pluthon and also use native UPLC variables.
To make sure that there cannot be any naming conflicts, the UPLC variables should follow a strict scheme that tools higher up the toolchain can then avoid (i.e., every variable name starts with an underscore).
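A minimal sketch of the idea, mirroring the `name_scheme_compatible_varname` / `PVar` wrappers introduced in the patch above. The `Var` class below is only a stand-in for pluthon's AST node so the snippet is self-contained:

```python
from dataclasses import dataclass


@dataclass
class Var:
    """Stand-in for pluthon's Var AST node (illustration only)."""
    name: str


def name_scheme_compatible_varname(x: str) -> str:
    # every variable bound by pluthon itself becomes "_<name>#", so tools
    # higher up the toolchain only have to avoid emitting names of that shape
    return f"_{x}#"


def PVar(x: str) -> Var:
    # wrapper used everywhere pluthon creates a variable reference
    return Var(name_scheme_compatible_varname(x))


assert PVar("fold").name == "_fold#"
```

The same wrapping is applied to bindings (`PLambda`, `PLet` in the patch), so bound names and the variables that refer to them always agree on the mangled form.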
| 2023-03-10T16:25:18 | 0.0 | [] | [] |
|||
SwissDataScienceCenter/renku-python | SwissDataScienceCenter__renku-python-3641 | db0f8cf87d8f234e87aa48058eceaaf68283cedf | diff --git a/renku/core/session/docker.py b/renku/core/session/docker.py
index 15d009dc32..f7a930ef42 100644
--- a/renku/core/session/docker.py
+++ b/renku/core/session/docker.py
@@ -356,6 +356,9 @@ def session_start_helper(consider_disk_request: bool):
environment["CHOWN_HOME"] = "yes"
environment["CHOWN_HOME_OPTS"] = "-R"
+ if "force_build" in kwargs:
+ del kwargs["force_build"]
+
container = self.docker_client().containers.run(
image_name,
'jupyter notebook --NotebookApp.ip="0.0.0.0"'
| the `--force-build` flag should not be in the docker section
The `--force-build` flag to `renku session start` appears in the "docker configuration" section of the help text - it should be in the "options" section as it is "native" to renku. Using that flag results in an error:
```
TypeError: run() got an unexpected keyword argument 'force_build'
```
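The fix in the `docker.py` hunk above is to strip the renku-only flag from the keyword arguments before they are forwarded to docker-py. A minimal self-contained sketch of that pattern follows; the function name and the `containers_run` parameter are illustrative stand-ins, not the actual renku API:

```python
def start_session_container(containers_run, image_name, **kwargs):
    """Forward session options to docker-py, minus renku-only flags.

    `containers_run` stands in for `docker_client.containers.run`.
    """
    # `force_build` is consumed by renku itself (it decides whether the image
    # gets rebuilt), so docker-py must never see it; otherwise run() raises
    # the TypeError shown above.
    kwargs.pop("force_build", None)  # same effect as the `if ... del` in the patch
    return containers_run(image_name, **kwargs)
```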
| 2023-10-16T12:44:03 | 0.0 | [] | [] |
|||
Bayer-Group/tiffslide | Bayer-Group__tiffslide-80 | 3b6b1fc490c3acf3bd136eaeba77b67954c1dc01 | diff --git a/tiffslide/tiffslide.py b/tiffslide/tiffslide.py
index 6120a98..7c241ca 100644
--- a/tiffslide/tiffslide.py
+++ b/tiffslide/tiffslide.py
@@ -13,6 +13,7 @@
from typing import Iterator
from typing import Literal
from typing import Mapping
+from typing import Optional
from typing import TypeVar
from typing import overload
from warnings import warn
@@ -644,6 +645,7 @@ class _PropertyParser:
scn="leica",
bif="ventana",
ndpi="hamamatsu",
+ philips="philips_tiff"
# add more when needed
)
@@ -843,6 +845,37 @@ def parse_generic_tiff(self) -> dict[str, Any]:
md.update(self.collect_level_info(series0))
return md
+ def parse_philips_tiff(self) -> dict[str, Any]:
+ """parse Philips tiff properties"""
+ md = self.parse_generic_tiff()
+ if self._tf.philips_metadata is None:
+ return md
+ philips_metadata = ElementTree.fromstring(self._tf.philips_metadata)
+ def get_first_attribute_with_name(
+ root: ElementTree.Element,
+ name: str
+ ) -> Optional[str]:
+ return next(
+ root.iterfind(f".//Attribute[@Name='{name}']")
+ ).text
+ md[PROPERTY_NAME_VENDOR] = get_first_attribute_with_name(
+ philips_metadata,
+ 'DICOM_MANUFACTURER'
+ )
+ pixel_spacing_attribute = get_first_attribute_with_name(
+ philips_metadata,
+ 'DICOM_PIXEL_SPACING'
+ )
+ if pixel_spacing_attribute is not None:
+ pixel_spacings = [
+ float(element.strip('"')) * 1000
+ for element in pixel_spacing_attribute.split(' ')
+ ]
+ mpp_x, mpp_y = pixel_spacings[0], pixel_spacings[1]
+ md[PROPERTY_NAME_MPP_X] = mpp_x
+ md[PROPERTY_NAME_MPP_Y] = mpp_y
+ return md
+
def _parse_metadata_aperio(desc: str) -> dict[str, Any]:
"""Aperio SVS metadata"""
| tiffslide not reading MPP of CAMELYON16 but openslide does
Hello, I am using tiffslide with the CAMELYON16 dataset. I realized that tiffslide cannot read the MPP of these images, whereas openslide-python can. I have included info below on how to reproduce this behavior:
create python environment and download camelyon16 file:
```bash
# Create python env
python3.10 -m venv --upgrade-deps venv/
source ./venv/bin/activate
python -m pip install awscli openslide-python tiffslide
# Copy sample file from CAMELYON16
aws s3 cp --no-sign-request s3://camelyon-dataset/CAMELYON16/images/normal_001.tif .
```
read mpp with openslide and tiffslide:
```python
>>> import openslide, tiffslide
>>> oslide = openslide.open_slide("normal_001.tif")
>>> oslide.properties[openslide.PROPERTY_NAME_MPP_X]
'0.24309399999999998'
>>> tslide = tiffslide.open_slide("normal_001.tif")
>>> tslide.properties[tiffslide.PROPERTY_NAME_MPP_X]
>>> tslide.properties[tiffslide.PROPERTY_NAME_MPP_X] is None
True
```
here are the openslide-python and tiffslide versions
```python
>>> import openslide, tiffslide
>>> openslide.__version__
'1.3.0'
>>> tiffslide.__version__
'2.2.0'
```
| Hey @kaczmarj
Thanks for opening the issue! This seems to be a Philips TIFF file, and we would need to add support in `_PropertyParser`.
There would need to be an entry here: https://github.com/Bayer-Group/tiffslide/blob/8bea5a4c8e1429071ade6d4c40169ce153786d19/tiffslide/tiffslide.py#L642-L648
and a corresponding metadata parsing method like `parse_aperio` or `parse_leica`.
Would you want to work on a PR?
Cheers,
Andreas :smiley:
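For orientation, the two pieces described above (an extra entry in the vendor map plus a parsing method) ended up looking roughly like the sketch below. It mirrors the `philips="philips_tiff"` map entry and the `parse_philips_tiff` method from the patch at the top of this entry, rewritten here as a standalone function; `tifffile` exposes the Philips XML header as `TiffFile.philips_metadata`:

```python
from xml.etree import ElementTree

import tifffile


def philips_mpp(path: str):
    """Sketch: read microns-per-pixel from a Philips TIFF, as the patch does."""
    with tifffile.TiffFile(path) as tf:
        xml = tf.philips_metadata  # full Philips XML header, or None
        if xml is None:
            return None, None
        root = ElementTree.fromstring(xml)
        attr = next(root.iterfind(".//Attribute[@Name='DICOM_PIXEL_SPACING']")).text
        # the value looks like '"0.000243094" "0.000243094"' (millimetres per
        # pixel); multiplying by 1000 gives micrometres per pixel
        spacings = [float(v.strip('"')) * 1000 for v in attr.split(" ")]
        return spacings[0], spacings[1]
```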
Thanks @ap-- - I don't have the bandwidth at this time to learn about Philips TIFF and create a PR for this issue. But if anyone else reading along feels motivated to address this, I have pasted the output of `openslide-show-properties` for `s3://camelyon-dataset/CAMELYON16/images/normal_001.tif`:
```
openslide.level-count: '10'
openslide.level[0].downsample: '1'
openslide.level[0].height: '221184'
openslide.level[0].tile-height: '512'
openslide.level[0].tile-width: '512'
openslide.level[0].width: '97792'
openslide.level[1].downsample: '2'
openslide.level[1].height: '110592'
openslide.level[1].tile-height: '512'
openslide.level[1].tile-width: '512'
openslide.level[1].width: '48896'
openslide.level[2].downsample: '4'
openslide.level[2].height: '55296'
openslide.level[2].tile-height: '512'
openslide.level[2].tile-width: '512'
openslide.level[2].width: '24448'
openslide.level[3].downsample: '8'
openslide.level[3].height: '27648'
openslide.level[3].tile-height: '512'
openslide.level[3].tile-width: '512'
openslide.level[3].width: '12224'
openslide.level[4].downsample: '16'
openslide.level[4].height: '13824'
openslide.level[4].tile-height: '512'
openslide.level[4].tile-width: '512'
openslide.level[4].width: '6112'
openslide.level[5].downsample: '32'
openslide.level[5].height: '6912'
openslide.level[5].tile-height: '512'
openslide.level[5].tile-width: '512'
openslide.level[5].width: '3056'
openslide.level[6].downsample: '64'
openslide.level[6].height: '3456'
openslide.level[6].tile-height: '512'
openslide.level[6].tile-width: '512'
openslide.level[6].width: '1528'
openslide.level[7].downsample: '128'
openslide.level[7].height: '1728'
openslide.level[7].tile-height: '512'
openslide.level[7].tile-width: '512'
openslide.level[7].width: '764'
openslide.level[8].downsample: '256'
openslide.level[8].height: '864'
openslide.level[8].tile-height: '512'
openslide.level[8].tile-width: '512'
openslide.level[8].width: '382'
openslide.level[9].downsample: '512'
openslide.level[9].height: '432'
openslide.level[9].tile-height: '512'
openslide.level[9].tile-width: '512'
openslide.level[9].width: '191'
openslide.mpp-x: '0.24309399999999998'
openslide.mpp-y: '0.24309399999999998'
openslide.quickhash-1: 'cc2b83f3fd7391491124031fcf20459a478e424e5a87b66aefcaee48f39973a2'
openslide.vendor: 'philips'
philips.DICOM_BITS_ALLOCATED: '8'
philips.DICOM_BITS_STORED: '8'
philips.DICOM_DERIVATION_DESCRIPTION: 'tiff-useBigTIFF=1-useRgb=0-levels=10003,10002,10000,10001-processing=0-q80-sourceFilename="T11-07929-I1-6"'
philips.DICOM_HIGH_BIT: '7'
philips.DICOM_LOSSY_IMAGE_COMPRESSION: '01'
philips.DICOM_LOSSY_IMAGE_COMPRESSION_METHOD: '"PHILIPS_TIFF_1_0"'
philips.DICOM_LOSSY_IMAGE_COMPRESSION_RATIO: '"3"'
philips.DICOM_MANUFACTURER: '3D Histech'
philips.DICOM_PHOTOMETRIC_INTERPRETATION: 'RGB'
philips.DICOM_PIXEL_REPRESENTATION: '0'
philips.DICOM_PIXEL_SPACING: '"0.000243094" "0.000243094"'
philips.DICOM_PLANAR_CONFIGURATION: '0'
philips.DICOM_SAMPLES_PER_PIXEL: '3'
philips.DICOM_SOFTWARE_VERSIONS: '"4.0.3"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[0].DICOM_PIXEL_SPACING: '"0.000243902" "0.000243902"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[0].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[0].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '97792'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[0].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '0'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[0].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '221184'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[1].DICOM_PIXEL_SPACING: '"0.000487805" "0.000487805"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[1].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[1].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '49152'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[1].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '1'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[1].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '110592'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[2].DICOM_PIXEL_SPACING: '"0.00097561" "0.00097561"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[2].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[2].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '24576'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[2].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '2'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[2].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '55296'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[3].DICOM_PIXEL_SPACING: '"0.00195122" "0.00195122"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[3].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[3].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '12288'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[3].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '3'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[3].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '27648'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[4].DICOM_PIXEL_SPACING: '"0.00390244" "0.00390244"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[4].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[4].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '6144'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[4].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '4'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[4].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '13824'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[5].DICOM_PIXEL_SPACING: '"0.00780488" "0.00780488"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[5].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[5].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '3072'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[5].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '5'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[5].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '7168'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[6].DICOM_PIXEL_SPACING: '"0.0156098" "0.0156098"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[6].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[6].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '1536'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[6].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '6'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[6].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '3584'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[7].DICOM_PIXEL_SPACING: '"0.0312195" "0.0312195"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[7].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[7].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '1024'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[7].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '7'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[7].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '2048'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[8].DICOM_PIXEL_SPACING: '"0.062439" "0.062439"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[8].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[8].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '512'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[8].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '8'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[8].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '1024'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[9].DICOM_PIXEL_SPACING: '"0.124878" "0.124878"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[9].PIIM_DP_PIXEL_DATA_REPRESENTATION_POSITION: '"0" "0" "0"'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[9].PIIM_PIXEL_DATA_REPRESENTATION_COLUMNS: '512'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[9].PIIM_PIXEL_DATA_REPRESENTATION_NUMBER: '9'
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[9].PIIM_PIXEL_DATA_REPRESENTATION_ROWS: '512'
philips.PIM_DP_IMAGE_COLUMNS: '97792'
philips.PIM_DP_IMAGE_ROWS: '221184'
philips.PIM_DP_IMAGE_TYPE: 'WSI'
philips.PIM_DP_SOURCE_FILE: '%FILENAME%'
philips.PIM_DP_UFS_INTERFACE_VERSION: '3.0'
philips.UFS_IMAGE_PIXEL_TRANSFORMATION_METHOD: '0'
tiff.ResolutionUnit: 'inch'
tiff.Software: 'Philips DP v1.0'
```
the mpp appears to come from
```
philips.DICOM_PIXEL_SPACING: '"0.000243094" "0.000243094"'
```
because those values agree with `openslide.mpp-x` and `openslide.mpp-y`, but there is also a level 0 pixel spacing entry, which is slightly different.
```
philips.PIIM_PIXEL_DATA_REPRESENTATION_SEQUENCE[0].DICOM_PIXEL_SPACING: '"0.000243902" "0.000243902"'
```
openslide's website says the following ([source](https://openslide.org/formats/philips/)):
```
openslide.mpp-x
calculated as 1000 * philips.DICOM_PIXEL_SPACING[1]
openslide.mpp-y
calculated as 1000 * philips.DICOM_PIXEL_SPACING[0]
```
> i don't have the bandwidth at this time to learn about phillips tiff and create a pr
No worries, and thanks for providing the additional information. All of this can be parsed out of the XML stored in the image description of the file.
If anyone is up for it, this is a nice and self-contained issue to work on:
- Add an entry to the dict in `_PropertyParser` mentioned above. (This will dispatch to a method named `parse_philips()`.)
- In `parse_philips()`, get the image description tag content, parse the XML, and extract the values from the comment above (a rough sketch follows below).
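For orientation, here is a minimal, hedged sketch of what that extraction could look like. The Philips XML element/attribute names and the surrounding tiffslide plumbing are assumptions made for illustration; only the conversion rule quoted earlier from the openslide docs (mpp = 1000 × `DICOM_PIXEL_SPACING`, with the axes swapped) is taken from this thread.
```python
import re
import xml.etree.ElementTree as ET

def philips_mpp_from_description(xml_text: str) -> tuple[float, float]:
    """Return (mpp-x, mpp-y) from a Philips TIFF ImageDescription XML string (sketch)."""
    root = ET.fromstring(xml_text)
    # Assumption: values are exposed as <Attribute Name="...">...</Attribute> elements.
    node = next(
        el for el in root.iter("Attribute")
        if el.get("Name") == "DICOM_PIXEL_SPACING"
    )
    # The value looks like: "0.000243094" "0.000243094"  (row, column spacing in mm).
    row_mm, col_mm = (float(v) for v in re.findall(r'"([^"]+)"', node.text))
    # openslide rule quoted above: mpp-x = 1000 * spacing[1], mpp-y = 1000 * spacing[0].
    return 1000.0 * col_mm, 1000.0 * row_mm
```
As a sanity check against the metadata above: 1000 × 0.000243094 = 0.243094, which matches `openslide.mpp-x` and `openslide.mpp-y`.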
Happy to answer further questions, and provide guidance with the PR.
Cheers,
Andreas
| 2023-11-29T20:23:16 | 0.0 | [] | [] |
||
ericaltendorf/plotman | ericaltendorf__plotman-60 | dd3f078e8526a6f89a449306a9ac5ae6c4244693 | diff --git a/job.py b/job.py
index 6dd69994..0aa327ee 100644
--- a/job.py
+++ b/job.py
@@ -31,6 +31,21 @@ def is_plotting_cmdline(cmdline):
and 'create' == cmdline[3]
)
+# This is a cmdline argument fix for https://github.com/ericaltendorf/plotman/issues/41
+def cmdline_argfix(cmdline):
+ known_keys = 'krbut2dne'
+ for i in cmdline:
+ # If the argument starts with dash and a known key and is longer than 2,
+ # then an argument is passed with no space between its key and value.
+ # This is POSIX compliant but the arg parser was tripping over it.
+ # In these cases, splitting that item up in separate key and value
+ # elements results in a `cmdline` list that is correctly formatted.
+ if i[0]=='-' and i[1] in known_keys and len(i)>2:
+ yield i[0:2] # key
+ yield i[2:] # value
+ else:
+ yield i
+
# TODO: be more principled and explicit about what we cache vs. what we look up
# dynamically from the logfile
class Job:
@@ -86,7 +101,7 @@ def __init__(self, proc, logroot):
assert 'chia' in args[1]
assert 'plots' == args[2]
assert 'create' == args[3]
- args_iter = iter(args[4:])
+ args_iter = iter(cmdline_argfix(args[4:]))
for arg in args_iter:
val = None if arg in ['-e'] else next(args_iter)
if arg == '-k':
| Parse command lines of existing jobs more robustly
E.g.:
```
$ ./plotman.py interactive
Warning: unrecognized args: -k32 -n1
Warning: unrecognized args: -t/tmp -2/tmp2
Warning: unrecognized args: -d/dst -b6000
Warning: unrecognized args: -u128 -r3
...
```
Should probably use a standard library for parsing the args. :)
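For anyone curious how the `cmdline_argfix` generator in the diff above behaves, here is a quick check run from a plotman checkout (it assumes the patched `job.py` at the repo root is importable):
```python
from job import cmdline_argfix  # plotman's job.py, as patched above

args = ['-k32', '-n1', '-t/tmp', '-2/tmp2', '-d/dst', '-b6000', '-u128', '-r3']
print(list(cmdline_argfix(args)))
# ['-k', '32', '-n', '1', '-t', '/tmp', '-2', '/tmp2',
#  '-d', '/dst', '-b', '6000', '-u', '128', '-r', '3']
```
Each fused `-<key><value>` token from the warnings above is split into separate key and value elements, which is what the existing argument loop in `Job.__init__` expects.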
| 2021-03-31T20:50:53 | 0.0 | [] | [] |
|||
upb-lea/pySignalScope | upb-lea__pySignalScope-3 | fcc1bd4caeb6ac4b780ce592caf975442235d1fc | diff --git a/pysignalscope/scope.py b/pysignalscope/scope.py
index a97214e..3da8f6d 100644
--- a/pysignalscope/scope.py
+++ b/pysignalscope/scope.py
@@ -15,23 +15,45 @@ class Scope:
def __init__(self, channel_time: Union[List[float], np.ndarray], channel_data: Union[List[float], np.ndarray],
channel_label: Optional[str] = None, channel_unit: Optional[str] = None, channel_color: Optional[str] = None,
channel_source: Optional[str] = None, channel_linestyle: Optional[str] = None) -> None:
+ # check channel_time for a valid type, convert to numpy if necessary
if isinstance(channel_time, List):
self.channel_time = np.array(channel_time)
elif isinstance(channel_time, np.ndarray):
self.channel_time = channel_time
else:
- raise Exception("channel_time must be type list or ArrayLike")
+ raise TypeError("channel_time must be type list or ArrayLike.")
+ # check channel_data for a valid type, convert to numpy if necessary
if isinstance(channel_data, List):
self.channel_data = np.array(channel_data)
elif isinstance(channel_data, np.ndarray):
self.channel_data = channel_data
else:
- raise Exception("channel_data must be type list or ArrayLike")
- self.channel_label = channel_label
- self.channel_unit = channel_unit
- self.channel_color = channel_color
- self.channel_source = channel_source
- self.channel_linestyle = channel_linestyle
+ raise TypeError("channel_data must be type list or ArrayLike")
+ # check channel_label for a valid type
+ if isinstance(channel_label, str) or channel_label is None:
+ self.channel_label = channel_label
+ else:
+ raise TypeError("channel_label must be type str or None.")
+ # check channel unit for a valid type
+ if isinstance(channel_unit, str) or channel_unit is None:
+ self.channel_unit = channel_unit
+ else:
+ raise TypeError("channel_unit must be type str or None.")
+ # check channel_color for a valid type
+ if isinstance(channel_color, str) or channel_color is None:
+ self.channel_color = channel_color
+ else:
+ raise TypeError("channel_color must be type str or None.")
+ # check channel_source for a valid type
+ if isinstance(channel_source, str) or channel_source is None:
+ self.channel_source = channel_source
+ else:
+ raise TypeError("channel_source must be type str or None.")
+ # check channel_linestyle for a valid type
+ if isinstance(channel_linestyle, str) or channel_linestyle is None:
+ self.channel_linestyle = channel_linestyle
+ else:
+ raise TypeError("channel_linestyle must be type str or None.")
def modify(self, channel_data_factor: Optional[float] = None, channel_data_offset: Optional[float] = None,
channel_label: Optional[str] = None, channel_unit: Optional[str] = None, channel_color: Optional[str] = None,
| Implement type check for Scope __init__() values from the user input.
Currently there is no check that user-supplied values have the correct data types.
There should be several checks to validate the input types.
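A quick illustration of the behaviour these checks introduce (the import path is an assumption based on the module layout in the diff; the constructor arguments are as shown there):
```python
import numpy as np
from pysignalscope.scope import Scope  # import path assumed from pysignalscope/scope.py

# Valid types (list or numpy array) pass through unchanged.
ok = Scope(channel_time=np.linspace(0, 1, 5), channel_data=[0, 1, 0, -1, 0])

# A wrong type now fails fast instead of surfacing later:
Scope(channel_time=[0, 1, 2], channel_data="not a list")
# TypeError: channel_data must be type list or ArrayLike
```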
| 2024-10-24T12:17:46 | 0.0 | [] | [] |
|||
InQuest/ThreatIngestor | InQuest__ThreatIngestor-102 | 21c67a0cf96b41c91e8c73c313f5135de7167f01 | diff --git a/threatingestor/sources/twitter.py b/threatingestor/sources/twitter.py
index 192fbbb..f12b0ed 100644
--- a/threatingestor/sources/twitter.py
+++ b/threatingestor/sources/twitter.py
@@ -34,7 +34,7 @@ def __init__(self, name, api_key, api_secret_key, access_token, access_token_sec
# If q is set, use Search API.
# Otherwise, default to mentions API.
self.endpoint = self.api.statuses.mentions_timeline
- if kwargs.get('slug') and kwargs.get('owner_screen_name'):
+ if (kwargs.get('slug') and kwargs.get('owner_screen_name')) or (kwargs.get('list_id') and kwargs.get('owner_screen_name')):
self.endpoint = self.api.lists.statuses
elif kwargs.get('screen_name') or kwargs.get('user_id'):
self.endpoint = self.api.statuses.user_timeline
diff --git a/threatingestor/sources/twitter_follow_links.py b/threatingestor/sources/twitter_follow_links.py
new file mode 100644
index 0000000..37b9084
--- /dev/null
+++ b/threatingestor/sources/twitter_follow_links.py
@@ -0,0 +1,107 @@
+from __future__ import absolute_import
+
+
+import re
+import requests
+import twitter
+from loguru import logger
+from threatingestor.sources import Source
+
+
+TWEET_URL = 'https://twitter.com/{user}/status/{id}'
+
+WHITELIST_DOMAINS = r"pastebin\.com"
+
+
+class Plugin(Source):
+
+ def __init__(self, name, api_key, api_secret_key, access_token, access_token_secret, defanged_only=True, **kwargs):
+ self.name = name
+ self.api = twitter.Twitter(auth=twitter.OAuth(access_token, access_token_secret, api_key, api_secret_key))
+
+ # Let the user decide whether to include non-obfuscated URLs or not.
+ self.include_nonobfuscated = not defanged_only
+
+ # Support for full tweet
+ tweet_param = {'tweet_mode': 'extended'}
+ kwargs.update(tweet_param)
+
+ # Forward kwargs.
+ # NOTE: No validation is done here, so if the config is wrong, expect bad results.
+ self.kwargs = kwargs
+
+ # Decide which endpoint to use based on passed arguments.
+ # If slug and owner_screen_name, use List API.
+ # If screen_name or user_id, use User Timeline API.
+ # If q is set, use Search API.
+ # Otherwise, default to mentions API.
+ self.endpoint = self.api.statuses.mentions_timeline
+ if (kwargs.get('slug') and kwargs.get('owner_screen_name')) or (kwargs.get('list_id') and kwargs.get('owner_screen_name')):
+ self.endpoint = self.api.lists.statuses
+ elif kwargs.get('screen_name') or kwargs.get('user_id'):
+ self.endpoint = self.api.statuses.user_timeline
+ elif kwargs.get('q'):
+ self.endpoint = self.api.search.tweets
+
+ def run(self, saved_state):
+ # Modify kwargs to insert since_id.
+ if saved_state:
+ self.kwargs['since_id'] = saved_state
+
+ # Pull new tweets.
+ try:
+ response = self.endpoint(**self.kwargs)
+ except twitter.api.TwitterHTTPError as e:
+ # API error; log and return early.
+ logger.warning(f"Twitter API Error: {e}")
+
+ return saved_state, []
+
+ # Correctly handle responses from different endpoints.
+ try:
+ tweet_list = response['statuses']
+ except TypeError:
+ tweet_list = response
+
+ tweets = [{
+ 'content': s.get('full_text', ''),
+ 'id': s.get('id_str', ''),
+ 'user': s.get('user', {}).get('screen_name', ''),
+ 'entities': s.get('entities', {}),
+ } for s in tweet_list]
+
+ artifacts = []
+ # Traverse in reverse, old to new.
+ tweets.reverse()
+ for tweet in tweets:
+
+ # Expand t.co links.
+ for url in tweet['entities'].get('urls', []):
+ try:
+ tweet['content'] = tweet['content'].replace(url['url'], url['expanded_url'])
+
+ # Check if pastebin.com in url
+ if re.search(WHITELIST_DOMAINS, url['expanded_url']):
+
+ # Check if the url is already returning the 'raw' pastebin. If not, update the url
+ if 'raw' not in url['expanded_url']:
+ pastebin_id = re.search(r"pastebin.com/(.*?)$", url['expanded_url']).group(1)
+ location = f"https://pastebin.com/raw/{pastebin_id}"
+ else:
+ location = url['expanded_url']
+
+ req = requests.get(location)
+ saved_state = tweet['id']
+ artifacts += self.process_element(req.text, location, include_nonobfuscated=True)
+
+ logger.log('NOTIFY', f"Discovered paste: {location}")
+
+ except KeyError:
+ # No url/expanded_url, continue without expanding.
+ pass
+
+ return saved_state, artifacts
+
+
+
+
| config.yml follow links for IOC's
Didn't there used to be an option in the config.yml to follow links (e.g. pastebin)? I can't seem to find it.
Thanks!
| Alright, so the issue isn't `defanged = false`; the issue is that it doesn't seem to work.
below is my config.yml
```yaml
general:
# You always need this section!
# Here are some sane values to include:
daemon: true
sleep: 900
state_path: state.db
credentials:
# This section is optional. Use it to define credentials to reference below
# in the source and operator sections.
- name: twitter-auth
# https://dev.twitter.com/oauth/overview/application-owner-access-tokens
api_key: token
api_secret_key: tokey
access_token: token-KJNNBgxQT9bLP6zBFCPeZIBdbU7MPY
access_token_secret: token
defanged_only: false
- name: misp-auth
url: https://misp.org
key: key
ssl: False
sources:
# This section defines each of the input sources for ThreatIngestor.
# Define as many as you want. ThreatIngestor maintains a "state" for each of
# your sources, which helps pull in only new content since the last run.
- name: Twitter_ingest
module: twitter
credentials: twitter-auth
# https://dev.twitter.com/rest/reference/get/lists/statuses
owner_screen_name: oasdfas
slug: Ioc
- name: twitter-open-directory
module: twitter
credentials: twitter-auth
q: '"payload" #APT'
- name: twitter-open-directory
module: twitter
credentials: twitter-auth
q: '"md5" #APT'
- name: twitter-open-directory
module: twitter
credentials: twitter-auth
q: '"url" #phishing "phishing"'
- name: twitter-open-directory
module: twitter
credentials: twitter-auth
q: '"url" #APT'
- name: twitter-open-directory
module: twitter
credentials: twitter-auth
q: '"ip" #APT'
- name: twitter-open-directory
module: twitter
credentials: twitter-auth
q: '"c2" #APT'
operators:
# This section defines outputs for the information extracted from your
# sources. All filtering and flow control is done here, with options like
# "allowed_sources", "artifact_types", and "filter".
- name: csv
# Write artifacts to a CSV file
module: csv
filename: output.csv
- name: sqlite-db
module: sqlite
filename: artifacts.db
- name: misp-instance
module: misp
credentials: misp-auth
tags: [type:OSINT, twitter]
```
Hi @mathurin68, I'm looking into `defanged_only: false` not working. Do you have an example of a tweet with a non-defanged URL that isn't being extracted?
According to the logic [here](https://github.com/InQuest/ThreatIngestor/blob/master/threatingestor/sources/__init__.py#L100): URLs in the tweet should be extracted. I'll look into the possibility that `defanged_only: false` in the config file isn't being picked up.
Hey @cmmorrow!
Like for this one...
https://twitter.com/Cryptolaemus1/status/1299363102107013120
is it supposed to follow the pastebin link?
Haha could always be something I don't have configured right too...
Hmm it seems to be missing a ton of stuff too --
https://twitter.com/_re_fox/status/1301564536575733760
+1 Would love to be able to follow links in tweets. Pastebin is a great example, as often the majority of IOCs are not in the tweet body but in the attached link.
Having a set of whitelisted domains to follow could negate opening dangerous links.
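That is essentially what the new `twitter_follow_links` source in the diff above does: only whitelisted domains (just `pastebin.com` for now) are followed, and paste links are rewritten to their raw form before fetching. The rewrite on its own, simplified from the diff (the example paste id is made up):
```python
import re

def to_raw_pastebin(url: str) -> str:
    # pastebin.com/<id> -> pastebin.com/raw/<id>; raw links pass through unchanged.
    if "raw" not in url:
        paste_id = re.search(r"pastebin.com/(.*?)$", url).group(1)
        return f"https://pastebin.com/raw/{paste_id}"
    return url

print(to_raw_pastebin("https://pastebin.com/AbCdEf12"))
# https://pastebin.com/raw/AbCdEf12
```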
Looking at a way to implement this now, but due to the static nature of sources, it's seeming a bit tricky. | 2020-10-30T23:16:06 | 0.0 | [] | [] |
||
bayesiains/nflows | bayesiains__nflows-33 | 75048ff2ebd6b7ccad2fb8380630da08aa6ab86b | diff --git a/nflows/flows/base.py b/nflows/flows/base.py
index 85fdeda..27c0df6 100644
--- a/nflows/flows/base.py
+++ b/nflows/flows/base.py
@@ -6,6 +6,8 @@
from nflows.distributions.base import Distribution
from nflows.utils import torchutils
+from inspect import signature
+
class Flow(Distribution):
"""Base class for all flow objects."""
@@ -23,6 +25,9 @@ def __init__(self, transform, distribution, embedding_net=None):
super().__init__()
self._transform = transform
self._distribution = distribution
+ distribution_signature = signature(self._distribution.log_prob)
+ distribution_arguments = distribution_signature.parameters.keys()
+ self._context_used_in_base = 'context' in distribution_arguments
if embedding_net is not None:
assert isinstance(embedding_net, torch.nn.Module), (
"embedding_net is not a nn.Module. "
@@ -37,12 +42,22 @@ def __init__(self, transform, distribution, embedding_net=None):
def _log_prob(self, inputs, context):
embedded_context = self._embedding_net(context)
noise, logabsdet = self._transform(inputs, context=embedded_context)
- log_prob = self._distribution.log_prob(noise, context=embedded_context)
+ if self._context_used_in_base:
+ log_prob = self._distribution.log_prob(noise, context=embedded_context)
+ else:
+ log_prob = self._distribution.log_prob(noise)
return log_prob + logabsdet
def _sample(self, num_samples, context):
embedded_context = self._embedding_net(context)
- noise = self._distribution.sample(num_samples, context=embedded_context)
+ if self._context_used_in_base:
+ noise = self._distribution.sample(num_samples, context=embedded_context)
+ else:
+ repeat_noise = self._distribution.sample(num_samples*embedded_context.shape[0])
+ noise = torch.reshape(
+ repeat_noise,
+ (embedded_context.shape[0], -1, repeat_noise.shape[1])
+ )
if embedded_context is not None:
# Merge the context dimension with sample dimension in order to apply the transform.
@@ -65,9 +80,14 @@ def sample_and_log_prob(self, num_samples, context=None):
For flows, this is more efficient that calling `sample` and `log_prob` separately.
"""
embedded_context = self._embedding_net(context)
- noise, log_prob = self._distribution.sample_and_log_prob(
- num_samples, context=embedded_context
- )
+ if self._context_used_in_base:
+ noise, log_prob = self._distribution.sample_and_log_prob(
+ num_samples, context=embedded_context
+ )
+ else:
+ noise, log_prob = self._distribution.sample_and_log_prob(
+ num_samples
+ )
if embedded_context is not None:
# Merge the context dimension with sample dimension in order to apply the transform.
diff --git a/nflows/transforms/autoregressive.py b/nflows/transforms/autoregressive.py
index 6c3958f..219fef5 100644
--- a/nflows/transforms/autoregressive.py
+++ b/nflows/transforms/autoregressive.py
@@ -40,7 +40,7 @@ def forward(self, inputs, context=None):
return outputs, logabsdet
def inverse(self, inputs, context=None):
- num_inputs = np.prod(inputs.shape[1:])
+ num_inputs = int(np.prod(inputs.shape[1:]))
outputs = torch.zeros_like(inputs)
logabsdet = None
for _ in range(num_inputs):
| Rewrote BoxUniform such that it can be used as a base for a flow
Hello,
Back again trying to contribute something useful. I found that if I use BoxUniform as the base distribution for a flow model I get
`log_prob() got an unexpected keyword argument 'context'`
I rewrote BoxUniform to look more like StandardNormal to prevent that issue. I could also rename it if the original BoxUniform needs to stay around.
Cheers,
Rob
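For reference, the change that landed (shown in the diff above) keeps `BoxUniform` untouched and instead has `Flow` inspect the base distribution's `log_prob` signature, only passing `context=` when that parameter exists. Reduced to its essence:
```python
from inspect import signature

def log_prob_accepts_context(distribution) -> bool:
    # Mirrors the check added to Flow.__init__ in the diff above: pass
    # `context=` to the base distribution only if it declares that parameter.
    return "context" in signature(distribution.log_prob).parameters
```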
| Could this or something like it please be accepted | 2021-02-17T19:18:51 | 0.0 | [] | [] |
||
behave-contrib/behave-html-pretty-formatter | behave-contrib__behave-html-pretty-formatter-83 | c38e66695da0ce0a7a7c84d60e82b7665371a9fa | diff --git a/CHANGES b/CHANGES
index 59600f1..27024c5 100644
--- a/CHANGES
+++ b/CHANGES
@@ -1,3 +1,8 @@
+behave-html-pretty-formatter 1.12.3
+===================================
+Adding color scheme of results from Summary to Global Summary, based on Issue #82
+
+
behave-html-pretty-formatter 1.12.2
===================================
Fix download of HTML embeds, add `Download Plaintext` button
diff --git a/behave_html_pretty_formatter/behave.css b/behave_html_pretty_formatter/behave.css
index ac2576d..a5cf013 100644
--- a/behave_html_pretty_formatter/behave.css
+++ b/behave_html_pretty_formatter/behave.css
@@ -231,6 +231,26 @@ pre * {
margin-bottom: 0.1em;
}
+.global-summary-status.passed {
+ color: var(--summary-passed);
+}
+
+.global-summary-status.failed, .global-summary-status.error {
+ color: var(--summary-failed);
+}
+
+.global-summary-status.undefined {
+ color: var(--summary-undefined);
+}
+
+.global-summary-status.skipped {
+ color: var(--summary-skipped);
+}
+
+.contrast .global-summary-status {
+ color: rgb(248, 248, 248);
+}
+
.feature-summary-row {
color: var(--feature-color);
border-left: 0.4rem solid var(--feature-color);
diff --git a/behave_html_pretty_formatter/behave.min.css b/behave_html_pretty_formatter/behave.min.css
index 1e14b84..98810fe 100644
--- a/behave_html_pretty_formatter/behave.min.css
+++ b/behave_html_pretty_formatter/behave.min.css
@@ -1,1 +1,1 @@
-@charset "utf-8"; [data-theme=light]{--body-color:#333;--body-bg:#fff;--strong-color:#000;--feature-bg:#eee;--feature-color:#777;--duration-color:#313131;--summary-passed:#4f8a10;--summary-passed-border:#4f8a10;--summary-failed:#d8000c;--summary-failed-border:#d8000c;--summary-undefined:#945901;--summary-undefined-border:#ffdf61;--summary-skipped:#76adff;--summary-skipped-border:#76adff;--passed-bg:#dff2bf;--passed-step-bg:#c6dba3;--passed-border:#b4cc8c;--failed-bg:#f5c9cd;--failed-step-bg:#ea868f;--failed-border:#dd7a82;--undefined-bg:#ffdf61;--undefined-step-bg:#f1cb32;--undefined-border:#917400;--skipped-bg:#eef5ff;--skipped-step-bg:#cfe2ff;--skipped-border:#b8c9e4;--commentary-bg:#b9b9b9;--table-bg-odd:#fff;--table-bg-even:#eee;--button-bg:#666;--button-color:#eee;--button-bg-active:#898989;--button-color-active:#fff}[data-theme=dark]{--body-color:#ddd;--body-bg:#000;--strong-color:#fff;--feature-bg:#222;--feature-color:#aaa;--duration-color:#cecece;--summary-passed:#4f8a10;--summary-passed-border:#4f8a10;--summary-failed:#d8000c;--summary-failed-border:#d8000c;--summary-undefined:#945901;--summary-undefined-border:#ffdf61;--summary-skipped:#76adff;--summary-skipped-border:#76adff;--passed-bg:#42630a;--passed-step-bg:#697e41;--passed-border:#91a86b;--failed-bg:#69272d;--failed-step-bg:#a8666c;--failed-border:#df888f;--undefined-bg:#665a2a;--undefined-step-bg:#b6940d;--undefined-border:#dbb20e;--skipped-bg:#345381;--skipped-step-bg:#3d659e;--skipped-border:#6981a8;--commentary-bg:#5c5c5c;--table-bg-odd:#555;--table-bg-even:#444;--button-bg:#555;--button-color:#cdcdcd;--button-bg-active:#898989;--button-color-active:#fff}html,body{font-family:Arial,Helvetica,sans-serif;font-size:1rem;margin:0;padding:0;color:var(--body-color);background:var(--body-bg)}body{padding:1rem 1rem;font-size:.85rem}pre,pre *{margin:0}.embed_button::after,.scenario-name::after{position:absolute;top:-0.5em;left:-0.2em;content:"\2304";font-size:1.8em;transition:all .2s linear}.embed_button.collapse::after,.collapse .scenario-name::after{top:-0.29em;left:-0.5em;transform:rotate(-90deg);-moz-transform:rotate(-90deg);-webkit-transform:rotate(-90deg);-ms-transform:rotate(-90deg);-o-transform:rotate(-90deg)}.embed_button,.scenario-name{padding-left:1.2em;position:relative}.feature-title,.global-summary{font-size:1rem;display:flex;flex-wrap:wrap;align-items:center;background-color:var(--feature-bg);color:var(--feature-color);padding:.5em 1em;margin-bottom:5px}.feature-title:not(:first-child){margin-top:1em}.global-summary{color:var(--strong-color);margin-bottom:0}.feature-icon{height:1.2em;display:inline-block;margin-right:.3em;text-align:center;vertical-align:middle}.feature-icon.contrast{display:none}.feature-title.contrast,.global-summary.contrast{font-weight:bold;font-size:1.25rem;background-color:#000;color:#fff}.feature-summary-commentary{border-left:.4rem solid 
var(--feature-color);background-color:var(--commentary-bg);color:var(--strong-color);word-wrap:break-word;max-width:40%;margin-right:1rem;margin-top:.2rem;margin-left:.2rem;padding:.5rem;white-space:pre-wrap}.feature-summary-commentary.contrast{background-color:#242323;color:#f8f8f8;font-size:1rem}.feature-summary-container{display:flex;flex-wrap:wrap;padding:5px;padding-right:1rem;margin-bottom:5px;background-color:var(--feature-bg);color:var(--feature-color);justify-content:start;font-size:.8rem}.feature-summary-container.collapse{display:none}.feature-summary-container.contrast{background-color:#000;color:#f8f8f8;font-size:1rem}.feature-additional-info-container{padding:5px;background-color:var(--feature-bg);color:var(--feature-color);justify-content:start;font-size:.8rem;flex-basis:100%}.contrast .feature-additional-info-container{background-color:#000;color:#f8f8f8;font-size:1rem}.feature-summary-stats{margin-top:.2em}.feature-summary-stats .button{padding-left:.4em;padding-right:.4em;padding-top:.1em;padding-bottom:.1em;margin-bottom:.1em}.feature-summary-row{color:var(--feature-color);border-left:.4rem solid var(--feature-color);padding-left:.5rem;padding-top:.1em;padding-bottom:.1em;margin-bottom:.1em}.feature-summary-row.passed{color:var(--summary-passed);border-left:.4rem solid var(--summary-passed-border)}.feature-summary-row.failed,.feature-summary-row.error{color:var(--summary-failed);border-left:.4rem solid var(--summary-failed-border)}.feature-summary-row.undefined{color:var(--summary-undefined);border-left:.4rem solid var(--summary-undefined-border)}.feature-summary-row.skipped{color:var(--summary-skipped);border-left:.4rem solid var(--summary-skipped-border)}.feature-summary-row.contrast{color:#f8f8f8;border-left:.4rem solid #f8f8f8}.feature-container{margin-bottom:2rem}.feature-started{align-self:center;margin-left:auto;font-size:.75rem;font-style:italic}.feature-started.contrast{font-size:1.25rem;color:#fff}.scenario-capsule{padding:1rem;padding-right:.5rem;padding-top:.3rem;margin-bottom:1rem;color:var(--strong-color)}.scenario-header{padding:1rem;padding-bottom:0;margin-top:0;margin-bottom:0;color:var(--strong-color)}.scenario-capsule:last-child{border:0}.scenario-capsule{background-color:var(--feature-bg)}.scenario-header.passed,.global-summary.passed{background-color:var(--passed-step-bg)}.scenario-header.failed,.global-summary.failed,.scenario-header.error,.global-summary.error{background-color:var(--failed-step-bg)}.scenario-header.undefined,.global-summary.undefined{background-color:var(--undefined-step-bg)}.scenario-header.skipped,.global-summary.skipped{background-color:var(--skipped-step-bg)}.scenario-header.contrast,.scenario-capsule.contrast,.global-summary.contrast{background-color:#000;color:#fff}.scenario-info{display:flex;flex-wrap:wrap;font-size:1.25rem}.scenario-name{cursor:pointer;font-weight:bold;padding-bottom:.5em}.scenario-duration{align-self:center;margin-left:auto;font-size:.75rem;font-style:italic;padding:0 .5em .5em 0}.scenario-duration.contrast{font-size:1.25rem;color:#fff}.scenario-tags{color:var(--body-color);font-weight:bold;font-size:.75rem;margin:.1rem .8em .5rem 0;display:inline-block}.scenario-tags.contrast{color:white;font-weight:bold;font-size:1rem;margin:.1rem 1em .5rem 0}.step-capsule{margin:2px 0 2px 2px;padding:.5rem;color:var(--strong-color);display:flex;flex-wrap:wrap;font-size:.75rem}.step-capsule.passed{background-color:var(--passed-step-bg);border:1px solid 
var(--passed-border)}.step-capsule.failed,.step-capsule.error{background-color:var(--failed-step-bg);border:1px solid var(--failed-border)}.step-capsule.undefined{background-color:var(--undefined-step-bg);border:1px solid var(--undefined-step-bg)}.step-capsule.skipped{background-color:var(--skipped-step-bg);border:1px solid var(--skipped-border)}.step-capsule.commentary{background-color:var(--commentary-bg);margin-left:1rem}.step-capsule.description{background-color:var(--commentary-bg);margin-left:0}.step-capsule.contrast{background-color:#242323;color:#fff;font-size:1.25rem;border:none}.step-status{display:none;padding:0 1rem 0 0;font-weight:bold;font-size:1.25rem}.step-decorator{padding:0;padding-right:1.5rem}.step-duration{color:var(--duration-color);font-style:italic;padding:0;padding-right:1.5rem}.step-duration.contrast{color:#f8f8f8}.messages{margin:0 0 4px 1em}.scenario-capsule .messages:last-child{border-bottom:1px dashed var(--strong-color)}.scenario-capsule .messages.contrast:last-child{border-bottom:1px dashed #fff}.embed-capsule{margin:.5em 0}.embed_content{white-space:pre-wrap;word-wrap:break-word;font-size:12px;margin:.5rem}.embed_content.collapse{display:none}.embed_button{cursor:pointer;margin:0 1rem .5em 0;text-decoration:underline;color:var(--strong-color);font-size:12px;width:max-content}.embed_button.contrast{color:#fff;font-size:20px}th,td{padding:6px}thead{background-color:#333;color:#fff;cursor:pointer}table{color:var(--body-color);margin:2px 1em 4px 1em;border-collapse:collapse;border:1px solid #000;vertical-align:middle}table.contrast{font-size:1rem}table tbody tr:nth-child(odd){background-color:var(--table-bg-odd)}table tbody tr:nth-child(even){background-color:var(--table-bg-even)}table tbody.collapse{display:none}table.contrast tbody tr{background-color:#fff;color:#000;border:1px solid #000}img,video{max-width:100%;max-height:100%}a{color:inherit;text-decoration:none}a:hover{text-decoration:underline;text-decoration-color:var(--strong-color)}.contrast a:hover{color:grey;text-decoration:underline;text-decoration-color:grey}.scenario-header.collapse .scenario-tags,.scenario-capsule.collapse{display:none}.scenario-header.collapse{padding:.5rem 1rem 0 1rem;margin-bottom:1rem}.button{display:inline-block;color:var(--button-color);background-color:var(--button-bg);border-radius:.2em;font-weight:bold;text-decoration:none;padding:.5em .9em;text-align:center;cursor:pointer}.button:hover{text-decoration:none;color:var(--button-color-active);background-color:var(--button-bg-active)}.contrast .button{color:#111;background-color:#eee}.contrast .button:hover{text-decoration:none}.display-flex{display:flex}.display-block{display:block}.display-inline{display:inline}.display-block.display-inline{display:inline-block}.flex-gap{column-gap:1em;row-gap:2px}.flex-left-space{margin-left:auto}.margin-top{margin-top:15px}.no-margin-top{margin-top:0}.margin-bottom{margin-bottom:15px}@media only screen and (max-width:750px){.feature-title,.global-summary{flex-direction:column}.feature-started{margin-left:unset}.feature-summary-container{margin-left:0;margin-top:.25rem;font-size:1rem;display:block}.feature-additional-info-container{margin-left:0;margin-top:.25rem;font-size:1rem}.feature-summary-commentary{max-width:100%;margin-right:0}.flex-left-space{margin-left:initial}.feature-summary-stats{margin-left:.2rem}.scenario-capsule{padding-right:0}}
\ No newline at end of file
+@charset "utf-8"; [data-theme=light]{--body-color:#333;--body-bg:#fff;--strong-color:#000;--feature-bg:#eee;--feature-color:#777;--duration-color:#313131;--summary-passed:#4f8a10;--summary-passed-border:#4f8a10;--summary-failed:#d8000c;--summary-failed-border:#d8000c;--summary-undefined:#945901;--summary-undefined-border:#ffdf61;--summary-skipped:#76adff;--summary-skipped-border:#76adff;--passed-bg:#dff2bf;--passed-step-bg:#c6dba3;--passed-border:#b4cc8c;--failed-bg:#f5c9cd;--failed-step-bg:#ea868f;--failed-border:#dd7a82;--undefined-bg:#ffdf61;--undefined-step-bg:#f1cb32;--undefined-border:#917400;--skipped-bg:#eef5ff;--skipped-step-bg:#cfe2ff;--skipped-border:#b8c9e4;--commentary-bg:#b9b9b9;--table-bg-odd:#fff;--table-bg-even:#eee;--button-bg:#666;--button-color:#eee;--button-bg-active:#898989;--button-color-active:#fff}[data-theme=dark]{--body-color:#ddd;--body-bg:#000;--strong-color:#fff;--feature-bg:#222;--feature-color:#aaa;--duration-color:#cecece;--summary-passed:#4f8a10;--summary-passed-border:#4f8a10;--summary-failed:#d8000c;--summary-failed-border:#d8000c;--summary-undefined:#945901;--summary-undefined-border:#ffdf61;--summary-skipped:#76adff;--summary-skipped-border:#76adff;--passed-bg:#42630a;--passed-step-bg:#697e41;--passed-border:#91a86b;--failed-bg:#69272d;--failed-step-bg:#a8666c;--failed-border:#df888f;--undefined-bg:#665a2a;--undefined-step-bg:#b6940d;--undefined-border:#dbb20e;--skipped-bg:#345381;--skipped-step-bg:#3d659e;--skipped-border:#6981a8;--commentary-bg:#5c5c5c;--table-bg-odd:#555;--table-bg-even:#444;--button-bg:#555;--button-color:#cdcdcd;--button-bg-active:#898989;--button-color-active:#fff}html,body{font-family:Arial,Helvetica,sans-serif;font-size:1rem;margin:0;padding:0;color:var(--body-color);background:var(--body-bg)}body{padding:1rem 1rem;font-size:.85rem}pre,pre *{margin:0}.embed_button::after,.scenario-name::after{position:absolute;top:-0.5em;left:-0.2em;content:"\2304";font-size:1.8em;transition:all .2s linear}.embed_button.collapse::after,.collapse .scenario-name::after{top:-0.29em;left:-0.5em;transform:rotate(-90deg);-moz-transform:rotate(-90deg);-webkit-transform:rotate(-90deg);-ms-transform:rotate(-90deg);-o-transform:rotate(-90deg)}.embed_button,.scenario-name{padding-left:1.2em;position:relative}.feature-title,.global-summary{font-size:1rem;display:flex;flex-wrap:wrap;align-items:center;background-color:var(--feature-bg);color:var(--feature-color);padding:.5em 1em;margin-bottom:5px}.feature-title:not(:first-child){margin-top:1em}.global-summary{color:var(--strong-color);margin-bottom:0}.feature-icon{height:1.2em;display:inline-block;margin-right:.3em;text-align:center;vertical-align:middle}.feature-icon.contrast{display:none}.feature-title.contrast,.global-summary.contrast{font-weight:bold;font-size:1.25rem;background-color:#000;color:#fff}.feature-summary-commentary{border-left:.4rem solid 
var(--feature-color);background-color:var(--commentary-bg);color:var(--strong-color);word-wrap:break-word;max-width:40%;margin-right:1rem;margin-top:.2rem;margin-left:.2rem;padding:.5rem;white-space:pre-wrap}.feature-summary-commentary.contrast{background-color:#242323;color:#f8f8f8;font-size:1rem}.feature-summary-container{display:flex;flex-wrap:wrap;padding:5px;padding-right:1rem;margin-bottom:5px;background-color:var(--feature-bg);color:var(--feature-color);justify-content:start;font-size:.8rem}.feature-summary-container.collapse{display:none}.feature-summary-container.contrast{background-color:#000;color:#f8f8f8;font-size:1rem}.feature-additional-info-container{padding:5px;background-color:var(--feature-bg);color:var(--feature-color);justify-content:start;font-size:.8rem;flex-basis:100%}.contrast .feature-additional-info-container{background-color:#000;color:#f8f8f8;font-size:1rem}.feature-summary-stats{margin-top:.2em}.feature-summary-stats .button{padding-left:.4em;padding-right:.4em;padding-top:.1em;padding-bottom:.1em;margin-bottom:.1em}.global-summary-status.passed{color:var(--summary-passed)}.global-summary-status.failed,.global-summary-status.error{color:var(--summary-failed)}.global-summary-status.undefined{color:var(--summary-undefined)}.global-summary-status.skipped{color:var(--summary-skipped)}.contrast .global-summary-status{color:#f8f8f8}.feature-summary-row{color:var(--feature-color);border-left:.4rem solid var(--feature-color);padding-left:.5rem;padding-top:.1em;padding-bottom:.1em;margin-bottom:.1em}.feature-summary-row.passed{color:var(--summary-passed);border-left:.4rem solid var(--summary-passed-border)}.feature-summary-row.failed,.feature-summary-row.error{color:var(--summary-failed);border-left:.4rem solid var(--summary-failed-border)}.feature-summary-row.undefined{color:var(--summary-undefined);border-left:.4rem solid var(--summary-undefined-border)}.feature-summary-row.skipped{color:var(--summary-skipped);border-left:.4rem solid var(--summary-skipped-border)}.feature-summary-row.contrast{color:#f8f8f8;border-left:.4rem solid #f8f8f8}.feature-container{margin-bottom:2rem}.feature-started{align-self:center;margin-left:auto;font-size:.75rem;font-style:italic}.feature-started.contrast{font-size:1.25rem;color:#fff}.scenario-capsule{padding:1rem;padding-right:.5rem;padding-top:.3rem;margin-bottom:1rem;color:var(--strong-color)}.scenario-header{padding:1rem;padding-bottom:0;margin-top:0;margin-bottom:0;color:var(--strong-color)}.scenario-capsule:last-child{border:0}.scenario-capsule{background-color:var(--feature-bg)}.scenario-header.passed,.global-summary.passed{background-color:var(--passed-step-bg)}.scenario-header.failed,.global-summary.failed,.scenario-header.error,.global-summary.error{background-color:var(--failed-step-bg)}.scenario-header.undefined,.global-summary.undefined{background-color:var(--undefined-step-bg)}.scenario-header.skipped,.global-summary.skipped{background-color:var(--skipped-step-bg)}.scenario-header.contrast,.scenario-capsule.contrast,.global-summary.contrast{background-color:#000;color:#fff}.scenario-info{display:flex;flex-wrap:wrap;font-size:1.25rem}.scenario-name{cursor:pointer;font-weight:bold;padding-bottom:.5em}.scenario-duration{align-self:center;margin-left:auto;font-size:.75rem;font-style:italic;padding:0 .5em .5em 0}.scenario-duration.contrast{font-size:1.25rem;color:#fff}.scenario-tags{color:var(--body-color);font-weight:bold;font-size:.75rem;margin:.1rem .8em .5rem 
0;display:inline-block}.scenario-tags.contrast{color:white;font-weight:bold;font-size:1rem;margin:.1rem 1em .5rem 0}.step-capsule{margin:2px 0 2px 2px;padding:.5rem;color:var(--strong-color);display:flex;flex-wrap:wrap;font-size:.75rem}.step-capsule.passed{background-color:var(--passed-step-bg);border:1px solid var(--passed-border)}.step-capsule.failed,.step-capsule.error{background-color:var(--failed-step-bg);border:1px solid var(--failed-border)}.step-capsule.undefined{background-color:var(--undefined-step-bg);border:1px solid var(--undefined-step-bg)}.step-capsule.skipped{background-color:var(--skipped-step-bg);border:1px solid var(--skipped-border)}.step-capsule.commentary{background-color:var(--commentary-bg);margin-left:1rem}.step-capsule.description{background-color:var(--commentary-bg);margin-left:0}.step-capsule.contrast{background-color:#242323;color:#fff;font-size:1.25rem;border:none}.step-status{display:none;padding:0 1rem 0 0;font-weight:bold;font-size:1.25rem}.step-decorator{padding:0;padding-right:1.5rem}.step-duration{color:var(--duration-color);font-style:italic;padding:0;padding-right:1.5rem}.step-duration.contrast{color:#f8f8f8}.messages{margin:0 0 4px 1em}.scenario-capsule .messages:last-child{border-bottom:1px dashed var(--strong-color)}.scenario-capsule .messages.contrast:last-child{border-bottom:1px dashed #fff}.embed-capsule{margin:.5em 0}.embed_content{white-space:pre-wrap;word-wrap:break-word;font-size:12px;margin:.5rem}.embed_content.collapse{display:none}.embed_button{cursor:pointer;margin:0 1rem .5em 0;text-decoration:underline;color:var(--strong-color);font-size:12px;width:max-content}.embed_button.contrast{color:#fff;font-size:20px}th,td{padding:6px}thead{background-color:#333;color:#fff;cursor:pointer}table{color:var(--body-color);margin:2px 1em 4px 1em;border-collapse:collapse;border:1px solid #000;vertical-align:middle}table.contrast{font-size:1rem}table tbody tr:nth-child(odd){background-color:var(--table-bg-odd)}table tbody tr:nth-child(even){background-color:var(--table-bg-even)}table tbody.collapse{display:none}table.contrast tbody tr{background-color:#fff;color:#000;border:1px solid #000}img,video{max-width:100%;max-height:100%}a{color:inherit;text-decoration:none}a:hover{text-decoration:underline;text-decoration-color:var(--strong-color)}.contrast a:hover{color:grey;text-decoration:underline;text-decoration-color:grey}.scenario-header.collapse .scenario-tags,.scenario-capsule.collapse{display:none}.scenario-header.collapse{padding:.5rem 1rem 0 1rem;margin-bottom:1rem}.button{display:inline-block;color:var(--button-color);background-color:var(--button-bg);border-radius:.2em;font-weight:bold;text-decoration:none;padding:.5em .9em;text-align:center;cursor:pointer}.button:hover{text-decoration:none;color:var(--button-color-active);background-color:var(--button-bg-active)}.contrast .button{color:#111;background-color:#eee}.contrast .button:hover{text-decoration:none}.display-flex{display:flex}.display-block{display:block}.display-inline{display:inline}.display-block.display-inline{display:inline-block}.flex-gap{column-gap:1em;row-gap:2px}.flex-left-space{margin-left:auto}.margin-top{margin-top:15px}.no-margin-top{margin-top:0}.margin-bottom{margin-bottom:15px}@media only screen and 
(max-width:750px){.feature-title,.global-summary{flex-direction:column}.feature-started{margin-left:unset}.feature-summary-container{margin-left:0;margin-top:.25rem;font-size:1rem;display:block}.feature-additional-info-container{margin-left:0;margin-top:.25rem;font-size:1rem}.feature-summary-commentary{max-width:100%;margin-right:0}.flex-left-space{margin-left:initial}.feature-summary-stats{margin-left:.2rem}.scenario-capsule{padding-right:0}}
\ No newline at end of file
diff --git a/behave_html_pretty_formatter/html_pretty.py b/behave_html_pretty_formatter/html_pretty.py
index e917ee3..fab8e83 100644
--- a/behave_html_pretty_formatter/html_pretty.py
+++ b/behave_html_pretty_formatter/html_pretty.py
@@ -1397,43 +1397,65 @@ def generate_toggle_buttons(self):
onclick="toggle_hash('high_contrast')",
)
- def _generate_global_summary(self):
+ def _calculate_statuses(self, behave_object, statuses):
"""
- Process and render global statistics.
+ Calculate Statuses of either Feature or Scenario behave object.
"""
- if self.global_summary == "auto":
- if len(self.features) <= 1:
- return False
- elif not self.global_summary:
- return False
+ status = behave_object.status.name.lower()
- f_statuses, s_statuses = {}, {}
- for feature in self.features:
- f_status = feature.status.name.lower()
- count = f_statuses.get(f_status, 0) + 1
- f_statuses[f_status] = count
+ # Handle upstream Status.error status, add it to Status.failed for now.
+ if status == "error":
+ count = statuses.get(Status.failed.name.lower(), 0) + 1
+ status = Status.failed.name.lower()
+ else:
+ count = statuses.get(status, 0) + 1
- for scenario in feature.scenarios:
- s_status = scenario.status.name.lower()
- count = s_statuses.get(s_status, 0) + 1
- s_statuses[s_status] = count
+ statuses[status] = count
+
+ def _calculate_global_status_from_results(self, feature_statuses):
+ """
+ Calculate global status from Feature results.
+ """
global_status = Status.passed.name.lower()
# If no passed scenario mark as skipped.
- if not f_statuses.get(Status.passed.name.lower(), 0):
+ if not feature_statuses.get(Status.passed.name.lower(), 0):
global_status = Status.skipped.name.lower()
# If some undefined scenario, mark as undefined
# else remain passed or skipped.
- if f_statuses.get(Status.undefined.name.lower(), 0):
+ if feature_statuses.get(Status.undefined.name.lower(), 0):
global_status = Status.undefined.name.lower()
# If some failed scenario, mark as failed
# else remain passed, skipped or undefined.
- if f_statuses.get(Status.failed.name.lower(), 0):
+ if feature_statuses.get(Status.failed.name.lower(), 0):
global_status = Status.failed.name.lower()
+ return global_status
+
+ def _generate_global_summary(self):
+ """
+ Process and render global statistics.
+ """
+
+ if self.global_summary == "auto":
+ if len(self.features) <= 1:
+ return False
+
+ elif not self.global_summary:
+ return False
+
+ feature_statuses, scenario_statuses = {}, {}
+ for feature in self.features:
+ self._calculate_statuses(feature, feature_statuses)
+
+ for scenario in feature.scenarios:
+ self._calculate_statuses(scenario, scenario_statuses)
+
+ global_status = self._calculate_global_status_from_results(feature_statuses)
+
with div(cls=f"global-summary flex-gap {global_status}"):
# Generate icon if present.
if self.icon:
@@ -1455,26 +1477,37 @@ def _generate_global_summary(self):
cls=f"feature-summary-container flex-gap {collapse}",
):
with div(cls="feature-summary-stats"):
- line = ", ".join(
- f"{f_statuses.get(s.name.lower(), 0)} {s.name.lower()}"
- for s in [
- Status.passed,
- Status.failed,
- Status.undefined,
- Status.skipped,
- ]
- )
- div(f"Features: {line}.", cls="feature-summary-row")
- line = ", ".join(
- f"{s_statuses.get(s.name.lower(), 0)} {s.name.lower()}"
- for s in [
- Status.passed,
- Status.failed,
- Status.undefined,
- Status.skipped,
- ]
- )
- div(f"Scenarios: {line}.", cls="feature-summary-row")
+ statuses = [
+ Status.passed,
+ Status.failed,
+ Status.undefined,
+ Status.skipped,
+ ]
+ with div("Features: ", cls="feature-summary-row"):
+ for status in statuses:
+ span(
+ "".join(
+ (
+ f"{feature_statuses.get(status.name.lower(), 0)} ",
+ status.name.lower(),
+ ", " if status != Status.skipped else ".",
+ ),
+ ),
+ cls=f"global-summary-status {status.name.lower()}",
+ )
+
+ with div("Scenarios: ", cls="feature-summary-row"):
+ for status in statuses:
+ span(
+ "".join(
+ (
+ f"{scenario_statuses.get(status.name.lower(), 0)} ",
+ status.name.lower(),
+ ", " if status != Status.skipped else ".",
+ ),
+ ),
+ cls=f"global-summary-status {status.name.lower()}",
+ )
with div(cls="feature-summary-stats flex-left-space"):
finish_time = datetime.now()
diff --git a/pyproject.toml b/pyproject.toml
index 30ecadd..01b7c6c 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "behave-html-pretty-formatter"
-version = "1.12.2"
+version = "1.12.3"
description = "Pretty HTML Formatter for Behave"
readme = "README.md"
license = {file = "LICENSE"}
| Use colors in the global summary Feature/Scenario section
Would it be possible to use the same color scheme in the Global summary as is used in the Feature summary?

| Hello, I do not see a reason why that would not be possible. Either I or Filip will look into it once we have some time.
Should be fairly simple; I will look into it. Thank you for the report.
@Cuslika Just to be clear, you meant to color it whole like this?


Or did you mean to color it word by word like this?

I think it would be ideal to have it colored word by word, so the last one.
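The patch above implements exactly that word-by-word variant: each count is wrapped in a `span` carrying a per-status class (`global-summary-status passed`, `failed`, ...) so the summary colour palette from the CSS applies to each word. Reduced to its essence with dominate, simplified from the diff:
```python
from dominate.tags import div, span

def summary_row(label, counts):
    # counts: e.g. {"passed": 3, "failed": 1, "undefined": 0, "skipped": 2}
    statuses = ["passed", "failed", "undefined", "skipped"]
    with div(f"{label}: ", cls="feature-summary-row") as row:
        for status in statuses:
            sep = ", " if status != "skipped" else "."
            span(f"{counts.get(status, 0)} {status}{sep}",
                 cls=f"global-summary-status {status}")
    return row

print(summary_row("Features", {"passed": 3, "failed": 1}))
```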
| 2024-11-22T09:16:09 | 0.0 | [] | [] |
||
borgbackup/borg | borgbackup__borg-8384 | e4b5a59be0b5eaf10c17a19cbe6c5d39a8f7608f | diff --git a/src/borg/archive.py b/src/borg/archive.py
index 9265a2e625..2e7323c294 100644
--- a/src/borg/archive.py
+++ b/src/borg/archive.py
@@ -1058,6 +1058,8 @@ def stat_simple_attrs(self, st):
group = gid2group(st.st_gid)
if group is not None:
attrs["group"] = group
+ if st.st_ino > 0:
+ attrs["inode"] = st.st_ino
return attrs
def stat_ext_attrs(self, st, path, fd=None):
diff --git a/src/borg/constants.py b/src/borg/constants.py
index 0511f62de0..65f9c00dbe 100644
--- a/src/borg/constants.py
+++ b/src/borg/constants.py
@@ -1,7 +1,7 @@
# this set must be kept complete, otherwise the RobustUnpacker might malfunction:
# fmt: off
ITEM_KEYS = frozenset(['path', 'source', 'target', 'rdev', 'chunks', 'chunks_healthy', 'hardlink_master', 'hlid',
- 'mode', 'user', 'group', 'uid', 'gid', 'mtime', 'atime', 'ctime', 'birthtime', 'size',
+ 'mode', 'user', 'group', 'uid', 'gid', 'mtime', 'atime', 'ctime', 'birthtime', 'size', 'inode',
'xattrs', 'bsdflags', 'acl_nfs4', 'acl_access', 'acl_default', 'acl_extended',
'part'])
# fmt: on
diff --git a/src/borg/item.pyi b/src/borg/item.pyi
index 8cb2df43de..31fda55407 100644
--- a/src/borg/item.pyi
+++ b/src/borg/item.pyi
@@ -209,6 +209,10 @@ class Item(PropDict):
@nlink.setter
def nlink(self, val: int) -> None: ...
@property
+ def inode(self) -> int: ...
+ @inode.setter
+ def inode(self, val: int) -> None: ...
+ @property
def size(self) -> int: ...
@size.setter
def size(self, val: int) -> None: ...
diff --git a/src/borg/item.pyx b/src/borg/item.pyx
index 04dd884f29..ff50912d49 100644
--- a/src/borg/item.pyx
+++ b/src/borg/item.pyx
@@ -288,6 +288,8 @@ cdef class Item(PropDict):
# size is only present for items with a chunk list and then it is sum(chunk_sizes)
size = PropDictProperty(int)
+ inode = PropDictProperty(int)
+
hlid = PropDictProperty(bytes) # hard link id: same value means same hard link.
hardlink_master = PropDictProperty(bool) # legacy
| borg2: archive the inode number?
Archive the inode number; maybe we can use it some day to replace the local files cache with the previous archive in the repo.
The files cache is a mapping `full-path -> (cmtime, size, inode, chunkinfos)`.
`cmtime` is either ctime or mtime, depending on the `--files-cache` option.
`chunkinfos` is a list of tuples `(chunk_id, size)`.
We already have most of this information in the previous archive as well.
see also #7930.
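For orientation, the mapping described above sketched as a plain dict (the values are illustrative placeholders, not borg's actual on-disk format):
```python
# One files-cache entry per path: (cmtime, size, inode, chunkinfos)
files_cache = {
    "/home/user/data/file.bin": (
        1726560000123456789,          # cmtime: ctime or mtime, per --files-cache
        4096,                         # size in bytes
        1234567,                      # inode (the value this issue proposes to archive)
        [(b"<chunk-id-1>", 4096)],    # chunkinfos: list of (chunk_id, size) tuples
    ),
}
```
With the inode stored in each archived item (as the patch above does), a future borg could in principle rebuild this mapping from the previous archive instead of keeping a separate local cache.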
| 2024-09-17T09:49:38 | 0.0 | [] | [] |
|||
pibooth/pibooth-google-photo | pibooth__pibooth-google-photo-12 | aca4b75b41b8b909122f82e2fa5adb7a8ee947f6 | diff --git a/README.rst b/README.rst
index 6a68e34..ea75c63 100644
--- a/README.rst
+++ b/README.rst
@@ -35,6 +35,12 @@ Here below the new configuration options available in the `pibooth`_ configurati
.. note:: Edit the configuration by running the command ``pibooth --config``.
+Picture URL
+-----------
+
+Uploaded picture URL is set to `app.previous_picture_url` attribute at the end of
+`processing` state (``state_processing_exit`` hook).
+
Grant secured access
--------------------
diff --git a/pibooth_google_photo.py b/pibooth_google_photo.py
index fd0f2d4..3d1a086 100644
--- a/pibooth_google_photo.py
+++ b/pibooth_google_photo.py
@@ -30,14 +30,24 @@ def pibooth_configure(cfg):
@pibooth.hookimpl
def pibooth_startup(app, cfg):
"""Create the GooglePhotosUpload instance."""
- app.google_photos = GooglePhotosApi(cfg.getpath('GOOGLE', 'client_id_file'))
+ app.previous_picture_url = None
+ client_id_file = cfg.getpath('GOOGLE', 'client_id_file')
+
+ if not client_id_file:
+ LOGGER.debug("No credentials file defined in [GOOGLE][client_id_file], upload deactivated")
+ elif not os.path.exists(client_id_file):
+ LOGGER.error("No such file [GOOGLE][client_id_file]='%s', please check config", client_id_file)
+ elif client_id_file and os.path.getsize(client_id_file) == 0:
+ LOGGER.error("Empty file [GOOGLE][client_id_file]='%s', please check config", client_id_file)
+ else:
+ app.google_photos = GooglePhotosApi(client_id_file)
@pibooth.hookimpl
def state_processing_exit(app, cfg):
"""Upload picture to google photo album"""
- pictures = (app.previous_picture_file,)
- app.google_photos.upload(pictures, cfg.get('GOOGLE', 'album_name'))
+ if hasattr(app, 'google_photos'):
+ app.previous_picture_url = app.google_photos.upload(app.previous_picture_file, cfg.get('GOOGLE', 'album_name'))
class GooglePhotosApi(object):
@@ -64,16 +74,9 @@ def __init__(self, client_id_file, credentials_filename="google_credentials.dat"
self.client_id_file = client_id_file
self.credentials_file = os.path.join(os.path.dirname(self.client_id_file), credentials_filename)
- self.activated = True
- if not os.path.exists(self.client_id_file) or os.path.getsize(self.client_id_file) == 0:
- if self.client_id_file:
- LOGGER.error("Can not load [GOOGLE][client_id_file]='%s' please check config",
- self.client_id_file)
- self.activated = False
-
self._albums_cache = {} # Keep cache to avoid multiple request
self._credentials = None
- if self.activated and self.is_reachable():
+ if self.is_reachable():
self._session = self._get_authorized_session()
else:
self._session = None
@@ -164,21 +167,22 @@ def create_album(self, album_name):
LOGGER.error("Can not create Google Photos album '%s'", album_name)
return None
- def upload(self, photo_files, album_name):
- """Upload a list of pictures files to the given Google Photos album.
+ def upload(self, filename, album_name):
+ """Upload a photo file to the given Google Photos album.
- :param photo_files: list of photos name with full path
- :type photo_files: file
+ :param filename: photo file full path
+ :type filename: str
:param album_name: name of albums to upload
:type album_name: str
+
+ :returns: URL of the uploaded photo
+ :rtype: str
"""
- if not self.activated:
- # Invalid credentials file
- return
+ photo_url = None
if not self.is_reachable():
LOGGER.error("Google Photos upload failure: no internet connexion!")
- return
+ return photo_url
if not self._credentials:
# Plugin was disabled at startup but activated after
@@ -189,55 +193,50 @@ def upload(self, photo_files, album_name):
album_id = self.create_album(album_name)
if not album_id:
LOGGER.error("Google Photos upload failure: album '%s' not found!", album_name)
- return
+ return photo_url
self._session.headers["Content-type"] = "application/octet-stream"
self._session.headers["X-Goog-Upload-Protocol"] = "raw"
- for filename in photo_files:
+ with open(filename, mode='rb') as fp:
+ data = fp.read()
- try:
- with open(filename, mode='rb') as fp:
- data = fp.read()
- except OSError as err:
- LOGGER.error("Google Photos upload failure: can not read file '%s': %s", filename, err)
- continue
-
- self._session.headers["X-Goog-Upload-File-Name"] = os.path.basename(filename)
-
- LOGGER.info("Uploading picture '%s' to Google Photos", filename)
- upload_token = self._session.post(self.URL + '/uploads', data)
-
- if upload_token.status_code == 200 and upload_token.content:
-
- create_body = json.dumps({"albumId": album_id,
- "newMediaItems": [
- {"description": "",
- "simpleMediaItem": {"uploadToken": upload_token.content.decode()}
- }
- ]}, indent=4)
-
- resp = self._session.post(self.URL + '/mediaItems:batchCreate', create_body).json()
- LOGGER.debug("Google Photos server response: %s", resp)
-
- if "newMediaItemResults" in resp:
- status = resp["newMediaItemResults"][0]["status"]
- if status.get("code") and (status.get("code") > 0):
- LOGGER.error("Google Photos upload failure: can not add '%s' to library: %s",
- os.path.basename(filename), status["message"])
- else:
- LOGGER.info("Google Photos upload successful: '%s' added to album '%s'",
- os.path.basename(filename), album_name)
- else:
- LOGGER.error("Google Photos upload failure: can not add '%s' to library",
- os.path.basename(filename))
+ self._session.headers["X-Goog-Upload-File-Name"] = os.path.basename(filename)
+
+ LOGGER.info("Uploading picture '%s' to Google Photos", filename)
+ upload_token = self._session.post(self.URL + '/uploads', data)
- elif upload_token.status_code != 200:
- LOGGER.error("Google Photos upload failure: can not connect to '%s' (HTTP error %s)",
- self.URL, upload_token.status_code)
+ if upload_token.status_code == 200 and upload_token.content:
+ create_body = json.dumps({"albumId": album_id,
+ "newMediaItems": [
+ {"description": "",
+ "simpleMediaItem": {"uploadToken": upload_token.content.decode()}
+ }
+ ]
+ })
+
+ resp = self._session.post(self.URL + '/mediaItems:batchCreate', create_body).json()
+ LOGGER.debug("Google Photos server response: %s", resp)
+
+ if "newMediaItemResults" in resp:
+ status = resp["newMediaItemResults"][0]["status"]
+ if status.get("code") and (status.get("code") > 0):
+ LOGGER.error("Google Photos upload failure: can not add '%s' to library: %s",
+ os.path.basename(filename), status["message"])
+ else:
+ photo_url = resp["newMediaItemResults"][0]['mediaItem'].get('productUrl')
+ LOGGER.info("Google Photos upload successful: '%s' added to album '%s'",
+ os.path.basename(filename), album_name)
else:
- LOGGER.error("Google Photos upload failure: no response content from server '%s'",
- self.URL)
+ LOGGER.error("Google Photos upload failure: can not add '%s' to library",
+ os.path.basename(filename))
+
+ elif upload_token.status_code != 200:
+ LOGGER.error("Google Photos upload failure: can not connect to '%s' (HTTP error %s)",
+ self.URL, upload_token.status_code)
+ else:
+ LOGGER.error("Google Photos upload failure: no response content from server '%s'",
+ self.URL)
try:
del self._session.headers["Content-type"]
@@ -245,3 +244,5 @@ def upload(self, photo_files, album_name):
del self._session.headers["X-Goog-Upload-File-Name"]
except KeyError:
pass
+
+ return photo_url
| Access to picture URL uploaded on Google Photos
Is it possible, when the photo is uploaded to Google Photos, to scan a QR code with a link to that specific photo?
I know part of the link: https://photos.google.com/share/**AxFF4t56kiJiu89m**/2021-06-11-10-14-08_pibooth.jpg
But what is the part between the stars?
| Hello, I also want to link to the specific photo, but without success... I don't see how to do that with the Google API.
sorry | 2022-04-11T21:51:16 | 0.0 | [] | [] |
||
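A minimal sketch of the upload flow implemented in the pibooth patch above: push the raw bytes to get an upload token, then call `mediaItems:batchCreate`, whose response carries the `productUrl` that the patched `upload()` now returns. It assumes an already-authorized `requests.Session` with the Photos Library scope; the base URL and the helper name `upload_photo` are assumptions for illustration, since the patch only references `self.URL`.

```python
import json
import os

# Assumed base URL of the Google Photos Library API; the patch only shows `self.URL`.
API_URL = "https://photoslibrary.googleapis.com/v1"


def upload_photo(session, filename, album_id):
    """Upload one file and return its productUrl (the shareable per-photo link), or None."""
    with open(filename, mode="rb") as fp:
        data = fp.read()

    # Step 1: push the raw bytes and receive an upload token.
    session.headers["Content-type"] = "application/octet-stream"
    session.headers["X-Goog-Upload-Protocol"] = "raw"
    session.headers["X-Goog-Upload-File-Name"] = os.path.basename(filename)
    token_resp = session.post(API_URL + "/uploads", data)
    if token_resp.status_code != 200 or not token_resp.content:
        return None

    # Step 2: attach the uploaded bytes to an album; the response carries productUrl.
    create_body = json.dumps({
        "albumId": album_id,
        "newMediaItems": [
            {"description": "",
             "simpleMediaItem": {"uploadToken": token_resp.content.decode()}}
        ],
    })
    resp = session.post(API_URL + "/mediaItems:batchCreate", create_body).json()
    results = resp.get("newMediaItemResults", [])
    if results and not results[0]["status"].get("code"):
        return results[0]["mediaItem"].get("productUrl")
    return None
```

That returned URL is what the application can then encode into a QR code for the current picture, which is what the issue in this row is asking for.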
SBU-BMI/wsinfer | SBU-BMI__wsinfer-188 | b05b7ee29e8482d2866c8e076a6f417fc7eda02a | diff --git a/pyproject.toml b/pyproject.toml
index 5800f22..d57c5d4 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -55,7 +55,7 @@ dependencies = [
"torch>=1.7",
"torchvision",
"tqdm",
- "wsinfer-zoo",
+ "wsinfer-zoo>=0.6.2",
]
dynamic = ["version"]
diff --git a/wsinfer/wsi.py b/wsinfer/wsi.py
index 6284178..27205e8 100644
--- a/wsinfer/wsi.py
+++ b/wsinfer/wsi.py
@@ -223,6 +223,8 @@ def _get_mpp_tifffile(slide_path: str | Path) -> tuple[float, float]:
with tifffile.TiffFile(slide_path) as tif:
series0 = tif.series[0]
page0 = series0[0]
+ if not isinstance(page0, tifffile.TiffPage):
+ raise CannotReadSpacing("not a tifffile.TiffPage instance")
try:
resolution_unit = page0.tags["ResolutionUnit"].value
x_resolution = Fraction(*page0.tags["XResolution"].value)
| Generating geojson files on run command
fixes #181
| I'm currently updating the tests. Will push them as and when I resolve them
@kaczmarj Since we've updated the result directories (model-outputs-csv/geojson), we will have to update the tests so that we assert that the CSV and GeoJSON files are stored in the `tmp_path/model-outputs-csv` and `tmp_path/model-outputs-geojson` directories, right? Just trying to get an intuition so that I can modify the tests accordingly.
Updated the tests. All are passing except one. In the test `test_issue_97`, when we are running the command again using `runner.invoke`, it fails because the output directory already exists (for geojson). Perhaps we can let it generate the resulting geojson directory again? Or should I handle it in the test itself?
> Since we've updated the result directories (model-outputs-csv/geojson), we will have to update the tests so that we assert that the CSV and GeoJSON files are stored in the tmp_path/model-outputs-csv and tmp_path/model-outputs-geojson directories, right?
Yes, that is correct.
> it fails because the output directory already exists (for geojson).
I don't see this error in the GitHub Actions logs. What is the traceback?
It was getting raised because we check whether the output directory exists. If it does, we raise a `FileExistsError` (instead of a `click` exception, I believe).
```python
def parallelize_geojson(csvs: list, results_dir: Path, num_workers: int) -> None:
output = results_dir / "model-outputs-geojson"
if not results_dir.exists():
raise FileExistsError(f"results_dir does not exist: {results_dir}")
if output.exists():
# raise FileExistsError("Output directory already exists.")
shutil.rmtree(f"{output}")
# rest of the code
```
To handle that, I'm just deleting the directory if it already exists (using `shutil`), and then it is created again below. I'm doing this so that the test passes, although we would want to change this.
I see. So what we typically do for model outputs is skip any slides whose model output CSVs already exist. We should implement the same behavior for the geojson conversion.
So in the list of `csvs` to be converted, we should remove any that already exist as geojson, so that existing geojsons are not touched.
Got it. I'll make the changes
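A minimal sketch of the skip-existing behavior suggested above, assuming the directory layout from the snippet earlier in this thread: filter out any CSV whose GeoJSON counterpart already exists instead of deleting the output directory. The `filter_unconverted` name and the same-stem `.geojson` naming convention are assumptions, not taken from the wsinfer source.

```python
from __future__ import annotations

from pathlib import Path


def filter_unconverted(csvs: list[Path], results_dir: Path) -> list[Path]:
    """Keep only the CSVs that do not yet have a matching GeoJSON output."""
    output = results_dir / "model-outputs-geojson"
    output.mkdir(parents=True, exist_ok=True)
    todo = []
    for csv_path in csvs:
        geojson_path = output / csv_path.with_suffix(".geojson").name
        if geojson_path.exists():
            continue  # already converted; leave the existing GeoJSON untouched
        todo.append(csv_path)
    return todo
```

This mirrors the existing behavior for model-output CSVs, so re-running the command leaves previously converted GeoJSON files untouched.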
@kaczmarj Not entirely sure why the pytorch-nightly test is failing. Might it be an issue with `slide_path`?
I think there are two issues.
1. The style tests are failing. To fix that, run `isort` and `black` on the code to format it.
2. To fix the pytorch-nightly test, _I think_ we need to check that a certain variable is not None.
https://github.com/SBU-BMI/wsinfer/blob/b05b7ee29e8482d2866c8e076a6f417fc7eda02a/wsinfer/wsi.py#L225
Add the following between lines 225 and 226:
```python
if page0 is None:
raise CannotReadSpacing()
```
I tried a `try-except` too; it didn't work.
I will take a look at this. It could be that something in tifffile has changed slightly. | 2023-09-15T17:54:01 | 0.0 | [] | []
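The fix that ultimately landed (see the `wsinfer/wsi.py` hunk in the patch above) guards against `tif.series[0][0]` not being a `tifffile.TiffPage`. Below is a minimal sketch of a spacing reader built around that guard; the unit-to-micrometer conversion follows standard TIFF `ResolutionUnit` semantics rather than code quoted from wsinfer, and the local `CannotReadSpacing` class stands in for wsinfer's internal exception of the same name.

```python
from __future__ import annotations

from fractions import Fraction
from pathlib import Path

import tifffile


class CannotReadSpacing(Exception):
    """Stand-in for wsinfer's internal exception of the same name."""


def read_mpp(slide_path: str | Path) -> tuple[float, float]:
    """Return (mpp_x, mpp_y) in micrometers per pixel from the TIFF resolution tags."""
    with tifffile.TiffFile(slide_path) as tif:
        page0 = tif.series[0][0]
        # series items are not always TiffPage objects (they can be TiffFrame),
        # so guard before touching .tags — this mirrors the check added in the patch.
        if not isinstance(page0, tifffile.TiffPage):
            raise CannotReadSpacing("not a tifffile.TiffPage instance")
        try:
            unit = page0.tags["ResolutionUnit"].value
            x_res = Fraction(*page0.tags["XResolution"].value)
            y_res = Fraction(*page0.tags["YResolution"].value)
        except KeyError as err:
            raise CannotReadSpacing("missing resolution tags") from err
    # Standard TIFF semantics: 2 = pixels per inch, 3 = pixels per centimeter.
    if unit == 3:
        scale = 10_000.0   # micrometers per centimeter
    elif unit == 2:
        scale = 25_400.0   # micrometers per inch
    else:
        raise CannotReadSpacing(f"unsupported ResolutionUnit: {unit}")
    return float(scale / x_res), float(scale / y_res)
```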