repo (string, 7-59 chars) | instance_id (string, 11-63 chars) | base_commit (string, 40 chars) | patch (string, 167-798k chars) | test_patch (string, 1 distinct value) | problem_statement (string, 20-65.2k chars) | hints_text (string, 0-142k chars) | created_at (timestamp[ns], 2015-08-30 10:31:05 to 2024-12-13 16:08:19) | environment_setup_commit (string, 1 distinct value) | version (string, 1 distinct value) | FAIL_TO_PASS (sequence, length 0) | PASS_TO_PASS (sequence, length 0)
---|---|---|---|---|---|---|---|---|---|---|---|
keystroke3/redpaper | keystroke3__redpaper-6 | 9b3a0094f55518e1c64a692c0f1759a04b5d564c | diff --git a/fetch.py b/fetch.py
index 8b5ca33..e735695 100755
--- a/fetch.py
+++ b/fetch.py
@@ -34,6 +34,7 @@
wall_data_file = config['settings']['wall_data_file']
pictures = config['settings']['download_dir']
d_limit = int(config['settings']['download_limit'])
+subreddit = config['settings']['subreddit']
def auth():
@@ -48,7 +49,7 @@ def auth():
commaScopes="all",
)
# collect data from reddit
- wallpaper = reddit.subreddit("wallpaper+wallpapers")
+ wallpaper = reddit.subreddit(subreddit)
top_paper = wallpaper.hot(limit=d_limit)
diff --git a/settings.py b/settings.py
index d6c7764..7677eab 100644
--- a/settings.py
+++ b/settings.py
@@ -27,6 +27,7 @@
"wall_data.json"),
'Wallpaper_selection_method': "sequential",
'download_limit': 1,
+ 'subreddit': "wallpaper+wallpapers",
}
@@ -44,6 +45,8 @@ def set_settings():
with open(settings_file, "w") as f:
f.write("")
set_settings()
+
+
if not os.path.exists(settings_file):
set_settings()
else:
@@ -53,6 +56,7 @@ def set_settings():
pictures = config['settings']['download_dir']
d_limit = int(config['settings']['download_limit'])
wall_selection_method = config['settings']['wallpaper_selection_method']
+subreddit = config['settings']['subreddit']
global message
message = ""
@@ -139,6 +143,29 @@ def change_path(new_path="", silent=False):
change_path()
+def change_subreddit():
+ """
+ Allows the user to change the subreddit where we fetch the pictures from
+ """
+ global message
+ Red()
+ new_subreddit = input(f"""
+ {green}Enter the just the name of the subreddit.
+ Example wallpapers for reddit.com/r/wallpapers\n
+ Current path is: {subreddit}\n{normal}
+ {red}x{normal} : {blue}main settings{normal}
+ >>> """)
+ if new_subreddit == "x":
+ main_settings()
+ return
+ else:
+ config.set('settings', 'subreddit', str(new_subreddit))
+ set_settings()
+ Red()
+ change_subreddit()
+ return
+
+
def wall_selection():
"""
Allows the user to specify the method to be used when choosing wallpapers
@@ -214,6 +241,7 @@ def main_settings():
{red} 2 {normal}: {blue} Change wallpaper selection method
{normal}
{red} 3 {normal}: {blue} Change the download limit{normal}\n
+ {red} 4 {normal}: {blue} Change subreddit to download from{normal}\n
{red} r {normal}: {blue} Reset to default {normal}\n
{red} x {normal}: {blue} main menu {normal}\n
>>> """)
@@ -223,6 +251,8 @@ def main_settings():
wall_selection()
elif choice == "3":
max_dl_choice()
+ elif choice == "4":
+ change_subreddit()
elif choice == "r" or choice == "R":
restore_default()
main_settings()
| Issue in wall_set.py
After the last committed change to wall_set.py redpaper crashed on line 56:
saved_walls = json.load(data)
due to `data` not being assigned before use.
This was on a default Kali Linux install. Changing methods still resulted in an error, but this time due to attempting to use a closed file.
| 2019-07-30T06:36:48 | 0.0 | [] | [] |
|||
sdaqo/anipy-cli | sdaqo__anipy-cli-136 | bfa5498a5528dcb6365427177e57b7919addf1af | diff --git a/.gitignore b/.gitignore
index 3d78abe5..475cdec6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -8,4 +8,10 @@ anipy_cli.egg-info/
user_files/
anipy_cli/config_personal.py
pypi.sh
-.idea/
\ No newline at end of file
+.idea/
+
+# VSCode
+.vscode/
+
+# Venv
+.venv/
\ No newline at end of file
diff --git a/README.md b/README.md
index 04a7374a..a659659e 100644
--- a/README.md
+++ b/README.md
@@ -47,7 +47,9 @@ Places of the config:
[Sample Config](https://github.com/sdaqo/anipy-cli/blob/master/docs/sample_config.yaml)
-**Attention Windows Users:** If you activate the option `reuse_mpv_window`, you will have to donwload and put the `mpv-2.dll` in your path. To get it go look here: https://sourceforge.net/projects/mpv-player-windows/files/libmpv/
+**Attention Windows Users Using MPV:** If you activate the option `reuse_mpv_window`, you will have to download and put the `mpv-2.dll` in your path. To get it go look here: https://sourceforge.net/projects/mpv-player-windows/files/libmpv/
+
+**Attention Windows Users on Config File Placement:** If you have downloaded Python from the Microsoft Store, your config file will be cached inside of your Python's AppData. For example: `%USERPROFILE%\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\Local\anipy-cli\config.yaml`.
# Usage
diff --git a/anipy_cli/config.py b/anipy_cli/config.py
index cdfd4bbf..94668d0a 100644
--- a/anipy_cli/config.py
+++ b/anipy_cli/config.py
@@ -14,7 +14,7 @@ def __init__(self):
self._config_file, self._yaml_conf = Config._read_config()
if not self._yaml_conf:
- self._yaml_conf = {}
+ self._create_config() # Create config file
@property
def _anipy_cli_folder(self):
| Config not being generated
**Describe the bug**
For new installs the config should be generated and pre-filled with default values based on the Config-Class' @Property fields.
For some reason this doesn't happen.
Even if no config-file exists the config is not "none" when reading from file in:
[config.py :14](https://github.com/sdaqo/anipy-cli/blob/cccf65144cf96d92c8e30fd2584353ae5f0e3c37/anipy_cli/config.py#L14)
```python
with self._config_file.open("r") as conf:
self._yaml_conf = yaml.safe_load(conf)
if self._yaml_conf is None:
# The config file is empty
self._yaml_conf = {}
```
Checked on Windows11 only so far.
| this looks like a windows problem, linux works fine
alright, thanks for checking. I'll look into this
Also for macos, the default config file is not created but works if done by the user
I'm on Windows 11. I'll go ahead and see what I can find out.
# TL;DR
The current version of anipy doesn't create a config file. But when I force it to, it creates it, and reads it, but I cannot find it anywhere on the file system.
# Breakdown
According to #79, the config system was changed so users don't have to manually add vars every update. Great change, but by doing so they also removed the code that creates at least some kind of default file we can find and adjust (`_create_config()` has no references).
To fix this, I added the code that creates the config file, but there is indeed an issue on Windows (I've only test Windows). Python says that the config file was created, it pulls data from it and everything, but I can't actually find the file on my system.
Pulled data evidence:
<img width="368" alt="image" src="https://github.com/sdaqo/anipy-cli/assets/68718280/7aec340e-9589-421d-9978-d646eafe191b">
Can't find it:
<img width="318" alt="image" src="https://github.com/sdaqo/anipy-cli/assets/68718280/72026683-2b13-4d6c-a29d-58f1db48f5cb">
I went through the computer's process of placing the config file, and I still couldn't find it no matter what I did.
I found a Stack Overflow question about it, but there were no answers.
# Update:
When using different Python versions, the one that created the file seems to recognize it and the other does not
<img width="1139" alt="image" src="https://github.com/sdaqo/anipy-cli/assets/68718280/8c793f50-09e8-470e-a463-3dc93a561914">
# GOOD NEWS
I found where Python is actually writing the file. Turns out if you download Python from the Microsoft store, Python gets sandboxed, and anything written to AppData is written into its cache, as seen here:
<img width="470" alt="image" src="https://github.com/sdaqo/anipy-cli/assets/68718280/2f60fc3c-f2e6-4632-8ceb-adf365dbd518">
Unfortunately there aren't many good solutions for solving this, except for knowing that your Python might be sandboxed. So, for now I'll just give you the possible location your config file might get stored on a sandboxed Python on Windows:
`%USERPROFILE%\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\Local\anipy-cli\config.yaml`
I recommend either users download Python directly from the site, or be warned that Microsoft Store Python is sandboxed and will be stored in this location. You could also relocate where the config file is placed if you wish, but I'm not sure where it could go. | 2023-08-30T04:43:44 | 0.0 | [] | [] |
||
TerryHowe/ansible-modules-hashivault | TerryHowe__ansible-modules-hashivault-432 | 9d844336e874c6d8ae6f81b6a71038777e5d637c | diff --git a/ansible/modules/hashivault/hashivault_azure_auth_config.py b/ansible/modules/hashivault/hashivault_azure_auth_config.py
index 61ab2fa0..807b249d 100644
--- a/ansible/modules/hashivault/hashivault_azure_auth_config.py
+++ b/ansible/modules/hashivault/hashivault_azure_auth_config.py
@@ -105,8 +105,9 @@ def hashivault_azure_auth_config(module):
# check if current config matches desired config values, if they dont match, set changed true
for k, v in current_state.items():
- if v != desired_state[k]:
- changed = True
+ if k in desired_state:
+ if v != desired_state[k]:
+ changed = True
# if configs dont match and checkmode is off, complete the change
if changed and not module.check_mode:
| Possible hvac breaking change
Running `ansible-playbook -v test_azure_auth_config.yml`
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: KeyError: 'root_password_ttl'
```
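For illustration only (not part of the issue report), a minimal Python sketch of why comparing a server-reported config against a smaller desired config raises this `KeyError`, and how guarding on `k in desired_state` (the fix applied in the patch above) avoids it. The key names other than `root_password_ttl` are hypothetical stand-ins:
```python
# Vault/hvac reports extra keys (e.g. root_password_ttl) that the module args never set
current_state = {"tenant_id": "abc", "resource": "xyz", "root_password_ttl": 15768000}
desired_state = {"tenant_id": "abc", "resource": "xyz"}

changed = False
for k, v in current_state.items():
    # Old logic: desired_state[k] raises KeyError: 'root_password_ttl'
    # if v != desired_state[k]:
    #     changed = True
    # Guarded logic: only compare keys the user actually manages
    if k in desired_state and v != desired_state[k]:
        changed = True
print(changed)  # False: no managed key differs, and no KeyError
```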
| you could fix this by adding `if k in desired_state:` in the for loop. It prevents inconsistencies between the current and desired state to become a problem. | 2023-03-31T11:32:57 | 0.0 | [] | [] |
||
quic/aimet | quic__aimet-2552 | 486f006961be1e8e2f64ce954b156a9ba41d1a38 | diff --git a/TrainingExtensions/torch/src/python/aimet_torch/elementwise_ops.py b/TrainingExtensions/torch/src/python/aimet_torch/elementwise_ops.py
index 2d801bde0a..c70adce82b 100644
--- a/TrainingExtensions/torch/src/python/aimet_torch/elementwise_ops.py
+++ b/TrainingExtensions/torch/src/python/aimet_torch/elementwise_ops.py
@@ -361,16 +361,27 @@ def forward(self, *args) -> torch.Tensor:
res = []
for index, (boxes, scores) in enumerate(zip(batches_boxes, batch_scores)):
for class_index, classes_score in enumerate(scores):
- filtered_score_ind = (classes_score > self.score_threshold).nonzero()[:, 0]
- boxes = boxes[filtered_score_ind, :]
- classes_score = classes_score[filtered_score_ind]
- temp_res = torchvision.ops.nms(boxes, classes_score, self.iou_threshold)
- res_ = filtered_score_ind[temp_res]
- for val in res_:
- res.append([index, class_index, val.detach()])
- res = res[:(self.max_output_boxes_per_class *(index+1))]
+ nms_output = self.perform_nms_per_class(boxes, classes_score)
+ res_per_class = []
+ for val in nms_output:
+ res_per_class.append([index, class_index, val.detach()])
+ res_per_class = res_per_class[:self.max_output_boxes_per_class]
+ res.extend(res_per_class)
return torch.Tensor(res).type(torch.int64)
+ def perform_nms_per_class(self, boxes: torch.Tensor, classes_score: torch.Tensor) -> torch.Tensor:
+ """
+ Performs NMS per class
+ :param boxes: boxes on which NMS should be performed
+ :param classes_score: corresponding class scores for the boxes
+ :return: returns box indices filtered out by NMS
+ """
+ filtered_score_ind = (classes_score > self.score_threshold).nonzero()[:, 0]
+ filtered_boxes = boxes[filtered_score_ind]
+ filtered_classes_score = classes_score[filtered_score_ind]
+ res_ = torchvision.ops.nms(filtered_boxes, filtered_classes_score, self.iou_threshold)
+ return filtered_score_ind[res_]
+
class GatherNd(torch.nn.Module):
""" GatherNd op implementation"""
| Minor changes to NMS Op implementation
| 2023-11-06T05:11:48 | 0.0 | [] | [] |
|||
AlertaDengue/AlertaDengue | AlertaDengue__AlertaDengue-563 | 8dcd16a8a5555c301b89a8f96f046a76b81420ef | diff --git a/AlertaDengue/dbf/utils.py b/AlertaDengue/dbf/utils.py
index 26601d30..6fd36705 100644
--- a/AlertaDengue/dbf/utils.py
+++ b/AlertaDengue/dbf/utils.py
@@ -80,7 +80,7 @@ def _parse_fields(dbf_name: str, df: gpd) -> pd:
except ValueError:
df[col] = pd.to_datetime(df[col], errors="coerce")
- return df.loc[[all_expected_fields]]
+ return df.loc[:, all_expected_fields]
def chunk_gen(chunksize, totalsize):
| [DBF]: Error slicing expected columns in _parse_fields function
# Objetivos / Objectivos / Purpose
## Geral / General
```
KeyError: "None of [Index([('NU_ANO', 'ID_MUNICIP', 'ID_AGRAVO', 'DT_SIN_PRI', 'SEM_PRI', 'DT_NOTIFIC', 'NU_NOTIFIC', 'SEM_NOT', 'DT_DIGITA', 'DT_NASC', 'NU_IDADE_N', 'CS_SEXO')], dtype='object')] are in the [index]"
```
```
File "/opt/services/AlertaDengue/dbf/utils.py", line 83, in _parse_fields
return df.loc[[all_expected_fields]]
```
[_parse_fields](https://github.com/AlertaDengue/AlertaDengue/blob/main/AlertaDengue/dbf/utils.py#L83)
```
Change to:
df.loc[ :, all_expected_fields]
```
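As an aside (not from the issue itself), a tiny pandas sketch of the difference between the two indexing forms, using made-up values for the column names taken from the error message:
```python
import pandas as pd

df = pd.DataFrame({"NU_ANO": [2022], "ID_MUNICIP": [330455], "CS_SEXO": ["F"]})
cols = ["NU_ANO", "ID_MUNICIP"]

# df.loc[[cols]] nests the column list, so pandas treats it as a *row* label lookup;
# in the reporter's pandas version this raised:
# KeyError: "None of [Index([('NU_ANO', 'ID_MUNICIP')], dtype='object')] are in the [index]"
# df.loc[[cols]]

# df.loc[:, cols] selects every row and only the expected columns
print(df.loc[:, cols])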
| 2022-10-19T10:48:37 | 0.0 | [] | [] |
|||
cytomining/copairs | cytomining__copairs-48 | 11570eb5e03d908e344bb8f7f90940e6dac603e3 | diff --git a/src/copairs/compute.py b/src/copairs/compute.py
index 41512d0..55221e3 100644
--- a/src/copairs/compute.py
+++ b/src/copairs/compute.py
@@ -151,7 +151,7 @@ def compute_p_values(ap_scores, null_confs, null_size: int, seed):
p_values = np.empty(len(ap_scores), dtype=np.float32)
for i, (ap_score, ix) in enumerate(zip(ap_scores, rev_ix)):
# Reverse to get from hi to low
- num = null_size - np.searchsorted(null_dists[ix], ap_score)
+ num = null_size - np.searchsorted(null_dists[ix], ap_score, side='right')
p_values[i] = (num + 1) / (null_size + 1)
return p_values
diff --git a/src/copairs/map.py b/src/copairs/map.py
index fc69403..7c48d3e 100644
--- a/src/copairs/map.py
+++ b/src/copairs/map.py
@@ -12,26 +12,45 @@
logger = logging.getLogger('copairs')
-def evaluate_and_filter(df, columns) -> list:
- '''Evaluate the query and filter the dataframe'''
+def extract_filters(columns, df_columns) -> list:
+ '''Extract and validate filters from columns'''
parsed_cols = []
+ queries_to_eval = []
+
for col in columns:
- if col in df.columns:
+ if col in df_columns:
parsed_cols.append(col)
continue
-
column_names = re.findall(r'(\w+)\s*[=<>!]+', col)
- valid_column_names = [col for col in column_names if col in df.columns]
+
+ valid_column_names = [col for col in column_names if col in df_columns]
if not valid_column_names:
raise ValueError(f"Invalid query or column name: {col}")
+
+ queries_to_eval.append(col)
+ parsed_cols.extend(valid_column_names)
+
+ if len(parsed_cols) != len(set(parsed_cols)):
+ raise ValueError(f"Duplicate queries for column: {col}")
+
+ return queries_to_eval, parsed_cols
+
+
+def apply_filters(df, query_list):
+ '''Combine and apply filters to dataframe'''
+ if not query_list:
+ return df
+
+ combined_query = " & ".join(f"({query})" for query in query_list)
+ try:
+ df_filtered = df.query(combined_query)
+ except Exception as e:
+ raise ValueError(f"Invalid combined query expression: {combined_query}. Error: {e}")
- try:
- df = df.query(col)
- parsed_cols.extend(valid_column_names)
- except:
- raise ValueError(f"Invalid query expression: {col}")
+ if df_filtered.empty:
+ raise ValueError(f"Empty dataframe after processing combined query: {combined_query}")
- return df, parsed_cols
+ return df_filtered
def flatten_str_list(*args):
@@ -55,7 +74,9 @@ def create_matcher(obs: pd.DataFrame,
neg_diffby,
multilabel_col=None):
columns = flatten_str_list(pos_sameby, pos_diffby, neg_sameby, neg_diffby)
- obs, columns = evaluate_and_filter(obs, columns)
+ query_list, columns = extract_filters(columns, obs.columns)
+ obs = apply_filters(obs, query_list)
+
if multilabel_col:
return MatcherMultilabel(obs, columns, multilabel_col, seed=0)
return Matcher(obs, columns, seed=0)
| [bug] AP of 1.0 should have min p-value
When calculated AP is 1.0, corresponding p-value is too large.
For example, for 2 positive profiles, 16 controls, null size 1000, and AP = 1, p-value = 0.061938, which is incorrect, because the proportion of null to the right of the 1.0 should be 0, and p-value [should](https://github.com/cytomining/copairs/blob/11570eb5e03d908e344bb8f7f90940e6dac603e3/src/copairs/compute.py#L155) be `p=(num + 1) / (null_size + 1) = 1/1001 = ~0.000999`
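For illustration (not part of the issue report), a small NumPy sketch of why `side='right'` matters when the sorted null distribution has ties at the observed AP; the numbers are made up:
```python
import numpy as np

null_dist = np.sort([0.2, 0.5, 1.0, 1.0, 1.0])  # hypothetical sorted null APs with ties at 1.0
ap_score, null_size = 1.0, 5

# side='left' (the old default) finds the first tie, so tied null values count as "greater than" the score
num_left = null_size - np.searchsorted(null_dist, ap_score)                 # 3
# side='right' finds the position after the last tie: only strictly greater values remain
num_right = null_size - np.searchsorted(null_dist, ap_score, side="right")  # 0

print((num_left + 1) / (null_size + 1))   # ~0.67, inflated p-value
print((num_right + 1) / (null_size + 1))  # ~0.17, the minimum possible p-value here
```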
| 2023-11-14T03:58:04 | 0.0 | [] | [] |
|||
neuropsychology/NeuroKit | neuropsychology__NeuroKit-526 | 8e714580c6a3ce8012c27b7587e035f366ec1b0a | diff --git a/NEWS.rst b/NEWS.rst
index a70741f95a..cf54b29508 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -19,7 +19,7 @@ New Features
Fixes
+++++++++++++
-* None
+* Ensure detected offset in `emg_activation()` is not beyond signal length
0.1.4.1
diff --git a/neurokit2/emg/emg_activation.py b/neurokit2/emg/emg_activation.py
index 42f74d4a32..edd8b63ba5 100644
--- a/neurokit2/emg/emg_activation.py
+++ b/neurokit2/emg/emg_activation.py
@@ -375,10 +375,13 @@ def _emg_activation_activations(activity, duration_min=0.05):
baseline = events_find(activity == 0, threshold=0.5, threshold_keep="above", duration_min=duration_min)
baseline["offset"] = baseline["onset"] + baseline["duration"]
- # Cross-comparison
- valid = np.isin(activations["onset"], baseline["offset"])
+ # Cross-comparison
+ valid = np.isin(activations["onset"], baseline["offset"])
onsets = activations["onset"][valid]
- offsets = activations["offset"][valid]
+ offsets = activations["offset"][valid]
+
+ # make sure offset indices are within length of signal
+ offsets = offsets[offsets < len(activity)]
new_activity = np.array([])
for x, y in zip(onsets, offsets):
| emg_process() activation offset on last element bug
Hello!
I seem to have found a small bug upon running the neurokit2 _emg_process()_ function with a data structure where the signal's activation offset is the last element of the structure, it returned error saying that the element wasn't on the structure.
I managed to eliminate this error by making a simple adjustment on **neurokit2/signal/signal_formatpeaks.py line 67** as the following image shows:

With this change i managed to run the function to my file with no problems.
The versions my system is running are:
- neurokit2==0.1.2
- tensorflow==2.4.1
- pandas==1.2.2
| Hi 👋 Thanks for reaching out and opening your first issue here! We'll try to come back to you as soon as possible. ❤️
Hi @Vasco-Cardoso, indeed it is important to ensure that the detected emg indices are within the length of the signal. The offset being the last element is an edge case that we didn't consider - thanks for pointing that out and proposing a potential solution. However, I think the issue here is more to do with making sure that the extracted onsets/offsets (from `emg_activation`) are within the signal before letting `signal_formatpeaks` sanitize them - so I'd say it's best to leave the latter untouched.
https://github.com/neuropsychology/NeuroKit/blob/c1104386655724a5c624e740abf651c939fc5e48/neurokit2/emg/emg_activation.py#L370-L382
@DominiqueMakowski We can make a small modification to enforce this:
```
# make sure indices are within length of signal
onsets = np.array([i for i in activations["onset"][valid] if i < len(activity)])
offsets = np.array([i for i in activations["offset"][valid] if i < len(activity)])
```
can't we simply do
```python
# Cross-comparison
valid = np.isin(activations["onset"], baseline["offset"])
onsets = activations["onset"][valid]
offsets = activations["offset"][valid]
# make sure indices are within length of signal
onsets = onsets[onsets < len(activity)]
offsets = offsets[offsets < len(activity)]
```
But how come it's possible to have onsets and offsets bigger than the length?
This probably won't be a problem for onsets actually, just offsets - since in our code offset indices are derived by `activations["onset"] + activations["duration"]` so in this edge case here where the activated portions (durations) end exactly at the last data point, the offset then becomes detected as `len(signal) + 1` index
And yes you're right @DominiqueMakowski `onsets = onsets[onsets < len(activity)]` is better
| 2021-09-01T10:08:21 | 0.0 | [] | [] |
||
vanvalenlab/deepcell-tracking | vanvalenlab__deepcell-tracking-115 | 49d8ca2261337d4668f205d7c0cbff60fae5ddb5 | diff --git a/deepcell_tracking/utils.py b/deepcell_tracking/utils.py
index 121bfc3..e7a635d 100644
--- a/deepcell_tracking/utils.py
+++ b/deepcell_tracking/utils.py
@@ -498,9 +498,10 @@ def get_image_features(X, y, appearance_dim=32, crop_mode='resize', norm=True):
# Check data and normalize
if len(idx) > 0:
- mean = np.mean(app[idx])
- std = np.std(app[idx])
- app[idx] = (app[idx] - mean) / std
+ masked_app = app[idx]
+ mean = np.mean(masked_app)
+ std = np.std(masked_app)
+ app[idx] = (masked_app - mean) / std
appearances[i] = app
| Cast data to correct type in `get_image_features`
This PR fixes a bug in `get_image_features`. If the X data is passed in with an integer type (instead of float), the output of `crop_mode='fixed'` and `norm=True` is incorrect. In the examples below, the first image is incorrect while the second is correct.
This PR eliminates the bug by casting X data to float32 and y data to int32 to avoid incorrect use of the function.
<img width="420" alt="Screen Shot 2023-01-24 at 7 34 48 PM" src="https://user-images.githubusercontent.com/20373588/214474471-a4a41fe9-4e58-44a5-8986-2001c0827822.png">
<img width="439" alt="Screen Shot 2023-01-24 at 7 34 38 PM" src="https://user-images.githubusercontent.com/20373588/214474468-1c6ecc00-ec06-4ec6-821b-e5b67b5e9012.png">
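For context only (not code from this PR), a minimal NumPy sketch of the integer-dtype pitfall the description refers to: writing normalized values back into an int array truncates them, which is why casting X to float32 matters. The array values are made up:
```python
import numpy as np

# Hypothetical integer-typed appearance crop, normalized in place over a foreground mask
app = np.array([[10, 20], [30, 40]], dtype=np.int32)
idx = app > 0
mean, std = app[idx].mean(), app[idx].std()

app[idx] = (app[idx] - mean) / std
print(app)    # [[-1  0] [ 0  1]] -- float results truncated by the int32 dtype

app_f = np.array([[10, 20], [30, 40]], dtype=np.float32)
app_f[idx] = (app_f[idx] - mean) / std
print(app_f)  # [[-1.34 -0.45] [ 0.45  1.34]] -- correct normalized appearance
```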
| 2023-01-25T04:48:50 | 0.0 | [] | [] |
|||
qcpydev/qcpy | qcpydev__qcpy-110 | f101eebfc859542ef379b214912c947983d1a79d | diff --git a/src/visualize/bloch.py b/src/visualize/bloch.py
index 749b0da..56be747 100644
--- a/src/visualize/bloch.py
+++ b/src/visualize/bloch.py
@@ -1,7 +1,11 @@
+import re
+
import matplotlib.pyplot as plt
import numpy as np
-from .base import sphere, theme, light_mode
-from ..tools import probability, amplitude
+
+from ..errors import BlochSphereOutOfRangeError, InvalidSavePathError
+from ..tools import amplitude, probability
+from .base import light_mode, sphere, theme
def bloch(
@@ -11,8 +15,24 @@ def bloch(
show: bool = True,
light: bool = False,
):
- amplitutes = amplitude(quantumstate)
+ """Creates a qsphere visualization that can be interacted with.
+ Args:
+ quantum_state (ndarray/QuantumCircuit): State vector array or qcpy quantum circuit.
+ path (str): The path in which the image file will be saved when save is set true.
+ save (bool): Will save an image in the working directory when this boolean is true.
+ show (bool): Boolean to turn on/off the qsphere being opened in matplotlib.
+ light (bool): Will change the default dark theme mode to a light theme mode.
+ Returns:
+ None
+ """
+ if save and re.search(r"[<>:/\\|?*]", path) or len(path) > 255:
+ raise InvalidSavePathError("Invalid file name")
+ amplitudes = amplitude(quantumstate)
phase_angles = probability(quantumstate, False)
+ if amplitudes.size > 2:
+ BlochSphereOutOfRangeError(
+ "Bloch sphere only accepts a single qubit quantum circuit"
+ )
light_mode(light)
ax = sphere(theme.BACKGROUND_COLOR)
ax.quiver(1, 0, 0, 0.75, 0, 0, color="lightgray")
@@ -25,7 +45,7 @@ def bloch(
ax.quiver(0, 0, -1, 0, 0, -0.75, color="lightgray")
ax.text(0, 0, -2, "-z", color="gray")
ax.text(0.1, 0, -1.5, "|1>", color="gray")
- theta = np.arcsin(amplitutes[1]) * 2
+ theta = np.arcsin(amplitudes[1]) * 2
phi = phase_angles[1]
x = 1 * np.sin(theta) * np.cos(phi)
y = 1 * np.sin(theta) * np.sin(phi)
diff --git a/src/visualize/probability.py b/src/visualize/probability.py
index 50aadc0..1880e88 100644
--- a/src/visualize/probability.py
+++ b/src/visualize/probability.py
@@ -1,8 +1,12 @@
+import re
+
import matplotlib.pyplot as plt
-from .base import graph, light_mode, theme
-from ..tools import probability as prob
import numpy as np
+from ..errors import InvalidSavePathError
+from ..tools import probability as prob
+from .base import graph, light_mode, theme
+
def probability(
state: any,
@@ -11,26 +15,33 @@ def probability(
show: bool = True,
light: bool = False,
):
+ """Creates a probability representation of a given quantum circuit in matplotlib.
+ Args:
+ quantum_state (ndarray/QuantumCircuit): State vector array or qcpy quantum circuit.
+ path (str): The path in which the image file will be saved when save is set true.
+ save (bool): Will save an image in the working directory when this boolean is true.
+ show (bool): Boolean to turn on/off the qsphere being opened in matplotlib.
+ light (bool): Will change the default dark theme mode to a light theme mode.
+ Returns:
+ None
+ """
+ if save and re.search(r"[<>:/\\|?*]", path) or len(path) > 255:
+ raise InvalidSavePathError("Invalid file name")
probabilities = prob(state)
num_qubits = int(np.log2(probabilities.size))
state_list = [format(i, "b").zfill(num_qubits) for i in range(2**num_qubits)]
percents = [i * 100 for i in probabilities]
-
plt.clf()
plt.close()
-
light_mode(light)
ax = graph(theme.TEXT_COLOR, theme.BACKGROUND_COLOR, num_qubits)
ax.bar(state_list, percents, color="#39c0ba")
-
plt.xlabel("Computational basis states", color=theme.ACCENT_COLOR)
plt.ylabel("Probability (%)", labelpad=5, color=theme.ACCENT_COLOR)
plt.title("Probabilities", pad=10, color=theme.ACCENT_COLOR)
plt.tight_layout()
-
if save:
plt.savefig(path)
if show:
plt.show()
-
return
diff --git a/src/visualize/q_sphere.py b/src/visualize/q_sphere.py
index 2406d55..6fc769a 100644
--- a/src/visualize/q_sphere.py
+++ b/src/visualize/q_sphere.py
@@ -1,8 +1,10 @@
import matplotlib.pyplot as plt
from numpy import pi, log2, ndarray, cos, sin, linspace
import math
+import re
from typing import Union
from ..quantum_circuit import QuantumCircuit
+from ..errors import InvalidSavePathError
from .base import (
sphere,
color_bar,
@@ -19,6 +21,18 @@ def q_sphere(
show: bool = True,
light: bool = False,
) -> None:
+ """Creates a qsphere visualization that can be interacted with.
+ Args:
+ quantum_state (ndarray/QuantumCircuit): State vector array or qcpy quantum circuit.
+ path (str): The path in which the image file will be saved when save is set true.
+ save (bool): Will save an image in the working directory when this boolean is true.
+ show (bool): Boolean to turn on/off the qsphere being opened in matplotlib.
+ light (bool): Will change the default dark theme mode to a light theme mode.
+ Returns:
+ None
+ """
+ if save and re.search(r"[<>:/\\|?*]", path) or len(path) > 255:
+ raise InvalidSavePathError("Invalid file name")
colors = plt.get_cmap("hsv")
norm = plt.Normalize(0, pi * 2)
ax = sphere(theme.BACKGROUND_COLOR)
diff --git a/src/visualize/state_vector.py b/src/visualize/state_vector.py
index d5960f8..dc6eec2 100644
--- a/src/visualize/state_vector.py
+++ b/src/visualize/state_vector.py
@@ -1,6 +1,7 @@
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import rgb2hex
+from ..errors import *
from .base.graph import graph
from ..tools import amplitude, phaseangle
from .base import color_bar, theme, light_mode
@@ -13,6 +14,18 @@ def state_vector(
show: bool = True,
light: bool = False,
):
+ """Outputs a state vector representation from a given quantum circuit in matplotlib.
+ Args:
+ quantum_state (ndarray/QuantumCircuit): State vector array or qcpy quantum circuit.
+ path (str): The path in which the image file will be saved when save is set true.
+ save (bool): Will save an image in the working directory when this boolean is true.
+ show (bool): Boolean to turn on/off the qsphere being opened in matplotlib.
+ light (bool): Will change the default dark theme mode to a light theme mode.
+ Returns:
+ None
+ """
+ if save and re.search(r"[<>:/\\|?*]", path) or len(filename) > 255:
+ raise InvalidSavePathError("Invalid file name")
amplitudes = amplitude(circuit)
phase_angles = phaseangle(circuit)
num_qubits = int(np.log2(amplitudes.size))
@@ -28,7 +41,6 @@ def state_vector(
plt.xlabel("Computational basis states", color=theme.TEXT_COLOR)
plt.ylabel("Amplitutde", labelpad=5, color=theme.TEXT_COLOR)
plt.title("State Vector", pad=10, color=theme.TEXT_COLOR)
-
plt.tight_layout()
if save:
plt.savefig(path)
| visualize section needs error classes to handle code coverage
Need to implement current error classes into the visualize, possibly implementing more as more base cases arise.
Visualize part of package needs docstrings
| 2024-11-02T07:56:14 | 0.0 | [] | [] |
|||
microsoft/responsible-ai-toolbox | microsoft__responsible-ai-toolbox-2510 | 1ef65b802f4ff6dbf9f4ae379552ee52b1e9acf2 | diff --git a/notebooks/responsibleaidashboard/text/genai-integration-demo.ipynb b/notebooks/responsibleaidashboard/text/genai-integration-demo.ipynb
index a72a206393..26a149efe0 100644
--- a/notebooks/responsibleaidashboard/text/genai-integration-demo.ipynb
+++ b/notebooks/responsibleaidashboard/text/genai-integration-demo.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -17,20 +17,23 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 2,
"metadata": {},
- "outputs": [],
- "source": [
- "def replace_error_chars(message:str):\n",
- " message = message.replace('`', '')\n",
- " return message"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "Dataset({\n",
+ " features: ['id', 'title', 'context', 'question', 'answers'],\n",
+ " num_rows: 87599\n",
+ "})"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
"dataset = datasets.load_dataset(\"squad\", split=\"train\")\n",
"dataset"
@@ -38,7 +41,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -50,20 +53,83 @@
"for row in dataset:\n",
" context.append(row['context'])\n",
" questions.append(row['question'])\n",
- " answers.append(replace_error_chars(row['answers']['text'][0]))\n",
+ " answers.append(row['answers']['text'][0])\n",
" templated_prompt = template.format(context=row['context'], question=row['question'])\n",
" prompts.append(templated_prompt)"
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 4,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "<div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>prompt</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>2</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>3</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>4</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table>\n",
+ "</div>"
+ ],
+ "text/plain": [
+ " prompt\n",
+ "0 Answer the question given the context.\\n\\ncont...\n",
+ "1 Answer the question given the context.\\n\\ncont...\n",
+ "2 Answer the question given the context.\\n\\ncont...\n",
+ "3 Answer the question given the context.\\n\\ncont...\n",
+ "4 Answer the question given the context.\\n\\ncont..."
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
"data = pd.DataFrame({\n",
- " 'context': context,\n",
- " 'questions': questions,\n",
+ " # 'context': context,\n",
+ " # 'questions': questions,\n",
" # 'answers': answers,\n",
" 'prompt' : prompts})\n",
"test_data = data[:3]\n",
@@ -72,7 +138,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -102,7 +168,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -118,7 +184,7 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
@@ -139,9 +205,17 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 8,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Dataset download attempt 1 of 4\n"
+ ]
+ }
+ ],
"source": [
"from responsibleai_text import RAITextInsights, ModelTask\n",
"from raiwidgets import ResponsibleAIDashboard"
@@ -149,51 +223,162 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 9,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "<div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>prompt</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>2</th>\n",
+ " <td>Answer the question given the context.\\n\\ncont...</td>\n",
+ " </tr>\n",
+ " </tbody>\n",
+ "</table>\n",
+ "</div>"
+ ],
+ "text/plain": [
+ " prompt\n",
+ "0 Answer the question given the context.\\n\\ncont...\n",
+ "1 Answer the question given the context.\\n\\ncont...\n",
+ "2 Answer the question given the context.\\n\\ncont..."
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
"test_data.head()"
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 10,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "feature extraction: 0it [00:00, ?it/s]"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "feature extraction: 3it [00:00, 3.04it/s]\n",
+ "Failed to parse metric `This is a dummy answer`: invalid literal for int() with base 10: 'This is a dummy answer'\n",
+ "Failed to parse metric `This is a dummy answer`: invalid literal for int() with base 10: 'This is a dummy answer'\n",
+ "Failed to parse metric `This is a dummy answer`: invalid literal for int() with base 10: 'This is a dummy answer'\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "computing coherence score\n",
+ "coherence score\n",
+ "[0, 0, 0]\n",
+ "ext_dataset\n",
+ "['positive_words', 'negative_words', 'negation_words', 'negated_entities', 'named_persons', 'sentence_length', 'target_score']\n",
+ " positive_words negative_words negation_words negated_entities \\\n",
+ "0 50 0 0 0 \n",
+ "1 50 0 0 0 \n",
+ "2 52 0 0 0 \n",
+ "\n",
+ " named_persons sentence_length target_score \n",
+ "0 3 827 5 \n",
+ "1 2 805 5 \n",
+ "2 3 832 5 \n"
+ ]
+ }
+ ],
"source": [
"rai_insights = RAITextInsights(\n",
" pipeline_model, test_data, None,\n",
- " task_type=ModelTask.GENERATIVE_TEXT)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# TODO: Remove this once the insights object is updated to handle this\n",
- "rai_insights.temp_questions = test_data['questions']\n",
- "rai_insights.temp_context = test_data['context']\n",
- "rai_insights.temp_eval_model = eval_model"
+ " task_type=ModelTask.GENERATIVE_TEXT,\n",
+ " text_column='prompt')"
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 11,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "================================================================================\n",
+ "Error Analysis\n",
+ "Current Status: Generating error analysis reports.\n",
+ "Current Status: Finished generating error analysis reports.\n",
+ "Time taken: 0.0 min 0.3656380000002173 sec\n",
+ "================================================================================\n"
+ ]
+ }
+ ],
"source": [
"rai_insights.error_analysis.add()\n",
- "# rai_insights.compute()"
+ "rai_insights.compute()"
]
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 12,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "ResponsibleAI started at http://localhost:8704\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "<raiwidgets.responsibleai_dashboard.ResponsibleAIDashboard at 0x2858abb3e20>"
+ ]
+ },
+ "execution_count": 12,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
"ResponsibleAIDashboard(rai_insights)"
]
diff --git a/raiwidgets/raiwidgets/responsibleai_dashboard.py b/raiwidgets/raiwidgets/responsibleai_dashboard.py
index fc194258ad..17e752ed70 100644
--- a/raiwidgets/raiwidgets/responsibleai_dashboard.py
+++ b/raiwidgets/raiwidgets/responsibleai_dashboard.py
@@ -117,6 +117,15 @@ def get_question_answering_metrics():
methods=["POST"]
)
+ def get_generative_text_metrics():
+ data = request.get_json(force=True)
+ return jsonify(self.input.get_generative_text_metrics(data))
+ self.add_url_rule(
+ get_generative_text_metrics,
+ '/get_generative_text_metrics',
+ methods=["POST"]
+ )
+
if hasattr(self._service, 'socketio'):
@self._service.socketio.on('handle_object_detection_json')
def handle_object_detection_json(od_json):
diff --git a/raiwidgets/raiwidgets/responsibleai_dashboard_input.py b/raiwidgets/raiwidgets/responsibleai_dashboard_input.py
index 0df2fdf3f2..830701e861 100644
--- a/raiwidgets/raiwidgets/responsibleai_dashboard_input.py
+++ b/raiwidgets/raiwidgets/responsibleai_dashboard_input.py
@@ -171,13 +171,19 @@ def _prepare_filtered_error_analysis_data(self, features, filters,
def debug_ml(self, data):
try:
- features = data[0]
+ features = data[0] # TODO: Remove prompt feature
filters = data[1]
composite_filters = data[2]
max_depth = data[3]
num_leaves = data[4]
min_child_samples = data[5]
metric = display_name_to_metric[data[6]]
+ text_cols = self._analysis._text_column
+ if text_cols is None:
+ text_cols = []
+ elif isinstance(text_cols, str):
+ text_cols = [text_cols]
+ features = [f for f in features if f not in text_cols]
filtered_data_df = self._prepare_filtered_error_analysis_data(
features, filters, composite_filters, metric)
@@ -484,3 +490,35 @@ def get_question_answering_metrics(self, post_data):
"inner error: {}".format(e_str),
WidgetRequestResponseConstants.data: []
}
+
+ def get_generative_text_metrics(self, post_data):
+ """Flask endpoint function to get Model Overview metrics
+ for the Generative Text scenario.
+
+ :param post_data: List of inputs in the order
+ # TODO: What is the data we are getting here?
+ (tentative) [true_y, predicted_y, aggregate_method, class_name, iou_threshold].
+ :type post_data: List
+
+ :return: JSON/dict data response
+ :rtype: Dict[str, List]
+ """
+ try:
+ selection_indexes = post_data[0]
+ generative_text_cache = post_data[1]
+ exp = self._analysis.compute_genai_metrics(
+ selection_indexes,
+ generative_text_cache
+ )
+ return {
+ WidgetRequestResponseConstants.data: exp
+ }
+ except Exception as e:
+ print(e)
+ traceback.print_exc()
+ e_str = _format_exception(e)
+ return {
+ WidgetRequestResponseConstants.error:
+ EXP_VIZ_ERR_MSG.format(e_str),
+ WidgetRequestResponseConstants.data: []
+ }
diff --git a/responsibleai_text/responsibleai_text/managers/error_analysis_manager.py b/responsibleai_text/responsibleai_text/managers/error_analysis_manager.py
index 7abd6a2c25..461ca23772 100644
--- a/responsibleai_text/responsibleai_text/managers/error_analysis_manager.py
+++ b/responsibleai_text/responsibleai_text/managers/error_analysis_manager.py
@@ -12,6 +12,7 @@
import pandas as pd
from ml_wrappers import wrap_model
+from erroranalysis._internal.constants import ModelTask as ErrorAnalysisTask
from erroranalysis._internal.error_analyzer import ModelAnalyzer
from erroranalysis._internal.error_report import as_error_report
from responsibleai._tools.shared.state_directory_management import \
@@ -21,6 +22,7 @@
ErrorAnalysisManager as BaseErrorAnalysisManager
from responsibleai.managers.error_analysis_manager import as_error_config
from responsibleai_text.common.constants import ModelTask
+from responsibleai_text.utils.genai_metrics.metrics import get_genai_metric
from responsibleai_text.utils.feature_extractors import get_text_columns
LABELS = 'labels'
@@ -84,10 +86,20 @@ def __init__(self, model, dataset, is_multilabel, task_type, classes=None):
self.dataset.loc[:, ['context', 'questions']])
self.predictions = np.array(self.predictions)
elif self.task_type == ModelTask.GENERATIVE_TEXT:
- # FIXME: Copying from QUESTION_ANSWERING for now
- self.predictions = self.model.predict(
- self.dataset.loc[:, ['context', 'questions']])
- self.predictions = np.array(self.predictions)
+ # FIXME: Making constant predictions for now
+ # print('self dataset')
+ # print(self.dataset)
+ # self.predictions = [4] * len(self.dataset)
+ # self.predictions = np.array(self.predictions)
+ print('computing coherence score')
+ coherence = get_genai_metric(
+ 'coherence',
+ predictions=self.model.predict(self.dataset),
+ references=dataset['prompt'],
+ wrapper_model=self.model)
+ print('coherence score')
+ print(coherence['scores'])
+ self.predictions = np.array(coherence['scores'])
else:
raise ValueError("Unknown task type: {}".format(self.task_type))
@@ -198,9 +210,17 @@ def __init__(self, model: Any, dataset: pd.DataFrame,
task_type, index_classes)
if categorical_features is None:
categorical_features = []
+ if task_type == ModelTask.GENERATIVE_TEXT:
+ sup_task_type = ErrorAnalysisTask.REGRESSION
+ ext_dataset = ext_dataset.copy()
+ del ext_dataset['prompt']
+ ext_dataset['target_score'] = 5
+ target_column = 'target_score'
+ else:
+ sup_task_type = ErrorAnalysisTask.CLASSIFICATION
super(ErrorAnalysisManager, self).__init__(
index_predictor, ext_dataset, target_column,
- classes, categorical_features)
+ classes, categorical_features, model_task=sup_task_type)
@staticmethod
def _create_index_predictor(model, dataset, target_column,
diff --git a/responsibleai_text/responsibleai_text/rai_text_insights/rai_text_insights.py b/responsibleai_text/responsibleai_text/rai_text_insights/rai_text_insights.py
index a37faedd7c..8a3abf16a5 100644
--- a/responsibleai_text/responsibleai_text/rai_text_insights/rai_text_insights.py
+++ b/responsibleai_text/responsibleai_text/rai_text_insights/rai_text_insights.py
@@ -30,6 +30,7 @@
from responsibleai_text.managers.explainer_manager import ExplainerManager
from responsibleai_text.utils.feature_extractors import (extract_features,
get_text_columns)
+from responsibleai_text.utils.genai_metrics.metrics import get_genai_metric
module_logger = logging.getLogger(__name__)
module_logger.setLevel(logging.INFO)
@@ -106,7 +107,6 @@ def _add_extra_metadata_features(task_type, feature_metadata):
feature_metadata.context_col = 'context'
return feature_metadata
-
class RAITextInsights(RAIBaseInsights):
"""Defines the top-level RAITextInsights API.
@@ -616,14 +616,16 @@ def _get_dataset(self):
# add prompt and (optionally) context to dataset
# for generative text tasks
if self.task_type == ModelTask.GENERATIVE_TEXT:
- prompt = self.test[self._feature_metadata.prompt_col]
- context = self.test.get(self._feature_metadata.context_col)
-
- dashboard_dataset.prompt = convert_to_list(prompt)
- if context is None:
- dashboard_dataset.context = None
- else:
- dashboard_dataset.context = convert_to_list(context)
+ # prompt = self.test[self._feature_metadata.prompt_col]
+ # context = self.test.get(self._feature_metadata.context_col)
+
+ # dashboard_dataset.prompt = convert_to_list(prompt)
+ # if context is None:
+ # dashboard_dataset.context = None
+ # else:
+ # dashboard_dataset.context = convert_to_list(context)
+ # NOT DOING FOR NOW
+ pass
return dashboard_dataset
@@ -895,86 +897,77 @@ def compute_genai_metrics(
question_answering_cache
):
print('compute_genai_metrics')
- curr_file_dir = Path(__file__).resolve().parent
dashboard_dataset = self.get_data().dataset
+ prompt_idx = dashboard_dataset.feature_names.index('prompt')
+ prompts = [feat[prompt_idx] for feat in dashboard_dataset.features]
true_y = dashboard_dataset.true_y
predicted_y = dashboard_dataset.predicted_y
- eval_model = self.temp_eval_model
- questions = self.temp_questions
- context = self.temp_context
-
all_cohort_metrics = []
for cohort_indices in selection_indexes:
- print('cohort metrics')
- true_y_cohort = [true_y[cohort_index] for cohort_index
- in cohort_indices]
+ cohort_metrics = dict()
+
+ if true_y is None:
+ true_y_cohort = None
+ else:
+ true_y_cohort = [true_y[cohort_index] for cohort_index
+ in cohort_indices]
predicted_y_cohort = [predicted_y[cohort_index] for cohort_index
in cohort_indices]
- questions_cohort = [questions[cohort_index] for cohort_index
- in cohort_indices]
- context_cohort = [context[cohort_index] for cohort_index
- in cohort_indices]
+ prompts_cohort = [prompts[cohort_index] for cohort_index
+ in cohort_indices]
try:
- print('exact match')
- exact_match = evaluate.load('exact_match')
- exact_match_results = exact_match.compute(
- predictions=predicted_y_cohort, references=true_y_cohort)
+ if true_y_cohort is not None:
+ exact_match = evaluate.load('exact_match')
+ cohort_metrics['exact_match'] = exact_match.compute(
+ predictions=predicted_y_cohort, references=true_y_cohort)
- print('coherence')
- coherence = evaluate.load(
- str(curr_file_dir.joinpath('metrics/coherence.py')))
- coherence_results = coherence.compute(
+ cohort_metrics['coherence'] = get_genai_metric(
+ 'coherence',
predictions=predicted_y_cohort,
- references=questions_cohort,
- wrapper_model=eval_model)
+ references=prompts_cohort,
+ wrapper_model=self._wrapped_model
+ )
# coherence_results = {'scores' : [3.4]}
- print('equivalence')
- equivalence = evaluate.load(
- str(curr_file_dir.joinpath('metrics/equivalence.py')))
- equivalence_results = equivalence.compute(
+ if true_y_cohort is not None:
+ cohort_metrics['equivalence'] = get_genai_metric(
+ 'equivalence',
+ predictions=predicted_y_cohort,
+ references=prompts_cohort,
+ answers=true_y_cohort,
+ wrapper_model=self._wrapped_model
+ )
+ # equivalence_results = {'scores' : [3.4]}
+
+ cohort_metrics['fluency'] = get_genai_metric(
+ 'fluency',
predictions=predicted_y_cohort,
- references=questions_cohort,
- answers=true_y_cohort,
- wrapper_model=eval_model)
-
- print('fluency')
- fluency = evaluate.load(
- str(curr_file_dir.joinpath('metrics/fluency.py')))
- fluency_results = fluency.compute(
- predictions=predicted_y_cohort,
- references=questions_cohort,
- wrapper_model=eval_model)
+ references=prompts_cohort,
+ wrapper_model=self._wrapped_model
+ )
+ # fluency_results = {'scores' : [3.4]}
print('groundedness')
- # groundedness = evaluate.load(
- # str(curr_file_dir.joinpath('metrics/groundedness.py')))
- # groundedness_results = groundedness.compute(
- # predictions=predicted_y_cohort,
- # references=context_cohort,
- # wrapper_model=eval_model)
- groundedness_results = {'scores' : [3.4]}
+ cohort_metrics['groundedness'] = get_genai_metric(
+ 'groundedness',
+ predictions=predicted_y_cohort,
+ references=prompts_cohort,
+ wrapper_model=self._wrapped_model
+ )
+ # groundedness_results = {'scores' : [3.4]}
print('relevance')
- # relevance = evaluate.load(
- # str(curr_file_dir.joinpath('metrics/relevance.py')))
- # relevance_results = relevance.compute(
- # predictions=predicted_y_cohort,
- # references=context_cohort,
- # questions=questions_cohort,
- # wrapper_model=eval_model)
- relevance_results = {'scores' : [3.5]}
-
- all_cohort_metrics.append([
- exact_match_results['exact_match'],
- np.mean(coherence_results['scores']),
- np.mean(equivalence_results['scores']),
- np.mean(fluency_results['scores']),
- np.mean(groundedness_results['scores']),
- np.mean(relevance_results['scores'])])
+ cohort_metrics['relevance'] = get_genai_metric(
+ 'relevance',
+ predictions=predicted_y_cohort,
+ references=prompts_cohort,
+ wrapper_model=self._wrapped_model
+ )
+ # relevance_results = {'scores' : [3.5]}
+
+ all_cohort_metrics.append(cohort_metrics)
except ValueError:
- all_cohort_metrics.append([0, 0, 0, 0, 0, 0])
- print('all done')
+ all_cohort_metrics.append({})
return all_cohort_metrics
diff --git a/responsibleai_text/responsibleai_text/utils/feature_extractors.py b/responsibleai_text/responsibleai_text/utils/feature_extractors.py
index afea5eb9b2..7259bb3f23 100644
--- a/responsibleai_text/responsibleai_text/utils/feature_extractors.py
+++ b/responsibleai_text/responsibleai_text/utils/feature_extractors.py
@@ -63,6 +63,7 @@ def extract_features(text_dataset: pd.DataFrame,
feature_names.append("context_overlap")
elif task_type == ModelTask.GENERATIVE_TEXT:
# TODO: Add feature names for generative text
+ start_meta_index = 0
feature_names = base_feature_names
else:
raise ValueError("Unknown task type: {}".format(task_type))
diff --git a/responsibleai_text/responsibleai_text/utils/genai_metrics/metrics.py b/responsibleai_text/responsibleai_text/utils/genai_metrics/metrics.py
new file mode 100644
index 0000000000..7a5c240e9e
--- /dev/null
+++ b/responsibleai_text/responsibleai_text/utils/genai_metrics/metrics.py
@@ -0,0 +1,22 @@
+# Copyright (c) Microsoft Corporation
+# Licensed under the MIT License.
+
+"""Compute AI-assisted metrics for generative text models."""
+
+from pathlib import Path
+import evaluate
+
+def get_genai_metric(metric_name, **metric_kwargs):
+ """Get the metric from the genai library.
+
+ :param metric_name: The name of the metric.
+ :type metric_name: str
+ :param metric_kwargs: The keyword arguments to pass to the metric.
+ :type metric_kwargs: dict
+ :return: The metric.
+ :rtype: float
+ """
+ curr_file_dir = Path(__file__).resolve().parent
+ metric = evaluate.load(
+ str(curr_file_dir.joinpath(f'scripts/{metric_name}.py')))
+ return metric.compute(**metric_kwargs)
diff --git a/responsibleai_text/responsibleai_text/rai_text_insights/metrics/coherence.py b/responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/coherence.py
similarity index 95%
rename from responsibleai_text/responsibleai_text/rai_text_insights/metrics/coherence.py
rename to responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/coherence.py
index e27d9f3eb4..4342ee978e 100644
--- a/responsibleai_text/responsibleai_text/rai_text_insights/metrics/coherence.py
+++ b/responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/coherence.py
@@ -31,12 +31,23 @@
Five stars: the answer has perfect coherency
This rating value should always be an integer between 1 and 5. So the rating produced should be 1 or 2 or 3 or 4 or 5.
+Some examples of valid responses are:
+1
+2
+5
+Some examples of invalid responses are:
+1/5
+1.5
+3.0
+5 stars
QUESTION:
{question}
ANSWER:
{prediction}
+
+RATING:
""".strip()
diff --git a/responsibleai_text/responsibleai_text/rai_text_insights/metrics/equivalence.py b/responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/equivalence.py
similarity index 100%
rename from responsibleai_text/responsibleai_text/rai_text_insights/metrics/equivalence.py
rename to responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/equivalence.py
diff --git a/responsibleai_text/responsibleai_text/rai_text_insights/metrics/fluency.py b/responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/fluency.py
similarity index 100%
rename from responsibleai_text/responsibleai_text/rai_text_insights/metrics/fluency.py
rename to responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/fluency.py
diff --git a/responsibleai_text/responsibleai_text/rai_text_insights/metrics/groundedness.py b/responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/groundedness.py
similarity index 100%
rename from responsibleai_text/responsibleai_text/rai_text_insights/metrics/groundedness.py
rename to responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/groundedness.py
diff --git a/responsibleai_text/responsibleai_text/rai_text_insights/metrics/relevance.py b/responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/relevance.py
similarity index 89%
rename from responsibleai_text/responsibleai_text/rai_text_insights/metrics/relevance.py
rename to responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/relevance.py
index b2d736d220..7947556b52 100644
--- a/responsibleai_text/responsibleai_text/rai_text_insights/metrics/relevance.py
+++ b/responsibleai_text/responsibleai_text/utils/genai_metrics/scripts/relevance.py
@@ -32,10 +32,7 @@
This rating value should always be an integer between 1 and 5. So the rating produced should be 1 or 2 or 3 or 4 or 5.
-CONTEXT:
-{context}
-
-QUESTION:
+QUESTION AND CONTEXT:
{question}
ANSWER:
@@ -54,8 +51,7 @@ def _info(self):
features=datasets.Features(
{
"predictions": datasets.Value("string", id="sequence"),
- "references": datasets.Value("string", id="sequence"),
- "questions": datasets.Value("string", id="sequence")
+ "references": datasets.Value("string", id="sequence")
}
),
)
@@ -64,9 +60,8 @@ def _compute(self, *, predictions=None, references=None, **kwargs):
m = []
templated_ques = []
- questions = kwargs['questions']
- for p, r, q in zip(predictions, references, questions):
- templated_ques.append(_TEMPLATE.format(context=r, question=q, prediction=p))
+ for p, r in zip(predictions, references):
+ templated_ques.append(_TEMPLATE.format(question=r, prediction=p))
model = kwargs['wrapper_model']
| Move legacy interpret dashboard
Interpret dashboard has two top-level components, ExplanationDashboard and NewExplanationDashboard. When the new dashboard is determined to be sufficient, the old dashboard and its components should be moved to a legacy folder.
| 2024-01-26T18:37:52 | 0.0 | [] | [] |
|||
bambinos/bambi | bambinos__bambi-822 | 4cc310351d9705d5e0a1824726ca42b4ebcfa1bb | diff --git a/bambi/backend/inference_methods.py b/bambi/backend/inference_methods.py
index 900d9c26..dee510e7 100644
--- a/bambi/backend/inference_methods.py
+++ b/bambi/backend/inference_methods.py
@@ -17,6 +17,9 @@ def __init__(self):
self.pymc_methods = self._pymc_methods()
def _get_bayeux_methods(self, model):
+ # If bayeux is not installed, return an empty MCMC list.
+ if model is None:
+ return {"mcmc": []}
# Bambi only supports bayeux MCMC methods
mcmc_methods = model.methods.get("mcmc")
return {"mcmc": mcmc_methods}
@@ -85,7 +88,7 @@ def bayeux_model():
A dummy model with a simple quadratic likelihood function.
"""
if importlib.util.find_spec("bayeux") is None:
- return {"mcmc": []}
+ return None
import bayeux as bx # pylint: disable=import-outside-toplevel
| Error on Import
When importing bambi on current version of main branch (https://github.com/bambinos/bambi/tree/6180e733b3e843cfaa249031c5789ff0c5f12795)
an error occurs
Error:
```
(.venv) ~/P/C/test 1m 5.7s ❱ python (base)
Python 3.10.8 (main, Nov 24 2022, 08:09:04) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import bambi
WARNING (pytensor.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
Matplotlib is building the font cache; this may take a moment.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/phansen/Projects/CFA/test/.venv/lib/python3.10/site-packages/bambi/__init__.py", line 7, in <module>
from .backend import inference_methods, PyMCModel
File "/Users/phansen/Projects/CFA/test/.venv/lib/python3.10/site-packages/bambi/backend/__init__.py", line 1, in <module>
from .pymc import PyMCModel
File "/Users/phansen/Projects/CFA/test/.venv/lib/python3.10/site-packages/bambi/backend/pymc.py", line 16, in <module>
from bambi.backend.inference_methods import inference_methods
File "/Users/phansen/Projects/CFA/test/.venv/lib/python3.10/site-packages/bambi/backend/inference_methods.py", line 119, in <module>
inference_methods = InferenceMethods()
File "/Users/phansen/Projects/CFA/test/.venv/lib/python3.10/site-packages/bambi/backend/inference_methods.py", line 16, in __init__
self.bayeux_methods = self._get_bayeux_methods(bayeux_model())
File "/Users/phansen/Projects/CFA/test/.venv/lib/python3.10/site-packages/bambi/backend/inference_methods.py", line 21, in _get_bayeux_methods
mcmc_methods = model.methods.get("mcmc")
AttributeError: 'dict' object has no attribute 'methods'
```
when bayeux is not installed, this function returns the wrong type of object:
https://github.com/bambinos/bambi/blob/6180e733b3e843cfaa249031c5789ff0c5f12795/bambi/backend/inference_methods.py#L87-L88
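A stand-alone sketch of the failure path (the function name mirrors the module above, but this is an illustrative copy, not bambi's code):

```python
# The pre-fix fallback was a plain dict, while the caller assumed a bayeux model
# object with a ``.methods`` attribute -- reproducing that mismatch in isolation:
old_fallback = {"mcmc": []}            # what bayeux_model() returned without bayeux
try:
    old_fallback.methods.get("mcmc")   # what _get_bayeux_methods() then did
except AttributeError as err:
    print(err)                         # 'dict' object has no attribute 'methods'


def get_bayeux_methods(model):
    # Fix from the patch: bayeux_model() now returns None when bayeux is missing,
    # and the caller short-circuits to an empty MCMC list.
    if model is None:
        return {"mcmc": []}
    return {"mcmc": model.methods.get("mcmc")}


print(get_bayeux_methods(None))        # {'mcmc': []}
```

Because `InferenceMethods()` is constructed at module import time (line 119 in the traceback), the old fallback turned a missing optional dependency into a hard import failure.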
| Hey @peterhansen-cfa2, apologies for the delayed response. This looks like a simple fix. I will look into it tomorrow. Thanks for raising the issue! | 2024-07-04T17:13:07 | 0.0 | [] | [] |
||
ami-iit/adam | ami-iit__adam-85 | 956031282b9d2268eec852767814d0830dc8147d | diff --git a/src/adam/model/__init__.py b/src/adam/model/__init__.py
index 0dd4bf9..0426e31 100644
--- a/src/adam/model/__init__.py
+++ b/src/adam/model/__init__.py
@@ -1,4 +1,4 @@
-from .abc_factories import Joint, Link, ModelFactory
+from .abc_factories import Joint, Link, ModelFactory, Inertial, Pose
from .model import Model
from .std_factories.std_joint import StdJoint
from .std_factories.std_link import StdLink
diff --git a/src/adam/model/abc_factories.py b/src/adam/model/abc_factories.py
index 853dbb8..4720588 100644
--- a/src/adam/model/abc_factories.py
+++ b/src/adam/model/abc_factories.py
@@ -88,8 +88,36 @@ class Inertial:
"""Inertial description"""
mass: npt.ArrayLike
- inertia = Inertia
- origin = Pose
+ inertia: Inertia
+ origin: Pose
+
+ @staticmethod
+ def zero() -> "Inertial":
+ """Returns an Inertial object with zero mass and inertia"""
+ return Inertial(
+ mass=0.0,
+ inertia=Inertia(
+ ixx=0.0,
+ ixy=0.0,
+ ixz=0.0,
+ iyy=0.0,
+ iyz=0.0,
+ izz=0.0,
+ ),
+ origin=Pose(xyz=[0.0, 0.0, 0.0], rpy=[0.0, 0.0, 0.0]),
+ )
+
+ def set_mass(self, mass: npt.ArrayLike) -> "Inertial":
+ """Set the mass of the inertial object"""
+ self.mass = mass
+
+ def set_inertia(self, inertia: Inertia) -> "Inertial":
+ """Set the inertia of the inertial object"""
+ self.inertia = inertia
+
+ def set_origin(self, origin: Pose) -> "Inertial":
+ """Set the origin of the inertial object"""
+ self.origin = origin
@dataclasses.dataclass
diff --git a/src/adam/model/std_factories/std_link.py b/src/adam/model/std_factories/std_link.py
index 9d90829..7754747 100644
--- a/src/adam/model/std_factories/std_link.py
+++ b/src/adam/model/std_factories/std_link.py
@@ -2,7 +2,7 @@
import urdf_parser_py.urdf
from adam.core.spatial_math import SpatialMath
-from adam.model import Link
+from adam.model import Link, Inertial, Pose
class StdLink(Link):
@@ -15,10 +15,15 @@ def __init__(self, link: urdf_parser_py.urdf.Link, math: SpatialMath):
self.inertial = link.inertial
self.collisions = link.collisions
+ # if the link has no inertial properties (a connecting frame), let's add them
+ if link.inertial is None:
+ link.inertial = Inertial.zero()
+
# if the link has inertial properties, but the origin is None, let's add it
if link.inertial is not None and link.inertial.origin is None:
- link.inertial.origin.xyz = [0, 0, 0]
- link.inertial.origin.rpy = [0, 0, 0]
+ link.inertial.origin = Pose(xyz=[0, 0, 0], rpy=[0, 0, 0])
+
+ self.inertial = link.inertial
def spatial_inertia(self) -> npt.ArrayLike:
"""
diff --git a/src/adam/model/std_factories/std_model.py b/src/adam/model/std_factories/std_model.py
index ffeff71..5f8f4fd 100644
--- a/src/adam/model/std_factories/std_model.py
+++ b/src/adam/model/std_factories/std_model.py
@@ -44,9 +44,8 @@ def __init__(self, path: str, math: SpatialMath):
# to have a useless and noisy warning, let's remove before hands all the sensor elements,
# that anyhow are not parser by urdf_parser_py or adam
# See https://github.com/ami-iit/ADAM/issues/59
- xml_file = open(path, "r")
- xml_string = xml_file.read()
- xml_file.close()
+ with open(path, "r") as xml_file:
+ xml_string = xml_file.read()
xml_string_without_sensors_tags = urdf_remove_sensors_tags(xml_string)
self.urdf_desc = urdf_parser_py.urdf.URDF.from_xml_string(
xml_string_without_sensors_tags
@@ -64,17 +63,45 @@ def get_links(self) -> List[StdLink]:
"""
Returns:
List[StdLink]: build the list of the links
+
+ A link is considered a "real" link if
+ - it has an inertial
+ - it has children
+ - if it has no children and no inertial, it is at lest connected to the parent with a non fixed joint
"""
return [
- self.build_link(l) for l in self.urdf_desc.links if l.inertial is not None
+ self.build_link(l)
+ for l in self.urdf_desc.links
+ if (
+ l.inertial is not None
+ or l.name in self.urdf_desc.child_map.keys()
+ or any(
+ j.type != "fixed"
+ for j in self.urdf_desc.joints
+ if j.child == l.name
+ )
+ )
]
def get_frames(self) -> List[StdLink]:
"""
Returns:
List[StdLink]: build the list of the links
+
+ A link is considered a "fake" link (frame) if
+ - it has no inertial
+ - it does not have children
+ - it is connected to the parent with a fixed joint
"""
- return [self.build_link(l) for l in self.urdf_desc.links if l.inertial is None]
+ return [
+ self.build_link(l)
+ for l in self.urdf_desc.links
+ if l.inertial is None
+ and l.name not in self.urdf_desc.child_map.keys()
+ and all(
+ j.type == "fixed" for j in self.urdf_desc.joints if j.child == l.name
+ )
+ ]
def build_joint(self, joint: urdf_parser_py.urdf.Joint) -> StdJoint:
"""
diff --git a/src/adam/parametric/model/parametric_factories/parametric_link.py b/src/adam/parametric/model/parametric_factories/parametric_link.py
index c9ae02b..1346f3f 100644
--- a/src/adam/parametric/model/parametric_factories/parametric_link.py
+++ b/src/adam/parametric/model/parametric_factories/parametric_link.py
@@ -50,10 +50,11 @@ def __init__(
length_multiplier=self.length_multiplier
)
self.mass = self.compute_mass()
- self.inertial = Inertial(self.mass)
- self.inertial.mass = self.mass
- self.inertial.inertia = self.compute_inertia_parametric()
- self.inertial.origin = self.modify_origin()
+ inertia_parametric = self.compute_inertia_parametric()
+ origin = self.modify_origin()
+ self.inertial = Inertial(
+ mass=self.mass, inertia=inertia_parametric, origin=origin
+ )
self.update_visuals()
def get_principal_length(self):
| Error detecting two root links
Hi, I got this bug:
```
File "/samsung4tb/BiDex-touch/bidex_collect/utils/hand_retargeter.py", line 49, in __init__
kinDyn = KinDynComputations(os.path.join(CUR_PATH, "../../bidex_sim/assets/robots/ur_description/urdf/ur3e_leap_right.urdf"),
File "/samsung4tb/venvs/isaacgym_venv/lib/python3.8/site-packages/adam/jax/computations.py", line 34, in __init__
model = Model.build(factory=factory, joints_name_list=joints_name_list)
File "/samsung4tb/venvs/isaacgym_venv/lib/python3.8/site-packages/adam/model/model.py", line 63, in build
tree = Tree.build_tree(links=links_list, joints=joints_list)
File "/samsung4tb/venvs/isaacgym_venv/lib/python3.8/site-packages/adam/model/tree.py", line 72, in build_tree
raise ValueError("The model has more than one root link")
ValueError: The model has more than one root link
```
Loading the attached URDF erroneously says there are two root links (`base_link` and `palm_lower`) even though using check_urdf from ROS confirms there is only one.
[ur3e_leap_right.zip](https://github.com/user-attachments/files/15754226/ur3e_leap_right.zip)
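The fix above replaces the old rule "no `<inertial>` tag means frame" with the three-part test stated in the patch docstrings; under the old rule a massless connector link in the middle of a chain could be demoted to a frame, which can split the tree and make the builder see a second "root" (here `palm_lower`). A toy sketch of the new test — the dataclass, helper name, and example link data are made up for illustration; the real factory inspects `urdf_parser_py` objects:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ToyLink:
    name: str
    has_inertial: bool
    children: List[str] = field(default_factory=list)
    parent_joint_type: Optional[str] = None  # None for the root link


def is_real_link(link: ToyLink) -> bool:
    # Post-fix rule: a link stays in the kinematic tree if it has an inertial,
    # or it has children, or it hangs off a non-fixed joint; otherwise it is a frame.
    return (
        link.has_inertial
        or len(link.children) > 0
        or (link.parent_joint_type is not None and link.parent_joint_type != "fixed")
    )


# A massless connector with a child (e.g. a tool frame) is now kept, so the chain
# below it is no longer orphaned; a childless fixed-joint leaf stays a frame.
connector = ToyLink("tool0", has_inertial=False, children=["wrist_axis"], parent_joint_type="fixed")
tip_frame = ToyLink("fingertip_tip", has_inertial=False, children=[], parent_joint_type="fixed")

print(is_real_link(connector))  # True  -> real link
print(is_real_link(tip_frame))  # False -> frame
```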
| fyi @Giulero I can confirm that also iDynTree's URDF parser is able to load the URDF that @richardrl provided with no error, i.e. :
~~~
D:\ur3e_leap_right> conda create -n idyntree idyntree
D:\ur3e_leap_right> conda activate idyntree
(idyntree) D:\ur3e_leap_right>idyntree-model-info -m ./ur3e_leap_right.urdf --print
Model:
Links:
[0] base_link
[1] shoulder_link
[2] upper_arm_link
[3] forearm_link
[4] wrist_1_link
[5] wrist_2_link
[6] wrist_3_link
[7] flange
[8] tool0
[9] wrist_axis
[10] palm_lower
[11] mcp_joint
[12] mcp_joint_2
[13] mcp_joint_3
[14] pip_4
[15] thumb_pip
[16] thumb_dip
[17] thumb_fingertip
[18] pip_3
[19] dip_3
[20] fingertip_3
[21] pip_2
[22] dip_2
[23] fingertip_2
[24] pip
[25] dip
[26] fingertip
Frames:
[27] mcp_joint_root --> mcp_joint
[28] mcp_joint_2_root --> mcp_joint_2
[29] mcp_joint_3_root --> mcp_joint_3
[30] thumb_fingertip_tip --> thumb_fingertip
[31] fingertip_3_tip --> fingertip_3
[32] fingertip_2_tip --> fingertip_2
[33] fingertip_tip --> fingertip
Joints:
[0] shoulder_pan_joint (dofs: 1) : base_link<-->shoulder_link
[1] shoulder_lift_joint (dofs: 1) : shoulder_link<-->upper_arm_link
[2] elbow_joint (dofs: 1) : upper_arm_link<-->forearm_link
[3] wrist_1_joint (dofs: 1) : forearm_link<-->wrist_1_link
[4] wrist_2_joint (dofs: 1) : wrist_1_link<-->wrist_2_link
[5] wrist_3_joint (dofs: 1) : wrist_2_link<-->wrist_3_link
[6] 1 (dofs: 1) : palm_lower<-->mcp_joint
[7] 5 (dofs: 1) : palm_lower<-->mcp_joint_2
[8] 9 (dofs: 1) : palm_lower<-->mcp_joint_3
[9] 12 (dofs: 1) : palm_lower<-->pip_4
[10] 13 (dofs: 1) : pip_4<-->thumb_pip
[11] 14 (dofs: 1) : thumb_pip<-->thumb_dip
[12] 15 (dofs: 1) : thumb_dip<-->thumb_fingertip
[13] 8 (dofs: 1) : mcp_joint_3<-->pip_3
[14] 10 (dofs: 1) : pip_3<-->dip_3
[15] 11 (dofs: 1) : dip_3<-->fingertip_3
[16] 4 (dofs: 1) : mcp_joint_2<-->pip_2
[17] 6 (dofs: 1) : pip_2<-->dip_2
[18] 7 (dofs: 1) : dip_2<-->fingertip_2
[19] 0 (dofs: 1) : mcp_joint<-->pip
[20] 2 (dofs: 1) : pip<-->dip
[21] 3 (dofs: 1) : dip<-->fingertip
[22] wrist_3-flange (dofs: 0) : wrist_3_link<-->flange
[23] flange-tool0 (dofs: 0) : flange<-->tool0
[24] hand_joint (dofs: 0) : tool0<-->wrist_axis
[25] wrist_joint (dofs: 0) : wrist_axis<-->palm_lower | 2024-06-12T10:02:01 | 0.0 | [] | [] |
||
paulgazz/kmax | paulgazz__kmax-240 | 6f83b96e0106624d53372fa58c3df9623c86a524 | diff --git a/kmax/klocalizer.py b/kmax/klocalizer.py
index 2d7a31d8..a38a29b0 100644
--- a/kmax/klocalizer.py
+++ b/kmax/klocalizer.py
@@ -447,7 +447,12 @@ def get_config_from_model(model: z3.Model, arch: Arch, set_tristate_m, allow_non
val, expr = val_and_expr.split("|", 1)
# TODO: this will use the last def if there are multiple ones.
# use constraint solver to pick the right one.
- nonbool_defs[var] = val[1:-1] # strip the quotes
+ if val.startswith('"') and val.endswith('"'):
+ nonbool_defs[var] = val[1:-1] # strip the quotes
+ else:
+ # default value is from another variable, so it won't have a quoted value
+ # TODO: support assigning default values from other config options
+ pass
for entry in model: # the model has some problem, we can't get the entry
str_entry = str(entry)
matches = token_pattern.match(str_entry)
| kismet generates invalid value for CONFIG_ARCH_MMAP_RND_BITS
Kernel developers reported that there are suspicious info in the reports from kernel test robot:
https://lore.kernel.org/oe-kbuild-all/[email protected]/
```
>>> kismet warnings: (new ones prefixed by >>)
>>>>> kismet: WARNING: unmet direct dependencies detected for SM_GCC_8550 when selected by SM_CAMCC_8550
>>> .config:7280:warning: symbol value 'ONFIG_ARCH_MMAP_RND_BITS_MI' invalid for ARCH_MMAP_RND_BITS
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> Where is this coming from? I have seen this warning in several build
>> reports (earliest 2023-01-31), but cannot reproduce it with the provided
>> commit and config.
>
I'm pretty sure that what Geert is asking about here is the warning (".config:7280:...") with
the truncated kconfig symbol 'ONFIG_ARCH_MMAP_RND_BITS_MI'. I have also seen several of these.
Is this a bug in kismet or a bug in the robot or something else?
```
This can be reproduced by the following steps:
```
$ git checkout ccc4e6a061a21d75b96d82fc4b084a8d96df6eb4 (this is an upstream commit)
$ kismet --selectees CONFIG_SM_GCC_8550 --selectors CONFIG_SM_CAMCC_8550 -a arm
$ cd kismet-test-cases
$ grep ARCH_MMAP_RND_BITS udd-arm-CONFIG_SM_GCC_8550-CONFIG_SM_CAMCC_8550-0-0.config
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_ARCH_MMAP_RND_BITS=ONFIG_ARCH_MMAP_RND_BITS_MI <--
$ cd ..
$ cp kismet-test-cases/udd-arm-CONFIG_SM_GCC_8550-CONFIG_SM_CAMCC_8550-0-0.config .config
$ make ARCH=arm olddefconfig
.config:7113:warning: symbol value 'ONFIG_ARCH_MMAP_RND_BITS_MI' invalid for ARCH_MMAP_RND_BITS <--
WARNING: unmet direct dependencies detected for SM_GCC_8550
Depends on [n]: COMMON_CLK [=y] && COMMON_CLK_QCOM [=y] && (ARM64 || COMPILE_TEST [=n])
Selected by [y]:
- SM_CAMCC_8550 [=y] && COMMON_CLK [=y] && COMMON_CLK_QCOM [=y]
#
# configuration written to .config
#
```
We can see that the config file generated in kismet-test-cases dir has an invalid value for CONFIG_ARCH_MMAP_RND_BITS.
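The truncated string itself points at the cause. `ARCH_MMAP_RND_BITS` is an `int` option whose default falls back to another symbol rather than a quoted value, and `get_config_from_model()` unconditionally stripped the first and last character of every default with `val[1:-1]`, which is only valid for quoted string defaults. A short illustration (the quoted example value is hypothetical; the symbol default matches the observed output if the stored value is `CONFIG_ARCH_MMAP_RND_BITS_MIN`):

```python
quoted_default = '"defconfig"'                    # a quoted string default (hypothetical)
symbol_default = "CONFIG_ARCH_MMAP_RND_BITS_MIN"  # a default taken from another symbol

print(quoted_default[1:-1])   # defconfig                    -> stripping is correct
print(symbol_default[1:-1])   # ONFIG_ARCH_MMAP_RND_BITS_MI  -> the value seen in .config

# The fix above only strips genuinely quoted values and (for now) skips defaults
# that reference other config options:
def strip_default(val):
    if val.startswith('"') and val.endswith('"'):
        return val[1:-1]
    return None
```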
| Thanks for the heads up, will take a look more closely.
> Is the issue the missing "C" in "CONFIG_..." in the message, leading to "ONFIG_..."?
Hi @paulgazz, it is not exactly missing the "C" character, actually CONFIG_ARCH_MMAP_RND_BITS should be an integer value, but here it is assigned as a truncated string "ONFIG_ARCH_MMAP_RND_BITS_MI"
```
config ARCH_MMAP_RND_BITS
int "Number of bits to use for ASLR of mmap base address" if EXPERT
range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
default ARCH_MMAP_RND_BITS_MIN
depends on HAVE_ARCH_MMAP_RND_BITS
help
This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations. This value will be bounded
by the architecture's minimum and maximum supported values.
This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable
```
Looks like I introduced a bug in v4.5 when using non-Boolean configuration option default values (to avoid the need for a user to set them). I will work on a patch and new release. | 2023-11-08T11:31:13 | 0.0 | [] | [] |
||
jina-ai/serve | jina-ai__serve-5598 | e5b203efd0e11a3dcac7747078db03a154827cf3 | diff --git a/jina/serve/runtimes/gateway/composite/gateway.py b/jina/serve/runtimes/gateway/composite/gateway.py
index 4e10fa8fe0300..4458730f1a1d8 100644
--- a/jina/serve/runtimes/gateway/composite/gateway.py
+++ b/jina/serve/runtimes/gateway/composite/gateway.py
@@ -31,6 +31,7 @@ def __init__(
gateway_kwargs = {k: v for k, v in kwargs.items() if k != 'runtime_args'}
gateway_kwargs['runtime_args'] = dict(vars(runtime_args))
gateway = gateway_cls(**gateway_kwargs)
+ gateway.streamer = self.streamer
self.gateways.append(gateway)
async def setup_server(self):
| feat: use single GatewayStreamer object for all gateways in the CompositeGateway
The `GatewayStreamer` object is instantiated in the `BaseGateway`. The `CompositeGateway` is itself a subclass of the `BaseGateway`, and each protocol-specific `Gateway` is also a subclass of the `BaseGateway`, which means there is at least one streamer object for the `CompositeGateway` class and one for each protocol-specific gateway. This complicates the warmup logic (#5467) because by default only the `CompositeGateway` streamer object is triggered. Using a single streamer object is more compact and reduces the overhead of creating multiple connection pools, one within each streamer object.
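A toy sketch of the intended sharing (stand-in classes, not jina's real `BaseGateway`/`CompositeGateway`):

```python
class Streamer:
    def __init__(self, owner):
        self.owner = owner


class BaseGateway:
    def __init__(self):
        self.streamer = Streamer(owner=self)   # one streamer / pool per gateway by default


class CompositeGateway(BaseGateway):
    def __init__(self, sub_gateway_classes):
        super().__init__()
        self.gateways = []
        for gateway_cls in sub_gateway_classes:
            gateway = gateway_cls()
            gateway.streamer = self.streamer   # the one-line change: share the streamer
            self.gateways.append(gateway)


composite = CompositeGateway([BaseGateway, BaseGateway])
assert all(g.streamer is composite.streamer for g in composite.gateways)
```

With the sub-gateways pointing at the composite's streamer, warming up that single streamer reaches every protocol-specific gateway and only one connection pool is created.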
| 2023-01-13T15:02:06 | 0.0 | [] | [] |
|||
almarklein/timetagger | almarklein__timetagger-95 | dfcf88d19014f8c347fe1d1f76b6cb2bfcfa1a16 | diff --git a/tasks.py b/tasks.py
index 3436f1a7..1542fb4b 100644
--- a/tasks.py
+++ b/tasks.py
@@ -54,7 +54,7 @@ def lint(ctx):
"flake8",
ROOT_DIR,
"--max-line-length=999",
- "--extend-ignore=N,E731,E203,F541",
+ "--extend-ignore=N,E731,E203,F541,D,B",
"--exclude=build,dist,*.egg-info",
]
ret_code = subprocess.call(cmd, cwd=ROOT_DIR)
diff --git a/timetagger/app/front.py b/timetagger/app/front.py
index 561307a4..6410d986 100644
--- a/timetagger/app/front.py
+++ b/timetagger/app/front.py
@@ -776,10 +776,15 @@ def _draw_button(self, ctx, x, y, given_w, h, text, action, tt, options):
y1 = y
y2 = y1 + h
+ # Register the button and tooltip
+ ob = {"button": True, "action": action}
+ self._picker.register(x1, y1, x2, y2, ob)
+ hover = self._canvas.register_tooltip(x1, y1, x2, y2, tt, "below")
+
# Draw button body and its shadow
+ rn = BUTTON_ROUNDNESS
if opt.body:
ctx.fillStyle = COLORS.button_bg
- rn = BUTTON_ROUNDNESS
for i in range(2):
dy = 2 if i == 0 else 0
ctx.beginPath()
@@ -789,16 +794,19 @@ def _draw_button(self, ctx, x, y, given_w, h, text, action, tt, options):
ctx.arc(x1 + rn, y2 + dy - rn, rn, 0.5 * PI, 1.0 * PI)
ctx.closePath()
if i == 0:
- ctx.shadowBlur = 3
+ ctx.shadowBlur = 5 if hover else 3
ctx.shadowColor = COLORS.button_shadow
ctx.fill()
ctx.shadowBlur = 0
-
- # Register the button and tooltip
- ob = {"button": True, "action": action}
- self._picker.register(x1, y1, x2, y2, ob)
- if tt:
- self._canvas.register_tooltip(x1, y1, x2, y2, tt, "below")
+ elif hover:
+ ctx.fillStyle = "rgba(255,255,255,0.1)"
+ ctx.beginPath()
+ ctx.arc(x1 + rn, y1 + rn, rn, 1.0 * PI, 1.5 * PI)
+ ctx.arc(x2 - rn, y1 + rn, rn, 1.5 * PI, 2.0 * PI)
+ ctx.arc(x2 - rn, y2 - rn, rn, 0.0 * PI, 0.5 * PI)
+ ctx.arc(x1 + rn, y2 - rn, rn, 0.5 * PI, 1.0 * PI)
+ ctx.closePath()
+ ctx.fill()
# Get starting x
x = x1 + opt.padding + 0.5 * (w - needed_w)
@@ -1447,6 +1455,7 @@ def on_init(self):
self._interaction_mode = 0
self._last_pointer_down_event = None
+ self._arrow_state = 0, 0 # last_timestamp, last_alpha
self._last_scale_scroll = 0
self._last_trans_scroll = 0
self._pointer_pos = {}
@@ -1464,6 +1473,10 @@ def on_draw(self, ctx):
x3, x4 = 0, width
height = max(200, 0.33 * (y2 - y1))
y3, y4 = (y1 + y2) / 2 - height / 2, (y1 + y2) / 2 + height / 2
+ self._picker.register(
+ x3, y3, x4, y4, {"button": True, "action": "showrecords"}
+ )
+ hover = self._canvas.register_tooltip(x3, y3, x4, y4, "")
ctx.beginPath()
ctx.moveTo(x3, y3)
ctx.lineTo(x4, y3 + width)
@@ -1473,13 +1486,10 @@ def on_draw(self, ctx):
ctx.fill()
ctx.textAlign = "center"
ctx.textBaseline = "middle"
- ctx.fillStyle = COLORS.prim1_clr
+ ctx.fillStyle = COLORS.tick_text if hover else COLORS.prim1_clr
ctx.font = FONT.size + "px " + FONT.default
for i, c in enumerate("Records"):
ctx.fillText(c, (x3 + x4) / 2, (y3 + y4) / 2 + (i - 3) * 18)
- self._picker.register(
- x3, y3, x4, y4, {"button": True, "action": "showrecords"}
- )
return
x3 = self._canvas.grid_round(x1 + 64)
@@ -1489,6 +1499,9 @@ def on_draw(self, ctx):
ctx.fillStyle = COLORS.panel_bg
ctx.fillRect(x3, y1, x4 - x3, y2 - y1)
+ # Draw animated arrow indicator
+ self._draw_arrow(ctx, x1, y1, x2, y2, x3, x4)
+
self._help_text = ""
self._draw_ticks(ctx, x3, y1, x4, y2)
@@ -1501,17 +1514,49 @@ def on_draw(self, ctx):
# Draw title text
if self._canvas.w > 700:
text1 = "Timeline"
- text2 = self._help_text
ctx.textAlign = "left"
ctx.textBaseline = "top"
- #
ctx.font = "bold " + (FONT.size * 1.4) + "px " + FONT.mono
ctx.fillStyle = COLORS.prim2_clr
ctx.fillText(text1, 10, 65)
- #
- ctx.font = (FONT.size * 0.9) + "px " + FONT.default
- ctx.fillStyle = COLORS.prim2_clr
- ctx.fillText(text2, 10, 90)
+ # ctx.font = (FONT.size * 0.9) + "px " + FONT.default
+ # ctx.fillStyle = COLORS.prim2_clr
+ # ctx.fillText(self._help_text, 10, 90)
+
+ def _draw_arrow(self, ctx, x1, y1, x2, y2, x3, x4):
+ """Draw arrow to indicate that the timeline can be dragged.
+ To avoid sudden appearance we animate fade-in and out.
+ """
+
+ min_alpha = 0.0
+ max_alpha = 0.1
+ animation_speed_in_seconds = 0.5
+
+ # Register empty tooltip so we can detect mouse over
+ hover = self._canvas.register_tooltip(x3, y1, x4, y2, "")
+ show_arrow = hover or self._interaction_mode
+
+ # Determine arrow alpha
+ now = self._canvas.now()
+ delta_t = (now - self._arrow_state[0]) if self._arrow_state[0] else 0.001
+ delta_a = delta_t * (max_alpha - min_alpha) / animation_speed_in_seconds
+ if show_arrow:
+ new_alpha = min(max_alpha, self._arrow_state[1] + delta_a)
+ else:
+ new_alpha = max(min_alpha, self._arrow_state[1] - delta_a)
+ if new_alpha != self._arrow_state[1]:
+ self.update()
+ self._arrow_state = now, new_alpha
+ else:
+ self._arrow_state = 0, new_alpha # mark zero time
+
+ # Draw arrow
+ if new_alpha and (x2 - x4) > 20:
+ ctx.font = ((x2 - x4) * 0.9) + "px FontAwesome"
+ ctx.textAlign = "center"
+ ctx.textBaseline = "middle"
+ ctx.fillStyle = "rgba(128,128,128," + new_alpha + ")"
+ ctx.fillText("\uf338", (x2 + x4) / 2, (y1 + y2) / 2)
def _draw_edge(self, ctx, x1, y1, x2, y2):
def drawstrokerect(lw):
@@ -1666,7 +1711,7 @@ def _draw_record_area(self, ctx, x1, x2, x3, y1, y2):
# Draw records themselves
self._draw_records(ctx, x1, x2, x3, y1, y2)
else:
- self._help_text = "click on a " + stat_name + " to zoom"
+ # self._help_text = "click on a " + stat_name + " to zoom"
self._can_interact_with_records = False
t3 = dt.floor(t1, stat_period)
while t3 < t2:
@@ -1676,9 +1721,10 @@ def _draw_record_area(self, ctx, x1, x2, x3, y1, y2):
self._picker.register(
x1, y3, x3, y4, {"statrect": True, "t1": t3, "t2": t4}
)
- # self._draw_stats(ctx, t3, t4, x1+10, y3, x3-10, y4, stat_period)
- self._draw_stats(ctx, t3, t4, x2, y3, x3, y4, stat_period)
- ctx.lineWidth = 2
+ hover = self._canvas.register_tooltip(x1, y3, x3, y4, "")
+ # self._draw_stats(ctx, t3, t4, x1+10, y3, x3-10, y4, stat_period, hover)
+ self._draw_stats(ctx, t3, t4, x2, y3, x3, y4, stat_period, hover)
+ ctx.lineWidth = 1.2
ctx.strokeStyle = COLORS.tick_stripe1
ctx.beginPath()
ctx.moveTo(x1, y3)
@@ -1735,8 +1781,8 @@ def _draw_records(self, ctx, x1, x2, x3, y1, y2):
# Select all records in this range. Sort so that smaller records are drawn on top.
records = window.store.records.get_records(t1, t2).values()
- if len(records) > 0:
- self._help_text = "click a record to edit it"
+ # if len(records) > 0:
+ # self._help_text = "click a record to edit it"
# Sort records by size, so records cannot be completely overlapped by another
records.sort(key=lambda r: r.t1 - (now if (r.t1 == r.t2) else r.t2))
@@ -1913,6 +1959,11 @@ def _draw_one_record(
ry1 = grid_round(ry1)
ry2 = grid_round(ry2)
+ # Define inset and outset bump (for running records)
+ inset, outset = 0, 0
+ if record.t1 == record.t2:
+ inset, outset = 0, 16
+
# Define roundness and how much each slab moves outward
rn = RECORD_ROUNDNESS
rnb = COLORBAND_ROUNDNESS
@@ -1920,6 +1971,53 @@ def _draw_one_record(
timeline_only = ry2 < y1 or ry1 > y2
+ # Make the timeline-part clickable - the pick region is increased if needed
+ ry1_, ry2_ = ry1, ry2
+ if ry2 - ry1 < 16:
+ ry1_, ry2_ = 0.5 * (ry1 + ry2) - 8, 0.5 * (ry1 + ry2) + 8
+ self._picker.register(
+ x2, ry1_, x3, ry2_, {"recordrect": True, "region": 0, "record": record}
+ )
+ tt_text = tags.join(" ") + "\n(click to make draggable)"
+ hover_timeline = self._canvas.register_tooltip(
+ x2, ry1, x3, ry2 + outset, tt_text, "mouse"
+ )
+
+ # Make description part clickable - the pick region is increased if needed
+ if not timeline_only:
+ d = {
+ "button": True,
+ "action": "editrecord",
+ "help": "",
+ "key": record.key,
+ }
+ self._picker.register(x5, ty1, x6, ty2, d)
+ tt_text = tags.join(" ") + "\n(Click to edit)"
+ hover_description = self._canvas.register_tooltip(x5, ty1, x6, ty2, tt_text)
+
+ # Cast a shadow if hovering
+ if hover_timeline and self._selected_record is None:
+ ctx.beginPath()
+ ctx.arc(x2 + rne, ry2 - rne, rne, 0.5 * PI, 1.0 * PI)
+ ctx.arc(x2 + rne, ry1 + rne, rne, 1.0 * PI, 1.5 * PI)
+ ctx.lineTo(x3, 0.5 * (ry1 + ry2))
+ ctx.closePath()
+ ctx.shadowBlur = 6
+ ctx.shadowColor = "rgba(0, 0, 0, 0.8)" # COLORS.button_shadow
+ ctx.fill()
+ ctx.shadowBlur = 0
+ elif hover_description:
+ ctx.beginPath()
+ ctx.arc(x5 + rne, ty2 - rne, rne, 0.5 * PI, 1.0 * PI)
+ ctx.arc(x5 + rne, ty1 + rne, rne, 1.0 * PI, 1.5 * PI)
+ ctx.arc(x6 - rne, ty1 + rne, rne, 1.5 * PI, 2.0 * PI)
+ ctx.arc(x6 - rne, ty2 - rne, rne, 2.0 * PI, 2.5 * PI)
+ ctx.closePath()
+ ctx.shadowBlur = 5
+ ctx.shadowColor = COLORS.button_shadow
+ ctx.fill()
+ ctx.shadowBlur = 0
+
# Draw record representation
path = utils.RoundedPath()
if timeline_only:
@@ -1973,10 +2071,8 @@ def _draw_one_record(
ctx.stroke(path)
# Running records have a small outset
- inset, outset = 0, 0
- if record.t1 == record.t2:
+ if outset:
x1f, x2f = x2 + (x3 - x2) / 3, x3 - (x3 - x2) / 3
- inset, outset = 0, 16
ctx.beginPath()
ctx.moveTo(x2f, ry2 - inset)
ctx.arc(x2f - rn, ry2 + outset - rn, rn, 0.0 * PI, 0.5 * PI)
@@ -2003,17 +2099,6 @@ def _draw_one_record(
ctx.fillStyle = COLORS.record_edge
ctx.fillText("+", 0.5 * (x2 + x3), 0.5 * (ry1 + ry2))
- # Make the timeline-part clickable - the pick region is increased if needed
- ry1_, ry2_ = ry1, ry2
- if ry2 - ry1 < 16:
- ry1_, ry2_ = 0.5 * (ry1 + ry2) - 8, 0.5 * (ry1 + ry2) + 8
- self._picker.register(
- x2, ry1_, x3, ry2_, {"recordrect": True, "region": 0, "record": record}
- )
-
- tt_text = tags.join(" ") + "\n(click to make draggable)"
- self._canvas.register_tooltip(x2, ry1, x3, ry2 + outset, tt_text, "mouse")
-
# The rest is for the description part
if timeline_only:
return
@@ -2069,17 +2154,6 @@ def _draw_one_record(
ctx.fillText("â¦", x, text_ypos, max_x - x)
x = new_x
- # Make description part clickable - the pick region is increased if needed
- d = {
- "button": True,
- "action": "editrecord",
- "help": "",
- "key": record.key,
- }
- self._picker.register(x5, ty1, x6, ty2, d)
- tt_text = tags.join(" ") + "\n(Click to edit)"
- self._canvas.register_tooltip(x5, ty1, x6, ty2, tt_text)
-
def _draw_selected_record_extras(
self, ctx, record, t1, x1, x4, x6, y0, y1, y2, npixels, nsecs, yy
):
@@ -2188,7 +2262,7 @@ def _draw_selected_record_extras(
duration_text = dt.duration_string(duration, True)
ctx.fillText(duration_text, 0.5 * (x1 + x2), 0.5 * (ry1 + ry2))
- def _draw_stats(self, ctx, t1, t2, x1, y1, x2, y2, stat_period):
+ def _draw_stats(self, ctx, t1, t2, x1, y1, x2, y2, stat_period, hover):
# Determine header for this block
t = 0.5 * (t1 + t2)
@@ -2256,13 +2330,10 @@ def _draw_stats(self, ctx, t1, t2, x1, y1, x2, y2, stat_period):
bigfontsize = max(FONT.size, bigfontsize)
ymargin = (y2 - y1) / 20
- # Draw big text in blue if it is the timerange containing today
- if t1 < self._canvas.now() < t2:
- ctx.fillStyle = COLORS.prim1_clr
- else:
- ctx.fillStyle = COLORS.prim2_clr
+ # Draw big text in stronger color if it is the timerange containing today
# Draw duration at the left
+ ctx.fillStyle = COLORS.prim1_clr if hover else COLORS.prim2_clr
fontsizeleft = bigfontsize * (0.7 if selected_tags else 0.9)
ctx.font = f"{fontsizeleft}px {FONT.default}"
ctx.textBaseline = "bottom"
@@ -2273,6 +2344,8 @@ def _draw_stats(self, ctx, t1, t2, x1, y1, x2, y2, stat_period):
ctx.fillText(duration_text, x1 + 10, y2 - ymargin)
# Draw time-range indication at the right
+ isnow = t1 < self._canvas.now() < t2
+ ctx.fillStyle = COLORS.prim1_clr if isnow else COLORS.prim2_clr
ctx.font = f"bold {bigfontsize}px {FONT.default}"
ctx.textBaseline = "bottom"
ctx.textAlign = "right"
@@ -2698,6 +2771,10 @@ def on_draw(self, ctx):
x3, x4 = self._canvas.w - width, self._canvas.w
height = max(220, 0.33 * (y2 - y1))
y3, y4 = (y1 + y2) / 2 - height / 2, (y1 + y2) / 2 + height / 2
+ self._picker.register(
+ x3, y3, x4, y4, {"button": True, "action": "showanalytics"}
+ )
+ hover = self._canvas.register_tooltip(x3, y3, x4, y4, "")
ctx.beginPath()
ctx.moveTo(x4, y3)
ctx.lineTo(x3, y3 + width)
@@ -2707,13 +2784,10 @@ def on_draw(self, ctx):
ctx.fill()
ctx.textAlign = "center"
ctx.textBaseline = "middle"
- ctx.fillStyle = COLORS.prim1_clr
+ ctx.fillStyle = COLORS.tick_text if hover else COLORS.prim1_clr
ctx.font = FONT.size + "px " + FONT.default
for i, c in enumerate("Overview"):
ctx.fillText(c, (x3 + x4) / 2, (y3 + y4) / 2 + (i - 4) * 18)
- self._picker.register(
- x3, y3, x4, y4, {"button": True, "action": "showanalytics"}
- )
return
self._help_text = ""
@@ -2739,17 +2813,14 @@ def on_draw(self, ctx):
# Draw title text
if self._canvas.w > 700:
text1 = "Overview"
- text2 = self._help_text
ctx.textAlign = "right"
ctx.textBaseline = "top"
- #
ctx.font = "bold " + (FONT.size * 1.4) + "px " + FONT.mono
ctx.fillStyle = COLORS.prim2_clr
ctx.fillText(text1, x2 - 10, 65)
- #
- ctx.font = (FONT.size * 0.9) + "px " + FONT.default
- ctx.fillStyle = COLORS.prim2_clr
- ctx.fillText(text2, x2 - 10, 90)
+ # ctx.font = (FONT.size * 0.9) + "px " + FONT.default
+ # ctx.fillStyle = COLORS.prim2_clr
+ # ctx.fillText(self._help_text, x2 - 10, 90)
# Show some help if no records are shown
if len(self._level_counts) == 1:
@@ -2914,12 +2985,12 @@ def _draw_stats(self, ctx, x1, y1, x2, y2):
if bar.height > 0:
self._draw_one_stat_unit(ctx, bar, root.cum_t)
- # Determine help text
- if self._maxlevel > 0:
- if len(self.selected_tags) == 0:
- self._help_text = "click a tag to filter"
- else:
- self._help_text = "click more tags to filter more"
+ # # Determine help text
+ # if self._maxlevel > 0:
+ # if len(self.selected_tags) == 0:
+ # self._help_text = "click a tag to filter"
+ # else:
+ # self._help_text = "click more tags to filter more"
def _invalidate_element(self, d, parent=None):
d.invalid = True
@@ -3178,13 +3249,21 @@ def _draw_one_stat_unit(self, ctx, unit, totaltime):
{"button": True, "action": "chosecolor:" + unit.tagz},
)
tt_text = "Color for " + unit.tagz + "\n(Click to change color)"
- self._canvas.register_tooltip(
+ hover = self._canvas.register_tooltip(
x2,
y2,
ex,
y3,
tt_text,
)
+ if hover:
+ ctx.beginPath()
+ ctx.arc(x2 + rnb, y3 - rnb - 0.6, rnb, 0.5 * PI, 1.0 * PI)
+ ctx.arc(x2 + rnb, y2 + rnb - 0.6, rnb, 1.0 * PI, 1.5 * PI)
+ ctx.lineTo(ex, y2)
+ ctx.lineTo(ex, y3)
+ ctx.closePath()
+ ctx.stroke()
# Draw edge
ctx.stroke(path)
diff --git a/timetagger/app/utils.py b/timetagger/app/utils.py
index 4e5436b5..46df6730 100644
--- a/timetagger/app/utils.py
+++ b/timetagger/app/utils.py
@@ -561,6 +561,7 @@ def __init__(self, node):
self.node.setAttribute("tabindex", -1) # allow catching key events
# For tooltips
+ self._pointer_hover = ""
self._tooltips = Picker()
self._tooltipdiv = window.document.createElement("div")
self._tooltipdiv.className = "tooltipdiv"
@@ -743,6 +744,12 @@ def _tooltip_handler(self, e):
x, y = ev.pos
# Get tooltip object - if text is None it means no tooltip
ob = self._tooltips.pick(x, y)
+ # Handle over. Schedule a new draw if the over-status changes.
+ hash = "" if ob is None else ob.hash
+ if hash != self._pointer_hover:
+ self._pointer_hover = hash
+ self.update()
+ # Don't show a tooltip if its text is empty
if ob is not None and not ob.text:
ob = None
# Handle touch events - show tt during a touch, but only after a delay
@@ -813,9 +820,14 @@ def _tooltip_show(self):
self._tooltipdiv.style.right = self._tooltipdiv.xpos + 10 + "px"
def register_tooltip(self, x1, y1, x2, y2, text, positioning="ob"):
- """Register a tooltip at the given position."""
- ob = {"rect": [x1, y1, x2, y2], "text": text, "positioning": positioning}
+ """Register a tooltip at the given position.
+ Returns whether the mouse hovers here now.
+ """
+ rect = [x1, y1, x2, y2]
+ hash = str(rect)
+ ob = {"rect": rect, "text": text, "positioning": positioning, "hash": hash}
self._tooltips.register(x1, y1, x2, y2, ob)
+ return self._pointer_hover == ob.hash
# To overload
| Make buttons more "clickable"
By making use of e.g. a hover state, or at least some animation when it's pressed down.
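The patch above implements the hover part by letting `register_tooltip()` report whether the pointer is currently inside the registered rectangle and scheduling a redraw when that changes, so the draw pass can restyle the hovered button. A condensed sketch of that pattern (simplified stand-in for the canvas/Picker pair in `utils.py`, not a drop-in class):

```python
class HoverTracker:
    """Stripped-down stand-in for the canvas + Picker pair."""

    def __init__(self):
        self._regions = []        # (x1, y1, x2, y2, key), re-registered every draw
        self._hovered = ""        # key of the region currently under the pointer
        self.needs_redraw = False

    def register(self, x1, y1, x2, y2):
        key = str([x1, y1, x2, y2])
        self._regions.append((x1, y1, x2, y2, key))
        return self._hovered == key          # "is the pointer over me right now?"

    def on_pointer_move(self, x, y):
        hit = ""
        for x1, y1, x2, y2, key in self._regions:
            if x1 <= x <= x2 and y1 <= y <= y2:
                hit = key
        if hit != self._hovered:             # hover state changed ...
            self._hovered = hit
            self.needs_redraw = True         # ... so schedule another draw pass


tracker = HoverTracker()
hover = tracker.register(10, 10, 110, 40)    # called while drawing a button
shadow_blur = 5 if hover else 3              # the patch grows the button shadow on hover
```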
| 2021-09-08T21:27:11 | 0.0 | [] | [] |
|||
knodle/knodle | knodle__knodle-137 | 06cf808369efc5796e9e38fd62c7267ceae2cdea | diff --git a/knodle/evaluation/tacred_metrics.py b/knodle/evaluation/tacred_metrics.py
new file mode 100644
index 00000000..2c3fd424
--- /dev/null
+++ b/knodle/evaluation/tacred_metrics.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python
+
+"""
+Score the predictions with gold labels, using precision, recall and F1 metrics.
+"""
+
+import sys
+from collections import Counter
+
+NO_RELATION = "no_relation"
+
+def score(key, prediction, verbose=False):  # key is a batch of gold labels, prediction likewise
+ correct_by_relation = Counter()
+ guessed_by_relation = Counter()
+ gold_by_relation = Counter()
+ # Loop over the data to compute a score
+ for row in range(len(key)):
+ gold = key[row]
+ guess = prediction[row]
+ if gold == NO_RELATION and guess == NO_RELATION:
+ pass
+ elif gold == NO_RELATION and guess != NO_RELATION:
+ guessed_by_relation[guess] += 1
+ elif gold != NO_RELATION and guess == NO_RELATION:
+ gold_by_relation[gold] += 1
+ elif gold != NO_RELATION and guess != NO_RELATION:
+ guessed_by_relation[guess] += 1
+ gold_by_relation[gold] += 1
+ if gold == guess:
+ correct_by_relation[guess] += 1
+
+ # Print verbose information
+ if verbose:
+ print("Per-relation statistics:")
+ relations = gold_by_relation.keys()
+ longest_relation = 0
+ for relation in sorted(relations):
+ longest_relation = max(len(relation), longest_relation)
+ for relation in sorted(relations):
+ # (compute the score)
+ correct = correct_by_relation[relation]
+ guessed = guessed_by_relation[relation]
+ gold = gold_by_relation[relation]
+ prec = 1.0
+ if guessed > 0:
+ prec = float(correct) / float(guessed)
+ recall = 0.0
+ if gold > 0:
+ recall = float(correct) / float(gold)
+ f1 = 0.0
+ if prec + recall > 0:
+ f1 = 2.0 * prec * recall / (prec + recall)
+ # (print the score)
+ sys.stdout.write(("{:<" + str(longest_relation) + "}").format(relation))
+ sys.stdout.write(" P: ")
+ if prec < 0.1:
+ sys.stdout.write(' ')
+ if prec < 1.0:
+ sys.stdout.write(' ')
+ sys.stdout.write("{:.2%}".format(prec))
+ sys.stdout.write(" R: ")
+ if recall < 0.1:
+ sys.stdout.write(' ')
+ if recall < 1.0:
+ sys.stdout.write(' ')
+ sys.stdout.write("{:.2%}".format(recall))
+ sys.stdout.write(" F1: ")
+ if f1 < 0.1:
+ sys.stdout.write(' ')
+ if f1 < 1.0:
+ sys.stdout.write(' ')
+ sys.stdout.write("{:.2%}".format(f1))
+ sys.stdout.write(" #: %d" % gold)
+ sys.stdout.write("\n")
+ print("")
+
+ # Print the aggregate score
+ if verbose:
+ print("Final Score:")
+ prec_micro = 1.0
+ if sum(guessed_by_relation.values()) > 0:
+ prec_micro = float(sum(correct_by_relation.values())) / float(sum(guessed_by_relation.values()))
+ recall_micro = 0.0
+ if sum(gold_by_relation.values()) > 0:
+ recall_micro = float(sum(correct_by_relation.values())) / float(sum(gold_by_relation.values()))
+ f1_micro = 0.0
+ if prec_micro + recall_micro > 0.0:
+ f1_micro = 2.0 * prec_micro * recall_micro / (prec_micro + recall_micro)
+
+ print("Precision (micro): {:.3%}".format(prec_micro))
+ print(" Recall (micro): {:.3%}".format(recall_micro))
+ print(" F1 (micro): {:.3%}".format(f1_micro))
+
+ print("\n")
+
+ return {"precision": prec_micro, "recall": recall_micro, "f1": f1_micro}
\ No newline at end of file
diff --git a/knodle/trainer/crossweigh_weighing/crossweigh.py b/knodle/trainer/crossweigh_weighing/crossweigh.py
index dcde2a16..26c01fa9 100644
--- a/knodle/trainer/crossweigh_weighing/crossweigh.py
+++ b/knodle/trainer/crossweigh_weighing/crossweigh.py
@@ -1,20 +1,23 @@
import logging
+import os
+from typing import Dict, Tuple, Union
+
import numpy as np
import torch
-import torch.nn as nn
-from torch.autograd import function
+import torch.nn.functional as F
+from joblib import load
+from sklearn.metrics import classification_report
from torch.functional import Tensor
from torch.nn import Module
from torch.utils.data import TensorDataset, DataLoader
-from joblib import load
from tqdm import tqdm
-import torch.nn.functional as F
-import os
from knodle.trainer.crossweigh_weighing.crossweigh_denoising_config import CrossWeighDenoisingConfig
from knodle.trainer.crossweigh_weighing.crossweigh_trainer_config import CrossWeighTrainerConfig
-from knodle.trainer.crossweigh_weighing.utils import set_device, set_seed, make_plot, get_labels
from knodle.trainer.crossweigh_weighing.crossweigh_weights_calculator import CrossWeighWeightsCalculator
+from knodle.trainer.crossweigh_weighing.utils import (
+ set_seed, draw_loss_accuracy_plot, get_labels, calculate_dev_tacred_metrics
+)
from knodle.trainer.trainer import Trainer
from knodle.trainer.utils.utils import accuracy_of_probs
@@ -31,8 +34,10 @@ def __init__(
rule_assignments_t: np.ndarray,
inputs_x: TensorDataset,
rule_matches_z: np.ndarray,
- dev_features: TensorDataset,
- dev_labels: TensorDataset,
+ dev_features: TensorDataset = None,
+ dev_labels: Tensor = None,
+ evaluation_method: str = "sklearn_classification_report",
+ dev_labels_ids: Dict = None,
path_to_weights: str = "data",
denoising_config: CrossWeighDenoisingConfig = None,
trainer_config: CrossWeighTrainerConfig = None,
@@ -57,6 +62,8 @@ def __init__(
self.denoising_config = denoising_config
self.dev_features = dev_features
self.dev_labels = dev_labels
+ self.evaluation_method = evaluation_method
+ self.dev_labels_ids = dev_labels_ids
self.path_to_weights = path_to_weights
self.run_classifier = run_classifier
self.use_weights = use_weights
@@ -67,8 +74,7 @@ def __init__(
else:
self.trainer_config = trainer_config
logger.info("Initalized trainer with custom model config: {}".format(self.trainer_config.__dict__))
-
- self.device = set_device(self.trainer_config.enable_cuda)
+
set_seed(self.trainer_config.seed)
def train(self):
@@ -81,23 +87,26 @@ def train(self):
logger.info("No classifier should be trained")
return
- train_labels = get_labels(self.rule_matches_z, self.rule_assignments_t,
- self.trainer_config.no_match_class_label)
+ logger.info("Classifier training is started")
+
+ train_labels = get_labels(
+ self.rule_matches_z, self.rule_assignments_t, self.trainer_config.no_match_class_label)
train_loader = self._get_feature_label_dataloader(self.model_input_x, train_labels, sample_weights)
- dev_loader = self._get_feature_label_dataloader(self.dev_features, self.dev_labels)
+ train_losses, train_acc = [], []
+
+ if self.dev_features is not None:
+ dev_loader = self._get_feature_label_dataloader(self.dev_features, self.dev_labels)
+ dev_losses, dev_acc = [], []
- logger.info("Classifier training is started")
self.model.train()
- train_losses, dev_losses, train_accs, dev_accs = [], [], [], []
for curr_epoch in tqdm(range(self.trainer_config.epochs)):
+ logger.info(f"Epoch {curr_epoch}")
running_loss, epoch_acc = 0.0, 0.0
- self.trainer_config.criterion.weight = self.trainer_config.class_weights
- self.trainer_config.criterion.reduction = 'none'
batch_losses = []
for features, labels, weights in train_loader:
self.model.zero_grad()
predictions = self.model(features)
- loss = self._get_loss_with_sample_weights(self.trainer_config.criterion, predictions, labels, weights)
+ loss = self._get_loss_with_sample_weights(predictions, labels, weights)
loss.backward()
if self.trainer_config.use_grad_clipping:
torch.nn.utils.clip_grad_norm_(self.model.parameters(), self.trainer_config.grad_clipping)
@@ -111,72 +120,108 @@ def train(self):
avg_loss = running_loss / len(train_loader)
avg_acc = epoch_acc / len(train_loader)
train_losses.append(avg_loss)
- train_accs.append(avg_acc)
-
- logger.info("Epoch loss: {}".format(avg_loss))
- logger.info("Epoch Accuracy: {}".format(avg_acc))
-
- dev_loss, dev_acc = self._evaluate(dev_loader)
- dev_losses.append(dev_loss)
- dev_accs.append(dev_acc)
+ train_acc.append(avg_acc)
+ logger.info(f"Train loss: {avg_loss:.7f}, train accuracy: {avg_acc * 100:.2f}%")
- logger.info("Train loss: {:.7f}, train accuracy: {:.2f}%, dev loss: {:.3f}, dev accuracy: {:.2f}%".format(
- avg_loss, avg_acc * 100, dev_loss, dev_acc * 100))
+ if self.dev_features is not None:
+ dev_loss, dev_metrics = self._evaluate(dev_loader)
+ dev_losses.append(dev_loss)
+ dev_acc.append(dev_metrics["precision"])
+ logger.info(f"Dev loss: {dev_loss:.3f}, Dev metrics: {dev_metrics}")
- make_plot(train_losses, dev_losses, train_accs, dev_accs, "train loss", "dev loss", "train acc", "dev acc")
-
- def _evaluate(self, dev_loader):
- """ Model evaluation on dev set: the trained model is applied on the dev set and the average loss value
- is returned """
- self.model.eval()
- with torch.no_grad():
- dev_loss, dev_acc = 0.0, 0.0
- dev_criterion = nn.CrossEntropyLoss(weight=self.trainer_config.class_weights)
- for tokens, labels in dev_loader:
- labels = labels.long()
- predictions = self.model(tokens)
- acc = accuracy_of_probs(predictions, labels)
-
- predictions_one_hot = F.one_hot(predictions.argmax(1),
- num_classes=self.trainer_config.output_classes).float()
- loss = dev_criterion(predictions_one_hot, labels.flatten(0))
-
- dev_loss += loss.detach()
- dev_acc += acc.item()
- return dev_loss / len(dev_loader), dev_acc / len(dev_loader)
+ if self.dev_features is not None:
+ draw_loss_accuracy_plot({"train loss": train_losses, "dev loss": dev_losses, "tran acc": train_acc, "dev acc": dev_acc})
+ else:
+ draw_loss_accuracy_plot({"train loss": train_losses, "tran acc": train_acc})
def _get_sample_weights(self):
""" This function checks whether there are accesible already pretrained sample weights. If yes, return
them. If not, calculates sample weights calling method of CrossWeighWeightsCalculator class"""
- try:
- sample_weights = load(os.path.join(self.path_to_weights, "sample_weights.lib"))
+ if os.path.isfile(os.path.join(self.path_to_weights, "sample_weights.lib")):
logger.info("Already pretrained samples sample_weights will be used.")
- except OSError:
+ sample_weights = load(os.path.join(self.path_to_weights, "sample_weights.lib"))
+ else:
logger.info("No pretrained sample weights are found, they will be calculated now")
- sample_weights = CrossWeighWeightsCalculator(self.model, self.rule_assignments_t, self.inputs_x,
- self.rule_matches_z, self.path_to_weights,
- self.denoising_config
- ).calculate_weights()
- logger.info("Sample weights are calculated and saved to {} file".format(self.path_to_weights))
+ sample_weights = CrossWeighWeightsCalculator(
+ self.model,
+ self.rule_assignments_t,
+ self.inputs_x,
+ self.rule_matches_z,
+ self.path_to_weights,
+ self.denoising_config
+ ).calculate_weights()
+ logger.info(f"Sample weights are calculated and saved to {self.path_to_weights} file")
return sample_weights
def _get_feature_label_dataloader(
- self, samples: TensorDataset, labels: np.ndarray, sample_weights: np.ndarray = None, shuffle: bool = True
+ self, samples: TensorDataset, labels: Union[Tensor, np.ndarray], sample_weights: np.ndarray = None, shuffle: bool = True
) -> DataLoader:
""" Converts encoded samples and labels to dataloader. Optionally: add sample_weights as well """
+ tensor_target = torch.LongTensor(labels).to(self.trainer_config.device)
+ tensor_samples = samples.tensors[0].to(self.trainer_config.device)
- tensor_target = torch.LongTensor(labels).to(device=self.device)
- tensor_samples = samples.tensors[0].to(device=self.device)
if sample_weights is not None:
- sample_weights = torch.FloatTensor(sample_weights).to(device=self.device)
+ sample_weights = torch.FloatTensor(sample_weights).to(self.trainer_config.device)
dataset = torch.utils.data.TensorDataset(tensor_samples, tensor_target, sample_weights)
else:
dataset = torch.utils.data.TensorDataset(tensor_samples, tensor_target)
- dataloader = self._make_dataloader(dataset, shuffle=shuffle)
- return dataloader
- def _get_loss_with_sample_weights(self, criterion: function, output: Tensor, labels: Tensor,
- weights: Tensor) -> Tensor:
+ return self._make_dataloader(dataset, shuffle=shuffle)
+
+ def _get_loss_with_sample_weights(self, output: Tensor, labels: Tensor, weights: Tensor) -> Tensor:
""" Calculates loss for each training sample and multiplies it with corresponding sample weight"""
- return (criterion(output, labels) * weights).sum() / self.trainer_config.class_weights[labels].sum()
+ loss_no_reduction = self.trainer_config.criterion(output,
+ labels,
+ weight=self.trainer_config.class_weights,
+ reduction="none")
+ return (loss_no_reduction * weights).sum() / self.trainer_config.class_weights[labels].sum()
+
+ def _evaluate(self, dev_dataloader: DataLoader) -> Union[Tuple[float, None], Tuple[float, Dict]]:
+ """ Model evaluation on dev set: the trained model is applied on the dev set and the average loss is returned"""
+ self.model.eval()
+ all_predictions, all_labels = torch.Tensor(), torch.Tensor()
+
+ with torch.no_grad():
+ dev_loss, dev_acc = 0.0, 0.0
+ for features, labels in dev_dataloader:
+ predictions = self.model(features)
+ dev_loss += self.calculate_dev_loss(predictions, labels.long())
+
+ _, predicted = torch.max(predictions, 1)
+ all_predictions = torch.cat([all_predictions, predicted])
+ all_labels = torch.cat([all_labels, labels.long()])
+
+ predictions, gold_labels = (all_predictions.detach().numpy(), all_labels.detach().numpy())
+ dev_metrics = self.calculate_dev_metrics(predictions, gold_labels)
+ return dev_loss / len(dev_dataloader), dev_metrics
+
+ def calculate_dev_loss(self, predictions: Tensor, labels: Tensor) -> Tensor:
+ """ Calculates the loss on the dev set using given criterion"""
+ predictions_one_hot = F.one_hot(predictions.argmax(1), num_classes=self.trainer_config.output_classes).float()
+ loss = self.trainer_config.criterion(predictions_one_hot, labels.flatten(0))
+ return loss.detach()
+
+ def calculate_dev_metrics(self, predictions: np.ndarray, gold_labels: np.ndarray) -> Union[Dict, None]:
+ """
+ Returns the dictionary of metrics calculated on the dev set with one of the evaluation functions
+ or None, if the needed evaluation method was not found
+ """
+
+ if self.evaluation_method == "tacred":
+
+ if self.dev_labels_ids is None:
+ logging.warning(
+ "Labels to labels ids correspondence is needed to make TACRED specific evaluation. Since it is "
+ "absent now, the standard sklearn metrics will be calculated instead"
+ )
+ return classification_report(y_true=gold_labels, y_pred=predictions, output_dict=True)["macro avg"]
+
+ return calculate_dev_tacred_metrics(predictions, gold_labels, self.dev_labels_ids)
+
+ elif self.evaluation_method == "sklearn_classification_report":
+ return classification_report(y_true=gold_labels, y_pred=predictions, output_dict=True)["macro avg"]
+
+ else:
+ logging.warning("No evaluation method is given. The evaluation on dev data is skipped")
+ return None
diff --git a/knodle/trainer/crossweigh_weighing/crossweigh_denoising_config.py b/knodle/trainer/crossweigh_weighing/crossweigh_denoising_config.py
index 21477ded..34677176 100644
--- a/knodle/trainer/crossweigh_weighing/crossweigh_denoising_config.py
+++ b/knodle/trainer/crossweigh_weighing/crossweigh_denoising_config.py
@@ -5,6 +5,7 @@
from snorkel.classification import cross_entropy_with_probs
from torch import Tensor
from torch.optim import optimizer
+from knodle.trainer.utils.utils import check_and_return_device
class CrossWeighDenoisingConfig:
@@ -14,9 +15,9 @@ def __init__(
crossweigh_partitions: int = 3,
crossweigh_folds: int = 5,
crossweigh_epochs: int = 2,
- weight_reducing_rate: int = 0.5,
- samples_start_weights: int = 3.0,
- no_match_weights: int = 0.5,
+ weight_reducing_rate: float = 0.5,
+ samples_start_weights: float = 3.0,
+ no_match_weights: float = 0.5,
size_factor: int = 200,
batch_size: int = 32,
lr: float = 0.01,
@@ -25,7 +26,6 @@ def __init__(
optimizer_: optimizer = None,
criterion: Callable[[Tensor, Tensor], float] = cross_entropy_with_probs,
seed: int = "12345",
- enable_cuda: bool = False,
use_grad_clipping: bool = True,
grad_clipping: int = 5,
no_match_class_label: int = None):
@@ -42,10 +42,10 @@ def __init__(
self.output_classes = output_classes
self.criterion = criterion
self.seed = seed
- self.enable_cuda = enable_cuda
self.use_grad_clipping = use_grad_clipping
self.grad_clipping = grad_clipping
self.no_match_class_label = no_match_class_label
+ self.device = check_and_return_device()
self.criterion = criterion
diff --git a/knodle/trainer/crossweigh_weighing/crossweigh_trainer_config.py b/knodle/trainer/crossweigh_weighing/crossweigh_trainer_config.py
index eea75743..d6f15753 100644
--- a/knodle/trainer/crossweigh_weighing/crossweigh_trainer_config.py
+++ b/knodle/trainer/crossweigh_weighing/crossweigh_trainer_config.py
@@ -1,6 +1,6 @@
from typing import Callable
import torch
-import torch.nn as nn
+from knodle.trainer.utils.utils import check_and_return_device
from snorkel.classification import cross_entropy_with_probs
from torch import Tensor
from torch.nn import Module
@@ -19,7 +19,6 @@ def __init__(
epochs: int = 2,
class_weights: Tensor = None,
seed: int = 12345, # set seed for reproducibility
- enable_cuda: bool = False,
use_grad_clipping: bool = True,
grad_clipping: int = 5,
no_match_class_label: int = None
@@ -28,11 +27,11 @@ def __init__(
self.batch_size = batch_size
self.lr = lr
self.seed = seed
- self.enable_cuda = enable_cuda
self.use_grad_clipping = use_grad_clipping
self.grad_clipping = grad_clipping
self.output_classes = output_classes
self.no_match_class_label = no_match_class_label
+ self.device = check_and_return_device()
if epochs <= 0:
raise ValueError("Epochs needs to be positive")
diff --git a/knodle/trainer/crossweigh_weighing/crossweigh_weights_calculator.py b/knodle/trainer/crossweigh_weighing/crossweigh_weights_calculator.py
index 884f625e..9ed34ba5 100644
--- a/knodle/trainer/crossweigh_weighing/crossweigh_weights_calculator.py
+++ b/knodle/trainer/crossweigh_weighing/crossweigh_weights_calculator.py
@@ -12,9 +12,7 @@
from joblib import dump
from tqdm import tqdm
from knodle.trainer.crossweigh_weighing.crossweigh_denoising_config import CrossWeighDenoisingConfig
-from knodle.trainer.crossweigh_weighing.utils import (
- set_device, set_seed, check_splitting, return_unique, get_labels
-)
+from knodle.trainer.crossweigh_weighing.utils import set_seed, check_splitting, return_unique, get_labels
logger = logging.getLogger(__name__)
@@ -24,14 +22,15 @@
class CrossWeighWeightsCalculator:
- def __init__(self,
- model: Module,
- rule_assignments_t: np.ndarray,
- inputs_x: TensorDataset,
- rule_matches_z: np.ndarray,
- output_dir: str,
- denoising_config: CrossWeighDenoisingConfig = None,
- no_relation_class: int = NO_RELATION_CLASS):
+ def __init__(
+ self,
+ model: Module,
+ rule_assignments_t: np.ndarray,
+ inputs_x: TensorDataset,
+ rule_matches_z: np.ndarray,
+ output_dir: str,
+ denoising_config: CrossWeighDenoisingConfig = None,
+ other_class_id: int = NO_RELATION_CLASS):
self.inputs_x = inputs_x
self.rule_matches_z = rule_matches_z
@@ -39,16 +38,15 @@ def __init__(self,
self.model = model
self.crossweigh_model = copy.deepcopy(self.model)
self.output_dir = output_dir
- self.no_relation_class = no_relation_class
+ self.no_relation_class = other_class_id
if denoising_config is None:
self.denoising_config = CrossWeighDenoisingConfig(self.model)
- logger.info("Default CrossWeigh Config is used: {}".format(self.denoising_config.__dict__))
+ logger.info(f"Default CrossWeigh Config is used: {self.denoising_config.__dict__}")
else:
self.denoising_config = denoising_config
- logger.info("Initalized trainer with custom model config: {}".format(self.denoising_config.__dict__))
+ logger.info(f"Initalized trainer with custom model config: {self.denoising_config.__dict__}")
- self.device = set_device(self.denoising_config.enable_cuda)
self.sample_weights = self.initialise_sample_weights()
def calculate_weights(self) -> torch.FloatTensor:
@@ -68,17 +66,17 @@ def calculate_weights(self) -> torch.FloatTensor:
rules_samples_ids_dict = self._get_rules_samples_ids_dict()
for partition in range(self.denoising_config.cw_partitions):
-
- logger.info("============= CrossWeigh Partition {}/{}: =============".format(
- partition + 1, self.denoising_config.cw_partitions))
+ logger.info(f"============= CrossWeigh Partition {partition + 1}/{self.denoising_config.cw_partitions}: "
+ f"=============")
shuffled_rules_ids, no_match_ids = self._get_shuffled_rules_idx() # shuffle anew for each cw round
for fold in range(self.denoising_config.cw_folds):
# for each fold the model is trained from scratch
- self.crossweigh_model = copy.deepcopy(self.model).to(device=self.device)
- train_loader, test_loader = self.get_cw_data(shuffled_rules_ids, no_match_ids, rules_samples_ids_dict,
- labels, fold)
+ self.crossweigh_model = copy.deepcopy(self.model).to(self.denoising_config.device)
+ train_loader, test_loader = self.get_cw_data(
+ shuffled_rules_ids, no_match_ids, rules_samples_ids_dict, labels, fold
+ )
self.cw_train(train_loader)
self.cw_test(test_loader)
@@ -194,9 +192,9 @@ def cw_convert2tensor(
Turns the input data (encoded samples, encoded labels, indices in the original matrices) to a DataLoader
which could be used for further model training or testing
"""
- tensor_words = samples.to(device=self.device)
- tensor_target = torch.LongTensor(labels).to(device=self.device)
- tensor_idx = torch.LongTensor(idx).to(device=self.device)
+ tensor_words = samples.to(self.denoising_config.device)
+ tensor_target = torch.LongTensor(labels).to(self.denoising_config.device)
+ tensor_idx = torch.LongTensor(idx).to(self.denoising_config.device)
dataset = torch.utils.data.TensorDataset(tensor_words, tensor_target, tensor_idx)
return torch.utils.data.DataLoader(dataset, batch_size=self.denoising_config.batch_size, shuffle=shuffle)
diff --git a/knodle/trainer/crossweigh_weighing/utils.py b/knodle/trainer/crossweigh_weighing/utils.py
index da4fefc0..4468b551 100644
--- a/knodle/trainer/crossweigh_weighing/utils.py
+++ b/knodle/trainer/crossweigh_weighing/utils.py
@@ -1,13 +1,17 @@
import logging
+from typing import Dict
+
+import matplotlib.pyplot as plt
import numpy as np
import torch
-import matplotlib.pyplot as plt
+
+from knodle.evaluation import tacred_metrics
from knodle.trainer.utils.denoise import get_majority_vote_probs, get_majority_vote_probs_with_no_rel
logger = logging.getLogger(__name__)
-def get_labels(
+def get_labels_randomly(
rule_matches_z: np.ndarray, rule_assignments_t: np.ndarray
) -> np.ndarray:
""" Calculates sample labels basing on z and t matrices. If several patterns matched, select one randomly """
@@ -108,28 +112,22 @@ def check_splitting(
def return_unique(where_to_find: np.ndarray, what_to_find: np.ndarray) -> np.ndarray:
""" Checks intersections between the 1st and the 2nd arrays and return unique values of the 1st array """
- intersections = np.intersect1d(where_to_find, what_to_find, return_indices=True)[
- 1
- ].tolist()
+ intersections = np.intersect1d(where_to_find, what_to_find, return_indices=True)[1].tolist()
return np.delete(where_to_find, intersections)
-def make_plot(
- value_1: list,
- value_2: list,
- value_3: list,
- value_4: list,
- label_1: str,
- label_2: str,
- label_3: str,
- label_4: str,
-):
+def draw_loss_accuracy_plot(curves: dict) -> None:
""" The function creates a plot of 4 curves and displays it"""
- plt.plot(value_1, "g", label=label_1)
- plt.plot(value_2, "r", label=label_2)
- plt.plot(value_3, "b", label=label_3)
- plt.plot(value_4, "y", label=label_4)
- plt.legend(bbox_to_anchor=(1.05, 1), loc="upper left", borderaxespad=0.0)
+ colors = "bgrcmyk"
+ color_index = 0
+ epochs = range(1, len(next(iter(curves.values()))) + 1)
+
+ for label, value in curves.items():
+ plt.plot(epochs, value, c=colors[color_index], label=label)
+ color_index += 1
+
+ plt.xticks(epochs)
+ plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
plt.show()
@@ -139,9 +137,19 @@ def get_labels(
""" Check whether dataset contains negative samples and calculates the labels using majority voting """
if no_match_class_label:
if no_match_class_label < 0:
- raise RuntimeError("A label for negative samples should be greater that 0 for correct matrix multiplication")
+ raise RuntimeError("Label for negative samples should be greater than 0 for correct matrix multiplication")
if no_match_class_label < rule_matches_z.shape[1]:
- raise RuntimeError("The label for negative samples is probably already assigned to some other class")
+ raise RuntimeError("Label for negative samples is probably already assigned to some other class")
return get_majority_vote_probs_with_no_rel(rule_matches_z, rule_assignments_t, no_match_class_label)
else:
return get_majority_vote_probs(rule_matches_z, rule_assignments_t)
+
+
+def calculate_dev_tacred_metrics(predictions: np.ndarray, labels: np.ndarray, labels2ids: Dict) -> Dict:
+ predictions_idx = predictions.astype(int).tolist()
+ labels_idx = labels.astype(int).tolist()
+ idx2labels = dict([(value, key) for key, value in labels2ids.items()])
+
+ predictions = [idx2labels[p] for p in predictions_idx]
+ test_labels = [idx2labels[p] for p in labels_idx]
+ return tacred_metrics.score(test_labels, predictions, verbose=True)
| conll: eval function for RE on TACRED data
| 2021-02-02T18:05:13 | 0.0 | [] | [] |
|||
eth-brownie/brownie | eth-brownie__brownie-1152 | e4130b3ec7a3afd3098272c481f337483cdd7087 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index d572bd08e..9c018cf51 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,7 @@ This changelog format is based on [Keep a Changelog](https://keepachangelog.com/
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased](https://github.com/eth-brownie/brownie)
+- Add support remapping with a sub-folder (like OpenZeppelin/openzeppelin-contracts-upgradeable, ref: [#1137](https://github.com/eth-brownie/brownie/issues/1137))
- Add polygon network integration ([#1119](https://github.com/eth-brownie/brownie/pull/1119))
- Fixed subcalls to empty accounts not appearing in the subcalls property of TransactionReceipts ([#1106](https://github.com/eth-brownie/brownie/pull/1106))
- Add support for `POLYGONSCAN_TOKEN` env var ([#1135](https://github.com/eth-brownie/brownie/pull/1135))
diff --git a/brownie/project/compiler/__init__.py b/brownie/project/compiler/__init__.py
index 25f91b802..56df1331f 100644
--- a/brownie/project/compiler/__init__.py
+++ b/brownie/project/compiler/__init__.py
@@ -206,15 +206,19 @@ def _get_solc_remappings(remappings: Optional[list]) -> list:
remap_dict = dict([remappings.split("=")])
else:
remap_dict = dict(i.split("=") for i in remappings)
-
- for path in _get_data_folder().joinpath("packages").iterdir():
+ remapped_dict = {}
+ packages = _get_data_folder().joinpath("packages")
+ for path in packages.iterdir():
key = next((k for k, v in remap_dict.items() if v.startswith(path.name)), None)
if key:
- remap_dict[key] = path.parent.joinpath(remap_dict[key]).as_posix()
+ remapped_dict[key] = path.parent.joinpath(remap_dict.pop(key)).as_posix()
else:
- remap_dict[path.name] = path.as_posix()
+ remapped_dict[path.name] = path.as_posix()
+ for (k, v) in remap_dict.items():
+ if packages.joinpath(v).exists():
+ remapped_dict[k] = packages.joinpath(v).as_posix()
- return [f"{k}={v}" for k, v in remap_dict.items()]
+ return [f"{k}={v}" for k, v in dict(remap_dict, **remapped_dict).items()]
def _get_allow_paths(allow_paths: Optional[str], remappings: list) -> str:
| brownie-config.yaml remappings bug
### Environment information
* `brownie` Version: 1.14.6
* `solc` Version: 0.6.6
* Python Version: 3.8.10
* OS: ubuntu
```yaml
dependencies:
  - Uniswap/[email protected]
  - Uniswap/[email protected]
compiler:
  solc:
    version: 0.6.6
    remappings:
      - "@uniswapcore=Uniswap/[email protected]"
      - "@uniswaplib=**Uniswap**/[email protected]"
```
```
brownie compile
contracts/UniswapV2Migrator.sol:3:1: ParserError: Source "Uniswap/[email protected]/contracts/libraries/TransferHelper.sol" not found: File not found.
import '@uniswaplib/contracts/libraries/TransferHelper.sol'
```
```
make .brownie/packages/Uniswaplib
mv Uniswap/[email protected] Uniswaplib
```
```yaml
compiler:
  solc:
    version: 0.6.6
    remappings:
      - "@uniswapcore=Uniswap/[email protected]"
      - "@uniswaplib=**Uniswaplib**/[email protected]"
```
brownie compile OK
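For context, a hedged sketch of the remapping resolution the patch above implements: an explicit remapping whose target exists under Brownie's packages folder is expanded to an absolute path, so two dependencies sharing the same `Uniswap/` prefix directory no longer clash. The helper name, version strings and paths below are illustrative, not Brownie's actual API.
```python
from pathlib import Path

def resolve_remappings(remap_dict: dict, packages: Path) -> list:
    """Expand remapping targets that point inside the packages folder."""
    resolved = dict(remap_dict)
    for prefix, target in remap_dict.items():
        candidate = packages / target
        if candidate.exists():
            # e.g. "@uniswaplib" -> ~/.brownie/packages/Uniswap/uniswap-lib@<version>
            resolved[prefix] = candidate.as_posix()
    return [f"{k}={v}" for k, v in resolved.items()]

print(resolve_remappings(
    {
        "@uniswapcore": "Uniswap/[email protected]",  # hypothetical versions
        "@uniswaplib": "Uniswap/[email protected]",
    },
    Path.home() / ".brownie" / "packages",
))
```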
 | The following configuration works OK; I guess the issue is that two dependencies can't share the same prefix directory:
```yaml
dependencies:
  - iearn-finance/[email protected]
  - OpenZeppelin/[email protected]
compiler:
  solc:
    version: 0.6.12
    remappings:
      - "@yearnvaults=iearn-finance/[email protected]"
      - "@openzeppelin=OpenZeppelin/[email protected]"
```
Brownie is passing the remappings to `solc`, so the expected behaviours here are in the Solidity docs.
https://docs.soliditylang.org/en/v0.6.6/using-the-compiler.html#using-the-commandline-compiler | 2021-07-20T00:35:35 | 0.0 | [] | [] |
||
AI-Planning/macq | AI-Planning__macq-116 | 7dd66a60f73d9466699bf7634e4fbbd991fab841 | diff --git a/macq/extract/extract.py b/macq/extract/extract.py
index 00581d62..0484903a 100644
--- a/macq/extract/extract.py
+++ b/macq/extract/extract.py
@@ -1,10 +1,9 @@
from dataclasses import dataclass
-from typing import List
from enum import Enum, auto
+from ..trace import ObservationLists, Action, State
+from .model import Model
from .observer import Observer
from .slaf import Slaf
-from ..observation import Observation
-from ..trace import ObservationLists, Action, State
@dataclass
@@ -38,7 +37,7 @@ class Extract:
from state observations.
"""
- def __new__(cls, obs_lists: ObservationLists, mode: modes):
+ def __new__(cls, obs_lists: ObservationLists, mode: modes) -> Model:
"""Extracts a Model object.
Extracts a model from the observations using the specified extraction
diff --git a/macq/extract/learned_action.py b/macq/extract/learned_action.py
index 167ec556..a4a68cdf 100644
--- a/macq/extract/learned_action.py
+++ b/macq/extract/learned_action.py
@@ -4,7 +4,7 @@
class LearnedAction:
- def __init__(self, name: str, obj_params: List[str], **kwargs):
+ def __init__(self, name: str, obj_params: List, **kwargs):
self.name = name
self.obj_params = obj_params
if "cost" in kwargs:
@@ -24,7 +24,11 @@ def __hash__(self):
return hash(self.details())
def details(self):
- string = f"{self.name} {' '.join([o for o in self.obj_params])}"
+ try:
+ string = f"{self.name} {' '.join([o for o in self.obj_params])}"
+ except TypeError:
+ string = f"{self.name} {' '.join([o.details() for o in self.obj_params])}"
+
return string
def update_precond(self, fluents: Set[Fluent]):
diff --git a/macq/extract/model.py b/macq/extract/model.py
index 1b616eda..925e3cb8 100644
--- a/macq/extract/model.py
+++ b/macq/extract/model.py
@@ -34,14 +34,26 @@ def __init__(self, fluents: Set[str], actions: Set[LearnedAction]):
def __eq__(self, other):
if not isinstance(other, Model):
return False
- return self.fluents == other.fluents and self.actions == other.actions
+ self_fluent_type, other_fluent_type = type(list(self.fluents)[0]), type(
+ list(other.fluents)[0]
+ )
+ if self_fluent_type == other_fluent_type:
+ return self.fluents == other.fluents and self.actions == other.actions
+ if self_fluent_type == str:
+ return set(map(lambda f: str(f), other.fluents)) == self.fluents
+ if other_fluent_type == str:
+ return set(map(lambda f: str(f), self.fluents)) == other.fluents
def details(self):
# Set the indent width
indent = " " * 2
string = "Model:\n"
# Map fluents to a comma separated string of the fluent names
- string += f"{indent}Fluents: {', '.join(self.fluents)}\n"
+ try:
+ string += f"{indent}Fluents: {', '.join(self.fluents)}\n"
+ except TypeError:
+ string += f"{indent}Fluents: {', '.join(map(str,self.fluents))}\n"
+
# Map the actions to a summary of their names, preconditions, add
# effects and delete effects
string += f"{indent}Actions:\n"
diff --git a/macq/observation/__init__.py b/macq/observation/__init__.py
index 3bb7f802..e6b4a195 100644
--- a/macq/observation/__init__.py
+++ b/macq/observation/__init__.py
@@ -1,11 +1,13 @@
from .observation import Observation, InvalidQueryParameter
from .identity_observation import IdentityObservation
from .partial_observation import PartialObservation
+from .atomic_partial_observation import AtomicPartialObservation
__all__ = [
+ "InvalidQueryParameter",
"Observation",
"IdentityObservation",
"PartialObservation",
- "InvalidQueryParameter",
+ "AtomicPartialObservation",
]
diff --git a/macq/observation/atomic_partial_observation.py b/macq/observation/atomic_partial_observation.py
new file mode 100644
index 00000000..8f5dff99
--- /dev/null
+++ b/macq/observation/atomic_partial_observation.py
@@ -0,0 +1,139 @@
+from ..trace import Step, Fluent
+from ..trace import PartialState
+from . import Observation
+from typing import Callable, Union, Set
+import random
+
+
+class PercentError(Exception):
+ """Raised when the user attempts to supply an invalid percentage of fluents to hide."""
+
+ def __init__(
+ self,
+ message="The percentage supplied is invalid.",
+ ):
+ super().__init__(message)
+
+
+class AtomicPartialObservation(Observation):
+ """The Partial Observability Token.
+
+ The partial observability token stores the step where some of the values of
+ the fluents in the step's state are unknown. Inherits the base Observation
+ class.
+ """
+
+ def __init__(
+ self,
+ step: Step,
+ method: Union[Callable[[int], Step], Callable[[Set[Fluent]], Step]],
+ **method_kwargs
+ ):
+ """
+ Creates an PartialObservation object, storing the step.
+
+ Args:
+ step (Step):
+ The step associated with this observation.
+ method (function reference):
+ The method to be used to tokenize the step.
+ **method_kwargs (keyword arguments):
+ The arguments to be passed to the corresponding method function.
+ """
+ super().__init__(index=step.index)
+ self.step = method(self, step, **method_kwargs)
+
+ def __eq__(self, value):
+ return isinstance(value, AtomicPartialObservation) and self.step == value.step
+
+ def random_subset(self, step: Step, percent_missing: float):
+ """Method of tokenization that picks a random subset of fluents to hide.
+
+ Args:
+ step (Step):
+ The step to tokenize.
+ percent_missing (float):
+ The percentage of fluents to hide.
+
+ Returns:
+ The new step created using a PartialState that takes the hidden fluents into account.
+ """
+ if percent_missing > 1 or percent_missing < 0:
+ raise PercentError()
+
+ fluents = step.state.fluents
+ num_new_fluents = int(len(fluents) * (percent_missing))
+
+ new_fluents = {}
+ # shuffle keys and take an appropriate subset of them
+ hide_fluents_ls = list(fluents)
+ random.shuffle(hide_fluents_ls)
+ hide_fluents_ls = hide_fluents_ls[:num_new_fluents]
+ # get new dict
+ for f in fluents:
+ if f in hide_fluents_ls:
+ new_fluents[f] = None
+ else:
+ new_fluents[f] = step.state[f]
+ return Step(PartialState(new_fluents), step.action, step.index)
+
+ def same_subset(self, step: Step, hide_fluents: Set[Fluent]):
+ """Method of tokenization that hides the same subset of fluents every time.
+
+ Args:
+ step (Step):
+ The step to tokenize.
+ hide_fluents (Set[Fluent]):
+ The set of fluents that will be hidden each time.
+
+ Returns:
+ The new step created using a PartialState that takes the hidden fluents into account.
+ """
+ new_fluents = {}
+ for f in step.state.fluents:
+ if f in hide_fluents:
+ new_fluents[f] = None
+ else:
+ new_fluents[f] = step.state[f]
+ return Step(PartialState(new_fluents), step.action, step.index)
+
+ def get_all_base_fluents(self):
+ """Returns a set of the details all the fluents used at the current step. The value of the fluents is not included."""
+ fluents = set()
+ for f in self.step.state.fluents:
+ fluents.add(str(f)[1:-1])
+ return fluents
+
+
+"""
+ used these to store action and state info with just strings
+
+ class IdentityState(dict):
+ def __hash__(self):
+ return hash(tuple(sorted(self.items())))
+
+ @dataclass
+ class IdentityAction:
+ name: str
+ obj_params: List[str]
+ cost: Optional[int]
+
+ def __str__(self):
+ return self.name + str(self.obj_params) + str(self.cost)
+
+ def __hash__(self):
+ return hash(str(self))
+
+
+ and here is the old matches function
+
+ def _matches(self, key: str, value: str):
+ if key == "action":
+ if self.action is None:
+ return value is None
+ return str(self.action) == value
+ elif key == "fluent_holds":
+ return self.state[value]
+ else:
+ raise InvalidQueryParameter(IdentityObservation, key)
+"""
diff --git a/macq/observation/identity_observation.py b/macq/observation/identity_observation.py
index 96b3cc5b..16b3dd5c 100644
--- a/macq/observation/identity_observation.py
+++ b/macq/observation/identity_observation.py
@@ -37,18 +37,9 @@ def __init__(self, step: Step, **kwargs):
The step associated with this observation.
"""
super().__init__(index=step.index, **kwargs)
- self.state = self.IdentityState(
- {str(fluent): value for fluent, value in step.state.items()}
- )
- self.action = (
- None
- if step.action is None
- else self.IdentityAction(
- step.action.name,
- list(map(lambda o: o.details(), step.action.obj_params)),
- step.action.cost,
- )
- )
+
+ self.state = step.state.clone()
+ self.action = None if step.action is None else step.action.clone()
def __hash__(self):
return hash(self.details())
@@ -58,15 +49,15 @@ def __eq__(self, other):
return False
return self.state == other.state and self.action == other.action
+ def details(self):
+ return f"Obs {str(self.index)}.\n State: {str(self.state)}\n Action: {str(self.action)}"
+
def _matches(self, key: str, value: str):
if key == "action":
if self.action is None:
return value is None
- return str(self.action) == value
+ return self.action.details() == value
elif key == "fluent_holds":
- return self.state[value]
+ return self.state.holds(value)
else:
raise InvalidQueryParameter(IdentityObservation, key)
-
- def details(self):
- return f"Obs {str(self.index)}.\n State: {str(self.state)}\n Action: {str(self.action)}"
diff --git a/macq/trace/action.py b/macq/trace/action.py
index 019c6060..6df95fab 100644
--- a/macq/trace/action.py
+++ b/macq/trace/action.py
@@ -70,3 +70,6 @@ def add_parameter(self, obj: PlanningObject):
The object to be added to the action's object parameters.
"""
self.obj_params.append(obj)
+
+ def _serialize(self):
+ return self.name
diff --git a/macq/trace/fluent.py b/macq/trace/fluent.py
index f4139220..d1581e3e 100644
--- a/macq/trace/fluent.py
+++ b/macq/trace/fluent.py
@@ -34,6 +34,9 @@ def __eq__(self, other):
def details(self):
return " ".join([self.obj_type, self.name])
+ def _serialize(self):
+ return self.details()
+
class Fluent:
"""Fluents of a planning domain.
@@ -77,3 +80,6 @@ def __lt__(self, other):
if not isinstance(other, Fluent):
raise TypeError(f"Cannot compare Fluent to {other.__name__}.")
return str(self) < str(other)
+
+ def _serialize(self):
+ return str(self)
diff --git a/macq/trace/trace_list.py b/macq/trace/trace_list.py
index 9732fad0..86161282 100644
--- a/macq/trace/trace_list.py
+++ b/macq/trace/trace_list.py
@@ -238,4 +238,4 @@ def get_all_transitions(self):
if action:
actions.add(action)
- return {action: self.get_transitions(str(action)) for action in actions}
+ return {action: self.get_transitions(action.details()) for action in actions}
| Generalize ObservationLists queries
- [ ] Allow queries to work with both atomic and sophisticated observations
| 2021-07-08T15:39:56 | 0.0 | [] | [] |
|||
qcpydev/qcpy | qcpydev__qcpy-107 | 80601590e8e88df90d8e4bbe22adc4a15d4df30e | diff --git a/src/circuit_drawing/circuit_drawing.py b/src/circuit_drawing/circuit_drawing.py
index 4f90db9..b0c470e 100644
--- a/src/circuit_drawing/circuit_drawing.py
+++ b/src/circuit_drawing/circuit_drawing.py
@@ -1,8 +1,24 @@
from .drawings import *
from .wire import Wire
+from typing import List
class CircuitDrawing:
+ """Private handler of generating the circuit drawing.
+
+ Note:
+ This is a work in progress and may see some small bugs/invalid formations.
+ In other iterations, this will change functionality!
+
+ Args:
+ qubits (int): number of qubits.
+
+ Attributes:
+ qubits (int): Number of qubits from quantum circuit.
+ circuit_queue (arr): 2D-Queue of strings that format/generate the circuit drawing.
+ max_length (int): Value to compare when needing to extend rows to match lengths.
+ """
+
def __init__(self, qubits: int):
self.qubits = qubits
self.circuit_queue = []
@@ -11,15 +27,37 @@ def __init__(self, qubits: int):
self.circuit_queue.append(Wire())
def equal_length(self) -> None:
+ """Determines and sets all rows of strings to be equal after a gate insertion"""
for i in range(self.qubits):
while self.circuit_queue[i].length < self.max_length:
self.add_drawing(horizontal_line(), i)
def add_drawing(self, drawing: str, qubit: int) -> None:
+ """Inserts drawing at specific qubit drawing row.
+
+ Args:
+ drawing (str): number of qubits.
+ qubit (int): Which qubit the drawing is inserted at.
+ """
self.circuit_queue[qubit].add(drawing)
self.max_length = max(self.max_length, self.circuit_queue[qubit].length)
def insert_single(self, gate: str, qubit: int) -> None:
+ """Inserts a single gate drawing into a specific qubit row.
+
+ Note:
+ This is a work in progress and may see some small bugs/invalid formations.
+ In other iterations, this will change functionality!
+
+ Args:
+ qubits (int): number of qubits.
+
+ Attributes:
+ qubits (int): Number of qubits from quantum circuit.
+ circuit_queue (arr): 2D-Queue of strings that format/generate the circuit drawing.
+ max_length (int): Value to compare when needing to extend rows to match lengths.
+
+ """
to_insert = self.max_length - 1
if self.max_length:
while (
@@ -35,7 +73,13 @@ def insert_single(self, gate: str, qubit: int) -> None:
self.add_drawing(single_gate(gate), qubit)
self.equal_length()
- def two_qubit(self, qubit_1: int, qubit_2: int, gate=None) -> None:
+ def two_qubit(self, qubit_1: int, qubit_2: int, gate: str = "") -> None:
+ """Adds a two qubit gate into the circuit drawing.
+ Args:
+ qubit_1 (int): start of range of two qubits.
+ qubit_2 (int): end of range of two qubits.
+ gate (str): The gate's symbol to be drawn.
+ """
self.equal_length()
start = min(qubit_1, qubit_2)
end = max(qubit_1, qubit_2)
@@ -52,7 +96,13 @@ def two_qubit(self, qubit_1: int, qubit_2: int, gate=None) -> None:
self.add_drawing(swap_point(), qubit_2)
self.equal_length()
- def add_multi(self, gate: str, controls, target: int) -> None:
+ def add_multi(self, gate: str, controls: List[int], target: int) -> None:
+ """Adds a multi gate drawing (toffoli for example)
+ Args:
+ gate (str): Character symbol of the gate that is being inserted.
+ controls (arr): array of controls on the gate.
+ target (int): Where the gate drawing will be inserted.
+ """
controls.append(target)
self.equal_length()
for i in range(self.qubits):
@@ -74,12 +124,28 @@ def add_multi(self, gate: str, controls, target: int) -> None:
self.equal_length()
def add_swap(self, qubit_1, qubit_2) -> None:
+ """Draws a swap gate on a circuit drawing.
+ Args:
+ qubit_2 (int): first qubit to add 'x' drawing.
+ qubit_1 (int): second qubit to add 'x' drawing.
+ """
self.two_qubit(qubit_1=qubit_1, qubit_2=qubit_2)
- def add_control(self, gate, control, target) -> None:
+ def add_control(self, gate: str, control: int, target: int) -> None:
+ """Adds a gate that has a singular controlled qubit to the drawing.
+ Args:
+ gate (str): Character symbol for the target drawing.
+ control (int): Control qubit.
+ target (int): Target qubit.
+ """
self.two_qubit(qubit_1=control, qubit_2=target, gate=gate)
- def add_block(self, gate: str, qubits) -> None:
+ def add_block(self, gate: str, qubits: List[int]) -> None:
+ """Adds a block drawing to the circuit drawing (example: RC3X).
+ Args:
+ gate (str): String that represents the gate.
+ qubits (int): Which qubits to know the range of the gate.
+ """
center = (max(qubits) + min(qubits)) // 2
for i in range(self.qubits):
if i == center:
@@ -92,7 +158,16 @@ def add_block(self, gate: str, qubits) -> None:
self.add_drawing(block_connect(), i)
self.equal_length()
- def make_wire(self, wire, i) -> str:
+ def make_wire(self, wire: List[str], i: int) -> str:
+ """Creates an entire row drawing to print for a singular qubit.
+
+ Args:
+ wire (arr): Array of strings to concatenate together.
+ i (int): Which qubit is being drawn.
+ Returns:
+ str: Returns a string of the generated qubit row.
+
+ """
top = [" "]
middle = ["q" + str(i) + "â"]
bottom = [" "]
@@ -109,6 +184,10 @@ def make_wire(self, wire, i) -> str:
return "".join(top) + "\n" + "".join(middle) + "\n" + "".join(bottom) + "\n"
def make(self) -> str:
+ """Generates the entirety of the string to print.
+ Returns:
+ str: Combination of all qubit strings in a single string.
+ """
output = ""
for i in range(len(self.circuit_queue)):
output += self.make_wire(self.circuit_queue[i].content, i)
diff --git a/src/circuit_drawing/drawings.py b/src/circuit_drawing/drawings.py
index e9e983f..ca5574f 100644
--- a/src/circuit_drawing/drawings.py
+++ b/src/circuit_drawing/drawings.py
@@ -1,4 +1,11 @@
def multi_control(is_connector: bool = False, is_end: bool = False) -> str:
+ """Formats a controlled section of the drawing
+ Args:
+ is_connector (bool): Determines if this is not the bottom or top of a drawing.
+ is_end (int): Determines if the connector is at the end of a multi drawing.
+ Returns:
+ str: Formatted version of a multi controlled drawing.
+ """
res = " âââ ââ â "
if is_connector:
res = " â âââ ââ â "
@@ -8,30 +15,64 @@ def multi_control(is_connector: bool = False, is_end: bool = False) -> str:
def multi_connect() -> str:
+ """Shows the drawing of when a qubit is simply passed through in the logic
+ Returns:
+ str: A connect drawing.
+ """
return " â âââ¼ââ â "
def horizontal_line() -> str:
+ """A simple horizontal line in the circuit drawing
+ Returns:
+ str: A horizontal line block.
+ """
return " âââââ "
def block_bottom() -> str:
+ """When a block has ended this is called in circuit drawing.
+ Returns:
+ str: End of the block drawing.
+ """
return "â â⤠ââââââ"
-def block_connect():
+def block_connect() -> str:
+ """When a qubit is in range of a block
+ Returns:
+ str: A connector for a block drawing.
+ """
return "â â⤠ââ â"
def block_gate(gate: str) -> str:
+ """The "center" of a block drawing
+ Args:
+ gate (str): Not currently used (Needs to change that!)
+ Returns:
+ str: Center block drawing.
+ """
return "â ââ¤MULââ â"
def block_top() -> str:
+ """The start of the range for a block drawing
+ Returns:
+ str: Top of a block drawing.
+ """
return "âââââ⤠ââ â"
-def single_gate(gate, is_controlled: bool = False, is_start: bool = False):
+def single_gate(gate: str, is_controlled: bool = False, is_start: bool = False) -> str:
+ """Draws a gate with it's symbol inside.
+ Args:
+ gate (str): String char representation of a quantum gate.
+ is_controlled (bool): Determines if the gate is controlled by another qubit.
+ is_start (bool): Determines if this gate is upside down with a target qubit.
+ Returns:
+ str: A quantum gate to then be inserted into a circuit drawing.
+ """
top = "âââ´ââ" if (is_controlled and not is_start) else "âââââ"
middle = "â¤"
if len(gate) == 1:
@@ -43,9 +84,17 @@ def single_gate(gate, is_controlled: bool = False, is_start: bool = False):
return top + middle + ("âââââ" if not is_start else "âââ¬ââ")
-def swap_point():
+def swap_point() -> str:
+ """Block drawing of a swap gate.
+ Returns:
+ str: Swap gate drawing.
+ """
return " âââ³ââ "
-def vertical_line():
+def vertical_line() -> str:
+ """A simple vertical line block.
+ Returns:
+ str: A string of a vertical line block.
+ """
return " â "
diff --git a/src/circuit_drawing/wire.py b/src/circuit_drawing/wire.py
index 9718b26..66da6bf 100644
--- a/src/circuit_drawing/wire.py
+++ b/src/circuit_drawing/wire.py
@@ -1,17 +1,44 @@
class Wire:
+ """Private handler of array of strings
+ Note:
+ This is a work in progress and may see some small bugs/invalid formations.
+ In other iterations, this will change functionality!
+ Attributes:
+ length (int): Length of the array of content.
+ content (arr): Array of strings that were inserted into by circuit_drawing.
+ """
+
def __init__(self):
self.length = 0
self.content = []
def add(self, to_add: str) -> None:
+ """Appends a string into the content
+ Args:
+ to_add (str): String to add.
+ """
+
self.length += 1
self.content.append(to_add)
def insert(self, to_insert: int, to_add: str) -> None:
+ """Inserts a string into the content array
+ Args:
+ to_insert (int): Where to insert drawing.
+ to_add (str): The string to insert.
+ """
+
if to_insert >= self.length:
self.add(to_add)
else:
self.content[to_insert] = to_add
def at(self, index: int) -> str:
+ """Returns the string at a certain index
+ Args:
+ index (int): To try and find the string at a specific location.
+ Returns:
+ str: empty if nothing was found, else the value stored at the index.
+ """
+
return "" if index >= self.length else self.content[index]
| Missing Docstrings for circuit_drawing
Currently none of circuit_drawing has docstrings; they need to be added.
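For reference, the diff above settles on Google-style docstrings; a minimal sketch of the convention (the wording here is illustrative):
```python
def add_drawing(self, drawing: str, qubit: int) -> None:
    """Inserts a drawing block at a specific qubit drawing row.

    Args:
        drawing (str): The string block to append.
        qubit (int): Index of the qubit row receiving the block.
    """
    ...
```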
| 2024-10-28T03:17:23 | 0.0 | [] | [] |
|||
memiiso/pyliquibase | memiiso__pyliquibase-80 | a6841d11e93c9dfaa2f1f0a3ba56c5313afe9a16 | diff --git a/pyliquibase/__init__.py b/pyliquibase/__init__.py
index e2237d2..26580b7 100644
--- a/pyliquibase/__init__.py
+++ b/pyliquibase/__init__.py
@@ -18,7 +18,7 @@
class Pyliquibase():
- def __init__(self, defaultsFile: str,
+ def __init__(self, defaultsFile: str = None,
liquibaseDir: str = None,
jdbcDriversDir: str = None,
additionalClasspath: str = None,
@@ -233,12 +233,9 @@ def _download_file(self, url: str, destination: str) -> None:
def main():
parser = argparse.ArgumentParser()
- parser.add_argument('--defaultsFile', type=str, default="liquibase.properties",
- help='Relative path to liquibase.properties file'
- )
_args, args = parser.parse_known_args()
- pl = Pyliquibase(defaultsFile=_args.defaultsFile)
+ pl = Pyliquibase()
pl.execute(*args)
diff --git a/setup.py b/setup.py
index 0870f04..81a4f3b 100644
--- a/setup.py
+++ b/setup.py
@@ -13,7 +13,7 @@
'pyliquibase = pyliquibase:main',
],
},
- version='2.2.0',
+ version='2.3.0',
packages=find_packages(),
author="Memiiso Organization",
description='Python liquibase',
| init project functionality
It's a small thing, but bootstrapping a fresh liquibase project from scratch with pyliquibase would be nice.
It would be helpful if pyliquibase didn't error out due to a missing liquibase.properties file when trying to run `pyliquibase init project`.
From the liquibase documentation:
"You can easily create a new Liquibase project containing a liquibase.properties file by running the [init project](https://docs.liquibase.com/commands/init/project.html) command."
| 2024-07-22T11:36:10 | 0.0 | [] | [] |
|||
IDSIA/sacred | IDSIA__sacred-902 | f7311fcb04e7ff27d192170fa0a06ab49e0c5f63 | diff --git a/sacred/serializer.py b/sacred/serializer.py
index 5be9a19e..3f616961 100644
--- a/sacred/serializer.py
+++ b/sacred/serializer.py
@@ -30,4 +30,4 @@ def flatten(obj):
def restore(flat):
- return json.decode(_json.dumps(flat), keys=True)
+ return json.decode(_json.dumps(flat), keys=True, on_missing="error")
| Non-loadable classes in config files are silently ignored
While digging into a bug (https://github.com/HumanCompatibleAI/imitation/issues/664), I discovered that non-loadable classes in a custom config file are silently ignored when the file is loaded.
The root cause seems to be in
https://github.com/IDSIA/sacred/blob/f7311fcb04e7ff27d192170fa0a06ab49e0c5f63/sacred/serializer.py#L33
where you should call `json.decode` with the `on_missing` parameter set to `error` or at least `warn`.
This is a change introduced from `jsonpickle` 2.1.0 to 2.2.0.
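A minimal sketch of the difference, assuming jsonpickle >= 2.2.0 (the module and class named in the payload are made up):
```python
import jsonpickle

payload = '{"py/object": "missing_module.MissingClass", "x": 1}'

# Default behaviour: the unknown class is quietly degraded to a plain
# dict-like structure, so the problem goes unnoticed.
print(jsonpickle.decode(payload))

# With on_missing="error" the failure is surfaced instead of ignored.
jsonpickle.decode(payload, on_missing="error")
```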
| 2023-01-25T17:24:28 | 0.0 | [] | [] |
|||
Morisset/PyNeb_devel | Morisset__PyNeb_devel-40 | becbe51915cd9c11ee788f2f99e0cfb293edc438 | diff --git a/pyneb/utils/manage_atomic_data.py b/pyneb/utils/manage_atomic_data.py
index 810f1af..cc80f1c 100644
--- a/pyneb/utils/manage_atomic_data.py
+++ b/pyneb/utils/manage_atomic_data.py
@@ -617,18 +617,15 @@ def extract_flt(str_):
extract_flt('(123.00?') -> 123.00
"""
res = ''
- if len(str_) > 0:
- if str_.decode()[0] in ('(', '['):
- str_ = str_[1:]
- for l in str_.decode():
+ this_str_ = str_.decode() if isinstance(str_, bytes) else str_
+ if len(this_str_) > 0 and this_str_[0] in ('(', '['):
+ this_str_ = this_str_[1:]
+ for l in this_str_:
if l.isdigit() or l == '.':
res += l
else:
break
- if res == '':
- return np.nan
- else:
- return float(res)
+ return np.nan if res == '' else float(res)
def readNIST(NISTfile,NLevels=None):
"""
diff --git a/pyneb/version.py b/pyneb/version.py
index 8867348..a058468 100644
--- a/pyneb/version.py
+++ b/pyneb/version.py
@@ -1,2 +1,2 @@
# PyNeb version
-__version__ = '1.1.20'
+__version__ = '1.1.21'
diff --git a/updatePyNeb.md b/updatePyNeb.md
index 2f08cfb..b613d74 100644
--- a/updatePyNeb.md
+++ b/updatePyNeb.md
@@ -27,6 +27,7 @@ Publish the new version on the pypi server
=============================
* Switch to master branch
+* synchronize with repository: git pull
* Run the following from the root directory (where dist is) and check the tar file is created in dist:
`python setup.py sdist`
* Run the following to upload the tar file to pypi server. **Update the value of the distribution in {0}:**
| Preparing 1.1.21
Correct bug in utils/manage_atomic_data/extract_flt: bytes and str confusion.
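A hedged sketch of the intended behaviour after the fix, following the docstring in the diff above (the import path is assumed):
```python
import numpy as np
from pyneb.utils.manage_atomic_data import extract_flt  # assumed path

# bytes and str inputs are now handled the same way.
assert extract_flt(b"(123.00?") == 123.0
assert extract_flt("(123.00?") == 123.0
assert np.isnan(extract_flt(""))
```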
| 2024-11-07T19:47:35 | 0.0 | [] | [] |
|||
adafruit/Adafruit_CircuitPython_RFM69 | adafruit__Adafruit_CircuitPython_RFM69-53 | cf64114ea070c9fe867e1777cfa2244fe4ae4b4a | diff --git a/adafruit_rfm69.py b/adafruit_rfm69.py
index 23df7f5..407407f 100644
--- a/adafruit_rfm69.py
+++ b/adafruit_rfm69.py
@@ -312,7 +312,7 @@ def __init__( # pylint: disable=invalid-name
self.reset() # Reset the chip.
# Check the version of the chip.
version = self._read_u8(_REG_VERSION)
- if version != 0x24:
+ if version not in (0x23, 0x24):
raise RuntimeError("Invalid RFM69 version, check wiring!")
self.idle() # Enter idle state.
# Setup the chip in a similar way to the RadioHead RFM69 library.
| Feather RP2040 RFM69 Packet Radio version 35 not supported
A recently purchased Feather has an RFM69 device that reports version 0x23 (decimal 35), which fails the version check in the RFM69 constructor:
```
...
File "/lib/adafruit_rfm69.py", line 316, in __init__
RuntimeError: Invalid RFM69 version, check wiring!
```
Version number extraction from REPL:
```
import time
import board
import digitalio
import adafruit_bus_device.spi_device as spidev
from micropython import const
reset = digitalio.DigitalInOut(board.RFM_RST)
reset.switch_to_output(value=False)
reset.value = True
time.sleep(0.0001)
reset.value = False
time.sleep(0.005)
buff = bytearray(4)
dev = spidev.SPIDevice(board.SPI(), digitalio.DigitalInOut(board.RFM_CS), baudrate=2000000, polarity=0, phase=0)
with dev as d:
    buff[0] = const(0x10) & 0x7f
    d.write(buff, end=1)
    d.readinto(buff, end=4)
print(f"{buff[0]:#0x}")
```
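For reference, a hedged usage sketch once the check accepts both silicon revisions; the frequency and pin names are illustrative:
```python
import board
import digitalio
import adafruit_rfm69

# With the relaxed version check, this constructor should no longer raise
# "Invalid RFM69 version" for chips reporting 0x23.
rfm = adafruit_rfm69.RFM69(
    board.SPI(),
    digitalio.DigitalInOut(board.RFM_CS),
    digitalio.DigitalInOut(board.RFM_RST),
    915.0,  # frequency in MHz; depends on the module fitted
)
```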
| 2024-04-21T17:35:22 | 0.0 | [] | [] |
|||
michael-lazar/jetforce | michael-lazar__jetforce-59 | 3d255a7e0b2886d487faf57bc6e771c447233487 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1222214..b2f6bc4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,7 +2,10 @@
### v0.9.0 (unreleased)
-N/A
+#### Fixes
+
+- Fix not sending the complete certificate chain for TLS certificates
+ that include it.
### v0.8.2 (2021-03-21)
diff --git a/jetforce/tls.py b/jetforce/tls.py
index 641fa66..5cf8312 100644
--- a/jetforce/tls.py
+++ b/jetforce/tls.py
@@ -179,7 +179,7 @@ def _makeContext(self) -> OpenSSL.SSL.Context:
ctx.set_options(self._options)
ctx.set_mode(self._mode)
- ctx.use_certificate_file(self.certfile)
+ ctx.use_certificate_chain_file(self.certfile)
ctx.use_privatekey_file(self.keyfile or self.certfile)
for extraCert in self.extraCertChain:
ctx.add_extra_chain_cert(extraCert)
| jetforce not serving the full certificate chain
```
$ openssl s_client -connect mozz.us:1965
CONNECTED(00000005)
depth=0 CN = mozz.us
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = mozz.us
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 CN = mozz.us
verify return:1
```
https://testtls.com/mozz.us/1965
> Chain Of Trust | CRITICAL | failed (chain incomplete).
I'm using my Let's Encrypt full certificate chain in my jetforce settings
```
certfile = "/etc/letsencrypt/live/mozz.us/fullchain.pem"
keyfile = "/etc/letsencrypt/live/mozz.us/privkey.pem"
```
It looks like this twisted method in jetforce
```
ctx.use_certificate_file(self.certfile)
```
needs to be switched to
```
ctx.use_certificate_chain_file(self.certfile)
```
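A hedged pyOpenSSL sketch of the difference (paths are illustrative): `use_certificate_file()` loads only the leaf certificate, while `use_certificate_chain_file()` also loads any intermediates bundled in `fullchain.pem`, which clients need to complete the chain of trust.
```python
import OpenSSL.SSL as SSL

ctx = SSL.Context(SSL.TLSv1_2_METHOD)
# Sends the leaf plus the bundled intermediates during the handshake.
ctx.use_certificate_chain_file("/etc/letsencrypt/live/example.com/fullchain.pem")
ctx.use_privatekey_file("/etc/letsencrypt/live/example.com/privkey.pem")
```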
| 2021-05-04T23:51:42 | 0.0 | [] | [] |
|||
pythonguis/pyqtconfig | pythonguis__pyqtconfig-23 | 1cd1679796c929a3700bd84a5ea6226b4ec13d38 | diff --git a/pyqtconfig/config.py b/pyqtconfig/config.py
index aa4d8d8..8259b4f 100644
--- a/pyqtconfig/config.py
+++ b/pyqtconfig/config.py
@@ -1,19 +1,20 @@
# -*- coding: utf-8 -*-
+''' PyQtConfig is a simple API for handling, persisting and synchronising
+ configuration within PyQt applications.
+'''
from __future__ import unicode_literals
import logging
-# Import PyQt5 classes
-from .qt import *
-
-import os
-import sys
-import numpy as np
import types
+from collections import OrderedDict
-from collections import defaultdict, OrderedDict
-import operator
-import logging
-
+# Import PyQt5 classes
+from .qt import (QComboBox, QCheckBox, QAction,
+ QActionGroup, QPushButton, QSpinBox,
+ QDoubleSpinBox, QPlainTextEdit, QLineEdit,
+ QListWidget, QSlider, QButtonGroup,
+ QTabWidget, QVariant, Qt, QMutex, QMutexLocker, QSettings,
+ QObject, pyqtSignal)
try:
import xml.etree.cElementTree as et
except ImportError:
@@ -23,7 +24,7 @@
QVariant
except NameError:
QVariant = None
-
+
RECALCULATE_ALL = 1
RECALCULATE_VIEW = 2
@@ -37,10 +38,11 @@ def types_MethodType(fn, handler):
def _convert_list_type_from_XML(vs):
'''
- Lists are a complex type with possibility for mixed sub-types. Therefore each
- sub-entity must be wrapped with a type specifier.
+ Lists are a complex type with possibility for mixed sub-types. Therefore
+ each sub-entity must be wrapped with a type specifier.
'''
- vlist = vs.findall('ListItem') + vs.findall('ConfigListItem') # ConfigListItem is legacy
+ # ConfigListItem is legacy
+ vlist = vs.findall('ListItem') + vs.findall('ConfigListItem')
l = []
for xconfig in vlist:
v = xconfig.text
@@ -53,8 +55,8 @@ def _convert_list_type_from_XML(vs):
def _convert_list_type_to_XML(co, vs):
'''
- Lists are a complex type with possibility for mixed sub-types. Therefore each
- sub-entity must be wrapped with a type specifier.
+ Lists are a complex type with possibility for mixed sub-types. Therefore
+ each sub-entity must be wrapped with a type specifier.
'''
for cv in vs:
c = et.SubElement(co, "ListItem")
@@ -66,8 +68,8 @@ def _convert_list_type_to_XML(co, vs):
def _convert_dict_type_from_XML(vs):
'''
- Dicts are a complex type with possibility for mixed sub-types. Therefore each
- sub-entity must be wrapped with a type specifier.
+ Dicts are a complex type with possibility for mixed sub-types. Therefore
+ each sub-entity must be wrapped with a type specifier.
'''
vlist = vs.findall('DictItem')
d = {}
@@ -82,8 +84,8 @@ def _convert_dict_type_from_XML(vs):
def _convert_dict_type_to_XML(co, vs):
'''
- Dicts are a complex type with possibility for mixed sub-types. Therefore each
- sub-entity must be wrapped with a type specifier.
+ Dicts are a complex type with possibility for mixed sub-types. Therefore
+ each sub-entity must be wrapped with a type specifier.
'''
for k, v in vs.items():
c = et.SubElement(co, "DictItem")
@@ -98,6 +100,7 @@ def _apply_text_str(co, s):
co.text = str(s)
return co
+
CONVERT_TYPE_TO_XML = {
'str': _apply_text_str,
'unicode': _apply_text_str,
@@ -125,54 +128,59 @@ def _apply_text_str(co, s):
def build_dict_mapper(mdict):
'''
- Build a map function pair for forward and reverse mapping from a specified dict
-
- Mapping requires both a forward and reverse (get, set) mapping function. This function
- is used to automatically convert a supplied dict to a forward and reverse paired lambda.
-
- :param mdict: A dictionary of display values (keys) and stored values (values)
+ Build a map function pair for forward and reverse mapping from a specified
+ dict
+
+ Mapping requires both a forward and reverse (get, set) mapping function.
+ This function is used to automatically convert a supplied dict to a forward
+ and reverse paired lambda.
+
+ :param mdict: A dictionary of display values (keys) and stored values
+ (values)
:type mdict: dict
:rtype: 2-tuple of lambdas that perform forward and reverse map
-
+
'''
rdict = {v: k for k, v in mdict.items()}
return (
lambda x: mdict[x] if x in mdict else x,
lambda x: rdict[x] if x in rdict else x,
- )
+ )
+
try:
# Python2.7
unicode
-except:
+except NameError:
# Python3 recoding
def unicode(s):
if isinstance(s, bytes):
return s.decode('utf-8')
- else:
- return s
+ return s
# Basestring for typechecking
try:
basestring
-except:
+except NameError:
basestring = str
def build_tuple_mapper(mlist):
'''
- Build a map function pair for forward and reverse mapping from a specified list of tuples
-
- :param mlist: A list of tuples of display values (keys) and stored values (values)
+ Build a map function pair for forward and reverse mapping from a specified
+ list of tuples
+
+ :param mlist: A list of tuples of display values (keys) and stored values
+ (values)
:type mlist: list-of-tuples
:rtype: 2-tuple of lambdas that perform forward and reverse map
-
+
'''
rdict = {v: k for k, v in mlist}
return (
- lambda x: mdict[x] if x in mdict else x,
+ lambda x: mlist[x] if x in mlist else x,
lambda x: rdict[x] if x in rdict else x,
- )
+ )
# CUSTOM HANDLERS
@@ -250,8 +258,7 @@ def _get_QActionGroup(self):
"""
if self.checkedAction():
return self.actions().index(self.checkedAction())
- else:
- return None
+ return None
def _set_QActionGroup(self, v):
@@ -352,10 +359,10 @@ def _set_QPlainTextEdit(self, v):
def _event_QPlainTextEdit(self):
"""
Return current value changed signal for QPlainTextEdit box.
-
- Note that this is not a native Qt signal but a signal manually fired on
- the user's pressing the "Apply changes" to the code button. Attaching to the
- modified signal would trigger recalculation on every key-press.
+
+ Note that this is not a native Qt signal but a signal manually fired on
+ the user's pressing the "Apply changes" to the code button. Attaching
+ to the modified signal would trigger recalculation on every key-press.
"""
return self.sourceChangesApplied
@@ -399,7 +406,8 @@ def _set_CodeEditor(self, v):
def _event_CodeEditor(self):
"""
- Return current value changed signal for CodeEditor box. Wraps _event_QPlainTextEdit.
+ Return current value changed signal for
+ CodeEditor box. Wraps _event_QPlainTextEdit.
"""
return _event_QPlainTextEdit(self)
@@ -408,7 +416,7 @@ def _event_CodeEditor(self):
def _get_QListWidget(self):
"""
Get currently selected values in QListWidget via re-mapping filter.
-
+
Selected values are returned as a list.
"""
return [self._get_map(s.text()) for s in self.selectedItems()]
@@ -417,12 +425,14 @@ def _get_QListWidget(self):
def _set_QListWidget(self, v):
"""
Set currently selected values in QListWidget via re-mapping filter.
-
+
Supply values to be selected as a list.
"""
if v:
for s in v:
- self.findItems(unicode(self._set_map(s)), Qt.MatchExactly)[0].setSelected(True)
+ self.findItems(
+ unicode(self._set_map(s)),
+ Qt.MatchExactly)[0].setSelected(True)
def _event_QListWidget(self):
@@ -436,7 +446,7 @@ def _event_QListWidget(self):
def _get_QListWidgetAddRemove(self):
"""
Get current values in QListWidget via re-mapping filter.
-
+
Selected values are returned as a list.
"""
return [self._get_map(self.item(n).text()) for n in range(0, self.count())]
@@ -445,7 +455,7 @@ def _get_QListWidgetAddRemove(self):
def _set_QListWidgetAddRemove(self, v):
"""
Set currently values in QListWidget via re-mapping filter.
-
+
Supply values to be selected as a list.
"""
block = self.blockSignals(True)
@@ -506,11 +516,11 @@ def _event_QNoneDoubleSpinBox(self):
return self.valueChanged
-#QCheckTreeWidget
+# QCheckTreeWidget
def _get_QCheckTreeWidget(self):
"""
Get currently checked values in QCheckTreeWidget via re-mapping filter.
-
+
Selected values are returned as a list.
"""
return [self._get_map(s) for s in self._checked_item_cache]
@@ -519,12 +529,14 @@ def _get_QCheckTreeWidget(self):
def _set_QCheckTreeWidget(self, v):
"""
Set currently checked values in QCheckTreeWidget via re-mapping filter.
-
+
Supply values to be selected as a list.
"""
if v:
for s in v:
- f = self.findItems(unicode(self._set_map(s)), Qt.MatchExactly | Qt.MatchRecursive)
+ f = self.findItems(
+ unicode(self._set_map(s)),
+ Qt.MatchExactly | Qt.MatchRecursive)
if f:
f[0].setCheckState(0, Qt.Checked)
@@ -534,8 +546,8 @@ def _event_QCheckTreeWidget(self):
Return current checked changed signal for QCheckTreeWidget.
"""
return self.itemCheckedChanged
-
-
+
+
# QSlider
def _get_QSlider(self):
"""
@@ -556,9 +568,9 @@ def _event_QSlider(self):
Return value change signal for QSlider
"""
return self.valueChanged
-
-#QButtonGroup
+
+# QButtonGroup
def _get_QButtonGroup(self):
"""
Get a list of (index, checked) tuples for the buttons in the group
@@ -568,7 +580,8 @@ def _get_QButtonGroup(self):
def _set_QButtonGroup(self, v):
"""
- Set the states for all buttons in a group from a list of (index, checked) tuples
+ Set the states for all buttons in a group from a list of
+ (index, checked) tuples
"""
for idx, state in v:
self.buttons()[idx].setChecked(state)
@@ -581,7 +594,7 @@ def _event_QButtonGroup(self):
return self.buttonClicked
-#QTabWidget
+# QTabWidget
def _get_QTabWidget(self):
"""
Get the current tabulator index
@@ -600,8 +613,8 @@ def _event_QTabWidget(self):
"""
Return currentChanged signal for QTabWidget
"""
- return self.currentChanged
-
+ return self.currentChanged
+
HOOKS = {
QComboBox: (_get_QComboBox, _set_QComboBox, _event_QComboBox),
@@ -610,8 +623,10 @@ def _event_QTabWidget(self):
QActionGroup: (_get_QActionGroup, _set_QActionGroup, _event_QActionGroup),
QPushButton: (_get_QPushButton, _set_QPushButton, _event_QPushButton),
QSpinBox: (_get_QSpinBox, _set_QSpinBox, _event_QSpinBox),
- QDoubleSpinBox: (_get_QDoubleSpinBox, _set_QDoubleSpinBox, _event_QDoubleSpinBox),
- QPlainTextEdit: (_get_QPlainTextEdit, _set_QPlainTextEdit, _event_QPlainTextEdit),
+ QDoubleSpinBox: (
+ _get_QDoubleSpinBox, _set_QDoubleSpinBox, _event_QDoubleSpinBox),
+ QPlainTextEdit: (
+ _get_QPlainTextEdit, _set_QPlainTextEdit, _event_QPlainTextEdit),
QLineEdit: (_get_QLineEdit, _set_QLineEdit, _event_QLineEdit),
QListWidget: (_get_QListWidget, _set_QListWidget, _event_QListWidget),
QSlider: (_get_QSlider, _set_QSlider, _event_QSlider),
@@ -619,12 +634,15 @@ def _event_QTabWidget(self):
QTabWidget: (_get_QTabWidget, _set_QTabWidget, _event_QTabWidget)
}
+
# ConfigManager handles configuration for a given appview
-# Supports default values, change signals, export/import from file (for workspace saving)
+# Supports default values, change signals, export/import from file
+# (for workspace saving)
class ConfigManagerBase(QObject):
# Signals
- updated = pyqtSignal(int) # Triggered anytime configuration is changed (refresh)
+ # Triggered anytime configuration is changed (refresh)
+ updated = pyqtSignal(int)
def __init__(self, defaults=None, *args, **kwargs):
super(ConfigManagerBase, self).__init__(*args, **kwargs)
@@ -634,8 +652,9 @@ def __init__(self, defaults=None, *args, **kwargs):
self.reset()
if defaults is None:
defaults = {}
-
- self.defaults = defaults # Same mapping as above, used when config not set
+
+ # Same mapping as above, used when config not set
+ self.defaults = defaults
def _get(self, key):
with QMutexLocker(self.mutex):
@@ -653,12 +672,12 @@ def _get_default(self, key):
# Get config
def get(self, key):
- """
+ """
Get config value for a given key from the config manager.
-
- Returns the value that matches the supplied key. If the value is not set a
- default value will be returned as set by set_defaults.
-
+
+ Returns the value that matches the supplied key. If the value is
+ not set a default value will be returned as set by set_defaults.
+
:param key: The configuration key to return a config value for
:type key: str
:rtype: Any supported (str, int, bool, list-of-supported-types)
@@ -666,22 +685,22 @@ def get(self, key):
v = self._get(key)
if v is not None:
return v
- else:
- return self._get_default(key)
+ return self._get_default(key)
def set(self, key, value, trigger_handler=True, trigger_update=True):
- """
+ """
Set config value for a given key in the config manager.
-
- Set key to value. The optional trigger_update determines whether event hooks
- will fire for this key (and so re-calculation). It is useful to suppress these
- when updating multiple values for example.
-
+
+ Set key to value. The optional trigger_update determines whether
+ event hooks will fire for this key (and so re-calculation). It is
+ useful to suppress these when updating multiple values for example.
+
:param key: The configuration key to set
:type key: str
:param value: The value to set the configuration key to
- :type value: Any supported (str, int, bool, list-of-supported-types)
- :rtype: bool (success)
+ :type value: Any supported
+ (str, int, bool, list-of-supported-types)
+ :rtype: bool (success)
"""
old = self._get(key)
if old is not None and old == value:
@@ -700,7 +719,9 @@ def set(self, key, value, trigger_handler=True, trigger_update=True):
# Trigger update notification
if trigger_update:
- self.updated.emit(self.eventhooks[key] if key in self.eventhooks else RECALCULATE_ALL)
+ self.updated.emit(
+ self.eventhooks[key] if key in self.eventhooks
+ else RECALCULATE_ALL)
return True
@@ -708,18 +729,21 @@ def set(self, key, value, trigger_handler=True, trigger_update=True):
def set_default(self, key, value, eventhook=RECALCULATE_ALL):
"""
Set the default value for a given key.
-
- This will be returned if the value is
- not set in the current config. It is important to include defaults for all
- possible config values for backward compatibility with earlier versions of a plugin.
-
+
+ This will be returned if the value is
+ not set in the current config. It is important to include defaults for
+ all possible config values for backward compatibility with earlier
+ versions of a plugin.
+
:param key: The configuration key to set
:type key: str
:param value: The value to set the configuration key to
:type value: Any supported (str, int, bool, list-of-supported-types)
- :param eventhook: Attach either a full recalculation trigger (default), or a view-only recalculation trigger to these values.
+ :param eventhook: Attach either a full recalculation trigger
+ (default), or a view-only recalculation trigger
+ to these values.
:type eventhook: int RECALCULATE_ALL, RECALCULATE_VIEWS
-
+
"""
self.defaults[key] = value
@@ -729,39 +753,42 @@ def set_default(self, key, value, eventhook=RECALCULATE_ALL):
def set_defaults(self, keyvalues, eventhook=RECALCULATE_ALL):
"""
Set the default value for a set of keys.
-
- These will be returned if the value is
- not set in the current config. It is important to include defaults for all
- possible config values for backward compatibility with earlier versions of a plugin.
-
+
+ These will be returned if the value is
+ not set in the current config. It is important to include defaults for
+ all possible config values for backward compatibility with earlier
+ versions of a plugin.
+
:param keyvalues: A dictionary of keys and values to set as defaults
:type key: dict
- :param eventhook: Attach either a full recalculation trigger (default), or a view-only recalculation trigger to these values.
+ :param eventhook: Attach either a full recalculation trigger (default),
+ or a view-only recalculation trigger to these values.
:type eventhook: int RECALCULATE_ALL, RECALCULATE_VIEWS
-
+
"""
for key, value in list(keyvalues.items()):
self.defaults[key] = value
self.eventhooks[key] = eventhook
- # Updating the defaults may update the config (if anything without a config value
- # is set by it; should check)
+ # Updating the defaults may update the config (if anything
+ # without a config value is set by it; should check)
self.updated.emit(eventhook)
# Completely replace current config (wipe all other settings)
- def replace(self, keyvalues, trigger_update=True):
+ def replace(self, keyvalues):
"""
Completely reset the config with a set of key values.
-
- Note that this does not wipe handlers or triggers (see reset), it simply replaces the values
- in the config entirely. It is the equivalent of unsetting all keys, followed by a
- set_many. Anything not in the supplied keyvalues will revert to default.
-
+
+ Note that this does not wipe handlers or triggers (see reset), it
+ simply replaces the values in the config entirely. It is the
+ equivalent of unsetting all keys, followed by a set_many.
+ Anything not in the supplied keyvalues will revert to default.
+
:param keyvalues: A dictionary of keys and values to set as defaults
:type keyvalues: dict
- :param trigger_update: Flag whether to trigger a config update (+recalculation) after all values are set.
- :type trigger_update: bool
-
+ :param trigger_update: Flag whether to trigger a config update
+ (+recalculation) after all values are set.
+
"""
self.config = []
self.set_many(keyvalues)
@@ -769,14 +796,15 @@ def replace(self, keyvalues, trigger_update=True):
def set_many(self, keyvalues, trigger_update=True):
"""
Set the value of multiple config settings simultaneously.
-
- This postpones the
- triggering of the update signal until all values are set to prevent excess signals.
- The trigger_update option can be set to False to prevent any update at all.
-
+
+ This postpones the triggering of the update signal until all values
+ are set to prevent excess signals. The trigger_update option can be
+ set to False to prevent any update at all.
+
:param keyvalues: A dictionary of keys and values to set.
:type key: dict
- :param trigger_update: Flag whether to trigger a config update (+recalculation) after all values are set.
+ :param trigger_update: Flag whether to trigger a config update
+ (+recalculation) after all values are set.
:type trigger_update: bool
"""
has_updated = False
@@ -790,26 +818,29 @@ def set_many(self, keyvalues, trigger_update=True):
return has_updated
# HANDLERS
- # Handlers are UI elements (combo, select, checkboxes) that automatically update
- # and updated from the config manager. Allows instantaneous updating on config
- # changes and ensuring that elements remain in sync
+ # Handlers are UI elements (combo, select, checkboxes) that automatically
+ # update and updated from the config manager. Allows instantaneous
+ # updating on config changes and ensuring that elements remain in sync
def add_handler(self, key, handler, mapper=(lambda x: x, lambda x: x),
- auto_set_default=True, default=None):
+ default=None):
"""
Add a handler (UI element) for a given config key.
-
- The supplied handler should be a QWidget or QAction through which the user
- can change the config setting. An automatic getter, setter and change-event
- handler is attached which will keep the widget and config in sync. The attached
- handler will default to the correct value from the current config.
-
- An optional mapper may also be provider to handler translation from the values
- shown in the UI and those saved/loaded from the config.
+
+ The supplied handler should be a QWidget or QAction through which
+ the user can change the config setting. An automatic getter, setter
+ and change-event handler is attached which will keep the widget
+ and config in sync. The attached handler will default to the correct
+ value from the current config.
+
+ An optional mapper may also be provider to handler translation from
+ the values shown in the UI and those saved/loaded from the config.
"""
- # Add map handler for converting displayed values to internal config data
- if isinstance(mapper, (dict, OrderedDict)): # By default allow dict types to be used
+ # Add map handler for converting displayed values to
+ # internal config data
+ if isinstance(mapper, (dict, OrderedDict)):
+ # By default allow dict types to be used
mapper = build_dict_mapper(mapper)
elif isinstance(mapper, list) and isinstance(mapper[0], tuple):
@@ -817,12 +848,12 @@ def add_handler(self, key, handler, mapper=(lambda x: x, lambda x: x),
handler._get_map, handler._set_map = mapper
- if key in self.handlers: # Already there; so skip must remove first to replace
+ if key in self.handlers:
+ # Already there; so skip must remove first to replace
return
self.handlers[key] = handler
-
# Look for class in hooks and add getter, setter, updater
cls = self._get_hook(handler)
hookg, hooks, hooku = self.hooks[cls]
@@ -832,8 +863,8 @@ def add_handler(self, key, handler, mapper=(lambda x: x, lambda x: x),
handler.updater = types_MethodType(hooku, handler)
logging.debug("Add handler %s for %s" % (type(handler).__name__, key))
- handler_callback = lambda x = None: self.set(key, handler.getter(),
- trigger_handler=False)
+ handler_callback = lambda x=None: self.set(key, handler.getter(),
+ trigger_handler=False)
handler.updater().connect(handler_callback)
# Store this so we can issue a specific remove on deletes
@@ -850,7 +881,8 @@ def add_handler(self, key, handler, mapper=(lambda x: x, lambda x: x),
if self._get(key) is not None:
handler.setter(self._get(key))
- # If the key is in defaults; set the handler to the default state (but don't add to config)
+ # If the key is in defaults; set the handler to the default state
+ # (but don't add to config)
elif key in self.defaults:
handler.setter(self.defaults[key])
@@ -866,7 +898,6 @@ def _get_hook(self, handler):
"type (%s)" % type(handler).__name__)
return cls
-
def add_handlers(self, keyhandlers):
for key, handler in list(keyhandlers.items()):
self.add_handler(key, handler)
@@ -895,7 +926,8 @@ def setXMLConfig(self, root):
config = {}
for xconfig in root.findall('Config/ConfigSetting'):
- #id="experiment_control" type="unicode" value="monocyte at intermediate differentiation stage (GDS2430_2)"/>
+ # id="experiment_control" type="unicode" value="monocyte
+ # at intermediate differentiation stage (GDS2430_2)"/>
if xconfig.get('type') in CONVERT_TYPE_FROM_XML:
v = CONVERT_TYPE_FROM_XML[xconfig.get('type')](xconfig)
config[xconfig.get('id')] = v
@@ -904,7 +936,8 @@ def setXMLConfig(self, root):
def as_dict(self):
'''
- Return the combination of defaults and config as a flat dict (so it can be pickled)
+ Return the combination of defaults and config as a flat dict
+ (so it can be pickled)
'''
result_dict = {}
for k, v in self.defaults.items():
@@ -912,14 +945,15 @@ def as_dict(self):
return result_dict
-
+
class ConfigManager(ConfigManagerBase):
def reset(self):
- """
+ """
Reset the config manager to it's initialised state.
-
- This clears all values, unsets all defaults and removes all handlers, maps, and hooks.
+
+ This clears all values, unsets all defaults and removes all
+ handlers, maps, and hooks.
"""
self.config = {}
self.handlers = {}
@@ -927,7 +961,7 @@ def reset(self):
self.defaults = {}
self.maps = {}
self.eventhooks = {}
-
+
def _get(self, key):
with QMutexLocker(self.mutex):
try:
@@ -943,10 +977,11 @@ def _set(self, key, value):
class QSettingsManager(ConfigManagerBase):
def reset(self):
- """
+ """
Reset the config manager to it's initialised state.
-
- This initialises QSettings, unsets all defaults and removes all handlers, maps, and hooks.
+
+ This initialises QSettings, unsets all defaults and removes all
+ handlers, maps, and hooks.
"""
self.settings = QSettings()
self.handlers = {}
@@ -960,11 +995,13 @@ def _get(self, key):
v = self.settings.value(key, None)
if v is not None:
- if type(v) == QVariant and v.type() == QVariant.Invalid: # Invalid check for Qt4
+ if type(v) == QVariant and v.type() == QVariant.Invalid:
+ # Invalid check for Qt4
return None
- # Map type to that in defaults: required in case QVariant is a string
- # representation of the actual value (e.g. on Windows Reg)
+ # Map type to that in defaults: required in case QVariant is a
+ # string representation of the actual value
+ # (e.g. on Windows Reg)
vt = type(v)
if key in self.defaults:
dt = type(self.defaults[key])
diff --git a/pyqtconfig/demo.py b/pyqtconfig/demo.py
index 329e5f6..c31a643 100644
--- a/pyqtconfig/demo.py
+++ b/pyqtconfig/demo.py
@@ -1,7 +1,10 @@
-from .qt import *
+import sys
from pyqtconfig import ConfigManager
-
+from .qt import (QComboBox, QCheckBox, QSpinBox, QMainWindow,
+ QLineEdit, QApplication, QTextEdit,
+ QGridLayout, QWidget)
+
class MainWindow(QMainWindow):
def __init__(self):
diff --git a/pyqtconfig/qt.py b/pyqtconfig/qt.py
index 243a715..374c80d 100644
--- a/pyqtconfig/qt.py
+++ b/pyqtconfig/qt.py
@@ -1,7 +1,10 @@
+# -*- coding: utf-8 -*-
+''' This module is a header-like file for pyqtconfig.
+'''
from __future__ import unicode_literals
import sys
import os
-import logging
+import importlib
PYSIDE = 0
PYQT4 = 1
@@ -24,32 +27,38 @@
else:
# Try importing in turn
try:
- import PyQt5
+ importlib.import_module('PyQt5')
USE_QT_PY = PYQT5
- except:
+ except ImportError:
try:
- import PyQt4
+ importlib.import_module('PyQt5')
USE_QT_PY = PYQT4
except ImportError:
try:
- import PySide
+ importlib.import_module('PyQt5')
USE_QT_PY = PYSIDE
- except:
+ except ImportError:
pass
# Import PyQt classes accessible in elsewhere through from qt import *
if USE_QT_PY == PYQT5:
- from PyQt5.QtGui import *
- from PyQt5.QtCore import *
- from PyQt5.QtWebKit import *
- from PyQt5.QtNetwork import *
- from PyQt5.QtWidgets import *
- from PyQt5.QtWebKitWidgets import *
+ from PyQt5.QtCore import (QVariant, Qt, QMutex, QMutexLocker, QSettings,
+ QObject, pyqtSignal)
+ from PyQt5.QtWidgets import (QComboBox, QCheckBox, QAction,
+ QActionGroup, QPushButton, QSpinBox,
+ QDoubleSpinBox, QPlainTextEdit, QLineEdit,
+ QListWidget, QSlider, QButtonGroup,
+ QTabWidget, QApplication, QGridLayout,
+ QTextEdit, QWidget, QMainWindow)
elif USE_QT_PY == PYSIDE:
- from PySide.QtGui import *
- from PySide.QtCore import *
- from PySide.QtNetwork import *
+ from PySide.QtGui import (QComboBox, QCheckBox, QAction, QMainWindow,
+ QActionGroup, QPushButton, QSpinBox,
+ QDoubleSpinBox, QPlainTextEdit, QLineEdit,
+ QListWidget, QSlider, QButtonGroup, QWidget,
+ QTabWidget, QApplication, QGridLayout, QTextEdit)
+ from PySide.QtCore import (Signal, Qt, QMutex, QMutexLocker, QSettings,
+ QObject)
pyqtSignal = Signal
@@ -58,7 +67,10 @@
import sip
sip.setapi('QString', 2)
sip.setapi('QVariant', 2)
- from PyQt4.QtGui import *
- from PyQt4.QtCore import *
- from PyQt4.QtWebKit import *
- from PyQt4.QtNetwork import *
+ from PyQt4.QtGui import (QComboBox, QCheckBox, QAction, QMainWindow,
+ QActionGroup, QPushButton, QSpinBox,
+ QDoubleSpinBox, QPlainTextEdit, QLineEdit,
+ QListWidget, QSlider, QButtonGroup, QWidget,
+ QTabWidget, QApplication, QGridLayout, QTextEdit)
+ from PyQt4.QtCore import (QVariant, Qt, QMutex, QMutexLocker, QSettings,
+ QObject, pyqtSignal)
| QtWebKit is deprecated
Dear maintainer
There's an issue with qt.py. You currently import PyQt5.QtWebKit, which is deprecated. pyqtconfig still works if you have pyqt5 installed via apt or dnf (which is QT 5.9.5 for me at this moment), but not if it's installed via pip3 (which is QT 5.11.1 for me at this moment). So there needs to be some kind of way to import either PyQt5.QtWebKit and PyQt5.QtWebKitWidgets or PyQt5.QtWebEngine and QtWebEngineWidgets, depending on PyQt5.QtCore.QT_VERSION_STR
I also have a question (which may or may not be a stupid question): why do you import everything from PyQt5.QtGui, PyQt5.QtCore, PyQt5.QtWebKit, PyQt5.QtNetwork, PyQt5.QtWidgets and PyQt5.QtWebKitWidgets? This way you have to have extra python packages installed which you don't necessarily need. Isn't there a way to do the imports as they are needed?
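One way to cope with the deprecation on the consumer side is to try the newer module first and fall back to WebKit. This is only a sketch of that idea (not pyqtconfig's actual code), and it assumes the corresponding Qt packages are installed:

```python
# Hedged sketch: prefer QtWebEngineWidgets (newer Qt builds) and fall back to
# QtWebKitWidgets (older builds that still ship WebKit). Availability depends
# on how PyQt5 was installed (apt/dnf vs pip).
try:
    from PyQt5 import QtWebEngineWidgets as web_widgets
except ImportError:
    from PyQt5 import QtWebKitWidgets as web_widgets
```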
| 2018-08-17T13:43:11 | 0.0 | [] | [] |
|||
starsimhub/starsim | starsimhub__starsim-536 | 7577d15e3a3ce01ccdac8dd39fefd139412d6aa0 | diff --git a/starsim/distributions.py b/starsim/distributions.py
index 2d65c0aa..10d42332 100644
--- a/starsim/distributions.py
+++ b/starsim/distributions.py
@@ -92,10 +92,17 @@ def initialize(self, obj=None, base_seed=None, sim=None, force=False):
if obj is None:
errormsg = 'Must supply a container that contains one or more Dist objects, typically the sim'
raise ValueError(errormsg)
- self.dists = find_dists(obj)
+
+ # Do not look for distributions in the people states, since they shadow the "real" states
+ skip = id(sim.people._states) if sim is not None else None
+
+ # Find and initialize the distributions
+ self.dists = find_dists(obj, skip=skip)
for trace,dist in self.dists.items():
if not dist.initialized or force:
dist.initialize(trace=trace, seed=base_seed, sim=sim, force=force)
+
+ # Confirm the seeds are unique
self.check_seeds()
self.initialized = True
return self
| States with ss.Dist are not initialized correctly
In TBsim, the TB disease module has a `FloatArr` that is initialized from a random Dist:
```python
ss.FloatArr('ppf_LS_to_presymp', default=ss.random())
```
However, every time I run the code, values for `ppf_LS_to_presymp` are different. Results are not reproducible, even with the same seed.
Could be due to recent changes to initialization, but this feels like a real issue. We need a test for this as well.
DistSeedRepeatError when running many sims
When running 4,000 simulations (TBsim), I occasionally trigger a DistSeedRepeatError:
```python
DistSeedRepeatError ss.bernoulli(people_female_default, pars={'p': 0.5}) ss.random(people__states_13459533552_default, pars={})
starsim.distributions.DistSeedRepeatError: A common seed was found between ss.bernoulli(people_female_default, pars={'p': 0.5}) and ss.random(people__states_13459533552_default, pars={}). This is likely caused by incorrect initialization of the parent Dists object.
```
I don't believe this message is caused by incorrect initialization, because the error is triggered only rarely.
| Ah, I found the issue -- this distribution gets found first in `People.states`, where it is given a key based on the object ID, which is effectively random each time:
```py
import starsim as ss

class RandState(ss.Disease):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.add_states(
            ss.FloatArr('test_state', default=ss.random())
        )

    def init_post(self):
        super().init_post()
        print('Initial value', self.test_state[0])

sim = ss.Sim(diseases=RandState(), networks='random')
sim.run()

#0. 'pars_randomnet_n_contacts': # fine
ss.constant(pars_randomnet_n_contacts, pars={'v': 10})
#1. 'people_female_default': # fine
ss.bernoulli(people_female_default, pars={'p': 0.5})
#2. 'people_age_default': # fine
ss.uniform(people_age_default, pars={'low': 0, 'high': 100})
#3. 'networks_randomnet_dist': # fine
ss.Dist(networks_randomnet_dist, dist=RandomNet, pars={})
#4. 'people__states_136907923215568_default': # NOT FINE
ss.random(people__states_136907923215568_default, pars={})
```
I _think_ the best solution is to just skip `People.states` when parsing the object for dists to find, but will need to think if this would have unexpected consequences.
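The irreproducibility follows directly from the `id()`-based key: object ids change between interpreter runs, so anything seeded from such a trace string changes too. A minimal illustration (the seed-derivation step here is an assumption for illustration, not Starsim's exact code):

```python
import zlib
import numpy as np

class State:
    pass

s = State()
trace = f"people__states_{id(s)}_default"  # embeds the object id
print(trace)  # different on every interpreter run

# If the RNG stream is seeded from the trace string (assumed mechanism),
# the 'default' draws also change from run to run:
seed = zlib.crc32(trace.encode())
print(np.random.default_rng(seed).random())
```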
As with #533, this appears to be related to:
```python
ss.FloatArr('ppf_LS_to_presymp', default=ss.random())
```
wherein a FloatArr is to be initialized to a Dist. It's not working, and it also appears as though this might be the cause of the DistSeedRepeatError. | 2024-06-05T15:55:46 | 0.0 | [] | [] |
||
VIDA-NYU/reprozip | VIDA-NYU__reprozip-370 | f356ba0ffc973e051e85d708ab84c0392aa5a4f4 | diff --git a/reprounzip-docker/reprounzip/unpackers/docker.py b/reprounzip-docker/reprounzip/unpackers/docker.py
index 662062050..25f258679 100644
--- a/reprounzip-docker/reprounzip/unpackers/docker.py
+++ b/reprounzip-docker/reprounzip/unpackers/docker.py
@@ -651,8 +651,8 @@ def finalize(self):
uid = gid = 1000
dockerfile.write(
- 'RUN /busybox chown %d:%d %s\n' % (
- uid, gid, shell_escape(unicode_(target))
+ 'RUN ["/busybox", "chown", "%d:%d", %s]\n' % (
+ uid, gid, json.dumps(unicode_(target)),
)
)
| "reprounzip docker upload" not working
I ran the following command to modify the file:
```
reprounzip docker upload solution X1.csv:X2.csv
```
It throws an error saying `exec: "/bin/sh": stat /bin/sh: no such file or directory`
Not sure what needs to be done while zipping the codebase.
Thanks
Log:
```
[+] Building 0.5s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 265B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/reprounzip_image_ms9uyj0vy0:latest 0.0s
=> [1/3] FROM docker.io/library/reprounzip_image_ms9uyj0vy0 0.1s
=> [internal] load build context 0.0s
=> => transferring context: 193.72kB 0.0s
=> [2/3] COPY X1.csv /mnt/nfs/work1/barna/sainyam/sigmod2021_competition/code/X2.csv 0.0s
=> ERROR [3/3] RUN /busybox chown 1000:1000 /mnt/nfs/work1/barna/sainyam/sigmod2021_competition/code/X2.csv 0.3s
------
> [3/3] RUN /busybox chown 1000:1000 /mnt/nfs/work1/barna/sainyam/sigmod2021_competition/code/X2.csv:
#7 0.262 container_linux.go:370: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory
------
executor failed running [/bin/sh -c /busybox chown 1000:1000 /mnt/nfs/work1/barna/sainyam/sigmod2021_competition/code/X2.csv]: exit code: 1
[REPROUNZIP] 15:06:44.124 CRITICAL: docker build failed with code 1
```
| Oh oh, it looks like `RUN` is using the shell. I don't think this was happening before and I notice you are using BuildKit, maybe setting `DOCKER_BUILDKIT=0` would help?
I will work on a fix.
It did not work but I can try the fix that you pushed.
```
(base) sainyams-MacBook-Pro:sigmod2021_competition sainyam$ reprounzip docker upload test X1.csv:X2.csv
Sending build context to Docker daemon 196.6kB
Step 1/3 : FROM reprounzip_image_r53c4bftmz
---> 10c2c00554d8
Step 2/3 : COPY X1.csv /mnt/nfs/work1/barna/sainyam/sigmod2021_competition/code/X2.csv
---> e3f2a96c611d
Step 3/3 : RUN /busybox chown 1000:1000 /mnt/nfs/work1/barna/sainyam/sigmod2021_competition/code/X2.csv
---> Running in f91749a3f1b9
OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
[REPROUNZIP] 15:44:21.394 CRITICAL: docker build failed with code 1
``` | 2021-03-24T22:28:27 | 0.0 | [] | [] |
||
carla-simulator/scenario_runner | carla-simulator__scenario_runner-741 | b6a6cc00874867a7b3b40185cfca5ff2bcd27869 | diff --git a/Docs/CHANGELOG.md b/Docs/CHANGELOG.md
index a50746887..f7133f958 100644
--- a/Docs/CHANGELOG.md
+++ b/Docs/CHANGELOG.md
@@ -18,6 +18,7 @@
- Added support for storyboards with multiple stories
### :bug: Bug Fixes
* Fixed bug at the Getting Started docs which caused an import error
+* Fixed neverending lane change maneuver in OpenSCENARIO
### :ghost: Maintenance
* Extended SimpleVehicleController (OSC) to handle traffic lights
* Generalized visualizer attached to OSC controllers
diff --git a/srunner/scenarioconfigs/openscenario_configuration.py b/srunner/scenarioconfigs/openscenario_configuration.py
index ad15be0ba..ad67eeabd 100644
--- a/srunner/scenarioconfigs/openscenario_configuration.py
+++ b/srunner/scenarioconfigs/openscenario_configuration.py
@@ -260,6 +260,8 @@ def _set_actor_information(self):
if ref_actor.transform is not None:
raise e
break
+ else:
+ raise e
if actor.transform is None:
all_actor_transforms_set = False
diff --git a/srunner/scenariomanager/actorcontrols/simple_vehicle_control.py b/srunner/scenariomanager/actorcontrols/simple_vehicle_control.py
index 90b7481e9..473e78cd1 100644
--- a/srunner/scenariomanager/actorcontrols/simple_vehicle_control.py
+++ b/srunner/scenariomanager/actorcontrols/simple_vehicle_control.py
@@ -232,7 +232,7 @@ def _offset_waypoint(self, transform):
else:
right_vector = transform.get_right_vector()
offset_location = transform.location + carla.Location(x=self._offset*right_vector.x,
- y=self._offset*right_vector.y)
+ y=self._offset*right_vector.y)
return offset_location
@@ -279,7 +279,7 @@ def _set_new_velocity(self, next_location):
if self._consider_traffic_lights:
if (self._actor.is_at_traffic_light() and
- self._actor.get_traffic_light_state() == carla.TrafficLightState.Red):
+ self._actor.get_traffic_light_state() == carla.TrafficLightState.Red):
target_speed = 0
if target_speed < current_speed:
@@ -290,8 +290,11 @@ def _set_new_velocity(self, next_location):
else:
self._actor.set_light_state(carla.VehicleLightState.NONE)
if self._max_acceleration is not None:
- target_speed = min(target_speed, current_speed + (current_time -
- self._last_update) * self._max_acceleration)
+ tmp_speed = min(target_speed, current_speed + (current_time -
+ self._last_update) * self._max_acceleration)
+ # If the tmp_speed is < 0.5 the vehicle may not properly accelerate.
+ # Therefore, we bump the speed to 0.5 m/s if target_speed allows.
+ target_speed = max(tmp_speed, min(0.5, target_speed))
# set new linear velocity
velocity = carla.Vector3D(0, 0, 0)
diff --git a/srunner/scenariomanager/actorcontrols/visualizer.py b/srunner/scenariomanager/actorcontrols/visualizer.py
index 879626ee4..a8f25775c 100644
--- a/srunner/scenariomanager/actorcontrols/visualizer.py
+++ b/srunner/scenariomanager/actorcontrols/visualizer.py
@@ -115,6 +115,11 @@ def render(self):
im_v = cv2.vconcat([self._cv_image_actor, self._cv_image_bird])
cv2.circle(im_v, (900, 300), 80, (170, 170, 170), -1)
text = str(int(round((self._actor.get_velocity().x * 3.6))))+" kph"
+
+ speed = np.sqrt(self._actor.get_velocity().x**2 + self._actor.get_velocity().y**2)
+
+
+ text = str(int(round((speed * 3.6))))+" kph"
text = ' '*(7-len(text)) + text
im_v = cv2.putText(im_v, text, (830, 310), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2, cv2.LINE_AA)
cv2.imshow("", im_v)
diff --git a/srunner/scenariomanager/scenarioatomics/atomic_behaviors.py b/srunner/scenariomanager/scenarioatomics/atomic_behaviors.py
index a46b35828..bfc444e62 100644
--- a/srunner/scenariomanager/scenarioatomics/atomic_behaviors.py
+++ b/srunner/scenariomanager/scenarioatomics/atomic_behaviors.py
@@ -974,6 +974,19 @@ def update(self):
if distance > self._distance_other_lane:
# long enough distance on new lane --> SUCCESS
new_status = py_trees.common.Status.SUCCESS
+
+ new_waypoints = []
+ map_wp = current_position_actor
+ while len(new_waypoints) < 200:
+ map_wps = map_wp.next(2.0)
+ if map_wps:
+ new_waypoints.append(map_wps[0].transform)
+ map_wp = map_wps[0]
+ else:
+ break
+
+ actor_dict[self._actor.id].update_waypoints(new_waypoints, start_time=self._start_time)
+
else:
self._pos_before_lane_change = current_position_actor.transform.location
diff --git a/srunner/tools/openscenario_parser.py b/srunner/tools/openscenario_parser.py
index 0bd528799..676c67803 100644
--- a/srunner/tools/openscenario_parser.py
+++ b/srunner/tools/openscenario_parser.py
@@ -491,7 +491,7 @@ def convert_position_to_transform(position, actor_list=None):
actor_transform = obj_actor.get_transform()
break
- if obj_actor is None:
+ if obj_actor is None or actor_transform is None:
raise AttributeError("Object '{}' provided as position reference is not known".format(obj))
# calculate orientation h, p, r
@@ -590,7 +590,8 @@ def convert_position_to_transform(position, actor_list=None):
is_absolute = True
waypoint = CarlaDataProvider.get_map().get_waypoint_xodr(road_id, lane_id, s)
if waypoint is None:
- raise AttributeError("Lane position cannot be found")
+ raise AttributeError("Lane position 'roadId={}, laneId={}, s={}, offset={}' does not exist".format(
+ road_id, lane_id, s, offset))
transform = waypoint.transform
if lane_pos.find('Orientation') is not None:
@@ -1041,7 +1042,7 @@ def convert_maneuver_to_atomic(action, actor, actor_list, catalogs):
lat_maneuver.find("LaneChangeActionDynamics").attrib.get('value', float("inf")))
atomic = ChangeActorLateralMotion(actor, direction=direction,
distance_lane_change=distance,
- distance_other_lane=1000,
+ distance_other_lane=10,
lane_changes=lane_changes,
name=maneuver_name)
elif private_action.find('LaneOffsetAction') is not None:
| Vehicle Maneuver Stops After LaneChange Event
Hi,
I am trying to run an OSC scenario with lateral and longitudinal events in parallel, but I am not getting the expected results.
Below are my test scenarios:
1) accelerate --> lane change --> decelerate
2) accelerate --> lane change --> lane change --> decelerate
3) accelerate --> lane change --> lane change --> accelerate --> decelerate
In all the cases, the vehicle does not perform the event that comes after the lane change.
Here is my scenario file
[LateralLongitudinalExample.txt](https://github.com/carla-simulator/scenario_runner/files/5953693/LateralLongitudinalExample.txt)
Please check the scenario and kindly let me know if I have missed anything.
Desktop:
OS: `Ubuntu 20.04`
CARLA Version: `CARLA 0.9.11`
Python version: `3.7`
scenario_runner :`master`
Regards,
Sagar
| I'm experiencing a similar issue: #654. Also some problems with lane change: #718
Hi,
I found the cause of this issue
https://github.com/carla-simulator/carla/blob/master/PythonAPI/carla/agents/navigation/local_planner.py
After exhausting `self._waypoints_queue`, resetting `self._global_plan` to `False` is not done.
```python
if len(self._waypoints_queue) == 0 and len(self._waypoint_buffer) == 0:
    control = carla.VehicleControl()
    control.steer = 0.0
    control.throttle = 0.0
    control.brake = 1.0
    control.hand_brake = False
    control.manual_gear_shift = False
    return control
```
So when I made the changes below, the issue was resolved.
```python
def __init__(self, vehicle, opt_dict=None):
    .
    .
    .
    # initializing controller
    self._opt_dict = opt_dict
    self._init_controller(opt_dict)

def run_step(self, debug=False):
    .
    .
    .
    if len(self._waypoints_queue) == 0 and len(self._waypoint_buffer) == 0:
        self._init_controller(self._opt_dict)
        control = carla.VehicleControl()
        control.steer = 0.0
        control.throttle = 0.0
        control.brake = 1.0
        control.hand_brake = False
        control.manual_gear_shift = False
        return control
```
I could create a PR but it is on carla repo. (it may take more time to review)
Hey @sagar-g-v. Sorry for the huge response delay but yes that is indeed the case. _Warning: Long explanation incoming_.
By default, the local planner works by randomly moving the actor through town, while automatically populating the waypoint buffer so that it never ends. Additionally, a specific route can be given by the user (see [this function](https://github.com/carla-simulator/carla/blob/master/PythonAPI/carla/agents/navigation/local_planner.py#L197)). However, this last behavior makes the local planner stop populating the waypoint buffer (last line of the previous link causes [this conditional](https://github.com/carla-simulator/carla/blob/master/PythonAPI/carla/agents/navigation/local_planner.py#L233) to fail) and therefore, once the given route ends, the vehicle stops. This is indeed what you've reported at the previous comment.
OSC lane changes do exactly this: they set the global plan to the specific trajectory of the lane change. By default, these lane changes last for 1000 meters (check [here](https://github.com/carla-simulator/scenario_runner/blob/master/srunner/tools/openscenario_parser.py#L1042)) after the lane change is complete, so this isn't an issue in most cases, except when overwritten by another lateral action (such as lane offset).
Right now, the best solution would be to comment out [this line](https://github.com/carla-simulator/carla/blob/master/PythonAPI/carla/agents/navigation/local_planner.py#L221), which will make the local planner never stop populating the waypoint buffer, even if a specific route is given. This is similar to your approach, but it avoids having to reinitialize everything.
> I could create a PR but it is on carla repo. (it may take more time to review)
If you can, I'd like you to try this approach and see if it fixes your issues. I'll keep testing this and see if it is a good idea to optionally stop the waypoint buffer populating, instead of being always done. If so, I'll do a PR to the main CARLA repo. If not, we could just do a patch at SR as this indeed breaks the LaneChange behavior.
Thank you for the detailed explanation. I am glad that we are on the same page.
I will check and update my status here
@sagar-g-v @glopezdiest There might be another option. We could repopulate the waypoint list from SR side, when we detect that the lane change is completed. | 2021-03-17T12:21:26 | 0.0 | [] | [] |
||
PennLINC/CuBIDS | PennLINC__CuBIDS-276 | 8e90140e6a573acde4ecee4951a1cc38bac0be46 | diff --git a/cubids/cli.py b/cubids/cli.py
index 40c2eca8..a55cecb0 100644
--- a/cubids/cli.py
+++ b/cubids/cli.py
@@ -61,16 +61,6 @@ def _parse_validate():
help="Disregard NIfTI header content during validation",
required=False,
)
- parser.add_argument(
- "--ignore_subject_consistency",
- action="store_true",
- default=True,
- help=(
- "Skip checking that any given file for one "
- "subject is present for all other subjects"
- ),
- required=False,
- )
parser.add_argument(
"--sequential-subjects",
action="store",
diff --git a/cubids/cubids.py b/cubids/cubids.py
index 6fdba68b..842215e0 100644
--- a/cubids/cubids.py
+++ b/cubids/cubids.py
@@ -150,7 +150,7 @@ def datalad_undo_last_commit(self):
Uses git reset --hard to revert to the previous commit.
"""
if not self.is_datalad_clean():
- raise Exception("Untracked changes present. " "Run clear_untracked_changes first")
+ raise Exception("Untracked changes present. Run clear_untracked_changes first")
reset_proc = subprocess.run(["git", "reset", "--hard", "HEAD~1"], cwd=self.path)
reset_proc.check_returncode()
diff --git a/cubids/validator.py b/cubids/validator.py
index 40a130b8..3f670041 100644
--- a/cubids/validator.py
+++ b/cubids/validator.py
@@ -11,15 +11,14 @@
logger = logging.getLogger("cubids-cli")
-def build_validator_call(path, ignore_headers=False, ignore_subject=True):
+def build_validator_call(path, ignore_headers=False):
"""Build a subprocess command to the bids validator."""
# build docker call
- command = ["bids-validator", "--verbose", "--json"]
+ # CuBIDS automatically ignores subject consistency.
+ command = ["bids-validator", "--verbose", "--json", "--ignoreSubjectConsistency"]
if ignore_headers:
command.append("--ignoreNiftiHeaders")
- if ignore_subject:
- command.append("--ignoreSubjectConsistency")
command.append(path)
@@ -39,7 +38,7 @@ def build_subject_paths(bids_dir):
subjects = glob.glob(bids_dir)
if len(subjects) < 1:
- raise ValueError("Couldn't find any subjects " "in the specified directory:\n" + bids_dir)
+ raise ValueError("Couldn't find any subjects in the specified directory:\n" + bids_dir)
subjects_dict = {}
@@ -94,7 +93,7 @@ def parse_issue(issue_dict):
return_dict["files"] = [
get_nested(x, "file", "relativePath") for x in issue_dict.get("files", "")
]
- return_dict["type"] = issue_dict.get("key" "")
+ return_dict["type"] = issue_dict.get("key", "")
return_dict["severity"] = issue_dict.get("severity", "")
return_dict["description"] = issue_dict.get("reason", "")
return_dict["code"] = issue_dict.get("code", "")
diff --git a/cubids/workflows.py b/cubids/workflows.py
index eea9bfa5..37793981 100644
--- a/cubids/workflows.py
+++ b/cubids/workflows.py
@@ -37,7 +37,6 @@ def validate(
sequential,
sequential_subjects,
ignore_nifti_headers,
- ignore_subject_consistency,
):
"""Run the bids validator.
@@ -49,7 +48,6 @@ def validate(
sequential
sequential_subjects
ignore_nifti_headers
- ignore_subject_consistency
"""
# check status of output_prefix, absolute or relative?
abs_path_output = True
@@ -69,7 +67,6 @@ def validate(
call = build_validator_call(
str(bids_dir),
ignore_nifti_headers,
- ignore_subject_consistency,
)
ret = run_validator(call)
@@ -148,8 +145,7 @@ def validate(
# run the validator
nifti_head = ignore_nifti_headers
- subj_consist = ignore_subject_consistency
- call = build_validator_call(tmpdirname, nifti_head, subj_consist)
+ call = build_validator_call(tmpdirname, nifti_head)
ret = run_validator(call)
# parse output
if ret.returncode != 0:
@@ -228,9 +224,6 @@ def validate(
if ignore_nifti_headers:
cmd.append("--ignore_nifti_headers")
- if ignore_subject_consistency:
- cmd.append("--ignore_subject_consistency")
-
elif container_type == "singularity":
cmd = [
"singularity",
@@ -250,9 +243,6 @@ def validate(
if ignore_nifti_headers:
cmd.append("--ignore_nifti_headers")
- if ignore_subject_consistency:
- cmd.append("--ignore_subject_consistency")
-
if sequential:
cmd.append("--sequential")
| `cubids-validate` `--ignore_subject_consistency` can only be True
It looks like `--ignore_subject_consistency` has an action of `store_true` and a default of `True`, so it seems like it can't be `False`. Does that sound accurate?
https://github.com/PennLINC/CuBIDS/blob/71a70267db9f69e6591f6fdcf71d93b89bdb68a9/cubids/cli.py#L65-L70
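For reference, a few lines of argparse show why the flag can never become `False` (a standalone sketch, not CuBIDS code):

```python
import argparse

parser = argparse.ArgumentParser()
# Mirrors the definition linked above: store_true combined with default=True.
parser.add_argument("--ignore_subject_consistency", action="store_true", default=True)

print(parser.parse_args([]))                                # ignore_subject_consistency=True
print(parser.parse_args(["--ignore_subject_consistency"]))  # ignore_subject_consistency=True
# Omitting the flag and passing it give the same result, so the option is
# effectively always True; a store_false action (or BooleanOptionalAction)
# would be needed to turn it off.
```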
| I think we should get rid of this argument. The whole point of cubids is to deal with subject inconsistency, so maybe we don't need this at all | 2024-01-17T14:23:22 | 0.0 | [] | [] |
||
microsoft/FLAML | microsoft__FLAML-1360 | e5d95f5674d3ef66021b6391e101331d891f30de | diff --git a/flaml/automl/task/generic_task.py b/flaml/automl/task/generic_task.py
index df61d7e664..210842120f 100644
--- a/flaml/automl/task/generic_task.py
+++ b/flaml/automl/task/generic_task.py
@@ -706,7 +706,6 @@ def evaluate_model_CV(
fit_kwargs = {}
if cv_score_agg_func is None:
cv_score_agg_func = default_cv_score_agg_func
- start_time = time.time()
val_loss_folds = []
log_metric_folds = []
metric = None
@@ -813,8 +812,6 @@ def evaluate_model_CV(
if is_spark_dataframe:
X_train.spark.unpersist() # uncache data to free memory
X_val.spark.unpersist() # uncache data to free memory
- if budget and time.time() - start_time >= budget:
- break
val_loss, metric = cv_score_agg_func(val_loss_folds, log_metric_folds)
n = total_fold_num
pred_time /= n
| Cross-validation process isn't always completed across all folds
Hi all,
I've found an issue where sometimes, when the time_budget is running out, the final model/s aren't evaluated across all n_splits requested by the user, but instead the process is ended early.
For example, a user might want 10-fold cross-validation to be used, but actually, on the final model, only 4-fold cross-validation is used. This means that on rare occasions, FLAML returns a best model (i.e. `automl.model`) which was evaluated across fewer folds than the user requested. It's possible that, were that model evaluated across all requested folds, it would no longer be the best model.
I can understand why FLAML ends early when the time budget is low, but it would be great if there was either an option to switch this off (i.e. once a model is selected, it is evaluated across all folds) or have the automl.model return the best _completed_ model, not just the model with the highest average validation loss.
This is a hard issue to replicate for two reasons:
1) Performance of FLAML differs on different machines
2) The default logging doesn't log the number of folds used.
I've done my best to make this issue reproducible by writing a custom logging function `cv_score_agg_func`, which logs an 'n_folds' attribute, showing the number of folds that model has been evaluated against.
This code leads to the below (attached) logs on my machine, which showcase the error (see the "n_folds" 4 on the final line). Others may need to tweak the time_budget to trigger a similar issue.
```
from flaml import AutoML
from sklearn import datasets

def cv_score_agg_func(val_loss_folds, log_metrics_folds):
    metric_to_minimize = sum(val_loss_folds)/len(val_loss_folds)
    metrics_to_log = None
    for single_fold in log_metrics_folds:
        if metrics_to_log is None:
            metrics_to_log = single_fold
        elif isinstance(metrics_to_log, dict):
            metrics_to_log = {k: metrics_to_log[k] + v for k, v in single_fold.items()}
        else:
            metrics_to_log += single_fold
    if metrics_to_log:
        n = len(val_loss_folds)
        metrics_to_log = (
            {k: v / n for k, v in metrics_to_log.items()}
            if isinstance(metrics_to_log, dict)
            else metrics_to_log / n
        )
        metrics_to_log["n_folds"] = n
    return metric_to_minimize, metrics_to_log

dic_data = datasets.load_iris(as_frame=True)  # numpy arrays
iris_data = dic_data["frame"]  # pandas dataframe data + target

automl = AutoML()
automl_settings = {
    "time_budget": 11,  # in seconds
    "metric": 'accuracy',
    "task": 'classification',
    "log_file_name": "incomplete_error.log",
    "log_type": "all",
    "eval_method": "cv",
    "n_splits": 10,
    "cv_score_agg_func": cv_score_agg_func,
    "early_stop": False,
}
x_train = iris_data[["sepal length (cm)","sepal width (cm)", "petal length (cm)","petal width (cm)"]].to_numpy()
y_train = iris_data['target']
automl.fit(x_train, y_train, **automl_settings)
```
In the attached logs, it should be clear that the final model was not tested against all 10 folds.
[incomplete_error.log](https://github.com/user-attachments/files/16966351/incomplete_error.log)
In terms of package versions, I'm using FLAML 2.1.2, catboost 1.2.5, scikit-learn 1.5.0 and Python 3.12.0
| Thank you @dannycg1996 for reporting the issue. Would you like to raise a PR to fix it?
Hi @thinkall, I'd be happy to.
Could I ask for your thoughts in advance of writing the code though please?
I've identified the source of this issue, which is this line on `generic_task.py`
```
if budget and time.time() - start_time >= budget:
    break
```
I see two potential solutions here.
1) Simply add another boolean parameter `complete_cv_process` to `Automl.fit()` . I'm not sure if this should default to True or False. We could then change the above if statement to be:
```
if budget and not complete_cv_process and time.time() - start_time >= budget:
    break
```
This would ensure that the cross-validation process would run to completion. However, there is the obvious issue that this risks the time budget being overrun. For large datasets or slow estimators (especially if someone has implemented something like Support Vector Machines) this may be problematic.
2) We stick to the current approach of ending the cross-validation process early if the time budget runs out, but we scrap incomplete models. We don't log incomplete models (maybe), and we don't compare their validation losses against the current 'best model', so there's no risk of the best_model provided to the user being one which was evaluated through an incomplete cross-validation process.
This is probably the better solution, as it respects the allocated time budget. However I've delved into the FLAML codebase, and I'm not sure on how to implement it - would need the advice of someone more experienced with the FLAML codebase on how to safely exit the AutoML process.
Thank you @dannycg1996 . I'd prefer the first solution. We don't shut down the training exactly at the given time budget now, so this change will not introduce any surprises. As for the second solution, it means we risk wasting a lot of time training a model without using it at all. WDYT?
Thanks for the quick feedback @thinkall!
I'm happy to implement the first solution. Thinking about it, do we even need to add the `complete_cv_process` boolean parameter (as outlined above)? I have to assume that users would rather have their AutoML process overrun its budget slightly, than receive an incomplete model.
It is probably the better option just to remove the if statement in its entirety, which would ensure that users never receive incomplete models.
What are your thoughts?
> Thanks for the quick feedback @thinkall!
>
>
>
> I'm happy to implement the first solution. Thinking about it, do we even need to add the `complete_cv_process` boolean parameter (as outlined above)? I have to assume that users would rather have their AutoML process overrun its budget slightly, than receive an incomplete model.
>
> It is probably the better option just to remove the if statement in its entirety, which would ensure that users never receive incomplete models.
>
>
>
> What are your thoughts?
Do you mean exposing the parameter to FLAML users? Can it be an internal flag variable? We detect whether the cross validation is finished or not and modify the value automatically.
Sorry, I did originally intend to expose the `complete_cv_process` parameter to users - it would be a parameter on the `automl.fit()` method, which would allow users to dictate whether the cross-validation process must run to completion (`complete_cv_process=True`) or can be exited early if the allocated budget is running out (`complete_cv_process = False`). However, I don't think anyone will want to set `complete_cv_process=False` - it isn't worth the trade-off.
I'm not sure how useful an internal flag would be. The if statement I highlighted above
```
if budget and time.time() - start_time >= budget:
    break
```
acts to exit the cross-validation process early if the time budget has run out. I could have an internal flag somewhere which tracks whether the cross-validation process was completed or not for a given model, but it isn't much use unless we then use that flag somewhere, and you've stated (very reasonably) above that you'd prefer that we just complete the CV process - even if it means overrunning the time budget.
The best solution in my eyes is just to completely remove the if statement highlighted above, so that we never exit the cross-validation process early - the cross-validation process will be run to completion every time. I've tested it locally, and it works for me. Would you be happy with that solution?
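For illustration, the change under discussion amounts to dropping the early `break` from the fold loop. A simplified sketch of that loop (not FLAML's actual `evaluate_model_CV`):

```python
import time

def evaluate_model_cv(folds, train_and_score, budget=None):
    """Simplified sketch of the fold loop discussed above."""
    start_time = time.time()
    val_losses = []
    for fold in folds:
        val_losses.append(train_and_score(fold))
        # The early exit being removed; without it, the returned loss is
        # always an average over every requested fold:
        # if budget and time.time() - start_time >= budget:
        #     break
    return sum(val_losses) / len(val_losses)
```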
> Sorry, I did originally intend to expose the `complete_cv_process` parameter to users - it would be a parameter on the `automl.fit()` method, which would allow users to dictate whether the cross-validation process must run to completion (`complete_cv_process=True`) or can be exited early if the allocated budget is running out (`complete_cv_process = False`). However, I don't think anyone will want to set `complete_cv_process=False` - it isn't worth the trade-off.
>
> I'm not sure how useful an internal flag would be. The if statement I highlighted above
>
> ```
> if budget and time.time() - start_time >= budget:
> break
> ```
>
> acts to exit the cross-validation process early if the time budget has run out. I could have an internal flag somewhere which tracks whether the cross-validation process was completed or not for a given model, but it isn't much use unless we then use that flag somewhere, and you've stated (very reasonably) above that you'd prefer that we just complete the CV process - even if it means overrunning the time budget.
>
> The best solution in my eyes is just to completely remove the if statement highlighted above, so that we never exit the cross-validation process early - the cross-validation process will be run to completion every time. I've tested it locally, and it works for me. Would you be happy with that solution?
You're right, @dannycg1996! I agree with you that we can simply remove the if statement here. We've actually considered the time budget with `budget_per_train`. The overtime will mainly come from the overhead of each train, which should be acceptable. In your experiments, how much extra time does it spend on cross validation after removing the if statement? | 2024-09-30T11:05:45 | 0.0 | [] | [] |
||
m-beau/NeuroPyxels | m-beau__NeuroPyxels-319 | d5b8b1bc92d5a0e3e4688bed6404c316a18e7661 | diff --git a/npyx/c4/predict_cell_types.py b/npyx/c4/predict_cell_types.py
index cdc5c3fe..c6d88134 100644
--- a/npyx/c4/predict_cell_types.py
+++ b/npyx/c4/predict_cell_types.py
@@ -1,5 +1,6 @@
import os
import sys
+import time
if __name__ == "__main__":
__package__ = "npyx.c4"
@@ -19,6 +20,12 @@
from tqdm.auto import tqdm
+from joblib import Parallel, delayed
+import multiprocessing
+from multiprocessing import Pool, Lock
+
+
+
import npyx.corr as corr
import npyx.datasets as datasets
from npyx.gl import get_units
@@ -107,6 +114,66 @@ def prepare_dataset(dp, units):
return np.concatenate((acgs_3d, waveforms), axis=1), bad_units
+def aux_prepare_dataset(dp, u):
+ t = trn(dp, u)
+ if len(t) < 100:
+ #Bad units
+ return [True, [], []]
+
+ # We set period_m to None to use the whole recording
+ t, _ = trn_filtered(dp, u, period_m=None)
+ if len(t) < 10:
+ #Bad units
+ return [True, [], []]
+
+
+ wvf, _, _, _ = wvf_dsmatch(dp, u, t_waveforms=120)
+ waveforms = datasets.preprocess_template(wvf)
+
+ _, acg = corr.crosscorr_vs_firing_rate(t, t, 2000, 1)
+ acg, _ = corr.convert_acg_log(acg, 1, 2000)
+ acgs_3d = acg.ravel() * 10
+
+ return [False, waveforms, acgs_3d]
+
+def prepare_dataset_parallel(dp, units):
+
+ waveforms = []
+ acgs_3d = []
+ bad_units = []
+
+ num_cores = len(units)
+ max_num_cores = multiprocessing.cpu_count() if multiprocessing.cpu_count() < 60 else 60
+ if num_cores > max_num_cores:
+ num_cores = max_num_cores
+
+ dataset_results = Parallel(n_jobs=num_cores, prefer="processes")(delayed(aux_prepare_dataset)(dp, u) for u in tqdm(units, desc="Preparing waveforms and ACGs for classification"))
+
+
+ for i in range(len(units)):
+ if dataset_results[i][0]==True:
+ bad_units.append(units[i])
+ else:
+ waveforms.append(dataset_results[i][1])
+ acgs_3d.append(dataset_results[i][2])
+
+ if len(bad_units) > 0:
+ print(
+ f"Units {str(bad_units)[1:-1]} were skipped because they had too few good spikes."
+ )
+ acgs_3d = np.array(acgs_3d)
+ waveforms = np.array(waveforms)
+
+ if len(acgs_3d) == 0:
+ raise ValueError(
+ "No units were found with the provided parameter choices after quality checks."
+ )
+
+ return np.concatenate((acgs_3d, waveforms), axis=1), bad_units
+
+
+
+
def format_predictions(predictions_matrix: np.ndarray):
"""
Formats the predictions matrix by computing the mean predictions, prediction confidences, delta mean confidences,
@@ -145,6 +212,9 @@ def format_predictions(predictions_matrix: np.ndarray):
def main():
+ start_time = time.time()
+
+
parser = argparse.ArgumentParser()
parser.add_argument(
@@ -269,7 +339,10 @@ def main():
else:
units = get_units(args.data_path, args.quality)
- prediction_dataset, bad_units = prepare_dataset(args.data_path, units)
+ #prediction_dataset, bad_units = prepare_dataset(args.data_path, units)
+ prediction_dataset, bad_units = prepare_dataset_parallel(args.data_path, units)
+
+
good_units = [u for u in units if u not in bad_units]
@@ -330,25 +403,33 @@ def main():
confidence_passing = np.array(good_units)[confidence_mask]
- for i, unit in enumerate(good_units):
- if unit not in confidence_passing:
- continue
- plot_features_1cell_vertical(
- i,
- prediction_dataset[:, :2010].reshape(-1, 10, 201) * 100,
- prediction_dataset[:, 2010:],
- predictions=raw_probabilities,
- saveDir=plots_folder,
- fig_name=f"unit_{unit}_cell_type_predictions",
- plot=False,
- cbin=1,
- cwin=2000,
- figsize=(10, 4),
- LABELMAP=datasets.CORRESPONDENCE_NO_GRC,
- C4_COLORS=C4_COLORS,
- fs=30000,
- unit_id=unit,
- )
+ def aux_plot_features_1cell_vertical(i, unit):
+ if unit in confidence_passing:
+ plot_features_1cell_vertical(
+ i,
+ prediction_dataset[:, :2010].reshape(-1, 10, 201) * 100,
+ prediction_dataset[:, 2010:],
+ predictions=raw_probabilities,
+ saveDir=plots_folder,
+ fig_name=f"unit_{unit}_cell_type_predictions",
+ plot=False,
+ cbin=1,
+ cwin=2000,
+ figsize=(10, 4),
+ LABELMAP=datasets.CORRESPONDENCE_NO_GRC,
+ C4_COLORS=C4_COLORS,
+ fs=30000,
+ unit_id=unit,
+ )
+
+ num_cores = len(units)
+ max_num_cores = multiprocessing.cpu_count() if multiprocessing.cpu_count() < 60 else 60
+ if num_cores > max_num_cores:
+ num_cores = max_num_cores
+ Parallel(n_jobs=num_cores, prefer="processes")(delayed(aux_plot_features_1cell_vertical)(i, unit) for i, unit in enumerate(good_units))
+
+
+
# m = raw_probabilities.mean(2).max(1) >= args.threshold
# masked_raw_probas = raw_probabilities[m,:,:].mean(2)
plot_survival_confidence(
@@ -367,5 +448,9 @@ def main():
)
+ end_time = time.time()
+ print('Cell type classfication execution time: ', end_time-start_time)
+
+
if __name__ == "__main__":
main()
| Parallel version of predict_cell_types script
A parallel version of the prepare_dataset and plot_features_1cell_vertical functions has been included. The new prepare_dataset_parallel function produces an error during execution when all the units in the cluster_group.tsv file are defined as good or noise (i.e., there are no unsorted units). That condition produces a race condition: several processes read the cluster_group.tsv file while others write to the same file (see the load_units_quality function in gl.py). A lock should be introduced for this reading/writing process.
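One way to serialize access to cluster_group.tsv across worker processes is a file-based lock, since an in-memory lock created in the parent is not automatically shared with the processes joblib spawns. A sketch of the idea using the third-party `filelock` package (not code from this PR; the helper name is hypothetical):

```python
from filelock import FileLock  # pip install filelock

def read_cluster_group(tsv_path):
    """Read (or rewrite) cluster_group.tsv under a cross-process file lock."""
    with FileLock(str(tsv_path) + ".lock"):  # serializes readers and writers
        with open(tsv_path) as f:
            return f.read()
```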
| 2023-08-31T10:53:49 | 0.0 | [] | [] |
|||
largecats/sparksql-formatter | largecats__sparksql-formatter-75 | c7d81b7eaca42f2a736943a26b19fb75fc5056ad | diff --git a/CHANGELOG.md b/CHANGELOG.md
index fb11a9a..53b62ad 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -78,4 +78,7 @@ Added test.
## 2021-06-25
1. Updated `_parse_args_in_correct_type()` logic to exclude parsing the value of `indent` key when defining style using a dictionary. See https://github.com/largecats/sparksql-formatter/issues/72.
2. Added test for setting indent style via dictionary.
-3. Updated tests to use unittest library.
\ No newline at end of file
+3. Updated tests to use unittest library.
+
+## 2024-11-06
+1. Updated `Tokenizer.BLOCK_COMMENT_REGEX` from `u'(\/\*(?s).*?\*\/)'` to `u'(?s:(\/\*.*?\*\/))'` to avoid "re.error: global flags not at the start of the expression at position 5" error. See https://github.com/largecats/sparksql-formatter/issues/74.
\ No newline at end of file
diff --git a/setup.py b/setup.py
index e775c34..bfaa33f 100644
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,7 @@
setuptools.setup(
name='sparksqlformatter',
- version='0.1.12',
+ version='0.1.13',
author='largecats',
author_email='[email protected]',
description=
diff --git a/sparksqlformatter/src/tokenizer.py b/sparksqlformatter/src/tokenizer.py
index 24ef42e..18d870e 100644
--- a/sparksqlformatter/src/tokenizer.py
+++ b/sparksqlformatter/src/tokenizer.py
@@ -65,7 +65,7 @@ def __init__(self, style):
self.NUMBER_REGEX = r'^((-\s*)?[0-9]+(\.[0-9]+)?|0x[0-9a-fA-F]+|0b[01]+)\b'
self.OPERATOR_REGEX = u'^([^\{\}]!=|<>|==|<=|>=|!=|!<|!>|\|\||::|->>|->|~~\*|~~|!~~\*|!~~|~\*|!~\*|!~|:=|.)'
- self.BLOCK_COMMENT_REGEX = u'(\/\*(?s).*?\*\/)' # (?s) is inline flag for re.DOTALL
+ self.BLOCK_COMMENT_REGEX = u'(?s:(\/\*.*?\*\/))' # (?s:...) applies flag for re.DOTALL over ...
self.LINE_COMMENT_REGEX = Tokenizer.create_line_comment_regex(style.lineCommentTypes)
self.TOP_LEVEL_KEYWORD_REGEX = Tokenizer.create_keyword_regex(style.topLevelKeywords)
| Not compatible with Python 3.11
Thanks for your effort on this awesome tool!
I migrated to Python 3.11 recently, but something seems to be broken; it doesn't work anymore:
Command:
```
echo 'select * from foo;' | sparksqlformatter -f /dev/stdin
```
Exception:
```
Traceback (most recent call last):
File "/usr/local/bin/sparksqlformatter", line 8, in <module>
sys.exit(run_main())
^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/__init__.py", line 92, in run_main
main(sys.argv)
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/__init__.py", line 58, in main
api.format_file(filePath=filePath, inPlace=args.get('in_place'))
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/api.py", line 69, in format_file
_format_file(filePath, formatter, inPlace)
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/api.py", line 115, in _format_file
formattedQuery = _format_query(query, formatter)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/api.py", line 168, in _format_query
return formatter.format(query)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/formatter.py", line 79, in format
self.tokens = self.tokenizer.tokenize(input=query) # identify tokens in the query
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/tokenizer.py", line 231, in tokenize
token = self.get_next_token(input, token) # get next token
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/tokenizer.py", line 251, in get_next_token
return (self.get_white_space_token(input) or self.get_comment_token(input) or self.get_string_token(input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/tokenizer.py", line 280, in get_comment_token
return self.get_line_comment_token(input) or self.get_block_comment_token(input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/tokenizer.py", line 308, in get_block_comment_token
return Tokenizer.get_token_on_first_match(input=input,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sparksqlformatter/src/tokenizer.py", line 509, in get_token_on_first_match
matches = re.match(pattern=regex, string=input, flags=re.UNICODE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/__init__.py", line 166, in match
return _compile(pattern, flags).match(string)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/__init__.py", line 294, in _compile
p = _compiler.compile(pattern, flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/_compiler.py", line 743, in compile
p = _parser.parse(p, flags)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/_parser.py", line 982, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/_parser.py", line 457, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/_parser.py", line 865, in _parse
p = _parse_sub(source, state, sub_verbose, nested + 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/_parser.py", line 457, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/re/_parser.py", line 843, in _parse
raise source.error('global flags not at the start '
re.error: global flags not at the start of the expression at position 5
```
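The traceback comes from Python 3.11 rejecting global inline flags such as `(?s)` when they are not at the very start of a pattern; the scoped form `(?s:...)` used in the fix is accepted. A quick check (safe to run on any Python 3 version):

```python
import re

old_pattern = r'(\/\*(?s).*?\*\/)'   # global (?s) in the middle of the pattern
new_pattern = r'(?s:(\/\*.*?\*\/))'  # the flag scoped to the group instead

try:
    re.compile(old_pattern)          # raises re.error on Python 3.11+
except re.error as err:
    print("old pattern rejected:", err)

print(re.match(new_pattern, "/* multi\nline comment */"))  # matches across newlines
```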
| 2024-11-06T23:37:33 | 0.0 | [] | [] |
|||
williballenthin/viv-utils | williballenthin__viv-utils-116 | 9d3731cca23b234ba0d6fbe8f927401445d5eb3d | diff --git a/viv_utils/flirt.py b/viv_utils/flirt.py
index 053f7ef..ba59cad 100644
--- a/viv_utils/flirt.py
+++ b/viv_utils/flirt.py
@@ -185,7 +185,12 @@ def match_function_flirt_signatures(matcher, vw, va, cache=None):
# the reference offset may be inside an instruction,
# so we use getLocation to select the containing instruction address.
- loc_va = vw.getLocation(ref_va)[vivisect.const.L_VA]
+ location = vw.getLocation(ref_va)
+ if location is None:
+ does_match_references = False
+ break
+
+ loc_va = location[vivisect.const.L_VA]
# an instruction may have multiple xrefs from
# so we loop through all code references,
Resolves getLocation returning NoneType when a Tuple was expected
https://github.com/mandiant/capa/issues/1094#issuecomment-1589984754
| 2023-06-13T21:55:59 | 0.0 | [] | [] |
|||
Xilinx/PYNQ | Xilinx__PYNQ-1241 | 635f8e508b6194e7ed74396b20690f3266d22c1b | diff --git a/docs/source/pynq_libraries/axigpio.rst b/docs/source/pynq_libraries/axigpio.rst
index 840d9a5703..fd05163e0e 100644
--- a/docs/source/pynq_libraries/axigpio.rst
+++ b/docs/source/pynq_libraries/axigpio.rst
@@ -5,7 +5,8 @@ AxiGPIO
The AxiGPIO class provides methods to read, write, and receive
interrupts from external general purpose peripherals such as LEDs,
-buttons, switches connected to the PL using AXI GPIO controller IP.
+buttons, switches connected to the PL using AXI GPIO controller IP.
+This class is automatically assigned as a driver to IP of the type AXI GPIO
Block Diagram
@@ -51,9 +52,10 @@ to exist in the overlay used with this class.
Examples
--------
-This example is for illustration, to show how to use the AxiGPIO class.
+This example is for illustration, and shows how to use the AxiGPIO class.
In practice, the LED, Button, Switches, and RGBLED classes may be available
-to extend the AxiGPIO class should be used for these peripherals in an overlay.
+to extend the AxiGPIO class and should be used for these peripherals in
+an overlay.
After an overlay has been loaded, an AxiGPIO instance can be instantiated
by passing the name of the AXI GPIO controller to the class.
@@ -64,8 +66,8 @@ by passing the name of the AXI GPIO controller to the class.
from pynq.lib import AxiGPIO
ol = Overlay("base.bit")
- led_ip = ol.ip_dict['gpio_leds']
- switches_ip = ol.ip_dict['gpio_switches']
+ led_ip = ol.ip_dict['leds_gpio']
+ switches_ip = ol.ip_dict['switches_gpio']
leds = AxiGPIO(led_ip).channel1
switches = AxiGPIO(switches_ip).channel1
@@ -96,4 +98,5 @@ PYNQ-Z1/PYNQ-Z2 board at:
<Jupyter Home>/base/board/board_btns_leds.ipynb
-The same notebook may be found in the corresponding folder in the GitHub repository.
+The same notebook may be found in the corresponding folder in the GitHub
+repository.
diff --git a/pynq/lib/axigpio.py b/pynq/lib/axigpio.py
index cb431ad82f..52887f1e0e 100644
--- a/pynq/lib/axigpio.py
+++ b/pynq/lib/axigpio.py
@@ -234,6 +234,8 @@ def write(self, val, mask):
"""Set the state of the output pins
"""
+ if self.slicetype == AxiGPIO.Input:
+ raise RuntimeError('You cannot write to an Input')
self.val = (self.val & ~mask) | (val & mask)
self._parent.write(self._channel * 8, self.val)
@@ -241,11 +243,13 @@ def read(self):
"""Read the state of the input pins
"""
+ if self.slicetype == AxiGPIO.Output:
+ raise RuntimeError('You cannot read from an output')
return self._parent.read(self._channel * 8)
@property
def trimask(self):
- """Gets or sets the tri-state mask for an inout channel
+ """Gets or sets the tristate mask for an inout channel
"""
return self._parent.read(self._channel * 8 + 4)
@@ -331,6 +335,9 @@ def setdirection(self, direction, channel=1):
'in', 'out' or 'inout'
"""
+ if type(direction) is str:
+ if direction in _direction_map:
+ direction = _direction_map[direction]
if direction not in [AxiGPIO.Input, AxiGPIO.Output, AxiGPIO.InOut]:
raise ValueError(
"direction should be one of AxiGPIO.{Input,Output,InOut}")
@@ -341,6 +348,7 @@ def __getitem__(self, idx):
bindto = ['xilinx.com:ip:axi_gpio:2.0']
-_direction_map = { "in": AxiGPIO.Input,
- "out": AxiGPIO.Output,
- "inout": AxiGPIO.InOut }
+
+_direction_map = {"in": AxiGPIO.Input,
+ "out": AxiGPIO.Output,
+ "inout": AxiGPIO.InOut}
| Some errors in the document.
* PYNQ version (e.g. v2.5): 2.5
* Board name (e.g. Pynq-Z1): Pynq-Z2
* Description: I tried to follow the example; however, there may be two mistakes in it.
*
When I ran the following code, I got a KeyError in Jupyter: `KeyError: 'gpio_leds'`.
I found that the document above mentions that we should use `[led|switch|button|rgbleds]_gpio`, so I did that.
Then it ran without errors. So I think these two lines may have some errors (a short snippet after the links below shows how to list the available names).
I think `gpio_leds` should be changed to `leds_gpio` and `gpio_switches` to `switches_gpio`.
* The code
https://github.com/Xilinx/PYNQ/commit/82da59069aa56f62dbc7dda484b6ff2f805b60cc#diff-9b7aa25cb5b22e220f72782035e943209a204ecac4db6ffab0cc0a588bc920e4R67
https://github.com/Xilinx/PYNQ/commit/82da59069aa56f62dbc7dda484b6ff2f805b60cc#diff-9b7aa25cb5b22e220f72782035e943209a204ecac4db6ffab0cc0a588bc920e4R68
* The document
https://github.com/Xilinx/PYNQ/commit/82da59069aa56f62dbc7dda484b6ff2f805b60cc#diff-9b7aa25cb5b22e220f72782035e943209a204ecac4db6ffab0cc0a588bc920e4R48
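A quick way to diagnose this kind of KeyError is to list the overlay's IP dictionary, which shows the exact names the overlay exposes (names vary by overlay; the base overlay uses the ones below):

```python
from pynq import Overlay
from pynq.lib import AxiGPIO

ol = Overlay("base.bit")
print(sorted(ol.ip_dict.keys()))  # reveals names such as 'leds_gpio' and 'switches_gpio'

leds = AxiGPIO(ol.ip_dict['leds_gpio']).channel1
switches = AxiGPIO(ol.ip_dict['switches_gpio']).channel1
```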
| Hello, thank you for the comment. Very useful for me. For my board (pynq-z2), btns worked instead of buttons.
> Hello, thank you for the comment. Very useful for me. For my board (pynq-z2), btns worked instead of buttons.
You're welcome. But I think this repo may have stopped being maintained a long time ago, lol. | 2021-08-28T12:55:48 | 0.0 | [] | [] |
||
pyomeca/ezc3d | pyomeca__ezc3d-283 | dfda501ba00b99c0fba84d086e49e0dc90dcab21 | diff --git a/binding/python3/__init__.py b/binding/python3/__init__.py
index c1193b39..797dfbf7 100644
--- a/binding/python3/__init__.py
+++ b/binding/python3/__init__.py
@@ -478,9 +478,9 @@ def write(self, path):
if nb_analog_frames % nb_point_frames != 0:
raise ValueError("Number of frames of Points and Analogs should be a multiple of an integer")
else:
- if (
- nb_analog_frames * self._storage["parameters"]["POINT"]["RATE"]["value"][0]
- != nb_point_frames * self._storage["parameters"]["ANALOG"]["RATE"]["value"][0]
+ if ~np.isclose(
+ nb_analog_frames * self._storage["parameters"]["POINT"]["RATE"]["value"][0],
+ nb_point_frames * self._storage["parameters"]["ANALOG"]["RATE"]["value"][0]
):
raise ValueError("Number of frames in the data set must match the analog rate X point frame")
| Floating point rounding error when writing c3d
Hi Benjamin,
In ezc3d/__init__.py, lines 481 to 487:
```
if (
    nb_analog_frames
    != self._storage["parameters"]["ANALOG"]["RATE"]["value"][0]
    / self._storage["parameters"]["POINT"]["RATE"]["value"][0]
    * nb_point_frames
):
    raise ValueError("Number of frames in the data set must match the analog rate X point frame")
```
I think there should be accommodation for floating-point rounding error before raising the ValueError. I have some data where nb_analog_frames = 4420, but the calculation of
```
self._storage["parameters"]["ANALOG"]["RATE"]["value"][0]
/ self._storage["parameters"]["POINT"]["RATE"]["value"][0]
* nb_point_frames
```
gives 4420.000000000001.
So I get a ValueError instead of successfully saving the c3d. In my view this is only a floating-point difference and the function should go on with saving anyway. I could modify it myself and do a PR, but I'm not sure: is this code autogenerated, or is it manually maintained?
Thanks
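For illustration, the same class of rounding error appears with any inexact float arithmetic, and both the `round()` idea suggested below and the `np.isclose()` approach used in the patch above handle it (a standalone sketch, not ezc3d code):

```python
import math
import numpy as np

computed = 0.1 + 0.2  # stands in for rate ratio * nb_point_frames
print(computed)                      # 0.30000000000000004
print(computed == 0.3)               # False, so a strict != check would fail
print(round(computed, 9) == 0.3)     # True, the rounding workaround
print(math.isclose(computed, 0.3))   # True
print(np.isclose(computed, 0.3))     # True, the approach used in the fix
```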
| In any case, changing these lines to
```
if nb_analog_frames != round(
    self._storage["parameters"]["ANALOG"]["RATE"]["value"][0]
    / self._storage["parameters"]["POINT"]["RATE"]["value"][0]
    * nb_point_frames
):
```
solves the problem.
Hi @felixchenier
The code is manually maintained, so changing it makes sense :)
I think you can PR this modification :)
Thanks! | 2023-01-19T21:09:18 | 0.0 | [] | [] |
||
daleal/receipt-scanner | daleal__receipt-scanner-26 | 192330f04d7075a9be4a49bdbc69b61850cf12c2 | diff --git a/receipt_scanner/image/contour.py b/receipt_scanner/image/contour.py
index a3a4f32..a4cfab3 100644
--- a/receipt_scanner/image/contour.py
+++ b/receipt_scanner/image/contour.py
@@ -138,7 +138,7 @@ def contour_not_too_big(image: np.ndarray, contour: np.ndarray) -> bool:
def contour_not_too_small(image: np.ndarray, contour: np.ndarray) -> bool:
- return cv2.contourArea(contour) >= (0.3 * image.shape[0] * image.shape[1])
+ return cv2.contourArea(contour) >= (0.1 * image.shape[0] * image.shape[1])
def detect_contours(
diff --git a/receipt_scanner/image/core.py b/receipt_scanner/image/core.py
index 9eb218f..acc3045 100644
--- a/receipt_scanner/image/core.py
+++ b/receipt_scanner/image/core.py
@@ -43,7 +43,7 @@ def process_image(file_name: str, debug: bool = False) -> np.ndarray:
downsized_image,
MorphologicalCloseFilter(iterations=4, debug=debug),
MedianBlurFilter(debug=debug),
- GaussianBlurFilter(debug=debug),
+ GaussianBlurFilter(size=3, debug=debug),
CannyFilter(debug=debug),
DilateFilter(debug=debug),
)
diff --git a/receipt_scanner/image/filters/gaussian_blur.py b/receipt_scanner/image/filters/gaussian_blur.py
index ca23a0a..a2bb0e9 100644
--- a/receipt_scanner/image/filters/gaussian_blur.py
+++ b/receipt_scanner/image/filters/gaussian_blur.py
@@ -10,11 +10,12 @@
class GaussianBlurFilter(Filter):
- def __init__(self, debug: bool = False) -> None:
+ def __init__(self, size: int = 7, debug: bool = False) -> None:
+ self.size = size
self.debug = debug
def eval(self, image: np.ndarray) -> np.ndarray:
- logger.debug("Applying 'GaussianBlurFilter'...")
- blurred_image = cv2.GaussianBlur(image, (7, 7), 0)
+ logger.debug(f"Applying 'GaussianBlurFilter' with size {self.size}...")
+ blurred_image = cv2.GaussianBlur(image, (self.size, self.size), 0)
debug_show(blurred_image, debug=self.debug)
return blurred_image
| Some borders don't get fully closed
Sometimes the receipt border doesn't get closed correctly. I have noticed that it mainly happens on the bottom border of the receipts (maybe something to do with the shadows?).
| 2022-10-22T02:18:19 | 0.0 | [] | [] |
|||
WSE-research/LinguaF | WSE-research__LinguaF-10 | 40b51870d29b691fc25c5a7b10cecc61305e893e | diff --git a/linguaf/__init__.py b/linguaf/__init__.py
index 2f3befa..7cdb67a 100644
--- a/linguaf/__init__.py
+++ b/linguaf/__init__.py
@@ -1,7 +1,7 @@
import json
SUPPORTED_LANGS = ['en', 'ru']
-__version__ = '0.0.8'
+__version__ = '0.1.0'
def __load_json(filepath):
@@ -40,4 +40,4 @@ def __check_lang_param(param):
if type(param) != str:
raise TypeError(f"The lang parameter has to be of type str. Now: {type(param)}")
if param not in SUPPORTED_LANGS:
- raise ValueError(f"The given language isn't supported. The supported ones are: {SUPPORTED_LANGS}")
\ No newline at end of file
+ raise ValueError(f"The given language isn't supported. The supported ones are: {SUPPORTED_LANGS}")
diff --git a/linguaf/descriptive_statistics.py b/linguaf/descriptive_statistics.py
index ee0bed2..9d25af5 100644
--- a/linguaf/descriptive_statistics.py
+++ b/linguaf/descriptive_statistics.py
@@ -354,7 +354,7 @@ def get_lexical_items(documents: list, remove_stopwords: bool = False, lang: str
return lex_items
-def words_per_sentence(documents: list, lang: str = 'en', remove_stopwords: bool = False) -> list:
+def avg_words_per_sentence(documents: list, lang: str = 'en', remove_stopwords: bool = False) -> list:
"""Calculate average number of words in a sentence based on a list of documents.
Keyword arguments:
diff --git a/linguaf/readability.py b/linguaf/readability.py
index b936c08..ee9cabf 100644
--- a/linguaf/readability.py
+++ b/linguaf/readability.py
@@ -1,5 +1,5 @@
from linguaf.descriptive_statistics import get_words, syllable_count, avg_word_length, \
- number_of_n_syllable_words, sentence_count, words_per_sentence
+ number_of_n_syllable_words, sentence_count, avg_words_per_sentence
from linguaf import __check_bool_param, __check_documents_param, __check_lang_param
@@ -19,7 +19,7 @@ def flesch_reading_ease(documents: list, lang: str = 'en', remove_stopwords: boo
__check_bool_param(remove_stopwords)
words = get_words(documents, lang, remove_stopwords)
- asl = words_per_sentence(documents, lang, remove_stopwords)
+ asl = avg_words_per_sentence(documents, lang, remove_stopwords)
syl_total = syllable_count(words, lang)
if lang == 'en':
@@ -44,7 +44,7 @@ def flesch_kincaid_grade(documents: list, lang: str = 'en', remove_stopwords: bo
__check_bool_param(remove_stopwords)
words = get_words(documents, lang, remove_stopwords)
- asl = words_per_sentence(documents)
+ asl = avg_words_per_sentence(documents)
syl_total = syllable_count(words, lang)
if lang == 'en':
@@ -69,7 +69,7 @@ def automated_readability_index(documents: list, lang: str = 'en', remove_stopwo
__check_lang_param(lang)
__check_bool_param(remove_stopwords)
- asl = words_per_sentence(documents, lang, remove_stopwords)
+ asl = avg_words_per_sentence(documents, lang, remove_stopwords)
awl = avg_word_length(documents, lang, remove_stopwords)
return 0.5*asl + 4.71*awl - 21.43
@@ -90,7 +90,7 @@ def automated_readability_index_simple(documents: list, lang: str = 'en', remove
__check_lang_param(lang)
__check_bool_param(remove_stopwords)
- asl = words_per_sentence(documents, lang, remove_stopwords)
+ asl = avg_words_per_sentence(documents, lang, remove_stopwords)
awl = avg_word_length(documents, lang, remove_stopwords)
return asl + 9.0*awl
diff --git a/setup.py b/setup.py
index 14ab548..dd34811 100644
--- a/setup.py
+++ b/setup.py
@@ -21,7 +21,7 @@ def read_requirements():
setuptools.setup(
name="linguaf",
- version="0.0.8",
+ version="0.1.0",
author="Aleksandr Perevalov",
author_email="[email protected]",
description="Python package for calculating famous measures in computational linguistics",
| Rename words_per_sentence
Rename `words_per_sentence` to `avg_words_per_sentence`
| 2021-06-12T19:00:16 | 0.0 | [] | [] |
|||
ViCCo-Group/thingsvision | ViCCo-Group__thingsvision-61 | f079eeb4675aab1c69e5f5988b552cba4868eefa | diff --git a/thingsvision/dataset.py b/thingsvision/dataset.py
index 13e181d0..f9177bc2 100644
--- a/thingsvision/dataset.py
+++ b/thingsvision/dataset.py
@@ -9,9 +9,6 @@
import numpy as np
import pandas as pd
import tensorflow as tf
-from torchvision import transforms as T
-
-import thingsvision.vision as vision
from collections import defaultdict
from os.path import join as pjoin
@@ -144,7 +141,7 @@ def find_classes_(self) -> Tuple[list, dict, dict]:
if self.things_behavior:
# sort objects according to item names in THINGS database
classes = [''.join((name, '.jpg'))
- for name in vision.load_item_names()]
+ for name in load_item_names()]
else:
if self.file_names:
classes = list(filter(parse_img_name, self.file_names))
@@ -344,3 +341,6 @@ def get_ref_img(
img_name = ref_img.rstrip(suffix)
if re.search(f'^{img_name}', first_img):
return os.path.join(folder, ref_img)
+
+def load_item_names(folder: str = './data') -> np.ndarray:
+ return pd.read_csv(pjoin(folder, 'item_names.tsv'), encoding='utf-8', sep='\t').uniqueID.values
| Circular imports in `thingsvision.vision`
Cannot import `thingsvision.dataset.ImageDataset` as a stand-alone, without prior import of `thingsvision.vision`. Code to reproduce:
```
from thingsvision.dataset import ImageDataset
```
throws: `ImportError: cannot import name 'ImageDataset' from partially initialized module 'thingsvision.dataset' (most likely due to a circular import)`. Temporary fix is:
```
import thingsvision.vision
from thingsvision.dataset import ImageDataset
```
| Feel free to open a PR.
Is this still an open issue @hahahannes @florianmahner? If so, we would be happy if you @florianmahner could open a PR to resolve that issue? | 2022-08-15T13:07:32 | 0.0 | [] | [] |
||
codalab/codalab-worksheets | codalab__codalab-worksheets-3913 | 8f7d1e65ccdcabeb9da9376bfcb76fd69581afde | diff --git a/codalab/rest/bundles.py b/codalab/rest/bundles.py
index 1c222d2d8..2b62af663 100644
--- a/codalab/rest/bundles.py
+++ b/codalab/rest/bundles.py
@@ -1,4 +1,5 @@
import http.client
+import json
import logging
import mimetypes
import os
@@ -429,7 +430,10 @@ def _fetch_locations():
return dict(data=uuids_to_locations)
-@get('/bundles/<bundle_uuid:re:%s>/locations/', apply=AuthenticatedProtectedPlugin())
+@get(
+ '/bundles/<bundle_uuid:re:%s>/locations/' % spec_util.UUID_STR,
+ apply=AuthenticatedProtectedPlugin(),
+)
def _fetch_bundle_locations(bundle_uuid: str):
"""
Returns a list of BundleLocations associated with the given bundle.
@@ -438,10 +442,13 @@ def _fetch_bundle_locations(bundle_uuid: str):
- `bundle_uuid`: Bundle UUID to get the locations for
"""
bundle_locations = local.model.get_bundle_locations(bundle_uuid)
- return BundleLocationListSchema(many=True).dump(bundle_locations).data
+ return json.dumps(bundle_locations)
-@post('/bundles/<bundle_uuid:re:%s>/locations/', apply=AuthenticatedProtectedPlugin())
+@post(
+ '/bundles/<bundle_uuid:re:%s>/locations/' % spec_util.UUID_STR,
+ apply=AuthenticatedProtectedPlugin(),
+)
def _add_bundle_location(bundle_uuid: str):
"""
Adds a new BundleLocation to a bundle. Request body must contain the fields in BundleLocationSchema.
diff --git a/docs/REST-API-Reference.md b/docs/REST-API-Reference.md
index 1220f7e17..8c01e0da0 100644
--- a/docs/REST-API-Reference.md
+++ b/docs/REST-API-Reference.md
@@ -510,14 +510,14 @@ Fetch locations of bundles.
Query parameters:
- `uuids`: List of bundle UUID's to get the locations for
-### `GET /bundles/<bundle_uuid:re:%s>/locations/`
+### `GET /bundles/<bundle_uuid:re:0x[0-9a-f]{32}>/locations/`
Returns a list of BundleLocations associated with the given bundle.
Query parameters:
- `bundle_uuid`: Bundle UUID to get the locations for
-### `POST /bundles/<bundle_uuid:re:%s>/locations/`
+### `POST /bundles/<bundle_uuid:re:0x[0-9a-f]{32}>/locations/`
Adds a new BundleLocation to a bundle. Request body must contain the fields in BundleLocationSchema.
diff --git a/frontend/src/components/Bundle/Bundle.js b/frontend/src/components/Bundle/Bundle.js
index a7c723ef9..bce67e1ad 100644
--- a/frontend/src/components/Bundle/Bundle.js
+++ b/frontend/src/components/Bundle/Bundle.js
@@ -140,15 +140,16 @@ class Bundle extends React.Component<
.then(callback)
.catch(errorHandler);
- callback = (response) => {
- const storeInfo = response.data;
- if (!storeInfo) return;
+ callback = (storeInfo) => {
+ if (!storeInfo || storeInfo.length == 0) return;
this.setState({ storeInfo });
};
fetchBundleStores(this.props.uuid)
.then(callback)
- .catch(errorHandler);
+ // TODO: Add error handling when the migration #3802 is ready.
+ // Right now legacy bundles will have errors, which is expected.
+ .catch(() => {});
};
componentDidMount() {
@@ -203,7 +204,7 @@ class Bundle extends React.Component<
<FileBrowser uuid={bundleInfo.uuid} />
{renderMetadata(bundleInfo, bundleMetadataChanged)}
{renderHostWorksheets(bundleInfo)}
- {renderStoreInfo(storeInfo)}
+ {storeInfo && renderStoreInfo(storeInfo)}
</div>
);
@@ -561,18 +562,18 @@ function renderHostWorksheets(bundleInfo) {
function renderStoreInfo(storeInfo) {
let rows = [];
- for (const [uuid, path] of Object.entries(storeInfo)) {
+ storeInfo.forEach(({ bundle_store_uuid, url }) => {
rows.push(
<tr>
<td>
- <a href={`/stores/${uuid}`}>{uuid}</a>
+ <a href={`/stores/${bundle_store_uuid}`}>{bundle_store_uuid}</a>
</td>
<td>
- <span>{path}</span>
+ <span>{url}</span>
</td>
</tr>,
);
- }
+ });
return (
<div>
diff --git a/frontend/src/util/apiWrapper.js b/frontend/src/util/apiWrapper.js
index ae633254d..d95bff19c 100644
--- a/frontend/src/util/apiWrapper.js
+++ b/frontend/src/util/apiWrapper.js
@@ -103,8 +103,8 @@ export const fetchBundleContents = (uuid) => {
return get(url, { depth: 1 });
};
-export const fetchBundleStores = (uuids) => {
- const url = '/rest/bundles/locations?' + new URLSearchParams({ uuids });
+export const fetchBundleStores = (uuid) => {
+ const url = `/rest/bundles/${uuid}/locations/`;
return get(url);
};
| Bundle store frontend error

| 2021-12-13T21:34:22 | 0.0 | [] | [] |
|||
kharyam/litra-driver | kharyam__litra-driver-23 | 2d64c8cb98f4f514c08220e43af70cd5879c0916 | diff --git a/.gitignore b/.gitignore
index 64177c6..895a8d2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -7,3 +7,4 @@ __pycache__
.tox
pdoc3-html
.coverage
+build
diff --git a/README.md b/README.md
index 6cc2c8f..5f3d3ce 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
## Introduction
-After purchasing a [Logitech Litra Glow](https://www.logitech.com/en-us/products/lighting/litra-glow.946-000001.html) I was unable to find any support for linux. This project reverse-engineers the basic functionality of the litra pro so that we can control it via USB without using the physical buttons on the device.
+After purchasing a [Logitech Litra Glow](https://www.logitech.com/en-us/products/lighting/litra-glow.946-000001.html) I was unable to find any support for linux. This project reverse-engineers the basic functionality of the litra gloq so that we can control it via USB without using the physical buttons on the device. It also now supports the [Logitech Litra Beam](https://www.logitech.com/en-us/products/lighting/litra-beam.946-000006.html).
## Quick Start
@@ -11,6 +11,7 @@ After purchasing a [Logitech Litra Glow](https://www.logitech.com/en-us/products
```bash
# If necessary, create a udev role to grant permission to access the light
sudo tee /etc/udev/rules.d/82-litra-glow.rules <<< 'SUBSYSTEM=="usb", ATTR{idVendor}=="046d", ATTR{idProduct}=="c900",MODE="0666"'
+sudo tee /etc/udev/rules.d/82-litra-glow.rules <<< 'SUBSYSTEM=="usb", ATTR{idVendor}=="046d", ATTR{idProduct}=="c901",MODE="0666"'
sudo reboot
diff --git a/setup.cfg b/setup.cfg
index 54f543b..13adcb7 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -2,10 +2,10 @@
name = litra-driver
author = Khary Mendez
author_email = [email protected]
-description = Logitech Lumitra Glow Driver
-version = 0.0.5
+description = Logitech Lumitra Glow and Beam Driver
+version = 0.0.6
url = https://github.com/kharyam/litra-driver
-download_url = https://github.com/kharyam/litra-driver/archive/v0.0.4.tar.gz
+download_url = https://github.com/kharyam/litra-driver/archive/v0.0.6.tar.gz
keywords =
logitech lumitra glow
CLI
diff --git a/src/llgd/config/llgd_config.py b/src/llgd/config/llgd_config.py
index aeaa57f..be88722 100644
--- a/src/llgd/config/llgd_config.py
+++ b/src/llgd/config/llgd_config.py
@@ -8,7 +8,6 @@
format='%(asctime)s [%(levelname)s] %(message)s', level=getenv('LITRA_LOGLEVEL',
default='WARNING'))
-
class LlgdConfig:
"""This class allows the state of the light along with custom profiles
to be persisted into a user config file
diff --git a/src/llgd/lib/llgd_lib.py b/src/llgd/lib/llgd_lib.py
index b49e8d6..2084324 100644
--- a/src/llgd/lib/llgd_lib.py
+++ b/src/llgd/lib/llgd_lib.py
@@ -9,7 +9,8 @@
from llgd.config.llgd_config import LlgdConfig
VENDOR_ID = 0x046d
-PRODUCT_ID = 0xc900
+LITRA_PRODUCT_IDS = [0xc900, 0xc901]
+
LIGHT_OFF = 0x00
LIGHT_ON = 0x01
TIMEOUT_MS = 3000
@@ -17,15 +18,22 @@
MAX_BRIGHTNESS = 0xfa
config = LlgdConfig()
+devices = []
+
+def search_devices():
+ """ Search for Litra Devices
+ """
+ logging.info("Searching for litra devices...")
+ for product_id in LITRA_PRODUCT_IDS:
+ product_devices = usb.core.find(idVendor=VENDOR_ID, idProduct=product_id, find_all=True)
+ for product_device in product_devices:
+ logging.info('Found Device "%s"', product_device.product)
+ devices.append(product_device)
def count():
""" Returns a count of all devices
"""
- devs = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID, find_all=True)
- total_dev_count = 0
- for _ in devs:
- total_dev_count+=1
- return total_dev_count
+ return len(devices)
def setup(index):
"""Sets up the device
@@ -37,11 +45,7 @@ def setup(index):
[device, reattach]: where device is a Device object and reattach
is a bool indicating whether the kernel driver should be reattached
"""
- devs = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID, find_all=True)
- dev_list = []
- for a_dev in devs:
- dev_list.append(a_dev)
- dev = dev_list[index]
+ dev = devices[index]
if dev is None:
raise ValueError('Device not found')
@@ -65,7 +69,6 @@ def setup(index):
return dev, reattach
-
def teardown(dev, reattach):
"""Tears down the device
"""
@@ -79,8 +82,8 @@ def light_on():
"""
for index in range(0, count()):
dev, reattach = setup(index)
- dev.write(0x02, [0x11, 0xff, 0x04, 0x1c, LIGHT_ON, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00], TIMEOUT_MS)
+ dev.write(0x02, [0x11, 0xff, 0x04, 0x1c, LIGHT_ON, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00], TIMEOUT_MS)
dev.read(0x02, 64)
logging.info("Light On")
teardown(dev, reattach)
@@ -91,8 +94,8 @@ def light_off():
"""
for index in range(0, count()):
dev, reattach = setup(index)
- dev.write(0x02, [0x11, 0xff, 0x04, 0x1c, LIGHT_OFF, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00], TIMEOUT_MS)
+ dev.write(0x02, [0x11, 0xff, 0x04, 0x1c, LIGHT_OFF, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00], TIMEOUT_MS)
dev.read(0x02, 64)
logging.info("Light Off")
teardown(dev, reattach)
@@ -109,8 +112,8 @@ def set_brightness(level):
dev, reattach = setup(index)
adjusted_level = math.floor(
MIN_BRIGHTNESS + ((level/100) * (MAX_BRIGHTNESS - MIN_BRIGHTNESS)))
- dev.write(0x02, [0x11, 0xff, 0x04, 0x4c, 0x00, adjusted_level, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00], TIMEOUT_MS)
+ dev.write(0x02, [0x11, 0xff, 0x04, 0x4c, 0x00, adjusted_level, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00], TIMEOUT_MS)
dev.read(0x02, 64)
config.update_current_state(brightness=level)
logging.info("Brightness set to %d", level)
@@ -133,3 +136,5 @@ def set_temperature(temp):
config.update_current_state(temp=temp)
logging.info("Temperature set to %d", temp)
teardown(dev, reattach)
+
+search_devices()
\ No newline at end of file
diff --git a/tox.ini b/tox.ini
index f1e8571..e8891e0 100644
--- a/tox.ini
+++ b/tox.ini
@@ -57,7 +57,7 @@ deps =
pylint
flake8
commands =
- python -m pylint --rcfile=setup.cfg src/
+ python -m pylint --rcfile=setup.cfg --disable=C src.version src/
flake8 src/ --count --select=E9,F63,F7,F82 --show-source --statistics
[testenv:bandit]
| Different idProduct for the Litra Beam
Thank you for making this software. I just got a [Litra Beam](https://www.logitech.com/en-us/products/lighting/litra-beam.946-000006.html) which is very similar to the Litra Glow. So I am optimistic that this software will work for the Litra Beam as well. The output of the `lsusb` command shows the following for the Litra Beam
```
Bus 001 Device 113: ID 046d:c901 Logitech, Inc. Litra Beam
```
As far as I can tell, the only change that should be necessary is that the udev rule should mention `ATTR{idProduct}=="c901"` instead of `"c900"`. However, I also noticed that the `c900` is hard coded as the `PRODUCT_ID` in `src/llgd/lib/llgd_lib.py`. Is there a way to specify the product id via the command line or a config file, so that this will work with the Litra Beam too?
Thanks in advance.
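A minimal pyusb sketch of what supporting both devices could look like — enumerating a list of product IDs instead of the single hard-coded one (0xc901 comes from the lsusb output above; the rest of the driver is omitted):
```python
import usb.core

VENDOR_ID = 0x046D
PRODUCT_IDS = (0xC900, 0xC901)  # Litra Glow, Litra Beam

devices = []
for product_id in PRODUCT_IDS:
    # find_all=True yields every matching device, so several lights also work
    for dev in usb.core.find(idVendor=VENDOR_ID, idProduct=product_id, find_all=True):
        devices.append(dev)

for dev in devices:
    print(hex(dev.idProduct), dev.product)
```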
| I'd also love this for my beam please!
@kharyam Do you think this could be added, if you have time?
Hey @sagarbehere and @henricook - this fell off my radar but I'll try to get to it in the next few days. Thanks! | 2022-12-06T02:04:07 | 0.0 | [] | [] |
||
getindata/dbt-airflow-factory | getindata__dbt-airflow-factory-101 | df3a39ab2701b455c0a6fa0236c11125bd1afc7c | diff --git a/.github/workflows/prepare-release.yml b/.github/workflows/prepare-release.yml
index c2c3cad..3b0dcf0 100644
--- a/.github/workflows/prepare-release.yml
+++ b/.github/workflows/prepare-release.yml
@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["3.8"]
+ python-version: ["3.9"]
env:
PYTHON_PACKAGE: dbt_airflow_factory
steps:
diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml
index bb3f311..3ebc9ab 100644
--- a/.github/workflows/publish.yml
+++ b/.github/workflows/publish.yml
@@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
- python-version: ["3.8"]
+ python-version: ["3.9"]
env:
PYTHON_PACKAGE: dbt_airflow_factory
steps:
diff --git a/.github/workflows/python-package.yml b/.github/workflows/python-package.yml
index 4a24008..cedc0b7 100644
--- a/.github/workflows/python-package.yml
+++ b/.github/workflows/python-package.yml
@@ -16,7 +16,7 @@ jobs:
- name: Setup python
uses: actions/[email protected]
with:
- python-version: "3.8"
+ python-version: "3.9"
- name: Setup virtualenv
run: |
@@ -35,7 +35,7 @@ jobs:
- name: Test with tox
run: |
- tox -e py38
+ tox -e py39
- name: Report coverage
uses: paambaati/[email protected]
diff --git a/.readthedocs.yaml b/.readthedocs.yaml
index 2afdf5d..e91e7fc 100644
--- a/.readthedocs.yaml
+++ b/.readthedocs.yaml
@@ -9,7 +9,7 @@ version: 2
build:
os: ubuntu-22.04
tools:
- python: "3.11"
+ python: "3.9"
# Build documentation in the docs/ directory with Sphinx
sphinx:
diff --git a/CHANGELOG.md b/CHANGELOG.md
index bf45046..4c40304 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,11 @@
## [Unreleased]
+## [0.33.0] - 2023-08-04
+
+- Add `kwargs` to `BashExecutionParameters` [#90](https://github.com/getindata/dbt-airflow-factory/issues/90)
+- Correcting required packages [#97](https://github.com/getindata/dbt-airflow-factory/issues/97)
+
## [0.32.0] - 2023-07-04
## [0.31.1] - 2023-05-12
@@ -146,7 +151,9 @@ This version brings compatibility with `dbt 1.0`.
- Initial implementation of `dbt_airflow_manifest_parser` library.
-[Unreleased]: https://github.com/getindata/dbt-airflow-factory/compare/0.32.0...HEAD
+[Unreleased]: https://github.com/getindata/dbt-airflow-factory/compare/0.33.0...HEAD
+
+[0.33.0]: https://github.com/getindata/dbt-airflow-factory/compare/0.32.0...0.33.0
[0.32.0]: https://github.com/getindata/dbt-airflow-factory/compare/0.31.1...0.32.0
diff --git a/README.md b/README.md
index 8148d0d..bcff861 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
# DBT Airflow Factory
-[](https://github.com/getindata/dbt-airflow-factory)
+[](https://github.com/getindata/dbt-airflow-factory)
[](https://pypi.org/project/dbt-airflow-factory/)
[](https://pepy.tech/project/dbt-airflow-factory)
[](https://codeclimate.com/github/getindata/dbt-airflow-factory/maintainability)
diff --git a/dbt_airflow_factory/__init__.py b/dbt_airflow_factory/__init__.py
index 0f31bac..893bfec 100644
--- a/dbt_airflow_factory/__init__.py
+++ b/dbt_airflow_factory/__init__.py
@@ -1,1 +1,1 @@
-version = "0.32.0"
+version = "0.33.0"
diff --git a/dbt_airflow_factory/bash/bash_parameters.py b/dbt_airflow_factory/bash/bash_parameters.py
index 9d59bda..dde554b 100644
--- a/dbt_airflow_factory/bash/bash_parameters.py
+++ b/dbt_airflow_factory/bash/bash_parameters.py
@@ -1,5 +1,7 @@
"""POD representing Bash operator config file."""
+from typing import Any
+
class BashExecutionParameters:
"""POD representing Bash operator config file.
@@ -7,8 +9,5 @@ class BashExecutionParameters:
:type execution_script: str
"""
- def __init__(
- self,
- execution_script: str = "dbt --no-write-json",
- ) -> None:
+ def __init__(self, execution_script: str = "dbt --no-write-json", **kwargs: Any) -> None:
self.execution_script = execution_script
diff --git a/dbt_airflow_factory/builder_factory.py b/dbt_airflow_factory/builder_factory.py
index 8d3757d..0c36cfe 100644
--- a/dbt_airflow_factory/builder_factory.py
+++ b/dbt_airflow_factory/builder_factory.py
@@ -83,6 +83,7 @@ def _create_tasks_airflow_config(self) -> TasksBuildingParameters:
self.airflow_config.get("use_task_group", False),
self.airflow_config.get("show_ephemeral_models", True),
self.airflow_config.get("enable_project_dependencies", False),
+ self.airflow_config.get("check_all_deps_for_multiple_deps_tests", True),
)
def _create_operator_builder(
diff --git a/dbt_airflow_factory/tasks_builder/builder.py b/dbt_airflow_factory/tasks_builder/builder.py
index 7a85fb7..18056e6 100644
--- a/dbt_airflow_factory/tasks_builder/builder.py
+++ b/dbt_airflow_factory/tasks_builder/builder.py
@@ -170,6 +170,7 @@ def _make_dbt_tasks(self, manifest_path: str) -> ModelExecutionTasks:
gateway_config=self.gateway_config,
enable_dags_dependencies=self.airflow_config.enable_dags_dependencies,
show_ephemeral_models=self.airflow_config.show_ephemeral_models,
+ check_all_deps_for_multiple_deps_tests=self.airflow_config.check_all_deps_for_multiple_deps_tests,
),
)
tasks_with_context = self._create_tasks_from_graph(dbt_airflow_graph)
diff --git a/dbt_airflow_factory/tasks_builder/parameters.py b/dbt_airflow_factory/tasks_builder/parameters.py
index 9fe6315..eaf9e89 100644
--- a/dbt_airflow_factory/tasks_builder/parameters.py
+++ b/dbt_airflow_factory/tasks_builder/parameters.py
@@ -1,10 +1,9 @@
+from dataclasses import dataclass
+
+
+@dataclass(frozen=True)
class TasksBuildingParameters:
- def __init__(
- self,
- use_task_group: bool = True,
- show_ephemeral_models: bool = True,
- enable_dags_dependencies: bool = False,
- ) -> None:
- self.use_task_group = use_task_group
- self.show_ephemeral_models = show_ephemeral_models
- self.enable_dags_dependencies = enable_dags_dependencies
+ use_task_group: bool = True
+ show_ephemeral_models: bool = True
+ enable_dags_dependencies: bool = False
+ check_all_deps_for_multiple_deps_tests: bool = False
diff --git a/setup.cfg b/setup.cfg
index 40bf7e4..6ad580e 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 0.32.0
+current_version = 0.33.0
[bumpversion:file:setup.py]
@@ -7,7 +7,7 @@ current_version = 0.32.0
[flake8]
exclude = .git,__pycache__,build,dist,docs/source/conf.py
-max-line-length = 100
+max-line-length = 120
extend-ignore = E203
[mypy]
diff --git a/setup.py b/setup.py
index fa6d6c1..c2d1541 100644
--- a/setup.py
+++ b/setup.py
@@ -7,8 +7,10 @@
# Runtime Requirements.
INSTALL_REQUIRES = [
- "pytimeparse==1.1.8",
- "dbt-graph-builder>=0.3.0",
+ "pytimeparse>=1.1, <2",
+ "dbt-graph-builder>=0.6.3, <2",
+ "apache-airflow[kubernetes,slack]>=2.5, <3",
+ "apache-airflow-providers-airbyte>=3.1, <4",
]
# Dev Requirements
@@ -18,9 +20,7 @@
"pytest-cov>=2.8.0, <3.0.0",
"tox==3.21.1",
"pre-commit==2.9.3",
- "pandas==1.2.5",
- "apache-airflow[kubernetes,slack]==2.5.2",
- "apache-airflow-providers-airbyte==3.1.0",
+ "pandas>=1.2.5, <2.0.0",
],
"docs": [
"sphinx==4.3.1",
@@ -33,7 +33,7 @@
setup(
name="dbt-airflow-factory",
- version="0.32.0",
+ version="0.33.0",
description="Library to convert DBT manifest metadata to Airflow tasks",
long_description=README,
long_description_content_type="text/markdown",
@@ -41,7 +41,6 @@
python_requires=">=3",
classifiers=[
"Development Status :: 3 - Alpha",
- "Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
diff --git a/tox.ini b/tox.ini
index 1a233fd..084324e 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
[tox]
-envlist = py38
+envlist = py39
[testenv]
extras =
@@ -10,7 +10,7 @@ commands=
# Lint
[flake8]
exclude = .git,__pycache__,docs/source/conf.py,old,build,dist
-max-line-length = 100
+max-line-length = 120
extend-ignore = E203
[mypy]
| Release 0.33.0
Bump version and CHANGELOG for next release.
| 2023-08-04T10:28:00 | 0.0 | [] | [] |
|||
cdgriffith/puremagic | cdgriffith__puremagic-97 | 72ee16483483b240293f060f8b030c7856bbe76b | diff --git a/puremagic/main.py b/puremagic/main.py
index a2d2416..d69a8bf 100644
--- a/puremagic/main.py
+++ b/puremagic/main.py
@@ -133,7 +133,7 @@ def _confidence(matches, ext=None) -> list[PureMagicWithConfidence]:
if not results:
raise PureError("Could not identify file")
- return sorted(results, key=lambda x: (x.confidence, x.byte_match), reverse=True)
+ return sorted(results, key=lambda x: (x.confidence, len(x.byte_match)), reverse=True)
def _identify_all(header: bytes, footer: bytes, ext=None) -> list[PureMagicWithConfidence]:
@@ -226,6 +226,10 @@ def _stream_details(stream):
"""Grab the start and end of the stream"""
max_head, max_foot = _max_lengths()
head = stream.read(max_head)
+ try:
+ stream.seek(-max_foot, os.SEEK_END)
+ except OSError:
+ stream.seek(stream.tell(), os.SEEK_END)
stream.seek(-max_foot, os.SEEK_END)
foot = stream.read()
stream.seek(0)
| Weird issue with non-compliant AIFF files
Just starting a PR based on #85 and came across a weird issue. It appears that certain malformed AIFF files cannot be read under certain conditions. If we use the example in Python:
```
import puremagic
filename = "r:\aiff\Fnonull.aif"
puremagic.magic_file(filename)
```
We get the following:
```
[PureMagicWithConfidence(byte_match=b'FORM', offset=0, extension='.aif', mime_type='audio/x-aiff', name='Audio Interchange File', confidence=0.9), PureMagicWithConfidence(byte_match=b'FORM\x00\x00\x00\\AIFC', offset=8, extension='.aifc', mime_type='audio/x-aiff', name='Audio Interchange File Format (Compressed)', confidence=0.8), PureMagicWithConfidence(byte_match=b'FORM', offset=0, extension='.aiff', mime_type='audio/aiff', name='Audio Interchange File', confidence=0.4), PureMagicWithConfidence(byte_match=b'FORM', offset=0, extension='.djv', mime_type='image/vnd.djvu', name='DjVu image', confidence=0.4), PureMagicWithConfidence(byte_match=b'FORM', offset=0, extension='.djv', mime_type='image/vnd.djvu+multipage', name='DjVu document', confidence=0.4), PureMagicWithConfidence(byte_match=b'FORM', offset=0, extension='', mime_type='application/x-iff', name='IFF file', confidence=0.4), PureMagicWithConfidence(byte_match=b'FORM', offset=0, extension='.sc2', mime_type='', name='SimCity 2000 Map File', confidence=0.4), PureMagicWithConfidence(byte_match=b'AIFC', offset=8, extension='.aiffc', mime_type='audio/x-aifc', name='AIFC audio', confidence=0.4)]
```
However, if we do the following in a .py file
```
import puremagic
with open(r"r:\aiff\Fnonull.aif", "rb") as file:
print(puremagic.magic_stream(file))
```
We get:
```
Traceback (most recent call last):
File "R:\WinUAE\pm.py", line 3, in <module>
print(puremagic.magic_stream(file))
File "C:\Users\Andy\AppData\Local\Programs\Python\Python310\lib\site-packages\puremagic\main.py", line 351, in magic_stream
head, foot = _stream_details(stream)
File "C:\Users\Andy\AppData\Local\Programs\Python\Python310\lib\site-packages\puremagic\main.py", line 229, in _stream_details
stream.seek(-max_foot, os.SEEK_END)
OSError: [Errno 22] Invalid argument
```
From reading around it appears to have something to do with malformed files and seek errors, but I can't quite see how Puremagic can read it one way and not the other.
Any thoughts on this?
## Test files.
[aiff.zip](https://github.com/user-attachments/files/15973648/aiff.zip)
The files causing trouble are the ones labelled as `Perverse Files` from this page [Samples](https://www.mmsp.ece.mcgill.ca/Documents/AudioFormats/AIFF/Samples.html)
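For reference, the traceback points at the unconditional negative seek from the end of the stream, which these odd files reject with `OSError`. A guarded footer read along these lines (a simplified sketch of the fallback idea, not necessarily the exact fix) avoids the crash:
```python
import os

def stream_footer(stream, max_foot):
    # Read up to max_foot trailing bytes without assuming the stream can be
    # rewound that far (sketch only).
    try:
        stream.seek(-max_foot, os.SEEK_END)
    except OSError:        # e.g. Errno 22 when the rewind is not possible
        stream.seek(0, os.SEEK_END)
    footer = stream.read()
    stream.seek(0)
    return footer
```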
| OK I think I have this partially solved, rather than duplicate it all here I'll continue this in #96 which is the same issue. | 2024-08-05T14:55:15 | 0.0 | [] | [] |
||
borgbase/vorta | borgbase__vorta-1626 | 82270adf4f27b228fe0871aba36c39926d6ab6b5 | diff --git a/src/vorta/borg/create.py b/src/vorta/borg/create.py
index 167a87b13..0eea27cc8 100644
--- a/src/vorta/borg/create.py
+++ b/src/vorta/borg/create.py
@@ -49,6 +49,7 @@ def process_result(self, result):
).format(LOG_DIR.as_uri())
)
else:
+ self.app.backup_log_event.emit('', {})
self.app.backup_progress_event.emit(f"[{self.params['profile_name']}] {self.tr('Backup finished.')}")
def progress_event(self, fmt):
| Clear contents of `logText` after successful backup.
Currently the label `logText` is not cleared after a borg command has finished. When creating a backup, the label will show the path of the last backed-up file even after backup completion.
Instead, the `progressText` and `logText` labels should be cleared if they show information that is only relevant during command execution.
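The patch above addresses this by emitting the existing log signal with an empty payload once the command finishes. A stripped-down sketch of that signal-to-label pattern (the class, signal and widget names are illustrative, not Vorta's actual structure):
```python
from PyQt5.QtCore import QObject, pyqtSignal
from PyQt5.QtWidgets import QApplication, QLabel

class Events(QObject):
    backup_log_event = pyqtSignal(str, dict)

app = QApplication([])
log_text = QLabel("backing up: /home/user/big_file.iso")

events = Events()
events.backup_log_event.connect(lambda msg, ctx: log_text.setText(msg))

events.backup_log_event.emit("", {})   # empty payload clears the label
assert log_text.text() == ""
```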
| This was already reported here: https://github.com/borgbase/vorta/issues/1356#issuecomment-1157515164
Hey @real-yfprojects can I work on this issue?
Sure, you'll need to be familiar with (py)qt signals for this one.
Hey, @real-yfprojects can you explain me about this issue?
> Hey, @real-yfprojects can you explain me about this issue?
What is your question?
Hey! @aadityasinha-dotcom are you still working on this ? otherwise I would like to work on this
Yes
> Hey, @real-yfprojects can you explain me about this issue?
I guess @real-yfprojects means something like this

See the bottom text disappears after the job is finished ð
> See the bottom text disappears after the job is finished
That is the goal. Vorta doesn't do this yet afaik.
okay got it, thank you @DaffyTheDuck
https://user-images.githubusercontent.com/75474786/222726002-8366c311-a68f-4844-928a-7bd66faeae6d.mp4
But in my system it does not disappears
> But in my system it does not disappears
That's what you have to figure out ð
@real-yfprojects can you explain this?
> can you explain this?
*this* is an ambiguous pronoun. Be more specific with your question. State what you have already figured out. What problems you are facing and what kind of advice you are expecting.
If I tell you in detail what lines of code have to be written. I can just write the code instead.
> @real-yfprojects can you explain this?
In simple, he means that you've to modify the code in such a way that it should remove the labels after the backup is complete ð (you can also tweak the code in a way that as shown in my gif it wont remove it instantly)
okay
But my question is, why it is not disappearing in my system? @DaffyTheDuck | 2023-03-05T11:20:36 | 0.0 | [] | [] |
||
coderedcorp/wagtail-cache | coderedcorp__wagtail-cache-76 | 0df74a64b1e302498e9182c5f6ce9570691763b8 | diff --git a/wagtailcache/cache.py b/wagtailcache/cache.py
index 0d92c75..460023c 100644
--- a/wagtailcache/cache.py
+++ b/wagtailcache/cache.py
@@ -183,10 +183,9 @@ class FetchFromCacheMiddleware(MiddlewareMixin):
Mostly stolen from ``django.middleware.cache.FetchFromCacheMiddleware``.
"""
- def __init__(self, get_response=None):
+ def __init__(self, get_response):
self._wagcache = caches[wagtailcache_settings.WAGTAIL_CACHE_BACKEND]
- self.get_response = get_response
- self._async_check()
+ super().__init__(get_response)
def process_request(self, request: WSGIRequest) -> Optional[HttpResponse]:
if not wagtailcache_settings.WAGTAIL_CACHE:
@@ -250,10 +249,9 @@ class UpdateCacheMiddleware(MiddlewareMixin):
Mostly stolen from ``django.middleware.cache.UpdateCacheMiddleware``.
"""
- def __init__(self, get_response=None):
+ def __init__(self, get_response):
self._wagcache = caches[wagtailcache_settings.WAGTAIL_CACHE_BACKEND]
- self.get_response = get_response
- self._async_check()
+ super().__init__(get_response)
def process_response(
self, request: WSGIRequest, response: HttpResponse
@@ -411,13 +409,15 @@ def _wrapped_view_func(
request: WSGIRequest, *args, **kwargs
) -> HttpResponse:
# Try to fetch an already cached page from wagtail-cache.
- response = FetchFromCacheMiddleware().process_request(request)
+ response = FetchFromCacheMiddleware(view_func).process_request(request)
if response:
return response
# Since we don't have a response at this point, process the request.
response = view_func(request, *args, **kwargs)
# Cache the response.
- response = UpdateCacheMiddleware().process_response(request, response)
+ response = UpdateCacheMiddleware(view_func).process_response(
+ request, response
+ )
return response
return _wrapped_view_func
| Cache middleware breaks in Django 5.1

| Experiencing similar issue,
python 3.11 using https://github.com/coderedcorp/[email protected]
```shell
app-1 | File "/app/myapp/myapp/asgi.py", line 25, in <module>
app-1 | django_asgi_app = get_asgi_application()
app-1 | ^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/django/core/asgi.py", line 13, in get_asgi_application
app-1 | return ASGIHandler()
app-1 | ^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/asgi.py", line 148, in __init__
app-1 | self.load_middleware(is_async=True)
app-1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 61, in load_middleware
app-1 | mw_instance = middleware(adapted_handler)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/wagtailcache/cache.py", line 189, in __init__
app-1 | self._async_check()
app-1 | ^^^^^^^^^^^^^^^^^
app-1 | AttributeError: 'FetchFromCacheMiddleware' object has no attribute '_async_check'
```
Appears the `_async_check()` was removed after Django 5.0
see https://github.com/django/django/commit/e2922b0d5f18169d1d0115a6db5d2ed8c42d0692.
Also note that `django/utils/deprecation.py` has deprecation warnings now set to 6.0 and 6.1 but unclear what this deprecation.py is really for. A confusing but enlightening overview at https://stackoverflow.com/a/52913499/6940121.
Appears the following works as a simple fix?
```python
# and then amend the __init__ to call super()
class FetchFromCacheMiddleware(MiddlewareMixin):
def __init__(self, get_response=None):
super().__init__(get_response)
.............. # rest of __init__
class UpdateCacheMiddleware(MiddlewareMixinFixed):
def __init__(self, get_response=None):
super().__init__(get_response)
.............. # rest of __init__
```
@AnthonyUphof-zacailab thanks for the link. The code structure of wagtail-cache was originally a copy of the Django cache middleware, then grew into its own beast over time. I'd be open to refactoring it to the "correct" way.
What I'm currently trying to figure out is can we have compatibility with both Django 4.2-5.0, and 5.1 at the same time? Or do we need to cut a separate release for those? | 2024-08-21T21:07:38 | 0.0 | [] | [] |
||
PNNL-CompBio/coderdata | PNNL-CompBio__coderdata-157 | 16c6c5e9316c7e982f9120eb7b91060bff746a19 | diff --git a/.dockerignore b/.dockerignore
index f1887ba3..d18d9ac9 100644
--- a/.dockerignore
+++ b/.dockerignore
@@ -4,4 +4,5 @@ coderdata/
dataSummary/
docs/
candle_bmd/
-schema/
\ No newline at end of file
+schema/
+build/local/
\ No newline at end of file
diff --git a/build/beatAML/GetBeatAML.py b/build/beatAML/GetBeatAML.py
index 3fcfa6e2..f2b433dd 100755
--- a/build/beatAML/GetBeatAML.py
+++ b/build/beatAML/GetBeatAML.py
@@ -7,7 +7,7 @@
import numpy as np
import subprocess
import argparse
-
+import time
def download_from_github(raw_url, save_path):
"""
@@ -159,11 +159,14 @@ def retrieve_drug_info(compound_name):
"""
if pd.isna(compound_name):
return np.nan, np.nan, np.nan, np.nan, np.nan, np.nan
+
+ ##limit is 1 call per 5 seconds. add in wait call.
url = f"https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/{compound_name}/property/CanonicalSMILES,IsomericSMILES,InChIKey,MolecularFormula,MolecularWeight/JSON"
response = requests.get(url)
if response.status_code != 200:
+ print(response.text)
return np.nan, np.nan, np.nan, np.nan, np.nan, np.nan
data = response.json()
@@ -206,16 +209,20 @@ def update_dataframe_with_pubchem(d_df):
for name in chem_names:
print("Attempting to call pubchem API for chem_name: ", name)
chem_data_dict[name] = retrieve_drug_info(name)
+ time.sleep(0.2)
failed_chem_names = {k for k, v in chem_data_dict.items() if all(pd.isna(val) for val in v)}
other_names = d_df[d_df['chem_name'].isin(failed_chem_names)]['other_name'].dropna().unique()
other_data_dict = {}
for name in other_names:
print("Attempting to call pubchem API for other_name: ", name)
other_data_dict[name] = retrieve_drug_info(name)
+ time.sleep(0.2)
# Combine both dictionaries for easy lookup
data_dict = {**chem_data_dict, **other_data_dict}
+ #print(data_dict)
+# print(data_dict['isoSMILES'])
# Update the DataFrame using the data dictionary
for idx, row in d_df.iterrows():
if row['chem_name'] in data_dict and not all(pd.isna(val) for val in data_dict[row['chem_name']]):
@@ -248,6 +255,9 @@ def merge_drug_info(d_df,drug_map):
pd.DataFrame
The merged dataframe containing combined drug information.
"""
+ #print(drug_map)
+ #print(d_df.columns)
+ #print(d_df)
result_df = d_df.merge(drug_map[['isoSMILES', 'improve_drug_id']], on='isoSMILES', how='left')
return result_df
@@ -292,7 +302,7 @@ def format_drug_df(drug_path):
"""
d_df = pd.read_csv(drug_path, index_col=None,sep="\t")
d_df[['chem_name', 'other_name']] = d_df['inhibitor'].str.extract(r'^(.*?)\s*(?:\((.+)\))?$')
- d_df["chem_name"] = d_df["chem_name"].str.replace('\s-\s', ':')
+ d_df["chem_name"] = d_df["chem_name"].str.replace('\s-\s', ':',regex=True)
d_df['chem_name'] = [a.lower() for a in d_df['chem_name']]
return d_df
diff --git a/build/beatAML/requirements.txt b/build/beatAML/requirements.txt
new file mode 100755
index 00000000..48b08e50
--- /dev/null
+++ b/build/beatAML/requirements.txt
@@ -0,0 +1,6 @@
+pandas
+wget==3.2
+requests
+synapseclient
+argparse
+numpy
diff --git a/build/broad_sanger/02-broadSangerOmics.R b/build/broad_sanger/02-broadSangerOmics.R
index d6bfc9ab..ddd3cbce 100755
--- a/build/broad_sanger/02-broadSangerOmics.R
+++ b/build/broad_sanger/02-broadSangerOmics.R
@@ -31,7 +31,7 @@ variant_schema =list(`3'UTR`=c("3'UTR",'THREE_PRIME_UTR','3prime_UTR_variant','3
IGR=c('IGR','nc_variant'),
In_Frame_Del=c('IN_FRAME_DEL','In_Frame_Del','inframe'),
In_Frame_Ins=c('IN_FRAME_INS','In_Frame_Ins'),
- Intron=c('INTRON','Intron','intronic'),
+ Intron=c('INTRON','Intron','intronic','intron'),
Missense_Mutation=c('Missense_Mutation','MISSENSE','missense'),
Nonsense_Mutation=c('Nonsense_Mutation','NONSENSE','nonsense'),
Nonstop_Mutation=c('Nonstop_Mutation','NONSTOP'),
@@ -160,8 +160,17 @@ sanger_files<-function(fi,value){
left_join(smap)|>
mutate(study='Sanger')|>
dplyr::select(-c(other_id,gene_symbol))|>
- left_join(as.data.frame(sanger_vtab))|>
- dplyr::select(-effect)|>
+ left_join(as.data.frame(sanger_vtab))
+
+ ##now many variants are missing???
+ missing<-res|>
+ select(effect,variant_classification)|>
+ distinct()|>
+ subset(is.na(variant_classification))
+ print(missing)
+
+###TODO double check to see if any variants are missing
+ res<-res|>dplyr::select(-effect)|>
subset(!is.na(improve_sample_id))|>
distinct()
@@ -387,7 +396,16 @@ depmap_files<-function(fi,value){
res<-exp_file|>
mutate(entrez_id=as.numeric(EntrezGeneID))|>
- left_join(as.data.frame(depmap_vtab))|>
+ left_join(as.data.frame(depmap_vtab))
+
+ ##now many variants are missing???
+ missing<-res|>
+ select(VariantInfo,variant_classification)|>
+ distinct()|>
+ subset(is.na(variant_classification))
+ print(missing)
+
+ res<-res|>
dplyr::select(-c(EntrezGeneID,VariantInfo))|>
distinct()|>
subset(!is.na(entrez_id)) ##removes thos with unknonw entrez
@@ -538,13 +556,12 @@ main<-function(){
lapply(alltypes,function(dt){
print(dt)
- temps<-sanger_files(sanger_filenames[[dt]],dt)
- tempd<-depmap_files(depmap_filenames[[dt]],dt)
+ temps<-sanger_files(sanger_filenames[[dt]],dt)|>tidyr::drop_na()
+ tempd<-depmap_files(depmap_filenames[[dt]],dt)|>tidyr::drop_na()
readr::write_csv(rbind(tempd,temps),file=paste0('/tmp/broad_sanger_',dt,'.csv.gz'))
rm(tempd)
rm(temps)
})
- system(paste0('/opt/venv/bin/python 02a-broad_sanger_proteomics.py --gene ',gfile,' --sample ',sfile))
}
diff --git a/build/broad_sanger/build_omics.sh b/build/broad_sanger/build_omics.sh
index ca6fb10c..d898c289 100644
--- a/build/broad_sanger/build_omics.sh
+++ b/build/broad_sanger/build_omics.sh
@@ -1,2 +1,3 @@
+/opt/venv/bin/python 02a-broad_sanger_proteomics.py --gene $1 --sample $2
Rscript 02-broadSangerOmics.R $1 $2
#python 02a-broad/sanger_proteomics.py $1 $2
diff --git a/build/docker/Dockerfile.beataml b/build/docker/Dockerfile.beataml
index 73aa168b..033a21f3 100644
--- a/build/docker/Dockerfile.beataml
+++ b/build/docker/Dockerfile.beataml
@@ -6,8 +6,8 @@ WORKDIR /usr/src/app
COPY build/beatAML/GetBeatAML.py .
COPY build/utils/fit_curve.py .
COPY build/beatAML/*sh ./
+COPY build/beatAML/requirements.txt .
-COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
VOLUME ['/tmp']
# CMD python GetBeatAML.py --token ${SYNAPSE_TOKEN}
| Drop NA values in Broad_Sanger Mutation Data
This data should not have missing values for Entrez IDs or mutation data.
I'll make this change when I take over the local build script.
Drop NA values in Broad_Sanger Copy Number Data
This data should not have missing values for Entrez IDs or copy number data.
I'll make this change when I take over the local build script.
| 2024-04-26T16:44:09 | 0.0 | [] | [] |
|||
ffalcinelli/pydivert | ffalcinelli__pydivert-27 | 724bfdc0554ed6fe966eedda3b74197519526cd4 | diff --git a/CHANGELOG b/CHANGELOG
index c612446..5c0d9ef 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,6 @@
+Version 2.1.0
+ - Bundle WinDivert 1.3.
+
Version 2.0.7
- Headers have handy fields to manipulate raw data
diff --git a/README.rst b/README.rst
index 7768d86..cdf1017 100644
--- a/README.rst
+++ b/README.rst
@@ -34,6 +34,7 @@ PyDivert WinDivert
0.0.7 1.0.x or 1.1.x
1.0.x (API-compatible with 0.0.7) 1.1.8 (bundled)
2.0.x 1.1.8 (bundled)
+2.1.x 1.3 (bundled)
================================= ===============
Getting Started
diff --git a/pydivert/windivert.py b/pydivert/windivert.py
index 41b784e..967c228 100644
--- a/pydivert/windivert.py
+++ b/pydivert/windivert.py
@@ -85,7 +85,7 @@ def is_registered():
"""
Check if the WinDivert service is currently installed on the system.
"""
- return subprocess.call("sc query WinDivert1.1", stdout=subprocess.PIPE,
+ return subprocess.call("sc query WinDivert1.3", stdout=subprocess.PIPE,
stderr=subprocess.PIPE) == 0
@staticmethod
@@ -95,7 +95,7 @@ def unregister():
This function only requests a service stop, which may not be processed immediately if there are still open
handles.
"""
- subprocess.check_call("sc stop WinDivert1.1", stdout=subprocess.PIPE,
+ subprocess.check_call("sc stop WinDivert1.3", stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
def open(self):
diff --git a/pydivert/windivert_dll/WinDivert32.dll b/pydivert/windivert_dll/WinDivert32.dll
index a1bbffe..05397c1 100644
Binary files a/pydivert/windivert_dll/WinDivert32.dll and b/pydivert/windivert_dll/WinDivert32.dll differ
diff --git a/pydivert/windivert_dll/WinDivert32.sys b/pydivert/windivert_dll/WinDivert32.sys
index 724c398..6df52c0 100644
Binary files a/pydivert/windivert_dll/WinDivert32.sys and b/pydivert/windivert_dll/WinDivert32.sys differ
diff --git a/pydivert/windivert_dll/WinDivert64.dll b/pydivert/windivert_dll/WinDivert64.dll
index ef43f8a..4889e3d 100644
Binary files a/pydivert/windivert_dll/WinDivert64.dll and b/pydivert/windivert_dll/WinDivert64.dll differ
diff --git a/pydivert/windivert_dll/WinDivert64.sys b/pydivert/windivert_dll/WinDivert64.sys
index 515b72b..ce7bf07 100644
Binary files a/pydivert/windivert_dll/WinDivert64.sys and b/pydivert/windivert_dll/WinDivert64.sys differ
diff --git a/pydivert/windivert_dll/__init__.py b/pydivert/windivert_dll/__init__.py
index 6e25922..04c7973 100644
--- a/pydivert/windivert_dll/__init__.py
+++ b/pydivert/windivert_dll/__init__.py
@@ -15,7 +15,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
pydivert bundles the WinDivert binaries from
-https://github.com/basil00/Divert/releases/download/v1.1.8/WinDivert-1.1.8-WDDK.zip
+https://reqrypt.org/download/WinDivert-1.3.0-WDDK.zip
"""
import functools
import os
diff --git a/setup.cfg b/setup.cfg
index 00eb183..e384808 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -8,4 +8,4 @@ universal=1
[tool:pytest]
testpaths = pydivert
addopts = --capture=no --color=yes
-timeout = 5
\ No newline at end of file
+timeout = 20
\ No newline at end of file
| Support Windivert 1.3
As the new release is available we have to support and embed the new version inside our binding.
| 2017-10-18T09:58:43 | 0.0 | [] | [] |
|||
bopen/xarray-sentinel | bopen__xarray-sentinel-107 | 043bea76d21ac6f91b70716ed28230bc5d933ec5 | diff --git a/xarray_sentinel/sentinel1.py b/xarray_sentinel/sentinel1.py
index ca217c1..46bac17 100644
--- a/xarray_sentinel/sentinel1.py
+++ b/xarray_sentinel/sentinel1.py
@@ -756,15 +756,17 @@ def slant_range_time_to_ground_range(
template=slant_range_time,
)
x = slant_range - sr0
+ template = coordinate_conversion.srgrCoefficients.broadcast_like(slant_range_time)
+ template = template.isel(azimuth_time=0).drop_vars("azimuth_time")
+ template = template.chunk(azimuth_time.chunksizes)
+
srgrCoefficients = xr.map_blocks(
interp_block,
azimuth_time,
kwargs={
"data": coordinate_conversion.srgrCoefficients,
},
- template=slant_range_time.expand_dims(
- {"degree": coordinate_conversion.degree.size}
- ),
+ template=template,
)
ground_range = (srgrCoefficients * x**srgrCoefficients.degree).sum("degree")
return ground_range # type: ignore
| Make helper functions dask friendly with `xr.map_blocks`
Candidates:
- [x] calibrate_amplitude / calibrate_intensity
- [x] mosaic_slc_iw
- [x] slant_range_time_to_ground_range
| 2022-05-17T15:20:22 | 0.0 | [] | [] |
|||
voneiden/ocp-freecad-cam | voneiden__ocp-freecad-cam-29 | 4bf4a7715463a578854d92b3a5f0fa068292d1e2 | diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 1e2812b..3ce5340 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -1,6 +1,7 @@
name: ocp-freecad-cam-ci
-
on:
+ schedule:
+ - cron: "30 23 * * 1,5"
push:
branches: [ dev ]
pull_request:
@@ -13,11 +14,32 @@ on:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
build:
- runs-on: ubuntu-latest
+ runs-on: ${{ matrix.os }}
strategy:
+ fail-fast: false
matrix:
- python: [ "3.10" ]
+ os: ["ubuntu-latest", "windows-latest"]
+ release: ["weekly-builds", "0.21.2"]
+ python: [ "3.10", "3.11" ]
+ exclude:
+ # does not exist
+ - release: "weekly-builds"
+ python: "3.11"
+
+ # causes an access violation on python interpreter exit
+ - os: "windows-latest"
+ python: "3.11"
+
+ include:
+ - python: "3.10"
+ release: "weekly-builds"
+ os: "ubuntu-latest"
+ primary: true
+ - python: "3.10"
+ pypattern: "py310"
+ - python: "3.11"
+ pypattern: "py311"
steps:
- name: Checkout
uses: actions/checkout@v3
@@ -31,30 +53,74 @@ jobs:
python-version: ${{ matrix.python }}
cache: pip
- - name: Setup FreeCAD
+ - name: Setup FreeCAD (Linux)
+ if: matrix.os == 'ubuntu-latest'
+ env:
+ RELEASE: ${{ matrix.release }}
+ PYPATTERN: ${{ matrix.pypattern }}
+ PYVER: ${{ matrix.python }}
run: |
mkdir $PWD/freecad
cd $PWD/freecad
- wget -O freecad.AppImage https://github.com$(curl -v --silent https://github.com/voneiden/FreeCAD-Bundle/releases/expanded_assets/weekly-builds 2>&1 | sed -n 's/.*href="\([^"]*\).*/\1/p' | grep x86_64 | grep AppImage$)
+ wget -O freecad.AppImage https://github.com$(curl -v --silent https://github.com/voneiden/FreeCAD-Bundle/releases/expanded_assets/${RELEASE} 2>&1 | sed -n 's/.*href="\([^"]*\).*/\1/p' | grep x86_64 | grep $PYPATTERN | grep AppImage$)
chmod +x freecad.AppImage
./freecad.AppImage --appimage-extract > /dev/null
- export PYTHONPATH=.:$PWD/../src:$PWD/squashfs-root/usr/lib:$PWD/squashfs-root/usr/Mod/Path:$PWD/squashfs-root/usr/lib/python3.10/site-packages/
- echo "PYTHONPATH=$PYTHONPATH" >> $GITHUB_ENV
+
+ $PWD/squashfs-root/usr/bin/python -m venv --system-site-packages venv
+ echo "$PWD/squashfs-root/usr/lib" > venv/lib/python${PYVER}/site-packages/freecad.pth
+ echo "$PWD/../src" > venv/lib/python${PYVER}/site-packages/ocp_freecad_cam.pth
- - name: Test FreeCAD is available
+ - name: Setup FreeCAD (Windows)
+ if: matrix.os == 'windows-latest'
+ env:
+ RELEASE: ${{ matrix.release }}
+ PYPATTERN: ${{ matrix.pypattern }}
+ PYVER: ${{ matrix.python }}
run: |
+ mkdir freecad
+ cd freecad
+ (Invoke-WebRequest -Uri "https://github.com/voneiden/FreeCAD-Bundle/releases/expanded_assets/${{ matrix.release }}").Content -match 'href="([^"]*.x86_64-${{ matrix.pypattern }}.7z)"' | Out-Null
+ Invoke-WebRequest -Uri ("https://github.com" + $matches[1]) -OutFile freecad.7z
+ 7z x freecad.7z
+ Invoke-Expression (".\" + (Get-ChildItem . "FreeCAD_*" | select -first 1).Name + "\bin\python -m venv --system-site-packages venv")
+ "$($PWD)\..\src" | Out-File -FilePath "venv\Lib\site-packages\ocp_freecad_cam.pth"
+
+ - name: Test FreeCAD is available (Linux)
+ if: matrix.os == 'ubuntu-latest'
+ run: |
+ source freecad/venv/bin/activate
echo $PYTHONPATH
python -c "import sys; print(sys.path)"
python -c "import FreeCAD"
- - name: Install dependencies
+ - name: Test FreeCAD is available (Windows)
+ if: matrix.os == 'windows-latest'
+ run: |
+ .\freecad\venv\Scripts\activate
+ echo $PYTHONPATH
+ python -c "import sys; print(sys.path)"
+ python -c "import FreeCAD"
+
+ - name: Install dependencies (Linux)
+ if: matrix.os == 'ubuntu-latest'
+ run: |
+ source freecad/venv/bin/activate
+ python -m pip install --upgrade pip
+ pip install cadquery build123d
+ pip install -r requirements-dev.txt
+
+ - name: Install dependencies (Windows)
+ if: matrix.os == 'windows-latest'
run: |
+ .\freecad\venv\Scripts\activate
python -m pip install --upgrade pip
- pip install cadquery
+ pip install cadquery build123d
pip install -r requirements-dev.txt
- name: Check black
+ if: matrix.primary
run: |
+ source freecad/venv/bin/activate
black --check src
black --check tests
@@ -65,25 +131,25 @@ jobs:
# run: flake8
- name: Check isort
- uses: liskin/gh-problem-matcher-wrap@v2
- with:
- linters: isort
- run: |
- isort src --check-only --diff
- - name: Check isort
- uses: liskin/gh-problem-matcher-wrap@v2
- with:
- linters: isort
- run: |
- isort tests --check-only --diff
+ if: matrix.primary
+ run: |
+ source freecad/venv/bin/activate
+ isort src tests --check-only --diff
- - name: Run tests
- uses: liskin/gh-problem-matcher-wrap@v2
- with:
- linters: pytest
- run: pytest -ra -vvv --cov=src --cov-report xml tests
+ - name: Run tests (Linux)
+ if: matrix.os == 'ubuntu-latest'
+ run: |
+ source freecad/venv/bin/activate
+ python -m pytest -ra -vvv --cov=src --cov-report xml tests
+
+ - name: Run tests (Windows)
+ if: matrix.os == 'windows-latest'
+ run: |
+ .\freecad\venv\Scripts\activate
+ python -m pytest -ra -vvv tests
- name: Codecov
+ if: matrix.primary
uses: codecov/codecov-action@v3
with:
files: ./coverage.xml
diff --git a/src/ocp_freecad_cam/fc_impl.py b/src/ocp_freecad_cam/fc_impl.py
index 18db3da..c5139f3 100644
--- a/src/ocp_freecad_cam/fc_impl.py
+++ b/src/ocp_freecad_cam/fc_impl.py
@@ -7,6 +7,7 @@
"""
+import os
import tempfile
from abc import ABC
from copy import copy
@@ -182,12 +183,13 @@ def to_gcode(self, rebuild=False):
for idx, section in enumerate(postlist):
name, sublist = section
- with tempfile.NamedTemporaryFile() as tmp_file:
+ with tempfile.TemporaryDirectory() as tmp_dir:
+ tmp_file = os.path.join(tmp_dir, "output.nc")
options = ["--no-show-editor"]
if self.units == "imperial":
options.append("--inches")
- gcode = processor.export(sublist, tmp_file.name, " ".join(options))
+ gcode = processor.export(sublist, tmp_file, " ".join(options))
return gcode
def show(self, show_object=None, rebuild=False):
| Permission denied error on Windows when exporting gcode
ocp-freecad-cam creates a temporary file that is used for capturing the export output from FreeCAD:
https://github.com/voneiden/ocp-freecad-cam/blob/4bf4a7715463a578854d92b3a5f0fa068292d1e2/src/ocp_freecad_cam/fc_impl.py#L185-L190
However, on Windows this crashes, because FreeCAD attempts to re-open the file while it is already open, and it is not possible to open the file twice.
Relates to #21
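The change above sidesteps this with the usual Windows-safe pattern: create a temporary directory and hand the exporter a plain path inside it, so nothing in this process keeps the file open. A standalone sketch (`output.nc` is just an example name):
```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp_dir:
    tmp_file = os.path.join(tmp_dir, "output.nc")

    # whoever gets this path may open and close it freely, even on Windows
    with open(tmp_file, "w") as fh:
        fh.write("G0 X0 Y0\n")
    with open(tmp_file) as fh:
        gcode = fh.read()

print(gcode)
```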
| 2023-12-27T19:34:45 | 0.0 | [] | [] |
|||
aiokitchen/aiomisc | aiokitchen__aiomisc-180 | 9db719776290ee2f189b7639cfa16e13fa495d84 | diff --git a/aiomisc/compat.py b/aiomisc/compat.py
index 91541925..631bfdd7 100644
--- a/aiomisc/compat.py
+++ b/aiomisc/compat.py
@@ -3,7 +3,7 @@
import os
import socket
import sys
-from typing import Optional
+from typing import Any, Iterator, Optional
from ._context_vars import EVENT_LOOP
@@ -23,12 +23,45 @@ def time_ns() -> int:
except ImportError:
from typing_extensions import final # type: ignore
+
if sys.version_info >= (3, 10):
from typing import ParamSpec
else:
from typing_extensions import ParamSpec
+if sys.version_info >= (3, 8):
+ from typing import Protocol
+else:
+ from typing_extensions import Protocol
+
+
+class EntrypointProtocol(Protocol):
+ @property
+ def name(self) -> str:
+ ...
+
+ def load(self) -> Any:
+ ...
+
+
+# noinspection PyUnresolvedReferences
+try:
+ from importlib.metadata import Distribution, EntryPoint
+
+ def entry_pont_iterator(entry_point: str) -> Iterator[EntrypointProtocol]:
+ ep: EntryPoint
+ for dist in Distribution.discover():
+ for ep in dist.entry_points:
+ if ep.group == entry_point:
+ yield ep
+except ImportError:
+ import pkg_resources
+
+ def entry_pont_iterator(entry_point: str) -> Iterator[EntrypointProtocol]:
+ yield from pkg_resources.iter_entry_points(entry_point)
+
+
class EventLoopMixin:
__slots__ = "_loop",
@@ -81,8 +114,11 @@ def sock_set_reuseport(sock: socket.socket, reuse_port: bool) -> None:
get_current_loop = EVENT_LOOP.get
__all__ = (
+ "EntrypointProtocol",
"EventLoopMixin",
"ParamSpec",
+ "Protocol",
+ "entry_pont_iterator",
"event_loop_policy",
"final",
"get_current_loop",
diff --git a/aiomisc/plugins/__init__.py b/aiomisc/plugins/__init__.py
index 90dced50..a28e6cc7 100644
--- a/aiomisc/plugins/__init__.py
+++ b/aiomisc/plugins/__init__.py
@@ -1,24 +1,30 @@
import logging
import os
+from itertools import chain
from types import MappingProxyType
from typing import Callable, Mapping
+from aiomisc.compat import entry_pont_iterator
+
def setup_plugins() -> Mapping[str, Callable]:
if os.getenv("AIOMISC_NO_PLUGINS"):
return MappingProxyType({})
- import pkg_resources
-
plugins = {}
+ logger = logging.getLogger(__name__)
- for entry_point in pkg_resources.iter_entry_points("aiomisc.plugins"):
- plugins[entry_point.name] = entry_point.load()
-
- for entry_point in pkg_resources.iter_entry_points("aiomisc"):
- plugins[entry_point.name] = entry_point.load()
+ for entry_point in chain(
+ entry_pont_iterator("aiomisc.plugins"),
+ entry_pont_iterator("aiomisc"),
+ ):
+ try:
+ plugins[entry_point.name] = entry_point.load()
+ except: # noqa
+ logger.exception(
+ "Failed to load entrypoint %r", entry_point,
+ )
- logger = logging.getLogger(__name__)
for name, plugin in plugins.items():
try:
logger.debug("Trying to load %r %r", name, plugin)
diff --git a/pyproject.toml b/pyproject.toml
index a3818d0a..69d10d1c 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,7 +1,7 @@
[tool.poetry]
name = "aiomisc"
# This is a dummy version which will be rewritten with poem-plugins
-version = "17.2.23"
+version = "17.3.0"
description = "aiomisc - miscellaneous utils for asyncio"
authors = ["Dmitry Orlov <[email protected]>"]
readme = "README.rst"
@@ -55,7 +55,6 @@ packages = [
"Documentation" = "https://aiomisc.readthedocs.io/en/latest/"
[tool.poetry.dependencies]
-python = "^3.7"
aiocarbon = { version = "^0.15", optional = true }
aiohttp = { version = ">3", optional = true }
aiohttp-asgi = { version = "^0.5.2", optional = true }
@@ -64,8 +63,10 @@ croniter = { version = "^1.3.8", optional = true }
grpcio = { version = "^1.56.0", optional = true }
grpcio-tools = { version = "^1.56.0", optional = true }
logging-journald = [{ version = '*', platform = 'linux' }]
+python = "^3.7"
raven = { version = "*", optional = true }
rich = { version = "*", optional = true }
+setuptools = [{ version = '*', python = "< 3.8" }]
typing_extensions = [{ version = '*', python = "< 3.10" }]
uvloop = { version = ">=0.14, <1", optional = true }
| plugins probed even if not using entrypoints
I'm using aiohttp-s3-client and I'm getting pkg_resource not found.
I could give in and install setuptools, or set the AIOMISC_NO_PLUGINS variable, but I consider the automatic probing for plugins to be unnecessary. It seems that plugins are specifically tied to the aiomisc.entrypoint() API, which aiohttp-s3-client doesn't use.
Maybe aiomisc.entrypoint.entrypoint could run setup_plugins() before returning the Entrypoint, so that setup_plugins doesn't need to be called on import.
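A rough sketch of that lazy approach (purely illustrative, not how aiomisc currently behaves — only `setup_plugins` is the real function shown in the patch):

```python
from typing import Callable, Mapping, Optional
from aiomisc.plugins import setup_plugins  # the existing probing function from the patch

_plugins: Optional[Mapping[str, Callable]] = None

def plugins() -> Mapping[str, Callable]:
    """Probe entry points only on first use, so a plain import stays cheap."""
    global _plugins
    if _plugins is None:
        _plugins = setup_plugins()
    return _plugins
```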
| 2023-07-04T10:33:02 | 0.0 | [] | [] |
|||
LiBa001/disputils | LiBa001__disputils-34 | cf199cc1ea271ebc5db5d163d34f0bd11b7fac10 | diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000..88391cb
--- /dev/null
+++ b/CODE_OF_CONDUCT.md
@@ -0,0 +1,76 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, sex characteristics, gender identity and expression,
+level of experience, education, socio-economic status, nationality, personal
+appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+ advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic
+ address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+ professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at [email protected]. All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. The project team is
+obligated to maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see
+https://www.contributor-covenant.org/faq
diff --git a/disputils/confirmation.py b/disputils/confirmation.py
index 06c4c68..cddfbea 100644
--- a/disputils/confirmation.py
+++ b/disputils/confirmation.py
@@ -35,6 +35,7 @@ async def confirm(
user: discord.User,
channel: discord.TextChannel = None,
hide_author: bool = False,
+ timeout: int = 20
) -> bool or None:
"""
Run the confirmation.
@@ -52,6 +53,10 @@ async def confirm(
:param hide_author: Whether or not the ``user`` should be set as embed author.
:type hide_author: bool, optional
+ :type timeout: int
+ :param timeout:
+ Seconds to wait until stopping to listen for user interaction.
+
:return: True when it's been confirmed, otherwise False. Will return None when a
timeout occurs.
:rtype: :class:`bool`, optional
@@ -80,7 +85,7 @@ async def confirm(
check=lambda r, u: (r.message.id == msg.id)
and (u.id == user.id)
and (r.emoji in self.emojis),
- timeout=20,
+ timeout=timeout,
)
except asyncio.TimeoutError:
self._confirmed = None
@@ -114,6 +119,7 @@ async def confirm(
user: discord.User = None,
channel: discord.TextChannel = None,
hide_author: bool = False,
+ timeout: int = 20
) -> bool or None:
if user is None:
@@ -122,4 +128,4 @@ async def confirm(
if self.message is None and channel is None:
channel = self._ctx.channel
- return await super().confirm(text, user, channel, hide_author=hide_author)
+ return await super().confirm(text, user, channel, hide_author, timeout)
diff --git a/disputils/pagination.py b/disputils/pagination.py
index 6070dcd..2ed2eba 100644
--- a/disputils/pagination.py
+++ b/disputils/pagination.py
@@ -56,7 +56,12 @@ def formatted_pages(self) -> List[discord.Embed]:
)
return pages
- async def run(self, users: List[discord.User], channel: discord.TextChannel = None):
+ async def run(
+ self,
+ users: List[discord.User],
+ channel: discord.TextChannel = None,
+ timeout: int = 100,
+ ):
"""
Runs the paginator.
@@ -70,6 +75,10 @@ async def run(self, users: List[discord.User], channel: discord.TextChannel = No
The text channel to send the embed to.
Must only be specified if `self.message` is `None`.
+ :type timeout: int
+ :param timeout:
+ Seconds to wait until stopping to listen for user interaction.
+
:return: None
"""
@@ -101,7 +110,7 @@ def check(r: discord.Reaction, u: discord.User):
while True:
try:
reaction, user = await self._client.wait_for(
- "reaction_add", check=check, timeout=100
+ "reaction_add", check=check, timeout=timeout
)
except asyncio.TimeoutError:
if not isinstance(
@@ -215,7 +224,10 @@ def __init__(
)
async def run(
- self, channel: discord.TextChannel = None, users: List[discord.User] = None
+ self,
+ channel: discord.TextChannel = None,
+ users: List[discord.User] = None,
+ timeout: int = 100,
):
"""
Runs the paginator.
@@ -231,6 +243,10 @@ async def run(
Default is the context author.
Passing an empty list will grant access to all users. (Not recommended.)
+ :type timeout: int
+ :param timeout:
+ Seconds to wait until stopping to listen for user interaction.
+
:return: None
"""
@@ -240,4 +256,4 @@ async def run(
if self.message is None and channel is None:
channel = self._ctx.channel
- await super().run(users, channel)
+ await super().run(users, channel, timeout)
| fix embed property not being used when run
closes #25 since footer and timestamp can now be set like this:
```python
mc = MultipleChoice(...)
mc.embed.set_footer(text="example", icon_url="https://example.com/icon.png")
mc.embed.timestamp = datetime.datetime.now()
await mc.run()
```
| 2021-05-28T14:26:19 | 0.0 | [] | [] |
|||
internetarchive/openlibrary-client | internetarchive__openlibrary-client-323 | 12450d65228c2ae6474a63337cf24d55604e8e7d | diff --git a/olclient/cli.py b/olclient/cli.py
index 3fc80687..bde859ed 100644
--- a/olclient/cli.py
+++ b/olclient/cli.py
@@ -81,7 +81,9 @@ def main() -> None:
raise ValueError("--email required for configuration")
password = getpass.getpass("Password: ")
- ia.configure(email, password)
+ # Explicitly specify host until next release of ia tool
+ # See https://github.com/internetarchive/openlibrary-client/issues/322
+ ia.configure(email, password, host='archive.org')
config_tool = Config()
config = config_tool._get_config()
config['s3'] = ia.config.get_config()['s3']
| Unable to login to a OL account (on colab)
OS:Linux
No config file existed before
I tried to login with `ia --configure`, but the login fails with:
```
Traceback (most recent call last):
File "/usr/local/bin/ol", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/olclient/cli.py", line 84, in main
ia.configure(email, password)
File "/usr/local/lib/python3.7/dist-packages/internetarchive/api.py", line 548, in configure
host,
File "/usr/local/lib/python3.7/dist-packages/internetarchive/config.py", line 44, in get_auth_config
r = requests.post(u, params=p, data=d)
File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 573, in request
prep = self.prepare_request(req)
File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 496, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/usr/local/lib/python3.7/dist-packages/requests/models.py", line 368, in prepare
self.prepare_url(url, params)
File "/usr/local/lib/python3.7/dist-packages/requests/models.py", line 445, in prepare_url
raise InvalidURL(f"Invalid URL {url!r}: No host supplied")
requests.exceptions.InvalidURL: Invalid URL 'https:///services/xauthn/': No host supplied
```
| After some thought,this seems to me like the host in that url is missing entirely...
Or it is just `/`. In any case, the `requests` error message is correct that `https:///anythingatall` is an invalid URL. `https:///` --> `https://`
Here is the function call that is passing an empty string for the host: https://github.com/jjjake/internetarchive/blob/bd9aed72e0c11bf94dc8b004eb7524b7801f48ee/internetarchive/api.py#L545
It's an empty string due to the default parameter right above:
https://github.com/jjjake/internetarchive/blob/bd9aed72e0c11bf94dc8b004eb7524b7801f48ee/internetarchive/api.py#L531
Everything works if the default parameter is set to "archive.org". | 2022-07-22T16:20:31 | 0.0 | [] | [] |
||
megagonlabs/ginza | megagonlabs__ginza-196 | 31a22bc8bb0c0ee79dc05ac4cd9de7db90650223 | diff --git a/README.md b/README.md
index 9e14486..379898e 100644
--- a/README.md
+++ b/README.md
@@ -221,6 +221,11 @@ Please read the official documents to compile user dictionaries with `sudachipy`
### version 5.x
+#### ginza-5.0.3
+- 2021-10-15
+- Bug fix
+ - `Bunsetu span should not cross the sentence boundary` #195
+
#### ginza-5.0.2
- 2021-09-06
- Bug fix
diff --git a/docs/index.md b/docs/index.md
index b26d8fc..12a86cd 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -225,6 +225,11 @@ Contains information from mC4 which is made available under the ODC Attribution
### version 5.x
+#### ginza-5.0.3
+- 2021-10-15
+- Bug fix
+ - `Bunsetu span should not cross the sentence boundary` #195
+
#### ginza-5.0.2
- 2021-09-06
- Bug fix
diff --git a/ginza/bunsetu_recognizer.py b/ginza/bunsetu_recognizer.py
index b7f36ff..f28405f 100644
--- a/ginza/bunsetu_recognizer.py
+++ b/ginza/bunsetu_recognizer.py
@@ -79,7 +79,7 @@ def bunsetu_span(token: Token) -> Span:
start = token.i
end = start + 1
for idx in range(start, 0, -1):
- if bunsetu_bi_list[idx] == "B":
+ if bunsetu_bi_list[idx] == "B" or token.doc[idx].is_sent_start:
start = idx
break
else:
diff --git a/setup.py b/setup.py
index 1329cc5..fbeaf4f 100644
--- a/setup.py
+++ b/setup.py
@@ -26,5 +26,5 @@
name="ginza",
packages=find_packages(include=["ginza"]),
url="https://github.com/megagonlabs/ginza",
- version='5.0.2',
+ version='5.0.3',
)
| Incorrect bunsetu_span detection boundary condition in bunsetu_recognizer pipeline
When processing a multi-sentence line document from the command line (reproducible by running the ginza command against the text file [here](https://send.bitwarden.com/#z27EgODvbUaeia3CAH0skg/S1YTo0wqLouIiKsOx7st1A)), analyze_conllu in command_line.py can trigger an `IndexError` at line 287 below:
https://github.com/megagonlabs/ginza/blob/31a22bc8bb0c0ee79dc05ac4cd9de7db90650223/ginza/command_line.py#L286-L287
This is triggered because the bunsetu_span function in bunsetu_recognizer.py uses 0 as the ending boundary condition in the for loop (L81) and the else branch (L86), which can apparently traverse into a previous sentence, return a phrase from there, and thus produce the negative index above.
https://github.com/megagonlabs/ginza/blob/31a22bc8bb0c0ee79dc05ac4cd9de7db90650223/ginza/bunsetu_recognizer.py#L77-L96
I am actually not sure this isn't a bug with incorrect BI labels, but the logic change that fixes this error for me is to change the boundary conditions (L81 and L86) from 0 (=token.doc.start) to `token.sent.start`. If a PR would help, I can make one. Note that I could only get this to trigger with the ja-ginza-electra model and not with the ja-ginza version. This obviously does not trigger with the sentencizer disabled. Used versions:
```
ginza==5.0.2
ginza-transformers==0.3.1
ja-ginza-electra==5.0.0
```
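For reference, a sketch of that suggested change inside `bunsetu_span` (the fix that was actually merged checks `is_sent_start` instead, as the patch shows):

```python
# Reporter's suggested variant: clamp the backward search to the current
# sentence instead of index 0, so a bunsetu span never crosses a sentence boundary.
start = token.i
end = start + 1
for idx in range(start, token.sent.start, -1):
    if bunsetu_bi_list[idx] == "B":
        start = idx
        break
else:
    start = token.sent.start
```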
| 2021-10-15T09:03:31 | 0.0 | [] | [] |
|||
vatlab/sos | vatlab__sos-1458 | 441cf7e3577f6639c8aa8933ae58991f3d9fe4f1 | diff --git a/src/sos/section_analyzer.py b/src/sos/section_analyzer.py
index 263ee0a13..6d1f7ccd9 100644
--- a/src/sos/section_analyzer.py
+++ b/src/sos/section_analyzer.py
@@ -244,6 +244,7 @@ def get_all_used_vars(section):
raise ValueError(f"Failed to parse parameter for_each: {e}")
if section.task:
all_used_vars |= accessed_vars(section.task)
+ all_used_vars |= accessed_vars(section.task_params, mode='eval')
# now we have a list of global variables that are actually used in the functions
# this is specifically designed to handle the last case in #1225
| Step invoked by output_from/named_output/provides cannot recognized task variable defined in global statement.
Hi Bo,
I have encountered an error where invoking other step by output_from , named_output , and `provides` will cause the task statement not being able to recognize the global statement. Following error was given
```
ERROR: [A (A)]: [A]: Failed to execute process
"bash(fr"""touch {_output}\n""", stderr = f'{_output}.stderr',...'{_output}.stdout')\n"
name 'walltime' is not defined
[B]: Exits with 1 pending step (B)
```
when running
```
sos run test_global.sos B -J 1 -q csg -c csg.yml
```
Where test_global.sos is:
```
[global]
# The output directory for generated files. MUST BE FULL PATH
parameter: cwd = path("./")
# For cluster jobs, number commands to run per job
parameter: job_size = 1
# Wall clock time expected
parameter: walltime = "5h"
# Memory expected
parameter: mem = "16G"
# Number of threads
parameter: numThreads = 8
[A]
output: file = f'{cwd:a}/test_file'
task: trunk_workers = 1, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: expand= "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
touch $[_output]
[B]
input: output_from("A")
task: trunk_workers = 1, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: expand= "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
touch $[_input]
```
However, I was under the impression that this mechanism worked fine previously.
The sos version I am using is the latest. And the pbs version is sos-pbs 0.20.8
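In case it helps, a rough standalone sketch of what the one-line fix in the patch is doing — collecting the names referenced in the `task:` options so globals like `walltime` get carried along (the `ast` helper is illustrative, not SoS internals):

```python
import ast

def referenced_names(expression: str) -> set:
    """Names used in a Python expression, e.g. the arguments of a task: statement."""
    tree = ast.parse(expression, mode="eval")
    return {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}

task_params = "dict(trunk_workers=1, walltime=walltime, mem=mem, cores=numThreads)"
print(referenced_names(task_params))  # -> {'dict', 'walltime', 'mem', 'numThreads'} (order may vary)
```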
| Simplifying the test case to
```
[global]
parameter: mem = "16G"
[A]
output: 'test_file'
task: mem = mem
bash: expand=True
touch {_output}
[B]
input: output_from("A")
bash:
touch {_input}.processed
```
with command error being
```
> sos run test_task_params.sos B -s force -q localhost
INFO: Running A:
ERROR: [A (A)]: [A]: Failed to execute process
"'bash(fr"""touch {_output}\n\n""")\n'"
name 'mem' is not defined
[B]: Exits with 1 pending step (B)
```
Note that the pipeline works without `-q localhost`. | 2022-02-24T22:57:58 | 0.0 | [] | [] |
||
ASFHyP3/hyp3-sdk | ASFHyP3__hyp3-sdk-220 | 11feb5c5616263b44ab61dccdb4b6d66f5750753 | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2d65c56..6a7ff46 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [PEP 440](https://www.python.org/dev/peps/pep-0440/)
and uses [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [2.1.1]
+### Fixed
+* The `user_id` parameter has been moved to the end of the `HyP3.find_jobs` parameter list, to avoid
+ introducing breaking changes for users who rely on the order of the parameters.
+
## [2.1.0]
### Added
* The `HyP3.find_jobs` method now includes a `user_id` parameter that allows retrieving jobs for a given user.
diff --git a/docs/search_other_user_jobs.ipynb b/docs/search_other_user_jobs.ipynb
index 2e86e41..2bac5ca 100644
--- a/docs/search_other_user_jobs.ipynb
+++ b/docs/search_other_user_jobs.ipynb
@@ -23,7 +23,7 @@
"cells": [
{
"cell_type": "markdown",
- "source": "# Using the HyP3 SDK to search for jobs run by another user\n\nTo facilitate collaboration, HyP3 allows you to search for jobs run by other users.\n\nFollow [Using the HyP3 SDK for Python](https://nbviewer.jupyter.org/github/ASFHyP3/hyp3-sdk/blob/main/docs/sdk_example.ipynb) to install the `hyp3-sdk` package (version `2.1.0` or higher) and authenticate using your Earthdata credentials.\n\nSuppose you have run a number of RTC jobs with the name `rtc-example`. You can search for them using `find_jobs`:",
+ "source": "# Using the HyP3 SDK to search for jobs run by another user\n\nTo facilitate collaboration, HyP3 allows you to search for jobs run by other users.\n\nFollow [Using the HyP3 SDK for Python](https://nbviewer.jupyter.org/github/ASFHyP3/hyp3-sdk/blob/main/docs/sdk_example.ipynb) to install the `hyp3-sdk` package (version `2.1.1` or higher) and authenticate using your Earthdata credentials.\n\nSuppose you have run a number of RTC jobs with the name `rtc-example`. You can search for them using `find_jobs`:",
"metadata": {}
},
{
@@ -63,4 +63,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
diff --git a/src/hyp3_sdk/hyp3.py b/src/hyp3_sdk/hyp3.py
index 2da5457..8cce8e9 100644
--- a/src/hyp3_sdk/hyp3.py
+++ b/src/hyp3_sdk/hyp3.py
@@ -47,27 +47,27 @@ def __init__(self, api_url: str = PROD_API, username: Optional[str] = None, pass
self.session.headers.update({'User-Agent': f'{hyp3_sdk.__name__}/{hyp3_sdk.__version__}'})
def find_jobs(self,
- user_id: Optional[str] = None,
start: Optional[datetime] = None,
end: Optional[datetime] = None,
status_code: Optional[str] = None,
name: Optional[str] = None,
- job_type: Optional[str] = None) -> Batch:
+ job_type: Optional[str] = None,
+ user_id: Optional[str] = None) -> Batch:
"""Gets a Batch of jobs from HyP3 matching the provided search criteria
Args:
- user_id: only jobs submitted by this user (defaults to the current user)
start: only jobs submitted after given time
end: only jobs submitted before given time
status_code: only jobs matching this status (SUCCEEDED, FAILED, RUNNING, PENDING)
name: only jobs with this name
job_type: only jobs with this job_type
+ user_id: only jobs submitted by this user (defaults to the current user)
Returns:
A Batch object containing the found jobs
"""
params = {}
- for param_name in ('user_id', 'start', 'end', 'status_code', 'name', 'job_type'):
+ for param_name in ('start', 'end', 'status_code', 'name', 'job_type', 'user_id'):
param_value = locals().get(param_name)
if param_value is not None:
if isinstance(param_value, datetime):
| Move `user_id` to end of `find_jobs` param list
Fixes potentially breaking changes introduced by https://github.com/ASFHyP3/hyp3-sdk/pull/218
| 2023-05-16T19:57:52 | 0.0 | [] | [] |
|||
adafruit/Adafruit_CircuitPython_Display_Text | adafruit__Adafruit_CircuitPython_Display_Text-173 | a5672998d9a555247ebd69b17f56813a03d8d40f | diff --git a/adafruit_display_text/__init__.py b/adafruit_display_text/__init__.py
index 81c9073..054f9ca 100644
--- a/adafruit_display_text/__init__.py
+++ b/adafruit_display_text/__init__.py
@@ -7,20 +7,22 @@
=======================
"""
+__version__ = "0.0.0-auto.0"
+__repo__ = "https://github.com/adafruit/Adafruit_CircuitPython_Display_Text.git"
+
+from displayio import Group, Palette
+
try:
- from typing import Optional, Union, List, Tuple
- from fontio import BuiltinFont
- from adafruit_bitmap_font.bdf import BDF
- from adafruit_bitmap_font.pcf import PCF
+ from typing import Optional, List, Tuple
+ from fontio import FontProtocol
except ImportError:
pass
-from displayio import Group, Palette
def wrap_text_to_pixels(
string: str,
max_width: int,
- font: Optional[Union[BuiltinFont, BDF, PCF]] = None,
+ font: Optional[FontProtocol] = None,
indent0: str = "",
indent1: str = "",
) -> List[str]:
@@ -35,7 +37,7 @@ def wrap_text_to_pixels(
:param str string: The text to be wrapped.
:param int max_width: The maximum number of pixels on a line before wrapping.
:param font: The font to use for measuring the text.
- :type font: ~BuiltinFont, ~BDF, or ~PCF
+ :type font: ~FontProtocol
:param str indent0: Additional character(s) to add to the first line.
:param str indent1: Additional character(s) to add to all other lines.
@@ -191,7 +193,7 @@ class LabelBase(Group):
:param font: A font class that has ``get_bounding_box`` and ``get_glyph``.
Must include a capital M for measuring character size.
- :type font: ~BuiltinFont, ~BDF, or ~PCF
+ :type font: ~FontProtocol
:param str text: Text to display
:param int color: Color of all text in RGB hex
:param int background_color: Color of the background, use `None` for transparent
@@ -218,7 +220,7 @@ class LabelBase(Group):
def __init__(
self,
- font: Union[BuiltinFont, BDF, PCF],
+ font: FontProtocol,
x: int = 0,
y: int = 0,
text: str = "",
@@ -304,15 +306,15 @@ def _get_ascent_descent(self) -> Tuple[int, int]:
return ascender_max, descender_max
@property
- def font(self) -> Union[BuiltinFont, BDF, PCF]:
+ def font(self) -> FontProtocol:
"""Font to use for text display."""
return self._font
- def _set_font(self, new_font: Union[BuiltinFont, BDF, PCF]) -> None:
+ def _set_font(self, new_font: FontProtocol) -> None:
raise NotImplementedError("{} MUST override '_set_font'".format(type(self)))
@font.setter
- def font(self, new_font: Union[BuiltinFont, BDF, PCF]) -> None:
+ def font(self, new_font: FontProtocol) -> None:
self._set_font(new_font)
@property
diff --git a/adafruit_display_text/bitmap_label.py b/adafruit_display_text/bitmap_label.py
index 50c426f..37a11ab 100755
--- a/adafruit_display_text/bitmap_label.py
+++ b/adafruit_display_text/bitmap_label.py
@@ -26,18 +26,15 @@
__version__ = "0.0.0-auto.0"
__repo__ = "https://github.com/adafruit/Adafruit_CircuitPython_Display_Text.git"
+import displayio
+from adafruit_display_text import LabelBase
try:
- from typing import Union, Optional, Tuple
- from fontio import BuiltinFont
- from adafruit_bitmap_font.bdf import BDF
- from adafruit_bitmap_font.pcf import PCF
+ from typing import Optional, Tuple
+ from fontio import FontProtocol
except ImportError:
pass
-import displayio
-
-from adafruit_display_text import LabelBase
# pylint: disable=too-many-instance-attributes
class Label(LabelBase):
@@ -56,7 +53,7 @@ class Label(LabelBase):
:param font: A font class that has ``get_bounding_box`` and ``get_glyph``.
Must include a capital M for measuring character size.
- :type font: ~BuiltinFont, ~BDF, or ~PCF
+ :type font: ~FontProtocol
:param str text: Text to display
:param int color: Color of all text in RGB hex
:param int background_color: Color of the background, use `None` for transparent
@@ -93,9 +90,7 @@ class Label(LabelBase):
"RTL": (False, False, False),
}
- def __init__(
- self, font: Union[BuiltinFont, BDF, PCF], save_text: bool = True, **kwargs
- ) -> None:
+ def __init__(self, font: FontProtocol, save_text: bool = True, **kwargs) -> None:
self._bitmap = None
self._tilegrid = None
@@ -116,7 +111,7 @@ def __init__(
def _reset_text(
self,
- font: Optional[Union[BuiltinFont, BDF, PCF]] = None,
+ font: Optional[FontProtocol] = None,
text: Optional[str] = None,
line_spacing: Optional[float] = None,
scale: Optional[int] = None,
@@ -270,15 +265,13 @@ def _reset_text(
self.anchored_position = self._anchored_position
@staticmethod
- def _line_spacing_ypixels(
- font: Union[BuiltinFont, BDF, PCF], line_spacing: float
- ) -> int:
+ def _line_spacing_ypixels(font: FontProtocol, line_spacing: float) -> int:
# Note: Scaling is provided at the Group level
return_value = int(line_spacing * font.get_bounding_box()[1])
return return_value
def _text_bounding_box(
- self, text: str, font: Union[BuiltinFont, BDF, PCF]
+ self, text: str, font: FontProtocol
) -> Tuple[int, int, int, int, int, int]:
# pylint: disable=too-many-locals
@@ -360,7 +353,7 @@ def _place_text(
self,
bitmap: displayio.Bitmap,
text: str,
- font: Union[BuiltinFont, BDF, PCF],
+ font: FontProtocol,
xposition: int,
yposition: int,
skip_index: int = 0, # set to None to write all pixels, other wise skip this palette index
@@ -534,7 +527,7 @@ def _set_line_spacing(self, new_line_spacing: float) -> None:
else:
raise RuntimeError("line_spacing is immutable when save_text is False")
- def _set_font(self, new_font: Union[BuiltinFont, BDF, PCF]) -> None:
+ def _set_font(self, new_font: FontProtocol) -> None:
self._font = new_font
if self._save_text:
self._reset_text(font=new_font, scale=self.scale)
diff --git a/adafruit_display_text/label.py b/adafruit_display_text/label.py
index 3145c4f..b87f6c3 100755
--- a/adafruit_display_text/label.py
+++ b/adafruit_display_text/label.py
@@ -26,18 +26,15 @@
__repo__ = "https://github.com/adafruit/Adafruit_CircuitPython_Display_Text.git"
+from displayio import Bitmap, Palette, TileGrid
+from adafruit_display_text import LabelBase
+
try:
- from typing import Union, Optional, Tuple
- from fontio import BuiltinFont
- from adafruit_bitmap_font.bdf import BDF
- from adafruit_bitmap_font.pcf import PCF
+ from typing import Optional, Tuple
+ from fontio import FontProtocol
except ImportError:
pass
-from displayio import Bitmap, Palette, TileGrid
-
-from adafruit_display_text import LabelBase
-
class Label(LabelBase):
# pylint: disable=too-many-instance-attributes
@@ -49,7 +46,7 @@ class Label(LabelBase):
:param font: A font class that has ``get_bounding_box`` and ``get_glyph``.
Must include a capital M for measuring character size.
- :type font: ~BuiltinFont, ~BDF, or ~PCF
+ :type font: ~FontProtocol
:param str text: Text to display
:param int color: Color of all text in RGB hex
:param int background_color: Color of the background, use `None` for transparent
@@ -83,7 +80,7 @@ class Label(LabelBase):
configurations possibles ``LTR``-Left-To-Right ``RTL``-Right-To-Left
``TTB``-Top-To-Bottom ``UPR``-Upwards ``DWR``-Downwards. It defaults to ``LTR``"""
- def __init__(self, font: Union[BuiltinFont, BDF, PCF], **kwargs) -> None:
+ def __init__(self, font: FontProtocol, **kwargs) -> None:
self._background_palette = Palette(1)
self._added_background_tilegrid = False
@@ -403,7 +400,7 @@ def _reset_text(self, new_text: str) -> None:
self._update_text(str(self._replace_tabs(new_text)))
self.anchored_position = current_anchored_position
- def _set_font(self, new_font: Union[BuiltinFont, BDF, PCF]) -> None:
+ def _set_font(self, new_font: FontProtocol) -> None:
old_text = self._text
current_anchored_position = self.anchored_position
self._text = ""
diff --git a/adafruit_display_text/scrolling_label.py b/adafruit_display_text/scrolling_label.py
index a83ef2a..f432a38 100644
--- a/adafruit_display_text/scrolling_label.py
+++ b/adafruit_display_text/scrolling_label.py
@@ -26,15 +26,15 @@
__version__ = "0.0.0-auto.0"
__repo__ = "https://github.com/adafruit/Adafruit_CircuitPython_Display_Text.git"
+import time
+from adafruit_display_text import bitmap_label
+
try:
from typing import Optional
from fontio import FontProtocol
except ImportError:
pass
-import time
-from adafruit_display_text import bitmap_label
-
class ScrollingLabel(bitmap_label.Label):
"""ScrollingLabel - A fixed-width label that will scroll to the left
| Use `fontio.FontProtocol` for type annotations
Looking at the library, `fontio.FontProtocol` should be good to replace `Union[BuiltinFont, BDF, PCF]`!
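A tiny illustration of why a protocol fits here — any font object exposing the methods the labels call satisfies it, with no need to extend a `Union` (a sketch, not the actual `fontio` source):

```python
try:
    from typing import Protocol          # CPython 3.8+
except ImportError:
    from typing_extensions import Protocol

class FontLike(Protocol):                # hypothetical stand-in for fontio.FontProtocol
    def get_bounding_box(self): ...
    def get_glyph(self, codepoint: int): ...

def measure(font: FontLike) -> int:
    """Any BDF/PCF/BuiltinFont instance type-checks, since each has these methods."""
    return font.get_bounding_box()[1]
```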
| 2022-06-15T03:49:55 | 0.0 | [] | [] |
|||
FishPiOffical/fishpi-pyclient | FishPiOffical__fishpi-pyclient-104 | 1f41911aa828e243267cac74620a26a4012ffb0b | diff --git a/README.md b/README.md
index 239dbe6..20f5e57 100644
--- a/README.md
+++ b/README.md
@@ -8,11 +8,15 @@
## å®è£
-### MacOSç³»ç»
+[çæ¬å表](https://github.com/gakkiyomi/fishpi-pyclient/releases)
+
+### Windowsç³»ç»
-[v2.0.0ä¸è½½](https://github.com/gakkiyomi/fishpi-pyclient/releases/download/v2.0.0/fishpi-pyclient)
+ä¸è½½åï¼åå»æå¼
+
+### MacOSç³»ç»
-æ§è¡å¦ä¸å½ä»¤
+ä¸è½½åï¼æ§è¡å¦ä¸å½ä»¤
1. ```bash
chmod a+x ./fishpi-pyclient
diff --git a/src/core/command.py b/src/core/command.py
index 3717e32..8460c7c 100644
--- a/src/core/command.py
+++ b/src/core/command.py
@@ -92,9 +92,13 @@ class ConfigCommand(Command):
def exec(self, api: FishPi, args: Tuple[str, ...]):
current_user = api.sockpuppets[api.current_user]
lt = [i for i in args]
+ if len(lt) == 0:
+ print('éæ³æ令, æ£ç¡®æ令为: config [dump|show] {-d|-c} (file_path)')
+ return
it = iter(lt)
if len(lt) < 2:
print('éæ³æ令, æ£ç¡®æ令为: config [dump|show] {-d|-c} (file_path)')
+ return
opreator = next(it)
if opreator == 'dump':
if len(lt) != 3:
| Config file `show` command
Strip the trailing characters when saving the API key to the config file
The `config` command errors out when run without any arguments
| 2023-12-08T12:30:07 | 0.0 | [] | [] |
|||
webdjoe/pyvesync | webdjoe__pyvesync-75 | faf0f8db66f81bba48af0b1193e4e4d4772069de | diff --git a/README.md b/README.md
index 9d92590..753a9f3 100644
--- a/README.md
+++ b/README.md
@@ -16,15 +16,17 @@ pyvesync is a library to manage VeSync compatible [smart home devices](#supporte
- [Outlet Specific Energy Methods and Properties](#outlet-specific-energy-methods-and-properties)
- [Model ESW15-USA 15A/1800W Methods](#model-esw15-usa-15a1800w-methods)
- [Air Purifier LV-PUR131S Methods](#air-purifier-lv-pur131s-methods)
- - [Dimmable Smart Light Bulb Method and Properties](#dimmable-smart-light-bulb-method-and-properties)
- - [Tunable Smart Light Bulb Methods and Properties](#tunable-smart-light-bulb-methods-and-properties)
+ - [Dimmable Smart Light Bulb Method and Properties (ESL100)](#dimmable-smart-light-bulb-method-and-properties)
+ - [Tunable Smart Light Bulb Methods and Properties (ESL100CW)](#tunable-smart-light-bulb-methods-and-properties)
- [Dimmable Switch Methods and Properties](#dimmable-switch-methods-and-properties)
+ - [Levoit 300S humidifer](#levoit-humidifier-300s-methods-and-properties)
- [JSON Output API](#json-output-api)
- [JSON Output for All Devices](#json-output-for-all-devices)
- [JSON Output for Outlets](#json-output-for-outlets)
- [JSON Output for Dimmable Switch](#json-output-for-dimmable-switch)
- [JSON Output for Bulbs](#json-output-for-bulbs)
- [JSON Output for Air Purifier](#json-output-for-air-purifier)
+ - [JSON Output for 300S Humidifier](#json-output-for-300s-humidifier)
- [Notes](#notes)
- [Feature Requests](#feature-requests)
@@ -56,7 +58,7 @@ To start with the module:
```python
from pyvesync import VeSync
-manager = VeSync("EMAIL", "PASSWORD")
+manager = VeSync("EMAIL", "PASSWORD", "TIME_ZONE")
manager.login()
# Get/Update Devices from server - populate device lists
@@ -93,7 +95,7 @@ manger.bulbs = [VeSyncBulbObjects]
If outlets are going to be continuously polled, a custom energy update interval can be set - The default is 6 hours (21600 seconds)
```python
-manager.energy_update_interval = time # time in seconds
+manager.energy_update_interval = 360 # time in seconds
```
## Example Usage
@@ -215,6 +217,52 @@ The rectangular smart switch model supports some additional functionality on top
`VeSyncSwitch.rgb_color_set(red, green, blue)` - Set color of rgb light (0 - 255)
+### Levoit Humidifier 300S Methods and Properties
+
+The details dictionary contains all device status details
+
+```python
+VeSync300S.details = {
+ 'humidity': 80, # percent humidity in room
+ 'mist_virtual_level': 0, # Level of mist output 1 - 9
+ 'mist_level': 0,
+ 'mode': 'manual', # auto, manual, sleep
+ 'water_lacks': False,
+ 'humidity_high': False,
+ 'water_tank_lifted': False,
+ 'display': False,
+ 'automatic_stop_reach_target': False,
+ 'night_light_brightness': 0
+ }
+```
+
+The configuration dictionary shows current settings
+
+```python
+VeSync300S.config = {
+ 'auto_target_humidity': 80, # percent humidity in room
+ 'display': True, # Display on/off
+ 'automatic_stop': False
+ }
+```
+
+`VeSync300S.automatic_stop_on()` Set humidifier to stop at set humidity
+
+`VeSync300S.automatic_stop_off` Set humidifier to run continuously
+
+`VeSync300S.turn_on_display()` Turn display on
+
+`VeSync300S.turn_off_display()` Turn display off
+
+`VeSync300S.set_humidity(30)` Set humidity between 30 and 80 percent
+
+`VeSync300S.set_night_light_brightness(50)` Set nightlight brightness between 1 and 100
+
+`VeSync300S.set_humidity_mode('sleep')` Set humidity mode - sleep/auto
+
+`VeSync300S.set_mist_level(4)` Set mist output 1 - 9
+
+
### JSON Output API
The `device.displayJSON()` method outputs properties and status of the device
@@ -290,6 +338,22 @@ This output only applies to dimmable switch. The standard switch has the defaul
'Filter Life': '99' # remaining filter life in percent
}
```
+#### JSON Output for 300S Humidifier
+
+```python
+{
+ 'Mode': 'manual', # auto, manual, sleep
+ 'Humidity': 20, # percent
+ 'Mist Virtual Level': 6, # Mist level 1 - 9
+ 'Water Lacks': True, # True/False
+ 'Water Tank Lifted': True, # True/False
+ 'Display': True, # True/False
+ 'Automatic Stop Reach Target': True,
+ 'Night Light Brightness': 10, # 1 - 100
+ 'Auto Target Humidity': True, # True/False
+ 'Automatic Stop': True # True/False
+}
+```
## Notes
diff --git a/setup.py b/setup.py
index 2e77d93..2e7da8d 100644
--- a/setup.py
+++ b/setup.py
@@ -16,8 +16,8 @@
long_description=long_description,
long_description_content_type='text/markdown',
url='https://github.com/markperdue/pyvesync',
- author='Mark Perdue',
- author_email='[email protected]',
+ author='Mark Perdue, Joe Trabulsy',
+ author_email='[email protected]',
license='MIT',
classifiers=[
'License :: OSI Approved :: MIT License',
@@ -26,7 +26,7 @@
'Natural Language :: English',
'Programming Language :: Python :: 3.6',
],
- keywords=['iot', 'vesync'],
+ keywords=['iot', 'vesync', 'levoit'],
packages=find_packages('src', exclude=['tests', 'tests.*']),
package_dir={'': 'src'},
zip_safe=False,
diff --git a/src/pyvesync/helpers.py b/src/pyvesync/helpers.py
index 7586d1b..95c0f1c 100644
--- a/src/pyvesync/helpers.py
+++ b/src/pyvesync/helpers.py
@@ -12,12 +12,14 @@
API_TIMEOUT = 5
DEFAULT_TZ = 'America/New_York'
+DEFAULT_REGION = 'US'
APP_VERSION = '2.5.1'
PHONE_BRAND = 'SM N9005'
PHONE_OS = 'Android'
MOBILE_ID = '1234567890123456'
USER_TYPE = '1'
+BYPASS_APP_V = "VeSync 3.0.51"
class Helpers:
@@ -85,7 +87,7 @@ def req_body(cls, manager, type_) -> dict:
}
body['method'] = 'devices'
body['pageNo'] = '1'
- body['pageSize'] = '50'
+ body['pageSize'] = '100'
elif type_ == 'devicestatus':
body = {**cls.req_body_base(manager),
**cls.req_body_auth(manager)}
@@ -228,3 +230,23 @@ def build_config_dict(r: dict) -> dict:
'power_protection': r.get('powerProtectionStatus'),
'energy_saving_status': r.get('energySavingStatus'),
}
+
+ @classmethod
+ def bypass_body_v2(cls, manager):
+ """Build body dict for bypass calls."""
+ bdy = {}
+ bdy.update(
+ **cls.req_body(manager, "bypass")
+ )
+ bdy['method'] = 'bypassV2'
+ bdy['debugMode'] = False
+ bdy['deviceRegion'] = DEFAULT_REGION
+ return bdy
+
+ @staticmethod
+ def bypass_header():
+ """Build bypass header dict."""
+ return {
+ 'Content-Type': 'application/json; charset=UTF-8',
+ 'User-Agent': 'VeSync/VeSync 3.0.51(F5321;Android 8.0.0)'
+ }
diff --git a/src/pyvesync/vesync.py b/src/pyvesync/vesync.py
index d33f393..9bba119 100755
--- a/src/pyvesync/vesync.py
+++ b/src/pyvesync/vesync.py
@@ -9,7 +9,7 @@
from pyvesync.helpers import Helpers
from pyvesync.vesyncbulb import VeSyncBulbESL100, VeSyncBulbESL100CW
-from pyvesync.vesyncfan import VeSyncAir131
+from pyvesync.vesyncfan import VeSyncAir131, VeSync300S
from pyvesync.vesyncoutlet import (
VeSyncOutlet7A,
VeSyncOutlet10A,
@@ -39,13 +39,14 @@
'ESL100': VeSyncBulbESL100,
'ESL100CW': VeSyncBulbESL100CW,
'ESWD16': VeSyncDimmerSwitch,
+ 'Classic300S': VeSync300S
}
_DEVICE_TYPES_DICT: Dict[str, List[str]] = dict(
outlets=['wifi-switch-1.3', 'ESW03-USA',
'ESW01-EU', 'ESW15-USA', 'ESO15-TB'],
switches=['ESWL01', 'ESWL03', 'ESWD16'],
- fans=['LV-PUR131S'],
+ fans=['LV-PUR131S', 'Classic300S'],
bulbs=['ESL100', 'ESL100CW'],
)
diff --git a/src/pyvesync/vesyncfan.py b/src/pyvesync/vesyncfan.py
index 20d9bb2..64f2e73 100644
--- a/src/pyvesync/vesyncfan.py
+++ b/src/pyvesync/vesyncfan.py
@@ -2,8 +2,10 @@
import json
import logging
+
+from typing import Dict, Tuple, Union
from pyvesync.vesyncbasedevice import VeSyncBaseDevice
-from pyvesync.helpers import Helpers as helpers
+from pyvesync.helpers import Helpers
logger = logging.getLogger(__name__)
@@ -16,22 +18,22 @@ def __init__(self, details, manager):
"""Initilize air purifier class."""
super().__init__(details, manager)
- self.details = {}
+ self.details: Dict = {}
def get_details(self) -> None:
"""Build Air Purifier details dictionary."""
- body = helpers.req_body(self.manager, 'devicedetail')
+ body = Helpers.req_body(self.manager, 'devicedetail')
body['uuid'] = self.uuid
- head = helpers.req_headers(self.manager)
+ head = Helpers.req_headers(self.manager)
- r, _ = helpers.call_api(
+ r, _ = Helpers.call_api(
'/131airPurifier/v1/device/deviceDetail',
method='post',
headers=head,
json=body,
)
- if r is not None and helpers.code_check(r):
+ if r is not None and Helpers.code_check(r):
self.device_status = r.get('deviceStatus', 'unknown')
self.connection_status = r.get('connectionStatus', 'unknown')
self.details['active_time'] = r.get('activeTime', 0)
@@ -45,19 +47,19 @@ def get_details(self) -> None:
def get_config(self) -> None:
"""Get configuration info for air purifier."""
- body = helpers.req_body(self.manager, 'devicedetail')
+ body = Helpers.req_body(self.manager, 'devicedetail')
body['method'] = 'configurations'
body['uuid'] = self.uuid
- r, _ = helpers.call_api(
+ r, _ = Helpers.call_api(
'/131airpurifier/v1/device/configurations',
'post',
- headers=helpers.req_headers(self.manager),
+ headers=Helpers.req_headers(self.manager),
json=body,
)
- if helpers.code_check(r):
- self.config = helpers.build_config_dict(r)
+ if Helpers.code_check(r):
+ self.config = Helpers.build_config_dict(r)
else:
logger.warning('Unable to get config info for %s',
self.device_name)
@@ -93,17 +95,17 @@ def screen_status(self) -> str:
def turn_on(self) -> bool:
"""Turn Air Purifier on."""
if self.device_status != 'on':
- body = helpers.req_body(self.manager, 'devicestatus')
+ body = Helpers.req_body(self.manager, 'devicestatus')
body['uuid'] = self.uuid
body['status'] = 'on'
- head = helpers.req_headers(self.manager)
+ head = Helpers.req_headers(self.manager)
- r, _ = helpers.call_api(
+ r, _ = Helpers.call_api(
'/131airPurifier/v1/device/deviceStatus', 'put',
json=body, headers=head
)
- if r is not None and helpers.code_check(r):
+ if r is not None and Helpers.code_check(r):
self.device_status = 'on'
return True
logger.warning('Error turning %s on', self.device_name)
@@ -113,17 +115,17 @@ def turn_on(self) -> bool:
def turn_off(self) -> bool:
"""Turn Air Purifier Off."""
if self.device_status == 'on':
- body = helpers.req_body(self.manager, 'devicestatus')
+ body = Helpers.req_body(self.manager, 'devicestatus')
body['uuid'] = self.uuid
body['status'] = 'off'
- head = helpers.req_headers(self.manager)
+ head = Helpers.req_headers(self.manager)
- r, _ = helpers.call_api(
+ r, _ = Helpers.call_api(
'/131airPurifier/v1/device/deviceStatus', 'put',
json=body, headers=head
)
- if r is not None and helpers.code_check(r):
+ if r is not None and Helpers.code_check(r):
self.device_status = 'off'
return True
logger.warning('Error turning %s off', self.device_name)
@@ -162,9 +164,9 @@ def change_fan_speed(self, speed: int = None) -> bool:
)
return False
- body = helpers.req_body(self.manager, 'devicestatus')
+ body = Helpers.req_body(self.manager, 'devicestatus')
body['uuid'] = self.uuid
- head = helpers.req_headers(self.manager)
+ head = Helpers.req_headers(self.manager)
if speed is not None:
if speed == level:
return True
@@ -180,12 +182,12 @@ def change_fan_speed(self, speed: int = None) -> bool:
else:
body['level'] = int(level + 1)
- r, _ = helpers.call_api(
+ r, _ = Helpers.call_api(
'/131airPurifier/v1/device/updateSpeed', 'put',
json=body, headers=head
)
- if r is not None and helpers.code_check(r):
+ if r is not None and Helpers.code_check(r):
self.details['level'] = body['level']
return True
logger.warning('Error changing %s speed', self.device_name)
@@ -193,20 +195,20 @@ def change_fan_speed(self, speed: int = None) -> bool:
def mode_toggle(self, mode: str) -> bool:
"""Set mode to manual, auto or sleep."""
- head = helpers.req_headers(self.manager)
- body = helpers.req_body(self.manager, 'devicestatus')
+ head = Helpers.req_headers(self.manager)
+ body = Helpers.req_body(self.manager, 'devicestatus')
body['uuid'] = self.uuid
if mode != self.mode and mode in ['sleep', 'auto', 'manual']:
body['mode'] = mode
if mode == 'manual':
body['level'] = 1
- r, _ = helpers.call_api(
+ r, _ = Helpers.call_api(
'/131airPurifier/v1/device/updateMode', 'put',
json=body, headers=head
)
- if r is not None and helpers.code_check(r):
+ if r is not None and Helpers.code_check(r):
self.mode = mode
return True
@@ -226,7 +228,7 @@ def display(self) -> None:
('Air Quality: ', self.air_quality, ''),
('Mode: ', self.mode, ''),
('Screen Status: ', self.screen_status, ''),
- ('Filter List: ', self.filter_life, ' percent'),
+ ('Filter Life: ', self.filter_life, ' percent'),
]
for line in disp1:
print('{:.<15} {} {}'.format(line[0], line[1], line[2]))
@@ -235,7 +237,7 @@ def displayJSON(self) -> str:
"""Return air purifier status and properties in JSON output."""
sup = super().displayJSON()
sup_val = json.loads(sup)
- sup_val.append(
+ sup_val.update(
{
'Active Time': str(self.active_time),
'Fan Level': self.fan_level,
@@ -246,3 +248,380 @@ def displayJSON(self) -> str:
}
)
return sup_val
+
+
+class VeSync300S(VeSyncBaseDevice):
+ """300S Humidifier Class."""
+
+ def __init__(self, details, manager):
+ """Initilize 300S Humidifier class."""
+ super().__init__(details, manager)
+ self.enabled = True
+ self.details: Dict[str, Union[str, int, float]] = {
+ 'humidity': 0,
+ 'mist_virtual_level': 0,
+ 'mist_level': 0,
+ 'mode': 'manual',
+ 'water_lacks': False,
+ 'humidity_high': False,
+ 'water_tank_lifted': False,
+ 'display': False,
+ 'automatic_stop_reach_target': False,
+ 'night_light_brightness': 0
+ }
+ self.config: Dict[str, Union[str, int, float]] = {
+ 'auto_target_humidity': 0,
+ 'display': False,
+ 'automatic_stop': True
+ }
+
+ def __build_api_dict(self, method: str) -> Tuple[Dict, Dict]:
+ """Build 300S api call header and body.
+
+ Available methods are: 'getHumidifierStatus', 'setAutomaticStop',
+ 'setSwitch', 'setNightLightBrightness', 'setVirtualLevel',
+ 'setTargetHumidity', 'setHumidityMode'
+ """
+ modes = ['getHumidifierStatus', 'setAutomaticStop',
+ 'setSwitch', 'setNightLightBrightness', 'setVirtualLevel',
+ 'setTargetHumidity', 'setHumidityMode', 'setDisplay']
+ if method not in modes:
+ logger.debug('Invalid mode - %s', method)
+ return {}, {}
+ head = Helpers.bypass_header()
+ body = Helpers.bypass_body_v2(self.manager)
+ body['cid'] = self.cid
+ body['configModule'] = self.config_module
+ body['payload'] = {
+ 'method': method,
+ 'source': 'APP'
+ }
+ return head, body
+
+ def build_humid_dict(self, dev_dict: Dict):
+ """Build 300S humidifier status dictionary."""
+ self.enabled = dev_dict.get('enabled')
+ self.details['humidity'] = dev_dict.get('humidity', 0)
+ self.details['mist_virtual_level'] = dev_dict.get(
+ 'mist_virtual_level', 0)
+ self.details['mist_level'] = dev_dict.get('mist_level', 0)
+ self.details['mode'] = dev_dict.get('mode', 'manual')
+ self.details['water_lacks'] = dev_dict.get('water_lacks', False)
+ self.details['humidity_high'] = dev_dict.get('humidity_high', False)
+ self.details['water_tank_lifted'] = dev_dict.get(
+ 'water_tank_lifted', False)
+ self.details['display'] = dev_dict.get('display', False)
+ self.details['automatic_stop_reach_target'] = dev_dict.get(
+ 'automatic_stop_reach_target', True
+ )
+ self.details['night_light_brightness'] = dev_dict.get(
+ 'night_light_brightness', 0)
+
+ def build_config_dict(self, conf_dict):
+ """Build configuration dict for 300s humidifier."""
+ self.config['auto_target_humidity'] = conf_dict.get(
+ 'auto_target_humidity', 0)
+ self.config['display'] = conf_dict.get('display', False)
+ self.config['automatic_stop'] = conf_dict.get('automatic_stop', True)
+
+ def get_details(self) -> None:
+ """Build 300S Humidifier details dictionary."""
+ head = Helpers.bypass_header()
+ body = Helpers.bypass_body_v2(self.manager)
+ body['cid'] = self.cid
+ body['configModule'] = self.config_module
+ body['payload'] = {
+ 'method': 'getHumidifierStatus',
+ 'source': 'APP',
+ 'data': {}
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+ outer_result = r.get('result', {})
+ inner_result = None
+
+ if outer_result is not None:
+ inner_result = r.get('result', {}).get('result')
+ if inner_result is not None and Helpers.code_check(r):
+ if outer_result.get('code') == 0:
+ self.build_humid_dict(inner_result)
+ else:
+ logger.debug('error in inner result dict from humidifier')
+ if inner_result.get('configuration', {}):
+ self.build_config_dict(inner_result.get('configuration', {}))
+ else:
+ logger.debug('No configuration found in humidifier status')
+ else:
+ logger.debug('Error in humidifier response')
+
+ def update(self):
+ """Update 300S Humidifier details."""
+ self.get_details()
+
+ def toggle_switch(self, toggle: bool) -> bool:
+ """Toggle humidifier on/off."""
+ if not isinstance(toggle, bool):
+ logger.debug('Invalid toggle value for humidifier switch')
+ return False
+
+ head = Helpers.bypass_header()
+ body = Helpers.bypass_body_v2(self.manager)
+ body['cid'] = self.cid
+ body['configModule'] = self.config_module
+ body['payload'] = {
+ 'data': {
+ 'enabled': toggle,
+ 'id': 0
+ },
+ 'method': 'setSwitch',
+ 'source': 'APP'
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+
+ if Helpers.code_check(r):
+ return True
+ logger.debug("Error toggling 300S humidifier - %s", self.device_name)
+ return False
+
+ def turn_on(self) -> bool:
+ """Turn 300S Humidifier on."""
+ return self.toggle_switch(True)
+
+ def turn_off(self):
+ """Turn 300S Humidifier off."""
+ return self.toggle_switch(False)
+
+ def automatic_stop_on(self) -> bool:
+ """Turn 300S Humidifier automatic stop on."""
+ return self.set_automatic_stop(True)
+
+ def automatic_stop_off(self) -> bool:
+ """Turn 300S Humidifier automatic stop on."""
+ return self.set_automatic_stop(False)
+
+ def set_automatic_stop(self, mode: bool) -> bool:
+ """Set 300S Humidifier to automatic stop."""
+ if mode not in (True, False):
+ logger.debug(
+ 'Invalid mode passed to set_automatic_stop - %s', mode)
+ return False
+
+ head, body = self.__build_api_dict('setAutomaticStop')
+ if not head and not body:
+ return False
+
+ body['payload']['data'] = {
+ 'enabled': mode
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+
+ if Helpers.code_check(r):
+ return True
+ if isinstance(r, dict):
+ logger.debug('Error toggling automatic stop')
+ else:
+ logger.debug('Error in api return json for %s', self.device_name)
+ return False
+
+ def set_display(self, mode: bool) -> bool:
+ """Toggle display on/off."""
+ if not isinstance(mode, bool):
+ logger.debug("Mode must be True or False")
+ return False
+
+ head, body = self.__build_api_dict('setDisplay')
+
+ body['payload']['data'] = {
+ 'state': mode
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+
+ if Helpers.code_check(r):
+ return True
+ logger.debug("Error toggling 300S display - %s", self.device_name)
+ return False
+
+ def turn_on_display(self) -> bool:
+ """Turn 300S Humidifier on."""
+ return self.set_display(True)
+
+ def turn_off_display(self):
+ """Turn 300S Humidifier off."""
+ return self.set_display(False)
+
+ def set_humidity(self, humidity: int) -> bool:
+ """Set target 300S Humidifier humidity."""
+ if humidity < 30 or humidity > 80:
+ logger.debug("Humidity value must be set between 30 and 80")
+ return False
+ head, body = self.__build_api_dict('setTargetHumidity')
+
+ if not head and not body:
+ return False
+
+ body['payload']['data'] = {
+ 'target_humidity': humidity
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+
+ if Helpers.code_check(r):
+ return True
+ logger.debug('Error setting humidity')
+ return False
+
+ def set_night_light_brightness(self, brightness: int) -> bool:
+ """Set target 300S Humidifier night light brightness."""
+ if brightness < 0 or brightness > 100:
+ logger.debug("Brightness value must be set between 0 and 100")
+ return False
+ head, body = self.__build_api_dict('setNightLightBrightness')
+
+ if not head and not body:
+ return False
+
+ body['payload']['data'] = {
+ 'night_light_brightness': brightness
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+
+ if Helpers.code_check(r):
+ return True
+ logger.debug('Error setting humidity')
+ return False
+
+ def set_humidity_mode(self, mode: str) -> bool:
+ """Set humidifier mode - sleep or auto."""
+ if mode.lower() not in ['sleep', 'auto']:
+ logger.debug('Invalid humidity mode used (sleep or auto)- %s',
+ mode)
+ return False
+ head, body = self.__build_api_dict('setHumidityMode')
+ if not head and not body:
+ return False
+ body['payload']['data'] = {
+ 'mode': mode.lower()
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+
+ if Helpers.code_check(r):
+ return True
+ logger.debug('Error setting humidity mode')
+ return False
+
+ def set_mist_level(self, level: int) -> bool:
+ """Set humidifier mist level with int between 0 - 9."""
+ if level < 1 or level > 9:
+ logger.debug('Humidifier mist level must be between 0 and 9')
+ return False
+
+ head, body = self.__build_api_dict('setVirtualLevel')
+ if not head and not body:
+ return False
+
+ body['payload']['data'] = {
+ 'id': 0,
+ 'level': level,
+ 'type': 'mist'
+ }
+
+ r, _ = Helpers.call_api(
+ '/cloud/v2/deviceManaged/bypassV2',
+ method='post',
+ headers=head,
+ json=body,
+ )
+
+ if Helpers.code_check(r):
+ return True
+ logger.debug('Error setting mist level')
+ return False
+
+ def display(self) -> None:
+ """Return formatted device info to stdout."""
+ super().display()
+ disp1 = [
+ ('Mode: ', self.details['mode'], ''),
+ ('Humidity: ', self.details['humidity'], 'percent'),
+ ('Mist Virtual Level: ', self.details['mist_virtual_level'], ''),
+ ('Mist Level: ', self.details['mist_level'], ''),
+ ('Water Lacks: ', self.details['water_lacks'], ''),
+ ('Humidity High: ', self.details['humidity_high'], ''),
+ ('Water Tank Lifted: ', self.details['water_tank_lifted'], ''),
+ ('Display: ', self.details['display'], ''),
+ ('Automatic Stop Reach Target: ',
+ self.details['automatic_stop_reach_target'], ''),
+ ('Night Light Brightness: ',
+ self.details['night_light_brightness'], 'percent'),
+ ('Auto Target Humidity: ',
+ self.config['auto_target_humidity'], 'percent'),
+ ('Automatic Stop: ', self.config['automatic_stop'], ''),
+ ]
+ for line in disp1:
+ print('{:.<29} {} {}'.format(line[0], line[1], line[2]))
+
+ def displayJSON(self) -> str:
+ """Return air purifier status and properties in JSON output."""
+ sup = super().displayJSON()
+ sup_val = json.loads(sup)
+ sup_val.update(
+ {
+ 'Mode': self.details['mode'],
+ 'Humidity': str(self.details['humidity']),
+ 'Mist Virtual Level': str(
+ self.details['mist_virtual_level']),
+ 'Mist Level': str(self.details['mist_level']),
+ 'Water Lacks': self.details['water_lacks'],
+ 'Humidity High': self.details['humidity_high'],
+ 'Water Tank Lifted': self.details['water_tank_lifted'],
+ 'Display': self.details['display'],
+ 'Automatic Stop Reach Target': self.details[
+ 'automatic_stop_reach_target'],
+ 'Night Light Brightness': self.details[
+ 'night_light_brightness'],
+ 'Auto Target Humidity': str(self.config[
+ 'auto_target_humidity']),
+ 'Automatic Stop': self.config['automatic_stop'],
+ }
+ )
+ return json.dumps(sup_val)
| calculation for current power usage (watts) is incorrect.
Can you incorporate the changes from the pull request into the code?
It was super easy to pip install this project, but it has some bugs that the pull request fixes.
Thanks for the good work!
| Code has been merged and the bug squashed. Thanks for the heads up | 2021-02-09T00:16:21 | 0.0 | [] | [] |
||
jupyter/papyri | jupyter__papyri-245 | 1cdfbea398bdd217b776c36f1a016108ad11866d | diff --git a/examples/matplotlib.toml b/examples/matplotlib.toml
index 9d6583cd..e5c908dc 100644
--- a/examples/matplotlib.toml
+++ b/examples/matplotlib.toml
@@ -7,16 +7,16 @@ submodules = [ "image", "pyplot", "axes", "axes._base", "dviread", "image","figu
examples_folder = '/Users/bussonniermatthias/dev/matplotlib/examples/'
early_error = false
execute_exclude_patterns = [
- "matplotlib.axes._base._AxesBase.set_prop_cycle",
- "matplotlib.axes._axes.Axes.axvspan",
- "matplotlib.backend_bases.FigureCanvasBase.new_timer",
- "matplotlib.cbook.pts_to_prestep",
- "matplotlib.cbook.pts_to_poststep",
- "matplotlib.cbook.pts_to_midstep",
- "matplotlib._api.check_shape",
- "matplotlib._api.check_isinstance",
- "matplotlib._api.check_in_list",
- "matplotlib._api.check_getitem",
+ "matplotlib.axes._base:_AxesBase.set_prop_cycle",
+ "matplotlib.axes._axes:Axes.axvspan",
+ "matplotlib.backend_bases:FigureCanvasBase.new_timer",
+ "matplotlib.cbook:pts_to_prestep",
+ "matplotlib.cbook:pts_to_poststep",
+ "matplotlib.cbook:pts_to_midstep",
+ "matplotlib._api:check_shape",
+ "matplotlib._api:check_isinstance",
+ "matplotlib._api:check_in_list",
+ "matplotlib._api:check_getitem",
]
examples_exclude = [
# jedi inference issue
@@ -99,32 +99,32 @@ VisitSubstitutionDefinitionNotImplementedError = [
]
IncorrectInternalDocsLen = [
"matplotlib.rc",
- "matplotlib.pyplot.rc",
- "matplotlib.axes._base._process_plot_var_args",
- "matplotlib.dates.ConciseDateFormatter",
- "matplotlib.font_manager.win32FontDirectory",
- "matplotlib.transforms.Affine2D.__init__",
- "matplotlib.transforms.Affine2D.get_matrix",
- "matplotlib.transforms.Affine2D.set_matrix",
+ "matplotlib.pyplot:rc",
+ "matplotlib.axes._base:_process_plot_var_args",
+ "matplotlib.dates:ConciseDateFormatter",
+ "matplotlib.font_manager:win32FontDirectory",
+ "matplotlib.transforms:Affine2D.__init__",
+ "matplotlib.transforms:Affine2D.get_matrix",
+ "matplotlib.transforms:Affine2D.set_matrix",
"matplotlib.ticker.LogLocator.__init__",
- "matplotlib.figure.FigureBase._process_projection_requirements",
- "matplotlib.transforms.Transform.__sub__",
- "matplotlib.patches.ConnectionStyle._Base",
- "matplotlib.tri.triinterpolate._safe_inv22_vectorized",
+ "matplotlib.figure:FigureBase._process_projection_requirements",
+ "matplotlib.transforms:Transform.__sub__",
+ "matplotlib.patches:ConnectionStyle._Base",
+ "matplotlib.tri.triinterpolate:_safe_inv22_vectorized",
"matplotlib.transforms.Affine2D.from_values",
- "matplotlib.backend_bases.FigureCanvasBase._switch_canvas_and_return_print_method",
+ "matplotlib.backend_bases:FigureCanvasBase._switch_canvas_and_return_print_method",
]
ValueError = [
- "matplotlib.image.thumbnail",
- "matplotlib.artist.Artist.set_sketch_params",
- "matplotlib.artist.Artist.set_agg_filter",
- "matplotlib.axes._base._AxesBase.set_xlim",
- "matplotlib.axes._base._AxesBase.set_ylim",
- "matplotlib.cm.ScalarMappable.set_clim",
- "matplotlib.patches.FancyBboxPatch.set_boxstyle",
- "matplotlib.spines.Spine.set_bounds",
+ "matplotlib.image:thumbnail",
+ "matplotlib.artist:Artist.set_sketch_params",
+ "matplotlib.artist:Artist.set_agg_filter",
+ "matplotlib.axes._base:_AxesBase.set_xlim",
+ "matplotlib.axes._base:_AxesBase.set_ylim",
+ "matplotlib.cm:ScalarMappable.set_clim",
+ "matplotlib.patches:FancyBboxPatch.set_boxstyle",
+ "matplotlib.spines:Spine.set_bounds",
"matplotlib.backends.backend_agg.FigureCanvasAgg.print_png",
- "matplotlib.patches.FancyArrowPatch.set_connectionstyle",
+ "matplotlib.patches:FancyArrowPatch.set_connectionstyle",
"matplotlib.backends.backend_svg.FigureCanvasSVG.print_svg",
]
SpaceAfterBlockDirectiveError = [
diff --git a/papyri/crosslink.py b/papyri/crosslink.py
index 38702c06..d258b11a 100644
--- a/papyri/crosslink.py
+++ b/papyri/crosslink.py
@@ -23,14 +23,11 @@
SeeAlsoItem,
Signature,
encoder,
- FullQual,
- Cannonical,
TocTree,
)
from .common_ast import Node, register
from .tree import PostDVR, resolve_, TreeVisitor
-from .utils import progress, dummy_progress
-
+from .utils import progress, dummy_progress, FullQual, Cannonical
warnings.simplefilter("ignore", UserWarning)
diff --git a/papyri/gen.py b/papyri/gen.py
index 15cebf5d..d956cc48 100644
--- a/papyri/gen.py
+++ b/papyri/gen.py
@@ -48,10 +48,8 @@
from .errors import IncorrectInternalDocsLen, NumpydocParseError, UnseenError
from .miscs import BlockExecutor, DummyP
from .take2 import (
- Cannonical,
Code,
Fig,
- FullQual,
GenToken,
Link,
NumpydocExample,
@@ -67,7 +65,15 @@
)
from .toc import make_tree
from .tree import DVR
-from .utils import TimeElapsedColumn, dedent_but_first, full_qual, pos_to_nl, progress
+from .utils import (
+ TimeElapsedColumn,
+ dedent_but_first,
+ full_qual,
+ pos_to_nl,
+ progress,
+ FullQual,
+ Cannonical,
+)
from .vref import NumpyDocString
# delayed import
@@ -616,6 +622,7 @@ def prune(self) -> None:
for qa, item in self.obj.items():
if (nqa := full_qual(item)) != qa:
print("after import qa differs : {qa} -> {nqa}")
+ assert isinstance(nqa, str)
if self.obj[nqa] == item:
print("present twice")
del self.obj[nqa]
@@ -1774,7 +1781,7 @@ def helper_1(
sig: Optional[str]
try:
sig = str(inspect.signature(target_item))
- sig = qa.split(".")[-1] + sig
+ sig = qa.split(":")[-1] + sig
sig = re.sub("at 0x[0-9a-f]+", "at 0x0000000", sig)
except (ValueError, TypeError):
sig = None
@@ -2076,13 +2083,16 @@ def find_cannonical(qa: str, aliases: List[str]):
If we can't find a canonical, there are many, or are identical to the fqa, return None.
"""
- qa_level = qa.count(".")
- min_alias_level = min(a.count(".") for a in set(aliases))
+
+ def _level(c):
+ return c.count(".") + c.count(":")
+
+ qa_level = _level(qa)
+ min_alias_level = min(_level(a) for a in set(aliases))
if min_alias_level < qa_level:
- shorter_candidates = [c for c in aliases if c.count(".") <= min_alias_level]
+ shorter_candidates = [c for c in aliases if _level(c) <= min_alias_level]
else:
- shorter_candidates = [c for c in aliases if c.count(".") <= qa_level]
-
+ shorter_candidates = [c for c in aliases if _level(c) <= qa_level]
if (
len(shorter_candidates) == 1
and not is_private(shorter_candidates[0])
diff --git a/papyri/take2.py b/papyri/take2.py
index 67b5c58b..eff9f15b 100644
--- a/papyri/take2.py
+++ b/papyri/take2.py
@@ -59,7 +59,7 @@
import sys
from dataclasses import dataclass
-from typing import Any, List, NewType, Optional, Tuple, Union
+from typing import Any, List, Optional, Tuple, Union
import cbor2
from there import print
@@ -69,9 +69,6 @@
from .utils import dedent_but_first
-FullQual = NewType("FullQual", str)
-Cannonical = NewType("Cannonical", str)
-
register(tuple)(4444)
diff --git a/papyri/tree.py b/papyri/tree.py
index 0012151d..650976d7 100644
--- a/papyri/tree.py
+++ b/papyri/tree.py
@@ -12,10 +12,8 @@
from .take2 import (
BlockDirective,
- Cannonical,
Code2,
Directive,
- FullQual,
Link,
RefInfo,
SubstitutionDef,
@@ -36,7 +34,7 @@
MCode,
MParagraph,
)
-from .utils import full_qual
+from .utils import full_qual, FullQual, Cannonical
from textwrap import indent
from .ts import parse
from .take2 import Section
diff --git a/papyri/utils.py b/papyri/utils.py
index 6147e6ed..85e8f35a 100644
--- a/papyri/utils.py
+++ b/papyri/utils.py
@@ -1,26 +1,32 @@
+from __future__ import annotations
+
import time
+import typing
from datetime import timedelta
from textwrap import dedent
-from typing import Tuple
+from typing import Tuple, NewType
from rich.progress import BarColumn, Progress, ProgressColumn, Task, TextColumn
from rich.text import Text
from types import ModuleType
+FullQual = NewType("FullQual", str)
+Cannonical = NewType("Cannonical", str)
+
-def full_qual(obj):
+def full_qual(obj) -> typing.Optional[FullQual]:
if isinstance(obj, ModuleType):
- return obj.__name__
+ return FullQual(obj.__name__)
else:
try:
if hasattr(obj, "__qualname__") and (
getattr(obj, "__module__", None) is not None
):
- return obj.__module__ + "." + obj.__qualname__
+ return FullQual(obj.__module__ + ":" + obj.__qualname__)
elif hasattr(obj, "__name__") and (
getattr(obj, "__module__", None) is not None
):
- return obj.__module__ + "." + obj.__name__
+ return FullQual(obj.__module__ + ":" + obj.__name__)
except Exception:
pass
return None
| Replace fully qualified keys to use a `:` between the module and objects.
The current format for fully qualified names is `module.submodule.Class.method`; it would be great to change this to `module.submodule:Class.method`.
```python
In [1]: import importlib
In [2]: from matplotlib.tri import triplot as t1
...: import matplotlib.tri as tri
...: t2 = tri.triplot
...: from matplotlib.tri.triplot import triplot as t3
...: t4 = importlib.import_module('matplotlib.tri.triplot')
...:
...:
In [3]: t1, t2, t3, t4
Out[3]:
(<function matplotlib.tri.triplot.triplot(ax, *args, **kwargs)>,
<function matplotlib.tri.triplot.triplot(ax, *args, **kwargs)>,
<function matplotlib.tri.triplot.triplot(ax, *args, **kwargs)>,
<module 'matplotlib.tri.triplot' from '/Users/bussonniermatthias/miniforge3/envs/arm64/lib/python3.11/site-packages/matplotlib/tri/triplot.py'>)
```
As you can guess, the qualname of `triplot` is `matplotlib.tri.triplot.triplot`, but we can't recursively use getattr: `matplotlib.tri.triplot` would already get the function. But if we have `matplotlib.tri.triplot:triplot`, we can split on `:`, use importlib on the first part and getattr on the second.
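For illustration, here is a minimal sketch of the resolution described above; the helper name `resolve_qualname` is hypothetical, and papyri's actual `obj_from_qualname` may differ in its details:
```python
import importlib


def resolve_qualname(fq: str):
    """Resolve a 'module.submodule:Obj.attr' string back to the object (sketch)."""
    module_name, _, qualname = fq.partition(":")
    obj = importlib.import_module(module_name)   # import the module part
    if qualname:
        for part in qualname.split("."):         # walk attributes on the object part
            obj = getattr(obj, part)
    return obj


# resolve_qualname("matplotlib.tri.triplot:triplot") -> the triplot function
# resolve_qualname("matplotlib.tri.triplot")         -> the triplot module itself
```
Splitting on `:` keeps the module import unambiguous even when a submodule and an object share a name, which is exactly the `matplotlib.tri.triplot` case above.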
| There should be two functions in papyri: `obj_from_qualname` and `full_qual`.
One goes from an object to its fully-qualified name, and the other does the reverse. Those should be updated (maybe the names should also be tweaked), and there might be other places in the code that rely on `.` instead of `:`.
Maybe also add type annotations; there is already a `FullQual = NewType("FullQual", str)` type defined that we might be able to use.
| 2023-04-14T14:41:01 | 0.0 | [] | [] |
||
DataBiosphere/terra-notebook-utils | DataBiosphere__terra-notebook-utils-257 | 713d42b44efe04e40d93580bddab9dbbedf6c988 | diff --git a/terra_notebook_utils/cli/table.py b/terra_notebook_utils/cli/table.py
index 3eb9251e..91c7e5e7 100644
--- a/terra_notebook_utils/cli/table.py
+++ b/terra_notebook_utils/cli/table.py
@@ -58,6 +58,29 @@ def get_row(args: argparse.Namespace):
if row is not None:
print(row.name, json.dumps(row.attributes))
+@table_cli.command("delete-table", arguments={
+ "--table": dict(type=str, required=True, help="table name"),
+})
+def delete_table(args: argparse.Namespace):
+ """
+ Get one row
+ """
+ args.workspace, args.workspace_namespace = Config.resolve(args.workspace, args.workspace_namespace)
+ kwargs = dict(workspace_name=args.workspace, workspace_google_project=args.workspace_namespace)
+ tnu_table.delete(args.table, **kwargs)
+
+@table_cli.command("delete-row", arguments={
+ "--table": dict(type=str, required=True, help="table name"),
+ "--row": dict(type=str, required=True, help="row name"),
+})
+def delete_row(args: argparse.Namespace):
+ """
+ Delete a row
+ """
+ args.workspace, args.workspace_namespace = Config.resolve(args.workspace, args.workspace_namespace)
+ kwargs = dict(workspace_name=args.workspace, workspace_google_project=args.workspace_namespace)
+ tnu_table.del_row(args.table, args.row, **kwargs)
+
@table_cli.command("fetch-drs-url", arguments={
"--table": dict(type=str, required=True, help="table name"),
"--file-name": dict(type=str, required=True, help="file name"),
diff --git a/terra_notebook_utils/table.py b/terra_notebook_utils/table.py
index d830a5b7..deaa24f1 100644
--- a/terra_notebook_utils/table.py
+++ b/terra_notebook_utils/table.py
@@ -239,6 +239,15 @@ def put_rows(table: str, items: Iterable[Union[ROW_LIKE, ATTRIBUTES]], **kwargs)
def put_row(table: str, item: Union[ROW_LIKE, ATTRIBUTES], **kwargs) -> str:
return put_rows(table, [item], **kwargs)[0]
+def del_rows(table: str, items: Iterable[ROW_OR_NAME], **kwargs):
+ with Deleter(table, **kwargs) as td:
+ for row in items:
+ td.del_row(row)
+
+def del_row(table: str, item: ROW_OR_NAME, **kwargs):
+ del_rows(table, [item], **kwargs)
+
+
def delete(table: str, **kwargs):
with Deleter(table, **kwargs) as td:
for row in list_rows(table, **kwargs):
| Introduce table upload convenience methods
depends on #253 #254 #255
| 2021-01-11T20:14:33 | 0.0 | [] | [] |
|||
pydantic/bump-pydantic | pydantic__bump-pydantic-143 | f5ef9768e01ec12616f659581be140d1160dfc18 | diff --git a/bump_pydantic/main.py b/bump_pydantic/main.py
index d9832c2..33db7f3 100644
--- a/bump_pydantic/main.py
+++ b/bump_pydantic/main.py
@@ -72,8 +72,10 @@ def main(
filtered_files = [file for file in all_files if not any(match_glob(file, pattern) for pattern in ignore)]
files = [str(file.relative_to(".")) for file in filtered_files]
- if files:
- console.log(f"Found {len(files)} files to process")
+ if len(files) == 1:
+ console.log("Found 1 file to process.")
+ elif len(files) > 1:
+ console.log(f"Found {len(files)} files to process.")
else:
console.log("No files to process.")
raise Exit()
@@ -137,8 +139,11 @@ def main(
modified = [Path(f) for f in files if os.stat(f).st_mtime > start_time]
- if modified and not diff:
- console.log(f"Refactored {len(modified)} files.")
+ if not diff:
+ if modified:
+ console.log(f"Refactored {len(modified)} files.")
+ else:
+ console.log("No files were modified.")
for _difflines in difflines:
color_diff(console, _difflines)
| [BUG]: bump-pydantic doesn't change files
OS: Windows 11
IDE: Cursor
I am in a fresh [fork](https://github.com/jjfantini/patito.git) of `patito` in local dev, using a conda env.
It depends on pydantic v1 and I am looking to migrate it to support v2, hence using `bump-pydantic`.
```bash
(venv_patito)PS C:\Users\<user>\<wd>\patito> bump-pydantic src/patito
[10:38:16] Start bump-pydantic. main.py:61
Found 10 files to process main.py:76
[10:38:32] Run successfully! main.py:149
(venv_patito)PS C:\Users\<user>\<wd>\patito>
```
However, git doesn't report any changes, and only an empty `log.txt` file is generated. I've also run it from the src directory with the same outcome.
Shouldn't I get an error somewhere?
| https://github.com/pydantic/bump-pydantic/assets/63276164/5a88e99f-f719-483e-aeae-2f8556a91eb8
https://github.com/pydantic/bump-pydantic/assets/63276164/7c07244e-2367-4489-9fa0-b7de10f89aff
||
quic/aimet | quic__aimet-2653 | 02c1de10cd3d7fdb3b22154cb0425e8f10b89aaa | diff --git a/TrainingExtensions/torch/src/python/aimet_torch/utils.py b/TrainingExtensions/torch/src/python/aimet_torch/utils.py
index 6c4e6cd73b..f01c5466a1 100644
--- a/TrainingExtensions/torch/src/python/aimet_torch/utils.py
+++ b/TrainingExtensions/torch/src/python/aimet_torch/utils.py
@@ -686,7 +686,8 @@ def change_tensor_device_placement(tensor_data: Union[torch.Tensor, List, Tuple]
:param device: device
:return: tensor_data with modified device placement
"""
- return nested_map(tensor_data, lambda x: x.to(device=device))
+ return nested_map(tensor_data,
+ lambda x: x.to(device=device) if isinstance(x, (torch.Tensor, torch.nn.Module)) else x)
def nested_map(tensor, fn: Callable[[torch.Tensor], torch.Tensor]):
| Constrain the device placement routine for tensors and modules.
| 2024-01-18T23:03:09 | 0.0 | [] | [] |
|||
kmhess/SoFiA-image-pipeline | kmhess__SoFiA-image-pipeline-29 | f1df1463b1b290912ef583d0c4f4424b8bb1d56c | diff --git a/src/make_spectra.py b/src/make_spectra.py
index 691cebf..364e90e 100644
--- a/src/make_spectra.py
+++ b/src/make_spectra.py
@@ -74,61 +74,64 @@ def get_noise_spec(source, src_basename, cube_params, original=None):
# Make full spectrum plot:
def make_specfull(source, src_basename, cube_params, suffix='png', full=False):
- outfile = src_basename.replace('cubelets', 'figures') + '_{}_specfull.{}'.format(source['id'], suffix)
+ outfile2 = src_basename.replace('cubelets', 'figures') + '_{}_specfull.{}'.format(source['id'], suffix)
- if not os.path.isfile(outfile):
+ if not os.path.isfile(outfile2):
print("\tMaking HI spectrum plot covering the full frequency range.")
convention = 'Optical'
if 'freq' in source.colnames:
- spec = ascii.read(outfile[:-1*len(suffix)] + 'txt')
+ spec = ascii.read(outfile2[:-1*len(suffix)] + 'txt')
optical_velocity = (spec['freq'] * u.Hz).to(u.km / u.s, equivalencies=optical_HI).value
maskmin = (spec['freq'][spec['chan'] == source['z_min']] * u.Hz).to(u.km / u.s, equivalencies=optical_HI).value
maskmax = (spec['freq'][spec['chan'] == source['z_max']] * u.Hz).to(u.km / u.s, equivalencies=optical_HI).value
else:
if 'vrad' in source.colnames: convention = 'Radio'
- spec = ascii.read(outfile[:-1 * len(suffix)] + 'txt', names=['chan', 'velo', 'f_sum', 'n_pix'])
+ spec = ascii.read(outfile2[:-1 * len(suffix)] + 'txt', names=['chan', 'velo', 'f_sum', 'n_pix'])
optical_velocity = (spec['velo'] * u.m / u.s).to(u.km / u.s).value
maskmin = (spec['velo'][spec['chan'] == source['z_min']] * u.m / u.s).to(u.km / u.s).value
maskmax = (spec['velo'][spec['chan'] == source['z_max']] * u.m / u.s).to(u.km / u.s).value
if full == True:
- fig = plt.figure(figsize=(15, 4))
+ fig2 = plt.figure(figsize=(15, 4))
else:
- fig = plt.figure(figsize=(8, 4))
+ fig2 = plt.figure(figsize=(8, 4))
- ax_spec = fig.add_subplot(111)
- ax_spec.plot([np.min(optical_velocity) - 10, np.max(optical_velocity) + 10], [0, 0], '--', color='gray')
- ax_spec.errorbar(optical_velocity, spec['f_sum'] / cube_params['pix_per_beam'], elinewidth=0.75,
+ ax2_spec = fig2.add_subplot(111)
+ ax2_spec.plot([np.min(optical_velocity) - 10, np.max(optical_velocity) + 10], [0, 0], '--', color='gray')
+ ax2_spec.errorbar(optical_velocity, spec['f_sum'] / cube_params['pix_per_beam'], elinewidth=0.75,
yerr=source['rms'] * np.sqrt(spec['n_pix'] / cube_params['pix_per_beam']), capsize=1)
- ax_spec.set_title(source['name'])
- ax_spec.set_xlim(np.min(optical_velocity) - 5, np.max(optical_velocity) + 5)
- ax_spec.set_ylabel("Integrated Flux [Jy]")
- ax_spec.set_xlabel("{} {} Velocity [km/s]".format(cube_params['spec_sys'].capitalize(), convention))
+ ax2_spec.set_title(source['name'])
+ ax2_spec.set_xlim(np.min(optical_velocity) - 5, np.max(optical_velocity) + 5)
+ ax2_spec.set_ylabel("Integrated Flux [Jy]")
+ ax2_spec.set_xlabel("{} {} Velocity [km/s]".format(cube_params['spec_sys'].capitalize(), convention))
spectrumJy = spec["f_sum"] / cube_params['pix_per_beam']
# Plot limit of SoFiA mask
- ymin, ymax = ax_spec.get_ylim()
- ax_spec.plot([maskmin, maskmin], [0.95*ymin, 0.95*ymax], ':', color='gray')
- ax_spec.plot([maskmax, maskmax], [0.95*ymin, 0.95*ymax], ':', color='gray')
+ ymin, ymax = ax2_spec.get_ylim()
+ ax2_spec.plot([maskmin, maskmin], [0.95*ymin, 0.95*ymax], ':', color='gray')
+ ax2_spec.plot([maskmax, maskmax], [0.95*ymin, 0.95*ymax], ':', color='gray')
# Condition from Apertif experience that if the RFI is *really* bad, plot based on strength of HI profile
if (np.max(spectrumJy) > 2.) | (np.min(spectrumJy) < -1.):
- ax_spec.set_ylim(np.max(spectrumJy[source['z_min']:source['z_max']+1]) * -2,
+ ax2_spec.set_ylim(np.max(spectrumJy[source['z_min']:source['z_max']+1]) * -2,
np.max(spectrumJy[source['z_min']:source['z_max']+1]) * 2)
- fig.savefig(outfile, bbox_inches='tight')
+# fig.savefig(outfile2, bbox_inches='tight')
- return
+ else:
+ fig2, ax2_spec, outfile2 = None, None, None
+
+ return fig2, ax2_spec, outfile2
# Make SoFiA masked spectrum plot (no noise):
def make_spec(source, src_basename, cube_params, suffix='png'):
- outfile = src_basename.replace('cubelets', 'figures') + '_{}_spec.{}'.format(source['id'], suffix)
+ outfile1 = src_basename.replace('cubelets', 'figures') + '_{}_spec.{}'.format(source['id'], suffix)
- if not os.path.isfile(outfile):
+ if not os.path.isfile(outfile1):
print("\tMaking HI SoFiA masked spectrum plot.")
convention = 'Optical'
@@ -149,22 +152,25 @@ def make_spec(source, src_basename, cube_params, suffix='png'):
ll += 1
specunits = (spec.meta['comments'][ll+1].split()[spec.meta['comments'][ll].split().index('f_sum')])
- fig = plt.figure(figsize=(8, 4))
- ax_spec = fig.add_subplot(111)
- ax_spec.plot([np.min(optical_velocity) - 10, np.max(optical_velocity) + 10], [0, 0], '--', color='gray')
+ fig1 = plt.figure(figsize=(8, 4))
+ ax1_spec = fig1.add_subplot(111)
+ ax1_spec.plot([np.min(optical_velocity) - 10, np.max(optical_velocity) + 10], [0, 0], '--', color='gray')
if specunits == 'Jy/beam':
- ax_spec.errorbar(optical_velocity, spec['f_sum'] / cube_params['pix_per_beam'], elinewidth=0.75,
+ ax1_spec.errorbar(optical_velocity, spec['f_sum'] / cube_params['pix_per_beam'], elinewidth=0.75,
yerr=source['rms'] * np.sqrt(spec['n_pix'] / cube_params['pix_per_beam']), capsize=1)
elif specunits == 'Jy':
- ax_spec.errorbar(optical_velocity, spec['f_sum'], elinewidth=0.75,
+ ax1_spec.errorbar(optical_velocity, spec['f_sum'], elinewidth=0.75,
yerr=source['rms'] * np.sqrt(spec['n_pix'] / cube_params['pix_per_beam']), capsize=1)
- ax_spec.set_title(source['name'])
- ax_spec.set_xlim(np.min(optical_velocity) - 5, np.max(optical_velocity) + 5)
- ax_spec.set_ylabel("Integrated Flux [Jy]")
- ax_spec.set_xlabel("{} {} Velocity [km/s]".format(cube_params['spec_sys'].capitalize(), convention))
- fig.savefig(outfile, bbox_inches='tight')
+ ax1_spec.set_title(source['name'])
+ ax1_spec.set_xlim(np.min(optical_velocity) - 5, np.max(optical_velocity) + 5)
+ ax1_spec.set_ylabel("Integrated Flux [Jy]")
+ ax1_spec.set_xlabel("{} {} Velocity [km/s]".format(cube_params['spec_sys'].capitalize(), convention))
+# fig1.savefig(outfile1, bbox_inches='tight')
- return
+ else:
+ fig1, ax1_spec, outfile1 = None, None, None
+
+ return fig1, ax1_spec, outfile1
def main(source, src_basename, original=None, suffix='png', beam=None):
@@ -175,7 +181,7 @@ def main(source, src_basename, original=None, suffix='png', beam=None):
cube_params = get_info(src_basename + '_{}_cube.fits'.format(source['id']), beam)
# Make plot of SoFiA masked spectrum
- make_spec(source, src_basename, cube_params, suffix=suffix)
+ fig1, ax1_spec, outfile1 = make_spec(source, src_basename, cube_params, suffix=suffix)
# Make text file of spectrum with noise; use full frequency range of original cube if provided:
# Can be a bit more precise here in the output options/specification.
@@ -184,8 +190,17 @@ def main(source, src_basename, original=None, suffix='png', beam=None):
get_noise_spec(source, src_basename, cube_params, original)
# Make plot of spectrum with noise
- make_specfull(source, src_basename, cube_params, suffix=suffix, full=False)
-
+ fig2, ax2_spec, outfile2 = make_specfull(source, src_basename, cube_params, suffix=suffix, full=False)
+
+ if outfile1 and outfile2:
+ ymin = min([ax1_spec.get_ylim()[0],ax2_spec.get_ylim()[0]])
+ ymax = max([ax1_spec.get_ylim()[1],ax2_spec.get_ylim()[1]])
+ ax1_spec.set_ylim([ymin,ymax])
+ ax2_spec.set_ylim([ymin,ymax])
+ if outfile1:
+ fig1.savefig(outfile1, bbox_inches='tight')
+ if outfile2:
+ fig2.savefig(outfile2, bbox_inches='tight')
plt.close('all')
print("\tDone making spectral profiles of the spectral line source {}: {}.".format(source['id'], source['name']))
| Fixed Y range for 1D spectra
Having the same Y range for the two 1D spectra (3D mask and 2D aperture) would facilitate their comparison for QA.
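A minimal sketch of how the two panels could share the same Y range (placeholder data; the real spectra and figure setup live in `make_spectra.py`):
```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder spectra standing in for the 3D-mask and 2D-aperture profiles.
fig1, ax1 = plt.subplots()
fig2, ax2 = plt.subplots()
ax1.plot(np.random.randn(50))
ax2.plot(3 * np.random.randn(50))

# Take the widest combined limits so both panels share the same Y axis.
ymin = min(ax1.get_ylim()[0], ax2.get_ylim()[0])
ymax = max(ax1.get_ylim()[1], ax2.get_ylim()[1])
ax1.set_ylim(ymin, ymax)
ax2.set_ylim(ymin, ymax)
```
This mirrors what the patch above does in `main()`: both figures are created first, their limits are harmonised, and only then are they saved.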
| 2022-02-24T18:05:12 | 0.0 | [] | [] |
|||
lf1-io/padl | lf1-io__padl-273 | 31ddc974a3848570cf28c5e20e6a6af4aa575eb4 | diff --git a/padl/dumptools/inspector.py b/padl/dumptools/inspector.py
index 13803fe2..b261a813 100644
--- a/padl/dumptools/inspector.py
+++ b/padl/dumptools/inspector.py
@@ -170,11 +170,11 @@ def get_statement(source: str, lineno: int):
continue
try:
try:
- statement = _get_statement_from_block(block, lineno_in_block + row_offset)
- return statement, (lineno - row_offset - 1, -col_offset)
+ statement, offset = _get_statement_from_block(block, lineno_in_block + row_offset)
+ return statement, (lineno - offset - 1, -col_offset)
except SyntaxError:
- statement = _get_statement_from_block('(\n' + block + '\n)',
- lineno_in_block + row_offset + 1)
+ statement, offset = _get_statement_from_block('(\n' + block + '\n)',
+ lineno_in_block + row_offset + 1)
return statement, (lineno - lineno_in_block - 1, -col_offset)
except SyntaxError:
continue
@@ -182,13 +182,16 @@ def get_statement(source: str, lineno: int):
def _get_statement_from_block(block: str, lineno_in_block: int):
- """Get a statement from ."""
+ """Get a statement from a block."""
module = ast.parse(block)
stmts = []
+ offset = 0
for stmt in module.body:
if stmt.lineno <= lineno_in_block <= stmt.end_lineno:
stmts.append(ast.get_source_segment(block, stmt))
- return '\n'.join(stmts)
+ offset = lineno_in_block - stmt.lineno
+ assert len(stmts) == 1
+ return '\n'.join(stmts), offset
def get_surrounding_block(source: str, lineno: int):
diff --git a/padl/wrap.py b/padl/wrap.py
index 06cfe452..ec36ce19 100644
--- a/padl/wrap.py
+++ b/padl/wrap.py
@@ -200,7 +200,7 @@ def _wrap_lambda(fun, ignore_scope=False):
if not len(instrs) == len(target_instrs):
continue
for instr, target_instr in zip(instrs, target_instrs):
- if (instr.opname, target_instr.argval) != (instr.opname, target_instr.argval):
+ if (instr.opname, instr.argval) != (target_instr.opname, target_instr.argval):
break
else:
found = True
| Multiple lambdas in a compose break the print statement.
## 🐛 Bug
```python
t = transform(lambda x: torch.randn(2)) >> batch >> identity >> unbatch >> this.tolist() >> transform(lambda x: x + ['hello'])
print(t)
```
Output of print:
```
Compose:
   │
   ▼ x
   0: lambda x: torch.randn(2)
   │
   ▼ args
   1: Batchify(dim=0)
   │
   ▼ args
   2: padl.Identity()
   │
   ▼ args
   3: Unbatchify(dim=0, cpu=True)
   │
   ▼ args
   4: tolist()
   │
   ▼ x
   5: lambda x: torch.randn(2)
```
The last step is incorrect.
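The cause is visible in the `wrap.py` hunk above: the old condition compares a tuple with itself, so it can never be unequal, and the first candidate lambda with the same instruction count is always accepted. A tiny illustration with made-up instruction values (the names and values are hypothetical, just to show why the old check was a no-op):
```python
# Hypothetical (opname, argval) pairs for the wrapped lambda and one candidate.
instr = ("LOAD_GLOBAL", "torch")
target_instr = ("LOAD_FAST", "x")

# Old check: both sides are literally the same expression, so it is always False
# and the loop never rejects a mismatching candidate.
print((instr[0], target_instr[1]) != (instr[0], target_instr[1]))   # False

# Fixed check: compare the actual pair against the candidate pair.
print((instr[0], instr[1]) != (target_instr[0], target_instr[1]))   # True
```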
| 2021-11-17T17:48:12 | 0.0 | [] | [] |
|||
delvtech/pypechain | delvtech__pypechain-44 | 3f8e92d445aa26a9d5a4696eb83334be67ea14a5 | diff --git a/pypechain/main.py b/pypechain/main.py
index 94f64f2a..1cea4dc3 100644
--- a/pypechain/main.py
+++ b/pypechain/main.py
@@ -11,6 +11,9 @@
from web3.exceptions import NoABIFunctionsFound
from pypechain.render.main import render_files
+from pypechain.utilities.file import write_string_to_file
+from pypechain.utilities.format import apply_black_formatting
+from pypechain.utilities.templates import get_jinja_env
def main(argv: Sequence[str] | None = None) -> None:
@@ -32,7 +35,7 @@ def main(argv: Sequence[str] | None = None) -> None:
setup_directory(output_dir)
# List to store all JSON ABI files to be processed
- json_files_to_process = []
+ json_files_to_process: list[Path] = []
# Check if provided path is a directory or file
if os.path.isdir(abi_file_path):
@@ -42,16 +45,31 @@ def main(argv: Sequence[str] | None = None) -> None:
# Otherwise, add the single file to the list
json_files_to_process.append(Path(abi_file_path))
+ file_names: list[str] = []
+
# Now process all gathered files
for json_file in json_files_to_process:
try:
- render_files(str(json_file), output_dir, line_length)
+ rendered_file_names = render_files(str(json_file), output_dir, line_length)
+ file_names.extend(rendered_file_names)
except NoABIFunctionsFound:
print(f"No ABI Functions found in {json_file}, skipping...")
except BaseException as err:
print(f"Error creating types for {json_file}")
raise err
+ # Finally, render the __init__.py file
+ render_init_file(output_dir, file_names, line_length)
+
+
+def render_init_file(output_dir: str, file_names: list[str], line_length):
+ """Creates an __init__.py file that imports all other files."""
+ env = get_jinja_env()
+ init_template = env.get_template("init.py.jinja2")
+ init_code = init_template.render(file_names=file_names)
+ formatted_init_code = apply_black_formatting(init_code, line_length)
+ write_string_to_file(f"{output_dir}/__init__.py", formatted_init_code)
+
def gather_json_files(directory: str) -> list:
"""Gathers all JSON files in the specified directory and its subdirectories."""
@@ -68,11 +86,6 @@ def setup_directory(directory: str) -> None:
# Create the directory
os.makedirs(directory)
- # Create an empty __init__.py file in the directory
- init_file_path = os.path.join(directory, "__init__.py")
- with open(init_file_path, "a", encoding="utf-8"):
- pass
-
class Args(NamedTuple):
"""Command line arguments for pypechain."""
diff --git a/pypechain/render/main.py b/pypechain/render/main.py
index 0e59ffa8..1c3a6001 100644
--- a/pypechain/render/main.py
+++ b/pypechain/render/main.py
@@ -3,14 +3,13 @@
import os
from pathlib import Path
-
from pypechain.render.contract import render_contract_file
from pypechain.render.types import render_types_file
from pypechain.utilities.file import write_string_to_file
from pypechain.utilities.format import apply_black_formatting
-def render_files(abi_file_path: str, output_dir: str, line_length: int) -> None:
+def render_files(abi_file_path: str, output_dir: str, line_length: int) -> list[str]:
"""Processes a single JSON file to generate class and types files."""
# get names
@@ -22,12 +21,19 @@ def render_files(abi_file_path: str, output_dir: str, line_length: int) -> None:
# render the code
rendered_contract_code = render_contract_file(contract_name, file_path)
+ # TODO: if there are no types generated, then this should return None
rendered_types_code = render_types_file(contract_name, file_path)
- # Format the generated code using Black
+ file_names: list[str] = []
+ # Format the generated code using Black and rite the code to file
formatted_contract_code = apply_black_formatting(rendered_contract_code, line_length)
- formatted_types_code = apply_black_formatting(rendered_types_code, line_length)
-
- # Write the code to file
write_string_to_file(f"{contract_path}Contract.py", formatted_contract_code)
- write_string_to_file(f"{contract_path}Types.py", formatted_types_code)
+ file_names.append(f"{contract_name}Contract")
+
+ # TODO: write tests for this conditional write.
+ if rendered_types_code:
+ formatted_types_code = apply_black_formatting(rendered_types_code, line_length)
+ write_string_to_file(f"{contract_path}Types.py", formatted_types_code)
+ file_names.append(f"{contract_name}Types")
+
+ return file_names
diff --git a/pypechain/render/types.py b/pypechain/render/types.py
index ec411006..a9257bed 100644
--- a/pypechain/render/types.py
+++ b/pypechain/render/types.py
@@ -1,4 +1,6 @@
"""Functions to render Python types from an abi usng a jinja2 template."""
+from __future__ import annotations
+
from dataclasses import asdict
from pathlib import Path
@@ -6,7 +8,7 @@
from pypechain.utilities.templates import get_jinja_env
-def render_types_file(contract_name: str, abi_file_path: Path) -> str:
+def render_types_file(contract_name: str, abi_file_path: Path) -> str | None:
"""Returns the serialized code of the types file to be generated.
Arguments
@@ -33,8 +35,12 @@ def render_types_file(contract_name: str, abi_file_path: Path) -> str:
structs = [asdict(struct) for struct in structs_list]
events = [asdict(event) for event in get_events_for_abi(abi)]
has_events = bool(events)
+ has_structs = bool(structs)
has_event_params = any(len(event["inputs"]) > 0 for event in events)
+ if not has_events and not has_structs:
+ return None
+
return types_template.render(
contract_name=contract_name,
structs=structs,
diff --git a/pypechain/templates/init.py.jinja2 b/pypechain/templates/init.py.jinja2
new file mode 100644
index 00000000..87e61966
--- /dev/null
+++ b/pypechain/templates/init.py.jinja2
@@ -0,0 +1,5 @@
+"""Export all types from generated files."""
+
+{% for name in file_names -%}
+from .{{name}} import *
+{% endfor %}
\ No newline at end of file
| Have __init__.py export types
We should export types from __init__.py to make imports easier for consumers. Right now we have:
```from hypertypes.IERC4626HyperdriveContract import IERC4626HyperdriveContract```
And we'd like to have:
```from hypertypes import IERC4626HyperdriveContract```
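For instance, a generated `__init__.py` along the lines of the `init.py.jinja2` template added in the patch might look like this (the file names are illustrative and follow the `<Contract>Contract` / `<Contract>Types` naming used by the renderer):
```python
"""Export all types from generated files."""

from .IERC4626HyperdriveContract import *
from .IERC4626HyperdriveTypes import *
```
With that re-export in place, `from hypertypes import IERC4626HyperdriveContract` works as requested.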
| 2023-11-20T22:55:15 | 0.0 | [] | [] |
|||
biopragmatics/biomappings | biopragmatics__biomappings-95 | bfb1b136bdde74ec39e548569502402e7f5e8078 | diff --git a/scripts/add_mesh_xrefs_to_mondo.py b/scripts/add_mesh_xrefs_to_mondo.py
new file mode 100644
index 00000000..2ed4caaa
--- /dev/null
+++ b/scripts/add_mesh_xrefs_to_mondo.py
@@ -0,0 +1,68 @@
+"""This script adds newly inferred cross-references for MONDO.
+
+These are added directly to the version controlled MONDO OBO file.
+"""
+
+from biomappings import load_mappings
+
+EDITABLE_OBO_PATH = "/home/ben/src/mondo/src/ontology/mondo-edit.obo"
+
+
+def add_xref(lines, node, xref):
+ """Add xref to OBO file lines in the appropriate place."""
+ look_for_xref = False
+ start_xref_idx = None
+ def_idx = None
+ name_idx = None
+ xref_entries = []
+ for idx, line in enumerate(lines):
+ # If this is the block for the given node, we start looking for xrefs
+ if line == "id: %s\n" % node:
+ look_for_xref = True
+ continue
+ # If we are looking for xrefs
+ elif look_for_xref:
+ # If we find the definition, we save its index
+ if line.startswith("def"):
+ def_idx = idx
+ if line.startswith("name"):
+ name_idx = idx
+ # If we find an xref, we keep track of it
+ if line.startswith("xref"):
+ if not start_xref_idx:
+ start_xref_idx = idx
+ xref_entries.append(line[6:].strip())
+ # If we've already found some xrefs and then hit a line that
+ # is not an xref, then we are done collecting xrefs
+ if start_xref_idx and not line.startswith("xref"):
+ break
+ # If we then find an empty line, we are at the end of the
+ # OBO entry and never found any xrefs. In this case, we put
+ # the xref after the definition line or the name line
+ if not line.strip():
+ if def_idx:
+ start_xref_idx = def_idx + 1
+ else:
+ start_xref_idx = name_idx + 1
+ break
+ xref_entries.append(xref)
+ xref_entries = sorted(xref_entries)
+ xr_idx = xref_entries.index(xref)
+ lines.insert(start_xref_idx + xr_idx, 'xref: %s {source="MONDO:equivalentTo"}\n' % xref)
+ return lines
+
+
+if __name__ == "__main__":
+ mappings = load_mappings()
+ mondo_mappings = [m for m in mappings if m["source prefix"] == "mondo"]
+
+ with open(EDITABLE_OBO_PATH, "r") as fh:
+ lines = fh.readlines()
+
+ for mapping in mondo_mappings:
+ lines = add_xref(
+ lines, mapping["source identifier"], "MESH:" + mapping["target identifier"]
+ )
+
+ with open(EDITABLE_OBO_PATH, "w") as fh:
+ fh.writelines(lines)
diff --git a/scripts/generate_mondo_mesh_mappings.py b/scripts/generate_mondo_mesh_mappings.py
new file mode 100644
index 00000000..0791278c
--- /dev/null
+++ b/scripts/generate_mondo_mesh_mappings.py
@@ -0,0 +1,74 @@
+"""Generate mappings using Gilda from MONDO to MeSH."""
+from collections import Counter
+
+import gilda
+import obonet
+from indra.databases import mesh_client
+from indra.ontology.standardize import standardize_db_refs
+from indra.tools.fix_invalidities import fix_invalidities_db_refs
+
+from biomappings import load_mappings
+from biomappings.resources import PredictionTuple, append_prediction_tuples
+
+g = obonet.read_obo("http://purl.obolibrary.org/obo/mondo.obo")
+
+
+curated_mappings = {
+ m["source identifier"] for m in load_mappings() if m["source prefix"] == "mondo"
+}
+
+mappings = {}
+existing_refs_to_mesh = set()
+already_mappable = set()
+for node, data in g.nodes(data=True):
+ if not node.startswith("MONDO"):
+ continue
+ if "name" not in data:
+ continue
+ mondo_id = node.split(":", maxsplit=1)[1]
+ if mondo_id in curated_mappings:
+ continue
+ xrefs = [xref.split(":", maxsplit=1) for xref in data.get("xref", [])]
+ xrefs_dict = fix_invalidities_db_refs(dict(xrefs))
+ standard_refs = standardize_db_refs(xrefs_dict)
+ if "MESH" in standard_refs:
+ already_mappable.add(node)
+ existing_refs_to_mesh |= {id for ns, id in standard_refs.items() if ns == "MESH"}
+ matches = gilda.ground(data["name"], namespaces=["MESH"])
+ if matches:
+ for grounding in matches[0].get_groundings():
+ if grounding[0] == "MESH":
+ mappings[node] = matches[0].term.id
+
+
+print("Found %d MONDO->MESH mappings." % len(mappings))
+
+mappings = {
+ k: v
+ for k, v in mappings.items()
+ if v not in existing_refs_to_mesh and k not in already_mappable
+}
+
+cnt = Counter(mappings.values())
+
+mappings = {k: v for k, v in mappings.items() if cnt[v] == 1}
+
+print("Found %d MONDO->MESH mappings." % len(mappings))
+
+predictions = []
+for mondo_id, mesh_id in mappings.items():
+ pred = PredictionTuple(
+ source_prefix="mondo",
+ source_id=mondo_id[6:],
+ source_name=g.nodes[mondo_id]["name"],
+ relation="skos:exactMatch",
+ target_prefix="mesh",
+ target_identifier=mesh_id,
+ target_name=mesh_client.get_mesh_name(mesh_id),
+ type="lexical",
+ confidence=0.9,
+ source="generate_mondo_mesh_mappings.py",
+ )
+ predictions.append(pred)
+
+append_prediction_tuples(predictions, deduplicate=True, sort=True)
diff --git a/src/biomappings/resources/incorrect.tsv b/src/biomappings/resources/incorrect.tsv
index f92b852c..a4166ad3 100644
--- a/src/biomappings/resources/incorrect.tsv
+++ b/src/biomappings/resources/incorrect.tsv
@@ -540,6 +540,11 @@ mesh D065627 Familial Primary Pulmonary Hypertension skos:exactMatch doid DOID:1
mesh D065637 Cytochrome P-450 CYP2A6 skos:exactMatch hgnc 2610 CYP2A6 manually_reviewed orcid:0000-0003-1307-2508
mesh D066167 Slit Lamp skos:exactMatch ncit C75583 Slit-lamp Examination manually_reviewed orcid:0000-0001-9439-5346
mesh D066246 ErbB Receptors skos:exactMatch ncit C17068 Epidermal Growth Factor Receptor manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0005187 human herpesvirus 8 infection skos:exactMatch mesh D019288 Herpesvirus 8, Human manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015053 hereditary angioedema type 1 skos:exactMatch mesh D056829 Hereditary Angioedema Types I and II manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0020320 acute myeloblastic leukemia with maturation skos:exactMatch mesh D000650 Amnion manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0044346 echinococcus granulosus infectious disease skos:exactMatch mesh D048209 Echinococcus granulosus manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100073 methicillin-resistant staphylococcus aureus infectious disease skos:exactMatch mesh D055624 Methicillin-Resistant Staphylococcus aureus manually_reviewed orcid:0000-0001-9439-5346
pr PR:000035716 ubiquitin (RPS27A) skos:exactMatch uniprot.chain PRO_0000396174 Ubiquitin manually_reviewed orcid:0000-0001-9439-5346
pr PR:000035720 ubiquitin (UBC) skos:exactMatch uniprot.chain PRO_0000396174 Ubiquitin manually_reviewed orcid:0000-0001-9439-5346
pr PR:000035722 ubiquitin (UBA52) skos:exactMatch uniprot.chain PRO_0000396174 Ubiquitin manually_reviewed orcid:0000-0001-9439-5346
diff --git a/src/biomappings/resources/mappings.tsv b/src/biomappings/resources/mappings.tsv
index 4fff4caf..2320ee52 100644
--- a/src/biomappings/resources/mappings.tsv
+++ b/src/biomappings/resources/mappings.tsv
@@ -6447,6 +6447,207 @@ mesh D066270 Social Capital skos:exactMatch ncit C93209 Social Capital manually_
mesh D066271 External Capsule skos:exactMatch ncit C32550 External Capsule manually_reviewed orcid:0000-0001-9439-5346
mesh D066292 Lipid Droplets skos:exactMatch go GO:0005811 lipid droplet manually_reviewed orcid:0000-0001-9439-5346
mesh D066293 Renshaw Cells skos:exactMatch ncit C33463 Renshaw Cell manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0000158 developmental dysplasia of the hip skos:exactMatch mesh D000082602 Developmental Dysplasia of the Hip manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0000239 adiaspiromycosis skos:exactMatch mesh C000656784 adiaspiromycosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0000242 tinea barbae skos:exactMatch mesh C000656825 tinea barbae manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0000455 cone dystrophy skos:exactMatch mesh D000077765 Cone Dystrophy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0000598 aphasia skos:exactMatch mesh D001037 Aphasia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0000716 agraphia skos:exactMatch mesh D000381 Agraphia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0000819 anencephaly skos:exactMatch mesh D000757 Anencephaly manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0001020 amblyopia skos:exactMatch mesh D000550 Amblyopia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0001045 intestinal atresia skos:exactMatch mesh D007409 Intestinal Atresia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0001134 essential hypertension skos:exactMatch mesh D000075222 Essential Hypertension manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0001149 microcephaly skos:exactMatch mesh D008831 Microcephaly manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0001247 social phobia skos:exactMatch mesh D000072861 Phobia, Social manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0001357 hypochromic anemia skos:exactMatch mesh D000747 Anemia, Hypochromic manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0001688 toxic optic neuropathy skos:exactMatch mesh D000081028 Toxic Optic Neuropathy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0002012 methylmalonic acidemia skos:exactMatch mesh C537358 Methylmalonic acidemia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0002648 mammary Paget disease skos:exactMatch mesh D010144 Paget's Disease, Mammary manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0005607 chronic bronchitis skos:exactMatch mesh D029481 Bronchitis, Chronic manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0006033 diffuse intrinsic pontine glioma skos:exactMatch mesh D000080443 Diffuse Intrinsic Pontine Glioma manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0006094 Askin tumor skos:exactMatch mesh C563168 Askin Tumor manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007082 alopecia areata 1 skos:exactMatch mesh C566303 Alopecia Areata 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007088 Alzheimer disease type 1 skos:exactMatch mesh C536594 Alzheimer disease type 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007110 Diamond-Blackfan anemia 1 skos:exactMatch mesh C567302 Diamond-Blackfan Anemia 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007329 cirrhosis, familial skos:exactMatch mesh C566123 Cirrhosis, Familial manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007390 coumarin resistance skos:exactMatch mesh C563039 Coumarin Resistance manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007440 major affective disorder 1 skos:exactMatch mesh C565111 Major Affective Disorder 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007459 dilution, pigmentary skos:exactMatch mesh C566872 Dilution, Pigmentary manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007547 epidermoid cysts skos:exactMatch mesh D004814 Epidermal Cyst manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007578 esterase B skos:exactMatch mesh C049262 esterase B manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007817 IgE responsiveness, atopic skos:exactMatch mesh C564133 Ige Responsiveness, Atopic manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007909 familial multiple lipomatosis skos:exactMatch mesh D000071070 Familial Multiple Lipomatosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0007942 Mammastatin skos:exactMatch mesh C060120 mammastatin manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008062 narcolepsy 1 skos:exactMatch mesh C563534 Narcolepsy 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008163 otofaciocervical syndrome skos:exactMatch mesh C563481 Otofaciocervical Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008197 parietal foramina 1 skos:exactMatch mesh C566827 Parietal Foramina 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008273 actinic prurigo skos:exactMatch mesh C566780 Actinic Prurigo manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008328 glaucoma 1, open angle, P skos:exactMatch mesh C566748 Glaucoma 1, Open Angle, P manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008426 Shprintzen-Goldberg syndrome skos:exactMatch mesh C537328 Shprintzen Golberg craniosynostosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008527 tarsal coalition skos:exactMatch mesh D000070604 Tarsal Coalition manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008597 trichorhinophalangeal syndrome, type III skos:exactMatch mesh C566033 Trichorhinophalangeal Syndrome, Type III manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008612 tuberous sclerosis 1 skos:exactMatch mesh C565346 Tuberous Sclerosis 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008734 adrenocortical carcinoma, hereditary skos:exactMatch mesh C565972 Adrenocortical Carcinoma, Hereditary manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0008738 aganglionosis, total intestinal skos:exactMatch mesh C538058 Aganglionosis, total intestinal manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009003 achromatopsia 2 skos:exactMatch mesh C536128 Achromatopsia 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009130 Dyggve-Melchior-Clausen disease skos:exactMatch mesh C535726 Dyggve-Melchior-Clausen syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009175 eosinophilic fasciitis skos:exactMatch mesh C562487 Eosinophilic Fasciitis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009202 Thakker-Donnai syndrome skos:exactMatch mesh C536503 Thakker Donnai syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009220 visceral steatosis, congenital skos:exactMatch mesh C536351 Visceral Steatosis, Congenital manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009263 gapo syndrome skos:exactMatch mesh C535642 Growth retardation, Alopecia, Pseudoanodontia and Optic atrophy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009488 keratoconus posticus circumscriptus skos:exactMatch mesh C536151 Keratoconus posticus circumscriptus manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009503 pyruvate dehydrogenase E3-binding protein deficiency skos:exactMatch mesh C565447 Pyruvate Dehydrogenase E3-Binding Protein Deficiency manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009526 fibular aplasia, tibial campomelia, and oligosyndactyly syndrome skos:exactMatch mesh C565436 Fibular Aplasia, Tibial Campomelia, and Oligosyndactyly Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009573 megaepiphyseal dwarfism skos:exactMatch mesh C536140 Megaepiphyseal dwarfism manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009612 methylmalonic aciduria due to methylmalonyl-CoA mutase deficiency skos:exactMatch mesh C565390 Methylmalonic Aciduria due to Methylmalonyl-CoA Mutase Deficiency manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009630 microphthalmia, isolated, with coloboma 4 skos:exactMatch mesh C565378 Microphthalmia, Isolated, with Coloboma 4 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009649 moyamoya disease 1 skos:exactMatch mesh C536991 Moyamoya disease 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009703 myopathy with abnormal lipid metabolism skos:exactMatch mesh C562935 Myopathy with Abnormal Lipid Metabolism manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009766 oculocerebral hypopigmentation syndrome of Preus skos:exactMatch mesh C537866 Oculocerebral hypopigmentation syndrome type Preus manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009853 Imerslund-Grasbeck syndrome skos:exactMatch mesh C538556 Imerslund-Grasbeck syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0009878 pituitary hormone deficiency, combined, 2 skos:exactMatch mesh C563172 Pituitary Hormone Deficiency, Combined, 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010056 spinal muscular atrophy, type IV skos:exactMatch mesh C563948 Spinal Muscular Atrophy, Type IV manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010103 teeth, fused skos:exactMatch mesh D005671 Fused Teeth manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010131 thyroid hormone resistance, generalized, autosomal recessive skos:exactMatch mesh C567936 Thyroid Hormone Resistance, Generalized, Autosomal Recessive manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010186 vitamin D-dependent rickets, type 2A skos:exactMatch mesh C562794 Vitamin D-Dependent Rickets, Type 2A manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010201 Winchester syndrome skos:exactMatch mesh C536709 Winchester syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010261 microphthalmia, syndromic 2 skos:exactMatch mesh C537465 Microphthalmia, syndromic 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010391 angioma serpiginosum, X-linked skos:exactMatch mesh C536366 Angioma serpiginosum, X-linked manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010535 Bazex-Dupre-Christol syndrome skos:exactMatch mesh C537663 Bazex-Dupre-Christol syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010623 ichthyosis and male hypogonadism skos:exactMatch mesh C537365 Ichthyosis and male hypogonadism manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010644 proteinuria, low molecular weight, with hypercalciuria and nephrocalcinosis skos:exactMatch mesh C545036 Low Molecular Weight Proteinuria with Hypercalciuria and Nephrocalcinosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010693 nystagmus 1, congenital, X-linked skos:exactMatch mesh C537853 Nystagmus 1, congenital, X- linked manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010717 pyruvate dehydrogenase E1-alpha deficiency skos:exactMatch mesh C564071 Pyruvate Dehydrogenase E1 Alpha Deficiency manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010743 thrombocytopenia 1 skos:exactMatch mesh C564052 Thrombocytopenia 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0010760 XH antigen skos:exactMatch mesh C009691 Xh antigen manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0011042 Martinez-Frias syndrome skos:exactMatch mesh C563346 Martinez-Frias Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0011120 neural tube defects, folate-sensitive skos:exactMatch mesh C536409 Neural tube defect, folate-sensitive manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0011544 paragangliomas 3 skos:exactMatch mesh C565335 Paragangliomas 3 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0011713 melanoma-pancreatic cancer syndrome skos:exactMatch mesh C563985 Melanoma-Pancreatic Cancer Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0011826 glucocorticoid deficiency 2 skos:exactMatch mesh C564577 Glucocorticoid Deficiency 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0011885 tubulointerstitial nephritis and uveitis syndrome skos:exactMatch mesh C536922 Tubulointerstitial nephritis and uveitis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0011918 anxiety skos:exactMatch mesh D001007 Anxiety manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012164 Meacham syndrome skos:exactMatch mesh C538162 Meacham Winn Culler syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012214 glucocorticoid deficiency 3 skos:exactMatch mesh C563776 Glucocorticoid Deficiency 3 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012217 Bruck syndrome 2 skos:exactMatch mesh C537407 Bruck syndrome 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012252 rhabdoid tumor predisposition syndrome 1 skos:exactMatch mesh C563738 Rhabdoid Tumor Predisposition Syndrome 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012270 Tukel syndrome skos:exactMatch mesh C536925 Tukel syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012287 Stickler syndrome, type I, nonsyndromic ocular skos:exactMatch mesh C563709 Stickler Syndrome, Type I, Nonsyndromic Ocular manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012310 fibrosis of extraocular muscles, congenital, with synergistic divergence skos:exactMatch mesh C566508 Fibrosis of Extraocular Muscles, Congenital, with Synergistic Divergence manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012400 cortical dysplasia-focal epilepsy syndrome skos:exactMatch mesh C567657 Cortical Dysplasia-Focal Epilepsy Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012408 microphthalmia, isolated, with coloboma 3 skos:exactMatch mesh C566447 Microphthalmia, Isolated, with Coloboma 3 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012425 corneal dystrophy, fuchs endothelial, 2 skos:exactMatch mesh C535479 Corneal dystrophy, Fuchs' endothelial, 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012484 prosopagnosia, hereditary skos:exactMatch mesh C537242 Prosopagnosia, hereditary manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0012522 diabetes mellitus, transient neonatal, 3 skos:exactMatch mesh C566432 Diabetes Mellitus, Transient Neonatal, 3 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0013129 cone dystrophy 4 skos:exactMatch mesh C567758 Cone Dystrophy 4 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0013199 tuberous sclerosis 2 skos:exactMatch mesh C566021 Tuberous Sclerosis 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0013203 corneal dystrophy, Fuchs endothelial, 3 skos:exactMatch mesh C567678 Corneal Dystrophy, Fuchs Endothelial, 3 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0013204 corneal dystrophy, Fuchs endothelial, 4 skos:exactMatch mesh C567677 Corneal Dystrophy, Fuchs Endothelial, 4 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0013205 corneal dystrophy, fuchs endothelial, 5 skos:exactMatch mesh C567676 Corneal Dystrophy, Fuchs Endothelial, 5 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0013206 corneal dystrophy, Fuchs endothelial, 6 skos:exactMatch mesh C567675 Corneal Dystrophy, Fuchs Endothelial, 6 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0013207 corneal dystrophy, fuchs endothelial, 7 skos:exactMatch mesh C567674 Corneal Dystrophy, Fuchs Endothelial, 7 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015240 digitotalar dysmorphism skos:exactMatch mesh C565097 Digitotalar Dysmorphism manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015281 atrial standstill skos:exactMatch mesh C563984 Atrial Standstill manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015451 univentricular heart skos:exactMatch mesh D000080039 Univentricular Heart manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015467 craniosynostosis, Philadelphia type skos:exactMatch mesh C563368 Craniosynostosis, Philadelphia Type manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015564 Castleman disease skos:exactMatch mesh D005871 Castleman Disease manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015993 cone-rod dystrophy skos:exactMatch mesh D000071700 Cone-Rod Dystrophies manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0015995 melorheostosis with osteopoikilosis skos:exactMatch mesh C563593 Melorheostosis with Osteopoikilosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016101 neurolymphomatosis skos:exactMatch mesh D000077162 Neurolymphomatosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016217 mal de Debarquement skos:exactMatch mesh C537840 Mal de debarquement manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016295 neuronal ceroid lipofuscinosis skos:exactMatch mesh D009472 Neuronal Ceroid-Lipofuscinoses manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016301 congenitally corrected transposition of the great arteries skos:exactMatch mesh D000080041 Congenitally Corrected Transposition of the Great Arteries manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016366 maternal phenylketonuria skos:exactMatch mesh D017042 Phenylketonuria, Maternal manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016567 locked-in syndrome skos:exactMatch mesh D000080422 Locked-In Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016607 odontohypophosphatasia skos:exactMatch mesh C564146 Odontohypophosphatasia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016798 ataxia neuropathy spectrum skos:exactMatch mesh C579922 Ataxia Neuropathy Spectrum manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0016809 spinocerebellar ataxia with epilepsy skos:exactMatch mesh C564395 Spinocerebellar Ataxia with Epilepsy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0017161 frontotemporal dementia with motor neuron disease skos:exactMatch mesh C566288 Frontotemporal Dementia With Motor Neuron Disease manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0017169 multiple endocrine neoplasia skos:exactMatch mesh D009377 Multiple Endocrine Neoplasia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0017198 osteopetrosis skos:exactMatch mesh D010022 Osteopetrosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0017267 self-healing collodion baby skos:exactMatch mesh C565473 Self-Healing Collodion Baby manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0017349 myopericytoma skos:exactMatch mesh D000077777 Myopericytoma manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0017715 3-hydroxyacyl-CoA dehydrogenase deficiency skos:exactMatch mesh C535310 3-Hydroxyacyl-CoA Dehydrogenase Deficiency manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0017999 fatty acid hydroxylase-associated neurodegeneration skos:exactMatch mesh C580102 Fatty Acid Hydroxylase-Associated Neurodegeneration manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018072 persistent truncus arteriosus skos:exactMatch mesh D014339 Truncus Arteriosus, Persistent manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018082 aorto-ventricular tunnel skos:exactMatch mesh D000082903 Aortico-Ventricular Tunnel manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018135 oculocutaneous albinism type 1 skos:exactMatch mesh C537728 Oculocutaneous albinism type 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018154 Madelung deformity skos:exactMatch mesh C562398 Madelung Deformity manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018226 infantile epileptic-dyskinetic encephalopathy skos:exactMatch mesh C567924 Infantile Epileptic-Dyskinetic Encephalopathy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018338 activated PI3K-delta syndrome skos:exactMatch mesh C585640 Activated PI3K-delta Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018363 focal facial dermal dysplasia skos:exactMatch mesh C537068 Focal facial dermal dysplasia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018668 scedosporiosis skos:exactMatch mesh C000656924 scedosporiosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018781 KID syndrome skos:exactMatch mesh C536168 Keratitis, Ichthyosis, and Deafness (KID) Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018815 aneurysmal bone cyst skos:exactMatch mesh D017824 Bone Cysts, Aneurysmal manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018830 Kimura disease skos:exactMatch mesh D000082242 Kimura Disease manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018869 cobblestone lissencephaly skos:exactMatch mesh D054222 Cobblestone Lissencephaly manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018878 branchiootic syndrome skos:exactMatch mesh C537104 Branchiootic syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0018949 distal myopathy skos:exactMatch mesh D049310 Distal Myopathies manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019107 Rh deficiency syndrome skos:exactMatch mesh C562717 Rh Deficiency Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019155 Leydig cell hypoplasia skos:exactMatch mesh C562567 Leydig Cell Hypoplasia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019169 pyruvate dehydrogenase deficiency skos:exactMatch mesh D015325 Pyruvate Dehydrogenase Complex Deficiency Disease manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019353 Stargardt disease skos:exactMatch mesh D000080362 Stargardt Disease manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019669 hypochondrogenesis skos:exactMatch mesh C563007 Hypochondrogenesis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019760 terminal transverse defects of arm skos:exactMatch mesh C565681 Terminal Transverse Defects of Arm manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019804 tracheomalacia skos:exactMatch mesh D055090 Tracheomalacia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0019978 Robinow syndrome skos:exactMatch mesh C562492 Robinow Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0020110 pulmonary agenesis skos:exactMatch mesh C562992 Lung agenesis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0020540 ovarian gynandroblastoma skos:exactMatch mesh C538459 Ovarian gynandroblastoma manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0020756 migraine, familial hemiplegic, 1 skos:exactMatch mesh C536890 Hemiplegic migraine, familial type 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0020792 dwarfism with tall vertebrae skos:exactMatch mesh C535725 Dwarfism tall vertebrae manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0020806 sinoatrial block skos:exactMatch mesh D012848 Sinoatrial Block manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0021065 pleural neoplasm skos:exactMatch mesh D010997 Pleural Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0021092 fallopian tube neoplasm skos:exactMatch mesh D005185 Fallopian Tube Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0021106 laminopathy skos:exactMatch mesh D000083083 Laminopathies manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0021224 iris neoplasm skos:exactMatch mesh D015811 Iris Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0021234 spinal cord neoplasm skos:exactMatch mesh D013120 Spinal Cord Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0021253 gallbladder neoplasm skos:exactMatch mesh D005706 Gallbladder Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0021662 bile duct neoplasm skos:exactMatch mesh D001650 Bile Duct Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0022140 Charles bonnet syndrome skos:exactMatch mesh D000075562 Charles Bonnet Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0022208 crystal arthropathy skos:exactMatch mesh D000070657 Crystal Arthropathies manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0023203 Fuchs atrophia gyrata chorioideae et retinae skos:exactMatch mesh C538071 Fuchs atrophia gyrata chorioideae et retinae manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0023305 heavy metal poisoning skos:exactMatch mesh D000075322 Heavy Metal Poisoning manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024264 hypothyroidism, congenital, nongoitrous, 2 skos:exactMatch mesh C566852 Hypothyroidism, Congenital, Nongoitrous, 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024288 hyperbilirubinemia skos:exactMatch mesh D006932 Hyperbilirubinemia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024332 perennial allergic rhinitis skos:exactMatch mesh D012221 Rhinitis, Allergic, Perennial manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024361 circadian rhythm sleep disorder skos:exactMatch mesh D020178 Sleep Disorders, Circadian Rhythm manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024465 surfactant metabolism dysfunction, pulmonary, 2 skos:exactMatch mesh C567048 Surfactant Metabolism Dysfunction, Pulmonary, 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024543 brittle cornea syndrome 1 skos:exactMatch mesh C536192 Brittle cornea syndrome 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024559 aortic aneurysm, familial thoracic 1 skos:exactMatch mesh C562834 Aortic Aneurysm, Familial Thoracic 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024647 urolithiasis skos:exactMatch mesh D052878 Urolithiasis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0024653 skull neoplasm skos:exactMatch mesh D012888 Skull Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0026404 X inactivation, familial skewed, 1 skos:exactMatch mesh C564716 X Inactivation, Familial Skewed, 1 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0026426 X inactivation, familial skewed, 2 skos:exactMatch mesh C564572 X Inactivation, Familial Skewed, 2 manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0027091 xanthogranulomatous sialadenitis skos:exactMatch mesh C536763 Xanthogranulomatous sialadenitis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0030048 harderoporphyria skos:exactMatch mesh C562816 Harderoporphyria manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0036591 adrenal cortex neoplasm skos:exactMatch mesh D000306 Adrenal Cortex Neoplasms manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0037748 hyperlipoproteinemia skos:exactMatch mesh D006951 Hyperlipoproteinemias manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0041161 endometrial hyperplasia skos:exactMatch mesh D004714 Endometrial Hyperplasia manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0041656 ST-elevation myocardial infarction skos:exactMatch mesh D000072657 ST Elevation Myocardial Infarction manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0041751 multibacillary leprosy skos:exactMatch mesh D056006 Leprosy, Multibacillary manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0041752 paucibacillary leprosy skos:exactMatch mesh D056005 Leprosy, Paucibacillary manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0044877 paraneoplastic cerebellar degeneration skos:exactMatch mesh D020362 Paraneoplastic Cerebellar Degeneration manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0054868 meconium ileus skos:exactMatch mesh D000074270 Meconium Ileus manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100053 anaphylaxis skos:exactMatch mesh D000707 Anaphylaxis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100075 jaw fracture skos:exactMatch mesh D007572 Jaw Fractures manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100115 acute flaccid myelitis skos:exactMatch mesh C000629404 acute flaccid myelitis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100120 vector-borne disease skos:exactMatch mesh D000079426 Vector Borne Diseases manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100128 coinfection skos:exactMatch mesh D060085 Coinfection manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100164 permanent neonatal diabetes mellitus skos:exactMatch mesh C563425 Diabetes Mellitus, Permanent Neonatal manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100185 immune reconstitution inflammatory syndrome skos:exactMatch mesh D054019 Immune Reconstitution Inflammatory Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100192 liver failure skos:exactMatch mesh D017093 Liver Failure manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100338 urinary tract infection skos:exactMatch mesh D014552 Urinary Tract Infections manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100345 lactose intolerance skos:exactMatch mesh D007787 Lactose Intolerance manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100350 neuronopathy, distal hereditary motor, type 5 skos:exactMatch mesh C563443 Neuronopathy, Distal Hereditary Motor, Type V manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100457 achalasia, familial esophageal skos:exactMatch mesh C536011 Achalasia, familial esophageal manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0100482 extensively drug-resistant tuberculosis skos:exactMatch mesh D054908 Extensively Drug-Resistant Tuberculosis manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0400005 refeeding syndrome skos:exactMatch mesh D055677 Refeeding Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0600008 cytokine release syndrome skos:exactMatch mesh D000080424 Cytokine Release Syndrome manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0700064 aneuploidy skos:exactMatch mesh D000782 Aneuploidy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0700065 trisomy skos:exactMatch mesh D014314 Trisomy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:0700086 uniparental disomy skos:exactMatch mesh D024182 Uniparental Disomy manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:8000018 benign paroxysmal positional vertigo skos:exactMatch mesh D065635 Benign Paroxysmal Positional Vertigo manually_reviewed orcid:0000-0001-9439-5346
+mondo MONDO:8000019 vertigo, benign recurrent, 1 skos:exactMatch mesh C567620 Vertigo, Benign Recurrent, 1 manually_reviewed orcid:0000-0001-9439-5346
ncit C1707 Voriconazole skos:exactMatch chebi CHEBI:10023 voriconazole manual orcid:0000-0003-4423-4370
ncit C2160 Proteasome Inhibitor skos:exactMatch chebi CHEBI:52726 proteasome inhibitor manual orcid:0000-0003-4423-4370
ncit C65538 Esomeprazole skos:exactMatch chebi CHEBI:50275 esomeprazole manual orcid:0000-0003-4423-4370
diff --git a/src/biomappings/resources/predictions.tsv b/src/biomappings/resources/predictions.tsv
index 39d33a71..cfcfaa06 100644
--- a/src/biomappings/resources/predictions.tsv
+++ b/src/biomappings/resources/predictions.tsv
@@ -40339,6 +40339,92 @@ mesh D066208 Olfactory Tubercle skos:exactMatch ncit C33371 Posterior Olfactory
mesh D066229 Speech Sound Disorder skos:exactMatch ncit C92564 Phonological Disorder lexical 0.95 https://github.com/biomappings/biomappings/blob/a80ed2/scripts/import_gilda_mappings.py
mesh D066259 Betacellulin skos:exactMatch hgnc 1121 BTC lexical 0.95 https://github.com/biomappings/biomappings/blob/492ede/scripts/import_gilda_mappings.py
mesh D066261 Epigen skos:exactMatch hgnc 17470 EPGN lexical 0.95 https://github.com/biomappings/biomappings/blob/492ede/scripts/import_gilda_mappings.py
+mondo MONDO:0001049 Dressler syndrome skos:exactMatch mesh C538618 Donath-Landsteiner hemolytic anemia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0001115 familial polycythemia skos:exactMatch mesh C536842 Polycythemia, primary familial and congenital lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0002814 adrenal carcinoma skos:exactMatch mesh D018268 Adrenocortical Carcinoma lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0005161 human papilloma virus infection skos:exactMatch mesh D030361 Papillomavirus Infections lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0005594 severe cutaneous adverse reaction skos:exactMatch mesh D002921 Cicatrix lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0006209 fibroblastic neoplasm skos:exactMatch mesh D005354 Fibrosarcoma lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007028 rotator cuff syndrome skos:exactMatch mesh D000070636 Rotator Cuff Injuries lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007116 hereditary neurocutaneous angioma skos:exactMatch mesh C536364 Angioma hereditary neurocutaneous lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007323 Chondronectin skos:exactMatch mesh C029172 chondronectin protein, human lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007608 desmoid tumor skos:exactMatch mesh D018222 Fibromatosis, Aggressive lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007614 congenital fibrosis of extraocular muscles skos:exactMatch mesh C580012 Congenital Fibrosis of the Extraocular Muscles lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007767 hyperparathyroidism 1 skos:exactMatch mesh C564166 Hyperparathyroidism 1 lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007794 hypogonadotropic hypogonadism 7 with or without anosmia skos:exactMatch mesh C562785 Idiopathic Hypogonadotropic Hypogonadism lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0007876 laryngeal abductor paralysis skos:exactMatch mesh C536354 Vocal cord dysfunction familial lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0008161 otodental syndrome skos:exactMatch mesh C563482 Otodental Dysplasia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0008718 Morvan syndrome skos:exactMatch mesh D020385 Myokymia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0008751 corticosterone methyloxidase type 1 deficiency skos:exactMatch mesh C537806 18-Hydroxylase deficiency lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0008888 Williams-Campbell syndrome skos:exactMatch mesh D055089 Tracheobronchomalacia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0008990 cleft larynx, posterior skos:exactMatch mesh C537851 Novak syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0009097 persistent hyperplastic primary vitreous, autosomal recessive skos:exactMatch mesh C566966 Persistent Hyperplastic Primary Vitreous, Autosomal Recessive lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0009336 hemosiderosis, pulmonary, with deficiency of gamma-a globulin skos:exactMatch mesh C536281 Idiopathic pulmonary hemosiderosis lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0009349 holoprosencephaly 1 skos:exactMatch mesh C562573 cyclopia sequence lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0009606 methemoglobinemia due to deficiency of methemoglobin reductase skos:exactMatch mesh C537841 NADH cytochrome B5 reductase deficiency lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0009622 Jawad syndrome skos:exactMatch mesh C567101 Microcephaly with Mental Retardation and Digital Anomalies lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0009809 multicentric osteolysis, nodulosis, and arthropathy skos:exactMatch mesh C536051 Osteolysis hereditary multicentric lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0010323 Atkin-Flaitz syndrome skos:exactMatch mesh C538195 Atkin syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0010746 thumbs, congenital Clasped skos:exactMatch mesh C562949 Adducted Thumbs Syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0010778 cyclic vomiting syndrome skos:exactMatch mesh C536228 Familial cyclic vomiting syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0010831 familial caudal dysgenesis skos:exactMatch mesh C535879 Rudd Klimek syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0010880 telangiectasia, hereditary hemorrhagic, type 2 skos:exactMatch mesh C537139 Osler-rendu-weber syndrome 2 lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0011232 migraine, familial hemiplegic, 2 skos:exactMatch mesh C537246 Hemiplegic migraine, familial type 2 lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0011898 Charcot-Marie-Tooth disease, axonal, with vocal cord paresis, autosomal recessive skos:exactMatch mesh C539595 Charcot-Marie-Tooth disease, Type 4A, axonal form lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0012548 Kostmann syndrome skos:exactMatch mesh C537592 Neutropenia, Severe Congenital, Autosomal Recessive 3 lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0012914 chromosome 1q21.1 deletion syndrome skos:exactMatch mesh C567291 Chromosome 1q21.1 Deletion Syndrome, 1.35-Mb lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0012937 Diamond-Blackfan anemia 6 skos:exactMatch mesh C538442 Aase Smith syndrome 2 lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0013996 focal facial dermal dysplasia type II skos:exactMatch mesh C536385 Facial ectodermal dysplasia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0014945 myopathy, distal, with rimmed vacuoles skos:exactMatch mesh C536816 Distal myopathy, Nonaka type lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0015450 triatrial heart skos:exactMatch mesh D003310 Cor Triatriatum lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0015465 craniometaphyseal dysplasia skos:exactMatch mesh C537519 Schwartz-Lelek syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0015518 infantile bilateral striatal necrosis skos:exactMatch mesh C537500 Striatonigral degeneration infantile lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0015903 hyperalphalipoproteinemia skos:exactMatch mesh C564591 Cholesteryl Ester Transfer Protein Deficiency lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0015924 pulmonary arterial hypertension skos:exactMatch mesh D000081029 Pulmonary Arterial Hypertension lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0015986 bilateral renal agenesis skos:exactMatch mesh C536482 Hereditary renal agenesis lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0016581 conotruncal heart malformations skos:exactMatch mesh C535464 Conotruncal cardiac defects lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0016743 tumor of meninges skos:exactMatch mesh D008577 Meningeal Neoplasms lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0016784 gestational trophoblastic disease skos:exactMatch mesh D031901 Gestational Trophoblastic Disease lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0017182 familial hyperinsulinism skos:exactMatch mesh D044903 Congenital Hyperinsulinism lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0017388 celiac trunk compression syndrome skos:exactMatch mesh D000074742 Median Arcuate Ligament Syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0017568 Prata-Liberal-Goncalves syndrome skos:exactMatch mesh C538180 Acrodysplasia scoliosis lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0017609 renal tubular dysgenesis skos:exactMatch mesh C537048 Allanson Pantzar McLeod syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0017672 focal palmoplantar keratoderma skos:exactMatch mesh C538682 Hyperkeratosis of the palms and soles and esophageal papillomas lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0017779 alpha-N-acetylgalactosaminidase deficiency skos:exactMatch mesh C536631 Schindler Disease, Type I lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0017816 primary systemic amyloidosis skos:exactMatch mesh D000075363 Immunoglobulin Light-chain Amyloidosis lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018037 hyper-IgE syndrome skos:exactMatch mesh D007589 Job Syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018215 paraneoplastic neurologic syndrome skos:exactMatch mesh D020361 Paraneoplastic Syndromes, Nervous System lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018274 GM3 synthase deficiency skos:exactMatch mesh C563799 Amish Infantile Epilepsy Syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018440 autosomal recessive distal renal tubular acidosis skos:exactMatch mesh C537758 Renal tubular acidosis, distal, autosomal recessive lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018484 semicircular canal dehiscence syndrome skos:exactMatch mesh D000084322 Semicircular Canal Dehiscence lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018669 snakebite envenomation skos:exactMatch mesh D012909 Snake Bites lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018755 scorpion envenomation skos:exactMatch mesh D065008 Scorpion Stings lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018772 Joubert syndrome skos:exactMatch mesh C536293 Agenesis of Cerebellar Vermis lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018787 genetic cerebral small vessel disease skos:exactMatch mesh D059345 Cerebral Small Vessel Diseases lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018801 congenital bilateral absence of vas deferens skos:exactMatch mesh C535984 Congenital bilateral aplasia of vas deferens lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018871 acute myelomonocytic leukemia M4 skos:exactMatch mesh D015479 Leukemia, Myelomonocytic, Acute lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018948 multiminicore myopathy skos:exactMatch mesh C564969 Minicore Myopathy with External Ophthalmoplegia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0018951 distal myopathy with vocal cord weakness skos:exactMatch mesh C565262 Myopathy, Distal 2 lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0019203 acute interstitial pneumonia skos:exactMatch mesh D000080203 Hamman-Rich Syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0019308 junctional epidermolysis bullosa inversa skos:exactMatch mesh C535958 Epidermolysis bullosa inversa dystrophica lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0019518 Waardenburg-Shah syndrome skos:exactMatch mesh C536467 Waardenburg syndrome, type 4 lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0019636 renal agenesis, unilateral skos:exactMatch mesh D000075529 Solitary Kidney lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0020506 ovarioleukodystrophy skos:exactMatch mesh C565836 Vanishing White Matter Leukodystrophy with Ovarian Failure lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0021053 carotid body paraganglioma skos:exactMatch mesh D002345 Carotid Body Tumor lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0021248 nervous system neoplasm skos:exactMatch mesh D009380 Neoplasms, Nerve Tissue lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0021661 coronary atherosclerosis skos:exactMatch mesh D003324 Coronary Artery Disease lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0024685 Philadelphia-positive myelogenous leukemia skos:exactMatch mesh D015466 Leukemia, Myeloid, Chronic-Phase lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0030639 Teebi hypertelorism syndrome skos:exactMatch mesh C536951 Teebi syndrome lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0031001 vitreoretinopathy with phalangeal epiphyseal dysplasia skos:exactMatch mesh C565179 Vitreoretinopathy with Phalangeal Epiphyseal Dysplasia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0031169 odontochondrodysplasia skos:exactMatch mesh C535792 Spondylometaphyseal dysplasia with dentinogenesis imperfecta lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0044746 zoonotic bacterial infection skos:exactMatch mesh D000086966 Bacterial Zoonoses lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0100001 alpha-gal syndrome skos:exactMatch mesh C000655084 red meat allergy lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0100064 tyrosine hydroxylase deficiency skos:exactMatch mesh C537537 Segawa syndrome, autosomal recessive lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0100089 GATA1-Related X-Linked Cytopenia skos:exactMatch mesh C564525 Dyserythropoietic Anemia with Thrombocytopenia lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0100184 GTP cyclohydrolase I deficiency skos:exactMatch mesh C562656 Hyperphenylalaninemia, BH4-Deficient, B lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0100234 paroxysmal familial ventricular fibrillation skos:exactMatch mesh C537182 Paroxysmal ventricular fibrillation lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0100312 vestibular ataxia skos:exactMatch mesh D000071699 Bilateral Vestibulopathy lexical 0.9 generate_mondo_mesh_mappings.py
+mondo MONDO:0700127 mosaic trisomy 21 skos:exactMatch mesh C536794 Chromosome 21, uniparental disomy of lexical 0.9 generate_mondo_mesh_mappings.py
reactome R-CEL-72172 mRNA Splicing speciesSpecific go GO:0000372 Group I intron splicing lexical 0.549371263656978 https://github.com/biomappings/biomappings/blob/0969bd/scripts/generate_pathway_mappings.py
reactome R-CEL-72172 mRNA Splicing speciesSpecific go GO:0000373 Group II intron splicing lexical 0.549371263656978 https://github.com/biomappings/biomappings/blob/0969bd/scripts/generate_pathway_mappings.py
reactome R-CFA-72172 mRNA Splicing speciesSpecific go GO:0000372 Group I intron splicing lexical 0.549371263656978 https://github.com/biomappings/biomappings/blob/0969bd/scripts/generate_pathway_mappings.py
diff --git a/src/biomappings/utils.py b/src/biomappings/utils.py
index f66ee3e0..9e6f7dd3 100644
--- a/src/biomappings/utils.py
+++ b/src/biomappings/utils.py
@@ -146,6 +146,7 @@ def check_valid_prefix_id(prefix, identifier):
norm_identifier = resource.miriam_standardize_identifier(identifier)
if norm_identifier != identifier:
raise InvalidNormIdentifier(prefix, identifier, norm_identifier)
+ return
miriam_pattern = resource.miriam.get("pattern") if resource.miriam else None
if not miriam_pattern:
pattern = resource.get_pattern_re()
| Prefix validation is inconsistent for OBOfoundry ontologies not in identifiers.org
In the case of MONDO, the validity checks are self-contradictory in that some checks look for the MONDO: prefix embedded in IDs whereas others expect it not to be there, so some tests fail in either case.
- [test_valid_mappings](https://github.com/biopragmatics/biomappings/blob/bfb1b136bdde74ec39e548569502402e7f5e8078/tests/test_validity.py#L109) expects no prefix in ID whereas
- [test_normalized_identifiers](https://github.com/biopragmatics/biomappings/blob/bfb1b136bdde74ec39e548569502402e7f5e8078/tests/test_validity.py#L61) expects the prefix to be embedded in the ID.
In this particular case, MONDO doesn't appear in identifiers.org, so the expectation that the namespace be embedded in the ID wouldn't have come from there; the identifier should probably be expected not to embed the prefix.
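A minimal sketch of the contradiction (the helper names below are illustrative stand-ins, not the actual test code): no single identifier form can satisfy both expectations at once.
```python
# Hypothetical illustration of the two conflicting expectations described above.
bare = "0016798"            # local identifier without the prefix
embedded = "MONDO:0016798"  # identifier with the MONDO: prefix embedded

def passes_valid_mappings_style(identifier: str) -> bool:
    # expects NO prefix embedded in the identifier
    return not identifier.upper().startswith("MONDO:")

def passes_normalized_identifiers_style(identifier: str) -> bool:
    # expects the prefix TO BE embedded in the identifier
    return identifier.upper().startswith("MONDO:")

for candidate in (bare, embedded):
    print(candidate,
          passes_valid_mappings_style(candidate),
          passes_normalized_identifiers_style(candidate))
# Whichever form is chosen, one of the two checks fails, which is the reported inconsistency.
```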
| 2022-05-06T17:48:19 | 0.0 | [] | [] |
|||
mochipon/pysesameos2 | mochipon__pysesameos2-59 | c97d1b5cdf12bf61552b6498c971f842b1e2684a | diff --git a/pysesameos2/chsesame2.py b/pysesameos2/chsesame2.py
index 0477e3c..a6567a8 100644
--- a/pysesameos2/chsesame2.py
+++ b/pysesameos2/chsesame2.py
@@ -166,16 +166,6 @@ def setIntention(self, intent: CHSesame2Intention) -> None:
logger.debug(f"setIntention: {intent}")
self._intention = intent
- def setDeviceStatusCallback(
- self, callback: Optional[Callable[["CHSesame2"], None]]
- ) -> None:
- """Set a device status callback.
-
- Args:
- callback (Optional[Callable[[CHSesame2], None]]): The callback to be called on device status changing.
- """
- pass
-
async def connect(self) -> None:
adv = self.getAdvertisement()
if not adv:
diff --git a/pysesameos2/chsesamebot.py b/pysesameos2/chsesamebot.py
index f04ef02..a1f5ccc 100644
--- a/pysesameos2/chsesamebot.py
+++ b/pysesameos2/chsesamebot.py
@@ -166,16 +166,6 @@ def setIntention(self, intent: CHSesame2Intention) -> None:
logger.debug(f"setIntention: {intent}")
self._intention = intent
- def setDeviceStatusCallback(
- self, callback: Optional[Callable[["CHSesameBot"], None]]
- ) -> None:
- """Set a device status callback.
-
- Args:
- callback (Optional[Callable[[CHSesameBot], None]]): The callback to be called on device status changing.
- """
- pass
-
async def connect(self) -> None:
adv = self.getAdvertisement()
if not adv:
diff --git a/pysesameos2/device.py b/pysesameos2/device.py
index 2c8c97b..60b2d59 100644
--- a/pysesameos2/device.py
+++ b/pysesameos2/device.py
@@ -1,7 +1,7 @@
import asyncio
import logging
import uuid
-from typing import TYPE_CHECKING, Any, Callable, Optional, Union
+from typing import TYPE_CHECKING, Any, Callable, Optional, TypeVar, Union
from bleak.backends.characteristic import BleakGATTCharacteristic
@@ -82,6 +82,9 @@ def setSesame2PublicKey(self, key: Union[bytes, str]) -> None:
self._sesame2PublicKey = key
+CHD = TypeVar("CHD", bound="CHDevices")
+
+
class CHDevices:
def __init__(self) -> None:
"""Generic Implementation for Candyhouse products."""
@@ -90,7 +93,7 @@ def __init__(self) -> None:
self._registered: bool = False
self._rssi: int = -100
self._deviceStatus: CHSesame2Status = CHSesame2Status.NoBleSignal # type: ignore
- self._deviceStatus_callback: Optional[Callable[[CHDevices], None]] = None
+ self._deviceStatus_callback: Optional[Callable[[CHD], None]] = None
self._advertisement: Optional[BLEAdvertisement] = None
self._key: CHDeviceKey = CHDeviceKey()
self._login_event = asyncio.Event()
@@ -264,7 +267,7 @@ def setAdvertisement(self, adv: Optional["BLEAdvertisement"] = None) -> None:
self.setDeviceStatus(CHSesame2Status.ReceivedBle)
def setDeviceStatusCallback(
- self, callback: Optional[Callable[["CHDevices"], None]]
+ self, callback: Optional[Callable[[CHD], None]]
) -> None:
"""Set a device status callback.
| [Bug]: device status callback never executed
### Description
Everything works fine with version 0.0.4. After upgrading pysesameos2 to 0.0.5, I found that the device status callback is never executed.
I checked the source code, found the `setDeviceStatusCallback` of `CHSesame2` wrongfully executes `pass` instead of calling the method extended from its parent: https://github.com/mochipon/pysesameos2/blob/c97d1b5cdf12bf61552b6498c971f842b1e2684a/pysesameos2/chsesame2.py#L169-L177
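A minimal, self-contained sketch of that failure mode (class and method names below are illustrative, not the real pysesameos2 classes): an override that only executes `pass` swallows the callback registration.
```python
# Hypothetical reduction of the reported bug: the child class overrides the
# registration method with a no-op, so the parent never stores the callback.
class BaseDevice:
    def set_callback(self, callback):
        self._callback = callback      # parent keeps a reference to the callback

    def _on_status_change(self):
        if getattr(self, "_callback", None):
            self._callback(self)       # parent would invoke it on a status change

class Device(BaseDevice):
    def set_callback(self, callback):
        pass                           # no-op override: the callback is silently dropped

device = Device()
device.set_callback(lambda dev: print("status changed"))
device._on_status_change()             # prints nothing, mirroring the behaviour reported here
```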
### Expected Behavior
After device status changed, the callback set with `setDeviceStatusCallback` should be executed.
### Pysesameos2 version (`pip show pysesameos2`)
0.0.5
### Python version (`python --version`)
3.9.2
### OS
pi os
### BlueZ version (`bluetoothctl -v`) in case of Linux
_No response_
### How to Reproduce
Just run the example code and try to lock/unlock with an MQTT command.
### Output
_No response_
| 2022-06-18T14:02:41 | 0.0 | [] | [] |
|||
eight04/ptt-mail-backup | eight04__ptt-mail-backup-11 | acbd7fc732960357954df6d15bb8ee8728355ad2 | diff --git a/ptt_mail_backup/article.py b/ptt_mail_backup/article.py
index 5416091..4ca9dc1 100644
--- a/ptt_mail_backup/article.py
+++ b/ptt_mail_backup/article.py
@@ -1,13 +1,9 @@
-import re
-import collections
import uao
from .ansi import chars_to_bytes
uao.register_uao()
-RX_FOOT = re.compile(r"(?:(\d+)~(\d+)\s*æ¬.+?)?(\d+)~(\d+)\s*è¡".encode("big5-uao"))
-
def is_default(char):
return not char.bold and char.fg == "default" and char.bg == "default"
@@ -36,33 +32,13 @@ def __init__(self, line, line_no, col_start):
self.left_truncated = bool(skip_start)
self.right_truncated = bool(skip_end)
-Foot = collections.namedtuple("Foot", ["col_start", "col_end", "line_start", "line_end"])
-
-def match_foot(s):
- match = RX_FOOT.search(s)
- if not match:
- return None
- col_start, col_end, line_start, line_end = match.groups()
-
- col_start = int(col_start) - 2 if col_start is not None else 0
- col_end = int(col_end) if col_end is not None else 78
- line_start = int(line_start) - 1
- line_end = int(line_end)
- return Foot(col_start, col_end, line_start, line_end)
-
class ArticleScreen:
- def __init__(self, lines):
- lines = list(lines)
- foot = match_foot("".join(c.data for c in lines[-1]).encode("latin-1"))
-
- self.line_start = foot.line_start
- self.line_end = foot.line_end
- self.col_start = foot.col_start
- self.col_end = foot.col_end
-
+ def __init__(self, lines, y, x):
+ self.y = y
+ self.x = x
self.lines = [
- ArticleScreenLine(line, line_no, self.col_start)
- for line_no, line in enumerate(lines[:-1], self.line_start)
+ ArticleScreenLine(line, line_no, self.x)
+ for line_no, line in enumerate(lines, self.y)
]
class Article:
@@ -93,9 +69,8 @@ def draw_line(self, line):
for col_no, char in enumerate(line.chars, line.col_no):
self.draw_char(line.line_no, col_no, char)
- def add_screen(self, lines, skip_line=None):
- screen = ArticleScreen(lines)
- assert screen.line_start <= len(self.lines)
+ def add_screen(self, lines, y, x, skip_line=None):
+ screen = ArticleScreen(lines, y, x)
for line in screen.lines:
if skip_line and skip_line(line):
continue
diff --git a/ptt_mail_backup/ptt_bot.py b/ptt_mail_backup/ptt_bot.py
index edc51b9..6ec58dc 100644
--- a/ptt_mail_backup/ptt_bot.py
+++ b/ptt_mail_backup/ptt_bot.py
@@ -9,7 +9,7 @@
from .byte_screen import ByteScreen
from .byte_stream import ByteStream
-from .article import match_foot, Article
+from .article import Article
uao.register_uao()
@@ -195,16 +195,19 @@ def update_article_config(self):
self.send("o")
self.unt(self.on_pmore_conf)
self.send("wmlq")
- self.unt(self.on_col(0))
+ self.unt(self.in_article())
def on_pmore_conf(self, _data):
return "piaip's more: pmore 2007+ è¨å®é¸é
".encode("big5-uao") in self.get_line(-9)
- def on_col(self, col_no):
- def callback(_data):
- foot = match_foot(self.get_line(-1))
- return foot and foot.col_start == col_no
- return callback
+ def article_refresh(self):
+ self.send("h")
+ self.unt(self.detect("å¼å«å°å¤©ä½¿", -1))
+ self.send("q")
+ self.unt(self.in_article())
+
+ def in_article(self):
+ return self.detect("ç覽 第", -1)
def get_article(self, index):
log.info("get %sth article", index)
@@ -234,56 +237,71 @@ def handle_animated(data):
log.info("skip animation")
self.send("n")
is_animated = True
- self.unt(self.on_col(0), on_data=handle_animated)
+ self.unt(self.in_article(), on_data=handle_animated)
log.info("enter the article. is_animated=%s", is_animated)
if is_animated:
- self.send("hq")
- self.unt(self.on_col(0))
- log.info("refresh animation page")
+ self.article_refresh()
+ log.info("refresh animation page to show ^L code")
if not self.article_configured:
self.update_article_config()
log.info("start collecting body")
+ y = 0
+ x = 0
while True:
- screen = article.add_screen(self.lines(raw=True))
- log.info("add screen %s~%s", screen.line_start, screen.line_end)
+ screen = article.add_screen([*self.lines(raw=True)][:-1], y, x)
+ log.info("add screen %s~%s", y + 1, y + self.screen.lines - 1)
indent = 0
while any(line.right_truncated for line in screen.lines):
truncated_lines = set(line.line_no for line in screen.lines if line.right_truncated)
- log.info("has truncated right")
- indent += 1
- self.send(">")
- if screen.col_start == 0:
+ log.info("has truncated lines")
+ indent_count = int(self.screen.columns / 8) - 1
+ if x == 0:
# the first indent is shorter
- next_col = 7
- else:
- next_col = screen.col_start + 8
- self.unt(self.on_col(next_col))
+ x -= 1
+ self.send(">" * indent_count)
+ x += 8 * indent_count
+ indent += indent_count
+ self.article_refresh()
screen = article.add_screen(
- self.lines(raw=True),
+ [*self.lines(raw=True)][:-1],
+ y,
+ x,
skip_line=lambda line: line.line_no not in truncated_lines
)
- log.info("move right to col %s", screen.col_start)
- # if screen.col_start == 136:
- # set_trace()
+ log.info("move right to col %s", x)
log.info("max indent %s", indent)
if indent:
self.send("<" * indent)
- self.unt(self.on_col(0))
+ self.article_refresh()
log.info("back to first col")
+ x = 0
if self.on_last_page():
break
- self.send(" ")
- self.unt(lambda _data: (
- self.on_last_page() or self.on_line(screen.line_start + self.screen.lines - 2)
- ))
+ self.send(":{}\r".format(y + 1 + self.screen.lines - 1))
+ self.article_refresh()
+ if not self.on_last_page():
+ y += self.screen.lines - 1
+ continue
+
+ # return to y and find how many lines are left
+ self.send(":{}\r".format(y + 1))
+ self.article_refresh()
+
+ scrolls = 0
+ while not self.on_last_page():
+ self.send("j")
+ self.article_refresh()
+ scrolls += 1
+
+ y += scrolls
self.send("q")
log.info("get article success")
@@ -291,7 +309,3 @@ def handle_animated(data):
def on_last_page(self):
return RX_LAST_PAGE.search(self.get_line(-1))
-
- def on_line(self, line_no):
- foot = match_foot(self.get_line(-1))
- return foot and foot.line_start == line_no
| Unable to read articles that contain control codes
```
C:\>ptt-mail-backup -d 2018-06-12 -v --all
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_7.9p1)
INFO:paramiko.transport:Authentication (publickey) failed.
INFO:paramiko.transport:Authentication (password) successful!
User: adern9
Password:
INFO:ptt_mail_backup.ptt_bot:start login
INFO:ptt_mail_backup.ptt_bot:adern9 login success
INFO:ptt_mail_backup.ptt_bot:enter main menu
Login success, try entering your mail box
INFO:ptt_mail_backup.ptt_bot:get in the mail box
INFO:ptt_mail_backup.ptt_bot:get last index
INFO:ptt_mail_backup.ptt_bot:get last index success: 200
Fetching mail: 1
INFO:ptt_mail_backup.ptt_bot:get 1th article
INFO:ptt_mail_backup.ptt_bot:title: [註åæåå]
INFO:ptt_mail_backup.ptt_bot:uncaught error, here is the last screen:
âââ®âââ®âââ®
â ââ®â ââ®â°ââ® EMailèªèéé
âââ âââ¯âââ¯â°ââ¯âââââââââââââââââââââââââ
å¨å¨ï¼adern9ä½ å¥½ï¼
æ¡è¿å å
¥ pttçè¡å ^o^
è¥éä¸è½Postè«éæ°loginä¸æ¬¡ ^_^ (è¦æ身份å)
ç¥ ä½¿ç¨æå¿«
è¨å¾å¸¸å¸¸ä¾ç©å.... ^_^
ç«é·~~
ââââââââââââââââââââââââââââââââââââââ
â
æ°ææ è«ç¨ mä¾è¨å®ãæçææã
â
æ°è¨å® Ptt æä¾å人åè¨å®, è«è³ (U)ser->(I)nfo->(C)ustomize
â
æ°æ°è Ptt åè¯åæ°è網åä½, æ¨å¯è³ udnnewsæ¿çæ¯æ¥ææ°æ°è
â
æ°å人 æ¡è¿è³æ¹è¸¢è¸¢å
ç³è«åäººæ¿ (telnet://ptt2.cc)
ç覽 第 1/1 é (100%) â²æ¤é å
§å®¹æä¾é±è®è
ä¸å,åææªå¿
ææ¨çè³æ (âq)é¢é
Traceback (most recent call last):
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\site-packages\paramiko\channel.py", line 699, in recv
out = self.in_buffer.read(nbytes, self.timeout)
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\site-packages\paramiko\buffered_pipe.py", line 164, in read
raise PipeTimeout()
paramiko.buffered_pipe.PipeTimeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Hankv4\AppData\Local\Programs\Python\Python37\Scripts\ptt-mail-backup.exe\__main__.py", line 9, in <module>
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\site-packages\ptt_mail_backup\__init__.py", line 83, in main
article = bot.get_article(i)
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\site-packages\ptt_mail_backup\ptt_bot.py", line 225, in get_article
self.unt(self.on_col(0), on_data=handle_animated)
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\site-packages\ptt_mail_backup\ptt_bot.py", line 121, in unt
data = self.channel.recv(math.inf)
File "c:\users\hankv4\appdata\local\programs\python\python37\lib\site-packages\paramiko\channel.py", line 701, in recv
raise socket.timeout()
socket.timeout
```
The control codes overwrite the display of the current line number... so there is no way to detect which line we are currently on. That is a bit of a hassle, because the program then does not know which line/page it is currently on. | 2019-08-26T07:28:22 | 0.0 | [] | []
||
superlinear-ai/raglite | superlinear-ai__raglite-44 | fdf803b39a6a992c90332ad2d777d9499e84d45d | diff --git a/src/raglite/_database.py b/src/raglite/_database.py
index 490772e..36a7fe4 100644
--- a/src/raglite/_database.py
+++ b/src/raglite/_database.py
@@ -8,7 +8,6 @@
from typing import Any
import numpy as np
-from litellm import get_model_info # type: ignore[attr-defined]
from markdown_it import MarkdownIt
from pydantic import ConfigDict
from sqlalchemy.engine import Engine, make_url
@@ -24,7 +23,7 @@
)
from raglite._config import RAGLiteConfig
-from raglite._litellm import LlamaCppPythonLLM
+from raglite._litellm import get_embedding_dim
from raglite._typing import Embedding, FloatMatrix, FloatVector, PickledObject
@@ -274,14 +273,8 @@ def create_database_engine(config: RAGLiteConfig | None = None) -> Engine:
with Session(engine) as session:
session.execute(text("CREATE EXTENSION IF NOT EXISTS vector;"))
session.commit()
- # If the user has configured a llama-cpp-python model, we ensure that LiteLLM's model info is up
- # to date by loading that LLM.
- if config.embedder.startswith("llama-cpp-python"):
- _ = LlamaCppPythonLLM.llm(config.embedder, embedding=True)
- llm_provider = "llama-cpp-python" if config.embedder.startswith("llama-cpp") else None
- model_info = get_model_info(config.embedder, custom_llm_provider=llm_provider)
- embedding_dim = model_info.get("output_vector_size") or -1
- assert embedding_dim > 0
+ # Get the embedding dimension.
+ embedding_dim = get_embedding_dim(config)
# Create all SQLModel tables.
ChunkEmbedding.set_embedding_dim(embedding_dim)
SQLModel.metadata.create_all(engine)
diff --git a/src/raglite/_litellm.py b/src/raglite/_litellm.py
index 5f4e565..c759165 100644
--- a/src/raglite/_litellm.py
+++ b/src/raglite/_litellm.py
@@ -14,6 +14,7 @@
GenericStreamingChunk,
ModelResponse,
convert_to_model_response_object,
+ get_model_info,
)
from litellm.llms.custom_httpx.http_handler import AsyncHTTPHandler, HTTPHandler
from llama_cpp import ( # type: ignore[attr-defined]
@@ -24,6 +25,8 @@
LlamaRAMCache,
)
+from raglite._config import RAGLiteConfig
+
# Reduce the logging level for LiteLLM and flashrank.
logging.getLogger("litellm").setLevel(logging.WARNING)
logging.getLogger("flashrank").setLevel(logging.WARNING)
@@ -259,3 +262,54 @@ async def astreaming( # type: ignore[misc,override] # noqa: PLR0913
{"provider": "llama-cpp-python", "custom_handler": LlamaCppPythonLLM()}
)
litellm.suppress_debug_info = True
+
+
+@cache
+def get_context_size(config: RAGLiteConfig, *, fallback: int = 2048) -> int:
+ """Get the context size for the configured LLM."""
+ # If the user has configured a llama-cpp-python model, we ensure that LiteLLM's model info is up
+ # to date by loading that LLM.
+ if config.llm.startswith("llama-cpp-python"):
+ _ = LlamaCppPythonLLM.llm(config.llm)
+ # Attempt to read the context size from LiteLLM's model info.
+ llm_provider = "llama-cpp-python" if config.llm.startswith("llama-cpp") else None
+ model_info = get_model_info(config.llm, custom_llm_provider=llm_provider)
+ max_tokens = model_info.get("max_tokens")
+ if isinstance(max_tokens, int) and max_tokens > 0:
+ return max_tokens
+ # Fall back to a default context size if the model info is not available.
+ if fallback > 0:
+ warnings.warn(
+ f"Could not determine the context size of {config.llm} from LiteLLM's model_info, using {fallback}.",
+ stacklevel=2,
+ )
+ return 2048
+ error_message = f"Could not determine the context size of {config.llm}."
+ raise ValueError(error_message)
+
+
+@cache
+def get_embedding_dim(config: RAGLiteConfig, *, fallback: bool = True) -> int:
+ """Get the embedding dimension for the configured embedder."""
+ # If the user has configured a llama-cpp-python model, we ensure that LiteLLM's model info is up
+ # to date by loading that LLM.
+ if config.embedder.startswith("llama-cpp-python"):
+ _ = LlamaCppPythonLLM.llm(config.embedder, embedding=True)
+ # Attempt to read the embedding dimension from LiteLLM's model info.
+ llm_provider = "llama-cpp-python" if config.embedder.startswith("llama-cpp") else None
+ model_info = get_model_info(config.embedder, custom_llm_provider=llm_provider)
+ embedding_dim = model_info.get("output_vector_size")
+ if isinstance(embedding_dim, int) and embedding_dim > 0:
+ return embedding_dim
+ # If that fails, fall back to embedding a single sentence and reading its embedding dimension.
+ if fallback:
+ from raglite._embed import embed_sentences
+
+ warnings.warn(
+ f"Could not determine the embedding dimension of {config.embedder} from LiteLLM's model_info, using fallback.",
+ stacklevel=2,
+ )
+ fallback_embeddings = embed_sentences(["Hello world"], config=config)
+ return fallback_embeddings.shape[1]
+ error_message = f"Could not determine the embedding dimension of {config.embedder}."
+ raise ValueError(error_message)
diff --git a/src/raglite/_rag.py b/src/raglite/_rag.py
index 643a374..81ffda2 100644
--- a/src/raglite/_rag.py
+++ b/src/raglite/_rag.py
@@ -2,11 +2,11 @@
from collections.abc import AsyncIterator, Iterator
-from litellm import acompletion, completion, get_model_info # type: ignore[attr-defined]
+from litellm import acompletion, completion
from raglite._config import RAGLiteConfig
from raglite._database import Chunk
-from raglite._litellm import LlamaCppPythonLLM
+from raglite._litellm import get_context_size
from raglite._search import hybrid_search, rerank_chunks, retrieve_segments
from raglite._typing import SearchMethod
@@ -27,15 +27,9 @@ def _max_contexts(
config: RAGLiteConfig | None = None,
) -> int:
"""Determine the maximum number of contexts for RAG."""
- # If the user has configured a llama-cpp-python model, we ensure that LiteLLM's model info is up
- # to date by loading that LLM.
+ # Get the model's context size.
config = config or RAGLiteConfig()
- if config.llm.startswith("llama-cpp-python"):
- _ = LlamaCppPythonLLM.llm(config.llm)
- # Get the model's maximum context size.
- llm_provider = "llama-cpp-python" if config.llm.startswith("llama-cpp") else None
- model_info = get_model_info(config.llm, custom_llm_provider=llm_provider)
- max_tokens = model_info.get("max_tokens") or 2048
+ max_tokens = get_context_size(config)
# Reduce the maximum number of contexts to take into account the LLM's context size.
max_context_tokens = (
max_tokens
| Azure OpenAI embeddings missing `output_vector_size`
The LiteLLM Azure OpenAI embedding configuration is missing the `output_vector_size` property. This makes the embeddings fail, because RAGLite checks for that property.
I am not sure we should rely on it in any case, as these models allow variable-length output (i.e. Matryoshka embeddings).
Example (OpenAI vs. Azure OpenAI) from [LiteLLM](https://github.com/BerriAI/litellm/blob/7ced9c8c0e3de866d20602c8d85056df7a76d61c/model_prices_and_context_window.json#L777):
```
"text-embedding-3-large": {
"max_tokens": 8191,
"max_input_tokens": 8191,
"output_vector_size": 3072,
"input_cost_per_token": 0.00000013,
"output_cost_per_token": 0.000000,
"litellm_provider": "openai",
"mode": "embedding"
},
```
```
"azure/text-embedding-3-large": {
"max_tokens": 8191,
"max_input_tokens": 8191,
"input_cost_per_token": 0.00000013,
"output_cost_per_token": 0.000000,
"litellm_provider": "azure",
"mode": "embedding"
},
```
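One possible fallback, sketched below under the assumption that the Azure entry keeps lacking the field: read `output_vector_size` when present, and otherwise probe the dimension with a single embedding call. The helper name and structure are illustrative, not RAGLite's actual API; `litellm.embedding` and `litellm.get_model_info` are existing LiteLLM entry points.
```python
# Hedged sketch of a dimension probe; the response is assumed to follow the
# OpenAI-style embeddings format that LiteLLM returns.
from litellm import embedding, get_model_info

def probe_embedding_dim(model: str) -> int:
    info = get_model_info(model)
    dim = info.get("output_vector_size")
    if isinstance(dim, int) and dim > 0:
        return dim                      # e.g. the "text-embedding-3-large" entry above
    # The "azure/text-embedding-3-large" entry above lacks the field,
    # so fall back to one cheap embedding call and measure the vector length.
    response = embedding(model=model, input=["dimension probe"])
    return len(response.data[0]["embedding"])

# Example (requires valid Azure credentials in the environment):
# probe_embedding_dim("azure/text-embedding-3-large")
```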
| 2024-11-06T16:48:54 | 0.0 | [] | [] |
|||
agronholm/typeguard | agronholm__typeguard-366 | d1b9398a28b0b904c60b67f78ca1274febdf2733 | diff --git a/docs/versionhistory.rst b/docs/versionhistory.rst
index e86e675..2535171 100644
--- a/docs/versionhistory.rst
+++ b/docs/versionhistory.rst
@@ -3,6 +3,11 @@ Version history
This library adheres to `Semantic Versioning 2.0 <https://semver.org/#semantic-versioning-200>`_.
+**UNRELEASED**
+
+- Fix handling of ``typing_extensions.Literal`` on Python 3.8 and 3.9
+ when ``typing_extensions>=4.6.0`` is installed.
+
**4.0.0** (2023-05-12)
- No changes
diff --git a/src/typeguard/_checkers.py b/src/typeguard/_checkers.py
index 61b048b..ea87e0d 100644
--- a/src/typeguard/_checkers.py
+++ b/src/typeguard/_checkers.py
@@ -4,6 +4,7 @@
import inspect
import sys
import types
+import typing
import warnings
from enum import Enum
from inspect import Parameter, isclass, isfunction
@@ -32,6 +33,11 @@
)
from unittest.mock import Mock
+try:
+ import typing_extensions
+except ImportError:
+ typing_extensions = None # type: ignore[assignment]
+
from ._config import ForwardRefPolicy
from ._exceptions import TypeCheckError, TypeHintWarning
from ._memo import TypeCheckMemo
@@ -40,11 +46,7 @@
if sys.version_info >= (3, 11):
from typing import (
Annotated,
- Literal,
- LiteralString,
- Self,
TypeAlias,
- TypeGuard,
get_args,
get_origin,
get_type_hints,
@@ -55,11 +57,7 @@
else:
from typing_extensions import (
Annotated,
- Literal,
- LiteralString,
- Self,
TypeAlias,
- TypeGuard,
get_args,
get_origin,
get_type_hints,
@@ -530,6 +528,23 @@ def check_typevar(
)
+if sys.version_info >= (3, 8):
+ if typing_extensions is None:
+
+ def _is_literal_type(typ: object) -> bool:
+ return typ is typing.Literal
+
+ else:
+
+ def _is_literal_type(typ: object) -> bool:
+ return typ is typing.Literal or typ is typing_extensions.Literal
+
+else:
+
+ def _is_literal_type(typ: object) -> bool:
+ return typ is typing_extensions.Literal
+
+
def check_literal(
value: Any,
origin_type: Any,
@@ -539,7 +554,7 @@ def check_literal(
def get_literal_args(literal_args: tuple[Any, ...]) -> tuple[Any, ...]:
retval: list[Any] = []
for arg in literal_args:
- if get_origin(arg) is Literal:
+ if _is_literal_type(get_origin(arg)):
# The first check works on py3.6 and lower, the second one on py3.7+
retval.extend(get_literal_args(arg.__args__))
elif arg is None or isinstance(arg, (int, str, bytes, bool, Enum)):
@@ -782,14 +797,11 @@ def check_type_internal(
IO: check_io,
list: check_list,
List: check_list,
- Literal: check_literal,
- LiteralString: check_literal_string,
Mapping: check_mapping,
MutableMapping: check_mapping,
None: check_none,
collections.abc.Mapping: check_mapping,
collections.abc.MutableMapping: check_mapping,
- Self: check_self,
Sequence: check_sequence,
collections.abc.Sequence: check_sequence,
collections.abc.Set: check_set,
@@ -800,11 +812,27 @@ def check_type_internal(
Tuple: check_tuple,
type: check_class,
Type: check_class,
- TypeGuard: check_typeguard,
Union: check_union,
}
+if sys.version_info >= (3, 8):
+ origin_type_checkers[typing.Literal] = check_literal
if sys.version_info >= (3, 10):
origin_type_checkers[types.UnionType] = check_uniontype
+ origin_type_checkers[typing.TypeGuard] = check_typeguard
+if sys.version_info >= (3, 11):
+ origin_type_checkers.update(
+ {typing.LiteralString: check_literal_string, typing.Self: check_self}
+ )
+if typing_extensions is not None:
+ # On some Python versions, these may simply be re-exports from typing,
+ # but exactly which Python versions is subject to change,
+ # so it's best to err on the safe side
+ # and update the dictionary on all Python versions
+ # if typing_extensions is installed
+ origin_type_checkers[typing_extensions.Literal] = check_literal
+ origin_type_checkers[typing_extensions.LiteralString] = check_literal_string
+ origin_type_checkers[typing_extensions.Self] = check_self
+ origin_type_checkers[typing_extensions.TypeGuard] = check_typeguard
def builtin_checker_lookup(
| Literal checking discrepancy between typing vs typing-extensions 3.9
### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### Typeguard version
4.0
### Python version
3.9.16
### What happened?
```python
>>> from typeguard import check_type
>>> from typing import Literal
>>> check_type(42, Literal["hello"])
42
>>> from typing_extensions import Literal
>>> check_type(42, Literal["hello"])
typeguard.TypeCheckError: int is not any of ('hello')
```
### How can we reproduce the bug?
It looks like typing_extensions.Literal works fine, but typing.Literal in 3.9 checking simple int on Literal["hello"] passes incorrectly. typing extensions version I have is 4.6. My guess is issue is related to [here](https://github.com/agronholm/typeguard/blob/d1b9398a28b0b904c60b67f78ca1274febdf2733/src/typeguard/_checkers.py#L40). Unsure why Literal is imported in 3.11 from typing, but both typing.Literal and typing_extensions.Literal need to be in origin_type_checkers. Or some more type normalization needs to happen.
My guess is that other imports there, like LiteralString, may have a similar bug.
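A small check that appears to confirm this on the reported configuration (Python 3.9 with typing_extensions 4.6); the commented outputs are expectations, not verified results:
```python
# Hedged sketch: on 3.9 with typing_extensions>=4.6 the two Literal forms are
# distinct objects, so a checker table keyed on only one of them misses the other.
import typing
import typing_extensions

print(typing.Literal is typing_extensions.Literal)            # expected: False
print(typing.get_origin(typing.Literal["hello"]))             # expected: typing.Literal
print(typing.get_origin(typing_extensions.Literal["hello"]))  # expected: typing_extensions.Literal
```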
| 2023-06-02T11:13:46 | 0.0 | [] | [] |
|||
Julius2342/pyvlx | Julius2342__pyvlx-356 | 0a2836728d534b10b9a57393ada88340448d258e | diff --git a/pyvlx/opening_device.py b/pyvlx/opening_device.py
index 9b9c9a73..2a595bda 100644
--- a/pyvlx/opening_device.py
+++ b/pyvlx/opening_device.py
@@ -5,8 +5,7 @@
from .api.get_limitation import GetLimitation
from .exception import PyVLXException
from .node import Node
-from .parameter import (
- CurrentPosition, IgnorePosition, Parameter, Position, TargetPosition)
+from .parameter import CurrentPosition, IgnorePosition, Parameter, Position
if TYPE_CHECKING:
from pyvlx import PyVLX
@@ -145,7 +144,7 @@ def __str__(self) -> str:
)
async def get_limitation(self) -> GetLimitation:
- """Return limitaation."""
+ """Return limitation."""
get_limitation = GetLimitation(pyvlx=self.pyvlx, node_id=self.node_id)
await get_limitation.do_api_call()
if not get_limitation.success:
@@ -176,21 +175,22 @@ def __init__(
position_parameter=position_parameter,
)
self.orientation = Position(position_percent=0)
- self.target_orientation = TargetPosition()
- self.target_position = TargetPosition()
+ self.target_orientation = Position()
+ self.target_position = Position()
self.open_orientation_target: int = 50
self.close_orientation_target: int = 100
- async def set_position_and_orientation(self,
- position: Position,
- wait_for_completion: bool = True,
- orientation: Optional[Position] = None) -> None:
+ async def set_position_and_orientation(
+ self,
+ position: Position,
+ wait_for_completion: bool = True,
+ orientation: Optional[Position] = None) -> None:
"""Set window to desired position.
Parameters:
* position: Position object containing the current position.
* target_position: Position object holding the target position
- which allows to ajust the position while the blind is in movement
+ which allows to adjust the position while the blind is in movement
without stopping the blind (if orientation position has been changed.)
* wait_for_completion: If set, function will return
after device has reached target position.
@@ -198,7 +198,7 @@ async def set_position_and_orientation(self,
Note, that, if the position is set to 0, the orientation will be set to 0 too.
"""
- self.target_position = TargetPosition.from_position(position)
+ self.target_position = position
self.position = position
fp3: Position
@@ -227,7 +227,7 @@ async def set_position(self, position: Position, wait_for_completion: bool = Tru
Parameters:
* position: Position object containing the current position.
* target_position: Position object holding the target position
- which allows to ajust the position while the blind is in movement
+ which allows to adjust the position while the blind is in movement
without stopping the blind (if orientation position has been changed.)
* wait_for_completion: If set, function will return
after device has reached target position.
@@ -276,7 +276,7 @@ async def set_orientation(self, orientation: Position, wait_for_completion: bool
after device has reached target position.
"""
- self.target_orientation = TargetPosition.from_position(orientation)
+ self.target_orientation = orientation
self.orientation = orientation
fp3 = Position(position_percent=0)\
diff --git a/pyvlx/parameter.py b/pyvlx/parameter.py
index 2bc2c05d..c06039ee 100644
--- a/pyvlx/parameter.py
+++ b/pyvlx/parameter.py
@@ -30,7 +30,7 @@ def from_parameter(self, parameter: "Parameter") -> None:
@staticmethod
def from_int(value: int) -> bytes:
- """Create raw out of position vlaue."""
+ """Create raw out of position value."""
if not isinstance(value, int):
raise PyVLXException("value_has_to_be_int")
if not Parameter.is_valid_int(value):
@@ -201,7 +201,7 @@ def from_percent(position_percent: int) -> bytes:
@staticmethod
def to_percent(raw: bytes) -> int:
"""Create percent position value out of raw."""
- # The first byte has the vlue from 0 to 200. Ignoring the second one.
+ # The first byte has the value from 0 to 200. Ignoring the second one.
# Adding 0.5 allows a slight tolerance for devices (e.g. Velux SML) that
# do not return exactly 51200 as final position when closed.
return int(raw[0] / 2 + 0.5)
@@ -230,26 +230,12 @@ def __init__(self) -> None:
class TargetPosition(Position):
- """Class for using a target position, if another parameter is set.
-
- It is implemented by taking the target parameter value and loads it into the execution
- parameter buffer. When the target value is read, it holds for a given parameter always the
- latest stored target value about a command execution.
-
- """
+ """Class for using a target position."""
def __init__(self) -> None:
"""Initialize TargetPosition class."""
super().__init__(position=Position.TARGET)
- @staticmethod
- def from_position(from_position: Position) -> "TargetPosition":
- """Create TargetPosition from an existing position."""
- target = TargetPosition()
- target.position = from_position.position
- target.position_percent = from_position.position_percent
- return target
-
class IgnorePosition(Position):
"""The Ignore is used where a parameter in the frame is to be ignored."""
@@ -304,7 +290,7 @@ def intensity(self, intensity: int) -> None:
@property
def intensity_percent(self) -> int:
"""Intensity percent property."""
- # inclear why it returns a <property object> here
+ # unclear why it returns a <property object> here
return int(self.to_percent(self.raw))
@intensity_percent.setter
| Stop cover command throws an error
I am having problems with the changes introduced in #338.
```
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/websocket_api/commands.py", line 230, in handle_call_service
await hass.services.async_call(
File "/usr/src/homeassistant/homeassistant/core.py", line 2035, in async_call
response_data = await coro
^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/core.py", line 2072, in _execute_service
return await target(service_call)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/entity_component.py", line 235, in handle_service
return await service.entity_service_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 876, in entity_service_call
response_data = await _handle_entity_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 948, in _handle_entity_call
result = await task
^^^^^^^^^^
File "/config/custom_components/velux/cover.py", line 285, in async_stop_cover
await self.node.stop(wait_for_completion=False)
File "/usr/local/lib/python3.11/site-packages/pyvlx/opening_device.py", line 404, in stop
await self.set_position_and_orientation(
File "/usr/local/lib/python3.11/site-packages/pyvlx/opening_device.py", line 307, in set_position_and_orientation
self.target_position = TargetPosition.from_position(position)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pyvlx/parameter.py", line 277, in from_position
target.position_percent = from_position.position_percent
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pyvlx/parameter.py", line 240, in position_percent
self.raw = self.from_percent(percent=position_percent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pyvlx/parameter.py", line 91, in from_percent
raise PyVLXException("Position::percent_out_of_range")
pyvlx.exception.PyVLXException: <PyVLXException description="Position::percent_out_of_range" />
```
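Reduced to just the two classes touched by the patch above, the failure mechanism looks like this (which marker position the stop command actually passes is an assumption, but any of the special raw values behaves the same way):

```python
from pyvlx.parameter import Position, TargetPosition

# Marker values such as Position.TARGET are raw sentinels rather than real
# positions, so reading one back as a percentage yields a value above 100 and
# from_percent() raises PyVLXException("Position::percent_out_of_range"),
# which is exactly the failure shown in the traceback.
marker = Position(position=Position.TARGET)   # sentinel raw value, not a real percentage
TargetPosition.from_position(marker)          # raises on pyvlx versions that still ship from_position()
```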
| 2023-12-01T20:57:05 | 0.0 | [] | [] |
|||
brainglobe/brainglobe-segmentation | brainglobe__brainglobe-segmentation-108 | 398772a155739f1173fa69f96244db5f1735792f | diff --git a/brainreg_segment/layout/gui_elements.py b/brainreg_segment/layout/gui_elements.py
index b7c50eb..00bc343 100644
--- a/brainreg_segment/layout/gui_elements.py
+++ b/brainreg_segment/layout/gui_elements.py
@@ -1,7 +1,3 @@
-# GUI ELEMENTS
-# from napari.resources import build_icons # Contains .SVGPATH to all icons
-# for napari
-
from qtpy.QtWidgets import (
QCheckBox,
QComboBox,
@@ -16,8 +12,8 @@ def add_combobox(
layout,
label,
items,
- row,
- column=0,
+ row: int = 0,
+ column: int = 0,
label_stack=False,
callback=None,
width=150,
@@ -53,8 +49,8 @@ def add_button(
layout,
connected_function,
*,
- row: int,
- column: int,
+ row: int = 0,
+ column: int = 0,
visibility=True,
minimum_width=0,
alignment="center",
@@ -78,55 +74,9 @@ def add_button(
return button
-# def add_radiobutton(
-# label,
-# layout,
-# connected_function,
-# row,
-# column,
-# visibility=True,
-# minimum_width=0,
-# alignment="center",
-# ):
-# button = QRadioButton(label)
-# if alignment == "center":
-# pass
-# elif alignment == "left":
-# button.setStyleSheet(
-# "QRadioButton { text-align: left; padding: 0; spacing: 30px;}"
-# )
-# elif alignment == "right":
-# button.setStyleSheet(
-# "QRadioButton { text-align: right; padding: 0; spacing: 30px;}"
-# )
-
-# # Too change indicator button ... needs to dynamically retrieve icon
-# # from Napari.
-# # Icons are saved as .svg files under napari.resources SVGPATH
-# # "QRadioButton::indicator"
-# # "{"
-# # "width:16px;"
-# # "height:16px;"
-# # "}"
-# # "QRadioButton::indicator::unchecked"
-# # "{"
-# # "image: url(build_icons.SVGPATH/visibility_off.svg);"
-# # "}"
-# # "QRadioButton::indicator::checked"
-# # "{"
-# # "image: url(/opt/miniconda3/envs/analysis/lib/python3.6/site-packages/
-# napari/resources/icons/visibility.svg);"
-# # "}"
-# # )
-
-# button.setVisible(visibility)
-# button.setMinimumWidth(minimum_width)
-# layout.addWidget(button, row, column)
-# button.clicked.connect(connected_function)
-# return button
-
-
-def add_checkbox(layout, default, label, row, column=0, tooltip=None):
+def add_checkbox(
+ layout, default, label, row: int = 0, column: int = 0, tooltip=None
+):
box = QCheckBox()
box.setChecked(default)
if tooltip:
@@ -137,7 +87,15 @@ def add_checkbox(layout, default, label, row, column=0, tooltip=None):
def add_float_box(
- layout, default, minimum, maximum, label, step, row, column=0, tooltip=None
+ layout,
+ default,
+ minimum,
+ maximum,
+ label,
+ step,
+ row: int = 0,
+ column: int = 0,
+ tooltip=None,
):
box = QDoubleSpinBox()
box.setMinimum(minimum)
@@ -152,7 +110,14 @@ def add_float_box(
def add_int_box(
- layout, default, minimum, maximum, label, row, column=0, tooltip=None
+ layout,
+ default,
+ minimum,
+ maximum,
+ label,
+ row: int = 0,
+ column: int = 0,
+ tooltip=None,
):
box = QSpinBox()
box.setMinimum(minimum)
diff --git a/brainreg_segment/segment.py b/brainreg_segment/segment.py
index 94be609..ba786aa 100644
--- a/brainreg_segment/segment.py
+++ b/brainreg_segment/segment.py
@@ -277,8 +277,8 @@ def add_atlas_menu(self, layout):
layout,
None,
list_of_atlasses,
- 2,
- 0,
+ row=2,
+ column=0,
label_stack=True,
callback=self.initialise_atlas,
width=COLUMN_WIDTH,
diff --git a/brainreg_segment/segmentation_panels/regions.py b/brainreg_segment/segmentation_panels/regions.py
index 9a60072..11aea94 100644
--- a/brainreg_segment/segmentation_panels/regions.py
+++ b/brainreg_segment/segmentation_panels/regions.py
@@ -91,7 +91,7 @@ def add_region_panel(self, row):
region_layout,
self.calculate_volumes_default,
"Calculate volumes",
- 0,
+ row=0,
tooltip="Calculate and save the volume of each "
"brain region included in the segmented "
"region.",
@@ -101,7 +101,7 @@ def add_region_panel(self, row):
region_layout,
self.summarise_volumes_default,
"Summarise volumes",
- 1,
+ row=1,
tooltip="Summarise each segmented region "
"(e.g. center, volume etc.).",
)
diff --git a/brainreg_segment/segmentation_panels/tracks.py b/brainreg_segment/segmentation_panels/tracks.py
index f3983e5..0c59012 100644
--- a/brainreg_segment/segmentation_panels/tracks.py
+++ b/brainreg_segment/segmentation_panels/tracks.py
@@ -135,7 +135,7 @@ def add_track_panel(self, row):
1,
5,
"Fit degree",
- 1,
+ row=1,
tooltip="Degree of polynomial to fit to the track.",
)
@@ -146,7 +146,7 @@ def add_track_panel(self, row):
1,
"Spline smoothing",
0.1,
- 2,
+ row=2,
tooltip="How closely or not to fit the points "
"(lower numbers fit more closely, for "
"a less smooth interpolation).",
@@ -158,7 +158,7 @@ def add_track_panel(self, row):
1,
10000,
"Spline points",
- 3,
+ row=3,
tooltip="How many points are sampled from the "
"interpolation (used for the summary).",
)
| Make row/column keyword args for other GUI elements
Extend https://github.com/brainglobe/brainreg-segment/commit/450cc576 for checkboxes etc to make the code easier to read
| 2023-06-23T10:10:27 | 0.0 | [] | [] |
|||
databricks/koalas | databricks__koalas-2029 | 060fee36f73a9abe6a2dbbf00aa0b88282ff258b | diff --git a/databricks/koalas/frame.py b/databricks/koalas/frame.py
index 73ef4755b3..d5deb11218 100644
--- a/databricks/koalas/frame.py
+++ b/databricks/koalas/frame.py
@@ -44,7 +44,6 @@
TYPE_CHECKING,
)
-import matplotlib
import numpy as np
import pandas as pd
from pandas.api.types import is_list_like, is_dict_like, is_scalar
@@ -858,12 +857,12 @@ def add(self, other) -> "DataFrame":
# create accessor for Koalas specific methods.
koalas = CachedAccessor("koalas", KoalasFrameMethods)
- def hist(self, bins=10, **kwds) -> matplotlib.axes.Axes:
+ def hist(self, bins=10, **kwds):
return self.plot.hist(bins, **kwds)
hist.__doc__ = KoalasPlotAccessor.hist.__doc__
- def kde(self, bw_method=None, ind=None, **kwds) -> matplotlib.axes.Axes:
+ def kde(self, bw_method=None, ind=None, **kwds):
return self.plot.kde(bw_method, ind, **kwds)
kde.__doc__ = KoalasPlotAccessor.kde.__doc__
diff --git a/databricks/koalas/plot/__init__.py b/databricks/koalas/plot/__init__.py
index 71008b31d2..37b7b8642a 100644
--- a/databricks/koalas/plot/__init__.py
+++ b/databricks/koalas/plot/__init__.py
@@ -14,4 +14,3 @@
# limitations under the License.
#
from databricks.koalas.plot.core import * # noqa: F401
-from databricks.koalas.plot.matplotlib import * # noqa: F401
diff --git a/databricks/koalas/series.py b/databricks/koalas/series.py
index 4fcd33cb05..93b818d2f1 100644
--- a/databricks/koalas/series.py
+++ b/databricks/koalas/series.py
@@ -27,7 +27,6 @@
from itertools import chain
from typing import Any, Generic, Iterable, List, Optional, Tuple, TypeVar, Union, cast
-import matplotlib
import numpy as np
import pandas as pd
from pandas.core.accessor import CachedAccessor
@@ -3004,7 +3003,7 @@ def sample(
sample.__doc__ = DataFrame.sample.__doc__
- def hist(self, bins=10, **kwds) -> matplotlib.axes.Axes:
+ def hist(self, bins=10, **kwds):
return self.plot.hist(bins, **kwds)
hist.__doc__ = KoalasPlotAccessor.hist.__doc__
diff --git a/docs/source/getting_started/install.rst b/docs/source/getting_started/install.rst
index 6c2870cf70..a1d35eda09 100644
--- a/docs/source/getting_started/install.rst
+++ b/docs/source/getting_started/install.rst
@@ -124,7 +124,6 @@ Package Required version
`pandas` >=0.23.2
`pyspark` >=2.4.0
`pyarrow` >=0.10
-`matplotlib` >=3.0.0,<3.3.0
`numpy` >=1.14
============= ================
@@ -137,4 +136,5 @@ Package Required version
============= ================
`mlflow` >=1.0
`plotly` >=4.8
+`matplotlib` >=3.0.0,<3.3.0
============= ================
diff --git a/postBuild b/postBuild
index d65a583e85..62855a1fce 100755
--- a/postBuild
+++ b/postBuild
@@ -19,5 +19,5 @@
# Install PySpark manually because it does not exist in requirement file.
# This file is used in Binder integration.
-pip install 'pyspark>=2.4'
+pip install 'pyspark>=2.4' 'matplotlib>=3.0.0,<3.3.0' 'plotly>=4.8'
echo "export PYARROW_IGNORE_TIMEZONE=1" >> ~/.profile
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 2284159bca..4664c57133 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -1,12 +1,12 @@
# Dependencies in Koalas. When you update don't forget to update setup.py and install.rst in docs.
pandas>=0.23.2,<1.2.0
pyarrow>=0.10
-matplotlib>=3.0.0,<3.3.0
numpy>=1.14,<1.20.0
# Optional dependencies in Koalas.
mlflow>=1.0
plotly>=4.8
+matplotlib>=3.0.0,<3.3.0
# Documentation build.
sphinx>=2.0.0,<3.1.0
diff --git a/setup.py b/setup.py
index a0c5828c38..3090432534 100644
--- a/setup.py
+++ b/setup.py
@@ -51,13 +51,13 @@
'spark': ['pyspark>=2.4.0'],
'mlflow': ['mlflow>=1.0'],
'plotly': ['plotly>=4.8'],
+ 'matplotlib': ['matplotlib>=3.0.0,<3.3.0'],
},
python_requires='>=3.5,<3.9',
install_requires=[
'pandas>=0.23.2,<1.2.0',
'pyarrow>=0.10',
'numpy>=1.14,<1.20.0',
- 'matplotlib>=3.0.0,<3.3.0',
],
author="Databricks",
author_email="[email protected]",
| Pandas doesn't require matplotlib, koalas shouldn't either.
Pandas will use `matplotlib` if it's available, but `matplotlib` isn't a dependency of Pandas.
We're deploying batch-processing applications for use in data pipelines, which will never try to plot anything and don't need `matplotlib`.
We are required to scan our deployments with Veracode, and Veracode raises many issues with `matplotlib` and `matplotlib`'s dependencies.
We can work around this by replacing `databricks.koalas.plot.KoalasPlotAccessor` with a `unittest.mock.MagicMock`, but that's evil. :)
Koalas shouldn't require `matplotlib`.
| Yeah, I think we can keep it as an optional dependency | 2021-02-01T12:42:11 | 0.0 | [] | [] |
||
m-beau/NeuroPyxels | m-beau__NeuroPyxels-370 | f4b93d884a1b7520dd0f5c56bf587e75101f6821 | diff --git a/npyx/__init__.py b/npyx/__init__.py
index 438ea88..47d8c1c 100644
--- a/npyx/__init__.py
+++ b/npyx/__init__.py
@@ -55,7 +55,7 @@
.h5
"""
-__version__ = "4.0.0"
+__version__ = "4.0.1"
npyx_build = "npyx[c4]" if C4_IMPORTED else "npyx"
diff --git a/npyx/spk_wvf.py b/npyx/spk_wvf.py
index 885c4e7..fd3ee87 100644
--- a/npyx/spk_wvf.py
+++ b/npyx/spk_wvf.py
@@ -24,6 +24,7 @@
import matplotlib.pyplot as plt
import numpy as np
+
from npyx.gl import get_npyx_memory, get_units
from npyx.inout import chan_map, get_binary_file_path, read_metadata
from npyx.preprocess import apply_filter, bandpass_filter, med_substract, whitening
@@ -35,7 +36,8 @@ def wvf(dp, u=None, n_waveforms=100, t_waveforms=82, selection='regular', period
spike_ids=None, wvf_batch_size=10, ignore_nwvf=True,
save=True, verbose=False, again=False,
whiten=False, med_sub=False, hpfilt=False, hpfiltf=300,
- nRangeWhiten=None, nRangeMedSub=None, ignore_ks_chanfilt=True):
+ nRangeWhiten=None, nRangeMedSub=None, ignore_ks_chanfilt=True,
+ return_corrupt_mask=False):
'''
********
Extracts a sample of waveforms from the raw data file.
@@ -93,11 +95,20 @@ def wvf(dp, u=None, n_waveforms=100, t_waveforms=82, selection='regular', period
if verbose: print("File {} found in NeuroPyxels cache.".format(fn))
return np.load(Path(dpnm,fn))
- waveforms = get_waveforms(dp, u, n_waveforms, t_waveforms, selection, periods, spike_ids, wvf_batch_size, ignore_nwvf,
- whiten, med_sub, hpfilt, hpfiltf, nRangeWhiten, nRangeMedSub, ignore_ks_chanfilt, verbose)
- # Save it
+ waveforms = get_waveforms(dp, u, n_waveforms, t_waveforms,
+ selection, periods, spike_ids, wvf_batch_size, ignore_nwvf,
+ whiten, med_sub, hpfilt, hpfiltf, nRangeWhiten, nRangeMedSub,
+ ignore_ks_chanfilt, verbose,
+ True, return_corrupt_mask)
+ if return_corrupt_mask:
+ (waveforms, corrupt_mask) = waveforms
+
+ # Memoize
if (save and (spike_ids is None)):
np.save(Path(dpnm,fn), waveforms)
+
+ if return_corrupt_mask:
+ return waveforms, corrupt_mask
return waveforms
@@ -116,7 +127,7 @@ def get_waveforms(dp, u, n_waveforms=100, t_waveforms=82, selection='regular', p
spike_ids=None, wvf_batch_size=10, ignore_nwvf=True,
whiten=0, med_sub=0, hpfilt=0, hpfiltf=300,
nRangeWhiten=None, nRangeMedSub=None, ignore_ks_chanfilt=0, verbose=False,
- med_sub_in_time=True):
+ med_sub_in_time=True, return_corrupt_mask=False):
f"{wvf.__doc__}"
# Extract and process metadata
@@ -178,9 +189,9 @@ def get_waveforms(dp, u, n_waveforms=100, t_waveforms=82, selection='regular', p
except:
print(f"WARNING it seems the binary file at {dp} is corrupted. Waveform {i} (at byte {t1}, {t1/n_channels_dat/item_size/sample_rate}s) could not be loaded.")
waveforms[i,:,:] = np.nan
- nanmask = np.isnan(waveforms[:,0,0])
- waveforms = waveforms[~nanmask,:,:]
- n_spikes -= np.sum(nanmask)
+ corrupt_mask = np.isnan(waveforms[:,0,0])
+ waveforms = waveforms[~corrupt_mask,:,:]
+ n_spikes -= np.sum(corrupt_mask)
if med_sub_in_time:
medians = np.median(waveforms, axis = 1)
waveforms = waveforms - medians[:,np.newaxis,:]
@@ -206,6 +217,9 @@ def get_waveforms(dp, u, n_waveforms=100, t_waveforms=82, selection='regular', p
# Correct voltage scaling
waveforms *= meta['bit_uV_conv_factor']
+ if return_corrupt_mask:
+ return waveforms, corrupt_mask
+
return waveforms.astype(np.float32)
def wvf_dsmatch(dp, u, n_waveforms=100, t_waveforms=82, periods='all',
@@ -356,22 +370,37 @@ def wvf_dsmatch(dp, u, n_waveforms=100, t_waveforms=82, periods='all',
spike_ids_split = spike_ids_split_all[spike_ids_subsample]
else:
spike_ids_split=spike_ids_split_all
- spike_ids_split_indices = np.arange(0,spike_ids_split.shape[0],1)
+ # spike_ids_split_indices = np.arange(0,spike_ids_split.shape[0],1)
## Extract the waveforms using the wvf function in blocks of 10 (n_waveforms_per_batch).
# After waves have been extracted, put the index of the channel with the
# max amplitude as well as the max amplitude into the peak_chan_split array
spike_ids_split = spike_ids_split.flatten()
- raw_waves = wvf(dp, u = None, n_waveforms= 100, t_waveforms = t_waveforms,
+ raw_waves, corrupt_mask = wvf(dp, u = None,
+ n_waveforms= 100, t_waveforms = t_waveforms,
selection='regular', periods=periods, spike_ids=spike_ids_split,
wvf_batch_size =wvf_batch_size , ignore_nwvf=ignore_nwvf,
save=save , verbose = verbose, again=True,
whiten = whiten, med_sub = med_sub,
hpfilt = hpfilt, hpfiltf = hpfiltf, nRangeWhiten=nRangeWhiten,
- nRangeMedSub=nRangeMedSub, ignore_ks_chanfilt=True)
+ nRangeMedSub=nRangeMedSub, ignore_ks_chanfilt=True,
+ return_corrupt_mask=True)
+
+ # Remove waveforms and spike_ids of batches with corrupt waveforms
spike_ids_split = spike_ids_split.reshape(-1,n_waveforms_per_batch)
+ if np.any(corrupt_mask):
+ reshaped_corrupt_mask = corrupt_mask.reshape(-1,n_waveforms_per_batch).copy()
+ reshaped_corrupt_mask[np.any(reshaped_corrupt_mask, axis=1)] = True # if any of the waveforms in a batch is corrupt, mark the batch as corrupt
+ corrupt_mask = reshaped_corrupt_mask.ravel()[~corrupt_mask] # match size of raw_waveforms
+ raw_waves = raw_waves[~corrupt_mask]
+
+ corrupt_batches_mask = np.any(reshaped_corrupt_mask, axis=1)
+ spike_ids_split = spike_ids_split[~corrupt_batches_mask]
+
+ # Compute mean waveforms, batch-wise
raw_waves = raw_waves.reshape(spike_ids_split.shape[0], n_waveforms_per_batch, t_waveforms, -1)
mean_waves = np.mean(raw_waves, axis = 1)
+
## Find peak channel (and store amplitude) of every batch
# only consider amplitudes on channels around original peak channel
original_peak_chan = get_peak_chan(dp, u, again=again)
@@ -380,11 +409,13 @@ def wvf_dsmatch(dp, u, n_waveforms=100, t_waveforms=82, periods='all',
amp_t_span = 20 #samples
t1, t2 = max(0,mean_waves.shape[1]//2-amp_t_span), min(mean_waves.shape[1]//2+amp_t_span, mean_waves.shape[1])
amplitudes = np.ptp(mean_waves[:,t1:t2,c_left:c_right], axis=1)
+
+ spike_ids_split_indices = np.arange(0, spike_ids_split.shape[0], 1)
batch_peak_channels = np.zeros(shape=(spike_ids_split_indices.shape[0], 3))
batch_peak_channels[:,0] = spike_ids_split_indices # store batch indices (batch = averaged 10 spikes)
batch_peak_channels[:,1] = c_left+np.argmax(amplitudes, axis = 1) # store peak channel of each batch
batch_peak_channels[:,2] = np.max(amplitudes, axis = 1) # store peak channel amplitude
-
+
# Filter out batches with too large amplitude (probably artefactual)
batch_peak_channels = batch_peak_channels[batch_peak_channels[:,2] < max_allowed_amplitude]
diff --git a/setup.py b/setup.py
index b15b71b..262fee8 100644
--- a/setup.py
+++ b/setup.py
@@ -19,7 +19,7 @@ def get_version(rel_path):
raise RuntimeError("Unable to find version string.")
-with open("README.md", "r") as readme_file:
+with open("README.md", "r", encoding="utf-8") as readme_file:
readme = readme_file.read()
requirements = [
| handling of partially corrupt binary file in wvf_dsmatch
| 2024-02-11T14:13:04 | 0.0 | [] | [] |
|||
wireviz/WireViz | wireviz__WireViz-261 | 7f33517a79b3c577f52107b12a3217dd22c4183c | diff --git a/src/wireviz/Harness.py b/src/wireviz/Harness.py
index 2f9eb641..95419a42 100644
--- a/src/wireviz/Harness.py
+++ b/src/wireviz/Harness.py
@@ -93,8 +93,8 @@ def connect(self, from_name: str, from_pin: (int, str), via_name: str, via_wire:
def create_graph(self) -> Graph:
dot = Graph()
- dot.body.append(f'// Graph generated by {APP_NAME} {__version__}')
- dot.body.append(f'// {APP_URL}')
+ dot.body.append(f'// Graph generated by {APP_NAME} {__version__}\n')
+ dot.body.append(f'// {APP_URL}\n')
dot.attr('graph', rankdir='LR',
ranksep='2',
bgcolor=wv_colors.translate_color(self.options.bgcolor, "HEX"),
| [bug] Strange output (input, cable and output on separate "lines")

I am a very new user, but when I try to run the tutorial examples (and the example yml files), they all end up the same: the input X1, wire W1 and output X2 are drawn on separate lines, which makes the diagrams very strange and hard to read compared to the nice versions in the GitHub tutorial.
This happens with every yml file I have tried, including the last one; it consistently places the inputs on the first row, the wires on the second and the outputs on the third, even when there are several inputs, wires and outputs, as in the final tutorial example.
| That is quite weird! I've never seen something like this myself.
Could you please post
- The WireViz version / GitHub branch you are using (using the `-V` CLI flag)
- Your installed GraphViz version (`dot -V`)
- The contents of the `.gv` file that gets generated along with the image you have posted above?
You can also paste the `.gv` output on https://edotor.net/ yourself to see if it gets rendered correctly there... If it appears fine, it seems like your local GraphViz installation could be to blame...
Thank you for the reply.
WireViz version 0.3.1 (Installed using pip3 today, running on Python 3.10 installed from Windows 10 store).
GraphViz version 2.49.3 (20211023.0002)
.gv file (renamed to .txt to allow it to be uploaded):
[test1.gv.txt](https://github.com/formatc1702/WireViz/files/7540987/test1.gv.txt)
It does, however, look like the png file shown on top when pasting to https://edotor.net/
I am, as you see above, running this on Windows, and I installed today, so I am not surprised if that has something to do with it. Also, Windows store installs python into a somewhat odd location I think, but it doesn't give any errors when running it, so I don't really know.
It looks like a Windows vs. Unix newline issue. Look at the first two/four lines of the `.gv` file:
```
graph {
// Graph generated by WireViz 0.3.1// https://github.com/formatc1702/WireViz graph [bgcolor="#FFFFFF" fontname=arial nodesep=0.33 rankdir=LR ranksep=2]
```
There is a missing newline between the end of the GithUb link, and the `graph []` keyword.
Edit: (and another one between version number and URL)
The code should show:
```
graph {
// Graph generated by WireViz 0.3.1
// https://github.com/formatc1702/WireViz
graph [bgcolor="#FFFFFF" fontname=arial nodesep=0.33 rankdir=LR ranksep=2]
```
and editing that line in edotor.net fixes the issue.
I generally develop and use WireViz on macOS, but I have successfully been using a custom branch, very similar to v0.3 on Windows 10 using Python 3.8.10 and GraphViz 2.38.0, with no issues.
Please have a look at `Harness.py`, lines 94 to 97:
```
def create_graph(self) -> Graph:
dot = Graph()
dot.body.append(f'// Graph generated by {APP_NAME} {__version__}')
dot.body.append(f'// {APP_URL}')
```
What happens when you add `\n` to the end of the two appended strings? Is that enough?
What happens when you add `\r\n` to simulate a full Windows-style CR+LF?
Thanks, that fixes it.
Looking a bit more, it appears that the crucial part is that "rankdir=LR" gets commented out, and without it the layout looks strange.
Looks like my last comment beat yours by two seconds, so I'm writing again in case you missed it.
Both \n and \r\n give good results with the same PNG output; the only difference I spot is an extra line in the gv file.
Thanks a lot for the help.
No problem.
We'll have to look into who is introducing the issue:
- Python 3.10 vs 3.8
- GraphViz 2.49 vs 2.38
- The graphviz Python module 0.18.2* vs 0.17
My bet is on the last one.
\* 0.18.2 is the current version on PyPI so I assume this is what you have installed.
Correct, 0.18.2
Had similar issue running in WSL
wireviz: 0.3.1
graphviz: 2.42.2
python graphviz: 0.18.2
If it can be any help | 2021-11-26T15:05:27 | 0.0 | [] | [] |
||
CrowdStrike/falcon-integration-gateway | CrowdStrike__falcon-integration-gateway-161 | 2751f3c5dd77d716d07fc1e6734b72ece6b67c6b | diff --git a/.gitignore b/.gitignore
index 5bc83a9..8b1e11e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,7 +1,7 @@
-/venv
+/**venv
*.pyc
*.swp
*.egg
*.egg-info
/config/devel.ini
-/runtest.sh
\ No newline at end of file
+/runtest.sh
diff --git a/fig/falcon_data.py b/fig/falcon_data.py
index 071bec0..ceb3c7e 100644
--- a/fig/falcon_data.py
+++ b/fig/falcon_data.py
@@ -1,4 +1,5 @@
import json
+from threading import Lock
from .falcon import Event
from .log import log
@@ -21,6 +22,7 @@ def __init__(self, falcon_api):
self._host_detail = {}
self._mdm_id = {}
self._arc_config = {}
+ self._arc_config_lock = {}
def device_details(self, sensor_id):
if not sensor_id:
@@ -41,14 +43,21 @@ def device_details(self, sensor_id):
def azure_arc_config(self, sensor_id):
if not sensor_id:
return EventDataError("Cannot fetch Azure Arc info. SensorId field is missing")
- if sensor_id not in self._arc_config:
- is_linux = self.device_details(sensor_id)['platform_name'] == 'Linux'
- path = '/var/opt/azcmagent/agentconfig.json' if is_linux else 'C:\\ProgramData\\AzureConnectedMachineAgent\\Config\\agentconfig.json'
- log.info('Fetching Azure Arc Config %s from the system %s', path, sensor_id)
- file_bytes = self.falcon_api.rtr_fetch_file(sensor_id, path)
- log.info('Fetched Azure Arc Config from the system: %s', str(file_bytes))
- self._arc_config[sensor_id] = json.loads(file_bytes)
- return self._arc_config[sensor_id]
+
+ with self._get_lock(sensor_id):
+ if sensor_id not in self._arc_config:
+ is_linux = self.device_details(sensor_id)['platform_name'] == 'Linux'
+ path = '/var/opt/azcmagent/agentconfig.json' if is_linux else 'C:\\ProgramData\\AzureConnectedMachineAgent\\Config\\agentconfig.json'
+ log.info('Fetching Azure Arc Config %s from the system %s', path, sensor_id)
+ file_bytes = self.falcon_api.rtr_fetch_file(sensor_id, path)
+ log.info('Fetched Azure Arc Config from the system: %s', str(file_bytes))
+ self._arc_config[sensor_id] = json.loads(file_bytes)
+ return self._arc_config[sensor_id]
+
+ def _get_lock(self, sensor_id):
+ if sensor_id not in self._arc_config_lock:
+ self._arc_config_lock[sensor_id] = Lock()
+ return self._arc_config_lock[sensor_id]
def mdm_identifier(self, sensor_id, event_platform):
if not sensor_id:
| Azure Arc RTR - race condition issue
## Issue
When processing multiple detection events from the same host with multiple worker threads (the default is 4), a race condition occurs in the Azure backend when the threads try to fetch the Azure Arc config for the same host at the same time.
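The fix serializes the fetch per sensor so each Arc config is only retrieved once; the general shape of that pattern is sketched below (class and method names are illustrative, not the project's actual API):

```python
import threading

class ArcConfigCache:
    """Fetch and cache one Azure Arc config per sensor, even with many worker threads."""

    def __init__(self, fetch_func):
        self._fetch = fetch_func              # callable that performs the RTR file fetch
        self._configs = {}
        self._locks = {}
        self._locks_guard = threading.Lock()  # protects the per-sensor lock table itself

    def _lock_for(self, sensor_id):
        with self._locks_guard:
            return self._locks.setdefault(sensor_id, threading.Lock())

    def get(self, sensor_id):
        # Only one worker fetches a given sensor at a time; the others block and
        # then reuse the cached result instead of starting a second RTR session.
        with self._lock_for(sensor_id):
            if sensor_id not in self._configs:
                self._configs[sensor_id] = self._fetch(sensor_id)
            return self._configs[sensor_id]
```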
| 2023-07-26T19:21:24 | 0.0 | [] | [] |
|||
roeniss/dhlottery-api | roeniss__dhlottery-api-27 | a2d6e63ac46f622a77ef4d101e0f0de01e6ecca4 | diff --git a/requirements.txt b/requirements.txt
index e3cf08f..0f73dd3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,3 +1,4 @@
requests~=2.28.2
beautifulsoup4~=4.12.0
html5lib~=1.1
+johnnydep~=1.18.0
diff --git a/src/dhapi/purchase/lotto645_controller.py b/src/dhapi/purchase/lotto645_controller.py
index 1dd7d78..5af34a3 100644
--- a/src/dhapi/purchase/lotto645_controller.py
+++ b/src/dhapi/purchase/lotto645_controller.py
@@ -22,7 +22,7 @@ def buy(self, req: Lotto645BuyRequest, quiet: bool):
def _confirm_purchase(self, req, quiet):
print(
f"""{req.format()}
-위와 같이 구매하시겠습니까? [[Y]/n] """,
+위와 같이 구매하시겠습니까? [Y/n] """,
end="",
)
diff --git a/src/dhapi/router/arg_parser.py b/src/dhapi/router/arg_parser.py
index 3007a46..75b39ee 100644
--- a/src/dhapi/router/arg_parser.py
+++ b/src/dhapi/router/arg_parser.py
@@ -2,6 +2,7 @@
import getpass
import sys
+from dhapi.router.version_checker import get_versions
from dhapi.domain_object.lotto645_buy_request import Lotto645BuyRequest
@@ -15,7 +16,8 @@ def error(self, message):
class ArgParser:
def __init__(self):
         parser = HelpOnErrorParser(description="동행복권 비공식 API", formatter_class=argparse.RawTextHelpFormatter)
- parser.add_argument("-v", "--version", action="version", version="%(prog)s 1.3.1")
+ installed_version, _ = get_versions()
+ parser.add_argument("-v", "--version", action="version", version="%(prog)s " + installed_version)
         command_subparser = parser.add_subparsers(title="명령어 구분", dest="command", required=True)
diff --git a/src/dhapi/router/router.py b/src/dhapi/router/router.py
index b100814..587dd45 100644
--- a/src/dhapi/router/router.py
+++ b/src/dhapi/router/router.py
@@ -1,10 +1,14 @@
import sys
from dhapi.purchase.lotto645_controller import Lotto645Controller
from dhapi.router.arg_parser import ArgParser
+from dhapi.router.version_checker import suggest_upgrade
def entrypoint():
sys.tracebacklimit = 0
+
+ suggest_upgrade()
+
arg_parser = ArgParser()
if arg_parser.is_buylotto645():
diff --git a/src/dhapi/router/version_checker.py b/src/dhapi/router/version_checker.py
new file mode 100644
index 0000000..5656bbb
--- /dev/null
+++ b/src/dhapi/router/version_checker.py
@@ -0,0 +1,36 @@
+from packaging import version
+from subprocess import call
+
+from johnnydep.lib import JohnnyDist
+from johnnydep import logs
+
+logs.configure_logging(verbosity=0)
+
+PACKAGE_NAME = "dhapi"
+
+
+def _upgrade():
+ call("pip install --upgrade " + PACKAGE_NAME, shell=True)
+
+def get_versions():
+ """
+ :return (installed_version, latest_version)
+ """
+ dist = JohnnyDist(PACKAGE_NAME)
+ return dist.version_installed, dist.version_latest
+
+
+
+def suggest_upgrade():
+ installed_version, latest_version = get_versions()
+ if version.parse(installed_version) != version.parse(latest_version):
+ print(
+ f"""íì¬ ì¤ì¹ë ë²ì ì ìµì ë²ì ì´ ìëëë¤. (íì¬ ë²ì : {installed_version} / ìµì ë²ì : {latest_version})
+ìµì ë²ì ì ì¤ì¹íê² ìµëê¹? [Y/n] """,
+ end="",
+ )
+ if not input().strip().lower() in ["y", "yes", ""]:
+ return
+
+ else:
+ _upgrade()
 | Consolidate version management information in one place
Currently, two separate places have to be edited every time we make a release.
Fail to purchase (reason: the current time is outside the sales hours.)
https://github.com/roeniss/dhlottery-api/blob/main/docs/CONTRIBUTING.md#배포
> Every time we release, the version information in setup.py and arg_parser.py has to be updated manually.
We need a way to unify this in a single place.
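One common way to do this, sketched below, is to stop hard-coding the number and read it from the installed package metadata instead (this is only an illustration, not necessarily the approach the project ended up choosing):

```python
# Sketch: look the version up from package metadata instead of repeating it in
# setup.py and arg_parser.py; only the packaging metadata then needs updating.
from importlib.metadata import version  # Python 3.8+

PACKAGE_NAME = "dhapi"
print(f"dhapi {version(PACKAGE_NAME)}")  # e.g. the value shown by the -v/--version flag
```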
| 2023-04-30T13:15:44 | 0.0 | [] | [] |
|||
tekktrik/circlink | tekktrik__circlink-55 | 17d3a6bb58e0069f59a8e44d6226391fa1c1462d | diff --git a/circlink/__init__.py b/circlink/__init__.py
index 623966f..8d77976 100644
--- a/circlink/__init__.py
+++ b/circlink/__init__.py
@@ -21,7 +21,14 @@
from typer import Typer, Option, Argument, Exit
from circup import find_device
from tabulate import tabulate
-from circlink.link import LINKS_DIRECTORY, APP_DIRECTORY, CircuitPythonLink
+from circlink.link import (
+ LINKS_DIRECTORY,
+ APP_DIRECTORY,
+ CircuitPythonLink,
+ ensure_links_folder,
+ ensure_ledger_file,
+ iter_ledger_entries,
+)
_TableRowEntry: TypeAlias = Tuple[
int, str, bool, pathlib.Path, pathlib.Path, bool, int, str
@@ -45,8 +52,8 @@ def _ensure_app_folder_setup() -> None:
if not os.path.exists(APP_DIRECTORY):
os.mkdir(APP_DIRECTORY)
- if not os.path.exists(LINKS_DIRECTORY):
- os.mkdir(LINKS_DIRECTORY)
+ ensure_links_folder()
+ ensure_ledger_file()
@app.command()
@@ -463,14 +470,14 @@ def about_cb() -> None:
print("Originally built with love by Tekktrik")
print("Happy hackin'!")
- Exit()
+ raise Exit()
def version_cb() -> None:
"""Display the current version of circlink"""
print(__version__)
- Exit()
+ raise Exit()
@app.callback(invoke_without_command=True)
@@ -485,6 +492,8 @@ def callback(
) -> None:
"""Display the current version of circlink"""
+ _ensure_app_folder_setup()
+
if version:
version_cb()
if about:
@@ -502,15 +511,15 @@ def reset_cb() -> None:
print("Removed circlink app directory, settngs and history deleted!")
print("These will be created on next use of circlink.")
print("Please check the integrity of any files handled by circlink.")
- Exit()
-
+ raise Exit()
-def main() -> None:
- """Main function that runs when ``circlink`` is called as a CLI"""
-
- _ensure_app_folder_setup()
- app()
[email protected]()
+def ledger() -> None:
+ """View the ledger of files controlled by links"""
-if __name__ == "__main__":
- main()
+ ledger_entries = list(iter_ledger_entries())
+ if not ledger_entries:
+ print("No files being tracked by circlink")
+ raise Exit()
+ print(tabulate(ledger_entries, headers=("Write Path", "Link", "Process ID")))
diff --git a/circlink/link.py b/circlink/link.py
index dbc7e7e..16d7da4 100644
--- a/circlink/link.py
+++ b/circlink/link.py
@@ -13,11 +13,33 @@
import json
import pathlib
import shutil
-from typing import Dict, List, Union
+import csv
+import functools
+import fcntl
+from collections import namedtuple
+from typing import Dict, List, Union, Iterator, Optional, Literal
from typer import get_app_dir, Exit
+
APP_DIRECTORY = get_app_dir("circlink")
LINKS_DIRECTORY = os.path.join(APP_DIRECTORY, "links")
+LEDGER_FILE = os.path.join(APP_DIRECTORY, "ledger.csv")
+
+LedgerEntry = namedtuple("LedgerEntry", ("filename", "link_id", "process_id"))
+
+
+def ensure_links_folder() -> None:
+ """Ensure the links folder is created"""
+
+ if not os.path.exists(LINKS_DIRECTORY):
+ os.mkdir(LINKS_DIRECTORY)
+
+
+def ensure_ledger_file() -> None:
+ """Ensure the ledger file exists, or create it if not"""
+
+ ledger_path = pathlib.Path(LEDGER_FILE)
+ ledger_path.touch(exist_ok=True)
# pylint: disable=too-many-instance-attributes
@@ -200,6 +222,7 @@ def _get_files_monitored(self):
return [file for file in all_potential if file.is_file()]
+ # pylint: disable=too-many-branches
def begin_monitoring(self) -> None:
"""Monitor the listed file(s) for changes"""
@@ -222,9 +245,16 @@ def begin_monitoring(self) -> None:
read_path_basis_str = os.path.join("..", read_path_basis_str)
for read_file in read_files:
- update_map[read_file] = read_file.stat().st_mtime
- if not self._skip_presave:
- self._copy_file(self._write_path, read_file, self.base_dir)
+ ledger_file_path = str(
+ self._get_write_filepath(self.write_path, read_file, self.base_dir)
+ )
+ if append_to_ledger(
+ LedgerEntry(ledger_file_path, self.link_id, self.process_id),
+ expect_entry=False,
+ ):
+ update_map[read_file] = read_file.stat().st_mtime
+ if not self._skip_presave:
+ self._copy_file(self._write_path, read_file, self.base_dir)
marked_delete = []
@@ -238,7 +268,16 @@ def begin_monitoring(self) -> None:
read_files = self._get_files_monitored()
new_files: List[pathlib.Path] = []
for file in read_files:
- if file not in update_map:
+ ledger_file_path = str(
+ self._get_write_filepath(self.write_path, file, self.base_dir)
+ )
+ if (
+ append_to_ledger(
+ LedgerEntry(ledger_file_path, self.link_id, self.process_id),
+ expect_entry=False,
+ )
+ and file not in update_map
+ ):
new_files.append(file)
for file in new_files:
update_map[file] = file.stat().st_mtime
@@ -261,18 +300,39 @@ def begin_monitoring(self) -> None:
# Delete marked files
for file in marked_delete:
self._delete_file(self._write_path, file, self.base_dir)
+ ledger_file_path = str(
+ self._get_write_filepath(self.write_path, file, self.base_dir)
+ )
+ ledger_entry = LedgerEntry(
+ ledger_file_path, self.link_id, self.process_id
+ )
+ remove_from_ledger(ledger_entry, expect_entry=True)
try:
del update_map[file]
except KeyError:
pass
marked_delete = []
+ # Remove files from ledger
+ for file in self._get_files_monitored():
+ ledger_entry = LedgerEntry(
+ str(file.resolve()), self.link_id, self.process_id
+ )
+ remove_from_ledger(ledger_entry, expect_entry=True)
+
self.end_flag = True
self._stopped = True
self.save_link()
+ @staticmethod
+ def _get_write_filepath(
+ write_path: pathlib.Path, read_file: pathlib.Path, base_dir: pathlib.Path
+ ) -> pathlib.Path:
+ read_file_relative = read_file.relative_to(base_dir)
+ return write_path / read_file_relative
+
+ @staticmethod
def _copy_file(
- self,
write_path: pathlib.Path,
read_file: pathlib.Path,
base_dir: pathlib.Path,
@@ -285,8 +345,8 @@ def _copy_file(
shutil.copyfile(str(read_file.resolve()), file_dest.resolve())
+ @staticmethod
def _delete_file(
- self,
write_path: pathlib.Path,
read_file: pathlib.Path,
base_dir: pathlib.Path,
@@ -297,3 +357,89 @@ def _delete_file(
if file_dest.resolve().exists():
os.remove(file_dest.resolve())
+
+
+def with_ledger(mode: str = "a"):
+ """
+ Decorator for using the ledger file; manages locking and
+ unlocking the file
+ """
+
+ def decorator_with_ledger(func):
+ """Decorator for working with the ledger file"""
+
+ @functools.wraps(func)
+ def wrapper_with_ledger(
+ entry: LedgerEntry,
+ *,
+ expect_entry: Optional[bool] = None,
+ use_lock: bool = True,
+ ) -> bool:
+ """Edit the ledger"""
+
+ with open(LEDGER_FILE, mode=mode, encoding="utf-8") as filedesc:
+
+ if use_lock:
+ fcntl.lockf(filedesc, fcntl.LOCK_EX)
+
+ if (expect_entry is None) or (
+ expect_entry == (entry.filename in iter_ledger_filenames())
+ ):
+ result = func(entry, filedesc=filedesc)
+ else:
+ result = False
+
+ if use_lock:
+ fcntl.lockf(filedesc, fcntl.LOCK_UN)
+
+ return result
+
+ return wrapper_with_ledger
+
+ return decorator_with_ledger
+
+
+@with_ledger(mode="a")
+def append_to_ledger(entry: LedgerEntry, **args) -> Literal[True]:
+ """Add a file to the ledger; returns whether the file actually
+ was added (True) or if it already existed (False)
+ """
+
+ csvwriter = csv.writer(args["filedesc"])
+ csvwriter.writerow(entry)
+ return True
+
+
+@with_ledger(mode="w")
+def remove_from_ledger(entry: LedgerEntry, **args) -> Literal[True]:
+ """Remove a file from the ledger; returns whether the file actually
+ was removed (True) or if it didn't exist (False)
+ """
+
+ csvwriter = csv.writer(args["filedesc"])
+ for existing_entry in iter_ledger_filenames(False):
+ if existing_entry != entry:
+ csvwriter.writerow(existing_entry)
+ return True
+
+
+def iter_ledger_entries(use_lock: bool = True) -> Iterator[LedgerEntry]:
+ """Iterate through ledger entries"""
+
+ with open(LEDGER_FILE, mode="r+", encoding="utf-8") as csvfile:
+ if use_lock:
+ fcntl.lockf(csvfile, fcntl.LOCK_EX)
+ csvreader = csv.reader(csvfile)
+ for ledger_entry in csvreader:
+ yield LedgerEntry(
+ ledger_entry[0], int(ledger_entry[1]), int(ledger_entry[2])
+ )
+ if use_lock:
+ fcntl.lockf(csvfile, fcntl.LOCK_UN)
+
+
+def iter_ledger_filenames(use_lock: bool = True) -> Iterator[str]:
+ """Iterate through ledger entry filenames"""
+
+ for entry in iter_ledger_entries(use_lock):
+ yield entry.filename
diff --git a/pyproject.toml b/pyproject.toml
index b6862ee..ac21824 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -42,7 +42,7 @@ classifiers = [
dynamic = ["dependencies", "optional-dependencies"]
[project.scripts]
-circlink = "circlink:main"
+circlink = "circlink:app"
[tool.setuptools]
packages = ["circlink"]
| Only allow files to be saved by one link
An alternative to #21 where only one link can manage any given file at a time, allowing other links to take control as duplicates are stopped.
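The patch above realizes this with a shared CSV ledger guarded by `fcntl` locks; stripped to its core, the claiming step looks roughly like this (the file name and row layout are illustrative):

```python
import csv
import fcntl  # POSIX-only file locking, matching the approach in the patch

LEDGER_FILE = "ledger.csv"

def claim_file(write_path: str, link_id: int) -> bool:
    """Record a claim for write_path unless another link already owns it."""
    with open(LEDGER_FILE, "a+", encoding="utf-8") as ledger:
        fcntl.lockf(ledger, fcntl.LOCK_EX)   # one process edits the ledger at a time
        ledger.seek(0)
        owned = {row[0] for row in csv.reader(ledger) if row}
        if write_path in owned:
            claimed = False                  # some other link already controls this file
        else:
            csv.writer(ledger).writerow([write_path, link_id])
            claimed = True
        fcntl.lockf(ledger, fcntl.LOCK_UN)
    return claimed
```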
| 2022-10-21T01:37:59 | 0.0 | [] | [] |
|||
cpp-linter/clang-tools-pip | cpp-linter__clang-tools-pip-88 | 4182f6fc6b3228b156f53de1a8d1e0b03fe6f265 | diff --git a/clang_tools/main.py b/clang_tools/main.py
index 9e1b7f1..b67b303 100644
--- a/clang_tools/main.py
+++ b/clang_tools/main.py
@@ -65,7 +65,7 @@ def main():
if args.uninstall:
uninstall_clang_tools(args.uninstall, args.directory)
- if args.install:
+ elif args.install:
install_clang_tools(
args.install,
args.tool,
| disable usage output when running `clang-tools --uninstall`
`clang-tools --uninstall` should verify that the uninstall actually worked; at the very least it should not print the usage text, which makes it look as if the command failed.
```bash
$ clang-tools -u 13
Uninstalling version 13 from /home/gitpod/.local/bin/
Nothing to do because `--install` and `--uninstall` was not specified.
usage: clang-tools [-h] [-i VERSION] [-t TOOL [TOOL ...]] [-d DIR] [-f] [-b] [-u VERSION]
options:
-h, --help show this help message and exit
-i VERSION, --install VERSION
Install clang-tools about a specific version.
-t TOOL [TOOL ...], --tool TOOL [TOOL ...]
Specify which tool(s) to install.
-d DIR, --directory DIR
The directory where the clang-tools are installed.
-f, --overwrite Force overwriting the symlink to the installed binary. This will only overwrite an existing symlink.
-b, --no-progress-bar
Do not display a progress bar for downloads.
-u VERSION, --uninstall VERSION
Uninstall clang-tools with specific version. This is done before any install.
```
| 2024-03-06T04:20:27 | 0.0 | [] | [] |
|||
lanl/BEE | lanl__BEE-854 | 8f0f955a06d838591b50f9cbe915c7beb9f4579c | diff --git a/beeflow/client/bee_client.py b/beeflow/client/bee_client.py
index 54578a167..3a536e0f5 100644
--- a/beeflow/client/bee_client.py
+++ b/beeflow/client/bee_client.py
@@ -206,6 +206,9 @@ def is_parent(parent, path):
path = os.path.abspath(path)
return os.path.commonpath([parent]) == os.path.commonpath([parent, path])
+ wf_path = wf_path.resolve()
+ workdir = workdir.resolve()
+
tarball_path = ""
if os.path.exists(wf_path):
# Check to see if the wf_path is a tarball or a directory. Package if directory
@@ -222,17 +225,15 @@ def is_parent(parent, path):
# Packaging in temp dir, after copying alternate cwl_main or yaml file
cwl_indir = is_parent(wf_path, main_cwl_path)
yaml_indir = is_parent(wf_path, yaml_path)
+ # Always create temp dir for the workflow
tempdir_path = pathlib.Path(tempfile.mkdtemp())
- if cwl_indir and yaml_indir:
- package_path = package(wf_path, tempdir_path)
- else:
- tempdir_wf_path = pathlib.Path(tempdir_path / wf_path.name)
- shutil.copytree(wf_path, tempdir_wf_path, dirs_exist_ok=False)
- if not cwl_indir:
- shutil.copy2(main_cwl, tempdir_wf_path)
- if not yaml_indir:
- shutil.copy2(yaml, tempdir_wf_path)
- package_path = package(tempdir_wf_path, tempdir_path)
+ tempdir_wf_path = pathlib.Path(tempdir_path / wf_name)
+ shutil.copytree(wf_path, tempdir_wf_path, dirs_exist_ok=False)
+ if not cwl_indir:
+ shutil.copy2(main_cwl, tempdir_wf_path)
+ if not yaml_indir:
+ shutil.copy2(yaml, tempdir_wf_path)
+ package_path = package(tempdir_wf_path, tempdir_path)
else:
package_path = wf_path
# Untar and parse workflow
diff --git a/beeflow/common/integration/utils.py b/beeflow/common/integration/utils.py
index 1c92213ad..a3fc45ec2 100644
--- a/beeflow/common/integration/utils.py
+++ b/beeflow/common/integration/utils.py
@@ -58,7 +58,7 @@ def __init__(self, name, path, main_cwl, job_file, workdir, containers):
self.path = path
self.main_cwl = main_cwl
self.job_file = job_file
- self.workdir = workdir
+ self.workdir = Path(workdir)
self.containers = containers
self.wf_id = None
self.tarball = None
| Enable Running Beeflow Submit from within a Workflow Directory
Currently, beeflow submit requires that the workflow directory for a submission is not the current working directory.
e.g. `beeflow submit workflow . ./wf.cwl ./job.yml .` breaks.
We'll need to figure out how to get this working. One particular issue is that the code which packages a directory into a workflow package for the workflow manager stores the intermediate tarball in the current working directory. That means the directory we're trying to tar changes while tar is writing the file into it, which makes tar very unhappy. So we'll want to move that step to a temporary directory.
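A rough sketch of that temporary-directory step is below (the helper name and archive format are illustrative, not BEE's actual code):

```python
import pathlib
import shutil
import tempfile

def package_workflow(wf_dir: pathlib.Path) -> pathlib.Path:
    """Tar a workflow directory without writing the tarball into the directory being tarred."""
    wf_dir = wf_dir.resolve()                   # safe even when wf_dir is '.', the CWD
    tmp = pathlib.Path(tempfile.mkdtemp())      # scratch space outside wf_dir
    staged = tmp / wf_dir.name
    shutil.copytree(wf_dir, staged)             # stage a copy that tar can read undisturbed
    tarball = shutil.make_archive(
        str(tmp / wf_dir.name), "gztar", root_dir=tmp, base_dir=wf_dir.name
    )
    return pathlib.Path(tarball)
```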
| 2024-06-11T20:31:53 | 0.0 | [] | [] |
|||
ubccr/coldfront | ubccr__coldfront-269 | ad92a1e855fd4d27c6a02777ec32e5915a5cab5a | diff --git a/coldfront/plugins/freeipa/management/commands/freeipa_check.py b/coldfront/plugins/freeipa/management/commands/freeipa_check.py
index 9470051ca..dee440d04 100644
--- a/coldfront/plugins/freeipa/management/commands/freeipa_check.py
+++ b/coldfront/plugins/freeipa/management/commands/freeipa_check.py
@@ -163,10 +163,26 @@ def process_user(self, user):
active_groups = []
for ua in user_allocations:
- if ua.status.name == 'Active' and ua.allocation.status.name == 'Active':
- for g in ua.allocation.get_attribute_list(UNIX_GROUP_ATTRIBUTE_NAME):
- if g not in active_groups:
- active_groups.append(g)
+ if ua.status.name != 'Active':
+ logger.info("Skipping inactive allocation to %s for user %s", ua.allocation.get_resources_as_string, user.username)
+ continue
+
+ if ua.allocation.status.name != 'Active':
+ logger.info("Skipping allocation to %s for user %s because they are not an active user", ua.allocation.get_resources_as_string, user.username)
+ continue
+
+ all_resources_inactive = True
+ for r in ua.allocation.resources.all():
+ if r.is_available:
+ all_resources_inactive = False
+
+ if all_resources_inactive:
+ logger.info("Skipping allocation to %s for user %s due to all resources being inactive", ua.allocation.get_resources_as_string, user.username)
+ continue
+
+ for g in ua.allocation.get_attribute_list(UNIX_GROUP_ATTRIBUTE_NAME):
+ if g not in active_groups:
+ active_groups.append(g)
removed_groups = []
for ua in user_allocations:
| FreeIPA plugin: attributes should not sync if resource is not active
If a resource is unmarked 'available' in the database, the coldfront freeipa plugin should not sync any active allocations for that resource.
| 2021-03-05T15:29:35 | 0.0 | [] | [] |
|||
OceanParcels/Parcels | OceanParcels__Parcels-990 | 7e7215f3f3fc67c97c810c8c44f13ea01207746a | diff --git a/environment_py3_osx.yml b/environment_py3_osx.yml
index c1b8b5258..3782f284d 100644
--- a/environment_py3_osx.yml
+++ b/environment_py3_osx.yml
@@ -25,7 +25,7 @@ dependencies:
- six>=1.10.0
- xarray>=0.10.8
- dask>=2.0
- - cftime
+ - cftime>=1.3.1
- pytest
- nbval
- scikit-learn
diff --git a/environment_py3_win.yml b/environment_py3_win.yml
index fac9825e7..876b54bdf 100644
--- a/environment_py3_win.yml
+++ b/environment_py3_win.yml
@@ -22,7 +22,7 @@ dependencies:
- six>=1.10.0
- xarray>=0.5.1
- dask>=2.0
- - cftime
+ - cftime>=1.3.1
- ipykernel<5.0
- pytest
- nbval
diff --git a/environment_py3p6_linux.yml b/environment_py3p6_linux.yml
index 0275418c4..7c90d12e3 100644
--- a/environment_py3p6_linux.yml
+++ b/environment_py3p6_linux.yml
@@ -24,7 +24,7 @@ dependencies:
- scipy>=0.16.0
- six >=1.10.0
- xarray>=0.10.8
- - cftime
+ - cftime>=1.3.1
- dask>=2.0
- pytest
- nbval
diff --git a/parcels/particlesets/baseparticleset.py b/parcels/particlesets/baseparticleset.py
index b6163aa44..5a0e16682 100644
--- a/parcels/particlesets/baseparticleset.py
+++ b/parcels/particlesets/baseparticleset.py
@@ -5,6 +5,7 @@
from datetime import timedelta as delta
from os import path
import time as time_module
+import cftime
import progressbar
@@ -350,6 +351,8 @@ def execute(self, pyfunc=AdvectionRK4, endtime=None, runtime=None, dt=1.,
raise RuntimeError('endtime must be either a datetime or a double')
if isinstance(endtime, datetime):
endtime = np.datetime64(endtime)
+ elif isinstance(endtime, cftime.datetime):
+ endtime = self.time_origin.reltime(endtime)
if isinstance(endtime, np.datetime64):
if self.time_origin.calendar is None:
raise NotImplementedError('If fieldset.time_origin is not a date, execution endtime must be a double')
diff --git a/parcels/tools/converters.py b/parcels/tools/converters.py
index 7132fbbc4..296030188 100644
--- a/parcels/tools/converters.py
+++ b/parcels/tools/converters.py
@@ -1,3 +1,4 @@
+# flake8: noqa: E999
import inspect
from datetime import timedelta as delta
from math import cos
@@ -53,9 +54,19 @@ def reltime(self, time):
return (time - self.time_origin) / np.timedelta64(1, 's')
elif self.calendar in _get_cftime_calendars():
if isinstance(time, (list, np.ndarray)):
- return np.array([(t - self.time_origin).total_seconds() for t in time])
+ try:
+ return np.array([(t - self.time_origin).total_seconds() for t in time])
+ except ValueError:
+ raise ValueError("Cannot subtract 'time' (a %s object) from a %s calendar.\n"
+ "Provide 'time' as a %s object?"
+ % (type(time), self.calendar, type(self.time_origin)))
else:
- return (time - self.time_origin).total_seconds()
+ try:
+ return (time - self.time_origin).total_seconds()
+ except ValueError:
+ raise ValueError("Cannot subtract 'time' (a %s object) from a %s calendar.\n"
+ "Provide 'time' as a %s object?"
+ % (type(time), self.calendar, type(self.time_origin)))
elif self.calendar is None:
return time - self.time_origin
else:
| Fixing error message when FieldSet calendar is a cftime.datetime object
This issue came up because GlobCurrentv3.0 has a 'julian' calendar, which means that all datetimes for the `ParticleSet` also need to be julian calendars. This is now better handled (requires `cftime>=1.3.1`)
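In practice this means the end time handed to `ParticleSet.execute` has to live in the same calendar as the fieldset's time origin; a small sketch of that calendar matching (the dates are illustrative):

```python
import cftime
from parcels.tools.converters import TimeConverter

# A GlobCurrent v3.0 fieldset carries a 'julian' time origin, so times handed to
# ParticleSet.execute() have to be julian cftime datetimes as well.
origin = TimeConverter(cftime.DatetimeJulian(2002, 1, 1))
endtime = cftime.DatetimeJulian(2002, 1, 10)
print(origin.reltime(endtime))  # 777600.0 seconds after the origin
```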
| 2021-01-26T11:00:57 | 0.0 | [] | [] |
|||
jameschapman19/cca_zoo | jameschapman19__cca_zoo-156 | 7575d3b47b8ee2c62a8f004141d09bf23055ae83 | diff --git a/cca_zoo/models/_grcca.py b/cca_zoo/models/_grcca.py
index 3cfb6760..546898c9 100644
--- a/cca_zoo/models/_grcca.py
+++ b/cca_zoo/models/_grcca.py
@@ -56,7 +56,7 @@ def fit(self, views: Iterable[np.ndarray], y=None, feature_groups=None, **kwargs
warnings.warn(f"No feature groups provided, using all features")
feature_groups = [np.ones(view.shape[1], dtype=int) for view in views]
for feature_group in feature_groups:
- assert feature_group.dtype == int, "subject_groups must be integers"
+ assert np.issubdtype(feature_group.dtype, np.integer), "feature groups must be integers"
views = self._validate_inputs(views)
self._check_params()
views, idxs = self._preprocess(views, feature_groups)
| GRCCA: Checking on feature groups fails for feature groups with np.int8
In [`cca_zoo.models.GRCCA`](https://github.com/jameschapman19/cca_zoo/blob/7575d3b47b8ee2c62a8f004141d09bf23055ae83/cca_zoo/models/_grcca.py#L58-L59) each feature group is checked for an integer data type. This however fails for `np.int8`:
Example:
```python
import numpy as np
from cca_zoo.models import GRCCA
# create two random matrices and pretend both of them would have two feature
# groups
rng = np.random.RandomState(0)
X1 = rng.random((100,4))
X2 = rng.random((100,4))
feature_groups = [np.array([0,0,1,1],dtype=np.int8),np.array([0,0,-1,-1])]
# fit GRCCA on the data
estimator = GRCCA()
estimator.fit([X1,X2],feature_groups=feature_groups)
File "/home/johannes.wiesner/.conda/envs/csp_wiesner_johannes/lib/python3.9/site-packages/cca_zoo/models/_grcca.py", line 59, in fit
assert feature_group.dtype == int, "subject_groups must be integers"
AssertionError: subject_groups must be integers
```
I ran into this error because I converted a `pandas.Series` to an integer array. This could be a typical workflow, because you might start with X1 and X2 being multi-index `pd.DataFrames` (where the first column level describes the variable names and the second level describes groups of these variables):
```python
import pandas as pd
feature_group = pd.Series(['cognition','cognition','psychopathology','psychopathology']).astype('category').cat.codes.values
feature_group.dtype
Out[32]: dtype('int8')
```
Solution:
Use a more generic check like:
```python
for feature_group in feature_groups:
assert np.issubdtype(feature_group.dtype,np.integer), "feature groups must be integers"
```
Also: change the error message from "subject groups must be integers" to "**feature** groups must be integers"
I will submit a pull-request :)
| 2022-11-23T12:58:08 | 0.0 | [] | [] |
|||
tOgg1/graphene-django-cud | tOgg1__graphene-django-cud-54 | ca2be9b64a449eba1c60b1e1b0820d8933c5e9bb | diff --git a/graphene_django_cud/converter.py b/graphene_django_cud/converter.py
index 0648ef1..3673625 100644
--- a/graphene_django_cud/converter.py
+++ b/graphene_django_cud/converter.py
@@ -78,7 +78,7 @@ def get_choices(choices):
def convert_choices_field(field, choices, required=None):
meta = field.model._meta
- name = to_camel_case("{}_{}_{}".format(meta.object_name, field.name, "Input"))
+ name = to_camel_case("{}_{}".format(meta.object_name, field.name))
choices = list(get_choices(choices))
named_choices = [(c[0], c[1]) for c in choices]
named_choices_descriptions = {c[0]: c[2] for c in choices}
@@ -106,16 +106,16 @@ def convert_django_field_with_choices(
choices = getattr(field, "choices", None)
if choices:
registry_name = to_camel_case(
- "{}_{}_{}".format(field.model._meta.object_name, field.name, "Input")
+ "{}_{}".format(field.model._meta.object_name, field.name)
)
# Fetch this from registry, if exists. We don't want to duplicate enum fields.
enum = None
if registry:
- from_registry = registry.get_converted_field(registry_name)
+ from_registry = registry.get_converted_field(field)
if from_registry:
- return from_registry(
- description=field.help_text, required=is_required(field, required)
- )
+ from_registry.kwargs['description'] = field.help_text
+ from_registry.kwargs['required'] = is_required(field, required)
+ return from_registry
converted = convert_choices_field(field, choices, required)
# Register enum fields
| Redundant type created for choice charfield
When creating a simple model with a choice charfield:
```
# Model
BAR_CHOICES = [
('OPTION1', 'Option 1'),
('OPTION2', 'Option 2'),
('OPTION3', 'Option 3')
]
class Foo(models.Model):
bar = models.CharField(max_length=32, choices=BAR_CHOICES, null=True, blank=True)
# Type
from graphene_django import DjangoObjectType as DOT
class FooType(DOT):
class Meta:
model = Foo
filter_fields: Dict[str, str] = {}
# Mutation
class CreateFoo(DjangoCreateMutation):
class Meta:
model = Foo
```
This results in two identical enum types being generated with different names, `FooBar` and `FooBarInput`. Is this intended behavior, or a valid issue?
| Hi!
This is not intentional behaviour, and should not happen, although I know we've had some issues with this before. Could you please send me which graphene, graphene-django and graphene-django-cud versions you have installed?
I see! Good to know it is not intentional. My versions are:
```
graphene-django==2.13.0
graphene-django-cud==0.6.5
```
I was taking a look at this issue today, and it looks like it might be intentional. `Input` is appended to choice field types in two places:
https://github.com/tOgg1/graphene-django-cud/blob/develop/graphene_django_cud/converter.py#L108
and here:
https://github.com/tOgg1/graphene-django-cud/blob/develop/graphene_django_cud/converter.py#L80
Tests will not run with it removed. | 2021-03-29T04:50:38 | 0.0 | [] | [] |
||
sktelecom/onot | sktelecom__onot-15 | db787c477ae69edbfe67123db715959fe2646e83 | diff --git a/.github/workflows/python-app.yml b/.github/workflows/python-app.yml
new file mode 100644
index 0000000..00f2f51
--- /dev/null
+++ b/.github/workflows/python-app.yml
@@ -0,0 +1,34 @@
+# This workflow will install Python dependencies, run tests and lint with a single version of Python
+# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python
+
+name: Python application
+
+on:
+ push:
+ branches: [ "main", "dev" ]
+
+permissions:
+ contents: read
+
+jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v3
+ - name: Set up Python 3.8
+ uses: actions/setup-python@v3
+ with:
+ python-version: "3.8.9"
+ architecture: 'x64'
+ - name: Install dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install flake8 pytes
+ pip install -r requirements.txt
+ - name: Make File
+ run: |
+ pyinstaller -w onot/gui/onot_app.py
+ - uses: actions/upload-artifact@v2
+ with:
+ name: onot
+ path: dist/*
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
index 3487d5e..cedbd23 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,6 +4,9 @@ output
# vscode
.vscode
+# idea
+.idea
+
# .DS_Store
.DS_Store
._.DS_Store
diff --git a/README.md b/README.md
index c3ef7d1..01073e5 100644
--- a/README.md
+++ b/README.md
@@ -19,6 +19,8 @@ cd ~/onot; python setup.py install
## Usage
+### Command Line
+
1. Prepare your input file. The input file is an [Excel format SPDX document](./sample/SPDXRdfExample-v2.1.xlsx), and refer to the next page for [how to prepare it](./docs/how_to_prepare.md).
2. Run onot command with two arguments.
@@ -26,13 +28,25 @@ cd ~/onot; python setup.py install
- `-o` or `--output_format` : File type of OSS notice to be generated (`html` or `text`)
- Sample output : [output/OSS_Notice_SPDX-Tools-v2.0_20221009_180948.html](https://sktelecom.github.io/compliance/OSS_Notice_Sample_Application_20221011_140301.html)
-```python
+```shell
$ onot --input sample/SPDXRdfExample-v2.1.xlsx --output_format html
```
+### GUI for windows
+
+1. Prepare your input file. The input file is an [Excel format SPDX document](./sample/SPDXRdfExample-v2.1.xlsx), and refer to the next page for [how to prepare it](./docs/how_to_prepare.md).
+
+2. Run the command below or download zip file from release assets.
+
+```shell
+$ pyinstaller -w onot/gui/onot_app.py
+```
+
+3. Run the onot_app.exe file. Executable file is located in the onot_app directory.
+
## Test
-```python
+```shell
$ python -m unittest
```
diff --git a/onot/__main__.py b/onot/__main__.py
index ecd834f..6625194 100644
--- a/onot/__main__.py
+++ b/onot/__main__.py
@@ -4,9 +4,11 @@
# SPDX-License-Identifier: Apache-2.0
from onot.tools import create_notice
+from onot.log import log_setting
def main():
+ log_setting.init()
create_notice.main()
diff --git a/onot/generating/generate.py b/onot/generating/generate.py
index eaf2982..82b571d 100644
--- a/onot/generating/generate.py
+++ b/onot/generating/generate.py
@@ -8,5 +8,5 @@ def generate_notice(doc, ext):
if ext == 'html':
file_type = html
g = file_type.Generator()
- g.generate(doc)
+ return g.generate(doc)
\ No newline at end of file
diff --git a/onot/generating/html.py b/onot/generating/html.py
index cceea68..06dc0d9 100644
--- a/onot/generating/html.py
+++ b/onot/generating/html.py
@@ -7,13 +7,17 @@
import os
import re
-from onot.generating.html_resource import *
+import sys
+import logging
from datetime import datetime
+from onot.generating.html_resource import *
+
+logger = logging.getLogger("root")
class Generator():
def __init__(self):
- print("debug:" + "Html class")
+ logger.debug("Html class")
def convert_license_expression(self, license_name):
splited = re.split(r'OR|AND|WITH', str(license_name))
@@ -98,14 +102,26 @@ def generate_html_file(self, doc, html_code):
now = datetime.now()
date_time = now.strftime("%Y%m%d_%H%M%S")
name = doc['name'].replace(' ', '_')
- filename = 'OSS_Notice_' + doc['name'].replace(' ', '_') + '_' + date_time + '.html'
- filepathname = os.path.join('output', filename)
- f = open(filepathname, 'w')
+
+ if "/Contents" in sys.executable:
+ current_path = os.path.dirname(sys.executable.split("/Contents")[0])
+ directory_name = os.path.join(current_path, "output")
+ else:
+ directory_name = os.path.abspath("output")
+
+ if not os.path.exists(directory_name):
+ os.makedirs(directory_name)
+
+ file_name = 'OSS_Notice_' + doc['name'].replace(' ', '_') + '_' + date_time + '.html'
+ file_path_name = os.path.join(directory_name, file_name)
+ f = open(file_path_name, 'w', encoding='UTF-8')
f.write(html_code)
f.close()
- print("debug: output is here - " + str(filepathname))
+ logger.debug("output is here - " + str(file_path_name))
+ return file_path_name
def generate(self, doc):
html_code = self.make_html_code(doc)
- self.generate_html_file(doc, html_code)
- print("debug: " + "generate completed")
+ file_path_name = self.generate_html_file(doc, html_code)
+ logger.debug("generate completed")
+ return file_path_name
diff --git a/onot/gui/__init__.py b/onot/gui/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/onot/gui/onot_app.py b/onot/gui/onot_app.py
new file mode 100644
index 0000000..42cf619
--- /dev/null
+++ b/onot/gui/onot_app.py
@@ -0,0 +1,83 @@
+#!/usr/bin/env python3
+# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
+# SPDX-FileCopyrightText: Copyright (c) 2022 Kakao Corp. https://www.kakaocorp.com
+#
+# SPDX-License-Identifier: Apache-2.0
+
+import logging
+import sys
+
+from PyQt6 import QtCore
+from PyQt6.QtWidgets import QApplication, QStackedWidget, QMainWindow
+
+from onot.gui.widget_finish import FinishWidget
+from onot.gui.widget_home import HomeWidget
+from onot.gui.widget_progress import ProgressWidget
+from onot.log import log_setting
+
+logger = logging.getLogger("root")
+
+
+class MainWindow(QMainWindow):
+ def __init__(self, parent=None):
+ super().__init__()
+ self.init_ui()
+
+ def init_ui(self):
+ self.setWindowTitle("onot")
+ self.setGeometry(600, 300, 600, 200)
+ self.central_widget = QStackedWidget()
+ self.setCentralWidget(self.central_widget)
+
+ self.widget_home = HomeWidget()
+ self.widget_home.btn_start.clicked.connect(self.start)
+ self.central_widget.addWidget(self.widget_home)
+
+ self.widget_progress = ProgressWidget()
+ self.widget_progress.signal_stop.connect(self.stop)
+ self.widget_progress.signal_finish.connect(self.finish)
+ self.widget_progress.signal_exception.connect(self.handle_exception)
+ self.central_widget.addWidget(self.widget_progress)
+
+ self.widget_finish = FinishWidget()
+ self.widget_finish.signal_go_home.connect(self.go_home)
+ self.central_widget.addWidget(self.widget_finish)
+
+ def start(self):
+ self.central_widget.setCurrentWidget(self.widget_progress)
+ file = self.widget_home.text_label_selected_input_file.text()
+ output_format = self.widget_home.combo_box_select_output_format.currentText()
+ self.widget_progress.create_notice(file, output_format)
+
+ @QtCore.pyqtSlot(str)
+ def stop(self, msg):
+ self.widget_finish.show_message(str(msg))
+ self.central_widget.setCurrentWidget(self.widget_finish)
+
+ @QtCore.pyqtSlot(str)
+ def finish(self, msg):
+ self.setGeometry(600, 300, 600, 400)
+ self.widget_finish.add_notice_and_show(msg)
+ self.central_widget.setCurrentWidget(self.widget_finish)
+
+ @QtCore.pyqtSlot(Exception)
+ def handle_exception(self, exception):
+ self.widget_finish.show_message(str(exception))
+ self.central_widget.setCurrentWidget(self.widget_finish)
+
+ @QtCore.pyqtSlot()
+ def go_home(self):
+ self.setGeometry(600, 300, 600, 200)
+ self.central_widget.setCurrentWidget(self.widget_home)
+
+
+def main():
+ app = QApplication(sys.argv)
+ window = MainWindow()
+ window.show()
+ sys.exit(app.exec())
+
+
+if __name__ == '__main__':
+ log_setting.init()
+ main()
\ No newline at end of file
diff --git a/onot/gui/widget_finish.py b/onot/gui/widget_finish.py
new file mode 100644
index 0000000..092cef9
--- /dev/null
+++ b/onot/gui/widget_finish.py
@@ -0,0 +1,104 @@
+#!/usr/bin/env python3
+# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
+# SPDX-FileCopyrightText: Copyright (c) 2022 Kakao Corp. https://www.kakaocorp.com
+#
+# SPDX-License-Identifier: Apache-2.0
+
+import logging
+import os.path
+
+from PyQt6 import QtCore
+from PyQt6.QtCore import QUrl, Qt
+from PyQt6.QtGui import QDesktopServices, QFileSystemModel
+from PyQt6.QtWidgets import QWidget, QGridLayout, QLabel, QPushButton, QTreeView, QStackedWidget
+
+logger = logging.getLogger("root")
+
+class MessageWidget(QWidget):
+ def __init__(self):
+ super().__init__()
+ self.init_ui()
+
+ def init_ui(self):
+ layout = QGridLayout()
+ self.setLayout(layout)
+
+ self.text_label_result = QLabel("", self)
+ layout.addWidget(self.text_label_result, 0, 0, alignment=Qt.AlignmentFlag.AlignCenter)
+
+ self.btn_go_home = QPushButton("Go home", self)
+ self.btn_go_home.setFixedWidth(180)
+ layout.addWidget(self.btn_go_home, 1, 0, alignment=Qt.AlignmentFlag.AlignCenter)
+
+ def set_message(self, msg):
+ self.text_label_result.setText(msg)
+
+class FileTreeWidget(QWidget):
+ def __init__(self):
+ super().__init__()
+ self.init_ui()
+
+ def init_ui(self):
+ layout = QGridLayout()
+ self.setLayout(layout)
+
+ self.model = QFileSystemModel()
+
+ self.tree = QTreeView()
+ self.tree.setModel(self.model)
+ self.tree.setColumnWidth(0, 400)
+ self.tree.setAlternatingRowColors(True)
+ self.tree.doubleClicked.connect(lambda index: QDesktopServices.openUrl(QUrl.fromLocalFile(self.model.filePath(index))))
+ layout.addWidget(self.tree, 0, 0, 1, 2)
+
+ self.btn_open_notice = QPushButton("Open", self)
+ self.btn_open_notice.setFixedWidth(180)
+ self.btn_open_notice.clicked.connect(lambda: QDesktopServices.openUrl(QUrl.fromLocalFile(self.model.filePath(self.tree.currentIndex()))))
+ layout.addWidget(self.btn_open_notice, 1, 0, alignment=Qt.AlignmentFlag.AlignCenter)
+
+ self.btn_go_home = QPushButton("Go home", self)
+ self.btn_go_home.setFixedWidth(180)
+ layout.addWidget(self.btn_go_home, 1, 1, alignment=Qt.AlignmentFlag.AlignCenter)
+
+ def add_notice_and_show(self, notice_path):
+ self.notice_path = notice_path
+ dir_path = os.path.dirname(self.notice_path)
+ self.model.setRootPath(dir_path)
+ self.tree.setRootIndex(self.model.index(dir_path))
+ self.tree.setCurrentIndex(self.model.index(self.notice_path))
+ self.model.directoryLoaded.connect(lambda: self.tree.scrollTo(self.tree.currentIndex()))
+
+
+class FinishWidget(QWidget):
+ # FinishWidget has two widgets (MessageWidget, FileTreeWidget)
+ # MessageWidget: This widget show message like exception message
+ # FileTreeWidget: This widget show file tree that shows the generated notices.
+ signal_go_home = QtCore.pyqtSignal()
+
+ def __init__(self):
+ super().__init__()
+ self.init_ui()
+
+ def init_ui(self):
+ self.layout = QGridLayout()
+ self.layout.setContentsMargins(20, 20, 20, 20)
+ self.setLayout(self.layout)
+
+ self.central_widget = QStackedWidget()
+ self.layout.addWidget(self.central_widget)
+
+ self.widget_message = MessageWidget()
+ self.widget_message.btn_go_home.clicked.connect(self.signal_go_home)
+ self.central_widget.addWidget(self.widget_message)
+
+ self.widget_file_tree = FileTreeWidget()
+ self.widget_file_tree.btn_go_home.clicked.connect(self.signal_go_home)
+ self.central_widget.addWidget(self.widget_file_tree)
+
+ def add_notice_and_show(self, file_path_name):
+ self.widget_file_tree.add_notice_and_show(file_path_name)
+ self.central_widget.setCurrentWidget(self.widget_file_tree)
+
+ def show_message(self, msg):
+ self.widget_message.set_message(msg)
+ self.central_widget.setCurrentWidget(self.widget_message)
\ No newline at end of file
diff --git a/onot/gui/widget_home.py b/onot/gui/widget_home.py
new file mode 100644
index 0000000..b49b6eb
--- /dev/null
+++ b/onot/gui/widget_home.py
@@ -0,0 +1,54 @@
+#!/usr/bin/env python3
+# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
+# SPDX-FileCopyrightText: Copyright (c) 2022 Kakao Corp. https://www.kakaocorp.com
+#
+# SPDX-License-Identifier: Apache-2.0
+
+import logging
+
+from PyQt6.QtCore import Qt
+from PyQt6.QtWidgets import QWidget, QGridLayout, QLabel, QPushButton, QFileDialog, \
+ QComboBox
+
+logger = logging.getLogger("root")
+
+class HomeWidget(QWidget):
+ def __init__(self):
+ super().__init__()
+ self.init_ui()
+
+ def init_ui(self):
+ layout = QGridLayout()
+ layout.setContentsMargins(30, 30, 30, 30)
+ layout.setSpacing(30)
+ self.setLayout(layout)
+
+ self.text_label_selected_input_file = QLabel("", self)
+ layout.addWidget(self.text_label_selected_input_file, 0, 0)
+
+
+ self.btn_select_input_file = QPushButton("Select input file", self)
+ self.btn_select_input_file.clicked.connect(self.open_selection_window)
+ self.btn_select_input_file.setFixedWidth(180)
+ layout.addWidget(self.btn_select_input_file, 0, 1)
+
+ self.text_label_output_format = QLabel("Output format: ")
+ layout.addWidget(self.text_label_output_format, 1, 0)
+
+ self.combo_box_select_output_format = QComboBox(self)
+ self.combo_box_select_output_format.addItems(["html"])
+ self.combo_box_select_output_format.setFixedWidth(180)
+ self.combo_box_select_output_format.setEditable(True)
+ self.combo_box_select_output_format.lineEdit().setAlignment(Qt.AlignmentFlag.AlignCenter)
+ self.combo_box_select_output_format.lineEdit().setReadOnly(True)
+ layout.addWidget(self.combo_box_select_output_format, 1, 1)
+
+ self.btn_start = QPushButton("Start")
+ self.btn_start.setFixedWidth(120)
+ layout.addWidget(self.btn_start, 2, 0, 1, 2, alignment=Qt.AlignmentFlag.AlignCenter)
+
+
+ def open_selection_window(self):
+ file_name = QFileDialog.getOpenFileName(self, 'Open file', './')
+ if file_name[0]:
+ self.text_label_selected_input_file.setText(file_name[0])
\ No newline at end of file
diff --git a/onot/gui/widget_progress.py b/onot/gui/widget_progress.py
new file mode 100644
index 0000000..3e9cffa
--- /dev/null
+++ b/onot/gui/widget_progress.py
@@ -0,0 +1,107 @@
+#!/usr/bin/env python3
+# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
+# SPDX-FileCopyrightText: Copyright (c) 2022 Kakao Corp. https://www.kakaocorp.com
+#
+# SPDX-License-Identifier: Apache-2.0
+
+import logging
+import traceback
+
+from PyQt6 import QtWidgets, QtCore
+from PyQt6.QtCore import QThread, Qt
+from PyQt6.QtWidgets import QWidget, QGridLayout, QPushButton
+
+from onot.generating.generate import generate_notice
+from onot.parsing.parse import parse_file
+
+logger = logging.getLogger("root")
+
+class WidgetLogHandler(logging.Handler, QtCore.QObject):
+ signal_log = QtCore.pyqtSignal(str)
+
+ def __init__(self, widget):
+ super().__init__()
+ QtCore.QObject.__init__(self)
+ self.setFormatter(logging.Formatter('%(asctime)s:%(module)s:%(levelname)s:%(message)s', '%Y-%m-%d %H:%M:%S'))
+ self.widget = widget
+
+ def emit(self, record):
+ msg = self.format(record)
+ self.signal_log.emit(msg)
+
+class CreateNoticeThread(QThread):
+ signal_finish_job = QtCore.pyqtSignal(str)
+ signal_exception = QtCore.pyqtSignal(Exception)
+
+ def __init__(self, parent, input, output_format):
+ super().__init__(parent)
+ self.input = input
+ self.output_format = output_format
+
+ def run(self):
+ try:
+ # parse excel file
+ doc = parse_file(self.input)
+
+ # generate html format oss notice
+ file_path_name = generate_notice(doc, self.output_format)
+ self.signal_finish_job.emit(file_path_name)
+ except Exception as ex:
+ logger.error(ex)
+ logger.debug(traceback.format_exc())
+ self.signal_exception.emit(ex)
+
+
+class ProgressWidget(QWidget):
+ signal_stop = QtCore.pyqtSignal(str)
+ signal_finish = QtCore.pyqtSignal(str)
+ signal_exception = QtCore.pyqtSignal(Exception)
+
+ def __init__(self):
+ super().__init__()
+ self.init_ui()
+
+ def init_ui(self):
+ layout = QGridLayout()
+ layout.setContentsMargins(20, 20, 20, 20)
+ self.setLayout(layout)
+
+ self.log_text_box = QtWidgets.QPlainTextEdit(self)
+ self.log_text_box.setReadOnly(True)
+ layout.addWidget(self.log_text_box, 0, 0)
+
+ # When log added, it will be displayed in the log_text_box.
+ logger_handler = WidgetLogHandler(self.log_text_box)
+ logger_handler.signal_log.connect(lambda text: [
+ self.log_text_box.appendPlainText(text),
+ self.log_text_box.verticalScrollBar().setValue(self.log_text_box.verticalScrollBar().maximum())
+ ])
+ logger.addHandler(logger_handler)
+
+ self.btn_stop_job = QPushButton("Stop", self)
+ self.btn_stop_job.setFixedWidth(180)
+ self.btn_stop_job.clicked.connect(self.stop_job)
+ layout.addWidget(self.btn_stop_job, 1, 0, alignment=Qt.AlignmentFlag.AlignCenter)
+
+ def create_notice(self, input, output_format):
+ self.job = CreateNoticeThread(self, input, output_format)
+ self.job.signal_finish_job.connect(self.finish_create_notice)
+ self.job.signal_exception.connect(self.handle_exception)
+ self.job.start()
+
+ def stop_job(self):
+ self.job.terminate()
+ self.job.signal_finish_job.disconnect()
+ self.job.signal_exception.disconnect()
+ self.log_text_box.clear()
+ self.signal_stop.emit("It has been stopped.")
+
+ @QtCore.pyqtSlot(str)
+ def finish_create_notice(self, msg):
+ self.log_text_box.clear()
+ self.signal_finish.emit(msg)
+
+ @QtCore.pyqtSlot(Exception)
+ def handle_exception(self, exception):
+ self.log_text_box.clear()
+ self.signal_exception.emit(exception)
\ No newline at end of file
diff --git a/onot/log/__init__.py b/onot/log/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/onot/log/log_setting.py b/onot/log/log_setting.py
new file mode 100644
index 0000000..c9d52b9
--- /dev/null
+++ b/onot/log/log_setting.py
@@ -0,0 +1,16 @@
+#!/usr/bin/env python3
+
+# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
+# SPDX-License-Identifier: Apache-2.0
+
+import logging
+
+def init():
+ logger = logging.getLogger("root")
+ logger.setLevel(logging.DEBUG)
+
+ stream_handler = logging.StreamHandler()
+ formatter = logging.Formatter("%(asctime)s:%(module)s:%(levelname)s:%(message)s", "%Y-%m-%d %H:%M:%S")
+ stream_handler.setFormatter(formatter)
+
+ logger.addHandler(stream_handler)
diff --git a/onot/parsing/excel.py b/onot/parsing/excel.py
index 58c1948..d87be21 100644
--- a/onot/parsing/excel.py
+++ b/onot/parsing/excel.py
@@ -7,6 +7,7 @@
import openpyxl
import pandas as pd
+import logging
from onot.parsing import parser
# sheets
@@ -32,12 +33,13 @@
COLUMN_FILE_COPYRIGHT_TEXT = "File Copyright Text"
COLUMN_ARTIFACT_OF_HOMEPAGE = "Artifact of Homepage"
+logger = logging.getLogger("root")
class Parser(parser.AbstractParser):
def __init__(self, file):
- print("debug:" + "excel Parser class")
super().__init__()
+ logger.debug("excel Parser class")
self.wb = openpyxl.load_workbook(file)
def validate_file(self, file):
@@ -59,7 +61,7 @@ def validate_file(self, file):
# - License Info in File
# - File Copyright Text
ws_names = self.wb.sheetnames
- print("debug: " + str(ws_names))
+ logger.debug(str(ws_names))
required_sheets = [
SHEET_DOCUMENT_INFO,
SHEET_PACKAGE_INFO,
diff --git a/onot/parsing/parse.py b/onot/parsing/parse.py
index c2a6b5d..0cf9842 100644
--- a/onot/parsing/parse.py
+++ b/onot/parsing/parse.py
@@ -2,18 +2,22 @@
# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
# SPDX-License-Identifier: Apache-2.0
+
+import logging
from onot.parsing import excel
from onot.parsing import rdf_xml
+logger = logging.getLogger("root")
+
def parse_file(infile):
- print("debug: " + "parse_file - " + infile)
+ logger.debug("parse_file - " + infile)
if infile.endswith(".xls") or infile.endswith(".xlsx"):
parsing_module = excel
elif infile.endswith(".rdf") or infile.endswith(".rdf.xml"):
parsing_module = rdf_xml
else:
- raise Exception("FileType Not Supported" + str(infile))
-
+ raise Exception("FileType Not Supported: " + str(infile))
+
p = parsing_module.Parser(infile)
return p.parse(infile)
# with open(infile) as f:
diff --git a/onot/parsing/parser.py b/onot/parsing/parser.py
index d590bd2..a016183 100644
--- a/onot/parsing/parser.py
+++ b/onot/parsing/parser.py
@@ -8,8 +8,11 @@
import os.path
import re
from abc import abstractmethod, ABC
+import logging
from onot.parsing import spdx_license
+logger = logging.getLogger("root")
+
class AbstractParser(ABC):
def __init__(self):
@@ -95,7 +98,7 @@ def get_details_license(self, license_name):
# check whether the licenseId is existed in the spdx license list
details_url = sl.get_spdx_license_detailsUrl(license_name)
- print("debug: " + str(details_url))
+ logger.debug(str(details_url))
if details_url is not None:
# if so, get the license text from spdx repo : "https://spdx.org/licenses/[LICENSE_ID].json"
details_license = sl.get_spdx_license_details(details_url)
diff --git a/onot/parsing/rdf_xml.py b/onot/parsing/rdf_xml.py
index 58313bb..1dcb141 100644
--- a/onot/parsing/rdf_xml.py
+++ b/onot/parsing/rdf_xml.py
@@ -1,5 +1,4 @@
#!/usr/bin/env python3
-import pprint
# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
# SPDX-FileCopyrightText: Copyright (c) 2022 Kakao Corp. https://www.kakaocorp.com
@@ -9,6 +8,7 @@
import os
from rdflib import Graph, Namespace, RDF
from rdflib.term import Literal, URIRef, BNode
+import logging
from onot.parsing import parser
SUBJECT_DOCUMENT = "SpdxDocument"
@@ -42,9 +42,11 @@
"WithExceptionOperator": " WITH "
}
+logger = logging.getLogger("root")
+
class Parser(parser.AbstractParser):
def __init__(self, file):
- print("debug:" + "RDF/XML Parser class")
+ logger.debug("RDF/XML Parser class")
super().__init__()
self.graph = Graph().parse(source=file, format="xml")
self.spdx_namespace = Namespace("http://spdx.org/rdf/terms#")
diff --git a/onot/parsing/spdx_license.py b/onot/parsing/spdx_license.py
index ed71222..893b372 100644
--- a/onot/parsing/spdx_license.py
+++ b/onot/parsing/spdx_license.py
@@ -6,15 +6,18 @@
# SPDX-License-Identifier: Apache-2.0
import requests
+import logging
SPDX_LICENSE_URL_PREFIX = "https://spdx.org/licenses/"
SPDX_LICENSE_JSON_URL = "https://spdx.org/licenses/licenses.json"
SPDX_LICENSE_EXCEPTION_JSON_URL = "https://spdx.org/licenses/exceptions.json"
+logger = logging.getLogger("root")
+
class SPDX_License():
def __init__(self):
- print("debug:" + "SPDX_License")
+ logger.debug("SPDX_License")
self.spdx_license_list = []
self.spdx_license_exception_list = []
@@ -27,7 +30,7 @@ def get_spdx_license_exception_list(self):
self.spdx_license_exception_list = r.json()
def get_spdx_license_detailsUrl(self, license_id):
- print("debug:" + "licenseid - " + license_id)
+ logger.debug("licenseid - " + license_id)
if not self.spdx_license_list: # list is empty
self.get_spdx_license_list()
if not self.spdx_license_exception_list: # list is empty
diff --git a/onot/tools/create_notice.py b/onot/tools/create_notice.py
index 8f9369a..9ea0416 100644
--- a/onot/tools/create_notice.py
+++ b/onot/tools/create_notice.py
@@ -1,4 +1,5 @@
#!/usr/bin/env python3
+import logging
# SPDX-FileCopyrightText: Copyright 2022 SK TELECOM CO., LTD. <[email protected]>
# SPDX-License-Identifier: Apache-2.0
@@ -27,12 +28,13 @@ def main(input, output_format):
text_format: bool
if True, only create text format oss notice
"""
- print("debug: " + 'called create')
- print("debug: " + "input - " + input)
- print("debug: " + "output - " + output_format)
+ logger = logging.getLogger()
+ logger.debug('called create')
+ logger.debug("input - " + input)
+ logger.debug("output - " + output_format)
if output_format != 'html':
- print("Sorry! Current version only supports html type output.")
+ logger.warning("Sorry! Current version only supports html type output.")
else:
# parse excel file
doc = parse_file(input)
diff --git a/requirements.txt b/requirements.txt
index 7f53c99..00e9a7d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,4 +10,6 @@ pytz==2022.4
requests==2.28.1
six==1.16.0
urllib3==1.26.12
-rdflib==6.2.0
\ No newline at end of file
+rdflib==6.2.0
+pyqt6==6.4.1
+pyinstaller==5.7.0
\ No newline at end of file
| Windows GUI executable support
Windows GUI executable support is needed for users who have difficulty setting up a Python environment.
- Build a simple GUI with PyQt5 for selecting the input file and generating the open source notice,
- Create the executable with PyInstaller
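For reference, a rough standalone sketch of the kind of file-picker GUI described above (PyQt6 is what the discussion below settles on; class and widget names here are illustrative, not the ones used in onot):

```python
import sys
from PyQt6.QtWidgets import QApplication, QFileDialog, QPushButton, QWidget

class PickerWindow(QWidget):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("onot")
        button = QPushButton("Select input file", self)
        button.clicked.connect(self.pick_file)

    def pick_file(self):
        # getOpenFileName returns (path, selected_filter); an empty path means the dialog was cancelled
        path, _ = QFileDialog.getOpenFileName(self, "Open file", "./")
        if path:
            print(path)  # hand the selected SPDX file to the notice generator here

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = PickerWindow()
    window.show()
    sys.exit(app.exec())
```

Packaging would then be a separate `pyinstaller -w <script>` step, as the README change in this PR shows.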
| @haksungjang
Hello~ I'm from Kakao :)
To work on this issue, I tried to set up the environment with PyQt5.
However, on the M1 MacBook I use for development, there is an issue where that version cannot be installed.
Fortunately, the latest version, PyQt6, works fine on M1, and a sample exe file built with it also runs well on Windows.
The differences between PyQt5 and PyQt6 are said to be mostly minor, so it does not look like a problem. Would it be okay to proceed with PyQt6?
https://coderslegacy.com/differences-between-pyqt5-and-pyqt6/
Hello!
Ah, I see! Proceeding with PyQt6 is not a problem at all. As long as we can provide a GUI for Windows users who are not familiar with the Python CLI, it is also fine to use a GUI tool other than PyQt.
Thank you. :)
Then I will proceed with PyQt6.
Thank you^^ | 2023-02-15T04:18:09 | 0.0 | [] | [] |
||
rachtibat/zennit-crp | rachtibat__zennit-crp-7 | 85e2f415b0c0f4b8e1b151bcd48ea0fd7b2d3ddf | diff --git a/crp/maximization.py b/crp/maximization.py
index fb5d31b..3a5cd70 100644
--- a/crp/maximization.py
+++ b/crp/maximization.py
@@ -42,17 +42,17 @@ def __init__(self, mode="relevance", max_target="sum", abs_norm=False, path=None
def analyze_layer(self, rel, concept: Concept, layer_name: str, data_indices):
- d_c_sorted, rel_c_sorted, rf_c_sorted = concept.reference_sampling(
+ argsort, rel_c_sorted, rf_c_sorted = concept.reference_sampling(
rel, layer_name, self.max_target, self.abs_norm)
# convert batch index to dataset wide index
- data_indices = torch.from_numpy(data_indices).to(d_c_sorted)
- d_c_sorted = torch.take(data_indices, d_c_sorted)
+ data_indices = torch.from_numpy(data_indices).to(argsort)
+ d_c_sorted = torch.take(data_indices, argsort)
SZ = self.SAMPLE_SIZE
self.concatenate_with_results(layer_name, d_c_sorted[:SZ], rel_c_sorted[:SZ], rf_c_sorted[:SZ])
self.sort_result_array(layer_name)
- return d_c_sorted, rel_c_sorted, rf_c_sorted
+ return d_c_sorted, rel_c_sorted, rf_c_sorted, argsort
def delete_result_arrays(self):
diff --git a/crp/statistics.py b/crp/statistics.py
index 43e7eea..8e04b54 100644
--- a/crp/statistics.py
+++ b/crp/statistics.py
@@ -35,18 +35,18 @@ def __init__(self, mode="relevance", max_target="sum", abs_norm=False, path=None
# TODO: how preprocessing?
def analyze_layer(self, d_c_sorted, rel_c_sorted, rf_c_sorted, layer_name, targets):
+ t_unique = torch.unique(targets)
- t_unique = np.unique(targets)
for t in t_unique:
+ t_indices = targets.t() == t
+ num_channels = targets.shape[1]
- t_indices = np.where(targets == t)[0]
+ d_c_t = d_c_sorted.t()[t_indices].view(num_channels, -1).t()
+ rel_c_t = rel_c_sorted.t()[t_indices].view(num_channels, -1).t()
+ rf_c_t = rf_c_sorted.t()[t_indices].view(num_channels, -1).t()
- d_c_t = d_c_sorted[t_indices]
- rel_c_t = rel_c_sorted[t_indices]
- rf_c_t = rf_c_sorted[t_indices]
-
- self.concatenate_with_results(layer_name, t, d_c_t, rel_c_t, rf_c_t)
- self.sort_result_array(layer_name, t)
+ self.concatenate_with_results(layer_name, int(t), d_c_t, rel_c_t, rf_c_t)
+ self.sort_result_array(layer_name, int(t))
def delete_result_arrays(self):
diff --git a/crp/visualization.py b/crp/visualization.py
index 03b4f99..a3183e1 100644
--- a/crp/visualization.py
+++ b/crp/visualization.py
@@ -163,8 +163,10 @@ def analyze_relevance(self, rel, layer_name, concept, data_indices, targets):
Finds input samples that maximally activate each neuron in a layer and most relevant samples
"""
# TODO: dummy target for extra dataset
- d_c_sorted, rel_c_sorted, rf_c_sorted = self.RelMax.analyze_layer(rel, concept, layer_name, data_indices)
+ d_c_sorted, rel_c_sorted, rf_c_sorted, argsort = self.RelMax.analyze_layer(
+ rel, concept, layer_name, data_indices)
+ targets = torch.take(torch.Tensor(targets).to(argsort), argsort)
self.RelStats.analyze_layer(d_c_sorted, rel_c_sorted, rf_c_sorted, layer_name, targets)
@torch.no_grad()
@@ -179,8 +181,10 @@ def analyze_activation(self, act, layer_name, concept, data_indices, targets):
act = act[unique_indices]
targets = targets[unique_indices]
- d_c_sorted, act_c_sorted, rf_c_sorted = self.ActMax.analyze_layer(act, concept, layer_name, data_indices)
+ d_c_sorted, act_c_sorted, rf_c_sorted, argsort = self.ActMax.analyze_layer(
+ act, concept, layer_name, data_indices)
+ targets = torch.take(torch.Tensor(targets).to(argsort), argsort)
self.ActStats.analyze_layer(d_c_sorted, act_c_sorted, rf_c_sorted, layer_name, targets)
def _save_results(self, d_index=None):
| RelStats and ActStats use wrong (unsorted) targets in FeatureVisualization method
Hi all,
I think the result of the methods **analyze_relevance** and **analyze_activation** from the **FeatureVisualization** class are not as expected.
Let us take the method (line 162 in visualization.py):
```
def analyze_relevance(self, rel, layer_name, concept, data_indices, targets):
"""
Finds input samples that maximally activate each neuron in a layer and most relevant samples
"""
d_c_sorted, rel_c_sorted, rf_c_sorted = self.RelMax.analyze_layer(rel, concept, layer_name, data_indices)
self.RelStats.analyze_layer(d_c_sorted, rel_c_sorted, rf_c_sorted, layer_name, targets)
```
Here, the relevance values (identified by **rel_c_sorted**) of each channel are sorted in descending fashion. Therefore, **rel_c_sorted** has shape (batch_size, num_channels). However, for each channel, the values might be sorted differently (in different order) compared to another channel.
Afterwards, in **self.RelStats.analyze_layer(d_c_sorted, rel_c_sorted, rf_c_sorted, layer_name, targets)**, however, the values are assumed to not have been sorted:
```
def analyze_layer(self, d_c_sorted, rel_c_sorted, rf_c_sorted, layer_name, targets):
t_unique = np.unique(targets)
for t in t_unique:
t_indices = np.where(targets == t)[0]
d_c_t = d_c_sorted[t_indices]
rel_c_t = rel_c_sorted[t_indices]
rf_c_t = rf_c_sorted[t_indices]
self.concatenate_with_results(layer_name, t, d_c_t, rel_c_t, rf_c_t)
self.sort_result_array(layer_name, t)
```
Here, ultimately, the highest channel values are associated with the first target/value in **t_unique**, which is often not true.
In order to fix the behavior, we should take into account the actual target associated to each channel value. Maybe something like:
```
def analyze_relevance(self, rel, layer_name, concept, data_indices, targets):
d_c_sorted, rel_c_sorted, rf_c_sorted, argsort = self.RelMax.analyze_layer(rel, concept, layer_name, data_indices)
targets = torch.take(torch.Tensor(targets).to(argsort), argsort)
self.RelStats.analyze_layer(d_c_sorted, rel_c_sorted, rf_c_sorted, layer_name, targets)
```
where **argsort** describes the sorting order and **targets** becomes a tensor of shape (batch_size, num_channels) indicating for each channel value the true target class. Method ** self.RelStats.analyze_layer** then needs to handle the different target shape.
| 2022-07-19T14:10:50 | 0.0 | [] | [] |
|||
AsyncAlgoTrading/aat | AsyncAlgoTrading__aat-160 | ba0cb179a04e9e10f2bac76a60bbac9e6206d79b | diff --git a/aat/engine/dispatch/periodic.py b/aat/engine/dispatch/periodic.py
index 3e441989..79c7aa6a 100644
--- a/aat/engine/dispatch/periodic.py
+++ b/aat/engine/dispatch/periodic.py
@@ -1,4 +1,5 @@
import asyncio
+from asyncio import Future
from datetime import datetime
from typing import Awaitable, Callable, List, Optional
@@ -56,10 +57,12 @@ def expires(self, timestamp: datetime) -> bool:
)
return should_expire(self._last, timestamp, self.second, self.minute, self.hour)
- async def execute(self, timestamp: datetime) -> None:
+ async def execute(self, timestamp: datetime) -> Optional[Future]:
if self.expires(timestamp):
- asyncio.ensure_future(self._function(timestamp=timestamp))
self._last = timestamp
+ return asyncio.ensure_future(self._function(timestamp=timestamp))
+ else:
+ return None
class PeriodicManagerMixin(object):
diff --git a/aat/engine/engine.py b/aat/engine/engine.py
index cacae859..b3305b2a 100644
--- a/aat/engine/engine.py
+++ b/aat/engine/engine.py
@@ -455,7 +455,7 @@ async def run(self) -> None:
self._latest = datetime.now(tz=self.tz)
# process any periodics
- await asyncio.gather(
+ periodic_result = await asyncio.gather(
*(
asyncio.create_task(p.execute(self._latest))
for p in self.manager.periodics()
@@ -463,6 +463,10 @@ async def run(self) -> None:
)
)
+ exceptions = [r for r in periodic_result if r.exception()]
+ if any(exceptions):
+ raise exceptions[0].exception()
+
# Before engine shutdown, send an exit event
await self.processEvent(Event(type=EventType.EXIT, target=None))
| Propagate exceptions that are raised in periodics
| 2021-02-19T01:42:21 | 0.0 | [] | [] |
|||
dipu-bd/lightnovel-crawler | dipu-bd__lightnovel-crawler-2478 | 827d8ac524891a315776b71218df325acb8be728 | diff --git a/sources/en/t/teanovel.py b/sources/en/t/teanovel.py
index 83d6c34fc..23e3304dd 100644
--- a/sources/en/t/teanovel.py
+++ b/sources/en/t/teanovel.py
@@ -11,7 +11,12 @@
class TeaNovelCrawler(Crawler):
- base_url = ["https://www.teanovel.com/", "https://www.teanovel.net/"]
+ base_url = "https://www.teanovel.com"
+
+ def initialize(self):
+ self.init_executor(
+ workers=4
+ )
def read_novel_info(self):
soup = self.get_soup(self.novel_url)
@@ -22,40 +27,26 @@ def read_novel_info(self):
next_data = json.loads(script_tag.get_text())
- build_id = next_data["buildId"]
- novel_data = next_data["props"]["pageProps"]["novelData"]["novel"]
+ novel_data = next_data["props"]["pageProps"]["novel"]
self.novel_title = novel_data["name"]
self.novel_author = novel_data["author"]
- # img_tag = soup.select_one("main img[src*='_next/']")
- # if isinstance(img_tag, Tag):
- # self.novel_cover = self.absolute_url(img_tag["src"])
-
- slug = novel_data["slug"]
-
- toc_url = f"{self.home_url}api/chapters/{slug}?slug={slug}&orderBy=asc"
- toc_json = self.get_json(toc_url)
-
- while True:
- for chapter in toc_json["data"]:
- chapter_id = len(self.chapters) + 1
- self.chapters.append(
- {
- "id": chapter_id,
- "title": f"Chapter {chapter_id}: {chapter['title']}",
- "url": (
- f"{self.home_url}_next/data/{build_id}/novel/{slug}/{chapter['slug']}.json"
- ),
- }
- )
- if "nextId" in toc_json:
- toc_json = self.get_json(toc_url + f"&nextId={toc_json['nextId']}")
- else:
- break
+ img_tag = soup.select_one("main img[src*='_next/']")
+ if isinstance(img_tag, Tag):
+ self.novel_cover = self.absolute_url(img_tag["src"])
+
+ chapters = self.get_soup(self.novel_url + "/chapter-list").select("a.border-b")
+ for chapter in chapters:
+ chapter_id = len(self.chapters) + 1
+ self.chapters.append(
+ {
+ "id": chapter_id,
+ "title": chapter.select_one("p").get_text(strip=True),
+ "url": self.absolute_url(chapter["href"]),
+ }
+ )
def download_chapter_body(self, chapter):
- chapter_json = self.get_json(chapter["url"])
- chapter_data = chapter_json["pageProps"]["chapterData"]
-
- return chapter_data["content"].replace("\n", "<br>")
+ chapter = self.get_soup(chapter["url"])
+ return self.cleaner.extract_contents(chapter.select_one("div.prose"))
| Fix this source https://www.teanovel.com/
Hello,
I'm trying to download a novel from https://www.teanovel.com/, but it is not showing any chapters.

| 2024-10-07T09:53:53 | 0.0 | [] | [] |
|||
BIONF/fDOG | BIONF__fDOG-23 | b3663fc73869cce3aca6efd48a3cb7f8e2b56542 | diff --git a/.DS_Store b/.DS_Store
index bcbd073..592bcd7 100644
Binary files a/.DS_Store and b/.DS_Store differ
diff --git a/.gitignore b/.gitignore
index 38cf321..90963b6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -128,6 +128,13 @@ dmypy.json
# Pyre type checker
.pyre/
+# DS_store
+**/.DS_Store
+/fdog/.DS_Store
+/fdog/data/.DS_Store
+/fdog/bin/.DS_Store
+/fdog/setup/.DS_Store
+
#Hannah
/fdog/data/core_orthologs/
/fdog/data/assembly_dir/
diff --git a/fdog/.gitignore b/fdog/.gitignore
new file mode 100644
index 0000000..1912743
--- /dev/null
+++ b/fdog/.gitignore
@@ -0,0 +1,143 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# DS_store
+**/.DS_Store
+/fdog/.DS_Store
+/fdog/data/.DS_Store
+/fdog/bin/.DS_Store
+/fdog/setup/.DS_Store
+
+#Hannah
+/fdog/data/core_orthologs/
+/fdog/data/assembly_dir/
+/fdog/fdog_goes_assembly/tmp/
+taxdump*
+/fdog/fDOGassembly.py
diff --git a/fdog/fDOGassembly.py b/fdog/fDOGassembly.py
index 9fc840d..9c0dc6b 100644
--- a/fdog/fDOGassembly.py
+++ b/fdog/fDOGassembly.py
@@ -721,6 +721,9 @@ def ortholog_search_tblastn(args):
assembly_path = assemblyDir + "/" + asName + "/" + asName + ".fa"
db_path = assemblyDir + "/" + asName + "/blast_dir/" + asName + ".fa"
blast_dir_path = assemblyDir + "/" + asName + "/blast_dir/"
+ if not os.path.exists(blast_dir_path):
+ cmd = 'mkdir ' + blast_dir_path
+ starting_subprocess(cmd, 'silent')
db_check = searching_for_db(blast_dir_path)
if db_check == 0:
diff --git a/fdog/runMulti.py b/fdog/runMulti.py
index be552a7..ca8a058 100644
--- a/fdog/runMulti.py
+++ b/fdog/runMulti.py
@@ -75,7 +75,7 @@ def prepare(args, step):
return(basicArgs, ioArgs, pathArgs, coreArgs, orthoArgs, fasArgs, otherArgs, mute)
def getSeedName(seedFile):
- seqName = seedFile.split('.')[0]
+ seqName = seedFile.rsplit('.', 1)[0]
seqName = re.sub('[\|\.]', '_', seqName)
return(seqName)
@@ -217,14 +217,11 @@ def createConfigPP(outpath, jobName, refspec):
settings['rank'] = 'species'
settings['refspec'] = refspec
settings['clusterProfile'] = 'TRUE'
- print("HERER")
- print(settings)
- print('%s/%s.config.yml' % (outpath, jobName))
with open('%s/%s.config.yml' % (outpath, jobName), 'w') as configfile:
yaml.dump(settings, configfile, default_flow_style = False)
def main():
- version = '0.0.51'
+ version = '0.0.52'
parser = argparse.ArgumentParser(description='You are running fdogs.run version ' + str(version) + '.')
parser.add_argument('--version', action='version', version=str(version))
required = parser.add_argument_group('Required arguments')
@@ -535,7 +532,10 @@ def main():
### join output
finalFa = joinOutputs(outpath, jobName, seeds, keep, silent)
else:
- print("%s.extended.fa found in %s! If you want to re-run the ortholog search, please use --force option." % (jobName, outpath))
+ if append == True:
+ sys.exit("Currently the append option is not available. Please use fdog.run if you need this option!")
+ else:
+ sys.exit("%s.extended.fa found in %s! If you want to re-run the ortholog search, please use --force or --append option." % (jobName, outpath))
### calculate FAS scores
if fasoff == False:
if os.path.exists('%s/%s.phyloprofile' % (outpath, jobName)):
diff --git a/fdog/runSingle.py b/fdog/runSingle.py
index f239f90..df22bd2 100644
--- a/fdog/runSingle.py
+++ b/fdog/runSingle.py
@@ -199,7 +199,7 @@ def getTaxName(taxId):
return(name)
def main():
- version = '0.0.51'
+ version = '0.0.52'
parser = argparse.ArgumentParser(description='You are running fdog.run version ' + str(version) + '.')
parser.add_argument('--version', action='version', version=str(version))
required = parser.add_argument_group('Required arguments')
diff --git a/setup.py b/setup.py
index 86d521d..63bc4cb 100644
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
setup(
name="fdog",
- version="0.0.51",
+ version="0.0.52",
python_requires='>=3.7.0',
description="Feature-aware Directed OrtholoG search tool",
| Fdog goes assembly
enables printing output during parallel computation
| 2022-06-15T14:08:07 | 0.0 | [] | [] |
|||
wasi-master/rich-rst | wasi-master__rich-rst-8 | e36049ce89b121aa2cf4875d20ec957c61b04243 | diff --git a/rich_rst/__init__.py b/rich_rst/__init__.py
index 1df8869..27ed383 100644
--- a/rich_rst/__init__.py
+++ b/rich_rst/__init__.py
@@ -89,6 +89,7 @@ def __init__(
self.footer = []
self.guess_lexer = guess_lexer
self.default_lexer = default_lexer
+ self.refname_to_renderable = {}
def _find_lexer(self, node):
lexer = (
@@ -109,7 +110,40 @@ def _find_lexer(self, node):
return lexer
return lexer
+ def visit_reference(self, node):
+ refuri = node.attributes.get("refuri")
+ style = self.console.get_style("restructuredtext.reference", default="blue underline on default")
+ if refuri:
+ style = style.update_link(refuri)
+ renderable = Text(node.astext().replace("\n", " "), style=style, end="")
+ if self.renderables and isinstance(self.renderables[-1], Text):
+ renderable.end = " "
+ start = len(self.renderables[-1])
+ self.renderables[-1].append_text(renderable)
+ else:
+ start = 0
+ self.renderables.append(renderable)
+ end = len(self.renderables[-1])
+
+ if not refuri:
+ # We'll get the URL reference later in visit_target.
+ refname = node.attributes.get("refname")
+ if refname:
+ self.refname_to_renderable[refname] = (self.renderables[-1], start, end)
+ raise docutils.nodes.SkipChildren()
+ def visit_target(self, node):
+ uri = node.get("refuri")
+ if uri:
+ for name in node["names"]:
+ try:
+ renderable, start, end = self.refname_to_renderable[name]
+ except KeyError:
+ continue
+ style = renderable.get_style_at_offset(self.console, start)
+ style = style.update_link(uri)
+ renderable.stylize(style, start, end)
+ raise docutils.nodes.SkipChildren()
def visit_paragraph(self, node):
if hasattr(node, "parent") and isinstance(node.parent, docutils.nodes.system_message):
@@ -126,9 +160,9 @@ def visit_title(self, node):
raise docutils.nodes.SkipChildren()
def visit_Text(self, node):
- style = self.console.get_style("restructuredtext.text", default="default on default")
+ style = self.console.get_style("restructuredtext.text", default="default on default not underline")
if self.renderables and isinstance(self.renderables[-1], Text):
- self.renderables[-1].append(Text(node.astext().replace("\n", " "), style=style, end=" "))
+ self.renderables[-1].append_text(Text(node.astext().replace("\n", " "), style=style, end=" "))
return
self.renderables.append(Text(node.astext().replace("\n", " "), end="", style=style))
@@ -188,7 +222,7 @@ def visit_warning(self, node):
def visit_subscript(self, node):
style = self.console.get_style("restructuredtext.subscript", default="none")
if self.renderables and isinstance(self.renderables[-1], Text):
- self.renderables[-1].append(Text(node.astext().translate(self.subscript), style=style, end=" "))
+ self.renderables[-1].append_text(Text(node.astext().translate(self.subscript), style=style, end=" "))
raise docutils.nodes.SkipChildren()
self.renderables.append(Text(node.astext().translate(self.subscript), end="", style=style))
raise docutils.nodes.SkipChildren()
@@ -196,7 +230,7 @@ def visit_subscript(self, node):
def visit_superscript(self, node):
style = self.console.get_style("restructuredtext.superscript", default="none")
if self.renderables and isinstance(self.renderables[-1], Text):
- self.renderables[-1].append(Text(node.astext().translate(self.supercript), style=style, end=" "))
+ self.renderables[-1].append_text(Text(node.astext().translate(self.supercript), style=style, end=" "))
raise docutils.nodes.SkipChildren()
self.renderables.append(Text(node.astext().translate(self.supercript), end="", style=style))
raise docutils.nodes.SkipChildren()
@@ -204,7 +238,7 @@ def visit_superscript(self, node):
def visit_emphasis(self, node):
style = self.console.get_style("restructuredtext.emphasis", default="italic")
if self.renderables and isinstance(self.renderables[-1], Text):
- self.renderables[-1].append(Text(node.astext().replace("\n", " "), style=style, end=" "))
+ self.renderables[-1].append_text(Text(node.astext().replace("\n", " "), style=style, end=" "))
raise docutils.nodes.SkipChildren()
self.renderables.append(Text(node.astext().replace("\n", " "), style=style, end=""))
raise docutils.nodes.SkipChildren()
@@ -212,7 +246,7 @@ def visit_emphasis(self, node):
def visit_strong(self, node):
style = self.console.get_style("restructuredtext.strong", default="bold")
if self.renderables and isinstance(self.renderables[-1], Text):
- self.renderables[-1].append(Text(node.astext().replace("\n", " "), style=style, end=" "))
+ self.renderables[-1].append_text(Text(node.astext().replace("\n", " "), style=style, end=" "))
raise docutils.nodes.SkipChildren()
self.renderables.append(Text(node.astext().replace("\n", " "), style=style, end=""))
raise docutils.nodes.SkipChildren()
@@ -268,7 +302,7 @@ def visit_enumerated_list(self, node):
def visit_literal(self, node):
style = self.console.get_style("restructuredtext.inline_codeblock", default="grey78 on grey7")
if self.renderables and isinstance(self.renderables[-1], Text):
- self.renderables[-1].append(Text(node.astext().replace("\n", " "), style=style, end=" "))
+ self.renderables[-1].append_text(Text(node.astext().replace("\n", " "), style=style, end=" "))
raise docutils.nodes.SkipChildren()
self.renderables.append(Text(node.astext().replace("\n", " "), style=style, end=""))
raise docutils.nodes.SkipChildren()
@@ -277,7 +311,7 @@ def visit_literal_block(self, node):
style = self.console.get_style("restructuredtext.literal_block_border", default="grey58")
if self.renderables and isinstance(self.renderables[-1], Text):
self.renderables[-1].rstrip()
- self.renderables[-1].append(Text("\n"))
+ self.renderables[-1].append_text(Text("\n"))
lexer = self._find_lexer(node)
self.renderables.append(
Panel(Syntax(node.astext(), lexer, theme=self.code_theme), border_style=style, box=box.SQUARE, title=lexer)
@@ -408,7 +442,7 @@ def visit_block_quote(self, node):
try:
paragraph, attribution = node.children
except ValueError:
- paragraph ,= node.children
+ paragraph = node.children[0]
self.renderables.append(
Text(" ")
+ Text(paragraph.astext().replace('\n', ' '), style=text_style)
| fix too many arguments to unpack when visiting block quote.
Without this change, I frequently run into:
```
ValueError: too many values to unpack (expected 1)
```
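For context, the failure is ordinary tuple unpacking; a tiny illustration with a plain list standing in for `node.children`:

```python
# Stand-in for node.children; the real objects are docutils nodes.
children = ["paragraph", "attribution", "extra child"]

# The old single-element unpacking raises "too many values to unpack"
# whenever the block quote has more than one child:
#     paragraph, = children
paragraph = children[0]  # the patched code simply takes the first child
print(paragraph)
```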
| 2024-01-20T17:40:43 | 0.0 | [] | [] |
|||
ComPWA/tensorwaves | ComPWA__tensorwaves-476 | 39f64aa0f6ffa729cceada748fec1d64d4f2c32a | diff --git a/src/tensorwaves/optimizer/minuit.py b/src/tensorwaves/optimizer/minuit.py
index cd4df422..93aaa615 100644
--- a/src/tensorwaves/optimizer/minuit.py
+++ b/src/tensorwaves/optimizer/minuit.py
@@ -4,7 +4,7 @@
import logging
import time
-from typing import Callable, Iterable, Mapping
+from typing import Any, Callable, Iterable, Mapping
import iminuit
from tqdm.auto import tqdm
@@ -30,6 +30,8 @@ class Minuit2(Optimizer):
minuit_modifier: Modify the internal `iminuit.Minuit` optimizer that is
constructed during the :meth:`optimize` call. See
:ref:`usage/basics:Minuit2` for an example.
+
+ migrad_args: Keyword arguments given to :meth:`iminuit.Minuit.migrad`.
"""
def __init__(
@@ -37,6 +39,7 @@ def __init__(
callback: Callback | None = None,
use_analytic_gradient: bool = False,
minuit_modifier: Callable[[iminuit.Minuit], None] | None = None,
+ migrad_args: dict[str, Any] | None = None,
) -> None:
self.__callback = callback
self.__use_gradient = use_analytic_gradient
@@ -47,6 +50,7 @@ def __init__(
"instance. See constructor signature."
)
self.__minuit_modifier = minuit_modifier
+ self.__migrad_args = {} if migrad_args is None else migrad_args
def optimize( # pylint: disable=too-many-locals
self,
@@ -120,7 +124,7 @@ def wrapped_gradient(pars: list) -> Iterable[float]:
self.__minuit_modifier(minuit)
start_time = time.time()
- minuit.migrad()
+ minuit.migrad(**self.__migrad_args)
end_time = time.time()
parameter_values = {}
| Setting number of iterations in migrad
I noticed that it is not possible to set the number of function calls currently.
This has to be set when calling `migrad`; currently `migrad` does not get any parameters, which results in `Minuit` choosing a default maximum number of function calls:
https://iminuit.readthedocs.io/en/stable/reference.html?highlight=_migrad_maxcall#iminuit.Minuit.migrad
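With the `migrad_args` option introduced in the patch above, the cap could presumably be passed through like this (sketch only; `ncall` is iminuit's limit on function evaluations):

```python
from tensorwaves.optimizer.minuit import Minuit2

# Keyword arguments are forwarded to iminuit.Minuit.migrad()
optimizer = Minuit2(migrad_args={"ncall": 10_000})
# fit_result = optimizer.optimize(estimator, initial_parameters)  # as usual
```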
| 2023-01-23T11:38:40 | 0.0 | [] | [] |
|||
Farama-Foundation/Arcade-Learning-Environment | Farama-Foundation__Arcade-Learning-Environment-498 | 9fd31e0b3d8d6241e1b28e14123fcdb398ed1437 | diff --git a/src/python/scripts/import_roms.py b/src/python/scripts/import_roms.py
index 7ed6c68ac..b8f9dc582 100644
--- a/src/python/scripts/import_roms.py
+++ b/src/python/scripts/import_roms.py
@@ -1,6 +1,7 @@
import argparse
import pathlib
import shutil
+import sys
import warnings
from typing import Optional
@@ -65,7 +66,7 @@ def main() -> None:
if not romdir.exists():
print(f"Path {romdir} doesn't exist.")
- exit(1)
+ sys.exit(1)
elif args.import_from_pkg:
if "." in args.import_from_pkg:
root, subpackage = args.import_from_pkg.split(".", maxsplit=1)
@@ -76,13 +77,13 @@ def main() -> None:
romdir = path.resolve()
if not romdir.exists():
print(f"Unable to find path {subpackage} in module {root}.")
- exit(1)
+ sys.exit(1)
except ModuleNotFoundError:
print(f"Unable to find module {root}.")
- exit(1)
+ sys.exit(1)
except Exception as e:
print(f"Unknown error {str(e)}.")
- exit(1)
+ sys.exit(1)
with warnings.catch_warnings():
warnings.filterwarnings(
| Use `sys.exit` instead of `exit`
In several places (e. g. [here](https://github.com/mgbellemare/Arcade-Learning-Environment/blob/master/src/python/scripts/import_roms.py#L68)) the code uses the builtin `exit` function instead of `sys.exit`. [This](https://stackoverflow.com/a/6501134/2414411) stackoverflow answer states that the builtin `exit` function should only be used in interactive mode.
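A minimal sketch of the intended pattern (path and message are illustrative only):

```python
import sys
from pathlib import Path

def main() -> None:
    romdir = Path("roms")  # illustrative path
    if not romdir.exists():
        print(f"Path {romdir} doesn't exist.")
        sys.exit(1)  # always importable, unlike the site-injected exit()

if __name__ == "__main__":
    main()
```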
| @ChristofKaufmann Would you be able to make a PR to change this? | 2023-09-20T20:44:13 | 0.0 | [] | [] |
||
clegaspi/saml_reader | clegaspi__saml_reader-64 | d877c2499bc2addc55a229907acd606bf41d1c19 | diff --git a/saml_reader/cli.py b/saml_reader/cli.py
index ae9dcb7..c814ef5 100644
--- a/saml_reader/cli.py
+++ b/saml_reader/cli.py
@@ -166,6 +166,9 @@ def run_analysis(
output_stream(f"The input data does not appear to be the specified input type '{input_type}'.\n"
f"Check to make sure that the input data is of the correct type.")
return
+ except FileNotFoundError:
+ output_stream(f"The input file {filepath} was not found or could not be opened.")
+ return
for msg in saml_data.get_errors():
output_stream(msg)
@@ -187,6 +190,9 @@ def run_analysis(
output_stream(f"Attribute '{e.args[1]}' in the provided JSON did not pass validation")
return
raise e
+ except FileNotFoundError:
+ output_stream(f"Comparison JSON file {compare_file} was not found or could not be opened.")
+ return
output_stream("Done")
elif compare_object:
federation_config = compare_object
@@ -227,6 +233,7 @@ def parse_saml_data(input_type='xml', source='clip', filepath=None, raw_data=Non
Raises:
ValueError: If an invalid combination of options is specified.
+ FileNotFoundError: If an input file does not exist
Returns:
BaseSamlParser: parsed SAML data object
@@ -240,6 +247,8 @@ def parse_saml_data(input_type='xml', source='clip', filepath=None, raw_data=Non
elif source == 'file':
if filepath and os.path.exists(filepath):
constructor_func = partial(TextReader.from_file, filename=filepath)
+ else:
+ raise FileNotFoundError(f"Input file {filepath} not found!")
elif source == 'raw' and raw_data:
constructor_func = partial(TextReader, raw_data=raw_data)
else:
| Handle FileNotFoundError for input files
Right now, it appears that if an invalid file name is specified for either the SAML input or the JSON comparison values, we get an ugly error; the former comes from `TextReader` and the latter from the function in `cli.py` that opens the JSON file. I thought we configured `argparse` to handle these files...but maybe not, because argparse likes to send back file handles instead of just the path when you tell it an argument is supposed to be a filename.
Either way, we should handle these errors more gracefully.
| 2022-03-28T17:31:55 | 0.0 | [] | [] |
|||
ndsev/zswag | ndsev__zswag-111 | 9a0380f4d235bf4092ec871ffaa8a73a25cb6025 | diff --git a/CMakeLists.txt b/CMakeLists.txt
index f85869d..aba5a56 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -16,7 +16,7 @@ project(zswag)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
-set(ZSWAG_VERSION 1.6.1)
+set(ZSWAG_VERSION 1.6.3)
option(ZSWAG_BUILD_WHEELS "Enable zswag whl-output to WHEEL_DEPLOY_DIRECTORY." ON)
option(ZSWAG_KEYCHAIN_SUPPORT "Enable zswag keychain support." ON)
@@ -118,7 +118,7 @@ if(ZSWAG_BUILD_WHEELS AND NOT TARGET pybind11)
endif()
if (NOT TARGET zserio-cmake-helper)
- set(ZSERIO_VERSION "2.11.0")
+ set(ZSERIO_VERSION "2.12.0")
FetchContent_Declare(zserio-cmake-helper
GIT_REPOSITORY "https://github.com/Klebert-Engineering/zserio-cmake-helper.git"
GIT_TAG "main"
diff --git a/README.md b/README.md
index b25de4b..da87af4 100644
--- a/README.md
+++ b/README.md
@@ -309,6 +309,11 @@ server class with a user-written app controller and a fitting OpenAPI specificat
It is based on [Flask](https://flask.palletsprojects.com/en/1.1.x/) and
[Connexion](https://connexion.readthedocs.io/en/latest/).
+**Implementation choice regarding HTTP response codes:** The server as implemented
+here will return HTTP code `400` (Bad Request) when the user request could not
+be parsed, and `500` (Internal Server Error) when a different exception occurred while
+generating the response/running the user's controller implementation.
+
### Integration Example
We consider the same `myapp` directory with a `services.zs` zserio file
diff --git a/libs/zswag/app.py b/libs/zswag/app.py
index 501ebcc..f33621f 100644
--- a/libs/zswag/app.py
+++ b/libs/zswag/app.py
@@ -165,7 +165,13 @@ def wsgi_method(fun=zserio_modem_function, spec=method_spec, req_t=request_type,
spec=spec,
headers=flask_request.headers,
**kwargs)
- return bytes(fun(request_blob, None).byte_array)
+ try:
+ return bytes(fun(request_blob, None).byte_array)
+ except zserio.PythonRuntimeException as e:
+ if str(e).startswith("BitStreamReader"):
+ return "Error in BitStreamReader: Could not parse malformed request.", 400
+ else:
+ return f"Internal Server Error: {e}", 500
setattr(self.service_instance, method_name, wsgi_method)
def method_impl(request, ctx=None, fun=user_function):
diff --git a/requirements.txt b/requirements.txt
index 43cdafd..bb99851 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,6 +1,6 @@
-connexion
+connexion~=2.14.2
requests
-zserio>=2.4.2
+zserio<3.0.0
pyyaml
-pyzswagcl>=1.6.1
+pyzswagcl==1.6.3
openapi-spec-validator
| HTTP status code on invalid input
I have observed that when the user provides junk input data (wrong payload type, or no payload), an internal server error code (500) is returned to the user (before userspace code is called).
I think it would be useful to instead report an http 400 error to show that the error is a user error rather than a server error.
How to reproduce:
- Perform a call to an endpoint in the /ui for which a payload request is expected, but provide none / junk text.
- Observe trace in server, and the status code 500 in the ui.
Desired result:
- Perform a call to an endpoint in the /ui for which a payload request is expected.
- HTTP 400 is returned (stack trace on server might still be useful for debugging, but not desirable in production)
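A sketch of how the 400/500 split could look around the zserio call, mirroring the `app.py` change above (handler shape simplified):

```python
import zserio

def respond(zserio_method, request_blob):
    try:
        return bytes(zserio_method(request_blob, None).byte_array)
    except zserio.PythonRuntimeException as e:
        if str(e).startswith("BitStreamReader"):
            # the request could not be parsed, i.e. the client sent junk -> 400
            return "Could not parse malformed request.", 400
        # anything else failed on the server side -> 500
        return f"Internal Server Error: {e}", 500
```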
| Thanks for reporting, and sorry for the delayed response! This seems like a very sensible idea ð | 2023-11-15T14:41:07 | 0.0 | [] | [] |
||
equinor/tagreader-python | equinor__tagreader-python-260 | 2fc96ff741cd182caf0c01ab3ebc974c473135c1 | diff --git a/tagreader/clients.py b/tagreader/clients.py
index a6b333b4..686f45cd 100644
--- a/tagreader/clients.py
+++ b/tagreader/clients.py
@@ -605,8 +605,8 @@ def read(
if isinstance(ts, pd.Timedelta):
ts = ts.to_pytimedelta()
- elif isinstance(ts, int):
- ts = timedelta(seconds=ts)
+ elif isinstance(ts, (int, float)):
+ ts = timedelta(seconds=int(ts))
elif not isinstance(ts, timedelta):
raise ValueError(
"ts needs to be either a None, timedelta or and integer (number of seconds)."
| Client.read input
Input checking for ts is a bit too strict. Input ts is not required for read types snapshot and raw.
Also there is no need to
`raise ValueError(
"ts needs to be either a None, timedelta or and integer (number of seconds)."
f" Given type: {type(ts)}"
)`
just because the input type is float. A conversion from float to int is nicer.
Also in same function, are you sure that input end is required for snapshots? If not, should this line be indented one more level?
` if end:
end = ensure_datetime_with_tz(end, tz=self.tz)`
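A small sketch of the suggested relaxation (values made up):

```python
from datetime import timedelta

ts = 60.0  # a float number of seconds coming from user code
if isinstance(ts, (int, float)):
    ts = timedelta(seconds=int(ts))
print(ts)  # 0:01:00
```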
| Hi @asmfstatoil,
Thanks for notifying us. I'll allow float converted to int for timedelta and release it ASAP.
I'm not sure if I understand you correctly for the last part. End is not required, so it should be Okay with respect to snapshot. Or am I mistaken? | 2023-09-12T12:22:48 | 0.0 | [] | [] |
||
Lightning-AI/litgpt | Lightning-AI__litgpt-1720 | df5b273be04ff33bb6da0e67705238284d218790 | diff --git a/litgpt/api.py b/litgpt/api.py
index 91f4e96492..3cc9d67b49 100644
--- a/litgpt/api.py
+++ b/litgpt/api.py
@@ -299,6 +299,7 @@ def distribute(
accelerator = "cuda"
elif torch.backends.mps.is_available():
# accelerator = "mps"
+ accelerator = "cpu"
warnings.warn("MPS is currently not supported. Using CPU instead.", UserWarning)
else:
accelerator = "cpu"
diff --git a/litgpt/utils.py b/litgpt/utils.py
index 0a929e2a12..8c6bb8c5c3 100644
--- a/litgpt/utils.py
+++ b/litgpt/utils.py
@@ -348,19 +348,29 @@ def map_old_state_dict_weights(state_dict: Dict, mapping: Mapping, prefix: str)
return state_dict
-def get_default_supported_precision(training: bool) -> str:
- """Return default precision that is supported by the hardware: either `bf16` or `16`.
+def get_default_supported_precision(training: bool, use_mps: bool = False) -> str:
+ """
+ Return the default precision that is supported by the hardware: either `bf16` or `16`.
Args:
- training: `-mixed` or `-true` version of the precision to use
+ training: If True, returns '-mixed' version of the precision; if False, returns '-true' version.
+ use_mps: Flag to determine if MPS should be used when available.
Returns:
- default precision that is suitable for the task and is supported by the hardware
+ The default precision that is suitable for the task and is supported by the hardware.
"""
from lightning.fabric.accelerators import MPSAccelerator
+ import torch
- if MPSAccelerator.is_available() or (torch.cuda.is_available() and not torch.cuda.is_bf16_supported()):
+ if use_mps and MPSAccelerator.is_available():
return "16-mixed" if training else "16-true"
+
+ if torch.cuda.is_available():
+ if torch.cuda.is_bf16_supported():
+ return "bf16-mixed" if training else "bf16-true"
+ else:
+ return "16-mixed" if training else "16-true"
+
return "bf16-mixed" if training else "bf16-true"
| llm.generate issue on CPU machines
### Bug description
Another issue with the llm.generate function that was somehow introduced in recent commits (I am surprised that CI didn't catch this):
```python
from litgpt import LLM
llm = LLM.load("EleutherAI/pythia-160m")
llm.generate("What do Llamas eat?")
```
results in:
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[1], line 4
1 from litgpt import LLM
3 llm = LLM.load("EleutherAI[/pythia-160m](http://localhost:8888/pythia-160m)")
----> 4 llm.generate("What do Llamas eat?")
File [~/miniforge3/envs/litgpt/lib/python3.9/site-packages/torch/utils/_contextlib.py:116](http://localhost:8888/lab/workspaces/~/miniforge3/envs/litgpt/lib/python3.9/site-packages/torch/utils/_contextlib.py#line=115), in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File [~/Desktop/litgpt/litgpt/api.py:534](http://localhost:8888/lab/workspaces/~/Desktop/litgpt/litgpt/api.py#line=533), in LLM.generate(self, prompt, max_new_tokens, temperature, top_k, top_p, return_as_token_ids, stream)
532 outputs = iterator()
533 else:
--> 534 outputs = generate_fn(
535 model=self.model,
536 prompt=input_ids,
537 max_returned_tokens=max_returned_tokens,
538 temperature=temperature,
539 top_k=top_k,
540 top_p=top_p,
541 eos_id=self.preprocessor.tokenizer.eos_id,
542 include_prompt=False,
543 )
545 if stream:
546 return outputs
File [~/miniforge3/envs/litgpt/lib/python3.9/site-packages/torch/utils/_contextlib.py:116](http://localhost:8888/lab/workspaces/~/miniforge3/envs/litgpt/lib/python3.9/site-packages/torch/utils/_contextlib.py#line=115), in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File [~/Desktop/litgpt/litgpt/generate/base.py:383](http://localhost:8888/lab/workspaces/~/Desktop/litgpt/litgpt/generate/base.py#line=382), in generate(model, prompt, max_returned_tokens, temperature, top_k, top_p, eos_id, include_prompt)
343 @torch.inference_mode()
344 def generate(
345 model: GPT,
(...)
353 include_prompt: bool = True,
354 ) -> torch.Tensor:
355 """
356 Takes a conditioning sequence (prompt) as input and continues to generate as many tokens as requested.
357 The implementation of this function is modified from A. Karpathy's nanoGPT.
(...)
380 include_prompt: If true (default) prepends the prompt (after applying the prompt style) to the output.
381 """
--> 383 token_list = list(generate_fn(
384 include_prompt=include_prompt,
385 include_eos=True,
386 model=model,
387 prompt=prompt,
388 max_returned_tokens=max_returned_tokens,
389 temperature=temperature,
390 top_k=top_k,
391 top_p=top_p,
392 stop_tokens=(([eos_id],) if eos_id is not None else ())
393 ))
395 return torch.cat(token_list) if not len(token_list) == 0 else torch.Tensor()
File [~/miniforge3/envs/litgpt/lib/python3.9/site-packages/torch/utils/_contextlib.py:36](http://localhost:8888/lab/workspaces/~/miniforge3/envs/litgpt/lib/python3.9/site-packages/torch/utils/_contextlib.py#line=35), in _wrap_generator.<locals>.generator_context(*args, **kwargs)
33 try:
34 # Issuing `None` to a generator fires it up
35 with ctx_factory():
---> 36 response = gen.send(None)
38 while True:
39 try:
40 # Forward the response to our caller and get its next request
File [~/Desktop/litgpt/litgpt/generate/base.py:172](http://localhost:8888/lab/workspaces/~/Desktop/litgpt/litgpt/generate/base.py#line=171), in generate_fn(model, prompt, max_returned_tokens, temperature, top_k, top_p, stop_tokens, include_prompt, include_eos)
168 input_pos = torch.arange(0, prompt_size, device=device, dtype=torch.int64)
169 for current_idx in range(max_returned_tokens - prompt_size):
170
171 # Generate the token
--> 172 token = next_token(model, input_pos, token.view(1, -1), temperature=temperature, top_k=top_k, top_p=top_p)
173 tokens.append(token)
174 int_token = token.item()
File [~/Desktop/litgpt/litgpt/generate/base.py:78](http://localhost:8888/lab/workspaces/~/Desktop/litgpt/litgpt/generate/base.py#line=77), in next_token(model, input_pos, x, **kwargs)
76 def next_token(model: GPT, input_pos: torch.Tensor, x: torch.Tensor, **kwargs: Any) -> torch.Tensor:
77 logits = model(x, input_pos)
---> 78 _next = sample(logits, **kwargs).to(dtype=torch.int64)
79 return _next
File [~/Desktop/litgpt/litgpt/generate/base.py:72](http://localhost:8888/lab/workspaces/~/Desktop/litgpt/litgpt/generate/base.py#line=71), in sample(logits, temperature, top_k, top_p)
70 logits = sample_top_p(logits, top_p)
71 probs = torch.nn.functional.softmax(logits, dim=-1)
---> 72 return multinomial_num_samples_1(probs)
73 return torch.argmax(logits, dim=-1, keepdim=True)
File [~/Desktop/litgpt/litgpt/generate/base.py:35](http://localhost:8888/lab/workspaces/~/Desktop/litgpt/litgpt/generate/base.py#line=34), in multinomial_num_samples_1(probs)
33 distribution = torch.empty_like(probs).exponential_(1)
34 return torch.argmax(probs [/](http://localhost:8888/) distribution, dim=-1, keepdim=True)
---> 35 return torch.multinomial(probs, num_samples=1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
Works fine in previous versions like 0.4.9.
### What operating system are you using?
macOS
### LitGPT Version
```
Version: 0.4.11
```
| This issue seems to only occur on MacBooks. It works fine on Studio CPUs.
I pinpointed it a bit more. Something in the model forward path. After the ~7th block the inputs turn nan:
https://github.com/Lightning-AI/litgpt/blob/3d36b6b26aea56317774ffb65769e74cb1d8db5a/litgpt/model.py#L98
```
Users/sebastian/Desktop/litgpt/litgpt/api.py:222: UserWarning: MPS is currently not supported. Using CPU instead.
warnings.warn("MPS is currently not supported. Using CPU instead.", UserWarning)
block 1 tensor([[[-0.1057, 0.2296, 0.0062, ..., 0.4619, 0.3906, 0.6367],
[-0.4836, 0.2103, 0.6401, ..., 0.5747, 0.6416, 0.7041],
[-0.3235, 0.0849, 0.9512, ..., 0.1890, 0.2151, 0.1394],
...,
[-0.1047, 0.2368, -0.9492, ..., -0.0238, -0.1179, -0.2322],
[-0.3896, 0.2751, -0.2380, ..., -0.2274, 0.1450, 0.3435],
[-0.6011, -0.2581, 0.1309, ..., 0.4829, -0.1338, -0.0518]]])
block 2 tensor([[[-0.0986, -0.1464, -0.2467, ..., 0.4736, 0.4595, 0.4951],
[-0.1748, -0.1700, 0.1436, ..., 0.4585, 0.8359, 0.5918],
[-0.2993, -0.5112, 0.5020, ..., 0.1832, 0.3770, 0.0740],
...,
[-0.1707, 0.2238, -1.0098, ..., 0.2377, -0.2566, -0.1475],
[-0.2678, 0.6162, -0.7803, ..., 0.0831, 0.0305, 0.3169],
[-0.3025, -0.1704, -0.3274, ..., 0.3608, -0.1277, -0.2117]]])
block 3 tensor([[[ 0.1680, -0.1973, 0.2661, ..., -0.8584, 1.4062, -0.4258],
[-0.0076, -0.9214, -0.4199, ..., -0.2085, 0.3550, 0.6611],
[-0.2158, -0.6768, -0.1826, ..., 0.3328, 0.1467, 0.3203],
...,
[-0.6362, 0.3423, -1.6582, ..., 0.2013, -0.6396, -0.3462],
[-0.0599, 0.3320, -1.4980, ..., 0.0963, 0.3542, 0.3433],
[-0.4653, -0.4614, -0.9268, ..., 0.5674, -0.1849, -0.0605]]])
block 4 tensor([[[ 1.7744, -1.4297, 1.4746, ..., -1.5049, 2.2109, -0.3230],
[-0.5703, -1.1035, -1.2637, ..., 0.1472, 0.9717, 0.3552],
[-0.3464, -0.8906, -0.9473, ..., -0.1326, -0.0806, 0.3298],
...,
[-0.5708, 0.1072, -2.0820, ..., -0.1400, -0.2275, -0.5664],
[-1.0576, -0.2246, -2.3242, ..., -0.3274, 0.3459, 0.1765],
[-0.9800, -1.0176, -1.3828, ..., 0.3643, -0.6680, -0.0145]]])
block 5 tensor([[[ 1.3242, -1.4248, 1.2607, ..., -1.5957, 1.8232, -0.3926],
[-0.8477, -0.7812, -1.1465, ..., 0.5068, 0.7959, 0.4487],
[ 0.1035, -1.0010, -0.7876, ..., -0.0477, 0.0704, 0.3572],
...,
[-0.3098, -0.0284, -2.2227, ..., 0.5464, 0.1379, -0.5723],
[-0.9932, -0.2793, -2.6914, ..., 0.0000, 0.5757, 0.3267],
[-0.9204, -0.7842, -1.6943, ..., 0.4355, -0.4875, 0.1433]]])
block 6 tensor([[[ 1.1211, -1.9609, 0.9072, ..., -1.3203, 1.3613, -0.0569],
[-0.2979, -0.8257, -1.3096, ..., 0.7959, 0.4268, 0.8403],
[ 0.0416, -0.4849, -0.7119, ..., -0.1052, 0.2598, 0.3496],
...,
[-0.4631, 0.3843, -2.2461, ..., 0.2756, 0.1716, -0.2839],
[-0.8379, 0.1685, -2.9551, ..., 0.0771, 0.3660, 0.3999],
[-0.7383, -0.2847, -1.5391, ..., 0.2377, -0.2969, 0.4036]]])
block 7 tensor([[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
...,
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]])
block 8 tensor([[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
```
Ok found it! It's just that the default precision for CPU on MacBooks is float 16. If you change it to 32, it works fine | 2024-09-11T18:59:01 | 0.0 | [] | []
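For illustration, a sketch of the precision-selection heuristic the patch above settles on; `pick_precision` is a made-up name and the rules are only those stated in the diff:

```python
import torch


def pick_precision(training: bool, use_mps: bool = False) -> str:
    """Choose a default precision string, mirroring the patched helper."""
    if use_mps and torch.backends.mps.is_available():
        # MPS has no bfloat16 support, so fall back to float16
        return "16-mixed" if training else "16-true"
    if torch.cuda.is_available() and not torch.cuda.is_bf16_supported():
        # older CUDA GPUs also get float16 instead of bfloat16
        return "16-mixed" if training else "16-true"
    # CPUs and bf16-capable GPUs default to bfloat16
    return "bf16-mixed" if training else "bf16-true"


print(pick_precision(training=False))  # e.g. 'bf16-true' on a plain CPU machine
```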
||
FlexMeasures/flexmeasures | FlexMeasures__flexmeasures-113 | c42cb137186c279ba4e302d3d1f474055ee92911 | diff --git a/documentation/changelog.rst b/documentation/changelog.rst
index 1aaeea434..c13edad1e 100644
--- a/documentation/changelog.rst
+++ b/documentation/changelog.rst
@@ -6,6 +6,8 @@ FlexMeasures Changelog
v0.5.0 | May XX, 2021
===========================
+.. warning:: If you retrieve weather forecasts through FlexMeasures: we had to switch to OpenWeatherMap, as Dark Sky is closing. This requires an update to config variables – the new setting is called ``OPENWEATHERMAP_API_KEY``.
+
New features
-----------
* Allow plugins to overwrite UI routes and customise the teaser on the login form [see `PR #106 <http://www.github.com/SeitaBV/flexmeasures/pull/106>`_]
@@ -18,6 +20,7 @@ Bugfixes
Infrastructure / Support
----------------------
* Make assets use MW as their default unit and enforce that in CLI, as well (API already did) [see `PR #108 <http://www.github.com/SeitaBV/flexmeasures/pull/108>`_]
+* For weather forecasts, switch from Dark Sky (closed from Aug 1, 2021) to OpenWeatherMap API [see `PR #113 <http://www.github.com/SeitaBV/flexmeasures/pull/113>`_]
* Re-use the database between automated tests, if possible. This shaves 2/3rd off of the time it takes for the FlexMeasures test suite to run [see `PR #115 <http://www.github.com/SeitaBV/flexmeasures/pull/115>`_]
* Let CLI package and plugins use Marshmallow Field definitions [see `PR #125 <http://www.github.com/SeitaBV/flexmeasures/pull/125>`_]
@@ -35,6 +38,8 @@ Bugfixes
v0.4.0 | April 29, 2021
===========================
+.. warning:: Upgrading to this version requires running ``flexmeasures db upgrade`` (you can create a backup first with ``flexmeasures db-ops dump``).
+
New features
-----------
* Configure the UI menu with ``FLEXMEASURES_LISTED_VIEWS`` [see `PR #91 <https://github.com/SeitaBV/flexmeasures/pull/91>`_]
diff --git a/documentation/configuration.rst b/documentation/configuration.rst
index 954c09e4a..68b3c4759 100644
--- a/documentation/configuration.rst
+++ b/documentation/configuration.rst
@@ -136,15 +136,10 @@ Default: ``timedelta(hours=2 * 24)``
Tokens
------
-DARK_SKY_API_KEY
+OPENWEATHERMAP_API_KEY
^^^^^^^^^^^^^^^^
-Token for accessing the DarkSky weather forecasting service.
-
-.. note:: DarkSky will soon become non-public (Aug 1, 2021), so they are not giving out new tokens.
- We'll use another service soon (`see this issue <https://github.com/SeitaBV/flexmeasures/issues/3>`_).
- This is unfortunate.
- In the meantime, if you can't find anybody lending their token, consider posting weather forecasts to the FlexMeasures database yourself.
+Token for accessing the OPenWeatherMap weather forecasting service.
Default: ``None``
diff --git a/documentation/dev/data.rst b/documentation/dev/data.rst
index 4db5635db..1f398f03d 100644
--- a/documentation/dev/data.rst
+++ b/documentation/dev/data.rst
@@ -175,7 +175,7 @@ Then we import the data dump we made earlier:
.. code-block:: bash
- flask db-ops restore <DATABASE DUMP FILENAME>
+ flexmeasures db-ops restore <DATABASE DUMP FILENAME>
A potential ``alembic_version`` error should not prevent other data tables from being restored.
diff --git a/flexmeasures/api/v2_0/routes.py b/flexmeasures/api/v2_0/routes.py
index 60cc27f07..b32b7d7e1 100644
--- a/flexmeasures/api/v2_0/routes.py
+++ b/flexmeasures/api/v2_0/routes.py
@@ -477,7 +477,7 @@ def reset_user_password(id: int):
.. :quickref: User; Password reset
Reset the user's password, and send them instructions on how to reset the password.
- This endoint is useful from a security standpoint, in case of worries the password might be compromised.
+ This endpoint is useful from a security standpoint, in case of worries the password might be compromised.
It sets the current password to something random, invalidates cookies and auth tokens,
and also sends an email for resetting the password to the user.
diff --git a/flexmeasures/data/models/assets.py b/flexmeasures/data/models/assets.py
index 83cf4dcf8..c547bd427 100644
--- a/flexmeasures/data/models/assets.py
+++ b/flexmeasures/data/models/assets.py
@@ -232,7 +232,7 @@ def __init__(self, **kwargs):
super(Power, self).__init__(**kwargs)
def __repr__(self):
- return "<Power %.2f on Asset %s at %s by DataSource %s, horizon %s>" % (
+ return "<Power %.5f on Asset %s at %s by DataSource %s, horizon %s>" % (
self.value,
self.asset_id,
self.datetime,
diff --git a/flexmeasures/data/scripts/cli_tasks/data_add.py b/flexmeasures/data/scripts/cli_tasks/data_add.py
index 42278ebf1..82e265432 100644
--- a/flexmeasures/data/scripts/cli_tasks/data_add.py
+++ b/flexmeasures/data/scripts/cli_tasks/data_add.py
@@ -467,11 +467,12 @@ def create_forecasts(
@fm_add_data.command("external-weather-forecasts")
+@with_appcontext
@click.option(
"--region",
type=str,
default="",
- help="Name of the region (will create sub-folder, should later tag the forecast in the DB, probably).",
+ help="Name of the region (will create sub-folder if you store json files, should later probably tag the forecast in the DB).",
)
@click.option(
"--location",
@@ -486,7 +487,7 @@ def create_forecasts(
"--num_cells",
type=int,
default=1,
- help="Number of cells on the grid. Only used if a region of interest has been mapped in the location parameter.",
+ help="Number of cells on the grid. Only used if a region of interest has been mapped in the location parameter. Defaults to 1.",
)
@click.option(
"--method",
@@ -497,13 +498,13 @@ def create_forecasts(
@click.option(
"--store-in-db/--store-as-json-files",
default=False,
- help="Store forecasts in the database, or simply save as json files.",
+ help="Store forecasts in the database, or simply save as json files. (defaults to json files)",
)
def collect_weather_data(region, location, num_cells, method, store_in_db):
"""
Collect weather forecasts from the DarkSky API
- This function can get weather data for one location or for several location within
+ This function can get weather data for one location or for several locations within
a geometrical grid (See the --location parameter).
"""
from flexmeasures.data.scripts.grid_weather import get_weather_forecasts
diff --git a/flexmeasures/data/scripts/grid_weather.py b/flexmeasures/data/scripts/grid_weather.py
index 09e927e88..99b21924c 100755
--- a/flexmeasures/data/scripts/grid_weather.py
+++ b/flexmeasures/data/scripts/grid_weather.py
@@ -1,13 +1,14 @@
#!/usr/bin/env python
import os
-from typing import Tuple, List
+from typing import Tuple, List, Dict
import json
from datetime import datetime
import click
from flask import Flask, current_app
-from forecastiopy import ForecastIO
+import requests
+import pytz
from flexmeasures.utils.time_utils import as_server_time, get_timezone
from flexmeasures.utils.geo_utils import compute_irradiance
@@ -18,7 +19,7 @@
from flexmeasures.data.models.data_sources import DataSource
FILE_PATH_LOCATION = "/../raw_data/weather-forecasts"
-DATA_SOURCE_NAME = "DarkSky"
+DATA_SOURCE_NAME = "OpenWeatherMap"
class LatLngGrid(object):
@@ -217,12 +218,12 @@ def locations_hex(self) -> List[Tuple[float, float]]:
sw = (
lat + self.cell_size_lat / 2,
lng - self.cell_size_lat / 3 ** (1 / 2) / 2,
- ) # South west coord.
+ ) # South west coordinates
locations.append(sw)
se = (
lat + self.cell_size_lat / 2,
lng + self.cell_size_lng / 3 ** (1 / 2) / 2,
- ) # South east coord.
+ ) # South east coordinates
locations.append(se)
return locations
@@ -317,22 +318,30 @@ def get_data_source() -> DataSource:
return data_source
-def call_darksky(api_key: str, location: Tuple[float, float]) -> dict:
- """Make a single call to the Dark Sky API and return the result parsed as dict"""
- return ForecastIO.ForecastIO(
- api_key,
- units=ForecastIO.ForecastIO.UNITS_SI,
- lang=ForecastIO.ForecastIO.LANG_ENGLISH,
- latitude=location[0],
- longitude=location[1],
- extend="hourly",
- ).forecast
+def call_openweatherapi(
+ api_key: str, location: Tuple[float, float]
+) -> Tuple[int, List[Dict]]:
+ """
+ Make a single "one-call" to the Open Weather API and return the API timestamp as well as the 48 hourly forecasts.
+ See https://openweathermap.org/api/one-call-api for docs.
+ Note that the first forecast is about the current hour.
+ """
+ query_str = f"lat={location[0]}&lon={location[1]}&units=metric&exclude=minutely,daily,alerts&appid={api_key}"
+ res = requests.get(f"http://api.openweathermap.org/data/2.5/onecall?{query_str}")
+ assert (
+ res.status_code == 200
+ ), f"OpenWeatherMap returned status code {res.status_code}: {res.text}"
+ data = res.json()
+ return data["current"]["dt"], data["hourly"]
def save_forecasts_in_db(
- api_key: str, locations: List[Tuple[float, float]], data_source: DataSource
+ api_key: str,
+ locations: List[Tuple[float, float]],
+ data_source: DataSource,
+ max_degree_difference_for_nearest_weather_sensor: int = 2,
):
- """Process the response from DarkSky into Weather timed values.
+ """Process the response from OpenWeatherMap API into Weather timed values.
Collects all forecasts for all locations and all sensors at all locations, then bulk-saves them.
"""
click.echo("[FLEXMEASURES] Getting weather forecasts:")
@@ -344,22 +353,24 @@ def save_forecasts_in_db(
for location in locations:
click.echo("[FLEXMEASURES] %s, %s" % location)
- forecasts = call_darksky(api_key, location)
+ api_timestamp, forecasts = call_openweatherapi(api_key, location)
time_of_api_call = as_server_time(
- datetime.fromtimestamp(forecasts["currently"]["time"], get_timezone())
+ datetime.fromtimestamp(api_timestamp, tz=get_timezone())
).replace(second=0, microsecond=0)
click.echo(
- "[FLEXMEASURES] Called Dark Sky API successfully at %s." % time_of_api_call
+ "[FLEXMEASURES] Called OpenWeatherMap API successfully at %s."
+ % time_of_api_call
)
- # map sensor name in our db to sensor name/label in dark sky response
+ # map sensor name in our db to sensor name/label in OWM response
sensor_name_mapping = dict(
- temperature="temperature", wind_speed="windSpeed", radiation="cloudCover"
+ temperature="temp", wind_speed="wind_speed", radiation="clouds"
)
- for fc in forecasts["hourly"]["data"]:
+ # loop through forecasts, including the one of current hour (horizon 0)
+ for fc in forecasts:
fc_datetime = as_server_time(
- datetime.fromtimestamp(fc["time"], get_timezone())
+ datetime.fromtimestamp(fc["dt"], get_timezone())
).replace(second=0, microsecond=0)
fc_horizon = fc_datetime - time_of_api_call
click.echo(
@@ -375,6 +386,16 @@ def save_forecasts_in_db(
flexmeasures_sensor_type, lat=location[0], lng=location[1]
)
if weather_sensor is not None:
+ # Complain if the nearest weather sensor is further away than 2 degrees
+ if abs(
+ location[0] - weather_sensor.latitude
+ ) > max_degree_difference_for_nearest_weather_sensor or abs(
+ location[1] - weather_sensor.longitude
+ > max_degree_difference_for_nearest_weather_sensor
+ ):
+ raise Exception(
+ f"No sufficiently close weather sensor found (within 2 degrees distance) for type {flexmeasures_sensor_type}! We're looking for: {location}, closest available: ({weather_sensor.latitude}, {weather_sensor.longitude})"
+ )
weather_sensors[flexmeasures_sensor_type] = weather_sensor
else:
raise Exception(
@@ -383,13 +404,14 @@ def save_forecasts_in_db(
)
fc_value = fc[needed_response_label]
- # the radiation is not available in dark sky -> we compute it ourselves
+ # the radiation is not available in OWM -> we compute it ourselves
if flexmeasures_sensor_type == "radiation":
fc_value = compute_irradiance(
location[0],
location[1],
fc_datetime,
- fc[needed_response_label],
+ # OWM sends cloud coverage in percent, we need a ratio
+ fc[needed_response_label] / 100.0,
)
db_forecasts.append(
@@ -424,15 +446,19 @@ def save_forecasts_as_json(
click.echo("[FLEXMEASURES] Getting weather forecasts:")
click.echo("[FLEXMEASURES] Latitude, Longitude")
click.echo("[FLEXMEASURES] ----------------------")
- # UTC timestamp to remember when data was fetched.
- now_str = datetime.utcnow().strftime("%Y-%m-%dT%H-%M-%S")
- os.mkdir("%s/%s" % (data_path, now_str))
for location in locations:
click.echo("[FLEXMEASURES] %s, %s" % location)
- forecasts = call_darksky(api_key, location)
- forecasts_file = "%s/%s/forecast_lat_%s_lng_%s.json" % (
- data_path,
- now_str,
+ api_timestamp, forecasts = call_openweatherapi(api_key, location)
+ time_of_api_call = as_server_time(
+ datetime.fromtimestamp(api_timestamp, tz=pytz.utc)
+ ).replace(second=0, microsecond=0)
+ now_str = time_of_api_call.strftime("%Y-%m-%dT%H-%M-%S")
+ path_to_files = os.path.join(data_path, now_str)
+ if not os.path.exists(path_to_files):
+ click.echo(f"Making directory: {path_to_files} ...")
+ os.mkdir(path_to_files)
+ forecasts_file = "%s/forecast_lat_%s_lng_%s.json" % (
+ path_to_files,
str(location[0]),
str(location[1]),
)
@@ -451,11 +477,11 @@ def get_weather_forecasts(
):
"""
Get current weather forecasts for a latitude/longitude grid and store them in individual json files.
- Note that 1000 free calls per day can be made to the Dark Sky API,
- so we can make a call every 15 minutes for up to 10 assets or every hour for up to 40 assets.
+ Note that 1000 free calls per day can be made to the OpenWeatherMap API,
+ so we can make a call every 15 minutes for up to 10 assets or every hour for up to 40 assets (or get a paid account).
"""
- if app.config.get("DARK_SKY_API_KEY") is None:
- raise Exception("No DarkSky API key available.")
+ if app.config.get("OPENWEATHERMAP_API_KEY") is None:
+ raise Exception("Setting OPENWEATHERMAP_API_KEY not available.")
if (
location.count(",") == 0
@@ -504,7 +530,7 @@ def get_weather_forecasts(
else:
raise Exception("location parameter '%s' has too many locations." % location)
- api_key = app.config.get("DARK_SKY_API_KEY")
+ api_key = app.config.get("OPENWEATHERMAP_API_KEY")
# Save the results
if store_in_db:
diff --git a/flexmeasures/utils/config_defaults.py b/flexmeasures/utils/config_defaults.py
index 685736d0f..4c787a8c1 100644
--- a/flexmeasures/utils/config_defaults.py
+++ b/flexmeasures/utils/config_defaults.py
@@ -65,7 +65,7 @@ class Config(object):
CORS_RESOURCES: Union[dict, list, str] = [r"/api/*"]
CORS_SUPPORTS_CREDENTIALS: bool = True
- DARK_SKY_API_KEY: Optional[str] = None
+ OPENWEATHERMAP_API_KEY: Optional[str] = None
MAPBOX_ACCESS_TOKEN: Optional[str] = None
diff --git a/requirements/app.in b/requirements/app.in
index 82dd94e72..2264d838c 100644
--- a/requirements/app.in
+++ b/requirements/app.in
@@ -26,7 +26,6 @@ rq-win; os_name == 'nt' or os_name == 'win'
redis; os_name == 'nt' or os_name == 'win'
tldextract
pyomo>=5.6
-forecastiopy
pvlib
# the following three are optional in pvlib, but we use them
netCDF4
diff --git a/requirements/app.txt b/requirements/app.txt
index 977070295..05c12e7b2 100644
--- a/requirements/app.txt
+++ b/requirements/app.txt
@@ -99,8 +99,6 @@ flask==1.1.2
# flask-sslify
# flask-wtf
# rq-dashboard
-forecastiopy==0.22
- # via -r requirements/app.in
greenlet==1.0.0
# via sqlalchemy
humanize==3.3.0
@@ -260,7 +258,6 @@ requests-file==1.5.1
# via tldextract
requests==2.25.1
# via
- # forecastiopy
# pvlib
# requests-file
# siphon
| DarkSky API replacement
DarkSky will discontinue service end of 2021. Alternatives:
- [Climacell](https://www.climacell.co/weather-api/)
- [OpenWeatherMap](https://openweathermap.org/api) ([with migration tutorial](https://openweathermap.org/darksky-openweather))
- [Yr](https://hjelp.yr.no/hc/en-us/articles/360001940793-Free-weather-data-service-from-Yr)
- [Some more](https://www.climacell.co/blog/top-8-weather-apis-for-2020/)
- [And more](https://medium.com/@AriNoman/8-weather-api-alternatives-now-that-darksky-is-shutting-down-42a5ac395f93)
| We seem to be most interested in OWM and Climacell at the moment. Of course, the optimal outcome for FlexMeasures is to support multiple weather services, but that is probably one ticket at a time, and we can do it if someone needs it.
Useful stuff in [this thread](https://github.com/n0bel/PiClock/issues/185), too, where the developer will likely switch to Climacell, and some fork already switched to OpenWeatherMap.
Update: "Service to existing users and subscribers of the Android app will now continue until August 1, 2020"
Also, we have been using [a wrapper for DarkSky](https://github.com/dvdme/forecastiopy), so the replacement will be a bit more work. But I believe it's still overseeable, as the work does not cross module boundaries, it mostly comes down to replacing one function in `data/scripts/grid_weather.py`.
Found a well-supported Python wrapper around OpenWeatherMap: https://github.com/csparpa/pyowm
Branch [issue-3-DarkSky_API_replacement](https://github.com/SeitaBV/flexmeasures/tree/issue-3-DarkSky_API_replacement) created! | 2021-05-03T09:28:56 | 0.0 | [] | []
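For illustration, the merged change calls the OpenWeatherMap one-call endpoint directly with `requests` instead of a wrapper library; a stripped-down sketch of that call (the API key and coordinates are placeholders):

```python
import requests


def fetch_hourly_forecast(api_key: str, lat: float, lng: float):
    """Return (API timestamp, list of hourly forecast dicts) from OpenWeatherMap."""
    query = (
        f"lat={lat}&lon={lng}&units=metric"
        f"&exclude=minutely,daily,alerts&appid={api_key}"
    )
    res = requests.get(f"http://api.openweathermap.org/data/2.5/onecall?{query}")
    res.raise_for_status()
    data = res.json()
    return data["current"]["dt"], data["hourly"]


# usage (placeholder key); each hourly entry carries e.g. 'temp', 'wind_speed', 'clouds'
# api_time, hourly = fetch_hourly_forecast("YOUR_OWM_KEY", 52.0, 4.4)
```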
||
ufal/factgenie | ufal__factgenie-45 | d4ffa9f14d94ee65aeb83485ea1d1290f6eecb64 | diff --git a/factgenie/main.py b/factgenie/main.py
index cd77b705..59e99ced 100755
--- a/factgenie/main.py
+++ b/factgenie/main.py
@@ -133,6 +133,15 @@ def annotate():
db = campaign.db
metadata = campaign.metadata
annotation_set = utils.get_annotator_batch(app, campaign, db, prolific_pid, session_id, study_id)
+
+ if not annotation_set:
+ # no more available examples
+ return render_template(
+ "campaigns/closed.html",
+ host_prefix=app.config["host_prefix"],
+ metadata=metadata,
+ )
+
return render_template(
f"campaigns/{campaign.campaign_id}/annotate.html",
host_prefix=app.config["host_prefix"],
diff --git a/factgenie/templates/campaigns/closed.html b/factgenie/templates/campaigns/closed.html
new file mode 100755
index 00000000..7db9b595
--- /dev/null
+++ b/factgenie/templates/campaigns/closed.html
@@ -0,0 +1,58 @@
+<!DOCTYPE html>
+<html>
+
+<head>
+ <title>Annotation Page Unavailable</title>
+ <link rel="stylesheet" media="screen" href="{{ host_prefix }}/static/css/bootstrap.min.css">
+ <link rel="stylesheet" type="text/css" href="{{ host_prefix }}/static/css/custom.css">
+ <link rel="shortcut icon" href="{{ host_prefix }}/static/img/favicon.ico">
+ <meta name="viewport" content="width=1024">
+ <script src="{{ host_prefix }}/static/js/jquery.min.js"></script>
+ <script src="{{ host_prefix }}/static/js/popper.min.js"></script>
+ <script src="{{ host_prefix }}/static/js/bootstrap.min.js"></script>
+ <link href="https://netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.css" rel="stylesheet">
+</head>
+
+
+<body class="body">
+ <nav class="navbar navbar-light bg-annotate">
+ <div class="container navbar-left">
+ <a class="navbar-brand" href="#">
+ </a>
+ <div class="navblock" id="actions-area">
+ <ul class="pagination" id="nav-example-cnt">
+ </ul>
+ </div>
+ </div>
+ </nav>
+
+
+
+ <div id="overlay-start" class="overlay">
+ <div id="overlay-start-content" class="overlay-content">
+ <h1>Welcome!</h1>
+ <p>Unfortunately, there are no available examples for the campaign.
+
+ <p>That can mean that the campaign
+ is <b>full or
+ closed</b>.</p>
+
+ <p>Please, contact the campaign authors for more information.</p>
+ <p>Thank you for your interest!</p>
+ </div>
+ </div>
+
+
+
+</body>
+
+
+<script>
+ window.url_prefix = "{{ host_prefix }}";
+ window.mode = "annotate";
+ window.annotator_id = "{{ annotator_id }}";
+ window.compl_code = "{{ compl_code }}";
+ window.metadata = {{ metadata | tojson | safe }};
+</script>
+
+<script src="{{ host_prefix }}/static/js/factgenie.js"></script>
\ No newline at end of file
diff --git a/factgenie/utils.py b/factgenie/utils.py
index 014e42b6..1bef39ed 100644
--- a/factgenie/utils.py
+++ b/factgenie/utils.py
@@ -222,10 +222,14 @@ def free_idle_examples(db):
def select_batch_idx(db, seed):
free_examples = db[db["status"] == "free"]
+ assigned_examples = db[db["status"] == "assigned"]
- # if no free examples, take the oldest assigned example
- if len(free_examples) == 0:
- free_examples = db[db["status"] == "assigned"]
+ if len(free_examples) == 0 and len(assigned_examples) == 0:
+ raise ValueError("No examples available")
+
+ # if no free examples but still assigned examples, take the oldest assigned example
+ if len(free_examples) == 0 and len(assigned_examples) > 0:
+ free_examples = assigned_examples
free_examples = free_examples.sort_values(by=["start"])
free_examples = free_examples.head(1)
@@ -246,7 +250,12 @@ def get_annotator_batch(app, campaign, db, prolific_pid, session_id, study_id):
seed = random.seed(str(start) + prolific_pid + session_id + study_id)
- batch_idx = select_batch_idx(db, seed)
+ try:
+ batch_idx = select_batch_idx(db, seed)
+ except ValueError:
+ # no available batches
+ return []
+
if prolific_pid != "test":
db = free_idle_examples(db)
| Error page after crowdsourcing campaign is finished
After examples in the human evaluation campaign are finished, we get an error:
```
Traceback (most recent call last)
File "/lnet/work/people/kasner/virtualenv/factgenie/lib/python3.10/site-packages/flask/app.py", line 1498, in __call__
return self.wsgi_app(environ, start_response)
File "/lnet/work/people/kasner/virtualenv/factgenie/lib/python3.10/site-packages/flask/app.py", line 1476, in wsgi_app
response = self.handle_exception(e)
File "/lnet/work/people/kasner/virtualenv/factgenie/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
File "/lnet/work/people/kasner/virtualenv/factgenie/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/lnet/work/people/kasner/virtualenv/factgenie/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
File "/lnet/work/people/kasner/virtualenv/factgenie/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/lnet/work/people/kasner/projects/factgenie/factgenie/main.py", line 130, in annotate
annotation_set = utils.get_annotator_batch(app, campaign, db, prolific_pid, session_id, study_id)
File "/lnet/work/people/kasner/projects/factgenie/factgenie/utils.py", line 246, in get_annotator_batch
batch_idx = select_batch_idx(db, seed)
File "/lnet/work/people/kasner/projects/factgenie/factgenie/utils.py", line 229, in select_batch_idx
logger.info(f"Annotating extra example {free_examples.index[0]}")
File "/lnet/work/people/kasner/virtualenv/factgenie/lib/python3.10/site-packages/pandas/core/indexes/base.py", line 5389, in __getitem__
return getitem(key)
```
We should display a more user-friendly page instead.
| 2024-07-15T14:09:12 | 0.0 | [] | [] |
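For illustration, the crash above is an index into an empty DataFrame; the fix guards the selection and lets the route render a "campaign closed" page instead. A minimal version of that guard (column values are made up):

```python
import pandas as pd


def select_batch_idx(db: pd.DataFrame) -> int:
    """Pick the next batch to annotate, or fail loudly when nothing is left."""
    free = db[db["status"] == "free"]
    assigned = db[db["status"] == "assigned"]
    if free.empty and assigned.empty:
        raise ValueError("No examples available")  # caller renders the 'closed' page
    pool = free if not free.empty else assigned
    return int(pool.sort_values(by=["start"]).head(1)["batch_idx"].values[0])


db = pd.DataFrame({"status": ["finished"], "start": [0], "batch_idx": [0]})
try:
    select_batch_idx(db)
except ValueError:
    print("render campaigns/closed.html")
```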
|||
baler-collaboration/baler | baler-collaboration__baler-186 | 80ffc15603f027dfa78e7e2954f16a2e4cdf89aa | diff --git a/baler/modules/helper.py b/baler/modules/helper.py
index d1dc9386..5e8f3404 100644
--- a/baler/modules/helper.py
+++ b/baler/modules/helper.py
@@ -98,8 +98,7 @@ class Config:
def create_default_config(project_name: str) -> str:
return f"""
def set_config(c):
- c.data_path = "data/{project_name}/{project_name}_data.npy"
- c.names_path = "data/{project_name}/{project_name}_names.npy"
+ c.input_path = "data/{project_name}/{project_name}_data.npz"
c.compression_ratio = 2.0
c.epochs = 5
c.energy_conversion = False
| new project old config
--mode = newProject creates old config file with npy data paths instead of new npz format
| 2023-03-22T15:03:24 | 0.0 | [] | [] |
|||
ansible/ansible-navigator | ansible__ansible-navigator-1071 | 4a834bacfade623e1e8bbe154caf5db27de1c7c0 | diff --git a/.flake8 b/.flake8
index b5eed6065..c40bbf047 100644
--- a/.flake8
+++ b/.flake8
@@ -69,7 +69,6 @@ per-file-ignores =
src/ansible_navigator/configuration_subsystem/navigator_configuration.py: D200, D205, D400, DAR201
src/ansible_navigator/configuration_subsystem/navigator_post_processor.py: D200, D400, DAR101, DAR201
src/ansible_navigator/configuration_subsystem/parser.py: D200, D400, DAR101, DAR201
- src/ansible_navigator/initialization.py: D202, D205, D400, D403, DAR101, DAR201
src/ansible_navigator/tm_tokenize/__init__.py: D104
src/ansible_navigator/tm_tokenize/compiler.py: D100, D101, D102
src/ansible_navigator/tm_tokenize/fchainmap.py: D100, D101, D105
diff --git a/src/ansible_navigator/initialization.py b/src/ansible_navigator/initialization.py
index 4ae2f919e..30fae9bf0 100644
--- a/src/ansible_navigator/initialization.py
+++ b/src/ansible_navigator/initialization.py
@@ -1,5 +1,6 @@
-"""initialization helpers that are used early in application
-initialization and are specific to ansible_navigator
+"""Initialization helpers that are used early in application initialization.
+
+These helpers are specific to ansible_navigator.
"""
import logging
import os
@@ -24,18 +25,22 @@
def error_and_exit_early(exit_messages: List[ExitMessage]) -> NoReturn:
- """get out of here fast"""
+ """Exit the application early.
+
+ :param exit_messages: List of all exit messages to be printed
+ """
for exit_msg in exit_messages:
print(exit_msg)
sys.exit(1)
def find_config() -> Tuple[List[LogMessage], List[ExitMessage], Optional[str], C]:
- """
- Find a configuration file, logging each step.
- Return (log messages, path).
+ """Find a configuration file, logging each step.
+
If the config can't be found/loaded, use default settings.
If it's found but empty or not well formed, bail out.
+
+ :returns: All log messages and config path
"""
messages: List[LogMessage] = []
exit_messages: List[ExitMessage] = []
@@ -81,9 +86,12 @@ def find_config() -> Tuple[List[LogMessage], List[ExitMessage], Optional[str], C
def get_and_check_collection_doc_cache(
collection_doc_cache_path: str,
) -> Tuple[List[LogMessage], List[ExitMessage], Optional[KeyValueStore]]:
- """ensure the collection doc cache
- has the current version of the application
- as a safeguard, always delete and rebuild if not
+ """Ensure the collection doc cache has current application version as a safeguard.
+
+ Always delete and rebuild if not.
+
+ :param collection_doc_cache_path: Path for collection documentation cache
+ :returns: All messages and collection cache or None
"""
messages: List[LogMessage] = []
exit_messages: List[ExitMessage] = []
@@ -144,12 +152,12 @@ def parse_and_update(
Return after the CDC is mounted, even if exit messages are generated, the CDC may still
be needed. e.g. ``:collections --ee NotBool``.
+ :param params: A list of parameters e.g. ['-x', 'value']
:param args: The application args
:param apply_previous_cli_entries: Should previous params from the CLI be applied
:param attach_cdc: Should the collection doc cache be attached to the args.internals
:returns: Log and exit messages
"""
-
messages: List[LogMessage] = []
exit_messages: List[ExitMessage] = []
| Fix for winston.cfg in path.home
| 2022-03-09T17:48:54 | 0.0 | [] | [] |
|||
pepkit/geofetch | pepkit__geofetch-72 | 4fa8f0144b76ce746ba2646d0128c6508552716d | diff --git a/.gitignore b/.gitignore
index 407053a..d97b82c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -89,3 +89,9 @@ peppy.egg-info/
docs_jupyter/*
!docs_jupyter/*.ipynb
!docs_jupyter/build/*
+
+# envs
+.env/
+env/
+.venv/
+venv/
\ No newline at end of file
diff --git a/docs/changelog.md b/docs/changelog.md
index 95d2b09..27616f6 100644
--- a/docs/changelog.md
+++ b/docs/changelog.md
@@ -1,5 +1,8 @@
# Changelog
+## [0.10.0] -- 2022-07-07
+- Fixed subprocesses continuing to run during program interrupt.
+
## [0.9.0] -- 2022-06-20
- Updated `--pipeline-interface` argument that adds it in for looper. `--pipeline-interface` argument was divided into:
`--pipeline-samples` and `--pipeline-project`.
diff --git a/geofetch/_version.py b/geofetch/_version.py
index 3e2f46a..61fb31c 100644
--- a/geofetch/_version.py
+++ b/geofetch/_version.py
@@ -1,1 +1,1 @@
-__version__ = "0.9.0"
+__version__ = "0.10.0"
diff --git a/geofetch/geofetch.py b/geofetch/geofetch.py
index b3a5501..32c6782 100755
--- a/geofetch/geofetch.py
+++ b/geofetch/geofetch.py
@@ -8,7 +8,6 @@
import csv
import os
import re
-import subprocess
import sys
from string import punctuation
@@ -21,6 +20,7 @@
parse_SOFT_line,
convert_size,
clean_soft_files,
+ run_subprocess,
)
from ._version import __version__
@@ -1162,7 +1162,7 @@ def download_SRA_file(self, run_name):
t = 0
while True:
t = t + 1
- subprocess_return = subprocess.call(
+ subprocess_return = run_subprocess(
["prefetch", run_name, "--max-size", "50000000"]
)
@@ -1222,7 +1222,7 @@ def sra_bam_conversion(self, bam_file, run_name):
# sam-dump -u SRR020515.sra | samtools view -bS - > test.bam
self._LOGGER.info(f"Conversion command: {cmd}")
- subprocess.call(cmd, shell=True)
+ run_subprocess(cmd, shell=True)
@staticmethod
def update_columns(metadata, experiment_name, sample_name, read_type):
@@ -1286,7 +1286,7 @@ def sra_bam_conversion2(self, bam_file, run_name, picard_path=None):
+ os.path.join(self.sra_folder, run_name + ".sra")
)
self._LOGGER.info(f"Command: {cmd}")
- subprocess.call(cmd, shell=True)
+ run_subprocess(cmd, shell=True)
if not picard_path:
self._LOGGER.warning("Can't convert the fastq to bam without picard path")
else:
@@ -1306,7 +1306,7 @@ def sra_bam_conversion2(self, bam_file, run_name, picard_path=None):
cmd += " SAMPLE_NAME=" + run_name
cmd += " QUIET=true"
self._LOGGER.info(f"Conversion command: {cmd}")
- subprocess.call(cmd, shell=True)
+ run_subprocess(cmd, shell=True)
def write_subannotation(self, tabular_data, filepath, column_names=None):
"""
@@ -1354,7 +1354,7 @@ def download_file(self, file_url, data_folder, new_name=None, sleep_after=0.5):
# if dir does not exist:
if not os.path.exists(data_folder):
os.makedirs(data_folder)
- ret = subprocess.call(
+ ret = run_subprocess(
["wget", "--no-clobber", file_url, "-O", full_filepath]
)
self._LOGGER.info(f"\033[38;5;242m{ret}\033[0m")
diff --git a/geofetch/utils.py b/geofetch/utils.py
index 55997ff..184e8a6 100644
--- a/geofetch/utils.py
+++ b/geofetch/utils.py
@@ -3,6 +3,7 @@
import logging
import os
import subprocess
+import sys
import re
@@ -90,7 +91,7 @@ def parse_accessions(input_arg, metadata_folder, just_metadata=False):
run_ids.append(r_id)
_LOGGER.info("{} run(s)".format(len(run_ids)))
for r_id in run_ids:
- subprocess.call(["prefetch", r_id, "--max-size", "50000000"])
+ run_subprocess(["prefetch", r_id, "--max-size", "50000000"])
# Early return if we've just handled SRP accession directly.
return
else:
@@ -227,7 +228,7 @@ def fetch_metadata(self, outpath=None, typename=None):
else:
cmd = "wget {}".format(full_url)
- subprocess.call(cmd.split(" "))
+ run_subprocess(cmd.split(" "))
@staticmethod
def _validate(accn):
@@ -321,3 +322,18 @@ def clean_soft_files(meta_dir: str):
or item.endswith("SRA_filt.csv")
):
os.remove(os.path.join(meta_dir, item))
+
+
+def run_subprocess(*args, **kwargs):
+ """Wrapper to gracefully start and stop a running subprocess"""
+ p = subprocess.Popen(*args, **kwargs)
+ try:
+ return p.wait()
+ except KeyboardInterrupt:
+ _LOGGER.info(f"Terminating subprocess: {p.pid} | ({p.args})")
+ try:
+ p.terminate()
+ print("Pipeline aborted.")
+ except OSError as ose:
+ _LOGGER.warn(f"Exception raised during subprocess termination: {ose}")
+ sys.exit(1)
| killing child processes
If you `ctrl+c` signal a running process that has already started a `prefetch`, `geofetch` will quit but the subprocess appears to complete.
perhaps we need it to monitor the child process and kill it, too.
| just bit me again, so this is still an issue.
I see two places where `prefetch` is called:
https://github.com/pepkit/geofetch/blob/6f8b32a80c52c9ec957d7f609193367113b77c15/geofetch/geofetch.py#L1166
https://github.com/pepkit/geofetch/blob/6f8b32a80c52c9ec957d7f609193367113b77c15/geofetch/utils.py#L93
Does it make sense to switch all the `subprocess.call` functions to:
```python
p = subprocess.Popen(["prefetch", run_name, "--max-size", "50000000"])
PROCESS_LIST.append(p)
PROCESS_LIST[-1].wait()
...
```
since [`subprocess.call()` is just `Popen().wait()`](https://github.com/python/cpython/blob/b01cfb8f69e0990be0615b8613a7412f88bd217a/Lib/subprocess.py#L552-L566). `PROCESS_LIST` can be a global, maybe?
Then in the global [`KeyboardInterupt`](https://github.com/pepkit/geofetch/blob/50e2d272cc4a2d37756193b2334841ee076fbe0b/geofetch/geofetch.py#L2105) handler we can loop through `PROCESS_LIST` and kill all processes?
```python
for p in PROCESS_LIST:
try:
p.kill()
except:
_LOGGER.warn("Error killing process...")
```
@nsheff @Khoroshevskyi | 2022-06-28T15:57:55 | 0.0 | [] | [] |
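For illustration, the merged fix wraps `subprocess.Popen` in a helper rather than keeping a global process list; a self-contained sketch of that pattern (the `sleep` call is just a stand-in for `prefetch`):

```python
import subprocess
import sys


def run_subprocess(*args, **kwargs):
    """Start a child process and make sure Ctrl+C also terminates the child."""
    p = subprocess.Popen(*args, **kwargs)
    try:
        return p.wait()
    except KeyboardInterrupt:
        print(f"Terminating subprocess {p.pid}", file=sys.stderr)
        p.terminate()
        sys.exit(1)


# behaves like subprocess.call, but the child dies with the parent
run_subprocess(["sleep", "1"])
```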