message
stringlengths 13
484
| diff
stringlengths 38
4.63k
|
---|---|
Add index=False description to read_parquet
Discussion [here](https://github.com/dask/dask/issues/2161) | @@ -42,8 +42,9 @@ def read_parquet(path, columns=None, filters=None, categories=None, index=None,
List of column names to load
filters: list
List of filters to apply, like ``[('x', '>' 0), ...]``
- index: string or None
- Name of index column to use if that column is sorted
+ index: string or None (default) or False
+ Name of index column to use if that column is sorted;
+ False to force dask to not use any column as the index
categories: list, dict or None
For any fields listed here, if the parquet encoding is Dictionary,
the column will be created with dtype category. Use only if it is
|
[TEST] Fix test_topi_batch_matmul_tensorcore.py:test_batch_matmul requirement
* this test current sets a requirement to "uses_gpu", which
causes it to fail in cpu-only machine
* this patch changes it to be "requires_tensorcore", as per discussion
on issue | @@ -63,7 +63,7 @@ def verify_batch_matmul(x_batch, y_batch, M, N, K):
check_device("cuda")
[email protected]_gpu
[email protected]_tensorcore
def test_batch_matmul():
verify_batch_matmul(1, 1, 16, 16, 32)
verify_batch_matmul(5, 5, 16, 16, 32)
|
Update gtk.py
[GTK] Fix close window confirmation | @@ -91,7 +91,7 @@ class BrowserView:
message_format=localization['global.quitConfirmation'])
result = dialog.run()
if result == gtk.ResponseType.OK:
- close_window()
+ self.close_window()
else:
dialog.destroy()
return True
|
More consistent formatting in RELEASE.md
Consistently enclose filenames referred to througout the release process in
backticks to ensure they are rendered in the code style. | # Release process
-* Ensure docs/CHANGELOG.md contains a one-line summary of each [notable
+* Ensure `docs/CHANGELOG.md` contains a one-line summary of each [notable
change](https://keepachangelog.com/) since the prior release
-* Update setup.py and `tuf/__init__.py` to the new version number vA.B.C
+* Update `setup.py` and `tuf/__init__.py` to the new version number vA.B.C
* Test packaging, uploading to Test PyPI and installing from a virtual environment
* Remove existing dist build dirs
* Create source dist `python setup.py sdist`
* Sign the dists `gpg --detach-sign -a dist/tuf-vA.B.C.tar.gz`
* Upload to test PyPI `twine upload --repository testpypi dist/*`
* Verify the uploaded package https://testpypi.python.org/pypi/tuf/
-* Create a PR with updated CHANGELOG.md and version bumps
+* Create a PR with updated `CHANGELOG.md` and version bumps
* Once the PR is merged, pull the updated `develop` branch locally
* Create a signed tag matching the updated version number on the merge commit
`git tag --sign vA.B.C -m "vA.B.C"`
* Push the tag to GitHub `git push origin vA.B.C`
-* Create a new release on GitHub, copying the CHANGELOG.md entries for the release
+* Create a new release on GitHub, copying the `CHANGELOG.md` entries for the
+ release
* Create a package for the formal release
* Remove existing dist build dirs
* Create source dist `python setup.py sdist`
|
Use FileWrapper for both kinds of blobs
Just because blob has an __iter__ method, doesn't guarantee it can iterate | @@ -1578,10 +1578,7 @@ class DataFileDownloadDetail(BaseProjectDataView):
try:
data_file = DataFile.objects.filter(domain=self.domain).get(pk=kwargs['pk'])
blob = data_file.get_blob()
- response = StreamingHttpResponse(
- blob if hasattr(blob, '__iter__') else FileWrapper(blob),
- content_type=data_file.content_type
- )
+ response = StreamingHttpResponse(FileWrapper(blob), content_type=data_file.content_type)
except (DataFile.DoesNotExist, NotFound):
raise Http404
response['Content-Disposition'] = 'attachment; filename="' + data_file.filename + '"'
|
Re-adds on_ludwig_end.
* Re-adds on_ludwig_end.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see | @@ -314,6 +314,13 @@ class Callback(ABC):
"""
pass
+ def on_ludwig_end(self):
+ """Convenience method for any cleanup.
+
+ Not yet implemented.
+ """
+ pass
+
def prepare_ray_tune(self, train_fn: Callable, tune_config: Dict[str, Any], tune_callbacks: List[Callable]):
"""Configures Ray Tune callback and config.
|
fix ProcessGroupGlooTest
Summary:
Pull Request resolved:
This test had 2 issues. A timeout would occasionally happen due to a timeout of 50ms, and CUDA could would get compiled and run on CPU, leading to errors. This PR fixes those issues. | #include <sstream>
#include <thread>
+#include <torch/cuda.h>
+
#include <c10d/FileStore.hpp>
#include <c10d/ProcessGroupGloo.hpp>
#include <c10d/test/TestUtils.hpp>
@@ -37,9 +39,10 @@ class SignalTest {
std::shared_ptr<::c10d::ProcessGroup::Work> run(int rank, int size) {
auto store = std::make_shared<::c10d::FileStore>(path_, size);
- // Use tiny timeout to make this test run fast
::c10d::ProcessGroupGloo::Options options;
- options.timeout = std::chrono::milliseconds(50);
+ // Set a timeout that is small enough to make this test run fast, but also
+ // make sure that we don't get timeouts in the ProcessGroupGloo constructor.
+ options.timeout = std::chrono::milliseconds(1000);
options.devices.push_back(
::c10d::ProcessGroupGloo::createDeviceForHostname("127.0.0.1"));
@@ -395,9 +398,11 @@ int main(int argc, char** argv) {
#ifdef USE_CUDA
{
+ if (torch::cuda::is_available()) {
TemporaryFile file;
testAllreduce(file.path, at::DeviceType::CUDA);
}
+ }
#endif
{
@@ -407,9 +412,11 @@ int main(int argc, char** argv) {
#ifdef USE_CUDA
{
+ if (torch::cuda::is_available()) {
TemporaryFile file;
testBroadcast(file.path, at::DeviceType::CUDA);
}
+ }
#endif
{
|
llvm, mechanism: Update output ports after updating mech state
Update mech value and num_executions counts | @@ -3120,8 +3120,6 @@ class Mechanism_Base(Mechanism):
new_val = builder.add(new_val, new_val.type(1))
builder.store(new_val, num_exec_time_ptr)
- builder = self._gen_llvm_output_ports(ctx, builder, value, m_base_params, m_state, arg_in, arg_out)
-
val_ptr = pnlvm.helpers.get_state_ptr(builder, self, m_state, "value")
if val_ptr.type.pointee == value.type.pointee:
pnlvm.helpers.push_state_val(builder, self, m_state, "value", value)
@@ -3129,6 +3127,9 @@ class Mechanism_Base(Mechanism):
# FIXME: Does this need some sort of parsing?
warnings.warn("Shape mismatch: function result does not match mechanism value param: {} vs. {}".format(value.type.pointee, val_ptr.type.pointee))
+ # Run output ports after updating the mech state (num_executions and value)
+ builder = self._gen_llvm_output_ports(ctx, builder, value, m_base_params, m_state, arg_in, arg_out)
+
# is_finished should be checked after output ports ran
is_finished_f = ctx.import_llvm_function(self, tags=tags.union({"is_finished"}))
is_finished_cond = builder.call(is_finished_f, [m_params, m_state, arg_in,
|
Update initial-config.yaml
Typo i think | @@ -31,7 +31,7 @@ ALERTS_URL: https://download.chia.net/notify/mainnet_alert.txt
CHIA_ALERTS_PUBKEY: 89b7fd87cb56e926ecefb879a29aae308be01f31980569f6a75a69d2a9a69daefd71fb778d865f7c50d6c967e3025937
# public ssl ca is included in source code
-# Privet ssl ca is used for trusted connections between machines user owns
+# Private ssl ca is used for trusted connections between machines user owns
private_ssl_ca:
crt: "config/ssl/ca/private_ca.crt"
key: "config/ssl/ca/private_ca.key"
|
Fixed issue with highlight titles
you must not modify the cache data | @@ -75,7 +75,7 @@ def add_info(videoid, list_item, item, raw_data, handle_highlighted_title=False,
# Short information about future release of tv show season or other
infos_copy['plot'] += '[CR][COLOR green]{}[/COLOR]'.format(item['dpSupplementalMessage'])
if handle_highlighted_title:
- add_highlighted_title(list_item, videoid, infos)
+ add_highlighted_title(list_item, videoid, infos_copy)
list_item.setInfo('video', infos_copy)
return infos_copy
|
Enroll XClarity machines in Ironic's devstack setting
In the current XClarity CI environment, we have to manually
add XClarity machines. This patch is going to update Ironic's
devstack setting to enroll XClarity machines automatically.
Story:
Task: 28353 | @@ -650,6 +650,11 @@ function is_deployed_by_irmc {
return 1
}
+function is_deployed_by_xclarity {
+ [[ "$IRONIC_DEPLOY_DRIVER" == xclarity ]] && return 0
+ return 1
+}
+
function is_drac_enabled {
[[ -z "${IRONIC_ENABLED_HARDWARE_TYPES%%*idrac*}" ]] && return 0
return 1
@@ -1955,6 +1960,13 @@ function enroll_nodes {
if [[ -n "$IRONIC_DEPLOY_ISO_ID" ]]; then
node_options+=" --driver-info irmc_deploy_iso=$IRONIC_DEPLOY_ISO_ID"
fi
+ elif is_deployed_by_xclarity; then
+ local xclarity_hardware_id
+ xclarity_hardware_id=$(echo $hardware_info |awk '{print $5}')
+ node_options+=" --driver-info xclarity_manager_ip=$bmc_address \
+ --driver-info xclarity_password=$bmc_passwd \
+ --driver-info xclarity_username=$bmc_username \
+ --driver-info xclarity_hardware_id=$xclarity_hardware_id"
fi
interface_info="${mac_address}"
|
Update obsolete credref link
Replaces the link to the old credentials reference to the new
AWS SDKs and Tools reference guide. | @@ -143,8 +143,8 @@ Boto3 was made generally available on 06/22/2015 and is currently in the full su
For information about maintenance and support for SDK major versions and their underlying dependencies, see the following in the AWS SDKs and Tools Shared Configuration and Credentials Reference Guide:
-* `AWS SDKs and Tools Maintenance Policy <https://docs.aws.amazon.com/credref/latest/refdocs/maint-policy.html>`__
-* `AWS SDKs and Tools Version Support Matrix <https://docs.aws.amazon.com/credref/latest/refdocs/version-support-matrix.html>`__
+* `AWS SDKs and Tools Maintenance Policy <https://docs.aws.amazon.com/sdkref/latest/guide/maint-policy.html>`__
+* `AWS SDKs and Tools Version Support Matrix <https://docs.aws.amazon.com/sdkref/latest/guide/version-support-matrix.html>`__
More Resources
|
Add dumpsys
Add get_top_activity_name
Add get_top_activity_name_and_pid
Add get_top_activity_uri | @@ -20,7 +20,7 @@ limitations under the License.
from __future__ import print_function
-__version__ = '21.7.0'
+__version__ = '21.8.0'
import json
import os
@@ -30,7 +30,9 @@ import subprocess
import sys
import threading
import time
+import warnings
from abc import ABC
+from typing import Optional
import culebratester_client
from culebratester_client import Text, ObjectRef, StatusResponse
@@ -248,6 +250,46 @@ class UiAutomatorHelper:
"""
return self.uiAutomatorHelper.api_instance.device_display_real_size_get()
+ def dumpsys(self, service, **kwargs) -> str:
+ """
+ :see https://github.com/dtmilano/CulebraTester2-public/blob/master/openapi.yaml
+ :param service: the service
+ :return: the dumpsys output
+ """
+ return self.uiAutomatorHelper.api_instance.device_dumpsys_get(service=service, _preload_content=False,
+ **kwargs).read().decode('UTF-8')
+
+ def get_top_activity_name_and_pid(self) -> Optional[str]:
+ dat = self.dumpsys('activity', arg1='top')
+ activityRE = re.compile(r'\s*ACTIVITY ([A-Za-z0-9_.]+)/([A-Za-z0-9_.\$]+) \w+ pid=(\d+)')
+ m = activityRE.findall(dat)
+ if len(m) > 0:
+ return m[-1]
+ else:
+ warnings.warn("NO MATCH:" + dat)
+ return None
+
+ def get_top_activity_name(self) -> Optional[str]:
+ tanp = self.get_top_activity_name_and_pid()
+ if tanp:
+ return tanp[0] + '/' + tanp[1]
+ else:
+ return None
+
+ def get_top_activity_uri(self) -> Optional[str]:
+ tan = self.get_top_activity_name()
+ dat = self.dumpsys('activity')
+ startActivityRE = re.compile(r'^\s*mStartActivity:')
+ intentRE = re.compile(f'^\\s*Intent {{ act=(\\S+) dat=(\\S+) flg=(\\S+) cmp={tan} }}')
+ lines = dat.splitlines()
+ for n, _line in enumerate(lines):
+ if startActivityRE.match(_line):
+ for i in range(n, n + 6):
+ m = intentRE.match(lines[i])
+ if m:
+ return m.group(2)
+ return None
+
def wait_for_new_toast(self, timeout=10000):
"""
:see https://github.com/dtmilano/CulebraTester2-public/blob/master/openapi.yaml
|
Add a paragraph about tuning couchjs
xref: apache/couchdb#1670 | @@ -59,6 +59,15 @@ Query Servers Definition
javascript = /usr/bin/couchjs /usr/share/couchdb/server/main.js
coffeescript = /usr/bin/couchjs /usr/share/couchdb/server/main-coffee.js
+ By default, ``couchjs`` limits the max runtime allocation to 64MiB.
+ If you run into out of memory issue in your ddoc functions,
+ you can adjust the memory limitation::
+
+ [query_servers]
+ javascript = /usr/bin/couchjs -S 536870912 /usr/share/couchdb/server/main.js ; 512 MiB
+
+ For more info about the available options, please consult ``couchjs -h``.
+
.. _Mozilla SpiderMonkey: https://developer.mozilla.org/en/docs/SpiderMonkey
.. seealso::
|
Reduce paramiko log messages for dynamic workloads
This patch changes level of paramiko logging for dynamic workloads to WARNING, to
reduce the number of paramiko log messages that are produced. | @@ -24,6 +24,7 @@ from oslo_db import exception as db_exc
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
+logging.getLogger("paramiko").setLevel(logging.WARNING)
class NovaUtils(vm_utils.VMScenario):
|
Bring back SITE_URL setting, as it's quite important in extform views
Instead of configuring Django sites contrib app, we 'skip' this step and
just use one single site parameter we're concerned about: site URL. | @@ -49,6 +49,8 @@ if not DEBUG and SECRET_KEY == DEFAULT_SECRET_KEY:
raise ImproperlyConfigured('You must specify non-default value for '
'SECRET_KEY when running with Debug=FALSE.')
+SITE_URL = env.str('AMY_SITE_URL',
+ default='https://amy.software-carpentry.org')
ALLOWED_HOSTS = env.list('AMY_ALLOWED_HOSTS',
default=['amy.software-carpentry.org'])
if DEBUG:
|
Clean up README
We'll direct users to a snappy install to help get early feedback, moving the
alternate installation below that as well. | -# conjure-up [](https://travis-ci.org/conjure-up/conjure-up) [](https://requires.io/github/conjure-up/conjure-up/requirements/?branch=master)
+# conjure-up [](https://travis-ci.org/conjure-up/conjure-up)
> Installing cloud packages like whoa.
# what it is
@@ -15,7 +15,15 @@ solutions up and going with as little hindrance as possible.
> Xenial and above
-## pre-reqs
+## recommended
+
+```
+$ sudo snap install conjure-up --edge --classic
+```
+
+## alternative installation
+
+### pre-reqs
If you plan to use conjure-up to deploy onto LXD containers on your local
machine (aka the _localhost_ cloud type), you will need to set up LXD
@@ -33,9 +41,7 @@ $ sudo dpkg-reconfigure -p medium lxd
$ lxc finger
```
-## recommended installation
-We will eventually move to pure snap distribution, however, until that time
-packages are built and located at:
+Install the packages from the below PPA's
```
$ sudo apt-add-repository ppa:juju/stable
@@ -44,13 +50,6 @@ $ sudo apt update
$ sudo apt install conjure-up
```
-## alternative installation
-If you want to try the snap distribution, you can install it with:
-
-```
-sudo snap install conjure-up --classic
-```
-
# how to use
## Run the installer interactively
|
Properly remove string array from pipeline node properties
When `StringArrayInput` deletes a value in string array properties
it does not actually delete it, instead leaving an `undefined` value.
This needs to be addressed by removing the undefined values when
saving property changes. This is due to js allowing sparse arrays.
Fixes | @@ -545,9 +545,13 @@ export class PipelineEditor extends React.Component<
}
app_data.runtime_image = propertySet.runtime_image;
- app_data.outputs = propertySet.outputs;
- app_data.env_vars = propertySet.env_vars;
- app_data.dependencies = propertySet.dependencies;
+ app_data.outputs = propertySet.outputs.filter((x: any) => x !== undefined);
+ app_data.env_vars = propertySet.env_vars.filter(
+ (x: any) => x !== undefined
+ );
+ app_data.dependencies = propertySet.dependencies.filter(
+ (x: any) => x !== undefined
+ );
app_data.include_subdirectories = propertySet.include_subdirectories;
app_data.cpu = propertySet.cpu;
app_data.memory = propertySet.memory;
|
Update task_set.py
use _prepare_text for text data_type | @@ -167,7 +167,7 @@ class TaskSet(TorchDataset):
))
if self.data_type == "text":
- x, y, t = self._prepare(x, y, t)
+ x, y, t = self._prepare_text(x, y, t)
elif self.data_type == "segmentation":
x, y, t = self._prepare_segmentation(x, y, t)
else:
|
documentation
please check the parameters | @@ -188,7 +188,7 @@ def substation_HEX_sizing(building_demand, substation_systems):
def calc_hex_area_from_demand(building_demand, load_type, building_system, T_supply_C):
'''
This function returns the heat exchanger specifications for given building demand, HEX type and supply temperature.
-
+ primary side: network; secondary side: building
:param building_demand: DataFrame with demand values
:param load_type: 'csf' or 'hsf' for cooling or heating
:param building_system: 'aru', 'ahu', 'scu', 'dataf'
@@ -402,6 +402,7 @@ def calc_substation_return_DC(building, T_DC_supply_K, substation_HEX_specs):
def calc_cooling_substation_heat_exchange(ch_0, Qnom, thi_0, tci_0, tho_0):
"""
this function calculates the state of the heat exchanger at the substation of every customer with cooling needs
+ cold/primary side: network; hot/secondary side: building
:param Q: cooling load
:param thi: in temperature of primary side
:param tho: out temperature of primary side
|
jenkins: remove composer directories before the tests
This commit turns on the `cleanup_composer_directories` option to clean up
the osbuild-composer directories during the time the services are stopped
(when ansible-osbuild is about to deploy the new versions of the
services).
Taken from osbuild/osbuild-composer#575, thanks to | @@ -39,6 +39,7 @@ ansible-playbook \
-i hosts.ini \
-e osbuild_repo=${WORKSPACE} \
-e osbuild_version=$(git rev-parse HEAD) \
+ -e cleanup_composer_directories=yes \
ansible-osbuild/playbook.yml
# Run the tests only on Fedora 31 for now.
|
Onefile: For linux: icons look for versioned icon files too
* First look for an icon for "pythonMAJOR.MINOR.xpm", then "pythonMAJOR.xpm"
then "python.xpm".
* On some systems (e.g. ubuntu) python.xpm does not exist. | @@ -869,14 +869,21 @@ def getIconPaths():
# Check if Linux icon requirement is met.
if getOS() == "Linux" and not result and isOnefileMode():
- default_icon = "/usr/share/pixmaps/python.xpm"
- if os.path.exists(default_icon):
- result.append(default_icon)
+ default_icons = (
+ "/usr/share/pixmaps/python%s.%s.xpm" % python_version_str,
+ "/usr/share/pixmaps/python%s.xpm" % sys.version_info[0],
+ "/usr/share/pixmaps/python.xpm",
+ )
+
+ for icon in default_icons:
+ if os.path.exists(icon):
+ result.append(icon)
+ break
else:
Tracing.options_logger.sysexit(
"""\
-Error, the default icon %r does not exist, making --linux-onefile-icon required."""
- % default_icon
+Error, the non of the default icons '%s' exist, making --linux-onefile-icon required."""
+ % ", ".join(default_icons)
)
return result
|
Scons: Call installed copy with known Python2 binary.
* Otherwise in Python3 "virtualenv", the "#!/usr/bin/env python"
of installed copy will use Python3 and error out. | @@ -60,7 +60,10 @@ def _getSconsBinaryCall():
scons_path = Execution.getExecutablePath("scons")
if scons_path is not None:
- return [scons_path]
+ return [
+ _getPython2ExePath(),
+ scons_path
+ ]
return [
_getPython2ExePath(),
|
Bump opam-nix
To provide more workarounds required for building. | "homepage": null,
"owner": "serokell",
"repo": "opam-nix",
- "rev": "bdd7e6730bdf0ea91ada3ebc68387424b087c9f8",
- "sha256": "1wda717d6391vbgrp0vcv27pxfj4xy1511mssky8ll3iy7i851hn",
+ "rev": "ee285a3b6e05dc274f9ebf59ad64d8a4fded915e",
+ "sha256": "0f3dm834zf7y4c4pp8xr12zl0n1xk0zql3a9m2wh6a5shdcdcwj7",
"type": "tarball",
- "url": "https://github.com/serokell/opam-nix/archive/bdd7e6730bdf0ea91ada3ebc68387424b087c9f8.tar.gz",
+ "url": "https://github.com/serokell/opam-nix/archive/ee285a3b6e05dc274f9ebf59ad64d8a4fded915e.tar.gz",
"url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
},
"opam-repository": {
|
Make convert_to_onnx runable as script again
* Make convert_to_onnx runable as script again
Fix `convert_graph_to_onnx.py` relative import so it can be run as a script again.
* Trigger CI | @@ -273,7 +273,7 @@ def convert_pytorch(nlp: Pipeline, opset: int, output: Path, use_external_format
import torch
from torch.onnx import export
- from .pytorch_utils import is_torch_less_than_1_11
+ from transformers.pytorch_utils import is_torch_less_than_1_11
print(f"Using framework PyTorch: {torch.__version__}")
|
$.Introspection: introduce a supertype for field and property refs
TN: | @@ -68,21 +68,37 @@ package ${ada_lib_name}.Introspection is
function Derived_Types (Id : Node_Type_Id) return Node_Type_Id_Array;
-- Return type references for all direct derivations for Id
+ <% all_abstract = ctx.sorted_parse_fields + ctx.sorted_properties %>
+
+ type Abstract_Node_Data_Reference is
+ (${', '.join(f.introspection_enum_literal for f in all_abstract)});
+ -- Enumeration of all data attached to nodes (syntax fields and properties)
+
+ type Abstract_Node_Data_Reference_Array is
+ array (Positive range <>) of Abstract_Node_Data_Reference;
+
-------------------
-- Syntax fields --
-------------------
## In a lot of testcases, there is a single concrete node that has no
## field. For these, generate a type that has no valid value.
- type Field_Reference is
+ subtype Field_Reference is Abstract_Node_Data_Reference range
% if ctx.sorted_parse_fields:
- (${', '.join(f.introspection_enum_literal
- for f in ctx.sorted_parse_fields)})
+ <%
+ first = ctx.sorted_parse_fields[0]
+ last = ctx.sorted_parse_fields[-1]
+ %>
% else:
- new Integer range 1 .. 0
+ <%
+ first = all_abstract[-1]
+ last = all_abstract[0]
+ %>
% endif
+ ${first.introspection_enum_literal}
+ .. ${last.introspection_enum_literal}
;
- -- Enumeration of all fields for regular nodes
+ -- Enumeration of all syntax fields for regular nodes
function Field_Name (Field : Field_Reference) return String;
-- Return a lower-case name for ``Field``
@@ -116,9 +132,9 @@ package ${ada_lib_name}.Introspection is
-- Properties --
----------------
- type Property_Reference is
- (${', '.join(p.introspection_enum_literal
- for p in ctx.sorted_properties)});
+ subtype Property_Reference is Abstract_Node_Data_Reference
+ range ${ctx.sorted_properties[0].introspection_enum_literal}
+ .. ${ctx.sorted_properties[-1].introspection_enum_literal};
-- Enumeration of all available node properties
function Property_Name (Property : Property_Reference) return String;
|
added call to process join and close to make sure pool is terminated.
modified one call of pool in line 348 to use core_count | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
-import numpy as np
-import pandas as pd
-import pydicom as dicom
-import png, os, glob
-import PIL as pil
-from pprint import pprint
-import hashlib
+import os
+import glob
from shutil import copyfile
-import logging
-from multiprocessing import Pool
+import hashlib
import json
import sys
import subprocess
+import logging
+from multiprocessing import Pool
import pdb
+import time
import pickle
+import numpy as np
+import pandas as pd
+import pydicom as dicom
#pydicom imports needed to handle data errrors
from pydicom import config
from pydicom import datadict
from pydicom import values
-from subprocess import Popen
-import time
with open('config.json', 'r') as f:
niffler = json.load(f)
@@ -243,7 +241,7 @@ def fix_mismatch_callback(raw_elem, **kwargs):
pass
else:
raw_elem = raw_elem._replace(VR=vr)
- break # I want to exit immediately after change is applied
+ break
return raw_elem
@@ -347,7 +345,7 @@ for i,chunk in enumerate(file_chunks):
filedata=data
total = len(chunk)
stamp = time.time()
- p = Pool(os.cpu_count())
+ p = Pool(core_count)
res = p.imap_unordered(extract_images,range(len(filedata)))
for out in res:
(fmap,fail_path,err) = out
@@ -358,6 +356,8 @@ for i,chunk in enumerate(file_chunks):
logging.error(err_msg)
else:
fm.write(fmap)
+ p.join()
+ p.close()
fm.close()
logging.info('Chunk run time: %s %s', time.time() - t_start, ' seconds!')
|
Change still unreleased.
not released yet ;) | @@ -15,6 +15,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
* Added `compas.geometry.booleans`.
### Changed
+* Fixed scaling bug in `compas.geometry.Sphere`
### Removed
@@ -51,7 +52,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
* Renamed ``compas.geometry.Projection.perspective`` to ``compas.geometry.Projection.from_plane_and_point`` and changed input params
* Changed constructor of all ``compas.geometry.Transformation`` and derivatives. Preferred way of creating any ``compas.geometry.Transformation`` is with the classmethods ``from_*``
* Changed params (point, normal) into plane for ``compas.geometry.matrix_from_parallel_projection``, ``compas.geometry.matrix_from_orthogonal_projection`` and ``compas.geometry.matrix_from_perspective_projection``
-* Fixed scaling bug in `compas.geometry.Sphere`
### Removed
|
Providing additional useful messages for JSONDecodeError
According to , I added additional and useful information when encountering the JSONDecodeError. | @@ -949,10 +949,17 @@ class DockerCommandRunner(CommandRunnerInterface):
.strip()
)
home_directory = "/root"
+ try:
for env_var in json.loads(image_env):
if env_var.startswith("HOME="):
home_directory = env_var.split("HOME=")[1]
break
+ except json.JSONDecodeError as e:
+ cli_logger.error(
+ "Unable to deserialize `image_env` to Python object. "
+ f"The `image_env` is:\n{image_env}"
+ )
+ raise e
user_docker_run_options = self.docker_config.get(
"run_options", []
|
Make it work with development version string of the Big Graphviz
* Make it work with development version string of the Big Graphviz
such as:
$ dot -V
dot - graphviz version 2.44.2~dev.20200927.0217 (20200927.0217)
* Add development version of `dot` and its support there of. | @@ -261,6 +261,7 @@ def test_version_parsefail_mocked(mocker, Popen): # noqa: N803
@pytest.mark.parametrize('stdout, expected', [
(b'dot - graphviz version 1.2.3 (mocked)', (1, 2, 3)),
(b'dot - graphviz version 2.43.20190912.0211 (20190912.0211)\n', (2, 43, 20190912, 211)),
+ (b'dot - graphviz version 2.44.2~dev.20200927.0217 (20200927.0217)\n', (2, 44, 2)),
])
def test_version_mocked(mocker, Popen, stdout, expected): # noqa: N803
proc = Popen.return_value
|
Fix standby detach
As the check added in commit tries to open the caching devcie
exclusively, it is impossible to detach cache from a standby instance. | @@ -2116,10 +2116,12 @@ int standby_handle() {
return FAILURE;
}
+ if (standby_params.subcmd != standby_opt_subcmd_detach) {
if (validate_cache_path(standby_params.cache_device,
standby_params.force) == FAILURE) {
return FAILURE;
}
+ }
switch (standby_params.subcmd) {
case standby_opt_subcmd_init:
|
DOC: ndarray.reshape allows shape as int arguments or tuple
Adding note about difference between `numpy.reshape`
and `ndarray.reshape`. See issue | @@ -4125,6 +4125,13 @@ def luf(lamdaexpr, *args, **kwargs):
--------
numpy.reshape : equivalent function
+ Notes
+ -----
+ Unlike the free function `numpy.reshape`, this method on `ndarray` allows
+ the elements of the shape parameter to be passed in as separate arguments.
+ For example, ``a.reshape(10, 11)`` is equivalent to
+ ``a.reshape((10, 11))``.
+
"""))
|
Fixing minor bug in `configuration_parser` error message.
One of the error messages in `configuration_parser` says "feature_subset_file" when it should say "feature_subset" | @@ -762,7 +762,7 @@ class ConfigurationParser:
msg = msg.format("feature_subset_file")
raise ValueError(msg)
if new_config['features'] and new_config['feature_subset']:
- msg = msg.format("feature_subset_file")
+ msg = msg.format("feature_subset")
raise ValueError(msg)
# 6. Check for fields that require feature_subset_file and try
|
Mention what the delimiter is
Prompted by | @@ -485,6 +485,7 @@ SecDefaultAction "phase:2,log,auditlog,pass"
# setvar:'tx.static_extensions=/.jpg/ /.jpeg/ /.png/ /.gif/ /.js/ /.css/ /.ico/ /.svg/ /.webp/'"
# Content-Types charsets that a client is allowed to send in a request.
+# The content-types are enclosed by |pipes| as delimiters to guarantee exact matches.
# Default: |utf-8| |iso-8859-1| |iso-8859-15| |windows-1252|
# Uncomment this rule to change the default.
#SecAction \
|
only uses lxml now
previously has an option to use xml module - but this module actually did not work | @@ -29,28 +29,29 @@ HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
+
+Slightly modified by Colin Curtain
"""
import logging
-try:
+# note lxml is not installed on all OS's needs to be installed
from lxml import etree
-except:
- from xml import etree
-# note lxml is not installed on all OS, may then try xml module instead
+import re
+import shutil
+import time
+import os
+from os.path import join
try:
from PIL import Image
except ImportError:
import Image
-import zipfile
-import shutil
-import re
-import time
-import os
import sys
-from os.path import join
+import traceback
+import zipfile
log = logging.getLogger(__name__)
+
def exception_handler(exception_type, value, tb_obj):
""" Global exception handler useful in GUIs.
tb_obj: exception.__traceback__ """
@@ -101,12 +102,13 @@ nsprefixes = {
def opendocx(file):
'''Open a docx file, return a document XML tree'''
+
+ sys.excepthook = exception_handler
mydoc = zipfile.ZipFile(file)
xmlcontent = mydoc.read('word/document.xml')
document = etree.fromstring(xmlcontent)
return document
-
def newdocument():
document = makeelement('document')
document.append(makeelement('body'))
|
Update lombscargle.rst
Small change to formatting to remove warning. | @@ -135,7 +135,7 @@ Unit(dimensionless)
We see that the output is dimensionless, which is always the case for the
standard normalized periodogram (for more on normalizations,
see :ref:`lomb-scargle-normalization` below). If you include arguments to
-autopower such as `minimum_frequency` or `maximum_frequency`, make sure to
+autopower such as ``minimum_frequency`` or ``maximum_frequency``, make sure to
specify units as well:
>>> frequency, power = LombScargle(t_days, y_mags, dy_mags).autopower(minimum_frequency=1e-5*u.Hz)
|
Wrap telemetry decorator
Summary: This is required for docstrings to be available on the wrapped functions.
Test Plan: Unit
Reviewers: catherinewu, sashank | import time
import uuid
import zlib
+from functools import wraps
from logging.handlers import RotatingFileHandler
import click
@@ -223,6 +224,7 @@ def telemetry_wrapper(f):
)
)
+ @wraps(f)
def wrap(*args, **kwargs):
start_time = datetime.datetime.now()
log_action(action=f.__name__ + '_started', client_time=start_time)
|
Sync: chunk user requests
The site can't handle huge syncs. Even a bulk patch of 10k users will
crash the service. Chunk the requests into groups of 1000 users and
await them sequentially. Testing showed that concurrent requests
are not scalable and would also crash the service. | @@ -5,12 +5,15 @@ from collections import namedtuple
from discord import Guild
from discord.ext.commands import Context
+from more_itertools import chunked
import bot
from bot.api import ResponseCodeError
log = logging.getLogger(__name__)
+CHUNK_SIZE = 1000
+
# These objects are declared as namedtuples because tuples are hashable,
# something that we make use of when diffing site roles against guild roles.
_Role = namedtuple('Role', ('id', 'name', 'colour', 'permissions', 'position'))
@@ -207,10 +210,13 @@ class UserSyncer(Syncer):
@staticmethod
async def _sync(diff: _Diff) -> None:
"""Synchronise the database with the user cache of `guild`."""
+ # Using asyncio.gather would still consume too many resources on the site.
log.trace("Syncing created users...")
if diff.created:
- await bot.instance.api_client.post("bot/users", json=diff.created)
+ for chunk in chunked(diff.created, CHUNK_SIZE):
+ await bot.instance.api_client.post("bot/users", json=chunk)
log.trace("Syncing updated users...")
if diff.updated:
- await bot.instance.api_client.patch("bot/users/bulk_patch", json=diff.updated)
+ for chunk in chunked(diff.updated, CHUNK_SIZE):
+ await bot.instance.api_client.patch("bot/users/bulk_patch", json=chunk)
|
Update extensions.py
add mkv as format | @@ -4,7 +4,7 @@ valid_tagging_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass', 'sup']
-valid_formats = ['mp4', 'mov']
+valid_formats = ['mp4', 'mov', 'mkv']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass', 'pgs']
|
Fix circular import
This is a temporary fix since those methods will soon be removed | @@ -31,7 +31,6 @@ from dateutil import tz
from dateutil.parser import parse
from pkg_resources import packaging
-from pcluster.aws.aws_api import AWSApi
from pcluster.aws.common import get_region
from pcluster.constants import SUPPORTED_OSES_FOR_ARCHITECTURE, SUPPORTED_OSES_FOR_SCHEDULER
@@ -144,6 +143,8 @@ def verify_stack_status(stack_name, waiting_states, successful_states):
:param successful_states: list of final status considered as successful
:return: True if the final status is in the successful_states list, False otherwise.
"""
+ from pcluster.aws.aws_api import AWSApi # pylint: disable=import-outside-toplevel
+
status = AWSApi.instance().cfn.describe_stack(stack_name).get("StackStatus")
resource_status = ""
while status in waiting_states:
@@ -166,6 +167,8 @@ def log_stack_failure_recursive(stack_name, failed_states=None, indent=2):
if not failed_states:
failed_states = ["CREATE_FAILED"]
+ from pcluster.aws.aws_api import AWSApi # pylint: disable=import-outside-toplevel
+
events = AWSApi.instance().cfn.get_stack_events(stack_name)
for event in events:
if event.get("ResourceStatus") in failed_states:
@@ -184,6 +187,8 @@ def log_stack_failure_recursive(stack_name, failed_states=None, indent=2):
def _log_cfn_event(event, indent):
"""Log failed CFN events."""
+ from pcluster.aws.aws_api import AWSApi # pylint: disable=import-outside-toplevel
+
print("%s- %s", " " * indent, AWSApi.instance().cfn.format_event(event))
|
add additional check for value in form props
only search for last path component | @@ -370,7 +370,7 @@ def _get_export_properties(export):
return properties
-def find_question_id(form, value):
+def find_question_id(form, value, last_path=False):
if not isinstance(form, dict):
# Recursive calls should always give `form` a form value.
# However, https://dimagi-dev.atlassian.net/browse/SAAS-11326
@@ -381,15 +381,18 @@ def find_question_id(form, value):
for k, v in form.items():
if isinstance(v, dict):
- ret = find_question_id(v, value)
+ ret = find_question_id(v, value, last_path=last_path)
if ret:
return [k] + ret
elif isinstance(v, list):
for repeat in v:
- ret = find_question_id(repeat, value)
+ ret = find_question_id(repeat, value, last_path=last_path)
if ret:
return [k] + ret
else:
+ if last_path:
+ v = os.path.basename(os.path.normpath(v))
+
if v == value:
return [k]
@@ -424,8 +427,12 @@ def _extract_form_attachment_info(form, properties):
if content_type == 'text/xml':
continue
try:
- question_id = str(
- '-'.join(find_question_id(form.form_data, attachment_name)))
+ question_id_components = find_question_id(form.form_data, attachment_name)
+ if question_id_components is None:
+ # NOTE: special case until rd-toolkit bug is fixed, search for question_id again
+ # See https://dimagi-dev.atlassian.net/browse/SAAS-11792
+ question_id_components = find_question_id(form.form_data, attachment_name, last_path=True)
+ question_id = str('-'.join(question_id_components))
except TypeError:
question_id = 'unknown' + str(unknown_number)
unknown_number += 1
|
Update gtk.py
Save file dialog should return one file not a tuple of files. This change is to keep window.create_file_dialog(webview.SAVE_DIALOG) standardized and returning a string of a single file on linux as it does in the other platoforms | @@ -327,6 +327,9 @@ class BrowserView:
response = dialog.run()
if response == gtk.ResponseType.OK:
+ if dialog_type == SAVE_DIALOG:
+ file_name = dialog.get_filename()
+ else:
file_name = dialog.get_filenames()
else:
file_name = None
|
Fix exception with wrong field.
Use message instead of msg_fmt in zun exception.
Closes-Bug: | @@ -449,31 +449,31 @@ class CPUPinningUnknown(ZunException):
class CPUUnpinningUnknown(Invalid):
- msg_fmt = _("CPU set to unpin %(requested)s must be a subset of "
+ message = _("CPU set to unpin %(requested)s must be a subset of "
"known CPU set %(cpuset)s")
class CPUPinningInvalid(Invalid):
- msg_fmt = _("CPU set to pin %(requested)s must be a subset of "
+ message = _("CPU set to pin %(requested)s must be a subset of "
"free CPU set %(free)s")
class CPUUnpinningInvalid(Invalid):
- msg_fmt = _("CPU set to unpin %(requested)s must be a subset of "
+ message = _("CPU set to unpin %(requested)s must be a subset of "
"pinned CPU set %(pinned)s")
class NotFound(ZunException):
- msg_fmt = _("Resource could not be found.")
+ message = _("Resource could not be found.")
code = 404
class SchedulerHostFilterNotFound(NotFound):
- msg_fmt = _("Scheduler Host Filter %(filter_name)s could not be found.")
+ message = _("Scheduler Host Filter %(filter_name)s could not be found.")
class ClassNotFound(NotFound):
- msg_fmt = _("Class %(class_name)s could not be found: %(exception)s")
+ message = _("Class %(class_name)s could not be found: %(exception)s")
class ApiVersionsIntersect(ZunException):
@@ -482,23 +482,23 @@ class ApiVersionsIntersect(ZunException):
class ConnectionFailed(ZunException):
- msg_fmt = _("Failed to connect to remote host")
+ message = _("Failed to connect to remote host")
class SocketException(ZunException):
- msg_fmt = _("Socket exceptions")
+ message = _("Socket exceptions")
class InvalidWebsocketUrl(ZunException):
- msg_fmt = _("Websocket Url invalid")
+ message = _("Websocket Url invalid")
class InvalidWebsocketToken(ZunException):
- msg_fmt = _("Websocket token is invalid")
+ message = _("Websocket token is invalid")
class ValidationError(ZunException):
- msg_fmt = _("Validation error")
+ message = _("Validation error")
class ResourcesUnavailable(ZunException):
|
Notification improvements
Reorder the eligibility test so that always_notify takes precedence
over notify_ids, fallback to rarity score if iv_score is None, allow
configuration of the recent_notification deque size, remove block
comment around notification settings in config example, adjust some
explanatory comments. | @@ -29,7 +29,8 @@ _optional = {
'FULL_TIME': 1800,
'TIME_REQUIRED': 300,
'NOTIFY_RANKING': 90,
- 'ALWAYS_NOTIFY_IDS': set()
+ 'ALWAYS_NOTIFY_IDS': set(),
+ 'NOTIFICATION_CACHE': 100
}
# set defaults for unset config options
for setting_name, default in _optional.items():
@@ -485,7 +486,7 @@ class Notifier:
def __init__(self, spawns):
self.spawns = spawns
- self.recent_notifications = deque(maxlen=100)
+ self.recent_notifications = deque(maxlen=config.NOTIFICATION_CACHE)
self.notify_ranking = config.NOTIFY_RANKING
self.session = Session(autoflush=False)
self.initial_score = config.INITIAL_SCORE
@@ -558,10 +559,13 @@ class Notifier:
pokemon_id = pokemon['pokemon_id']
if (pokemon_id in self.never_notify
- or pokemon_id not in self.notify_ids
or pokemon['encounter_id'] in self.recent_notifications):
return False
- if config.IGNORE_RARITY or pokemon_id in self.always_notify:
+ if pokemon_id in self.always_notify:
+ return True
+ if pokemon_id not in self.notify_ids:
+ return False
+ if config.IGNORE_RARITY:
return True
rareness = self.get_rareness_score(pokemon_id)
@@ -614,7 +618,7 @@ class Notifier:
if score_required:
if config.IGNORE_RARITY:
score = iv_score
- elif config.IGNORE_IVS:
+ elif config.IGNORE_IVS or iv_score is None:
score = self.get_rareness_score(pokemon_id)
else:
rareness = self.get_rareness_score(pokemon_id)
|
NodeEditor : Have some content even when empty
This allows keyboard shortcuts registered by extensions to work even without a node selected. | import functools
+import imath
+
import IECore
import Gaffer
@@ -102,6 +104,8 @@ class NodeEditor( GafferUI.NodeSetEditor ) :
node = self._lastAddedNode()
if not node :
+ with self.__column :
+ GafferUI.Spacer( imath.V2i( 0 ) )
return
with self.__column :
|
[Starboard] Change `content` to `system_content`
adds support of starring system messages (join, boost) | @@ -111,13 +111,13 @@ class StarboardEvents:
) -> discord.Embed:
channel = cast(discord.TextChannel, message.channel)
author = message.author
- if message.embeds != []:
+ if message.embeds:
em = message.embeds[0]
- if message.content != "":
+ if message.system_content:
if em.description != discord.Embed.Empty:
- em.description = "{}\n\n{}".format(message.content, em.description)[:2048]
+ em.description = "{}\n\n{}".format(message.system_content, em.description)[:2048]
else:
- em.description = message.content
+ em.description = message.system_content
if not author.bot:
em.set_author(
name=author.display_name,
@@ -132,7 +132,7 @@ class StarboardEvents:
em.color = await self._get_colour(channel)
else:
em.color = discord.Colour(starboard.colour)
- em.description = message.content
+ em.description = message.system_content
em.set_author(
name=author.display_name, url=message.jump_url, icon_url=str(author.avatar_url)
)
|
Update MouthControl.py
Changed how to access the Arduino service + changed to MarySpeech | python = Runtime.createAndStart("python","Python")
mouth = Runtime.createAndStart("Mouth","MouthControl")
-arduino = mouth.getArduino()
-arduino.connect('COM11')
+arduino = mouth.arduino
+arduino.connect('COM3')
jaw = mouth.getJaw()
jaw.detach()
jaw.attach(arduino,11)
mouth.setmouth(110,120)
mouth.autoAttach = False
-speech = Runtime.createAndStart("Speech","AcapelaSpeech")
+speech = Runtime.createAndStart("Speech","MarySpeech")
+print ("these are the voices I can have", speech.getVoices())
+speech.setVoice('cmu-bdl-hsmm')
mouth.setMouth(speech)
-speech.setVoice("Will")
def onEndSpeaking(text):
mouth.setmouth(90,120)
jaw.moveTo(95)
|
Add get_stack_buffer method to state_machine
is a controlled method to get stack and buffer to faccilitate
compatibility with other state machines | @@ -136,6 +136,9 @@ class AMRStateMachine:
print('INIT')
print(self.printStackBuffer())
+ def get_stack_buffer(self):
+ return self.buffer, self.stack
+
def __str__(self):
"""Command line styling"""
|
slack: Use get_timestamp_from_message helper function where relevant.
get_timestamp_from_message was extracted in the previous commit. We can
deduplicate and the code a bit cleaner by using it where appropriate
instead of message["ts"]. | @@ -794,7 +794,7 @@ def get_messages_iterator(
# we sort the messages according to the timestamp to show messages with
# the proper date order
- yield from sorted(messages_for_one_day, key=lambda m: m["ts"])
+ yield from sorted(messages_for_one_day, key=get_timestamp_from_message)
def channel_message_to_zerver_message(
@@ -924,7 +924,7 @@ def channel_message_to_zerver_message(
zulip_message = build_message(
topic_name,
- float(message["ts"]),
+ get_timestamp_from_message(message),
message_id,
content,
rendered_content,
|
Fix documentation
Summary:
Current documentation example doesn't compile. This fixes the doc so the example works.
Pull Request resolved: | @@ -53,7 +53,7 @@ neural network on the MNIST dataset:
torch::Tensor forward(torch::Tensor x) {
// Use one of many tensor manipulation functions.
x = torch::relu(fc1->forward(x));
- x = torch::dropout(x, /*p=*/0.5);
+ x = torch::dropout(x, /*p=*/0.5, /*train=*/true);
x = torch::sigmoid(fc2->forward(x));
return x;
}
|
Fix assertation
Summary: Allow distance penalty to be 0, which makes parameter sweeping easier. | @@ -105,7 +105,7 @@ class Seq2SlateSimulationTrainer(Trainer):
if self.parameters.simulation_distance_penalty is not None:
# pyre-fixme[16]: `Optional` has no attribute `__gt__`.
- assert self.parameters.simulation_distance_penalty > 0
+ assert self.parameters.simulation_distance_penalty >= 0
self.permutation_distance = (
torch.tensor(
[swap_dist(x.tolist()) for x in self.permutation_index],
|
[IMPR] Remove unused private variables
_get_base_dir was held for backward compatibility but it is private and
can be removed
_base_dir is also private and only used inside config2.py; it can be
replaced by the correspninding public base_dir | @@ -376,10 +376,8 @@ def get_base_dir(test_directory=None):
return base_dir
-_get_base_dir = get_base_dir # for backward compatibility
-_base_dir = get_base_dir()
# Save base_dir for use by other modules
-base_dir = _base_dir
+base_dir = get_base_dir()
for arg in sys.argv[1:]:
if arg.startswith(str('-verbose')) or arg == str('-v'):
@@ -1023,7 +1021,7 @@ if __no_user_config:
warning('Skipping loading of user-config.py.')
_fns = []
else:
- _fns = [os.path.join(_base_dir, "user-config.py")]
+ _fns = [os.path.join(base_dir, 'user-config.py')]
for _filename in _fns:
_thislevel += 1
if os.path.exists(_filename):
|
integrations: Rename HUBOT_INTEGRATIONS_LEGACY.
It is now simply called HUBOT_INTEGRATIONS.
Fixes | @@ -444,28 +444,22 @@ BOT_INTEGRATIONS = [
BotIntegration('xkcd', ['bots', 'misc'], display_name='xkcd'),
] # type: List[BotIntegration]
-# Note: These are not actually displayed anywhere; we're keeping them
-# around so they can be migrated into the newer HUBOT_INTEGRATIONS
-HUBOT_INTEGRATIONS_LEGACY = {
- 'bonusly': HubotIntegration('bonusly', ['hr']),
- 'chartbeat': HubotIntegration('chartbeat', ['marketing']),
- 'darksky': HubotIntegration('darksky', ['misc'], display_name='Dark Sky',
- logo_alt='Dark Sky logo'),
- 'hangouts': HubotIntegration('google-hangouts', ['communication'], display_name="Hangouts"),
- 'instagram': HubotIntegration('instagram', ['misc'],
- logo='static/images/integrations/logos/instagram.png'),
- 'mailchimp': HubotIntegration('mailchimp', ['communication', 'marketing'],
- display_name='MailChimp', logo_alt='MailChimp logo'),
- 'translate': HubotIntegration('google-translate', ['misc'],
- display_name="Translate", logo_alt='Google Translate logo'),
- 'youtube': HubotIntegration('youtube', ['misc'], display_name='YouTube',
- logo_alt='YouTube logo')
-}
-
-HUBOT_INTEGRATIONS = {
+HUBOT_INTEGRATIONS = [
HubotIntegration('assembla', ['version-control', 'project-management'],
display_name='Assembla', logo_alt='Assembla'),
-}
+ HubotIntegration('bonusly', ['hr']),
+ HubotIntegration('chartbeat', ['marketing'], display_name='Chartbeat'),
+ HubotIntegration('darksky', ['misc'], display_name='Dark Sky',
+ logo_alt='Dark Sky logo'),
+ HubotIntegration('google-hangouts', ['communication'], display_name='Google Hangouts',
+ logo_alt='Google Hangouts logo'),
+ HubotIntegration('instagram', ['misc'], display_name='Instagram'),
+ HubotIntegration('mailchimp', ['communication', 'marketing'],
+ display_name='MailChimp'),
+ HubotIntegration('google-translate', ['misc'],
+ display_name="Google Translate", logo_alt='Google Translate logo'),
+ HubotIntegration('youtube', ['misc'], display_name='YouTube'),
+] # type: List[HubotIntegration]
for hubot_integration in HUBOT_INTEGRATIONS:
INTEGRATIONS[hubot_integration.name] = hubot_integration
|
BUG: unit test bugs
Bugs revealed by running unit tests:
self specified twice when calling default and clean, and
stop default specificaiton of `start` and `stop` in kwargs. | @@ -1631,11 +1631,11 @@ class Instrument(object):
# apply default instrument routine, if data present
if not self.empty:
- self._default_rtn(self)
+ self._default_rtn()
# clean data, if data is present and cleaning requested
if (not self.empty) & (self.clean_level != 'none'):
- self._clean_rtn(self)
+ self._clean_rtn()
# apply custom functions via the nanokernel in self.custom
if not self.empty:
@@ -2927,7 +2927,7 @@ def _get_supported_keywords(local_func):
# account for keywords that exist for the standard functions
pre_kws = ['fnames', 'inst_id', 'tag', 'date_array', 'data_path',
'format_str', 'supported_tags', 'fake_daily_files_from_monthly',
- 'two_digit_year_break', 'delimiter']
+ 'two_digit_year_break', 'delimiter', 'start', 'stop']
# insert 'missing' default for 'fnames'
defaults.insert(0, None)
|
container.yaml schema: store top-level Go module name
This will be used by image owners to declare the top-level Go package
which will be built into the image from source.
Also contains the initial work started in | "type": ["object", "null"],
"properties": {
+ "go": {
+ "type": "object",
+ "properties": {
+ "modules": {
+ "type": ["array", "null"],
+ "items": {
+ "type": "object",
+ "properties": {
+ "module": {
+ "type": "string",
+ "description": "Top-level Go module (package) name which will be built"
+ },
+ "archive": {
+ "type": "string",
+ "description": "Possibly-compressed archive containing full source code including dependencies"
+ },
+ "path": {
+ "type": "string",
+ "description": "Path to directory containing source code (or its parent), possibly within archive"
+ }
+ },
+ "additionalProperties": false,
+ "required": ["module"]
+ }
+ }
+ },
+ "additionalProperties": false
+ },
"platforms": {
"type": ["object", "null"],
"properties": {
|
updated example;
g | @@ -91,10 +91,11 @@ H ul
acc = PGradDescriptor(
EnergyAccumulator(mol),
LinearTransform(wf.parameters, freeze=freeze),
- {'tbdm': [tbdm_updn,tbdm_dnup]}, #'obdm': [obdm_up, obdm_down], 'tbdm': [tbdm_updn, tbdm_dnup]},
{
- #'obdm': DescriptorFromOBDM(descriptors, norm=2.0),
- 'tbdm': DescriptorFromTBDM(descriptors_tbdm, norm=2.0*(2.0 - 1.0)),
+ 'obdm': [obdm_up, obdm_down],
+ },
+ {
+ 'obdm': DescriptorFromOBDM(descriptors, norm=2.0),
},
)
@@ -126,18 +127,12 @@ if __name__ == "__main__":
forcing = {}
obj = {}
- #for k in sys["descriptors"]:
- # forcing[k] = 0.0
- # obj[k] = 0.0
-
- #forcing["t"] = 0.5
- #forcing["trace"] = 1.0
- #obj["t"] = 0.0
- #obj["trace"] = 2.0
- forcing["U"] = 5.0
- obj["U"] = 1.0
-
- hdf_file = "saveh2_2rdm.hdf5"
+ forcing["t"] = 0.0
+ forcing["trace"] = 0.0
+ obj["t"] = 0.0
+ obj["trace"] = 2.0
+
+ hdf_file = "saveh2.hdf5"
wf, df = cvmc_optimize(
sys["wf"],
configs,
|
Update PPOClipAgent to include PPO parameter
compute_value_and_advantage_in_train. | @@ -86,6 +86,7 @@ class PPOClipAgent(ppo_agent.PPOAgent):
log_prob_clipping=0.0,
gradient_clipping=None,
check_numerics=False,
+ compute_value_and_advantage_in_train=False,
debug_summaries=False,
summarize_grads_and_vars=False,
train_step_counter=None,
@@ -132,6 +133,11 @@ class PPOClipAgent(ppo_agent.PPOAgent):
gradient_clipping: Norm length to clip gradients. Default: no clipping.
check_numerics: If true, adds tf.debugging.check_numerics to help find NaN
/ Inf values. For debugging only.
+ compute_value_and_advantage_in_train: A bool to indicate where value
+ prediction and advantage calculation happen. If True, both happen in
+ agent.train(). If False, value prediction is computed during data
+ collection. This argument must be set to `False` if mini batch learning
+ is enabled.
debug_summaries: A bool to gather debug summaries.
summarize_grads_and_vars: If true, gradient summaries will be written.
train_step_counter: An optional counter to increment every time the train
@@ -164,6 +170,7 @@ class PPOClipAgent(ppo_agent.PPOAgent):
normalize_observations,
gradient_clipping=gradient_clipping,
check_numerics=check_numerics,
+ compute_value_and_advantage_in_train=compute_value_and_advantage_in_train,
debug_summaries=debug_summaries,
summarize_grads_and_vars=summarize_grads_and_vars,
train_step_counter=train_step_counter,
|
added a few steps
Added a few steps that are needed during the install on a fresh Ubuntu image | @@ -52,6 +52,10 @@ Clone Lemur inside the just created directory and give yourself write permission
.. code-block:: bash
+ $ sudo useradd lemur
+ $ sudo passwd lemur
+ $ sudo mkdir /home/lemur
+ $ sudo chown lemur:lemur /home/lemur
$ sudo git clone https://github.com/Netflix/lemur
$ sudo chown -R lemur lemur/
@@ -59,6 +63,7 @@ Create the virtual environment, activate it and enter the Lemur's directory:
.. code-block:: bash
+ $ su lemur
$ virtualenv -p python3 lemur
$ source /www/lemur/bin/activate
$ cd lemur
@@ -105,9 +110,24 @@ Update your configuration
Once created, you will need to update the configuration file with information about your environment, such as which database to talk to, where keys are stored etc.
+.. code-block:: bash
+
+ $ vi ~/.lemur/lemur.conf.py
+
.. note:: If you are unfamiliar with the SQLALCHEMY_DATABASE_URI string it can be broken up like so:
``postgresql://userame:password@<database-fqdn>:<database-port>/<database-name>``
+Before Lemur will run you need to fill in a few required variables in the configuration file:
+
+.. code-block:: bash
+
+ LEMUR_SECURITY_TEAM_EMAIL
+ #/the e-mail address needs to be enclosed in quotes
+ LEMUR_DEFAUL_COUNTRY
+ LEMUR_DEFAULT_STATE
+ LEMUR_DEFAULT_LOCATION
+ LEMUR_DEFAUTL_ORGANIZATION
+ LEMUR_DEFAULT_ORGANIZATIONAL_UNIT
Setup Postgres
--------------
|
convert to NPZ option after a single training step
this is for restoring an existing checkpoint from TF
format and writing it into an NPZ. A step is required
for the deferred mode network to hydrate properly in order
to properly save | @@ -164,7 +164,6 @@ def train():
parser.add_argument("--weight_decay", type=float, default=1.0e-2, help="Weight decay")
parser.add_argument("--epochs", type=int, default=32, help="Num training epochs")
parser.add_argument("--restart", type=str2bool, help="Option allows you to restart from a previous checkpoint")
- parser.add_argument("--restart_tt", type=str, help="Optional param for legacy checkpoints (step|epoch)")
parser.add_argument("--warmup_steps", type=int, default=10000, help="Num warmup steps")
parser.add_argument("--mlm", type=str2bool, default=True, help="Use Masked Language Model (MLM) objective")
parser.add_argument("--saves_per_epoch", type=int, default=10, help="The number of checkpoints to save per epoch")
@@ -174,10 +173,13 @@ def train():
parser.add_argument("--strategy", help="Training strategy, defaults to `mirror`", choices=["mirror"])
parser.add_argument("--npz", help="Should we write out NPZ files?", type=str2bool, default=False)
parser.add_argument("--tb", help="Turn on tensorboard?", type=str2bool, default=False)
-
+ parser.add_argument("--convert_only", help="Should we just convert this file to NPZ and exit?", type=str2bool, default=False)
args = parser.parse_args()
SET_TRAIN_FLAG(True)
+ if args.convert_only:
+ args.restart = True
+
if args.basedir is None:
args.basedir = f'lm-{args.dataset_key}-bpe-{os.getpid()}'
logging.basicConfig(level=logging.INFO)
@@ -321,6 +323,14 @@ def train():
loss = _distributed_train_step(next(train_iter))
avg_loss.update(loss.numpy().item())
tf.summary.scalar("train_loss", data=loss, step=optimizer.global_step)
+
+ if args.convert_only:
+ logger.warning("Convert only flag specified. Stopping after one step")
+ steps = optimizer.global_step.numpy()
+ npz_checkpoint = os.path.join(args.basedir, f'checkpoint-step-{steps}.npz')
+ save_tlm_npz(model, npz_checkpoint)
+ return
+
if (i + 1) % report_on == 0:
logging.info(avg_loss)
if (i + 1) % update_on == 0:
|
cephadm_adopt: fix rgw placement task
Due to a recent breaking change in ceph, this command must be modified
to add the <svc_id> parameter. | CEPHADM_IMAGE: '{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}'
- name: update the placement of radosgw hosts
- command: "{{ cephadm_cmd }} shell --fsid {{ fsid }} -- ceph --cluster {{ cluster }} orch apply rgw {{ rgw_realm | default('default') }} {{ rgw_zone | default('default') }} --placement='count-per-host:{{ radosgw_num_instances }} label:{{ rgw_group_name }}' --port={{ radosgw_frontend_port }} {{ '--ssl' if radosgw_frontend_ssl_certificate else '' }}"
+ command: "{{ cephadm_cmd }} shell --fsid {{ fsid }} -- ceph --cluster {{ cluster }} orch apply rgw {{ cluster }} {{ rgw_realm | default('default') }} {{ rgw_zone | default('default') }} --placement='count-per-host:{{ radosgw_num_instances }} label:{{ rgw_group_name }}' --port={{ radosgw_frontend_port }} {{ '--ssl' if radosgw_frontend_ssl_certificate else '' }}"
run_once: true
changed_when: false
delegate_to: "{{ groups[mon_group_name][0] }}"
|
adding prometheus to VITRAGE_DEFAULT_DATASOURCES in devstack
Depends-On: | @@ -39,7 +39,7 @@ VITRAGE_DEPLOY=${VITRAGE_DEPLOY}
# Toggle for deploying Vitrage with/without nagios
VITRAGE_USE_NAGIOS=$(trueorfalse False VITRAGE_USE_NAGIOS)
-VITRAGE_DEFAULT_DATASOURCES=${VITRAGE_DEFAULT_DATASOURCES:-nova.host,nova.instance,nova.zone,nagios,static,static_physical,aodh,cinder.volume,neutron.network,neutron.port,heat.stack,doctor}
+VITRAGE_DEFAULT_DATASOURCES=${VITRAGE_DEFAULT_DATASOURCES:-nova.host,nova.instance,nova.zone,nagios,static,static_physical,aodh,cinder.volume,neutron.network,neutron.port,heat.stack,doctor,prometheus}
# for now dont use pip install for the client
LIBS_FROM_GIT=python-vitrageclient
|
Change Hash generation for IAM resource
Keeping the Hash as Upper case
Change _cluster_scoped_iam_path t have consistent styling. | @@ -260,9 +260,7 @@ def add_cluster_iam_resource_prefix(stack_name, config, name: str, iam_type: str
if iam_name_prefix:
# Creating a Globally Unique Hash using Region, Type, Name and stack name
resource_hash = (
- hashlib.sha256((name + stack_name + iam_type + config.region).encode("utf-8"))
- .hexdigest()[:12]
- .capitalize()
+ hashlib.sha256((name + stack_name + iam_type + config.region).encode("utf-8")).hexdigest()[:12].upper()
)
full_resource_name = iam_name_prefix + name + "-" + resource_hash
if iam_path:
@@ -494,6 +492,7 @@ class NodeIamResourcesBase(Construct):
"""Return a path to be associated with IAM roles and instance profiles."""
if iam_path:
return f"{iam_path}{Stack.of(self).stack_name}/"
+ else:
return f"{IAM_ROLE_PATH}{Stack.of(self).stack_name}/"
def _format_arn(self, **kwargs):
|
[Doc][Core] Fixed a bug in Ray core Pi calculation example, issue#31105
It's missing parameter for pi4_sample.
The correct code should be
pi4_sample.remote(sample_count = SAMPLE_COUNT) | "print(f'Doing {BATCHES} batches')\n",
"results = []\n",
"for _ in range(BATCHES):\n",
- " results.append(pi4_sample.remote())\n",
+ " results.append(pi4_sample.remote(sample_count = SAMPLE_COUNT))\n",
"output = ray.get(results)"
]
},
|
Windows: Fix, always check for stdin, stdout, and stderr presence
* This avoids making this specific to options, where it's unclear if
these are sufficient conditions. | @@ -289,6 +289,15 @@ static void PRINT_REFCOUNTS() {
}
#endif
+// Small helper to open files with few arguments.
+static PyObject *BUILTIN_OPEN_SIMPLE(PyObject *filename, char const *mode) {
+#if PYTHON_VERSION < 300
+ return BUILTIN_OPEN(filename, Nuitka_String_FromString(mode), NULL);
+#else
+ return BUILTIN_OPEN(filename, Nuitka_String_FromString(mode), NULL, NULL, NULL, NULL, NULL, NULL);
+#endif
+}
+
#ifdef _NUITKA_WINMAIN_ENTRY_POINT
int __stdcall WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, char *lpCmdLine, int nCmdShow) {
#if defined(__MINGW32__) && !defined(_W64)
@@ -522,30 +531,35 @@ int main(int argc, char **argv) {
assert(_Py_Ticker >= 20);
}
- /* On Windows, we support disabling the console via linker flag, but now
+ /* At least on Windows, we support disabling the console via linker flag, but now
need to provide the NUL standard file handles manually in this case. */
-
-#if defined(_NUITKA_WINMAIN_ENTRY_POINT) && PYTHON_VERSION >= 300
{
- PyObject *filename = Nuitka_String_FromString("NUL:");
+ PyObject *nul_filename = Nuitka_String_FromString("NUL:");
- PyObject *stdin_file =
- BUILTIN_OPEN(filename, Nuitka_String_FromString("r"), NULL, NULL, NULL, NULL, NULL, NULL);
+ if (PySys_GetObject("stdin") == NULL) {
+ PyObject *stdin_file = BUILTIN_OPEN_SIMPLE(nul_filename, "r");
CHECK_OBJECT(stdin_file);
PySys_SetObject("stdin", stdin_file);
+ }
- PyObject *stdout_file =
- BUILTIN_OPEN(filename, Nuitka_String_FromString("w"), NULL, NULL, NULL, NULL, NULL, NULL);
+ if (PySys_GetObject("stdout") == NULL) {
+ PyObject *stdout_file = BUILTIN_OPEN_SIMPLE(nul_filename, "w");
CHECK_OBJECT(stdout_file);
PySys_SetObject("stdout", stdout_file);
- PySys_SetObject("stderr", stdout_file);
+ }
+
+ if (PySys_GetObject("stderr") == NULL) {
+ PyObject *stderr_file = BUILTIN_OPEN_SIMPLE(nul_filename, "w");
- Py_DECREF(filename);
+ CHECK_OBJECT(stderr_file);
+
+ PySys_SetObject("stderr", stderr_file);
}
-#endif
+ Py_DECREF(nul_filename);
+ }
#ifdef _NUITKA_STANDALONE
NUITKA_PRINT_TRACE("main(): Calling setEarlyFrozenModulesFileAttribute().");
|
sort magpie generated features alphabetically to avoid issues of feature
order changing | @@ -212,7 +212,7 @@ class Magpie(BaseEstimator, TransformerMixin):
df = clean_dataframe(df)
df = df.select_dtypes(['number']).dropna(axis=1)
assert self.composition_feature not in df.columns
- return df
+ return df[sorted(df.columns.tolist())]
class MaterialsProject(BaseEstimator, TransformerMixin):
"""
|
ResolvedExpression.flat_subexprs: add a "filter" argument
TN: | @@ -905,9 +905,16 @@ class ResolvedExpression(object):
"""
return []
- def flat_subexprs(self):
+ def flat_subexprs(
+ self, filter=lambda expr: isinstance(expr, ResolvedExpression)
+ ):
"""
- Like "subexprs", but return a flat list of ResovedExpression.
+ Wrapper around "subexprs" to return a flat list of items matching
+ "filter". By default, get all ResolvedExpressions.
+
+ :param filter: Predicate to test whether a subexpression should be
+ returned.
+ :type filter: (T) -> bool
:rtype: list[ResolvedExpression]
"""
@@ -921,7 +928,7 @@ class ResolvedExpression(object):
return mapcat(values, explore)
elif isinstance(values, dict):
return mapcat(values.values(), explore)
- elif isinstance(values, ResolvedExpression):
+ elif filter(values):
return [values]
else:
return []
|
Fix the spelling mistake in host.py
TrivialFix | @@ -527,7 +527,7 @@ class Host(object):
:returns: a nova.virt.libvirt.Guest object
:raises exception.InstanceNotFound: The domain was not found
- :raises exception.InternalError: A libvirt error occured
+ :raises exception.InternalError: A libvirt error occurred
"""
return libvirt_guest.Guest(self.get_domain(instance))
@@ -542,7 +542,7 @@ class Host(object):
:returns: a libvirt.Domain object
:raises exception.InstanceNotFound: The domain was not found
- :raises exception.InternalError: A libvirt error occured
+ :raises exception.InternalError: A libvirt error occurred
"""
try:
conn = self.get_connection()
|
type stubs: Allows `v_args` to decorate a class.
v_args is described as taking a callbable as argument:
Yet the documentation states it can decorate a class: | # -*- coding: utf-8 -*-
-from typing import TypeVar, Tuple, List, Callable, Generic, Type
+from typing import TypeVar, Tuple, List, Callable, Generic, Type, Union
from abc import ABC
from .tree import Tree
_T = TypeVar('_T')
_R = TypeVar('_R')
_FUNC = Callable[..., _T]
-
+_DECORED = Union[_FUNC, type]
class Transformer(ABC, Generic[_T]):
@@ -76,7 +76,7 @@ def v_args(
inline: bool = False,
meta: bool = False,
tree: bool = False
-) -> Callable[[_FUNC], _FUNC]:
+) -> Callable[[_DECORED], _DECORED]:
...
|
Makefile improvements
- only rechef headnodes/worknodes if amount is great than 1 | @@ -5,6 +5,8 @@ export inventory = ansible/inventory
export playbooks = ansible/playbooks
export ANSIBLE_CONFIG = ansible/ansible.cfg
+headnodes = $$(ansible headnodes -i ${inventory} --list | tail -n +2 | wc -l)
+worknodes = $$(ansible worknodes -i ${inventory} --list | tail -n +2 | wc -l)
all : \
download-assets \
@@ -21,50 +23,41 @@ create :
virtual/bin/create-virtual-environment.sh
-
destroy :
virtual/bin/destroy-virtual-environment.sh
-
operator :
ansible-playbook -v -i ${inventory} ${playbooks}/site.yml -t operator
-
download-assets :
ansible-playbook -v -i ${inventory} ${playbooks}/site.yml -t download-assets
-
chef-server :
ansible-playbook -v -i ${inventory} ${playbooks}/site.yml -t chef-server
-
chef-workstation :
ansible-playbook -v -i ${inventory} ${playbooks}/site.yml -t chef-workstation
-
chef-node :
ansible-playbook -v -i ${inventory} ${playbooks}/site.yml -t chef-node
-
chef-client : \
chef-client-bootstraps \
chef-client-headnodes \
chef-client-worknodes
-
chef-client-bootstraps :
ansible-playbook -v \
-i ${inventory} ${playbooks}/site.yml \
-t chef-client --limit bootstraps
-
chef-client-headnodes :
ansible-playbook -v \
@@ -72,23 +65,25 @@ chef-client-headnodes :
-t chef-client --limit headnodes \
-e "step=1"
+ @if [ "${headnodes}" -gt 1 ]; then \
ansible-playbook -v \
-i ${inventory} ${playbooks}/site.yml \
-t chef-client --limit headnodes \
- -e "step=1"
-
+ -e "step=1"; \
+ fi
chef-client-worknodes :
ansible-playbook -v \
-i ${inventory} ${playbooks}/site.yml \
- -t chef-client --limit worknodes
+ -t chef-client --limit worknodes \
-e 'run_once=true'
+ @if [ "${worknodes}" -gt 1 ]; then \
ansible-playbook -v \
-i ${inventory} ${playbooks}/site.yml \
- -t chef-client --limit worknodes
-
+ -t chef-client --limit worknodes; \
+ fi
discover-compute-nodes:
@@ -96,14 +91,12 @@ discover-compute-nodes:
-i ${inventory} ${playbooks}/site.yml \
-t discover-compute-nodes --limit headnodes
-
upload-bcpc :
ansible-playbook -v \
-i ${inventory} ${playbooks}/site.yml \
-t upload-bcpc
-
upload-all :
ansible-playbook -v \
@@ -114,14 +107,12 @@ upload-all :
-i ${inventory} ${playbooks}/site.yml \
-t upload-bcpc
-
file-server :
ansible-playbook -v \
-i ${inventory} ${playbooks}/site.yml \
-t file-server
-
###############################################################################
# helper targets
###############################################################################
|
Update the file name of tune-TiKV.md
To fix the 404 error when switching from the Chinese to English version on PingCAP website.
Besides, make the metadata and the title consistent. | ---
-title: TiKV Performance Tuning
+title: Tune TiKV Performance
category: tuning
---
-# Performance Tuning for TiKV
+# Tune TiKV Performance
This document describes how to tune the TiKV parameters for optimal performance.
|
Fix murano-api docs
Session response body is in inelegant format and the explanation
may cause misunderstanding. | @@ -341,7 +341,9 @@ User could not open new session for environment that in
Configure environment / open session
------------------------------------
-During this call new working session is created, and session ID should be sent in a request header with name ``X-Configuration-Session``.
+During this call a new working session is created with its ID returned in response body.
+Notice that the session ID should be added to request headers with name ``X-Configuration-Session``
+in subsequent requests when necessary.
*Request*
@@ -361,11 +363,13 @@ During this call new working session is created, and session ID should be sent i
::
{
+ "id": "257bef44a9d848daa5b2563779714820",
"updated": datetime.datetime(2014, 5, 14, 14, 17, 58, 949358),
"environment_id": "744e44812da84e858946f5d817de4f72",
"ser_id": "4e91d06270c54290b9dbdf859356d3b3",
"created": datetime.datetime(2014, 5, 14, 14, 17, 58, 949305),
- "state": "open", "version": 0L, "id": "257bef44a9d848daa5b2563779714820"
+ "state": "open",
+ "version": 0L
}
+----------------+-----------------------------------------------------------+
|
[JAX] Disables large k test cases in ann_test.
Will investigate probability properties for the corner cases in the future. | @@ -60,12 +60,14 @@ def compute_recall(result_neighbors, ground_truth_neighbors) -> float:
class AnnTest(jtu.JaxTestCase):
+ # TODO(b/258315194) Investigate probability property when input is around
+ # few thousands.
@jtu.sample_product(
qy_shape=[(200, 128), (128, 128)],
db_shape=[(128, 500), (128, 3000)],
dtype=jtu.dtypes.all_floating,
- k=[1, 10, 50],
- recall=[0.9, 0.95],
+ k=[1, 10],
+ recall=[0.95],
)
def test_approx_max_k(self, qy_shape, db_shape, dtype, k, recall):
rng = jtu.rand_default(self.rng())
@@ -76,14 +78,14 @@ class AnnTest(jtu.JaxTestCase):
_, ann_args = lax.approx_max_k(scores, k, recall_target=recall)
self.assertEqual(k, len(ann_args[0]))
ann_recall = compute_recall(np.asarray(ann_args), np.asarray(gt_args))
- self.assertGreater(ann_recall, recall)
+ self.assertGreaterEqual(ann_recall, recall*0.9)
@jtu.sample_product(
qy_shape=[(200, 128), (128, 128)],
db_shape=[(128, 500), (128, 3000)],
dtype=jtu.dtypes.all_floating,
- k=[1, 10, 50],
- recall=[0.9, 0.95],
+ k=[1, 10],
+ recall=[0.95],
)
def test_approx_min_k(self, qy_shape, db_shape, dtype, k, recall):
rng = jtu.rand_default(self.rng())
@@ -92,9 +94,8 @@ class AnnTest(jtu.JaxTestCase):
scores = lax.dot(qy, db)
_, gt_args = lax.top_k(-scores, k)
_, ann_args = lax.approx_min_k(scores, k, recall_target=recall)
- self.assertEqual(k, len(ann_args[0]))
ann_recall = compute_recall(np.asarray(ann_args), np.asarray(gt_args))
- self.assertGreater(ann_recall, recall * 0.98)
+ self.assertGreaterEqual(ann_recall, recall*0.9)
@jtu.sample_product(
dtype=[np.float32],
|
remove old dependencies
pydantic, docutils and its stubs are no longer needed by rstcheck directly
rstcheck-core took over the dependencies | @@ -48,9 +48,6 @@ sphinx-click = "^4.0.3"
rstcheck-core = "^1.0.2"
importlib-metadata = {version = ">=1.6, <5.0", python = "<3.8"}
typing-extensions = {version = ">=3.7.4, <5.0", python = "<3.8"}
-docutils = ">=0.7, <0.19"
-types-docutils = ">=0.18, <0.19"
-pydantic = ">=1.2, <2.0"
typer = {extras = ["all"], version = ">=0.4.1,<0.8"}
[tool.poetry.dev-dependencies]
@@ -112,13 +109,6 @@ disallow_any_generics = true
check_untyped_defs = true
implicit_reexport = false
python_version = "3.10" # CHANGE ME
-plugins = "pydantic.mypy"
-
-[tool.pydantic-mypy]
-init_forbid_extra = true
-init_typed = false
-warn_required_dynamic_aliases = true
-warn_untyped_fields = true
# -- FLAKEHEAVEN CONFIG ----------------------------------------------------------------
|
ocs_ci/ocs/resources/pod.py
- Added function get_pod_count() to get count of any pod with label specified | @@ -684,6 +684,12 @@ def get_osd_pods(osd_label=constants.OSD_APP_LABEL, namespace=None):
return osd_pods
+def get_pod_count(label, namespace=None):
+ namespace = namespace or config.ENV_DATA['cluster_namespace']
+ pods = get_pods_having_label(label=label, namespace=namespace)
+ return len(pods)
+
+
def get_cephfsplugin_provisioner_pods(
cephfsplugin_provisioner_label=constants.CSI_CEPHFSPLUGIN_PROVISIONER_LABEL,
namespace=None
|
Casctl: Python version check fix
Fixes the python version check code in the casctl. | # Copyright(c) 2012-2021 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
#
-
-import platform
import sys
-min_ver = "3.6"
-ver = platform.python_version()
-if ver < min_ver:
- print(
- "Minimum required python version is {}. Detected python version is {}".format(
- min_ver,
- ver,
- ),
- file=sys.stderr,
- )
+min_ver = (3, 6)
+if sys.version_info < min_ver:
+ print("Minimum required python version is {}.{}. Detected python version is '{}'"
+ .format(*min_ver, sys.version), file=sys.stderr)
exit(1)
import argparse
|
include multi output example in doc block
Summary: resolves
Test Plan: eyes
Reviewers: sandyryza, catherinewu, max | @@ -81,26 +81,31 @@ def pipeline(
pipeline. When a hook is applied to a pipeline, it will be attached to all solid
instances within the pipeline.
- Examples:
+ Example:
.. code-block:: python
- @lambda_solid
- def emit_one() -> int:
- return 1
+ @solid(output_defs=[OutputDefinition(int, "two"), OutputDefinition(int, "four")])
+ def emit_two_four(_) -> int:
+ yield Output(2, "two")
+ yield Output(4, "four")
+
@lambda_solid
def add_one(num: int) -> int:
return num + 1
+
@lambda_solid
def mult_two(num: int) -> int:
return num * 2
- @pipeline
- def add_pipeline():
- add_one(mult_two(emit_one()))
+ @pipeline
+ def math_pipeline():
+ two, four = emit_two_four()
+ add_one(two)
+ mult_two(four)
"""
if callable(name):
check.invariant(description is None)
|
try to split testsuites into two
try to split tests | @@ -43,6 +43,9 @@ jobs:
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
pyv: ["3.8", "3.9", "3.10"]
+ pytest-filter:
+ - "import or plot or live or experiment"
+ - "not (import or plot or live or experiment)"
include:
- {os: ubuntu-latest, pyv: "3.11-dev"}
@@ -75,10 +78,7 @@ jobs:
AIOHTTP_NO_EXTENSIONS: ${{ matrix.pyv == '3.11-dev' && '1' }}
- name: run tests
timeout-minutes: 40
- run: >-
- python -m tests -n=auto
- --cov-report=xml --cov-report=term
- ${{ env.extra_test_args }}
+ run: pytest -nauto -ra --durations 100 --cov=dvc --cov-report=xml --cov-report=term -k "${{ matrix.pytest-filter }}"
- name: upload coverage report
uses: codecov/codecov-action@v3
with:
|
CostInference for 1D conv
Summary:
Pull Request resolved:
As title | @@ -406,12 +406,12 @@ class ConvPoolOpBase : public Operator<Context> {
const auto order =
StringToStorageOrder(helper.GetSingleArgument<string>("order", "NCHW"));
uint64_t N;
- uint64_t Y_t = 1;
uint64_t Y_h;
- uint64_t Y_w;
- uint64_t kernel_t = 1;
+ uint64_t Y_w = 1;
+ uint64_t Y_t = 1;
uint64_t kernel_h;
- uint64_t kernel_w;
+ uint64_t kernel_w = 1;
+ uint64_t kernel_t = 1;
uint64_t in_channels;
uint64_t out_channels;
@@ -430,14 +430,8 @@ class ConvPoolOpBase : public Operator<Context> {
kernel_w = W.dims(4);
in_channels = W.dims(1);
out_channels = W.dims(0);
- } else if (X.dims_size() == 3) {
- // TODO(T36817818): This inference function needs a case for 1-d
- // convolution. Until then, we are disabling it with the following early
- // return
- return OpSchema::Cost();
- } else {
+ } else if (X.dims_size() == 4) {
// 2D convolution
- CAFFE_ENFORCE_EQ(X.dims_size(), 4, "Conv2D should have 4D input tensor");
CAFFE_ENFORCE_EQ(W.dims_size(), 4, "Conv2D should have 4D filter tensor");
if (order == StorageOrder::NHWC) {
Y_h = Y.dims(1);
@@ -454,6 +448,20 @@ class ConvPoolOpBase : public Operator<Context> {
in_channels = W.dims(1);
out_channels = W.dims(0);
}
+ } else {
+ // 1D convolution
+ CAFFE_ENFORCE_EQ(W.dims_size(), 3, "Conv1D should have 3D filter tensor");
+ if (order == StorageOrder::NHWC) {
+ Y_h = Y.dims(1);
+ kernel_h = W.dims(1);
+ in_channels = W.dims(2);
+ out_channels = W.dims(0);
+ } else {
+ Y_h = Y.dims(2);
+ kernel_h = W.dims(2);
+ in_channels = W.dims(1);
+ out_channels = W.dims(0);
+ }
}
uint64_t nElemX = nElemFromDim(X);
|
Improve PEP8 compliance: Fix E265 error
E265 - Block comment should start with '#' | # E731 - Prefer def over lambda
# W503 - line break before binary operator, to be replaced with W504.
# Refer to https://github.com/PyCQA/pycodestyle/issues/498
-ignore=E265,E501,E722,E731,W503
+ignore=E501,E722,E731,W503
exclude=config,galaxy/*/migrations,galaxy/*/south_migrations,galaxy/static,provisioning
|
Nix the welcome popup
Closes | @@ -62,7 +62,6 @@ for key, tab in tabs.items():
title = _("Projects")
suppress_sidebar = True
-suppress_welcome = 'suppress-welcome' in request.cookie
page_id = "homepage"
[---]
{% extends "templates/base.html" %}
@@ -87,23 +86,6 @@ page_id = "homepage"
{% endblock %}
{% block content %}
-{% if not suppress_welcome %}
-<div class="welcome modal">
- <p><b>{{ _("Welcome to Gratipay!") }}</b></p>
- <p>{{ _( "We have {nteams} projects receiving and sharing about {volume} each week."
- , nteams=tabs['approved']['n']
- , volume=format_currency(volume, 'USD', trailing_zeroes=False)
- ) }}</p>
- <p>{{ _( "Continue to explore our projects, or {a}read more about us{_a}."
- , a='<a href="/about/">'|safe
- , _a='</a>'|safe
- ) }}</p>
- <div class="continue">
- <button>{{ _("Continue") }}</button>
- </div>
-</div>
-{% endif %}
-
<form action="new" class="apply">
<button type="submit">{{ _("Fund Your Project") }}</button>
</form>
@@ -164,16 +146,3 @@ page_id = "homepage"
{% endfor %}
</div>
{% endblock %}
-
-
-{% block scripts %}
-<script>
- $(document).ready(function () {
- $('.welcome').show();
- $('.welcome button').click(function() {
- document.cookie = 'suppress-welcome=';
- $(this).parent().parent().fadeOut(100);
- });
- });
-</script>
-{% endblock %}
|
Rename crack attributes
This just renames the unbalance attributes so that we have consistent
names throughout the code.
This way the name is also pep8 compliant. | @@ -97,8 +97,8 @@ class Crack(Defect):
self.speed = speed
self.speedI = speed
self.speedF = speed
- self.MassUnb = unbalance_magnitude
- self.PhaseUnb = unbalance_phase
+ self.unbalance_magnitude = unbalance_magnitude
+ self.unbalance_phase = unbalance_phase
self.print_progress = print_progress
if crack_type is None or crack_type == "Mayes":
@@ -108,7 +108,7 @@ class Crack(Defect):
else:
raise Exception("Check the crack model!")
- if len(self.MassUnb) != len(self.PhaseUnb):
+ if len(self.unbalance_magnitude) != len(self.unbalance_phase):
raise Exception(
"The unbalance magnitude vector and phase must have the same size!"
)
@@ -128,7 +128,7 @@ class Crack(Defect):
self.rotor = rotor
self.n_disk = len(self.rotor.disk_elements)
- if self.n_disk != len(self.MassUnb):
+ if self.n_disk != len(self.unbalance_magnitude):
raise Exception("The number of discs and unbalances must agree!")
self.ndof = rotor.ndof
@@ -286,7 +286,7 @@ class Crack(Defect):
self.Omega = self.sA + self.sB * np.exp(-self.lambdat * T)
self.AccelV = -self.lambdat * self.sB * np.exp(-self.lambdat * T)
- self.tetaUNB = np.zeros((len(self.PhaseUnb), len(self.angular_position)))
+ self.tetaUNB = np.zeros((len(self.unbalance_phase), len(self.angular_position)))
unbx = np.zeros(len(self.angular_position))
unby = np.zeros(len(self.angular_position))
@@ -294,15 +294,15 @@ class Crack(Defect):
self.forces_crack = np.zeros((self.ndof, len(t_eval)))
for ii in range(self.n_disk):
- self.tetaUNB[ii, :] = self.angular_position + self.PhaseUnb[ii] + np.pi / 2
+ self.tetaUNB[ii, :] = self.angular_position + self.unbalance_phase[ii] + np.pi / 2
- unbx = self.MassUnb[ii] * (self.AccelV) * (
+ unbx = self.unbalance_magnitude[ii] * (self.AccelV) * (
np.cos(self.tetaUNB[ii, :])
- ) - self.MassUnb[ii] * ((self.Omega ** 2)) * (np.sin(self.tetaUNB[ii, :]))
+ ) - self.unbalance_magnitude[ii] * ((self.Omega ** 2)) * (np.sin(self.tetaUNB[ii, :]))
- unby = -self.MassUnb[ii] * (self.AccelV) * (
+ unby = -self.unbalance_magnitude[ii] * (self.AccelV) * (
np.sin(self.tetaUNB[ii, :])
- ) - self.MassUnb[ii] * (self.Omega ** 2) * (np.cos(self.tetaUNB[ii, :]))
+ ) - self.unbalance_magnitude[ii] * (self.Omega ** 2) * (np.cos(self.tetaUNB[ii, :]))
FFunb[int(self.ndofd[ii]), :] += unbx
FFunb[int(self.ndofd[ii] + 1), :] += unby
|
Cleanup sms attachment copying
use ilapfunc.sanitize_file_name() for regex replacement instead of iterative
minor formatting updates | @@ -3,7 +3,7 @@ import pandas as pd
import shutil
from scripts.artifact_report import ArtifactHtmlReport
-from scripts.ilapfuncs import logfunc, tsv, timeline, is_platform_windows, open_sqlite_db_readonly
+from scripts.ilapfuncs import logfunc, tsv, timeline, is_platform_windows, open_sqlite_db_readonly, sanitize_file_name
from scripts.chat_rendering import render_chat, chat_HTML
@@ -61,19 +61,15 @@ def get_sms(files_found, report_folder, seeker):
pathToAttachment = None
if rec["FILENAME"]:
attachment = seeker.search('**'+rec["FILENAME"].replace('~', '', 1), return_on_first_hit=True)
- pathToAttachment = os.path.join((os.path.basename(os.path.abspath(report_folder))), os.path.basename(rec["FILENAME"]))
if not attachment:
logfunc(' [!] Unable to extract attachment file: "{}"'.format(rec['FILENAME']))
return
if is_platform_windows():
- invalid = '<>:"/\|?*'
- cleanFilename = os.path.basename(rec["FILENAME"])
- for values in invalid:
- cleanFilename = cleanFilename.replace(values, '')
- shutil.copy(attachment[0], os.path.join(report_folder, cleanFilename))
- pathToAttachment = os.path.join(report_folder, cleanFilename)
+ destFileName = sanitize_file_name(os.path.basename(rec["FILENAME"]))
else:
- shutil.copy(attachment[0], os.path.join(report_folder, os.path.basename(rec["FILENAME"])))
+ destFileName = os.path.basename(rec["FILENAME"])
+ pathToAttachment = os.path.join((os.path.basename(os.path.abspath(report_folder))), destFileName)
+ shutil.copy(attachment[0], os.path.join(report_folder, destFileName))
return pathToAttachment
sms_df["file-path"] = sms_df.apply(lambda rec: copyAttachments(rec), axis=1)
|
Add Docker references in contributing, main readme
Add docker hype to readme preheader, "dev env" section of contributing. | @@ -60,6 +60,9 @@ We welcome direct contributions to the sendgrid-python code base. Thank you!
### Development Environment ###
+#### Using Docker ####
+You can use our Docker image to avoid setting up the development environment yourself. See [USAGE.md](https://github.com/sendgrid/sendgrid-python/docker/USAGE.md).
+
#### Install and Run Locally ####
##### Prerequisites #####
|
Skip define 'extern "C"' test on Windows
I was struggling to get the macro passed on the command-line while being escaped properly | @@ -25,19 +25,20 @@ bar = Extension(
["bar.pyx", "bar1.c", "bar2.cpp"],
)
-if sys.platform == "win32":
- # escape the quotes on the command line
- extern_c_definition = r'extern \"C\"'
-else:
- extern_c_definition = 'extern "C"'
baz = Extension(
"baz",
["baz.pyx", "baz1.c", "baz2.cpp"],
- define_macros = [("__PYX_EXTERN_C", extern_c_definition)],
+ define_macros = [("__PYX_EXTERN_C", 'extern "C"')],
)
+ext_modules = [foo, bar]
+if sys.platform != "win32":
+ # It's very hard to get the command-line macro define to escape properly on Windows,
+ # so skip it
+ ext_modules.append(baz)
+
setup(
- ext_modules=cythonize([foo, bar, baz]),
+ ext_modules=cythonize(ext_modules),
)
######## foo.pyx ########
@@ -156,3 +157,12 @@ int get_int1() { return (int)get_char(); }
#include "baz.h"
int get_int2() { return (int)get_char(); }
+
+######## baz.py ##########
+
+# Dummy module so Windows test works
+import sys
+assert sys.platform == "win32"
+
+def test():
+ pass
|
Update telegram
added download link for installation | @@ -6,6 +6,9 @@ Integrating Hummingbot with [Telegram Messenger](https://telegram.org/) allows y
Whether you are running Hummingbot in the cloud or on your local machine, you can use Telegram to monitor and control bots from wherever you are!
+!!! note
+ Make sure to install Telegram on your system before setting up your Telegram Bot. If not, you can download Telegram for [Windows/MAC/Linux](https://desktop.telegram.org/) and install.
+
## Set up your Telegram Bot
Below, we show how to create a Telegram bot that integrates with your Hummingbot deployment.
|
Docs: switch from outdated "pngmath" sphinx package to "imgmath", and use "svg" as output format.
See | @@ -41,7 +41,7 @@ highlight_language = 'cython'
extensions = [
'ipython_console_highlighting',
'cython_highlighting',
- 'sphinx.ext.pngmath',
+ 'sphinx.ext.imgmath',
'sphinx.ext.todo',
'sphinx.ext.intersphinx',
'sphinx.ext.autodoc'
@@ -132,6 +132,9 @@ intersphinx_mapping = {'python': ('https://docs.python.org/3/', None)}
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
+# The output image format. The default is 'png'. It should be either 'png' or 'svg'.
+imgmath_image_format = "svg"
+
# -- Options for HTML output ---------------------------------------------------
|
Return wrong_preds correctly for eval_model()
eval_examples can be 1) a list of InputExamples, 2) a tuple of two or three lists, with the first one or two columns as the text columns. | @@ -1862,7 +1862,12 @@ class ClassificationModel:
mismatched = labels != preds
if eval_examples:
+ if instanceof(eval_examples, list):
wrong = [i for (i, v) in zip(eval_examples, mismatched) if v.any()]
+ elif len(eval_examples) == 2:
+ wrong = [i for (i, v) in zip(eval_examples[0], mismatched) if v.any()]
+ else:
+ wrong = [i for (i, v) in zip(zip(eval_examples[0], eval_examples[1]), mismatched) if v.any()]
else:
wrong = ["NA"]
|
Adding a link to MO dashboard in graphana to MO card
HG--
branch : card-graphana-link | <th scope="row">{{ _("Description") }}</th>
<td>{% if description %}{{ description }}{% endif %}</td>
</tr>
+<tr>
+ <th scope="row">{{ _("Dashboard") }}</th>
+ <td><a href="/ui/grafana/dashboard/script/noc.js?dashboard=mo&id={{ object.id }}">View metrics</a></td>
+</tr>
<tr>
<th scope="row">{{ _("Service Range") }}</th>
<td>N/A</td>
|
docs: add --export option to the osbuild man page
This is used to export an image but isn't present in the osbuild man page. | @@ -45,6 +45,7 @@ is not listed here, **osbuild** will deny startup and exit with an error.
the osbuild library
--checkpoint=CHECKPOINT stage to commit to the object store during
build (can be passed multiple times)
+--export=OBJECT object to export (can be passed multiple times)
--json output results in JSON format
--output-directory=DIR directory where result objects are stored
--inspect return the manifest in JSON format including
|
Fixed build file for service_only bootstrap
The build.py file expects the window arg to be set, however, when building with a service_only bootstrap, this variable is not set. This results in an error when building the private.mp3 resource. | @@ -299,6 +299,7 @@ main.py that loads it.''')
# Add extra environment variable file into tar-able directory:
env_vars_tarpath = tempfile.mkdtemp(prefix="p4a-extra-env-")
with open(os.path.join(env_vars_tarpath, "p4a_env_vars.txt"), "w") as f:
+ if hasattr(args, "window"):
f.write("P4A_IS_WINDOWED=" + str(args.window) + "\n")
if hasattr(args, "orientation"):
f.write("P4A_ORIENTATION=" + str(args.orientation) + "\n")
|
Avoid force calculation error when printing
Now if compute_forces is set to False, will allow __str__ method to print without error from trying to calculate self.forces. | @@ -364,11 +364,19 @@ class EwaldSummation(object):
return self._eta
def __str__(self):
+ if self._compute_forces:
+ output = ["Real = " + str(self.real_space_energy),
+ "Reciprocal = " + str(self.reciprocal_space_energy),
+ "Point = " + str(self.point_energy),
+ "Total = " + str(self.total_energy),
+ "Forces:\n" + str(self.forces)
+ ]
+ else:
output = ["Real = " + str(self.real_space_energy),
"Reciprocal = " + str(self.reciprocal_space_energy),
"Point = " + str(self.point_energy),
"Total = " + str(self.total_energy),
- "Forces:\n" + str(self.forces)]
+ "Forces were not computed"]
return "\n".join(output)
|
io: StringIO seems happy enough to take None
Didn't check C code, but the _pyio implementation explicitly checks for
None | @@ -203,7 +203,7 @@ class TextIOWrapper(TextIO):
def tell(self) -> int: ...
class StringIO(TextIOWrapper):
- def __init__(self, initial_value: str = ...,
+ def __init__(self, initial_value: Optional[str] = ...,
newline: Optional[str] = ...) -> None: ...
# StringIO does not contain a "name" field. This workaround is necessary
# to allow StringIO sub-classes to add this field, as it is defined
|
Update CVE-2019-15858.yaml
version number on the description was ok :) | @@ -8,7 +8,7 @@ info:
This template supports the detection part only. See references.
admin/includes/class.import.snippet.php in the "Woody ad snippets" plugin
- before 2.2.4 for WordPress allows unauthenticated options import,
+ before 2.2.5 for WordPress allows unauthenticated options import,
as demonstrated by storing an XSS payload for remote code execution.
Source/References:
|
Fix testcase uploads for OSS-Fuzz.
Previously platform IDs were being incorrectly set to `project-linux`
when then should just be e.g. `linux`. This only affects OSS-Fuzz.
This was reported in | @@ -1272,6 +1272,10 @@ def create_user_uploaded_testcase(key,
utils.current_date_time(), uploader_email)
# External jobs never get minimized.
testcase.minimized_keys = 'NA'
+
+ # analyze_task sets this for non-external reproductions.
+ testcase.platform = job.platform.lower()
+ testcase.platform_id = testcase.platform
else:
testcase.crash_type = ''
testcase.crash_state = 'Pending'
@@ -1295,8 +1299,6 @@ def create_user_uploaded_testcase(key,
testcase.http_flag = bool(http_flag)
testcase.archive_state = archive_state
testcase.project_name = get_project_name(job.name)
- testcase.platform = job.platform.lower()
- testcase.platform_id = testcase.platform
if archive_state or bundled:
testcase.absolute_path = file_path_input
|
[tasks] Add Loop.restart
This implementation waits until the task is done before starting it
again.
Closes | @@ -128,9 +128,36 @@ class Loop:
self._task = self.loop.create_task(self._loop(*args, **kwargs))
return self._task
+ def _can_be_cancelled(self):
+ return not self._is_being_cancelled and self._task and not self._task.done()
+
def cancel(self):
"""Cancels the internal task, if it is running."""
- if not self._is_being_cancelled and self._task and not self._task.done():
+ if self._can_be_cancelled():
+ self._task.cancel()
+
+ def restart(self, *args, **kwargs):
+ r"""A convenience method to restart the internal start.
+
+ .. note::
+
+ Due to the way this function works, the task is not
+ returned like :meth:`start`.
+
+ Parameters
+ ------------
+ \*args
+ The arguments to to use.
+ \*\*kwargs
+ The keyword arguments to use.
+ """
+
+ def restart_when_over(fut, *, args=args, kwargs=kwargs):
+ self._task.remove_done_callback(restart_when_over)
+ self.start(*args, **kwargs)
+
+ if self._can_be_cancelled():
+ self._task.add_done_callback(restart_when_over)
self._task.cancel()
def add_exception_type(self, exc):
|
Show password box as soon as any validation fails.
Autofocus password box if simpleLogin is active. | <transition name="textbox">
<core-textbox
:label="$tr('password')"
- v-if="(!simpleLogin || (simpleLogin && passwordMissing))"
+ v-if="(!simpleLogin || (simpleLogin && (passwordMissing || invalidCredentials)))"
id="password"
type="password"
:placeholder="$tr('enterPassword')"
:aria-label="$tr('password')"
v-model="password"
autocomplete="current-password"
+ :autofocus="simpleLogin"
:required="!simpleLogin"
:invalid="passwordMissing"
:error="passwordMissing ? $tr('enterPassword') : ''"/>
|
Sync worker requirement mismatches
Summary:
Syncing worker requirement mismatches to improve remote build time.
Created actions:
MEDIUM: 981
LARGE: 56
Updated actions:
From MEDIUM to LARGE: 10
From LARGE to MEDIUM: 3
From LARGE to XLARGE: 1 | "ATen-cu#platform007-clang,shared": {
"workerSize": "MEDIUM",
"platformType": "LINUX"
+ },
+ "ATen-cpu#compile-pic-THTensorMoreMath.cpp.oc1c23613,platform007-clang": {
+ "workerSize": "MEDIUM",
+ "platformType": "LINUX"
}
}
\ No newline at end of file
|
Bump links for Effective Python to 2nd edition
Updated and expanded book for Python 3. Has 30 new major guidelines added compared to the 1st edition. | -description: A book that gives 59 best practices for writing excellent Python. Great
+description: A book that gives 90 best practices for writing excellent Python. Great
for intermediates.
name: Effective Python
payment: paid
@@ -8,7 +8,7 @@ urls:
url: https://effectivepython.com/
- icon: branding/amazon
title: Amazon
- url: https://www.amazon.com/Effective-Python-Specific-Software-Development/dp/0134034287
+ url: https://www.amazon.com/Effective-Python-Specific-Software-Development/dp/0134853989
- icon: branding/github
title: GitHub
url: https://github.com/bslatkin/effectivepython
|
Remove duplicate entry in test Vagrantfile
remove some leftover since code has been refactored | @@ -72,12 +72,6 @@ ansible_provision = proc do |ansible|
# In a production deployment, these should be secret
if DOCKER then
ansible.extra_vars = ansible.extra_vars.merge({
- containerized_deployment: 'true',
- containerized_deployment: 'true',
- containerized_deployment: 'true',
- containerized_deployment: 'true',
- containerized_deployment: 'true',
- containerized_deployment: 'true',
containerized_deployment: 'true',
ceph_mon_docker_interface: ETH,
ceph_mon_docker_subnet: "#{PUBLIC_SUBNET}.0/24",
|
group builds by executor
matrix looks good but results in slower overall build | @@ -136,21 +136,17 @@ jobs:
test:
executor: <<parameters.executor_name>>
- environment:
- COVERAGE_FILE: "coverage-results/.coverage.<<parameters.executor_name>>-<<parameters.event_loop>>"
- HYPOTHESIS_PROFILE: "ci"
- TOXENV: "<<parameters.executor_name>>-<<parameters.event_loop>>"
parameters:
executor_name:
type: string
description: "executor name"
- event_loop:
- description: "event loop type (asyncio or uvloop)"
- default: "asyncio"
- type: enum
- enum:
- - asyncio
- - uvloop
+ toxenv:
+ type: string
+ description: "tox env name"
+ environment:
+ COVERAGE_FILE: "coverage-results/.coverage.<<parameters.executor_name>>"
+ HYPOTHESIS_PROFILE: "ci"
+ TOXENV: "<<parameters.toxenv>>"
steps:
- python_version
- checkout
@@ -235,32 +231,72 @@ workflows:
tags:
only: /.*/
- test:
- name: "test-<< matrix.executor_name >>-<<matrix.event_loop>>"
+ name: "test-py36"
+ executor_name: py36
+ toxenv: "py36-asyncio,py36-uvloop"
+ requires:
+ - lint
+ - build
+ context:
+ - docker-hub-credentials
+ filters:
+ tags:
+ only: /.*/
+ - test:
+ name: "test-py37"
+ executor_name: py37
+ toxenv: "py37-asyncio,py37-uvloop"
+ requires:
+ - lint
+ - build
+ context:
+ - docker-hub-credentials
+ filters:
+ tags:
+ only: /.*/
+ - test:
+ name: "test-py38"
+ executor_name: py38
+ toxenv: "py38-asyncio,py38-uvloop"
+ requires:
+ - lint
+ - build
+ context:
+ - docker-hub-credentials
+ filters:
+ tags:
+ only: /.*/
+ - test:
+ name: "test-py39"
+ executor_name: py39
+ toxenv: "py39-asyncio,py39-uvloop"
+ requires:
+ - lint
+ - build
+ context:
+ - docker-hub-credentials
+ filters:
+ tags:
+ only: /.*/
+ - test:
+ name: "test-pypy3"
+ executor_name: pypy3
+ toxenv: "pypy3-asyncio"
requires:
- lint
- build
context:
- docker-hub-credentials
- matrix:
- parameters:
- event_loop:
- - asyncio
- - uvloop
- executor_name:
- - py36
- - py37
- - py38
- - py39
- - pypy3
- exclude:
- - executor_name: pypy3
- event_loop: uvloop
filters:
tags:
only: /.*/
- coverage:
requires:
- - test
+ - test-py36
+ - test-py37
+ - test-py38
+ - test-py39
+ - test-pypy3
context:
- docker-hub-credentials
filters:
@@ -271,7 +307,11 @@ workflows:
- build
- lint
- docs
- - test
+ - test-py36
+ - test-py37
+ - test-py38
+ - test-py39
+ - test-pypy3
context:
- docker-hub-credentials
filters:
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.