message | diff
---|---|
Add Train version to adjustment-nova-scheduler.yml playbook
This patch modifies the adjustment-nova-scheduler.yml playbook
to support the OpenStack Train version | # Playbook to adjust Nova Scheduler settings to avoid over-scheduling hosts
# with greater memory in uneven memory environments.
#
-# Versions tested: Newton, Ocata, Pike
+# Versions tested: Newton, Ocata, Pike, Train
#
# Examples:
# ansible-playbook -i hosts browbeat/adjustment-nova-scheduler.yml -e 'max_instances_per_host=350'
nova_config_file: /etc/nova/nova.conf
when: "('Newton' in osp_version['content'] | b64decode or 'Ocata' in osp_version['content'] | b64decode)"
- - name: (Pike) Set Config File based on OpenStack Version
+ - name: (Pike, Train) Set Config File based on OpenStack Version
set_fact:
nova_config_file: /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
- when: "'Pike' in osp_version['content'] | b64decode"
+ when: "('Pike' in osp_version['content'] | b64decode or 'Train' in osp_version['content'] | b64decode)"
+
+ - name: (Pike) Set container cli based on Openstack Version
+ set_fact:
+ container_cli: docker
+ when: "('Pike' in osp_version['content'] | b64decode)"
+
+ - name: (Train) Set container cli based on Openstack Version
+ set_fact:
+ container_cli: podman
+ when: "('Train' in osp_version['content'] | b64decode)"
- name: Set default max_instances_per_host, ram_weight_multiplier, enabled_filters, and host_subset_size
set_fact:
- httpd
when: "('Newton' in osp_version['content'] | b64decode or 'Ocata' in osp_version['content'] | b64decode) and (restart_nova)"
- - name: (Pike) Restart Nova Scheduler Container
+ - name: (Pike, Train) Restart Nova Scheduler Container
become: true
- command: "docker restart {{item}}"
+ command: "{{container_cli}} restart {{item}}"
with_items:
- nova_scheduler
- when: "('Pike' in osp_version['content'] | b64decode) and (restart_nova)"
+ when: "('Pike' in osp_version['content'] | b64decode or 'Train' in osp_version['content'] | b64decode) and (restart_nova)"
|
Update nir_tutorial.md
make it a little closer to the interactive tutorial | @@ -108,17 +108,17 @@ We start by [subtracting](image_subtract.md) the background.
# Inputs:
# gray_img1 - Grayscale image data from which gray_img2 will be subtracted
# gray_img2 - Grayscale image data which will be subtracted from gray_img1
- bkg_sub_img = pcv.image_subtract(gray_img1=img, img_bkgrd)
+ bkg_sub_img = pcv.image_subtract(gray_img1=img_bkgrd, gray_img2=img)
- # Threshold the image of interest using the two-sided custom range function (keep what is between 50-190)
+ # Threshold the image of interest using the two-sided custom range function (keep what is between 20-190)
# Inputs:
# img - RGB or grayscale image data
# lower_thresh - List of lower threshold values
# upper_thresh - List of upper threshold values
# channel - Color-space channels of interest (either 'RGB', 'HSV', 'LAB', or 'gray')
- bkg_sub_thres_img, masked_img = pcv.threshold.custom_range(img=bkg_sub_img, lower_thresh=[50],
- upper_thresh=[190], channel='gray')
+ bkg_sub_thres_img, masked_img = pcv.threshold.custom_range(img=bkg_sub_img, lower_thresh=[20],
+ upper_thresh=[250], channel='gray')
```
@@ -132,7 +132,7 @@ Images were subtracted using the PlantCV [image subtract](image_subtract.md) fun
This function is built using the numpy '-' operator.
It is a modulo operator rather than a saturation operator.
-Thresholding was done using the [custom range threshold](custom_range_threshold.md) function. Pixels that have a signal value less than 50 and greater than 190 will be set to 0 (black),
+Thresholding was done using the [custom range threshold](custom_range_threshold.md) function. Pixels that have a signal value less than 20 and greater than 250 will be set to 0 (black),
while those with a value between these two will be set to 255 (white).
This approach works very well if you have image of the background without plant material.
|
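The modulo-versus-saturation point above is easy to see with plain NumPy; a minimal sketch with two single-pixel uint8 images (not PlantCV's API, just the arithmetic):

```python
import numpy as np

img = np.array([[10]], dtype=np.uint8)        # darker "plant" pixel
img_bkgrd = np.array([[60]], dtype=np.uint8)  # brighter background pixel

# numpy's '-' wraps around (modulo 256) for unsigned integers...
wrapped = img - img_bkgrd                     # 10 - 60 -> 206, not 0

# ...whereas a saturating subtraction clips negative results to 0.
saturated = np.clip(img.astype(np.int16) - img_bkgrd.astype(np.int16), 0, 255).astype(np.uint8)

print(int(wrapped[0, 0]), int(saturated[0, 0]))  # 206 0
```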
circle-docker: Simplify a bit for clarity and efficiency.
Install `jq` with APT -- that's a lot simpler to read than this
explicit download.
And coalesce several commands, following Docker upstream's
recommendation and avoiding unnecessary overhead. | @@ -31,20 +31,12 @@ RUN apt-get update \
&& apt-get install -y \
git mercurial xvfb \
locales sudo openssh-client ca-certificates tar gzip parallel \
- net-tools netcat unzip zip bzip2 \
- python3 python3-pip
-
-RUN ln -sf /usr/share/zoneinfo/Etc/UTC /etc/localtime
-
-RUN locale-gen C.UTF-8 || true
+ net-tools netcat unzip zip bzip2 jq \
+ python3 python3-pip \
+ && ln -sf /usr/share/zoneinfo/Etc/UTC /etc/localtime \
+ && { locale-gen C.UTF-8 || true; }
ENV LANG=C.UTF-8
-RUN JQ_URL="https://circle-downloads.s3.amazonaws.com/circleci-images/cache/linux-amd64/jq-latest" \
- && curl --silent --show-error --location --fail --retry 3 --output /usr/bin/jq $JQ_URL \
- && chmod +x /usr/bin/jq \
- && jq --version
-
-
# Install Docker. This logic comes from Circle's Dockerfile; it's probably
# faster than the upstream-recommended approach of using their apt repo,
# and fine for an image that will be rebuilt rather than upgraded.
|
Use optimized kernel instead of naive kernel
Currently, the last `tune_kernel` was passing `kernel_source` (the naive kernel) instead of `convolution_kernel_string` (the optimized kernel). This raised an error since params like `filter_height` etc. weren't defined in the former. This commit fixes that. | },
"outputs": [],
"source": [
- "results, env = tune_kernel(kernel_name, kernel_source, problem_size, arguments, tune_params,\n",
+ "results, env = tune_kernel(kernel_name, convolution_kernel_string, problem_size, arguments, tune_params,\n",
" grid_div_x=grid_div_x, grid_div_y=grid_div_y)"
]
},
|
fix build's project selection list
Tuple should have been a list. Build now correctly prompts for a project
if multiple projects are detected by hsdev. | @@ -267,8 +267,8 @@ class Builder(object):
if idx != -1:
run_selected(projs[idx])
- modlist = [(m[0], m[1].get('path', '??')) for m in projs]
- self.view.window().show_quick_panel(modlist, on_done, 0, current_project_idx)
+ self.view.window().show_quick_panel([[m[0], m[1].get('path', '??')] for m in projs], on_done, 0,
+ current_project_idx)
# Retrieve projects as dictionary that refers to this app instance
|
Update updates-october-2020.md
ATT&CK consumers have requested a list of the techniques and sub-techniques added in the v8 ATT&CK release that were not a part of the PRE merger or the new Network platform. Adding a list of the 1 Technique and 7 sub-techniques. | @@ -140,7 +140,18 @@ We will continue to build out additional Network techniques and sub-techniques a
**Enterprise**
-We also added 1 additional new technique and 7 sub-techniques to Enterprise in this ATT&CK release beyond the scope of the above updates. All Enterprise technique changes, including this new technique and these new sub-techniques, are documented below.
+We also added 1 additional new technique and 7 sub-techniques to Enterprise in this ATT&CK release beyond the scope of the above updates:
+
+* Boot or Logon Autostart Execution: [Print Processors](/techniques/T1547/012)
+* [Cloud Infrastructure Discovery](/techniques/T1580)
+* Hide Artifacts: [VBA Stomping](/techniques/T1564/007)
+* Impair Defenses: [Disable Cloud Logs](/techniques/T1562/008)
+* Man-in-the-Middle: [ARP Cache Poisoning](/techniques/T1557/002)
+* Scheduled Task/Job: [Systemd Timers](/techniques/T1053/006)
+* Signed Binary Proxy Execution: [Verclsid](/techniques/T1218/012)
+* Steal or Forge Kerberos Tickets: [AS-REP Roasting](/techniques/T1558/004)
+
+All Enterprise technique changes are documented below.
New Techniques:
|
(from AES) HACK per Don't regenerate OPENSOURCE.md when doing `make generate`.
The Golang part of OPENSOURCE.md isn't properly running in the builder container, which makes it effectively impossible to get it right from a Mac. :( | @@ -9,7 +9,7 @@ generate/files += $(OSS_HOME)/pkg/api/envoy
generate/files += $(OSS_HOME)/pkg/api/pb
generate/files += $(OSS_HOME)/pkg/envoy-control-plane
generate/files += $(OSS_HOME)/docker/test-ratelimit/ratelimit.proto
-generate/files += $(OSS_HOME)/OPENSOURCE.md
+# generate/files += $(OSS_HOME)/OPENSOURCE.md # Per @LukeShu for 1.7.0 -- something is broken here
generate/files += $(OSS_HOME)/builder/requirements.txt
generate: ## Update generated sources that get committed to git
generate:
@@ -29,7 +29,7 @@ generate-clean:
rm -f $(OSS_HOME)/tools/sandbox/grpc_web/*_pb.js
rm -rf $(OSS_HOME)/pkg/envoy-control-plane
rm -f $(OSS_HOME)/docker/test-ratelimit/ratelimit.proto
- rm -f $(OSS_HOME)/OPENSOURCE.md
+# rm -f $(OSS_HOME)/OPENSOURCE.md # Per @LukeShu for 1.7.0 -- something is broken here
.PHONY: generate _generate generate-clean
go-mod-tidy/oss:
|
Update user_manual.md
MPI description added | @@ -548,7 +548,8 @@ needed for this is a compliant C compiler and a local MPI installation such as O
In what follows we describe the steps to execute openQCD using udocker in a HPC system with a batch system /eg. SLURM).
An analogous procedure can be followed for generic MPI applications
-A container version can be downloaded in the docker hub repository, and the image created by udocker as described above:
+A container version can be downloaded in the docker hub repository, and the extract the container to the filesystem (udocker
+create) as described above:
```
./udocker pull iscampos/openqcd
@@ -556,7 +557,7 @@ A container version can be downloaded in the docker hub repository, and the imag
fbeb130b-9f14-3a9d-9962-089b4acf3ea8
```
-Next enter in the container (notice we set the variable LD_LIBRARY_PATH explicitly):
+Next we execute the container in the filesystem image (notice we set the variable LD_LIBRARY_PATH explicitly):
```
./udocker run -e LD_LIBRARY_PATH=/usr/lib openqcd /bin/bash
|
[gradle] update shadow plugin
The changelog for [6.0.0](https://github.com/johnrengelman/shadow/releases/tag/6.0.0) claims performance improvements. In practice,
I save maybe a few seconds on the `shadowJar` step. | @@ -9,7 +9,7 @@ plugins {
id 'java'
id 'scala'
id 'idea'
- id 'com.github.johnrengelman.shadow' version '5.0.0'
+ id 'com.github.johnrengelman.shadow' version '6.1.0'
id "de.undercouch.download" version "3.2.0"
id 'eclipse'
}
|
Add a missing import
`path` needs to be imported. | @@ -43,7 +43,8 @@ URLconf
Add the Debug Toolbar's URLs to your project's URLconf as follows::
from django.conf import settings
- from django.conf.urls import include, url
+ from django.conf.urls import include, url # For django versions before 2.0
+ from django.urls import include, path # For django versions from 2.0 and up
if settings.DEBUG:
import debug_toolbar
|
fix: Update modified to pick up field change
Default Print Language is not picked up in some migrations | "issingle": 0,
"istable": 0,
"max_attachments": 0,
- "modified": "2017-09-05 14:01:05.658719",
+ "modified": "2017-09-05 14:02:05.658719",
"modified_by": "Administrator",
"module": "Printing",
"name": "Print Format",
|
[ci] Add branch protections to .asf.yaml
Moving these into the repo means we will be able to change them at-will.
`tvm-ci/pr-merge` will change soon into `tvm-ci/pr-head` to fix an
unrelated bug, but codifying it here means we can more easily coordinate
the change. | @@ -49,3 +49,14 @@ github:
- denise-k
- driazati
- tvm-bot # For automated feedback in PR review.
+
+ # See https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features#Git.asf.yamlfeatures-Branchprotection
+ protected_branches:
+ main:
+ required_status_checks:
+ contexts:
+ # Require a passing run from Jenkins
+ - tvm-ci/pr-merge
+
+ required_pull_request_reviews:
+ required_approving_review_count: 1
|
kotlin: the `plugin_id` field on `kotlinc_plugin` target is optional
The `plugin_id` field on the `kotlinc_plugin` target has a default and should not have been marked as required.
[ci skip-rust] | @@ -172,7 +172,6 @@ class KotlincPluginArtifactField(StringField):
class KotlincPluginIdField(StringField):
alias = "plugin_id"
- required = True
help = softwrap(
"""
The ID for `kotlinc` to use when setting options for the plugin.
|
Tests: Cleanup ".pdb" files on Windows too.
* In debug mode these were created, but never removed, polluting the
checkout. | @@ -555,6 +555,8 @@ Taking coverage of '{filename}' using '{python}' with flags {args} ...""".
os.path.join(output_dir, exe_filename)
]
+ pdb_filename = exe_filename[:-4] + ".pdb"
+
if trace_command:
my_print("CPython command:", *cpython_cmd)
@@ -746,6 +748,8 @@ Exit codes {exit_cpython:d} (CPython) != {exit_nuitka:d} (Nuitka)""".format(
assert not os.path.exists(nuitka_cmd2[0]+".away")
+ if os.path.exists(pdb_filename):
+ os.unlink(pdb_filename)
else:
os.unlink(nuitka_cmd2[0])
else:
|
Update intro to product overview
Messaging adjustments. Please feel free to edit typos and grammar. | Product Overview
============================
-**Mattermost** empowers organizations to achieve their highest priorities through modern, enterprise-grade communication. Enjoy all the productivity benefits of workplace messaging across web, mobile and PC, with unlimited archiving, search and integrations in a single-tenant, private cloud solution under IT control.
+**Mattermost** provides high trust collaboration and messaging solutions through an open source, community-powered approach. Enjoy all the productivity benefits of workplace messaging across web, mobile and PC, with unlimited archiving, search and integrations within IT-controlled private environments in public clouds, including AWS and Azure, as well as on-premise in private clouds and virtual or physical servers.
Thousands of organizations use Mattermost around the world in 14 languages for its unmatched benefits:
-- **Security** - Keep vital communications, including access to mobile and desktop apps, behind your firewall. Deploy using `dozens of security features <https://docs.mattermost.com/overview/security.html>`_ vetted by top security researchers. Data stays on servers you control, encrypted using keys you control.
+- **Security** - Keep vital communications, including access to mobile and desktop apps, within your private environments. Deploy using `dozens of security features <https://docs.mattermost.com/overview/security.html>`_ vetted by global information security communities. Data stays on servers you control, encrypted using keys you control.
-- **Configurability** - Adapt your deployment to your needs, preferences, and existing systems. The open source Mattermost server enables advanced white labelling and customization with complete access to RESTful server APIs, drivers, webhooks and hundreds of 3rd party extensions.
+- **Configurability** - Adapt your deployment to your needs, preferences, policies and existing systems. Mattermost integrates with your evolving security, compliance and monitoring infrastructure and offers a host of app integrations, webhooks, APIs, and drivers to bring all your communication and workflow into one place.
- **Scalability** - Grow from dozens of users to tens of thousands on the same server. Built on a high performance, single-tenant infrastructure, Mattermost E20 offers cluster-based high availability deployments with horizontal scaling and advanced performance monitoring.
About the Mattermost open source project
----------------------------------------------
-At its core, Mattermost is an open source, private cloud alternative to proprietary SaaS messaging for teams. The software, developed in partnership with over 500 contributors from around the world, rivals the productivity of SaaS solutions. We achieve this while offering organizations superior control, data sovereignty, configurability, freedom from lock-in, and enhanced security by keeping vital communications behind your firewall.
+At its core, Mattermost is an open source, hybrid cloud alternative to proprietary SaaS messaging for teams. The software, developed in partnership with over 500 contributors from around the world, is designed to increase the agility, efficiency and innovation in high trust organizations while keeping data and operations under IT control.
Core committers, including both community contributors and paid staff at Mattermost, Inc., determine the project roadmap. For enterprises with needs beyond the scope of the open source project, commercial "Enterprise Edition" extensions are available from Mattermost, Inc. Partnership with our core committer community, along with revenue from Enterprise Edition, ensures the continued improvement of all editions.
|
refactor test
simplifies the device checking test | @@ -215,23 +215,8 @@ class MixedInt8TestMultiGpu(BaseMixedInt8Test):
self.model_name, load_in_8bit=True, max_memory=memory_mapping, device_map="auto"
)
- def get_list_devices(model):
- list_devices = []
- for _, module in model.named_children():
- if len(list(module.children())) > 0:
- list_devices.extend(get_list_devices(module))
- else:
- # Do a try except since we can encounter Dropout modules that does not
- # have any device set
- try:
- list_devices.append(next(module.parameters()).device.index)
- except BaseException:
- continue
- return list_devices
-
- list_devices = get_list_devices(model_parallel)
- # Check that we have dispatched the model into 2 separate devices
- self.assertTrue((1 in list_devices) and (0 in list_devices))
+ # Check correct device map
+ self.assertEqual(set(model_parallel.hf_device_map.values()), {0, 1})
# Check that inference pass works on the model
encoded_input = self.tokenizer(self.input_text, return_tensors="pt")
|
Update dynamic_step_driver.py
fixed lint errors | @@ -193,7 +193,8 @@ class DynamicStepDriver(driver.Driver):
self.env.time_step_spec())
counter = tf.zeros(batch_dims, tf.int32)
- [_, time_step, policy_state] = tf.nest.map_structure(tf.stop_gradient,tf.while_loop(
+ [_, time_step, policy_state] = tf.nest.map_structure(tf.stop_gradient,
+ tf.while_loop(
cond=self._loop_condition_fn(),
body=self._loop_body_fn(),
loop_vars=[counter, time_step, policy_state],
|
Subclass checks
Move type checking in the facade back to checking subclass checks. | @@ -224,7 +224,7 @@ def strcast(kind, keep_builtins=False):
return str(kind)[1:]
if kind is typing.Any:
return 'Any'
- if kind is typing.GenericMeta:
+ if issubclass(kind, typing.GenericMeta):
return str(kind)[1:]
return kind
|
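A small sketch of the behavioural difference the commit relies on, using plain classes rather than `typing.GenericMeta` (which only exists on older Python versions):

```python
class Base:
    pass

class Derived(Base):
    pass

# An identity check only matches the exact class object...
print(Derived is Base)            # False

# ...while issubclass also accepts anything derived from it, which is why
# the facade switched from `kind is ...` to `issubclass(kind, ...)`.
print(issubclass(Derived, Base))  # True
```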
Fix model name not including user
The model names in models.yaml include the username prefix.
Also fixed a lint error. | @@ -267,6 +267,7 @@ class Connection:
username = accounts['user']
password = accounts.get('password')
models = jujudata.models()[controller_name]
+ model_name = '{}/{}'.format(username, model_name)
model_uuid = models['models'][model_name]['uuid']
macaroons = get_macaroons() if not password else None
@@ -349,7 +350,7 @@ def get_macaroons():
cookie_file = os.path.expanduser('~/.go-cookies')
with open(cookie_file, 'r') as f:
cookies = json.load(f)
- except (OSError, ValueError) as e:
+ except (OSError, ValueError):
log.warn("Couldn't load macaroons from %s", cookie_file)
return []
|
custom_profile_fields: Control-group to input-group class correction.
There is no control-group class to hide/show in the custom profile fields
list; instead there is an input-group class, likely a small typo carried over
from this commit. | @@ -295,11 +295,11 @@ function set_up_external_account_field_edit_form(field_elem, url_pattern_val) {
if (field_elem.$form.find("select[name=external_acc_field_type]").val() === "custom") {
field_elem.$form.find("input[name=url_pattern]").val(url_pattern_val);
field_elem.$form.find(".custom_external_account_detail").show();
- field_elem.$form.find("input[name=name]").val("").closest(".control-group").show();
- field_elem.$form.find("input[name=hint]").val("").closest(".control-group").show();
+ field_elem.$form.find("input[name=name]").val("").closest(".input-group").show();
+ field_elem.$form.find("input[name=hint]").val("").closest(".input-group").show();
} else {
- field_elem.$form.find("input[name=name]").closest(".control-group").hide();
- field_elem.$form.find("input[name=hint]").closest(".control-group").hide();
+ field_elem.$form.find("input[name=name]").closest(".input-group").hide();
+ field_elem.$form.find("input[name=hint]").closest(".input-group").hide();
field_elem.$form.find(".custom_external_account_detail").hide();
}
}
|
Override finalize in pure_nccl_communicator
Override finalize to destroy nccl communicator | @@ -47,6 +47,12 @@ class PureNcclCommunicator(mpi_communicator_base.MpiCommunicatorBase):
self.allreduce_dtype_to_grad_dtype_kernel = None
self.params_data = None
+ def finalize(self):
+ super(PureNcclCommunicator, self).finalize()
+ if self.nccl_comm is not None:
+ self.nccl_comm.destroy()
+ self.nccl_comm = None
+
def _init_comms(self):
if self.nccl_comm is not None:
return
|
Fix calling to_timedelta twice
to_timedelta is called in schedule_relative already | @@ -79,7 +79,6 @@ class EventLoopScheduler(SchedulerBase, Disposable):
def schedule_periodic(self, period, action, state=None):
"""Schedule a periodic piece of work."""
- dt = self.to_timedelta(period)
disposed = []
s = [state]
@@ -88,12 +87,12 @@ class EventLoopScheduler(SchedulerBase, Disposable):
if disposed:
return
- self.schedule_relative(dt, tick)
+ self.schedule_relative(period, tick)
new_state = action(s[0])
if new_state is not None:
s[0] = new_state
- self.schedule_relative(dt, tick)
+ self.schedule_relative(period, tick)
def dispose():
disposed.append(True)
|
audio bitrate 0 fix
Adjust the handling of the 0 audio bitrate setting
Improved the 'guess' based on source material
Also fixed a bug where iOS audio would fail to guess | @@ -422,6 +422,16 @@ class MkvtoMp4:
# Create iOS friendly audio stream if the default audio stream has too many channels (iOS only likes AAC stereo)
if self.iOS and a.audio_channels > 2:
iOSbitrate = 256 if (self.audio_bitrate * 2) > 256 else (self.audio_bitrate * 2)
+
+ # Bitrate calculations/overrides
+ if self.audio_bitrate is 0:
+ self.log.debug("Attempting to set ios stream bitrate based on source stream bitrate.")
+ try:
+ iOSbitrate = ((a.bitrate / 1000) / a.audio_channels) * 2
+ except:
+ self.log.warning("Unable to determine iOS audio bitrate from source stream %s, defaulting to 256 per channel." % a.index)
+ iOSbitrate = audio_channels * 256
+
self.log.info("Creating audio stream %s from source audio stream %s [iOS-audio]." % (str(l), a.index))
self.log.debug("Audio codec: %s." % self.iOS[0])
self.log.debug("Channels: 2.")
@@ -460,15 +470,16 @@ class MkvtoMp4:
else:
audio_channels = a.audio_channels
abitrate = a.audio_channels * self.audio_bitrate
+ afilter = self.audio_filter
+
# Bitrate calculations/overrides
if self.audio_bitrate is 0:
self.log.debug("Attempting to set bitrate based on source stream bitrate.")
try:
- abitrate = a.bitrate / 1000
+ abitrate = ((a.bitrate / 1000) / a.audio_channels) * audio_channels
except:
self.log.warning("Unable to determine audio bitrate from source stream %s, defaulting to 256 per channel." % a.index)
- abitrate = a.audio_channels * 256
- afilter = self.audio_filter
+ abitrate = audio_channels * 256
self.log.debug("Audio codec: %s." % acodec)
self.log.debug("Channels: %s." % audio_channels)
|
suggestion for:
more friendly/less personal suggestion. | @@ -3,14 +3,14 @@ When creating a new JSON file you might run into the following error.
`JSONDecodeError: Expecting value: line 1 column 1 (char 0)`
In short, this means that your JSON is invalid in its current state. This could very well happen because the file is just new and completely empty.
-Whilst the JSON data, the data you wish to store, may be empty, the .json file must not. It is recommended to have at least one of the following data types in your .json file:
+Whilst the JSON data, the data you wish to store, may be empty, the .json file must not. You most likely want to use one of the following data types in your .json file:
```
object
array
```
-To resolve this issue, you create one of the above data types in your .json file. It is very common to use `{}` to make an object, which works similar to a dictionary in python.
+To resolve this issue, create one of the above data types in your .json file. It is very common to use `{}` to make an object, which works similar to a dictionary in python.
When this is added to your .json file, it will look like this:
```json
|
Adding two cryptocurrencies APIs
CoinMarketCap and CryptoCompare both are equally good but CryptoCompare supports logos for each currency. Thought this might be a good addition. | @@ -224,7 +224,9 @@ API | Description | Auth | HTTPS | Link |
| Barchart OnDemand | Stock, Futures, and Forex Market Data | `apiKey` | Yes | [Go!](https://www.barchartondemand.com/free) |
| Blockchain | Bitcoin Payment, Wallet & Transaction Data | No | Yes | [Go!](https://www.blockchain.info/api) |
| CoinDesk | Bitcoin Price Index | No | No | [Go!](http://www.coindesk.com/api/) |
+| CoinMarketCap | Cryptocurrencies Prices | No | No | [Go!](https://coinmarketcap.com/api/) |
| Consumer Financial Protection Bureau | Financial services consumer complains data | `apiKey` | Yes | [Go!](https://data.consumerfinance.gov/resource/jhzv-w97w.json) |
+| CryptoCompare | Cryptocurrencies Comparison | No | No | [Go!](https://www.cryptocompare.com/api#) |
| Czech National Bank | A collection of exchange rates | No | No | [Go!](https://www.cnb.cz/cs/financni_trhy/devizovy_trh/kurzy_devizoveho_trhu/denni_kurz.xml) |
| IEX | Stocks and Market Data | No | Yes | [Go!](https://iextrading.com/developer/) |
| Razorpay IFSC | Indian Financial Systems Code (Bank Branch Codes) | No | Yes | [Go!](https://ifsc.razorpay.com/) |
|
Fix seemingly broken padding semantics in praxis/convolutions.py and lingvo/conv_layers_with_time_padding.py
I added a few tests passing NaN showing that at least now it seems to work.
I'll be happy to take comments on how to progress and possibly split this up. | @@ -338,7 +338,7 @@ class BaseConv2DLayerWithPadding(base_layer.BaseLayer):
def _ApplyPadding(tensor_in, padding_in):
padding_expanded = tf.expand_dims(tf.expand_dims(padding_in, -1), -1)
- return tensor_in * (1.0 - padding_expanded)
+ return py_utils.ApplyPadding(padding_expanded, tensor_in)
# Zeroing out padded inputs.
inputs = _ApplyPadding(inputs, paddings)
|
Update README.md with API information
Detailed the difference between Xarray and NumPy APIs. | @@ -30,3 +30,9 @@ pip install --prefix $PREFIX .
```
where $PREFIX is the path that `ncomp` is installed.
+
+
+Xarray interface vs NumPy interface
+===================================
+
+GeoCAT-comp provides a high-level Xarray interface under the `geocat.comp` namespace. However, a stripped-down NumPy interface is used under the hood to bridge the gap between NumPy arrays and the C data structures used by `NComp`. These functions are accessible under the `geocat.comp._ncomp` namespace, but are minimally documented and are intended primarily for internal use.
|
Clarify docs for pytest.raises `match`.
For
Document explicit behavior of `match` and brief note on how to handle matching a string that may contain special re chars. | @@ -558,7 +558,13 @@ def raises(expected_exception, *args, **kwargs):
Assert that a code block/function call raises ``expected_exception``
or raise a failure exception otherwise.
- :kwparam match: if specified, asserts that the exception matches a text or regex
+ :kwparam match: if specified, a string containing a regular expression,
+ or a regular expression object, that is tested against the string
+ representation of the exception using ``re.match``. To match a literal
+ string that may contain ``special characters``__, the pattern can
+ first be escaped with ``re.escape``.
+
+ __ https://docs.python.org/3/library/re.html#regular-expression-syntax
:kwparam message: **(deprecated since 4.1)** if specified, provides a custom failure message
if the exception is not raised. See :ref:`the deprecation docs <raises message deprecated>` for a workaround.
|
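A short hedged example of the escaping advice documented above; `parse_port` is a made-up function used only to raise an error whose message contains regex special characters:

```python
import re

import pytest

def parse_port(value: str) -> int:
    if not value.isdigit():
        raise ValueError(f"invalid port (got {value!r})")
    return int(value)

def test_invalid_port():
    # `match` is treated as a regular expression, so the parentheses in the
    # message must be escaped to be matched literally.
    with pytest.raises(ValueError, match=re.escape("invalid port (got 'abc')")):
        parse_port("abc")
```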
Handle 'file:' URIs in hover content
VSCode assumes the fragment is a 1-based row offset so we'll assume that too. | @@ -9,8 +9,8 @@ from .core.sessions import SessionBufferProtocol
from .core.settings import userprefs
from .core.typing import List, Optional, Any, Dict, Tuple, Sequence
from .core.views import diagnostic_severity
-from .core.views import format_diagnostic_for_html
from .core.views import first_selection_region
+from .core.views import format_diagnostic_for_html
from .core.views import FORMAT_MARKED_STRING, FORMAT_MARKUP_CONTENT, minihtml
from .core.views import make_command_link
from .core.views import make_link
@@ -18,6 +18,7 @@ from .core.views import show_lsp_popup
from .core.views import text_document_position_params
from .core.views import update_lsp_popup
from .core.windows import AbstractViewListener
+from urllib.parse import urlparse
import functools
import sublime
import webbrowser
@@ -186,6 +187,12 @@ class LspHoverCommand(LspTextCommand):
def _on_navigate(self, href: str, point: int) -> None:
if href.startswith("subl:"):
pass
+ elif href.startswith("file:"):
+ window = self.view.window()
+ if window:
+ parsed = urlparse(href)
+ fn = "{}:{}".format(parsed.path, parsed.fragment) if parsed.fragment else parsed.path
+ window.open_file(fn, flags=sublime.ENCODED_POSITION)
elif href.startswith('code-actions:'):
_, config_name = href.split(":")
titles = [command["title"] for command in self._actions_by_config[config_name]]
|
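A standalone sketch of the URI handling in the diff above, outside of Sublime/LSP (the helper name is illustrative only; it assumes, as the message says, that the fragment is a 1-based row):

```python
from urllib.parse import urlparse

def file_uri_to_encoded_position(href: str) -> str:
    """Turn 'file:/path/app.py#42' into '/path/app.py:42' (bare path if no fragment)."""
    parsed = urlparse(href)
    return "{}:{}".format(parsed.path, parsed.fragment) if parsed.fragment else parsed.path

print(file_uri_to_encoded_position("file:/home/me/project/app.py#10"))  # /home/me/project/app.py:10
print(file_uri_to_encoded_position("file:/home/me/project/app.py"))     # /home/me/project/app.py
```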
Fix incorrect billable_units for email test data
Emails always retain the default of "0" [^1].
[^1]: | @@ -49,7 +49,7 @@ def set_up_yearly_data():
# doesn't accidentally bleed over into them
for dt in (date(2016, 3, 31), date(2017, 4, 1)):
create_ft_billing(bst_date=dt, template=sms_template, rate=0.163)
- create_ft_billing(bst_date=dt, template=email_template, rate=0)
+ create_ft_billing(bst_date=dt, template=email_template, rate=0, billable_unit=0)
create_ft_billing(bst_date=dt, template=letter_template, rate=0.34, postage='second')
create_ft_billing(bst_date=dt, template=letter_template, rate=0.31, postage='second')
@@ -60,7 +60,7 @@ def set_up_yearly_data():
dt = start_date + timedelta(days=n)
create_ft_billing(bst_date=dt, template=sms_template, rate=0.162)
- create_ft_billing(bst_date=dt, template=email_template, rate=0)
+ create_ft_billing(bst_date=dt, template=email_template, rate=0, billable_unit=0)
create_ft_billing(bst_date=dt, template=letter_template, rate=0.33, postage='second')
create_ft_billing(bst_date=dt, template=letter_template, rate=0.30, postage='second')
|
Add 1.4.0 release in CHANGELOG
Separate unreleased commits to new 1.4.0 release. | @@ -17,7 +17,10 @@ Changelog
.. Release notes for existing releases are MUTABLE! If there is something that
was missed or can be improved, feel free to change it!
-unreleased
+usreleased
+--------------------
+
+[1.4.0] - 2019-01-29
--------------------
Changed
|
add simplification for Transpose._add
This patch adds a simplification to the addition of a transpose, similar to the
existing simplification of a multiplication of a transpose. Additionally, a
simplification shortcut is placed in Transpose._transpose for the benefit of
these operations. | @@ -1370,6 +1370,12 @@ class Transpose(Array):
return ','.join(map(str, self.axes))
def _transpose(self, axes):
+ if axes == self._invaxes:
+ # NOTE: While we could leave this particular simplification to be dealt
+ # with by Transpose, the benefit of handling it directly is that _add and
+ # _multiply can rely on _transpose for the right hand side without having
+ # to separately account for the trivial case.
+ return self.func
newaxes = [self.axes[i] for i in axes]
return Transpose(self.func, newaxes)
@@ -1409,11 +1415,13 @@ class Transpose(Array):
return Transpose(trydot, [ax-(ax>invaxis) for ax in self.axes if ax != invaxis])
def _add(self, other):
- if isinstance(other, Transpose) and self.axes == other.axes:
- return Transpose(Add([self.func, other.func]), self.axes)
other_trans = other._transpose(self._invaxes)
- if other_trans is not None:
- return Transpose(Add([self.func, other_trans]), self.axes)
+ if other_trans is not None and not isinstance(other_trans, Transpose):
+ # The second clause is to avoid infinite recursions
+ return Transpose(self.func + other_trans, self.axes)
+ tryadd = self.func._add(Transpose(other, self._invaxes))
+ if tryadd is not None:
+ return Transpose(tryadd, self.axes)
def _take(self, indices, axis):
trytake = self.func._take(indices, self.axes[axis])
|
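A NumPy sketch of the algebraic identity the new `_add` exploits, with hypothetical shapes; `inv` plays the role of `_invaxes`:

```python
import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(4, 2, 3)

axes = (2, 0, 1)                  # the transpose applied to `a`
inv = tuple(np.argsort(axes))     # its inverse permutation (what `_invaxes` encodes)

# transpose(a, axes) + b  ==  transpose(a + transpose(b, inv), axes)
lhs = a.transpose(axes) + b
rhs = (a + b.transpose(inv)).transpose(axes)
print(np.allclose(lhs, rhs))      # True
```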
Update `show_title` to False in `get_full_name`
This fixes duplicate titles shown in some places. | @@ -83,7 +83,7 @@ class PersonMixin:
return get_default_values(type(self)).get('_title', UserTitle.none).title
return self._title.title
- def get_full_name(self, show_title=True, last_name_first=True, last_name_upper=True,
+ def get_full_name(self, show_title=False, last_name_first=True, last_name_upper=True,
abbrev_first_name=True, _show_empty_names=False):
"""Return the person's name in the specified notation.
|
facts: always set ceph_run_cmd and ceph_admin_command
Always set these facts on monitor nodes, whatever we run with `--limit`.
Otherwise, the playbook will fail when using `--limit` on nodes where these
facts are used in a task delegated to a monitor. | - name: set_fact ceph_run_cmd
set_fact:
ceph_run_cmd: "{{ container_binary + ' run --rm --net=host -v /etc/ceph:/etc/ceph:z -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph ' + ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment else 'ceph' }}"
+ delegate_to: "{{ item }}"
+ delegate_facts: True
+ run_once: True
+ with_items:
+ - "{{ groups[mon_group_name] if groups[mon_group_name] | default([]) | length > 0 else [] }}"
+ - "{{ inventory_hostname }}"
- name: set_fact ceph_admin_command
set_fact:
ceph_admin_command: "{{ ceph_run_cmd }} -n client.admin -k /etc/ceph/{{ cluster }}.client.admin.keyring"
+ delegate_to: "{{ item }}"
+ delegate_facts: True
+ run_once: True
+ with_items:
+ - "{{ groups[mon_group_name] if groups[mon_group_name] | default([]) | length > 0 else [] }}"
+ - "{{ inventory_hostname }}"
\ No newline at end of file
|
Removing print statement in else clause; this is being executed in
several buildtest commands, which is not appropriate. | @@ -167,8 +167,7 @@ def load_configuration(config_path=None):
for tree in os.getenv("MODULEPATH", "").split(":"):
if os.path.isdir(tree):
tree_list.append(tree)
- else:
- print(f"Skipping module tree {tree} because path does not exist")
+
config_opts["BUILDTEST_MODULEPATH"] = tree_list
return config_opts
|
[meta] update backport config for 7.13 branch
This commits update sqren/backport config to handle 7.13 branch.
Also remove 7.12 branch. | "upstream": "elastic/helm-charts",
"targetBranchChoices": [
"6.8",
- "7.12",
+ "7.13",
"7.x"
],
"all": true,
"prFilter": "label:need-backport",
- "targetPRLabels": ["backport"],
- "sourcePRLabels": ["backported"]
+ "targetPRLabels": [
+ "backport"
+ ],
+ "sourcePRLabels": [
+ "backported"
+ ]
}
|
[tests] use generated config and real sha for tests
This should reduce external cloud storage dependencies making
infrastructure changes easier. | @@ -225,6 +225,8 @@ test_project() {
}
test_gcp() {
+ local conf_file="./hail-config-0.2-test.json"
+ python ./create_config_file.py '0.2' $conf_file
time gsutil cp \
build/libs/hail-all-spark.jar \
gs://hail-ci-0-1/temp/$SOURCE_SHA/$TARGET_SHA/hail.jar
@@ -241,7 +243,9 @@ test_gcp() {
--version 0.2 \
--spark 2.2.0 \
--max-idle 10m \
- --bucket=hail-ci-0-1-dataproc-staging-bucket \
+ --bucket hail-ci-0-1-dataproc-staging-bucket \
+ --config-file $conf_file \
+ --hash $(git rev-parse --short=12 HEAD) \
--jar gs://hail-ci-0-1/temp/$SOURCE_SHA/$TARGET_SHA/hail.jar \
--zip gs://hail-ci-0-1/temp/$SOURCE_SHA/$TARGET_SHA/hail.zip \
--vep
|
Suppress missing Content-Type headers when fetching content
Fixes | @@ -40,8 +40,13 @@ log = logging.getLogger(__name__)
async def json_or_text(response):
text = await response.text(encoding='utf-8')
+ try:
if response.headers['content-type'] == 'application/json':
return json.loads(text)
+ except KeyError:
+ # Thanks Cloudflare
+ pass
+
return text
class Route:
|
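The same guard can also be phrased with `dict.get`; a synchronous, aiohttp-free sketch of the idea, with a plain dict standing in for the response headers:

```python
import json

def json_or_text(text: str, headers: dict):
    # .get() tolerates responses that carry no Content-Type header at all,
    # falling back to returning the body as plain text.
    if headers.get("content-type", "").startswith("application/json"):
        return json.loads(text)
    return text

print(json_or_text('{"ok": true}', {"content-type": "application/json"}))  # {'ok': True}
print(json_or_text("Bad gateway", {}))                                     # Bad gateway
```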
Added county-data filling
Data class now pulls county data wherever there is a gap in the state
data. | @@ -62,8 +62,6 @@ class CovidDatasets:
series.at[i, 'cases'] = self.step_down(i, series)
return series
-
-
def backfill(self, series):
# Backfill the data as necessary for the model
return self.backfill_synthetic_cases(
@@ -96,14 +94,39 @@ class CovidDatasets:
return self.BED_DATA
def get_timeseries_by_country_state(self, country, state):
- # First, attempt to pull the state-level data without aggregating.
- return self.backfill(
- self.get_all_timeseries()[
+ # First, pull all available state data
+ state_data = self.get_all_timeseries()[
(self.get_all_timeseries()["state"] == state) &
(self.get_all_timeseries()["country"] == country) &
(self.get_all_timeseries()["county"].isna())
]
- )
+ # Second pull all county data for the state
+ county_data = self.get_all_timeseries()[
+ (self.get_all_timeseries()["state"] == state) &
+ (self.get_all_timeseries()["country"] == country) &
+ (self.get_all_timeseries()["county"].notna())
+ ][['date', 'country', 'state', 'cases', 'deaths', 'recovered', 'active']].groupby(
+ ['date', 'country', 'state'], as_index=False
+ )[['cases', 'deaths', 'recovered', 'active']].sum()
+ # Now we fill in whatever gaps we can in the state data using the county data
+ curr_date = state_data['date'].min() # Start on the first date of state data we have
+ county_data_to_insert = []
+ while curr_date > self.start_date:
+ curr_date -= datetime.timedelta(days=1)
+ # If there is no state data for a day, we need to get some country data for the day
+ if len(state_data[state_data['date'] == curr_date]) == 0:
+ county_data_for_date = copy(county_data[county_data['date'] == curr_date])
+ if len(county_data_for_date) == 0: # If there's no county data, we're SOL.
+ continue # TODO: Revisit. This should be more intelligent
+ county_data_for_date = county_data_for_date.iloc[0]
+ new_state_row = copy(state_data.iloc[0]) # Copy the first row of the state data to get the right format
+ new_state_row['date'] = county_data_for_date['date']
+ new_state_row['cases'] = county_data_for_date['cases']
+ new_state_row['deaths'] = county_data_for_date['deaths']
+ new_state_row['recovered'] = county_data_for_date['recovered']
+ new_state_row['active'] = county_data_for_date['active']
+ county_data_to_insert.append(copy(new_state_row))
+ return state_data.append(pd.DataFrame(county_data_to_insert))
def get_timeseries_by_country(self, country):
return self.get_all_timeseries()[self.get_all_timeseries()["country"] == country]
|
Fix path to Telegraf helper script
The working directory is `/` when systemd runs this script, so we can't
use `pwd` to get to Telegraf's package directory. Instead we can use the
symlink at `/opt/mesosphere/active/telegraf`. | @@ -21,7 +21,7 @@ export DCOS_NODE_PRIVATE_IP="${node_private_ip}"
# Retrieve the fault domain for this machine
fault_domain_script="/opt/mesosphere/bin/detect_fault_domain"
-fault_domain_extractor="$(pwd)/tools/extract_fault_domain.py"
+fault_domain_extractor="/opt/mesosphere/active/telegraf/tools/extract_fault_domain.py"
if [ -x $fault_domain_script ]; then
# If a fault domain script exists, export environment variables so that
|
Fix broken link in configuration reference
Link to the 'configuration reference documentation' was broken | @@ -45,7 +45,7 @@ The configuration of opsdroid is done in a [YAML](https://yaml.org/) file called
_Note: if no configuration file is found then opsdroid will use an `example_configuration.yaml` and place it in one of the default locations.`_
-Make sure to read the [configuration reference documentation](../configuration.md) for further information about configuring opsdroid.
+Make sure to read the [configuration reference documentation](./configuration.md) for further information about configuring opsdroid.
Using a single YAML file for every configuration of opsdroid ensures we have a single reference for how we expect our bot to behave.
|
Fix Lint
Summary:
Pull Request resolved:
As pointed out in the previous PR broke the Lint.
ghstack-source-id: | @@ -4,9 +4,9 @@ import unittest
import torch
import torch.nn.quantized as nnq
from torch.quantization import \
- QConfig_dynamic, default_observer, default_weight_observer, \
+ QConfig_dynamic, default_weight_observer, \
quantize, prepare, convert, prepare_qat, quantize_qat, fuse_modules, \
- quantize_dynamic, default_qconfig, default_dynamic_qconfig
+ quantize_dynamic, default_dynamic_qconfig
from common_utils import run_tests
from common_quantization import QuantizationTestCase, SingleLayerLinearModel, \
|
Update signing lib for verifying signature bytes
Updates the signing library to allow verification of signatures as bytes
in addition to hex. | @@ -114,10 +114,11 @@ class Secp256k1Context(Context):
def verify(self, signature, message, public_key):
try:
- sig_bytes = bytes.fromhex(signature)
+ if isinstance(signature, str):
+ signature = bytes.fromhex(signature)
sig = public_key.secp256k1_public_key.ecdsa_deserialize_compact(
- sig_bytes)
+ signature)
return public_key.secp256k1_public_key.ecdsa_verify(message, sig)
# pylint: disable=broad-except
except Exception:
|
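A minimal, library-agnostic sketch of the hex-or-bytes normalization introduced above; the actual ECDSA verification is still left to the secp256k1 bindings:

```python
def normalize_signature(signature) -> bytes:
    """Accept a compact signature either as a hex string or as raw bytes."""
    if isinstance(signature, str):
        return bytes.fromhex(signature)
    return bytes(signature)

assert normalize_signature("deadbeef") == b"\xde\xad\xbe\xef"
assert normalize_signature(b"\xde\xad\xbe\xef") == b"\xde\xad\xbe\xef"
```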
Update tests/eth2/core/beacon/operations/test_pool.py
Simplify constant calculation via PR feedback | @@ -9,7 +9,7 @@ from eth2.beacon.types.attestations import Attestation
def mk_attestation(index, sample_attestation_params):
return Attestation(**sample_attestation_params).copy(
- custody_bits=(True,) * 8 * 16,
+ custody_bits=(True,) * 128,
)
|
Several stylistic refactors to _cached_results
Invert if expressions and convert block to "return check". This helps
to reduce the number of levels of indentation.
Shuffle the logging a bit to make it more succinct | @@ -311,7 +311,8 @@ class InsightsClientApi(object):
def _cached_results(self):
# archive_tmp_dir and .lastcollected must both exist
file_name = constants.archive_last_collected_date_file
- if os.path.isfile(file_name):
+ if not os.path.isfile(file_name):
+ return
# get .lastcollected timestamp and archive
# .lastcollected contains the timestamp on the first line
@@ -325,10 +326,13 @@ class InsightsClientApi(object):
lastcollected = 0
last_collected_archive = coll_file.readline().strip()
- logger.debug("Found last collected archive %s." % (last_collected_archive))
-
# make sure the archive actually exists on the filesystem
- if os.path.isfile(last_collected_archive):
+ if not os.path.isfile(last_collected_archive):
+ logger.debug("Found last collected archive %s in .lastcollected"
+ " but file does not exist" % (last_collected_archive))
+ return
+ else:
+ logger.debug("Found last collected archive %s." % (last_collected_archive))
# get the latest archive if .lastcollected is < 24hrs
try:
@@ -338,18 +342,13 @@ class InsightsClientApi(object):
logger.debug("Time since last collection is less than 24 hours.")
logger.debug("Latest archive %s found." % (last_collected_archive))
return last_collected_archive
+ else:
+ logger.debug("Last time collected greater than 24 hours")
except:
logger.debug("There was an error with the last collected timestamp"
" file or archives.")
- else:
- logger.debug("Found last collected archive %s in .lastcollected but file does not exist" %
- (last_collected_archive))
-
- logger.debug("Last time collected greater than 24 hours OR less than 24"
- " hours but no archive found.")
-
def collect(self, format="json", options=None, config=None, check_timestamp=True):
"""
returns (str, json): will return a string path to archive, or json facts
|
fix typo: stdio gui with no wallet
same as - should have been included there | @@ -26,7 +26,7 @@ class ElectrumGui(BaseElectrumGui):
BaseElectrumGui.__init__(self, config=config, daemon=daemon, plugins=plugins)
self.network = daemon.network
storage = WalletStorage(config.get_wallet_path())
- if not storage.file_exists:
+ if not storage.file_exists():
print("Wallet not found. try 'electrum create'")
exit()
if storage.is_encrypted():
|
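A tiny illustration of the bug class fixed above, using a stand-in class rather than Electrum's real `WalletStorage`: referencing a method without calling it is always truthy.

```python
class FakeStorage:
    def file_exists(self) -> bool:
        return False  # pretend the wallet file is missing

storage = FakeStorage()

# The buggy form tests the bound method object itself, which is always truthy,
# so `not storage.file_exists` can never trigger the "Wallet not found" branch.
print(bool(storage.file_exists))    # True
print(bool(storage.file_exists()))  # False - the call is required
```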
documentation.py: remove a duplicate entry
TN: minor | @@ -80,7 +80,7 @@ base_langkit_docs = {
Data type for env rebindings. For internal use only.
""",
'langkit.token_kind': """
- Type for individual tokens.
+ Kind for this token.
""",
'langkit.token_type': """
Reference to a token in an analysis unit.
@@ -613,9 +613,6 @@ base_langkit_docs = {
Return 1 if successful.
% endif
""",
- 'langkit.token_kind': """
- Kind for this token.
- """,
'langkit.token_is_trivia': """
Return whether this token is a trivia. If it's not, it's a regular
token.
|
Update ckdtree.pyx
fixing typo in the documentation | @@ -421,7 +421,7 @@ cdef class cKDTree:
point.
The algorithm used is described in Maneewongvatana and Mount 1999.
- The general idea is that the kd-tree is a binary trie, each of whose
+ The general idea is that the kd-tree is a binary tree, each of whose
nodes represents an axis-aligned hyperrectangle. Each node specifies
an axis and splits the set of points based on whether their coordinate
along that axis is greater than or less than a particular value.
|
Add safeguards to our channel fuzzing operations to prevent
different operations from colliding. | @@ -55,6 +55,8 @@ class ChannelBuilder(object):
def __init__(self, levels=3):
self.levels = levels
+ self.modified = set()
+
try:
self.load_data()
except KeyError:
@@ -208,17 +210,22 @@ class ChannelBuilder(object):
def duplicate_resources(self, num_resources):
self.duplicated_resources = []
for i in range(0, num_resources):
+ child = None
+ while child is None or child["id"] in self.modified:
parent = self.recurse_tree_until_leaf_container(self.root_node)
child = random.choice(parent["children"])
duplicate = self.duplicate_resource(child)
self.duplicated_resources.append(duplicate)
parent["children"].append(duplicate)
+ self.modified.add(duplicate["id"])
self.generate_nodes_from_root_node()
def move_resources(self, num_resources):
self.moved_resources = []
self.deleted_resources = []
for i in range(0, num_resources):
+ child = None
+ while child is None or child["id"] in self.modified:
parent = self.recurse_tree_until_leaf_container(self.root_node)
child = random.choice(parent["children"])
moved = self.duplicate_resource(child)
@@ -226,6 +233,7 @@ class ChannelBuilder(object):
self.deleted_resources.append(child)
parent["children"].pop(parent["children"].index(child))
parent["children"].append(moved)
+ self.modified.add(moved["id"])
self.generate_nodes_from_root_node()
def upgrade(self, new_resources=0, updated_resources=0, deleted_resources=0):
@@ -239,19 +247,26 @@ class ChannelBuilder(object):
# To emulate a common occurrence that produces edge cases
# we also update the parent's thumbnail here
self.updated_thumbnails.extend(self.update_thumbnail(parent))
+ self.modified.add(child["id"])
self.updated_resources = []
self.updated_resource_localfiles = []
for i in range(0, updated_resources):
+ child = None
+ while child is None or child["id"] in self.modified:
parent = self.recurse_tree_until_leaf_container(self.root_node)
child = random.choice(parent["children"])
self.updated_resource_localfiles.extend(self.update_resource(child))
self.updated_resources.append(child)
+ self.modified.add(child["id"])
self.deleted_resources = []
for i in range(0, deleted_resources):
+ child = None
+ while child is None or child["id"] in self.modified:
parent = self.recurse_tree_until_leaf_container(self.root_node)
child_index = random.randint(0, len(parent["children"]) - 1)
+ child = parent["children"][child_index]
child = parent["children"].pop(child_index)
self.delete_resource_files(child)
self.deleted_resources.append(child)
|
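A stripped-down sketch of the collision guard the diff repeats in several places, with a module-level set standing in for `self.modified`; it assumes untouched candidates remain, otherwise the loop would spin forever:

```python
import random

modified = set()

def pick_unmodified(candidates):
    """Keep sampling until we draw an item whose id has not been touched yet."""
    child = None
    while child is None or child["id"] in modified:
        child = random.choice(candidates)
    modified.add(child["id"])
    return child

items = [{"id": i} for i in range(5)]
print([pick_unmodified(items)["id"] for _ in range(3)])  # three distinct ids
```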
Moved FHIR API from "Health" to "Test Data" section
Per recommendation, moved this new link to "Test Data" section since data contained by the API is "dummy data" conforming to the FHIR spec. | @@ -363,7 +363,6 @@ API | Description | Auth | HTTPS | Link |
|---|---|---|---|---|
| BetterDoctor | Detailed information about doctors in your area | `apiKey` | Yes | [Go!](https://developer.betterdoctor.com/) |
| Diabetes | Logging and retrieving diabetes information | No | No | [Go!](http://predictbgl.com/api/) |
-| FHIR | Fast Healthcare Interoperability Resources test data | No | Yes | [Go!](http://fhirtest.uhn.ca/home) |
| Flutrack | Influenza-like symptoms with geotracking | No | No | [Go!](http://www.flutrack.org/) |
| Healthcare.gov | Educational content about the US Health Insurance Marketplace | No | Yes | [Go!](https://www.healthcare.gov/developers/) |
| Makeup | Makeup Information | No | No | [Go!](http://makeup-api.herokuapp.com/) |
@@ -594,6 +593,7 @@ API | Description | Auth | HTTPS | Link |
|---|---|---|---|---|
| Adorable Avatars | Generate random cartoon avatars | No | Yes | [Go!](http://avatars.adorable.io) |
| Bacon Ipsum | A Meatier Lorem Ipsum Generator | No | Yes | [Go!](https://baconipsum.com/json-api/) |
+| FHIR | Fast Healthcare Interoperability Resources test data | No | Yes | [Go!](http://fhirtest.uhn.ca/home) |
| Hipster Ipsum | Generates Hipster Ipsum text | No | No | [Go!](http://hipsterjesus.com/) |
| JSONPlaceholder | Fake data for testing and prototyping | No | No | [Go!](http://jsonplaceholder.typicode.com/) |
| Lorem Text | Generates Lorem Ipsum text | `X-Mashape-Key` | Yes | [Go!](https://market.mashape.com/montanaflynn/lorem-text-generator) |
|
config/ie/options : Fix typo with IE_STOMP_VERSION
When previously testing IE_STOMP_VERSION, it happened to be with a patch version of 0, so I didn't notice this typo | @@ -95,7 +95,7 @@ targetAppVersion = None
if int( getOption( "IE_STOMP_VERSION", "0" ) ):
registryVersion = os.environ["GAFFER_COMPATIBILITY_VERSION"] + ".0.0"
- GAFFER_MILESTONE_VERSION, GAFFER_MAJOR_VERSION, GAFFER_MINOR_VERSION, GAFFER_PATH_VERSION = os.environ["GAFFER_VERSION"].rstrip( "dev" ).split( "." )
+ GAFFER_MILESTONE_VERSION, GAFFER_MAJOR_VERSION, GAFFER_MINOR_VERSION, GAFFER_PATCH_VERSION = os.environ["GAFFER_VERSION"].rstrip( "dev" ).split( "." )
else:
registryVersion = gafferRegistryVersion()
|
Adding the UpdateModelMixin to the ReminderViewSet.
This will allow us to edit durations using the PATCH method,
which the bot implements already but which was overlooked
when this viewset was written. | @@ -3,7 +3,8 @@ from rest_framework.filters import SearchFilter
from rest_framework.mixins import (
CreateModelMixin,
DestroyModelMixin,
- ListModelMixin
+ ListModelMixin,
+ UpdateModelMixin
)
from rest_framework.viewsets import GenericViewSet
@@ -11,7 +12,7 @@ from pydis_site.apps.api.models.bot.reminder import Reminder
from pydis_site.apps.api.serializers import ReminderSerializer
-class ReminderViewSet(CreateModelMixin, ListModelMixin, DestroyModelMixin, GenericViewSet):
+class ReminderViewSet(CreateModelMixin, ListModelMixin, DestroyModelMixin, UpdateModelMixin, GenericViewSet):
"""
View providing CRUD access to reminders.
|
Ensure we aren't operating on a closed sqlite3 db w/hooks.
Fixes | @@ -1411,6 +1411,9 @@ def sqlite_get_db_status(conn, flag):
int current, highwater, rc
pysqlite_Connection *c_conn = <pysqlite_Connection *>conn
+ if not c_conn.db:
+ return (None, None)
+
rc = sqlite3_db_status(c_conn.db, flag, ¤t, &highwater, 0)
if rc == SQLITE_OK:
return (current, highwater)
@@ -1440,6 +1443,9 @@ cdef class ConnectionHelper(object):
sqlite3_update_hook(self.conn.db, NULL, NULL)
def set_commit_hook(self, fn):
+ if not self.conn.initialized or not self.conn.db:
+ return
+
self._commit_hook = fn
if fn is None:
sqlite3_commit_hook(self.conn.db, NULL, NULL)
@@ -1447,6 +1453,9 @@ cdef class ConnectionHelper(object):
sqlite3_commit_hook(self.conn.db, _commit_callback, <void *>fn)
def set_rollback_hook(self, fn):
+ if not self.conn.initialized or not self.conn.db:
+ return
+
self._rollback_hook = fn
if fn is None:
sqlite3_rollback_hook(self.conn.db, NULL, NULL)
@@ -1454,6 +1463,9 @@ cdef class ConnectionHelper(object):
sqlite3_rollback_hook(self.conn.db, _rollback_callback, <void *>fn)
def set_update_hook(self, fn):
+ if not self.conn.initialized or not self.conn.db:
+ return
+
self._update_hook = fn
if fn is None:
sqlite3_update_hook(self.conn.db, NULL, NULL)
@@ -1465,17 +1477,23 @@ cdef class ConnectionHelper(object):
Replace the default busy handler with one that introduces some "jitter"
into the amount of time delayed between checks.
"""
+ if not self.conn.initialized or not self.conn.db:
+ return False
+
cdef sqlite3_int64 n = timeout * 1000
sqlite3_busy_handler(self.conn.db, _aggressive_busy_handler, <void *>n)
return True
def changes(self):
+ if self.conn.initialized and self.conn.db:
return sqlite3_changes(self.conn.db)
def last_insert_rowid(self):
+ if self.conn.initialized and self.conn.db:
return <int>sqlite3_last_insert_rowid(self.conn.db)
def autocommit(self):
+ if self.conn.initialized and self.conn.db:
return sqlite3_get_autocommit(self.conn.db) != 0
@@ -1529,6 +1547,9 @@ def backup(src_conn, dest_conn, pages=None, name=None, progress=None):
sqlite3 *dest_db = dest.db
sqlite3_backup *backup
+ if not src_db or not dest_db:
+ raise OperationalError('cannot backup to or from a closed database')
+
# We always backup to the "main" database in the dest db.
backup = sqlite3_backup_init(dest_db, b'main', src_db, bname)
if backup == NULL:
|
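For comparison, the standard-library sqlite3 driver raises a Python-level error in the same situation; the NULL-database guards above exist because the raw `sqlite3_*` C calls have no such safety net:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.close()

try:
    conn.execute("INSERT INTO t VALUES (1)")  # handle is already closed
except sqlite3.ProgrammingError as exc:
    print(exc)  # Cannot operate on a closed database.
```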
Update explainer link(s) help text
This updates the explainer link(s) help text to reflect that in the
document version of the intent templates. | @@ -974,7 +974,7 @@ class FeatureForm(forms.Form):
explainer_links = forms.CharField(label='Explainer link(s)', required=False,
widget=forms.Textarea(attrs={'rows': 4, 'cols': 50, 'maxlength': 500}),
- help_text='Link to explainer(s) (one URL per line). You should have at least an explainer in hand and have discussed the API on a public forum with other browser vendors or standards bodies before sending an Intent to Implement. If your change is not yet at this stage of maturity, feel free to solicit feedback informally on blink-dev instead.')
+ help_text='Link to explainer(s) (one URL per line). You should have at least an explainer in hand and have shared it on a public forum before sending an intent to implement in order to enable discussion with other browser vendors, standards bodies, or other interested parties.')
intent_to_implement_url = forms.URLField(required=False, label='Intent to Implement link',
help_text='Link to the "Intent to Implement" discussion thread.')
|
resources jar is reproducible by default, no need to strip
Remove use of the `[jvm].reproducible_jars` option for resources jars, since they are now deterministic by default.
[ci skip-rust]
[ci skip-build-wheels] | @@ -28,7 +28,6 @@ from pants.jvm.compile import (
FallibleClasspathEntries,
FallibleClasspathEntry,
)
-from pants.jvm.strip_jar.strip_jar import StripJarRequest
from pants.jvm.subsystems import JvmSubsystem
from pants.util.logging import LogLevel
@@ -119,8 +118,6 @@ async def assemble_resources_jar(
)
output_digest = resources_jar_result.output_digest
- if jvm.reproducible_jars:
- output_digest = await Get(Digest, StripJarRequest(output_digest, tuple(output_files)))
cpe = ClasspathEntry(output_digest, output_files, [])
merged_cpe_digest = await Get(
|
Cache interfaces indefinitely
Setting timeout to 0 does not cache at all, should be None instead | @@ -1014,7 +1014,7 @@ class ReaderStudy(UUIDModel, TitleSlugDescriptionModel, ViewContentMixin):
"selected": "",
"selected_image": "",
}
- cache.set(cache_key, values_for_interfaces, timeout=0)
+ cache.set(cache_key, values_for_interfaces, timeout=None)
return values_for_interfaces
@@ -1093,7 +1093,7 @@ class DisplaySet(UUIDModel):
for slug, civ, image in values:
options[slug]["selected"] = civ
options[slug]["selected_image"] = image or ""
- cache.set(cache_key, options, timeout=0)
+ cache.set(cache_key, options, timeout=None)
return options
@cached_property
|
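The distinction the commit relies on, shown with a throwaway locmem cache; this is a hedged sketch, and the real code uses whatever cache backend the site configures:

```python
from django.conf import settings

settings.configure(
    CACHES={"default": {"BACKEND": "django.core.cache.backends.locmem.LocMemCache"}}
)

from django.core.cache import cache  # import after settings are configured

cache.set("gone-immediately", "x", timeout=0)  # 0 means "expire right away"
cache.set("kept-forever", "x", timeout=None)   # None means "never expire"

print(cache.get("gone-immediately"))  # None
print(cache.get("kept-forever"))      # x
```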
test: Fix flaky grid test
Simplified the test case a bit | context('Grid Keyboard Shortcut', () => {
let total_count = 0;
- beforeEach(() => {
- cy.login();
- cy.visit('/app/doctype/User');
- });
before(() => {
cy.login();
- cy.visit('/app/doctype/User');
- return cy.window().its('frappe').then(frappe => {
- frappe.db.count('DocField', {
- filters: {
- 'parent': 'User', 'parentfield': 'fields', 'parenttype': 'DocType'
- }
- }).then((r) => {
- total_count = r;
- });
- });
+ })
+ beforeEach(() => {
+ cy.reload();
+ cy.visit('/app/contact/new-contact-1');
+ cy.get('.frappe-control[data-fieldname="email_ids"]').find(".grid-add-row").click()
});
it('Insert new row at the end', () => {
cy.add_new_row_in_grid('{ctrl}{shift}{downarrow}', (cy, total_count) => {
- cy.get('[data-name="new-docfield-1"]').should('have.attr', 'data-idx', `${total_count+1}`);
+ cy.get('[data-name="new-contact-email-1"]').should('have.attr', 'data-idx', `${total_count+1}`);
}, total_count);
});
it('Insert new row at the top', () => {
cy.add_new_row_in_grid('{ctrl}{shift}{uparrow}', (cy) => {
- cy.get('[data-name="new-docfield-1"]').should('have.attr', 'data-idx', '1');
+ cy.get('[data-name="new-contact-email-1"]').should('have.attr', 'data-idx', '2');
});
});
it('Insert new row below', () => {
cy.add_new_row_in_grid('{ctrl}{downarrow}', (cy) => {
- cy.get('[data-name="new-docfield-1"]').should('have.attr', 'data-idx', '2');
+ cy.get('[data-name="new-contact-email-1"]').should('have.attr', 'data-idx', '1');
});
});
it('Insert new row above', () => {
cy.add_new_row_in_grid('{ctrl}{uparrow}', (cy) => {
- cy.get('[data-name="new-docfield-1"]').should('have.attr', 'data-idx', '1');
+ cy.get('[data-name="new-contact-email-1"]').should('have.attr', 'data-idx', '2');
});
});
});
Cypress.Commands.add('add_new_row_in_grid', (shortcut_keys, callbackFn, total_count) => {
- cy.get('.frappe-control[data-fieldname="fields"]').as('table');
- cy.get('@table').find('.grid-body .col-xs-2').first().click();
- cy.get('@table').find('.grid-body .col-xs-2')
+ cy.get('.frappe-control[data-fieldname="email_ids"]').as('table');
+ cy.get('@table').find('.grid-body [data-fieldname="email_id"]').first().click();
+ cy.get('@table').find('.grid-body [data-fieldname="email_id"]')
.first().type(shortcut_keys);
callbackFn(cy, total_count);
|
Added MLPlan to frameworks.yaml
Changed setup url from local server to public release server. | @@ -68,6 +68,14 @@ AutoWEKA:
version: '2.6'
project: https://www.cs.ubc.ca/labs/beta/Projects/autoweka/
+MLPlanWEKA:
+ version: 'latest'
+ project: https://mlplan.org
+
+MLPlanSKLearn:
+ version: 'latest'
+ project: https://mlplan.org
+
H2OAutoML:
version: '3.30.0.3'
project: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html
|
Add recursive_zip function based on zipfile
The added function can zip files in an arbitrary directory without
the need to switch to that directory, which is important
for parallel use of the function, since it is not possible to change
the current working directory for only a single thread.
import shutil
import time
import math
+import zipfile
from datetime import datetime, timezone
from typing import cast, Dict, Optional, Tuple, List, Type
@@ -581,3 +582,43 @@ class GCP(System):
# @abstractmethod
# def download_metrics(self):
# pass
+
+ """
+ Helper method for recursive_zip
+
+ :param base_directory: path to directory to be zipped
+ :param path: path to file of subdirecotry to be zipped
+ :param archive: ZipFile object
+ """
+ @staticmethod
+ def helper_zip(base_directory : str, path : str, archive : zipfile.ZipFile):
+ paths = os.listdir(path)
+ for p in paths:
+ directory = os.path.join(path, p)
+ if os.path.isdir(directory):
+ GCP.helper_zip(base_directory, directory, archive)
+ else:
+ if(directory != archive.filename): # prevent form including itself
+ archive.write(directory, os.path.relpath(directory, base_directory))
+
+ """
+ https://gist.github.com/felixSchl/d38b455df8bf83a78d3d
+
+ Zip directory with relative paths given an absolute path
+ If the archive exists only new files are added and updated.
+ If the archive does not exist a new one is created.
+
+ :param path: absolute path to the direcotry to be zipped
+ :param archname: path to the zip file
+ """
+ @staticmethod
+ def recursive_zip( directory : str, archname : str):
+ archive = zipfile.ZipFile(archname, "w", zipfile.ZIP_DEFLATED, compresslevel=9)
+ if os.path.isdir(directory):
+ GCP.helper_zip(directory, directory, archive)
+ else:
+ # if the passed direcotry is acually a file we just add the file to the zip archive
+ _, name = os.path.split(directory)
+ archive.write(directory, name)
+ archive.close()
+ return True
|
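For comparison, a standalone sketch of the same idea using `os.walk` instead of recursion: zip a directory with paths relative to it and no `os.chdir`, which keeps it safe to call from multiple threads. The paths are illustrative, not from the project.

```python
import os
import zipfile

def zip_dir(directory: str, archive_path: str) -> None:
    # Walk the tree and store each file relative to `directory`,
    # skipping the archive itself if it happens to live inside the tree.
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
        for root, _dirs, files in os.walk(directory):
            for name in files:
                full_path = os.path.join(root, name)
                if os.path.abspath(full_path) == os.path.abspath(archive_path):
                    continue
                archive.write(full_path, os.path.relpath(full_path, directory))

zip_dir("/tmp/function_code", "/tmp/code.zip")
```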
let's see chromium output inside brozzler-worker
using --trace, because chromium seems to be working ok when we just run
it | @@ -11,7 +11,7 @@ exec nice setuidgid {{user}} \
brozzler-worker \
--rethinkdb-servers={{groups['rethinkdb'] | join(',')}} \
--max-browsers=4 \
- --verbose \
+ --trace \
--warcprox-auto \
>> $logfile 2>&1
|
Add support for OK
Add support for OK | "username_claimed": "blue",
"username_unclaimed": "noonewouldeverusethis7"
},
+ "OK": {
+ "errorType": "message",
+ "errorMsg": "This page does not exist on OK",
+ "rank": 1,
+ "regexCheck": "^[a-zA-Z][a-zA-Z0-9_-.]*$",
+ "url": "https://ok.ru/{}",
+ "urlMain": "https://ok.ru/",
+ "username_claimed": "ok",
+ "username_unclaimed": "noonewouldeverusethis7"
+ },
"Pastebin": {
"errorType": "response_url",
"errorUrl": "https://pastebin.com/index",
|
No need to reinstall `tensorflow-gpu`
The `tensorflow` package on `pip` supports both CPU & GPU. | @@ -12,8 +12,6 @@ COPY --from=nvidia /etc/apt/trusted.gpg /etc/apt/trusted.gpg.d/cuda.gpg
# See b/142337634#comment28
RUN sed -i 's/deb https:\/\/developer.download.nvidia.com/deb http:\/\/developer.download.nvidia.com/' /etc/apt/sources.list.d/*.list
-# Ensure the cuda libraries are compatible with the custom Tensorflow wheels.
-# TODO(b/120050292): Use templating to keep in sync or COPY installed binaries from it.
ENV CUDA_MAJOR_VERSION=11
ENV CUDA_MINOR_VERSION=0
ENV CUDA_VERSION=$CUDA_MAJOR_VERSION.$CUDA_MINOR_VERSION
@@ -82,9 +80,7 @@ RUN pip install jax==0.2.16 jaxlib==0.1.68+cuda$CUDA_MAJOR_VERSION$CUDA_MINOR_VE
/tmp/clean-layer.sh
# Reinstall packages with a separate version for GPU support.
-RUN pip uninstall -y tensorflow && \
- pip install tensorflow-gpu==2.4.1 && \
- pip uninstall -y mxnet && \
+RUN pip uninstall -y mxnet && \
pip install mxnet-cu$CUDA_MAJOR_VERSION$CUDA_MINOR_VERSION && \
/tmp/clean-layer.sh
|
Update CHANGELOG.md
Updated changelog with description of this bugfix | @@ -9,6 +9,7 @@ This project adheres to [Semantic Versioning](http://semver.org/).
- Added some tests for model_utils
- Bug fix
- Fixed implementation of utils routines in model_utils and jro_isr
+ - Fixed error catching bug in model_utils
## [2.0.0] - 2019-07-11
- New Features
|
Weak solution for the tree-shifting issue.
Turns it off if not on Windows.
#
+import os
import sys
import traceback
@@ -1847,7 +1848,6 @@ class RootNode(list):
if node.type == NODE_ELEMENTS_BRANCH:
for n in self.node_elements:
self.tree.SelectItem(n.item, True)
- self.tree.SetFocusedItem(item)
self.gui.request_refresh()
return
elif node.type == NODE_FILE_FILE:
@@ -1856,17 +1856,17 @@ class RootNode(list):
links = self.tree_lookup[id(obj)]
for link in links:
self.tree.SelectItem(link.item, True)
- self.tree.SetFocusedItem(item)
self.gui.request_refresh()
return
elif node.type == NODE_OPERATION_ELEMENT:
obj = node.object
if len(list(self.tree.GetSelections())) != 1:
return # If this is a multi-selection event, do not select other nodeop_elements
+ if os.name != 'nt':
+ return # Windows does this fine, the other ones try to move.
links = self.tree_lookup[id(obj)]
for link in links:
self.tree.SelectItem(link.item, True)
- self.tree.SetFocusedItem(item)
return
for item in list(self.tree.GetSelections()):
node = self.tree.GetItemData(item)
@@ -1876,6 +1876,7 @@ class RootNode(list):
self.selected_operations.append(node.object)
self.gui.request_refresh()
self.selection_updated()
+ event.Allow()
def set_selected_by_position(self, position):
if self.selected_elements is not None:
|
Add additional special entries ...
... which are only created when you use the Parameter Test functionality | @@ -798,6 +798,8 @@ class AboutPanel(wx.Panel):
+ "\n\t* 'op_device' - Device you are burning on"
+ "\n\t* 'op_speed' - Speed of the current operation"
+ "\n\t* 'op_power' - PPI of the current operation"
+ + "\n\t* 'op_dpi' - DPI of the current (raster) operation"
+ + "\n\t* 'op_passes' - Operation passes of the current operation"
)
s += "\n\n" + _(
|
Fix SAC gpu unittest
Summary: Temporary fix gpu unittest by avoiding exporting the model before training. I will make proper fix to exporting logic in a follow-up diff. | @@ -137,8 +137,9 @@ class TestGridworldSAC(unittest.TestCase):
samples=samples,
)
- critic_predictor = self.get_predictor(trainer, environment)
- self.assertGreater(evaluator.evaluate(critic_predictor), 0.15)
+ # FIXME: need to be able to export w/o calling .cpu()
+ # critic_predictor = self.get_predictor(trainer, environment)
+ # self.assertGreater(evaluator.evaluate(critic_predictor), 0.15)
tdps = environment.preprocess_samples(
samples, self.minibatch_size, use_gpu=use_gpu
|
Update all_ccgs.html
"NHS *in* England" & remove reference to mean | <p>Clinical commissioning groups (CCGs) are NHS organisations that organise the delivery of NHS services in England. They are clinically led groups that include all of the GP groups in their geographical area.</p>
-<p>Search for a CCG by name or code, and see how the CCG's GP prescribing compares to the national mean for key prescribing indicators.</p>
+<p>Search for a CCG by name or code, and see how this CCG's GP prescribing compares with its peers across the NHS in England</p>
<input class="form-control" id="search" placeholder="Search by name or code, e.g. Birmingham" />
|
Reminders: show error to users if reminder is in use
Silent failure is confusing to users. Showing an error message clears up
why nothing happened with their command. | @@ -166,7 +166,7 @@ class Reminders(Cog):
log.trace(f"Scheduling new task #{reminder['id']}")
self.schedule_reminder(reminder)
- @mutually_exclusive_arg(NAMESPACE, "reminder", itemgetter("id"))
+ @mutually_exclusive_arg(NAMESPACE, "reminder", itemgetter("id"), raise_error=True)
async def send_reminder(self, reminder: dict, late: relativedelta = None) -> None:
"""Send the reminder."""
is_valid, user, channel = self.ensure_valid_reminder(reminder)
@@ -373,7 +373,7 @@ class Reminders(Cog):
mention_ids = [mention.id for mention in mentions]
await self.edit_reminder(ctx, id_, {"mentions": mention_ids})
- @mutually_exclusive_arg(NAMESPACE, "id_")
+ @mutually_exclusive_arg(NAMESPACE, "id_", raise_error=True)
async def edit_reminder(self, ctx: Context, id_: int, payload: dict) -> None:
"""Edits a reminder with the given payload, then sends a confirmation message."""
reminder = await self._edit_reminder(id_, payload)
@@ -391,7 +391,7 @@ class Reminders(Cog):
await self._reschedule_reminder(reminder)
@remind_group.command("delete", aliases=("remove", "cancel"))
- @mutually_exclusive_arg(NAMESPACE, "id_")
+ @mutually_exclusive_arg(NAMESPACE, "id_", raise_error=True)
async def delete_reminder(self, ctx: Context, id_: int) -> None:
"""Delete one of your active reminders."""
await self.bot.api_client.delete(f"bot/reminders/{id_}")
|
doc: update link to the code of conduct
PR-URL: | ## Code of Conduct
Please read the
-[Code of Conduct](https://github.com/nodejs/TSC/blob/master/CODE_OF_CONDUCT.md)
+[Code of Conduct](https://github.com/nodejs/admin/blob/master/CODE_OF_CONDUCT.md)
which explains the minimum behavior expectations for node-gyp contributors.
<a id="developers-certificate-of-origin"></a>
|
Update README.md
+ symbol must be url-encoded before latex rendering | @@ -135,9 +135,9 @@ A `Road` is composed of a `RoadNetwork` and a list of `Vehicles`. The `RoadNetwo
The vehicles kinematics are represented in the `Vehicle` class by a _Kinematic Bicycle Model_.
-)
+)
-)
+)

|
Avoid using version.all_files (which is cached) in ReviewBase
We're getting some weird IndexError exceptions because all_files
is empty, and I can't reproduce it locally. Using self.files should
be more reliable; the class needs it to sign the files, etc.
add_prefix=False)),
'comments': self.data.get('comments'),
'SITE_URL': settings.SITE_URL,
- 'legacy_addon': (
- not self.version.all_files[0].is_webextension
- if self.version else False)}
+ 'legacy_addon':
+ not self.files[0].is_webextension if self.files else False}
def request_information(self):
"""Send a request for information to the authors."""
|
installDependencies : Clean extraction directory
We were leaving behind the temporary extraction directory which got included in the release archive.
Note that the use of `str( p )` in `shutil.move()` is needed until Python 3.9, where a parsing bug is fixed | #
##########################################################################
-import os
+import pathlib
import sys
import argparse
import hashlib
import subprocess
-import glob
import shutil
if sys.version_info[0] < 3 :
@@ -83,7 +82,7 @@ args = parser.parse_args()
sys.stderr.write( "Downloading dependencies \"%s\"\n" % args.archiveURL )
archiveFileName, headers = urlretrieve( args.archiveURL )
-os.makedirs( args.dependenciesDir )
+pathlib.Path( args.dependenciesDir ).mkdir( parents = True )
if platform != "windows":
subprocess.check_call( [ "tar", "xf", archiveFileName, "-C", args.dependenciesDir, "--strip-components=1" ] )
else:
@@ -93,15 +92,11 @@ else:
)
# 7z (and zip extractors generally) don't have an equivalent of --strip-components=1
# Copy the files up one directory level to compensate
- for p in glob.glob(
- os.path.join(
- args.dependenciesDir.replace( "/", "\\" ),
- os.path.splitext( args.archiveURL.split( "/" )[-1] )[0],
- "*"
- )
- ):
- shutil.move( p, args.dependenciesDir )
+ extractedPath = pathlib.Path( args.dependenciesDir ) / pathlib.Path( args.archiveURL ).stem
+ for p in extractedPath.glob( "*" ) :
+ shutil.move( str( p ), args.dependenciesDir )
+ extractedPath.rmdir()
# Tell the world
|
The recognition model expects grayscale, not BGR planar.
It still worked somewhat previously, as the Blue plane was taken as input | @@ -46,11 +46,6 @@ def to_tensor_result(packet):
for name in [tensor.name for tensor in packet.getRaw().tensors]
}
-
-def to_planar(arr: np.ndarray, shape: tuple) -> list:
- return [val for channel in cv2.resize(arr, shape).transpose(2, 0, 1) for y_col in channel for val in y_col]
-
-
q_prev = device.getOutputQueue("preview")
q_det = device.getOutputQueue("detections")
q_rec_in = device.getInputQueue("in_recognition")
@@ -99,13 +94,23 @@ while True:
if frame is not None:
if in_det is not None:
+ cropped_stacked = None
for point_arr in points:
cv2.polylines(frame, [point_arr], isClosed=True, color=(255, 0, 0), thickness=1, lineType=cv2.LINE_8)
transformed = east.four_point_transform(frame, point_arr)
+ transformed = cv2.cvtColor(transformed, cv2.COLOR_BGR2GRAY)
+ transformed = cv2.resize(transformed, (120, 32), interpolation=cv2.INTER_AREA)
+ transformed = np.ascontiguousarray(transformed)
nn_data = depthai.NNData()
- nn_data.setLayer("Placeholder", to_planar(transformed, (120, 32)))
+ nn_data.setLayer("Placeholder", transformed)
q_rec_in.send(nn_data)
+ if cropped_stacked is None:
+ cropped_stacked = transformed
+ else:
+ cropped_stacked = np.vstack((cropped_stacked, transformed))
+ if cropped_stacked is not None:
+ cv2.imshow('cropped_stacked', cropped_stacked)
cv2.imshow('preview', frame)
|
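The fix boils down to converting the crop to grayscale and resizing it before feeding the recognition network; a minimal sketch of that preprocessing (the 120x32 input size follows the diff, the file name is illustrative):

```python
import cv2
import numpy as np

crop = cv2.imread("text_crop.png")                     # BGR image, shape (H, W, 3)
gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)          # single channel, shape (H, W)
gray = cv2.resize(gray, (120, 32), interpolation=cv2.INTER_AREA)
gray = np.ascontiguousarray(gray)                      # contiguous buffer for the NN input
```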
Added tag formatting to support Spanish POS
Since the Spanish tags are formatted differently, a new and clearer formatting is proposed: the 0's are removed and the tag identifier is converted to upper case.
More details are presented in the referencing pull request.
sentence = []
for tagged_word in tagged_sentence.strip().split():
word_tags = tagged_word.strip().split(self._SEPARATOR)
- sentence.append(("".join(word_tags[:-1]), word_tags[-1]))
+ sentence.append(("".join(word_tags[:-1]), word_tags[-1].replace('0', '').upper()))
tagged_sentences.append(sentence)
return tagged_sentences
|
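The reformatting above amounts to stripping the padding zeros and upper-casing the tag. A tiny illustration (the tag value is a made-up example, not actual tagger output):

```python
tag = "vmis000"
print(tag.replace("0", "").upper())  # -> VMIS
```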
Switch on API rate limiting on production.
Limits POST API calls to send notifications on V1/V2 to 3000 in 60 seconds. | @@ -282,7 +282,7 @@ class Live(Config):
FUNCTIONAL_TEST_PROVIDER_SERVICE_ID = '6c1d81bb-dae2-4ee9-80b0-89a4aae9f649'
FUNCTIONAL_TEST_PROVIDER_SMS_TEMPLATE_ID = 'ba9e1789-a804-40b8-871f-cc60d4c1286f'
PERFORMANCE_PLATFORM_ENABLED = True
- API_RATE_LIMIT_ENABLED = False
+ API_RATE_LIMIT_ENABLED = True
class CloudFoundryConfig(Config):
|
CI Bugfix: change Jenkins node label
CI bugfix commit changing Jenkins to request 8-core nodes for default back-end runs, aiming to fix memory limit issues. | @@ -13,7 +13,7 @@ pipeline {
parallel {
// For each combination of parameters required, build and test
stage('Build and test gcc-4.9 container') {
- agent { dockerfile { label 'azure-linux'
+ agent { dockerfile { label 'azure-linux-8core'
filename 'Dockerfile.jenkins'
additionalBuildArgs "--build-arg gccvers=4.9" } }
environment {
@@ -28,7 +28,7 @@ pipeline {
}
}
stage('Build and test gcc-4.9 OpenMP container') {
- agent { dockerfile { label 'azure-linux'
+ agent { dockerfile { label 'azure-linux-8core'
filename 'Dockerfile.jenkins'
additionalBuildArgs "--build-arg gccvers=4.9" } }
environment {
@@ -46,7 +46,7 @@ pipeline {
}
}
stage('Build and test gcc-5 container') {
- agent { dockerfile { label 'azure-linux'
+ agent { dockerfile { label 'azure-linux-8core'
filename 'Dockerfile.jenkins'
additionalBuildArgs "--build-arg gccvers=5" } }
environment {
|
parsers/transform_code_ada.mako: minor reformatting
TN: | @@ -16,7 +16,7 @@ end if;
if ${parser.pos_var} /= No_Token_Index then
## Create the transform wrapper node
- ${parser.res_var} := (${parser.type.parser_allocator} (Parser.Mem_Pool));
+ ${parser.res_var} := ${parser.type.parser_allocator} (Parser.Mem_Pool);
## Initialize components common to all nodes
Initialize
|
Enhance diagnostic location for missing entity prefix for field access
TN: | @@ -14,7 +14,7 @@ from __future__ import absolute_import, division, print_function
import ast
from collections import defaultdict
-from contextlib import contextmanager
+from contextlib import contextmanager, nested
from distutils.spawn import find_executable
from glob import glob
import inspect
@@ -801,8 +801,14 @@ class CompileCtx(object):
def process_expr(expr):
if isinstance(expr, FieldAccess.Expr):
+ context_mgrs = []
+ if expr.abstract_expr:
+ context_mgrs.append(expr.abstract_expr.diagnostic_context)
+
+ with nested(*context_mgrs):
check_source_language(
- not expr.node_data.uses_entity_info or expr.implicit_deref,
+ not expr.node_data.uses_entity_info
+ or expr.implicit_deref,
'Call to {} must be done on an entity'.format(
expr.node_data.qualname
)
|
Disable changing map size for 32-bit interpreters.
Also, reduced max size for 64-bit systems. Eliminated unused constant. | @@ -29,9 +29,6 @@ import xxhash
# Largest primary key value. No more rows than this
MAX_PK = sys.maxsize
-# Bytes in MAX_PK
-MAX_PK_BYTES = 8 if sys.maxsize > 2**32 else 4
-
# Prefix to indicate that a v is a nonnegative value
NONNEGATIVE_VAL_MARKER = 0
@@ -221,6 +218,11 @@ class Cortex(s_cores_common.Cortex):
Checks if there's enough extra space in the map to accomodate a commit of at least
self._slack_space size and increase it if not.
'''
+ # Don't change map size if 32-bit interpreter. set_mapsize failure will lead to seg fault,
+ # so avoid it altogether
+ if sys.maxsize <= 2**32:
+ return
+
# Figure how how much space the DB is using
used = 4096 * self.dbenv.info()['last_pgno']
@@ -237,9 +239,9 @@ class Cortex(s_cores_common.Cortex):
dbname = dbinfo.get('name')
# Initial DB Size. Must be < 2 GiB for 32-bit. Can be big for 64-bit systems. Will create
- # a file of that size. On MacOS/Windows, will actually immediately take up that much
+ # a file of that size. On Windows, will actually immediately take up that much
# disk space.
- DEFAULT_MAP_SIZE = 256 * 1024 * 1024
+ DEFAULT_MAP_SIZE = 512 * 1024 * 1024
# _write_lock exists solely to hold off other threads' write transactions long enough to
# potentially increase the map size.
@@ -247,7 +249,7 @@ class Cortex(s_cores_common.Cortex):
map_size = self._link[1].get('lmdb:mapsize', DEFAULT_MAP_SIZE)
self._map_size, _ = s_datamodel.getTypeNorm('int', map_size)
- self._max_map_size = 2**48 if sys.maxsize > 2**32 else 2**31
+ self._max_map_size = 2**46 if sys.maxsize > 2**32 else 2**30
map_slack = self._link[1].get('lmdb:mapslack', 2 ** 30)
self._map_slack, _ = s_datamodel.getTypeNorm('int', map_slack)
|
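The 32-bit check used above is just a `sys.maxsize` comparison; a minimal sketch of the idea, with names and constants that mirror the diff but are otherwise illustrative:

```python
import sys

# On 32-bit CPython sys.maxsize is 2**31 - 1, so this distinguishes interpreters.
IS_64_BIT = sys.maxsize > 2**32

DEFAULT_MAP_SIZE = 512 * 1024 * 1024
MAX_MAP_SIZE = 2**46 if IS_64_BIT else 2**30

def maybe_grow_map(current_size: int, needed: int) -> int:
    # On 32-bit interpreters never grow the map; a failed set_mapsize can
    # segfault the process, so it is safer not to touch it at all.
    if not IS_64_BIT:
        return current_size
    return min(max(current_size, needed), MAX_MAP_SIZE)
```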
Bugfix: missing absolute value call
The sum was not including the absolute function call.
"""
- if edge_mass_flow_df.values.sum() != 0:
+ if np.absolute(edge_mass_flow_df.values).sum() != 0:
## change pipe flow directions in the edge_node_df_t according to the flow conditions
change_to_edge_node_matrix_t(edge_mass_flow_df, edge_node_df)
|
Optimization: Store node children as node attributes
* This saves having a dictionary per node with any child,
even just one.
* Memory usage goes down by 6% doing this.
* This will also allow more direct access without going through
accessor functions. | @@ -208,7 +208,9 @@ class NodeBase(NodeMetaClassBase):
"""
parent = self.getParent()
- for key, value in parent.child_values.items():
+ for key in parent.named_children:
+ value = parent.getChild(key)
+
if self is value:
return key
@@ -566,16 +568,14 @@ class ChildrenHavingMixin:
# but of course, might be put to None.
assert set(values.keys()) == set(self.named_children)
- self.child_values = dict(values)
+ for name, value in values.items():
+ if name in self.checkers:
+ value = self.checkers[name](value)
- for key, value in self.child_values.items():
- if key in self.checkers:
- value = self.child_values[key] = self.checkers[key](value)
-
- assert type(value) is not list, key
+ assert type(value) is not list, name
if type(value) is tuple:
- assert None not in value, key
+ assert None not in value, name
for val in value:
val.parent = self
@@ -586,13 +586,16 @@ class ChildrenHavingMixin:
else:
assert False, type(value)
+ attr_name = "subnode_" + name
+ setattr(self, attr_name, value)
+
def setChild(self, name, value):
""" Set a child value.
Do not overload, provider self.checkers instead.
"""
# Only accept legal child names
- assert name in self.child_values, name
+ assert name in self.named_children, name
# Lists as inputs are OK, but turn them into tuples.
if type(value) is list:
@@ -608,26 +611,26 @@ class ChildrenHavingMixin:
elif value is not None:
value.parent = self
+ attr_name = "subnode_" + name
+
# Determine old value, and inform it about loosing its parent.
- old_value = self.child_values[name]
+ old_value = getattr(self, attr_name)
assert old_value is not value, value
- self.child_values[name] = value
+ setattr(self, attr_name, value)
def getChild(self, name):
# Only accept legal child names
- assert name in self.child_values, name
-
- return self.child_values[name]
-
- def hasChild(self, name):
- return name in self.child_values
+ attr_name = "subnode_" + name
+ return getattr(self, attr_name)
@staticmethod
def childGetter(name):
+ attr_name = "subnode_" + name
+
def getter(self):
- return self.getChild(name)
+ return getattr(self, attr_name)
return getter
@@ -639,10 +642,13 @@ class ChildrenHavingMixin:
return setter
def getVisitableNodes(self):
+ # TODO: Consider it a generator would be faster.
result = []
for name in self.named_children:
- value = self.child_values[ name ]
+ attr_name = "subnode_" + name
+
+ value = getattr(self, attr_name)
if value is None:
pass
@@ -677,7 +683,9 @@ class ChildrenHavingMixin:
# Find the replaced node, as an added difficulty, what might be
# happening, is that the old node is an element of a tuple, in which we
# may also remove that element, by setting it to None.
- for key, value in self.child_values.items():
+ for key in self.named_children:
+ value = self.getChild(key)
+
if value is None:
pass
elif type(value) is tuple:
@@ -721,7 +729,9 @@ class ChildrenHavingMixin:
def makeClone(self):
values = {}
- for key, value in self.child_values.items():
+ for key in self.named_children:
+ value = self.getChild(key)
+
assert type(value) is not list, key
if value is None:
|
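A toy illustration of the pattern (not the actual node classes): each named child is stored as a `subnode_<name>` attribute, so no per-node dictionary is allocated and accessors become plain attribute lookups.

```python
class Node:
    named_children = ("left", "right")

    def __init__(self, **values):
        for name in self.named_children:
            # One attribute per child instead of one dict per node.
            setattr(self, "subnode_" + name, values.get(name))

    def getChild(self, name):
        return getattr(self, "subnode_" + name)

    def setChild(self, name, value):
        setattr(self, "subnode_" + name, value)

n = Node(left="a", right="b")
assert n.getChild("left") == "a"
n.setChild("right", "c")
```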
Fix Cisco.IOS.get_portchannel script
HG--
branch : feature/microservices | @@ -33,10 +33,10 @@ class Script(BaseScript):
return []
for i in parse_table(s, allow_wrap=True):
iface = {
- "interface": self.extract_iface(i[1]),
+ "interface": self.extract_iface(i[1].strip()),
"members": []
}
- if (len(i) == 4) and (i[2] == "LACP"):
+ if (len(i) == 4) and (i[2].strip() == "LACP"):
iface["type"] = "L"
else:
iface["type"] = "S"
|
feat: add /anomalies endpoint
proxies the google sheet | @@ -5,6 +5,7 @@ from flask import Blueprint, request
from flask.json import loads, jsonify
from bisect import bisect_right
from sqlalchemy import text
+from pandas import read_csv
from .._common import is_compatibility_mode, db
from .._exceptions import ValidationFailedException, DatabaseErrorException
@@ -32,7 +33,7 @@ from .._validate import (
require_any,
)
from .._db import sql_table_has_columns
-from .._pandas import as_pandas
+from .._pandas import as_pandas, print_pandas
from .covidcast_utils import compute_trend, compute_trends, compute_correlations, compute_trend_value, CovidcastMetaEntry, AllSignalsMap
from ..utils import shift_time_value, date_to_time_value, time_value_to_iso, time_value_to_date
@@ -534,3 +535,18 @@ def handle_coverage():
_handle_lag_issues_as_of(q, None, None, None)
return execute_query(q.query, q.params, fields_string, fields_int, [])
+
+
[email protected]("/anomalies", methods=("GET", "POST"))
+def handle_anomalies():
+ """
+ proxy to the excel sheet about data anomalies
+ """
+
+ signal = parse_source_signal_arg("signal")
+
+ df = read_csv(
+ "https://docs.google.com/spreadsheets/d/e/2PACX-1vToGcf9x5PNJg-eSrxadoR5b-LM2Cqs9UML97587OGrIX0LiQDcU1HL-L2AA8o5avbU7yod106ih0_n/pub?gid=0&single=true&output=csv", skip_blank_lines=True
+ )
+ df = df[df["source"].notnull() & df["published"]]
+ return print_pandas(df)
|
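The endpoint is essentially a pandas filter over the published sheet; a minimal sketch of that filter, reading a local CSV here instead of the Google Sheets URL (column names follow the diff):

```python
import pandas as pd

df = pd.read_csv("anomalies.csv", skip_blank_lines=True)
# Keep rows that have a source and are marked as published.
df = df[df["source"].notnull() & df["published"]]
print(df.to_csv(index=False))
```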
ansible: document and rearrange Runner params.
Move emulate_tty to where it's used. | @@ -91,15 +91,24 @@ class Runner(object):
returned by `run()`.
Subclasses may override `_run`()` and extend `setup()` and `revert()`.
- """
- def __init__(self, module, service_context, emulate_tty=None,
- args=None, env=None):
+
+ :param str module:
+ Name of the module to execute, e.g. "shell"
+ :param mitogen.core.Context service_context:
+ Context to which we should direct FileService calls. For now, always
+ the connection multiplexer process on the controller.
+ :param dict args:
+ Ansible module arguments. A strange mixture of user and internal keys
+ created by ActionBase._execute_module().
+ :param dict env:
+ Additional environment variables to set during the run.
+ """
+ def __init__(self, module, service_context, args=None, env=None):
if args is None:
args = {}
self.module = utf8(module)
self.service_context = service_context
- self.emulate_tty = emulate_tty
self.args = args
self.env = env
@@ -119,6 +128,9 @@ class Runner(object):
self._cleanup_temp()
def _cleanup_temp(self):
+ """
+ Empty temp_dir in time for the next module invocation.
+ """
for name in os.listdir(ansible_mitogen.target.temp_dir):
if name in ('.', '..'):
continue
@@ -202,9 +214,21 @@ class NewStyleStdio(object):
class ProgramRunner(Runner):
- def __init__(self, path, **kwargs):
+ """
+ Base class for runners that run external programs.
+
+ :param str path:
+ Absolute path to the program file on the master, as it can be retrieved
+ via :class:`ansible_mitogen.services.FileService`.
+ :param bool emulate_tty:
+ If :data:`True`, execute the program with `stdout` and `stderr` merged
+ into a single pipe, emulating Ansible behaviour when an SSH TTY is in
+ use.
+ """
+ def __init__(self, path, emulate_tty=None, **kwargs):
super(ProgramRunner, self).__init__(**kwargs)
- self.path = path
+ self.emulate_tty = emulate_tty
+ self.path = utf8(path)
def setup(self):
super(ProgramRunner, self).setup()
|
Fix: the matching for packages to not recompile was broken
* It was working on the basename of the module, not seeing the fullname, and
therefore never matched. | @@ -341,7 +341,7 @@ existing '%s' extension module by default. Candidates were: %s <-> %s."""
)
-def _findModuleInPath2(module_name, search_path):
+def _findModuleInPath2(package_name, module_name, search_path):
"""This is out own module finding low level implementation.
Just the full module name and search path are given. This is then
@@ -436,7 +436,13 @@ def _findModuleInPath2(module_name, search_path):
# On case sensitive systems, no resolution needed.
if case_sensitive:
- _reportCandidates(module_name, candidates[0], candidates)
+ _reportCandidates(
+ module_name=package_name.getChildNamed(module_name)
+ if package_name is not None
+ else module_name,
+ candidate=candidates[0],
+ candidates=candidates,
+ )
return candidates[0].full_path
else:
for candidate in candidates:
@@ -549,7 +555,7 @@ def _findModuleInPath(module_name):
try:
module_filename = _findModuleInPath2(
- module_name=module_name, search_path=search_path
+ package_name=package_name, module_name=module_name, search_path=search_path
)
except SyntaxError:
# Warn user, as this is kind of unusual.
|
Assert that our Sphinx is >= v1.8.
Before that, `:type: Foo` would not link to the Foo class. | @@ -16,11 +16,19 @@ import logging
import sys
import os
import re
+import sys
+
+import sphinx
logging.basicConfig()
+if sphinx.version_info < (1, 8):
+ print("Sphinx {} is too old; we require >= 1.8.".format(sphinx.__version__), file=sys.stderr)
+ exit(1)
+
+
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
Fix CI cache
Fix condition for downloading the fixtures
Use `.circleci/config.yml` checksum for cache id
Add restore path | @@ -111,7 +111,7 @@ eth2_fixtures: ð2_fixtures
when: on_fail
- restore_cache:
keys:
- - cache-v3-{{ arch }}-{{ .Environment.CIRCLE_JOB }}-{{ checksum "setup.py" }}-{{ checksum "tox.ini" }}
+ - cache-v3-{{ arch }}-{{ .Environment.CIRCLE_JOB }}-{{ checksum "setup.py" }}-{{ checksum "tox.ini" }}-{{ checksum ".circleci/config.yml" }}
- run:
name: install libsnappy-dev
command: sudo apt install -y libsnappy-dev
@@ -121,8 +121,10 @@ eth2_fixtures: ð2_fixtures
- run:
name: download the required yaml files
command: |
- wget -c https://github.com/hwwhww/eth2.0-spec-tests/releases/download/v0.8.1b2/archive.tar.gz
+ if [ ! -d "./eth2-fixtures/tests" ]; then
+ wget -c https://github.com/hwwhww/eth2.0-spec-tests/releases/download/v0.8.1b/archive.tar.gz
tar zxvf archive.tar.gz -C ./eth2-fixtures
+ fi
- run:
name: install dependencies
command: pip install --user tox
@@ -137,7 +139,8 @@ eth2_fixtures: ð2_fixtures
- ~/.local
- ./eggs
- .pytest_cache/v/eth2/bls/key-cache
- key: cache-v3-{{ arch }}-{{ .Environment.CIRCLE_JOB }}-{{ checksum "setup.py" }}-{{ checksum "tox.ini" }}
+ - ./eth2-fixtures
+ key: cache-v3-{{ arch }}-{{ .Environment.CIRCLE_JOB }}-{{ checksum "setup.py" }}-{{ checksum "tox.ini" }}-{{ checksum ".circleci/config.yml" }}
jobs:
py36-lint:
|
Update local executor failure details
Make them similar to the SWF ones. | import collections
import logging
+import sys
+import traceback
from simpleflow import (
exceptions,
@@ -11,7 +13,7 @@ from simpleflow.marker import Marker
from simpleflow.signal import WaitForSignal
from simpleflow.task import ActivityTask, WorkflowTask, SignalTask, MarkerTask
from simpleflow.activity import Activity
-from simpleflow.utils import format_exc
+from simpleflow.utils import format_exc, json_dumps, issubclass_
from simpleflow.workflow import Workflow
from swf.models.history import builder
from simpleflow.history import History
@@ -90,12 +92,26 @@ class Executor(executor.Executor):
if hasattr(task, 'post_execute'):
task.post_execute()
state = 'completed'
- except Exception as err:
- future._exception = err
- logger.exception('rescuing exception: {}'.format(err))
- if isinstance(func, Activity) and func.raises_on_failure:
- message = format_exc(err)
- raise exceptions.TaskFailed(func.name, message)
+ except Exception:
+ exc_type, exc_value, exc_traceback = sys.exc_info()
+ future._exception = exc_value
+ logger.exception('rescuing exception: {}'.format(exc_value))
+ if (isinstance(func, Activity) or issubclass_(func, Workflow)) and getattr(func, 'raises_on_failure', None):
+ tb = traceback.format_tb(exc_traceback)
+ message = format_exc(exc_value)
+ details = json_dumps(
+ {
+ 'error': exc_type.__name__,
+ 'message': str(exc_value),
+ 'traceback': tb,
+ },
+ default=repr
+ )
+ raise exceptions.TaskFailed(
+ func.name,
+ message,
+ details,
+ )
state = 'failed'
finally:
future._state = futures.FINISHED
|
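A small self-contained sketch of building a failure `details` payload from the active exception, in the spirit of the diff above (the triggering error and names are illustrative):

```python
import json
import sys
import traceback

try:
    1 / 0
except Exception:
    exc_type, exc_value, exc_traceback = sys.exc_info()
    details = json.dumps(
        {
            "error": exc_type.__name__,
            "message": str(exc_value),
            "traceback": traceback.format_tb(exc_traceback),
        },
        default=repr,
    )
    print(details)
```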
fix non-existent vcf file
Authored by: Vicente | @@ -57,8 +57,8 @@ if(any(sa$aux1 == F)){
}
#' Check for nonexistent VCF files
-if('DNA_VCF_FILE' %in% colnames(sa)){
- sa[, aux1 := file.exists(DNA_VCF_FILE) | is.na(DNA_ID)]
+if(! all(sa[,is.na(DNA_VCF_FILE)])){
+ sa[, aux1 := file.exists(DNA_VCF_FILE)]
if(any(sa$aux1 == F)){
print('The following VCF files do not exist: ')
DT::datatable(sa[aux1 == F])
|
Partial Metadata fix
Added partial fix for kp meta data. There's a bug that needs to be fixed. | @@ -103,7 +103,7 @@ def load(fnames, tag=None, sat_id=None):
"""
from pysat.utils import parse_date
-
+ meta = pysat.Meta()
if tag == '':
# Kp data stored monthly, need to return data daily
# the daily date is attached to filename
@@ -148,20 +148,26 @@ def load(fnames, tag=None, sat_id=None):
flag = np.array([x[1] for x in s])
ind, = np.where(flag == '+')
- first[ind] += 1./3.
+ first[ind] += 1.0 / 3.0
ind, = np.where(flag == '-')
- first[ind] -= 1./3.
+ first[ind] -= 1.0 / 3.0
result = pds.DataFrame(first, columns=['Kp'], index=s.index)
+ fill_val = np.nan
elif tag == 'forecast':
# load forecast data
result = pds.read_csv(fnames[0], index_col=0, parse_dates=True)
-
+ fill_val = -1
elif tag == 'recent':
# load recent Kp data
result = pds.read_csv(fnames[0], index_col=0, parse_dates=True)
+ fill_val = -1
- return result, pysat.Meta()
+ # Initalize the meta data
+ for kk in result.keys():
+ initialize_kp_metadata(meta, kk, fill_val)
+
+ return result, meta
def list_files(tag=None, sat_id=None, data_path=None, format_str=None):
"""Return a Pandas Series of every file for chosen satellite data
@@ -439,4 +445,33 @@ def filter_geoquiet(sat, maxKp=None, filterTime=None, kpData=None,
return
+def initialize_kp_metadata(meta, data_key, fill_val=-1):
+ """ Initialize the Kp meta data using our knowledge of the index
+
+ Parameters
+ ----------
+ meta : (pysat._meta.Meta)
+ Pysat metadata
+ data_key : (str)
+ String denoting the data key
+ fill_val : (int or float)
+ File-specific fill value (default=-1)
+ Returns
+ -------
+ Void
+
+ Updates metadata
+
+ """
+
+ data_label = data_key.replace("_", " ")
+
+ meta[data_key] = {meta.units_label: '', meta.name_label: data_key,
+ meta.desc_label: data_label ,
+ meta.plot_label: data_label.capitalize(),
+ meta.axis_label: data_label.capitalize(),
+ meta.scale_label: 'linear', meta.min_label: 0,
+ meta.max_label: 9, meta.fill_label: fill_val}
+
+ return
|
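The '+'/'-' flag handling above converts letter-graded Kp values into thirds; a tiny numpy illustration with made-up values:

```python
import numpy as np

first = np.array([3.0, 3.0, 3.0])
flag = np.array(["+", "o", "-"])

first[np.where(flag == "+")] += 1.0 / 3.0
first[np.where(flag == "-")] -= 1.0 / 3.0
# -> [3.333..., 3.0, 2.666...]
```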
Change log.error to log.exception
See issue | @@ -78,13 +78,12 @@ class Cogs:
try:
self.bot.load_extension(full_cog)
except ImportError:
- log.error(f"{ctx.author} requested we load the '{cog}' cog, "
+ log.exception(f"{ctx.author} requested we load the '{cog}' cog, "
f"but the cog module {full_cog} could not be found!")
embed.description = f"Invalid cog: {cog}\n\nCould not find cog module {full_cog}"
except Exception as e:
- log.error(f"{ctx.author} requested we load the '{cog}' cog, "
- "but the loading failed with the following error: \n"
- f"{e}")
+ log.exception(f"{ctx.author} requested we load the '{cog}' cog, "
+ "but the loading failed")
embed.description = f"Failed to load cog: {cog}\n\n```{e}```"
else:
log.debug(f"{ctx.author} requested we load the '{cog}' cog. Cog loaded!")
@@ -134,9 +133,8 @@ class Cogs:
try:
self.bot.unload_extension(full_cog)
except Exception as e:
- log.error(f"{ctx.author} requested we unload the '{cog}' cog, "
- "but the unloading failed with the following error: \n"
- f"{e}")
+ log.exception(f"{ctx.author} requested we unload the '{cog}' cog, "
+ "but the unloading failed")
embed.description = f"Failed to unload cog: {cog}\n\n```{e}```"
else:
log.debug(f"{ctx.author} requested we unload the '{cog}' cog. Cog unloaded!")
@@ -239,9 +237,8 @@ class Cogs:
self.bot.unload_extension(full_cog)
self.bot.load_extension(full_cog)
except Exception as e:
- log.error(f"{ctx.author} requested we reload the '{cog}' cog, "
- "but the unloading failed with the following error: \n"
- f"{e}")
+ log.exception(f"{ctx.author} requested we reload the '{cog}' cog, "
+ "but the unloading failed")
embed.description = f"Failed to reload cog: {cog}\n\n```{e}```"
else:
log.debug(f"{ctx.author} requested we reload the '{cog}' cog. Cog reloaded!")
|
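`Logger.exception()` logs at ERROR level and appends the active traceback automatically, which is why the messages above no longer interpolate the exception text by hand. A minimal standard-library sketch:

```python
import logging

logging.basicConfig()
log = logging.getLogger(__name__)

try:
    raise ImportError("cog module not found")  # illustrative failure
except Exception:
    # Logs the message plus the full traceback of the active exception.
    log.exception("Requested cog could not be loaded")
```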
Composition: optimize _update_processing_graph
- reduces unnecessary iterations | @@ -2353,46 +2353,39 @@ class Composition(Composition_Base, metaclass=ComponentsMeta):
self._graph_processing = self.graph.copy()
- visited_vertices = set()
- next_vertices = [] # a queue
-
- unvisited_vertices = True
-
- while unvisited_vertices:
- for vertex in self._graph_processing.vertices:
- if vertex not in visited_vertices:
- next_vertices.append(vertex)
- break
- else:
- unvisited_vertices = False
-
- logger.debug('processing graph vertices: {0}'.format(self._graph_processing.vertices))
- while len(next_vertices) > 0:
- cur_vertex = next_vertices.pop(0)
- logger.debug('Examining vertex {0}'.format(cur_vertex))
-
- # must check that cur_vertex is not already visited because in cycles,
- # some nodes may be added to next_vertices twice
- if cur_vertex not in visited_vertices and not cur_vertex.component.is_processing:
- for parent in cur_vertex.parents:
- parent.children.remove(cur_vertex)
- for child in cur_vertex.children:
- child.parents.remove(cur_vertex)
- if cur_vertex.feedback:
+ def remove_vertex(vertex):
+ logger.debug('Removing', vertex)
+ for parent in vertex.parents:
+ parent.children.remove(vertex)
+ for child in vertex.children:
+ child.parents.remove(vertex)
+ if vertex.feedback:
child.backward_sources.add(parent.component)
self._graph_processing.connect_vertices(parent, child)
+ # ensure that children get removed even if vertex has no parents
+ if len(vertex.parents) == 0:
+ for child in vertex.children:
+ child.parents.remove(vertex)
+ if vertex.feedback:
+ child.backward_sources.add(parent.component)
for node in cur_vertex.parents + cur_vertex.children:
- logger.debug('New parents for vertex {0}: \n\t{1}\nchildren: \n\t{2}'.format(node, node.parents,
- node.children))
+ logger.debug(
+ 'New parents for vertex {0}: \n\t{1}\nchildren: \n\t{2}'.format(
+ node, node.parents, node.children
+ )
+ )
+
logger.debug('Removing vertex {0}'.format(cur_vertex))
- self._graph_processing.remove_vertex(cur_vertex)
+ self._graph_processing.remove_vertex(vertex)
- visited_vertices.add(cur_vertex)
- # add to next_vertices (frontier) any parents and children of cur_vertex that have not been visited yet
- next_vertices.extend(
- [vertex for vertex in cur_vertex.parents + cur_vertex.children if vertex not in visited_vertices])
+ # copy to avoid iteration problems when deleting
+ vert_list = self._graph_processing.vertices.copy()
+ for cur_vertex in vert_list:
+ logger.debug('Examining', cur_vertex)
+ if not cur_vertex.component.is_processing:
+ remove_vertex(cur_vertex)
self.needs_update_graph_processing = False
|
Patches the old-object unpickling to be compatible w/older GateStrings too.
Version 0.9.6 GateStrings have a ._str member but even older versions
of this object have a .str member instead. This commit updates the
special GateString.__setstate__ used when old-object-unpickling
is enabled to work with both versions. | @@ -22,7 +22,8 @@ def enable_old_object_unpickling():
replacement_obj = _circuit.Circuit.__new__(_circuit.Circuit)
return replacement_obj
def GateString_setstate(self,state):
- c = _objs.Circuit(state['_tup'], stringrep=state['_str'])
+ s = state['_str'] if '_str' in state else state['str']
+ c = _objs.Circuit(state['_tup'], stringrep=s)
self.__dict__.update(c.__dict__)
class dummy_CompressedGateString(object):
|
SystemCommand : Move `hash()` before `execute()`
Sticking to convention makes the implementations easier to verify. | @@ -52,6 +52,16 @@ class SystemCommand( GafferDispatch.TaskNode ) :
self["substitutions"] = Gaffer.CompoundDataPlug()
self["environmentVariables"] = Gaffer.CompoundDataPlug()
+ def hash( self, context ) :
+
+ h = GafferDispatch.TaskNode.hash( self, context )
+
+ self["command"].hash( h )
+ self["substitutions"].hash( h )
+ self["environmentVariables"].hash( h )
+
+ return h
+
def execute( self ) :
substitutions = IECore.CompoundData()
@@ -69,14 +79,4 @@ class SystemCommand( GafferDispatch.TaskNode ) :
subprocess.check_call( command, shell = True, env = env )
- def hash( self, context ) :
-
- h = GafferDispatch.TaskNode.hash( self, context )
-
- self["command"].hash( h )
- self["substitutions"].hash( h )
- self["environmentVariables"].hash( h )
-
- return h
-
IECore.registerRunTimeTyped( SystemCommand, typeName = "GafferDispatch::SystemCommand" )
|
added servicenow-get-computer command
* added servicenow-get-computer command
servicenow-get-computer: query the cmdb_ci_computer table with a computer code
returns: 'sys_id', 'u_code', 'support_group.value', 'os', 'comments'
* added outputs
return {'ContentsFormat': formats['json'], 'Type': entryTypes['note'], 'Contents': res, "HumanReadable": md, "EntryContext": ec};
};
+ var SNGetComputer = function (computerName) {
+ var ticket_type = "cmdb_ci_computer";
+ if (computerName) {
+ var path = "table/" + ticket_type + encodeToURLQuery({ sysparm_query: "u_code=" + computerName });
+
+ res = sendRequest('GET', path);
+
+ var md = '## ServiceNow Computer\n';
+ var ec = { ServiceNowComputer: [] };
+
+ var headerTran = function (headerName) {
+ var headers = {
+ sys_id: 'Id',
+ u_code: 'u_code (computer name)',
+ support_group: 'support group',
+ os: 'operating system',
+ comments: 'comments'
+ }
+ if (headers[headerName]) {
+ return headers[headerName];
+ } else {
+ return headerName;
+ }
+ };
+
+ if (res.obj.result[0].u_code === computerName) {
+ md += tableToMarkdown('ServiceNow Computer', res.obj.result[0], ['sys_id', 'u_code', 'support_group','os','comments'], '', headerTran);
+ ec.ServiceNowComputer.push({
+ sys_id: res.obj.result[0].sys_id,
+ u_code: res.obj.result[0].u_code,
+ support_group: res.obj.result[0].support_group.value,
+ os: res.obj.result[0].os,
+ comments: res.obj.result[0].comments
+ });
+ }
+
+ return { 'ContentsFormat': formats['json'], 'Type': entryTypes['note'], 'Contents': res, "HumanReadable": md, "EntryContext": ec };
+ } else {
+ throw 'incident-get-computer requires a computerName (snow field u_code)';
+ }
+ };
if (!params.ticket_type) {
params.ticket_type = 'incident';
}
@@ -601,6 +642,8 @@ script:
return SNUploadFile();
case 'servicenow-get-groups':
return SNGetGroups(args.name);
+ case 'servicenow-get-computer':
+ return SNGetComputer(args.computerName);
case 'fetch-incidents':
return fetchIncidents();
case 'test-module':
@@ -1544,4 +1587,22 @@ script:
required: true
description: Servicenow group name
description: Return information about specific servicenow group
+ - name: servicenow-get-computer
+ arguments:
+ - name: computerName
+ required: true
+ description: machine name
+ outputs:
+ - contextPath: ServiceNowComputer.sys_id
+ description: Id
+ - contextPath: ServiceNowComputer.u_code
+ description: Code
+ - contextPath: ServiceNowComputer.support_group
+ description: Support group
+ - contextPath: ServiceNowComputer.os
+ description: Operation System
+ - contextPath: ServiceNowComputer.comments
+ description: Comments
+ description: query the cmdb_ci_computer table with a computer code
isfetch: true
+releaseNotes: "Add servicenow-get-computer command"
\ No newline at end of file
|
Prepare Changelog for Automation
This PR prepares the changelog to be automatically updated during releases.
Authors:
- AJ Schmidt (@ajschmidt8)
Approvers:
- Dante Gama Dessavre (@dantegd)
URL: | -# cuML 0.18.0 (Date TBD)
+# 0.18.0
-## New Features
-
-## Improvements
-
-## Bug Fixes
-- PR #3279: Correct pure virtual declaration in manifold_inputs_t
+Please see https://github.com/rapidsai/cuml/releases/tag/branch-0.18-latest for the latest changes to this development branch.
-# cuML 0.17.0 (Date TBD)
+# cuML 0.17.0 (10 Dec 2020)
## New Features
- PR #3164: Expose silhouette score in Python
|
Fix CoC link in training request form
This commit/PR fixes | @@ -36,11 +36,7 @@ we'd be happy to help.</p>
<p>Please note that as a condition of taking this training:</p>
<ul>
- <li>You are required to abide by our Code of Conduct, which can be found at
- <a href="http://software-carpentry.org/conduct/">http://software-carpentry.org/conduct/</a>
- and
- <a href="http://datacarpentry.org/code-of-conduct/">http://datacarpentry.org/code-of-conduct/</a>.
- </li>
+ <li>You are required to abide by our <a href="https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html" target="_blank">Code of Conduct</a>.</li>
<li>You must complete short follow-up tasks after the course in order to
complete certification. The tasks are described at
|
Hints: Make import tracer robust against imports from "print" function.
* Also be more Python3 compatible by using proper level default for that
version. | @@ -61,7 +61,9 @@ def _moduleRepr(module):
def enableImportTracing(normalize_paths = True, show_source = False):
def _ourimport(name, globals = None, locals = None, fromlist = None, # @ReservedAssignment
- level = -1):
+ level = -1 if sys.version_info[0] < 2 else 0):
+ builtins.__import__ = original_import
+
global _indentation
try:
_indentation += 1
@@ -85,8 +87,13 @@ def enableImportTracing(normalize_paths = True, show_source = False):
print(_indentation * " " + "*" * 40)
+ builtins.__import__ = _ourimport
result = original_import(name, globals, locals, fromlist, level)
+ builtins.__import__ = original_import
print(_indentation * " " + "RESULT:", _moduleRepr(result))
+ print(_indentation * " " + "*" * 40)
+ builtins.__import__ = _ourimport
+
return result
finally:
_indentation -= 1
|
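A minimal standalone sketch of the technique (not the project's actual tracer): wrap `builtins.__import__`, and swap the original back in around any work done inside the wrapper, such as printing, so imports triggered by that work don't re-enter the tracer.

```python
import builtins

original_import = builtins.__import__

def traced_import(name, globals=None, locals=None, fromlist=(), level=0):
    builtins.__import__ = original_import   # disarm while we trace
    try:
        print("importing:", name)
        return original_import(name, globals, locals, fromlist, level)
    finally:
        builtins.__import__ = traced_import  # re-arm the tracer

builtins.__import__ = traced_import
import json  # traced: prints "importing: json"
```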
Remove extra "Hello," in developer notification after version update
Also move the contact link to the same line as the text it relates to | {% extends "reviewers/emails/base.ltxt" %}{% block content %}
-Hello,
-
Your add-on {{ name }} has been updated on addons.mozilla.org (AMO). Version {{ number }} is now available for download in our gallery at {{ addon_url }} .
This version has been screened and approved for the public. Keep in mind that other reviewers may look into this version in the future and determine that it requires changes or should be taken down. In that case, you will be notified again with details and next steps.
|
Update reports/California.md
Committing suggested changes removing officer names. I apologize - didn't mean to cause any issues!
### LAPD officer beats multiple protesters that are filming them during a protest in Beverley Hills | May 30th
-An officer, who was identified as Officer Hwang by an eye-witness who filmed the incident on their chest mounted GoPro camera, is seen beating an individual in a black jacket who was filming the officers. After being beat by a baton, the individual in the black jacket falls, and the officer continues to beat them. The eye-witness runs over to shield and protect the individual from the officer, and was beaten by the officer from behind while helping the fallen individual escape. At the end of the video, they are hit by a less-than-lethal projectile.
+An officer is seen beating an individual in a black jacket who was filming the officers. After being beat by a baton, the individual in the black jacket falls, and the officer continues to beat them. The eye-witness runs over to shield and protect the individual from the officer, and was beaten by the officer from behind while helping the fallen individual escape. At the end of the video, they are hit by a less-than-lethal projectile.
**Links**
@@ -350,4 +350,3 @@ In this video, an armored law enforcement vehicle tells protestors over megaphon
**Links**
* https://www.theguardian.com/us-news/2020/jun/04/vallejo-police-kill-unarmed-man-california
-
|