id (int64, 11-59.9k) | original (string, length 33-150k) | modified (string, length 37-150k) |
---|---|---|
8,055 |
def toavro(table, target, schema=None, sample=9,
codec='deflate', compression_level=None, **avro_args):
"""
Write the table into a new avro file according to the passed schema.
This method assumes that each column has values of the same type
for all rows of the source `table`.
`Apache Avro`_ is a data
serialization framework. It is used for data serialization (especially in
the Hadoop ecosystem), for data exchange with databases (e.g. Redshift) and
in RPC protocols (like Kafka). It has libraries for many languages and is
generally faster and safer than text formats like JSON, XML or CSV.
The `target` argument is the file path for creating the avro file.
Note that if a file already exists at the given location, it will be
overwritten.
The `schema` argument (dict) defines the field structure of the file's rows.
Check fastavro `documentation`_ and Avro schema `reference`_ for details.
The `sample` argument (int, optional) defines how many rows are inspected
for discovering the field types and building a schema for the avro file
when the `schema` argument is not passed.
The `codec` argument (string, optional) sets the compression codec used to
shrink data in the file. It can be 'null', 'deflate' (default), 'bzip2', or
'snappy', 'zstandard', 'lz4', 'xz' (if installed).
The `compression_level` argument (int, optional) sets the level of
compression to use with the specified codec (if the codec supports it).
Additionally, extra options can be passed via the `**avro_args` argument;
they are forwarded directly to fastavro. Check the
fastavro `documentation`_ for reference.
The avro file format preserves type information, i.e., reading and writing
is round-trippable for tables with non-string data values. However the
conversion from Python value types to avro fields is not perfect. Use the
`schema` argument to define the proper types for the conversion.
The following avro types are supported by the schema: null, boolean,
string, int, long, float, double, bytes, fixed, enum,
:ref:`array <array_schema>`, map, union, record, and recursive types
defined in :ref:`complex schemas <complex_schema>`.
Also :ref:`logical types <logical_schema>` are supported and translated to
corresponding Python types: long timestamp-millis, long timestamp-micros, int date,
bytes decimal, fixed decimal, string uuid, int time-millis, long time-micros.
Example usage for writing files::
>>> # set up an Avro file to demonstrate with
>>> table2 = [['name', 'friends', 'age'],
... ['Bob', 42, 33],
... ['Jim', 13, 69],
... ['Joe', 86, 17],
... ['Ted', 23, 51]]
...
>>> schema2 = {
... 'doc': 'Some people records.',
... 'name': 'People',
... 'namespace': 'test',
... 'type': 'record',
... 'fields': [
... {'name': 'name', 'type': 'string'},
... {'name': 'friends', 'type': 'int'},
... {'name': 'age', 'type': 'int'},
... ]
... }
...
>>> # now demonstrate writing with toavro()
>>> import petl as etl
>>> etl.toavro(table2, 'example-file-to-write.avro', schema=schema2)
...
>>> # this was what was saved above
>>> tbl2 = etl.fromavro('example-file-to-write.avro')
>>> tbl2
+-------+---------+-----+
| name | friends | age |
+=======+=========+=====+
| 'Bob' | 42 | 33 |
+-------+---------+-----+
| 'Jim' | 13 | 69 |
+-------+---------+-----+
| 'Joe' | 86 | 17 |
+-------+---------+-----+
| 'Ted' | 23 | 51 |
+-------+---------+-----+
.. versionadded:: 1.3.1
.. _Apache Avro: https://avro.apache.org/docs/current/spec.html
.. _reference: https://avro.apache.org/docs/current/spec.html#schemas
.. _documentation : https://fastavro.readthedocs.io/en/latest/writer.html
"""
target2 = write_source_from_arg(target)
_write_toavro(table,
target=target2,
mode='wb',
schema=schema,
sample=sample,
codec=codec,
compression_level=compression_level,
**avro_args)
|
def toavro(table, target, schema=None, sample=9,
codec='deflate', compression_level=None, **avro_args):
"""
Write the table into a new avro file according to the passed schema.
This method assumes that each column has values of the same type
for all rows of the source `table`.
`Apache Avro`_ is a data
serialization framework. It is used for data serialization (especially in
the Hadoop ecosystem), for data exchange with databases (e.g. Redshift) and
in RPC protocols (like Kafka). It has libraries for many languages and is
generally faster and safer than text formats like JSON, XML or CSV.
The `target` argument is the file path for creating the avro file.
Note that if a file already exists at the given location, it will be
overwritten.
The `schema` argument (dict) defines the field structure of the file's rows.
Check fastavro `documentation`_ and Avro schema `reference`_ for details.
The `sample` argument (int, optional) defines how many rows are inspected
for discovering the field types and building a schema for the avro file
when the `schema` argument is not passed.
The `codec` argument (string, optional) sets the compression codec used to
shrink data in the file. It can be 'null', 'deflate' (default), 'bzip2', or
'snappy', 'zstandard', 'lz4', 'xz' (if installed).
The `compression_level` argument (int, optional) sets the level of
compression to use with the specified codec (if the codec supports it).
Additionally, extra options can be passed via the `**avro_args` argument;
they are forwarded directly to fastavro. Check the
fastavro `documentation`_ for reference.
The avro file format preserves type information, i.e., reading and writing
is round-trippable for tables with non-string data values. However the
conversion from Python value types to avro fields is not perfect. Use the
`schema` argument to define the proper types for the conversion.
The following avro types are supported by the schema: null, boolean,
string, int, long, float, double, bytes, fixed, enum,
:ref:`array <array_schema>`, map, union, record, and recursive types
defined in :ref:`complex schemas <complex_schema>`.
Also :ref:`logical types <logical_schema>` are supported and translated to
corresponding Python types: long timestamp-millis, long timestamp-micros, int date,
bytes decimal, fixed decimal, string uuid, int time-millis, long time-micros.
Example usage for writing files::
>>> # set up an Avro file to demonstrate with
>>> table2 = [['name', 'friends', 'age'],
... ['Bob', 42, 33],
... ['Jim', 13, 69],
... ['Joe', 86, 17],
... ['Ted', 23, 51]]
...
>>> schema2 = {
... 'doc': 'Some people records.',
... 'name': 'People',
... 'namespace': 'test',
... 'type': 'record',
... 'fields': [
... {'name': 'name', 'type': 'string'},
... {'name': 'friends', 'type': 'int'},
... {'name': 'age', 'type': 'int'},
... ]
... }
...
>>> # now demonstrate writing with toavro()
>>> import petl as etl
>>> etl.toavro(table2, 'example-file-to-write.avro', schema=schema2)
...
>>> # this was what was saved above
>>> tbl2 = etl.fromavro('example-file-to-write.avro')
>>> tbl2
+-------+---------+-----+
| name | friends | age |
+=======+=========+=====+
| 'Bob' | 42 | 33 |
+-------+---------+-----+
| 'Jim' | 13 | 69 |
+-------+---------+-----+
| 'Joe' | 86 | 17 |
+-------+---------+-----+
| 'Ted' | 23 | 51 |
+-------+---------+-----+
.. versionadded:: 1.4.0
.. _Apache Avro: https://avro.apache.org/docs/current/spec.html
.. _reference: https://avro.apache.org/docs/current/spec.html#schemas
.. _documentation : https://fastavro.readthedocs.io/en/latest/writer.html
"""
target2 = write_source_from_arg(target)
_write_toavro(table,
target=target2,
mode='wb',
schema=schema,
sample=sample,
codec=codec,
compression_level=compression_level,
**avro_args)
|
58,693 |
def validate_yaml_schema(yaml_file_content: Text, schema_path: Text) -> None:
"""
Validate yaml content.
Args:
yaml_file_content: the content of the yaml file to be validated
schema_path: the schema of the yaml file
"""
from pykwalify.core import Core
from pykwalify.errors import SchemaError
from ruamel.yaml import YAMLError
import pkg_resources
import logging
log = logging.getLogger("pykwalify")
log.setLevel(logging.CRITICAL)
try:
source_data = rasa.shared.utils.io.read_yaml(yaml_file_content)
except YAMLError:
raise InvalidYamlFileError(
"The provided yaml file is invalid. You can use "
"http://www.yamllint.com/ to validate the yaml syntax "
"of your file."
)
except DuplicateKeyError as e:
raise InvalidYamlFileError(
"The provided yaml file contains a duplicated key: '{}'. You can use "
"http://www.yamllint.com/ to validate the yaml syntax "
"of your file.".format(str(e))
)
schema_file = pkg_resources.resource_filename(PACKAGE_NAME, schema_path)
schema_utils_file = pkg_resources.resource_filename(
PACKAGE_NAME, RESPONSES_SCHEMA_FILE
)
schema_extensions = pkg_resources.resource_filename(
PACKAGE_NAME, SCHEMA_EXTENSIONS_FILE
)
c = Core(
source_data=source_data,
schema_files=[schema_file, schema_utils_file],
extensions=[schema_extensions],
)
try:
c.validate(raise_exception=True)
except SchemaError:
raise InvalidYamlFileError(
"Please make sure the file is correct and all "
"mandatory parameters are specified. Please take a look at the errors "
"found during validation",
c.errors,
content=source_data,
)
|
def validate_yaml_schema(yaml_file_content: Text, schema_path: Text) -> None:
"""
Validate yaml content.
Args:
yaml_file_content: the content of the yaml file to be validated
schema_path: the schema of the yaml file
"""
from pykwalify.core import Core
from pykwalify.errors import SchemaError
from ruamel.yaml import YAMLError
import pkg_resources
import logging
log = logging.getLogger("pykwalify")
log.setLevel(logging.CRITICAL)
try:
source_data = rasa.shared.utils.io.read_yaml(yaml_file_content)
except YAMLError:
raise InvalidYamlFileError(
"The provided yaml file is invalid. You can use "
"http://www.yamllint.com/ to validate the yaml syntax "
"of your file."
)
except DuplicateKeyError as e:
raise InvalidYamlFileError(
"The provided yaml file contains a duplicated key: '{}'. You can use "
"http://www.yamllint.com/ to validate the yaml syntax "
"of your file.".format(str(e))
)
schema_file = pkg_resources.resource_filename(PACKAGE_NAME, schema_path)
schema_utils_file = pkg_resources.resource_filename(
PACKAGE_NAME, RESPONSES_SCHEMA_FILE
)
schema_extensions = pkg_resources.resource_filename(
PACKAGE_NAME, SCHEMA_EXTENSIONS_FILE
)
c = Core(
source_data=source_data,
schema_files=[schema_file, schema_utils_file],
extensions=[schema_extensions],
)
try:
c.validate(raise_exception=True)
except SchemaError:
raise InvalidYamlFileError(
"Please make sure the file is correct and all "
"mandatory parameters are specified. Here are the errors "
"found during validation",
c.errors,
content=source_data,
)
|
35,035 |
def parse_configs(input_configs):
"""Parse configuration values set via command line.
Parameters
----------
input_configs: list of str
list of configurations provided via command line.
Returns
-------
pass_context_configs: dict
a dict containing key-value configs to be used in the PassContext.
"""
all_configs = tvm.ir.transform.PassContext.list_configs()
supported_config_types = ("IntImm", "runtime.String")
supported_configs = [
name for name in all_configs.keys() if all_configs[name]["type"] in supported_config_types
]
pass_context_configs = {}
if not input_configs:
return {}
for config in input_configs:
if len(config) == 0:
raise TVMCException(
f"Invalid format for configuration '{config}', use <config>=<value>"
)
# Each config is expected to be provided as "name=value"
try:
name, value = config.split("=")
name = name.strip()
value = value.strip()
except ValueError:
raise TVMCException(
f"Invalid format for configuration '{config}', use <config>=<value>"
)
if name not in all_configs:
raise TVMCException(
f"Configuration '{name}' is not defined in TVM. "
f"These are the existing configurations: {', '.join(all_configs)}"
)
if name not in supported_configs:
raise TVMCException(
f"Configuration '{name}' is not supported in TVMC. "
f"The following configurations are supported: {', '.join(supported_configs)}"
)
parsed_value = set_config_value(name, value, all_configs[name]["type"])
pass_context_configs[name] = parsed_value
return pass_context_configs
|
def parse_configs(input_configs):
"""Parse configuration values set via command line.
Parameters
----------
input_configs: list of str
list of configurations provided via command line.
Returns
-------
pass_context_configs: dict
a dict containing key-value configs to be used in the PassContext.
"""
all_configs = tvm.ir.transform.PassContext.list_configs()
supported_config_types = ("IntImm", "runtime.String")
supported_configs = [
name for name in all_configs.keys() if all_configs[name]["type"] in supported_config_types
]
pass_context_configs = {}
if not input_configs:
return {}
for config in input_configs:
if not config:
raise TVMCException(
f"Invalid format for configuration '{config}', use <config>=<value>"
)
# Each config is expected to be provided as "name=value"
try:
name, value = config.split("=")
name = name.strip()
value = value.strip()
except ValueError:
raise TVMCException(
f"Invalid format for configuration '{config}', use <config>=<value>"
)
if name not in all_configs:
raise TVMCException(
f"Configuration '{name}' is not defined in TVM. "
f"These are the existing configurations: {', '.join(all_configs)}"
)
if name not in supported_configs:
raise TVMCException(
f"Configuration '{name}' is not supported in TVMC. "
f"The following configurations are supported: {', '.join(supported_configs)}"
)
parsed_value = set_config_value(name, value, all_configs[name]["type"])
pass_context_configs[name] = parsed_value
return pass_context_configs
|
523 |
def domain_subset(domains, key):
if len(key) not in (1, 2):
raise ValueError(f"invalid length (must be 1 or 2 hex digits): {key}")
key = (key * 2) if len(key) == 1 else key
min = int(key[0], 16)
max = int(key[1], 16)
subset = []
for name in domains:
index = int(md5(name.encode("utf-8")).hexdigest()[0], 16)
if min <= index and index <= max:
subset.append(name)
return subset
|
def domain_subset(domains, key):
if len(key) not in (1, 2):
raise ValueError(f"invalid length (must be 1 or 2 hex digits): {key}")
key = (key * 2) if len(key) == 1 else key
min = int(key[0], 16)
max = int(key[1], 16)
subset = []
for name in domains:
index = int(md5(name.encode("utf-8")).hexdigest()[0], 16)
if min <= index <= max:
subset.append(name)
return subset
|
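The pair above partitions a domain list by the first hex digit of each name's MD5 hash. A minimal, self-contained sketch of how that bucketing might be exercised follows; the sample domain names and the two-digit key are invented for illustration, and the helper is re-declared so the snippet runs on its own.

```python
from hashlib import md5


def domain_subset(domains, key):
    # Keep only the domains whose first MD5 hex digit falls inside [key[0], key[1]].
    if len(key) not in (1, 2):
        raise ValueError(f"invalid length (must be 1 or 2 hex digits): {key}")
    key = (key * 2) if len(key) == 1 else key
    lo, hi = int(key[0], 16), int(key[1], 16)
    return [name for name in domains
            if lo <= int(md5(name.encode("utf-8")).hexdigest()[0], 16) <= hi]


# Hypothetical input: keep the slice of domains hashing into digits 0-3,
# i.e. roughly the first quarter of the 16-bucket hash space.
domains = ["example.com", "example.org", "example.net", "wikipedia.org"]
print(domain_subset(domains, "03"))
```

Because the MD5 digit is close to uniform, a key such as "03" selects roughly a quarter of the names, which is presumably the sharding use case.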
13,994 |
def spike_contrast(spiketrains, t_start=None, t_stop=None,
min_bin=10 * pq.ms, bin_shrink_factor=0.9,
return_trace=False):
"""
Calculates the synchrony of spike trains, according to
:cite:`synchrony-Ciba18_136`. The spike trains can have different lengths.
Original implementation by: Philipp Steigerwald [[email protected]]
Visualization is covered in
:func:`viziphant.spike_train_synchrony.plot_spike_contrast`.
Parameters
----------
spiketrains : list of neo.SpikeTrain
A list of input spike trains to calculate the synchrony from.
t_start : pq.Quantity, optional
The beginning of the spike train. If None, it's taken as the minimum
value of `t_start's` of the input spike trains.
Default: None
t_stop : pq.Quantity, optional
The end of the spike train. If None, it's taken as the maximum value
of `t_stop` of the input spike trains.
Default: None
min_bin : pq.Quantity, optional
Sets the minimum value for the `bin_min` that is calculated by the
algorithm and defines the smallest bin size to compute the histogram
of the input `spiketrains`.
Default: 0.01 ms
bin_shrink_factor : float, optional
A multiplier to shrink the bin size on each iteration. The value must
be in range `(0, 1)`.
Default: 0.9
return_trace : bool, optional
If set to True, returns a history of spike-contrast synchrony, computed
for a range of different bin sizes, along with the maximum value of
the synchrony.
Default: False
Returns
-------
synchrony : float
Returns the synchrony of the input spike trains.
spike_contrast_trace : namedtuple
If `return_trace` is set to True, a `SpikeContrastTrace` namedtuple is
returned with the following attributes:
`.contrast` - the average sum of differences of the number of spikes
in subsequent bins;
`.active_spiketrains` - the average number of spikes per bin,
weighted by the number of spike trains containing at least one spike
inside the bin;
`.synchrony` - the product of `contrast` and `active_spiketrains`;
`.bin_size` - the X axis, a list of bin sizes that correspond to
these traces.
Raises
------
ValueError
If `bin_shrink_factor` is not in (0, 1) range.
If the input spike trains consist of a single spiketrain.
If all input spike trains contain no more than 1 spike.
TypeError
If the input spike trains are not given as a list of `neo.SpikeTrain` objects.
If `t_start`, `t_stop`, or `min_bin` are not time quantities.
Examples
--------
>>> import quantities as pq
>>> from elephant.spike_train_generation import homogeneous_poisson_process
>>> from elephant.spike_train_synchrony import spike_contrast
>>> spiketrain_1 = homogeneous_poisson_process(rate=20*pq.Hz,
... t_stop=1000*pq.ms)
>>> spiketrain_2 = homogeneous_poisson_process(rate=20*pq.Hz,
... t_stop=1000*pq.ms)
>>> spike_contrast([spiketrain_1, spiketrain_2])
0.4192546583850932
"""
if not 0. < bin_shrink_factor < 1.:
raise ValueError(f"'bin_shrink_factor' ({bin_shrink_factor}) must be "
"in range (0, 1).")
if not len(spiketrains) > 1:
raise ValueError("Spike contrast measure requires more than 1 input "
"spiketrain.")
check_same_units(spiketrains, object_type=neo.SpikeTrain)
if not is_time_quantity(t_start, t_stop, allow_none=True):
raise TypeError("'t_start' and 't_stop' must be time quantities.")
if not is_time_quantity(min_bin):
raise TypeError("'min_bin' must be a time quantity.")
if t_start is None:
t_start = min(st.t_start for st in spiketrains)
if t_stop is None:
t_stop = max(st.t_stop for st in spiketrains)
# convert everything to spiketrain units
units = spiketrains[0].units
spiketrains = [st.magnitude for st in spiketrains]
t_start = t_start.rescale(units).item()
t_stop = t_stop.rescale(units).item()
min_bin = min_bin.rescale(units).item()
spiketrains = [times[(times >= t_start) & (times <= t_stop)]
for times in spiketrains]
n_spiketrains = len(spiketrains)
n_spikes_total = sum(map(len, spiketrains))
duration = t_stop - t_start
bin_max = duration / 2
try:
isi_min = min(np.diff(st).min() for st in spiketrains if len(st) > 1)
except TypeError:
raise ValueError("All input spiketrains contain no more than 1 spike.")
bin_min = max(isi_min / 2, min_bin)
contrast_list = []
active_spiketrains = []
synchrony_curve = []
# Set new time boundaries
t_start = t_start - isi_min
t_stop = t_stop + isi_min
bin_sizes = []
bin_size = bin_max
while bin_size >= bin_min:
bin_sizes.append(bin_size)
# Calculate Theta and n
theta_k, n_k = _get_theta_and_n_per_bin(spiketrains,
t_start=t_start,
t_stop=t_stop,
bin_size=bin_size)
# calculate synchrony_curve = contrast * active_st
active_st = (np.sum(n_k * theta_k) / np.sum(theta_k) - 1) / (
n_spiketrains - 1)
contrast = np.sum(np.abs(np.diff(theta_k))) / (2 * n_spikes_total)
# Contrast: sum(|derivation|) / (2*#Spikes)
synchrony = contrast * active_st
contrast_list.append(contrast)
active_spiketrains.append(active_st)
synchrony_curve.append(synchrony)
# New bin size
bin_size *= bin_shrink_factor
# Sync value is maximum of the cost function C
synchrony = max(synchrony_curve)
if return_trace:
spike_contrast_trace = SpikeContrastTrace(
contrast=contrast_list,
active_spiketrains=active_spiketrains,
synchrony=synchrony_curve,
bin_size=bin_sizes * units,
)
return synchrony, spike_contrast_trace
return synchrony
|
def spike_contrast(spiketrains, t_start=None, t_stop=None,
min_bin=10 * pq.ms, bin_shrink_factor=0.9,
return_trace=False):
"""
Calculates the synchrony of spike trains, according to
:cite:`synchrony-Ciba18_136`. The spike trains can have different lengths.
Original implementation by: Philipp Steigerwald [[email protected]]
Visualization is covered in
:func:`viziphant.spike_train_synchrony.plot_spike_contrast`.
Parameters
----------
spiketrains : list of neo.SpikeTrain
A list of input spike trains to calculate the synchrony from.
t_start : pq.Quantity, optional
The beginning of the spike train. If None, it's taken as the minimum
value of `t_start` values of the input spike trains.
Default: None
t_stop : pq.Quantity, optional
The end of the spike train. If None, it's taken as the maximum value
of `t_stop` of the input spike trains.
Default: None
min_bin : pq.Quantity, optional
Sets the minimum value for the `bin_min` that is calculated by the
algorithm and defines the smallest bin size to compute the histogram
of the input `spiketrains`.
Default: 0.01 ms
bin_shrink_factor : float, optional
A multiplier to shrink the bin size on each iteration. The value must
be in range `(0, 1)`.
Default: 0.9
return_trace : bool, optional
If set to True, returns a history of spike-contrast synchrony, computed
for a range of different bin sizes, along with the maximum value of
the synchrony.
Default: False
Returns
-------
synchrony : float
Returns the synchrony of the input spike trains.
spike_contrast_trace : namedtuple
If `return_trace` is set to True, a `SpikeContrastTrace` namedtuple is
returned with the following attributes:
`.contrast` - the average sum of differences of the number of spikes
in subsequent bins;
`.active_spiketrains` - the average number of spikes per bin,
weighted by the number of spike trains containing at least one spike
inside the bin;
`.synchrony` - the product of `contrast` and `active_spiketrains`;
`.bin_size` - the X axis, a list of bin sizes that correspond to
these traces.
Raises
------
ValueError
If `bin_shrink_factor` is not in (0, 1) range.
If the input spike trains consist of a single spiketrain.
If all input spike trains contain no more than 1 spike.
TypeError
If the input spike trains are not given as a list of `neo.SpikeTrain` objects.
If `t_start`, `t_stop`, or `min_bin` are not time quantities.
Examples
--------
>>> import quantities as pq
>>> from elephant.spike_train_generation import homogeneous_poisson_process
>>> from elephant.spike_train_synchrony import spike_contrast
>>> spiketrain_1 = homogeneous_poisson_process(rate=20*pq.Hz,
... t_stop=1000*pq.ms)
>>> spiketrain_2 = homogeneous_poisson_process(rate=20*pq.Hz,
... t_stop=1000*pq.ms)
>>> spike_contrast([spiketrain_1, spiketrain_2])
0.4192546583850932
"""
if not 0. < bin_shrink_factor < 1.:
raise ValueError(f"'bin_shrink_factor' ({bin_shrink_factor}) must be "
"in range (0, 1).")
if not len(spiketrains) > 1:
raise ValueError("Spike contrast measure requires more than 1 input "
"spiketrain.")
check_same_units(spiketrains, object_type=neo.SpikeTrain)
if not is_time_quantity(t_start, t_stop, allow_none=True):
raise TypeError("'t_start' and 't_stop' must be time quantities.")
if not is_time_quantity(min_bin):
raise TypeError("'min_bin' must be a time quantity.")
if t_start is None:
t_start = min(st.t_start for st in spiketrains)
if t_stop is None:
t_stop = max(st.t_stop for st in spiketrains)
# convert everything to spiketrain units
units = spiketrains[0].units
spiketrains = [st.magnitude for st in spiketrains]
t_start = t_start.rescale(units).item()
t_stop = t_stop.rescale(units).item()
min_bin = min_bin.rescale(units).item()
spiketrains = [times[(times >= t_start) & (times <= t_stop)]
for times in spiketrains]
n_spiketrains = len(spiketrains)
n_spikes_total = sum(map(len, spiketrains))
duration = t_stop - t_start
bin_max = duration / 2
try:
isi_min = min(np.diff(st).min() for st in spiketrains if len(st) > 1)
except TypeError:
raise ValueError("All input spiketrains contain no more than 1 spike.")
bin_min = max(isi_min / 2, min_bin)
contrast_list = []
active_spiketrains = []
synchrony_curve = []
# Set new time boundaries
t_start = t_start - isi_min
t_stop = t_stop + isi_min
bin_sizes = []
bin_size = bin_max
while bin_size >= bin_min:
bin_sizes.append(bin_size)
# Calculate Theta and n
theta_k, n_k = _get_theta_and_n_per_bin(spiketrains,
t_start=t_start,
t_stop=t_stop,
bin_size=bin_size)
# calculate synchrony_curve = contrast * active_st
active_st = (np.sum(n_k * theta_k) / np.sum(theta_k) - 1) / (
n_spiketrains - 1)
contrast = np.sum(np.abs(np.diff(theta_k))) / (2 * n_spikes_total)
# Contrast: sum(|derivation|) / (2*#Spikes)
synchrony = contrast * active_st
contrast_list.append(contrast)
active_spiketrains.append(active_st)
synchrony_curve.append(synchrony)
# New bin size
bin_size *= bin_shrink_factor
# Sync value is maximum of the cost function C
synchrony = max(synchrony_curve)
if return_trace:
spike_contrast_trace = SpikeContrastTrace(
contrast=contrast_list,
active_spiketrains=active_spiketrains,
synchrony=synchrony_curve,
bin_size=bin_sizes * units,
)
return synchrony, spike_contrast_trace
return synchrony
|
28,020 |
def collect_ctu_involved_files(result_handler, source_analyzer, output_dir):
"""
This function collects the list of source files involved in CTU analysis.
The list of files is written to output_dir.
"""
if source_analyzer.ANALYZER_NAME != ClangSA.ANALYZER_NAME:
return
involved_files = set()
involved_files.update(source_analyzer.get_analyzer_mentioned_files(
result_handler.analyzer_stdout))
involved_files.update(source_analyzer.get_analyzer_mentioned_files(
result_handler.analyzer_stderr))
if involved_files:
out = os.path.join(output_dir, result_handler.analyzer_action_str)
with open(out, 'w') as f:
f.write('\n'.join(involved_files))
|
def collect_ctu_involved_files(result_handler, source_analyzer, output_dir):
"""
This function collects the list of source files involved in CTU analysis.
The list of files is written to output_dir.
"""
if source_analyzer.ANALYZER_NAME != ClangSA.ANALYZER_NAME:
return
involved_files = set()
involved_files.update(source_analyzer.get_analyzer_mentioned_files(
result_handler.analyzer_stdout))
involved_files.update(source_analyzer.get_analyzer_mentioned_files(
result_handler.analyzer_stderr))
if involved_files:
out = os.path.join(output_dir, result_handler.analyzer_action_str)
with open(out, 'w', encoding='utf-8', errors='ignore') as f:
f.write('\n'.join(involved_files))
|
9,051 |
def rate_user(
rate: int,
message: typing.Optional[str] = None,
) -> typing.Callable:
"""Decorate a function to be rate-limited for a user.
:param rate: seconds between permitted calls of this function by the same
user
:param message: optional; message sent as a notice when a user hits the limit
This decorator can be used alone or with the :func:`rate` decorator, as it
will always take precedence::
@rate(10, 10, 10)
@rate_user(20, 'You hit your rate limit for this function.')
# user limit will be set to 20, other to 10
# will send a NOTICE only when a user hits their own limit
# as other rate limit don't have any message set
If you don't provide a message, the default message set (if any) by
:func:`rate` will be used instead.
.. versionadded:: 8.0
"""
def add_attribute(function):
function.rate = rate
function.user_rate_message = message
return function
return add_attribute
|
def rate_user(
rate: int,
message: typing.Optional[str] = None,
) -> typing.Callable:
"""Decorate a function to be rate-limited for a user.
:param rate: seconds between permitted calls of this function by the same
user
:param message: optional; message sent as a notice when a user hits the limit
This decorator can be used alone or with the :func:`rate` decorator, as it
will always take precedence::
@rate(10, 10, 10)
@rate_user(20, 'You hit your rate limit for this function.')
# user limit will be set to 20, other to 10
# will send a NOTICE only when a user hits their own limit
# as other rate limits don't have any message set
If you don't provide a message, the default message set (if any) by
:func:`rate` will be used instead.
.. versionadded:: 8.0
"""
def add_attribute(function):
function.rate = rate
function.user_rate_message = message
return function
return add_attribute
|
48,672 |
def test_get_new_command():
assert get_new_command(Command('docker build -t artifactory:9090/foo/bar:fdb7c6d .', '')) == shell.and_('docker login', 'docker build -t artifactory:9090/foo/bar:fdb7c6d .')
assert get_new_command(Command('docker push artifactory:9090/foo/bar:fdb7c6d', '')) == shell.and_('docker login', 'docker push artifactory:9090/foo/bar:fdb7c6d')
|
def test_get_new_command():
assert get_new_command(Command('docker build -t artifactory:9090/foo/bar:fdb7c6d .', '')) == 'docker login && docker build -t artifactory:9090/foo/bar:fdb7c6d .'
assert get_new_command(Command('docker push artifactory:9090/foo/bar:fdb7c6d', '')) == 'docker login && docker push artifactory:9090/foo/bar:fdb7c6d'
|
11,701 |
def parametrize(tests, arity=None):
'''Helper for parametrizing pytest tests.
Expect a list of lambdas, one per test. Each lambda must return
the parameters for its respective test.
Test identifiers will be automatically generated, from the test
number and its lambda definition line (1.10, 2.12, 3.20, ...).
If arity is None, the arguments being parametrized will be automatically
set from the function last arguments, according to the numbers of
parameters for each test.
'''
ids = []
argvalues = []
for n, t in enumerate(tests):
line = inspect.getsourcelines(t)[1]
ids.append('%u:%u' % (n+1, line))
argvalues.append(t())
if arity is None:
arity = len(argvalues[0])
assert arity > 0
def decorator(fn):
argnames = list(
parameter.name
for parameter in inspect.signature(fn).parameters.values()
if parameter.default is inspect.Parameter.empty
)[-arity:]
if arity == 1:
argnames = argnames[0]
return pytest.mark.parametrize(argnames, argvalues, ids=ids)(fn)
return decorator
|
def parametrize(tests, arity=None):
'''Helper for parametrizing pytest tests.
Expect a list of lambdas, one per test. Each lambda must return
the parameters for its respective test.
Test identifiers will be automatically generated, from the test
number and its lambda definition line (1.10, 2.12, 3.20, ...).
If arity is None, the arguments being parametrized will be automatically
set from the function's last arguments, according to the numbers of
parameters for each test.
'''
ids = []
argvalues = []
for n, t in enumerate(tests):
line = inspect.getsourcelines(t)[1]
ids.append('%u:%u' % (n+1, line))
argvalues.append(t())
if arity is None:
arity = len(argvalues[0])
assert arity > 0
def decorator(fn):
argnames = list(
parameter.name
for parameter in inspect.signature(fn).parameters.values()
if parameter.default is inspect.Parameter.empty
)[-arity:]
if arity == 1:
argnames = argnames[0]
return pytest.mark.parametrize(argnames, argvalues, ids=ids)(fn)
return decorator
|
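Since the `parametrize` docstring describes the behaviour without showing a call site, here is a hedged usage sketch. The import path and the arithmetic test cases are invented for illustration; the only real dependency is pytest's public `pytest.mark.parametrize`, which the decorator delegates to.

```python
from conftest import parametrize  # hypothetical: wherever the helper above lives


@parametrize([
    lambda: (1, 2, 3),     # id will look like "1:<lambda line>"
    lambda: (2, 3, 5),     # id will look like "2:<lambda line>"
    lambda: (10, -4, 6),   # id will look like "3:<lambda line>"
])
def test_add(a, b, expected):
    # arity defaults to the length of the first tuple (3), so the last three
    # default-less parameters -- a, b, expected -- receive each lambda's values
    assert a + b == expected
```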
34,494 |
def convert(args: argparse.Namespace):
output = Path(args.output[0])
if not os.path.exists(output):
print_error_and_exit(
f"The output path {output} doesn't exist. Please make sure to specify "
f"existing directory and try again."
)
return
for training_data_path in args.training_data:
if not os.path.exists(training_data_path):
print_error_and_exit(
f"The training data path {training_data_path} doesn't exist "
f"and will be skipped."
)
loop = asyncio.get_event_loop()
num_of_files_converted = 0
for file in os.listdir(training_data_path):
source_path = Path(training_data_path) / file
output_path = Path(output) / f"{source_path.stem}{CONVERTED_FILE_POSTFIX}"
if MarkdownReader.is_markdown_nlu_file(source_path):
convert_nlu(source_path, output_path, source_path)
num_of_files_converted += 1
elif MarkdownStoryReader.is_markdown_story_file(source_path):
loop.run_until_complete(
convert_core(source_path, output_path, source_path)
)
num_of_files_converted += 1
else:
print_warning(
f"Skipped file '{source_path}' since it's neither NLU "
"nor Core training data file."
)
print_info(f"Converted {num_of_files_converted} files, saved in '{output}'")
|
def convert(args: argparse.Namespace):
output = Path(args.output[0])
if not os.path.exists(output):
print_error_and_exit(
f"The output path {output} doesn't exist. Please make sure to specify "
f"existing directory and try again."
)
return
for training_data_path in args.training_data:
if not os.path.exists(training_data_path):
print_error_and_exit(
f"The training data path {training_data_path} doesn't exist "
f"and will be skipped."
)
loop = asyncio.get_event_loop()
num_of_files_converted = 0
for file in os.listdir(training_data_path):
source_path = Path(training_data_path) / file
output_path = Path(output) / f"{source_path.stem}{CONVERTED_FILE_POSTFIX}"
if MarkdownReader.is_markdown_nlu_file(source_path):
convert_nlu(source_path, output_path, source_path)
num_of_files_converted += 1
elif MarkdownStoryReader.is_markdown_story_file(source_path):
loop.run_until_complete(
convert_core(source_path, output_path, source_path)
)
num_of_files_converted += 1
else:
print_warning(
f"Skipped file '{source_path}' since it's neither NLU "
"nor Core training data file."
)
print_info(f"Converted {num_of_files_converted} file(s), saved in '{output}'.")
|
57,866 |
def decode_str_using_chardet(decrypted_text):
chardet_detection = chardet.detect(decrypted_text)
encoding = chardet_detection.get('encoding', 'utf-8') or 'utf-8'
try:
# Trying to decode using the detected encoding
demisto.debug(f"Going to decode decrypted text using {encoding} encoding")
out = decrypted_text.decode(encoding)
except UnicodeDecodeError:
# In case the detected encoding fails apply the default encoding
demisto.info(f'Could not decode dile using detected encoding:{encoding}, retrying '
f'using utf-8.\n')
out = decrypted_text.decode('utf-8')
return out
|
def decode_str_using_chardet(decrypted_text):
chardet_detection = chardet.detect(decrypted_text)
encoding = chardet_detection.get('encoding', 'utf-8') or 'utf-8'
try:
# Trying to decode using the detected encoding
demisto.debug(f"Going to decode decrypted text using {encoding} encoding")
out = decrypted_text.decode(encoding)
except UnicodeDecodeError:
# In case the detected encoding fails apply the default encoding
demisto.info(f'Could not decode file using detected encoding:{encoding}, retrying '
f'using utf-8.\n')
out = decrypted_text.decode('utf-8')
return out
|
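The pair above illustrates a generic detect-then-fallback decoding pattern with `chardet`. Below is a small standalone sketch of the same idea, without the `demisto` logging; the function name and the latin-1 sample payload are invented for illustration.

```python
import chardet


def decode_with_fallback(raw: bytes, fallback: str = "utf-8") -> str:
    # Ask chardet for an encoding guess; fall back to utf-8 if it returns
    # nothing or if decoding with the guess fails.
    guess = chardet.detect(raw).get("encoding") or fallback
    try:
        return raw.decode(guess)
    except (UnicodeDecodeError, LookupError):
        return raw.decode(fallback, errors="replace")


# Hypothetical payload that is valid latin-1 but not valid utf-8
print(decode_with_fallback("café".encode("latin-1")))
```

Unlike the code in the pair, the final fallback uses errors='replace' so the sketch never raises; that is a deliberate simplification.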
41,040 |
def _entrate_sp(x, sm_window):
"""
Calculate the entropy rate of a stationary Gaussian random process using
spectrum estimation with smoothing window.
Parameters
----------
x :
sm_window :
Returns
-------
out :
"""
n = x.shape
# Normalize x_sb to be unit variance
x_std = np.std(np.reshape(x, (np.prod(n), 1)))
if x_std < 1e-10:
x_std = 1e-10
x = x / x_std
if (sm_window == 1):
M = [int(i) for i in np.ceil(np.array(n) / 10)]
if (x.ndim >= 3):
parzen_w_3 = np.zeros((2 * n[2] - 1, ))
parzen_w_3[(n[2] - M[2] - 1):(n[2] +
M[2])] = _parzen_win(2 * M[2] + 1)
if (x.ndim >= 2):
parzen_w_2 = np.zeros((2 * n[1] - 1, ))
parzen_w_2[(n[1] - M[1] - 1):(n[1] +
M[1])] = _parzen_win(2 * M[1] + 1)
if (x.ndim >= 1):
parzen_w_1 = np.zeros((2 * n[0] - 1, ))
parzen_w_1[(n[0] - M[0] - 1):(n[0] +
M[0])] = _parzen_win(2 * M[0] + 1)
if x.ndim == 2 and min(n) == 1: # 1D
xc = _autocorr(x)
xc = xc * parzen_w_1
xf = fftshift(fft(xc))
elif x.ndim == 2 and min(n) != 1: # 2D
xc = _autocorr(x) # default option: computes raw correlations with NO
# normalization -- Matlab help on xcorr
# Bias correction
v1 = np.hstack((np.arange(1, n[0] + 1), np.arange(n[0] - 1, 0,
-1)))[np.newaxis, :]
v2 = np.hstack((np.arange(1, n[1] + 1), np.arange(n[1] - 1, 0,
-1)))[np.newaxis, :]
vd = np.dot(v1.T, v2)
xc = xc / vd
parzen_window_2D = np.dot(parzen_w_1, parzen_w_2.T)
xc = xc * parzen_window_2D
xf = fftshift(fft2(xc))
elif x.ndim == 3 and min(n) != 1: # 3D
xc = np.zeros((2 * n[0] - 1, 2 * n[1] - 1, 2 * n[2] - 1))
for m3 in range(n[2] - 1):
temp = np.zeros((2 * n[0] - 1, 2 * n[1] - 1))
for k in range(n[2] - m3):
temp = temp + correlate2d(x[:, :, k + m3], x[:, :, k])
# default option:
# computes raw correlations with NO normalization
# -- Matlab help on xcorr
xc[:, :, (n[2] - 1) - m3] = temp
xc[:, :, (n[2] - 1) + m3] = temp
# Bias correction
v1 = np.hstack((np.arange(1, n[0] + 1), np.arange(n[0] - 1, 0,
-1)))[np.newaxis, :]
v2 = np.hstack((np.arange(1, n[1] + 1), np.arange(n[1] - 1, 0,
-1)))[np.newaxis, :]
v3 = np.arange(n[2], 0, -1)
vd = np.dot(v1.T, v2)
vcu = np.zeros((2 * n[0] - 1, 2 * n[1] - 1, 2 * n[2] - 1))
for m3 in range(n[2]):
vcu[:, :, (n[2] - 1) - m3] = vd * v3[m3]
vcu[:, :, (n[2] - 1) + m3] = vd * v3[m3]
# Possible source of NAN values
xc = xc / vcu
parzen_window_2D = np.dot(parzen_w_1[np.newaxis, :].T,
parzen_w_2[np.newaxis, :])
parzen_window_3D = np.zeros((2 * n[0] - 1, 2 * n[1] - 1, 2 * n[2] - 1))
for m3 in range(n[2] - 1):
parzen_window_3D[:, :, (n[2] - 1) - m3] = np.dot(
parzen_window_2D, parzen_w_3[n[2] - 1 - m3])
parzen_window_3D[:, :, (n[2] - 1) + m3] = np.dot(
parzen_window_2D, parzen_w_3[n[2] - 1 + m3])
xc = xc * parzen_window_3D
xf = fftshift(fftn(xc))
else:
raise ValueError('Unrecognized matrix dimension.')
xf = abs(xf)
xf[xf < 1e-4] = 1e-4
out = 0.5 * np.log(2 * np.pi * np.exp(1)) + _sumN(np.log(abs(
(xf)))) / 2 / _sumN(abs(xf))
return out
|
def _entrate_sp(x, sm_window):
"""
Calculate the entropy rate of a stationary Gaussian random process using
spectrum estimation with smoothing window.
Parameters
----------
x :
sm_window :
Returns
-------
out :
"""
n = x.shape
# Normalize x_sb to be unit variance
x_std = np.std(np.reshape(x, (np.prod(n), 1)))
if x_std < 1e-10:
x_std = 1e-10
x = x / x_std
if sm_window == 1:
M = [int(i) for i in np.ceil(np.array(n) / 10)]
if (x.ndim >= 3):
parzen_w_3 = np.zeros((2 * n[2] - 1, ))
parzen_w_3[(n[2] - M[2] - 1):(n[2] +
M[2])] = _parzen_win(2 * M[2] + 1)
if (x.ndim >= 2):
parzen_w_2 = np.zeros((2 * n[1] - 1, ))
parzen_w_2[(n[1] - M[1] - 1):(n[1] +
M[1])] = _parzen_win(2 * M[1] + 1)
if (x.ndim >= 1):
parzen_w_1 = np.zeros((2 * n[0] - 1, ))
parzen_w_1[(n[0] - M[0] - 1):(n[0] +
M[0])] = _parzen_win(2 * M[0] + 1)
if x.ndim == 2 and min(n) == 1: # 1D
xc = _autocorr(x)
xc = xc * parzen_w_1
xf = fftshift(fft(xc))
elif x.ndim == 2 and min(n) != 1: # 2D
xc = _autocorr(x) # default option: computes raw correlations with NO
# normalization -- Matlab help on xcorr
# Bias correction
v1 = np.hstack((np.arange(1, n[0] + 1), np.arange(n[0] - 1, 0,
-1)))[np.newaxis, :]
v2 = np.hstack((np.arange(1, n[1] + 1), np.arange(n[1] - 1, 0,
-1)))[np.newaxis, :]
vd = np.dot(v1.T, v2)
xc = xc / vd
parzen_window_2D = np.dot(parzen_w_1, parzen_w_2.T)
xc = xc * parzen_window_2D
xf = fftshift(fft2(xc))
elif x.ndim == 3 and min(n) != 1: # 3D
xc = np.zeros((2 * n[0] - 1, 2 * n[1] - 1, 2 * n[2] - 1))
for m3 in range(n[2] - 1):
temp = np.zeros((2 * n[0] - 1, 2 * n[1] - 1))
for k in range(n[2] - m3):
temp = temp + correlate2d(x[:, :, k + m3], x[:, :, k])
# default option:
# computes raw correlations with NO normalization
# -- Matlab help on xcorr
xc[:, :, (n[2] - 1) - m3] = temp
xc[:, :, (n[2] - 1) + m3] = temp
# Bias correction
v1 = np.hstack((np.arange(1, n[0] + 1), np.arange(n[0] - 1, 0,
-1)))[np.newaxis, :]
v2 = np.hstack((np.arange(1, n[1] + 1), np.arange(n[1] - 1, 0,
-1)))[np.newaxis, :]
v3 = np.arange(n[2], 0, -1)
vd = np.dot(v1.T, v2)
vcu = np.zeros((2 * n[0] - 1, 2 * n[1] - 1, 2 * n[2] - 1))
for m3 in range(n[2]):
vcu[:, :, (n[2] - 1) - m3] = vd * v3[m3]
vcu[:, :, (n[2] - 1) + m3] = vd * v3[m3]
# Possible source of NAN values
xc = xc / vcu
parzen_window_2D = np.dot(parzen_w_1[np.newaxis, :].T,
parzen_w_2[np.newaxis, :])
parzen_window_3D = np.zeros((2 * n[0] - 1, 2 * n[1] - 1, 2 * n[2] - 1))
for m3 in range(n[2] - 1):
parzen_window_3D[:, :, (n[2] - 1) - m3] = np.dot(
parzen_window_2D, parzen_w_3[n[2] - 1 - m3])
parzen_window_3D[:, :, (n[2] - 1) + m3] = np.dot(
parzen_window_2D, parzen_w_3[n[2] - 1 + m3])
xc = xc * parzen_window_3D
xf = fftshift(fftn(xc))
else:
raise ValueError('Unrecognized matrix dimension.')
xf = abs(xf)
xf[xf < 1e-4] = 1e-4
out = 0.5 * np.log(2 * np.pi * np.exp(1)) + _sumN(np.log(abs(
(xf)))) / 2 / _sumN(abs(xf))
return out
|
42,468 |
def err(message: Optional[str] = None, nl: bool = True, **styles: Any) -> None:
if isinstance(styles, dict):
if not styles.get("fg"):
styles["fg"] = "red"
elif len(styles) == 0:
styles = OutputLevels.error.value
_out(message, nl, **styles)
|
def err(message: Optional[str] = None, nl: bool = True, **styles: Any) -> None:
return out(message, nl, OutputLevels.error, **styles)
|
55,171 |
def cov_matrix(prob, obs, wires=None, diag_approx=False):
"""Calculate the covariance matrix of a list of commuting observables, given
the joint probability distribution of the system in the shared eigenbasis.
.. note::
This method only works for **commuting observables.**
If the probability distribution is the result of a quantum circuit,
the quantum state must be rotated into the shared
eigenbasis of the list of observables before measurement.
Args:
prob (tensor_like): probability distribution
obs (list[.Observable]): a list of observables for which
to compute the covariance matrix
diag_approx (bool): if True, return the diagonal approximation
wires (.Wires): The wire register of the system. If not provided,
it is assumed that the wires are labelled with consecutive integers.
Returns:
tensor_like: the covariance matrix of size ``(len(obs), len(obs))``
**Example**
Consider the following ansatz and observable list:
>>> obs_list = [qml.PauliX(0) @ qml.PauliZ(1), qml.PauliY(2)]
>>> ansatz = qml.templates.StronglyEntanglingLayers
We can construct a QNode to output the probability distribution in the shared eigenbasis of the
observables:
.. code-block:: python
dev = qml.device("default.qubit", wires=3)
@qml.qnode(dev, interface="autograd")
def circuit(weights):
ansatz(weights, wires=[0, 1, 2])
# rotate into the basis of the observables
for o in obs_list:
o.diagonalizing_gates()
return qml.probs(wires=[0, 1, 2])
We can now compute the covariance matrix:
>>> shape = qml.templates.StronglyEntanglingLayers.shape(n_layers=2, n_wires=3)
>>> weights = np.random.random(shape, requires_grad=True)
>>> cov = qml.math.cov_matrix(circuit(weights), obs_list)
>>> cov
array([[0.98707611, 0.03665537],
[0.03665537, 0.99998377]])
Autodifferentiation is fully supported using all interfaces.
Here we use autograd:
>>> cost_fn = lambda weights: qml.math.cov_matrix(circuit(weights), obs_list)[0, 1]
>>> qml.grad(cost_fn)(weights)[0]
array([[[ 4.94240914e-17, -2.33786398e-01, -1.54193959e-01],
[-3.05414996e-17, 8.40072236e-04, 5.57884080e-04],
[ 3.01859411e-17, 8.60411436e-03, 6.15745204e-04]],
[[ 6.80309533e-04, -1.23162742e-03, 1.08729813e-03],
[-1.53863193e-01, -1.38700657e-02, -1.36243323e-01],
[-1.54665054e-01, -1.89018172e-02, -1.56415558e-01]]])
"""
variances = []
# diagonal variances
for i, o in enumerate(obs):
eigvals = cast(o.eigvals(), dtype=float64)
w = o.wires.labels if wires is None else wires.indices(o.wires)
p = marginal_prob(prob, w)
res = dot(eigvals**2, p) - (dot(eigvals, p)) ** 2
variances.append(res)
cov = diag(variances)
if diag_approx:
return cov
for i, j in itertools.combinations(range(len(obs)), r=2):
o1 = obs[i]
o2 = obs[j]
o1wires = o1.wires.labels if wires is None else wires.indices(o1.wires)
o2wires = o2.wires.labels if wires is None else wires.indices(o2.wires)
shared_wires = set(o1wires + o2wires)
l1 = cast(o1.eigvals(), dtype=float64)
l2 = cast(o2.eigvals(), dtype=float64)
l12 = cast(np.kron(l1, l2), dtype=float64)
p1 = marginal_prob(prob, o1wires)
p2 = marginal_prob(prob, o2wires)
p12 = marginal_prob(prob, shared_wires)
res = dot(l12, p12) - dot(l1, p1) * dot(l2, p2)
cov = scatter_element_add(cov, [i, j], res)
cov = scatter_element_add(cov, [j, i], res)
return cov
|
def cov_matrix(prob, obs, wires=None, diag_approx=False):
"""Calculate the covariance matrix of a list of commuting observables, given
the joint probability distribution of the system in the shared eigenbasis.
.. note::
This method only works for **commuting observables.**
If the probability distribution is the result of a quantum circuit,
the quantum state must be rotated into the shared
eigenbasis of the list of observables before measurement.
Args:
prob (tensor_like): probability distribution
obs (list[.Observable]): a list of observables for which
to compute the covariance matrix
diag_approx (bool): if True, return the diagonal approximation
wires (.Wires): The wire register of the system. If not provided,
it is assumed that the wires are labelled with consecutive integers.
Returns:
tensor_like: the covariance matrix of size ``(len(obs), len(obs))``
**Example**
Consider the following ansatz and observable list:
>>> obs_list = [qml.PauliX(0) @ qml.PauliZ(1), qml.PauliY(2)]
>>> ansatz = qml.templates.StronglyEntanglingLayers
We can construct a QNode to output the probability distribution in the shared eigenbasis of the
observables:
.. code-block:: python
dev = qml.device("default.qubit", wires=3)
@qml.qnode(dev, interface="autograd")
def circuit(weights):
ansatz(weights, wires=[0, 1, 2])
# rotate into the basis of the observables
for o in obs_list:
o.diagonalizing_gates()
return qml.probs(wires=[0, 1, 2])
We can now compute the covariance matrix:
>>> shape = qml.templates.StronglyEntanglingLayers.shape(n_layers=2, n_wires=3)
>>> weights = np.random.random(shape, requires_grad=True)
>>> cov = qml.math.cov_matrix(circuit(weights), obs_list)
>>> cov
array([[0.98707611, 0.03665537],
[0.03665537, 0.99998377]])
Autodifferentiation is fully supported using all interfaces.
Here we use autograd:
>>> cost_fn = lambda weights: qml.math.cov_matrix(circuit(weights), obs_list)[0, 1]
>>> qml.grad(cost_fn)(weights)[0]
array([[[ 4.94240914e-17, -2.33786398e-01, -1.54193959e-01],
[-3.05414996e-17, 8.40072236e-04, 5.57884080e-04],
[ 3.01859411e-17, 8.60411436e-03, 6.15745204e-04]],
[[ 6.80309533e-04, -1.23162742e-03, 1.08729813e-03],
[-1.53863193e-01, -1.38700657e-02, -1.36243323e-01],
[-1.54665054e-01, -1.89018172e-02, -1.56415558e-01]]])
"""
variances = []
# diagonal variances
for i, o in enumerate(obs):
eigvals = cast(o.eigvals(), dtype=float64)
w = o.wires.labels if wires is None else wires.indices(o.wires)
p = marginal_prob(prob, w)
res = dot(eigvals**2, p) - (dot(eigvals, p)) ** 2
variances.append(res)
cov = diag(variances)
if diag_approx:
return cov
for i, j in itertools.combinations(range(len(obs)), r=2):
o1 = obs[i]
o2 = obs[j]
o1wires = o1.wires.labels if wires is None else wires.indices(o1.wires)
o2wires = o2.wires.labels if wires is None else wires.indices(o2.wires)
shared_wires = set(o1wires + o2wires)
l1 = cast(o1.eigvals(), dtype=float64)
l2 = cast(o2.eigvals(), dtype=float64)
l12 = cast(np.kron(l1, l2), dtype=float64)
p1 = marginal_prob(prob, o1wires)
p2 = marginal_prob(prob, o2wires)
p12 = marginal_prob(prob, shared_wires)
res = dot(l12, p12) - dot(l1, p1) * dot(l2, p2)
cov = scatter_element_add(cov, [i, j], res)
cov = scatter_element_add(cov, [j, i], res)
return cov
|
55,723 |
def imsave(filename: str, data: np.ndarray):
"""custom imaplementation of imread to avoid skimage dependecy"""
ext = os.path.splitext(filename)[1]
if ext in [".tif", "tiff"]:
import tifffile
tifffile.imsave(filename, data)
else:
import imageio
imageio.imsave(filename, data)
|
def imsave(filename: str, data: np.ndarray):
"""Custom implementation of imsave to avoid skimage dependency.
Parameters
----------
filename : string
The path to write the file to.
data : np.ndarray
The image data.
"""
ext = os.path.splitext(filename)[1]
if ext in [".tif", "tiff"]:
import tifffile
tifffile.imsave(filename, data)
else:
import imageio
imageio.imsave(filename, data)
|
31,915 |
def list_attached_user_policies(args, aws_client):
client = aws_client.aws_session(
service=SERVICE,
role_arn=args.get('roleArn'),
role_session_name=args.get('roleSessionName'),
role_session_duration=args.get('roleSessionDuration'),
)
user_name = args.get('userName', "")
marker = args.get('marker', None)
limit, is_manual, page_size = get_limit(args)
kwargs = {
'UserName': user_name,
'MaxItems': limit
}
if marker:
kwargs.update({'Marker': marker})
response = client.list_attached_user_policies(**kwargs)
data = response.get('AttachedPolicies', [])
marker = response.get('Marker', None)
if is_manual and page_size is not None and len(data) > page_size:
data = data[-1 * page_size:]
policy_data = []
for policy in data:
policy_data.append({
'UserName': user_name,
'PolicyArn': policy.get('PolicyArn', ''),
'PolicyName': policy.get('PolicyName', '')
})
ec = {'AWS.IAM.AttachedUserPolicies(val.PolicyArn && val.UserName && val.PolicyArn === obj.PolicyArn && '
'val.UserName === obj.UserName)': policy_data,
'AWS.IAM.Users(val.UserName === \'{}\').AttachedPoliciesMarker'.format(user_name): marker}
human_readable = tableToMarkdown('AWS IAM Attached Policies for user {}'.format(user_name),
headers=['PolicyName', 'PolicyArn'],
headerTransform=pascalToSpace,
t=data)
return_outputs(human_readable, ec)
|
def list_attached_user_policies(args, aws_client):
client = aws_client.aws_session(
service=SERVICE,
role_arn=args.get('roleArn'),
role_session_name=args.get('roleSessionName'),
role_session_duration=args.get('roleSessionDuration'),
)
user_name = args.get('userName', "")
marker = args.get('marker', None)
limit, is_manual, page_size = get_limit(args)
kwargs = {
'UserName': user_name,
'MaxItems': limit
}
if marker:
kwargs.update({'Marker': marker})
response = client.list_attached_user_policies(**kwargs)
data = response.get('AttachedPolicies', [])
marker = response.get('Marker', None)
if is_manual and page_size is not None and len(data) > page_size:
data = data[-1 * page_size:]
policy_data = []
for policy in data:
policy_data.append({
'UserName': user_name,
'PolicyArn': policy.get('PolicyArn', ''),
'PolicyName': policy.get('PolicyName', '')
})
ec = {'AWS.IAM.AttachedUserPolicies(val.PolicyArn && val.UserName && val.PolicyArn === obj.PolicyArn && '
'val.UserName === obj.UserName)': policy_data,
'AWS.IAM.Users(val.UserName === \'{}\').AttachedPoliciesMarker'.format(user_name): marker}
human_readable = tableToMarkdown('AWS IAM Attached Policies for user {}'.format(user_name),
headers=['PolicyName', 'PolicyArn'],
headerTransform=pascalToSpace,
t=data)
return_outputs(human_readable, ec, response)
|
30,261 |
def get_download_count():
response = http_request(method='GET', url_suffix=USER_SUFFIX + RELEASE_SUFFIX)
count_per_release = []
for release in response:
total_download_count = 0
for asset in release['assets']:
total_download_count = total_download_count + asset['download_count']
release_info = {
'URL': release.get('url'),
'Download_count': total_download_count
}
count_per_release.append(release_info)
context_create_release(release_list=count_per_release, response=response)
|
def get_download_count():
response = http_request(method='GET', url_suffix=USER_SUFFIX + RELEASE_SUFFIX)
count_per_release = []
for release in response:
total_download_count = 0
for asset in release.get('assets', []):
total_download_count = total_download_count + asset['download_count']
release_info = {
'URL': release.get('url'),
'Download_count': total_download_count
}
count_per_release.append(release_info)
context_create_release(release_list=count_per_release, response=response)
|
30,297 |
def get_user_emails():
connect_pop3_server()
(resp_message, mails_list, octets) = pop3_server_conn.list() # type: ignore
mails = []
index = ''
for mail in mails_list:
try:
index = mail.split(' ')[0]
(resp_message, lines, octets) = pop3_server_conn.retr(index) # type: ignore
msg_content = unicode(b'\r\n'.join(lines), errors='ignore').encode("utf-8")
msg = Parser().parsestr(msg_content)
msg['index'] = index
mails.append(msg)
except Exception as e:
raise Exception("Failed to get email with index " + index + 'from the server.\nError:' + str(e))
return mails
|
def get_user_emails():
connect_pop3_server()
_, mails_list, _ = pop3_server_conn.list() # type: ignore
mails = []
index = ''
for mail in mails_list:
try:
index = mail.split(' ')[0]
(resp_message, lines, octets) = pop3_server_conn.retr(index) # type: ignore
msg_content = unicode(b'\r\n'.join(lines), errors='ignore').encode("utf-8")
msg = Parser().parsestr(msg_content)
msg['index'] = index
mails.append(msg)
except Exception as e:
raise Exception("Failed to get email with index " + index + 'from the server.\nError:' + str(e))
return mails
|
40,038 |
def _add_pandas_datasource_with_manual_generator(context):
"""
Add a Pandas datasource to the context without configuring any "opinionated" generators.
Only a manul generator is added.
:param context:
:return:
"""
data_source_name = "files_datasource"
# data_source_name = click.prompt(
# msg_prompt_datasource_name,
# default=data_source_name,
# show_default=True
# )
configuration = PandasDatasource.build_configuration(generators={
"default": {
"class_name": "PassthroughGenerator",
}
}
)
datasource = context.add_datasource(name=data_source_name,
class_name='PandasDatasource',
**configuration)
return data_source_name
|
def _add_pandas_datasource_with_manual_generator(context):
"""
Add a Pandas datasource to the context without configuring any "opinionated" generators.
Only a manual generator is added.
:param context:
:return:
"""
data_source_name = "files_datasource"
# data_source_name = click.prompt(
# msg_prompt_datasource_name,
# default=data_source_name,
# show_default=True
# )
configuration = PandasDatasource.build_configuration(generators={
"default": {
"class_name": "PassthroughGenerator",
}
}
)
datasource = context.add_datasource(name=data_source_name,
class_name='PandasDatasource',
**configuration)
return data_source_name
|
57,603 |
def fetch_production(zone_key='NL', session=None, target_datetime=None,
logger=logging.getLogger(__name__), energieopwek_nl=True):
if target_datetime is None:
target_datetime = arrow.utcnow()
r = session or requests.session()
consumptions = ENTSOE.fetch_consumption(zone_key=zone_key,
session=r,
target_datetime=target_datetime,
logger=logger)
if not consumptions:
return
for c in consumptions:
del c['source']
df_consumptions = pd.DataFrame.from_dict(consumptions).set_index(
'datetime')
# NL has exchanges with BE, DE, NO, GB, DK-DK1
exchanges = []
for exchange_key in ['BE', 'DE', 'GB']:
zone_1, zone_2 = sorted([exchange_key, zone_key])
exchange = ENTSOE.fetch_exchange(zone_key1=zone_1,
zone_key2=zone_2,
session=r,
target_datetime=target_datetime,
logger=logger)
if not exchange:
return
exchanges.extend(exchange or [])
# add NO data, fetch once for every hour
# This introduces an error, because it doesn't use the average power flow
# during the hour, but rather only the value during the first minute of the
# hour!
zone_1, zone_2 = sorted(['NO', zone_key])
exchange_NO = [statnett.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,
session=r, target_datetime=dt.datetime,
logger=logger)
for dt in arrow.Arrow.range(
'hour',
arrow.get(min([e['datetime']
for e in exchanges])).replace(minute=0),
arrow.get(max([e['datetime']
for e in exchanges])).replace(minute=0))]
exchanges.extend(exchange_NO)
# add DK1 data
zone_1, zone_2 = sorted(['DK-DK1', zone_key])
df_dk = pd.DataFrame(DK.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,
session=r, target_datetime=target_datetime,
logger=logger))
# Because other exchanges and consumption data is only available per hour
# we floor the timestamp to hour and group by hour with averaging of netFlow
df_dk['datetime'] = df_dk['datetime'].dt.floor('H')
exchange_DK = df_dk.groupby(['datetime']).aggregate({'netFlow' : 'mean',
'sortedZoneKeys': 'max', 'source' : 'max'}).reset_index()
# because averaging with high precision numbers leads to rounding errors
exchange_DK = exchange_DK.round({'netFlow': 3})
exchanges.extend(exchange_DK.to_dict(orient='records'))
# We want to know the net-imports into NL, so if NL is in zone_1 we need
# to flip the direction of the flow. E.g. 100MW for NL->DE means 100MW
# export to DE and needs to become -100MW for import to NL.
for e in exchanges:
if(e['sortedZoneKeys'].startswith('NL->')):
e['NL_import'] = -1 * e['netFlow']
else:
e['NL_import'] = e['netFlow']
del e['source']
del e['netFlow']
df_exchanges = pd.DataFrame.from_dict(exchanges).set_index('datetime')
# Sum all exchanges to NL imports
df_exchanges = df_exchanges.groupby('datetime').sum()
# Fill missing values by propagating the value forward
df_consumptions_with_exchanges = df_consumptions.join(df_exchanges).fillna(
method='ffill', limit=3) # Limit to 3 x 15min
# Load = Generation + netImports
# => Generation = Load - netImports
df_total_generations = (df_consumptions_with_exchanges['consumption']
- df_consumptions_with_exchanges['NL_import'])
# Fetch all production
# The energieopwek_nl parser is backwards compatible with ENTSOE parser.
# Because of data quality issues we switch to using energieopwek, but if
# data quality of ENTSOE improves we can switch back to using a single
# source.
productions_ENTSOE = ENTSOE.fetch_production(zone_key=zone_key, session=r,
target_datetime=target_datetime, logger=logger)
if energieopwek_nl:
productions_eopwek = fetch_production_energieopwek_nl(session=r,
target_datetime=target_datetime, logger=logger)
# For every production value we look up the corresponding ENTSOE
# values and copy the nuclear, gas, coal, biomass and unknown production.
productions = []
for p in productions_eopwek:
entsoe_value = next((pe for pe in productions_ENTSOE
if pe["datetime"] == p["datetime"]), None)
if entsoe_value:
p["production"]["nuclear"] = entsoe_value["production"]["nuclear"]
p["production"]["gas"] = entsoe_value["production"]["gas"]
p["production"]["coal"] = entsoe_value["production"]["coal"]
p["production"]["biomass"] = entsoe_value["production"]["biomass"]
p["production"]["unknown"] = entsoe_value["production"]["unknown"]
productions.append(p)
else:
productions = productions_ENTSOE
if not productions:
return
# Flatten production dictionaries (we ignore storage)
for p in productions:
        # if for some reason there's no unknown value
        if 'unknown' not in p['production']:
p['production']['unknown'] = 0
Z = sum([x or 0 for x in p['production'].values()])
# Only calculate the difference if the datetime exists
# If total ENTSOE reported production (Z) is less than total generation
# (calculated from consumption and imports), then there must be some
# unknown production missing, so we add the difference.
# The difference can actually be negative, because consumption is based
# on TSO network load, but locally generated electricity may never leave
# the DSO network and be substantial (e.g. Solar).
if p['datetime'] in df_total_generations and Z < df_total_generations[p['datetime']]:
p['production']['unknown'] = round((
df_total_generations[p['datetime']] - Z + p['production']['unknown']), 3)
# Filter invalid
# We should probably add logging to this
return [p for p in productions if p['production']['unknown'] > 0]
|
def fetch_production(zone_key='NL', session=None, target_datetime=None,
logger=logging.getLogger(__name__), energieopwek_nl=True):
if target_datetime is None:
target_datetime = arrow.utcnow()
else:
target_datetime = arrow.get(target_datetime)
r = session or requests.session()
consumptions = ENTSOE.fetch_consumption(zone_key=zone_key,
session=r,
target_datetime=target_datetime,
logger=logger)
if not consumptions:
return
for c in consumptions:
del c['source']
df_consumptions = pd.DataFrame.from_dict(consumptions).set_index(
'datetime')
# NL has exchanges with BE, DE, NO, GB, DK-DK1
exchanges = []
for exchange_key in ['BE', 'DE', 'GB']:
zone_1, zone_2 = sorted([exchange_key, zone_key])
exchange = ENTSOE.fetch_exchange(zone_key1=zone_1,
zone_key2=zone_2,
session=r,
target_datetime=target_datetime,
logger=logger)
if not exchange:
return
exchanges.extend(exchange or [])
# add NO data, fetch once for every hour
# This introduces an error, because it doesn't use the average power flow
# during the hour, but rather only the value during the first minute of the
# hour!
zone_1, zone_2 = sorted(['NO', zone_key])
exchange_NO = [statnett.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,
session=r, target_datetime=dt.datetime,
logger=logger)
for dt in arrow.Arrow.range(
'hour',
arrow.get(min([e['datetime']
for e in exchanges])).replace(minute=0),
arrow.get(max([e['datetime']
for e in exchanges])).replace(minute=0))]
exchanges.extend(exchange_NO)
# add DK1 data
zone_1, zone_2 = sorted(['DK-DK1', zone_key])
df_dk = pd.DataFrame(DK.fetch_exchange(zone_key1=zone_1, zone_key2=zone_2,
session=r, target_datetime=target_datetime,
logger=logger))
    # Because other exchanges and consumption data are only available per hour
    # we floor the timestamp to the hour and group by hour, averaging netFlow
df_dk['datetime'] = df_dk['datetime'].dt.floor('H')
exchange_DK = df_dk.groupby(['datetime']).aggregate({'netFlow' : 'mean',
'sortedZoneKeys': 'max', 'source' : 'max'}).reset_index()
# because averaging with high precision numbers leads to rounding errors
exchange_DK = exchange_DK.round({'netFlow': 3})
exchanges.extend(exchange_DK.to_dict(orient='records'))
# We want to know the net-imports into NL, so if NL is in zone_1 we need
# to flip the direction of the flow. E.g. 100MW for NL->DE means 100MW
# export to DE and needs to become -100MW for import to NL.
for e in exchanges:
        if e['sortedZoneKeys'].startswith('NL->'):
e['NL_import'] = -1 * e['netFlow']
else:
e['NL_import'] = e['netFlow']
del e['source']
del e['netFlow']
df_exchanges = pd.DataFrame.from_dict(exchanges).set_index('datetime')
# Sum all exchanges to NL imports
df_exchanges = df_exchanges.groupby('datetime').sum()
# Fill missing values by propagating the value forward
df_consumptions_with_exchanges = df_consumptions.join(df_exchanges).fillna(
method='ffill', limit=3) # Limit to 3 x 15min
# Load = Generation + netImports
# => Generation = Load - netImports
df_total_generations = (df_consumptions_with_exchanges['consumption']
- df_consumptions_with_exchanges['NL_import'])
# Fetch all production
# The energieopwek_nl parser is backwards compatible with ENTSOE parser.
# Because of data quality issues we switch to using energieopwek, but if
# data quality of ENTSOE improves we can switch back to using a single
# source.
productions_ENTSOE = ENTSOE.fetch_production(zone_key=zone_key, session=r,
target_datetime=target_datetime, logger=logger)
if energieopwek_nl:
productions_eopwek = fetch_production_energieopwek_nl(session=r,
target_datetime=target_datetime, logger=logger)
# For every production value we look up the corresponding ENTSOE
# values and copy the nuclear, gas, coal, biomass and unknown production.
productions = []
for p in productions_eopwek:
entsoe_value = next((pe for pe in productions_ENTSOE
if pe["datetime"] == p["datetime"]), None)
if entsoe_value:
p["production"]["nuclear"] = entsoe_value["production"]["nuclear"]
p["production"]["gas"] = entsoe_value["production"]["gas"]
p["production"]["coal"] = entsoe_value["production"]["coal"]
p["production"]["biomass"] = entsoe_value["production"]["biomass"]
p["production"]["unknown"] = entsoe_value["production"]["unknown"]
productions.append(p)
else:
productions = productions_ENTSOE
if not productions:
return
# Flatten production dictionaries (we ignore storage)
for p in productions:
        # if for some reason there's no unknown value
        if 'unknown' not in p['production']:
p['production']['unknown'] = 0
Z = sum([x or 0 for x in p['production'].values()])
# Only calculate the difference if the datetime exists
# If total ENTSOE reported production (Z) is less than total generation
# (calculated from consumption and imports), then there must be some
# unknown production missing, so we add the difference.
# The difference can actually be negative, because consumption is based
# on TSO network load, but locally generated electricity may never leave
# the DSO network and be substantial (e.g. Solar).
if p['datetime'] in df_total_generations and Z < df_total_generations[p['datetime']]:
p['production']['unknown'] = round((
df_total_generations[p['datetime']] - Z + p['production']['unknown']), 3)
# Filter invalid
# We should probably add logging to this
return [p for p in productions if p['production']['unknown'] > 0]
|
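Both versions of the NL parser rest on the same balance: Load = Generation + netImports, so Generation = Load - netImports, with every exchange first flipped to the import perspective whenever NL is the exporting zone in sortedZoneKeys. A minimal, self-contained pandas sketch of that bookkeeping (toy values and timestamps, not real data):

import pandas as pd

# Toy exchange rows as the parser would see them (values in MW).
exchanges = pd.DataFrame([
    {"datetime": "2024-01-01T00:00", "sortedZoneKeys": "NL->NO", "netFlow": 100.0},
    {"datetime": "2024-01-01T00:00", "sortedZoneKeys": "DE->NL", "netFlow": 250.0},
])
# Positive netFlow goes from the first zone to the second, so flows starting
# with 'NL->' are exports and must be negated to express NL imports.
exchanges["NL_import"] = exchanges.apply(
    lambda e: -e["netFlow"] if e["sortedZoneKeys"].startswith("NL->") else e["netFlow"],
    axis=1,
)
net_imports = exchanges.groupby("datetime")["NL_import"].sum()

consumption = pd.Series({"2024-01-01T00:00": 12000.0})  # TSO load, MW
# Load = Generation + netImports  =>  Generation = Load - netImports
generation = consumption - net_imports
print(generation)  # 2024-01-01T00:00    11850.0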
28,562 |
def plot_loo_pit(
ax,
figsize,
ecdf,
loo_pit,
loo_pit_ecdf,
unif_ecdf,
p975,
p025,
fill_kwargs,
ecdf_fill,
use_hdi,
x_vals,
hdi_kwargs,
hdi_odds,
n_unif,
unif,
plot_unif_kwargs,
loo_pit_kde,
legend, # pylint: disable=unused-argument
y_hat,
y,
color,
textsize,
credible_interval,
plot_kwargs,
backend_kwargs,
show,
):
"""Bokeh loo pit plot."""
if backend_kwargs is None:
backend_kwargs = {}
backend_kwargs = {
**backend_kwarg_defaults(("dpi", "plot.bokeh.figure.dpi"),),
**backend_kwargs,
}
dpi = backend_kwargs.pop("dpi")
(figsize, *_, linewidth, _) = _scale_fig_size(figsize, textsize, 1, 1)
plot_kwargs = {} if plot_kwargs is None else plot_kwargs
plot_kwargs.setdefault("color", to_hex(color))
plot_kwargs.setdefault("linewidth", linewidth * 1.4)
if isinstance(y, str):
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y)
elif isinstance(y, DataArray):
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y.name)
elif isinstance(y_hat, str):
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y_hat)
elif isinstance(y_hat, DataArray):
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y_hat.name)
else:
label = "LOO-PIT ECDF" if ecdf else "LOO-PIT"
plot_kwargs.setdefault("legend_label", label)
plot_unif_kwargs = {} if plot_unif_kwargs is None else plot_unif_kwargs
light_color = rgb_to_hsv(to_rgb(plot_kwargs.get("color")))
light_color[1] /= 2 # pylint: disable=unsupported-assignment-operation
light_color[2] += (1 - light_color[2]) / 2 # pylint: disable=unsupported-assignment-operation
plot_unif_kwargs.setdefault("color", to_hex(hsv_to_rgb(light_color)))
plot_unif_kwargs.setdefault("alpha", 0.5)
plot_unif_kwargs.setdefault("linewidth", 0.6 * linewidth)
if ecdf:
n_data_points = loo_pit.size
plot_kwargs.setdefault("drawstyle", "steps-mid" if n_data_points < 100 else "default")
plot_unif_kwargs.setdefault("drawstyle", "steps-mid" if n_data_points < 100 else "default")
if ecdf_fill:
if fill_kwargs is None:
fill_kwargs = {}
fill_kwargs.setdefault("color", to_hex(hsv_to_rgb(light_color)))
fill_kwargs.setdefault("alpha", 0.5)
fill_kwargs.setdefault(
"step", "mid" if plot_kwargs["drawstyle"] == "steps-mid" else None
)
fill_kwargs.setdefault(
"legend_label", "{:.3g}% credible interval".format(credible_interval)
)
elif use_hdi:
if hdi_kwargs is None:
hdi_kwargs = {}
hdi_kwargs.setdefault("color", to_hex(hsv_to_rgb(light_color)))
hdi_kwargs.setdefault("alpha", 0.35)
if ax is None:
backend_kwargs.setdefault("width", int(figsize[0] * dpi))
backend_kwargs.setdefault("height", int(figsize[1] * dpi))
ax = bkp.figure(x_range=(0, 1), **backend_kwargs)
if ecdf:
if plot_kwargs.get("drawstyle") == "steps-mid":
ax.step(
np.hstack((0, loo_pit, 1)),
np.hstack((0, loo_pit - loo_pit_ecdf, 0)),
line_color=plot_kwargs.get("color", "black"),
line_alpha=plot_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 3.0),
mode="center",
)
else:
ax.line(
np.hstack((0, loo_pit, 1)),
np.hstack((0, loo_pit - loo_pit_ecdf, 0)),
line_color=plot_kwargs.get("color", "black"),
line_alpha=plot_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 3.0),
)
if ecdf_fill:
if fill_kwargs.get("drawstyle") == "steps-mid":
# use step patch when you find out how to do that
ax.patch(
np.concatenate((unif_ecdf, unif_ecdf[::-1])),
np.concatenate((p975 - unif_ecdf, (p025 - unif_ecdf)[::-1])),
fill_color=fill_kwargs.get("color"),
fill_alpha=fill_kwargs.get("alpha", 1.0),
)
else:
ax.patch(
np.concatenate((unif_ecdf, unif_ecdf[::-1])),
np.concatenate((p975 - unif_ecdf, (p025 - unif_ecdf)[::-1])),
fill_color=fill_kwargs.get("color"),
fill_alpha=fill_kwargs.get("alpha", 1.0),
)
else:
if fill_kwargs is not None and fill_kwargs.get("drawstyle") == "steps-mid":
ax.step(
unif_ecdf,
p975 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 1.0),
mode="center",
)
ax.step(
unif_ecdf,
p025 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
mode="center",
)
else:
ax.line(
unif_ecdf,
p975 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
)
ax.line(
unif_ecdf,
p025 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
)
else:
if use_hdi:
ax.add_layout(
BoxAnnotation(
bottom=hdi_odds[1],
top=hdi_odds[0],
fill_alpha=hdi_kwargs.pop("alpha"),
fill_color=hdi_kwargs.pop("color"),
**hdi_kwargs
)
)
else:
for idx in range(n_unif):
unif_density, xmin, xmax = _fast_kde(unif[idx, :])
x_s = np.linspace(xmin, xmax, len(unif_density))
ax.line(
x_s,
unif_density,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 0.1),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
)
ax.line(
x_vals,
loo_pit_kde,
line_color=plot_kwargs.get("color", "black"),
line_alpha=plot_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 3.0),
)
show_layout(ax, show)
return ax
|
def plot_loo_pit(
ax,
figsize,
ecdf,
loo_pit,
loo_pit_ecdf,
unif_ecdf,
p975,
p025,
fill_kwargs,
ecdf_fill,
use_hdi,
x_vals,
hdi_kwargs,
hdi_odds,
n_unif,
unif,
plot_unif_kwargs,
loo_pit_kde,
legend, # pylint: disable=unused-argument
y_hat,
y,
color,
textsize,
credible_interval,
plot_kwargs,
backend_kwargs,
show,
):
"""Bokeh loo pit plot."""
if backend_kwargs is None:
backend_kwargs = {}
backend_kwargs = {
**backend_kwarg_defaults(("dpi", "plot.bokeh.figure.dpi"),),
**backend_kwargs,
}
dpi = backend_kwargs.pop("dpi")
(figsize, *_, linewidth, _) = _scale_fig_size(figsize, textsize, 1, 1)
plot_kwargs = {} if plot_kwargs is None else plot_kwargs
plot_kwargs.setdefault("color", to_hex(color))
plot_kwargs.setdefault("linewidth", linewidth * 1.4)
if isinstance(y, str):
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y)
elif isinstance(y, DataArray):
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y.name)
elif isinstance(y_hat, str):
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y_hat)
elif isinstance(y_hat, DataArray) and y_hat.name is not None:
label = ("{} LOO-PIT ECDF" if ecdf else "{} LOO-PIT").format(y_hat.name)
else:
label = "LOO-PIT ECDF" if ecdf else "LOO-PIT"
plot_kwargs.setdefault("legend_label", label)
plot_unif_kwargs = {} if plot_unif_kwargs is None else plot_unif_kwargs
light_color = rgb_to_hsv(to_rgb(plot_kwargs.get("color")))
light_color[1] /= 2 # pylint: disable=unsupported-assignment-operation
light_color[2] += (1 - light_color[2]) / 2 # pylint: disable=unsupported-assignment-operation
plot_unif_kwargs.setdefault("color", to_hex(hsv_to_rgb(light_color)))
plot_unif_kwargs.setdefault("alpha", 0.5)
plot_unif_kwargs.setdefault("linewidth", 0.6 * linewidth)
if ecdf:
n_data_points = loo_pit.size
plot_kwargs.setdefault("drawstyle", "steps-mid" if n_data_points < 100 else "default")
plot_unif_kwargs.setdefault("drawstyle", "steps-mid" if n_data_points < 100 else "default")
if ecdf_fill:
if fill_kwargs is None:
fill_kwargs = {}
fill_kwargs.setdefault("color", to_hex(hsv_to_rgb(light_color)))
fill_kwargs.setdefault("alpha", 0.5)
fill_kwargs.setdefault(
"step", "mid" if plot_kwargs["drawstyle"] == "steps-mid" else None
)
fill_kwargs.setdefault(
"legend_label", "{:.3g}% credible interval".format(credible_interval)
)
elif use_hdi:
if hdi_kwargs is None:
hdi_kwargs = {}
hdi_kwargs.setdefault("color", to_hex(hsv_to_rgb(light_color)))
hdi_kwargs.setdefault("alpha", 0.35)
if ax is None:
backend_kwargs.setdefault("width", int(figsize[0] * dpi))
backend_kwargs.setdefault("height", int(figsize[1] * dpi))
ax = bkp.figure(x_range=(0, 1), **backend_kwargs)
if ecdf:
if plot_kwargs.get("drawstyle") == "steps-mid":
ax.step(
np.hstack((0, loo_pit, 1)),
np.hstack((0, loo_pit - loo_pit_ecdf, 0)),
line_color=plot_kwargs.get("color", "black"),
line_alpha=plot_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 3.0),
mode="center",
)
else:
ax.line(
np.hstack((0, loo_pit, 1)),
np.hstack((0, loo_pit - loo_pit_ecdf, 0)),
line_color=plot_kwargs.get("color", "black"),
line_alpha=plot_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 3.0),
)
if ecdf_fill:
if fill_kwargs.get("drawstyle") == "steps-mid":
# use step patch when you find out how to do that
ax.patch(
np.concatenate((unif_ecdf, unif_ecdf[::-1])),
np.concatenate((p975 - unif_ecdf, (p025 - unif_ecdf)[::-1])),
fill_color=fill_kwargs.get("color"),
fill_alpha=fill_kwargs.get("alpha", 1.0),
)
else:
ax.patch(
np.concatenate((unif_ecdf, unif_ecdf[::-1])),
np.concatenate((p975 - unif_ecdf, (p025 - unif_ecdf)[::-1])),
fill_color=fill_kwargs.get("color"),
fill_alpha=fill_kwargs.get("alpha", 1.0),
)
else:
if fill_kwargs is not None and fill_kwargs.get("drawstyle") == "steps-mid":
ax.step(
unif_ecdf,
p975 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 1.0),
mode="center",
)
ax.step(
unif_ecdf,
p025 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
mode="center",
)
else:
ax.line(
unif_ecdf,
p975 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
)
ax.line(
unif_ecdf,
p025 - unif_ecdf,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 1.0),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
)
else:
if use_hdi:
ax.add_layout(
BoxAnnotation(
bottom=hdi_odds[1],
top=hdi_odds[0],
fill_alpha=hdi_kwargs.pop("alpha"),
fill_color=hdi_kwargs.pop("color"),
**hdi_kwargs
)
)
else:
for idx in range(n_unif):
unif_density, xmin, xmax = _fast_kde(unif[idx, :])
x_s = np.linspace(xmin, xmax, len(unif_density))
ax.line(
x_s,
unif_density,
line_color=plot_unif_kwargs.get("color", "black"),
line_alpha=plot_unif_kwargs.get("alpha", 0.1),
line_width=plot_unif_kwargs.get("linewidth", 1.0),
)
ax.line(
x_vals,
loo_pit_kde,
line_color=plot_kwargs.get("color", "black"),
line_alpha=plot_kwargs.get("alpha", 1.0),
line_width=plot_kwargs.get("linewidth", 3.0),
)
show_layout(ax, show)
return ax
|
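The lighter colour used for the uniform reference samples is derived by halving the saturation and moving the value halfway towards 1 in HSV space. The same transform in isolation, as a small sketch using only matplotlib.colors:

from matplotlib.colors import to_rgb, to_hex, rgb_to_hsv, hsv_to_rgb

def lighten(color):
    """Half the saturation, move the value halfway to 1 (same trick as above)."""
    hsv = rgb_to_hsv(to_rgb(color))
    hsv[1] /= 2
    hsv[2] += (1 - hsv[2]) / 2
    return to_hex(hsv_to_rgb(hsv))

print(lighten("C0"))       # a washed-out version of matplotlib's default blue
print(lighten("#ff0000"))  # pale red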
43,291 |
def _chebyshev(one_hot_encoded_row, laplacian, coeffs, deg, max_eig):
"""
This function calculates one column of the Chebyshev approximation of exp(-scale * laplacian) for
all scales.
Args:
one_hot_encoded_row (SparseTensor): a sparse tensor indicating which column (node) to calculate.
laplacian (SparseTensor): the unormalized graph laplacian
coeffs: the Chebyshev coefficients for exp(-scale * x) for each scale in the shape (num_scales, deg)
deg: the degree of the Chebyshev polynomial
Returns:
(num_scales, num_nodes) tensor of the wavelets for each scale for the specified node.
"""
a = max_eig / 2
T_0 = tf.reshape(
tf.sparse.to_dense(one_hot_encoded_row), shape=(laplacian.shape[0], 1)
)
T_1 = (K.dot(laplacian, T_0) - a * T_0) / a
cheby_polys = [T_0, T_1]
for i in range(deg - 1):
cheby_poly = (2 / a) * (
K.dot(laplacian, cheby_polys[-1]) - a * cheby_polys[-1]
) - cheby_polys[-2]
cheby_polys.append(cheby_poly)
cheby_polys = K.squeeze(tf.stack(cheby_polys, axis=0), axis=-1)
return tf.matmul(coeffs, cheby_polys)
|
def _chebyshev(one_hot_encoded_row, laplacian, coeffs, deg, max_eig):
"""
This function calculates one column of the Chebyshev approximation of exp(-scale * laplacian) for
all scales.
Args:
one_hot_encoded_row (SparseTensor): a sparse tensor indicating which column (node) to calculate.
laplacian (SparseTensor): the unnormalized graph laplacian
coeffs: the Chebyshev coefficients for exp(-scale * x) for each scale in the shape (num_scales, deg)
deg: the degree of the Chebyshev polynomial
Returns:
(num_scales, num_nodes) tensor of the wavelets for each scale for the specified node.
"""
a = max_eig / 2
T_0 = tf.reshape(
tf.sparse.to_dense(one_hot_encoded_row), shape=(laplacian.shape[0], 1)
)
T_1 = (K.dot(laplacian, T_0) - a * T_0) / a
cheby_polys = [T_0, T_1]
for i in range(deg - 1):
cheby_poly = (2 / a) * (
K.dot(laplacian, cheby_polys[-1]) - a * cheby_polys[-1]
) - cheby_polys[-2]
cheby_polys.append(cheby_poly)
cheby_polys = K.squeeze(tf.stack(cheby_polys, axis=0), axis=-1)
return tf.matmul(coeffs, cheby_polys)
|
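The recurrence above builds shifted Chebyshev polynomials of the Laplacian, T_0 = e_i, T_1 = (L - aI)e_i / a, T_{k+1} = (2/a)(L - aI)T_k - T_{k-1} with a = max_eig / 2, and contracts them with precomputed coefficients of exp(-scale * x). A dense NumPy sketch of the same recurrence for a single scale, checked against the exact matrix exponential (the toy path-graph Laplacian, numpy.polynomial.chebyshev.chebinterpolate and scipy.linalg.expm are used only for this demo and are not part of the function above):

import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.linalg import expm

# Toy graph: path graph on 5 nodes, unnormalized Laplacian.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
max_eig = np.linalg.eigvalsh(L).max()
a = max_eig / 2
scale, deg = 1.0, 20

# Chebyshev coefficients of f(y) = exp(-scale * (a*y + a)) on y in [-1, 1].
coeffs = C.chebinterpolate(lambda y: np.exp(-scale * (a * y + a)), deg)

# Same recurrence as _chebyshev, in dense NumPy, for column (node) 0.
e0 = np.zeros((L.shape[0], 1))
e0[0] = 1.0
T = [e0, (L @ e0 - a * e0) / a]
for _ in range(deg - 1):
    T.append((2 / a) * (L @ T[-1] - a * T[-1]) - T[-2])
approx = (coeffs[None, :] @ np.squeeze(np.stack(T, axis=0), axis=-1)).ravel()

exact = expm(-scale * L)[:, 0]
print(np.max(np.abs(approx - exact)))  # small, e.g. well below 1e-10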
35,281 |
def non_negative_parafac_hals(tensor, rank, n_iter_max=100, init="svd", svd='numpy_svd', tol=1e-7,
sparsity_coefficients=[], fixed_modes=[],hals='approx',
verbose=False, return_errors=False):
"""
Non-negative CP decomposition
    Uses HALS which updates each factor columnwise, fixing every other column, see [1]_
Parameters
----------
tensor : ndarray
rank : int
number of components
n_iter_max : int
maximum number of iteration
init : {'svd', 'random'}, optional
svd : str, default is 'numpy_svd'
function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS
tol : float, optional
tolerance: the algorithm stops when the variation in
the reconstruction error is less than the tolerance
        Default: 1e-7
sparsity_coefficients: array of float (of length the number of modes)
The sparsity coefficients on each factor.
If set to None, the algorithm is computed without sparsity
Default: [],
fixed_modes: array of integers (between 0 and the number of modes)
Has to be set not to update a factor, 0 and 1 for U and V respectively
Default: []
verbose: boolean
Indicates whether the algorithm prints the successive
reconstruction errors or not
Default: False
return_errors: boolean
Indicates whether the algorithm should return all reconstruction errors
and computation time of each iteration or not
Default: False
Returns
-------
factors : ndarray list
list of positive factors of the CP decomposition
element `i` is of shape ``(tensor.shape[i], rank)``
errors: list
A list of reconstruction errors at each iteration of the algorithm.
toc: list
        A list with accumulated time at each iteration
References
----------
[1]: N. Gillis and F. Glineur, Accelerated Multiplicative Updates and
Hierarchical ALS Algorithms for Nonnegative Matrix Factorization,
Neural Computation 24 (4): 1085-1105, 2012.
"""
weights, factors = initialize_nn_cp(tensor, rank, init=init, svd=svd,
random_state=None,
normalize_factors=False)
norm_tensor = tl.norm(tensor, 2)
nb_modes = len(tensor.shape)
if sparsity_coefficients == None or len(sparsity_coefficients) != nb_modes:
#print(
# "Irrelevant number of sparsity coefficient (different from the number of modes), they have been set to None.")
sparsity_coefficients = [None for i in range(nb_modes)]
if fixed_modes == None:
fixed_modes = []
# Avoiding errors
for fixed_value in fixed_modes:
sparsity_coefficients[fixed_value] = None
# Generating the mode update sequence
modes_list = [mode for mode in range(tl.ndim(tensor)) if mode not in fixed_modes]
    # initialisation - declare local variables
    rec_errors = []
    # Iteration
for iteration in range(n_iter_max):
# One pass of least squares on each updated mode
for mode in modes_list:
# Computing Hadamard of cross-products
pseudo_inverse = tl.tensor(tl.ones((rank, rank)), **tl.context(tensor))
for i, factor in enumerate(factors):
if i != mode:
pseudo_inverse = pseudo_inverse*tl.dot(tl.transpose(factor), factor)
if not iteration and weights is not None:
# Take into account init weights
mttkrp = unfolding_dot_khatri_rao(tensor, (weights, factors), mode)
else:
mttkrp = unfolding_dot_khatri_rao(tensor, (None, factors), mode)
# Call the hals resolution with nnls, optimizing the current mode
if hals=='approx':
factors[mode] = tl.transpose(
hals_nnls_approx(tl.transpose(mttkrp), pseudo_inverse, tl.transpose(factors[mode]),
maxiter=100,sparsity_coefficient=sparsity_coefficients[mode])[0])
elif hals=='exact':
factors[mode] = tl.transpose(
hals_nnls_exact(tl.transpose(mttkrp), pseudo_inverse, tl.transpose(factors[mode]),
maxiter=5000)[0])
if tol:
factors_norm = cp_norm((weights, factors))
iprod = tl.sum(tl.sum(mttkrp*factor, axis=0)*weights)
rec_error = tl.sqrt(tl.abs(norm_tensor**2 + factors_norm**2 - 2*iprod)) / norm_tensor
rec_errors.append(rec_error)
if iteration > 1:
if verbose:
print('reconstruction error={}, variation={}.'.format(
rec_errors[-1], rec_errors[-2] - rec_errors[-1]))
if tol and abs(rec_errors[-2] - rec_errors[-1]) < tol:
if verbose:
print('converged in {} iterations.'.format(iteration))
break
cp_tensor = CPTensor((weights, factors))
if return_errors:
return cp_tensor, rec_errors
else:
return cp_tensor
|
def non_negative_parafac_hals(tensor, rank, n_iter_max=100, init="svd", svd='numpy_svd', tol=1e-7,
sparsity_coefficients=[], fixed_modes=[],hals='approx',
verbose=False, return_errors=False):
"""
Non-negative CP decomposition
    Uses HALS which updates each factor columnwise, fixing every other column, see [1]_
Parameters
----------
tensor : ndarray
rank : int
number of components
n_iter_max : int
maximum number of iteration
init : {'svd', 'random'}, optional
svd : str, default is 'numpy_svd'
function to use to compute the SVD, acceptable values in tensorly.SVD_FUNS
tol : float, optional
tolerance: the algorithm stops when the variation in
the reconstruction error is less than the tolerance
        Default: 1e-7
sparsity_coefficients: array of float (of length the number of modes)
The sparsity coefficients on each factor.
If set to None, the algorithm is computed without sparsity
Default: [],
fixed_modes: array of integers (between 0 and the number of modes)
Has to be set not to update a factor, 0 and 1 for U and V respectively
Default: []
verbose: boolean
Indicates whether the algorithm prints the successive
reconstruction errors or not
Default: False
return_errors: boolean
Indicates whether the algorithm should return all reconstruction errors
and computation time of each iteration or not
Default: False
Returns
-------
factors : ndarray list
list of positive factors of the CP decomposition
element `i` is of shape ``(tensor.shape[i], rank)``
errors: list
A list of reconstruction errors at each iteration of the algorithm.
toc: list
        A list with accumulated time at each iteration
References
----------
[1]: N. Gillis and F. Glineur, Accelerated Multiplicative Updates and
Hierarchical ALS Algorithms for Nonnegative Matrix Factorization,
Neural Computation 24 (4): 1085-1105, 2012.
"""
weights, factors = initialize_nn_cp(tensor, rank, init=init, svd=svd,
random_state=None,
normalize_factors=False)
norm_tensor = tl.norm(tensor, 2)
n_modes = tl.ndim(tensor)
    if sparsity_coefficients == None or len(sparsity_coefficients) != n_modes:
        #print(
        #    "Irrelevant number of sparsity coefficient (different from the number of modes), they have been set to None.")
        sparsity_coefficients = [None for i in range(n_modes)]
if fixed_modes == None:
fixed_modes = []
# Avoiding errors
for fixed_value in fixed_modes:
sparsity_coefficients[fixed_value] = None
# Generating the mode update sequence
modes_list = [mode for mode in range(tl.ndim(tensor)) if mode not in fixed_modes]
    # initialisation - declare local variables
    rec_errors = []
    # Iteration
for iteration in range(n_iter_max):
# One pass of least squares on each updated mode
for mode in modes_list:
# Computing Hadamard of cross-products
pseudo_inverse = tl.tensor(tl.ones((rank, rank)), **tl.context(tensor))
for i, factor in enumerate(factors):
if i != mode:
pseudo_inverse = pseudo_inverse*tl.dot(tl.transpose(factor), factor)
if not iteration and weights is not None:
# Take into account init weights
mttkrp = unfolding_dot_khatri_rao(tensor, (weights, factors), mode)
else:
mttkrp = unfolding_dot_khatri_rao(tensor, (None, factors), mode)
# Call the hals resolution with nnls, optimizing the current mode
if hals=='approx':
factors[mode] = tl.transpose(
hals_nnls_approx(tl.transpose(mttkrp), pseudo_inverse, tl.transpose(factors[mode]),
maxiter=100,sparsity_coefficient=sparsity_coefficients[mode])[0])
elif hals=='exact':
factors[mode] = tl.transpose(
hals_nnls_exact(tl.transpose(mttkrp), pseudo_inverse, tl.transpose(factors[mode]),
maxiter=5000)[0])
if tol:
factors_norm = cp_norm((weights, factors))
iprod = tl.sum(tl.sum(mttkrp*factor, axis=0)*weights)
rec_error = tl.sqrt(tl.abs(norm_tensor**2 + factors_norm**2 - 2*iprod)) / norm_tensor
rec_errors.append(rec_error)
if iteration > 1:
if verbose:
print('reconstruction error={}, variation={}.'.format(
rec_errors[-1], rec_errors[-2] - rec_errors[-1]))
if tol and abs(rec_errors[-2] - rec_errors[-1]) < tol:
if verbose:
print('converged in {} iterations.'.format(iteration))
break
cp_tensor = CPTensor((weights, factors))
if return_errors:
return cp_tensor, rec_errors
else:
return cp_tensor
|
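Inside the mode loop, pseudo_inverse is the Hadamard (elementwise) product of the Gram matrices of all factors except the one being updated; together with the MTTKRP it forms the normal equations that HALS solves column by column. A small NumPy sketch of that identity with toy shapes (no tensorly; the khatri_rao helper is written here only for the check):

import numpy as np

# Toy CP factors for a rank-3 decomposition of a (4, 5, 6) tensor.
rng = np.random.default_rng(0)
factors = [rng.random((dim, 3)) for dim in (4, 5, 6)]
mode = 1

# "Hadamard of cross-products": elementwise product of the Gram matrices
# of every factor except the one being updated.  This is the (rank, rank)
# matrix multiplying the updated factor in the HALS/ALS normal equations.
gram = np.ones((3, 3))
for i, factor in enumerate(factors):
    if i != mode:
        gram *= factor.T @ factor

def khatri_rao(mats):
    # Column-wise Kronecker product of a list of (n_i, rank) matrices.
    out = mats[0]
    for m in mats[1:]:
        out = (out[:, None, :] * m[None, :, :]).reshape(-1, m.shape[1])
    return out

# Identity used implicitly above: gram == KR(skip mode).T @ KR(skip mode)
kr = khatri_rao([f for i, f in enumerate(factors) if i != mode])
print(np.allclose(gram, kr.T @ kr))  # True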
26,225 |
def check_alerting_message(duthost, stopped_containers_list):
"""Checks whether the names of stopped containers appear in Monit alerting message.
Args:
duthost: Host DUT.
stopped_containers_list: A list of stopped container names.
Return:
None.
"""
alerting_message = duthost.shell("sudo cat /var/log/syslog | grep -m 1 '.*monit.*container_checker'",
module_ignore_errors=True)
pytest_assert(len(alerting_message["stdout_lines"]) > 0,
"Failed to get Monit alerting message from container_checker!")
for container_name in stopped_containers_list:
if container_name not in alerting_message["stdout_lines"][0]:
pytest.fail("Container '{}' was not running and not found in Monit alerting message!"
.format(container_name))
|
def check_alerting_message(duthost, stopped_containers_list):
"""Checks whether the names of stopped containers appear in Monit alerting message.
Args:
duthost: Host DUT.
stopped_containers_list: A list of stopped container names.
Return:
None.
"""
alerting_message = duthost.shell("sudo cat /var/log/syslog | grep -m 1 '.*monit.*container_checker'",
module_ignore_errors=True)
pytest_assert(len(stopped_containers_list) == 0 or len(alerting_message["stdout_lines"]) > 0,
"Failed to get Monit alerting message from container_checker!")
for container_name in stopped_containers_list:
if container_name not in alerting_message["stdout_lines"][0]:
pytest.fail("Container '{}' was not running and not found in Monit alerting message!"
.format(container_name))
|
31,729 |
def get_details_of_an_abuse_mailbox_campaign_command(client, args):
campaign_id = str(args.get('campaign_id', ''))
subtenant = args.get('subtenant', None)
response = client.get_details_of_an_abuse_mailbox_campaign_request(campaign_id, subtenant)
command_results = CommandResults(
outputs_prefix='AbnormalSecurity.AbuseCampaignDetails',
outputs_key_field='',
outputs=response,
raw_response=response
)
return command_results
|
def get_details_of_an_abuse_mailbox_campaign_command(client, args):
campaign_id = str(args.get('campaign_id', ''))
subtenant = args.get('subtenant', None)
response = client.get_details_of_an_abuse_mailbox_campaign_request(campaign_id, subtenant)
command_results = CommandResults(
outputs_prefix='AbnormalSecurity.AbuseCampaign',
outputs_key_field='',
outputs=response,
raw_response=response
)
return command_results
|
17,323 |
def open_rasterio(filename, parse_coordinates=None, chunks=None, cache=None,
lock=None):
"""Open a file with rasterio (experimental).
This should work with any file that rasterio can open (most often:
geoTIFF). The x and y coordinates are generated automatically from the
file's geoinformation, shifted to the center of each pixel (see
`"PixelIsArea" Raster Space
<http://web.archive.org/web/20160326194152/http://remotesensing.org/geotiff/spec/geotiff2.5.html#2.5.2>`_
for more information).
You can generate 2D coordinates from the file's attributes with::
from affine import Affine
da = xr.open_rasterio('path_to_file.tif')
transform = Affine.from_gdal(*da.attrs['transform'])
nx, ny = da.sizes['x'], da.sizes['y']
x, y = np.meshgrid(np.arange(nx)+0.5, np.arange(ny)+0.5) * transform
Parameters
----------
filename : str, rasterio.DatasetReader, or rasterio.WarpedVRT
Path to the file to open. Or already open rasterio dataset.
parse_coordinates : bool, optional
Whether to parse the x and y coordinates out of the file's
``transform`` attribute or not. The default is to automatically
parse the coordinates only if they are rectilinear (1D).
It can be useful to set ``parse_coordinates=False``
if your files are very large or if you don't need the coordinates.
chunks : int, tuple or dict, optional
Chunk sizes along each dimension, e.g., ``5``, ``(5, 5)`` or
        ``{'x': 5, 'y': 5}``. If chunks is provided, it is used to load the new
DataArray into a dask array.
cache : bool, optional
If True, cache data loaded from the underlying datastore in memory as
NumPy arrays when accessed to avoid reading from the underlying data-
store multiple times. Defaults to True unless you specify the `chunks`
argument to use dask, in which case it defaults to False.
lock : False, True or threading.Lock, optional
If chunks is provided, this argument is passed on to
:py:func:`dask.array.from_array`. By default, a global lock is
used to avoid issues with concurrent access to the same file when using
dask's multithreaded backend.
Returns
-------
data : DataArray
The newly created DataArray.
"""
import rasterio
from rasterio.vrt import WarpedVRT
vrt_params = None
if isinstance(filename, rasterio.io.DatasetReader):
filename = filename.name
elif isinstance(filename, rasterio.vrt.WarpedVRT):
vrt = filename
filename = vrt.src_dataset.name
vrt_params = dict(crs=vrt.crs.to_string(),
resampling=vrt.resampling,
src_nodata=vrt.src_nodata,
dst_nodata=vrt.dst_nodata,
tolerance=vrt.tolerance,
warp_extras=vrt.warp_extras)
if lock is None:
lock = RASTERIO_LOCK
manager = CachingFileManager(rasterio.open, filename, lock=lock, mode='r')
riods = manager.acquire()
if vrt_params is not None:
riods = WarpedVRT(riods, **vrt_params)
if cache is None:
cache = chunks is None
coords = OrderedDict()
# Get bands
if riods.count < 1:
raise ValueError('Unknown dims')
coords['band'] = np.asarray(riods.indexes)
# Get coordinates
if LooseVersion(rasterio.__version__) < '1.0':
transform = riods.affine
else:
transform = riods.transform
if transform.is_rectilinear:
# 1d coordinates
parse = True if parse_coordinates is None else parse_coordinates
if parse:
nx, ny = riods.width, riods.height
# xarray coordinates are pixel centered
x, _ = (np.arange(nx) + 0.5, np.zeros(nx) + 0.5) * transform
_, y = (np.zeros(ny) + 0.5, np.arange(ny) + 0.5) * transform
coords['y'] = y
coords['x'] = x
else:
# 2d coordinates
parse = False if (parse_coordinates is None) else parse_coordinates
if parse:
warnings.warn(
"The file coordinates' transformation isn't "
"rectilinear: xarray won't parse the coordinates "
"in this case. Set `parse_coordinates=False` to "
"suppress this warning.",
RuntimeWarning, stacklevel=3)
# Attributes
attrs = dict()
# Affine transformation matrix (always available)
# This describes coefficients mapping pixel coordinates to CRS
# For serialization store as tuple of 6 floats, the last row being
# always (0, 0, 1) per definition (see
# https://github.com/sgillies/affine)
attrs['transform'] = tuple(transform)[:6]
if hasattr(riods, 'crs') and riods.crs:
# CRS is a dict-like object specific to rasterio
# If CRS is not None, we convert it back to a PROJ4 string using
# rasterio itself
attrs['crs'] = riods.crs.to_proj4()
if hasattr(riods, 'res'):
# (width, height) tuple of pixels in units of CRS
attrs['res'] = riods.res
if hasattr(riods, 'is_tiled'):
# Is the TIF tiled? (bool)
# We cast it to an int for netCDF compatibility
attrs['is_tiled'] = np.uint8(riods.is_tiled)
if hasattr(riods, 'nodatavals'):
# The nodata values for the raster bands
attrs['nodatavals'] = tuple(
np.nan if nodataval is None else nodataval
for nodataval in riods.nodatavals)
# Parse extra metadata from tags, if supported
parsers = {'ENVI': _parse_envi}
driver = riods.driver
if driver in parsers:
meta = parsers[driver](riods.tags(ns=driver))
for k, v in meta.items():
# Add values as coordinates if they match the band count,
# as attributes otherwise
if (isinstance(v, (list, np.ndarray))
and len(v) == riods.count):
coords[k] = ('band', np.asarray(v))
else:
attrs[k] = v
data = indexing.LazilyOuterIndexedArray(
RasterioArrayWrapper(manager, lock, vrt_params))
# this lets you write arrays loaded with rasterio
data = indexing.CopyOnWriteArray(data)
if cache and chunks is None:
data = indexing.MemoryCachedArray(data)
result = DataArray(data=data, dims=('band', 'y', 'x'),
coords=coords, attrs=attrs)
if chunks is not None:
from dask.base import tokenize
# augment the token with the file modification time
try:
mtime = os.path.getmtime(filename)
except OSError:
# the filename is probably an s3 bucket rather than a regular file
mtime = None
token = tokenize(filename, mtime, chunks)
name_prefix = 'open_rasterio-%s' % token
result = result.chunk(chunks, name_prefix=name_prefix, token=token)
# Make the file closeable
result._file_obj = manager
return result
|
def open_rasterio(filename, parse_coordinates=None, chunks=None, cache=None,
lock=None):
"""Open a file with rasterio (experimental).
This should work with any file that rasterio can open (most often:
geoTIFF). The x and y coordinates are generated automatically from the
file's geoinformation, shifted to the center of each pixel (see
`"PixelIsArea" Raster Space
<http://web.archive.org/web/20160326194152/http://remotesensing.org/geotiff/spec/geotiff2.5.html#2.5.2>`_
for more information).
You can generate 2D coordinates from the file's attributes with::
from affine import Affine
da = xr.open_rasterio('path_to_file.tif')
transform = Affine.from_gdal(*da.attrs['transform'])
nx, ny = da.sizes['x'], da.sizes['y']
x, y = np.meshgrid(np.arange(nx)+0.5, np.arange(ny)+0.5) * transform
Parameters
----------
filename : str, rasterio.DatasetReader, or rasterio.WarpedVRT
Path to the file to open. Or already open rasterio dataset.
parse_coordinates : bool, optional
Whether to parse the x and y coordinates out of the file's
``transform`` attribute or not. The default is to automatically
parse the coordinates only if they are rectilinear (1D).
It can be useful to set ``parse_coordinates=False``
if your files are very large or if you don't need the coordinates.
chunks : int, tuple or dict, optional
Chunk sizes along each dimension, e.g., ``5``, ``(5, 5)`` or
        ``{'x': 5, 'y': 5}``. If chunks is provided, it is used to load the new
DataArray into a dask array.
cache : bool, optional
If True, cache data loaded from the underlying datastore in memory as
NumPy arrays when accessed to avoid reading from the underlying data-
store multiple times. Defaults to True unless you specify the `chunks`
argument to use dask, in which case it defaults to False.
lock : False, True or threading.Lock, optional
If chunks is provided, this argument is passed on to
:py:func:`dask.array.from_array`. By default, a global lock is
used to avoid issues with concurrent access to the same file when using
dask's multithreaded backend.
Returns
-------
data : DataArray
The newly created DataArray.
"""
import rasterio
from rasterio.vrt import WarpedVRT
vrt_params = None
if isinstance(filename, rasterio.io.DatasetReader):
filename = filename.name
elif isinstance(filename, rasterio.vrt.WarpedVRT):
vrt = filename
filename = vrt.src_dataset.name
vrt_params = dict(crs=vrt.crs.to_string(),
resampling=vrt.resampling,
src_nodata=vrt.src_nodata,
dst_nodata=vrt.dst_nodata,
tolerance=vrt.tolerance,
warp_extras=vrt.warp_extras)
if lock is None:
lock = RASTERIO_LOCK
manager = CachingFileManager(rasterio.open, filename, lock=lock, mode='r')
riods = manager.acquire()
if vrt_params is not None:
riods = WarpedVRT(riods, **vrt_params)
if cache is None:
cache = chunks is None
coords = OrderedDict()
# Get bands
if riods.count < 1:
raise ValueError('Unknown dims')
coords['band'] = np.asarray(riods.indexes)
# Get coordinates
if LooseVersion(rasterio.__version__) < '1.0':
transform = riods.affine
else:
transform = riods.transform
if transform.is_rectilinear:
# 1d coordinates
parse = True if parse_coordinates is None else parse_coordinates
if parse:
nx, ny = riods.width, riods.height
# xarray coordinates are pixel centered
x, _ = (np.arange(nx) + 0.5, np.zeros(nx) + 0.5) * transform
_, y = (np.zeros(ny) + 0.5, np.arange(ny) + 0.5) * transform
coords['y'] = y
coords['x'] = x
else:
# 2d coordinates
parse = False if (parse_coordinates is None) else parse_coordinates
if parse:
warnings.warn(
"The file coordinates' transformation isn't "
"rectilinear: xarray won't parse the coordinates "
"in this case. Set `parse_coordinates=False` to "
"suppress this warning.",
RuntimeWarning, stacklevel=3)
# Attributes
attrs = dict()
# Affine transformation matrix (always available)
# This describes coefficients mapping pixel coordinates to CRS
# For serialization store as tuple of 6 floats, the last row being
# always (0, 0, 1) per definition (see
# https://github.com/sgillies/affine)
attrs['transform'] = tuple(transform)[:6]
if hasattr(riods, 'crs') and riods.crs:
# CRS is a dict-like object specific to rasterio
# If CRS is not None, we convert it back to a PROJ4 string using
# rasterio itself
attrs['crs'] = riods.crs.to_dict()
if hasattr(riods, 'res'):
# (width, height) tuple of pixels in units of CRS
attrs['res'] = riods.res
if hasattr(riods, 'is_tiled'):
# Is the TIF tiled? (bool)
# We cast it to an int for netCDF compatibility
attrs['is_tiled'] = np.uint8(riods.is_tiled)
if hasattr(riods, 'nodatavals'):
# The nodata values for the raster bands
attrs['nodatavals'] = tuple(
np.nan if nodataval is None else nodataval
for nodataval in riods.nodatavals)
# Parse extra metadata from tags, if supported
parsers = {'ENVI': _parse_envi}
driver = riods.driver
if driver in parsers:
meta = parsers[driver](riods.tags(ns=driver))
for k, v in meta.items():
# Add values as coordinates if they match the band count,
# as attributes otherwise
if (isinstance(v, (list, np.ndarray))
and len(v) == riods.count):
coords[k] = ('band', np.asarray(v))
else:
attrs[k] = v
data = indexing.LazilyOuterIndexedArray(
RasterioArrayWrapper(manager, lock, vrt_params))
# this lets you write arrays loaded with rasterio
data = indexing.CopyOnWriteArray(data)
if cache and chunks is None:
data = indexing.MemoryCachedArray(data)
result = DataArray(data=data, dims=('band', 'y', 'x'),
coords=coords, attrs=attrs)
if chunks is not None:
from dask.base import tokenize
# augment the token with the file modification time
try:
mtime = os.path.getmtime(filename)
except OSError:
# the filename is probably an s3 bucket rather than a regular file
mtime = None
token = tokenize(filename, mtime, chunks)
name_prefix = 'open_rasterio-%s' % token
result = result.chunk(chunks, name_prefix=name_prefix, token=token)
# Make the file closeable
result._file_obj = manager
return result
|
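For the rectilinear branch, the pixel-centered coordinate arrays follow directly from the affine coefficients. A minimal sketch with toy coefficients (affine-library ordering assumed: x = a*col + b*row + c, y = d*col + e*row + f):

import numpy as np

# Toy affine coefficients for a 10 m grid anchored at (100000, 200000).
a, b, c, d, e, f = 10.0, 0.0, 100000.0, 0.0, -10.0, 200000.0
nx, ny = 4, 3

# For a rectilinear transform (b == d == 0) the coordinates separate into
# 1D arrays; adding 0.5 shifts from the pixel corner to the pixel center,
# which is what open_rasterio stores in coords['x'] and coords['y'].
x = a * (np.arange(nx) + 0.5) + c
y = e * (np.arange(ny) + 0.5) + f
print(x)  # [100005. 100015. 100025. 100035.]
print(y)  # [199995. 199985. 199975.]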
38,450 |
def dump_grid_bucket_to_file(gb, fn, dfn=False, use_dim_as_surfix=False):
"""
Dump all grids and mortar grids in a GridBucket to a file.
Parameters:
gb (GridBucket): Each grid of the grid bucket will be written to file.
fn (String): The file name. This name will be passed to open() using 'w'
    with a suffix giving the grid number.
dfn (bool): OPTIONAL. Set to true if grid bucket is a DFN network
Returns:
None
"""
# If use_dim_as_surfix is True we use the grid dimension as an index.
# This can only be done if there is one grid per dimension.
# Otherwise we use the graph node number.
if use_dim_as_surfix:
for dim in range(gb.dim_max()):
if len(gb.grids_of_dimension(dim)) > 1:
raise ValueError(
"""dump_grid_bucket_to_file tried to use dimension as surfix.
This is only possible if there is 1 grid per dimension. The gb
has {} grids of dimension {}""".format(
len(gb.grids_of_dimension(dim)), dim
)
)
for g, d in gb:
if use_dim_as_surfix:
grid_name = append_id_(fn, str(g.dim))
else:
grid_name = append_id_(fn, d["node_number"])
g.idx = d["node_number"]
dim = g.dim
g.dim = gb.dim_max() + dfn
dump_grid_to_file(g, grid_name)
g.dim = dim
for e, d in gb.edges():
dump_mortar_grid_to_file(gb, e, d, fn, use_dim_as_surfix, dfn)
|
def dump_grid_bucket_to_file(gb: pp.GridBucket, fn: str, dfn: bool = False, use_dim_as_surfix: bool=False) -> None:
"""
Dump all grids and mortar grids in a GridBucket to a file.
Parameters:
gb (GridBucket): Each grid of the grid bucket will be written to file.
fn (String): The file name. This name will be passed to open() using 'w'
    with a suffix giving the grid number.
dfn (bool): OPTIONAL. Set to true if grid bucket is a DFN network
Returns:
None
"""
# If use_dim_as_surfix is True we use the grid dimension as an index.
# This can only be done if there is one grid per dimension.
# Otherwise we use the graph node number.
if use_dim_as_surfix:
for dim in range(gb.dim_max()):
if len(gb.grids_of_dimension(dim)) > 1:
raise ValueError(
"""dump_grid_bucket_to_file tried to use dimension as surfix.
This is only possible if there is 1 grid per dimension. The gb
has {} grids of dimension {}""".format(
len(gb.grids_of_dimension(dim)), dim
)
)
for g, d in gb:
if use_dim_as_surfix:
grid_name = append_id_(fn, str(g.dim))
else:
grid_name = append_id_(fn, d["node_number"])
g.idx = d["node_number"]
dim = g.dim
g.dim = gb.dim_max() + dfn
dump_grid_to_file(g, grid_name)
g.dim = dim
for e, d in gb.edges():
dump_mortar_grid_to_file(gb, e, d, fn, use_dim_as_surfix, dfn)
|
30,749 |
def upload_files(
file_path: str, dir_path: str, /,
types: Optional[Set[str]] = None, extensions: Optional[Set[str]] = None,
types_inclusive_or_exclusive: Optional[str] = None,
extensions_inclusive_or_exclusive: Optional[str] = None,
wpa_pwd: Optional[str] = None,
rsa_path: Optional[str] = None,
limit: int = 5,
**kwargs
) -> Union[CommandResults, str]:
    """Extracts files and delivers them to Cortex XSOAR
Args:
file_path: the path to the PCAP file
dir_path: dir path for the files
types: types to filter by.
extensions: extensions to filter by.
types_inclusive_or_exclusive: should types set be inclusive or exclusive
extensions_inclusive_or_exclusive: should extensions set be inclusive or exclusive
wpa_pwd: password to the file (if WPA-PWD protected)
rsa_path: path to a private key file (if TLS encrypted)
limit: maximum files to extract (default 5)
Returns:
Extracted files to download
"""
if kwargs is not None:
demisto.debug(f'PcapFileExtractor: Got extra arguments in upload_files:\n{kwargs}')
command = ['tshark', '-r', f'{file_path}', '--export-objects', f'http,{dir_path}',
'--export-objects', f'smb,{dir_path}', '--export-objects', f'imf,{dir_path}',
'--export-objects', f'tftp,{dir_path}', '--export-objects', f'dicom,{dir_path}']
# If WPA-PWD protected
if wpa_pwd:
command.extend([
'-o', 'wlan.enable_decryption:TRUE',
'-o', f'uat:80211_keys:"wpa-pwd","{wpa_pwd}"'
])
# If need to decrypt the file using a RSA key
if rsa_path:
command.extend(['-o', f'uat:rsa_keys:"{rsa_path}",""'])
run_command(command)
context = []
md5 = hashlib.md5()
sha1 = hashlib.sha1()
sha256 = hashlib.sha256()
for root, _, files in os.walk(dir_path):
# Limit the files list to minimum
files = files[: limit]
if not files:
return 'No files found.'
# Filter files
files = filter_files(root, files,
types=types,
extensions=extensions,
extensions_inclusive_or_exclusive=extensions_inclusive_or_exclusive,
types_inclusive_or_exclusive=types_inclusive_or_exclusive)
for file in files:
file_path = os.path.join(root, file)
file_name = os.path.join(file)
with open(file_path, 'rb') as file_stream:
data = file_stream.read()
demisto.results(fileResult(file_name, data))
md5.update(data)
sha1.update(data)
sha256.update(data)
context.append({
'FileMD5': md5.hexdigest(),
'FileSHA1': sha1.hexdigest(),
'FileSHA256': sha256.hexdigest(),
'FileName': file_name,
'FileSize': os.path.getsize(file_path),
'FileExtension': os.path.splitext(file_name)[1]
})
readable_output = tableToMarkdown('Pcap Extracted Files', [{'name': file_name} for file_name in files])
results = CommandResults(
outputs_prefix='PcapExtractedFiles',
outputs_key_field='FileMD5',
outputs=context,
readable_output=readable_output
)
return results
else:
raise DemistoException('No files found in path.')
|
def upload_files(
file_path: str, dir_path: str,
types: Optional[Set[str]] = None, extensions: Optional[Set[str]] = None,
types_inclusive_or_exclusive: Optional[str] = None,
extensions_inclusive_or_exclusive: Optional[str] = None,
wpa_pwd: Optional[str] = None,
rsa_path: Optional[str] = None,
limit: int = 5,
**kwargs
) -> Union[CommandResults, str]:
    """Extracts files and delivers them to Cortex XSOAR
Args:
file_path: the path to the PCAP file
dir_path: dir path for the files
types: types to filter by.
extensions: extensions to filter by.
types_inclusive_or_exclusive: should types set be inclusive or exclusive
extensions_inclusive_or_exclusive: should extensions set be inclusive or exclusive
wpa_pwd: password to the file (if WPA-PWD protected)
rsa_path: path to a private key file (if TLS encrypted)
limit: maximum files to extract (default 5)
Returns:
Extracted files to download
"""
if kwargs is not None:
demisto.debug(f'PcapFileExtractor: Got extra arguments in upload_files:\n{kwargs}')
command = ['tshark', '-r', f'{file_path}', '--export-objects', f'http,{dir_path}',
'--export-objects', f'smb,{dir_path}', '--export-objects', f'imf,{dir_path}',
'--export-objects', f'tftp,{dir_path}', '--export-objects', f'dicom,{dir_path}']
# If WPA-PWD protected
if wpa_pwd:
command.extend([
'-o', 'wlan.enable_decryption:TRUE',
'-o', f'uat:80211_keys:"wpa-pwd","{wpa_pwd}"'
])
# If need to decrypt the file using a RSA key
if rsa_path:
command.extend(['-o', f'uat:rsa_keys:"{rsa_path}",""'])
run_command(command)
context = []
md5 = hashlib.md5()
sha1 = hashlib.sha1()
sha256 = hashlib.sha256()
for root, _, files in os.walk(dir_path):
# Limit the files list to minimum
files = files[: limit]
if not files:
return 'No files found.'
# Filter files
files = filter_files(root, files,
types=types,
extensions=extensions,
extensions_inclusive_or_exclusive=extensions_inclusive_or_exclusive,
types_inclusive_or_exclusive=types_inclusive_or_exclusive)
for file in files:
file_path = os.path.join(root, file)
file_name = os.path.join(file)
with open(file_path, 'rb') as file_stream:
data = file_stream.read()
demisto.results(fileResult(file_name, data))
md5.update(data)
sha1.update(data)
sha256.update(data)
context.append({
'FileMD5': md5.hexdigest(),
'FileSHA1': sha1.hexdigest(),
'FileSHA256': sha256.hexdigest(),
'FileName': file_name,
'FileSize': os.path.getsize(file_path),
'FileExtension': os.path.splitext(file_name)[1]
})
readable_output = tableToMarkdown('Pcap Extracted Files', [{'name': file_name} for file_name in files])
results = CommandResults(
outputs_prefix='PcapExtractedFiles',
outputs_key_field='FileMD5',
outputs=context,
readable_output=readable_output
)
return results
else:
raise DemistoException('No files found in path.')
|
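One thing worth noting in both versions: the md5/sha1/sha256 objects are created once, outside the file loop, so from the second file onwards each reported digest covers the concatenation of everything read so far rather than the individual file. A per-file hashing sketch (hypothetical helper, not part of the integration), with fresh hash objects per file:

import hashlib
import os

def file_digests(path: str) -> dict:
    """Hash a single file with fresh hash objects, so each digest covers only that file."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, 'rb') as fh:
        data = fh.read()
    md5.update(data)
    sha1.update(data)
    sha256.update(data)
    return {
        'FileMD5': md5.hexdigest(),
        'FileSHA1': sha1.hexdigest(),
        'FileSHA256': sha256.hexdigest(),
        'FileName': os.path.basename(path),
        'FileSize': os.path.getsize(path),
    }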
7,138 |
def forward_energy(image, mode, mask=None):
"""Find the edge magnitude using forward energy for seam carving.
Depending on the direction in `mode`, determines the magnitude of
each pixel based on the new edges created after removing a seam
containing that pixel.
Parameters
----------
image : 2-D array
Image to process.
mode : str {'horizontal', 'vertical'}
Indicates whether seams are to be removed vertically or horizontally.
Removing seams horizontally will decrease the height whereas removing
vertically will decrease the width.
mask : 2-D array, optional
An optional mask to limit the application to a certain area.
Note that pixels surrounding masked regions are also masked to
prevent masked regions from affecting the result.
Returns
-------
output : 2-D array
The forward energy edge map.
References
----------
.. [1] Michael Rubinstein, Ariel Shamir, and Shai Avidan
"Improved Seam Carving for Video Retargeting"
http://www.faculty.idc.ac.il/arik/SCWeb/vidret/index.html
"""
assert_nD(image, 2)
image = img_as_float(image)
if mode == 'horizontal':
image = np.swapaxes(image, 0, 1)
height = image.shape[0]
width = image.shape[1]
energy = np.zeros((height, width))
m = np.zeros((height, width))
U = np.roll(image, 1, axis=0)
L = np.roll(image, 1, axis=1)
R = np.roll(image, -1, axis=1)
cU = np.abs(R - L)
cL = np.abs(U - L) + cU
cR = np.abs(U - R) + cU
for i in range(1, height):
mU = m[i - 1]
mL = np.roll(mU, 1)
mR = np.roll(mU, -1)
mULR = np.array([mU, mL, mR])
cULR = np.array([cU[i], cL[i], cR[i]])
mULR += cULR
argmins = np.argmin(mULR, axis=0)
m[i] = np.choose(argmins, mULR)
energy[i] = np.choose(argmins, cULR)
if mode == 'horizontal':
energy = np.swapaxes(energy, 0, 1)
return _mask_filter_result(energy, mask)
|
def forward_energy(image, mode, mask=None):
"""Find the edge magnitude using forward energy for seam carving.
Depending on the direction in `mode`, determines the magnitude of
each pixel based on the new edges created after removing a seam
containing that pixel.
Parameters
----------
image : array, shape (M, N)
Image to process.
mode : str {'horizontal', 'vertical'}
Indicates whether seams are to be removed vertically or horizontally.
Removing seams horizontally will decrease the height whereas removing
vertically will decrease the width.
mask : 2-D array, optional
An optional mask to limit the application to a certain area.
Note that pixels surrounding masked regions are also masked to
prevent masked regions from affecting the result.
Returns
-------
output : 2-D array
The forward energy edge map.
References
----------
.. [1] Michael Rubinstein, Ariel Shamir, and Shai Avidan
"Improved Seam Carving for Video Retargeting"
http://www.faculty.idc.ac.il/arik/SCWeb/vidret/index.html
"""
assert_nD(image, 2)
image = img_as_float(image)
if mode == 'horizontal':
image = np.swapaxes(image, 0, 1)
height = image.shape[0]
width = image.shape[1]
energy = np.zeros((height, width))
m = np.zeros((height, width))
U = np.roll(image, 1, axis=0)
L = np.roll(image, 1, axis=1)
R = np.roll(image, -1, axis=1)
cU = np.abs(R - L)
cL = np.abs(U - L) + cU
cR = np.abs(U - R) + cU
for i in range(1, height):
mU = m[i - 1]
mL = np.roll(mU, 1)
mR = np.roll(mU, -1)
mULR = np.array([mU, mL, mR])
cULR = np.array([cU[i], cL[i], cR[i]])
mULR += cULR
argmins = np.argmin(mULR, axis=0)
m[i] = np.choose(argmins, mULR)
energy[i] = np.choose(argmins, cULR)
if mode == 'horizontal':
energy = np.swapaxes(energy, 0, 1)
return _mask_filter_result(energy, mask)
|
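The loop body is a vectorised dynamic-programming step: every pixel in row i keeps the cheapest of its three upper neighbours (up, upper-left, upper-right), each charged with the cost of the edge its removal would create, and the chosen cost becomes that pixel's forward energy. The same single-row update in isolation, with toy numbers:

import numpy as np

# One row of the forward-energy DP, mirroring the loop above.
m_prev = np.array([3.0, 1.0, 4.0, 1.0])   # cumulative cost of previous row
cU = np.array([0.2, 0.1, 0.3, 0.4])       # cost if coming from directly above
cL = np.array([0.5, 0.2, 0.1, 0.6])       # cost if coming from the upper-left
cR = np.array([0.1, 0.7, 0.2, 0.3])       # cost if coming from the upper-right

mU = m_prev
mL = np.roll(mU, 1)    # upper-left neighbour (wraps at the border, as in the code)
mR = np.roll(mU, -1)   # upper-right neighbour
mULR = np.array([mU, mL, mR]) + np.array([cU, cL, cR])
argmins = np.argmin(mULR, axis=0)

m_row = np.choose(argmins, mULR)               # new cumulative cost row
energy_row = np.choose(argmins, [cU, cL, cR])  # forward-energy map row
print(argmins, m_row, energy_row)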
44,073 |
def compare_fragment_nodes(node_data, expected_data):
"""Helper function to comper nodes of fragment graphs"""
expected = [(exp_data[0].name, exp_data[0].wires, exp_data[1]) for exp_data in expected_data]
for data in node_data:
# The exact ordering of node data within the list varies on each call
assert (data[0].name, data[0].wires, data[1]) in expected
|
def compare_fragment_nodes(node_data, expected_data):
"""Helper function to compare nodes of fragment graphs"""
expected = [(exp_data[0].name, exp_data[0].wires, exp_data[1]) for exp_data in expected_data]
for data in node_data:
# The exact ordering of node data within the list varies on each call
assert (data[0].name, data[0].wires, data[1]) in expected
|
43,974 |
def _pauli_mult(p1, p2):
    r"""Return the result of multiplication between two tensor products of Pauli operators.
    The Pauli operator :math:`(P_0)` is denoted by [(0, 'P')], where :math:`P` represents
    :math:`X`, :math:`Y` or :math:`Z`.
Args:
p1 (list[list[tuple[int, str]]]): the first tensor product of pauli operators
p2 (list[list[tuple[int, str]]]): the second tensor product of pauli operators
Returns
tuple(list[tuple[int, str]], complex): list of the pauli operators and the coefficient
**Example**
>>> p1 = [(0, "X"), (1, "Y")], # X_0 @ Y_1
>>> p2 = [(0, "X"), (2, "Y")], # X_0 @ Y_2
>>> _pauli_mult(p1, p2)
([(2, "Y"), (1, "Y")], 1.0) # p1 @ p2 = X_0 @ Y_1 @ X_0 @ Y_2
"""
c = 1.0
t1 = [t[0] for t in p1]
t2 = [t[0] for t in p2]
k = []
for i in p1:
if i[0] in t1 and i[0] not in t2:
k.append((i[0], pauli_mult[i[1]]))
for j in p2:
if j[0] in t2 and j[0] not in t1:
k.append((j[0], pauli_mult[j[1]]))
if i[0] == j[0]:
if i[1] + j[1] in pauli_coeff:
k.append((i[0], pauli_mult[i[1] + j[1]]))
c = c * pauli_coeff[i[1] + j[1]]
else:
k.append((i[0], pauli_mult[i[1] + j[1]]))
k = [i for i in k if "I" not in i[1]]
for item in k:
k_ = [i for i, x in enumerate(k) if x == item]
if len(k_) >= 2:
for j in k_[::-1][:-1]:
del k[j]
return k, c
|
def _pauli_mult(p1, p2):
    r"""Return the result of multiplication between two tensor products of Pauli operators.
    The Pauli operator :math:`(P_0)` is denoted by [(0, 'P')], where :math:`P` represents
    :math:`X`, :math:`Y` or :math:`Z`.
Args:
p1 (list[list[tuple[int, str]]]): the first tensor product of pauli operators
p2 (list[list[tuple[int, str]]]): the second tensor product of pauli operators
Returns
tuple(list[tuple[int, str]], complex): list of the Pauli operators and the coefficient
**Example**
>>> p1 = [(0, "X"), (1, "Y")], # X_0 @ Y_1
>>> p2 = [(0, "X"), (2, "Y")], # X_0 @ Y_2
>>> _pauli_mult(p1, p2)
([(2, "Y"), (1, "Y")], 1.0) # p1 @ p2 = X_0 @ Y_1 @ X_0 @ Y_2
"""
c = 1.0
t1 = [t[0] for t in p1]
t2 = [t[0] for t in p2]
k = []
for i in p1:
if i[0] in t1 and i[0] not in t2:
k.append((i[0], pauli_mult[i[1]]))
for j in p2:
if j[0] in t2 and j[0] not in t1:
k.append((j[0], pauli_mult[j[1]]))
if i[0] == j[0]:
if i[1] + j[1] in pauli_coeff:
k.append((i[0], pauli_mult[i[1] + j[1]]))
c = c * pauli_coeff[i[1] + j[1]]
else:
k.append((i[0], pauli_mult[i[1] + j[1]]))
k = [i for i in k if "I" not in i[1]]
for item in k:
k_ = [i for i, x in enumerate(k) if x == item]
if len(k_) >= 2:
for j in k_[::-1][:-1]:
del k[j]
return k, c
|
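The function relies on two module-level lookup tables, pauli_mult for the resulting single-qubit operator and pauli_coeff for the accompanying phase. A hypothetical, self-contained stand-in for those tables (the layout is only assumed from how the function indexes them), illustrating e.g. X * Y = iZ:

# Hypothetical stand-ins for the module-level tables used by _pauli_mult.
# Single-qubit products: P * P = I, X*Y = iZ, Y*X = -iZ, and so on.
pauli_mult = {
    "XX": "I", "YY": "I", "ZZ": "I", "II": "I",
    "XY": "Z", "YX": "Z", "YZ": "X", "ZY": "X", "ZX": "Y", "XZ": "Y",
    "IX": "X", "XI": "X", "IY": "Y", "YI": "Y", "IZ": "Z", "ZI": "Z",
    "X": "X", "Y": "Y", "Z": "Z", "I": "I",
}
pauli_coeff = {
    "XY": 1j, "YX": -1j, "YZ": 1j, "ZY": -1j, "ZX": 1j, "XZ": -1j,
}

# X_0 @ Y_0 -> i Z_0, using the same per-wire lookups _pauli_mult performs.
op, coeff = pauli_mult["XY"], pauli_coeff.get("XY", 1.0)
print(op, coeff)  # Z 1j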
37,825 |
def call(
*args: PathOrStr,
env: Optional[Dict[str, str]] = None,
cwd: Optional[PathOrStr] = None,
text: bool = False,
) -> Optional[str]:
"""
Run subprocess.run, but print the commands first. Takes the commands as
*args. Should use shell=True on Windows due to a bug. Also converts to
Paths to strings, due to Windows behavior at least on older Pythons.
https://bugs.python.org/issue8557
"""
args_ = [str(arg) for arg in args]
# print the command executing for the logs
print("+ " + " ".join(shlex.quote(a) for a in args_))
kwargs: Dict[str, Any] = {}
if text:
kwargs["universal_newlines"] = True
kwargs["stdout"] = subprocess.PIPE
result = subprocess.run(args_, check=True, shell=IS_WIN, env=env, cwd=cwd, **kwargs)
if not text:
return None
return cast(str, result.stdout)
|
def call(
*args: PathOrStr,
env: Optional[Dict[str, str]] = None,
cwd: Optional[PathOrStr] = None,
text: bool = False,
) -> Optional[str]:
"""
Run subprocess.run, but print the commands first. Takes the commands as
    *args. Uses shell=True on Windows due to a bug. Also converts
    Paths to strings, due to Windows behavior at least on older Pythons.
https://bugs.python.org/issue8557
"""
args_ = [str(arg) for arg in args]
# print the command executing for the logs
print("+ " + " ".join(shlex.quote(a) for a in args_))
kwargs: Dict[str, Any] = {}
if text:
kwargs["universal_newlines"] = True
kwargs["stdout"] = subprocess.PIPE
result = subprocess.run(args_, check=True, shell=IS_WIN, env=env, cwd=cwd, **kwargs)
if not text:
return None
return cast(str, result.stdout)
|
33,509 |
def install_stepfunctions_local():
if not os.path.exists(INSTALL_PATH_STEPFUNCTIONS_JAR):
# pull the JAR file from the Docker image, which is more up-to-date than the downloadable JAR file
if not has_docker():
# TODO: works only when running on the host, outside of Docker -> add a fallback if running in Docker?
LOG.warning("Docker not available - skipping installation of StepFunctions dependency")
return
log_install_msg("Step Functions")
mkdir(INSTALL_DIR_STEPFUNCTIONS)
DOCKER_CLIENT.pull_image(IMAGE_NAME_SFN_LOCAL)
docker_name = "tmp-ls-sfn"
DOCKER_CLIENT.run_container(
IMAGE_NAME_SFN_LOCAL,
remove=True,
entrypoint="",
name=docker_name,
detach=True,
command=["sleep", "15"],
)
time.sleep(5)
DOCKER_CLIENT.copy_from_container(
docker_name, local_path=dirs.static_libs, container_path="/home/stepfunctionslocal/"
)
path = Path(f"{dirs.static_libs}/stepfunctionslocal/")
for file in path.glob("*.jar"):
file.rename(Path(INSTALL_DIR_STEPFUNCTIONS) / file.name)
rm_rf("%s/stepfunctionslocal" % dirs.static_libs)
classes = [
SFN_PATCH_CLASS1,
SFN_PATCH_CLASS2,
SFN_PATCH_CLASS_REGION,
SFN_PATCH_CLASS_STARTER,
SFN_PATCH_CLASS_ASYNC2SERVICEAPI,
SFN_PATCH_CLASS_DESCRIBEEXECUTIONPARSED,
SFN_PATCH_FILE_METAINF,
]
for patch_class in classes:
patch_url = f"{SFN_PATCH_URL_PREFIX}/{patch_class}"
add_file_to_jar(patch_class, patch_url, target_jar=INSTALL_PATH_STEPFUNCTIONS_JAR)
# special case for Manifest file - extract first, replace content, then update in JAR file
manifest_file = os.path.join(INSTALL_DIR_STEPFUNCTIONS, "META-INF", "MANIFEST.MF")
if not os.path.exists(manifest_file):
content = run(["unzip", "-p", INSTALL_PATH_STEPFUNCTIONS_JAR, "META-INF/MANIFEST.MF"])
content = re.sub(
"Main-Class: .+", "Main-Class: cloud.localstack.StepFunctionsStarter", content
)
classpath = " ".join([os.path.basename(jar) for jar in JAR_URLS])
content = re.sub(r"Class-Path: \. ", f"Class-Path: {classpath} . ", content)
save_file(manifest_file, content)
run(
["zip", INSTALL_PATH_STEPFUNCTIONS_JAR, "META-INF/MANIFEST.MF"],
cwd=INSTALL_DIR_STEPFUNCTIONS,
)
# download additional jar libs
for jar_url in JAR_URLS:
target = os.path.join(INSTALL_DIR_STEPFUNCTIONS, os.path.basename(jar_url))
if not file_exists_not_empty(target):
download(jar_url, target)
# download aws-sdk lambda handler
target = os.path.join(INSTALL_DIR_STEPFUNCTIONS, "localstack-internal-awssdk", "awssdk.zip")
if not file_exists_not_empty(target):
download(SFN_AWS_SDK_LAMBDA_ZIP_FILE, target)
|
def install_stepfunctions_local():
if not os.path.exists(INSTALL_PATH_STEPFUNCTIONS_JAR):
# pull the JAR file from the Docker image, which is more up-to-date than the downloadable JAR file
if not has_docker():
# TODO: works only when a docker socket is available -> add a fallback if running without Docker?
LOG.warning("Docker not available - skipping installation of StepFunctions dependency")
return
log_install_msg("Step Functions")
mkdir(INSTALL_DIR_STEPFUNCTIONS)
DOCKER_CLIENT.pull_image(IMAGE_NAME_SFN_LOCAL)
docker_name = "tmp-ls-sfn"
DOCKER_CLIENT.run_container(
IMAGE_NAME_SFN_LOCAL,
remove=True,
entrypoint="",
name=docker_name,
detach=True,
command=["sleep", "15"],
)
time.sleep(5)
DOCKER_CLIENT.copy_from_container(
docker_name, local_path=dirs.static_libs, container_path="/home/stepfunctionslocal/"
)
path = Path(f"{dirs.static_libs}/stepfunctionslocal/")
for file in path.glob("*.jar"):
file.rename(Path(INSTALL_DIR_STEPFUNCTIONS) / file.name)
rm_rf("%s/stepfunctionslocal" % dirs.static_libs)
classes = [
SFN_PATCH_CLASS1,
SFN_PATCH_CLASS2,
SFN_PATCH_CLASS_REGION,
SFN_PATCH_CLASS_STARTER,
SFN_PATCH_CLASS_ASYNC2SERVICEAPI,
SFN_PATCH_CLASS_DESCRIBEEXECUTIONPARSED,
SFN_PATCH_FILE_METAINF,
]
for patch_class in classes:
patch_url = f"{SFN_PATCH_URL_PREFIX}/{patch_class}"
add_file_to_jar(patch_class, patch_url, target_jar=INSTALL_PATH_STEPFUNCTIONS_JAR)
# special case for Manifest file - extract first, replace content, then update in JAR file
manifest_file = os.path.join(INSTALL_DIR_STEPFUNCTIONS, "META-INF", "MANIFEST.MF")
if not os.path.exists(manifest_file):
content = run(["unzip", "-p", INSTALL_PATH_STEPFUNCTIONS_JAR, "META-INF/MANIFEST.MF"])
content = re.sub(
"Main-Class: .+", "Main-Class: cloud.localstack.StepFunctionsStarter", content
)
classpath = " ".join([os.path.basename(jar) for jar in JAR_URLS])
content = re.sub(r"Class-Path: \. ", f"Class-Path: {classpath} . ", content)
save_file(manifest_file, content)
run(
["zip", INSTALL_PATH_STEPFUNCTIONS_JAR, "META-INF/MANIFEST.MF"],
cwd=INSTALL_DIR_STEPFUNCTIONS,
)
# download additional jar libs
for jar_url in JAR_URLS:
target = os.path.join(INSTALL_DIR_STEPFUNCTIONS, os.path.basename(jar_url))
if not file_exists_not_empty(target):
download(jar_url, target)
# download aws-sdk lambda handler
target = os.path.join(INSTALL_DIR_STEPFUNCTIONS, "localstack-internal-awssdk", "awssdk.zip")
if not file_exists_not_empty(target):
download(SFN_AWS_SDK_LAMBDA_ZIP_FILE, target)
|
6,491 |
def mark_attendance_and_link_log(logs, attendance_status,attendance_date, working_hours=None, late_entry=False, early_exit=False, in_time=None, out_time=None, shift=None):
"""Creates an attendance and links the attendance to the Employee Checkin.
Note: If attendance is already present for the given date, the logs are marked as skipped and no exception is thrown.
:param logs: The List of 'Employee Checkin'.
:param attendance_status: Attendance status to be marked. One of: (Present, Absent, Half Day, Skip). Note: 'On Leave' is not supported by this function.
:param attendance_date: Date of the attendance to be created.
	:param working_hours: (optional) Number of working hours for the given date.
"""
log_names = [x.name for x in logs]
employee = logs[0].employee
if attendance_status == 'Skip':
frappe.db.sql("""update `tabEmployee Checkin`
set skip_auto_attendance = %s
where name in %s""", ('1', log_names))
return None
else:
employee_doc = frappe.get_doc('Employee', employee)
if not frappe.db.exists('Attendance', {'employee':employee, 'attendance_date':attendance_date, 'docstatus':('!=', '2')}):
doc_dict = {
'doctype': 'Attendance',
'employee': employee,
'attendance_date': attendance_date,
'status': attendance_status,
'working_hours': working_hours,
'company': employee_doc.company,
'shift': shift,
'late_entry': late_entry,
'early_exit': early_exit,
'in_time': in_time,
'out_time': out_time
}
hd_status, p_status, a_status = get_applicable_status()
if attendance_status == hd_status:
				# if checkins for the other half of the day are missing, the employee is marked absent for the remaining half day
doc_dict["remaining_half_day_status"] = a_status
attendance = frappe.get_doc(doc_dict).insert()
attendance.submit()
frappe.db.sql("""update `tabEmployee Checkin`
set attendance = %s
where name in %s""", (attendance.name, log_names))
return attendance
else:
frappe.db.sql("""update `tabEmployee Checkin`
set skip_auto_attendance = %s
where name in %s""", ('1', log_names))
return None
|
def mark_attendance_and_link_log(logs, attendance_status,attendance_date, working_hours=None, late_entry=False, early_exit=False, in_time=None, out_time=None, shift=None):
"""Creates an attendance and links the attendance to the Employee Checkin.
Note: If attendance is already present for the given date, the logs are marked as skipped and no exception is thrown.
:param logs: The List of 'Employee Checkin'.
:param attendance_status: Attendance status to be marked. One of: (Present, Absent, Half Day, Skip). Note: 'On Leave' is not supported by this function.
:param attendance_date: Date of the attendance to be created.
	:param working_hours: (optional) Number of working hours for the given date.
"""
log_names = [x.name for x in logs]
employee = logs[0].employee
if attendance_status == 'Skip':
frappe.db.sql("""update `tabEmployee Checkin`
set skip_auto_attendance = %s
where name in %s""", ('1', log_names))
return None
else:
employee_doc = frappe.get_doc('Employee', employee)
if not frappe.db.exists('Attendance', {'employee':employee, 'attendance_date':attendance_date, 'docstatus':('!=', '2')}):
doc_dict = {
'doctype': 'Attendance',
'employee': employee,
'attendance_date': attendance_date,
'status': attendance_status,
'working_hours': working_hours,
'company': employee_doc.company,
'shift': shift,
'late_entry': late_entry,
'early_exit': early_exit,
'in_time': in_time,
'out_time': out_time
}
half_day_status, present_status, absent_status = get_applicable_status()
			if attendance_status == half_day_status:
				# if checkins for the other half of the day are missing, the employee is marked absent for the remaining half day
				doc_dict["remaining_half_day_status"] = absent_status
attendance = frappe.get_doc(doc_dict).insert()
attendance.submit()
frappe.db.sql("""update `tabEmployee Checkin`
set attendance = %s
where name in %s""", (attendance.name, log_names))
return attendance
else:
frappe.db.sql("""update `tabEmployee Checkin`
set skip_auto_attendance = %s
where name in %s""", ('1', log_names))
return None
|
8,240 |
def _bresenham(x1, y1, x2, y2):
"""
Returns an array of all pixel coordinates which the line defined by `x1, y1` and
`x2, y2` crosses. Uses Bresenham's line algorithm to enumerate the pixels along
a line. This was adapted from ginga.
Parameters
----------
    x1, y1, x2, y2 : `int`
References
----------
* https://github.com/ejeschke/ginga/blob/c8ceaf8e559acc547bf25661842a53ed44a7b36f/ginga/BaseImage.py#L503
* http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm
"""
for x in [x1, y1, x2, y2]:
if type(x) not in (int, np.int64):
raise TypeError('All pixel coordinates must be of type int')
dx = abs(x2 - x1)
dy = abs(y2 - y1)
sx = 1 if x1 < x2 else -1
sy = 1 if y1 < y2 else -1
err = dx - dy
res = []
x, y = x1, y1
while True:
res.append((x, y))
if (x == x2) and (y == y2):
break
e2 = 2 * err
if e2 > -dy:
err = err - dy
x += sx
if e2 < dx:
err = err + dx
y += sy
return np.array(res)
|
def _bresenham(*, x1, y1, x2, y2):
"""
Returns an array of all pixel coordinates which the line defined by `x1, y1` and
`x2, y2` crosses. Uses Bresenham's line algorithm to enumerate the pixels along
a line. This was adapted from ginga.
Parameters
----------
    x1, y1, x2, y2 : `int`
References
----------
* https://github.com/ejeschke/ginga/blob/c8ceaf8e559acc547bf25661842a53ed44a7b36f/ginga/BaseImage.py#L503
* http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm
"""
for x in [x1, y1, x2, y2]:
if type(x) not in (int, np.int64):
raise TypeError('All pixel coordinates must be of type int')
dx = abs(x2 - x1)
dy = abs(y2 - y1)
sx = 1 if x1 < x2 else -1
sy = 1 if y1 < y2 else -1
err = dx - dy
res = []
x, y = x1, y1
while True:
res.append((x, y))
if (x == x2) and (y == y2):
break
e2 = 2 * err
if e2 > -dy:
err = err - dy
x += sx
if e2 < dx:
err = err + dx
y += sy
return np.array(res)
|
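A short usage sketch for the keyword-only variant above, assuming that definition of _bresenham is in scope; the expected pixel list was traced by hand from the algorithm and is easy to verify.

import numpy as np

# Line from (0, 0) to (3, 2); Bresenham visits one pixel per step.
pixels = _bresenham(x1=0, y1=0, x2=3, y2=2)
expected = np.array([(0, 0), (1, 1), (2, 1), (3, 2)])
assert np.array_equal(pixels, expected)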
4,474 |
def _check_sphere(sphere, info=None, sphere_units='m'):
from ..defaults import HEAD_SIZE_DEFAULT
from ..bem import fit_sphere_to_headshape, ConductorModel, get_fitting_dig
if sphere is None:
sphere = HEAD_SIZE_DEFAULT
if info is not None:
# Decide if we have enough dig points to do the auto fit
try:
get_fitting_dig(info, 'extra', verbose='error')
except (RuntimeError, ValueError):
pass
else:
sphere = 'auto'
if isinstance(sphere, str):
if sphere not in ('auto', 'eeglab'):
raise ValueError(f'sphere, if str, must be "auto" or "eeglab", '
f'got {sphere}')
assert info is not None
if sphere == 'auto':
R, r0, _ = fit_sphere_to_headshape(info, verbose=False, units='m')
sphere = tuple(r0) + (R,)
sphere_units = 'm'
elif sphere == 'eeglab':
montage = info.get_montage()
horizon_ch_names = ['Oz', 'Fpz', 'T7', 'T8']
for ch_name in horizon_ch_names:
if ch_name not in montage.ch_names:
raise ValueError(
f'spehre="eeglab" requires digitization points of '
f'the following electrode locations in the data: '
f'{", ".join(horizon_ch_names)}, but could not find: '
f'{ch_name}')
# Extracted from Mikołaj Magnuski's example
ch_pos = montage.get_positions()['ch_pos']
pos = np.stack([ch_pos[ch_name] for ch_name in horizon_ch_names])
# now we calculate the radius from T7 and T8 x position
# (we could use Oz and Fpz y positions as well)
radius = np.abs(pos[[2, 3], 0]).mean()
# then we obtain the x, y, z sphere center this way:
# x: x position of the Oz channel (should be very close to 0)
# y: y position of the T8 channel (should be very close to 0 too)
# z: average z position of Oz, Fpz, T7 and T8 (their z position
            #    should be the same, so we could also use just one of these
# channels), it should be positive and somewhere around `0.03`
# (3 cm)
x = pos[0, 0]
y = pos[-1, 1]
z = pos[:, -1].mean()
sphere=(x, y, z, radius)
sphere_units = 'm'
del x, y , z, radius, montage, ch_pos
elif isinstance(sphere, ConductorModel):
if not sphere['is_sphere'] or len(sphere['layers']) == 0:
raise ValueError('sphere, if a ConductorModel, must be spherical '
'with multiple layers, not a BEM or single-layer '
'sphere (got %s)' % (sphere,))
sphere = tuple(sphere['r0']) + (sphere['layers'][0]['rad'],)
sphere_units = 'm'
sphere = np.array(sphere, dtype=float)
if sphere.shape == ():
sphere = np.concatenate([[0.] * 3, [sphere]])
if sphere.shape != (4,):
raise ValueError('sphere must be float or 1D array of shape (4,), got '
'array-like of shape %s' % (sphere.shape,))
_check_option('sphere_units', sphere_units, ('m', 'mm'))
if sphere_units == 'mm':
sphere /= 1000.
sphere = np.array(sphere, float)
return sphere
|
def _check_sphere(sphere, info=None, sphere_units='m'):
from ..defaults import HEAD_SIZE_DEFAULT
from ..bem import fit_sphere_to_headshape, ConductorModel, get_fitting_dig
if sphere is None:
sphere = HEAD_SIZE_DEFAULT
if info is not None:
# Decide if we have enough dig points to do the auto fit
try:
get_fitting_dig(info, 'extra', verbose='error')
except (RuntimeError, ValueError):
pass
else:
sphere = 'auto'
if isinstance(sphere, str):
if sphere not in ('auto', 'eeglab'):
raise ValueError(f'sphere, if str, must be "auto" or "eeglab", '
f'got {sphere}')
assert info is not None
if sphere == 'auto':
R, r0, _ = fit_sphere_to_headshape(info, verbose=False, units='m')
sphere = tuple(r0) + (R,)
sphere_units = 'm'
elif sphere == 'eeglab':
montage = info.get_montage()
horizon_ch_names = ['Oz', 'Fpz', 'T7', 'T8']
for ch_name in horizon_ch_names:
if ch_name not in montage.ch_names:
raise ValueError(
f'spehre="eeglab" requires digitization points of '
f'the following electrode locations in the data: '
f'{", ".join(horizon_ch_names)}, but could not find: '
f'{ch_name}')
# Extracted from Mikołaj Magnuski's example
ch_pos = montage.get_positions()['ch_pos']
pos = np.stack([ch_pos[ch_name] for ch_name in horizon_ch_names])
# now we calculate the radius from T7 and T8 x position
# (we could use Oz and Fpz y positions as well)
radius = np.abs(pos[[2, 3], 0]).mean()
# then we obtain the x, y, z sphere center this way:
# x: x position of the Oz channel (should be very close to 0)
# y: y position of the T8 channel (should be very close to 0 too)
# z: average z position of Oz, Fpz, T7 and T8 (their z position
            #    should be the same, so we could also use just one of these
# channels), it should be positive and somewhere around `0.03`
# (3 cm)
x = pos[0, 0]
y = pos[-1, 1]
z = pos[:, -1].mean()
sphere=(x, y, z, radius)
sphere_units = 'm'
del x, y, z, radius, montage, ch_pos
elif isinstance(sphere, ConductorModel):
if not sphere['is_sphere'] or len(sphere['layers']) == 0:
raise ValueError('sphere, if a ConductorModel, must be spherical '
'with multiple layers, not a BEM or single-layer '
'sphere (got %s)' % (sphere,))
sphere = tuple(sphere['r0']) + (sphere['layers'][0]['rad'],)
sphere_units = 'm'
sphere = np.array(sphere, dtype=float)
if sphere.shape == ():
sphere = np.concatenate([[0.] * 3, [sphere]])
if sphere.shape != (4,):
raise ValueError('sphere must be float or 1D array of shape (4,), got '
'array-like of shape %s' % (sphere.shape,))
_check_option('sphere_units', sphere_units, ('m', 'mm'))
if sphere_units == 'mm':
sphere /= 1000.
sphere = np.array(sphere, float)
return sphere
|
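A small numeric sketch of the 'eeglab' branch above, using made-up electrode coordinates (in metres) purely to illustrate how the radius and centre fall out of the Oz/Fpz/T7/T8 positions; the coordinates are assumptions, not real montage data.

import numpy as np

# Hypothetical positions for Oz, Fpz, T7, T8 as (x, y, z) in metres.
ch_pos = {
    'Oz':  np.array([0.000, -0.090, 0.030]),
    'Fpz': np.array([0.000,  0.090, 0.030]),
    'T7':  np.array([-0.085, 0.000, 0.030]),
    'T8':  np.array([0.085,  0.000, 0.030]),
}
pos = np.stack([ch_pos[name] for name in ['Oz', 'Fpz', 'T7', 'T8']])

radius = np.abs(pos[[2, 3], 0]).mean()   # mean |x| of T7 and T8 -> 0.085
x = pos[0, 0]                            # x of Oz -> 0.0
y = pos[-1, 1]                           # y of T8 -> 0.0
z = pos[:, -1].mean()                    # mean z of the four channels -> 0.03
print(float(x), float(y), float(z), float(radius))  # 0.0 0.0 0.03 0.085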
21,190 |
def validate_subcommand(
commands: Sequence[str], workflows: Sequence[str], subcommand: str
) -> None:
"""Check that a subcommand is valid and defined. Raises an error otherwise.
commands (Sequence[str]): The available commands.
subcommand (str): The subcommand.
"""
if not commands and not workflows:
msg.fail(f"No commands or workflows defined in {PROJECT_FILE}", exits=1)
if subcommand not in commands and subcommand not in workflows:
help_msg = []
if subcommand == "assets" or subcommand == "asset":
help_msg.append("Did you mean to run: python -m spacy project assets?")
if commands:
help_msg.append(f"Available commands: {', '.join(commands)}")
if workflows:
help_msg.append(f"Available workflows: {', '.join(workflows)}")
msg.fail(
f"Can't find command or workflow '{subcommand}' in {PROJECT_FILE}",
". ".join(help_msg),
exits=1,
)
|
def validate_subcommand(
commands: Sequence[str], workflows: Sequence[str], subcommand: str
) -> None:
"""Check that a subcommand is valid and defined. Raises an error otherwise.
commands (Sequence[str]): The available commands.
subcommand (str): The subcommand.
"""
if not commands and not workflows:
msg.fail(f"No commands or workflows defined in {PROJECT_FILE}", exits=1)
if subcommand not in commands and subcommand not in workflows:
help_msg = []
if subcommand in ["assets", "asset"]:
help_msg.append("Did you mean to run: python -m spacy project assets?")
if commands:
help_msg.append(f"Available commands: {', '.join(commands)}")
if workflows:
help_msg.append(f"Available workflows: {', '.join(workflows)}")
msg.fail(
f"Can't find command or workflow '{subcommand}' in {PROJECT_FILE}",
". ".join(help_msg),
exits=1,
)
|
6,928 |
def get_doc_files(files, start_path, force=0, sync_everything = False, verbose=False):
"""walk and sync all doctypes and pages"""
if not files:
files = []
# load in sequence - warning for devs
document_types = ['doctype', 'page', 'report', 'dashboard_chart_source', 'print_format',
'website_theme', 'web_form', 'web_template', 'notification', 'print_style',
'data_migration_mapping', 'data_migration_plan', 'workspace',
'onboarding_step', 'module_onboarding']
for doctype in document_types:
doctype_path = os.path.join(start_path, doctype)
if os.path.exists(doctype_path):
for docname in os.listdir(doctype_path):
if os.path.isdir(os.path.join(doctype_path, docname)):
doc_path = os.path.join(doctype_path, docname, docname) + ".json"
if os.path.exists(doc_path):
if not doc_path in files:
files.append(doc_path)
return files
|
def get_doc_files(files, start_path):
"""walk and sync all doctypes and pages"""
if not files:
files = []
# load in sequence - warning for devs
document_types = ['doctype', 'page', 'report', 'dashboard_chart_source', 'print_format',
'website_theme', 'web_form', 'web_template', 'notification', 'print_style',
'data_migration_mapping', 'data_migration_plan', 'workspace',
'onboarding_step', 'module_onboarding']
for doctype in document_types:
doctype_path = os.path.join(start_path, doctype)
if os.path.exists(doctype_path):
for docname in os.listdir(doctype_path):
if os.path.isdir(os.path.join(doctype_path, docname)):
doc_path = os.path.join(doctype_path, docname, docname) + ".json"
if os.path.exists(doc_path):
if not doc_path in files:
files.append(doc_path)
return files
|
22,159 |
def get_variables(event):
v = copy.copy(DEFAULT_VARIABLES)
scheme = PERSON_NAME_SCHEMES[event.settings.name_scheme]
concatenation_for_salutation = scheme.get("concatenation_for_salutation", scheme["concatenation"])
v['attendee_name_for_salutation'] = {
'label': _("Attendee name for salutation"),
'editor_sample': _("Mr Doe"),
'evaluate': lambda op, order, ev: concatenation_for_salutation(op.attendee_name_parts)
}
for key, label, weight in scheme['fields']:
v['attendee_name_%s' % key] = {
'label': _("Attendee name: {part}").format(part=label),
'editor_sample': scheme['sample'][key],
'evaluate': partial(_get_attendee_name_part, key)
}
for i in range(2, len(scheme['fields']) + 1):
for comb in itertools.combinations(scheme['fields'], i):
v['attendee_name_%s' % ('_'.join(c[0] for c in comb))] = {
'label': _("Attendee name: {part}").format(part=' + '.join(str(c[1]) for c in comb)),
'editor_sample': ' '.join(str(scheme['sample'][c[0]]) for c in comb),
'evaluate': partial(_get_attendee_name_part, comb)
}
v['invoice_name']['editor_sample'] = scheme['concatenation'](scheme['sample'])
v['attendee_name']['editor_sample'] = scheme['concatenation'](scheme['sample'])
v['invoice_name_for_salutation'] = {
'label': _("Invoice address name for salutation"),
'editor_sample': _("Mr Doe"),
'evaluate': lambda op, order, ev: concatenation_for_salutation(order.invoice_address.name_parts if getattr(order, 'invoice_address', None) else {})
}
for key, label, weight in scheme['fields']:
v['invoice_name_%s' % key] = {
'label': _("Invoice address name: {part}").format(part=label),
'editor_sample': scheme['sample'][key],
"evaluate": partial(_get_ia_name_part, key)
}
for recv, res in layout_text_variables.send(sender=event):
v.update(res)
return v
|
def get_variables(event):
v = copy.copy(DEFAULT_VARIABLES)
scheme = PERSON_NAME_SCHEMES[event.settings.name_scheme]
concatenation_for_salutation = scheme.get("concatenation_for_salutation", scheme["concatenation"])
v['attendee_name_for_salutation'] = {
'label': _("Attendee name for salutation"),
'editor_sample': _("Mr Doe"),
'evaluate': lambda op, order, ev: concatenation_for_salutation(op.attendee_name_parts or {})
}
for key, label, weight in scheme['fields']:
v['attendee_name_%s' % key] = {
'label': _("Attendee name: {part}").format(part=label),
'editor_sample': scheme['sample'][key],
'evaluate': partial(_get_attendee_name_part, key)
}
for i in range(2, len(scheme['fields']) + 1):
for comb in itertools.combinations(scheme['fields'], i):
v['attendee_name_%s' % ('_'.join(c[0] for c in comb))] = {
'label': _("Attendee name: {part}").format(part=' + '.join(str(c[1]) for c in comb)),
'editor_sample': ' '.join(str(scheme['sample'][c[0]]) for c in comb),
'evaluate': partial(_get_attendee_name_part, comb)
}
v['invoice_name']['editor_sample'] = scheme['concatenation'](scheme['sample'])
v['attendee_name']['editor_sample'] = scheme['concatenation'](scheme['sample'])
v['invoice_name_for_salutation'] = {
'label': _("Invoice address name for salutation"),
'editor_sample': _("Mr Doe"),
'evaluate': lambda op, order, ev: concatenation_for_salutation(order.invoice_address.name_parts if getattr(order, 'invoice_address', None) else {})
}
for key, label, weight in scheme['fields']:
v['invoice_name_%s' % key] = {
'label': _("Invoice address name: {part}").format(part=label),
'editor_sample': scheme['sample'][key],
"evaluate": partial(_get_ia_name_part, key)
}
for recv, res in layout_text_variables.send(sender=event):
v.update(res)
return v
|
31,830 |
def get_used_dockers_images() -> CommandResults:
md = None
active_docker_list_integration = {}
active_docker_list_automation = {}
result_dict: Dict[str, List[str]] = {}
active_integration_instances = demisto.internalHttpRequest(POST_COMMAND, "%s" % SETTING_INTEGRATION_SEARCH,
REQUEST_INTEGRATION_SEARCH_BODY)
demisto.debug(
f"called demisto.internalHttpRequest(\"{POST_COMMAND}\", \"{SETTING_INTEGRATION_SEARCH}\", "
f"\"{REQUEST_INTEGRATION_SEARCH_BODY}\")")
    demisto.debug(f'response code = {active_integration_instances["statusCode"]}')
if active_integration_instances and active_integration_instances['statusCode'] == 200:
active_docker_list_integration = extract_dockers_from_integration_search_result(
active_integration_instances['body'])
active_automation_instances = demisto.internalHttpRequest(POST_COMMAND, "%s" % AUTOMATION_SEARCH,
REQUEST_INTEGRATION_SEARCH_BODY)
demisto.debug(f"called demisto.internalHttpRequest(\"{POST_COMMAND}\", \"{AUTOMATION_SEARCH}\", "
f"\"{REQUEST_INTEGRATION_SEARCH_BODY}\")")
    demisto.debug(f'response code = {active_automation_instances["statusCode"]}')
if active_automation_instances and active_automation_instances['statusCode'] == 200:
active_docker_list_automation = extract_dockers_from_automation_search_result(
active_automation_instances['body'])
result_dict = merge_result(active_docker_list_integration, result_dict, MAX_PER_DOCKER)
result_dict = merge_result(active_docker_list_automation, result_dict, MAX_PER_DOCKER)
''' format the result for Markdown view'''
result_output = []
result_output = format_result_for_markdown(result_dict)
md = tableToMarkdown('Dockers Images In use:', result_output, )
return CommandResults(readable_output=md)
|
def get_used_dockers_images() -> CommandResults:
md = None
active_docker_list_integration = {}
active_docker_list_automation = {}
result_dict: Dict[str, List[str]] = {}
active_integration_instances = demisto.internalHttpRequest(POST_COMMAND, "%s" % SETTING_INTEGRATION_SEARCH,
REQUEST_INTEGRATION_SEARCH_BODY)
demisto.debug(
f"called demisto.internalHttpRequest(\"{POST_COMMAND}\", \"{SETTING_INTEGRATION_SEARCH}\", "
f"\"{REQUEST_INTEGRATION_SEARCH_BODY}\")")
    demisto.debug(f'response code = {active_integration_instances["statusCode"]}')
if active_integration_instances and active_integration_instances['statusCode'] == 200:
active_docker_list_integration = extract_dockers_from_integration_search_result(
active_integration_instances['body'])
active_automations = demisto.internalHttpRequest(POST_COMMAND, "%s" % AUTOMATION_SEARCH,
REQUEST_INTEGRATION_SEARCH_BODY)
demisto.debug(f"called demisto.internalHttpRequest(\"{POST_COMMAND}\", \"{AUTOMATION_SEARCH}\", "
f"\"{REQUEST_INTEGRATION_SEARCH_BODY}\")")
    demisto.debug(f'response code = {active_automations["statusCode"]}')
    if active_automations and active_automations['statusCode'] == 200:
        active_docker_list_automation = extract_dockers_from_automation_search_result(
            active_automations['body'])
result_dict = merge_result(active_docker_list_integration, result_dict, MAX_PER_DOCKER)
result_dict = merge_result(active_docker_list_automation, result_dict, MAX_PER_DOCKER)
''' format the result for Markdown view'''
result_output = []
result_output = format_result_for_markdown(result_dict)
md = tableToMarkdown('Dockers Images In use:', result_output, )
return CommandResults(readable_output=md)
|
7,708 |
def cf4(operator, timesteps, power=None, power_density=None, print_out=True):
r"""Deplete using the CF4 algorithm.
Implements the fourth order `commutator-free Lie algorithm
<https://doi.org/10.1016/S0167-739X(02)00161-9>`_.
This algorithm is mathematically defined as:
.. math::
\begin{aligned}
F_1 &= h A(y_0) \\
y_1 &= \text{expm}(1/2 F_1) y_0 \\
F_2 &= h A(y_1) \\
y_2 &= \text{expm}(1/2 F_2) y_0 \\
F_3 &= h A(y_2) \\
y_3 &= \text{expm}(-1/2 F_1 + F_3) y_1 \\
F_4 &= h A(y_3) \\
y_4 &= \text{expm}( 1/4 F_1 + 1/6 F_2 + 1/6 F_3 - 1/12 F_4)
\text{expm}(-1/12 F_1 + 1/6 F_2 + 1/6 F_3 + 1/4 F_4) y_0
\end{aligned}
Parameters
----------
operator : openmc.deplete.TransportOperator
The operator object to simulate on.
timesteps : iterable of float
Array of timesteps in units of [s]. Note that values are not cumulative.
power : float or iterable of float, optional
Power of the reactor in [W]. A single value indicates that the power is
constant over all timesteps. An iterable indicates potentially different
power levels for each timestep. For a 2D problem, the power can be given
in [W/cm] as long as the "volume" assigned to a depletion material is
actually an area in [cm^2]. Either `power` or `power_density` must be
specified.
power_density : float or iterable of float, optional
Power density of the reactor in [W/gHM]. It is multiplied by initial
        heavy metal inventory to get total power if `power` is not specified.
print_out : bool, optional
Whether or not to print out time.
"""
if power is None:
if power_density is None:
raise ValueError(
"Neither power nor power density was specified.")
if not isinstance(power_density, Iterable):
power = power_density*operator.heavy_metal
else:
power = [i*operator.heavy_metal for i in power_density]
if not isinstance(power, Iterable):
power = [power]*len(timesteps)
# Generate initial conditions
with operator as vec:
# Initialize time and starting index
if operator.prev_res is None:
t = 0.0
i_res = 0
else:
t = operator.prev_res[-1].time[-1]
i_res = len(operator.prev_res)
chain = operator.chain
for i, (dt, p) in enumerate(zip(timesteps, power)):
# Get beginning-of-timestep concentrations and reaction rates
# Avoid doing first transport run if already done in previous
# calculation
if i > 0 or operator.prev_res is None:
x = [copy.deepcopy(vec)]
op_results = [operator(x[0], p)]
else:
# Get initial concentration
x = [operator.prev_res[-1].data[0]]
# Get rates
op_results = [operator.prev_res[-1]]
op_results[0].rates = op_results[0].rates[0]
# Set first stage value of keff
op_results[0].k = op_results[0].k[0]
# Scale reaction rates by ratio of powers
power_res = operator.prev_res[-1].power
ratio_power = p / power_res
op_results[0].rates *= ratio_power[0]
# Step 1: deplete with matrix 1/2*A(y0)
time_1, x_new = timed_deplete(
chain, x[0], op_results[0].rates, dt, print_out,
matrix_func=_cf4_f1)
x.append(x_new)
op_results.append(operator(x_new, p))
# Step 2: deplete with matrix 1/2*A(y1)
time_2, x_new = timed_deplete(
chain, x[0], op_results[1].rates, dt, print_out,
matrix_func=_cf4_f1)
x.append(x_new)
op_results.append(operator(x_new, p))
# Step 3: deplete with matrix -1/2*A(y0)+A(y2)
rates = list(zip(op_results[0].rates, op_results[2].rates))
time_3, x_new = timed_deplete(
chain, x[1], rates, dt, print_out, matrix_func=_cf4_f2)
x.append(x_new)
op_results.append(operator(x_new, p))
# Step 4: deplete with two matrix exponentials
rates = list(zip(op_results[0].rates, op_results[1].rates,
op_results[2].rates, op_results[3].rates))
time_4, x_end = timed_deplete(
chain, x[0], rates, dt, print_out, matrix_func=_cf4_f3)
time_5, x_end = timed_deplete(
chain, x_end, rates, dt, print_out, matrix_func=_cf4_f4)
# Create results, write to disk
Results.save(
operator, x, op_results, [t, t + dt], p, i_res + i,
sum((time_1, time_2, time_3, time_4, time_5)))
# Advance time, update vector
t += dt
vec = copy.deepcopy(x_end)
# Perform one last simulation
x = [copy.deepcopy(vec)]
op_results = [operator(x[0], power[-1])]
# Create results, write to disk
Results.save(operator, x, op_results, [t, t], p, i_res + len(timesteps), None)
|
def cf4(operator, timesteps, power=None, power_density=None, print_out=True):
r"""Deplete using the CF4 algorithm.
Implements the fourth order `commutator-free Lie algorithm
<https://doi.org/10.1016/S0167-739X(02)00161-9>`_.
This algorithm is mathematically defined as:
.. math::
\begin{aligned}
F_1 &= h A(y_0) \\
y_1 &= \text{expm}(1/2 F_1) y_0 \\
F_2 &= h A(y_1) \\
y_2 &= \text{expm}(1/2 F_2) y_0 \\
F_3 &= h A(y_2) \\
y_3 &= \text{expm}(-1/2 F_1 + F_3) y_1 \\
F_4 &= h A(y_3) \\
y_4 &= \text{expm}( 1/4 F_1 + 1/6 F_2 + 1/6 F_3 - 1/12 F_4)
\text{expm}(-1/12 F_1 + 1/6 F_2 + 1/6 F_3 + 1/4 F_4) y_0
\end{aligned}
Parameters
----------
operator : openmc.deplete.TransportOperator
The operator object to simulate on.
timesteps : iterable of float
Array of timesteps in units of [s]. Note that values are not cumulative.
power : float or iterable of float, optional
Power of the reactor in [W]. A single value indicates that the power is
constant over all timesteps. An iterable indicates potentially different
power levels for each timestep. For a 2D problem, the power can be given
in [W/cm] as long as the "volume" assigned to a depletion material is
actually an area in [cm^2]. Either `power` or `power_density` must be
specified.
power_density : float or iterable of float, optional
Power density of the reactor in [W/gHM]. It is multiplied by initial
        heavy metal inventory to get total power if `power` is not specified.
print_out : bool, optional
Whether or not to print out time.
"""
if power is None:
if power_density is None:
raise ValueError(
"Neither power nor power density was specified.")
if not isinstance(power_density, Iterable):
power = power_density*operator.heavy_metal
else:
power = [i*operator.heavy_metal for i in power_density]
if not isinstance(power, Iterable):
power = [power]*len(timesteps)
# Generate initial conditions
with operator as vec:
# Initialize time and starting index
if operator.prev_res is None:
t = 0.0
i_res = 0
else:
t = operator.prev_res[-1].time[-1]
i_res = len(operator.prev_res)
chain = operator.chain
for i, (dt, p) in enumerate(zip(timesteps, power)):
# Get beginning-of-timestep concentrations and reaction rates
# Avoid doing first transport run if already done in previous
# calculation
if i > 0 or operator.prev_res is None:
x = [copy.deepcopy(vec)]
op_results = [operator(x[0], p)]
else:
# Get initial concentration
x = [operator.prev_res[-1].data[0]]
# Get rates
op_results = [operator.prev_res[-1]]
op_results[0].rates = op_results[0].rates[0]
# Set first stage value of keff
op_results[0].k = op_results[0].k[0]
# Scale reaction rates by ratio of powers
power_res = operator.prev_res[-1].power
ratio_power = p / power_res
op_results[0].rates *= ratio_power[0]
# Step 1: deplete with matrix 1/2*A(y0)
time_1, x_new = timed_deplete(
chain, x[0], op_results[0].rates, dt, print_out,
matrix_func=_cf4_f1)
x.append(x_new)
op_results.append(operator(x_new, p))
# Step 2: deplete with matrix 1/2*A(y1)
time_2, x_new = timed_deplete(
chain, x[0], op_results[1].rates, dt, print_out,
matrix_func=_cf4_f1)
x.append(x_new)
op_results.append(operator(x_new, p))
# Step 3: deplete with matrix -1/2*A(y0)+A(y2)
rates = list(zip(op_results[0].rates, op_results[2].rates))
time_3, x_new = timed_deplete(
chain, x[1], rates, dt, print_out, matrix_func=_cf4_f2)
x.append(x_new)
op_results.append(operator(x_new, p))
# Step 4: deplete with two matrix exponentials
rates = list(zip(op_results[0].rates, op_results[1].rates,
op_results[2].rates, op_results[3].rates))
time_4, x_end = timed_deplete(
chain, x[0], rates, dt, print_out, matrix_func=_cf4_f3)
time_5, x_end = timed_deplete(
chain, x_end, rates, dt, print_out, matrix_func=_cf4_f4)
# Create results, write to disk
Results.save(
operator, x, op_results, [t, t + dt], p, i_res + i,
time_1 + time_2 + time_3 + time_4 + time_5)
# Advance time, update vector
t += dt
vec = copy.deepcopy(x_end)
# Perform one last simulation
x = [copy.deepcopy(vec)]
op_results = [operator(x[0], power[-1])]
# Create results, write to disk
Results.save(operator, x, op_results, [t, t], p, i_res + len(timesteps), None)
|
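A minimal numerical sketch of the CF4 update described in the docstring above, for a small linear system y' = A(y) y, assuming only numpy and scipy are available; it is independent of the openmc.deplete operator and timed_deplete machinery. For a state-independent matrix the four stages collapse and one full step reduces to expm(h*A) @ y0, which the assertion checks.

import numpy as np
from scipy.linalg import expm

def cf4_step(A_of_y, y0, h):
    """One CF4 step for y' = A(y) y, following the staged update in the docstring."""
    F1 = h * A_of_y(y0)
    y1 = expm(0.5 * F1) @ y0
    F2 = h * A_of_y(y1)
    y2 = expm(0.5 * F2) @ y0
    F3 = h * A_of_y(y2)
    y3 = expm(-0.5 * F1 + F3) @ y1
    F4 = h * A_of_y(y3)
    left = expm(F1 / 4 + F2 / 6 + F3 / 6 - F4 / 12)
    right = expm(-F1 / 12 + F2 / 6 + F3 / 6 + F4 / 4)
    return left @ right @ y0

A = np.array([[-1.0, 0.2], [0.1, -0.5]])
y0 = np.array([1.0, 2.0])
h = 0.1
# With a constant A the scheme is exact: one step equals expm(h*A) @ y0.
assert np.allclose(cf4_step(lambda y: A, y0, h), expm(h * A) @ y0)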
2,066 |
def _omp_path_residues(X_train, y_train, X_test, y_test, copy=True,
fit_intercept=True, normalize=True, max_iter=100):
"""Compute the residues on left-out data for a full LARS path.
Parameters
----------
X_train : ndarray of shape (n_samples, n_features)
The data to fit the LARS on.
y_train : ndarray of shape (n_samples)
The target variable to fit LARS on.
X_test : ndarray of shape (n_samples, n_features)
The data to compute the residues on.
y_test : ndarray of shape (n_samples)
The target variable to compute the residues on.
copy : bool, defualt=True
Whether X_train, X_test, y_train and y_test should be copied. If
False, they may be overwritten.
fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(i.e. data is expected to be centered).
normalize : bool, default=True
This parameter is ignored when ``fit_intercept`` is set to False.
If True, the regressors X will be normalized before regression by
subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use
:class:`~sklearn.preprocessing.StandardScaler` before calling ``fit``
on an estimator with ``normalize=False``.
max_iter : int, default=100
Maximum numbers of iterations to perform, therefore maximum features
to include. 100 by default.
Returns
-------
residues : ndarray of shape (n_samples, max_features)
Residues of the prediction on the test data.
"""
if copy:
X_train = X_train.copy()
y_train = y_train.copy()
X_test = X_test.copy()
y_test = y_test.copy()
if fit_intercept:
X_mean = X_train.mean(axis=0)
X_train -= X_mean
X_test -= X_mean
y_mean = y_train.mean(axis=0)
y_train = as_float_array(y_train, copy=False)
y_train -= y_mean
y_test = as_float_array(y_test, copy=False)
y_test -= y_mean
if normalize:
norms = np.sqrt(np.sum(X_train ** 2, axis=0))
nonzeros = np.flatnonzero(norms)
X_train[:, nonzeros] /= norms[nonzeros]
coefs = orthogonal_mp(X_train, y_train, n_nonzero_coefs=max_iter, tol=None,
precompute=False, copy_X=False,
return_path=True)
if coefs.ndim == 1:
coefs = coefs[:, np.newaxis]
if normalize:
coefs[nonzeros] /= norms[nonzeros][:, np.newaxis]
return np.dot(coefs.T, X_test.T) - y_test
|
def _omp_path_residues(X_train, y_train, X_test, y_test, copy=True,
fit_intercept=True, normalize=True, max_iter=100):
"""Compute the residues on left-out data for a full LARS path.
Parameters
----------
X_train : ndarray of shape (n_samples, n_features)
The data to fit the LARS on.
y_train : ndarray of shape (n_samples)
The target variable to fit LARS on.
X_test : ndarray of shape (n_samples, n_features)
The data to compute the residues on.
y_test : ndarray of shape (n_samples)
The target variable to compute the residues on.
copy : bool, default=True
Whether X_train, X_test, y_train and y_test should be copied. If
False, they may be overwritten.
fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(i.e. data is expected to be centered).
normalize : bool, default=True
This parameter is ignored when ``fit_intercept`` is set to False.
If True, the regressors X will be normalized before regression by
subtracting the mean and dividing by the l2-norm.
If you wish to standardize, please use
:class:`~sklearn.preprocessing.StandardScaler` before calling ``fit``
on an estimator with ``normalize=False``.
max_iter : int, default=100
Maximum numbers of iterations to perform, therefore maximum features
to include. 100 by default.
Returns
-------
residues : ndarray of shape (n_samples, max_features)
Residues of the prediction on the test data.
"""
if copy:
X_train = X_train.copy()
y_train = y_train.copy()
X_test = X_test.copy()
y_test = y_test.copy()
if fit_intercept:
X_mean = X_train.mean(axis=0)
X_train -= X_mean
X_test -= X_mean
y_mean = y_train.mean(axis=0)
y_train = as_float_array(y_train, copy=False)
y_train -= y_mean
y_test = as_float_array(y_test, copy=False)
y_test -= y_mean
if normalize:
norms = np.sqrt(np.sum(X_train ** 2, axis=0))
nonzeros = np.flatnonzero(norms)
X_train[:, nonzeros] /= norms[nonzeros]
coefs = orthogonal_mp(X_train, y_train, n_nonzero_coefs=max_iter, tol=None,
precompute=False, copy_X=False,
return_path=True)
if coefs.ndim == 1:
coefs = coefs[:, np.newaxis]
if normalize:
coefs[nonzeros] /= norms[nonzeros][:, np.newaxis]
return np.dot(coefs.T, X_test.T) - y_test
|
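A brief usage sketch of the public sklearn.linear_model.orthogonal_mp path mode that the helper above relies on, shown on synthetic data; the array shapes in the comments are indicative rather than guaranteed.

import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
true_coef = np.zeros(10)
true_coef[[2, 7]] = [1.5, -2.0]
y = X @ true_coef + 0.01 * rng.randn(50)

# Coefficient path: one column per OMP step (here up to 3 selected atoms).
path = orthogonal_mp(X, y, n_nonzero_coefs=3, return_path=True)
print(path.shape)                   # typically (10, 3) for this setup
residues = X @ path - y[:, None]    # residuals of each step, as in the helper above
print(residues.shape)               # (50, n_steps)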
39,177 |
def _get_pattern():
pattern = os.getenv('GALLERY_PATTERN')
# If BUILD_GALLERY is falsy -> no build
    # If BUILD_GALLERY is truthy -> build
# If BUILD_GALLERY is undefined
# If GALLERY_PATTERN is defined -> build
# If GALLERY_PATTERN is not defined -> not build
if not _get_var('BUILD_GALLERY', default=False if pattern is None else True):
if pattern is not None:
print(
' --- WARNING: "GALLERY_PATTERN" is provided, but "BUILD_GALLERY" valus is falsy. '
'Sphinx galleries are not built. To build galleries, set `BUILD_GALLERY=1`.'
)
return {
'ignore_pattern': r'\.py',
}
ret = {'filename_pattern': 'tutorial.py'}
if os.getenv('GALLERY_PATTERN'):
# See https://github.com/pytorch/tutorials/blob/cbf2238df0e78d84c15bd94288966d2f4b2e83ae/conf.py#L75-L83
ret['ignore_pattern'] = r'/(?!' + re.escape(os.getenv('GALLERY_PATTERN')) + r')[^/]+$'
return ret
|
def _get_pattern():
pattern = os.getenv('GALLERY_PATTERN')
# If BUILD_GALLERY is falsy -> no build
    # If BUILD_GALLERY is truthy -> build
# If BUILD_GALLERY is undefined
# If GALLERY_PATTERN is defined -> build
# If GALLERY_PATTERN is not defined -> not build
if not _get_var('BUILD_GALLERY', default=False if pattern is None else True):
if pattern is not None:
print(
' --- WARNING: "GALLERY_PATTERN" is provided, but "BUILD_GALLERY" value is falsy. '
'Sphinx galleries are not built. To build galleries, set `BUILD_GALLERY=1`.'
)
return {
'ignore_pattern': r'\.py',
}
ret = {'filename_pattern': 'tutorial.py'}
if os.getenv('GALLERY_PATTERN'):
# See https://github.com/pytorch/tutorials/blob/cbf2238df0e78d84c15bd94288966d2f4b2e83ae/conf.py#L75-L83
ret['ignore_pattern'] = r'/(?!' + re.escape(os.getenv('GALLERY_PATTERN')) + r')[^/]+$'
return ret
|
10,601 |
def recursive_diff(dict1, dict2):
if not isinstance(dict1, dict) or not isinstance(dict2, dict):
return None
left = dict((k, v) for (k, v) in dict1.items() if k not in dict2)
right = dict((k, v) for (k, v) in dict2.items() if k not in dict1)
for k in (set(dict1.keys()) & set(dict2.keys())):
if isinstance(dict1[k], dict) and isinstance(dict2[k], dict):
result = recursive_diff(dict1[k], dict2[k])
if result:
left[k] = result[0]
right[k] = result[1]
elif dict1[k] != dict2[k]:
left[k] = dict1[k]
right[k] = dict2[k]
if left or right:
return left, right
else:
return None
|
def recursive_diff(dict1, dict2):
if not isinstance(dict1, MutableMapping) or not isinstance(dict2, MutableMapping):
return None
left = dict((k, v) for (k, v) in dict1.items() if k not in dict2)
right = dict((k, v) for (k, v) in dict2.items() if k not in dict1)
for k in (set(dict1.keys()) & set(dict2.keys())):
if isinstance(dict1[k], dict) and isinstance(dict2[k], dict):
result = recursive_diff(dict1[k], dict2[k])
if result:
left[k] = result[0]
right[k] = result[1]
elif dict1[k] != dict2[k]:
left[k] = dict1[k]
right[k] = dict2[k]
if left or right:
return left, right
else:
return None
|
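A quick usage example for recursive_diff above (either variant, since dict is a MutableMapping); the left/right split mirrors which side each differing key comes from.

d1 = {'a': 1, 'b': {'c': 2, 'd': 3}}
d2 = {'a': 1, 'b': {'c': 2, 'd': 4}, 'e': 5}

left, right = recursive_diff(d1, d2)
assert left == {'b': {'d': 3}}
assert right == {'e': 5, 'b': {'d': 4}}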
51,397 |
def get_value(sensor: Sensor, field: str) -> float | int | str:
"""Get the value of a sensor field."""
value = sensor.data[field]["values"][-1]["s"]
# Check if it is already a float or int
if isinstance(value, int):
return value
if isinstance(value, float):
return value
if (value.find("-") <= 0) and value.replace(
"-", "", 1
).isdigit(): # Check if int, then return int
return int(value)
if (
(value.find("-") <= 0)
and (value.count(".") < 2)
and (value.replace("-", "", 1).replace(".", "", 1).isdigit())
): # Check if float, then return float
return float(value)
# Return string value of field
return str(value)
|
def get_value(sensor: Sensor, field: str) -> float | int | str:
"""Get the value of a sensor field."""
value = sensor.data[field]["values"][-1]["s"]
try:
value = float(value)
except ValueError:
return value # str
return int(value) if value.is_integer() else value # int, else float
|
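A few illustrative calls showing what the float/int coercion in the rewritten get_value is meant to do; the helper below is a stand-alone stand-in (no Sensor object needed) and is an assumption, not part of the library.

def coerce(value):
    """Stand-in helper: return int or float when the value parses as a number, else return it unchanged."""
    try:
        number = float(value)
    except (TypeError, ValueError):
        return value
    return int(number) if number.is_integer() else number

assert coerce("42") == 42
assert coerce("-3.5") == -3.5
assert coerce("on") == "on"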
59,264 |
def _run_pip(args, additional_paths=None):
# Run the bootstrapping in a subprocess to avoid leaking any state that happens
# after pip has executed. Particularly, this avoids the case when pip holds onto
    # the files in *additional_paths*, preventing us from removing them at the end of the
# invocation.
code = f"""
import runpy
import sys
sys.path = {additional_paths or []} + sys.path
sys.argv[1:] = {args}
runpy.run_module("pip", run_name="__main__", alter_sys=True)
"""
cmd = [
sys.executable,
# run code in isolated mode if currently running isolated
*([ '-I'] if sys.flags.isolated else []),
'-W', 'ignore::DeprecationWarning',
'-c',
code ]
return subprocess.run( cmd, check = True ).returncode
|
def _run_pip(args, additional_paths=None):
# Run the bootstrapping in a subprocess to avoid leaking any state that happens
# after pip has executed. Particularly, this avoids the case when pip holds onto
    # the files in *additional_paths*, preventing us from removing them at the end of the
# invocation.
code = f"""
import runpy
import sys
sys.path = {additional_paths or []} + sys.path
sys.argv[1:] = {args}
runpy.run_module("pip", run_name="__main__", alter_sys=True)
"""
cmd = [
sys.executable,
'-W',
'ignore::DeprecationWarning',
'-c',
code,
]
if sys.flags.isolated:
# run code in isolated mode if currently running isolated
cmd.insert(1, '-I')
return subprocess.run(cmd, check=True).returncode
|
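A hedged usage sketch for _run_pip above; the wheel path and package name are hypothetical placeholders, and the call shells out to pip, so it is shown for illustration only.

# Hypothetical: put a vendored pip wheel on sys.path, then install a package.
# Both the wheel path and the package name below are made-up placeholders.
if __name__ == "__main__":
    _run_pip(
        ["install", "--no-cache-dir", "examplepkg"],
        additional_paths=["/tmp/vendored/pip-24.0-py3-none-any.whl"],
    )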
20,557 |
def display_open(file, message="\nDone! To view results"):
"""Print the syntax to open a file based on the platform."""
cmd_open = None
if sys.platform.startswith('linux'):
        # If the user runs SCT within the official Docker distribution, or in WSL, then the xdg-open command will
        # not work, therefore we prefer to instruct the user to manually open the file.
# Source for WSL environment variables: https://stackoverflow.com/a/61036356
if "DOCKER" not in os.environ and "IS_WSL" not in os.environ and "WSL_DISTRO_NAME" not in os.environ:
cmd_open = 'xdg-open'
elif sys.platform.startswith('darwin'):
cmd_open = 'open'
elif sys.platform.startswith('win32'):
cmd_open = 'start'
if cmd_open:
printv(f'{message}, type:')
printv(f"{cmd_open} {file}\n", verbose=1, type='info')
else:
printv(f'{message}, open the following file:')
printv(f"{file}\n", verbose=1, type='info')
|
def display_open(file, message="\nDone! To view results"):
"""Print the syntax to open a file based on the platform."""
cmd_open = None
if sys.platform.startswith('linux'):
        # If the user runs SCT within the official Docker distribution, or in WSL, then the xdg-open command will
        # not work, therefore we prefer to instruct the user to manually open the file.
# Source for WSL environment variables: https://stackoverflow.com/a/61036356
if "DOCKER" not in os.environ and "IS_WSL" not in os.environ and "WSL_DISTRO_NAME" not in os.environ:
cmd_open = 'xdg-open'
elif sys.platform.startswith('darwin'):
cmd_open = 'open'
elif sys.platform.startswith('win32'):
cmd_open = 'start'
if cmd_open:
printv(f'{message}, type:')
printv(f"{cmd_open} {file}\n", verbose=1, type='info')
else:
printv(f'{message}, open the following file:')
printv(f"{file}\n", type='info')
|
40,237 |
def reshape(lst, shape):
"""Gives a new shape to an array without changing its data.
This function mimicks the functionality of ``numpy.reshape`` [1]_, but in a simpler form.
Parameters
----------
lst : list
A list of items.
shape : int or tuple of ints
The new shape of the list
Examples
--------
>>> a = [1, 2, 3, 4, 5, 6]
>>> reshape(a, (2, 3))
[[1, 2, 3], [4, 5, 6]]
>>> reshape(a, (3, 2))
[[1, 2], [3, 4], [5, 6]]
References
----------
.. [1] ``numpy.reshape`` Available at https://numpy.org/doc/stable/reference/generated/numpy.reshape.html
"""
if len(shape) == 1:
return lst
if len(lst) != reduce(lambda x, y: x * y, shape):
raise ValueError("ValueError: cannot reshape array of size %d into shape %s" % (len(lst), shape))
n = reduce(mul, shape[1:])
return [reshape(lst[i * n:(i + 1) * n], shape[1:]) for i in range(len(lst) // n)]
|
def reshape(lst, shape):
"""Gives a new shape to an array without changing its data.
This function mimics the functionality of ``numpy.reshape`` [1]_, but in a simpler form.
Parameters
----------
lst : list
A list of items.
shape : int or tuple of ints
The new shape of the list
Examples
--------
>>> a = [1, 2, 3, 4, 5, 6]
>>> reshape(a, (2, 3))
[[1, 2, 3], [4, 5, 6]]
>>> reshape(a, (3, 2))
[[1, 2], [3, 4], [5, 6]]
References
----------
.. [1] ``numpy.reshape`` Available at https://numpy.org/doc/stable/reference/generated/numpy.reshape.html
"""
if len(shape) == 1:
return lst
if len(lst) != reduce(lambda x, y: x * y, shape):
raise ValueError("ValueError: cannot reshape array of size %d into shape %s" % (len(lst), shape))
n = reduce(mul, shape[1:])
return [reshape(lst[i * n:(i + 1) * n], shape[1:]) for i in range(len(lst) // n)]
|
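Beyond the 2-D examples in the docstring, the recursion also handles deeper shapes; a quick check, noting that the function above needs reduce from functools and mul from operator to be in scope.

from functools import reduce   # required by reshape above
from operator import mul       # required by reshape above

a = list(range(12))
assert reshape(a, (2, 2, 3)) == [[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [9, 10, 11]]]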
39,637 |
def prepare_data(source_fnames: List[str],
target_fname: str,
source_vocabs: List[vocab.Vocab],
target_vocab: vocab.Vocab,
source_vocab_paths: List[Optional[str]],
target_vocab_path: Optional[str],
shared_vocab: bool,
max_seq_len_source: int,
max_seq_len_target: int,
bucketing: bool,
bucket_width: int,
samples_per_shard: int,
min_num_shards: int,
output_prefix: str,
bucket_scaling: bool = True,
keep_tmp_shard_files: bool = False,
max_processes: int = None):
logger.info("Preparing data.")
# write vocabularies to data folder
vocab.save_source_vocabs(source_vocabs, output_prefix)
vocab.save_target_vocab(target_vocab, output_prefix)
# Pass 1: get target/source length ratios.
length_statistics = analyze_sequence_lengths(source_fnames, target_fname, source_vocabs, target_vocab,
max_seq_len_source, max_seq_len_target)
check_condition(length_statistics.num_sents > 0,
"No training sequences found with length smaller or equal than the maximum sequence length."
"Consider increasing %s" % C.TRAINING_ARG_MAX_SEQ_LEN)
# define buckets
buckets = define_parallel_buckets(max_seq_len_source, max_seq_len_target, bucket_width, bucket_scaling,
length_statistics.length_ratio_mean) if bucketing else [(max_seq_len_source,
max_seq_len_target)]
logger.info("Buckets: %s", buckets)
# Pass 2: Randomly assign data to data shards
# no pre-processing yet, just write the sentences to different files
num_shards = get_num_shards(length_statistics.num_sents, samples_per_shard, min_num_shards)
logger.info("%d samples will be split into %d shard(s) (requested samples/shard=%d, min_num_shards=%d)."
% (length_statistics.num_sents, num_shards, samples_per_shard, min_num_shards))
shards, data_statistics = shard_data(source_fnames=source_fnames,
target_fname=target_fname,
source_vocabs=source_vocabs,
target_vocab=target_vocab,
num_shards=num_shards,
buckets=buckets,
length_ratio_mean=length_statistics.length_ratio_mean,
length_ratio_std=length_statistics.length_ratio_std,
output_prefix=output_prefix)
data_statistics.log()
data_loader = RawParallelDatasetLoader(buckets=buckets,
eos_id=C.EOS_ID,
pad_id=C.PAD_ID)
# 3. convert each shard to serialized ndarrays
if not max_processes:
logger.info("Processing shards sequentily.")
# Process shards sequantially woithout using multiprocessing
for shard_idx, (shard_sources, shard_target, shard_stats) in enumerate(shards):
process_shard(shard_idx, data_loader, shard_sources, shard_target,
shard_stats, output_prefix, keep_tmp_shard_files)
else:
logger.info(f"Processing shards using {max_processes} processes.")
        # Process shards in parallel using max_processes processes
results = []
pool = multiprocessing.pool.Pool(processes=max_processes)
for shard_idx, (shard_sources, shard_target, shard_stats) in enumerate(shards):
result = pool.apply_async(process_shard, args=(shard_idx, data_loader, shard_sources, shard_target,
shard_stats, output_prefix, keep_tmp_shard_files))
results.append(result)
pool.close()
pool.join()
for result in results:
if not result.successful():
logger.error("Process ended in error.")
raise RuntimeError("Shard processing fail")
data_info = DataInfo(sources=[os.path.abspath(fname) for fname in source_fnames],
target=os.path.abspath(target_fname),
source_vocabs=source_vocab_paths,
target_vocab=target_vocab_path,
shared_vocab=shared_vocab,
num_shards=num_shards)
data_info_fname = os.path.join(output_prefix, C.DATA_INFO)
logger.info("Writing data info to '%s'", data_info_fname)
data_info.save(data_info_fname)
config_data = DataConfig(data_statistics=data_statistics,
max_seq_len_source=max_seq_len_source,
max_seq_len_target=max_seq_len_target,
num_source_factors=len(source_fnames))
config_data_fname = os.path.join(output_prefix, C.DATA_CONFIG)
logger.info("Writing data config to '%s'", config_data_fname)
config_data.save(config_data_fname)
version_file = os.path.join(output_prefix, C.PREPARED_DATA_VERSION_FILE)
with open(version_file, "w") as version_out:
version_out.write(str(C.PREPARED_DATA_VERSION))
|
def prepare_data(source_fnames: List[str],
target_fname: str,
source_vocabs: List[vocab.Vocab],
target_vocab: vocab.Vocab,
source_vocab_paths: List[Optional[str]],
target_vocab_path: Optional[str],
shared_vocab: bool,
max_seq_len_source: int,
max_seq_len_target: int,
bucketing: bool,
bucket_width: int,
samples_per_shard: int,
min_num_shards: int,
output_prefix: str,
bucket_scaling: bool = True,
keep_tmp_shard_files: bool = False,
max_processes: int = None):
logger.info("Preparing data.")
# write vocabularies to data folder
vocab.save_source_vocabs(source_vocabs, output_prefix)
vocab.save_target_vocab(target_vocab, output_prefix)
# Pass 1: get target/source length ratios.
length_statistics = analyze_sequence_lengths(source_fnames, target_fname, source_vocabs, target_vocab,
max_seq_len_source, max_seq_len_target)
check_condition(length_statistics.num_sents > 0,
"No training sequences found with length smaller or equal than the maximum sequence length."
"Consider increasing %s" % C.TRAINING_ARG_MAX_SEQ_LEN)
# define buckets
buckets = define_parallel_buckets(max_seq_len_source, max_seq_len_target, bucket_width, bucket_scaling,
length_statistics.length_ratio_mean) if bucketing else [(max_seq_len_source,
max_seq_len_target)]
logger.info("Buckets: %s", buckets)
# Pass 2: Randomly assign data to data shards
# no pre-processing yet, just write the sentences to different files
num_shards = get_num_shards(length_statistics.num_sents, samples_per_shard, min_num_shards)
logger.info("%d samples will be split into %d shard(s) (requested samples/shard=%d, min_num_shards=%d)."
% (length_statistics.num_sents, num_shards, samples_per_shard, min_num_shards))
shards, data_statistics = shard_data(source_fnames=source_fnames,
target_fname=target_fname,
source_vocabs=source_vocabs,
target_vocab=target_vocab,
num_shards=num_shards,
buckets=buckets,
length_ratio_mean=length_statistics.length_ratio_mean,
length_ratio_std=length_statistics.length_ratio_std,
output_prefix=output_prefix)
data_statistics.log()
data_loader = RawParallelDatasetLoader(buckets=buckets,
eos_id=C.EOS_ID,
pad_id=C.PAD_ID)
# 3. convert each shard to serialized ndarrays
if not max_processes:
logger.info("Processing shards sequentily.")
# Process shards sequantially woithout using multiprocessing
for shard_idx, (shard_sources, shard_target, shard_stats) in enumerate(shards):
process_shard(shard_idx, data_loader, shard_sources, shard_target,
shard_stats, output_prefix, keep_tmp_shard_files)
else:
logger.info(f"Processing shards using {max_processes} processes.")
        # Process shards in parallel using max_processes processes
results = []
pool = multiprocessing.pool.Pool(processes=max_processes)
for shard_idx, (shard_sources, shard_target, shard_stats) in enumerate(shards):
result = pool.apply_async(process_shard, args=(shard_idx, data_loader, shard_sources, shard_target,
shard_stats, output_prefix, keep_tmp_shard_files))
results.append(result)
pool.close()
pool.join()
for result in results:
if not result.successful():
logger.error("Process ended in error.")
raise RuntimeError("Shard processing failed.")
data_info = DataInfo(sources=[os.path.abspath(fname) for fname in source_fnames],
target=os.path.abspath(target_fname),
source_vocabs=source_vocab_paths,
target_vocab=target_vocab_path,
shared_vocab=shared_vocab,
num_shards=num_shards)
data_info_fname = os.path.join(output_prefix, C.DATA_INFO)
logger.info("Writing data info to '%s'", data_info_fname)
data_info.save(data_info_fname)
config_data = DataConfig(data_statistics=data_statistics,
max_seq_len_source=max_seq_len_source,
max_seq_len_target=max_seq_len_target,
num_source_factors=len(source_fnames))
config_data_fname = os.path.join(output_prefix, C.DATA_CONFIG)
logger.info("Writing data config to '%s'", config_data_fname)
config_data.save(config_data_fname)
version_file = os.path.join(output_prefix, C.PREPARED_DATA_VERSION_FILE)
with open(version_file, "w") as version_out:
version_out.write(str(C.PREPARED_DATA_VERSION))
|
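A stripped-down sketch of the apply_async fan-out and error-check pattern used for the shards above, with a trivial stand-in task instead of process_shard; everything here is generic multiprocessing, not sockeye-specific.

import multiprocessing.pool

def square(n):
    """Stand-in for process_shard: a cheap, picklable worker task."""
    return n * n

if __name__ == "__main__":
    results = []
    pool = multiprocessing.pool.Pool(processes=2)
    for n in range(4):
        results.append(pool.apply_async(square, args=(n,)))
    pool.close()
    pool.join()
    # successful() is only valid once the results are ready (after join),
    # mirroring the post-join check in prepare_data above.
    assert all(r.successful() for r in results)
    assert [r.get() for r in results] == [0, 1, 4, 9]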
58,127 |
def get_alerts_helper(alert_info: Dict, aid: Optional[int], human_readable: bool) -> Tuple[Dict, Dict]:
"""
Prepare the context and human readable data for Alerts
Args:
alert_info (Dict): Dict information of an Active Alert
aid (int (Optional)): AID to prepare data for
human_readable (boolean): Flag to prepare human readable data or not
Returns:
Tuple of Context Data and Human Readable Data.
"""
entry_context = {}
human_readable_data = {}
if alert_info:
if human_readable:
human_readable_data = assign_params(
**{
"Active": alert_info.get('active'),
"Agents": alert_info.get('agents'),
"AID": aid,
"Alert ID": alert_info.get('alertId'),
"Date Start": alert_info.get('dateStart'),
"API Links": alert_info.get('apiLinks'),
"Perma Link": alert_info.get('permalink'),
"Rule Expression": alert_info.get('ruleExpression'),
"Rule ID": alert_info.get('ruleId'),
"Rule Name": alert_info.get('ruleName'),
"Test ID": alert_info.get('testId'),
"Test Name": alert_info.get('testName'),
"Violation Count": alert_info.get('violationCount'),
"Type": alert_info.get('type'),
"Severity": alert_info.get('severity')
}
)
entry_context = assign_params(
**{
"Active": alert_info.get('active'),
"Agents": alert_info.get('agents'),
"AID": aid,
"AlertID": alert_info.get('alertId'),
"DateStart": alert_info.get('dateStart'),
"ApiLinks": alert_info.get('apiLinks'),
"PermaLink": alert_info.get('permalink'),
"RuleExpression": alert_info.get('ruleExpression'),
"RuleID": alert_info.get('ruleId'),
"RuleName": alert_info.get('ruleName'),
"TestID": alert_info.get('testId'),
"TestName": alert_info.get('testName'),
"ViolationCount": alert_info.get('violationCount'),
"Type": alert_info.get('type'),
"Severity": alert_info.get('severity')
}
)
return (entry_context, human_readable_data)
|
def get_alerts_helper(alert_info: Dict, aid: Optional[int], human_readable: bool) -> Tuple[Dict, Dict]:
"""
Prepare the context and human readable data for Alerts
Args:
alert_info (Dict): Dict information of an Active Alert
aid (int (Optional)): AID to prepare data for
human_readable (boolean): Flag to prepare human readable data or not
Returns:
Tuple of Context Data and Human Readable Data.
"""
entry_context = {}
human_readable_data = {}
if alert_info:
prepared_alert_info = {
"Active": alert_info.get('active'),
"Agents": alert_info.get('agents'),
"AID": aid,
"AlertID": alert_info.get('alertId'),
"DateStart": alert_info.get('dateStart'),
"ApiLinks": alert_info.get('apiLinks'),
"PermaLink": alert_info.get('permalink'),
"RuleExpression": alert_info.get('ruleExpression'),
"RuleID": alert_info.get('ruleId'),
"RuleName": alert_info.get('ruleName'),
"TestID": alert_info.get('testId'),
"TestName": alert_info.get('testName'),
"ViolationCount": alert_info.get('violationCount'),
"Type": alert_info.get('type'),
"Severity": alert_info.get('severity')
}
if human_readable:
human_readable_data = assign_params(
**prepared_alert_info
)
entry_context = assign_params(
**prepared_alert_info
)
return (entry_context, human_readable_data)
|
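The refactor above builds prepared_alert_info once and feeds it to assign_params for both the context entry and the human-readable table. Assuming assign_params behaves like XSOAR's CommonServerPython helper (dropping None/empty values), a rough stand-in illustrating that filtering:

def assign_params_stub(**kwargs):
    # stand-in for assign_params: keep only non-empty, non-None values (assumed behaviour)
    return {key: value for key, value in kwargs.items() if value not in (None, '', [], {}, ())}

prepared_alert_info = {'AlertID': 1234, 'RuleName': None, 'Agents': []}
print(assign_params_stub(**prepared_alert_info))  # {'AlertID': 1234}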
51,384 |
def average(*args: Any, default: Any = _SENTINEL) -> Any:
"""
Filter and function to calculate the arithmetic mean of an iterable or of two or more arguments.
The parameters may be passed as an iterable or as separate arguments.
"""
if len(args) == 0:
raise TypeError("average expected at least 1 argument, got 0")
# If first argument is iterable, more then 1 argument provided but not named default
# than use 2nd argument as default.
if isinstance(args[0], Iterable):
average_list = args[0]
if len(args) > 1 and default is _SENTINEL:
default = args[1]
elif len(args) == 1:
raise TypeError(f"'{type(args[0]).__name__}' object is not iterable")
else:
average_list = args
try:
return statistics.fmean(average_list)
except (TypeError, statistics.StatisticsError):
if default is _SENTINEL:
raise_no_default("average", args)
return default
|
def average(*args: Any, default: Any = _SENTINEL) -> Any:
"""
Filter and function to calculate the arithmetic mean of an iterable or of two or more arguments.
The parameters may be passed as an iterable or as separate arguments.
"""
if len(args) == 0:
raise TypeError("average expected at least 1 argument, got 0")
# If first argument is an iterable and more than 1 argument provided but not a named default,
# then use the 2nd argument as default.
if isinstance(args[0], Iterable):
average_list = args[0]
if len(args) > 1 and default is _SENTINEL:
default = args[1]
elif len(args) == 1:
raise TypeError(f"'{type(args[0]).__name__}' object is not iterable")
else:
average_list = args
try:
return statistics.fmean(average_list)
except (TypeError, statistics.StatisticsError):
if default is _SENTINEL:
raise_no_default("average", args)
return default
|
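As the docstring above describes, average() accepts either a single iterable or separate arguments, with an optional fallback default. Assuming statistics.fmean semantics and the module-level _SENTINEL/raise_no_default helpers, the call forms behave roughly as follows:

>>> average([1, 2, 3])    # iterable form
2.0
>>> average(1, 2, 3)      # separate arguments
2.0
>>> average([], 0)        # a second positional argument acts as the default
0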
14,611 |
def check_generate_predictions(use_feature_hashing=False,
use_threshold=False,
test_on_subset=False,
all_probs=False):
# create some simple classification feature sets for training and testing
train_fs, test_fs = make_classification_data(num_examples=1000,
num_features=5,
use_feature_hashing=use_feature_hashing,
feature_bins=4)
proba = use_threshold or all_probs
# create a learner that uses an SGD classifier
learner = Learner('SGDClassifier', probability=proba)
# train the learner with grid search
learner.train(train_fs, grid_search=True)
# if we are asked to use only a subset, then filter out
# one of the features if we are not using feature hashing,
# do nothing if we are using feature hashing
if test_on_subset and not use_feature_hashing:
test_fs.filter(features=['f01', 'f02', 'f03', 'f04'])
# get the predictions on the test featureset
predictions = learner.predict(test_fs)
# if we asked for probabilities, then use the threshold
# to convert them into binary predictions
if use_threshold:
threshold = 0.6
predictions = [int(p[1] >= threshold) for p in predictions]
else:
predictions = predictions.tolist()
threshold = None
# save the learner to a file
model_file = join(_my_dir, 'output',
'test_generate_predictions.model')
learner.save(model_file)
# now use Predictor to generate the predictions and make
# sure that they are the same as before saving the model
p = gp.Predictor(model_file, threshold=threshold,
return_all_probabilities=all_probs)
predictions_after_saving = p.predict(test_fs)
eq_(predictions, predictions_after_saving)
|
def check_generate_predictions(use_feature_hashing=False,
use_threshold=False,
test_on_subset=False,
all_probs=False):
# create some simple classification feature sets for training and testing
train_fs, test_fs = make_classification_data(num_examples=1000,
num_features=5,
use_feature_hashing=use_feature_hashing,
feature_bins=4)
enable_probabilities = use_threshold or all_probs
# create a learner that uses an SGD classifier
learner = Learner('SGDClassifier', probability=proba)
# train the learner with grid search
learner.train(train_fs, grid_search=True)
# if we are asked to use only a subset, then filter out
# one of the features if we are not using feature hashing,
# do nothing if we are using feature hashing
if test_on_subset and not use_feature_hashing:
test_fs.filter(features=['f01', 'f02', 'f03', 'f04'])
# get the predictions on the test featureset
predictions = learner.predict(test_fs)
# if we asked for probabilities, then use the threshold
# to convert them into binary predictions
if use_threshold:
threshold = 0.6
predictions = [int(p[1] >= threshold) for p in predictions]
else:
predictions = predictions.tolist()
threshold = None
# save the learner to a file
model_file = join(_my_dir, 'output',
'test_generate_predictions.model')
learner.save(model_file)
# now use Predictor to generate the predictions and make
# sure that they are the same as before saving the model
p = gp.Predictor(model_file, threshold=threshold,
return_all_probabilities=all_probs)
predictions_after_saving = p.predict(test_fs)
eq_(predictions, predictions_after_saving)
|
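The thresholding step above (`predictions = [int(p[1] >= threshold) for p in predictions]`) converts per-class probability pairs into binary labels; a tiny standalone illustration with made-up probabilities:

threshold = 0.6
probabilities = [(0.7, 0.3), (0.2, 0.8), (0.45, 0.55)]  # (P(class 0), P(class 1)) per example
binary_predictions = [int(p[1] >= threshold) for p in probabilities]
print(binary_predictions)  # [0, 1, 0]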
6,076 |
def getQueuesResolved(siteDict):
"""
Get the list of queue descriptions merging site/ce/queue parameters and adding some
derived parameters.
:param dict siteDict: dictionary with configuration data as returned by Resources.getQueues() method
:return: S_OK/S_ERROR, Value dictionary per queue with configuration data updated, e.g. for SiteDirector
"""
queueDict = {}
for site in siteDict:
for ce in siteDict[site]:
ceDict = siteDict[site][ce]
qDict = ceDict.pop('Queues')
for queue in qDict:
queueName = '%s_%s' % (ce, queue)
queueDict[queueName] = qDict[queue]
queueDict[queueName]['Queue'] = queue
queueDict[queueName]['Site'] = site
# Evaluate the CPU limit of the queue according to the Glue convention
# To Do: should be a utility
if "maxCPUTime" in queueDict[queueName] and \
"SI00" in queueDict[queueName]:
maxCPUTime = float(queueDict[queueName]['maxCPUTime'])
# For some sites there are crazy values in the CS
maxCPUTime = max(maxCPUTime, 0)
maxCPUTime = min(maxCPUTime, 86400 * 12.5)
si00 = float(queueDict[queueName]['SI00'])
queueCPUTime = 60. / 250. * maxCPUTime * si00
queueDict[queueName]['CPUTime'] = int(queueCPUTime)
# Tags & RequiredTags defined on the Queue level and on the CE level are concatenated
# This also converts them from a string to a list if required.
for tagFieldName in ('Tag', 'RequiredTag'):
ceTags = ceDict.get(tagFieldName, [])
if isinstance(ceTags, basestring):
ceTags = fromChar(ceTags)
queueTags = queueDict[queueName].get(tagFieldName)
if queueTags and isinstance(queueTags, basestring):
queueTags = fromChar(queueTags)
queueDict[queueName][tagFieldName] = queueTags
if ceTags:
if queueTags:
allTags = list(set(ceTags + queueTags))
queueDict[queueName][tagFieldName] = allTags
else:
queueDict[queueName][tagFieldName] = ceTags
# Some parameters can be defined on the CE level and are inherited by all Queues
for parameter in ['MaxRAM', 'NumberOfProcessors', 'WholeNode']:
queueParameter = queueDict[queueName].get(parameter)
ceParameter = ceDict.get(parameter)
if ceParameter or queueParameter:
queueDict[queueName][parameter] = ceParameter if not queueParameter \
else queueParameter
# If we have a multi-core queue add MultiProcessor tag
if queueDict[queueName].get('NumberOfProcessors', 1) > 1:
queueDict[queueName].setdefault('Tag', []).append('MultiProcessor')
queueDict[queueName]['CEName'] = ce
queueDict[queueName]['GridCE'] = ce
queueDict[queueName]['CEType'] = ceDict['CEType']
queueDict[queueName]['GridMiddleware'] = ceDict['CEType']
queueDict[queueName]['QueueName'] = queue
platform = ''
if "Platform" in queueDict[queueName]:
platform = queueDict[queueName]['Platform']
elif "Platform" in ceDict:
platform = ceDict['Platform']
elif "OS" in ceDict:
architecture = ceDict.get('architecture', 'x86_64')
platform = '_'.join([architecture, ceDict['OS']])
queueDict[queueName]['Platform'] = platform
if "Platform" not in queueDict[queueName] and platform:
result = getDIRACPlatform(platform)
if result['OK']:
queueDict[queueName]['Platform'] = result['Value'][0]
return S_OK(queueDict)
|
def getQueuesResolved(siteDict):
"""
Get the list of queue descriptions merging site/ce/queue parameters and adding some
derived parameters.
:param dict siteDict: dictionary with configuration data as returned by Resources.getQueues() method
:return: S_OK/S_ERROR, Value dictionary per queue with configuration data updated, e.g. for SiteDirector
"""
queueDict = {}
for site in siteDict:
for ce in siteDict[site]:
ceDict = siteDict[site][ce]
qDict = ceDict.pop('Queues')
for queue in qDict:
queueName = '%s_%s' % (ce, queue)
queueDict[queueName] = qDict[queue]
queueDict[queueName]['Queue'] = queue
queueDict[queueName]['Site'] = site
# Evaluate the CPU limit of the queue according to the Glue convention
# To Do: should be a utility
if "maxCPUTime" in queueDict[queueName] and \
"SI00" in queueDict[queueName]:
maxCPUTime = float(queueDict[queueName]['maxCPUTime'])
# For some sites there are crazy values in the CS
maxCPUTime = max(maxCPUTime, 0)
maxCPUTime = min(maxCPUTime, 86400 * 12.5)
si00 = float(queueDict[queueName]['SI00'])
queueCPUTime = 60. / 250. * maxCPUTime * si00
queueDict[queueName]['CPUTime'] = int(queueCPUTime)
# Tags & RequiredTags defined on the Queue level and on the CE level are concatenated
# This also converts them from a string to a list if required.
for tagFieldName in ('Tag', 'RequiredTag'):
ceTags = ceDict.get(tagFieldName, [])
if isinstance(ceTags, basestring):
ceTags = fromChar(ceTags)
queueTags = queueDict[queueName].get(tagFieldName)
if queueTags and isinstance(queueTags, basestring):
queueTags = fromChar(queueTags)
queueDict[queueName][tagFieldName] = list(set(ceTags + queueTags))
if ceTags:
if queueTags:
allTags = list(set(ceTags + queueTags))
queueDict[queueName][tagFieldName] = allTags
else:
queueDict[queueName][tagFieldName] = ceTags
# Some parameters can be defined on the CE level and are inherited by all Queues
for parameter in ['MaxRAM', 'NumberOfProcessors', 'WholeNode']:
queueParameter = queueDict[queueName].get(parameter)
ceParameter = ceDict.get(parameter)
if ceParameter or queueParameter:
queueDict[queueName][parameter] = ceParameter if not queueParameter \
else queueParameter
# If we have a multi-core queue add MultiProcessor tag
if queueDict[queueName].get('NumberOfProcessors', 1) > 1:
queueDict[queueName].setdefault('Tag', []).append('MultiProcessor')
queueDict[queueName]['CEName'] = ce
queueDict[queueName]['GridCE'] = ce
queueDict[queueName]['CEType'] = ceDict['CEType']
queueDict[queueName]['GridMiddleware'] = ceDict['CEType']
queueDict[queueName]['QueueName'] = queue
platform = ''
if "Platform" in queueDict[queueName]:
platform = queueDict[queueName]['Platform']
elif "Platform" in ceDict:
platform = ceDict['Platform']
elif "OS" in ceDict:
architecture = ceDict.get('architecture', 'x86_64')
platform = '_'.join([architecture, ceDict['OS']])
queueDict[queueName]['Platform'] = platform
if "Platform" not in queueDict[queueName] and platform:
result = getDIRACPlatform(platform)
if result['OK']:
queueDict[queueName]['Platform'] = result['Value'][0]
return S_OK(queueDict)
|
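The CPU-limit block above normalises the queue limit per the Glue convention: queueCPUTime = 60/250 * maxCPUTime * SI00. A worked example with hypothetical CS values (assuming maxCPUTime is expressed in minutes):

maxCPUTime = 2880.0   # hypothetical queue limit from the CS, in minutes
si00 = 2500.0         # hypothetical SI00 benchmark value from the CS
queueCPUTime = 60. / 250. * maxCPUTime * si00
print(int(queueCPUTime))  # 1728000 seconds: 2880 min * 60 s/min * (2500 / 250)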
44,579 |
def load_schemas_from_git(ref, target_dir='schemas'):
tree = ecs_helpers.get_tree_by_ref(ref)
fields_nested = {}
# Handles the case where the target dir doesn't exist in the git ref
if ecs_helpers.path_exists_in_git_tree(tree, target_dir):
for blob in tree[target_dir].blobs:
if blob.name.endswith('.yml'):
new_fields = read_schema_blob(blob, ref)
fields_nested = ecs_helpers.safe_merge_dicts(fields_nested, new_fields)
else:
raise KeyError(f"Target directory './{target_dir}' not present in current git ref!")
return fields_nested
|
def load_schemas_from_git(ref, target_dir='schemas'):
tree = ecs_helpers.get_tree_by_ref(ref)
fields_nested = {}
# Handles the case where the target dir doesn't exist in the git ref
if ecs_helpers.path_exists_in_git_tree(tree, target_dir):
for blob in tree[target_dir].blobs:
if blob.name.endswith('.yml'):
new_fields = read_schema_blob(blob, ref)
fields_nested = ecs_helpers.safe_merge_dicts(fields_nested, new_fields)
else:
raise KeyError(f"Target directory './{target_dir}' not present in git ref '{ref}'!")
return fields_nested
|
16,572 |
def setup_services(hass: HomeAssistant) -> None:
"""Set up the global UniFi Protect services."""
services = [
(
SERVICE_ADD_DOORBELL_TEXT,
functools.partial(add_doorbell_text, hass),
DOORBELL_TEXT_SCHEMA,
),
(
SERVICE_REMOVE_DOORBELL_TEXT,
functools.partial(remove_doorbell_text, hass),
DOORBELL_TEXT_SCHEMA,
),
(
SERVICE_SET_DEFAULT_DOORBELL_TEXT,
functools.partial(set_default_doorbell_text, hass),
DOORBELL_TEXT_SCHEMA,
),
]
for name, method, schema in services:
if hass.services.has_service(DOMAIN, name):
continue
hass.services.async_register(DOMAIN, name, method, schema=schema)
|
def async_setup_services(hass: HomeAssistant) -> None:
"""Set up the global UniFi Protect services."""
services = [
(
SERVICE_ADD_DOORBELL_TEXT,
functools.partial(add_doorbell_text, hass),
DOORBELL_TEXT_SCHEMA,
),
(
SERVICE_REMOVE_DOORBELL_TEXT,
functools.partial(remove_doorbell_text, hass),
DOORBELL_TEXT_SCHEMA,
),
(
SERVICE_SET_DEFAULT_DOORBELL_TEXT,
functools.partial(set_default_doorbell_text, hass),
DOORBELL_TEXT_SCHEMA,
),
]
for name, method, schema in services:
if hass.services.has_service(DOMAIN, name):
continue
hass.services.async_register(DOMAIN, name, method, schema=schema)
|
39,893 |
def paint_worklock_claim(emitter, bidder_address: str):
message = f"""
Successfully claimed WorkLock tokens for {bidder_address}.
Next Steps for Worklock Winners
===============================
See the nucypher official documentation for a comprehensive guide!
Create a stake with your allocation contract:
'nucypher stake create --provider <URI> --staking-address {bidder_address}'
Bond a worker to your stake: 'nucypher stake set-worker --worker-address <WORKER ADDRESS>'
"""
emitter.message(message, color='green')
|
def paint_worklock_claim(emitter, bidder_address: str):
message = f"""
Successfully claimed WorkLock tokens for {bidder_address}.
Next Steps for Worklock Winners
===============================
See the official nucypher documentation for a comprehensive guide!
Create a stake with your allocation contract:
'nucypher stake create --provider <URI> --staking-address {bidder_address}'
Bond a worker to your stake: 'nucypher stake set-worker --worker-address <WORKER ADDRESS>'
"""
emitter.message(message, color='green')
|
53,477 |
def test_base_checker_ordering() -> None:
"""Test ordering of checkers based on their __gt__ method."""
linter = PyLinter()
fake_checker_1 = OtherBasicChecker()
fake_checker_2 = LessBasicChecker()
fake_checker_3 = DifferentBasicChecker()
import_checker = ImportsChecker(linter)
while_checker = WhileChecker(linter)
type_checker = TypeChecker(linter)
list_of_checkers = [
1,
fake_checker_1,
fake_checker_2,
fake_checker_3,
type_checker,
import_checker,
while_checker,
linter,
]
assert sorted(list_of_checkers) == [ # type: ignore[type-var]
linter,
import_checker,
type_checker,
fake_checker_3,
fake_checker_1,
fake_checker_2,
while_checker,
1,
]
assert fake_checker_1 > fake_checker_3
assert fake_checker_2 > fake_checker_3
assert fake_checker_1 == fake_checker_2
|
def test_base_checker_ordering(linter: PyLinter) -> None:
"""Test ordering of checkers based on their __gt__ method."""
fake_checker_1 = OtherBasicChecker()
fake_checker_2 = LessBasicChecker()
fake_checker_3 = DifferentBasicChecker()
import_checker = ImportsChecker(linter)
while_checker = WhileChecker(linter)
type_checker = TypeChecker(linter)
list_of_checkers = [
1,
fake_checker_1,
fake_checker_2,
fake_checker_3,
type_checker,
import_checker,
while_checker,
linter,
]
assert sorted(list_of_checkers) == [ # type: ignore[type-var]
linter,
import_checker,
type_checker,
fake_checker_3,
fake_checker_1,
fake_checker_2,
while_checker,
1,
]
assert fake_checker_1 > fake_checker_3
assert fake_checker_2 > fake_checker_3
assert fake_checker_1 == fake_checker_2
|
8,927 |
def ctcp(function=None, *command_list):
"""Decorate a callable to trigger on CTCP commands (mostly, ``ACTION``).
:param str ctcp_command: one or more CTCP command(s) on which to trigger
(really, the only useful value is ``ACTION``)
.. versionadded:: 7.1
This is now ``ctcp`` instead of ``intent``, and it can be called
without argument, assuming ``ACTION`` in that case.
.. note::
This used to be ``@intent``, for a long dead feature in the IRCv3 spec.
It is now replaced by ``@ctcp``, which can be used without arguments.
In that case, Sopel will trigger on ``ACTION``.
As ``sopel.module`` will be removed in Sopel 9, so will ``@intent``.
"""
default_commands = ('ACTION',) + command_list
if function is None:
return ctcp(*default_commands) # called as ``@ctcp()``
elif callable(function):
# called as ``@ctcp`` or ``@ctcp(function)``
# or even ``@ctcp(function, 'ACTION', ...)``
return ctcp(*default_commands)(function)
# function is not None, and it is not a callable
# called as ``@ctcp('ACTION', ...)``
ctcp_commands = (function,) + command_list
def add_attribute(function):
function._sopel_callable = True
if not hasattr(function, "intents"):
function.intents = []
for name in ctcp_commands:
if name not in function.intents:
function.intents.append(name)
return function
return add_attribute
|
def ctcp(function=None, *command_list):
"""Decorate a callable to trigger on CTCP commands (mostly, ``ACTION``).
:param str ctcp_command: one or more CTCP command(s) on which to trigger
(really, the only useful value is ``ACTION``)
.. versionadded:: 7.1
This is now ``ctcp`` instead of ``intent``, and it can be called
without argument, in which case it will assume ``ACTION``.
.. note::
This used to be ``@intent``, for a long dead feature in the IRCv3 spec.
It is now replaced by ``@ctcp``, which can be used without arguments.
In that case, Sopel will trigger on ``ACTION``.
As ``sopel.module`` will be removed in Sopel 9, so will ``@intent``.
"""
default_commands = ('ACTION',) + command_list
if function is None:
return ctcp(*default_commands) # called as ``@ctcp()``
elif callable(function):
# called as ``@ctcp`` or ``@ctcp(function)``
# or even ``@ctcp(function, 'ACTION', ...)``
return ctcp(*default_commands)(function)
# function is not None, and it is not a callable
# called as ``@ctcp('ACTION', ...)``
ctcp_commands = (function,) + command_list
def add_attribute(function):
function._sopel_callable = True
if not hasattr(function, "intents"):
function.intents = []
for name in ctcp_commands:
if name not in function.intents:
function.intents.append(name)
return function
return add_attribute
|
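The docstring above lists three ways to apply the decorator: bare, called with no arguments, or called with explicit CTCP command names. A short usage sketch (the handler bodies are hypothetical, and the import path assumes the decorator is exposed as in recent Sopel plugin modules):

from sopel import plugin  # assumed import location for the ctcp decorator

@plugin.ctcp                # bare decorator: defaults to ACTION
def on_action(bot, trigger):
    bot.say('saw an ACTION')

@plugin.ctcp()              # called without arguments: same ACTION default
def on_action_too(bot, trigger):
    bot.say('also saw an ACTION')

@plugin.ctcp('ACTION')      # explicit CTCP command name(s)
def on_named(bot, trigger):
    bot.say('explicitly registered for ACTION')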
24,306 |
def render_metadata_progress():
valid_checks = sorted(get_valid_checks())
total_checks = len(valid_checks)
checks_with_metadata = 0
lines = ['## Metadata', '', None, '', '??? check "Completed"']
for check in valid_checks:
config_file = get_config_file(check)
status = ' '
if not os.path.exists(config_file):
# tile only
total_checks -= 1
else:
check_file = get_check_file(check)
if os.path.exists(check_file):
with open(check_file) as f:
contents = f.read()
if 'self.set_metadata' in contents:
status = 'X'
checks_with_metadata += 1
lines.append(f' - [{status}] {check}')
percent = checks_with_metadata / total_checks * 100
formatted_percent = f'{percent:.2f}'
lines[2] = f'[={formatted_percent}% "{formatted_percent}%"]'
return lines
|
def render_metadata_progress():
valid_checks = sorted(get_valid_checks())
total_checks = len(valid_checks)
checks_with_metadata = 0
lines = ['## Metadata', '', None, '', '??? check "Completed"']
for check in valid_checks:
config_file = get_config_file(check)
status = ' '
if not os.path.exists(config_file):
# tile only
total_checks -= 1
else:
check_file = get_check_file(check)
if os.path.exists(check_file):
with open(check_file, 'r', encoding='utf-8') as f:
contents = f.read()
if 'self.set_metadata' in contents:
status = 'X'
checks_with_metadata += 1
lines.append(f' - [{status}] {check}')
percent = checks_with_metadata / total_checks * 100
formatted_percent = f'{percent:.2f}'
lines[2] = f'[={formatted_percent}% "{formatted_percent}%"]'
return lines
|
30,782 |
def map_changes_to_existing_user(existing_user, new_json):
# if existing_user is not None:
for k, v in new_json.items():
if type(v) == list:
# handle in specific way
# as of now only emails need to be handled
if k == 'emails':
existing_email_list = existing_user.get(k)
# update
for i in v:
for j in existing_email_list:
if j.get('type') == i.get('type'):
if j.get('value') != i.get('value'):
j['value'] = i.get('value')
if i.get('primary', None) is not None:
j['primary'] = i.get('primary')
else:
if j.get('primary', None) is not None:
j['primary'] = j.get('primary')
break
# add
new_email_list = []
for i in v:
exist = False
for j in existing_email_list:
if i.get('type') == j.get('type', ''):
exist = True
break
if not exist:
new_email = {'type': i.get('type'),
'value': i.get('value')}
if i.get('primary', None) is not None:
new_email.update({'primary': i.get('primary')})
new_email_list.append(new_email)
existing_email_list.extend(new_email_list)
elif type(v) == dict:
if k != SCIM_EXTENSION_SCHEMA:
map_changes_to_existing_user(existing_user.get(k), v)
else:
existing_user[k] = v
|
def map_changes_to_existing_user(existing_user, new_json):
# if existing_user is not None:
for k, v in new_json.items():
if type(v) == list:
# handle in specific way
# as of now only emails need to be handled
if k == 'emails':
existing_email_list = existing_user.get(k)
# update
for new_json_email in v:
for j in existing_email_list:
if j.get('type') == i.get('type'):
if j.get('value') != i.get('value'):
j['value'] = i.get('value')
if i.get('primary', None) is not None:
j['primary'] = i.get('primary')
else:
if j.get('primary', None) is not None:
j['primary'] = j.get('primary')
break
# add
new_email_list = []
for i in v:
exist = False
for j in existing_email_list:
if i.get('type') == j.get('type', ''):
exist = True
break
if not exist:
new_email = {'type': i.get('type'),
'value': i.get('value')}
if i.get('primary', None) is not None:
new_email.update({'primary': i.get('primary')})
new_email_list.append(new_email)
existing_email_list.extend(new_email_list)
elif type(v) == dict:
if k != SCIM_EXTENSION_SCHEMA:
map_changes_to_existing_user(existing_user.get(k), v)
else:
existing_user[k] = v
|
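The emails branch above updates entries whose type already exists and appends entries for new types. A small illustration of that merge, using the first of the two versions above and made-up addresses:

existing_user = {'emails': [{'type': 'work', 'value': 'a@old.example'}]}
new_json = {'emails': [{'type': 'work', 'value': 'a@new.example'},
                       {'type': 'home', 'value': 'b@new.example', 'primary': True}]}
map_changes_to_existing_user(existing_user, new_json)
# existing_user['emails'] now holds the updated 'work' entry plus the appended 'home' entry:
# [{'type': 'work', 'value': 'a@new.example'},
#  {'type': 'home', 'value': 'b@new.example', 'primary': True}]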
42,016 |
def test_abbreviated_json_to_distribution() -> None:
for key in EXAMPLE_ABBREVIATED_JSONS:
distribution_actual = distributions.json_to_distribution(EXAMPLE_JSONS[key])
assert distribution_actual == EXAMPLE_DISTRIBUTIONS[key]
unknown_json = '{"name": "UnknownDistribution", "attributes": {"low": 1.0, "high": 2.0}}'
pytest.raises(ValueError, lambda: distributions.json_to_distribution(unknown_json))
|
def test_abbreviated_json_to_distribution() -> None:
for key in EXAMPLE_ABBREVIATED_JSONS:
distribution_actual = distributions.json_to_distribution(EXAMPLE_ABBREVIATED_JSONS[key])
assert distribution_actual == EXAMPLE_DISTRIBUTIONS[key]
unknown_json = '{"name": "UnknownDistribution", "attributes": {"low": 1.0, "high": 2.0}}'
pytest.raises(ValueError, lambda: distributions.json_to_distribution(unknown_json))
|
28,625 |
def as_numpy_array(self):
"""
A short extension method for converting vec3_double arrays to numpy arrays.
"""
if isinstance(self, type(vec3_double())):
return self.as_double().as_numpy_array().reshape(-1, 3)
|
def _vec3_double_as_numpy_array(flex_array):
"""
A short extension method for converting vec3_double arrays to numpy arrays.
"""
if isinstance(self, type(vec3_double())):
return self.as_double().as_numpy_array().reshape(-1, 3)
|
44,477 |
def remove_callback_token(node):
"""
Remove a callback token
:param node: the node
"""
tmp_file = "{}.tmp".format(callback_tokens_file)
if not os.path.isfile(callback_tokens_file):
open(callback_tokens_file, "a+")
os.chmod(callback_tokens_file, 0o600)
with open(tmp_file, "w") as backup_fp:
os.chmod(tmp_file, 0o600)
with open(callback_tokens_file, "r+") as callback_fp:
# Entries are of the format: 'node_hostname:agent_port token'
# We need to get the node_hostname part
for _, line in enumerate(callback_fp):
parts = line.split(":")
if parts[0] == node:
continue
else:
backup_fp.write(line)
try_set_file_permissions(tmp_file)
shutil.move(tmp_file, callback_tokens_file)
|
def remove_callback_token(node):
"""
Remove a callback token
:param node: the node
"""
tmp_file = "{}.tmp".format(callback_tokens_file)
if not os.path.isfile(callback_tokens_file):
open(callback_tokens_file, "a+")
os.chmod(callback_tokens_file, 0o600)
with open(tmp_file, "w") as backup_fp:
os.chmod(tmp_file, 0o600)
with open(callback_tokens_file, "r+") as callback_fp:
# Entries are of the format: 'node_hostname:agent_port token'
# We need to get the node_hostname part
for line in callback_fp:
parts = line.split(":")
if parts[0] == node:
continue
else:
backup_fp.write(line)
try_set_file_permissions(tmp_file)
shutil.move(tmp_file, callback_tokens_file)
|
44,159 |
def pattern_matching(circuit_dag, pattern_dag):
r"""Function that applies the pattern matching algorithm and returns the list of maximal matches.
Args:
circuit_dag (.CommutationDAG): A commutation DAG representing the circuit to be optimized.
pattern_dag(.CommutationDAG): A commutation DAG representing the pattern.
Returns:
list(Match): the list of maximal matches.
**Example**
First let's consider the following circuit
.. code-block:: python
def circuit():
qml.S(wires=0)
qml.PauliZ(wires=0)
qml.S(wires=1)
qml.CZ(wires=[0, 1])
qml.S(wires=1)
qml.S(wires=2)
qml.CZ(wires=[1, 2])
qml.S(wires=2)
return qml.expval(qml.PauliX(wires=0))
where we want to find all maximal matches of a pattern containing a sequence of two ``pennylane.S`` gates and
a ``pennylane.PauliZ`` gate:
.. code-block:: python
with qml.tape.QuantumTape() as pattern:
qml.S(wires=0)
qml.S(wires=0)
qml.PauliZ(wires=0)
>>> circuit_dag = qml.commutation_dag(circuit)()
>>> pattern_dag = qml.commutation_dag(pattern)()
>>> all_max_matches = qml.pattern_matching(circuit_dag, pattern_dag)
It is possible to access the matches by looping through the list. The first integer in each pair is the index
of the gate in the pattern and the second the index of the gate in the circuit (by order of appearance).
>>> for match_conf in all_max_matches:
... print(match_conf.match)
[[0, 0], [2, 1]]
[[0, 2], [1, 4]]
[[0, 4], [1, 2]]
[[0, 5], [1, 7]]
[[0, 7], [1, 5]]
**Reference:**
[1] Iten, R., Moyard, R., Metger, T., Sutter, D. and Woerner, S., 2022.
Exact and practical pattern matching for quantum circuit optimization.
`doi.org/10.1145/3498325 <https://dl.acm.org/doi/abs/10.1145/3498325>`_
"""
# Match list
match_list = []
# Loop through all possible initial matches
for node_c, node_p in itertools.product(circuit_dag.get_nodes(), pattern_dag.get_nodes()):
# Initial matches between two identical gates (No qubits comparison)
if _compare_operation_without_qubits(node_c[1], node_p[1]):
# Fix qubits from the first (target fixed and control restrained)
not_fixed_qubits_confs = _not_fixed_qubits(
circuit_dag.num_wires, node_c[1].wires, pattern_dag.num_wires - len(node_p[1].wires)
)
# Loop over all possible qubits configurations given the first match constraints
for not_fixed_qubits_conf in not_fixed_qubits_confs:
for not_fixed_qubits_conf_permuted in itertools.permutations(not_fixed_qubits_conf):
for first_match_qubits_conf in _first_match_qubits(
node_c[1], node_p[1], pattern_dag.num_wires
):
# Qubits mapping between circuit and pattern
qubits_conf = _merge_first_match_and_permutation(
first_match_qubits_conf, not_fixed_qubits_conf_permuted
)
# Update wires, target_wires, control_wires
wires, target_wires, control_wires = _update_qubits(
circuit_dag, qubits_conf
)
# Forward match part of the algorithm
forward = ForwardMatch(
circuit_dag,
pattern_dag,
node_c[0],
node_p[0],
wires,
target_wires,
control_wires,
)
forward.run_forward_match()
# Backward match part of the algorithm
backward = BackwardMatch(
circuit_dag,
pattern_dag,
qubits_conf,
forward.match,
forward.circuit_matched_with,
forward.circuit_blocked,
forward.pattern_matched_with,
node_c[0],
node_p[0],
wires,
control_wires,
target_wires,
)
backward.run_backward_match()
_add_match(match_list, backward.match_final)
match_list.sort(key=lambda x: len(x.match), reverse=True)
# Extract maximal matches (used downstream to optimize the circuit for compatible maximal matches)
if match_list:
maximal = MaximalMatches(match_list)
maximal.run_maximal_matches()
max_matches = maximal.max_match_list
return max_matches
return match_list
|
def pattern_matching(circuit_dag, pattern_dag):
r"""Function that applies the pattern matching algorithm and returns the list of maximal matches.
Args:
circuit_dag (.CommutationDAG): A commutation DAG representing the circuit to be optimized.
pattern_dag(.CommutationDAG): A commutation DAG representing the pattern.
Returns:
list(Match): the list of maximal matches.
**Example**
First let's consider the following circuit
.. code-block:: python
def circuit():
qml.S(wires=0)
qml.PauliZ(wires=0)
qml.S(wires=1)
qml.CZ(wires=[0, 1])
qml.S(wires=1)
qml.S(wires=2)
qml.CZ(wires=[1, 2])
qml.S(wires=2)
return qml.expval(qml.PauliX(wires=0))
Assume that we want to find all maximal matches of a pattern containing a sequence of two :class:`~.S` gates and
a ``pennylane.PauliZ`` gate:
.. code-block:: python
with qml.tape.QuantumTape() as pattern:
qml.S(wires=0)
qml.S(wires=0)
qml.PauliZ(wires=0)
>>> circuit_dag = qml.commutation_dag(circuit)()
>>> pattern_dag = qml.commutation_dag(pattern)()
>>> all_max_matches = qml.pattern_matching(circuit_dag, pattern_dag)
It is possible to access the matches by looping through the list. The first integer in each pair is the index
of the gate in the pattern and the second the index of the gate in the circuit (by order of appearance).
>>> for match_conf in all_max_matches:
... print(match_conf.match)
[[0, 0], [2, 1]]
[[0, 2], [1, 4]]
[[0, 4], [1, 2]]
[[0, 5], [1, 7]]
[[0, 7], [1, 5]]
**Reference:**
[1] Iten, R., Moyard, R., Metger, T., Sutter, D. and Woerner, S., 2022.
Exact and practical pattern matching for quantum circuit optimization.
`doi.org/10.1145/3498325 <https://dl.acm.org/doi/abs/10.1145/3498325>`_
"""
# Match list
match_list = []
# Loop through all possible initial matches
for node_c, node_p in itertools.product(circuit_dag.get_nodes(), pattern_dag.get_nodes()):
# Initial matches between two identical gates (No qubits comparison)
if _compare_operation_without_qubits(node_c[1], node_p[1]):
# Fix qubits from the first (target fixed and control restrained)
not_fixed_qubits_confs = _not_fixed_qubits(
circuit_dag.num_wires, node_c[1].wires, pattern_dag.num_wires - len(node_p[1].wires)
)
# Loop over all possible qubits configurations given the first match constraints
for not_fixed_qubits_conf in not_fixed_qubits_confs:
for not_fixed_qubits_conf_permuted in itertools.permutations(not_fixed_qubits_conf):
for first_match_qubits_conf in _first_match_qubits(
node_c[1], node_p[1], pattern_dag.num_wires
):
# Qubits mapping between circuit and pattern
qubits_conf = _merge_first_match_and_permutation(
first_match_qubits_conf, not_fixed_qubits_conf_permuted
)
# Update wires, target_wires, control_wires
wires, target_wires, control_wires = _update_qubits(
circuit_dag, qubits_conf
)
# Forward match part of the algorithm
forward = ForwardMatch(
circuit_dag,
pattern_dag,
node_c[0],
node_p[0],
wires,
target_wires,
control_wires,
)
forward.run_forward_match()
# Backward match part of the algorithm
backward = BackwardMatch(
circuit_dag,
pattern_dag,
qubits_conf,
forward.match,
forward.circuit_matched_with,
forward.circuit_blocked,
forward.pattern_matched_with,
node_c[0],
node_p[0],
wires,
control_wires,
target_wires,
)
backward.run_backward_match()
_add_match(match_list, backward.match_final)
match_list.sort(key=lambda x: len(x.match), reverse=True)
# Extract maximal matches (used downstream to optimize the circuit for compatible maximal matches)
if match_list:
maximal = MaximalMatches(match_list)
maximal.run_maximal_matches()
max_matches = maximal.max_match_list
return max_matches
return match_list
|
5,508 |
def test_save_user(django_user_model):
data = {
"username": "JoeDeveloper",
"fullname": "Joe Developer",
"title": "Web Developer",
"organization": "Acme, Inc.",
"location": "Springfield, USA",
"irc_nickname": "joedev",
"twitter_url": "http://twitter.com/joedev1999",
"github_url": "https://github.com/joedev1999",
"stackoverflow_url": "http://stackoverflow.com/users/1/joedev1999",
"linkedin_url": "http://www.linkedin.com/in/joedev1999",
"pmo_url": "http://people.mozilla.org/u/joedev/",
"date_joined": datetime(1999, 1, 1, 10, 40, 23),
}
Storage().save_user(data)
user = django_user_model.objects.get(username="JoeDeveloper")
assert user.fullname == "Joe Developer"
assert user.title == "Web Developer"
assert user.organization == "Acme, Inc."
assert user.location == "Springfield, USA"
assert user.irc_nickname == "joedev"
assert user.twitter_url == "http://twitter.com/joedev1999"
assert user.github_url == "https://github.com/joedev1999"
assert user.stackoverflow_url == ("http://stackoverflow.com/users/1/joedev1999")
assert user.linkedin_url == "http://www.linkedin.com/in/joedev1999"
assert user.pmo_url == "http://people.mozilla.org/u/joedev/"
assert user.date_joined == datetime(1999, 1, 1, 10, 40, 23)
|
def test_save_user(django_user_model):
data = {
"username": "JoeDeveloper",
"fullname": "Joe Developer",
"title": "Web Developer",
"organization": "Acme, Inc.",
"location": "Springfield, USA",
"irc_nickname": "joedev",
"twitter_url": "http://twitter.com/joedev1999",
"github_url": "https://github.com/joedev1999",
"stackoverflow_url": "http://stackoverflow.com/users/1/joedev1999",
"linkedin_url": "http://www.linkedin.com/in/joedev1999",
"pmo_url": "http://people.mozilla.org/u/joedev/",
"date_joined": datetime(1999, 1, 1, 10, 40, 23),
}
Storage().save_user(data)
user = django_user_model.objects.get(username="JoeDeveloper")
assert user.fullname == "Joe Developer"
assert user.title == "Web Developer"
assert user.organization == "Acme, Inc."
assert user.location == "Springfield, USA"
assert user.irc_nickname == "joedev"
assert user.twitter_url == "http://twitter.com/joedev1999"
assert user.github_url == "https://github.com/joedev1999"
assert user.stackoverflow_url == ("http://stackoverflow.com/users/1/joedev1999")
assert user.linkedin_url == "http://www.linkedin.com/in/joedev1999"
assert user.pmo_url == "https://people.mozilla.org/p/joedev/"
assert user.date_joined == datetime(1999, 1, 1, 10, 40, 23)
|
46,225 |
def test_affine_properties_settters():
transform = Affine()
transform.translate = [8, -5]
npt.assert_allclose(transform.translate, [8, -5])
transform.scale = [2, 3]
npt.assert_allclose(transform.scale, [2, 3])
transform.rotate = 90
npt.assert_almost_equal(transform.rotate, [[0, -1], [1, 0]])
transform.shear = [1]
npt.assert_almost_equal(transform.shear, [1])
|
def test_affine_properties_setters():
transform = Affine()
transform.translate = [8, -5]
npt.assert_allclose(transform.translate, [8, -5])
transform.scale = [2, 3]
npt.assert_allclose(transform.scale, [2, 3])
transform.rotate = 90
npt.assert_almost_equal(transform.rotate, [[0, -1], [1, 0]])
transform.shear = [1]
npt.assert_almost_equal(transform.shear, [1])
|
54,860 |
def from_cugraph(cugraph_graph):
"""Create a graph from a cugraph graph and return.
Parameters
----------
cugraph_graph : cugraph.Graph
The cugraph graph holding the graph structure and the node/edge attributes.
If the input graph is undirected, DGL converts it to a directed graph
by :func:`cugraph.Graph.to_directed`.
Returns
-------
DGLGraph
The created graph.
Examples
--------
The following example uses PyTorch backend.
>>> import dgl
>>> import cugraph
>>> import cudf
Create a cugraph graph.
>>> cugraph_g = cugraph.Graph(directed=True)
>>> df = cudf.DataFrame({"source":[0, 1, 2, 3],
"destination":[1, 2, 3, 0]})
>>> cugraph_g.from_cudf_edgelist(df)
Convert it into a DGLGraph
>>> g = dgl.from_cugraph(cugraph_g)
>>> g.edges()
(tensor([1, 2, 3, 0], device='cuda:0'), tensor([2, 3, 0, 1], device='cuda:0'))
"""
if not cugraph_graph.is_directed():
cugraph_graph = cugraph_graph.to_directed()
edges = cugraph_graph.edges()
src_t = F.zerocopy_from_dlpack(edges['src'].to_dlpack())
dst_t = F.zerocopy_from_dlpack(edges['dst'].to_dlpack())
g = graph((src_t,dst_t))
return g
|
def from_cugraph(cugraph_graph):
"""Create a graph from a cugraph graph and return.
Parameters
----------
cugraph_graph : cugraph.Graph
The cugraph graph object holding the graph structure. Node and edge attributes are
dropped.
If the input graph is undirected, DGL converts it to a directed graph
by :func:`cugraph.Graph.to_directed`.
Returns
-------
DGLGraph
The created graph.
Examples
--------
The following example uses PyTorch backend.
>>> import dgl
>>> import cugraph
>>> import cudf
Create a cugraph graph.
>>> cugraph_g = cugraph.Graph(directed=True)
>>> df = cudf.DataFrame({"source":[0, 1, 2, 3],
"destination":[1, 2, 3, 0]})
>>> cugraph_g.from_cudf_edgelist(df)
Convert it into a DGLGraph
>>> g = dgl.from_cugraph(cugraph_g)
>>> g.edges()
(tensor([1, 2, 3, 0], device='cuda:0'), tensor([2, 3, 0, 1], device='cuda:0'))
"""
if not cugraph_graph.is_directed():
cugraph_graph = cugraph_graph.to_directed()
edges = cugraph_graph.edges()
src_t = F.zerocopy_from_dlpack(edges['src'].to_dlpack())
dst_t = F.zerocopy_from_dlpack(edges['dst'].to_dlpack())
g = graph((src_t,dst_t))
return g
|
3,024 |
def interpolate_1d_fill(
values,
method="pad",
axis=0,
limit=None,
limit_area=None,
fill_value=None,
dtype=None,
):
"""
This is a 1D-version of `interpolate_2d`, which is used for methods `pad`
and `backfill` when interpolating. This 1D-version is necessary to be
able to handle kwarg `limit_area` via the function
` _derive_indices_of_nans_to_preserve`. It is used the same way as the
1D-interpolation functions which are based on scipy-interpolation, i.e.
via np.apply_along_axis.
"""
if method == "pad":
limit_direction = "forward"
elif method == "backfill":
limit_direction = "backward"
else:
raise ValueError("`method` must be either 'pad' or 'backfill'.")
orig_values = values
yvalues = values
invalid = isna(yvalues)
valid = ~invalid
if values.ndim > 1:
raise AssertionError("This only works with 1D data.")
if fill_value is None:
mask = None
else: # todo create faster fill func without masking
mask = mask_missing(values, fill_value)
preserve_nans = _derive_indices_of_nans_to_preserve(
yvalues=yvalues,
valid=valid,
invalid=invalid,
limit=limit,
limit_area=limit_area,
limit_direction=limit_direction,
)
method = clean_fill_method(method)
if method == "pad":
values = pad_1d(values, limit=limit, mask=mask, dtype=dtype)
else:
values = backfill_1d(values, limit=limit, mask=mask, dtype=dtype)
if orig_values.dtype.kind == "M":
# convert float back to datetime64
values = values.astype(orig_values.dtype)
values[preserve_nans] = fill_value
return values
|
def interpolate_1d_fill(
values,
method="pad",
axis=0,
limit=None,
limit_area=None,
fill_value: Optional[Hashable] = None,
dtype=None,
):
"""
This is a 1D-version of `interpolate_2d`, which is used for methods `pad`
and `backfill` when interpolating. This 1D-version is necessary to be
able to handle kwarg `limit_area` via the function
` _derive_indices_of_nans_to_preserve`. It is used the same way as the
1D-interpolation functions which are based on scipy-interpolation, i.e.
via np.apply_along_axis.
"""
if method == "pad":
limit_direction = "forward"
elif method == "backfill":
limit_direction = "backward"
else:
raise ValueError("`method` must be either 'pad' or 'backfill'.")
orig_values = values
yvalues = values
invalid = isna(yvalues)
valid = ~invalid
if values.ndim > 1:
raise AssertionError("This only works with 1D data.")
if fill_value is None:
mask = None
else: # todo create faster fill func without masking
mask = mask_missing(values, fill_value)
preserve_nans = _derive_indices_of_nans_to_preserve(
yvalues=yvalues,
valid=valid,
invalid=invalid,
limit=limit,
limit_area=limit_area,
limit_direction=limit_direction,
)
method = clean_fill_method(method)
if method == "pad":
values = pad_1d(values, limit=limit, mask=mask, dtype=dtype)
else:
values = backfill_1d(values, limit=limit, mask=mask, dtype=dtype)
if orig_values.dtype.kind == "M":
# convert float back to datetime64
values = values.astype(orig_values.dtype)
values[preserve_nans] = fill_value
return values
|
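The docstring above notes that this 1D helper is dispatched through np.apply_along_axis, like the scipy-based interpolation wrappers. A generic illustration of that dispatch with a toy forward-fill (not pandas' actual pad_1d):

import numpy as np

def toy_pad_1d(arr):
    # forward-fill NaNs within a single 1D slice
    out = arr.copy()
    for i in range(1, len(out)):
        if np.isnan(out[i]):
            out[i] = out[i - 1]
    return out

data = np.array([[1.0, np.nan, 3.0],
                 [np.nan, 5.0, np.nan]])
filled = np.apply_along_axis(toy_pad_1d, 1, data)  # apply the 1D routine row by row
print(filled)  # [[ 1.  1.  3.]
               #  [nan  5.  5.]]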
14,036 |
def plot_dataframe(
df,
column=None,
cmap=None,
color=None,
ax=None,
cax=None,
categorical=False,
legend=False,
scheme=None,
k=5,
vmin=None,
vmax=None,
markersize=None,
figsize=None,
legend_kwds=None,
categories=None,
classification_kwds=None,
missing_kwds=None,
aspect="auto",
**style_kwds
):
"""
Plot a GeoDataFrame.
Generate a plot of a GeoDataFrame with matplotlib. If a
column is specified, the plot coloring will be based on values
in that column.
Parameters
----------
df : GeoDataFrame
The GeoDataFrame to be plotted. Currently Polygon,
MultiPolygon, LineString, MultiLineString and Point
geometries can be plotted.
column : str, np.array, pd.Series (default None)
The name of the dataframe column, np.array, or pd.Series to be plotted.
If np.array or pd.Series are used then it must have same length as
dataframe. Values are used to color the plot. Ignored if `color` is
also set.
cmap : str (default None)
The name of a colormap recognized by matplotlib.
color : str (default None)
If specified, all objects will be colored uniformly.
ax : matplotlib.pyplot.Artist (default None)
axes on which to draw the plot
cax : matplotlib.pyplot Artist (default None)
axes on which to draw the legend in case of color map.
categorical : bool (default False)
If False, cmap will reflect numerical values of the
column being plotted. For non-numerical columns, this
will be set to True.
legend : bool (default False)
Plot a legend. Ignored if no `column` is given, or if `color` is given.
scheme : str (default None)
Name of a choropleth classification scheme (requires mapclassify).
A mapclassify.MapClassifier object will be used
under the hood. Supported are all schemes provided by mapclassify (e.g.
'BoxPlot', 'EqualInterval', 'FisherJenks', 'FisherJenksSampled',
'HeadTailBreaks', 'JenksCaspall', 'JenksCaspallForced',
'JenksCaspallSampled', 'MaxP', 'MaximumBreaks',
'NaturalBreaks', 'Quantiles', 'Percentiles', 'StdMean',
'UserDefined'). Arguments can be passed in classification_kwds.
k : int (default 5)
Number of classes (ignored if scheme is None)
vmin : None or float (default None)
Minimum value of cmap. If None, the minimum data value
in the column to be plotted is used.
vmax : None or float (default None)
Maximum value of cmap. If None, the maximum data value
in the column to be plotted is used.
markersize : str or float or sequence (default None)
Only applies to point geometries within a frame.
If a str, will use the values in the column of the frame specified
by markersize to set the size of markers. Otherwise can be a value
to apply to all points, or a sequence of the same length as the
number of points.
figsize : tuple of integers (default None)
Size of the resulting matplotlib.figure.Figure. If the argument
axes is given explicitly, figsize is ignored.
legend_kwds : dict (default None)
Keyword arguments to pass to matplotlib.pyplot.legend() or
matplotlib.pyplot.colorbar().
Additional accepted keywords when `scheme` is specified:
fmt : string
A formatting specification for the bin edges of the classes in the
legend. For example, to have no decimals: ``{"fmt": "{:.0f}"}``.
labels : list-like
A list of legend labels to override the auto-generated labels.
Needs to have the same number of elements as the number of
classes (`k`).
interval : boolean
An option to remove brackets from mapclassify legend.
If True, open/closed interval brackets are shown in legend.
categories : list-like
Ordered list-like object of categories to be used for categorical plot.
classification_kwds : dict (default None)
Keyword arguments to pass to mapclassify
missing_kwds : dict (default None)
Keyword arguments specifying color options (as style_kwds)
to be passed on to geometries with missing values in addition to
or overwriting other style kwds. If None, geometries with missing
values are not plotted.
aspect : 'auto', 'equal', None or float (default 'auto')
Set aspect of axis. If 'auto', the default aspect for map plots is 'equal'; if
however data are not projected (coordinates are long/lat), the aspect is by
default set to 1/cos(df_y * pi/180) with df_y the y coordinate of the middle of
the GeoDataFrame (the mean of the y range of bounding box) so that a long/lat
square appears square in the middle of the plot. This implies an
Equirectangular projection. If None, the aspect of `ax` won't be changed. It can
also be set manually (float) as the ratio of y-unit to x-unit.
**style_kwds : dict
Style options to be passed on to the actual plot function, such
as ``edgecolor``, ``facecolor``, ``linewidth``, ``markersize``,
``alpha``.
Returns
-------
ax : matplotlib axes instance
"""
if "colormap" in style_kwds:
warnings.warn(
"'colormap' is deprecated, please use 'cmap' instead "
"(for consistency with matplotlib)",
FutureWarning,
)
cmap = style_kwds.pop("colormap")
if "axes" in style_kwds:
warnings.warn(
"'axes' is deprecated, please use 'ax' instead "
"(for consistency with pandas)",
FutureWarning,
)
ax = style_kwds.pop("axes")
if column is not None and color is not None:
warnings.warn(
"Only specify one of 'column' or 'color'. Using 'color'.", UserWarning
)
column = None
try:
import matplotlib.pyplot as plt
except ImportError:
raise ImportError(
"The matplotlib package is required for plotting in geopandas. "
"You can install it using 'conda install -c conda-forge matplotlib' or "
"'pip install matplotlib'."
)
if ax is None:
if cax is not None:
raise ValueError("'ax' can not be None if 'cax' is not.")
fig, ax = plt.subplots(figsize=figsize)
if aspect == "auto":
if df.crs and df.crs.is_geographic:
bounds = df.total_bounds
y_coord = np.mean([bounds[1], bounds[3]])
ax.set_aspect(1 / np.cos(y_coord * np.pi / 180))
# formula ported from R package sp
# https://github.com/edzer/sp/blob/master/R/mapasp.R
else:
ax.set_aspect("equal")
elif aspect is not None:
ax.set_aspect(aspect)
# GH 1555
# if legend_kwds set, copy so we don't update it in place
if legend_kwds is not None:
legend_kwds = legend_kwds.copy()
if df.empty:
warnings.warn(
"The GeoDataFrame you are attempting to plot is "
"empty. Nothing has been displayed.",
UserWarning,
)
return ax
if isinstance(markersize, str):
markersize = df[markersize].values
if column is None:
return plot_series(
df.geometry,
cmap=cmap,
color=color,
ax=ax,
figsize=figsize,
markersize=markersize,
aspect=aspect,
**style_kwds
)
# To accept pd.Series and np.arrays as column
if isinstance(column, (np.ndarray, pd.Series)):
if column.shape[0] != df.shape[0]:
raise ValueError(
"The dataframe and given column have different number of rows."
)
else:
values = column
else:
values = df[column]
if pd.api.types.is_categorical_dtype(values.dtype):
if categories is not None:
raise ValueError(
"Cannot specify 'categories' when column has categorical dtype"
)
categorical = True
elif values.dtype is np.dtype("O") or categories:
categorical = True
nan_idx = np.asarray(pd.isna(values), dtype="bool")
# Define `values` as a Series
if categorical:
if cmap is None:
cmap = "tab10"
cat = pd.Categorical(values, categories=categories)
categories = list(cat.categories)
# values missing in the Categorical but not in original values
missing = list(np.unique(values[~nan_idx & cat.isna()]))
if missing:
raise ValueError(
"Column contains values not listed in categories. "
"Missing categories: {}.".format(missing)
)
values = cat.codes[~nan_idx]
vmin = 0 if vmin is None else vmin
vmax = len(categories) - 1 if vmax is None else vmax
if scheme is not None:
if classification_kwds is None:
classification_kwds = {}
if "k" not in classification_kwds:
classification_kwds["k"] = k
binning = _mapclassify_choro(values[~nan_idx], scheme, **classification_kwds)
# set categorical to True for creating the legend
categorical = True
if legend_kwds is not None and "labels" in legend_kwds:
if len(legend_kwds["labels"]) != binning.k:
raise ValueError(
"Number of labels must match number of bins, "
"received {} labels for {} bins".format(
len(legend_kwds["labels"]), binning.k
)
)
else:
categories = list(legend_kwds.pop("labels"))
else:
fmt = "{:.2f}"
if legend_kwds is not None and "fmt" in legend_kwds:
fmt = legend_kwds.pop("fmt")
categories = binning.get_legend_classes(fmt)
show_interval = True
if legend_kwds is not None and "interval" in legend_kwds:
show_interval = legend_kwds.pop("interval")
if not show_interval:
categories = [
c.replace("(", "").replace(")", "").replace("[", "").replace("]", "")
for c in categories
]
values = np.array(binning.yb)
# fill values with placeholder where were NaNs originally to map them properly
# (after removing them in categorical or scheme)
if categorical:
for n in np.where(nan_idx)[0]:
values = np.insert(values, n, values[0])
mn = values[~np.isnan(values)].min() if vmin is None else vmin
mx = values[~np.isnan(values)].max() if vmax is None else vmax
# decompose GeometryCollections
geoms, multiindex = _flatten_multi_geoms(df.geometry, prefix="Geom")
values = np.take(values, multiindex, axis=0)
nan_idx = np.take(nan_idx, multiindex, axis=0)
expl_series = geopandas.GeoSeries(geoms)
geom_types = expl_series.type
poly_idx = np.asarray((geom_types == "Polygon") | (geom_types == "MultiPolygon"))
line_idx = np.asarray(
(geom_types == "LineString")
| (geom_types == "MultiLineString")
| (geom_types == "LinearRing")
)
point_idx = np.asarray((geom_types == "Point") | (geom_types == "MultiPoint"))
# plot all Polygons and all MultiPolygon components in the same collection
polys = expl_series[poly_idx & np.invert(nan_idx)]
subset = values[poly_idx & np.invert(nan_idx)]
if not polys.empty:
_plot_polygon_collection(
ax, polys, subset, vmin=mn, vmax=mx, cmap=cmap, **style_kwds
)
# plot all LineStrings and MultiLineString components in same collection
lines = expl_series[line_idx & np.invert(nan_idx)]
subset = values[line_idx & np.invert(nan_idx)]
if not lines.empty:
_plot_linestring_collection(
ax, lines, subset, vmin=mn, vmax=mx, cmap=cmap, **style_kwds
)
# plot all Points in the same collection
points = expl_series[point_idx & np.invert(nan_idx)]
subset = values[point_idx & np.invert(nan_idx)]
if not points.empty:
if isinstance(markersize, np.ndarray):
markersize = np.take(markersize, multiindex, axis=0)
markersize = markersize[point_idx & np.invert(nan_idx)]
_plot_point_collection(
ax,
points,
subset,
vmin=mn,
vmax=mx,
markersize=markersize,
cmap=cmap,
**style_kwds
)
if missing_kwds is not None:
if color:
if "color" not in missing_kwds:
missing_kwds["color"] = color
merged_kwds = style_kwds.copy()
merged_kwds.update(missing_kwds)
plot_series(expl_series[nan_idx], ax=ax, **merged_kwds)
if legend and not color:
if legend_kwds is None:
legend_kwds = {}
if "fmt" in legend_kwds:
legend_kwds.pop("fmt")
from matplotlib.lines import Line2D
from matplotlib.colors import Normalize
from matplotlib import cm
norm = style_kwds.get("norm", None)
if not norm:
norm = Normalize(vmin=mn, vmax=mx)
n_cmap = cm.ScalarMappable(norm=norm, cmap=cmap)
if categorical:
patches = []
for value, cat in enumerate(categories):
patches.append(
Line2D(
[0],
[0],
linestyle="none",
marker="o",
alpha=style_kwds.get("alpha", 1),
markersize=10,
markerfacecolor=n_cmap.to_rgba(value),
markeredgewidth=0,
)
)
if missing_kwds is not None:
if "color" in merged_kwds:
merged_kwds["facecolor"] = merged_kwds["color"]
patches.append(
Line2D(
[0],
[0],
linestyle="none",
marker="o",
alpha=merged_kwds.get("alpha", 1),
markersize=10,
markerfacecolor=merged_kwds.get("facecolor", None),
markeredgecolor=merged_kwds.get("edgecolor", None),
markeredgewidth=merged_kwds.get(
"linewidth", 1 if merged_kwds.get("edgecolor", False) else 0
),
)
)
categories.append(merged_kwds.get("label", "NaN"))
legend_kwds.setdefault("numpoints", 1)
legend_kwds.setdefault("loc", "best")
ax.legend(patches, categories, **legend_kwds)
else:
if cax is not None:
legend_kwds.setdefault("cax", cax)
else:
legend_kwds.setdefault("ax", ax)
n_cmap.set_array([])
ax.get_figure().colorbar(n_cmap, **legend_kwds)
plt.draw()
return ax
|
def plot_dataframe(
df,
column=None,
cmap=None,
color=None,
ax=None,
cax=None,
categorical=False,
legend=False,
scheme=None,
k=5,
vmin=None,
vmax=None,
markersize=None,
figsize=None,
legend_kwds=None,
categories=None,
classification_kwds=None,
missing_kwds=None,
aspect="auto",
**style_kwds
):
"""
Plot a GeoDataFrame.
Generate a plot of a GeoDataFrame with matplotlib. If a
column is specified, the plot coloring will be based on values
in that column.
Parameters
----------
df : GeoDataFrame
The GeoDataFrame to be plotted. Currently Polygon,
MultiPolygon, LineString, MultiLineString and Point
geometries can be plotted.
column : str, np.array, pd.Series (default None)
The name of the dataframe column, np.array, or pd.Series to be plotted.
If np.array or pd.Series are used then it must have same length as
dataframe. Values are used to color the plot. Ignored if `color` is
also set.
cmap : str (default None)
The name of a colormap recognized by matplotlib.
color : str (default None)
If specified, all objects will be colored uniformly.
ax : matplotlib.pyplot.Artist (default None)
axes on which to draw the plot
cax : matplotlib.pyplot Artist (default None)
axes on which to draw the legend in case of color map.
categorical : bool (default False)
If False, cmap will reflect numerical values of the
column being plotted. For non-numerical columns, this
will be set to True.
legend : bool (default False)
Plot a legend. Ignored if no `column` is given, or if `color` is given.
scheme : str (default None)
Name of a choropleth classification scheme (requires mapclassify).
A mapclassify.MapClassifier object will be used
under the hood. Supported are all schemes provided by mapclassify (e.g.
'BoxPlot', 'EqualInterval', 'FisherJenks', 'FisherJenksSampled',
'HeadTailBreaks', 'JenksCaspall', 'JenksCaspallForced',
'JenksCaspallSampled', 'MaxP', 'MaximumBreaks',
'NaturalBreaks', 'Quantiles', 'Percentiles', 'StdMean',
'UserDefined'). Arguments can be passed in classification_kwds.
k : int (default 5)
Number of classes (ignored if scheme is None)
vmin : None or float (default None)
Minimum value of cmap. If None, the minimum data value
in the column to be plotted is used.
vmax : None or float (default None)
Maximum value of cmap. If None, the maximum data value
in the column to be plotted is used.
markersize : str or float or sequence (default None)
Only applies to point geometries within a frame.
If a str, will use the values in the column of the frame specified
by markersize to set the size of markers. Otherwise can be a value
to apply to all points, or a sequence of the same length as the
number of points.
figsize : tuple of integers (default None)
Size of the resulting matplotlib.figure.Figure. If the argument
axes is given explicitly, figsize is ignored.
legend_kwds : dict (default None)
Keyword arguments to pass to matplotlib.pyplot.legend() or
matplotlib.pyplot.colorbar().
Additional accepted keywords when `scheme` is specified:
fmt : string
A formatting specification for the bin edges of the classes in the
legend. For example, to have no decimals: ``{"fmt": "{:.0f}"}``.
labels : list-like
A list of legend labels to override the auto-generated labels.
Needs to have the same number of elements as the number of
classes (`k`).
interval : boolean
            An option to control the open/closed interval brackets in the
            mapclassify legend. If True, the brackets are shown; if False,
            they are removed.
categories : list-like
Ordered list-like object of categories to be used for categorical plot.
classification_kwds : dict (default None)
Keyword arguments to pass to mapclassify
missing_kwds : dict (default None)
Keyword arguments specifying color options (as style_kwds)
to be passed on to geometries with missing values in addition to
or overwriting other style kwds. If None, geometries with missing
values are not plotted.
aspect : 'auto', 'equal', None or float (default 'auto')
Set aspect of axis. If 'auto', the default aspect for map plots is 'equal'; if
however data are not projected (coordinates are long/lat), the aspect is by
default set to 1/cos(df_y * pi/180) with df_y the y coordinate of the middle of
the GeoDataFrame (the mean of the y range of bounding box) so that a long/lat
square appears square in the middle of the plot. This implies an
Equirectangular projection. If None, the aspect of `ax` won't be changed. It can
also be set manually (float) as the ratio of y-unit to x-unit.
**style_kwds : dict
Style options to be passed on to the actual plot function, such
as ``edgecolor``, ``facecolor``, ``linewidth``, ``markersize``,
``alpha``.
Returns
-------
ax : matplotlib axes instance
"""
if "colormap" in style_kwds:
warnings.warn(
"'colormap' is deprecated, please use 'cmap' instead "
"(for consistency with matplotlib)",
FutureWarning,
)
cmap = style_kwds.pop("colormap")
if "axes" in style_kwds:
warnings.warn(
"'axes' is deprecated, please use 'ax' instead "
"(for consistency with pandas)",
FutureWarning,
)
ax = style_kwds.pop("axes")
if column is not None and color is not None:
warnings.warn(
"Only specify one of 'column' or 'color'. Using 'color'.", UserWarning
)
column = None
try:
import matplotlib.pyplot as plt
except ImportError:
raise ImportError(
"The matplotlib package is required for plotting in geopandas. "
"You can install it using 'conda install -c conda-forge matplotlib' or "
"'pip install matplotlib'."
)
if ax is None:
if cax is not None:
raise ValueError("'ax' can not be None if 'cax' is not.")
fig, ax = plt.subplots(figsize=figsize)
if aspect == "auto":
if df.crs and df.crs.is_geographic:
bounds = df.total_bounds
y_coord = np.mean([bounds[1], bounds[3]])
ax.set_aspect(1 / np.cos(y_coord * np.pi / 180))
# formula ported from R package sp
# https://github.com/edzer/sp/blob/master/R/mapasp.R
else:
ax.set_aspect("equal")
elif aspect is not None:
ax.set_aspect(aspect)
# GH 1555
# if legend_kwds set, copy so we don't update it in place
if legend_kwds is not None:
legend_kwds = legend_kwds.copy()
if df.empty:
warnings.warn(
"The GeoDataFrame you are attempting to plot is "
"empty. Nothing has been displayed.",
UserWarning,
)
return ax
if isinstance(markersize, str):
markersize = df[markersize].values
if column is None:
return plot_series(
df.geometry,
cmap=cmap,
color=color,
ax=ax,
figsize=figsize,
markersize=markersize,
aspect=aspect,
**style_kwds
)
# To accept pd.Series and np.arrays as column
if isinstance(column, (np.ndarray, pd.Series)):
if column.shape[0] != df.shape[0]:
raise ValueError(
"The dataframe and given column have different number of rows."
)
else:
values = column
else:
values = df[column]
if pd.api.types.is_categorical_dtype(values.dtype):
if categories is not None:
raise ValueError(
"Cannot specify 'categories' when column has categorical dtype"
)
categorical = True
elif values.dtype is np.dtype("O") or categories:
categorical = True
nan_idx = np.asarray(pd.isna(values), dtype="bool")
# Define `values` as a Series
if categorical:
if cmap is None:
cmap = "tab10"
cat = pd.Categorical(values, categories=categories)
categories = list(cat.categories)
# values missing in the Categorical but not in original values
missing = list(np.unique(values[~nan_idx & cat.isna()]))
if missing:
raise ValueError(
"Column contains values not listed in categories. "
"Missing categories: {}.".format(missing)
)
values = cat.codes[~nan_idx]
vmin = 0 if vmin is None else vmin
vmax = len(categories) - 1 if vmax is None else vmax
if scheme is not None:
if classification_kwds is None:
classification_kwds = {}
if "k" not in classification_kwds:
classification_kwds["k"] = k
binning = _mapclassify_choro(values[~nan_idx], scheme, **classification_kwds)
# set categorical to True for creating the legend
categorical = True
if legend_kwds is not None and "labels" in legend_kwds:
if len(legend_kwds["labels"]) != binning.k:
raise ValueError(
"Number of labels must match number of bins, "
"received {} labels for {} bins".format(
len(legend_kwds["labels"]), binning.k
)
)
else:
categories = list(legend_kwds.pop("labels"))
else:
fmt = "{:.2f}"
if legend_kwds is not None and "fmt" in legend_kwds:
fmt = legend_kwds.pop("fmt")
categories = binning.get_legend_classes(fmt)
show_interval = True
if legend_kwds is not None and "interval" in legend_kwds:
show_interval = legend_kwds.pop("interval")
if not show_interval:
categories = [c[1:-1] for c in categories]
values = np.array(binning.yb)
# fill values with placeholder where were NaNs originally to map them properly
# (after removing them in categorical or scheme)
if categorical:
for n in np.where(nan_idx)[0]:
values = np.insert(values, n, values[0])
mn = values[~np.isnan(values)].min() if vmin is None else vmin
mx = values[~np.isnan(values)].max() if vmax is None else vmax
# decompose GeometryCollections
geoms, multiindex = _flatten_multi_geoms(df.geometry, prefix="Geom")
values = np.take(values, multiindex, axis=0)
nan_idx = np.take(nan_idx, multiindex, axis=0)
expl_series = geopandas.GeoSeries(geoms)
geom_types = expl_series.type
poly_idx = np.asarray((geom_types == "Polygon") | (geom_types == "MultiPolygon"))
line_idx = np.asarray(
(geom_types == "LineString")
| (geom_types == "MultiLineString")
| (geom_types == "LinearRing")
)
point_idx = np.asarray((geom_types == "Point") | (geom_types == "MultiPoint"))
# plot all Polygons and all MultiPolygon components in the same collection
polys = expl_series[poly_idx & np.invert(nan_idx)]
subset = values[poly_idx & np.invert(nan_idx)]
if not polys.empty:
_plot_polygon_collection(
ax, polys, subset, vmin=mn, vmax=mx, cmap=cmap, **style_kwds
)
# plot all LineStrings and MultiLineString components in same collection
lines = expl_series[line_idx & np.invert(nan_idx)]
subset = values[line_idx & np.invert(nan_idx)]
if not lines.empty:
_plot_linestring_collection(
ax, lines, subset, vmin=mn, vmax=mx, cmap=cmap, **style_kwds
)
# plot all Points in the same collection
points = expl_series[point_idx & np.invert(nan_idx)]
subset = values[point_idx & np.invert(nan_idx)]
if not points.empty:
if isinstance(markersize, np.ndarray):
markersize = np.take(markersize, multiindex, axis=0)
markersize = markersize[point_idx & np.invert(nan_idx)]
_plot_point_collection(
ax,
points,
subset,
vmin=mn,
vmax=mx,
markersize=markersize,
cmap=cmap,
**style_kwds
)
if missing_kwds is not None:
if color:
if "color" not in missing_kwds:
missing_kwds["color"] = color
merged_kwds = style_kwds.copy()
merged_kwds.update(missing_kwds)
plot_series(expl_series[nan_idx], ax=ax, **merged_kwds)
if legend and not color:
if legend_kwds is None:
legend_kwds = {}
if "fmt" in legend_kwds:
legend_kwds.pop("fmt")
from matplotlib.lines import Line2D
from matplotlib.colors import Normalize
from matplotlib import cm
norm = style_kwds.get("norm", None)
if not norm:
norm = Normalize(vmin=mn, vmax=mx)
n_cmap = cm.ScalarMappable(norm=norm, cmap=cmap)
if categorical:
patches = []
for value, cat in enumerate(categories):
patches.append(
Line2D(
[0],
[0],
linestyle="none",
marker="o",
alpha=style_kwds.get("alpha", 1),
markersize=10,
markerfacecolor=n_cmap.to_rgba(value),
markeredgewidth=0,
)
)
if missing_kwds is not None:
if "color" in merged_kwds:
merged_kwds["facecolor"] = merged_kwds["color"]
patches.append(
Line2D(
[0],
[0],
linestyle="none",
marker="o",
alpha=merged_kwds.get("alpha", 1),
markersize=10,
markerfacecolor=merged_kwds.get("facecolor", None),
markeredgecolor=merged_kwds.get("edgecolor", None),
markeredgewidth=merged_kwds.get(
"linewidth", 1 if merged_kwds.get("edgecolor", False) else 0
),
)
)
categories.append(merged_kwds.get("label", "NaN"))
legend_kwds.setdefault("numpoints", 1)
legend_kwds.setdefault("loc", "best")
ax.legend(patches, categories, **legend_kwds)
else:
if cax is not None:
legend_kwds.setdefault("cax", cax)
else:
legend_kwds.setdefault("ax", ax)
n_cmap.set_array([])
ax.get_figure().colorbar(n_cmap, **legend_kwds)
plt.draw()
return ax
|
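A minimal usage sketch for the plotting routine above, assuming it is exposed as GeoDataFrame.plot in geopandas; the shapefile path and the 'pop_est' column are hypothetical placeholders:

import geopandas

# Hypothetical input: any GeoDataFrame with a numeric column works the same way.
world = geopandas.read_file('countries.shp')
ax = world.plot(
    column='pop_est',    # values used to colour the geometries
    cmap='viridis',      # any matplotlib colormap name
    legend=True,         # draws a colorbar for this non-categorical column
    missing_kwds={'color': 'lightgrey', 'label': 'no data'},
)
ax.set_axis_off()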
30,418 |
def read_file_and_encode64(attach_id):
"""
    Reads a file that was uploaded to the War Room and encodes its content to base64.
:type attach_id: ``str``
:param attach_id: The id of uploaded file to War Room
:return: Base 64 encoded data, size of the encoded data in bytes and uploaded file name.
:rtype: ``bytes``, ``int``, ``str``
"""
try:
file_info = demisto.getFilePath(attach_id)
with open(file_info['path'], 'rb') as file:
b64_encoded_data = base64.b64encode(file.read())
file_size = os.path.getsize(file_info['path'])
return b64_encoded_data, file_size, file_info['name']
except Exception as e:
        return_error(f'Unable to read and base64-encode the file with id {attach_id}', e)
|
def read_file_and_encode64(attach_id):
"""
    Reads a file that was uploaded to the War Room and encodes its content to base64.
:type attach_id: ``str``
:param attach_id: The id of uploaded file to War Room
:return: Base 64 encoded data, size of the encoded data in bytes and uploaded file name.
:rtype: ``bytes``, ``int``, ``str``
"""
try:
file_info = demisto.getFilePath(attach_id)
with open(file_info['path'], 'rb') as file_:
            b64_encoded_data = base64.b64encode(file_.read())
file_size = os.path.getsize(file_info['path'])
return b64_encoded_data, file_size, file_info['name']
except Exception as e:
        return_error(f'Unable to read and base64-encode the file with id {attach_id}', e)
|
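Setting the demisto plumbing aside, the core of the helper above is reading a file's bytes and base64-encoding them; a standalone sketch (the file name and its contents are hypothetical):

import base64
import os

path = 'example.bin'                 # hypothetical stand-in for file_info['path']
with open(path, 'wb') as handle:     # create a small file so the sketch is self-contained
    handle.write(b'hello war room')
with open(path, 'rb') as handle:
    b64_encoded_data = base64.b64encode(handle.read())
file_size = os.path.getsize(path)    # size of the file on disk, not of the encoded data
print(b64_encoded_data, file_size)   # b'aGVsbG8gd2FyIHJvb20=' 14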
8,493 |
def include_or_exclude_file(filename, include_list=None, exclude_list=None):
""""
Generic inclusion/exclusion decision function based on filename and list of include and exclude patterns.
Args:
filename:
Filename considered for inclusion.
include_list:
List of inclusion file patterns.
exclude_list:
List of exclusion file patterns.
Returns:
A boolean indicating whether the file should be included or not.
If ``include_list`` is provided, True is returned only if the filename matches one of include patterns (and does not
match any patterns in ``exclude_list``, if provided). If ``include_lsit`` is not provided, True is returned if
filename does not match any patterns in ``exclude list``, if provided. If neither list is provided, True is
returned for any filename.
"""
if include_list is not None:
for pattern in include_list:
if fnmatch.fnmatch(filename, pattern):
break
else:
return False # Not explicitly included; exclude
if exclude_list is not None:
for pattern in exclude_list:
if fnmatch.fnmatch(filename, pattern):
return False # Explicitly excluded
return True
|
def include_or_exclude_file(filename, include_list=None, exclude_list=None):
""""
Generic inclusion/exclusion decision function based on filename and list of include and exclude patterns.
Args:
filename:
Filename considered for inclusion.
include_list:
List of inclusion file patterns.
exclude_list:
List of exclusion file patterns.
Returns:
A boolean indicating whether the file should be included or not.
If ``include_list`` is provided, True is returned only if the filename matches one of include patterns (and does not
match any patterns in ``exclude_list``, if provided). If ``include_list`` is not provided, True is returned if
filename does not match any patterns in ``exclude list``, if provided. If neither list is provided, True is
returned for any filename.
"""
if include_list is not None:
for pattern in include_list:
if fnmatch.fnmatch(filename, pattern):
break
else:
return False # Not explicitly included; exclude
if exclude_list is not None:
for pattern in exclude_list:
if fnmatch.fnmatch(filename, pattern):
return False # Explicitly excluded
return True
|
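A self-contained sketch of the same include/exclude decision, restated with any() so the expected outcomes are easy to check; the helper name _decide is hypothetical:

import fnmatch

def _decide(filename, include_list=None, exclude_list=None):
    # Mirror of the logic above: inclusion patterns first, then exclusion patterns.
    if include_list is not None and not any(
            fnmatch.fnmatch(filename, p) for p in include_list):
        return False    # not explicitly included
    if exclude_list is not None and any(
            fnmatch.fnmatch(filename, p) for p in exclude_list):
        return False    # explicitly excluded
    return True

assert _decide('pkg/module.py', include_list=['*.py']) is True
assert _decide('pkg/module.pyc', include_list=['*.py']) is False
assert _decide('pkg/module.py', include_list=['*.py'], exclude_list=['pkg/*']) is False
assert _decide('README.md') is True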
54,093 |
def set_line_s_max_pu(n):
n.lines['s_max_pu'] = snakemake.config['lines']['s_max_pu']
logger.info("N-1 security margin set to {}".format(snakemake.config['lines']['s_max_pu']))
|
def set_line_s_max_pu(n):
s_max_pu = snakemake.config['lines']['s_max_pu']
n.lines['s_max_pu'] = s_max_pu
logger.info(f"N-1 security margin of lines set to {s_max_pu}")
|
7,042 |
def get_rsync_rund_cmd(src, dst, reinstall=False, dry_run=False):
"""Create and return the rsync command used for cylc install/re-install.
Args:
src (str):
file path location of source directory
dst (str):
file path location of destination directory
reinstall (bool):
indicate reinstall (--delete option added)
dry-run (bool):
indicate dry-run, rsync will not take place but report output if a
real run were to be executed
Return:
list: command to use for rsync.
"""
rsync_cmd = ["rsync"]
rsync_cmd.append("-av")
if dry_run:
rsync_cmd.append("--dry-run")
if reinstall:
rsync_cmd.append('--delete')
ignore_dirs = [
'.git',
'.svn',
'.cylcignore',
'rose-workflow.conf',
'opt/rose-workflow-cylc-install.conf',
WorkflowFiles.LOG_DIR,
WorkflowFiles.Install.DIRNAME,
WorkflowFiles.Service.DIRNAME]
for exclude in ignore_dirs:
if (Path(src).joinpath(exclude).exists() or
Path(dst).joinpath(exclude).exists()):
rsync_cmd.append(f"--exclude={exclude}")
if Path(src).joinpath('.cylcignore').exists():
rsync_cmd.append("--exclude-from=.cylcignore")
rsync_cmd.append(f"{src}/")
rsync_cmd.append(f"{dst}/")
return rsync_cmd
|
def get_rsync_rund_cmd(src, dst, reinstall=False, dry_run=False):
"""Create and return the rsync command used for cylc install/re-install.
Args:
src (str):
file path location of source directory
dst (str):
file path location of destination directory
reinstall (bool):
indicate reinstall (--delete option added)
dry-run (bool):
indicate dry-run, rsync will not take place but report output if a
real run were to be executed
Return:
list: command to use for rsync.
"""
rsync_cmd = ["rsync"]
rsync_cmd.append("-av")
if dry_run:
rsync_cmd.append("--dry-run")
if reinstall:
rsync_cmd.append('--delete')
ignore_dirs = [
'.git',
'.svn',
'.cylcignore',
'rose-suite.conf',
'opt/rose-workflow-cylc-install.conf',
WorkflowFiles.LOG_DIR,
WorkflowFiles.Install.DIRNAME,
WorkflowFiles.Service.DIRNAME]
for exclude in ignore_dirs:
if (Path(src).joinpath(exclude).exists() or
Path(dst).joinpath(exclude).exists()):
rsync_cmd.append(f"--exclude={exclude}")
if Path(src).joinpath('.cylcignore').exists():
rsync_cmd.append("--exclude-from=.cylcignore")
rsync_cmd.append(f"{src}/")
rsync_cmd.append(f"{dst}/")
return rsync_cmd
|
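For illustration, the list returned by the function above for a dry-run reinstall, assuming none of the ignore_dirs exist under either directory and there is no .cylcignore file; the paths are hypothetical:

# get_rsync_rund_cmd('/src/flow', '/run/flow', reinstall=True, dry_run=True)
expected_cmd = [
    'rsync', '-av',
    '--dry-run',     # report what would change without copying anything
    '--delete',      # added because reinstall=True
    '/src/flow/',    # trailing slash: sync the directory contents
    '/run/flow/',
]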
45,398 |
def is_reduce_function(fn):
"""
Check whether all functions defined by `fn` are groupby reductions.
If true, all functions defined by `fn` can be implemented with TreeReduce.
Parameters
----------
fn : Any
Function to test.
Returns
-------
bool
Whether all functions defined by `fn` are reductions.
"""
return _is_reduce_function_with_depth(fn, 0)
|
def is_reduce_function(fn):
"""
Check whether all functions defined by `fn` are groupby reductions.
If true, all functions defined by `fn` can be implemented with TreeReduce.
Parameters
----------
fn : Any
Function to test.
Returns
-------
bool
Whether all functions defined by `fn` are reductions.
"""
return _is_reduce_function_with_depth(fn, depth=0)
|
7,496 |
def test_time_1d_location_unsupported(header_time_1d_noobs):
# Check what happens when TREFPOS is unsupported
header_time_1d_noobs['TREFPOS'] = 'BARYCENTER'
wcs = WCS(header_time_1d_noobs)
with pytest.warns(UserWarning,
match="Observation location 'barycenter' is not "
"supported, setting location in Time to None"):
time = wcs.pixel_to_world(10)
assert time.location is None
|
def test_time_1d_location_unsupported(header_time_1d_no_obs):
# Check what happens when TREFPOS is unsupported
    header_time_1d_no_obs['TREFPOS'] = 'BARYCENTER'
    wcs = WCS(header_time_1d_no_obs)
with pytest.warns(UserWarning,
match="Observation location 'barycenter' is not "
"supported, setting location in Time to None"):
time = wcs.pixel_to_world(10)
assert time.location is None
|
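A generic, runnable sketch of the pytest.warns pattern used in the test above, with a toy function standing in for the WCS call; the warning text is reused purely as an example:

import warnings
import pytest

def emit():
    warnings.warn("Observation location 'barycenter' is not supported", UserWarning)

def test_userwarning_is_raised():
    # The block must raise a UserWarning whose message matches the regex.
    with pytest.warns(UserWarning, match='not supported'):
        emit()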
32,446 |
def tim_insert_jsons(client: Client):
indicators = demisto.args().get('indicator', '')
if not indicators:
iocs = get_last_iocs()
else:
iocs = get_indicators(indicators)
if iocs:
path = 'tim_insert_jsons/'
for i, singel_batch_iocs in enumerate(batch_iocs(iocs)):
demisto.debug(f'push batch: {i}')
requests_kwargs: Dict = get_requests_kwargs(_json=list(
map(demisto_ioc_to_xdr, singel_batch_iocs)))
client.http_request(url_suffix=path, requests_kwargs=requests_kwargs)
return_outputs('push done.')
|
def tim_insert_jsons(client: Client):
indicators = demisto.args().get('indicator', '')
if not indicators:
iocs = get_last_iocs()
else:
iocs = get_indicators(indicators)
if iocs:
path = 'tim_insert_jsons/'
for i, single_batch_iocs in enumerate(batch_iocs(iocs)):
demisto.debug(f'push batch: {i}')
requests_kwargs: Dict = get_requests_kwargs(_json=list(
                map(demisto_ioc_to_xdr, single_batch_iocs)))
client.http_request(url_suffix=path, requests_kwargs=requests_kwargs)
return_outputs('push done.')
|
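The loop above relies on a batch_iocs helper that is not shown; a generic, runnable sketch of the same chunking idea (the batch size of 2 is only for demonstration):

from itertools import islice

def batch(iterable, size=200):
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

for i, chunk in enumerate(batch(range(5), size=2)):
    print(i, chunk)    # 0 [0, 1] / 1 [2, 3] / 2 [4]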
27,925 |
def main():
# This script is almost identical to train_mnist.py. The only difference is
    # that this script uses data-parallel computation on multiple GPUs.
# See train_mnist.py for more details.
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
parser.add_argument('--batchsize', '-b', type=int, default=400,
help='Number of images in each mini-batch')
parser.add_argument('--epoch', '-e', type=int, default=20,
help='Number of sweeps over the dataset to train')
parser.add_argument('--out', '-o', default='result_data_parallel',
help='Directory to output the result')
parser.add_argument('--resume', '-r', default='',
help='Resume the training from snapshot')
parser.add_argument('--unit', '-u', type=int, default=1000,
help='Number of units')
parser.add_argument('--devices', '-d', type=str, nargs='*',
default=['0', '1', '2', '3'],
help='Device specifiers. Either ChainerX device '
'specifiers or integers. If non-negative integer, '
'CuPy arrays with specified device id are used. If '
'negative integer, NumPy arrays are used')
parser.add_argument('--ljob', '-j', type=int,
help='Number of parallel data loading processes')
args = parser.parse_args()
devices = tuple([chainer.get_device(d) for d in args.devices])
if any(device.xp is chainerx for device in devices):
sys.stderr.write('This example does not support ChainerX devices.\n')
sys.exit(1)
print('Devices: {}'.format(args.devices))
print('# unit: {}'.format(args.unit))
print('# Minibatch-size: {}'.format(args.batchsize))
print('# epoch: {}'.format(args.epoch))
print('')
model = L.Classifier(train_mnist.MLP(args.unit, 10))
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
train, test = chainer.datasets.get_mnist()
train_iters = [
chainer.iterators.MultiprocessIterator(i,
args.batchsize,
n_processes=args.ljob)
for i in chainer.datasets.split_dataset_n_random(train, args.ljob)]
test_iter = chainer.iterators.MultiprocessIterator(
test, args.batchsize, repeat=False, n_processes=args.ljob)
updater = training.updaters.MultiprocessParallelUpdater(train_iters,
optimizer,
devices=(devices))
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
trainer.extend(extensions.Evaluator(test_iter, model, device=devices[0]))
trainer.extend(extensions.DumpGraph('main/loss'))
trainer.extend(extensions.snapshot(), trigger=(args.epoch, 'epoch'))
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(
['epoch', 'main/loss', 'validation/main/loss',
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
trainer.extend(extensions.ProgressBar())
if args.resume:
chainer.serializers.load_npz(args.resume, trainer)
trainer.run()
|
def main():
# This script is almost identical to train_mnist.py. The only difference is
    # that this script uses data-parallel computation on multiple GPUs.
# See train_mnist.py for more details.
parser = argparse.ArgumentParser(description='Chainer example: MNIST')
parser.add_argument('--batchsize', '-b', type=int, default=400,
help='Number of images in each mini-batch')
parser.add_argument('--epoch', '-e', type=int, default=20,
help='Number of sweeps over the dataset to train')
parser.add_argument('--out', '-o', default='result_data_parallel',
help='Directory to output the result')
parser.add_argument('--resume', '-r', default='',
help='Resume the training from snapshot')
parser.add_argument('--unit', '-u', type=int, default=1000,
help='Number of units')
parser.add_argument('--devices', '-d', type=str, nargs='*',
default=['0', '1', '2', '3'],
help='Device specifiers. Either ChainerX device '
'specifiers or integers. If non-negative integer, '
'CuPy arrays with specified device id are used. If '
'negative integer, NumPy arrays are used')
parser.add_argument('--ljob', '-j', type=int, default=4,
help='Number of parallel data loading processes')
args = parser.parse_args()
devices = tuple([chainer.get_device(d) for d in args.devices])
if any(device.xp is chainerx for device in devices):
sys.stderr.write('This example does not support ChainerX devices.\n')
sys.exit(1)
print('Devices: {}'.format(args.devices))
print('# unit: {}'.format(args.unit))
print('# Minibatch-size: {}'.format(args.batchsize))
print('# epoch: {}'.format(args.epoch))
print('')
model = L.Classifier(train_mnist.MLP(args.unit, 10))
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
train, test = chainer.datasets.get_mnist()
train_iters = [
chainer.iterators.MultiprocessIterator(i,
args.batchsize,
n_processes=args.ljob)
for i in chainer.datasets.split_dataset_n_random(train, args.ljob)]
test_iter = chainer.iterators.MultiprocessIterator(
test, args.batchsize, repeat=False, n_processes=args.ljob)
updater = training.updaters.MultiprocessParallelUpdater(train_iters,
optimizer,
devices=(devices))
trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
trainer.extend(extensions.Evaluator(test_iter, model, device=devices[0]))
trainer.extend(extensions.DumpGraph('main/loss'))
trainer.extend(extensions.snapshot(), trigger=(args.epoch, 'epoch'))
trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(
['epoch', 'main/loss', 'validation/main/loss',
'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
trainer.extend(extensions.ProgressBar())
if args.resume:
chainer.serializers.load_npz(args.resume, trainer)
trainer.run()
|
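A small sketch of the dataset splitting used above, on a toy list instead of MNIST; it assumes split_dataset_n_random divides the data into equally sized random shards, one per loader process:

import chainer

data = list(range(12))
shards = chainer.datasets.split_dataset_n_random(data, 3)
print([len(s) for s in shards])    # [4, 4, 4]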
5,199 |
def test_addfont_as_path():
"""Smoke test that addfont() accepts pathlib.Path."""
font_test_file = 'mpltest.ttf'
path = Path(__file__).parent / font_test_file
try:
fontManager.addfont(path)
added, = [font for font in fontManager.ttflist
if font.fname.endswith('mpltest.ttf')]
fontManager.ttflist.remove(added)
finally:
to_remove = [font for font in fontManager.ttflist
if font.fname.endswith('mpltest.ttf')]
for font in to_remove:
fontManager.ttflist.remove(font)
|
def test_addfont_as_path():
"""Smoke test that addfont() accepts pathlib.Path."""
font_test_file = 'mpltest.ttf'
path = Path(__file__).parent / font_test_file
try:
fontManager.addfont(path)
added, = [font for font in fontManager.ttflist
if font.fname.endswith(font_test_file)]
fontManager.ttflist.remove(added)
finally:
to_remove = [font for font in fontManager.ttflist
                     if font.fname.endswith(font_test_file)]
for font in to_remove:
fontManager.ttflist.remove(font)
|
31,031 |
def get_all_user_profiles():
query = 'type:\"User Profile\"'
employee_id_to_user_profile = {}
email_to_user_profile = {}
def handle_batch(user_profiles):
for user_profile in user_profiles:
user_profile = user_profile.get('CustomFields', {})
employee_id = user_profile.get('employeeid')
email = user_profile.get('email')
employee_id_to_user_profile[employee_id] = user_profile
email_to_user_profile[email] = user_profile
query_result = demisto.searchIndicators(query=query, size=BATCH_SIZE)
handle_batch(query_result.get('iocs', []))
while query_result.get('searchAfter') is not None:
query_result = demisto.searchIndicators(query=query, size=BATCH_SIZE)
handle_batch(query_result.get('iocs', []))
return employee_id_to_user_profile, email_to_user_profile
|
def get_all_user_profiles():
query = 'type:\"User Profile\"'
employee_id_to_user_profile = {}
email_to_user_profile = {}
def handle_batch(user_profiles):
for user_profile in user_profiles:
user_profile = user_profile.get('CustomFields', {})
employee_id = user_profile.get('employeeid')
email = user_profile.get('email')
employee_id_to_user_profile[employee_id] = user_profile
email_to_user_profile[email] = user_profile
query_result = demisto.searchIndicators(query=query, size=BATCH_SIZE)
handle_batch(query_result.get('iocs', []))
while query_result.get('searchAfter') is not None:
query_result = demisto.searchIndicators(query=query, size=BATCH_SIZE, searchAfter=query_result.get('searchAfter'))
handle_batch(query_result.get('iocs', []))
return employee_id_to_user_profile, email_to_user_profile
|
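The difference between the two versions above is the searchAfter cursor: without it the loop re-fetches the first page forever. A generic sketch of that pagination pattern, with a stub standing in for demisto.searchIndicators (the stub data and page size are assumptions):

PAGES = [
    {'iocs': [{'value': 'a'}, {'value': 'b'}], 'searchAfter': 'cursor-1'},
    {'iocs': [{'value': 'c'}], 'searchAfter': None},
]

def search_indicators_stub(query, size, searchAfter=None):
    # Stand-in for demisto.searchIndicators: return the page after the given cursor.
    return PAGES[0] if searchAfter is None else PAGES[1]

def fetch_all(query, size=200):
    results = []
    page = search_indicators_stub(query, size)
    results.extend(page['iocs'])
    while page.get('searchAfter') is not None:
        # Pass the cursor forward; dropping it would repeat the first page indefinitely.
        page = search_indicators_stub(query, size, searchAfter=page['searchAfter'])
        results.extend(page['iocs'])
    return results

print(fetch_all('type:"User Profile"'))    # [{'value': 'a'}, {'value': 'b'}, {'value': 'c'}]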
41,898 |
def plot_edf(study: Union[Study, Sequence[Study]]) -> Axes:
"""Plot the objective value EDF (empirical distribution function) of a study with Matplotlib.
Note that only the complete trials are considered when plotting the EDF.
.. note::
EDF is useful to analyze and improve search spaces.
For instance, you can see a practical use case of EDF in the paper
`Designing Network Design Spaces <https://arxiv.org/abs/2003.13678>`_.
.. note::
The plotted EDF assumes that the value of the objective function is in
accordance with the uniform distribution over the objective space.
Example:
The following code snippet shows how to plot EDF.
.. testcode::
import math
import optuna
def ackley(x, y):
a = 20 * math.exp(-0.2 * math.sqrt(0.5 * (x ** 2 + y ** 2)))
b = math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))
return -a - b + math.e + 20
def objective(trial, low, high):
x = trial.suggest_float("x", low, high)
y = trial.suggest_float("y", low, high)
return ackley(x, y)
sampler = optuna.samplers.RandomSampler()
# Widest search space.
study0 = optuna.create_study(study_name="x=[0,5), y=[0,5)", sampler=sampler)
study0.optimize(lambda t: objective(t, 0, 5), n_trials=500)
# Narrower search space.
study1 = optuna.create_study(study_name="x=[0,4), y=[0,4)", sampler=sampler)
study1.optimize(lambda t: objective(t, 0, 4), n_trials=500)
# Narrowest search space but it doesn't include the global optimum point.
study2 = optuna.create_study(study_name="x=[1,3), y=[1,3)", sampler=sampler)
study2.optimize(lambda t: objective(t, 1, 3), n_trials=500)
optuna.visualization.plot_edf([study0, study1, study2])
Args:
study:
A target :class:`~optuna.study.Study` object.
You can pass multiple studies if you want to compare those EDFs.
Returns:
A :class:`matplotlib.figure.Figure` object.
"""
_imports.check()
if isinstance(study, Study):
studies = [study]
else:
studies = list(study)
return _get_edf_plot(studies)
|
def plot_edf(study: Union[Study, Sequence[Study]]) -> Axes:
"""Plot the objective value EDF (empirical distribution function) of a study with Matplotlib.
Note that only the complete trials are considered when plotting the EDF.
.. note::
EDF is useful to analyze and improve search spaces.
For instance, you can see a practical use case of EDF in the paper
`Designing Network Design Spaces <https://arxiv.org/abs/2003.13678>`_.
.. note::
The plotted EDF assumes that the value of the objective function is in
accordance with the uniform distribution over the objective space.
Example:
The following code snippet shows how to plot EDF.
.. testcode::
import math
import optuna
def ackley(x, y):
a = 20 * math.exp(-0.2 * math.sqrt(0.5 * (x ** 2 + y ** 2)))
b = math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))
return -a - b + math.e + 20
def objective(trial, low, high):
x = trial.suggest_float("x", low, high)
y = trial.suggest_float("y", low, high)
return ackley(x, y)
sampler = optuna.samplers.RandomSampler()
# Widest search space.
study0 = optuna.create_study(study_name="x=[0,5), y=[0,5)", sampler=sampler)
study0.optimize(lambda t: objective(t, 0, 5), n_trials=500)
# Narrower search space.
study1 = optuna.create_study(study_name="x=[0,4), y=[0,4)", sampler=sampler)
study1.optimize(lambda t: objective(t, 0, 4), n_trials=500)
# Narrowest search space but it doesn't include the global optimum point.
study2 = optuna.create_study(study_name="x=[1,3), y=[1,3)", sampler=sampler)
study2.optimize(lambda t: objective(t, 1, 3), n_trials=500)
optuna.visualization.plot_edf([study0, study1, study2])
Args:
study:
A target :class:`~optuna.study.Study` object.
You can pass multiple studies if you want to compare those EDFs.
Returns:
A :class:`matplotlib.axes.Axes` object.
"""
_imports.check()
if isinstance(study, Study):
studies = [study]
else:
studies = list(study)
return _get_edf_plot(studies)
|
10,582 |
def incidental_report(args):
"""Generate incidental coverage report."""
ct = CoverageTool()
git = Git(os.path.abspath(args.source))
coverage_data = CoverageData(os.path.abspath(args.result))
try:
git.show([coverage_data.result_sha, '--'])
except subprocess.CalledProcessError:
raise ApplicationError('%s: commit not found: %s\n'
'make sure your source repository is up-to-date' % (git.path, coverage_data.result_sha))
if coverage_data.result != "succeeded":
check_failed(args, 'results from Shippable indicate tests did not pass (result: %s)\n'
're-run until passing, then download the latest results and re-run the report using those results' % coverage_data.result)
if not coverage_data.paths:
raise ApplicationError('no coverage data found\n'
'make sure the downloaded results are from a code coverage run on Shippable')
# generate a unique subdirectory in the output directory based on the input files being used
path_hash = hashlib.sha256(b'\n'.join(p.encode() for p in coverage_data.paths)).hexdigest()
output_path = os.path.abspath(os.path.join(args.output, path_hash))
data_path = os.path.join(output_path, 'data')
reports_path = os.path.join(output_path, 'reports')
for path in [data_path, reports_path]:
if not os.path.exists(path):
os.makedirs(path)
# combine coverage results into a single file
combined_path = os.path.join(output_path, 'combined.json')
cached(combined_path, args.use_cache, args.verbose,
lambda: ct.combine(coverage_data.paths, combined_path))
with open(combined_path) as combined_file:
combined = json.load(combined_file)
if args.plugin_path:
# reporting on coverage missing from the test target for the specified plugin
# the report will be on a single target
cache_path_format = '%s' + '-for-%s' % os.path.splitext(os.path.basename(args.plugin_path))[0]
target_pattern = '^%s$' % get_target_name_from_plugin_path(args.plugin_path)
include_path = args.plugin_path
missing = True
target_name = get_target_name_from_plugin_path(args.plugin_path)
else:
# reporting on coverage exclusive to the matched targets
# the report can contain multiple targets
cache_path_format = '%s'
target_pattern = args.targets
include_path = None
missing = False
target_name = None
# identify integration test targets to analyze
target_names = sorted(combined['targets'])
incidental_target_names = [target for target in target_names if re.search(target_pattern, target)]
if not incidental_target_names:
if target_name:
# if the plugin has no tests we still want to know what coverage is missing
incidental_target_names = [target_name]
else:
raise ApplicationError('no targets to analyze')
# exclude test support plugins from analysis
# also exclude six, which for an unknown reason reports bogus coverage lines (indicating coverage of comments)
exclude_path = '^(test/support/|lib/ansible/module_utils/six/)'
# process coverage for each target and then generate a report
# save sources for generating a summary report at the end
summary = {}
report_paths = {}
for target_name in incidental_target_names:
cache_name = cache_path_format % target_name
only_target_path = os.path.join(data_path, 'only-%s.json' % cache_name)
cached(only_target_path, args.use_cache, args.verbose,
lambda: ct.filter(combined_path, only_target_path, include_targets=[target_name], include_path=include_path, exclude_path=exclude_path))
without_target_path = os.path.join(data_path, 'without-%s.json' % cache_name)
cached(without_target_path, args.use_cache, args.verbose,
lambda: ct.filter(combined_path, without_target_path, exclude_targets=[target_name], include_path=include_path, exclude_path=exclude_path))
if missing:
source_target_path = missing_target_path = os.path.join(data_path, 'missing-%s.json' % cache_name)
cached(missing_target_path, args.use_cache, args.verbose,
lambda: ct.missing(without_target_path, only_target_path, missing_target_path, only_gaps=True))
else:
source_target_path = exclusive_target_path = os.path.join(data_path, 'exclusive-%s.json' % cache_name)
cached(exclusive_target_path, args.use_cache, args.verbose,
lambda: ct.missing(only_target_path, without_target_path, exclusive_target_path, only_gaps=True))
source_expanded_target_path = os.path.join(os.path.dirname(source_target_path), 'expanded-%s' % os.path.basename(source_target_path))
cached(source_expanded_target_path, args.use_cache, args.verbose,
lambda: ct.expand(source_target_path, source_expanded_target_path))
summary[target_name] = sources = collect_sources(source_expanded_target_path, git, coverage_data)
txt_report_path = os.path.join(reports_path, '%s.txt' % cache_name)
cached(txt_report_path, args.use_cache, args.verbose,
lambda: generate_report(sources, txt_report_path, coverage_data, target_name, missing=missing))
report_paths[target_name] = txt_report_path
# provide a summary report of results
for target_name in incidental_target_names:
sources = summary[target_name]
report_path = os.path.relpath(report_paths[target_name])
print('%s: %d arcs, %d lines, %d files - %s' % (
target_name,
sum(len(s.covered_arcs) for s in sources),
sum(len(s.covered_lines) for s in sources),
len(sources),
report_path,
))
if not missing:
sys.stderr.write('NOTE: This report shows only coverage exclusive to the reported targets. '
'As targets are removed, exclusive coverage on the remaining targets will increase.\n')
|
def incidental_report(args):
"""Generate incidental coverage report."""
ct = CoverageTool()
git = Git(os.path.abspath(args.source))
coverage_data = CoverageData(os.path.abspath(args.result))
try:
git.show([coverage_data.result_sha, '--'])
except subprocess.CalledProcessError:
raise ApplicationError('%s: commit not found: %s\n'
'make sure your source repository is up-to-date' % (git.path, coverage_data.result_sha))
if coverage_data.result != "succeeded":
check_failed(args, 'results indicate tests did not pass (result: %s)\n'
're-run until passing, then download the latest results and re-run the report using those results' % coverage_data.result)
if not coverage_data.paths:
raise ApplicationError('no coverage data found\n'
'make sure the downloaded results are from a code coverage run on Shippable')
# generate a unique subdirectory in the output directory based on the input files being used
path_hash = hashlib.sha256(b'\n'.join(p.encode() for p in coverage_data.paths)).hexdigest()
output_path = os.path.abspath(os.path.join(args.output, path_hash))
data_path = os.path.join(output_path, 'data')
reports_path = os.path.join(output_path, 'reports')
for path in [data_path, reports_path]:
if not os.path.exists(path):
os.makedirs(path)
# combine coverage results into a single file
combined_path = os.path.join(output_path, 'combined.json')
cached(combined_path, args.use_cache, args.verbose,
lambda: ct.combine(coverage_data.paths, combined_path))
with open(combined_path) as combined_file:
combined = json.load(combined_file)
if args.plugin_path:
# reporting on coverage missing from the test target for the specified plugin
# the report will be on a single target
cache_path_format = '%s' + '-for-%s' % os.path.splitext(os.path.basename(args.plugin_path))[0]
target_pattern = '^%s$' % get_target_name_from_plugin_path(args.plugin_path)
include_path = args.plugin_path
missing = True
target_name = get_target_name_from_plugin_path(args.plugin_path)
else:
# reporting on coverage exclusive to the matched targets
# the report can contain multiple targets
cache_path_format = '%s'
target_pattern = args.targets
include_path = None
missing = False
target_name = None
# identify integration test targets to analyze
target_names = sorted(combined['targets'])
incidental_target_names = [target for target in target_names if re.search(target_pattern, target)]
if not incidental_target_names:
if target_name:
# if the plugin has no tests we still want to know what coverage is missing
incidental_target_names = [target_name]
else:
raise ApplicationError('no targets to analyze')
# exclude test support plugins from analysis
# also exclude six, which for an unknown reason reports bogus coverage lines (indicating coverage of comments)
exclude_path = '^(test/support/|lib/ansible/module_utils/six/)'
# process coverage for each target and then generate a report
# save sources for generating a summary report at the end
summary = {}
report_paths = {}
for target_name in incidental_target_names:
cache_name = cache_path_format % target_name
only_target_path = os.path.join(data_path, 'only-%s.json' % cache_name)
cached(only_target_path, args.use_cache, args.verbose,
lambda: ct.filter(combined_path, only_target_path, include_targets=[target_name], include_path=include_path, exclude_path=exclude_path))
without_target_path = os.path.join(data_path, 'without-%s.json' % cache_name)
cached(without_target_path, args.use_cache, args.verbose,
lambda: ct.filter(combined_path, without_target_path, exclude_targets=[target_name], include_path=include_path, exclude_path=exclude_path))
if missing:
source_target_path = missing_target_path = os.path.join(data_path, 'missing-%s.json' % cache_name)
cached(missing_target_path, args.use_cache, args.verbose,
lambda: ct.missing(without_target_path, only_target_path, missing_target_path, only_gaps=True))
else:
source_target_path = exclusive_target_path = os.path.join(data_path, 'exclusive-%s.json' % cache_name)
cached(exclusive_target_path, args.use_cache, args.verbose,
lambda: ct.missing(only_target_path, without_target_path, exclusive_target_path, only_gaps=True))
source_expanded_target_path = os.path.join(os.path.dirname(source_target_path), 'expanded-%s' % os.path.basename(source_target_path))
cached(source_expanded_target_path, args.use_cache, args.verbose,
lambda: ct.expand(source_target_path, source_expanded_target_path))
summary[target_name] = sources = collect_sources(source_expanded_target_path, git, coverage_data)
txt_report_path = os.path.join(reports_path, '%s.txt' % cache_name)
cached(txt_report_path, args.use_cache, args.verbose,
lambda: generate_report(sources, txt_report_path, coverage_data, target_name, missing=missing))
report_paths[target_name] = txt_report_path
# provide a summary report of results
for target_name in incidental_target_names:
sources = summary[target_name]
report_path = os.path.relpath(report_paths[target_name])
print('%s: %d arcs, %d lines, %d files - %s' % (
target_name,
sum(len(s.covered_arcs) for s in sources),
sum(len(s.covered_lines) for s in sources),
len(sources),
report_path,
))
if not missing:
sys.stderr.write('NOTE: This report shows only coverage exclusive to the reported targets. '
'As targets are removed, exclusive coverage on the remaining targets will increase.\n')
|
35,499 |
def graph_timestamps(timestamps, relative):
t0 = get_min_time(min(timestamps.keys()), SERVICES[0], timestamps)
fig, ax = plt.subplots()
ax.set_xlim(0, 150 if relative else 750)
ax.set_ylim(0, 15)
ax.set_xlabel('milliseconds')
ax.set_ylabel('Frame id')
colors = ["blue", 'green', 'red', 'yellow', 'purple']
assert len(colors) == len(SERVICES), "Each service needs a color"
points = {"x": [], "y": [], "labels": []}
for frame_id, services in timestamps.items():
if relative:
t0 = get_min_time(frame_id, SERVICES[0], timestamps)
service_bars = []
for service, events in services.items():
start, end = get_interval(frame_id, service,timestamps)
service_bars.append(((start-t0)/1e6,(end-start)/1e6))
for event in events:
points["x"].append((event[1]-t0)/1e6)
points["y"].append(frame_id)
points["labels"].append(event[0])
ax.broken_barh(service_bars, (frame_id-0.45, 0.9), facecolors=(colors), alpha=0.5)
scatter = ax.scatter(points['x'], points['y'], marker="d", edgecolor='black')
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=points["labels"])
mpld3.plugins.connect(fig, tooltip)
plt.legend(handles=[mpatches.Patch(color=colors[i], label=SERVICES[i]) for i in range(len(SERVICES))])
#mpld3.save_html(fig, 'latencylogger_plot.html')
mpld3.show(fig)
|
def graph_timestamps(timestamps, relative):
t0 = get_min_time(min(timestamps.keys()), SERVICES[0], timestamps)
fig, ax = plt.subplots()
ax.set_xlim(0, 150 if relative else 750)
ax.set_ylim(0, 15)
ax.set_xlabel('milliseconds')
ax.set_ylabel('Frame ID')
colors = ["blue", 'green', 'red', 'yellow', 'purple']
assert len(colors) == len(SERVICES), "Each service needs a color"
points = {"x": [], "y": [], "labels": []}
for frame_id, services in timestamps.items():
if relative:
t0 = get_min_time(frame_id, SERVICES[0], timestamps)
service_bars = []
for service, events in services.items():
start, end = get_interval(frame_id, service,timestamps)
service_bars.append(((start-t0)/1e6,(end-start)/1e6))
for event in events:
points["x"].append((event[1]-t0)/1e6)
points["y"].append(frame_id)
points["labels"].append(event[0])
ax.broken_barh(service_bars, (frame_id-0.45, 0.9), facecolors=(colors), alpha=0.5)
scatter = ax.scatter(points['x'], points['y'], marker="d", edgecolor='black')
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=points["labels"])
mpld3.plugins.connect(fig, tooltip)
plt.legend(handles=[mpatches.Patch(color=colors[i], label=SERVICES[i]) for i in range(len(SERVICES))])
#mpld3.save_html(fig, 'latencylogger_plot.html')
mpld3.show(fig)
|
1,712 |
def confusion_matrix(y_true, y_pred, labels=None, sample_weight=None,
normalize=None):
"""Compute confusion matrix to evaluate the accuracy of a classification.
By definition a confusion matrix :math:`C` is such that :math:`C_{i, j}`
is equal to the number of observations known to be in group :math:`i` and
predicted to be in group :math:`j`.
Thus in binary classification, the count of true negatives is
:math:`C_{0,0}`, false negatives is :math:`C_{1,0}`, true positives is
:math:`C_{1,1}` and false positives is :math:`C_{0,1}`.
Read more in the :ref:`User Guide <confusion_matrix>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
Estimated targets as returned by a classifier.
labels : array-like of shape (n_classes), default=None
List of labels to index the matrix. This may be used to reorder
or select a subset of labels.
If ``None`` is given, those that appear at least once
in ``y_true`` or ``y_pred`` are used in sorted order.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
normalize : {'true', 'pred', 'all'}, default=None
Normalizes confusion matrix over the true (rows), predicted (columns)
conditions or all the population. If None, confusion matrix will not be
normalized.
Returns
-------
C : ndarray of shape (n_classes, n_classes)
Confusion matrix whose i-th row and j-th
column entry indicates the number of
samples with true label being i-th class
        and predicted label being j-th class.
References
----------
.. [1] `Wikipedia entry for the Confusion matrix
<https://en.wikipedia.org/wiki/Confusion_matrix>`_
(Wikipedia and other references may use a different
convention for axes)
Examples
--------
>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
[0, 0, 1],
[1, 0, 2]])
>>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
>>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
>>> confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
array([[2, 0, 0],
[0, 0, 1],
[1, 0, 2]])
In the binary case, we can extract true positives, etc as follows:
>>> tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
>>> (tn, fp, fn, tp)
(0, 2, 1, 1)
"""
y_type, y_true, y_pred = _check_targets(y_true, y_pred)
if y_type not in ("binary", "multiclass"):
raise ValueError("%s is not supported" % y_type)
if labels is None:
labels = unique_labels(y_true, y_pred)
elif len(y_true) == 0:
n_labels = len(labels)
return np.zeros((n_labels, n_labels), dtype=np.int)
else:
labels = np.asarray(labels)
if np.all([l not in y_true for l in labels]):
raise ValueError("At least one label specified must be in y_true")
if sample_weight is None:
sample_weight = np.ones(y_true.shape[0], dtype=np.int64)
else:
sample_weight = np.asarray(sample_weight)
check_consistent_length(y_true, y_pred, sample_weight)
if normalize not in ['true', 'pred', 'all', None]:
raise ValueError("normalize must be one of {'true', 'pred', "
"'all', None}")
n_labels = labels.size
label_to_ind = {y: x for x, y in enumerate(labels)}
# convert yt, yp into index
y_pred = np.array([label_to_ind.get(x, n_labels + 1) for x in y_pred])
y_true = np.array([label_to_ind.get(x, n_labels + 1) for x in y_true])
# intersect y_pred, y_true with labels, eliminate items not in labels
ind = np.logical_and(y_pred < n_labels, y_true < n_labels)
y_pred = y_pred[ind]
y_true = y_true[ind]
# also eliminate weights of eliminated items
sample_weight = sample_weight[ind]
# Choose the accumulator dtype to always have high precision
if sample_weight.dtype.kind in {'i', 'u', 'b'}:
dtype = np.int64
else:
dtype = np.float64
cm = coo_matrix((sample_weight, (y_true, y_pred)),
shape=(n_labels, n_labels), dtype=dtype,
).toarray()
with np.errstate(all='ignore'):
if normalize == 'true':
cm = cm / cm.sum(axis=1, keepdims=True)
elif normalize == 'pred':
cm = cm / cm.sum(axis=0, keepdims=True)
elif normalize == 'all':
cm = cm / cm.sum()
cm = np.nan_to_num(cm)
return cm
|
def confusion_matrix(y_true, y_pred, labels=None, sample_weight=None,
normalize=None):
"""Compute confusion matrix to evaluate the accuracy of a classification.
By definition a confusion matrix :math:`C` is such that :math:`C_{i, j}`
is equal to the number of observations known to be in group :math:`i` and
predicted to be in group :math:`j`.
Thus in binary classification, the count of true negatives is
:math:`C_{0,0}`, false negatives is :math:`C_{1,0}`, true positives is
:math:`C_{1,1}` and false positives is :math:`C_{0,1}`.
Read more in the :ref:`User Guide <confusion_matrix>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
Estimated targets as returned by a classifier.
labels : array-like of shape (n_classes), default=None
List of labels to index the matrix. This may be used to reorder
or select a subset of labels.
If ``None`` is given, those that appear at least once
in ``y_true`` or ``y_pred`` are used in sorted order.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
normalize : {'true', 'pred', 'all'}, default=None
Normalizes confusion matrix over the true (rows), predicted (columns)
conditions or all the population. If None, confusion matrix will not be
normalized.
Returns
-------
C : ndarray of shape (n_classes, n_classes)
Confusion matrix whose i-th row and j-th
column entry indicates the number of
samples with true label being i-th class
        and predicted label being j-th class.
References
----------
.. [1] `Wikipedia entry for the Confusion matrix
<https://en.wikipedia.org/wiki/Confusion_matrix>`_
(Wikipedia and other references may use a different
convention for axes)
Examples
--------
>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
[0, 0, 1],
[1, 0, 2]])
>>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
>>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
>>> confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
array([[2, 0, 0],
[0, 0, 1],
[1, 0, 2]])
In the binary case, we can extract true positives, etc as follows:
>>> tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
>>> (tn, fp, fn, tp)
(0, 2, 1, 1)
"""
y_type, y_true, y_pred = _check_targets(y_true, y_pred)
if y_type not in ("binary", "multiclass"):
raise ValueError("%s is not supported" % y_type)
if labels is None:
labels = unique_labels(y_true, y_pred)
    elif len(y_true) == 0:
n_labels = len(labels)
return np.zeros((n_labels, n_labels), dtype=np.int)
else:
labels = np.asarray(labels)
if np.all([l not in y_true for l in labels]):
raise ValueError("At least one label specified must be in y_true")
if sample_weight is None:
sample_weight = np.ones(y_true.shape[0], dtype=np.int64)
else:
sample_weight = np.asarray(sample_weight)
check_consistent_length(y_true, y_pred, sample_weight)
if normalize not in ['true', 'pred', 'all', None]:
raise ValueError("normalize must be one of {'true', 'pred', "
"'all', None}")
n_labels = labels.size
label_to_ind = {y: x for x, y in enumerate(labels)}
# convert yt, yp into index
y_pred = np.array([label_to_ind.get(x, n_labels + 1) for x in y_pred])
y_true = np.array([label_to_ind.get(x, n_labels + 1) for x in y_true])
# intersect y_pred, y_true with labels, eliminate items not in labels
ind = np.logical_and(y_pred < n_labels, y_true < n_labels)
y_pred = y_pred[ind]
y_true = y_true[ind]
# also eliminate weights of eliminated items
sample_weight = sample_weight[ind]
# Choose the accumulator dtype to always have high precision
if sample_weight.dtype.kind in {'i', 'u', 'b'}:
dtype = np.int64
else:
dtype = np.float64
cm = coo_matrix((sample_weight, (y_true, y_pred)),
shape=(n_labels, n_labels), dtype=dtype,
).toarray()
with np.errstate(all='ignore'):
if normalize == 'true':
cm = cm / cm.sum(axis=1, keepdims=True)
elif normalize == 'pred':
cm = cm / cm.sum(axis=0, keepdims=True)
elif normalize == 'all':
cm = cm / cm.sum()
cm = np.nan_to_num(cm)
return cm
|
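A brief illustration of the normalize argument documented above, extending the docstring example; the expected output is computed by hand from the raw counts:

from sklearn.metrics import confusion_matrix

# Raw counts for these labels are [[0, 2], [1, 1]]; normalize='true' divides each
# row by its sum, so every row becomes the per-class recall distribution.
cm = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0], normalize='true')
print(cm)    # [[0.  1. ]
             #  [0.5 0.5]]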
43,008 |
def _deep_update(source, overrides):
"""Recursively update a nested dictionary.
This function is a generalization of Python's built in
``dict.update`` method, modified to recursively update
keys with nested dictionaries.
"""
for key, value in overrides.items():
if isinstance(value, collections.Mapping) and value:
# Override value is a non-empty dictionary.
# Update the source key with the override dictionary.
returned = _deep_update(source.get(key, {}), value)
source[key] = returned
elif value != {}:
# Override value is not an empty dictionary.
source[key] = overrides[key]
return source
|
def _deep_update(old_dict, new_dict):
"""Recursively update a nested dictionary.
This function is a generalization of Python's built in
``dict.update`` method, modified to recursively update
keys with nested dictionaries.
"""
    for key, value in new_dict.items():
        if isinstance(value, collections.Mapping) and value:
            # Override value is a non-empty dictionary.
            # Update the old_dict key with the override dictionary.
            returned = _deep_update(old_dict.get(key, {}), value)
            old_dict[key] = returned
        elif value != {}:
            # Override value is not an empty dictionary.
            old_dict[key] = new_dict[key]
    return old_dict
|
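A self-contained sketch of the recursive-update behaviour implemented above, written against collections.abc so it runs on current Python; the config dictionaries are illustrative:

from collections.abc import Mapping

def deep_update(source, overrides):
    # Recursively merge `overrides` into `source`, descending into nested mappings.
    for key, value in overrides.items():
        if isinstance(value, Mapping) and value:
            source[key] = deep_update(source.get(key, {}), value)
        elif value != {}:
            source[key] = value
    return source

config = {'db': {'host': 'localhost', 'port': 5432}, 'debug': False}
deep_update(config, {'db': {'port': 6432}, 'debug': True})
print(config)    # {'db': {'host': 'localhost', 'port': 6432}, 'debug': True}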
47,993 |
def main():
args = parse_arguments()
# --------------------------------- 1. Load Plugin for inference engine ---------------------------------
ie = IECore()
if 'CPU' in args.target_device:
if args.path_to_extension:
ie.add_extension(args.path_to_extension, "CPU")
if args.number_threads is not None:
ie.set_config({'CPU_THREADS_NUM': str(args.number_threads)}, "CPU")
elif 'GPU' in args.target_device:
if args.path_to_cldnn_config:
ie.set_config({'CONFIG_FILE': args.path_to_cldnn_config}, "GPU")
else:
raise AttributeError("Device {} do not support of 3D convolution. "
"Please use CPU, GPU or HETERO:*CPU*, HETERO:*GPU*")
version = ie.get_versions(args.target_device)[args.target_device].build_number
log.info('IE build: {}'.format(version))
# --------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ---------------------
log.info('Reading model {}'.format(args.path_to_model))
ie_network = ie.read_network(args.path_to_model, os.path.splitext(args.path_to_model)[0] + '.bin')
input_info = ie_network.input_info
if len(input_info) == 0:
raise AttributeError('No inputs info is provided')
elif len(input_info) != 1:
raise AttributeError("only one input layer network is supported")
input_name = next(iter(input_info))
out_name = next(iter(ie_network.outputs))
if args.shape:
log.debug("Reshape model from {} to {}".format(input_info[input_name].input_data.shape, args.shape))
ie_network.reshape({input_name: args.shape})
input_info = ie_network.input_info
# ---------------------------------------- 4. Preparing input data ----------------------------------------
if len(input_info[input_name].input_data.shape) != 5:
raise AttributeError("Incorrect shape {} for 3d convolution network".format(args.shape))
n, c, d, h, w = input_info[input_name].input_data.shape
ie_network.batch_size = n
if not os.path.exists(args.path_to_input_data):
raise AttributeError("Path to input data: '{}' does not exist".format(args.path_to_input_data))
input_type = get_input_type(args.path_to_input_data)
is_nifti_data = (input_type == NIFTI_FILE or input_type == NIFTI_FOLDER)
if input_type == NIFTI_FOLDER:
series_name = find_series_name(args.path_to_input_data)
original_data, data_crop, affine, original_size, bbox = \
read_image(args.path_to_input_data, data_name=series_name, sizes=(d, h, w),
mri_sequence_order=args.mri_sequence, full_intensities_range=args.full_intensities_range)
elif input_type == NIFTI_FILE:
original_data, data_crop, affine, original_size, bbox = \
read_image(args.path_to_input_data, data_name=args.path_to_input_data, sizes=(d, h, w), is_series=False,
mri_sequence_order=args.mri_sequence, full_intensities_range=args.full_intensities_range)
else:
data_crop = np.zeros(shape=(n, c, d, h, w), dtype=np.float)
im_seq = ImageSequence.Iterator(Image.open(args.path_to_input_data))
for i, page in enumerate(im_seq):
im = np.array(page).reshape(h, w, c)
for channel in range(c):
data_crop[:, channel, i, :, :] = im[:, :, channel]
original_data = data_crop
original_size = original_data.shape[-3:]
test_im = {input_name: data_crop}
# ------------------------------------- 4. Loading model to the plugin -------------------------------------
executable_network = ie.load_network(network=ie_network, device_name=args.target_device)
log.info('Loaded model {} to {}'.format(args.path_to_model, args.target_device))
del ie_network
# ---------------------------------------------- 5. Do inference --------------------------------------------
start_time = datetime.now()
res = executable_network.infer(test_im)
infer_time = datetime.now() - start_time
log.info("Inference time is {}".format(infer_time))
# ---------------------------- 6. Processing of the received inference results ------------------------------
result = res[out_name]
batch, channels, out_d, out_h, out_w = result.shape
list_img = []
list_seg_result = []
start_time = datetime.now()
for batch, data in enumerate(result):
seg_result = np.zeros(shape=original_size, dtype=np.uint8)
if data.shape[1:] != original_size:
x = bbox[1] - bbox[0]
y = bbox[3] - bbox[2]
z = bbox[5] - bbox[4]
out_result = np.zeros(shape=((channels,) + original_size), dtype=float)
out_result[:, bbox[0]:bbox[1], bbox[2]:bbox[3], bbox[4]:bbox[5]] = \
resample_np(data, (channels, x, y, z), 1)
else:
out_result = data
if channels == 1:
reshaped_data = out_result.reshape(original_size[0], original_size[1], original_size[2])
mask = reshaped_data[:, :, :] > 0.5
reshaped_data[mask] = 1
seg_result = reshaped_data.astype(int)
elif channels == 4:
seg_result = np.argmax(out_result, axis=0).astype(int)
elif channels == 3:
            res = out_result > 0.5
wt = res[0]
tc = res[1]
et = res[2]
seg_result[wt] = 2
seg_result[tc] = 1
seg_result[et] = 3
im = np.stack([original_data[batch, 0, :, :, :],
original_data[batch, 0, :, :, :],
original_data[batch, 0, :, :, :]],
axis=3)
im = 255 * (im - im.min())/(im.max() - im.min())
color_seg_frame = np.zeros(im.shape, dtype=np.uint8)
for idx, c in enumerate(CLASSES_COLOR_MAP):
color_seg_frame[seg_result[:, :, :] == idx, :] = np.array(c, dtype=np.uint8)
mask = seg_result[:, :, :] > 0
im[mask] = color_seg_frame[mask]
for k in range(im.shape[2]):
if is_nifti_data:
list_img.append(Image.fromarray(im[:, :, k, :].astype('uint8'), 'RGB'))
else:
list_img.append(Image.fromarray(im[k, :, :, :].astype('uint8'), 'RGB'))
if args.output_nifti and is_nifti_data:
list_seg_result.append(seg_result)
result_processing_time = datetime.now() - start_time
log.info("Processing time is {}".format(result_processing_time))
# --------------------------------------------- 7. Save output -----------------------------------------------
tiff_output_name = os.path.join(args.path_to_output, 'output.tiff')
Image.new('RGB', (original_data.shape[3], original_data.shape[2])).save(tiff_output_name,
append_images=list_img, save_all=True)
log.debug("Result tiff file was saved to {}".format(tiff_output_name))
if args.output_nifti and is_nifti_data:
for seg_res in list_seg_result:
nii_filename = os.path.join(args.path_to_output, 'output_{}.nii.gz'.format(list_seg_result.index(seg_res)))
nib.save(nib.Nifti1Image(seg_res, affine=affine), nii_filename)
log.debug("Result nifti file was saved to {}".format(nii_filename))
|
def main():
args = parse_arguments()
# --------------------------------- 1. Load Plugin for inference engine ---------------------------------
ie = IECore()
if 'CPU' in args.target_device:
if args.path_to_extension:
ie.add_extension(args.path_to_extension, "CPU")
if args.number_threads is not None:
ie.set_config({'CPU_THREADS_NUM': str(args.number_threads)}, "CPU")
elif 'GPU' in args.target_device:
if args.path_to_cldnn_config:
ie.set_config({'CONFIG_FILE': args.path_to_cldnn_config}, "GPU")
else:
raise AttributeError("Device {} do not support of 3D convolution. "
"Please use CPU, GPU or HETERO:*CPU*, HETERO:*GPU*")
log.info('IE build: {}'.format(ie.get_versions(args.target_device)[args.target_device].build_number))
# --------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ---------------------
log.info('Reading model {}'.format(args.path_to_model))
ie_network = ie.read_network(args.path_to_model, os.path.splitext(args.path_to_model)[0] + '.bin')
input_info = ie_network.input_info
if len(input_info) == 0:
raise AttributeError('No inputs info is provided')
elif len(input_info) != 1:
raise AttributeError("only one input layer network is supported")
input_name = next(iter(input_info))
out_name = next(iter(ie_network.outputs))
if args.shape:
log.debug("Reshape model from {} to {}".format(input_info[input_name].input_data.shape, args.shape))
ie_network.reshape({input_name: args.shape})
input_info = ie_network.input_info
    # ---------------------------------------- 3. Preparing input data ----------------------------------------
if len(input_info[input_name].input_data.shape) != 5:
raise AttributeError("Incorrect shape {} for 3d convolution network".format(args.shape))
n, c, d, h, w = input_info[input_name].input_data.shape
ie_network.batch_size = n
if not os.path.exists(args.path_to_input_data):
raise AttributeError("Path to input data: '{}' does not exist".format(args.path_to_input_data))
input_type = get_input_type(args.path_to_input_data)
is_nifti_data = (input_type == NIFTI_FILE or input_type == NIFTI_FOLDER)
if input_type == NIFTI_FOLDER:
series_name = find_series_name(args.path_to_input_data)
original_data, data_crop, affine, original_size, bbox = \
read_image(args.path_to_input_data, data_name=series_name, sizes=(d, h, w),
mri_sequence_order=args.mri_sequence, full_intensities_range=args.full_intensities_range)
elif input_type == NIFTI_FILE:
original_data, data_crop, affine, original_size, bbox = \
read_image(args.path_to_input_data, data_name=args.path_to_input_data, sizes=(d, h, w), is_series=False,
mri_sequence_order=args.mri_sequence, full_intensities_range=args.full_intensities_range)
else:
        data_crop = np.zeros(shape=(n, c, d, h, w), dtype=float)  # plain float; the np.float alias was removed in NumPy 1.24
im_seq = ImageSequence.Iterator(Image.open(args.path_to_input_data))
for i, page in enumerate(im_seq):
im = np.array(page).reshape(h, w, c)
for channel in range(c):
data_crop[:, channel, i, :, :] = im[:, :, channel]
original_data = data_crop
original_size = original_data.shape[-3:]
test_im = {input_name: data_crop}
# ------------------------------------- 4. Loading model to the plugin -------------------------------------
executable_network = ie.load_network(network=ie_network, device_name=args.target_device)
log.info('Loaded model {} to {}'.format(args.path_to_model, args.target_device))
del ie_network
# ---------------------------------------------- 5. Do inference --------------------------------------------
start_time = datetime.now()
res = executable_network.infer(test_im)
infer_time = datetime.now() - start_time
log.info("Inference time is {}".format(infer_time))
# ---------------------------- 6. Processing of the received inference results ------------------------------
result = res[out_name]
batch, channels, out_d, out_h, out_w = result.shape
list_img = []
list_seg_result = []
start_time = datetime.now()
for batch, data in enumerate(result):
seg_result = np.zeros(shape=original_size, dtype=np.uint8)
if data.shape[1:] != original_size:
x = bbox[1] - bbox[0]
y = bbox[3] - bbox[2]
z = bbox[5] - bbox[4]
out_result = np.zeros(shape=((channels,) + original_size), dtype=float)
out_result[:, bbox[0]:bbox[1], bbox[2]:bbox[3], bbox[4]:bbox[5]] = \
resample_np(data, (channels, x, y, z), 1)
else:
out_result = data
if channels == 1:
reshaped_data = out_result.reshape(original_size[0], original_size[1], original_size[2])
mask = reshaped_data[:, :, :] > 0.5
reshaped_data[mask] = 1
seg_result = reshaped_data.astype(int)
elif channels == 4:
seg_result = np.argmax(out_result, axis=0).astype(int)
elif channels == 3:
            res = out_result > 0.5
wt = res[0]
tc = res[1]
et = res[2]
seg_result[wt] = 2
seg_result[tc] = 1
seg_result[et] = 3
im = np.stack([original_data[batch, 0, :, :, :],
original_data[batch, 0, :, :, :],
original_data[batch, 0, :, :, :]],
axis=3)
im = 255 * (im - im.min())/(im.max() - im.min())
color_seg_frame = np.zeros(im.shape, dtype=np.uint8)
for idx, c in enumerate(CLASSES_COLOR_MAP):
color_seg_frame[seg_result[:, :, :] == idx, :] = np.array(c, dtype=np.uint8)
mask = seg_result[:, :, :] > 0
im[mask] = color_seg_frame[mask]
for k in range(im.shape[2]):
if is_nifti_data:
list_img.append(Image.fromarray(im[:, :, k, :].astype('uint8'), 'RGB'))
else:
list_img.append(Image.fromarray(im[k, :, :, :].astype('uint8'), 'RGB'))
if args.output_nifti and is_nifti_data:
list_seg_result.append(seg_result)
result_processing_time = datetime.now() - start_time
log.info("Processing time is {}".format(result_processing_time))
# --------------------------------------------- 7. Save output -----------------------------------------------
tiff_output_name = os.path.join(args.path_to_output, 'output.tiff')
Image.new('RGB', (original_data.shape[3], original_data.shape[2])).save(tiff_output_name,
append_images=list_img, save_all=True)
log.debug("Result tiff file was saved to {}".format(tiff_output_name))
if args.output_nifti and is_nifti_data:
for seg_res in list_seg_result:
nii_filename = os.path.join(args.path_to_output, 'output_{}.nii.gz'.format(list_seg_result.index(seg_res)))
nib.save(nib.Nifti1Image(seg_res, affine=affine), nii_filename)
log.debug("Result nifti file was saved to {}".format(nii_filename))
|
24,697 |
def _declare_qos_parameteres(
entity_type: Union[Type[Publisher], Type[Subscription]],
node: 'Node',
topic_name: Text,
qos: QoSProfile,
options: QoSOverridingOptions
) -> QoSProfile:
"""
Declare qos parameters for a Publisher or a Subscription.
:param entity_type: Either `rclpy.node.Publisher` or `rclpy.node.Subscription`.
:param node: Node used to declare the parameters.
:param topic_name: Topic name of the entity being created.
    :param qos: Default qos settings of the entity being created, which will be overridden
        with the user-provided qos parameter overrides.
    :param options: Options indicating which parameters are going to be declared.
"""
if not issubclass(entity_type, (Publisher, Subscription)):
raise TypeError('Argument `entity_type` should be a subclass of Publisher or Subscription')
    entity_type_str = 'publisher' if issubclass(entity_type, Publisher) else 'subscription'
id_suffix = '' if options.entity_id is None else f'_{options.entity_id}'
name = f'qos_overrides.{topic_name}.{entity_type_str}{id_suffix}.' '{}'
description = '{}' f' for {entity_type_str} `{topic_name}` with id `{options.entity_id}`'
allowed_policies = _get_allowed_policies(entity_type)
for policy in options.policy_kinds:
if policy not in allowed_policies:
continue
policy_name = policy.name.lower()
descriptor = ParameterDescriptor()
descriptor.description = description.format(policy_name)
descriptor.read_only = True
param = node.declare_parameter(
name.format(policy_name),
_get_qos_policy_parameter(qos, policy),
descriptor)
_override_qos_policy_with_param(qos, policy, param)
if options.callback is not None and not options.callback(qos):
raise InvalidQosOverridesError(
description.format('Provided qos overrides') + ', are not valid')
|
def _declare_qos_parameteres(
entity_type: Union[Type[Publisher], Type[Subscription]],
node: 'Node',
topic_name: Text,
qos: QoSProfile,
options: QoSOverridingOptions
) -> QoSProfile:
"""
Declare QoS parameters for a Publisher or a Subscription.
:param entity_type: Either `rclpy.node.Publisher` or `rclpy.node.Subscription`.
:param node: Node used to declare the parameters.
:param topic_name: Topic name of the entity being created.
    :param qos: Default qos settings of the entity being created, which will be overridden
        with the user-provided qos parameter overrides.
    :param options: Options indicating which parameters are going to be declared.
"""
if not issubclass(entity_type, (Publisher, Subscription)):
raise TypeError('Argument `entity_type` should be a subclass of Publisher or Subscription')
    entity_type_str = 'publisher' if issubclass(entity_type, Publisher) else 'subscription'
id_suffix = '' if options.entity_id is None else f'_{options.entity_id}'
name = f'qos_overrides.{topic_name}.{entity_type_str}{id_suffix}.' '{}'
description = '{}' f' for {entity_type_str} `{topic_name}` with id `{options.entity_id}`'
allowed_policies = _get_allowed_policies(entity_type)
for policy in options.policy_kinds:
if policy not in allowed_policies:
continue
policy_name = policy.name.lower()
descriptor = ParameterDescriptor()
descriptor.description = description.format(policy_name)
descriptor.read_only = True
param = node.declare_parameter(
name.format(policy_name),
_get_qos_policy_parameter(qos, policy),
descriptor)
_override_qos_policy_with_param(qos, policy, param)
if options.callback is not None and not options.callback(qos):
raise InvalidQosOverridesError(
description.format('Provided qos overrides') + ', are not valid')
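For context, a minimal sketch of how these QoS override parameters are typically enabled from user code, assuming rclpy Galactic or newer; the node name, topic and message type below are illustrative and not taken from the source.

import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile
from rclpy.qos_overriding_options import QoSOverridingOptions
from std_msgs.msg import String

rclpy.init()
node = Node('qos_override_demo')
# Passing QoSOverridingOptions makes create_publisher() run a helper like the one
# above, declaring read-only parameters along the lines of
# 'qos_overrides./chatter.publisher.depth' that can be set at launch time.
publisher = node.create_publisher(
    String,
    'chatter',
    QoSProfile(depth=10),
    qos_overriding_options=QoSOverridingOptions.with_default_policies(),
)
rclpy.shutdown()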
|
40,578 |
def load_timeseries_opsd(years=None, fn=None, countries=None, source="ENTSOE_power_statistics"):
"""
Read load data from OPSD time-series package version 2019-06-05.
Parameters
----------
years : None or slice()
Years for which to read load data (defaults to
slice("2018","2019"))
fn : file name
countries : Countries for which to read load data.
source : "ENTSOE_transparency" or "ENTSOE_power_statistics"
Returns
-------
load : pd.DataFrame
Load time-series with UTC timestamps x ISO-2 countries
"""
if countries is None:
countries = snakemake.config['countries']
if source == 'ENTSOE_transparency':
load = (pd.read_csv(fn, index_col=0, parse_dates=True)
.loc[:, lambda df: df.columns.to_series().str.endswith('_load_actual_entsoe_transparency')]
.rename(columns=lambda s: s[:-len('_load_actual_entsoe_transparency')])
.dropna(how="all", axis=0))
elif source == 'ENTSOE_power_statistics':
load = (pd.read_csv(fn, index_col=0, parse_dates=True)
.loc[:, lambda df: df.columns.to_series().str.endswith('_load_actual_entsoe_power_statistics')]
.rename(columns=lambda s: s[:-len('_load_actual_entsoe_power_statistics')])
.dropna(how="all", axis=0))
else:
logger.warning("Please proviede correct source name for load data")
if 'GB_UKM' in load.columns:
load.rename(columns={'GB_UKM' : 'GB'}, inplace=True)
load = load.filter(items=countries)
if years is not None:
load = load.loc[years]
return load
|
def load_timeseries_opsd(years=None, fn=None, countries=None, source="ENTSOE_power_statistics"):
"""
Read load data from OPSD time-series package version 2019-06-05.
Parameters
----------
years : None or slice()
Years for which to read load data (defaults to
slice("2018","2019"))
fn : file name
countries : Countries for which to read load data.
source : "ENTSOE_transparency" or "ENTSOE_power_statistics"
Returns
-------
load : pd.DataFrame
Load time-series with UTC timestamps x ISO-2 countries
"""
if countries is None:
countries = snakemake.config['countries']
if source == 'ENTSOE_transparency':
load = (pd.read_csv(fn, index_col=0, parse_dates=True)
.loc[:, lambda df: df.columns.to_series().str.endswith('_load_actual_entsoe_transparency')]
.rename(columns=lambda s: s[:-len('_load_actual_entsoe_transparency')])
.dropna(how="all", axis=0))
elif source == 'ENTSOE_power_statistics':
load = (pd.read_csv(fn, index_col=0, parse_dates=True)
.loc[:, lambda df: df.columns.to_series().str.endswith('_load_actual_entsoe_power_statistics')]
.rename(columns=lambda s: s[:-len('_load_actual_entsoe_power_statistics')])
.dropna(how="all", axis=0))
else:
raise NotImplementedError(f"Data for source `{source}` not available.")
if 'GB_UKM' in load.columns:
load.rename(columns={'GB_UKM' : 'GB'}, inplace=True)
load = load.filter(items=countries)
if years is not None:
load = load.loc[years]
return load
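A short usage sketch, assuming the OPSD 60-minute time-series CSV is available locally; the file name and country list are illustrative, and passing `countries` explicitly avoids the `snakemake` fallback.

load = load_timeseries_opsd(
    years=slice("2018", "2019"),
    fn="time_series_60min_singleindex.csv",  # hypothetical local copy of the OPSD package
    countries=["DE", "FR", "PL"],
    source="ENTSOE_power_statistics",
)
print(load.head())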
|
23,908 |
def make_verdict(subtask_file_paths, crops, results, raw_verification):
verdict = True
for crop_data in results:
crop = get_crop_with_id(crop_data['crop']['id'], crops)
left, top = crop.get_relative_top_left()
print("left " + str(left))
print("top " + str(top))
for crop, subtask in zip(crop_data['results'], subtask_file_paths):
crop_path = os.path.join(OUTPUT_DIR, crop)
if not raw_verification:
results_path = calculate_metrics(
crop_path,
subtask,
left, top,
metrics_output_filename=os.path.join(
OUTPUT_DIR,
crop_data['crop']['outfilebasename'] + "metrics.txt")
)
else:
results_path = get_raw_verification(
crop_path,
subtask,
left,
top,
metrics_output_filename=os.path.join(
OUTPUT_DIR,
crop_data['crop']['outfilebasename'] + "metrics.txt"
),
)
print("results_path: ", results_path)
with open(results_path, 'r') as f:
data = json.load(f)
if data['Label'] != "TRUE":
verdict = False
with open(os.path.join(OUTPUT_DIR, 'verdict.json'), 'w') as f:
json.dump({'verdict': verdict}, f)
|
def make_verdict(subtask_file_paths, crops, results, use_raw_verification):
verdict = True
for crop_data in results:
crop = get_crop_with_id(crop_data['crop']['id'], crops)
left, top = crop.get_relative_top_left()
print("left " + str(left))
print("top " + str(top))
for crop, subtask in zip(crop_data['results'], subtask_file_paths):
crop_path = os.path.join(OUTPUT_DIR, crop)
            if not use_raw_verification:
results_path = calculate_metrics(
crop_path,
subtask,
left, top,
metrics_output_filename=os.path.join(
OUTPUT_DIR,
crop_data['crop']['outfilebasename'] + "metrics.txt")
)
else:
results_path = get_raw_verification(
crop_path,
subtask,
left,
top,
metrics_output_filename=os.path.join(
OUTPUT_DIR,
crop_data['crop']['outfilebasename'] + "metrics.txt"
),
)
print("results_path: ", results_path)
with open(results_path, 'r') as f:
data = json.load(f)
if data['Label'] != "TRUE":
verdict = False
with open(os.path.join(OUTPUT_DIR, 'verdict.json'), 'w') as f:
json.dump({'verdict': verdict}, f)
|
41,519 |
def all_pois_floating(pdf, fixed_params):
"""
    Checks whether all POI(s) are floating (i.e. not within the fixed set).
Args:
pdf (:obj:`pyhf.Model`): The model
fixed_params: array of bools
Returns:
bool: The result whether all POIs are floating.
"""
poi_fixed = fixed_params[pdf.config.poi_index]
return not poi_fixed
|
def all_pois_floating(pdf, fixed_params):
"""
    Checks whether all POI(s) are floating (i.e. not within the fixed set).
Args:
pdf (:obj:`pyhf.Model`): The model
fixed_params: array of bools
Returns:
:obj:`bool`: The result whether all POIs are floating.
"""
poi_fixed = fixed_params[pdf.config.poi_index]
return not poi_fixed
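A hedged usage sketch with pyhf's simple model builder (assumes pyhf >= 0.6; the model and rates below are illustrative):

import pyhf

model = pyhf.simplemodels.uncorrelated_background(
    signal=[5.0], bkg=[50.0], bkg_uncertainty=[7.0]
)
fixed_params = model.config.suggested_fixed()   # every parameter floating by default
print(all_pois_floating(model, fixed_params))   # True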
|
29,197 |
def enable_cloud_logging():
"""Enables google cloud logging."""
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.get_default_handler()
client.setup_logging()
|
def enable_cloud_logging():
"""Enables google cloud logging."""
# Instantiates a client
client = logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.get_default_handler()
client.setup_logging()
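The call above assumes the `logging` name is bound by `from google.cloud import logging`. A hedged usage sketch, assuming Application Default Credentials are configured:

import logging as std_logging  # stdlib logging, kept separate from google.cloud.logging

enable_cloud_logging()
std_logging.getLogger(__name__).info("This record is forwarded to Cloud Logging.")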
|
1,260 |
def mask_volume(img, units='mm3'):
""" Compute volume of mask image.
Equivalent to "fslstats /path/file.nii -V"
Parameters
----------
img : ``SpatialImage``
All voxels of the mask should be of value 1, background should have value 0.
units : string {"mm3", "vox"}, optional
Unit of the returned mask volume. Defaults to "mm3".
Returns
-------
mask_volume_vx: float
Volume of mask expressed in voxels.
or
mask_volume_mm3: float
Volume of mask expressed in mm3.
Examples
--------
>>> import nibabel as nf
>>> path = 'path/to/nifti/mask.nii'
    >>> img = nf.load(path)  # path contains the path to an example nifti mask
>>> mask_volume(img)
50.3021
"""
header = img.header
_, vx, vy, vz, _, _, _, _ = header['pixdim']
voxel_volume_mm3 = vx * vy * vz
mask = img.get_fdata()
mask_volume_vx = np.sum(mask)
mask_volume_mm3 = mask_volume_vx * voxel_volume_mm3
if units == 'vox':
return mask_volume_vx
elif units == 'mm3':
return mask_volume_mm3
else:
raise ValueError(f'{units} is not a valid unit. Choose "mm3" or "vox".')
|
def mask_volume(img, units='mm3'):
""" Compute volume of mask image.
Equivalent to "fslstats /path/file.nii -V"
Parameters
----------
img : ``SpatialImage``
All voxels of the mask should be of value 1, background should have value 0.
units : string {"mm3", "vox"}, optional
Unit of the returned mask volume. Defaults to "mm3".
Returns
-------
mask_volume_vx: float
Volume of mask expressed in voxels.
or
mask_volume_mm3: float
Volume of mask expressed in mm3.
Examples
--------
>>> import nibabel as nf
>>> path = 'path/to/nifti/mask.nii'
    >>> img = nf.load(path)  # path contains the path to an example nifti mask
>>> mask_volume(img)
50.3021
"""
header = img.header
_, vx, vy, vz, _, _, _, _ = header['pixdim']
voxel_volume_mm3 = vx * vy * vz
mask_volume_vx = np.count_nonzero(img.dataobj)
mask_volume_mm3 = mask_volume_vx * voxel_volume_mm3
if units == 'vox':
return mask_volume_vx
elif units == 'mm3':
return mask_volume_mm3
else:
raise ValueError(f'{units} is not a valid unit. Choose "mm3" or "vox".')
|
20,517 |
def dilate(data, radius, shape, dim=None):
"""
Dilate data using ball structuring element
:param data: numpy array: 2d or 3d array
:param radius: int: Radius of the structuring element. If 0, the convolution has no effect.
:param shape: {'square', 'cube', 'disk', 'ball'}
:param dim: {0, 1, 2}: Dimension of the array which 2D structural element will be orthogonal to. For example, if
you wish to apply a 2D disk kernel in the X-Y plane, leaving Z unaffected, parameters will be: shape=disk, dim=2.
:return: numpy array: data dilated
"""
# TODO: make a build_selem(radius, shape) function called here
# TODO: enable custom selem
shape = 'ball'
# Create structuring element of desired shape and radius
# Note: the trick here is to use the variable shape as the skimage.morphology function itself
selem = globals()[shape](radius)
# else:
# # define structured element as a box with input dimensions
# selem = np.ones((radius[0], radius[1], radius[2]), dtype=np.dtype)
return dilation(data, selem=selem, out=None)
|
def dilate(data, radius, shape, dim=None):
"""
Dilate data using ball structuring element
:param data: numpy array: 2d or 3d array
:param radius: int: Radius of the structuring element. If 0, the convolution has no effect.
:param shape: {'square', 'cube', 'disk', 'ball'}
:param dim: {0, 1, 2}: Dimension of the array which 2D structural element will be orthogonal to. For example, if
you wish to apply a 2D disk kernel in the X-Y plane, leaving Z unaffected, parameters will be: shape=disk, dim=2.
:return: numpy array: data dilated
"""
# TODO: make a build_selem(radius, shape) function called here
# TODO: enable custom selem
shape = 'ball'
# Create structuring element of desired shape and radius
# Note: the trick here is to use the variable shape as the skimage.morphology function itself
selem = {'square': square, 'cube': cube, 'disk': disk, 'ball': ball}[shape](radius)
# else:
# # define structured element as a box with input dimensions
# selem = np.ones((radius[0], radius[1], radius[2]), dtype=np.dtype)
return dilation(data, selem=selem, out=None)
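A minimal usage sketch, assuming `square`, `cube`, `disk`, `ball` and `dilation` are imported from `skimage.morphology` and a scikit-image version that still accepts the `selem` keyword:

import numpy as np

data = np.zeros((5, 5, 5), dtype=np.uint8)
data[2, 2, 2] = 1                       # a single-voxel mask
out = dilate(data, radius=1, shape='ball')
print(int(out.sum()))                   # 7: the centre voxel plus its 6 face neighbours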
|
6,001 |
def _get_broadcasted_binary_op_result(obj1, obj2, cq, dtype_getter=None):
if dtype_getter is None:
dtype_getter = _get_common_dtype
if obj1.shape == obj2.shape:
return obj1._new_like_me(dtype_getter(obj1, obj2, cq),
cq)
elif obj1.shape == ():
return obj2._new_like_me(dtype_getter(obj1, obj2, cq),
cq)
elif obj2.shape == ():
return obj1._new_like_me(dtype_getter(obj1, obj2, cq),
cq)
else:
raise NotImplementedError("Broadcasting binary op with shapes:"
f" {obj1.shape}, {obj2.shape}.")
|
def _get_broadcasted_binary_op_result(obj1, obj2, cq, dtype_getter=_get_common_dtype):
if obj1.shape == obj2.shape:
return obj1._new_like_me(dtype_getter(obj1, obj2, cq),
cq)
elif obj1.shape == ():
return obj2._new_like_me(dtype_getter(obj1, obj2, cq),
cq)
elif obj2.shape == ():
return obj1._new_like_me(dtype_getter(obj1, obj2, cq),
cq)
else:
raise NotImplementedError("Broadcasting binary op with shapes:"
f" {obj1.shape}, {obj2.shape}.")
|
33,888 |
def _workflow_wait_executor(
func: Callable, context: "WorkflowStepContext", step_id: "StepID",
baked_inputs: "_BakedWorkflowInputs",
runtime_options: "WorkflowStepRuntimeOptions") -> Any:
"""Executor of 'workflow.wait' steps."""
# Part 1: update the context for the step
workflow_context.update_workflow_step_context(context, step_id)
context = workflow_context.get_workflow_step_context()
step_type = runtime_options.step_type
assert step_type == StepType.WAIT
wait_options = runtime_options.ray_options.get("wait_options", {})
# Part 2: wait inputs
ready_workflows, remaining_workflows = baked_inputs.wait(**wait_options)
ready_objects = []
for w in ready_workflows:
obj, _, = _resolve_object_ref(w.ref.ref)
ready_objects.append(obj)
persisted_output = (ready_objects, remaining_workflows)
volatile_output = None
# Part 3: execute the step
store = workflow_storage.get_workflow_storage()
commit_step(store, step_id, persisted_output, exception=None)
if context.last_step_of_workflow:
# advance the progress of the workflow
store.advance_progress(step_id)
_record_step_status(step_id, WorkflowStatus.SUCCESSFUL)
logger.info(get_step_status_info(WorkflowStatus.SUCCESSFUL))
return persisted_output, volatile_output
|
def _workflow_wait_executor(
func: Callable, context: "WorkflowStepContext", step_id: "StepID",
baked_inputs: "_BakedWorkflowInputs",
runtime_options: "WorkflowStepRuntimeOptions") -> Any:
"""Executor of 'workflow.wait' steps."""
# Part 1: update the context for the step
workflow_context.update_workflow_step_context(context, step_id)
context = workflow_context.get_workflow_step_context()
step_type = runtime_options.step_type
assert step_type == StepType.WAIT
wait_options = runtime_options.ray_options.get("wait_options", {})
# Part 2: wait inputs
ready_workflows, remaining_workflows = baked_inputs.wait(**wait_options)
ready_objects = []
for w in ready_workflows:
obj, _, = _resolve_object_ref(w.ref.ref)
ready_objects.append(obj)
persisted_output = (ready_objects, remaining_workflows)
volatile_output = None
# Part 3: Finish the step.
store = workflow_storage.get_workflow_storage()
commit_step(store, step_id, persisted_output, exception=None)
if context.last_step_of_workflow:
# advance the progress of the workflow
store.advance_progress(step_id)
_record_step_status(step_id, WorkflowStatus.SUCCESSFUL)
logger.info(get_step_status_info(WorkflowStatus.SUCCESSFUL))
return persisted_output, volatile_output
|
5,330 |
def delete(event, saltenv="base", test=None):
"""
Delete a reactor
CLI Example:
.. code-block:: bash
salt-run reactor.delete 'salt/cloud/*/destroyed'
"""
if not _reactor_system_available():
raise CommandExecutionError("Reactor system is not running.")
with salt.utils.event.get_event(
"master",
__opts__["sock_dir"],
__opts__["transport"],
opts=__opts__,
listen=True,
) as sevent:
master_key = salt.utils.master.get_master_key("root", __opts__)
__jid_event__.fire_event(
{"event": event, "key": master_key}, "salt/reactors/manage/delete"
)
res = sevent.get_event(wait=30, tag="salt/reactors/manage/delete-complete")
return res.get('result', None)
|
def delete(event, saltenv="base", test=None):
"""
Delete a reactor
CLI Example:
.. code-block:: bash
salt-run reactor.delete 'salt/cloud/*/destroyed'
"""
if not _reactor_system_available():
raise CommandExecutionError("Reactor system is not running.")
with salt.utils.event.get_event(
"master",
__opts__["sock_dir"],
__opts__["transport"],
opts=__opts__,
listen=True,
) as sevent:
master_key = salt.utils.master.get_master_key("root", __opts__)
__jid_event__.fire_event(
{"event": event, "key": master_key}, "salt/reactors/manage/delete"
)
res = sevent.get_event(wait=30, tag="salt/reactors/manage/delete-complete")
return res.get("result")
|
58,153 |
def main():
"""
PARSE AND VALIDATE INTEGRATION PARAMS
"""
# get the service API url
base_url = demisto.params()['url'].strip('/')
verify_certificate = not demisto.params().get('insecure', False)
proxy = demisto.params().get('proxy', False)
client_id = demisto.params().get('credentials').get('identifier')
client_secret = demisto.params().get('credentials').get('password')
oauth_url = demisto.params().get('oauth_url')
default_tsg_id = demisto.params().get('tsg_id')
LOG(f'Command being called is {demisto.command()}')
commands = {
'test-module': test_module,
'prisma-access-create-security-rule': create_security_rule_command,
'prisma-access-list-security-rules': list_security_rules_command,
'prisma-access-push-candidate-config': push_candidate_config_command,
'prisma-access-get-config-jobs-by-id': get_config_jobs_by_id_command,
'prisma-access-list-config-jobs': list_config_jobs_command,
'prisma-access-update-security-rule': update_security_rule_command,
'prisma-access-get-security-rule-by-name': get_security_rule_by_name_command,
'prisma-access-query-agg-monitor-api': query_agg_monitor_api_command,
'prisma-access-delete-security-rule': delete_security_rule_command,
'prisma-access-create-address-object': create_address_object_command,
'prisma-access-edit-address-object': edit_address_object_command,
'prisma-access-delete-address-object': delete_address_object_command,
'prisma-access-list-address-objects': list_address_objects_command
}
command = demisto.command()
client = Client(
base_url=base_url,
client_id=client_id,
client_secret=client_secret,
oauth_url=oauth_url,
verify=verify_certificate,
headers={
'Accept': 'application/json',
'Content-Type': 'application/json'
},
proxy=proxy,
ok_codes=(200, 201, 204))
try:
if command in commands:
return_results(commands[command](client, demisto.args(), default_tsg_id))
else:
raise NotImplementedError(f'Command "{command}" is not implemented.')
# Log exceptions
except Exception as e:
return_error(f'Failed to execute {demisto.command()} command. Error: {str(e)}')
|
def main():
"""
PARSE AND VALIDATE INTEGRATION PARAMS
"""
# get the service API url
params = demisto.params()
base_url = params.get('url').strip('/')
verify_certificate = not argToBoolean(params.get('insecure', False))
proxy = argToBoolean(params.get('proxy', False))
client_id = params.get('credentials', {}).get('identifier')
client_secret = params.get('credentials', {}).get('password')
oauth_url = params.get('oauth_url')
default_tsg_id = params.get('tsg_id')
LOG(f'Command being called is {demisto.command()}')
commands = {
'test-module': test_module,
'prisma-access-create-security-rule': create_security_rule_command,
'prisma-access-list-security-rules': list_security_rules_command,
'prisma-access-push-candidate-config': push_candidate_config_command,
'prisma-access-get-config-jobs-by-id': get_config_jobs_by_id_command,
'prisma-access-list-config-jobs': list_config_jobs_command,
'prisma-access-update-security-rule': update_security_rule_command,
'prisma-access-get-security-rule-by-name': get_security_rule_by_name_command,
'prisma-access-query-agg-monitor-api': query_agg_monitor_api_command,
'prisma-access-delete-security-rule': delete_security_rule_command,
'prisma-access-create-address-object': create_address_object_command,
'prisma-access-edit-address-object': edit_address_object_command,
'prisma-access-delete-address-object': delete_address_object_command,
'prisma-access-list-address-objects': list_address_objects_command
}
command = demisto.command()
client = Client(
base_url=base_url,
client_id=client_id,
client_secret=client_secret,
oauth_url=oauth_url,
verify=verify_certificate,
headers={
'Accept': 'application/json',
'Content-Type': 'application/json'
},
proxy=proxy,
ok_codes=(200, 201, 204))
try:
if command in commands:
return_results(commands[command](client, demisto.args(), default_tsg_id))
else:
raise NotImplementedError(f'Command "{command}" is not implemented.')
# Log exceptions
except Exception as e:
return_error(f'Failed to execute {demisto.command()} command. Error: {str(e)}')
|
31,582 |
def ip_statistics_command(client, args):
query = args.get('query')
body = {
"query": query
}
res = client.query(query_type="ip_stats", body=body)
res = {k: v for k, v in res.items() if k not in removed_keys}
top_ptrs = res.get('top_ptr_patterns', [])
ports = res.get('ports', [])
total = res.get('total', {}).get('value')
table_data = {
"Top PTRs Count": len(top_ptrs),
"Ports": len(ports),
"Total": total
}
md = tableToMarkdown(f"IP Statistics:", table_data)
command_results = CommandResults(
outputs_prefix=f"SecurityTrails.IP.Search.IPStats",
outputs_key_field=None,
outputs=res,
readable_output=md
)
return_results(command_results)
|
def ip_statistics_command(client, args):
query = args.get('query')
body = {
"query": query
}
res = client.query(query_type="ip_stats", body=body)
res = {k: v for k, v in res.items() if k not in removed_keys}
top_ptrs = res.get('top_ptr_patterns', [])
ports = res.get('ports', [])
total = res.get('total', {}).get('value')
table_data = {
"Top PTRs Count": len(top_ptrs),
"Ports": len(ports),
"Total": total
}
md = tableToMarkdown(f"IP Statistics:", table_data)
command_results = CommandResults(
outputs_prefix="SecurityTrails.IP.Search.IPStats",
outputs=res,
readable_output=md
)
return_results(command_results)
|
25,189 |
def build_class(
name: str, basenames: Sequence[str] = (), doc: Optional[str] = None
) -> nodes.ClassDef:
"""Create and initialize an astroid ClassDef node."""
node = nodes.ClassDef(name)
basenodes: List[nodes.Name] = []
for base in basenames:
basenode = nodes.Name(name=base)
basenode.parent = node
basenodes.append(basenode)
node.postinit(
bases=basenodes,
body=[],
decorators=None,
doc_node=nodes.Const(value=doc) if doc else None,
)
return node
|
def build_class(
name: str, basenames: Sequence[str] = (), doc: Optional[str] = None
) -> nodes.ClassDef:
"""Create and initialize an astroid ClassDef node."""
node = nodes.ClassDef(name)
node.postinit(
bases=[nodes.Name(name=base, parent=node) for base in basenames],
body=[],
decorators=None,
doc_node=nodes.Const(value=doc) if doc else None,
)
return node
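A hedged usage sketch, assuming astroid 2.x where `postinit` accepts `doc_node` and the resulting node exposes `bases`:

klass = build_class("Synthetic", basenames=("Base",), doc="A synthetic class.")
print(klass.name)                       # Synthetic
print([b.name for b in klass.bases])    # ['Base']
print(klass.doc_node.value)             # A synthetic class.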
|
29,776 |
def load_heuristic(heuristic):
"""Load heuristic from the file, return the module
"""
if os.path.sep in heuristic or os.path.lexists(heuristic):
heuristic_file = op.realpath(heuristic)
path, fname = op.split(heuristic_file)
try:
old_syspath = sys.path[:]
sys.path[0:0]=[path]
mod = __import__(fname.split('.')[0])
mod.filename = heuristic_file
finally:
sys.path = old_syspath
else:
from importlib import import_module
try:
mod = import_module('heudiconv.heuristics.%s' % heuristic)
mod.filename = mod.__file__.rstrip('co') # remove c or o from pyc/pyo
except Exception as exc:
raise ImportError(
"Failed to import heuristic %s: %s"
% (heuristic, exc)
)
return mod
|
def load_heuristic(heuristic):
"""Load heuristic from the file, return the module
"""
if os.path.sep in heuristic or os.path.lexists(heuristic):
heuristic_file = op.realpath(heuristic)
path, fname = op.split(heuristic_file)
try:
old_syspath = sys.path[:]
sys.path.insert(0, path)
mod = __import__(fname.split('.')[0])
mod.filename = heuristic_file
finally:
sys.path = old_syspath
else:
from importlib import import_module
try:
mod = import_module('heudiconv.heuristics.%s' % heuristic)
mod.filename = mod.__file__.rstrip('co') # remove c or o from pyc/pyo
except Exception as exc:
raise ImportError(
"Failed to import heuristic %s: %s"
% (heuristic, exc)
)
return mod
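A hedged usage sketch; 'reproin' is one of the heuristics bundled with heudiconv, and the file path is illustrative:

mod = load_heuristic('reproin')                 # bundled heuristic, loaded by name
# mod = load_heuristic('/tmp/my_heuristic.py')  # or a standalone heuristic file, loaded by path
print(mod.filename)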
|
1,643 |
def empirical_covariance(X, assume_centered=False):
"""Computes the Maximum likelihood covariance estimator
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Data from which to compute the covariance estimate
assume_centered : bool, default=False
If True, data will not be centered before computation.
Useful when working with data whose mean is almost, but not exactly
zero.
If False, data will be centered before computation.
Returns
-------
covariance : ndarray of shape (n_features, n_features)
Empirical covariance (Maximum Likelihood Estimator).
Examples
--------
>>> from sklearn.covariance import empirical_covariance
>>> X = [[1,1,1],[1,1,1],[1,1,1],
... [0,0,0],[0,0,0],[0,0,0]]
>>> empirical_covariance(X)
array([[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25]])
"""
X = np.asarray(X)
if X.ndim == 1:
X = np.reshape(X, (1, -1))
if X.shape[0] == 1:
warnings.warn("Only one sample available. "
"You may want to reshape your data array")
if assume_centered:
covariance = np.dot(X.T, X) / X.shape[0]
else:
covariance = np.cov(X.T, bias=1)
if covariance.ndim == 0:
covariance = np.array([[covariance]])
return covariance
|
def empirical_covariance(X, assume_centered=False):
"""Computes the Maximum likelihood covariance estimator
Parameters
----------
X : ndarray of shape (n_samples, n_features)
Data from which to compute the covariance estimate
assume_centered : bool, default=False
If True, data will not be centered before computation.
Useful when working with data whose mean is almost, but not exactly
zero.
If False, data will be centered before computation.
Returns
-------
covariance : ndarray of shape (n_features, n_features)
Empirical covariance (Maximum Likelihood Estimator).
Examples
--------
>>> from sklearn.covariance import empirical_covariance
>>> X = [[1,1,1],[1,1,1],[1,1,1],
... [0,0,0],[0,0,0],[0,0,0]]
>>> empirical_covariance(X)
array([[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25],
[0.25, 0.25, 0.25]])
"""
X = np.asarray(X)
if X.ndim == 1:
X = np.reshape(X, (1, -1))
if X.shape[0] == 1:
warnings.warn("Only one sample available. "
"You may want to reshape your data array")
if assume_centered:
covariance = np.dot(X.T, X) / X.shape[0]
else:
covariance = np.cov(X.T, bias=1)
if covariance.ndim == 0:
covariance = np.array([[covariance]])
return covariance
|