Dataset schema (field name, type, and observed value statistics):

    nwo                 string   length 5 to 86
    sha                 string   length 40 to 40
    path                string   length 4 to 189
    language            string   1 class (all records: python)
    identifier          string   length 1 to 94
    parameters          string   length 2 to 4.03k
    argument_list       string   1 class
    return_statement    string   length 0 to 11.5k
    docstring           string   length 1 to 33.2k
    docstring_summary   string   length 0 to 5.15k
    docstring_tokens    list
    function            string   length 34 to 151k
    function_tokens     list
    url                 string   length 90 to 278
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_cocoa/stc.py
python
StyledTextCtrl.FormatRange
(*args, **kwargs)
return _stc.StyledTextCtrl_FormatRange(*args, **kwargs)
FormatRange(self, bool doDraw, int startPos, int endPos, DC draw, DC target, Rect renderRect, Rect pageRect) -> int

On Windows, will draw the document into a display context such as a printer.
FormatRange(self, bool doDraw, int startPos, int endPos, DC draw, DC target, Rect renderRect, Rect pageRect) -> int
def FormatRange(*args, **kwargs):
    """
    FormatRange(self, bool doDraw, int startPos, int endPos, DC draw,
        DC target, Rect renderRect, Rect pageRect) -> int

    On Windows, will draw the document into a display context such as
    a printer.
    """
    return _stc.StyledTextCtrl_FormatRange(*args, **kwargs)
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/stc.py#L3504-L3511
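The record itself shows only the SWIG wrapper, so here is a hedged sketch of how FormatRange is typically driven when paginating for print. The helper name, the DC handling, and the colour-mode choice are illustrative assumptions, not code from stc.py:

import wx
import wx.stc as stc

def print_page(ctrl, printer_dc, page_rect, start_pos):
    ctrl.SetPrintColourMode(stc.STC_PRINT_COLOURONWHITE)
    # FormatRange returns the position of the last character rendered,
    # which becomes start_pos for the next page.
    return ctrl.FormatRange(True,              # doDraw: actually render
                            start_pos,         # first char on this page
                            ctrl.GetLength(),  # format up to end of text
                            printer_dc,        # DC drawn into
                            printer_dc,        # DC used for measuring
                            page_rect,         # rectangle to render into
                            page_rect)         # total page rectangle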
tensorflow/tensorflow
419e3a6b650ea4bd1b0cba23c4348f8a69f3272e
tensorflow/python/ops/variable_scope.py
python
_VariableStore._get_partitioned_variable
(self, name, partitioner, shape=None, dtype=dtypes.float32, initializer=None, regularizer=None, reuse=None, trainable=None, collections=None, caching_device=None, validate_shape=True, use_resource=None, constraint=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregation.NONE)
return partitioned_var
Gets or creates a sharded variable list with these parameters.

The `partitioner` must be a callable that accepts a fully defined `TensorShape` and returns a sequence of integers (the `partitions`). These integers describe how to partition the given sharded `Variable` along the given dimension. That is, `partitions[1] = 3` means split the `Variable` into 3 shards along dimension 1. Currently, sharding along only one axis is supported.

If the list of variables with the given name (prefix) is already stored, we return the stored variables. Otherwise, we create a new one.

Set `reuse` to `True` when you only want to reuse existing Variables. Set `reuse` to `False` when you only want to create new Variables. Set `reuse` to None (the default) or tf.compat.v1.AUTO_REUSE when you want variables to be created if they don't exist or returned if they do.

If initializer is `None` (the default), the default initializer passed in the constructor is used. If that one is `None` too, we use a new `glorot_uniform_initializer`. If initializer is a Tensor, we use it as a value and derive the shape from the initializer.

If the initializer is a callable, then it will be called for each shard. Otherwise the initializer should match the shape of the entire sharded Variable, and it will be sliced accordingly for each shard.

Some useful partitioners are available. See, e.g., `variable_axis_size_partitioner` and `min_max_variable_partitioner`.

Args:
    name: the name of the new or existing sharded variable.
    partitioner: Optional callable that accepts a fully defined `TensorShape`
        and `dtype` of the Variable to be created, and returns a list of
        partitions for each axis (currently only one axis can be partitioned).
    shape: shape of the new or existing sharded variable.
    dtype: type of the new or existing sharded variable (defaults to
        `DT_FLOAT`).
    initializer: initializer for the sharded variable.
    regularizer: a (Tensor -> Tensor or None) function; the result of applying
        it on a newly created variable will be added to the collection
        GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
    reuse: a Boolean, None, or tf.AUTO_REUSE. Controls reuse or creation of
        variables.
    trainable: If `True` also add the variable to the graph collection
        `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    collections: List of graph collections keys to add the Variable to.
        Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see `tf.Variable`).
    caching_device: Optional device string or function describing where the
        Variable should be cached for reading. Defaults to the Variable's
        device. If not `None`, caches on another device. Typical use is to
        cache on the device where the Ops using the Variable reside, to
        deduplicate copying through `Switch` and other conditional statements.
    validate_shape: If False, allows the variable to be initialized with a
        value of unknown shape. If True, the default, the shape of
        initial_value must be known.
    use_resource: If False, creates a regular Variable. If True, creates an
        experimental ResourceVariable which has well-defined semantics.
        Defaults to False (will later change to True).
    constraint: An optional projection function to be applied to the variable
        after being updated by an `Optimizer` (e.g. used to implement norm
        constraints or value constraints for layer weights). The function must
        take as input the unprojected Tensor representing the value of the
        variable and return the Tensor for the projected value (which must
        have the same shape). Constraints are not safe to use when doing
        asynchronous distributed training.
    synchronization: Indicates when a distributed variable will be aggregated.
        Accepted values are constants defined in the class
        `tf.VariableSynchronization`. By default the synchronization is set to
        `AUTO` and the current `DistributionStrategy` chooses when to
        synchronize.
    aggregation: Indicates how a distributed variable will be aggregated.
        Accepted values are constants defined in the class
        `tf.VariableAggregation`.

Returns:
    A `PartitionedVariable` object.

Raises:
    ValueError: when creating a new variable and shape is not declared, when
        reusing a variable and specifying a conflicting shape, when violating
        reuse during variable creation, or if an existing sharded variable
        exists for the given name but with different sharding.
Gets or creates a sharded variable list with these parameters.
def _get_partitioned_variable(self,
                              name,
                              partitioner,
                              shape=None,
                              dtype=dtypes.float32,
                              initializer=None,
                              regularizer=None,
                              reuse=None,
                              trainable=None,
                              collections=None,
                              caching_device=None,
                              validate_shape=True,
                              use_resource=None,
                              constraint=None,
                              synchronization=VariableSynchronization.AUTO,
                              aggregation=VariableAggregation.NONE):
  """Gets or creates a sharded variable list with these parameters.

  The `partitioner` must be a callable that accepts a fully defined
  `TensorShape` and returns a sequence of integers (the `partitions`).
  These integers describe how to partition the given sharded `Variable`
  along the given dimension.  That is, `partitions[1] = 3` means split
  the `Variable` into 3 shards along dimension 1.  Currently, sharding
  along only one axis is supported.

  If the list of variables with the given name (prefix) is already stored,
  we return the stored variables. Otherwise, we create a new one.

  Set `reuse` to `True` when you only want to reuse existing Variables.
  Set `reuse` to `False` when you only want to create new Variables.
  Set `reuse` to None (the default) or tf.compat.v1.AUTO_REUSE when you want
  variables to be created if they don't exist or returned if they do.

  If initializer is `None` (the default), the default initializer passed in
  the constructor is used. If that one is `None` too, we use a new
  `glorot_uniform_initializer`. If initializer is a Tensor, we use
  it as a value and derive the shape from the initializer.

  If the initializer is a callable, then it will be called for each
  shard.  Otherwise the initializer should match the shape of the entire
  sharded Variable, and it will be sliced accordingly for each shard.

  Some useful partitioners are available.  See, e.g.,
  `variable_axis_size_partitioner` and `min_max_variable_partitioner`.

  Args:
    name: the name of the new or existing sharded variable.
    partitioner: Optional callable that accepts a fully defined `TensorShape`
      and `dtype` of the Variable to be created, and returns a list of
      partitions for each axis (currently only one axis can be partitioned).
    shape: shape of the new or existing sharded variable.
    dtype: type of the new or existing sharded variable (defaults to
      `DT_FLOAT`).
    initializer: initializer for the sharded variable.
    regularizer: a (Tensor -> Tensor or None) function; the result of applying
      it on a newly created variable will be added to the collection
      GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
    reuse: a Boolean, None, or tf.AUTO_REUSE. Controls reuse or creation of
      variables.
    trainable: If `True` also add the variable to the graph collection
      `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
    collections: List of graph collections keys to add the Variable to.
      Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see `tf.Variable`).
    caching_device: Optional device string or function describing where the
      Variable should be cached for reading.  Defaults to the Variable's
      device.  If not `None`, caches on another device.  Typical use is to
      cache on the device where the Ops using the Variable reside, to
      deduplicate copying through `Switch` and other conditional statements.
    validate_shape: If False, allows the variable to be initialized with a
      value of unknown shape. If True, the default, the shape of initial_value
      must be known.
    use_resource: If False, creates a regular Variable. If True, creates an
      experimental ResourceVariable which has well-defined semantics. Defaults
      to False (will later change to True).
    constraint: An optional projection function to be applied to the variable
      after being updated by an `Optimizer` (e.g. used to implement norm
      constraints or value constraints for layer weights). The function must
      take as input the unprojected Tensor representing the value of the
      variable and return the Tensor for the projected value (which must have
      the same shape). Constraints are not safe to use when doing asynchronous
      distributed training.
    synchronization: Indicates when a distributed variable will be aggregated.
      Accepted values are constants defined in the class
      `tf.VariableSynchronization`. By default the synchronization is set to
      `AUTO` and the current `DistributionStrategy` chooses when to
      synchronize.
    aggregation: Indicates how a distributed variable will be aggregated.
      Accepted values are constants defined in the class
      `tf.VariableAggregation`.

  Returns:
    A `PartitionedVariable` object.

  Raises:
    ValueError: when creating a new variable and shape is not declared, when
      reusing a variable and specifying a conflicting shape, when violating
      reuse during variable creation, or if an existing sharded variable
      exists for the given name but with different sharding.
  """
  initializing_from_value = initializer is not None and isinstance(
      initializer, ops.Tensor)
  if name in self._vars:
    raise ValueError(
        "A partitioner was provided, but an unpartitioned version of the "
        "variable was found: %s. Perhaps a variable of the same name was "
        "already created without partitioning?" % name)

  shape = tensor_shape.as_shape(shape)
  if initializing_from_value:
    shape = shape.merge_with(initializer.get_shape())

  partitions = None
  if not reuse or partitioner:
    partitions = _call_partitioner(partitioner, shape, dtype)

  if name in self._partitioned_vars:
    if reuse is False:
      raise ValueError(
          "Partitioned variable with name %s already exists. Did you mean to "
          "set reuse=True or reuse=tf.AUTO_REUSE in VarScope?" % name)

    existing_var = self._partitioned_vars[name]
    if not shape.is_compatible_with(existing_var.get_shape()):
      raise ValueError(
          "Trying to reuse partitioned variable %s, but specified shape %s "
          "and found shape %s." % (name, shape, existing_var.get_shape()))
    if not dtype.is_compatible_with(existing_var.dtype):
      raise ValueError(
          "Trying to reuse partitioned variable %s, but specified dtype %s "
          "and found dtype %s." % (name, dtype.name, existing_var.dtype.name))

    # pylint: disable=protected-access
    if (partitions is not None and
        existing_var._get_partitions() != partitions):
      raise ValueError(
          "Trying to reuse partitioned variable %s, but specified partitions "
          "%s and found partitions %s." %
          (name, partitions, existing_var._get_partitions()))
    # pylint: enable=protected-access

    return existing_var

  if reuse is True:
    raise ValueError("PartitionedVariable %s does not exist, or was not "
                     "created with tf.get_variable(). Did you mean to set "
                     "reuse=False or reuse=tf.AUTO_REUSE in VarScope?" % name)

  slice_dim, num_slices = _get_slice_dim_and_num_slices(partitions)

  if "%s/part_0" % name in self._vars:
    if "%s/part_%d" % (name, num_slices - 1) not in self._vars:
      raise ValueError(
          "Partitioner returned a different partitioning than what was "
          "already found. Partitioner returned %d shards, and shard "
          "%s/part_0 was found, but %s/part_%d was not." %
          (num_slices, name, name, num_slices - 1))
    if "%s/part_%d" % (name, num_slices) in self._vars:
      raise ValueError(
          "Partitioner returned a different partitioning than what was "
          "already found. Partitioner returned %d shards, and shard "
          "%s/part_0 was found, but so was the extra shard %s/part_%d." %
          (num_slices, name, name, num_slices))

  vs = []
  for i, (var_offset, var_shape) in enumerate(
      _iter_slices(shape.as_list(), num_slices, slice_dim)):
    partition_info = _PartitionInfo(
        full_shape=shape.as_list(), var_offset=var_offset)
    var_full_name = "%s/part_%d" % (name, i)
    with ops.name_scope(
        var_full_name + "/PartitionedInitializer", skip_on_eager=False):
      # Create the tensor to initialize the variable with default value.
      if initializer is None:
        init, initializing_from_value = self._get_default_initializer(
            name=name, shape=shape, dtype=dtype)
        if initializing_from_value:
          init_shape = None
        else:
          init_shape = var_shape
      elif callable(initializer):
        init = initializer
        init_shape = var_shape
      elif isinstance(initializer, ops.Tensor):
        init = array_ops.slice(initializer, var_offset, var_shape)
        # Use the dtype of the given tensor.
        dtype = init.dtype.base_dtype
        init_shape = None
      else:
        init = ops.convert_to_tensor(initializer, dtype=dtype)
        init = array_ops.slice(init, var_offset, var_shape)
        init_shape = None

    with ops.name_scope(None):
      var = self._get_single_variable(
          name=var_full_name,
          shape=init_shape,
          dtype=dtype,
          initializer=init,
          partition_info=partition_info,
          regularizer=regularizer,
          reuse=reuse,
          trainable=trainable,
          collections=collections,
          caching_device=caching_device,
          validate_shape=validate_shape,
          use_resource=use_resource,
          constraint=constraint,
          synchronization=synchronization,
          aggregation=aggregation)

      # pylint: disable=protected-access
      var._set_save_slice_info(
          variables.Variable.SaveSliceInfo(name, shape.as_list(), var_offset,
                                           var_shape))
      vs.append(var)
      # pylint: enable=protected-access

  partitioned_var = variables.PartitionedVariable(
      name=name,
      shape=shape,
      dtype=dtype,
      variable_list=vs,
      partitions=partitions)
  if not context.executing_eagerly() or self._store_eager_variables:
    self._partitioned_vars[name] = partitioned_var
  return partitioned_var
https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/ops/variable_scope.py#L599-L824
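The record documents the private store method; the usual entry point is the TF1 variable-scope API with a partitioner attached. A minimal sketch under that assumption (TF 2.x with the v1 compatibility layer, graph mode):

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Cap each shard at 256 bytes so the [20, 10] float32 variable
# (800 bytes) is split into several shards along axis 0.
partitioner = tf.variable_axis_size_partitioner(
    max_shard_bytes=256, axis=0, max_shards=4)

with tf.variable_scope("embed", partitioner=partitioner):
    # Returns a PartitionedVariable backed by "embed/w/part_0", ...
    w = tf.get_variable("w", shape=[20, 10], dtype=tf.float32)

print([v.shape for v in w])  # one entry per shard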
CRYTEK/CRYENGINE
232227c59a220cbbd311576f0fbeba7bb53b2a8c
Code/Tools/waf-1.7.13/crywaflib/compile_rules_win_x64_win_x64.py
python
load_release_win_x64_win_x64_settings
(conf)
Setup all compiler and linker settings shared over all win_x64_win_x64 configurations for the 'release' configuration
Setup all compiler and linker settings shared over all win_x64_win_x64 configurations for the 'release' configuration
def load_release_win_x64_win_x64_settings(conf):
    """
    Setup all compiler and linker settings shared over all win_x64_win_x64
    configurations for the 'release' configuration
    """
    v = conf.env
    conf.load_win_x64_win_x64_common_settings()

    # Load additional shared settings
    conf.load_release_cryengine_settings()
    conf.load_release_msvc_settings()
    conf.load_release_windows_settings()
https://github.com/CRYTEK/CRYENGINE/blob/232227c59a220cbbd311576f0fbeba7bb53b2a8c/Code/Tools/waf-1.7.13/crywaflib/compile_rules_win_x64_win_x64.py#L93-L104
windystrife/UnrealEngine_NVIDIAGameWorks
b50e6338a7c5b26374d66306ebc7807541ff815e
Engine/Source/ThirdParty/CEF3/pristine/cef_source/tools/cef_parser.py
python
obj_analysis.is_result_vector_rawptr
(self)
return self.result_value[0]['result_type'] == 'rawptr'
Returns true if this is a RawPtr vector.
Returns true if this is a RawPtr vector.
def is_result_vector_rawptr(self):
    """ Returns true if this is a RawPtr vector. """
    return self.result_value[0]['result_type'] == 'rawptr'
https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Source/ThirdParty/CEF3/pristine/cef_source/tools/cef_parser.py#L1913-L1915
envoyproxy/envoy
65541accdafe255e72310b4298d646e091da2d80
tools/config_validation/validate_fragment.py
python
validate_fragment
(type_name, fragment, descriptor_path=None)
Validate a dictionary representing a JSON/YAML fragment against an Envoy API proto3 type. Throws Protobuf errors on parsing exceptions; successful validations produce no result.

Args:
    type_name: a string providing the type name, e.g. envoy.config.bootstrap.v3.Bootstrap.
    fragment: a dictionary representing the parsed JSON/YAML configuration fragment.
Validate a dictionary representing a JSON/YAML fragment against an Envoy API proto3 type.
def validate_fragment(type_name, fragment, descriptor_path=None):
  """Validate a dictionary representing a JSON/YAML fragment against an Envoy
  API proto3 type.

  Throws Protobuf errors on parsing exceptions; successful validations produce
  no result.

  Args:
    type_name: a string providing the type name, e.g.
      envoy.config.bootstrap.v3.Bootstrap.
    fragment: a dictionary representing the parsed JSON/YAML configuration
      fragment.
  """
  json_fragment = json.dumps(fragment, skipkeys=True)

  if not descriptor_path:
    r = runfiles.Create()
    descriptor_path = r.Rlocation(
        'envoy/tools/type_whisperer/all_protos_with_ext_pb_text.pb_text')

  file_desc_set = descriptor_pb2.FileDescriptorSet()
  text_format.Parse(
      pathlib.Path(descriptor_path).read_text(),
      file_desc_set,
      allow_unknown_extension=True)

  pool = descriptor_pool.DescriptorPool()
  for f in file_desc_set.file:
    pool.Add(f)
  desc = pool.FindMessageTypeByName(type_name)
  msg = message_factory.MessageFactory(pool=pool).GetPrototype(desc)()
  json_format.Parse(json_fragment, msg, descriptor_pool=pool)
https://github.com/envoyproxy/envoy/blob/65541accdafe255e72310b4298d646e091da2d80/tools/config_validation/validate_fragment.py#L55-L83
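A hedged usage sketch for the function above. The YAML snippet is illustrative, and outside a Bazel run you would also pass descriptor_path explicitly, since the default is resolved through runfiles:

import yaml

bootstrap = yaml.safe_load("""
admin:
  address:
    socket_address: {address: 127.0.0.1, port_value: 9901}
""")
# Raises a Protobuf parsing error if the fragment doesn't conform to the
# type; returns nothing on success.
validate_fragment('envoy.config.bootstrap.v3.Bootstrap', bootstrap)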
gnina/gnina
b9ae032f52fc7a8153987bde09c0efa3620d8bb6
caffe/scripts/cpp_lint.py
python
CheckIncludeLine
(filename, clean_lines, linenum, include_state, error)
Check rules that are applicable to #include lines. Strings on #include lines are NOT removed from elided line, to make certain tasks easier. However, to prevent false positives, checks applicable to #include lines in CheckLanguage must be put here.

Args:
    filename: The name of the current file.
    clean_lines: A CleansedLines instance containing the file.
    linenum: The number of the line to check.
    include_state: An _IncludeState instance in which the headers are inserted.
    error: The function to call with any errors found.
Check rules that are applicable to #include lines.
def CheckIncludeLine(filename, clean_lines, linenum, include_state, error):
  """Check rules that are applicable to #include lines.

  Strings on #include lines are NOT removed from elided line, to make
  certain tasks easier. However, to prevent false positives, checks
  applicable to #include lines in CheckLanguage must be put here.

  Args:
    filename: The name of the current file.
    clean_lines: A CleansedLines instance containing the file.
    linenum: The number of the line to check.
    include_state: An _IncludeState instance in which the headers are inserted.
    error: The function to call with any errors found.
  """
  fileinfo = FileInfo(filename)

  line = clean_lines.lines[linenum]

  # "include" should use the new style "foo/bar.h" instead of just "bar.h"
  if _RE_PATTERN_INCLUDE_NEW_STYLE.search(line):
    error(filename, linenum, 'build/include_dir', 4,
          'Include the directory when naming .h files')

  # we shouldn't include a file more than once. actually, there are a
  # handful of instances where doing so is okay, but in general it's
  # not.
  match = _RE_PATTERN_INCLUDE.search(line)
  if match:
    include = match.group(2)
    is_system = (match.group(1) == '<')
    if include in include_state:
      error(filename, linenum, 'build/include', 4,
            '"%s" already included at %s:%s' %
            (include, filename, include_state[include]))
    else:
      include_state[include] = linenum

      # We want to ensure that headers appear in the right order:
      # 1) for foo.cc, foo.h  (preferred location)
      # 2) c system files
      # 3) cpp system files
      # 4) for foo.cc, foo.h  (deprecated location)
      # 5) other google headers
      #
      # We classify each include statement as one of those 5 types
      # using a number of techniques. The include_state object keeps
      # track of the highest type seen, and complains if we see a
      # lower type after that.
      error_message = include_state.CheckNextIncludeOrder(
          _ClassifyInclude(fileinfo, include, is_system))
      if error_message:
        error(filename, linenum, 'build/include_order', 4,
              '%s. Should be: %s.h, c system, c++ system, other.' %
              (error_message, fileinfo.BaseName()))
      canonical_include = include_state.CanonicalizeAlphabeticalOrder(include)
      if not include_state.IsInAlphabeticalOrder(
          clean_lines, linenum, canonical_include):
        error(filename, linenum, 'build/include_alpha', 4,
              'Include "%s" not in alphabetical order' % include)
      include_state.SetLastHeader(canonical_include)

  # Look for any of the stream classes that are part of standard C++.
  match = _RE_PATTERN_INCLUDE.match(line)
  if match:
    include = match.group(2)
    if Match(r'(f|ind|io|i|o|parse|pf|stdio|str|)?stream$', include):
      # Many unit tests use cout, so we exempt them.
      if not _IsTestFilename(filename):
        error(filename, linenum, 'readability/streams', 3,
              'Streams are highly discouraged.')
https://github.com/gnina/gnina/blob/b9ae032f52fc7a8153987bde09c0efa3620d8bb6/caffe/scripts/cpp_lint.py#L3684-L3753
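The ordering rules in the docstring are easier to see in miniature. The following toy classifier is a deliberately simplified re-implementation of the idea (an assumption, not cpplint's own code): rank each include into a category and flag any include whose rank drops below one already seen.

import re

_RANK = {"own_header": 1, "c_system": 2, "cpp_system": 3, "other": 4}

def classify(include, is_system, own_base):
    if include.startswith(own_base + "."):
        return "own_header"
    if is_system:
        # Crude stand-in for cpplint's real heuristics: C system headers
        # end in .h, C++ standard headers do not.
        return "c_system" if include.endswith(".h") else "cpp_system"
    return "other"

def check_order(lines, own_base="foo"):
    highest = 0
    errors = []
    for num, line in enumerate(lines, 1):
        m = re.match(r'#include\s+([<"])([^">]+)[">]', line)
        if not m:
            continue
        rank = _RANK[classify(m.group(2), m.group(1) == "<", own_base)]
        if rank < highest:
            errors.append(num)  # a lower category after a higher one
        highest = max(highest, rank)
    return errors

# <stdio.h> (c system) after <vector> (c++ system) is out of order:
print(check_order(['#include "foo.h"',
                   '#include <vector>',
                   '#include <stdio.h>']))  # -> [3]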
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Code/Tools/AzCodeGenerator/Scripts/az_code_gen/clang_cpp.py
python
resolve_annotation
(annotations, tag, default=None)
return default
Resolves an annotation in C++ namespace format to its value or None, e.g. AzClass::Serialize::Base -> annotations['AzClass']['Serialize']['Base']

@param annotations The root annotation dict to search through
@param tag The C++ formatted tag (using either :: or . as a separator)
@param default The value to return if no annotation is found (default: None)
@return The resolved annotation value or default
Resolves an annotation in C++ namespace format to its value or None e.g. AzClass::Serialize::Base -> annotations['AzClass']['Serialize']['Base']
def resolve_annotation(annotations, tag, default=None):
    """ Resolves an annotation in C++ namespace format to its value or None
        e.g. AzClass::Serialize::Base -> annotations['AzClass']['Serialize']['Base']
    @param annotations The root annotation dict to search through
    @param tag The C++ formatted tag (using either :: or . as a separator)
    @param default The value to return if no annotation is found (default: None)
    @return The resolved annotation value or default
    """
    tag_parts = re.split(TAG_SPLIT_PATTERN, tag)
    try:
        namespace = annotations
        for part in tag_parts[0:-1]:
            namespace = namespace.get(part, {})
        return namespace.get(tag_parts[-1], default)
    except:
        pass
    return default
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Code/Tools/AzCodeGenerator/Scripts/az_code_gen/clang_cpp.py#L28-L44
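A short usage sketch; the annotation dict is illustrative, and TAG_SPLIT_PATTERN (defined elsewhere in clang_cpp.py) is assumed to accept both separators, as the docstring states:

annotations = {'AzClass': {'Serialize': {'Base': 'MyBaseClass'}}}

print(resolve_annotation(annotations, 'AzClass::Serialize::Base'))
# -> MyBaseClass
print(resolve_annotation(annotations, 'AzClass.Serialize.Missing', default='n/a'))
# -> n/a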
Xilinx/Vitis-AI
fc74d404563d9951b57245443c73bef389f3657f
tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/timeseries/python/timeseries/head.py
python
TimeSeriesRegressionHead._train_ops
(self, features)
return estimator_lib.EstimatorSpec(loss=model_outputs.loss, mode=mode, train_op=train_op)
Add training ops to the graph.
Add training ops to the graph.
def _train_ops(self, features):
  """Add training ops to the graph."""
  mode = estimator_lib.ModeKeys.TRAIN
  with variable_scope.variable_scope(
      "model",
      # Use ResourceVariables to avoid race conditions.
      use_resource=True):
    model_outputs = self.create_loss(features, mode)

  train_op = self.optimizer.minimize(
      model_outputs.loss, global_step=training_util.get_global_step())
  return estimator_lib.EstimatorSpec(
      loss=model_outputs.loss, mode=mode, train_op=train_op)
https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/timeseries/python/timeseries/head.py#L95-L110
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/pkg_resources/__init__.py
python
fixup_namespace_packages
(path_item, parent=None)
Ensure that previously-declared namespace packages include path_item
Ensure that previously-declared namespace packages include path_item
def fixup_namespace_packages(path_item, parent=None):
    """Ensure that previously-declared namespace packages include path_item"""
    _imp.acquire_lock()
    try:
        for package in _namespace_packages.get(parent, ()):
            subpath = _handle_ns(package, path_item)
            if subpath:
                fixup_namespace_packages(subpath, package)
    finally:
        _imp.release_lock()
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/pkg_resources/__init__.py#L2308-L2317
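A hedged sketch of calling this entry point directly after extending sys.path at runtime; plugin_dir is hypothetical:

import sys
import pkg_resources

plugin_dir = '/opt/app/plugins'  # hypothetical extra search path
sys.path.append(plugin_dir)
# Splice the new entry into the __path__ of every package previously
# registered via pkg_resources.declare_namespace().
pkg_resources.fixup_namespace_packages(plugin_dir)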
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/x86/toolchain/lib/python2.7/distutils/msvccompiler.py
python
get_build_version
()
return None
Return the version of MSVC that was used to build Python. For Python 2.3 and up, the version number is included in sys.version. For earlier versions, assume the compiler is MSVC 6.
Return the version of MSVC that was used to build Python.
def get_build_version():
    """Return the version of MSVC that was used to build Python.

    For Python 2.3 and up, the version number is included in
    sys.version.  For earlier versions, assume the compiler is MSVC 6.
    """
    prefix = "MSC v."
    i = string.find(sys.version, prefix)
    if i == -1:
        return 6
    i = i + len(prefix)
    s, rest = sys.version[i:].split(" ", 1)
    majorVersion = int(s[:-2]) - 6
    minorVersion = int(s[2:3]) / 10.0
    # I don't think paths are affected by minor version in version 6
    if majorVersion == 6:
        minorVersion = 0
    if majorVersion >= 6:
        return majorVersion + minorVersion
    # else we don't know what version of the compiler this is
    return None
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/distutils/msvccompiler.py#L153-L174
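The string arithmetic above is easier to follow with a concrete value. This helper (illustrative, not part of distutils) applies the same parsing to an explicit version string instead of sys.version:

def parse_msc_version(version_string):
    """Same arithmetic as get_build_version, on an explicit string."""
    prefix = "MSC v."
    i = version_string.find(prefix)
    if i == -1:
        return 6
    s = version_string[i + len(prefix):].split(" ", 1)[0]
    major = int(s[:-2]) - 6          # "15" minus the 6 offset -> 9
    minor = 0 if major == 6 else int(s[2:3]) / 10.0
    return major + minor if major >= 6 else None

# CPython 2.x built with Visual Studio 2008 embeds "MSC v.1500":
print(parse_msc_version("2.7.18 [MSC v.1500 64 bit (AMD64)]"))  # -> 9.0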
mantidproject/mantid
03deeb89254ec4289edb8771e0188c2090a02f32
qt/python/mantidqtinterfaces/mantidqtinterfaces/Muon/GUI/Common/fitting_widgets/model_fitting/model_fitting_model.py
python
ModelFittingModel._extract_x_and_y_from_current_result_table
(self)
Extracts the X, Y and error values from the currently selected result table and saves them in the context.
Extracts the X, Y and error values from the currently selected result table and saves them in the context.
def _extract_x_and_y_from_current_result_table(self) -> None:
    """Extracts the X, Y and error values from the currently selected result
    table and saves them in the context."""
    results_table_name = self.current_result_table_name
    if results_table_name is not None and check_if_workspace_exist(results_table_name):
        current_results_table = retrieve_ws(results_table_name)
        for i, column_name in enumerate(current_results_table.getColumnNames()):
            self._save_values_from_table_column(column_name,
                                                current_results_table.column(i),
                                                current_results_table.getPlotType(i))
        self._populate_empty_parameter_errors(current_results_table.rowCount())
https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/qt/python/mantidqtinterfaces/mantidqtinterfaces/Muon/GUI/Common/fitting_widgets/model_fitting/model_fitting_model.py#L128-L138
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
wx/lib/agw/shapedbutton.py
python
SButton.ConvertWXToPIL
(self, bmp)
return img
Converts an :class:`Image` into a PIL image. :param `bmp`: an instance of :class:`Image`.
Converts an :class:`Image` into a PIL image.
def ConvertWXToPIL(self, bmp):
    """
    Converts an :class:`Image` into a PIL image.

    :param `bmp`: an instance of :class:`Image`.
    """
    width = bmp.GetWidth()
    height = bmp.GetHeight()
    img = Image.fromstring("RGBA", (width, height), bmp.GetData())
    return img
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/lib/agw/shapedbutton.py#L902-L913
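Worth noting when reusing this record: PIL's Image.fromstring was removed in modern Pillow, and wx.Image.GetData historically returns RGB (not RGBA) bytes, so the call above is tied to old PIL builds. A hedged sketch of an equivalent conversion under current wxPython ("Phoenix") and Pillow; the helper name and the GetAlphaBuffer usage are assumptions to verify against your installed versions:

import wx
from PIL import Image

def wx_image_to_pil(img):
    """Hypothetical helper, not part of shapedbutton.py."""
    size = (img.GetWidth(), img.GetHeight())
    # GetData() yields the RGB plane; the alpha plane lives separately.
    pil = Image.frombytes("RGB", size, bytes(img.GetData()))
    if img.HasAlpha():
        pil.putalpha(Image.frombytes("L", size, bytes(img.GetAlphaBuffer())))
    return pil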
Tencent/CMONGO
c40380caa14e05509f46993aa8b8da966b09b0b5
buildscripts/packager-enterprise.py
python
make_package
(distro, build_os, arch, spec, srcdir)
return distro.make_pkg(build_os, arch, spec, srcdir)
Construct the package for (arch, distro, spec), getting packaging files from srcdir and any user-specified suffix from suffixes
Construct the package for (arch, distro, spec), getting packaging files from srcdir and any user-specified suffix from suffixes
def make_package(distro, build_os, arch, spec, srcdir):
    """Construct the package for (arch, distro, spec), getting
    packaging files from srcdir and any user-specified suffix
    from suffixes"""
    sdir = setupdir(distro, build_os, arch, spec)
    packager.ensure_dir(sdir)
    # Note that the RPM packages get their man pages from the debian
    # directory, so the debian directory is needed in all cases (and
    # innocuous in the debianoids' sdirs).
    for pkgdir in ["debian", "rpm"]:
        print "Copying packaging files from %s to %s" % ("%s/%s" % (srcdir, pkgdir), sdir)
        # FIXME: sh-dash-cee is bad. See if tarfile can do this.
        packager.sysassert(["sh", "-c",
                            "(cd \"%s\" && git archive %s %s/ ) | (cd \"%s\" && tar xvf -)" %
                            (srcdir, spec.metadata_gitspec(), pkgdir, sdir)])
    # Splat the binaries and snmp files under sdir.  The "build" stages of the
    # packaging infrastructure will move the files to wherever they
    # need to go.
    unpack_binaries_into(build_os, arch, spec, sdir)
    # Remove the mongosniff binary due to libpcap dynamic
    # linkage.  FIXME: this removal should go away
    # eventually.
    if os.path.exists(sdir + "bin/mongosniff"):
        os.unlink(sdir + "bin/mongosniff")
    return distro.make_pkg(build_os, arch, spec, srcdir)
https://github.com/Tencent/CMONGO/blob/c40380caa14e05509f46993aa8b8da966b09b0b5/buildscripts/packager-enterprise.py#L210-L233
miyosuda/TensorFlowAndroidDemo
35903e0221aa5f109ea2dbef27f20b52e317f42d
jni-build/jni/include/external/bazel_tools/tools/android/incremental_install.py
python
UploadNativeLibs
(adb, native_lib_args, app_dir, full_install)
Uploads native libraries to the device.
Uploads native libraries to the device.
def UploadNativeLibs(adb, native_lib_args, app_dir, full_install):
  """Uploads native libraries to the device."""
  native_libs = ConvertNativeLibs(native_lib_args)
  libs = set()
  if native_libs:
    abi = FindAbi(adb.GetAbi(), native_libs.keys())
    if abi:
      libs = native_libs[abi]

  basename_to_path = {}
  install_checksums = {}
  for lib in sorted(libs):
    install_checksums[os.path.basename(lib)] = Checksum(lib)
    basename_to_path[os.path.basename(lib)] = lib

  device_manifest = None
  if not full_install:
    device_manifest = adb.Pull("%s/native/native_manifest" % app_dir)

  device_checksums = {}
  if device_manifest is None:
    # If we couldn't fetch the device manifest or if this is a non-incremental
    # install, wipe the slate clean
    adb.Delete("%s/native" % app_dir)
  else:
    # Otherwise, parse the manifest. Note that this branch is also taken if the
    # manifest is empty.
    for manifest_line in device_manifest.split("\n"):
      if manifest_line:
        name, checksum = manifest_line.split(" ")
        device_checksums[name] = checksum

  libs_to_delete = set(device_checksums) - set(install_checksums)
  libs_to_upload = set(install_checksums) - set(device_checksums)
  common_libs = set(install_checksums).intersection(set(device_checksums))
  libs_to_upload.update([l for l in common_libs
                         if install_checksums[l] != device_checksums[l]])
  libs_to_push = [(basename_to_path[lib], "%s/native/%s" % (app_dir, lib))
                  for lib in libs_to_upload]

  if not libs_to_delete and not libs_to_push and device_manifest is not None:
    logging.info("Native libs up-to-date")
    return

  num_files = len(libs_to_delete) + len(libs_to_push)
  logging.info("Updating %d native lib%s...",
               num_files, "s" if num_files != 1 else "")

  adb.Delete("%s/native/native_manifest" % app_dir)

  if libs_to_delete:
    adb.DeleteMultiple([
        "%s/native/%s" % (app_dir, lib) for lib in libs_to_delete])

  upload_walltime_start = time.time()
  fs = [adb.Push(local, remote) for local, remote in libs_to_push]
  done, not_done = futures.wait(fs, return_when=futures.FIRST_EXCEPTION)
  upload_walltime = time.time() - upload_walltime_start
  logging.debug("Native library upload walltime: %s seconds", upload_walltime)

  # If there is anything in not_done, then some adb call failed and we
  # can cancel the rest.
  if not_done:
    for f in not_done:
      f.cancel()

  # If any adb call resulted in an exception, re-raise it.
  for f in done:
    f.result()

  install_manifest = [
      name + " " + checksum
      for name, checksum in install_checksums.iteritems()]
  adb.PushString("\n".join(install_manifest),
                 "%s/native/native_manifest" % app_dir).result()
https://github.com/miyosuda/TensorFlowAndroidDemo/blob/35903e0221aa5f109ea2dbef27f20b52e317f42d/jni-build/jni/include/external/bazel_tools/tools/android/incremental_install.py#L504-L579
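The heart of the function is the three-way manifest diff. A self-contained restatement of that set algebra on plain {name: checksum} dicts (not Bazel's code, just the same logic in isolation):

def diff_manifests(installed, device):
    """Compute what to delete from and what to push to the device."""
    to_delete = set(device) - set(installed)
    to_upload = set(installed) - set(device)
    # Anything on both sides whose checksum changed must be re-pushed.
    to_upload |= {n for n in set(installed) & set(device)
                  if installed[n] != device[n]}
    return to_delete, to_upload

installed = {"liba.so": "c1", "libb.so": "c2"}
device = {"libb.so": "stale", "libc.so": "c3"}
print(diff_manifests(installed, device))
# -> ({'libc.so'}, {'liba.so', 'libb.so'})  (set order may vary)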
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/dateutil/tz/tz.py
python
datetime_ambiguous
(dt, tz=None)
return not (same_offset and same_dst)
Given a datetime and a time zone, determine whether or not a given datetime is ambiguous (i.e. if there are two times differentiated only by their DST status). :param dt: A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` is provided.) :param tz: A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If ``None`` or not provided, the datetime's own time zone will be used. :return: Returns a boolean value whether or not the "wall time" is ambiguous in ``tz``. .. versionadded:: 2.6.0
Given a datetime and a time zone, determine whether or not a given datetime is ambiguous (i.e. if there are two times differentiated only by their DST status).
[ "Given", "a", "datetime", "and", "a", "time", "zone", "determine", "whether", "or", "not", "a", "given", "datetime", "is", "ambiguous", "(", "i", ".", "e", ".", "if", "there", "are", "two", "times", "differentiated", "only", "by", "their", "DST", "status", ")", "." ]
def datetime_ambiguous(dt, tz=None): """ Given a datetime and a time zone, determine whether or not a given datetime is ambiguous (i.e. if there are two times differentiated only by their DST status). :param dt: A :class:`datetime.datetime` (whose time zone will be ignored if ``tz`` is provided.) :param tz: A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If ``None`` or not provided, the datetime's own time zone will be used. :return: Returns a boolean value whether or not the "wall time" is ambiguous in ``tz``. .. versionadded:: 2.6.0 """ if tz is None: if dt.tzinfo is None: raise ValueError('Datetime is naive and no time zone provided.') tz = dt.tzinfo # If a time zone defines its own "is_ambiguous" function, we'll use that. is_ambiguous_fn = getattr(tz, 'is_ambiguous', None) if is_ambiguous_fn is not None: try: return tz.is_ambiguous(dt) except Exception: pass # If it doesn't come out and tell us it's ambiguous, we'll just check if # the fold attribute has any effect on this particular date and time. dt = dt.replace(tzinfo=tz) wall_0 = enfold(dt, fold=0) wall_1 = enfold(dt, fold=1) same_offset = wall_0.utcoffset() == wall_1.utcoffset() same_dst = wall_0.dst() == wall_1.dst() return not (same_offset and same_dst)
[ "def", "datetime_ambiguous", "(", "dt", ",", "tz", "=", "None", ")", ":", "if", "tz", "is", "None", ":", "if", "dt", ".", "tzinfo", "is", "None", ":", "raise", "ValueError", "(", "'Datetime is naive and no time zone provided.'", ")", "tz", "=", "dt", ".", "tzinfo", "# If a time zone defines its own \"is_ambiguous\" function, we'll use that.", "is_ambiguous_fn", "=", "getattr", "(", "tz", ",", "'is_ambiguous'", ",", "None", ")", "if", "is_ambiguous_fn", "is", "not", "None", ":", "try", ":", "return", "tz", ".", "is_ambiguous", "(", "dt", ")", "except", "Exception", ":", "pass", "# If it doesn't come out and tell us it's ambiguous, we'll just check if", "# the fold attribute has any effect on this particular date and time.", "dt", "=", "dt", ".", "replace", "(", "tzinfo", "=", "tz", ")", "wall_0", "=", "enfold", "(", "dt", ",", "fold", "=", "0", ")", "wall_1", "=", "enfold", "(", "dt", ",", "fold", "=", "1", ")", "same_offset", "=", "wall_0", ".", "utcoffset", "(", ")", "==", "wall_1", ".", "utcoffset", "(", ")", "same_dst", "=", "wall_0", ".", "dst", "(", ")", "==", "wall_1", ".", "dst", "(", ")", "return", "not", "(", "same_offset", "and", "same_dst", ")" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/site-packages/dateutil/tz/tz.py#L1717-L1760
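A minimal usage sketch for the record above, assuming dateutil >= 2.6.0; America/New_York fell back at 02:00 on 2017-11-05, so 01:30 that morning occurs twice:

from datetime import datetime
from dateutil import tz

eastern = tz.gettz("America/New_York")
# 01:30 on the fall-back day happens at two different UTC offsets -> ambiguous.
print(tz.datetime_ambiguous(datetime(2017, 11, 5, 1, 30), tz=eastern))  # True
# Midday has only one reading.
print(tz.datetime_ambiguous(datetime(2017, 11, 5, 12, 0), tz=eastern))  # False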
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/gtk/xrc.py
python
XmlNode.GetParent
(*args, **kwargs)
return _xrc.XmlNode_GetParent(*args, **kwargs)
GetParent(self) -> XmlNode
GetParent(self) -> XmlNode
[ "GetParent", "(", "self", ")", "-", ">", "XmlNode" ]
def GetParent(*args, **kwargs): """GetParent(self) -> XmlNode""" return _xrc.XmlNode_GetParent(*args, **kwargs)
[ "def", "GetParent", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_xrc", ".", "XmlNode_GetParent", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/xrc.py#L410-L412
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/typing/context.py
python
CallFrame.add_return_type
(self, return_type)
Add *return_type* to the list of inferred return-types. If there are too many, raise `TypingError`.
Add *return_type* to the list of inferred return-types. If there are too many, raise `TypingError`.
[ "Add", "*", "return_type", "*", "to", "the", "list", "of", "inferred", "return", "-", "types", ".", "If", "there", "are", "too", "many", "raise", "TypingError", "." ]
def add_return_type(self, return_type): """Add *return_type* to the list of inferred return-types. If there are too many, raise `TypingError`. """ # The maximum limit is picked arbitrarily. # Don't think that this needs to be user configurable. RETTY_LIMIT = 16 self._inferred_retty.add(return_type) if len(self._inferred_retty) >= RETTY_LIMIT: m = "Return type of recursive function does not converge" raise errors.TypingError(m)
[ "def", "add_return_type", "(", "self", ",", "return_type", ")", ":", "# The maximum limit is picked arbitrarily.", "# Don't think that this needs to be user configurable.", "RETTY_LIMIT", "=", "16", "self", ".", "_inferred_retty", ".", "add", "(", "return_type", ")", "if", "len", "(", "self", ".", "_inferred_retty", ")", ">=", "RETTY_LIMIT", ":", "m", "=", "\"Return type of recursive function does not converge\"", "raise", "errors", ".", "TypingError", "(", "m", ")" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numba/typing/context.py#L118-L128
weolar/miniblink49
1c4678db0594a4abde23d3ebbcc7cd13c3170777
third_party/WebKit/Tools/Scripts/webkitpy/style/checkers/cpp.py
python
_FunctionState.modifiers_and_return_type
(self)
return SingleLineView(elided, start_modifiers, self.function_name_start_position).single_line.strip()
Returns the modifiers and the return type.
Returns the modifiers and the return type.
[ "Returns", "the", "modifiers", "and", "the", "return", "type", "." ]
def modifiers_and_return_type(self): """Returns the modifiers and the return type.""" # Go backwards from where the function name is until we encounter one of several things: # ';' or '{' or '}' or 'private:', etc. or '#' or return Position(0, 0) elided = self._clean_lines.elided start_modifiers = _rfind_in_lines(r';|\{|\}|((private|public|protected):)|(#.*)', elided, self.parameter_start_position, Position(0, 0)) return SingleLineView(elided, start_modifiers, self.function_name_start_position).single_line.strip()
[ "def", "modifiers_and_return_type", "(", "self", ")", ":", "# Go backwards from where the function name is until we encounter one of several things:", "# ';' or '{' or '}' or 'private:', etc. or '#' or return Position(0, 0)", "elided", "=", "self", ".", "_clean_lines", ".", "elided", "start_modifiers", "=", "_rfind_in_lines", "(", "r';|\\{|\\}|((private|public|protected):)|(#.*)'", ",", "elided", ",", "self", ".", "parameter_start_position", ",", "Position", "(", "0", ",", "0", ")", ")", "return", "SingleLineView", "(", "elided", ",", "start_modifiers", ",", "self", ".", "function_name_start_position", ")", ".", "single_line", ".", "strip", "(", ")" ]
https://github.com/weolar/miniblink49/blob/1c4678db0594a4abde23d3ebbcc7cd13c3170777/third_party/WebKit/Tools/Scripts/webkitpy/style/checkers/cpp.py#L568-L575
apple/turicreate
cce55aa5311300e3ce6af93cb45ba791fd1bdf49
deps/src/libxml2-2.9.1/python/libxml2.py
python
xmlDoc.encodeEntitiesReentrant
(self, input)
return ret
Do a global encoding of a string, replacing the predefined entities and non ASCII values with their entities and CharRef counterparts. Contrary to xmlEncodeEntities, this routine is reentrant, and result must be deallocated.
Do a global encoding of a string, replacing the predefined entities and non ASCII values with their entities and CharRef counterparts. Contrary to xmlEncodeEntities, this routine is reentrant, and result must be deallocated.
[ "Do", "a", "global", "encoding", "of", "a", "string", "replacing", "the", "predefined", "entities", "and", "non", "ASCII", "values", "with", "their", "entities", "and", "CharRef", "counterparts", ".", "Contrary", "to", "xmlEncodeEntities", "this", "routine", "is", "reentrant", "and", "result", "must", "be", "deallocated", "." ]
def encodeEntitiesReentrant(self, input): """Do a global encoding of a string, replacing the predefined entities and non ASCII values with their entities and CharRef counterparts. Contrary to xmlEncodeEntities, this routine is reentrant, and result must be deallocated. """ ret = libxml2mod.xmlEncodeEntitiesReentrant(self._o, input) return ret
[ "def", "encodeEntitiesReentrant", "(", "self", ",", "input", ")", ":", "ret", "=", "libxml2mod", ".", "xmlEncodeEntitiesReentrant", "(", "self", ".", "_o", ",", "input", ")", "return", "ret" ]
https://github.com/apple/turicreate/blob/cce55aa5311300e3ce6af93cb45ba791fd1bdf49/deps/src/libxml2-2.9.1/python/libxml2.py#L4135-L4141
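A short sketch of calling this binding, assuming the libxml2 Python module is importable:

import libxml2

doc = libxml2.newDoc("1.0")
# Predefined entities in the input are replaced with their references.
print(doc.encodeEntitiesReentrant("a < b & c"))  # a &lt; b &amp; c
doc.freeDoc()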
mindspore-ai/mindspore
fb8fd3338605bb34fa5cea054e535a8b1d753fab
mindspore/python/mindspore/nn/metrics/loss.py
python
Loss.eval
(self)
return self._sum_loss / self._total_num
Calculates the average of the loss. Returns: numpy.float64. The average of the loss. Raises: RuntimeError: If the total number is 0.
Calculates the average of the loss.
[ "Calculates", "the", "average", "of", "the", "loss", "." ]
def eval(self): """ Calculates the average of the loss. Returns: numpy.float64. The average of the loss. Raises: RuntimeError: If the total number is 0. """ if self._total_num == 0: raise RuntimeError("The 'Loss' cannot be calculated, because the number of samples is 0, please " "check whether the update method has been called before calling the eval method.") return self._sum_loss / self._total_num
[ "def", "eval", "(", "self", ")", ":", "if", "self", ".", "_total_num", "==", "0", ":", "raise", "RuntimeError", "(", "\"The 'Loss' cannot be calculated, because the number of samples is 0, please \"", "\"check whether the update method has been called before calling the eval method.\"", ")", "return", "self", ".", "_sum_loss", "/", "self", ".", "_total_num" ]
https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/nn/metrics/loss.py#L78-L91
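A minimal sketch of the metric's clear/update/eval cycle, assuming a working MindSpore install:

import numpy as np
from mindspore import Tensor
from mindspore.nn.metrics import Loss

metric = Loss()
metric.clear()                         # reset the running sum and count
metric.update(Tensor(np.array(0.2)))  # accumulate one batch loss
metric.update(Tensor(np.array(0.4)))
print(metric.eval())                   # 0.3, the average over both updates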
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/cookielib.py
python
CookieJar.add_cookie_header
(self, request)
Add correct Cookie: header to request (urllib2.Request object). The Cookie2 header is also added unless policy.hide_cookie2 is true.
Add correct Cookie: header to request (urllib2.Request object).
[ "Add", "correct", "Cookie", ":", "header", "to", "request", "(", "urllib2", ".", "Request", "object", ")", "." ]
def add_cookie_header(self, request): """Add correct Cookie: header to request (urllib2.Request object). The Cookie2 header is also added unless policy.hide_cookie2 is true. """ _debug("add_cookie_header") self._cookies_lock.acquire() try: self._policy._now = self._now = int(time.time()) cookies = self._cookies_for_request(request) attrs = self._cookie_attrs(cookies) if attrs: if not request.has_header("Cookie"): request.add_unredirected_header( "Cookie", "; ".join(attrs)) # if necessary, advertise that we know RFC 2965 if (self._policy.rfc2965 and not self._policy.hide_cookie2 and not request.has_header("Cookie2")): for cookie in cookies: if cookie.version != 1: request.add_unredirected_header("Cookie2", '$Version="1"') break finally: self._cookies_lock.release() self.clear_expired_cookies()
[ "def", "add_cookie_header", "(", "self", ",", "request", ")", ":", "_debug", "(", "\"add_cookie_header\"", ")", "self", ".", "_cookies_lock", ".", "acquire", "(", ")", "try", ":", "self", ".", "_policy", ".", "_now", "=", "self", ".", "_now", "=", "int", "(", "time", ".", "time", "(", ")", ")", "cookies", "=", "self", ".", "_cookies_for_request", "(", "request", ")", "attrs", "=", "self", ".", "_cookie_attrs", "(", "cookies", ")", "if", "attrs", ":", "if", "not", "request", ".", "has_header", "(", "\"Cookie\"", ")", ":", "request", ".", "add_unredirected_header", "(", "\"Cookie\"", ",", "\"; \"", ".", "join", "(", "attrs", ")", ")", "# if necessary, advertise that we know RFC 2965", "if", "(", "self", ".", "_policy", ".", "rfc2965", "and", "not", "self", ".", "_policy", ".", "hide_cookie2", "and", "not", "request", ".", "has_header", "(", "\"Cookie2\"", ")", ")", ":", "for", "cookie", "in", "cookies", ":", "if", "cookie", ".", "version", "!=", "1", ":", "request", ".", "add_unredirected_header", "(", "\"Cookie2\"", ",", "'$Version=\"1\"'", ")", "break", "finally", ":", "self", ".", "_cookies_lock", ".", "release", "(", ")", "self", ".", "clear_expired_cookies", "(", ")" ]
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/cookielib.py#L1312-L1343
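A Python 2 sketch (this is the 2.7 cookielib bundled with the toolchain); the URLs are illustrative:

import cookielib
import urllib2

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
opener.open("http://example.com/")   # any Set-Cookie responses land in the jar

request = urllib2.Request("http://example.com/page")
jar.add_cookie_header(request)       # attaches Cookie: (and Cookie2: if RFC 2965 applies)
print(request.get_header("Cookie"))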
benoitsteiner/tensorflow-opencl
cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5
tensorflow/python/saved_model/builder_impl.py
python
SavedModelBuilder.add_meta_graph
(self, tags, signature_def_map=None, assets_collection=None, legacy_init_op=None, clear_devices=False, main_op=None)
Adds the current meta graph to the SavedModel. Creates a Saver in the current scope and uses the Saver to export the meta graph def. Invoking this API requires the `add_meta_graph_and_variables()` API to have been invoked before. Args: tags: The set of tags to annotate the meta graph def with. signature_def_map: The map of signature defs to be added to the meta graph def. assets_collection: Assets collection to be saved with SavedModel. Note that this collection should be a subset of the assets saved as part of the first meta graph in the SavedModel. legacy_init_op: Legacy support for op or group of ops to execute after the restore op upon a load. clear_devices: Set to true if the device info on the default graph should be cleared. main_op: Op or group of ops to execute when the graph is loaded. Note that when the main_op is specified it is run after the restore op at load-time. Raises: AssertionError: If the variables for the SavedModel have not been saved yet, or if the graph already contains one or more legacy init ops.
Adds the current meta graph to the SavedModel.
[ "Adds", "the", "current", "meta", "graph", "to", "the", "SavedModel", "." ]
def add_meta_graph(self, tags, signature_def_map=None, assets_collection=None, legacy_init_op=None, clear_devices=False, main_op=None): """Adds the current meta graph to the SavedModel. Creates a Saver in the current scope and uses the Saver to export the meta graph def. Invoking this API requires the `add_meta_graph_and_variables()` API to have been invoked before. Args: tags: The set of tags to annotate the meta graph def with. signature_def_map: The map of signature defs to be added to the meta graph def. assets_collection: Assets collection to be saved with SavedModel. Note that this collection should be a subset of the assets saved as part of the first meta graph in the SavedModel. legacy_init_op: Legacy support for op or group of ops to execute after the restore op upon a load. clear_devices: Set to true if the device info on the default graph should be cleared. main_op: Op or group of ops to execute when the graph is loaded. Note that when the main_op is specified it is run after the restore op at load-time. Raises: AssertionError: If the variables for the SavedModel have not been saved yet, or if the graph already contains one or more legacy init ops. """ if not self._has_saved_variables: raise AssertionError( "Graph state including variables and assets has not been saved yet. " "Please invoke `add_meta_graph_and_variables()` first.") # Validate the signature def map to ensure all included TensorInfos are # properly populated. self._validate_signature_def_map(signature_def_map) # Save asset files and write them to disk, if any. self._save_and_write_assets(assets_collection) if main_op is None: # Add legacy init op to the SavedModel. self._maybe_add_legacy_init_op(legacy_init_op) else: self._add_main_op(main_op) # Initialize a saver to generate a sharded output for all saveables in the # current scope. saver = tf_saver.Saver( variables._all_saveable_objects(), # pylint: disable=protected-access sharded=True, write_version=saver_pb2.SaverDef.V2, allow_empty=True) # The graph almost certainly previously contained at least one Saver, and # possibly several (e.g. one for loading a pretrained embedding, and another # for the model weights). However, a *new* Saver was just created that # includes all of the variables. Removing the preexisting ones was the # motivation for the clear_extraneous_savers option, but it turns out that # there are edge cases where that option breaks the graph. Until that is # resolved, we just leave the option set to False for now. # TODO(soergel): Reinstate clear_extraneous_savers=True when possible. meta_graph_def = saver.export_meta_graph(clear_devices=clear_devices) # Tag the meta graph def and add it to the SavedModel. self._tag_and_add_meta_graph(meta_graph_def, tags, signature_def_map)
[ "def", "add_meta_graph", "(", "self", ",", "tags", ",", "signature_def_map", "=", "None", ",", "assets_collection", "=", "None", ",", "legacy_init_op", "=", "None", ",", "clear_devices", "=", "False", ",", "main_op", "=", "None", ")", ":", "if", "not", "self", ".", "_has_saved_variables", ":", "raise", "AssertionError", "(", "\"Graph state including variables and assets has not been saved yet. \"", "\"Please invoke `add_meta_graph_and_variables()` first.\"", ")", "# Validate the signature def map to ensure all included TensorInfos are", "# properly populated.", "self", ".", "_validate_signature_def_map", "(", "signature_def_map", ")", "# Save asset files and write them to disk, if any.", "self", ".", "_save_and_write_assets", "(", "assets_collection", ")", "if", "main_op", "is", "None", ":", "# Add legacy init op to the SavedModel.", "self", ".", "_maybe_add_legacy_init_op", "(", "legacy_init_op", ")", "else", ":", "self", ".", "_add_main_op", "(", "main_op", ")", "# Initialize a saver to generate a sharded output for all saveables in the", "# current scope.", "saver", "=", "tf_saver", ".", "Saver", "(", "variables", ".", "_all_saveable_objects", "(", ")", ",", "# pylint: disable=protected-access", "sharded", "=", "True", ",", "write_version", "=", "saver_pb2", ".", "SaverDef", ".", "V2", ",", "allow_empty", "=", "True", ")", "# The graph almost certainly previously contained at least one Saver, and", "# possibly several (e.g. one for loading a pretrained embedding, and another", "# for the model weights). However, a *new* Saver was just created that", "# includes all of the variables. Removing the preexisting ones was the", "# motivation for the clear_extraneous_savers option, but it turns out that", "# there are edge cases where that option breaks the graph. Until that is", "# resolved, we just leave the option set to False for now.", "# TODO(soergel): Reinstate clear_extraneous_savers=True when possible.", "meta_graph_def", "=", "saver", ".", "export_meta_graph", "(", "clear_devices", "=", "clear_devices", ")", "# Tag the meta graph def and add it to the SavedModel.", "self", ".", "_tag_and_add_meta_graph", "(", "meta_graph_def", ",", "tags", ",", "signature_def_map", ")" ]
https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/python/saved_model/builder_impl.py#L236-L305
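The intended two-phase flow, sketched against the TF1 builder API; the export path is hypothetical:

import tensorflow as tf

builder = tf.saved_model.builder.SavedModelBuilder("/tmp/export")  # hypothetical path

with tf.Session(graph=tf.Graph()) as sess:
    # ... build and initialize the training graph, then save variables once ...
    builder.add_meta_graph_and_variables(sess, ["train"])

with tf.Session(graph=tf.Graph()) as sess:
    # ... build an alternate graph over the same variables ...
    builder.add_meta_graph(["serve"])  # requires the call above to have run first

builder.save()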
ChromiumWebApps/chromium
c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7
tools/symsrc/pefile.py
python
PE.get_string_from_data
(self, offset, data)
return s
Get an ASCII string from within the data.
Get an ASCII string from within the data.
[ "Get", "an", "ASCII", "string", "from", "within", "the", "data", "." ]
def get_string_from_data(self, offset, data): """Get an ASCII string from within the data.""" # OC Patch b = None try: b = data[offset] except IndexError: return '' s = '' while ord(b): s += b offset += 1 try: b = data[offset] except IndexError: break return s
[ "def", "get_string_from_data", "(", "self", ",", "offset", ",", "data", ")", ":", "# OC Patch", "b", "=", "None", "try", ":", "b", "=", "data", "[", "offset", "]", "except", "IndexError", ":", "return", "''", "s", "=", "''", "while", "ord", "(", "b", ")", ":", "s", "+=", "b", "offset", "+=", "1", "try", ":", "b", "=", "data", "[", "offset", "]", "except", "IndexError", ":", "break", "return", "s" ]
https://github.com/ChromiumWebApps/chromium/blob/c7361d39be8abd1574e6ce8957c8dbddd4c6ccf7/tools/symsrc/pefile.py#L3027-L3047
microsoft/checkedc-clang
a173fefde5d7877b7750e7ce96dd08cf18baebf2
lldb/third_party/Python/module/pexpect-4.6/pexpect/screen.py
python
screen.cursor_save
(self)
Save current cursor position.
Save current cursor position.
[ "Save", "current", "cursor", "position", "." ]
def cursor_save (self): # <ESC>[s '''Save current cursor position.''' self.cursor_save_attrs()
[ "def", "cursor_save", "(", "self", ")", ":", "# <ESC>[s", "self", ".", "cursor_save_attrs", "(", ")" ]
https://github.com/microsoft/checkedc-clang/blob/a173fefde5d7877b7750e7ce96dd08cf18baebf2/lldb/third_party/Python/module/pexpect-4.6/pexpect/screen.py#L318-L321
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/x86/toolchain/lib/python2.7/logging/__init__.py
python
Logger.log
(self, level, msg, *args, **kwargs)
Log 'msg % args' with the integer severity 'level'. To pass exception information, use the keyword argument exc_info with a true value, e.g. logger.log(level, "We have a %s", "mysterious problem", exc_info=1)
Log 'msg % args' with the integer severity 'level'.
[ "Log", "msg", "%", "args", "with", "the", "integer", "severity", "level", "." ]
def log(self, level, msg, *args, **kwargs): """ Log 'msg % args' with the integer severity 'level'. To pass exception information, use the keyword argument exc_info with a true value, e.g. logger.log(level, "We have a %s", "mysterious problem", exc_info=1) """ if not isinstance(level, int): if raiseExceptions: raise TypeError("level must be an integer") else: return if self.isEnabledFor(level): self._log(level, msg, args, **kwargs)
[ "def", "log", "(", "self", ",", "level", ",", "msg", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "if", "not", "isinstance", "(", "level", ",", "int", ")", ":", "if", "raiseExceptions", ":", "raise", "TypeError", "(", "\"level must be an integer\"", ")", "else", ":", "return", "if", "self", ".", "isEnabledFor", "(", "level", ")", ":", "self", ".", "_log", "(", "level", ",", "msg", ",", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/logging/__init__.py#L1198-L1213
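Standard-library usage; the exc_info call mirrors the docstring's own example:

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("demo")

logger.log(logging.WARNING, "disk usage at %d%%", 93)
try:
    1 / 0
except ZeroDivisionError:
    logger.log(logging.ERROR, "We have a %s", "mysterious problem", exc_info=1)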
miyosuda/TensorFlowAndroidDemo
35903e0221aa5f109ea2dbef27f20b52e317f42d
jni-build/jni/include/tensorflow/python/ops/variable_scope.py
python
_get_partitioned_variable
(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True)
return scope._get_partitioned_variable( _get_default_variable_store(), name, shape=shape, dtype=dtype, initializer=initializer, regularizer=regularizer, trainable=trainable, collections=collections, caching_device=caching_device, partitioner=partitioner, validate_shape=validate_shape)
Gets or creates a sharded variable list with these parameters. The `partitioner` must be a callable that accepts a fully defined `TensorShape` and returns a sequence of integers (the `partitions`). These integers describe how to partition the given sharded `Variable` along the given dimension. That is, `partitions[1] = 3` means split the `Variable` into 3 shards along dimension 1. Currently, sharding along only one axis is supported. If the list of variables with the given name (prefix) is already stored, we return the stored variables. Otherwise, we create a new one. Set `reuse` to `True` when you only want to reuse existing Variables. Set `reuse` to `False` when you only want to create new Variables. If `reuse` is `None` (the default), both new and existing variables are returned. If initializer is `None` (the default), the default initializer passed in the constructor is used. If that one is `None` too, we use a new `uniform_unit_scaling_initializer`. If initializer is a Tensor, we use it as a value and derive the shape from the initializer. If the initializer is a callable, then it will be called for each shard. Otherwise the initializer should match the shape of the entire sharded Variable, and it will be sliced accordingly for each shard. Some useful partitioners are available. See, e.g., `variable_axis_size_partitioner` and `min_max_variable_partitioner`. Args: name: The name of the new or existing variable. shape: Shape of the new or existing variable. dtype: Type of the new or existing variable (defaults to `DT_FLOAT`). initializer: Initializer for the variable if one is created. regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. trainable: If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). collections: List of graph collections keys to add the Variable to. Defaults to `[GraphKeys.VARIABLES]` (see tf.Variable). caching_device: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not `None`, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through `Switch` and other conditional statements. partitioner: Optional callable that accepts a fully defined `TensorShape` and `dtype` of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned). validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. Returns: A tuple `(shards, partitions)` where `shards` is the list of `Variable` shards and `partitions` is the output of the partitioner on the input shape. Raises: ValueError: when creating a new variable and shape is not declared, or when violating reuse during variable creation. Reuse is set inside `variable_scope`.
Gets or creates a sharded variable list with these parameters.
[ "Gets", "or", "creates", "a", "sharded", "variable", "list", "with", "these", "parameters", "." ]
def _get_partitioned_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True): """Gets or creates a sharded variable list with these parameters. The `partitioner` must be a callable that accepts a fully defined `TensorShape` and returns a sequence of integers (the `partitions`). These integers describe how to partition the given sharded `Variable` along the given dimension. That is, `partitions[1] = 3` means split the `Variable` into 3 shards along dimension 1. Currently, sharding along only one axis is supported. If the list of variables with the given name (prefix) is already stored, we return the stored variables. Otherwise, we create a new one. Set `reuse` to `True` when you only want to reuse existing Variables. Set `reuse` to `False` when you only want to create new Variables. If `reuse` is `None` (the default), both new and existing variables are returned. If initializer is `None` (the default), the default initializer passed in the constructor is used. If that one is `None` too, we use a new `uniform_unit_scaling_initializer`. If initializer is a Tensor, we use it as a value and derive the shape from the initializer. If the initializer is a callable, then it will be called for each shard. Otherwise the initializer should match the shape of the entire sharded Variable, and it will be sliced accordingly for each shard. Some useful partitioners are available. See, e.g., `variable_axis_size_partitioner` and `min_max_variable_partitioner`. Args: name: The name of the new or existing variable. shape: Shape of the new or existing variable. dtype: Type of the new or existing variable (defaults to `DT_FLOAT`). initializer: Initializer for the variable if one is created. regularizer: A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection GraphKeys.REGULARIZATION_LOSSES and can be used for regularization. trainable: If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). collections: List of graph collections keys to add the Variable to. Defaults to `[GraphKeys.VARIABLES]` (see tf.Variable). caching_device: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not `None`, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through `Switch` and other conditional statements. partitioner: Optional callable that accepts a fully defined `TensorShape` and `dtype` of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned). validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known. Returns: A tuple `(shards, partitions)` where `shards` is the list of `Variable` shards and `partitions` is the output of the partitioner on the input shape. Raises: ValueError: when creating a new variable and shape is not declared, or when violating reuse during variable creation. Reuse is set inside `variable_scope`. """ # pylint: disable=protected-access scope = get_variable_scope() if scope.custom_getter is not None: raise ValueError( "Private access to _get_partitioned_variable is not allowed when " "a custom getter is set. Current custom getter: %s. 
" "It is likely that you're using create_partitioned_variables. " "If so, consider instead using get_variable with a non-empty " "partitioner parameter instead." % scope.custom_getter) return scope._get_partitioned_variable( _get_default_variable_store(), name, shape=shape, dtype=dtype, initializer=initializer, regularizer=regularizer, trainable=trainable, collections=collections, caching_device=caching_device, partitioner=partitioner, validate_shape=validate_shape)
[ "def", "_get_partitioned_variable", "(", "name", ",", "shape", "=", "None", ",", "dtype", "=", "None", ",", "initializer", "=", "None", ",", "regularizer", "=", "None", ",", "trainable", "=", "True", ",", "collections", "=", "None", ",", "caching_device", "=", "None", ",", "partitioner", "=", "None", ",", "validate_shape", "=", "True", ")", ":", "# pylint: disable=protected-access", "scope", "=", "get_variable_scope", "(", ")", "if", "scope", ".", "custom_getter", "is", "not", "None", ":", "raise", "ValueError", "(", "\"Private access to _get_partitioned_variable is not allowed when \"", "\"a custom getter is set. Current custom getter: %s. \"", "\"It is likely that you're using create_partitioned_variables. \"", "\"If so, consider instead using get_variable with a non-empty \"", "\"partitioner parameter instead.\"", "%", "scope", ".", "custom_getter", ")", "return", "scope", ".", "_get_partitioned_variable", "(", "_get_default_variable_store", "(", ")", ",", "name", ",", "shape", "=", "shape", ",", "dtype", "=", "dtype", ",", "initializer", "=", "initializer", ",", "regularizer", "=", "regularizer", ",", "trainable", "=", "trainable", ",", "collections", "=", "collections", ",", "caching_device", "=", "caching_device", ",", "partitioner", "=", "partitioner", ",", "validate_shape", "=", "validate_shape", ")" ]
https://github.com/miyosuda/TensorFlowAndroidDemo/blob/35903e0221aa5f109ea2dbef27f20b52e317f42d/jni-build/jni/include/tensorflow/python/ops/variable_scope.py#L876-L962
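The function itself is private; as its error message suggests, the public route is get_variable with a partitioner. A rough TF1-era sketch using the variable_axis_size_partitioner the docstring names:

import tensorflow as tf

with tf.variable_scope("embeddings"):
    # Shards the variable along axis 0 so no shard exceeds ~2 MiB.
    weights = tf.get_variable(
        "weights", shape=[1024, 64], dtype=tf.float32,
        partitioner=tf.variable_axis_size_partitioner(max_shard_bytes=2 << 20))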
Ewenwan/MVision
97b394dfa48cb21c82cd003b1a952745e413a17f
vSLAM/矩阵变换python函数.py
python
is_same_transform
(matrix0, matrix1)
return numpy.allclose(matrix0, matrix1)
Return True if two matrices perform same transformation. >>> is_same_transform(numpy.identity(4), numpy.identity(4)) True >>> is_same_transform(numpy.identity(4), random_rotation_matrix()) False
Return True if two matrices perform same transformation. >>> is_same_transform(numpy.identity(4), numpy.identity(4)) True >>> is_same_transform(numpy.identity(4), random_rotation_matrix()) False
[ "Return", "True", "if", "two", "matrices", "perform", "same", "transformation", ".", ">>>", "is_same_transform", "(", "numpy", ".", "identity", "(", "4", ")", "numpy", ".", "identity", "(", "4", "))", "True", ">>>", "is_same_transform", "(", "numpy", ".", "identity", "(", "4", ")", "random_rotation_matrix", "()", ")", "False" ]
def is_same_transform(matrix0, matrix1): """Return True if two matrices perform same transformation. >>> is_same_transform(numpy.identity(4), numpy.identity(4)) True >>> is_same_transform(numpy.identity(4), random_rotation_matrix()) False """ matrix0 = numpy.array(matrix0, dtype=numpy.float64, copy=True) matrix0 /= matrix0[3, 3] matrix1 = numpy.array(matrix1, dtype=numpy.float64, copy=True) matrix1 /= matrix1[3, 3] return numpy.allclose(matrix0, matrix1)
[ "def", "is_same_transform", "(", "matrix0", ",", "matrix1", ")", ":", "matrix0", "=", "numpy", ".", "array", "(", "matrix0", ",", "dtype", "=", "numpy", ".", "float64", ",", "copy", "=", "True", ")", "matrix0", "/=", "matrix0", "[", "3", ",", "3", "]", "matrix1", "=", "numpy", ".", "array", "(", "matrix1", ",", "dtype", "=", "numpy", ".", "float64", ",", "copy", "=", "True", ")", "matrix1", "/=", "matrix1", "[", "3", ",", "3", "]", "return", "numpy", ".", "allclose", "(", "matrix0", ",", "matrix1", ")" ]
https://github.com/Ewenwan/MVision/blob/97b394dfa48cb21c82cd003b1a952745e413a17f/vSLAM/矩阵变换python函数.py#L1540-L1551
microsoft/checkedc-clang
a173fefde5d7877b7750e7ce96dd08cf18baebf2
llvm/bindings/python/llvm/object.py
python
ObjectFile.__init__
(self, filename=None, contents=None)
Construct an instance from a filename or binary data. filename must be a path to a file that can be opened with open(). contents can be either a native Python buffer type (like str) or a llvm.core.MemoryBuffer instance.
Construct an instance from a filename or binary data.
[ "Construct", "an", "instance", "from", "a", "filename", "or", "binary", "data", "." ]
def __init__(self, filename=None, contents=None): """Construct an instance from a filename or binary data. filename must be a path to a file that can be opened with open(). contents can be either a native Python buffer type (like str) or a llvm.core.MemoryBuffer instance. """ if contents: assert isinstance(contents, MemoryBuffer) if filename is not None: contents = MemoryBuffer(filename=filename) if contents is None: raise Exception('No input found.') ptr = lib.LLVMCreateObjectFile(contents) LLVMObject.__init__(self, ptr, disposer=lib.LLVMDisposeObjectFile) self.take_ownership(contents)
[ "def", "__init__", "(", "self", ",", "filename", "=", "None", ",", "contents", "=", "None", ")", ":", "if", "contents", ":", "assert", "isinstance", "(", "contents", ",", "MemoryBuffer", ")", "if", "filename", "is", "not", "None", ":", "contents", "=", "MemoryBuffer", "(", "filename", "=", "filename", ")", "if", "contents", "is", "None", ":", "raise", "Exception", "(", "'No input found.'", ")", "ptr", "=", "lib", ".", "LLVMCreateObjectFile", "(", "contents", ")", "LLVMObject", ".", "__init__", "(", "self", ",", "ptr", ",", "disposer", "=", "lib", ".", "LLVMDisposeObjectFile", ")", "self", ".", "take_ownership", "(", "contents", ")" ]
https://github.com/microsoft/checkedc-clang/blob/a173fefde5d7877b7750e7ce96dd08cf18baebf2/llvm/bindings/python/llvm/object.py#L102-L120
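A sketch against these llvm bindings, assuming the bindings package is importable as `llvm`; the binary path is hypothetical:

from llvm.object import ObjectFile

obj = ObjectFile(filename="a.out")  # hypothetical binary
for section in obj.get_sections():
    print(section.name, section.size)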
benoitsteiner/tensorflow-opencl
cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5
tensorflow/python/framework/errors_impl.py
python
NotFoundError.__init__
(self, node_def, op, message)
Creates a `NotFoundError`.
Creates a `NotFoundError`.
[ "Creates", "a", "NotFoundError", "." ]
def __init__(self, node_def, op, message): """Creates a `NotFoundError`.""" super(NotFoundError, self).__init__(node_def, op, message, NOT_FOUND)
[ "def", "__init__", "(", "self", ",", "node_def", ",", "op", ",", "message", ")", ":", "super", "(", "NotFoundError", ",", "self", ")", ".", "__init__", "(", "node_def", ",", "op", ",", "message", ",", "NOT_FOUND", ")" ]
https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/python/framework/errors_impl.py#L237-L239
Xilinx/Vitis-AI
fc74d404563d9951b57245443c73bef389f3657f
tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/configure.py
python
set_tf_nccl_version
(environ_cp)
Set TF_NCCL_VERSION.
Set TF_NCCL_VERSION.
[ "Set", "TF_NCCL_VERSION", "." ]
def set_tf_nccl_version(environ_cp): """Set TF_NCCL_VERSION.""" if not is_linux(): raise ValueError('Currently NCCL is only supported on Linux platform.') if 'TF_NCCL_VERSION' in environ_cp: return ask_nccl_version = ( 'Please specify the locally installed NCCL version you want to use. ' '[Leave empty to use http://github.com/nvidia/nccl]: ') tf_nccl_version = get_from_env_or_user_or_default(environ_cp, 'TF_NCCL_VERSION', ask_nccl_version, '') environ_cp['TF_NCCL_VERSION'] = tf_nccl_version
[ "def", "set_tf_nccl_version", "(", "environ_cp", ")", ":", "if", "not", "is_linux", "(", ")", ":", "raise", "ValueError", "(", "'Currently NCCL is only supported on Linux platform.'", ")", "if", "'TF_NCCL_VERSION'", "in", "environ_cp", ":", "return", "ask_nccl_version", "=", "(", "'Please specify the locally installed NCCL version you want to use. '", "'[Leave empty to use http://github.com/nvidia/nccl]: '", ")", "tf_nccl_version", "=", "get_from_env_or_user_or_default", "(", "environ_cp", ",", "'TF_NCCL_VERSION'", ",", "ask_nccl_version", ",", "''", ")", "environ_cp", "[", "'TF_NCCL_VERSION'", "]", "=", "tf_nccl_version" ]
https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/configure.py#L950-L964
zju3dv/clean-pvnet
5870c509e3cc205e1bb28910a7b1a9a3c8add9a8
lib/utils/pysixd/transform.py
python
translation_from_matrix
(matrix)
return numpy.array(matrix, copy=False)[:3, 3].copy()
Return translation vector from translation matrix. >>> v0 = numpy.random.random(3) - 0.5 >>> v1 = translation_from_matrix(translation_matrix(v0)) >>> numpy.allclose(v0, v1) True
Return translation vector from translation matrix.
[ "Return", "translation", "vector", "from", "translation", "matrix", "." ]
def translation_from_matrix(matrix): """Return translation vector from translation matrix. >>> v0 = numpy.random.random(3) - 0.5 >>> v1 = translation_from_matrix(translation_matrix(v0)) >>> numpy.allclose(v0, v1) True """ return numpy.array(matrix, copy=False)[:3, 3].copy()
[ "def", "translation_from_matrix", "(", "matrix", ")", ":", "return", "numpy", ".", "array", "(", "matrix", ",", "copy", "=", "False", ")", "[", ":", "3", ",", "3", "]", ".", "copy", "(", ")" ]
https://github.com/zju3dv/clean-pvnet/blob/5870c509e3cc205e1bb28910a7b1a9a3c8add9a8/lib/utils/pysixd/transform.py#L235-L244
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/lib-tk/ttk.py
python
_convert_stringval
(value)
return value
Converts a value to, hopefully, a more appropriate Python object.
Converts a value to, hopefully, a more appropriate Python object.
[ "Converts", "a", "value", "to", "hopefully", "a", "more", "appropriate", "Python", "object", "." ]
def _convert_stringval(value): """Converts a value to, hopefully, a more appropriate Python object.""" value = unicode(value) try: value = int(value) except (ValueError, TypeError): pass return value
[ "def", "_convert_stringval", "(", "value", ")", ":", "value", "=", "unicode", "(", "value", ")", "try", ":", "value", "=", "int", "(", "value", ")", "except", "(", "ValueError", ",", "TypeError", ")", ":", "pass", "return", "value" ]
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/lib-tk/ttk.py#L320-L328
hanpfei/chromium-net
392cc1fa3a8f92f42e4071ab6e674d8e0482f83f
third_party/catapult/telemetry/telemetry/internal/util/external_modules.py
python
ImportOptionalModule
(module)
Tries to import the desired module. Returns: The module if successful, None if not.
Tries to import the desired module.
[ "Tries", "to", "import", "the", "desired", "module", "." ]
def ImportOptionalModule(module): """Tries to import the desired module. Returns: The module if successful, None if not.""" try: return ImportRequiredModule(module) except ImportError: return None
[ "def", "ImportOptionalModule", "(", "module", ")", ":", "try", ":", "return", "ImportRequiredModule", "(", "module", ")", "except", "ImportError", ":", "return", "None" ]
https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/telemetry/telemetry/internal/util/external_modules.py#L48-L56
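The typical call-site pattern, assuming the catapult/telemetry tree is on sys.path:

from telemetry.internal.util import external_modules

cv2 = external_modules.ImportOptionalModule('cv2')
if cv2 is None:
    print("cv2 unavailable; skipping video analysis")  # degrade gracefully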
CRYTEK/CRYENGINE
232227c59a220cbbd311576f0fbeba7bb53b2a8c
Editor/Python/windows/Lib/site-packages/pip/_vendor/requests/sessions.py
python
Session.post
(self, url, data=None, json=None, **kwargs)
return self.request('POST', url, data=data, json=json, **kwargs)
Sends a POST request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`. :param json: (optional) json to send in the body of the :class:`Request`. :param \*\*kwargs: Optional arguments that ``request`` takes.
Sends a POST request. Returns :class:`Response` object.
[ "Sends", "a", "POST", "request", ".", "Returns", ":", "class", ":", "Response", "object", "." ]
def post(self, url, data=None, json=None, **kwargs): """Sends a POST request. Returns :class:`Response` object. :param url: URL for the new :class:`Request` object. :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`. :param json: (optional) json to send in the body of the :class:`Request`. :param \*\*kwargs: Optional arguments that ``request`` takes. """ return self.request('POST', url, data=data, json=json, **kwargs)
[ "def", "post", "(", "self", ",", "url", ",", "data", "=", "None", ",", "json", "=", "None", ",", "*", "*", "kwargs", ")", ":", "return", "self", ".", "request", "(", "'POST'", ",", "url", ",", "data", "=", "data", ",", "json", "=", "json", ",", "*", "*", "kwargs", ")" ]
https://github.com/CRYTEK/CRYENGINE/blob/232227c59a220cbbd311576f0fbeba7bb53b2a8c/Editor/Python/windows/Lib/site-packages/pip/_vendor/requests/sessions.py#L499-L508
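Everyday requests usage; httpbin.org is just an illustrative echo endpoint:

import requests

with requests.Session() as session:
    response = session.post("https://httpbin.org/post", json={"name": "demo"})
    print(response.status_code)     # 200
    print(response.json()["json"])  # {'name': 'demo'}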
openvinotoolkit/openvino
dedcbeafa8b84cccdc55ca64b8da516682b381c7
cmake/developer_package/cpplint/cpplint.py
python
CheckSpacingForFunctionCall
(filename, clean_lines, linenum, error)
Checks for the correctness of various spacing around function calls. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. error: The function to call with any errors found.
Checks for the correctness of various spacing around function calls.
[ "Checks", "for", "the", "correctness", "of", "various", "spacing", "around", "function", "calls", "." ]
def CheckSpacingForFunctionCall(filename, clean_lines, linenum, error): """Checks for the correctness of various spacing around function calls. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. error: The function to call with any errors found. """ line = clean_lines.elided[linenum] # Since function calls often occur inside if/for/while/switch # expressions - which have their own, more liberal conventions - we # first see if we should be looking inside such an expression for a # function call, to which we can apply more strict standards. fncall = line # if there's no control flow construct, look at whole line for pattern in (r'\bif\s*\((.*)\)\s*{', r'\bfor\s*\((.*)\)\s*{', r'\bwhile\s*\((.*)\)\s*[{;]', r'\bswitch\s*\((.*)\)\s*{'): match = Search(pattern, line) if match: fncall = match.group(1) # look inside the parens for function calls break # Except in if/for/while/switch, there should never be space # immediately inside parens (eg "f( 3, 4 )"). We make an exception # for nested parens ( (a+b) + c ). Likewise, there should never be # a space before a ( when it's a function argument. I assume it's a # function argument when the char before the whitespace is legal in # a function name (alnum + _) and we're not starting a macro. Also ignore # pointers and references to arrays and functions coz they're too tricky: # we use a very simple way to recognize these: # " (something)(maybe-something)" or # " (something)(maybe-something," or # " (something)[something]" # Note that we assume the contents of [] to be short enough that # they'll never need to wrap. if ( # Ignore control structures. not Search(r'\b(if|for|while|switch|return|new|delete|catch|sizeof)\b', fncall) and # Ignore pointers/references to functions. not Search(r' \([^)]+\)\([^)]*(\)|,$)', fncall) and # Ignore pointers/references to arrays. not Search(r' \([^)]+\)\[[^\]]+\]', fncall)): if Search(r'\w\s*\(\s(?!\s*\\$)', fncall): # a ( used for a fn call error(filename, linenum, 'whitespace/parens', 4, 'Extra space after ( in function call') elif Search(r'\(\s+(?!(\s*\\)|\()', fncall): error(filename, linenum, 'whitespace/parens', 2, 'Extra space after (') if (Search(r'\w\s+\(', fncall) and not Search(r'_{0,2}asm_{0,2}\s+_{0,2}volatile_{0,2}\s+\(', fncall) and not Search(r'#\s*define|typedef|using\s+\w+\s*=', fncall) and not Search(r'\w\s+\((\w+::)*\*\w+\)\(', fncall) and not Search(r'\bcase\s+\(', fncall)): # TODO(unknown): Space after an operator function seem to be a common # error, silence those for now by restricting them to highest verbosity. if Search(r'\boperator_*\b', line): error(filename, linenum, 'whitespace/parens', 0, 'Extra space before ( in function call') else: error(filename, linenum, 'whitespace/parens', 4, 'Extra space before ( in function call') # If the ) is followed only by a newline or a { + newline, assume it's # part of a control statement (if/while/etc), and don't complain if Search(r'[^)]\s+\)\s*[^{\s]', fncall): # If the closing parenthesis is preceded by only whitespaces, # try to give a more descriptive error message. if Search(r'^\s+\)', fncall): error(filename, linenum, 'whitespace/parens', 2, 'Closing ) should be moved to the previous line') else: error(filename, linenum, 'whitespace/parens', 2, 'Extra space before )')
[ "def", "CheckSpacingForFunctionCall", "(", "filename", ",", "clean_lines", ",", "linenum", ",", "error", ")", ":", "line", "=", "clean_lines", ".", "elided", "[", "linenum", "]", "# Since function calls often occur inside if/for/while/switch", "# expressions - which have their own, more liberal conventions - we", "# first see if we should be looking inside such an expression for a", "# function call, to which we can apply more strict standards.", "fncall", "=", "line", "# if there's no control flow construct, look at whole line", "for", "pattern", "in", "(", "r'\\bif\\s*\\((.*)\\)\\s*{'", ",", "r'\\bfor\\s*\\((.*)\\)\\s*{'", ",", "r'\\bwhile\\s*\\((.*)\\)\\s*[{;]'", ",", "r'\\bswitch\\s*\\((.*)\\)\\s*{'", ")", ":", "match", "=", "Search", "(", "pattern", ",", "line", ")", "if", "match", ":", "fncall", "=", "match", ".", "group", "(", "1", ")", "# look inside the parens for function calls", "break", "# Except in if/for/while/switch, there should never be space", "# immediately inside parens (eg \"f( 3, 4 )\"). We make an exception", "# for nested parens ( (a+b) + c ). Likewise, there should never be", "# a space before a ( when it's a function argument. I assume it's a", "# function argument when the char before the whitespace is legal in", "# a function name (alnum + _) and we're not starting a macro. Also ignore", "# pointers and references to arrays and functions coz they're too tricky:", "# we use a very simple way to recognize these:", "# \" (something)(maybe-something)\" or", "# \" (something)(maybe-something,\" or", "# \" (something)[something]\"", "# Note that we assume the contents of [] to be short enough that", "# they'll never need to wrap.", "if", "(", "# Ignore control structures.", "not", "Search", "(", "r'\\b(if|for|while|switch|return|new|delete|catch|sizeof)\\b'", ",", "fncall", ")", "and", "# Ignore pointers/references to functions.", "not", "Search", "(", "r' \\([^)]+\\)\\([^)]*(\\)|,$)'", ",", "fncall", ")", "and", "# Ignore pointers/references to arrays.", "not", "Search", "(", "r' \\([^)]+\\)\\[[^\\]]+\\]'", ",", "fncall", ")", ")", ":", "if", "Search", "(", "r'\\w\\s*\\(\\s(?!\\s*\\\\$)'", ",", "fncall", ")", ":", "# a ( used for a fn call", "error", "(", "filename", ",", "linenum", ",", "'whitespace/parens'", ",", "4", ",", "'Extra space after ( in function call'", ")", "elif", "Search", "(", "r'\\(\\s+(?!(\\s*\\\\)|\\()'", ",", "fncall", ")", ":", "error", "(", "filename", ",", "linenum", ",", "'whitespace/parens'", ",", "2", ",", "'Extra space after ('", ")", "if", "(", "Search", "(", "r'\\w\\s+\\('", ",", "fncall", ")", "and", "not", "Search", "(", "r'_{0,2}asm_{0,2}\\s+_{0,2}volatile_{0,2}\\s+\\('", ",", "fncall", ")", "and", "not", "Search", "(", "r'#\\s*define|typedef|using\\s+\\w+\\s*='", ",", "fncall", ")", "and", "not", "Search", "(", "r'\\w\\s+\\((\\w+::)*\\*\\w+\\)\\('", ",", "fncall", ")", "and", "not", "Search", "(", "r'\\bcase\\s+\\('", ",", "fncall", ")", ")", ":", "# TODO(unknown): Space after an operator function seem to be a common", "# error, silence those for now by restricting them to highest verbosity.", "if", "Search", "(", "r'\\boperator_*\\b'", ",", "line", ")", ":", "error", "(", "filename", ",", "linenum", ",", "'whitespace/parens'", ",", "0", ",", "'Extra space before ( in function call'", ")", "else", ":", "error", "(", "filename", ",", "linenum", ",", "'whitespace/parens'", ",", "4", ",", "'Extra space before ( in function call'", ")", "# If the ) is followed only by a newline or a { + newline, assume it's", "# part of a control 
statement (if/while/etc), and don't complain", "if", "Search", "(", "r'[^)]\\s+\\)\\s*[^{\\s]'", ",", "fncall", ")", ":", "# If the closing parenthesis is preceded by only whitespaces,", "# try to give a more descriptive error message.", "if", "Search", "(", "r'^\\s+\\)'", ",", "fncall", ")", ":", "error", "(", "filename", ",", "linenum", ",", "'whitespace/parens'", ",", "2", ",", "'Closing ) should be moved to the previous line'", ")", "else", ":", "error", "(", "filename", ",", "linenum", ",", "'whitespace/parens'", ",", "2", ",", "'Extra space before )'", ")" ]
https://github.com/openvinotoolkit/openvino/blob/dedcbeafa8b84cccdc55ca64b8da516682b381c7/cmake/developer_package/cpplint/cpplint.py#L3177-L3251
pmq20/node-packer
12c46c6e44fbc14d9ee645ebd17d5296b324f7e0
lts/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py
python
MakefileWriter.WriteMacBundleResources
(self, resources, bundle_deps)
Writes Makefile code for 'mac_bundle_resources'.
Writes Makefile code for 'mac_bundle_resources'.
[ "Writes", "Makefile", "code", "for", "mac_bundle_resources", "." ]
def WriteMacBundleResources(self, resources, bundle_deps): """Writes Makefile code for 'mac_bundle_resources'.""" self.WriteLn('### Generated for mac_bundle_resources') for output, res in gyp.xcode_emulation.GetMacBundleResources( generator_default_variables['PRODUCT_DIR'], self.xcode_settings, [Sourceify(self.Absolutify(r)) for r in resources]): _, ext = os.path.splitext(output) if ext != '.xcassets': # Make does not support '.xcassets' emulation. self.WriteDoCmd([output], [res], 'mac_tool,,,copy-bundle-resource', part_of_all=True) bundle_deps.append(output)
[ "def", "WriteMacBundleResources", "(", "self", ",", "resources", ",", "bundle_deps", ")", ":", "self", ".", "WriteLn", "(", "'### Generated for mac_bundle_resources'", ")", "for", "output", ",", "res", "in", "gyp", ".", "xcode_emulation", ".", "GetMacBundleResources", "(", "generator_default_variables", "[", "'PRODUCT_DIR'", "]", ",", "self", ".", "xcode_settings", ",", "[", "Sourceify", "(", "self", ".", "Absolutify", "(", "r", ")", ")", "for", "r", "in", "resources", "]", ")", ":", "_", ",", "ext", "=", "os", ".", "path", ".", "splitext", "(", "output", ")", "if", "ext", "!=", "'.xcassets'", ":", "# Make does not support '.xcassets' emulation.", "self", ".", "WriteDoCmd", "(", "[", "output", "]", ",", "[", "res", "]", ",", "'mac_tool,,,copy-bundle-resource'", ",", "part_of_all", "=", "True", ")", "bundle_deps", ".", "append", "(", "output", ")" ]
https://github.com/pmq20/node-packer/blob/12c46c6e44fbc14d9ee645ebd17d5296b324f7e0/lts/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py#L1155-L1167
potassco/clingo
e0c91d8f95cc28de1c480a871f9c97c30de83d40
libpyclingo/clingo/backend.py
python
Observer.begin_step
(self)
Marks the beginning of a block of directives passed to the solver.
Marks the beginning of a block of directives passed to the solver.
[ "Marks", "the", "beginning", "of", "a", "block", "of", "directives", "passed", "to", "the", "solver", "." ]
def begin_step(self) -> None: ''' Marks the beginning of a block of directives passed to the solver. '''
[ "def", "begin_step", "(", "self", ")", "->", "None", ":" ]
https://github.com/potassco/clingo/blob/e0c91d8f95cc28de1c480a871f9c97c30de83d40/libpyclingo/clingo/backend.py#L97-L100
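Observers are registered on a Control object; a sketch against the clingo Python API (clingo >= 5.5 assumed):

from clingo.control import Control
from clingo.backend import Observer

class StepLogger(Observer):
    def begin_step(self) -> None:
        print("begin_step: a new block of directives is being passed to the solver")

ctl = Control()
ctl.register_observer(StepLogger())
ctl.add("base", [], "a. b :- a.")
ctl.ground([("base", [])])  # grounding emits directives, triggering begin_step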
facebookresearch/ELF
1f790173095cd910976d9f651b80beb872ec5d12
vendor/pybind11/tools/clang/cindex.py
python
Cursor.get_usr
(self)
return conf.lib.clang_getCursorUSR(self)
Return the Unified Symbol Resolution (USR) for the entity referenced by the given cursor (or None). A Unified Symbol Resolution (USR) is a string that identifies a particular entity (function, class, variable, etc.) within a program. USRs can be compared across translation units to determine, e.g., when references in one translation unit refer to an entity defined in another translation unit.
Return the Unified Symbol Resolution (USR) for the entity referenced by the given cursor (or None).
[ "Return", "the", "Unified", "Symbol", "Resolution", "(", "USR", ")", "for", "the", "entity", "referenced", "by", "the", "given", "cursor", "(", "or", "None", ")", "." ]
def get_usr(self): """Return the Unified Symbol Resolution (USR) for the entity referenced by the given cursor (or None). A Unified Symbol Resolution (USR) is a string that identifies a particular entity (function, class, variable, etc.) within a program. USRs can be compared across translation units to determine, e.g., when references in one translation unit refer to an entity defined in another translation unit.""" return conf.lib.clang_getCursorUSR(self)
[ "def", "get_usr", "(", "self", ")", ":", "return", "conf", ".", "lib", ".", "clang_getCursorUSR", "(", "self", ")" ]
https://github.com/facebookresearch/ELF/blob/1f790173095cd910976d9f651b80beb872ec5d12/vendor/pybind11/tools/clang/cindex.py#L1381-L1390
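A sketch with the clang cindex bindings; the source file is hypothetical and libclang must be resolvable:

from clang.cindex import Index, CursorKind

index = Index.create()
tu = index.parse("demo.c")  # hypothetical translation unit
for cursor in tu.cursor.walk_preorder():
    if cursor.kind == CursorKind.FUNCTION_DECL:
        print(cursor.spelling, cursor.get_usr())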
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/x86/toolchain/lib/python2.7/imaplib.py
python
IMAP4.expunge
(self)
return self._untagged_response(typ, dat, name)
Permanently remove deleted items from selected mailbox. Generates 'EXPUNGE' response for each deleted message. (typ, [data]) = <instance>.expunge() 'data' is list of 'EXPUNGE'd message numbers in order received.
Permanently remove deleted items from selected mailbox.
[ "Permanently", "remove", "deleted", "items", "from", "selected", "mailbox", "." ]
def expunge(self): """Permanently remove deleted items from selected mailbox. Generates 'EXPUNGE' response for each deleted message. (typ, [data]) = <instance>.expunge() 'data' is list of 'EXPUNGE'd message numbers in order received. """ name = 'EXPUNGE' typ, dat = self._simple_command(name) return self._untagged_response(typ, dat, name)
[ "def", "expunge", "(", "self", ")", ":", "name", "=", "'EXPUNGE'", "typ", ",", "dat", "=", "self", ".", "_simple_command", "(", "name", ")", "return", "self", ".", "_untagged_response", "(", "typ", ",", "dat", ",", "name", ")" ]
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/x86/toolchain/lib/python2.7/imaplib.py#L418-L429
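A usage sketch of the (typ, data) convention the docstring above describes; the host and credentials are placeholders. A message must carry the \Deleted flag (set via store) before expunge will remove it.

import imaplib

conn = imaplib.IMAP4_SSL('imap.example.com')   # placeholder host
conn.login('user', 'password')                 # placeholder credentials
conn.select('INBOX')
conn.store('1', '+FLAGS', r'(\Deleted)')       # flag message 1 for deletion
typ, data = conn.expunge()
print(typ, data)   # e.g. ('OK', [b'1']) -- message numbers in order received
conn.logout()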
adobe/chromium
cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7
tools/site_compare/scrapers/chrome/chrome01970.py
python
Scrape
(urls, outdir, size, pos, timeout=20, **kwargs)
return chromebase.Scrape(urls, outdir, size, pos, timeout, kwargs)
Invoke a browser, send it to a series of URLs, and save its output. Args: urls: list of URLs to scrape outdir: directory to place output size: size of browser window to use pos: position of browser window timeout: amount of time to wait for page to load kwargs: miscellaneous keyword args Returns: None if succeeded, else an error code
Invoke a browser, send it to a series of URLs, and save its output.
[ "Invoke", "a", "browser", "send", "it", "to", "a", "series", "of", "URLs", "and", "save", "its", "output", "." ]
def Scrape(urls, outdir, size, pos, timeout=20, **kwargs): """Invoke a browser, send it to a series of URLs, and save its output. Args: urls: list of URLs to scrape outdir: directory to place output size: size of browser window to use pos: position of browser window timeout: amount of time to wait for page to load kwargs: miscellaneous keyword args Returns: None if succeeded, else an error code """ chromebase.GetChromeRenderPane = GetChromeRenderPane return chromebase.Scrape(urls, outdir, size, pos, timeout, kwargs)
[ "def", "Scrape", "(", "urls", ",", "outdir", ",", "size", ",", "pos", ",", "timeout", "=", "20", ",", "*", "*", "kwargs", ")", ":", "chromebase", ".", "GetChromeRenderPane", "=", "GetChromeRenderPane", "return", "chromebase", ".", "Scrape", "(", "urls", ",", "outdir", ",", "size", ",", "pos", ",", "timeout", ",", "kwargs", ")" ]
https://github.com/adobe/chromium/blob/cfe5bf0b51b1f6b9fe239c2a3c2f2364da9967d7/tools/site_compare/scrapers/chrome/chrome01970.py#L19-L35
llvm/llvm-project
ffa6262cb4e2a335d26416fad39a581b4f98c5f4
lldb/packages/Python/lldbsuite/support/seven.py
python
bitcast_to_bytes
(s)
return s if six.PY2 else s.encode("latin1")
Take a string and return a string(PY2) or a bytes(PY3) object. The returned object contains the exact same bytes as the input string. (latin1 <-> unicode transformation is an identity operation for the first 256 code points).
Take a string and return a string(PY2) or a bytes(PY3) object. The returned object contains the exact same bytes as the input string. (latin1 <-> unicode transformation is an identity operation for the first 256 code points).
[ "Take", "a", "string", "and", "return", "a", "string", "(", "PY2", ")", "or", "a", "bytes", "(", "PY3", ")", "object", ".", "The", "returned", "object", "contains", "the", "exact", "same", "bytes", "as", "the", "input", "string", ".", "(", "latin1", "<", "-", ">", "unicode", "transformation", "is", "an", "identity", "operation", "for", "the", "first", "256", "code", "points", ")", "." ]
def bitcast_to_bytes(s): """ Take a string and return a string(PY2) or a bytes(PY3) object. The returned object contains the exact same bytes as the input string. (latin1 <-> unicode transformation is an identity operation for the first 256 code points). """ return s if six.PY2 else s.encode("latin1")
[ "def", "bitcast_to_bytes", "(", "s", ")", ":", "return", "s", "if", "six", ".", "PY2", "else", "s", ".", "encode", "(", "\"latin1\"", ")" ]
https://github.com/llvm/llvm-project/blob/ffa6262cb4e2a335d26416fad39a581b4f98c5f4/lldb/packages/Python/lldbsuite/support/seven.py#L37-L44
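A quick check of the latin1 identity property the docstring claims; only the six dependency is assumed, mirroring the original.

import six

def bitcast_to_bytes(s):
    return s if six.PY2 else s.encode("latin1")

# Code points 0..255 map one-to-one onto byte values 0..255.
text = "".join(chr(i) for i in range(256))
assert bitcast_to_bytes(text) == bytes(range(256))   # holds on Python 3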
CRYTEK/CRYENGINE
232227c59a220cbbd311576f0fbeba7bb53b2a8c
Code/Tools/waf-1.7.13/waflib/Tools/c_preproc.py
python
reduce_eval
(lst)
return (NUM, num)
Take a list of tokens and output true or false for #if/#elif conditions. :param lst: a list of tokens :type lst: list of tuple(token, value) :return: a token :rtype: tuple(NUM, int)
Take a list of tokens and output true or false for #if/#elif conditions.
[ "Take", "a", "list", "of", "tokens", "and", "output", "true", "or", "false", "for", "#if", "/", "#elif", "conditions", "." ]
def reduce_eval(lst): """ Take a list of tokens and output true or false for #if/#elif conditions. :param lst: a list of tokens :type lst: list of tuple(token, value) :return: a token :rtype: tuple(NUM, int) """ num, lst = get_term(lst) return (NUM, num)
[ "def", "reduce_eval", "(", "lst", ")", ":", "num", ",", "lst", "=", "get_term", "(", "lst", ")", "return", "(", "NUM", ",", "num", ")" ]
https://github.com/CRYTEK/CRYENGINE/blob/232227c59a220cbbd311576f0fbeba7bb53b2a8c/Code/Tools/waf-1.7.13/waflib/Tools/c_preproc.py#L351-L361
mantidproject/mantid
03deeb89254ec4289edb8771e0188c2090a02f32
scripts/Inelastic/Direct/DirectEnergyConversion.py
python
DirectEnergyConversion._normalize_to_monitor2
(self,run,old_name, range_offset=0.0,external_monitor_ws=None)
return ('monitor-2',old_name)
Helper method implementing normalize_to_monitor_2
Helper method implementing normalize_to_monitor_2
[ "Helper", "method", "implementing", "normalize_to_monitor_2" ]
def _normalize_to_monitor2(self,run,old_name, range_offset=0.0,external_monitor_ws=None): """ Helper method implementing normalize_to_monitor_2 """ # get monitor's workspace separate_monitors = run.is_monws_separate() if external_monitor_ws: separate_monitors = True mon_ws = external_monitor_ws else: mon_ws = run.get_monitors_ws() if self.__in_white_normalization: # we normalize wb integrals by current separately as they often do not # have monitors or are in fact wb workspace with some special ei self.normalise(run,'current',range_offset) ws = run.get_workspace() new_name = ws.name() return ('current',new_name) else: if not mon_ws: # no monitors ws = run.get_workspace() raise RuntimeError('Normalize by monitor-2:: Workspace {0} for run {1} does not have monitors in it' .format(ws.name(),run.run_number())) # kwargs = {'NormFactorWS':'NormMon2_WS' + mon_ws.name()} mon_spect = self.prop_man.mon2_norm_spec mon_index = int(mon_ws.getIndexFromSpectrumNumber(mon_spect)) if separate_monitors: kwargs['MonitorWorkspace'] = mon_ws kwargs['MonitorWorkspaceIndex'] = mon_index else: kwargs['MonitorSpectrum'] = mon_spect #Find TOF range, correspondent to incident energy monitor peak if self._mon2_norm_time_range: # range has been found during ei-calculations norm_range = self._mon2_norm_time_range range_min = norm_range[0] + range_offset range_max = norm_range[1] + range_offset self._mon2_norm_time_range = None else: mon_ws_name = mon_ws.name() #monitor's workspace and detector's workspace are e if mon_ws_name.find('_shifted') != -1: # monitor-2 normalization ranges have to be identified before the # instrument is shifted raise RuntimeError("""Instrument have been shifted but no time range has been identified. Monitor-2 normalization can not be performed """) else: # instrument and workspace shifted, so TOF will be calculated wrt # shifted instrument energy_rage = self.mon2_norm_energy_range TOF_range = self.get_TOF_for_energies(mon_ws,energy_rage,[mon_spect],None,self._debug_mode) range_min = TOF_range[0] range_max = TOF_range[1] # Normalize to monitor 2 NormaliseToMonitor(InputWorkspace=old_name,OutputWorkspace=old_name,IntegrationRangeMin=range_min, IntegrationRangeMax=range_max,IncludePartialBins=True,**kwargs) norm_ws_name = kwargs['NormFactorWS'] norm_mon2ws = mtd[norm_ws_name] norm_factor = norm_mon2ws.dataY(0) if len(norm_factor)>1: raise RuntimeError("Can not normalize by monitor spectra. Normalization range necessary") AddSampleLog(old_name,LogName='NormalizationFactor',LogText=str(norm_factor[0]),LogType='Number') if not self._debug_mode: DeleteWorkspace(norm_ws_name) return ('monitor-2',old_name)
[ "def", "_normalize_to_monitor2", "(", "self", ",", "run", ",", "old_name", ",", "range_offset", "=", "0.0", ",", "external_monitor_ws", "=", "None", ")", ":", "# get monitor's workspace", "separate_monitors", "=", "run", ".", "is_monws_separate", "(", ")", "if", "external_monitor_ws", ":", "separate_monitors", "=", "True", "mon_ws", "=", "external_monitor_ws", "else", ":", "mon_ws", "=", "run", ".", "get_monitors_ws", "(", ")", "if", "self", ".", "__in_white_normalization", ":", "# we normalize wb integrals by current separately as they often do not", "# have monitors or are in fact wb workspace with some special ei", "self", ".", "normalise", "(", "run", ",", "'current'", ",", "range_offset", ")", "ws", "=", "run", ".", "get_workspace", "(", ")", "new_name", "=", "ws", ".", "name", "(", ")", "return", "(", "'current'", ",", "new_name", ")", "else", ":", "if", "not", "mon_ws", ":", "# no monitors", "ws", "=", "run", ".", "get_workspace", "(", ")", "raise", "RuntimeError", "(", "'Normalize by monitor-2:: Workspace {0} for run {1} does not have monitors in it'", ".", "format", "(", "ws", ".", "name", "(", ")", ",", "run", ".", "run_number", "(", ")", ")", ")", "#", "kwargs", "=", "{", "'NormFactorWS'", ":", "'NormMon2_WS'", "+", "mon_ws", ".", "name", "(", ")", "}", "mon_spect", "=", "self", ".", "prop_man", ".", "mon2_norm_spec", "mon_index", "=", "int", "(", "mon_ws", ".", "getIndexFromSpectrumNumber", "(", "mon_spect", ")", ")", "if", "separate_monitors", ":", "kwargs", "[", "'MonitorWorkspace'", "]", "=", "mon_ws", "kwargs", "[", "'MonitorWorkspaceIndex'", "]", "=", "mon_index", "else", ":", "kwargs", "[", "'MonitorSpectrum'", "]", "=", "mon_spect", "#Find TOF range, correspondent to incident energy monitor peak", "if", "self", ".", "_mon2_norm_time_range", ":", "# range has been found during ei-calculations", "norm_range", "=", "self", ".", "_mon2_norm_time_range", "range_min", "=", "norm_range", "[", "0", "]", "+", "range_offset", "range_max", "=", "norm_range", "[", "1", "]", "+", "range_offset", "self", ".", "_mon2_norm_time_range", "=", "None", "else", ":", "mon_ws_name", "=", "mon_ws", ".", "name", "(", ")", "#monitor's workspace and detector's workspace are e", "if", "mon_ws_name", ".", "find", "(", "'_shifted'", ")", "!=", "-", "1", ":", "# monitor-2 normalization ranges have to be identified before the", "# instrument is shifted", "raise", "RuntimeError", "(", "\"\"\"Instrument have been shifted but no time range has been identified.\n Monitor-2 normalization can not be performed \"\"\"", ")", "else", ":", "# instrument and workspace shifted, so TOF will be calculated wrt", "# shifted instrument", "energy_rage", "=", "self", ".", "mon2_norm_energy_range", "TOF_range", "=", "self", ".", "get_TOF_for_energies", "(", "mon_ws", ",", "energy_rage", ",", "[", "mon_spect", "]", ",", "None", ",", "self", ".", "_debug_mode", ")", "range_min", "=", "TOF_range", "[", "0", "]", "range_max", "=", "TOF_range", "[", "1", "]", "# Normalize to monitor 2", "NormaliseToMonitor", "(", "InputWorkspace", "=", "old_name", ",", "OutputWorkspace", "=", "old_name", ",", "IntegrationRangeMin", "=", "range_min", ",", "IntegrationRangeMax", "=", "range_max", ",", "IncludePartialBins", "=", "True", ",", "*", "*", "kwargs", ")", "norm_ws_name", "=", "kwargs", "[", "'NormFactorWS'", "]", "norm_mon2ws", "=", "mtd", "[", "norm_ws_name", "]", "norm_factor", "=", "norm_mon2ws", ".", "dataY", "(", "0", ")", "if", "len", "(", "norm_factor", ")", ">", "1", ":", "raise", "RuntimeError", "(", "\"Can not normalize by 
monitor spectra. Normalization range necessary\"", ")", "AddSampleLog", "(", "old_name", ",", "LogName", "=", "'NormalizationFactor'", ",", "LogText", "=", "str", "(", "norm_factor", "[", "0", "]", ")", ",", "LogType", "=", "'Number'", ")", "if", "not", "self", ".", "_debug_mode", ":", "DeleteWorkspace", "(", "norm_ws_name", ")", "return", "(", "'monitor-2'", ",", "old_name", ")" ]
https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/scripts/Inelastic/Direct/DirectEnergyConversion.py#L1058-L1126
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/msw/_windows.py
python
PrintDialogData.SetSelection
(*args, **kwargs)
return _windows_.PrintDialogData_SetSelection(*args, **kwargs)
SetSelection(self, bool flag)
SetSelection(self, bool flag)
[ "SetSelection", "(", "self", "bool", "flag", ")" ]
def SetSelection(*args, **kwargs): """SetSelection(self, bool flag)""" return _windows_.PrintDialogData_SetSelection(*args, **kwargs)
[ "def", "SetSelection", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_windows_", ".", "PrintDialogData_SetSelection", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/_windows.py#L5110-L5112
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_cocoa/propgrid.py
python
PropertyGrid.GetInternalFlags
(*args, **kwargs)
return _propgrid.PropertyGrid_GetInternalFlags(*args, **kwargs)
GetInternalFlags(self) -> long
GetInternalFlags(self) -> long
[ "GetInternalFlags", "(", "self", ")", "-", ">", "long" ]
def GetInternalFlags(*args, **kwargs): """GetInternalFlags(self) -> long""" return _propgrid.PropertyGrid_GetInternalFlags(*args, **kwargs)
[ "def", "GetInternalFlags", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_propgrid", ".", "PropertyGrid_GetInternalFlags", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/propgrid.py#L2400-L2402
BVLC/caffe
9b891540183ddc834a02b2bd81b31afae71b2153
python/caffe/io.py
python
Transformer.set_raw_scale
(self, in_, scale)
Set the scale of raw features s.t. the input blob = input * scale. While Python represents images in [0, 1], certain Caffe models like CaffeNet and AlexNet represent images in [0, 255] so the raw_scale of these models must be 255. Parameters ---------- in_ : which input to assign this scale factor scale : scale coefficient
Set the scale of raw features s.t. the input blob = input * scale. While Python represents images in [0, 1], certain Caffe models like CaffeNet and AlexNet represent images in [0, 255] so the raw_scale of these models must be 255.
[ "Set", "the", "scale", "of", "raw", "features", "s", ".", "t", ".", "the", "input", "blob", "=", "input", "*", "scale", ".", "While", "Python", "represents", "images", "in", "[", "0", "1", "]", "certain", "Caffe", "models", "like", "CaffeNet", "and", "AlexNet", "represent", "images", "in", "[", "0", "255", "]", "so", "the", "raw_scale", "of", "these", "models", "must", "be", "255", "." ]
def set_raw_scale(self, in_, scale): """ Set the scale of raw features s.t. the input blob = input * scale. While Python represents images in [0, 1], certain Caffe models like CaffeNet and AlexNet represent images in [0, 255] so the raw_scale of these models must be 255. Parameters ---------- in_ : which input to assign this scale factor scale : scale coefficient """ self.__check_input(in_) self.raw_scale[in_] = scale
[ "def", "set_raw_scale", "(", "self", ",", "in_", ",", "scale", ")", ":", "self", ".", "__check_input", "(", "in_", ")", "self", ".", "raw_scale", "[", "in_", "]", "=", "scale" ]
https://github.com/BVLC/caffe/blob/9b891540183ddc834a02b2bd81b31afae71b2153/python/caffe/io.py#L222-L235
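A hedged configuration sketch for the Transformer; the blob name 'data' and its shape are hypothetical rather than taken from a real deploy prototxt. Since caffe.io.load_image yields images in [0, 1], raw_scale=255 rescales them into the range CaffeNet-style models expect.

import caffe

transformer = caffe.io.Transformer({'data': (1, 3, 227, 227)})  # hypothetical blob
transformer.set_transpose('data', (2, 0, 1))   # HxWxC -> CxHxW
transformer.set_raw_scale('data', 255)         # [0, 1] images -> [0, 255]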
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/pandas/py3/pandas/core/generic.py
python
NDFrame._repr_data_resource_
(self)
Not a real Jupyter special repr method, but we use the same naming convention.
Not a real Jupyter special repr method, but we use the same naming convention.
[ "Not", "a", "real", "Jupyter", "special", "repr", "method", "but", "we", "use", "the", "same", "naming", "convention", "." ]
def _repr_data_resource_(self): """ Not a real Jupyter special repr method, but we use the same naming convention. """ if config.get_option("display.html.table_schema"): data = self.head(config.get_option("display.max_rows")) as_json = data.to_json(orient="table") as_json = cast(str, as_json) return json.loads(as_json, object_pairs_hook=collections.OrderedDict)
[ "def", "_repr_data_resource_", "(", "self", ")", ":", "if", "config", ".", "get_option", "(", "\"display.html.table_schema\"", ")", ":", "data", "=", "self", ".", "head", "(", "config", ".", "get_option", "(", "\"display.max_rows\"", ")", ")", "as_json", "=", "data", ".", "to_json", "(", "orient", "=", "\"table\"", ")", "as_json", "=", "cast", "(", "str", ",", "as_json", ")", "return", "json", ".", "loads", "(", "as_json", ",", "object_pairs_hook", "=", "collections", ".", "OrderedDict", ")" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/pandas/py3/pandas/core/generic.py#L2114-L2124
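An illustrative poke at the hook above: with display.html.table_schema enabled it returns the parsed 'table'-orient JSON. The leading underscore marks it as private pandas API, so treat this as a demonstration rather than a supported pattern.

import pandas as pd

pd.set_option("display.html.table_schema", True)
df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
payload = df._repr_data_resource_()   # dict with 'schema' and 'data' keys
print([f["name"] for f in payload["schema"]["fields"]])   # ['index', 'a', 'b']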
PaddlePaddle/Paddle
1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c
python/paddle/distributed/fleet/base/role_maker.py
python
PaddleCloudRoleMaker._get_stage_id
(self)
return self._stage_id
return stage id of current heter worker
return stage id of current heter worker
[ "return", "stage", "id", "of", "current", "heter", "worker" ]
def _get_stage_id(self): """ return stage id of current heter worker """ if not self._role_is_generated: self._generate_role() return self._stage_id
[ "def", "_get_stage_id", "(", "self", ")", ":", "if", "not", "self", ".", "_role_is_generated", ":", "self", ".", "_generate_role", "(", ")", "return", "self", ".", "_stage_id" ]
https://github.com/PaddlePaddle/Paddle/blob/1252f4bb3e574df80aa6d18c7ddae1b3a90bd81c/python/paddle/distributed/fleet/base/role_maker.py#L569-L575
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/gtk/richtext.py
python
RichTextCtrl.AddParagraph
(*args, **kwargs)
return _richtext.RichTextCtrl_AddParagraph(*args, **kwargs)
AddParagraph(self, String text) -> RichTextRange Add a new paragraph of text to the end of the buffer
AddParagraph(self, String text) -> RichTextRange
[ "AddParagraph", "(", "self", "String", "text", ")", "-", ">", "RichTextRange" ]
def AddParagraph(*args, **kwargs): """ AddParagraph(self, String text) -> RichTextRange Add a new paragraph of text to the end of the buffer """ return _richtext.RichTextCtrl_AddParagraph(*args, **kwargs)
[ "def", "AddParagraph", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_richtext", ".", "RichTextCtrl_AddParagraph", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/richtext.py#L3687-L3693
Ifsttar/I-Simpa
2283385f4cac769a92e265edabb9c79cb6c42d03
currentRelease/SystemScript/graphy/backends/google_chart_api/encoders.py
python
BaseChartEncoder.Img
(self, width, height)
return tag % (url, width, height)
Get an image tag for our graph.
Get an image tag for our graph.
[ "Get", "an", "image", "tag", "for", "our", "graph", "." ]
def Img(self, width, height): """Get an image tag for our graph.""" url = self.Url(width, height, use_html_entities=True) tag = '<img src="%s" width="%s" height="%s" alt="chart"/>' return tag % (url, width, height)
[ "def", "Img", "(", "self", ",", "width", ",", "height", ")", ":", "url", "=", "self", ".", "Url", "(", "width", ",", "height", ",", "use_html_entities", "=", "True", ")", "tag", "=", "'<img src=\"%s\" width=\"%s\" height=\"%s\" alt=\"chart\"/>'", "return", "tag", "%", "(", "url", ",", "width", ",", "height", ")" ]
https://github.com/Ifsttar/I-Simpa/blob/2283385f4cac769a92e265edabb9c79cb6c42d03/currentRelease/SystemScript/graphy/backends/google_chart_api/encoders.py#L67-L71
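A hedged sketch of how an encoder's Img is normally reached in Graphy, via a chart's display attribute; the google_chart_api import path is an assumption about how this bundled copy is laid out.

from graphy.backends import google_chart_api

chart = google_chart_api.LineChart([10, 20, 15, 30])
# Img() wraps Url(..., use_html_entities=True) in an <img> tag.
print(chart.display.Img(400, 150))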
tiny-dnn/tiny-dnn
c0f576f5cb7b35893f62127cb7aec18f77a3bcc5
third_party/gemmlowp/meta/generators/gemm_MxNxK.py
python
GenerateGemmSwitch3
(emitter, output_type, aligned, m_mod, n_mod)
Third level of main switch, choose optimized version on depth leftover.
Third level of main switch, choose optimized version on depth leftover.
[ "Third", "level", "of", "main", "switch", "choose", "optimized", "version", "on", "depth", "leftover", "." ]
def GenerateGemmSwitch3(emitter, output_type, aligned, m_mod, n_mod): """Third level of main switch, choose optimized version on depth leftover.""" emitter.EmitSwitch('k % 8') for leftovers in range(0, 8): emitter.EmitCase(leftovers) emitter.PushIndent() GenerateGemmCall(emitter, output_type, aligned, m_mod, n_mod, leftovers) emitter.EmitBreak() emitter.PopIndent() emitter.EmitSwitchEnd()
[ "def", "GenerateGemmSwitch3", "(", "emitter", ",", "output_type", ",", "aligned", ",", "m_mod", ",", "n_mod", ")", ":", "emitter", ".", "EmitSwitch", "(", "'k % 8'", ")", "for", "leftovers", "in", "range", "(", "0", ",", "8", ")", ":", "emitter", ".", "EmitCase", "(", "leftovers", ")", "emitter", ".", "PushIndent", "(", ")", "GenerateGemmCall", "(", "emitter", ",", "output_type", ",", "aligned", ",", "m_mod", ",", "n_mod", ",", "leftovers", ")", "emitter", ".", "EmitBreak", "(", ")", "emitter", ".", "PopIndent", "(", ")", "emitter", ".", "EmitSwitchEnd", "(", ")" ]
https://github.com/tiny-dnn/tiny-dnn/blob/c0f576f5cb7b35893f62127cb7aec18f77a3bcc5/third_party/gemmlowp/meta/generators/gemm_MxNxK.py#L245-L256
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/lib2to3/pytree.py
python
Base.__new__
(cls, *args, **kwds)
return object.__new__(cls)
Constructor that prevents Base from being instantiated.
Constructor that prevents Base from being instantiated.
[ "Constructor", "that", "prevents", "Base", "from", "being", "instantiated", "." ]
def __new__(cls, *args, **kwds): """Constructor that prevents Base from being instantiated.""" assert cls is not Base, "Cannot instantiate Base" return object.__new__(cls)
[ "def", "__new__", "(", "cls", ",", "*", "args", ",", "*", "*", "kwds", ")", ":", "assert", "cls", "is", "not", "Base", ",", "\"Cannot instantiate Base\"", "return", "object", ".", "__new__", "(", "cls", ")" ]
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/lib2to3/pytree.py#L50-L53
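A self-contained sketch of the pattern: the assert in __new__ rejects direct instantiation of the base class while leaving subclasses untouched.

class Base(object):
    def __new__(cls, *args, **kwds):
        """Constructor that prevents Base from being instantiated."""
        assert cls is not Base, "Cannot instantiate Base"
        return object.__new__(cls)

class Leaf(Base):
    pass

Leaf()    # fine: cls is Leaf, not Base
try:
    Base()
except AssertionError as exc:
    print(exc)   # Cannot instantiate Base

Note that assert-based guards vanish under python -O, a tradeoff the original presumably accepts since the check is a developer aid rather than a security boundary.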
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_cocoa/_windows.py
python
GenericProgressDialog.Resume
(*args, **kwargs)
return _windows_.GenericProgressDialog_Resume(*args, **kwargs)
Resume(self) Can be used to continue with the dialog, after the user had chosen to abort.
Resume(self)
[ "Resume", "(", "self", ")" ]
def Resume(*args, **kwargs): """ Resume(self) Can be used to continue with the dialog, after the user had chosen to abort. """ return _windows_.GenericProgressDialog_Resume(*args, **kwargs)
[ "def", "Resume", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_windows_", ".", "GenericProgressDialog_Resume", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/_windows.py#L3725-L3732
trailofbits/llvm-sanitizer-tutorial
d29dfeec7f51fbf234fd0080f28f2b30cd0b6e99
llvm/projects/compiler-rt/lib/sanitizer_common/scripts/cpplint.py
python
_NestingState.Update
(self, filename, clean_lines, linenum, error)
Update nesting state with current line. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. error: The function to call with any errors found.
Update nesting state with current line.
[ "Update", "nesting", "state", "with", "current", "line", "." ]
def Update(self, filename, clean_lines, linenum, error): """Update nesting state with current line. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. error: The function to call with any errors found. """ line = clean_lines.elided[linenum] # Update pp_stack first self.UpdatePreprocessor(line) # Count parentheses. This is to avoid adding struct arguments to # the nesting stack. if self.stack: inner_block = self.stack[-1] depth_change = line.count('(') - line.count(')') inner_block.open_parentheses += depth_change # Also check if we are starting or ending an inline assembly block. if inner_block.inline_asm in (_NO_ASM, _END_ASM): if (depth_change != 0 and inner_block.open_parentheses == 1 and _MATCH_ASM.match(line)): # Enter assembly block inner_block.inline_asm = _INSIDE_ASM else: # Not entering assembly block. If previous line was _END_ASM, # we will now shift to _NO_ASM state. inner_block.inline_asm = _NO_ASM elif (inner_block.inline_asm == _INSIDE_ASM and inner_block.open_parentheses == 0): # Exit assembly block inner_block.inline_asm = _END_ASM # Consume namespace declaration at the beginning of the line. Do # this in a loop so that we catch same line declarations like this: # namespace proto2 { namespace bridge { class MessageSet; } } while True: # Match start of namespace. The "\b\s*" below catches namespace # declarations even if it weren't followed by a whitespace, this # is so that we don't confuse our namespace checker. The # missing spaces will be flagged by CheckSpacing. namespace_decl_match = Match(r'^\s*namespace\b\s*([:\w]+)?(.*)$', line) if not namespace_decl_match: break new_namespace = _NamespaceInfo(namespace_decl_match.group(1), linenum) self.stack.append(new_namespace) line = namespace_decl_match.group(2) if line.find('{') != -1: new_namespace.seen_open_brace = True line = line[line.find('{') + 1:] # Look for a class declaration in whatever is left of the line # after parsing namespaces. The regexp accounts for decorated classes # such as in: # class LOCKABLE API Object { # }; # # Templates with class arguments may confuse the parser, for example: # template <class T # class Comparator = less<T>, # class Vector = vector<T> > # class HeapQueue { # # Because this parser has no nesting state about templates, by the # time it saw "class Comparator", it may think that it's a new class. # Nested templates have a similar problem: # template < # typename ExportedType, # typename TupleType, # template <typename, typename> class ImplTemplate> # # To avoid these cases, we ignore classes that are followed by '=' or '>' class_decl_match = Match( r'\s*(template\s*<[\w\s<>,:]*>\s*)?' '(class|struct)\s+([A-Z_]+\s+)*(\w+(?:::\w+)*)' '(([^=>]|<[^<>]*>)*)$', line) if (class_decl_match and (not self.stack or self.stack[-1].open_parentheses == 0)): self.stack.append(_ClassInfo( class_decl_match.group(4), class_decl_match.group(2), clean_lines, linenum)) line = class_decl_match.group(5) # If we have not yet seen the opening brace for the innermost block, # run checks here. if not self.SeenOpenBrace(): self.stack[-1].CheckBegin(filename, clean_lines, linenum, error) # Update access control if we are inside a class/struct if self.stack and isinstance(self.stack[-1], _ClassInfo): access_match = Match(r'\s*(public|private|protected)\s*:', line) if access_match: self.stack[-1].access = access_match.group(1) # Consume braces or semicolons from what's left of the line while True: # Match first brace, semicolon, or closed parenthesis. matched = Match(r'^[^{;)}]*([{;)}])(.*)$', line) if not matched: break token = matched.group(1) if token == '{': # If namespace or class hasn't seen a opening brace yet, mark # namespace/class head as complete. Push a new block onto the # stack otherwise. if not self.SeenOpenBrace(): self.stack[-1].seen_open_brace = True else: self.stack.append(_BlockInfo(True)) if _MATCH_ASM.match(line): self.stack[-1].inline_asm = _BLOCK_ASM elif token == ';' or token == ')': # If we haven't seen an opening brace yet, but we already saw # a semicolon, this is probably a forward declaration. Pop # the stack for these. # # Similarly, if we haven't seen an opening brace yet, but we # already saw a closing parenthesis, then these are probably # function arguments with extra "class" or "struct" keywords. # Also pop these stack for these. if not self.SeenOpenBrace(): self.stack.pop() else: # token == '}' # Perform end of block checks and pop the stack. if self.stack: self.stack[-1].CheckEnd(filename, clean_lines, linenum, error) self.stack.pop() line = matched.group(2)
[ "def", "Update", "(", "self", ",", "filename", ",", "clean_lines", ",", "linenum", ",", "error", ")", ":", "line", "=", "clean_lines", ".", "elided", "[", "linenum", "]", "# Update pp_stack first", "self", ".", "UpdatePreprocessor", "(", "line", ")", "# Count parentheses. This is to avoid adding struct arguments to", "# the nesting stack.", "if", "self", ".", "stack", ":", "inner_block", "=", "self", ".", "stack", "[", "-", "1", "]", "depth_change", "=", "line", ".", "count", "(", "'('", ")", "-", "line", ".", "count", "(", "')'", ")", "inner_block", ".", "open_parentheses", "+=", "depth_change", "# Also check if we are starting or ending an inline assembly block.", "if", "inner_block", ".", "inline_asm", "in", "(", "_NO_ASM", ",", "_END_ASM", ")", ":", "if", "(", "depth_change", "!=", "0", "and", "inner_block", ".", "open_parentheses", "==", "1", "and", "_MATCH_ASM", ".", "match", "(", "line", ")", ")", ":", "# Enter assembly block", "inner_block", ".", "inline_asm", "=", "_INSIDE_ASM", "else", ":", "# Not entering assembly block. If previous line was _END_ASM,", "# we will now shift to _NO_ASM state.", "inner_block", ".", "inline_asm", "=", "_NO_ASM", "elif", "(", "inner_block", ".", "inline_asm", "==", "_INSIDE_ASM", "and", "inner_block", ".", "open_parentheses", "==", "0", ")", ":", "# Exit assembly block", "inner_block", ".", "inline_asm", "=", "_END_ASM", "# Consume namespace declaration at the beginning of the line. Do", "# this in a loop so that we catch same line declarations like this:", "# namespace proto2 { namespace bridge { class MessageSet; } }", "while", "True", ":", "# Match start of namespace. The \"\\b\\s*\" below catches namespace", "# declarations even if it weren't followed by a whitespace, this", "# is so that we don't confuse our namespace checker. The", "# missing spaces will be flagged by CheckSpacing.", "namespace_decl_match", "=", "Match", "(", "r'^\\s*namespace\\b\\s*([:\\w]+)?(.*)$'", ",", "line", ")", "if", "not", "namespace_decl_match", ":", "break", "new_namespace", "=", "_NamespaceInfo", "(", "namespace_decl_match", ".", "group", "(", "1", ")", ",", "linenum", ")", "self", ".", "stack", ".", "append", "(", "new_namespace", ")", "line", "=", "namespace_decl_match", ".", "group", "(", "2", ")", "if", "line", ".", "find", "(", "'{'", ")", "!=", "-", "1", ":", "new_namespace", ".", "seen_open_brace", "=", "True", "line", "=", "line", "[", "line", ".", "find", "(", "'{'", ")", "+", "1", ":", "]", "# Look for a class declaration in whatever is left of the line", "# after parsing namespaces. 
The regexp accounts for decorated classes", "# such as in:", "# class LOCKABLE API Object {", "# };", "#", "# Templates with class arguments may confuse the parser, for example:", "# template <class T", "# class Comparator = less<T>,", "# class Vector = vector<T> >", "# class HeapQueue {", "#", "# Because this parser has no nesting state about templates, by the", "# time it saw \"class Comparator\", it may think that it's a new class.", "# Nested templates have a similar problem:", "# template <", "# typename ExportedType,", "# typename TupleType,", "# template <typename, typename> class ImplTemplate>", "#", "# To avoid these cases, we ignore classes that are followed by '=' or '>'", "class_decl_match", "=", "Match", "(", "r'\\s*(template\\s*<[\\w\\s<>,:]*>\\s*)?'", "'(class|struct)\\s+([A-Z_]+\\s+)*(\\w+(?:::\\w+)*)'", "'(([^=>]|<[^<>]*>)*)$'", ",", "line", ")", "if", "(", "class_decl_match", "and", "(", "not", "self", ".", "stack", "or", "self", ".", "stack", "[", "-", "1", "]", ".", "open_parentheses", "==", "0", ")", ")", ":", "self", ".", "stack", ".", "append", "(", "_ClassInfo", "(", "class_decl_match", ".", "group", "(", "4", ")", ",", "class_decl_match", ".", "group", "(", "2", ")", ",", "clean_lines", ",", "linenum", ")", ")", "line", "=", "class_decl_match", ".", "group", "(", "5", ")", "# If we have not yet seen the opening brace for the innermost block,", "# run checks here.", "if", "not", "self", ".", "SeenOpenBrace", "(", ")", ":", "self", ".", "stack", "[", "-", "1", "]", ".", "CheckBegin", "(", "filename", ",", "clean_lines", ",", "linenum", ",", "error", ")", "# Update access control if we are inside a class/struct", "if", "self", ".", "stack", "and", "isinstance", "(", "self", ".", "stack", "[", "-", "1", "]", ",", "_ClassInfo", ")", ":", "access_match", "=", "Match", "(", "r'\\s*(public|private|protected)\\s*:'", ",", "line", ")", "if", "access_match", ":", "self", ".", "stack", "[", "-", "1", "]", ".", "access", "=", "access_match", ".", "group", "(", "1", ")", "# Consume braces or semicolons from what's left of the line", "while", "True", ":", "# Match first brace, semicolon, or closed parenthesis.", "matched", "=", "Match", "(", "r'^[^{;)}]*([{;)}])(.*)$'", ",", "line", ")", "if", "not", "matched", ":", "break", "token", "=", "matched", ".", "group", "(", "1", ")", "if", "token", "==", "'{'", ":", "# If namespace or class hasn't seen a opening brace yet, mark", "# namespace/class head as complete. Push a new block onto the", "# stack otherwise.", "if", "not", "self", ".", "SeenOpenBrace", "(", ")", ":", "self", ".", "stack", "[", "-", "1", "]", ".", "seen_open_brace", "=", "True", "else", ":", "self", ".", "stack", ".", "append", "(", "_BlockInfo", "(", "True", ")", ")", "if", "_MATCH_ASM", ".", "match", "(", "line", ")", ":", "self", ".", "stack", "[", "-", "1", "]", ".", "inline_asm", "=", "_BLOCK_ASM", "elif", "token", "==", "';'", "or", "token", "==", "')'", ":", "# If we haven't seen an opening brace yet, but we already saw", "# a semicolon, this is probably a forward declaration. 
Pop", "# the stack for these.", "#", "# Similarly, if we haven't seen an opening brace yet, but we", "# already saw a closing parenthesis, then these are probably", "# function arguments with extra \"class\" or \"struct\" keywords.", "# Also pop these stack for these.", "if", "not", "self", ".", "SeenOpenBrace", "(", ")", ":", "self", ".", "stack", ".", "pop", "(", ")", "else", ":", "# token == '}'", "# Perform end of block checks and pop the stack.", "if", "self", ".", "stack", ":", "self", ".", "stack", "[", "-", "1", "]", ".", "CheckEnd", "(", "filename", ",", "clean_lines", ",", "linenum", ",", "error", ")", "self", ".", "stack", ".", "pop", "(", ")", "line", "=", "matched", ".", "group", "(", "2", ")" ]
https://github.com/trailofbits/llvm-sanitizer-tutorial/blob/d29dfeec7f51fbf234fd0080f28f2b30cd0b6e99/llvm/projects/compiler-rt/lib/sanitizer_common/scripts/cpplint.py#L1584-L1718
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_carbon/_windows.py
python
FileDialog.SetDirectory
(*args, **kwargs)
return _windows_.FileDialog_SetDirectory(*args, **kwargs)
SetDirectory(self, String dir) Sets the default directory.
SetDirectory(self, String dir)
[ "SetDirectory", "(", "self", "String", "dir", ")" ]
def SetDirectory(*args, **kwargs): """ SetDirectory(self, String dir) Sets the default directory. """ return _windows_.FileDialog_SetDirectory(*args, **kwargs)
[ "def", "SetDirectory", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_windows_", ".", "FileDialog_SetDirectory", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_windows.py#L3159-L3165
H-uru/Plasma
c2140ea046e82e9c199e257a7f2e7edb42602871
Scripts/Python/plasma/Plasma.py
python
PtDumpLogs
(folder)
Dumps all current log files to the specified folder (a sub-folder to the log folder)
Dumps all current log files to the specified folder (a sub-folder to the log folder)
[ "Dumps", "all", "current", "log", "files", "to", "the", "specified", "folder", "(", "a", "sub", "-", "folder", "to", "the", "log", "folder", ")" ]
def PtDumpLogs(folder): """Dumps all current log files to the specified folder (a sub-folder to the log folder)""" pass
[ "def", "PtDumpLogs", "(", "folder", ")", ":", "pass" ]
https://github.com/H-uru/Plasma/blob/c2140ea046e82e9c199e257a7f2e7edb42602871/Scripts/Python/plasma/Plasma.py#L230-L232
neoml-lib/neoml
a0d370fba05269a1b2258cef126f77bbd2054a3e
NeoML/Python/neoml/Dnn/Conv.py
python
TimeConv.filter
(self)
return Blob.Blob(self._internal.get_filter())
Gets the filters. The dimensions: - **BatchLength** is 1 - **BatchWidth** is filter_count - **Height** is filter_size - **Width**, **Depth** are 1 - **Channels** is the inputs' **Height** * **Width** * **Depth** * **Channels**
Gets the filters. The dimensions:
[ "Gets", "the", "filters", ".", "The", "dimensions", ":" ]
def filter(self): """Gets the filters. The dimensions: - **BatchLength** is 1 - **BatchWidth** is filter_count - **Height** is filter_size - **Width**, **Depth** are 1 - **Channels** is the inputs' **Height** * **Width** * **Depth** * **Channels** """ return Blob.Blob(self._internal.get_filter())
[ "def", "filter", "(", "self", ")", ":", "return", "Blob", ".", "Blob", "(", "self", ".", "_internal", ".", "get_filter", "(", ")", ")" ]
https://github.com/neoml-lib/neoml/blob/a0d370fba05269a1b2258cef126f77bbd2054a3e/NeoML/Python/neoml/Dnn/Conv.py#L1158-L1167
sdhash/sdhash
b9eff63e4e5867e910f41fd69032bbb1c94a2a5e
sdhash-ui/cherrypy/lib/sessions.py
python
RamSession.acquire_lock
(self)
Acquire an exclusive lock on the currently-loaded session data.
Acquire an exclusive lock on the currently-loaded session data.
[ "Acquire", "an", "exclusive", "lock", "on", "the", "currently", "-", "loaded", "session", "data", "." ]
def acquire_lock(self): """Acquire an exclusive lock on the currently-loaded session data.""" self.locked = True self.locks.setdefault(self.id, threading.RLock()).acquire()
[ "def", "acquire_lock", "(", "self", ")", ":", "self", ".", "locked", "=", "True", "self", ".", "locks", ".", "setdefault", "(", "self", ".", "id", ",", "threading", ".", "RLock", "(", ")", ")", ".", "acquire", "(", ")" ]
https://github.com/sdhash/sdhash/blob/b9eff63e4e5867e910f41fd69032bbb1c94a2a5e/sdhash-ui/cherrypy/lib/sessions.py#L367-L370
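The same per-id locking idiom in isolation, outside cherrypy: dict.setdefault is atomic in CPython, so each session id lazily gets exactly one reentrant lock shared by every thread that touches that session.

import threading

locks = {}

def acquire_lock(session_id):
    locks.setdefault(session_id, threading.RLock()).acquire()

def release_lock(session_id):
    locks[session_id].release()

acquire_lock("sess-1")
try:
    pass   # read/write the data for "sess-1" here
finally:
    release_lock("sess-1")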
benoitsteiner/tensorflow-opencl
cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5
tensorflow/python/training/basic_session_run_hooks.py
python
CheckpointSaverHook._save
(self, session, step)
Saves the latest checkpoint.
Saves the latest checkpoint.
[ "Saves", "the", "latest", "checkpoint", "." ]
def _save(self, session, step): """Saves the latest checkpoint.""" logging.info("Saving checkpoints for %d into %s.", step, self._save_path) for l in self._listeners: l.before_save(session, step) self._get_saver().save(session, self._save_path, global_step=step) self._summary_writer.add_session_log( SessionLog( status=SessionLog.CHECKPOINT, checkpoint_path=self._save_path), step) for l in self._listeners: l.after_save(session, step)
[ "def", "_save", "(", "self", ",", "session", ",", "step", ")", ":", "logging", ".", "info", "(", "\"Saving checkpoints for %d into %s.\"", ",", "step", ",", "self", ".", "_save_path", ")", "for", "l", "in", "self", ".", "_listeners", ":", "l", ".", "before_save", "(", "session", ",", "step", ")", "self", ".", "_get_saver", "(", ")", ".", "save", "(", "session", ",", "self", ".", "_save_path", ",", "global_step", "=", "step", ")", "self", ".", "_summary_writer", ".", "add_session_log", "(", "SessionLog", "(", "status", "=", "SessionLog", ".", "CHECKPOINT", ",", "checkpoint_path", "=", "self", ".", "_save_path", ")", ",", "step", ")", "for", "l", "in", "self", ".", "_listeners", ":", "l", ".", "after_save", "(", "session", ",", "step", ")" ]
https://github.com/benoitsteiner/tensorflow-opencl/blob/cb7cb40a57fde5cfd4731bc551e82a1e2fef43a5/tensorflow/python/training/basic_session_run_hooks.py#L461-L475
moderngl/moderngl
32fe79927e02b0fa893b3603d677bdae39771e14
moderngl/buffer.py
python
Buffer.read_into
(self, buffer, size=-1, *, offset=0, write_offset=0)
return self.mglo.read_into(buffer, size, offset, write_offset)
Read the content into a buffer. Args: buffer (bytearray): The buffer that will receive the content. size (int): The size in bytes. Value ``-1`` means all. Keyword Args: offset (int): The read offset in bytes. write_offset (int): The write offset in bytes.
Read the content into a buffer.
[ "Read", "the", "content", "into", "a", "buffer", "." ]
def read_into(self, buffer, size=-1, *, offset=0, write_offset=0) -> None: ''' Read the content into a buffer. Args: buffer (bytearray): The buffer that will receive the content. size (int): The size in bytes. Value ``-1`` means all. Keyword Args: offset (int): The read offset in bytes. write_offset (int): The write offset in bytes. ''' return self.mglo.read_into(buffer, size, offset, write_offset)
[ "def", "read_into", "(", "self", ",", "buffer", ",", "size", "=", "-", "1", ",", "*", ",", "offset", "=", "0", ",", "write_offset", "=", "0", ")", "->", "None", ":", "return", "self", ".", "mglo", ".", "read_into", "(", "buffer", ",", "size", ",", "offset", ",", "write_offset", ")" ]
https://github.com/moderngl/moderngl/blob/32fe79927e02b0fa893b3603d677bdae39771e14/moderngl/buffer.py#L119-L132
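A hedged end-to-end sketch; create_standalone_context needs a working GL driver, and the numbers below show reading four floats that start 16 bytes into the buffer.

import numpy as np
import moderngl

ctx = moderngl.create_standalone_context()
buf = ctx.buffer(np.arange(16, dtype='f4').tobytes())   # 64 bytes total

out = bytearray(16)                     # room for 4 floats
buf.read_into(out, size=16, offset=16)  # bytes 16..31 -> floats 4..7
print(np.frombuffer(bytes(out), dtype='f4'))   # [4. 5. 6. 7.]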
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/pandas/py3/pandas/core/dtypes/base.py
python
ExtensionDtype._is_numeric
(self)
return False
Whether columns with this dtype should be considered numeric. By default ExtensionDtypes are assumed to be non-numeric. They'll be excluded from operations that exclude non-numeric columns, like (groupby) reductions, plotting, etc.
Whether columns with this dtype should be considered numeric.
[ "Whether", "columns", "with", "this", "dtype", "should", "be", "considered", "numeric", "." ]
def _is_numeric(self) -> bool: """ Whether columns with this dtype should be considered numeric. By default ExtensionDtypes are assumed to be non-numeric. They'll be excluded from operations that exclude non-numeric columns, like (groupby) reductions, plotting, etc. """ return False
[ "def", "_is_numeric", "(", "self", ")", "->", "bool", ":", "return", "False" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/pandas/py3/pandas/core/dtypes/base.py#L307-L315
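A hypothetical override showing where the hook sits. MoneyDtype is a sketch, not a working dtype: a real ExtensionDtype also needs construct_array_type and friends, which are omitted here.

from pandas.api.extensions import ExtensionDtype

class MoneyDtype(ExtensionDtype):
    # Illustrative only; missing construct_array_type() etc.
    name = "money"
    type = float

    @property
    def _is_numeric(self) -> bool:
        return True   # opt this dtype in to numeric-only operations

print(MoneyDtype()._is_numeric)   # True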
ricardoquesada/Spidermonkey
4a75ea2543408bd1b2c515aa95901523eeef7858
ipc/ipdl/ipdl/parser.py
python
p_SpawnsStmtsOpt
(p)
SpawnsStmtsOpt : SpawnsStmt SpawnsStmtsOpt | BridgesStmtsOpt
SpawnsStmtsOpt : SpawnsStmt SpawnsStmtsOpt | BridgesStmtsOpt
[ "SpawnsStmtsOpt", ":", "SpawnsStmt", "SpawnsStmtsOpt", "|", "BridgesStmtsOpt" ]
def p_SpawnsStmtsOpt(p): """SpawnsStmtsOpt : SpawnsStmt SpawnsStmtsOpt | BridgesStmtsOpt""" if 2 == len(p): p[0] = p[1] else: p[2].spawnsStmts.insert(0, p[1]) p[0] = p[2]
[ "def", "p_SpawnsStmtsOpt", "(", "p", ")", ":", "if", "2", "==", "len", "(", "p", ")", ":", "p", "[", "0", "]", "=", "p", "[", "1", "]", "else", ":", "p", "[", "2", "]", ".", "spawnsStmts", ".", "insert", "(", "0", ",", "p", "[", "1", "]", ")", "p", "[", "0", "]", "=", "p", "[", "2", "]" ]
https://github.com/ricardoquesada/Spidermonkey/blob/4a75ea2543408bd1b2c515aa95901523eeef7858/ipc/ipdl/ipdl/parser.py#L370-L377
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/importlib/_bootstrap.py
python
_requires_builtin
(fxn)
return _requires_builtin_wrapper
Decorator to verify the named module is built-in.
Decorator to verify the named module is built-in.
[ "Decorator", "to", "verify", "the", "named", "module", "is", "built", "-", "in", "." ]
def _requires_builtin(fxn): """Decorator to verify the named module is built-in.""" def _requires_builtin_wrapper(self, fullname): if fullname not in sys.builtin_module_names: raise ImportError('{!r} is not a built-in module'.format(fullname), name=fullname) return fxn(self, fullname) _wrap(_requires_builtin_wrapper, fxn) return _requires_builtin_wrapper
[ "def", "_requires_builtin", "(", "fxn", ")", ":", "def", "_requires_builtin_wrapper", "(", "self", ",", "fullname", ")", ":", "if", "fullname", "not", "in", "sys", ".", "builtin_module_names", ":", "raise", "ImportError", "(", "'{!r} is not a built-in module'", ".", "format", "(", "fullname", ")", ",", "name", "=", "fullname", ")", "return", "fxn", "(", "self", ",", "fullname", ")", "_wrap", "(", "_requires_builtin_wrapper", ",", "fxn", ")", "return", "_requires_builtin_wrapper" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/importlib/_bootstrap.py#L230-L238
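The same decorator pattern restated outside the import bootstrap. importlib._bootstrap uses its own _wrap helper because functools is not importable that early; functools.wraps stands in for it in this sketch, and BuiltinInspector is a made-up consumer.

import functools
import sys

def requires_builtin(fxn):
    """Decorator to verify the named module is built-in."""
    @functools.wraps(fxn)
    def wrapper(self, fullname):
        if fullname not in sys.builtin_module_names:
            raise ImportError('{!r} is not a built-in module'.format(fullname),
                              name=fullname)
        return fxn(self, fullname)
    return wrapper

class BuiltinInspector:
    @requires_builtin
    def is_package(self, fullname):
        return False

print(BuiltinInspector().is_package('sys'))   # False -- 'sys' is built in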
mantidproject/mantid
03deeb89254ec4289edb8771e0188c2090a02f32
qt/python/mantidqt/mantidqt/gui_helper.py
python
show_interface_help
(mantidplot_name, assistant_process, area: str='', collection_file: str='', qt_url: str='', external_url: str="")
Shows the help page for a custom interface @param mantidplot_name: used by showCustomInterfaceHelp @param assistant_process: needs to be started/closed from outside (see example below) @param collection_file: qth file containing the help in format used by qtassistant. The default is ``mantid._bindir + '../docs/qthelp/MantidProject.qhc'`` @param qt_url: location of the help in the qth file. The default value is ``qthelp://org.sphinx.mantidproject.{mtdversion}/doc/interfaces/{mantidplot_name}.html``. @param external_url: location of external page to be displayed in the default browser. The default value is ``http://docs.mantidproject.org/nightly/interfaces/framework/{mantidplot_name}.html`` Example using defaults: #in the __init__ function of the GUI add: self.assistant_process = QtCore.QProcess(self) self.mantidplot_name='DGS Planner' #add a help function in the GUI def help(self): show_interface_help(self.mantidplot_name, self.assistant_process) #make sure you close the qtassistant when the GUI is closed def closeEvent(self, event): self.assistant_process.close() self.assistant_process.waitForFinished() event.accept()
Shows the help page for a custom interface @param mantidplot_name: used by showCustomInterfaceHelp @param assistant_process: needs to be started/closed from outside (see example below) @param collection_file: qth file containing the help in format used by qtassistant. The default is ``mantid._bindir + '../docs/qthelp/MantidProject.qhc'`` @param qt_url: location of the help in the qth file. The default value is ``qthelp://org.sphinx.mantidproject.{mtdversion}/doc/interfaces/{mantidplot_name}.html``. @param external_url: location of external page to be displayed in the default browser. The default value is ``http://docs.mantidproject.org/nightly/interfaces/framework/{mantidplot_name}.html``
[ "Shows", "the", "help", "page", "for", "a", "custom", "interface", "@param", "mantidplot_name", ":", "used", "by", "showCustomInterfaceHelp", "@param", "assistant_process", ":", "needs", "to", "be", "started", "/", "closed", "from", "outside", "(", "see", "example", "below", ")", "@param", "collection_file", ":", "qth", "file", "containing", "the", "help", "in", "format", "used", "by", "qtassistant", ".", "The", "default", "is", "mantid", ".", "_bindir", "+", "..", "/", "docs", "/", "qthelp", "/", "MantidProject", ".", "qhc", "@param", "qt_url", ":", "location", "of", "the", "help", "in", "the", "qth", "file", ".", "The", "default", "value", "is", "qthelp", ":", "//", "org", ".", "sphinx", ".", "mantidproject", ".", "{", "mtdversion", "}", "/", "doc", "/", "interfaces", "/", "{", "mantidplot_name", "}", ".", "html", ".", "@param", "external_url", ":", "location", "of", "external", "page", "to", "be", "displayed", "in", "the", "default", "browser", ".", "The", "default", "value", "is", "http", ":", "//", "docs", ".", "mantidproject", ".", "org", "/", "nightly", "/", "interfaces", "/", "framework", "/", "{", "mantidplot_name", "}", ".", "html" ]
def show_interface_help(mantidplot_name, assistant_process, area: str='', collection_file: str='', qt_url: str='', external_url: str=""): ''' Shows the help page for a custom interface @param mantidplot_name: used by showCustomInterfaceHelp @param assistant_process: needs to be started/closed from outside (see example below) @param collection_file: qth file containing the help in format used by qtassistant. The default is ``mantid._bindir + '../docs/qthelp/MantidProject.qhc'`` @param qt_url: location of the help in the qth file. The default value is ``qthelp://org.sphinx.mantidproject.{mtdversion}/doc/interfaces/{mantidplot_name}.html``. @param external_url: location of external page to be displayed in the default browser. The default value is ``http://docs.mantidproject.org/nightly/interfaces/framework/{mantidplot_name}.html`` Example using defaults: #in the __init__ function of the GUI add: self.assistant_process = QtCore.QProcess(self) self.mantidplot_name='DGS Planner' #add a help function in the GUI def help(self): show_interface_help(self.mantidplot_name, self.assistant_process) #make sure you close the qtassistant when the GUI is closed def closeEvent(self, event): self.assistant_process.close() self.assistant_process.waitForFinished() event.accept() ''' try: # try using built-in help in mantid import mantidqt mantidqt.interfacemanager.InterfaceManager().showCustomInterfaceHelp(mantidplot_name, area) except: #(ImportError, ModuleNotFoundError) raises the wrong type of error # built-in help failed, try external qtassistant then give up and launch a browser # cleanup previous version assistant_process.close() assistant_process.waitForFinished() # where to expect qtassistant helpapp = QtCore.QLibraryInfo.location(QtCore.QLibraryInfo.BinariesPath) + QtCore.QDir.separator() helpapp += 'assistant' collection_file = __get_collection_file(collection_file) if os.path.isfile(helpapp) and os.path.isfile(collection_file): # try to find the collection file and launch qtassistant args = ['-enableRemoteControl', '-collectionFile', collection_file, '-showUrl', __to_qthelp_url(mantidplot_name, area, qt_url)] assistant_process.close() assistant_process.waitForFinished() assistant_process.start(helpapp, args) else: # give up and upen a URL in default browser openUrl=QtGui.QDesktopServices.openUrl sysenv=QtCore.QProcessEnvironment.systemEnvironment() ldp=sysenv.value('LD_PRELOAD') if ldp: del os.environ['LD_PRELOAD'] # create a url to the help in the default location openUrl(__to_external_url(mantidplot_name, area, external_url)) if ldp: os.environ['LD_PRELOAD']=ldp
[ "def", "show_interface_help", "(", "mantidplot_name", ",", "assistant_process", ",", "area", ":", "str", "=", "''", ",", "collection_file", ":", "str", "=", "''", ",", "qt_url", ":", "str", "=", "''", ",", "external_url", ":", "str", "=", "\"\"", ")", ":", "try", ":", "# try using built-in help in mantid", "import", "mantidqt", "mantidqt", ".", "interfacemanager", ".", "InterfaceManager", "(", ")", ".", "showCustomInterfaceHelp", "(", "mantidplot_name", ",", "area", ")", "except", ":", "#(ImportError, ModuleNotFoundError) raises the wrong type of error", "# built-in help failed, try external qtassistant then give up and launch a browser", "# cleanup previous version", "assistant_process", ".", "close", "(", ")", "assistant_process", ".", "waitForFinished", "(", ")", "# where to expect qtassistant", "helpapp", "=", "QtCore", ".", "QLibraryInfo", ".", "location", "(", "QtCore", ".", "QLibraryInfo", ".", "BinariesPath", ")", "+", "QtCore", ".", "QDir", ".", "separator", "(", ")", "helpapp", "+=", "'assistant'", "collection_file", "=", "__get_collection_file", "(", "collection_file", ")", "if", "os", ".", "path", ".", "isfile", "(", "helpapp", ")", "and", "os", ".", "path", ".", "isfile", "(", "collection_file", ")", ":", "# try to find the collection file and launch qtassistant", "args", "=", "[", "'-enableRemoteControl'", ",", "'-collectionFile'", ",", "collection_file", ",", "'-showUrl'", ",", "__to_qthelp_url", "(", "mantidplot_name", ",", "area", ",", "qt_url", ")", "]", "assistant_process", ".", "close", "(", ")", "assistant_process", ".", "waitForFinished", "(", ")", "assistant_process", ".", "start", "(", "helpapp", ",", "args", ")", "else", ":", "# give up and upen a URL in default browser", "openUrl", "=", "QtGui", ".", "QDesktopServices", ".", "openUrl", "sysenv", "=", "QtCore", ".", "QProcessEnvironment", ".", "systemEnvironment", "(", ")", "ldp", "=", "sysenv", ".", "value", "(", "'LD_PRELOAD'", ")", "if", "ldp", ":", "del", "os", ".", "environ", "[", "'LD_PRELOAD'", "]", "# create a url to the help in the default location", "openUrl", "(", "__to_external_url", "(", "mantidplot_name", ",", "area", ",", "external_url", ")", ")", "if", "ldp", ":", "os", ".", "environ", "[", "'LD_PRELOAD'", "]", "=", "ldp" ]
https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/qt/python/mantidqt/mantidqt/gui_helper.py#L83-L150
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/site-packages/pip/_vendor/urllib3/fields.py
python
RequestField._render_parts
(self, header_parts)
return u"; ".join(parts)
Helper function to format and quote a single header. Useful for single headers that are composed of multiple items. E.g., 'Content-Disposition' fields. :param header_parts: A sequence of (k, v) tuples or a :class:`dict` of (k, v) to format as `k1="v1"; k2="v2"; ...`.
Helper function to format and quote a single header.
[ "Helper", "function", "to", "format", "and", "quote", "a", "single", "header", "." ]
def _render_parts(self, header_parts): """ Helper function to format and quote a single header. Useful for single headers that are composed of multiple items. E.g., 'Content-Disposition' fields. :param header_parts: A sequence of (k, v) tuples or a :class:`dict` of (k, v) to format as `k1="v1"; k2="v2"; ...`. """ parts = [] iterable = header_parts if isinstance(header_parts, dict): iterable = header_parts.items() for name, value in iterable: if value is not None: parts.append(self._render_part(name, value)) return u"; ".join(parts)
[ "def", "_render_parts", "(", "self", ",", "header_parts", ")", ":", "parts", "=", "[", "]", "iterable", "=", "header_parts", "if", "isinstance", "(", "header_parts", ",", "dict", ")", ":", "iterable", "=", "header_parts", ".", "items", "(", ")", "for", "name", ",", "value", "in", "iterable", ":", "if", "value", "is", "not", "None", ":", "parts", ".", "append", "(", "self", ".", "_render_part", "(", "name", ",", "value", ")", ")", "return", "u\"; \"", ".", "join", "(", "parts", ")" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/site-packages/pip/_vendor/urllib3/fields.py#L415-L455
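The private helper is easiest to observe through make_multipart, which feeds it (name, filename) pairs when building Content-Disposition; this sketch uses the standalone urllib3 package rather than pip's vendored copy.

from urllib3.fields import RequestField

rf = RequestField('upload', b'payload', filename='notes.txt')
rf.make_multipart(content_type='text/plain')
print(rf.headers['Content-Disposition'])
# form-data; name="upload"; filename="notes.txt"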
y123456yz/reading-and-annotate-mongodb-3.6
93280293672ca7586dc24af18132aa61e4ed7fcf
mongo/src/third_party/scons-2.5.0/scons-local-2.5.0/SCons/Tool/docbook/__init__.py
python
DocbookHtmlChunked
(env, target, source=None, *args, **kw)
return result
A pseudo-Builder, providing a Docbook toolchain for chunked HTML output.
A pseudo-Builder, providing a Docbook toolchain for chunked HTML output.
[ "A", "pseudo", "-", "Builder", "providing", "a", "Docbook", "toolchain", "for", "chunked", "HTML", "output", "." ]
def DocbookHtmlChunked(env, target, source=None, *args, **kw): """ A pseudo-Builder, providing a Docbook toolchain for chunked HTML output. """ # Init target/source if not SCons.Util.is_List(target): target = [target] if not source: source = target target = ['index.html'] elif not SCons.Util.is_List(source): source = [source] # Init XSL stylesheet __init_xsl_stylesheet(kw, env, '$DOCBOOK_DEFAULT_XSL_HTMLCHUNKED', ['html','chunkfast.xsl']) # Setup builder __builder = __select_builder(__lxml_builder, __libxml2_builder, __xsltproc_builder) # Detect base dir base_dir = kw.get('base_dir', '') if base_dir: __create_output_dir(base_dir) # Create targets result = [] r = __builder.__call__(env, __ensure_suffix(str(target[0]), '.html'), source[0], **kw) env.Depends(r, kw['DOCBOOK_XSL']) result.extend(r) # Add supporting files for cleanup env.Clean(r, glob.glob(os.path.join(base_dir, '*.html'))) return result
[ "def", "DocbookHtmlChunked", "(", "env", ",", "target", ",", "source", "=", "None", ",", "*", "args", ",", "*", "*", "kw", ")", ":", "# Init target/source", "if", "not", "SCons", ".", "Util", ".", "is_List", "(", "target", ")", ":", "target", "=", "[", "target", "]", "if", "not", "source", ":", "source", "=", "target", "target", "=", "[", "'index.html'", "]", "elif", "not", "SCons", ".", "Util", ".", "is_List", "(", "source", ")", ":", "source", "=", "[", "source", "]", "# Init XSL stylesheet", "__init_xsl_stylesheet", "(", "kw", ",", "env", ",", "'$DOCBOOK_DEFAULT_XSL_HTMLCHUNKED'", ",", "[", "'html'", ",", "'chunkfast.xsl'", "]", ")", "# Setup builder", "__builder", "=", "__select_builder", "(", "__lxml_builder", ",", "__libxml2_builder", ",", "__xsltproc_builder", ")", "# Detect base dir", "base_dir", "=", "kw", ".", "get", "(", "'base_dir'", ",", "''", ")", "if", "base_dir", ":", "__create_output_dir", "(", "base_dir", ")", "# Create targets", "result", "=", "[", "]", "r", "=", "__builder", ".", "__call__", "(", "env", ",", "__ensure_suffix", "(", "str", "(", "target", "[", "0", "]", ")", ",", "'.html'", ")", ",", "source", "[", "0", "]", ",", "*", "*", "kw", ")", "env", ".", "Depends", "(", "r", ",", "kw", "[", "'DOCBOOK_XSL'", "]", ")", "result", ".", "extend", "(", "r", ")", "# Add supporting files for cleanup", "env", ".", "Clean", "(", "r", ",", "glob", ".", "glob", "(", "os", ".", "path", ".", "join", "(", "base_dir", ",", "'*.html'", ")", ")", ")", "return", "result" ]
https://github.com/y123456yz/reading-and-annotate-mongodb-3.6/blob/93280293672ca7586dc24af18132aa61e4ed7fcf/mongo/src/third_party/scons-2.5.0/scons-local-2.5.0/SCons/Tool/docbook/__init__.py#L557-L589
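Usage sketch, not part of the record above: in an SConstruct, SCons' docbook tool registers this pseudo-Builder as an environment method. The file names and base_dir value below are hypothetical.

    # Hypothetical SConstruct; assumes SCons' docbook tool is available.
    env = Environment(tools=['docbook'])
    # With no explicit source, 'manual.xml' is treated as the source and the
    # default target becomes index.html (see the target/source init above).
    env.DocbookHtmlChunked('manual.xml', base_dir='output/')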
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/site-packages/pip/_internal/models/selection_prefs.py
python
SelectionPreferences.__init__
( self, allow_yanked, # type: bool allow_all_prereleases=False, # type: bool format_control=None, # type: Optional[FormatControl] prefer_binary=False, # type: bool ignore_requires_python=None, # type: Optional[bool] )
Create a SelectionPreferences object. :param allow_yanked: Whether files marked as yanked (in the sense of PEP 592) are permitted to be candidates for install. :param format_control: A FormatControl object or None. Used to control the selection of source packages / binary packages when consulting the index and links. :param prefer_binary: Whether to prefer an old, but valid, binary dist over a new source dist. :param ignore_requires_python: Whether to ignore incompatible "Requires-Python" values in links. Defaults to False.
Create a SelectionPreferences object.
[ "Create", "a", "SelectionPreferences", "object", "." ]
def __init__( self, allow_yanked, # type: bool allow_all_prereleases=False, # type: bool format_control=None, # type: Optional[FormatControl] prefer_binary=False, # type: bool ignore_requires_python=None, # type: Optional[bool] ): # type: (...) -> None """Create a SelectionPreferences object. :param allow_yanked: Whether files marked as yanked (in the sense of PEP 592) are permitted to be candidates for install. :param format_control: A FormatControl object or None. Used to control the selection of source packages / binary packages when consulting the index and links. :param prefer_binary: Whether to prefer an old, but valid, binary dist over a new source dist. :param ignore_requires_python: Whether to ignore incompatible "Requires-Python" values in links. Defaults to False. """ if ignore_requires_python is None: ignore_requires_python = False self.allow_yanked = allow_yanked self.allow_all_prereleases = allow_all_prereleases self.format_control = format_control self.prefer_binary = prefer_binary self.ignore_requires_python = ignore_requires_python
[ "def", "__init__", "(", "self", ",", "allow_yanked", ",", "# type: bool", "allow_all_prereleases", "=", "False", ",", "# type: bool", "format_control", "=", "None", ",", "# type: Optional[FormatControl]", "prefer_binary", "=", "False", ",", "# type: bool", "ignore_requires_python", "=", "None", ",", "# type: Optional[bool]", ")", ":", "# type: (...) -> None", "if", "ignore_requires_python", "is", "None", ":", "ignore_requires_python", "=", "False", "self", ".", "allow_yanked", "=", "allow_yanked", "self", ".", "allow_all_prereleases", "=", "allow_all_prereleases", "self", ".", "format_control", "=", "format_control", "self", ".", "prefer_binary", "=", "prefer_binary", "self", ".", "ignore_requires_python", "=", "ignore_requires_python" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/site-packages/pip/_internal/models/selection_prefs.py#L43-L99
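A small caller-side sketch; this is a pip-internal API, so treat it as illustrative only, and the flag values are assumptions. The attribute behavior is confirmed by the constructor body above.

    from pip._internal.models.selection_prefs import SelectionPreferences

    prefs = SelectionPreferences(
        allow_yanked=False,   # reject files yanked per PEP 592
        prefer_binary=True,   # favor an older wheel over a newer sdist
    )
    assert prefs.ignore_requires_python is False  # None is normalized to False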
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_carbon/_controls.py
python
AnyButton.GetBitmapCurrent
(*args, **kwargs)
return _controls_.AnyButton_GetBitmapCurrent(*args, **kwargs)
GetBitmapCurrent(self) -> Bitmap
GetBitmapCurrent(self) -> Bitmap
[ "GetBitmapCurrent", "(", "self", ")", "-", ">", "Bitmap" ]
def GetBitmapCurrent(*args, **kwargs): """GetBitmapCurrent(self) -> Bitmap""" return _controls_.AnyButton_GetBitmapCurrent(*args, **kwargs)
[ "def", "GetBitmapCurrent", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_controls_", ".", "AnyButton_GetBitmapCurrent", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_controls.py#L112-L114
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
wx/lib/agw/thumbnailctrl.py
python
ThumbnailCtrl.OnComboBox
(self, event)
Handles the ``wx.EVT_COMBOBOX`` for the folder combobox. :param `event`: a :class:`CommandEvent` event to be processed.
Handles the ``wx.EVT_COMBOBOX`` for the folder combobox.
[ "Handles", "the", "wx", ".", "EVT_COMBOBOX", "for", "the", "folder", "combobox", "." ]
def OnComboBox(self, event): """ Handles the ``wx.EVT_COMBOBOX`` for the folder combobox. :param `event`: a :class:`CommandEvent` event to be processed. """ dirs = self._combo.GetValue() if os.path.isdir(opj(dirs)): self._scrolled.ShowDir(opj(dirs)) event.Skip()
[ "def", "OnComboBox", "(", "self", ",", "event", ")", ":", "dirs", "=", "self", ".", "_combo", ".", "GetValue", "(", ")", "if", "os", ".", "path", ".", "isdir", "(", "opj", "(", "dirs", ")", ")", ":", "self", ".", "_scrolled", ".", "ShowDir", "(", "opj", "(", "dirs", ")", ")", "event", ".", "Skip", "(", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/lib/agw/thumbnailctrl.py#L1010-L1022
windystrife/UnrealEngine_NVIDIAGameWorks
b50e6338a7c5b26374d66306ebc7807541ff815e
Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/HTMLParser.py
python
HTMLParser.get_starttag_text
(self)
return self.__starttag_text
Return full source of start tag: '<...>'.
Return full source of start tag: '<...>'.
[ "Return", "full", "source", "of", "start", "tag", ":", "<", "...", ">", "." ]
def get_starttag_text(self): """Return full source of start tag: '<...>'.""" return self.__starttag_text
[ "def", "get_starttag_text", "(", "self", ")", ":", "return", "self", ".", "__starttag_text" ]
https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/HTMLParser.py#L125-L127
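Typical pattern, sketched: the method is only meaningful while a start tag is being handled, so it is usually called from handle_starttag(). The TagRecorder class and sample markup are hypothetical.

    from HTMLParser import HTMLParser  # Python 2 module, matching this source

    class TagRecorder(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.raw_tags = []

        def handle_starttag(self, tag, attrs):
            # Full source text of the tag currently being parsed.
            self.raw_tags.append(self.get_starttag_text())

    p = TagRecorder()
    p.feed('<a href="x.html">link</a>')
    assert p.raw_tags == ['<a href="x.html">']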
gem5/gem5
141cc37c2d4b93959d4c249b8f7e6a8b2ef75338
src/python/gem5/components/cachehierarchies/classic/no_cache.py
python
NoCache.__init__
( self, membus: BaseXBar = _get_default_membus.__func__() )
:param membus: The memory bus for this setup. This parameter is optional and will default to a 64 bit width SystemXBar if not specified. :type membus: BaseXBar
:param membus: The memory bus for this setup. This parameter is optional and will default to a 64 bit width SystemXBar if not specified.
[ ":", "param", "membus", ":", "The", "memory", "bus", "for", "this", "setup", ".", "This", "parameter", "is", "optional", "and", "will", "default", "to", "a", "64", "bit", "width", "SystemXBar", "if", "not", "specified", "." ]
def __init__( self, membus: BaseXBar = _get_default_membus.__func__() ) -> None: """ :param membus: The memory bus for this setup. This parameter is optional and will default to a 64 bit width SystemXBar if not specified. :type membus: BaseXBar """ super().__init__() self.membus = membus
[ "def", "__init__", "(", "self", ",", "membus", ":", "BaseXBar", "=", "_get_default_membus", ".", "__func__", "(", ")", ")", "->", "None", ":", "super", "(", ")", ".", "__init__", "(", ")", "self", ".", "membus", "=", "membus" ]
https://github.com/gem5/gem5/blob/141cc37c2d4b93959d4c249b8f7e6a8b2ef75338/src/python/gem5/components/cachehierarchies/classic/no_cache.py#L76-L86
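A minimal sketch of where this hierarchy plugs in; the board type named in the comment is an assumption about the surrounding gem5 standard-library setup.

    from gem5.components.cachehierarchies.classic.no_cache import NoCache

    # Default membus: the 64 bit width SystemXBar built by _get_default_membus.
    cache_hierarchy = NoCache()
    # Typically handed to a board, e.g. SimpleBoard(..., cache_hierarchy=cache_hierarchy)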
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/inspect.py
python
isclass
(object)
return isinstance(object, type)
Return true if the object is a class. Class objects provide these attributes: __doc__ documentation string __module__ name of module in which this class was defined
Return true if the object is a class.
[ "Return", "true", "if", "the", "object", "is", "a", "class", "." ]
def isclass(object): """Return true if the object is a class. Class objects provide these attributes: __doc__ documentation string __module__ name of module in which this class was defined""" return isinstance(object, type)
[ "def", "isclass", "(", "object", ")", ":", "return", "isinstance", "(", "object", ",", "type", ")" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/inspect.py#L72-L78
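Behavior sketch, following directly from the one-line body: anything that is an instance of type qualifies, while instances of a class do not.

    import inspect

    class Foo(object):
        pass

    assert inspect.isclass(Foo)
    assert inspect.isclass(int)
    assert not inspect.isclass(Foo())  # an instance, not a class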
calamares/calamares
9f6f82405b3074af7c99dc26487d2e46e4ece3e5
src/modules/bootloader/main.py
python
run
()
return None
Starts the procedure and passes 'fw_type' to the other routines. :return:
Starts the procedure and passes 'fw_type' to the other routines.
[ "Starts", "the", "procedure", "and", "passes", "fw_type", "to", "the", "other", "routines", "." ]
def run(): """ Starts the procedure and passes 'fw_type' to the other routines. :return: """ fw_type = libcalamares.globalstorage.value("firmwareType") if (libcalamares.globalstorage.value("bootLoader") is None and fw_type != "efi"): libcalamares.utils.warning( "Non-EFI system, and no bootloader is set." ) return None partitions = libcalamares.globalstorage.value("partitions") if fw_type == "efi": efi_system_partition = libcalamares.globalstorage.value("efiSystemPartition") esp_found = [ p for p in partitions if p["mountPoint"] == efi_system_partition ] if not esp_found: libcalamares.utils.warning( "EFI system, but nothing mounted on {!s}".format(efi_system_partition) ) return None try: prepare_bootloader(fw_type) except subprocess.CalledProcessError as e: libcalamares.utils.warning(str(e)) libcalamares.utils.debug("stdout:" + str(e.stdout)) libcalamares.utils.debug("stderr:" + str(e.stderr)) return (_("Bootloader installation error"), _("The bootloader could not be installed. The installation command <pre>{!s}</pre> returned error code {!s}.") .format(e.cmd, e.returncode)) return None
[ "def", "run", "(", ")", ":", "fw_type", "=", "libcalamares", ".", "globalstorage", ".", "value", "(", "\"firmwareType\"", ")", "if", "(", "libcalamares", ".", "globalstorage", ".", "value", "(", "\"bootLoader\"", ")", "is", "None", "and", "fw_type", "!=", "\"efi\"", ")", ":", "libcalamares", ".", "utils", ".", "warning", "(", "\"Non-EFI system, and no bootloader is set.\"", ")", "return", "None", "partitions", "=", "libcalamares", ".", "globalstorage", ".", "value", "(", "\"partitions\"", ")", "if", "fw_type", "==", "\"efi\"", ":", "efi_system_partition", "=", "libcalamares", ".", "globalstorage", ".", "value", "(", "\"efiSystemPartition\"", ")", "esp_found", "=", "[", "p", "for", "p", "in", "partitions", "if", "p", "[", "\"mountPoint\"", "]", "==", "efi_system_partition", "]", "if", "not", "esp_found", ":", "libcalamares", ".", "utils", ".", "warning", "(", "\"EFI system, but nothing mounted on {!s}\"", ".", "format", "(", "efi_system_partition", ")", ")", "return", "None", "try", ":", "prepare_bootloader", "(", "fw_type", ")", "except", "subprocess", ".", "CalledProcessError", "as", "e", ":", "libcalamares", ".", "utils", ".", "warning", "(", "str", "(", "e", ")", ")", "libcalamares", ".", "utils", ".", "debug", "(", "\"stdout:\"", "+", "str", "(", "e", ".", "stdout", ")", ")", "libcalamares", ".", "utils", ".", "debug", "(", "\"stderr:\"", "+", "str", "(", "e", ".", "stderr", ")", ")", "return", "(", "_", "(", "\"Bootloader installation error\"", ")", ",", "_", "(", "\"The bootloader could not be installed. The installation command <pre>{!s}</pre> returned error code {!s}.\"", ")", ".", "format", "(", "e", ".", "cmd", ",", "e", ".", "returncode", ")", ")", "return", "None" ]
https://github.com/calamares/calamares/blob/9f6f82405b3074af7c99dc26487d2e46e4ece3e5/src/modules/bootloader/main.py#L753-L784
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_carbon/html.py
python
HtmlCell.GetLink
(*args, **kwargs)
return _html.HtmlCell_GetLink(*args, **kwargs)
GetLink(self, int x=0, int y=0) -> HtmlLinkInfo
GetLink(self, int x=0, int y=0) -> HtmlLinkInfo
[ "GetLink", "(", "self", "int", "x", "=", "0", "int", "y", "=", "0", ")", "-", ">", "HtmlLinkInfo" ]
def GetLink(*args, **kwargs): """GetLink(self, int x=0, int y=0) -> HtmlLinkInfo""" return _html.HtmlCell_GetLink(*args, **kwargs)
[ "def", "GetLink", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_html", ".", "HtmlCell_GetLink", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/html.py#L642-L644
apple/turicreate
cce55aa5311300e3ce6af93cb45ba791fd1bdf49
src/external/boost/boost_1_68_0/tools/build/src/util/utility.py
python
get_value
(property)
return replace_grist (property, '')
Gets the value of a property, that is, the part following the grist, if any.
Gets the value of a property, that is, the part following the grist, if any.
[ "Gets", "the", "value", "of", "a", "property", "that", "is", "the", "part", "following", "the", "grist", "if", "any", "." ]
def get_value (property): """ Gets the value of a property, that is, the part following the grist, if any. """ assert is_iterable_typed(property, basestring) or isinstance(property, basestring) return replace_grist (property, '')
[ "def", "get_value", "(", "property", ")", ":", "assert", "is_iterable_typed", "(", "property", ",", "basestring", ")", "or", "isinstance", "(", "property", ",", "basestring", ")", "return", "replace_grist", "(", "property", ",", "''", ")" ]
https://github.com/apple/turicreate/blob/cce55aa5311300e3ce6af93cb45ba791fd1bdf49/src/external/boost/boost_1_68_0/tools/build/src/util/utility.py#L85-L89
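Illustration of the grist convention this assumes (Boost.Build properties look like '<feature>value'); the property strings are hypothetical.

    # replace_grist(property, '') strips the angle-bracketed prefix, if any.
    assert get_value('<toolset>gcc') == 'gcc'
    assert get_value('gcc') == 'gcc'  # no grist: value is returned unchanged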
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/build/waf-1.7.13/lmbrwaflib/default_settings.py
python
out_folder_linux64
(ctx, section_name, option_name, value)
return _get_string_value(ctx, 'Linux x64 Output Folder', value)
Configure output folder for linux x64
Configure output folder for linux x64
[ "Configure", "output", "folder", "for", "linux", "x64" ]
def out_folder_linux64(ctx, section_name, option_name, value): """ Configure output folder for linux x64 """ if not _is_user_input_allowed(ctx, option_name, value): return value # GUI if not ctx.is_option_true('console_mode'): return ctx.gui_get_attribute(section_name, option_name, value) _output_folder_disclaimer(ctx) return _get_string_value(ctx, 'Linux x64 Output Folder', value)
[ "def", "out_folder_linux64", "(", "ctx", ",", "section_name", ",", "option_name", ",", "value", ")", ":", "if", "not", "_is_user_input_allowed", "(", "ctx", ",", "option_name", ",", "value", ")", ":", "return", "value", "# GUI", "if", "not", "ctx", ".", "is_option_true", "(", "'console_mode'", ")", ":", "return", "ctx", ".", "gui_get_attribute", "(", "section_name", ",", "option_name", ",", "value", ")", "_output_folder_disclaimer", "(", "ctx", ")", "return", "_get_string_value", "(", "ctx", ",", "'Linux x64 Output Folder'", ",", "value", ")" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/build/waf-1.7.13/lmbrwaflib/default_settings.py#L298-L308
ukoethe/vigra
093d57d15c8c237adf1704d96daa6393158ce299
vigranumpy/lib/arraytypes.py
python
ScalarImage
(obj, dtype=numpy.float32, order=None, init=True, value=None, axistags=None)
return VigraArray(obj, dtype, None, init, value, axistags)
Factory function for a :class:`~vigra.VigraArray` representing a single-band image (i.e. an array with two spatial axes 'x' and 'y' and no channel axis). Parameters are interpreted as in the VigraArray constructor, but an exception will be raised if the shape or axistags are unsuitable for a single-band image.
Factory function for a :class:`~vigra.VigraArray` representing a single-band image (i.e. an array with two spatial axes 'x' and 'y' and no channel axis). Parameters are interpreted as in the VigraArray constructor, but an exception will be raised if the shape or axistags are unsuitable for a single-band image.
[ "Factory", "function", "for", "a", ":", "class", ":", "~vigra", ".", "VigraArray", "representing", "a", "single", "-", "band", "image", "(", "i", ".", "e", ".", "an", "array", "with", "two", "spatial", "axes", "x", "and", "y", "and", "no", "channel", "axis", ")", ".", "Parameters", "are", "interpreted", "as", "in", "the", "VigraArray", "constructor", "but", "an", "exception", "will", "be", "raised", "if", "the", "shape", "or", "axistags", "are", "unsuitable", "for", "a", "single", "-", "band", "image", "." ]
def ScalarImage(obj, dtype=numpy.float32, order=None, init=True, value=None, axistags=None): ''' Factory function for a :class:`~vigra.VigraArray` representing a single-band image (i.e. an array with two spatial axes 'x' and 'y' and no channel axis). Parameters are interpreted as in the VigraArray constructor, but an exception will be raised if the shape or axistags are unsuitable for a single-band image. ''' obj, axistags = _adjustInput(obj, order, 2, None, axistags, "vigra.ScalarImage()") return VigraArray(obj, dtype, None, init, value, axistags)
[ "def", "ScalarImage", "(", "obj", ",", "dtype", "=", "numpy", ".", "float32", ",", "order", "=", "None", ",", "init", "=", "True", ",", "value", "=", "None", ",", "axistags", "=", "None", ")", ":", "obj", ",", "axistags", "=", "_adjustInput", "(", "obj", ",", "order", ",", "2", ",", "None", ",", "axistags", ",", "\"vigra.ScalarImage()\"", ")", "return", "VigraArray", "(", "obj", ",", "dtype", ",", "None", ",", "init", ",", "value", ",", "axistags", ")" ]
https://github.com/ukoethe/vigra/blob/093d57d15c8c237adf1704d96daa6393158ce299/vigranumpy/lib/arraytypes.py#L1803-L1812
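A short usage sketch; the shape is an arbitrary example.

    import numpy, vigra

    img = vigra.ScalarImage((200, 100))  # axes 'x' and 'y', no channel axis
    assert img.dtype == numpy.float32    # default dtype from the signature
    assert img.shape == (200, 100)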
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
wx/tools/Editra/plugins/codebrowser/codebrowser/cbrowser.py
python
CodeBrowserTree.OnTagsReady
(self, evt)
Processing of tag generation has completed, check results and update view. @param evt: EVT_JOB_FINISHED
Processing of tag generation has completed, check results and update view. @param evt: EVT_JOB_FINISHED
[ "Processing", "of", "tag", "generation", "has", "completed", "check", "results", "and", "update", "view", ".", "@param", "evt", ":", "EVT_JOB_FINISHED" ]
def OnTagsReady(self, evt): """Processing of tag generation has completed, check results and update view. @param evt: EVT_JOB_FINISHED """ # Make sure that the values that are being returned are the ones for # the currently active buffer. if evt.GetId() == self._cjob: self._lastjob = u'' self.Freeze() self.UpdateAll(evt.GetValue()) self.Thaw() # Stop busy indicator ed_msg.PostMessage(ed_msg.EDMSG_PROGRESS_STATE, (self._mw.GetId(), 0, 0)) if not self._timer.IsRunning(): self._cpage = None
[ "def", "OnTagsReady", "(", "self", ",", "evt", ")", ":", "# Make sure that the values that are being returned are the ones for", "# the currently active buffer.", "if", "evt", ".", "GetId", "(", ")", "==", "self", ".", "_cjob", ":", "self", ".", "_lastjob", "=", "u''", "self", ".", "Freeze", "(", ")", "self", ".", "UpdateAll", "(", "evt", ".", "GetValue", "(", ")", ")", "self", ".", "Thaw", "(", ")", "# Stop busy indicator", "ed_msg", ".", "PostMessage", "(", "ed_msg", ".", "EDMSG_PROGRESS_STATE", ",", "(", "self", ".", "_mw", ".", "GetId", "(", ")", ",", "0", ",", "0", ")", ")", "if", "not", "self", ".", "_timer", ".", "IsRunning", "(", ")", ":", "self", ".", "_cpage", "=", "None" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/tools/Editra/plugins/codebrowser/codebrowser/cbrowser.py#L477-L495
HyeonwooNoh/caffe
d9e8494a2832d67b25dee37194c7bcb9d52d0e42
scripts/cpp_lint.py
python
FileInfo.Extension
(self)
return self.Split()[2]
File extension - text following the final period.
File extension - text following the final period.
[ "File", "extension", "-", "text", "following", "the", "final", "period", "." ]
def Extension(self): """File extension - text following the final period.""" return self.Split()[2]
[ "def", "Extension", "(", "self", ")", ":", "return", "self", ".", "Split", "(", ")", "[", "2", "]" ]
https://github.com/HyeonwooNoh/caffe/blob/d9e8494a2832d67b25dee37194c7bcb9d52d0e42/scripts/cpp_lint.py#L948-L950
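A sketch of the expected result, assuming Split() delegates to os.path.splitext (so the extension keeps its leading period):

    fi = FileInfo('src/foo/bar.cc')  # hypothetical path
    assert fi.Extension() == '.cc'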
tensorflow/tensorflow
419e3a6b650ea4bd1b0cba23c4348f8a69f3272e
tensorflow/python/distribute/multi_process_runner.py
python
MultiProcessRunner._process_watchdog
(self)
Simulates a cluster management system. - If auto_restart is True, it restarts processes that exit with a non-zero exit code. Note that when join() times out it overrides auto_restart to False. - If dependence_on_chief is True, it terminates all processes once the chief exits. If auto_restart is also True, it only terminates all processes if the chief exit with a zero exit code, otherwise it restarts the chief. This runs in self._watchdog_thread.
Simulates a cluster management system.
[ "Simulates", "a", "cluster", "management", "system", "." ]
def _process_watchdog(self): """Simulates a cluster management system. - If auto_restart is True, it restarts processes that exit with a non-zero exit code. Note that when join() times out it overrides auto_restart to False. - If dependence_on_chief is True, it terminates all processes once the chief exits. If auto_restart is also True, it only terminates all processes if the chief exit with a zero exit code, otherwise it restarts the chief. This runs in self._watchdog_thread. """ while True: time.sleep(1) with self._process_lock: chief = self._processes.get(('chief', 0), None) # Terminate the cluster when _dependence_on_chief is True if either: # - chief has exited with zero exit code. # - chief has exited with non-zero exit code and self._auto_restart is # False. if chief and self._dependence_on_chief and chief.exitcode is not None: if chief.exitcode == 0 or (not self._auto_restart): for p in self._processes.values(): # Give other processes a chance to exit on their own. p.join(timeout=3) self._terminate_all() for p in self._processes.values(): p.join() return # Auto restart failed processes if self._auto_restart is True. if self._auto_restart: has_failure = False for (task_type, task_id), p in self._processes.items(): if p.exitcode is not None and p.exitcode != 0: has_failure = True logging.info('Restarting failed %s-%d', task_type, task_id) self._start_subprocess_and_reading_thread(task_type, task_id) if has_failure: continue # Exit the thread if all processes have exited at this point. if all(p.exitcode is not None for p in self._processes.values()): return
[ "def", "_process_watchdog", "(", "self", ")", ":", "while", "True", ":", "time", ".", "sleep", "(", "1", ")", "with", "self", ".", "_process_lock", ":", "chief", "=", "self", ".", "_processes", ".", "get", "(", "(", "'chief'", ",", "0", ")", ",", "None", ")", "# Terminate the cluster when _dependence_on_chief is True if either:", "# - chief has exited with zero exit code.", "# - chief has exited with non-zero exit code and self._auto_restart is", "# False.", "if", "chief", "and", "self", ".", "_dependence_on_chief", "and", "chief", ".", "exitcode", "is", "not", "None", ":", "if", "chief", ".", "exitcode", "==", "0", "or", "(", "not", "self", ".", "_auto_restart", ")", ":", "for", "p", "in", "self", ".", "_processes", ".", "values", "(", ")", ":", "# Give other processes a chance to exit on their own.", "p", ".", "join", "(", "timeout", "=", "3", ")", "self", ".", "_terminate_all", "(", ")", "for", "p", "in", "self", ".", "_processes", ".", "values", "(", ")", ":", "p", ".", "join", "(", ")", "return", "# Auto restart failed processes if self._auto_restart is True.", "if", "self", ".", "_auto_restart", ":", "has_failure", "=", "False", "for", "(", "task_type", ",", "task_id", ")", ",", "p", "in", "self", ".", "_processes", ".", "items", "(", ")", ":", "if", "p", ".", "exitcode", "is", "not", "None", "and", "p", ".", "exitcode", "!=", "0", ":", "has_failure", "=", "True", "logging", ".", "info", "(", "'Restarting failed %s-%d'", ",", "task_type", ",", "task_id", ")", "self", ".", "_start_subprocess_and_reading_thread", "(", "task_type", ",", "task_id", ")", "if", "has_failure", ":", "continue", "# Exit the thread if all processes have exited at this point.", "if", "all", "(", "p", ".", "exitcode", "is", "not", "None", "for", "p", "in", "self", ".", "_processes", ".", "values", "(", ")", ")", ":", "return" ]
https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/distribute/multi_process_runner.py#L515-L558
miyosuda/TensorFlowAndroidMNIST
7b5a4603d2780a8a2834575706e9001977524007
jni-build/jni/include/tensorflow/models/rnn/ptb/ptb_word_lm.py
python
run_epoch
(session, m, data, eval_op, verbose=False)
return np.exp(costs / iters)
Runs the model on the given data.
Runs the model on the given data.
[ "Runs", "the", "model", "on", "the", "given", "data", "." ]
def run_epoch(session, m, data, eval_op, verbose=False): """Runs the model on the given data.""" epoch_size = ((len(data) // m.batch_size) - 1) // m.num_steps start_time = time.time() costs = 0.0 iters = 0 state = m.initial_state.eval() for step, (x, y) in enumerate(reader.ptb_iterator(data, m.batch_size, m.num_steps)): cost, state, _ = session.run([m.cost, m.final_state, eval_op], {m.input_data: x, m.targets: y, m.initial_state: state}) costs += cost iters += m.num_steps if verbose and step % (epoch_size // 10) == 10: print("%.3f perplexity: %.3f speed: %.0f wps" % (step * 1.0 / epoch_size, np.exp(costs / iters), iters * m.batch_size / (time.time() - start_time))) return np.exp(costs / iters)
[ "def", "run_epoch", "(", "session", ",", "m", ",", "data", ",", "eval_op", ",", "verbose", "=", "False", ")", ":", "epoch_size", "=", "(", "(", "len", "(", "data", ")", "//", "m", ".", "batch_size", ")", "-", "1", ")", "//", "m", ".", "num_steps", "start_time", "=", "time", ".", "time", "(", ")", "costs", "=", "0.0", "iters", "=", "0", "state", "=", "m", ".", "initial_state", ".", "eval", "(", ")", "for", "step", ",", "(", "x", ",", "y", ")", "in", "enumerate", "(", "reader", ".", "ptb_iterator", "(", "data", ",", "m", ".", "batch_size", ",", "m", ".", "num_steps", ")", ")", ":", "cost", ",", "state", ",", "_", "=", "session", ".", "run", "(", "[", "m", ".", "cost", ",", "m", ".", "final_state", ",", "eval_op", "]", ",", "{", "m", ".", "input_data", ":", "x", ",", "m", ".", "targets", ":", "y", ",", "m", ".", "initial_state", ":", "state", "}", ")", "costs", "+=", "cost", "iters", "+=", "m", ".", "num_steps", "if", "verbose", "and", "step", "%", "(", "epoch_size", "//", "10", ")", "==", "10", ":", "print", "(", "\"%.3f perplexity: %.3f speed: %.0f wps\"", "%", "(", "step", "*", "1.0", "/", "epoch_size", ",", "np", ".", "exp", "(", "costs", "/", "iters", ")", ",", "iters", "*", "m", ".", "batch_size", "/", "(", "time", ".", "time", "(", ")", "-", "start_time", ")", ")", ")", "return", "np", ".", "exp", "(", "costs", "/", "iters", ")" ]
https://github.com/miyosuda/TensorFlowAndroidMNIST/blob/7b5a4603d2780a8a2834575706e9001977524007/jni-build/jni/include/tensorflow/models/rnn/ptb/ptb_word_lm.py#L253-L274
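Typical call sites from the surrounding PTB script, sketched; session, m, mtest and the data lists are built elsewhere in that script.

    # Training pass: eval_op is the model's optimizer step.
    train_perplexity = run_epoch(session, m, train_data, m.train_op, verbose=True)
    # Evaluation pass: tf.no_op() leaves the parameters untouched.
    test_perplexity = run_epoch(session, mtest, test_data, tf.no_op())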
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/numbers.py
python
Integral.__rand__
(self, other)
other & self
other & self
[ "other", "&", "self" ]
def __rand__(self, other): """other & self""" raise NotImplementedError
[ "def", "__rand__", "(", "self", ",", "other", ")", ":", "raise", "NotImplementedError" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/numbers.py#L345-L347
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/windows/Lib/site-packages/botocore/loaders.py
python
ExtrasProcessor.process
(self, original_model, extra_models)
Processes data from a list of loaded extras files into a model :type original_model: dict :param original_model: The service model to load all the extras into. :type extra_models: iterable of dict :param extra_models: A list of loaded extras models.
Processes data from a list of loaded extras files into a model
[ "Processes", "data", "from", "a", "list", "of", "loaded", "extras", "files", "into", "a", "model" ]
def process(self, original_model, extra_models): """Processes data from a list of loaded extras files into a model :type original_model: dict :param original_model: The service model to load all the extras into. :type extra_models: iterable of dict :param extra_models: A list of loaded extras models. """ for extras in extra_models: self._process(original_model, extras)
[ "def", "process", "(", "self", ",", "original_model", ",", "extra_models", ")", ":", "for", "extras", "in", "extra_models", ":", "self", ".", "_process", "(", "original_model", ",", "extras", ")" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/site-packages/botocore/loaders.py#L446-L456
klzgrad/naiveproxy
ed2c513637c77b18721fe428d7ed395b4d284c83
src/build/check_gn_headers.py
python
GetHeadersFromNinja
(out_dir, skip_obj, q)
Return all the header files from ninja_deps
Return all the header files from ninja_deps
[ "Return", "all", "the", "header", "files", "from", "ninja_deps" ]
def GetHeadersFromNinja(out_dir, skip_obj, q): """Return all the header files from ninja_deps""" def NinjaSource(): cmd = [os.path.join(DEPOT_TOOLS_DIR, 'ninja'), '-C', out_dir, '-t', 'deps'] # A negative bufsize means to use the system default, which usually # means fully buffered. popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=-1) for line in iter(popen.stdout.readline, ''): yield line.rstrip() popen.stdout.close() return_code = popen.wait() if return_code: raise subprocess.CalledProcessError(return_code, cmd) ans, err = set(), None try: ans = ParseNinjaDepsOutput(NinjaSource(), out_dir, skip_obj) except Exception as e: err = str(e) q.put((ans, err))
[ "def", "GetHeadersFromNinja", "(", "out_dir", ",", "skip_obj", ",", "q", ")", ":", "def", "NinjaSource", "(", ")", ":", "cmd", "=", "[", "os", ".", "path", ".", "join", "(", "DEPOT_TOOLS_DIR", ",", "'ninja'", ")", ",", "'-C'", ",", "out_dir", ",", "'-t'", ",", "'deps'", "]", "# A negative bufsize means to use the system default, which usually", "# means fully buffered.", "popen", "=", "subprocess", ".", "Popen", "(", "cmd", ",", "stdout", "=", "subprocess", ".", "PIPE", ",", "bufsize", "=", "-", "1", ")", "for", "line", "in", "iter", "(", "popen", ".", "stdout", ".", "readline", ",", "''", ")", ":", "yield", "line", ".", "rstrip", "(", ")", "popen", ".", "stdout", ".", "close", "(", ")", "return_code", "=", "popen", ".", "wait", "(", ")", "if", "return_code", ":", "raise", "subprocess", ".", "CalledProcessError", "(", "return_code", ",", "cmd", ")", "ans", ",", "err", "=", "set", "(", ")", ",", "None", "try", ":", "ans", "=", "ParseNinjaDepsOutput", "(", "NinjaSource", "(", ")", ",", "out_dir", ",", "skip_obj", ")", "except", "Exception", "as", "e", ":", "err", "=", "str", "(", "e", ")", "q", ".", "put", "(", "(", "ans", ",", "err", ")", ")" ]
https://github.com/klzgrad/naiveproxy/blob/ed2c513637c77b18721fe428d7ed395b4d284c83/src/build/check_gn_headers.py#L29-L50
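The (ans, err) pair is delivered through the queue because the function is designed to run in a child process; a minimal sketch, with the out_dir value hypothetical.

    import multiprocessing

    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=GetHeadersFromNinja,
                                args=('out/Release', True, q))  # skip_obj=True
    p.start()
    headers, err = q.get()  # set of header paths, or an error string on failure
    p.join()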
hughperkins/tf-coriander
970d3df6c11400ad68405f22b0c42a52374e94ca
tensorflow/python/platform/resource_loader.py
python
get_path_to_datafile
(path)
return os.path.join(data_files_path, path)
Get the path to the specified file in the data dependencies. The path is relative to tensorflow/ Args: path: a string resource path relative to tensorflow/ Returns: The path to the specified file present in the data attribute of py_test or py_binary. Raises: IOError: If the path is not found, or the resource can't be opened.
Get the path to the specified file in the data dependencies.
[ "Get", "the", "path", "to", "the", "specified", "file", "in", "the", "data", "dependencies", "." ]
def get_path_to_datafile(path): """Get the path to the specified file in the data dependencies. The path is relative to tensorflow/ Args: path: a string resource path relative to tensorflow/ Returns: The path to the specified file present in the data attribute of py_test or py_binary. Raises: IOError: If the path is not found, or the resource can't be opened. """ data_files_path = os.path.dirname(inspect.getfile(sys._getframe(1))) return os.path.join(data_files_path, path)
[ "def", "get_path_to_datafile", "(", "path", ")", ":", "data_files_path", "=", "os", ".", "path", ".", "dirname", "(", "inspect", ".", "getfile", "(", "sys", ".", "_getframe", "(", "1", ")", ")", ")", "return", "os", ".", "path", ".", "join", "(", "data_files_path", ",", "path", ")" ]
https://github.com/hughperkins/tf-coriander/blob/970d3df6c11400ad68405f22b0c42a52374e94ca/tensorflow/python/platform/resource_loader.py#L65-L81
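Because the lookup is relative to the caller's own file (sys._getframe(1)), the call must be made from the module that owns the data dependency. The file name below is hypothetical.

    from tensorflow.python.platform import resource_loader

    # Resolves relative to the module that issues this call.
    path = resource_loader.get_path_to_datafile('testdata/model.pb')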
cms-sw/cmssw
fd9de012d503d3405420bcbeec0ec879baa57cf2
Validation/RecoTrack/python/plotting/plotting.py
python
Plot.addToLegend
(self, legend, legendLabels, denomUncertainty)
Add histograms to a legend. Arguments: legend -- TLegend legendLabels -- List of strings for the legend labels
Add histograms to a legend.
[ "Add", "histograms", "to", "a", "legend", "." ]
def addToLegend(self, legend, legendLabels, denomUncertainty): """Add histograms to a legend. Arguments: legend -- TLegend legendLabels -- List of strings for the legend labels """ first = denomUncertainty for h, label in zip(self._histograms, legendLabels): if h is None: first = False continue if first: self._forLegend = h.Clone() self._forLegend.SetFillStyle(1001) self._forLegend.SetFillColor(ROOT.kGray) entry = legend.AddEntry(self._forLegend, label, "lpf") first = False else: legend.AddEntry(h, label, "LP")
[ "def", "addToLegend", "(", "self", ",", "legend", ",", "legendLabels", ",", "denomUncertainty", ")", ":", "first", "=", "denomUncertainty", "for", "h", ",", "label", "in", "zip", "(", "self", ".", "_histograms", ",", "legendLabels", ")", ":", "if", "h", "is", "None", ":", "first", "=", "False", "continue", "if", "first", ":", "self", ".", "_forLegend", "=", "h", ".", "Clone", "(", ")", "self", ".", "_forLegend", ".", "SetFillStyle", "(", "1001", ")", "self", ".", "_forLegend", ".", "SetFillColor", "(", "ROOT", ".", "kGray", ")", "entry", "=", "legend", ".", "AddEntry", "(", "self", ".", "_forLegend", ",", "label", ",", "\"lpf\"", ")", "first", "=", "False", "else", ":", "legend", ".", "AddEntry", "(", "h", ",", "label", ",", "\"LP\"", ")" ]
https://github.com/cms-sw/cmssw/blob/fd9de012d503d3405420bcbeec0ec879baa57cf2/Validation/RecoTrack/python/plotting/plotting.py#L2230-L2249
ycm-core/ycmd
fc0fb7e5e15176cc5a2a30c80956335988c6b59a
ycmd/completers/language_server/language_server_completer.py
python
WorkspaceEditToFixIt
( request_data, workspace_edit, text='', kind = None )
return responses.FixIt( responses.Location( request_data[ 'line_num' ], request_data[ 'column_num' ], request_data[ 'filepath' ] ), chunks, text, kind )
Converts a LSP workspace edit to a ycmd FixIt suitable for passing to responses.BuildFixItResponse.
Converts a LSP workspace edit to a ycmd FixIt suitable for passing to responses.BuildFixItResponse.
[ "Converts", "a", "LSP", "workspace", "edit", "to", "a", "ycmd", "FixIt", "suitable", "for", "passing", "to", "responses", ".", "BuildFixItResponse", "." ]
def WorkspaceEditToFixIt( request_data, workspace_edit, text='', kind = None ): """Converts a LSP workspace edit to a ycmd FixIt suitable for passing to responses.BuildFixItResponse.""" if not workspace_edit: return None if 'changes' in workspace_edit: chunks = [] # We sort the filenames to make the response stable. Edits are applied in # strict sequence within a file, but apply to files in arbitrary order. # However, it's important for the response to be stable for the tests. for uri in sorted( workspace_edit[ 'changes' ].keys() ): chunks.extend( TextEditToChunks( request_data, uri, workspace_edit[ 'changes' ][ uri ] ) ) else: chunks = [] for text_document_edit in workspace_edit[ 'documentChanges' ]: uri = text_document_edit[ 'textDocument' ][ 'uri' ] edits = text_document_edit[ 'edits' ] chunks.extend( TextEditToChunks( request_data, uri, edits ) ) return responses.FixIt( responses.Location( request_data[ 'line_num' ], request_data[ 'column_num' ], request_data[ 'filepath' ] ), chunks, text, kind )
[ "def", "WorkspaceEditToFixIt", "(", "request_data", ",", "workspace_edit", ",", "text", "=", "''", ",", "kind", "=", "None", ")", ":", "if", "not", "workspace_edit", ":", "return", "None", "if", "'changes'", "in", "workspace_edit", ":", "chunks", "=", "[", "]", "# We sort the filenames to make the response stable. Edits are applied in", "# strict sequence within a file, but apply to files in arbitrary order.", "# However, it's important for the response to be stable for the tests.", "for", "uri", "in", "sorted", "(", "workspace_edit", "[", "'changes'", "]", ".", "keys", "(", ")", ")", ":", "chunks", ".", "extend", "(", "TextEditToChunks", "(", "request_data", ",", "uri", ",", "workspace_edit", "[", "'changes'", "]", "[", "uri", "]", ")", ")", "else", ":", "chunks", "=", "[", "]", "for", "text_document_edit", "in", "workspace_edit", "[", "'documentChanges'", "]", ":", "uri", "=", "text_document_edit", "[", "'textDocument'", "]", "[", "'uri'", "]", "edits", "=", "text_document_edit", "[", "'edits'", "]", "chunks", ".", "extend", "(", "TextEditToChunks", "(", "request_data", ",", "uri", ",", "edits", ")", ")", "return", "responses", ".", "FixIt", "(", "responses", ".", "Location", "(", "request_data", "[", "'line_num'", "]", ",", "request_data", "[", "'column_num'", "]", ",", "request_data", "[", "'filepath'", "]", ")", ",", "chunks", ",", "text", ",", "kind", ")" ]
https://github.com/ycm-core/ycmd/blob/fc0fb7e5e15176cc5a2a30c80956335988c6b59a/ycmd/completers/language_server/language_server_completer.py#L3220-L3251
Ewenwan/MVision
97b394dfa48cb21c82cd003b1a952745e413a17f
CNN/yolo_v1_tf.py
python
Yolo._detect_from_image
(self, image)
return scores, boxes, box_classes
Do detection given a cv image
Do detection given a cv image
[ "Do", "detection", "given", "a", "cv", "image" ]
def _detect_from_image(self, image): """Do detection given a cv image""" img_h, img_w, _ = image.shape# image height and width img_resized = cv2.resize(image, (448, 448))# resize to the fixed 448*448 input size img_RGB = cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB)# convert to RGB channel order img_resized_np = np.asarray(img_RGB)# convert to an array _images = np.zeros((1, 448, 448, 3), dtype=np.float32) _images[0] = (img_resized_np / 255.0) * 2.0 - 1.0# normalize pixel values to [-1, 1] scores, boxes, box_classes = self.sess.run([self.scores, self.boxes, self.box_classes], feed_dict={self.images: _images, self.width: img_w, self.height: img_h}) return scores, boxes, box_classes
[ "def", "_detect_from_image", "(", "self", ",", "image", ")", ":", "img_h", ",", "img_w", ",", "_", "=", "image", ".", "shape", "# image height and width", "img_resized", "=", "cv2", ".", "resize", "(", "image", ",", "(", "448", ",", "448", ")", ")", "# resize to the fixed 448*448 input size", "img_RGB", "=", "cv2", ".", "cvtColor", "(", "img_resized", ",", "cv2", ".", "COLOR_BGR2RGB", ")", "# convert to RGB channel order", "img_resized_np", "=", "np", ".", "asarray", "(", "img_RGB", ")", "# convert to an array", "_images", "=", "np", ".", "zeros", "(", "(", "1", ",", "448", ",", "448", ",", "3", ")", ",", "dtype", "=", "np", ".", "float32", ")", "_images", "[", "0", "]", "=", "(", "img_resized_np", "/", "255.0", ")", "*", "2.0", "-", "1.0", "# normalize pixel values to [-1, 1]", "scores", ",", "boxes", ",", "box_classes", "=", "self", ".", "sess", ".", "run", "(", "[", "self", ".", "scores", ",", "self", ".", "boxes", ",", "self", ".", "box_classes", "]", ",", "feed_dict", "=", "{", "self", ".", "images", ":", "_images", ",", "self", ".", "width", ":", "img_w", ",", "self", ".", "height", ":", "img_h", "}", ")", "return", "scores", ",", "boxes", ",", "box_classes" ]
https://github.com/Ewenwan/MVision/blob/97b394dfa48cb21c82cd003b1a952745e413a17f/CNN/yolo_v1_tf.py#L200-L210
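Caller-side sketch, assuming a constructed Yolo instance named yolo; the image path is hypothetical.

    import cv2

    image = cv2.imread('dog.jpg')  # BGR image at any resolution
    scores, boxes, box_classes = yolo._detect_from_image(image)
    # boxes come back scaled to the original img_w x img_h via self.width/height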
eric612/MobileNet-YOLO
69b4441cb3ec8d553fbdef788ad033e246f901bd
scripts/cpp_lint.py
python
_NestingState.UpdatePreprocessor
(self, line)
Update preprocessor stack. We need to handle preprocessors due to classes like this: #ifdef SWIG struct ResultDetailsPageElementExtensionPoint { #else struct ResultDetailsPageElementExtensionPoint : public Extension { #endif We make the following assumptions (good enough for most files): - Preprocessor condition evaluates to true from #if up to first #else/#elif/#endif. - Preprocessor condition evaluates to false from #else/#elif up to #endif. We still perform lint checks on these lines, but these do not affect nesting stack. Args: line: current line to check.
Update preprocessor stack.
[ "Update", "preprocessor", "stack", "." ]
def UpdatePreprocessor(self, line): """Update preprocessor stack. We need to handle preprocessors due to classes like this: #ifdef SWIG struct ResultDetailsPageElementExtensionPoint { #else struct ResultDetailsPageElementExtensionPoint : public Extension { #endif We make the following assumptions (good enough for most files): - Preprocessor condition evaluates to true from #if up to first #else/#elif/#endif. - Preprocessor condition evaluates to false from #else/#elif up to #endif. We still perform lint checks on these lines, but these do not affect nesting stack. Args: line: current line to check. """ if Match(r'^\s*#\s*(if|ifdef|ifndef)\b', line): # Beginning of #if block, save the nesting stack here. The saved # stack will allow us to restore the parsing state in the #else case. self.pp_stack.append(_PreprocessorInfo(copy.deepcopy(self.stack))) elif Match(r'^\s*#\s*(else|elif)\b', line): # Beginning of #else block if self.pp_stack: if not self.pp_stack[-1].seen_else: # This is the first #else or #elif block. Remember the # whole nesting stack up to this point. This is what we # keep after the #endif. self.pp_stack[-1].seen_else = True self.pp_stack[-1].stack_before_else = copy.deepcopy(self.stack) # Restore the stack to how it was before the #if self.stack = copy.deepcopy(self.pp_stack[-1].stack_before_if) else: # TODO(unknown): unexpected #else, issue warning? pass elif Match(r'^\s*#\s*endif\b', line): # End of #if or #else blocks. if self.pp_stack: # If we saw an #else, we will need to restore the nesting # stack to its former state before the #else, otherwise we # will just continue from where we left off. if self.pp_stack[-1].seen_else: # Here we can just use a shallow copy since we are the last # reference to it. self.stack = self.pp_stack[-1].stack_before_else # Drop the corresponding #if self.pp_stack.pop() else: # TODO(unknown): unexpected #endif, issue warning? pass
[ "def", "UpdatePreprocessor", "(", "self", ",", "line", ")", ":", "if", "Match", "(", "r'^\\s*#\\s*(if|ifdef|ifndef)\\b'", ",", "line", ")", ":", "# Beginning of #if block, save the nesting stack here. The saved", "# stack will allow us to restore the parsing state in the #else case.", "self", ".", "pp_stack", ".", "append", "(", "_PreprocessorInfo", "(", "copy", ".", "deepcopy", "(", "self", ".", "stack", ")", ")", ")", "elif", "Match", "(", "r'^\\s*#\\s*(else|elif)\\b'", ",", "line", ")", ":", "# Beginning of #else block", "if", "self", ".", "pp_stack", ":", "if", "not", "self", ".", "pp_stack", "[", "-", "1", "]", ".", "seen_else", ":", "# This is the first #else or #elif block. Remember the", "# whole nesting stack up to this point. This is what we", "# keep after the #endif.", "self", ".", "pp_stack", "[", "-", "1", "]", ".", "seen_else", "=", "True", "self", ".", "pp_stack", "[", "-", "1", "]", ".", "stack_before_else", "=", "copy", ".", "deepcopy", "(", "self", ".", "stack", ")", "# Restore the stack to how it was before the #if", "self", ".", "stack", "=", "copy", ".", "deepcopy", "(", "self", ".", "pp_stack", "[", "-", "1", "]", ".", "stack_before_if", ")", "else", ":", "# TODO(unknown): unexpected #else, issue warning?", "pass", "elif", "Match", "(", "r'^\\s*#\\s*endif\\b'", ",", "line", ")", ":", "# End of #if or #else blocks.", "if", "self", ".", "pp_stack", ":", "# If we saw an #else, we will need to restore the nesting", "# stack to its former state before the #else, otherwise we", "# will just continue from where we left off.", "if", "self", ".", "pp_stack", "[", "-", "1", "]", ".", "seen_else", ":", "# Here we can just use a shallow copy since we are the last", "# reference to it.", "self", ".", "stack", "=", "self", ".", "pp_stack", "[", "-", "1", "]", ".", "stack_before_else", "# Drop the corresponding #if", "self", ".", "pp_stack", ".", "pop", "(", ")", "else", ":", "# TODO(unknown): unexpected #endif, issue warning?", "pass" ]
https://github.com/eric612/MobileNet-YOLO/blob/69b4441cb3ec8d553fbdef788ad033e246f901bd/scripts/cpp_lint.py#L1952-L2006
perilouswithadollarsign/cstrike15_src
f82112a2388b841d72cb62ca48ab1846dfcc11c8
thirdparty/protobuf-2.5.0/python/mox.py
python
MockObject.__setitem__
(self, key, value)
return self._CreateMockMethod('__setitem__')(key, value)
Provide custom logic for mocking classes that support item assignment. Args: key: Key to set the value for. value: Value to set. Returns: Expected return value in replay mode. A MockMethod object for the __setitem__ method that has already been called if not in replay mode. Raises: TypeError if the underlying class does not support item assignment. UnexpectedMethodCallError if the object does not expect the call to __setitem__.
Provide custom logic for mocking classes that support item assignment.
[ "Provide", "custom", "logic", "for", "mocking", "classes", "that", "support", "item", "assignment", "." ]
def __setitem__(self, key, value): """Provide custom logic for mocking classes that support item assignment. Args: key: Key to set the value for. value: Value to set. Returns: Expected return value in replay mode. A MockMethod object for the __setitem__ method that has already been called if not in replay mode. Raises: TypeError if the underlying class does not support item assignment. UnexpectedMethodCallError if the object does not expect the call to __setitem__. """ setitem = self._class_to_mock.__dict__.get('__setitem__', None) # Verify the class supports item assignment. if setitem is None: raise TypeError('object does not support item assignment') # If we are in replay mode then simply call the mock __setitem__ method. if self._replay_mode: return MockMethod('__setitem__', self._expected_calls_queue, self._replay_mode)(key, value) # Otherwise, create a mock method __setitem__. return self._CreateMockMethod('__setitem__')(key, value)
[ "def", "__setitem__", "(", "self", ",", "key", ",", "value", ")", ":", "setitem", "=", "self", ".", "_class_to_mock", ".", "__dict__", ".", "get", "(", "'__setitem__'", ",", "None", ")", "# Verify the class supports item assignment.", "if", "setitem", "is", "None", ":", "raise", "TypeError", "(", "'object does not support item assignment'", ")", "# If we are in replay mode then simply call the mock __setitem__ method.", "if", "self", ".", "_replay_mode", ":", "return", "MockMethod", "(", "'__setitem__'", ",", "self", ".", "_expected_calls_queue", ",", "self", ".", "_replay_mode", ")", "(", "key", ",", "value", ")", "# Otherwise, create a mock method __setitem__.", "return", "self", ".", "_CreateMockMethod", "(", "'__setitem__'", ")", "(", "key", ",", "value", ")" ]
https://github.com/perilouswithadollarsign/cstrike15_src/blob/f82112a2388b841d72cb62ca48ab1846dfcc11c8/thirdparty/protobuf-2.5.0/python/mox.py#L427-L457
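A sketch of the record/replay flow this supports, mocking the built-in dict (whose class dict does define __setitem__, satisfying the TypeError check above):

    import mox

    m = mox.Mox()
    mock_d = m.CreateMock(dict)
    mock_d['key'] = 'value'  # record phase: creates an expectation
    m.ReplayAll()
    mock_d['key'] = 'value'  # replay phase: must match the expectation
    m.VerifyAll()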
baidu-research/tensorflow-allreduce
66d5b855e90b0949e9fa5cca5599fd729a70e874
tensorflow/python/debug/wrappers/framework.py
python
BaseDebugWrapperSession.on_run_start
(self, request)
Callback invoked on run() calls to the debug-wrapper session. This is a blocking callback. The invocation happens after the wrapper's run() call is entered, after an increment of run call counter. Args: request: (`OnRunStartRequest`) callback request object carrying information about the run call such as the fetches, feed dict, run options, run metadata, and how many `run()` calls to this wrapper session have occurred. Returns: An instance of `OnRunStartResponse`, carrying information to 1) direct the wrapper session to perform a specified action (e.g., run with or without debug tensor watching, invoking the stepper.) 2) debug URLs used to watch the tensors.
Callback invoked on run() calls to the debug-wrapper session.
[ "Callback", "invoked", "on", "run", "()", "calls", "to", "the", "debug", "-", "wrapper", "session", "." ]
def on_run_start(self, request): """Callback invoked on run() calls to the debug-wrapper session. This is a blocking callback. The invocation happens after the wrapper's run() call is entered, after an increment of run call counter. Args: request: (`OnRunStartRequest`) callback request object carrying information about the run call such as the fetches, feed dict, run options, run metadata, and how many `run()` calls to this wrapper session have occurred. Returns: An instance of `OnRunStartResponse`, carrying information to 1) direct the wrapper session to perform a specified action (e.g., run with or without debug tensor watching, invoking the stepper.) 2) debug URLs used to watch the tensors. """
[ "def", "on_run_start", "(", "self", ",", "request", ")", ":" ]
https://github.com/baidu-research/tensorflow-allreduce/blob/66d5b855e90b0949e9fa5cca5599fd729a70e874/tensorflow/python/debug/wrappers/framework.py#L638-L656
BVLC/caffe
9b891540183ddc834a02b2bd81b31afae71b2153
scripts/cpp_lint.py
python
ProcessLine
(filename, file_extension, clean_lines, line, include_state, function_state, nesting_state, error, extra_check_functions=[])
Processes a single line in the file. Args: filename: Filename of the file that is being processed. file_extension: The extension (dot not included) of the file. clean_lines: An array of strings, each representing a line of the file, with comments stripped. line: Number of line being processed. include_state: An _IncludeState instance in which the headers are inserted. function_state: A _FunctionState instance which counts function lines, etc. nesting_state: A _NestingState instance which maintains information about the current stack of nested blocks being parsed. error: A callable to which errors are reported, which takes 4 arguments: filename, line number, error level, and message extra_check_functions: An array of additional check functions that will be run on each source line. Each function takes 4 arguments: filename, clean_lines, line, error
Processes a single line in the file.
[ "Processes", "a", "single", "line", "in", "the", "file", "." ]
def ProcessLine(filename, file_extension, clean_lines, line, include_state, function_state, nesting_state, error, extra_check_functions=[]): """Processes a single line in the file. Args: filename: Filename of the file that is being processed. file_extension: The extension (dot not included) of the file. clean_lines: An array of strings, each representing a line of the file, with comments stripped. line: Number of line being processed. include_state: An _IncludeState instance in which the headers are inserted. function_state: A _FunctionState instance which counts function lines, etc. nesting_state: A _NestingState instance which maintains information about the current stack of nested blocks being parsed. error: A callable to which errors are reported, which takes 4 arguments: filename, line number, error level, and message extra_check_functions: An array of additional check functions that will be run on each source line. Each function takes 4 arguments: filename, clean_lines, line, error """ raw_lines = clean_lines.raw_lines ParseNolintSuppressions(filename, raw_lines[line], line, error) nesting_state.Update(filename, clean_lines, line, error) if nesting_state.stack and nesting_state.stack[-1].inline_asm != _NO_ASM: return CheckForFunctionLengths(filename, clean_lines, line, function_state, error) CheckForMultilineCommentsAndStrings(filename, clean_lines, line, error) CheckStyle(filename, clean_lines, line, file_extension, nesting_state, error) CheckLanguage(filename, clean_lines, line, file_extension, include_state, nesting_state, error) CheckForNonConstReference(filename, clean_lines, line, nesting_state, error) CheckForNonStandardConstructs(filename, clean_lines, line, nesting_state, error) CheckVlogArguments(filename, clean_lines, line, error) CheckCaffeAlternatives(filename, clean_lines, line, error) CheckCaffeDataLayerSetUp(filename, clean_lines, line, error) CheckCaffeRandom(filename, clean_lines, line, error) CheckPosixThreading(filename, clean_lines, line, error) CheckInvalidIncrement(filename, clean_lines, line, error) CheckMakePairUsesDeduction(filename, clean_lines, line, error) for check_fn in extra_check_functions: check_fn(filename, clean_lines, line, error)
[ "def", "ProcessLine", "(", "filename", ",", "file_extension", ",", "clean_lines", ",", "line", ",", "include_state", ",", "function_state", ",", "nesting_state", ",", "error", ",", "extra_check_functions", "=", "[", "]", ")", ":", "raw_lines", "=", "clean_lines", ".", "raw_lines", "ParseNolintSuppressions", "(", "filename", ",", "raw_lines", "[", "line", "]", ",", "line", ",", "error", ")", "nesting_state", ".", "Update", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "if", "nesting_state", ".", "stack", "and", "nesting_state", ".", "stack", "[", "-", "1", "]", ".", "inline_asm", "!=", "_NO_ASM", ":", "return", "CheckForFunctionLengths", "(", "filename", ",", "clean_lines", ",", "line", ",", "function_state", ",", "error", ")", "CheckForMultilineCommentsAndStrings", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "CheckStyle", "(", "filename", ",", "clean_lines", ",", "line", ",", "file_extension", ",", "nesting_state", ",", "error", ")", "CheckLanguage", "(", "filename", ",", "clean_lines", ",", "line", ",", "file_extension", ",", "include_state", ",", "nesting_state", ",", "error", ")", "CheckForNonConstReference", "(", "filename", ",", "clean_lines", ",", "line", ",", "nesting_state", ",", "error", ")", "CheckForNonStandardConstructs", "(", "filename", ",", "clean_lines", ",", "line", ",", "nesting_state", ",", "error", ")", "CheckVlogArguments", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "CheckCaffeAlternatives", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "CheckCaffeDataLayerSetUp", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "CheckCaffeRandom", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "CheckPosixThreading", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "CheckInvalidIncrement", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "CheckMakePairUsesDeduction", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")", "for", "check_fn", "in", "extra_check_functions", ":", "check_fn", "(", "filename", ",", "clean_lines", ",", "line", ",", "error", ")" ]
https://github.com/BVLC/caffe/blob/9b891540183ddc834a02b2bd81b31afae71b2153/scripts/cpp_lint.py#L4604-L4646
nnrg/opennero
43e12a1bcba6e228639db3886fec1dc47ddc24cb
mods/Roomba/module.py
python
RoombaEnvironment.num_sensors
(self)
return (len(getMod().marker_map)*4 + constants.N_FIXED_SENSORS)
Return total number of sensors
Return total number of sensors
[ "Return", "total", "number", "of", "sensors" ]
def num_sensors(self): """ Return total number of sensors """ return (len(getMod().marker_map)*4 + constants.N_FIXED_SENSORS)
[ "def", "num_sensors", "(", "self", ")", ":", "return", "(", "len", "(", "getMod", "(", ")", ".", "marker_map", ")", "*", "4", "+", "constants", ".", "N_FIXED_SENSORS", ")" ]
https://github.com/nnrg/opennero/blob/43e12a1bcba6e228639db3886fec1dc47ddc24cb/mods/Roomba/module.py#L304-L308
weolar/miniblink49
1c4678db0594a4abde23d3ebbcc7cd13c3170777
third_party/WebKit/Tools/Scripts/webkitpy/style/checkers/cpp.py
python
_FunctionState.count
(self, line_number)
Count line in current function body.
Count line in current function body.
[ "Count", "line", "in", "current", "function", "body", "." ]
def count(self, line_number): """Count line in current function body.""" if self.in_a_function and line_number >= self.body_start_position.row: self.lines_in_function += 1
[ "def", "count", "(", "self", ",", "line_number", ")", ":", "if", "self", ".", "in_a_function", "and", "line_number", ">=", "self", ".", "body_start_position", ".", "row", ":", "self", ".", "lines_in_function", "+=", "1" ]
https://github.com/weolar/miniblink49/blob/1c4678db0594a4abde23d3ebbcc7cd13c3170777/third_party/WebKit/Tools/Scripts/webkitpy/style/checkers/cpp.py#L584-L587
macchina-io/macchina.io
ef24ba0e18379c3dd48fb84e6dbf991101cb8db0
platform/JS/V8/v8/tools/release/check_clusterfuzz.py
python
APIRequest
(key, **params)
return None
Send a request to the clusterfuzz api. Returns a json dict of the response.
Send a request to the clusterfuzz api.
[ "Send", "a", "request", "to", "the", "clusterfuzz", "api", "." ]
def APIRequest(key, **params): """Send a request to the clusterfuzz api. Returns a json dict of the response. """ params["api_key"] = key params = urllib.urlencode(params) headers = {"Content-type": "application/x-www-form-urlencoded"} try: conn = httplib.HTTPSConnection(HOSTNAME) conn.request("POST", "/_api/", params, headers) response = conn.getresponse() # Never leak "data" into public logs. data = response.read() except: raise Exception("ERROR: Connection problem.") try: return json.loads(data) except: raise Exception("ERROR: Could not read response. Is your key valid?") return None
[ "def", "APIRequest", "(", "key", ",", "*", "*", "params", ")", ":", "params", "[", "\"api_key\"", "]", "=", "key", "params", "=", "urllib", ".", "urlencode", "(", "params", ")", "headers", "=", "{", "\"Content-type\"", ":", "\"application/x-www-form-urlencoded\"", "}", "try", ":", "conn", "=", "httplib", ".", "HTTPSConnection", "(", "HOSTNAME", ")", "conn", ".", "request", "(", "\"POST\"", ",", "\"/_api/\"", ",", "params", ",", "headers", ")", "response", "=", "conn", ".", "getresponse", "(", ")", "# Never leak \"data\" into public logs.", "data", "=", "response", ".", "read", "(", ")", "except", ":", "raise", "Exception", "(", "\"ERROR: Connection problem.\"", ")", "try", ":", "return", "json", ".", "loads", "(", "data", ")", "except", ":", "raise", "Exception", "(", "\"ERROR: Could not read response. Is your key valid?\"", ")", "return", "None" ]
https://github.com/macchina-io/macchina.io/blob/ef24ba0e18379c3dd48fb84e6dbf991101cb8db0/platform/JS/V8/v8/tools/release/check_clusterfuzz.py#L159-L186
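The function above is Python 2 code (urllib.urlencode, httplib). As a hedged illustration only, the same request flow on Python 3 might look like the sketch below; HOSTNAME here is a placeholder standing in for the module-level constant the original relies on, not the real endpoint.

    # Illustrative sketch, not the project's code: a Python 3 rendering of the
    # same POST-and-parse flow. HOSTNAME is an assumed placeholder value.
    import http.client
    import json
    import urllib.parse

    HOSTNAME = "clusterfuzz.example.com"  # placeholder endpoint

    def api_request_py3(key, **params):
        params["api_key"] = key
        body = urllib.parse.urlencode(params)
        headers = {"Content-type": "application/x-www-form-urlencoded"}
        try:
            conn = http.client.HTTPSConnection(HOSTNAME)
            conn.request("POST", "/_api/", body, headers)
            # Never log this raw payload.
            data = conn.getresponse().read()
        except (OSError, http.client.HTTPException) as e:
            raise Exception("ERROR: Connection problem.") from e
        try:
            return json.loads(data)
        except ValueError as e:
            raise Exception("ERROR: Could not read response. Is your key valid?") from e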
google/google-api-cpp-client
3df15df632bef43eb320645ac3329d872aabf5ad
prepare_dependencies.py
python
JsonCppPackageInstaller.MaybeTweakAfterUnpackage
(self)
Creates a CMakeLists.txt to build the package.
Creates a CMakeLists.txt to build the package.
[ "Creates", "a", "CMakeLists", ".", "txt", "to", "build", "the", "package", "." ]
def MaybeTweakAfterUnpackage(self):
  """Creates a CMakeLists.txt to build the package."""
  config = self._config
  os.chdir(self._package_path)
  if config.force and os.path.exists('CMakeLists.txt'):
    os.unlink('CMakeLists.txt')
  if os.path.exists('CMakeLists.txt'):
    return

  if config.build_packages:
    allfiles = ''
    src_path = os.path.join('src', 'lib_json')
    for elem in glob.glob('%s/*.cpp' % src_path):
      allfiles = '%s "%s"' % (allfiles, elem)

    print '>>> Creating CMakeLists.txt'
    with open('CMakeLists.txt', 'w') as f:
      f.write('cmake_minimum_required (VERSION 2.6)\n')
      f.write('project (JsonCpp)\n')
      f.write('INCLUDE_DIRECTORIES(./include src/lib_json)\n')
      f.write('add_library(jsoncpp STATIC %s)\n' % allfiles)
[ "def", "MaybeTweakAfterUnpackage", "(", "self", ")", ":", "config", "=", "self", ".", "_config", "os", ".", "chdir", "(", "self", ".", "_package_path", ")", "if", "config", ".", "force", "and", "os", ".", "path", ".", "exists", "(", "'CMakeLists.txt'", ")", ":", "os", ".", "unlink", "(", "'CMakeLists.txt'", ")", "if", "os", ".", "path", ".", "exists", "(", "'CMakeLists.txt'", ")", ":", "return", "if", "config", ".", "build_packages", ":", "allfiles", "=", "''", "src_path", "=", "os", ".", "path", ".", "join", "(", "'src'", ",", "'lib_json'", ")", "for", "elem", "in", "glob", ".", "glob", "(", "'%s/*.cpp'", "%", "src_path", ")", ":", "allfiles", "=", "'%s \"%s\"'", "%", "(", "allfiles", ",", "elem", ")", "print", "'>>> Creating CMakeLists.txt'", "with", "open", "(", "'CMakeLists.txt'", ",", "'w'", ")", "as", "f", ":", "f", ".", "write", "(", "'cmake_minimum_required (VERSION 2.6)\\n'", ")", "f", ".", "write", "(", "'project (JsonCpp)\\n'", ")", "f", ".", "write", "(", "'INCLUDE_DIRECTORIES(./include src/lib_json)\\n'", ")", "f", ".", "write", "(", "'add_library(jsoncpp STATIC %s)\\n'", "%", "allfiles", ")" ]
https://github.com/google/google-api-cpp-client/blob/3df15df632bef43eb320645ac3329d872aabf5ad/prepare_dependencies.py#L723-L744
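For illustration, here is how the allfiles string the method accumulates would come out for two hypothetical source files (the filenames are invented for the example); the generated CMakeLists.txt would then end with the add_library line printed below.

    # Illustrative sketch of the string-building step above, with made-up
    # source files standing in for the glob result.
    sources = ["src/lib_json/json_reader.cpp", "src/lib_json/json_value.cpp"]
    allfiles = ''
    for elem in sources:
        allfiles = '%s "%s"' % (allfiles, elem)
    print('add_library(jsoncpp STATIC %s)' % allfiles)
    # -> add_library(jsoncpp STATIC  "src/lib_json/json_reader.cpp" "src/lib_json/json_value.cpp")
    # (note the leading space carried in from the first loop iteration)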
scribusproject/scribus
41ec7c775a060912cf251682a8b1437f753f80f4
codegen/cheetah/Cheetah/SettingsManager.py
python
mergeNestedDictionaries
(dict1, dict2, copy=False, deepcopy=False)
return dict1
Recursively merge the values of dict2 into dict1.

This little function is very handy for selectively overriding settings in
a settings dictionary that has a nested structure.
Recursively merge the values of dict2 into dict1.
[ "Recursively", "merge", "the", "values", "of", "dict2", "into", "dict1", "." ]
def mergeNestedDictionaries(dict1, dict2, copy=False, deepcopy=False):
    """Recursively merge the values of dict2 into dict1.

    This little function is very handy for selectively overriding settings in
    a settings dictionary that has a nested structure.
    """
    if copy:
        dict1 = copyModule.copy(dict1)
    elif deepcopy:
        dict1 = copyModule.deepcopy(dict1)

    for key, val in dict2.iteritems():
        if key in dict1 and isinstance(val, dict) and isinstance(dict1[key], dict):
            dict1[key] = mergeNestedDictionaries(dict1[key], val)
        else:
            dict1[key] = val
    return dict1
[ "def", "mergeNestedDictionaries", "(", "dict1", ",", "dict2", ",", "copy", "=", "False", ",", "deepcopy", "=", "False", ")", ":", "if", "copy", ":", "dict1", "=", "copyModule", ".", "copy", "(", "dict1", ")", "elif", "deepcopy", ":", "dict1", "=", "copyModule", ".", "deepcopy", "(", "dict1", ")", "for", "key", ",", "val", "in", "dict2", ".", "iteritems", "(", ")", ":", "if", "key", "in", "dict1", "and", "isinstance", "(", "val", ",", "dict", ")", "and", "isinstance", "(", "dict1", "[", "key", "]", ",", "dict", ")", ":", "dict1", "[", "key", "]", "=", "mergeNestedDictionaries", "(", "dict1", "[", "key", "]", ",", "val", ")", "else", ":", "dict1", "[", "key", "]", "=", "val", "return", "dict1" ]
https://github.com/scribusproject/scribus/blob/41ec7c775a060912cf251682a8b1437f753f80f4/codegen/cheetah/Cheetah/SettingsManager.py#L19-L36
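As a usage sketch (a Python 3 rendering, since the original uses iteritems and imports copy under the name copyModule), the selective-override behaviour the docstring describes looks like this; the settings dictionaries are invented for the example.

    # Illustrative sketch: same merge logic, Python 3 syntax, made-up data.
    import copy as copyModule

    def merge_nested(dict1, dict2):
        for key, val in dict2.items():
            if key in dict1 and isinstance(val, dict) and isinstance(dict1[key], dict):
                dict1[key] = merge_nested(dict1[key], val)
            else:
                dict1[key] = val
        return dict1

    defaults = {'paths': {'cache': '/tmp', 'data': '/var'}, 'debug': False}
    overrides = {'paths': {'cache': '/mnt/cache'}, 'debug': True}
    print(merge_nested(copyModule.deepcopy(defaults), overrides))
    # {'paths': {'cache': '/mnt/cache', 'data': '/var'}, 'debug': True}
    # Only paths['cache'] is overridden; the nested paths['data'] survives.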
hanpfei/chromium-net
392cc1fa3a8f92f42e4071ab6e674d8e0482f83f
third_party/catapult/third_party/gsutil/third_party/boto/boto/services/servicedef.py
python
ServiceDef.get_obj
(self, name)
return obj
Returns the AWS object associated with a given option.

The heuristics used are a bit lame.  If the option name contains
the word 'bucket' it is assumed to be an S3 bucket, if the name
contains the word 'queue' it is assumed to be an SQS queue and if
it contains the word 'domain' it is assumed to be a SimpleDB domain.
If the option name specified does not exist in the config file or
if the AWS object cannot be retrieved this returns None.
Returns the AWS object associated with a given option.
[ "Returns", "the", "AWS", "object", "associated", "with", "a", "given", "option", "." ]
def get_obj(self, name):
    """
    Returns the AWS object associated with a given option.

    The heuristics used are a bit lame.  If the option name contains
    the word 'bucket' it is assumed to be an S3 bucket, if the name
    contains the word 'queue' it is assumed to be an SQS queue and if
    it contains the word 'domain' it is assumed to be a SimpleDB domain.
    If the option name specified does not exist in the config file or
    if the AWS object cannot be retrieved this returns None.
    """
    val = self.get(name)
    if not val:
        return None
    if name.find('queue') >= 0:
        obj = boto.lookup('sqs', val)
        if obj:
            obj.set_message_class(ServiceMessage)
    elif name.find('bucket') >= 0:
        obj = boto.lookup('s3', val)
    elif name.find('domain') >= 0:
        obj = boto.lookup('sdb', val)
    else:
        obj = None
    return obj
[ "def", "get_obj", "(", "self", ",", "name", ")", ":", "val", "=", "self", ".", "get", "(", "name", ")", "if", "not", "val", ":", "return", "None", "if", "name", ".", "find", "(", "'queue'", ")", ">=", "0", ":", "obj", "=", "boto", ".", "lookup", "(", "'sqs'", ",", "val", ")", "if", "obj", ":", "obj", ".", "set_message_class", "(", "ServiceMessage", ")", "elif", "name", ".", "find", "(", "'bucket'", ")", ">=", "0", ":", "obj", "=", "boto", ".", "lookup", "(", "'s3'", ",", "val", ")", "elif", "name", ".", "find", "(", "'domain'", ")", ">=", "0", ":", "obj", "=", "boto", ".", "lookup", "(", "'sdb'", ",", "val", ")", "else", ":", "obj", "=", "None", "return", "obj" ]
https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/third_party/gsutil/third_party/boto/boto/services/servicedef.py#L64-L89
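For illustration, the substring heuristic can be isolated from boto to show how option names map to service types; the option names below are invented for the example.

    # Illustrative sketch of the name-matching heuristic above, without boto.
    def service_for_option(name):
        if name.find('queue') >= 0:
            return 'sqs'
        elif name.find('bucket') >= 0:
            return 's3'
        elif name.find('domain') >= 0:
            return 'sdb'
        return None

    for opt in ('input_queue', 'output_bucket', 'status_domain', 'log_level'):
        print(opt, '->', service_for_option(opt))
    # input_queue -> sqs, output_bucket -> s3, status_domain -> sdb, log_level -> None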
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_cocoa/_core.py
python
Window.SetSizeWH
(*args, **kwargs)
return _core_.Window_SetSizeWH(*args, **kwargs)
SetSizeWH(self, int width, int height)

Sets the size of the window in pixels.
SetSizeWH(self, int width, int height)
[ "SetSizeWH", "(", "self", "int", "width", "int", "height", ")" ]
def SetSizeWH(*args, **kwargs):
    """
    SetSizeWH(self, int width, int height)

    Sets the size of the window in pixels.
    """
    return _core_.Window_SetSizeWH(*args, **kwargs)
[ "def", "SetSizeWH", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_core_", ".", "Window_SetSizeWH", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/_core.py#L9365-L9371
cocos2d/cocos2d-x
90f6542cf7fb081335f04e474b880d7ce8c445a1
tools/appveyor-scripts/git_retry.py
python
GitRetry.computeDelay
(self, iteration)
return (self.delay_factor ** (iteration - 1)) * self.delay
Returns: the delay (in seconds) for a given iteration

The first iteration has a delay of '0'.

Args:
  iteration: (int) The iteration index (starting with zero as the
      first iteration)
Returns: the delay (in seconds) for a given iteration The first iteration has a delay of '0'. Args: iteration: (int) The iteration index (starting with zero as the first iteration)
[ "Returns", ":", "the", "delay", "(", "in", "seconds", ")", "for", "a", "given", "iteration", "The", "first", "iteration", "has", "a", "delay", "of", "0", ".", "Args", ":", "iteration", ":", "(", "int", ")", "The", "iteration", "index", "(", "starting", "with", "zero", "as", "the", "first", "iteration", ")" ]
def computeDelay(self, iteration):
  """Returns: the delay (in seconds) for a given iteration

  The first iteration has a delay of '0'.

  Args:
    iteration: (int) The iteration index (starting with zero as the
        first iteration)
  """
  if (not self.delay) or (iteration == 0):
    return 0
  if self.delay_factor == 0:
    # Linear delay
    return iteration * self.delay
  # Exponential delay
  return (self.delay_factor ** (iteration - 1)) * self.delay
[ "def", "computeDelay", "(", "self", ",", "iteration", ")", ":", "if", "(", "not", "self", ".", "delay", ")", "or", "(", "iteration", "==", "0", ")", ":", "return", "0", "if", "self", ".", "delay_factor", "==", "0", ":", "# Linear delay", "return", "iteration", "*", "self", ".", "delay", "# Exponential delay", "return", "(", "self", ".", "delay_factor", "**", "(", "iteration", "-", "1", ")", ")", "*", "self", ".", "delay" ]
https://github.com/cocos2d/cocos2d-x/blob/90f6542cf7fb081335f04e474b880d7ce8c445a1/tools/appveyor-scripts/git_retry.py#L89-L102
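As a worked example of the schedule, a standalone rendering of the same logic with an assumed base delay of 2.0 seconds gives a linear ramp when delay_factor is 0 and doubling delays when delay_factor is 2.

    # Illustrative sketch of the delay schedule, with assumed parameters.
    def compute_delay(iteration, delay=2.0, delay_factor=0):
        if (not delay) or (iteration == 0):
            return 0
        if delay_factor == 0:
            return iteration * delay                       # linear backoff
        return (delay_factor ** (iteration - 1)) * delay   # exponential backoff

    print([compute_delay(i, delay=2.0, delay_factor=0) for i in range(4)])
    # [0, 2.0, 4.0, 6.0]
    print([compute_delay(i, delay=2.0, delay_factor=2) for i in range(4)])
    # [0, 2.0, 4.0, 8.0]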