<SYSTEM_TASK:> Saves symbol to a file. <END_TASK> <USER_TASK:> Description:
def save(self, fname):
    """Saves symbol to a file.

    You can also use pickle if you only work in Python. The advantage of the
    `load`/`save` functions is that the file contents are language agnostic:
    a model saved by one language binding can be loaded by a different
    language binding of `MXNet`. You also get the benefit of being able to
    directly load/save from cloud storage (S3, HDFS).

    Parameters
    ----------
    fname : str
        The name of the file.

        - "s3://my-bucket/path/my-s3-symbol"
        - "hdfs://my-bucket/path/my-hdfs-symbol"
        - "/path-to/my-local-symbol"

    See Also
    --------
    symbol.load : Used to load a symbol from a file.
    """
    if not isinstance(fname, string_types):
        raise TypeError('fname needs to be a string')
    check_call(_LIB.MXSymbolSaveToFile(self.handle, c_str(fname)))
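A minimal usage sketch of the save/load round trip (the file path below is an arbitrary example):

    # Sketch: round-trip a symbol through a JSON file.
    import mxnet as mx

    c = mx.sym.Variable('a') + mx.sym.Variable('b')
    c.save('/tmp/plus.json')           # path is an arbitrary example
    c2 = mx.sym.load('/tmp/plus.json')
    assert c.tojson() == c2.tojson()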
<SYSTEM_TASK:> Helper function to get lists of NDArray handles from various inputs. <END_TASK> <USER_TASK:> Description:
def _get_ndarray_inputs(arg_key, args, arg_names, allow_missing):
    """Helper function to get lists of NDArray handles from various inputs.

    Parameters
    ----------
    arg_key : str
        The name of the argument, used for error messages.

    args : list of NDArray or dict of str to NDArray
        Input arguments to the symbols.
        If the type is a list of NDArray, the positions are in the same order as arg_names.
        If the type is a dict of str to NDArray, it maps argument names to the
        corresponding NDArrays.

    arg_names : list of str
        List of argument names.

    allow_missing : bool
        Whether missing arguments are allowed. When allowed, the missing handle
        will be set to None (null).

    Returns
    -------
    handles : list of NDArrayHandle
        The positional list of NDArrayHandles generated from the input.
    """
    # setup args
    arg_handles = []
    arg_arrays = []
    if isinstance(args, list):
        if len(args) != len(arg_names):
            raise ValueError('Length of %s does not match the number of arguments' % arg_key)
        for narr in args:
            if narr is None and allow_missing:
                arg_handles.append(None)
            elif not isinstance(narr, NDArray):
                raise TypeError('Only accept list of NDArrays or dict of str to NDArray')
            else:
                arg_handles.append(narr.handle)
        arg_arrays = args
    elif isinstance(args, dict):
        for name in arg_names:
            if name in args:
                narr = args[name]
                if not isinstance(narr, NDArray):
                    raise TypeError('Only accept list of NDArrays or dict of str to NDArray')
                arg_handles.append(narr.handle)
                arg_arrays.append(narr)
            else:
                if allow_missing:
                    arg_handles.append(None)
                    arg_arrays.append(None)
                else:
                    raise ValueError('key `%s` is missing in `%s`' % (name, arg_key))
    else:
        raise TypeError('Only accept list of NDArrays or dict of str to NDArray')
    return c_array(NDArrayHandle, arg_handles), arg_arrays
<SYSTEM_TASK:> Binds the current symbol to an executor and returns it. <END_TASK> <USER_TASK:> Description:
def bind(self, ctx, args, args_grad=None, grad_req='write',
         aux_states=None, group2ctx=None, shared_exec=None):
    """Binds the current symbol to an executor and returns it.

    We first declare the computation and then bind it to the data to run.
    This function returns an executor which provides a `forward()` method for
    evaluation and an `outputs` property to get all the results.

    Example
    -------
    >>> a = mx.sym.Variable('a')
    >>> b = mx.sym.Variable('b')
    >>> c = a + b
    >>> c
    <Symbol _plus1>
    >>> ex = c.bind(ctx=mx.cpu(), args={'a' : mx.nd.ones([2,3]), 'b' : mx.nd.ones([2,3])})
    >>> ex.forward()
    [<NDArray 2x3 @cpu(0)>]
    >>> ex.outputs[0].asnumpy()
    [[ 2.  2.  2.]
     [ 2.  2.  2.]]

    Parameters
    ----------
    ctx : Context
        The device context on which the generated executor runs.

    args : list of NDArray or dict of str to NDArray
        Input arguments to the symbol.

        - If the input type is a list of `NDArray`, the order should be the same
          as the order of `list_arguments()`.
        - If the input type is a dict of str to `NDArray`, then it maps the names
          of arguments to the corresponding `NDArray`.
        - In either case, all the arguments must be provided.

    args_grad : list of NDArray or dict of str to `NDArray`, optional
        When specified, `args_grad` provides NDArrays to hold the result of the
        gradient computed in the backward pass.

        - If the input type is a list of `NDArray`, the order should be the same
          as the order of `list_arguments()`.
        - If the input type is a dict of str to `NDArray`, then it maps the names
          of arguments to the corresponding NDArray.
        - When the type is a dict of str to `NDArray`, one only needs to provide
          the dict for the required argument gradients. Only the specified
          argument gradients will be calculated.

    grad_req : {'write', 'add', 'null'}, or list of str or dict of str to str, optional
        Specifies how the gradient is updated into `args_grad`.

        - 'write' means the gradient is written to the specified `args_grad`
          `NDArray` every time.
        - 'add' means the gradient is added to the specified NDArray every time.
        - 'null' means no action is taken; the gradient may not be calculated.

    aux_states : list of `NDArray`, or dict of str to `NDArray`, optional
        Input auxiliary states to the symbol, only needed when the output of
        `list_auxiliary_states()` is not empty.

        - If the input type is a list of `NDArray`, the order should be the same
          as the order of `list_auxiliary_states()`.
        - If the input type is a dict of str to `NDArray`, then it maps the names
          of `auxiliary_states` to the corresponding `NDArray`.
        - In either case, all the auxiliary states need to be provided.

    group2ctx : dict of string to mx.Context
        The dict mapping the `ctx_group` attribute to the context assignment.

    shared_exec : mx.executor.Executor
        Executor to share memory with. This is intended for runtime reshaping,
        variable-length sequences, etc. The returned executor shares state with
        `shared_exec`, and should not be used in parallel with it.

    Returns
    -------
    executor : Executor
        The generated executor.

    Notes
    -----
    Auxiliary states are special states of symbols that do not correspond to an
    argument, and do not have gradients, but are still useful for the specific
    operations. Common examples of auxiliary states include the `moving_mean`
    and `moving_variance` states in `BatchNorm`. Most operators do not have
    auxiliary states and in those cases, this parameter can be safely ignored.

    One can give up gradients by using a dict for `args_grad` and only
    specifying the gradients one is interested in.
    """
    # pylint: disable=too-many-locals, too-many-branches
    if not isinstance(ctx, Context):
        raise TypeError("Context type error")

    listed_arguments = self.list_arguments()
    args_handle, args = self._get_ndarray_inputs('args', args, listed_arguments, False)
    # setup args gradient
    if args_grad is None:
        args_grad_handle = c_array(NDArrayHandle, [None] * len(args))
    else:
        args_grad_handle, args_grad = self._get_ndarray_inputs(
            'args_grad', args_grad, listed_arguments, True)

    if aux_states is None:
        aux_states = []
    aux_args_handle, aux_states = self._get_ndarray_inputs(
        'aux_states', aux_states, self.list_auxiliary_states(), False)

    # setup requirements
    if isinstance(grad_req, string_types):
        if grad_req not in _GRAD_REQ_MAP:
            raise ValueError('grad_req must be in %s' % str(_GRAD_REQ_MAP))
        reqs_array = c_array_buf(mx_uint,
                                 array('I', [_GRAD_REQ_MAP[grad_req]] * len(listed_arguments)))
    elif isinstance(grad_req, list):
        reqs_array = c_array_buf(mx_uint,
                                 array('I', [_GRAD_REQ_MAP[item] for item in grad_req]))
    elif isinstance(grad_req, dict):
        req_array = []
        for name in listed_arguments:
            if name in grad_req:
                req_array.append(_GRAD_REQ_MAP[grad_req[name]])
            else:
                req_array.append(0)
        reqs_array = c_array_buf(mx_uint, array('I', req_array))

    ctx_map_keys = []
    ctx_map_dev_types = []
    ctx_map_dev_ids = []
    if group2ctx:
        for key, val in group2ctx.items():
            ctx_map_keys.append(key)
            ctx_map_dev_types.append(val.device_typeid)
            ctx_map_dev_ids.append(val.device_id)

    handle = ExecutorHandle()
    shared_handle = shared_exec.handle if shared_exec is not None else ExecutorHandle()
    check_call(_LIB.MXExecutorBindEX(self.handle,
                                     ctypes.c_int(ctx.device_typeid),
                                     ctypes.c_int(ctx.device_id),
                                     mx_uint(len(ctx_map_keys)),
                                     c_str_array(ctx_map_keys),
                                     c_array_buf(ctypes.c_int, array('i', ctx_map_dev_types)),
                                     c_array_buf(ctypes.c_int, array('i', ctx_map_dev_ids)),
                                     mx_uint(len(args)),
                                     args_handle,
                                     args_grad_handle,
                                     reqs_array,
                                     mx_uint(len(aux_states)),
                                     aux_args_handle,
                                     shared_handle,
                                     ctypes.byref(handle)))
    executor = Executor(handle, self, ctx, grad_req, group2ctx)
    executor.arg_arrays = args
    executor.grad_arrays = args_grad
    executor.aux_arrays = aux_states
    return executor
<SYSTEM_TASK:> Gets the autodiff of current symbol. <END_TASK> <USER_TASK:> Description:
def gradient(self, wrt):
    """Gets the autodiff of the current symbol.

    This function can only be used if the current symbol is a loss function.

    .. note:: This function is currently not implemented.

    Parameters
    ----------
    wrt : list of str
        Keyword arguments of the symbol that the gradients are taken with respect to.

    Returns
    -------
    grad : Symbol
        A gradient Symbol whose returns are the corresponding gradients.
    """
    handle = SymbolHandle()
    c_wrt = c_str_array(wrt)
    check_call(_LIB.MXSymbolGrad(self.handle,
                                 mx_uint(len(wrt)),
                                 c_wrt,
                                 ctypes.byref(handle)))
    return Symbol(handle)
<SYSTEM_TASK:> Evaluates a symbol given arguments. <END_TASK> <USER_TASK:> Description:
def eval(self, ctx=None, **kwargs):
    """Evaluates a symbol given arguments.

    The `eval` method combines a call to `bind` (which returns an executor)
    with a call to `forward` (an executor method). For the common use case,
    where you might repeatedly evaluate with the same arguments, `eval` is
    slow. In that case, you should call `bind` once and then repeatedly call
    `forward`. This function allows simpler syntax for less cumbersome
    introspection.

    Example
    -------
    >>> a = mx.sym.Variable('a')
    >>> b = mx.sym.Variable('b')
    >>> c = a + b
    >>> ex = c.eval(ctx = mx.cpu(), a = mx.nd.ones([2,3]), b = mx.nd.ones([2,3]))
    >>> ex
    [<NDArray 2x3 @cpu(0)>]
    >>> ex[0].asnumpy()
    array([[ 2.,  2.,  2.],
           [ 2.,  2.,  2.]], dtype=float32)

    Parameters
    ----------
    ctx : Context
        The device context on which the generated executor runs.

    kwargs : keyword arguments of type `NDArray`
        Input arguments to the symbol. All the arguments must be provided.

    Returns
    -------
    result : list of NDArray
        A list of NDArrays corresponding to the values taken by each symbol
        when evaluated on the given args. When called on a single symbol
        (not a group), the result will be a list with one element.
    """
    if ctx is None:
        ctx = current_context()
    return self.bind(ctx, kwargs).forward()
<SYSTEM_TASK:> Return symbol for target backend. <END_TASK> <USER_TASK:> Description:
def get_backend_symbol(self, backend):
    """Return the symbol for the target backend.

    Parameters
    ----------
    backend : str
        The backend name.

    Returns
    -------
    out : Symbol
        The created Symbol for the target backend.
    """
    out = SymbolHandle()
    check_call(_LIB.MXGenBackendSubgraph(self.handle, c_str(backend), ctypes.byref(out)))
    return Symbol(out)
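A hedged usage sketch; 'MKLDNN' is a subgraph backend shipped with some MXNet builds, so its availability is an assumption about your build:

    import mxnet as mx

    net = mx.sym.Variable('data')
    net = mx.sym.FullyConnected(net, num_hidden=10, name='fc')
    # Partition the graph for the MKLDNN subgraph backend (build-dependent).
    optimized = net.get_backend_symbol('MKLDNN')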
<SYSTEM_TASK:> Returns a module loaded with the provided model. <END_TASK> <USER_TASK:> Description:
def load_model(model_name, epoch_num, data_shapes, label_shapes, label_names, gpus=''):
    """Returns a module loaded with the provided model.

    Parameters
    ----------
    model_name: str
        Prefix of the MXNet model name as stored in the local directory.

    epoch_num : int
        Epoch number of the model we would like to load.

    data_shapes: list of (str, tuple)
        List of tuples where each tuple is a pair of an input variable name and its shape.

    label_shapes: list of (str, tuple)
        Typically is ``data_iter.provide_label``.

    label_names: list of str
        Names of the output labels in the MXNet symbolic graph.

    gpus: str
        Comma-separated string of GPU ids on which inference is executed.
        E.g. "3,5,6" would refer to GPUs 3, 5 and 6. If empty, the CPU is used.

    Returns
    -------
    MXNet module
    """
    sym, arg_params, aux_params = mx.model.load_checkpoint(model_name, epoch_num)
    mod = create_module(sym, data_shapes, label_shapes, label_names, gpus)
    mod.set_params(
        arg_params=arg_params,
        aux_params=aux_params,
        allow_missing=True
    )
    return mod
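A hedged usage sketch; the checkpoint prefix 'resnet-18', the epoch, and the input shape are illustrative assumptions:

    # Assumes resnet-18-symbol.json and resnet-18-0000.params exist locally.
    mod = load_model('resnet-18', 0,
                     data_shapes=[('data', (1, 3, 224, 224))],
                     label_shapes=None,
                     label_names=[],
                     gpus='')  # empty string -> run on CPU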
<SYSTEM_TASK:> Creates a new MXNet module. <END_TASK> <USER_TASK:> Description:
def create_module(sym, data_shapes, label_shapes, label_names, gpus=''):
    """Creates a new MXNet module.

    Parameters
    ----------
    sym : Symbol
        An MXNet symbol.

    data_shapes: list of (str, tuple)
        List of tuples where each tuple is a pair of an input variable name and its shape.

    label_shapes: list of (str, tuple)
        Typically is ``data_iter.provide_label``.

    label_names: list of str
        Names of the output labels in the MXNet symbolic graph.

    gpus: str
        Comma-separated string of GPU ids on which inference is executed.
        E.g. "3,5,6" would refer to GPUs 3, 5 and 6. If empty, the CPU is used.

    Returns
    -------
    MXNet module
    """
    if gpus == '':
        devices = mx.cpu()
    else:
        devices = [mx.gpu(int(i)) for i in gpus.split(',')]
    data_names = [data_shape[0] for data_shape in data_shapes]
    mod = mx.mod.Module(
        symbol=sym,
        data_names=data_names,
        context=devices,
        label_names=label_names
    )
    mod.bind(
        for_training=False,
        data_shapes=data_shapes,
        label_shapes=label_shapes
    )
    return mod
<SYSTEM_TASK:> evaluate network given validation record file <END_TASK> <USER_TASK:> Description:
def evaluate_net(net, path_imgrec, num_classes, num_batch, mean_pixels, data_shape,
                 model_prefix, epoch, ctx=mx.cpu(), batch_size=32,
                 path_imglist="", nms_thresh=0.45, force_nms=False,
                 ovp_thresh=0.5, use_difficult=False, class_names=None,
                 voc07_metric=False):
    """
    Evaluate a network given a validation record file.

    Parameters:
    ----------
    net : str or None
        Network name, or None to load from json without modification
    path_imgrec : str
        path to the record validation file
    path_imglist : str
        path to the list file to replace labels in the record file, optional
    num_classes : int
        number of classes, not including background
    num_batch : int
        number of batches to run for the speed measurement
    mean_pixels : tuple
        (mean_r, mean_g, mean_b)
    data_shape : tuple or int
        (3, height, width) or height/width
    model_prefix : str
        model prefix of the saved checkpoint
    epoch : int
        load model epoch
    ctx : mx.ctx
        mx.gpu() or mx.cpu()
    batch_size : int
        validation batch size
    nms_thresh : float
        non-maximum suppression threshold
    force_nms : boolean
        whether to suppress objects of different classes
    ovp_thresh : float
        AP overlap threshold for true/false positives
    use_difficult : boolean
        whether to use difficult objects in evaluation if applicable
    class_names : comma-separated str
        class names as a string; must correspond to num_classes if set
    voc07_metric : boolean
        whether to use 11-point evaluation as in the VOC07 competition
    """
    # set up logger
    logging.basicConfig()
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    # args
    if isinstance(data_shape, int):
        data_shape = (3, data_shape, data_shape)
    assert len(data_shape) == 3 and data_shape[0] == 3
    model_prefix += '_' + str(data_shape[1])

    # iterator
    eval_iter = DetRecordIter(path_imgrec, batch_size, data_shape,
                              mean_pixels=mean_pixels, path_imglist=path_imglist,
                              **cfg.valid)
    # model params
    load_net, args, auxs = mx.model.load_checkpoint(model_prefix, epoch)
    # network
    if net is None:
        net = load_net
    else:
        net = get_symbol(net, data_shape[1], num_classes=num_classes,
                         nms_thresh=nms_thresh, force_suppress=force_nms)
    if 'label' not in net.list_arguments():
        label = mx.sym.Variable(name='label')
        net = mx.sym.Group([net, label])

    # init module
    mod = mx.mod.Module(net, label_names=('label',), logger=logger, context=ctx,
                        fixed_param_names=net.list_arguments())
    mod.bind(data_shapes=eval_iter.provide_data, label_shapes=eval_iter.provide_label)
    mod.set_params(args, auxs, allow_missing=False, force_init=True)

    # run evaluation
    if voc07_metric:
        metric = VOC07MApMetric(ovp_thresh, use_difficult, class_names)
    else:
        metric = MApMetric(ovp_thresh, use_difficult, class_names)

    num = num_batch * batch_size
    data = [mx.random.uniform(-1.0, 1.0, shape=shape, ctx=ctx)
            for _, shape in mod.data_shapes]
    batch = mx.io.DataBatch(data, [])  # empty label

    dry_run = 5  # use 5 iterations to warm up
    for i in range(dry_run):
        mod.forward(batch, is_train=False)
        for output in mod.get_outputs():
            output.wait_to_read()

    tic = time.time()
    results = mod.score(eval_iter, metric, num_batch=num_batch)
    speed = num / (time.time() - tic)
    if logger is not None:
        logger.info('Finished inference with %d images', num)
        logger.info('Finished with %f images per second', speed)

    for k, v in results:
        print("{}: {}".format(k, v))
<SYSTEM_TASK:> Initializes the parameters and auxiliary states. By default this function <END_TASK> <USER_TASK:> Description:
def init_params(self, initializer=Uniform(0.01), arg_params=None, aux_params=None,
                allow_missing=False, force_init=False, allow_extra=False):
    """Initializes the parameters and auxiliary states. By default this function
    does nothing. A subclass should override this method if it contains parameters.

    Parameters
    ----------
    initializer : Initializer
        Called to initialize parameters if needed.
    arg_params : dict
        If not ``None``, should be a dictionary of existing `arg_params`.
        Initialization will be copied from it.
    aux_params : dict
        If not ``None``, should be a dictionary of existing `aux_params`.
        Initialization will be copied from it.
    allow_missing : bool
        If ``True``, params could contain missing values, and the initializer
        will be called to fill those missing params.
    force_init : bool
        If ``True``, will force re-initialization even if already initialized.
    allow_extra : boolean, optional
        Whether to allow extra parameters that are not needed by the symbol.
        If this is True, no error will be thrown when arg_params or aux_params
        contain extra parameters that are not needed by the executor.
    """
pass
<SYSTEM_TASK:> Evaluates and accumulates evaluation metric on outputs of the last forward computation. <END_TASK> <USER_TASK:> Description:
def update_metric(self, eval_metric, labels, pre_sliced=False):
    """Evaluates and accumulates evaluation metric on outputs of the last forward computation.

    A subclass should override this method if needed.

    Parameters
    ----------
    eval_metric : EvalMetric
    labels : list of NDArray
        Typically ``data_batch.label``.
    pre_sliced : bool
        Whether the labels are already sliced per device (default: False).
    """
    if self._label_shapes is None:
        # since we do not need labels, we are probably not a module with a loss
        # function or predictions, so just ignore this call
        return

    if pre_sliced:
        raise RuntimeError("PythonModule does not support presliced labels")

    # by default we expect our outputs are some scores that could be evaluated
    eval_metric.update(labels, self.get_outputs())
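A minimal evaluation-loop sketch showing where `update_metric` fits; `mod` (a bound module with trained parameters) and `val_iter` (a data iterator) are assumed:

    metric = mx.metric.Accuracy()
    for batch in val_iter:
        mod.forward(batch, is_train=False)
        mod.update_metric(metric, batch.label)
    print(metric.get())  # -> ('accuracy', <value>)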
<SYSTEM_TASK:> Forward computation. Here we do nothing but to keep a reference to <END_TASK> <USER_TASK:> Description:
def forward(self, data_batch, is_train=None):
    """Forward computation. Here we do nothing but keep a reference to
    the scores and the labels so that we can do the backward computation.

    Parameters
    ----------
    data_batch : DataBatch
        Could be anything with a similar API implemented.
    is_train : bool
        Default is ``None``, which means `is_train` takes the value of
        ``self.for_training``.
    """
    self._scores = data_batch.data[0]

    if is_train is None:
        is_train = self.for_training

    if is_train:
        self._labels = data_batch.label[0]
<SYSTEM_TASK:> Actual implementation of the backward computation. The computation <END_TASK> <USER_TASK:> Description:
def _backward_impl(self):
    """Actual implementation of the backward computation. The computation
    should take ``self._scores`` and ``self._labels``, compute the gradients
    with respect to the scores, and store them as an `NDArray` in
    ``self._scores_grad``.

    Instead of defining a subclass and overriding this function, a more
    convenient way is to pass in a `grad_func` when constructing the module
    object. It will then be called to compute the gradients.
    """
    if self._grad_func is not None:
        grad = self._grad_func(self._scores, self._labels)
        if not isinstance(grad, nd.NDArray):
            grad = nd.array(grad)
        self._scores_grad = grad
    else:
        raise NotImplementedError()
<SYSTEM_TASK:> Get the variable given a name if one exists or create a new one if missing. <END_TASK> <USER_TASK:> Description:
def get(self, name, **kwargs):
    """Get the variable given a name if one exists, or create a new one if missing.

    Parameters
    ----------
    name : str
        name of the variable
    **kwargs :
        additional arguments passed to ``symbol.Variable``
    """
    name = self._prefix + name
    if name not in self._params:
        self._params[name] = symbol.Variable(name, **kwargs)
    return self._params[name]
<SYSTEM_TASK:> Unpack fused weight matrices into separate <END_TASK> <USER_TASK:> Description:
def unpack_weights(self, args):
    """Unpack fused weight matrices into separate weight matrices.

    For example, say you use a module object `mod` to run a network that has
    an LSTM cell. In `mod.get_params()[0]`, the LSTM parameters are all
    represented as a single big vector. `cell.unpack_weights(mod.get_params()[0])`
    will unpack this vector into a dictionary of more readable LSTM parameters -
    the c, f, i, o gates for the i2h (input to hidden) and h2h (hidden to hidden)
    weights.

    Parameters
    ----------
    args : dict of str -> NDArray
        Dictionary containing packed weights, usually from `Module.get_params()[0]`.

    Returns
    -------
    args : dict of str -> NDArray
        Dictionary with unpacked weights associated with this cell.

    See Also
    --------
    pack_weights: Performs the reverse operation of this function.
    """
    args = args.copy()
    if not self._gate_names:
        return args
    h = self._num_hidden
    for group_name in ['i2h', 'h2h']:
        weight = args.pop('%s%s_weight' % (self._prefix, group_name))
        bias = args.pop('%s%s_bias' % (self._prefix, group_name))
        for j, gate in enumerate(self._gate_names):
            wname = '%s%s%s_weight' % (self._prefix, group_name, gate)
            args[wname] = weight[j*h:(j+1)*h].copy()
            bname = '%s%s%s_bias' % (self._prefix, group_name, gate)
            args[bname] = bias[j*h:(j+1)*h].copy()
    return args
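A hedged usage sketch; `mod` is assumed to be a module trained with this cell:

    # Sketch: inspect readable per-gate LSTM weights from packed module params.
    cell = mx.rnn.LSTMCell(num_hidden=100, prefix='lstm_')
    arg_params, _ = mod.get_params()          # `mod` assumed trained with `cell`
    readable = cell.unpack_weights(arg_params)
    # e.g. readable['lstm_i2h_i_weight'] is the input gate's input-to-hidden weight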
<SYSTEM_TASK:> Pack separate weight matrices into a single packed <END_TASK> <USER_TASK:> Description:
def pack_weights(self, args):
    """Pack separate weight matrices into a single packed weight.

    Parameters
    ----------
    args : dict of str -> NDArray
        Dictionary containing unpacked weights.

    Returns
    -------
    args : dict of str -> NDArray
        Dictionary with packed weights associated with this cell.
    """
    args = args.copy()
    if not self._gate_names:
        return args
    for group_name in ['i2h', 'h2h']:
        weight = []
        bias = []
        for gate in self._gate_names:
            wname = '%s%s%s_weight' % (self._prefix, group_name, gate)
            weight.append(args.pop(wname))
            bname = '%s%s%s_bias' % (self._prefix, group_name, gate)
            bias.append(args.pop(bname))
        args['%s%s_weight' % (self._prefix, group_name)] = ndarray.concatenate(weight)
        args['%s%s_bias' % (self._prefix, group_name)] = ndarray.concatenate(bias)
    return args
<SYSTEM_TASK:> Unroll an RNN cell across time steps. <END_TASK> <USER_TASK:> Description:
def unroll(self, length, inputs, begin_state=None, layout='NTC', merge_outputs=None):
    """Unroll an RNN cell across time steps.

    Parameters
    ----------
    length : int
        Number of steps to unroll.
    inputs : Symbol, list of Symbol, or None
        If `inputs` is a single Symbol (usually the output of an Embedding symbol),
        it should have shape (batch_size, length, ...) if layout == 'NTC',
        or (length, batch_size, ...) if layout == 'TNC'.
        If `inputs` is a list of symbols (usually the output of a previous unroll),
        they should all have shape (batch_size, ...).
    begin_state : nested list of Symbol, default None
        Input states created by `begin_state()` or the output states of another cell.
        Created from `begin_state()` if None.
    layout : str, optional
        `layout` of the input symbol. Only used if inputs is a single Symbol.
    merge_outputs : bool, optional
        If False, return outputs as a list of Symbols.
        If True, concatenate the output across time steps and return a single
        symbol with shape (batch_size, length, ...) if layout == 'NTC',
        or (length, batch_size, ...) if layout == 'TNC'.
        If None, output whatever is faster.

    Returns
    -------
    outputs : list of Symbol or Symbol
        Symbol (if `merge_outputs` is True) or list of Symbols
        (if `merge_outputs` is False) corresponding to the output from the RNN
        from this unrolling.
    states : nested list of Symbol
        The new state of this RNN after this unrolling.
        The type of this symbol is the same as the output of `begin_state()`.
    """
    self.reset()

    inputs, _ = _normalize_sequence(length, inputs, layout, False)
    if begin_state is None:
        begin_state = self.begin_state()

    states = begin_state
    outputs = []
    for i in range(length):
        output, states = self(inputs[i], states)
        outputs.append(output)

    outputs, _ = _normalize_sequence(length, outputs, layout, merge_outputs)

    return outputs, states
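A short sketch of unrolling a cell over a fixed-length sequence (shapes and prefix are illustrative assumptions):

    # Sketch: unroll an LSTM cell for 10 steps over an assumed (batch, 10, feat) input.
    cell = mx.rnn.LSTMCell(num_hidden=50, prefix='lstm_')
    data = mx.sym.Variable('data')  # assumed layout 'NTC'
    outputs, states = cell.unroll(length=10, inputs=data,
                                  layout='NTC', merge_outputs=True)
    # `outputs` is one (batch, 10, 50) symbol; `states` holds the final [h, c].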
<SYSTEM_TASK:> slice fused rnn weights <END_TASK> <USER_TASK:> Description:
def _slice_weights(self, arr, li, lh):
    """Slice fused RNN weights into per-layer, per-gate arrays."""
    args = {}
    gate_names = self._gate_names
    directions = self._directions

    b = len(directions)
    p = 0
    for layer in range(self._num_layers):
        for direction in directions:
            for gate in gate_names:
                name = '%s%s%d_i2h%s_weight' % (self._prefix, direction, layer, gate)
                if layer > 0:
                    size = b * lh * lh
                    args[name] = arr[p:p+size].reshape((lh, b*lh))
                else:
                    size = li * lh
                    args[name] = arr[p:p+size].reshape((lh, li))
                p += size
            for gate in gate_names:
                name = '%s%s%d_h2h%s_weight' % (self._prefix, direction, layer, gate)
                size = lh ** 2
                args[name] = arr[p:p+size].reshape((lh, lh))
                p += size

    for layer in range(self._num_layers):
        for direction in directions:
            for gate in gate_names:
                name = '%s%s%d_i2h%s_bias' % (self._prefix, direction, layer, gate)
                args[name] = arr[p:p+lh]
                p += lh
            for gate in gate_names:
                name = '%s%s%d_h2h%s_bias' % (self._prefix, direction, layer, gate)
                args[name] = arr[p:p+lh]
                p += lh

    assert p == arr.size, "Invalid parameters size for FusedRNNCell"
    return args
<SYSTEM_TASK:> Unfuse the fused RNN into a stack of rnn cells. <END_TASK> <USER_TASK:> Description:
def unfuse(self):
    """Unfuse the fused RNN into a stack of rnn cells.

    Returns
    -------
    cell : mxnet.rnn.SequentialRNNCell
        An unfused cell that can be used for stepping, and can run on a CPU.
    """
    stack = SequentialRNNCell()
    get_cell = {'rnn_relu': lambda cell_prefix: RNNCell(self._num_hidden,
                                                        activation='relu',
                                                        prefix=cell_prefix),
                'rnn_tanh': lambda cell_prefix: RNNCell(self._num_hidden,
                                                        activation='tanh',
                                                        prefix=cell_prefix),
                'lstm': lambda cell_prefix: LSTMCell(self._num_hidden,
                                                     prefix=cell_prefix),
                'gru': lambda cell_prefix: GRUCell(self._num_hidden,
                                                   prefix=cell_prefix)}[self._mode]
    for i in range(self._num_layers):
        if self._bidirectional:
            stack.add(BidirectionalCell(
                get_cell('%sl%d_' % (self._prefix, i)),
                get_cell('%sr%d_' % (self._prefix, i)),
                output_prefix='%sbi_l%d_' % (self._prefix, i)))
        else:
            stack.add(get_cell('%sl%d_' % (self._prefix, i)))

        if self._dropout > 0 and i != self._num_layers - 1:
            stack.add(DropoutCell(self._dropout, prefix='%s_dropout%d_' % (self._prefix, i)))

    return stack
<SYSTEM_TASK:> Append a cell into the stack. <END_TASK> <USER_TASK:> Description:
def add(self, cell):
    """Append a cell to the stack.

    Parameters
    ----------
    cell : BaseRNNCell
        The cell to be appended. During unroll, the previous cell's output
        (or the raw inputs if there is no previous cell) is used as the input
        to this cell.
    """
    self._cells.append(cell)
    if self._override_cell_params:
        assert cell._own_params, \
            "Either specify params for SequentialRNNCell " \
            "or child cells, not both."
        cell.params._params.update(self.params._params)
    self.params._params.update(cell.params._params)
<SYSTEM_TASK:> Get synthetic gradient value <END_TASK> <USER_TASK:> Description:
def synthetic_grad(X, theta, sigma1, sigma2, sigmax, rescale_grad=1.0, grad=None):
    """Get the synthetic gradient value."""
    if grad is None:
        grad = nd.empty(theta.shape, theta.context)
    theta1 = theta.asnumpy()[0]
    theta2 = theta.asnumpy()[1]
    v1 = sigma1 ** 2
    v2 = sigma2 ** 2
    vx = sigmax ** 2
    denominator = numpy.exp(-(X - theta1) ** 2 / (2 * vx)) + numpy.exp(
        -(X - theta1 - theta2) ** 2 / (2 * vx))
    grad_npy = numpy.zeros(theta.shape)
    grad_npy[0] = -rescale_grad * ((numpy.exp(-(X - theta1) ** 2 / (2 * vx)) * (X - theta1) / vx
                                    + numpy.exp(-(X - theta1 - theta2) ** 2 / (2 * vx))
                                    * (X - theta1 - theta2) / vx) / denominator).sum() \
                  + theta1 / v1
    grad_npy[1] = -rescale_grad * ((numpy.exp(-(X - theta1 - theta2) ** 2 / (2 * vx))
                                    * (X - theta1 - theta2) / vx) / denominator).sum() \
                  + theta2 / v2
    grad[:] = grad_npy
    return grad
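For reference, this matches the gradient of the negative log-posterior of a two-component Gaussian mixture (our reading of the code above, not stated in the source): each observation is drawn from a mixture of N(theta1, sigmax^2) and N(theta1+theta2, sigmax^2), with Gaussian priors theta_i ~ N(0, sigma_i^2). Differentiating with respect to theta1 (with rescale_grad = 1) gives

    % Hedged reconstruction of what grad_npy[0] computes, with
    % p(x | theta) proportional to the two-exponential mixture:
    \frac{\partial}{\partial \theta_1}\bigl[-\log p(\theta \mid X)\bigr]
      = -\sum_{x \in X}
        \frac{e^{-\frac{(x-\theta_1)^2}{2\sigma_x^2}}\,\frac{x-\theta_1}{\sigma_x^2}
            + e^{-\frac{(x-\theta_1-\theta_2)^2}{2\sigma_x^2}}\,\frac{x-\theta_1-\theta_2}{\sigma_x^2}}
             {e^{-\frac{(x-\theta_1)^2}{2\sigma_x^2}} + e^{-\frac{(x-\theta_1-\theta_2)^2}{2\sigma_x^2}}}
        + \frac{\theta_1}{\sigma_1^2}

and grad_npy[1] is the analogous expression for theta2, where only the second mixture component depends on theta2.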
<SYSTEM_TASK:> wrapper function for loading pascal voc dataset <END_TASK> <USER_TASK:> Description:
def load_pascal(image_set, year, devkit_path, shuffle=False):
    """
    Wrapper function for loading the Pascal VOC dataset.

    Parameters:
    ----------
    image_set : str
        train, trainval...
    year : str
        2007, 2012 or combinations separated by commas
    devkit_path : str
        root directory of the dataset
    shuffle : bool
        whether to shuffle the initial list

    Returns:
    ----------
    Imdb
    """
    image_set = [y.strip() for y in image_set.split(',')]
    assert image_set, "No image_set specified"
    year = [y.strip() for y in year.split(',')]
    assert year, "No year specified"

    # make sure (# sets == # years)
    if len(image_set) > 1 and len(year) == 1:
        year = year * len(image_set)
    if len(image_set) == 1 and len(year) > 1:
        image_set = image_set * len(year)
    assert len(image_set) == len(year), "Number of sets and year mismatch"

    imdbs = []
    for s, y in zip(image_set, year):
        imdbs.append(PascalVoc(s, y, devkit_path, shuffle, is_train=True))
    if len(imdbs) > 1:
        return ConcatDB(imdbs, shuffle)
    else:
        return imdbs[0]
<SYSTEM_TASK:> wrapper function for loading ms coco dataset <END_TASK> <USER_TASK:> Description:
def load_coco(image_set, dirname, shuffle=False):
    """
    Wrapper function for loading the MS COCO dataset.

    Parameters:
    ----------
    image_set : str
        train2014, val2014, valminusminival2014, minival2014
    dirname: str
        root dir for coco
    shuffle: boolean
        whether to shuffle the initial list
    """
    anno_files = ['instances_' + y.strip() + '.json' for y in image_set.split(',')]
    assert anno_files, "No image set specified"
    imdbs = []
    for af in anno_files:
        af_path = os.path.join(dirname, 'annotations', af)
        imdbs.append(Coco(af_path, dirname, shuffle=shuffle))
    if len(imdbs) > 1:
        return ConcatDB(imdbs, shuffle)
    else:
        return imdbs[0]
<SYSTEM_TASK:> Convert a transpose layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_transpose(net, node, module, builder):
    """Convert a transpose layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_name, output_name = _get_input_output_name(net, node)
    name = node['name']
    param = _get_attrs(node)

    axes = literal_eval(param['axes'])
    builder.add_permute(name, axes, input_name, output_name)
<SYSTEM_TASK:> Convert a flatten layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_flatten(net, node, module, builder):
    """Convert a flatten layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_name, output_name = _get_input_output_name(net, node)
    name = node['name']
    mode = 0  # CHANNEL_FIRST
    builder.add_flatten(name, mode, input_name, output_name)
<SYSTEM_TASK:> Convert an activation layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_activation(net, node, module, builder):
    """Convert an activation layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_name, output_name = _get_input_output_name(net, node)
    name = node['name']
    mx_non_linearity = _get_attrs(node)['act_type']
    # TODO add SCALED_TANH, SOFTPLUS, SOFTSIGN, SIGMOID_HARD, LEAKYRELU, PRELU,
    # ELU, PARAMETRICSOFTPLUS, THRESHOLDEDRELU, LINEAR
    if mx_non_linearity == 'relu':
        non_linearity = 'RELU'
    elif mx_non_linearity == 'tanh':
        non_linearity = 'TANH'
    elif mx_non_linearity == 'sigmoid':
        non_linearity = 'SIGMOID'
    else:
        raise TypeError('Unknown activation type %s' % mx_non_linearity)
    builder.add_activation(name=name,
                           non_linearity=non_linearity,
                           input_name=input_name,
                           output_name=output_name)
<SYSTEM_TASK:> Convert a leakyrelu layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_leakyrelu(net, node, module, builder):
    """Convert a leakyrelu layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_name, output_name = _get_input_output_name(net, node)
    name = node['name']
    inputs = node['inputs']
    args, _ = module.get_params()
    mx_non_linearity = _get_attrs(node)['act_type']
    if mx_non_linearity == 'elu':
        non_linearity = 'ELU'
        slope = _get_attrs(node)['slope'] if 'slope' in _get_attrs(node) else 0.25
        params = slope
    elif mx_non_linearity == 'leaky':
        non_linearity = 'LEAKYRELU'
        slope = _get_attrs(node)['slope'] if 'slope' in _get_attrs(node) else 0.25
        params = [slope]
    elif mx_non_linearity == 'prelu':
        non_linearity = 'PRELU'
        params = args[_get_node_name(net, inputs[1][0])].asnumpy()
    else:
        raise TypeError('Unknown activation type %s' % mx_non_linearity)
    builder.add_activation(name=name,
                           non_linearity=non_linearity,
                           input_name=input_name,
                           output_name=output_name,
                           params=params)
<SYSTEM_TASK:> Convert an elementwise add layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_elementwise_add(net, node, module, builder):
    """Convert an elementwise add layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_names, output_name = _get_input_output_name(net, node, [0, 1])
    name = node['name']

    builder.add_elementwise(name, input_names, output_name, 'ADD')
<SYSTEM_TASK:> Convert a convolution layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_convolution(net, node, module, builder):
    """Convert a convolution layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_name, output_name = _get_input_output_name(net, node)
    name = node['name']
    param = _get_attrs(node)
    inputs = node['inputs']
    args, _ = module.get_params()

    if 'no_bias' in param.keys():
        has_bias = not literal_eval(param['no_bias'])
    else:
        has_bias = True

    if 'pad' in param.keys() and literal_eval(param['pad']) != (0, 0):
        pad = literal_eval(param['pad'])
        builder.add_padding(
            name=name+"_pad",
            left=pad[1],
            right=pad[1],
            top=pad[0],
            bottom=pad[0],
            value=0,
            input_name=input_name,
            output_name=name+"_pad_output")
        input_name = name+"_pad_output"

    border_mode = "valid"

    n_filters = int(param['num_filter'])

    n_groups = int(param['num_group']) if 'num_group' in param else 1

    W = args[_get_node_name(net, inputs[1][0])].asnumpy()
    if has_bias:
        Wb = args[_get_node_name(net, inputs[2][0])].asnumpy()
    else:
        Wb = None

    channels = W.shape[1]

    stride_height = 1
    stride_width = 1
    if 'stride' in param.keys():
        stride_height, stride_width = literal_eval(param['stride'])

    kernel_height, kernel_width = literal_eval(param['kernel'])

    W = W.transpose((2, 3, 1, 0))
    builder.add_convolution(
        name=name,
        kernel_channels=channels,
        output_channels=n_filters,
        height=kernel_height,
        width=kernel_width,
        stride_height=stride_height,
        stride_width=stride_width,
        border_mode=border_mode,
        groups=n_groups,
        W=W,
        b=Wb,
        has_bias=has_bias,
        is_deconv=False,
        output_shape=None,
        input_name=input_name,
        output_name=output_name)
<SYSTEM_TASK:> Convert a pooling layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_pooling(net, node, module, builder):
    """Convert a pooling layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_name, output_name = _get_input_output_name(net, node)
    name = node['name']
    param = _get_attrs(node)

    layer_type_mx = param['pool_type']
    if layer_type_mx == 'max':
        layer_type = 'MAX'
    elif layer_type_mx == 'avg':
        layer_type = 'AVERAGE'
    else:
        raise TypeError("Pooling type %s not supported" % layer_type_mx)

    # Add padding if there is any
    if 'pad' in param.keys() and literal_eval(param['pad']) != (0, 0):
        pad = literal_eval(param['pad'])
        builder.add_padding(
            name=name+"_pad",
            left=pad[1],
            right=pad[1],
            top=pad[0],
            bottom=pad[0],
            value=0,
            input_name=input_name,
            output_name=name+"_pad_output")
        input_name = name+"_pad_output"

    stride_height = 1
    stride_width = 1
    if 'stride' in param.keys():
        stride_height, stride_width = literal_eval(param['stride'])

    # MXNet stores the kernel as (height, width), matching the convolution converter.
    kernel_height, kernel_width = literal_eval(param['kernel'])

    type_map = {'valid': 'VALID', 'full': 'INCLUDE_LAST_PIXEL'}
    padding_type = param['pooling_convention'] if 'pooling_convention' in param else 'valid'
    if padding_type not in type_map:
        raise KeyError("%s type is not supported in this converter. It is a Github issue."
                       % padding_type)
    padding_type = type_map[padding_type]

    if 'global_pool' in param.keys():
        is_global = literal_eval(param['global_pool'])
    else:
        is_global = False

    # For reasons why we are not using the standard builder but have our own
    # implementation, see the function documentation.
    _add_pooling.add_pooling_with_padding_types(
        builder=builder,
        name=name,
        height=kernel_height,
        width=kernel_width,
        stride_height=stride_height,
        stride_width=stride_width,
        layer_type=layer_type,
        padding_type=padding_type,
        exclude_pad_area=False,
        is_global=is_global,
        input_name=input_name,
        output_name=output_name
    )
<SYSTEM_TASK:> Convert a batchnorm layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_batchnorm(net, node, module, builder):
    """Convert a batchnorm layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    input_name, output_name = _get_input_output_name(net, node)
    name = node['name']
    inputs = node['inputs']

    eps = 1e-3  # Default value of eps for MXNet.
    use_global_stats = False  # Default value of use_global_stats for MXNet.
    fix_gamma = True  # Default value of fix_gamma for MXNet.
    attrs = _get_attrs(node)
    if 'eps' in attrs:
        eps = literal_eval(attrs['eps'])
    if 'fix_gamma' in attrs:
        fix_gamma = literal_eval(attrs['fix_gamma'])

    args, aux = module.get_params()
    gamma = args[_get_node_name(net, inputs[1][0])].asnumpy()
    beta = args[_get_node_name(net, inputs[2][0])].asnumpy()
    mean = aux[_get_node_name(net, inputs[3][0])].asnumpy()
    variance = aux[_get_node_name(net, inputs[4][0])].asnumpy()
    nb_channels = gamma.shape[0]
    if fix_gamma:
        gamma.fill(1.)
    builder.add_batchnorm(
        name=name,
        channels=nb_channels,
        gamma=gamma,
        beta=beta,
        mean=mean,
        variance=variance,
        input_name=input_name,
        output_name=output_name,
        epsilon=eps)
<SYSTEM_TASK:> Convert concat layer from mxnet to coreml. <END_TASK> <USER_TASK:> Description:
def convert_concat(net, node, module, builder):
    """Convert a concat layer from mxnet to coreml.

    Parameters
    ----------
    net : network
        An mxnet network object.
    node : node
        Node to convert.
    module : module
        A module for MXNet.
    builder : NeuralNetworkBuilder
        A neural network builder object.
    """
    # Get input and output names
    input_names, output_name = _get_input_output_name(net, node, 'all')
    name = node['name']
    mode = 'CONCAT'
    builder.add_elementwise(name=name, input_names=input_names,
                            output_name=output_name, mode=mode)
<SYSTEM_TASK:> Unfuses the fused RNN into a stack of rnn cells. <END_TASK> <USER_TASK:> Description:
def _unfuse(self):
    """Unfuses the fused RNN into a stack of rnn cells."""
    assert not self._projection_size, "_unfuse does not support projection layer yet!"
    assert not self._lstm_state_clip_min and not self._lstm_state_clip_max, \
        "_unfuse does not support state clipping yet!"
    get_cell = {'rnn_relu': lambda **kwargs: rnn_cell.RNNCell(self._hidden_size,
                                                              activation='relu',
                                                              **kwargs),
                'rnn_tanh': lambda **kwargs: rnn_cell.RNNCell(self._hidden_size,
                                                              activation='tanh',
                                                              **kwargs),
                'lstm': lambda **kwargs: rnn_cell.LSTMCell(self._hidden_size,
                                                           **kwargs),
                'gru': lambda **kwargs: rnn_cell.GRUCell(self._hidden_size,
                                                         **kwargs)}[self._mode]

    stack = rnn_cell.HybridSequentialRNNCell(prefix=self.prefix, params=self.params)
    with stack.name_scope():
        ni = self._input_size
        for i in range(self._num_layers):
            kwargs = {'input_size': ni,
                      'i2h_weight_initializer': self._i2h_weight_initializer,
                      'h2h_weight_initializer': self._h2h_weight_initializer,
                      'i2h_bias_initializer': self._i2h_bias_initializer,
                      'h2h_bias_initializer': self._h2h_bias_initializer}
            if self._dir == 2:
                stack.add(rnn_cell.BidirectionalCell(
                    get_cell(prefix='l%d_' % i, **kwargs),
                    get_cell(prefix='r%d_' % i, **kwargs)))
            else:
                stack.add(get_cell(prefix='l%d_' % i, **kwargs))

            if self._dropout > 0 and i != self._num_layers - 1:
                stack.add(rnn_cell.DropoutCell(self._dropout))

            ni = self._hidden_size * self._dir

    return stack
<SYSTEM_TASK:> forward using CUDNN or CPU kernel <END_TASK> <USER_TASK:> Description:
def _forward_kernel(self, F, inputs, states, **kwargs):
    """Forward using the CuDNN or CPU kernel."""
    if self._layout == 'NTC':
        inputs = F.swapaxes(inputs, dim1=0, dim2=1)
    if self._projection_size is None:
        params = (kwargs['{}{}_{}_{}'.format(d, l, g, t)].reshape(-1)
                  for t in ['weight', 'bias']
                  for l in range(self._num_layers)
                  for d in ['l', 'r'][:self._dir]
                  for g in ['i2h', 'h2h'])
    else:
        params = (kwargs['{}{}_{}_{}'.format(d, l, g, t)].reshape(-1)
                  for t in ['weight', 'bias']
                  for l in range(self._num_layers)
                  for d in ['l', 'r'][:self._dir]
                  for g in ['i2h', 'h2h', 'h2r']
                  if g != 'h2r' or t != 'bias')

    params = F._internal._rnn_param_concat(*params, dim=0)

    rnn = F.RNN(inputs, params, *states, state_size=self._hidden_size,
                projection_size=self._projection_size,
                num_layers=self._num_layers, bidirectional=self._dir == 2,
                p=self._dropout, state_outputs=True, mode=self._mode,
                lstm_state_clip_min=self._lstm_state_clip_min,
                lstm_state_clip_max=self._lstm_state_clip_max,
                lstm_state_clip_nan=self._lstm_state_clip_nan)

    if self._mode == 'lstm':
        outputs, states = rnn[0], [rnn[1], rnn[2]]
    else:
        outputs, states = rnn[0], [rnn[1]]

    if self._layout == 'NTC':
        outputs = F.swapaxes(outputs, dim1=0, dim2=1)

    return outputs, states
<SYSTEM_TASK:> Measure the accuracy of ResNet <END_TASK> <USER_TASK:> Description:
def evaluate_accuracy(data_iterator, network):
    """Measure the accuracy of ResNet.

    Parameters
    ----------
    data_iterator: Iter
        examples of the dataset
    network: ResNet

    Returns
    -------
    float
        The accuracy over the dataset.
    """
    acc = mx.metric.Accuracy()

    # Iterate through data and label
    for i, (data, label) in enumerate(data_iterator):

        # Get the data and label onto the first device
        # (`ctx` is a list of contexts defined at module level)
        data = data.as_in_context(ctx[0])
        label = label.as_in_context(ctx[0])

        # Get the network's output, which is a probability distribution.
        # Apply argmax on the probability distribution to get the classification.
        output = network(data)
        predictions = nd.argmax(output, axis=1)

        # Give the network's prediction and the correct label to update the metric
        acc.update(preds=predictions, labels=label)

    # Return the accuracy
    return acc.get()[1]
<SYSTEM_TASK:> Training with multiple GPUs <END_TASK> <USER_TASK:> Description:
def train_batch(batch_list, context, network, gluon_trainer):
    """Training with multiple GPUs.

    Parameters
    ----------
    batch_list: list
        a [data, label] pair from the dataset
    context: list
        a list of all GPUs to be used for training
    network: ResNet
    gluon_trainer: gluon.Trainer
        the Gluon trainer (optimizer wrapper) used to update the parameters
    """
    # Split and load data into multiple GPUs
    data = batch_list[0]
    data = gluon.utils.split_and_load(data, context)

    # Split and load label into multiple GPUs
    label = batch_list[1]
    label = gluon.utils.split_and_load(label, context)

    # Run the forward and backward pass
    forward_backward(network, data, label)

    # Update the parameters
    this_batch_size = batch_list[0].shape[0]
    gluon_trainer.step(this_batch_size)
<SYSTEM_TASK:> Take an executor's underlying symbol graph and return its generated optimized version. <END_TASK> <USER_TASK:> Description:
def get_optimized_symbol(executor):
    """Take an executor's underlying symbol graph and return its generated optimized version.

    Parameters
    ----------
    executor :
        An executor for which you want to see an optimized symbol. Getting an
        optimized symbol is useful to compare and verify the work TensorRT has
        done against the legacy behaviour.

    Returns
    -------
    symbol : nnvm::Symbol
        The optimized nnvm symbol.
    """
    handle = SymbolHandle()
    try:
        check_call(_LIB.MXExecutorGetOptimizedSymbol(executor.handle, ctypes.byref(handle)))
        result = sym.Symbol(handle=handle)
        return result
    except MXNetError:
        logging.error('Error while trying to fetch TRT optimized symbol for graph. '
                      'Please ensure build was compiled with MXNET_USE_TENSORRT enabled.')
        raise
<SYSTEM_TASK:> Bind current symbol to get an optimized trt executor. <END_TASK> <USER_TASK:> Description:
def tensorrt_bind(symbol, ctx, all_params, type_dict=None, stype_dict=None,
                  group2ctx=None, **kwargs):
    """Bind the current symbol to get an optimized TensorRT executor.

    Parameters
    ----------
    symbol : Symbol
        The symbol you wish to bind, and optimize with TensorRT.

    ctx : Context
        The device context on which the generated executor runs.

    all_params : dict of str->ndarray
        A dictionary of mappings from parameter names to parameter NDArrays.

    type_dict : dict of str->numpy.dtype
        Input type dictionary, name->dtype

    stype_dict : dict of str->str
        Input storage type dictionary, name->storage_type

    group2ctx : dict of string to mx.Context
        The dict mapping the `ctx_group` attribute to the context assignment.

    kwargs : dict of str->shape
        Input shape dictionary, name->shape

    Returns
    -------
    executor : mxnet.Executor
        An optimized TensorRT executor.
    """
    kwargs['shared_buffer'] = all_params
    return symbol.simple_bind(ctx, type_dict=type_dict, stype_dict=stype_dict,
                              group2ctx=group2ctx, **kwargs)
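A hedged usage sketch (the checkpoint prefix, input shape, and environment flag are assumptions; this requires an MXNet build compiled with MXNET_USE_TENSORRT):

    import os
    import mxnet as mx

    os.environ['MXNET_USE_TENSORRT'] = '1'  # assumed runtime switch for TRT passes
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)  # assumed checkpoint
    all_params = dict(arg_params, **aux_params)
    executor = tensorrt_bind(sym, ctx=mx.gpu(0), all_params=all_params,
                             data=(1, 3, 224, 224), grad_req='null')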
<SYSTEM_TASK:> detect all images in iterator <END_TASK> <USER_TASK:> Description:
def detect_iter(self, det_iter, show_timer=False):
    """
    Detect all images in the iterator.

    Parameters:
    ----------
    det_iter : DetIter
        iterator over all testing images
    show_timer : Boolean
        whether to print out the detection exec time

    Returns:
    ----------
    list of detection results
    """
    num_images = det_iter._size
    if not isinstance(det_iter, mx.io.PrefetchingIter):
        det_iter = mx.io.PrefetchingIter(det_iter)
    start = timer()
    detections = self.mod.predict(det_iter).asnumpy()
    time_elapsed = timer() - start
    if show_timer:
        logging.info("Detection time for {} images: {:.4f} sec".format(
            num_images, time_elapsed))
    result = Detector.filter_positive_detections(detections)
    return result
<SYSTEM_TASK:> wrapper for detecting multiple images <END_TASK> <USER_TASK:> Description:
def im_detect(self, im_list, root_dir=None, extension=None, show_timer=False):
    """
    Wrapper for detecting multiple images.

    Parameters:
    ----------
    im_list : list of str
        image path or list of image paths
    root_dir : str
        directory of input images, optional if the image paths already contain
        full directory information
    extension : str
        image extension, e.g. ".jpg", optional

    Returns:
    ----------
    list of detection results in format [det0, det1...], where det is in
    format np.array([id, score, xmin, ymin, xmax, ymax]...)
    """
    test_db = TestDB(im_list, root_dir=root_dir, extension=extension)
    test_iter = DetIter(test_db, 1, self.data_shape, self.mean_pixels,
                        is_train=False)
    return self.detect_iter(test_iter, show_timer)
<SYSTEM_TASK:> visualize detections in one image <END_TASK> <USER_TASK:> Description:
def visualize_detection(self, img, dets, classes=[], thresh=0.6):
    """
    Visualize detections in one image.

    Parameters:
    ----------
    img : numpy.array
        image, in bgr format
    dets : numpy.array
        ssd detections, numpy.array([[id, score, x1, y1, x2, y2]...])
        each row is one object
    classes : tuple or list of str
        class names
    thresh : float
        score threshold
    """
    import matplotlib.pyplot as plt
    import random

    plt.imshow(img)
    height = img.shape[0]
    width = img.shape[1]
    colors = dict()
    for det in dets:
        (klass, score, x0, y0, x1, y1) = det
        if score < thresh:
            continue
        cls_id = int(klass)
        if cls_id not in colors:
            colors[cls_id] = (random.random(), random.random(), random.random())
        xmin = int(x0 * width)
        ymin = int(y0 * height)
        xmax = int(x1 * width)
        ymax = int(y1 * height)
        rect = plt.Rectangle((xmin, ymin), xmax - xmin,
                             ymax - ymin, fill=False,
                             edgecolor=colors[cls_id],
                             linewidth=3.5)
        plt.gca().add_patch(rect)
        class_name = str(cls_id)
        if classes and len(classes) > cls_id:
            class_name = classes[cls_id]
        plt.gca().text(xmin, ymin - 2,
                       '{:s} {:.3f}'.format(class_name, score),
                       bbox=dict(facecolor=colors[cls_id], alpha=0.5),
                       fontsize=12, color='white')
    plt.show()
<SYSTEM_TASK:> wrapper for im_detect and visualize_detection <END_TASK> <USER_TASK:> Description:
def detect_and_visualize(self, im_list, root_dir=None, extension=None,
                         classes=[], thresh=0.6, show_timer=False):
    """
    Wrapper for im_detect and visualize_detection.

    Parameters:
    ----------
    im_list : list of str or str
        image path or list of image paths
    root_dir : str or None
        directory of input images, optional if the image paths already contain
        full directory information
    extension : str or None
        image extension, e.g. ".jpg", optional
    classes : tuple or list of str
        class names
    thresh : float
        score threshold
    show_timer : Boolean
        whether to print out the detection exec time

    Returns:
    ----------
    None; displays the detection results with matplotlib
    """
    dets = self.im_detect(im_list, root_dir, extension, show_timer=show_timer)
    if not isinstance(im_list, list):
        im_list = [im_list]
    assert len(dets) == len(im_list)
    for k, det in enumerate(dets):
        img = cv2.imread(im_list[k])
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        self.visualize_detection(img, det, classes, thresh)
<SYSTEM_TASK:> Runs the caffe upgrade tool on the prototxt to create a prototxt in the latest format. <END_TASK> <USER_TASK:> Description:
def process_network_proto(caffe_root, deploy_proto):
    """
    Runs the caffe upgrade tool on the prototxt to create a prototxt in the latest format.
    This enables us to work only with the latest structures, instead of supporting all
    the variants.

    :param caffe_root: link to caffe root folder, where the upgrade tool is located
    :param deploy_proto: name of the original prototxt file
    :return: name of the new processed prototxt file
    """
    processed_deploy_proto = deploy_proto + ".processed"

    from shutil import copyfile
    copyfile(deploy_proto, processed_deploy_proto)

    # run the upgrade tool on the new file name (same output file)
    import os
    upgrade_tool_command_line = caffe_root + '/build/tools/upgrade_net_proto_text.bin ' \
                                + processed_deploy_proto + ' ' + processed_deploy_proto
    os.system(upgrade_tool_command_line)

    return processed_deploy_proto
<SYSTEM_TASK:> Helper function for margin-based loss. Return a distance matrix given a matrix. <END_TASK> <USER_TASK:> Description:
def get_distance(F, x):
    """Helper function for margin-based loss. Return a distance matrix given a matrix."""
    n = x.shape[0]

    square = F.sum(x ** 2.0, axis=1, keepdims=True)
    distance_square = square + square.transpose() - (2.0 * F.dot(x, x.transpose()))

    # Adding identity to make sqrt work.
    return F.sqrt(distance_square + F.array(np.identity(n)))
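A quick numeric check, using the NDArray frontend for `F` (a choice we make here; the helper reads `x.shape`, which the symbolic frontend does not provide):

    import mxnet as mx

    pts = mx.nd.array([[0., 0.], [3., 4.]])
    d = get_distance(mx.nd, pts)
    # Off-diagonal entries are the pairwise Euclidean distances (here, 5.0);
    # the added identity makes the diagonal sqrt(1) = 1 instead of sqrt(0).
    print(d.asnumpy())  # approx [[1., 5.], [5., 1.]]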
<SYSTEM_TASK:> cross entropy loss with a mask <END_TASK> <USER_TASK:> Description:
def cross_entropy_loss(inputs, labels, rescale_loss=1):
    """Cross-entropy loss with a mask."""
    criterion = mx.gluon.loss.SoftmaxCrossEntropyLoss(weight=rescale_loss)
    loss = criterion(inputs, labels)
    mask = S.var('mask')
    loss = loss * S.reshape(mask, shape=(-1,))
    return S.make_loss(loss.mean())
<SYSTEM_TASK:> word embedding + LSTM Projected <END_TASK> <USER_TASK:> Description:
def rnn(bptt, vocab_size, num_embed, nhid, num_layers, dropout, num_proj, batch_size):
    """Word embedding + projected LSTM."""
    state_names = []
    data = S.var('data')
    weight = S.var("encoder_weight", stype='row_sparse')
    embed = S.sparse.Embedding(data=data, weight=weight, input_dim=vocab_size,
                               output_dim=num_embed, name='embed', sparse_grad=True)
    states = []
    outputs = S.Dropout(embed, p=dropout)
    for i in range(num_layers):
        prefix = 'lstmp%d_' % i
        init_h = S.var(prefix + 'init_h', shape=(batch_size, num_proj),
                       init=mx.init.Zero())
        init_c = S.var(prefix + 'init_c', shape=(batch_size, nhid),
                       init=mx.init.Zero())
        state_names += [prefix + 'init_h', prefix + 'init_c']
        lstmp = mx.gluon.contrib.rnn.LSTMPCell(nhid, num_proj, prefix=prefix)
        outputs, next_states = lstmp.unroll(bptt, outputs,
                                            begin_state=[init_h, init_c],
                                            layout='NTC', merge_outputs=True)
        outputs = S.Dropout(outputs, p=dropout)
        states += [S.stop_gradient(s) for s in next_states]
    outputs = S.reshape(outputs, shape=(-1, num_proj))

    trainable_lstm_args = []
    for arg in outputs.list_arguments():
        if 'lstmp' in arg and 'init' not in arg:
            trainable_lstm_args.append(arg)
    return outputs, states, trainable_lstm_args, state_names
<SYSTEM_TASK:> Sampled softmax via importance sampling. <END_TASK> <USER_TASK:> Description:
def sampled_softmax(num_classes, num_samples, in_dim, inputs, weight, bias,
                    sampled_values, remove_accidental_hits=True):
    """Sampled softmax via importance sampling.

    This under-estimates the full softmax and is only used for training.
    """
    # inputs = (n, in_dim)
    sample, prob_sample, prob_target = sampled_values

    # (num_samples, )
    sample = S.var('sample', shape=(num_samples,), dtype='float32')
    # (n, )
    label = S.var('label')
    label = S.reshape(label, shape=(-1,), name="label_reshape")
    # (num_samples+n, )
    sample_label = S.concat(sample, label, dim=0)
    # lookup weights and biases
    # (num_samples+n, dim)
    sample_target_w = S.sparse.Embedding(data=sample_label, weight=weight,
                                         input_dim=num_classes, output_dim=in_dim,
                                         sparse_grad=True)
    # (num_samples+n, 1)
    sample_target_b = S.sparse.Embedding(data=sample_label, weight=bias,
                                         input_dim=num_classes, output_dim=1,
                                         sparse_grad=True)
    # (num_samples, dim)
    sample_w = S.slice(sample_target_w, begin=(0, 0), end=(num_samples, None))
    target_w = S.slice(sample_target_w, begin=(num_samples, 0), end=(None, None))
    sample_b = S.slice(sample_target_b, begin=(0, 0), end=(num_samples, None))
    target_b = S.slice(sample_target_b, begin=(num_samples, 0), end=(None, None))

    # target
    # (n, 1)
    true_pred = S.sum(target_w * inputs, axis=1, keepdims=True) + target_b
    # samples
    # (n, num_samples)
    sample_b = S.reshape(sample_b, (-1,))
    sample_pred = S.FullyConnected(inputs, weight=sample_w, bias=sample_b,
                                   num_hidden=num_samples)

    # remove accidental hits
    if remove_accidental_hits:
        label_v = S.reshape(label, (-1, 1))
        sample_v = S.reshape(sample, (1, -1))
        neg = S.broadcast_equal(label_v, sample_v) * -1e37
        sample_pred = sample_pred + neg

    prob_sample = S.reshape(prob_sample, shape=(1, num_samples))
    p_target = true_pred - S.log(prob_target)
    p_sample = S.broadcast_sub(sample_pred, S.log(prob_sample))

    # return logits and new_labels
    # (n, 1+num_samples)
    logits = S.concat(p_target, p_sample, dim=1)
    new_targets = S.zeros_like(label)
    return logits, new_targets
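The log terms subtracted above are the standard importance-sampling correction for sampled softmax (our reading of the code, stated here for clarity): with s(x, y) the raw score w_y^T x + b_y and q(.) the sampler's expected counts, the adjusted logits are

    \tilde{s}(x, y) = s(x, y) - \log q(y), \qquad
    \tilde{s}(x, k) = s(x, k) - \log q(k) \quad \text{for each sampled class } k,

and the softmax cross-entropy is then taken over the target class plus the sampled classes, with the target always placed at index 0 of the concatenated logits (which is why `new_targets` is all zeros).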
<SYSTEM_TASK:> Split labels into `num_splits` and <END_TASK> <USER_TASK:> Description:
def generate_samples(label, num_splits, sampler):
    """Split labels into `num_splits` parts and generate candidates based on a
    log-uniform distribution.
    """
    def listify(x):
        return x if isinstance(x, list) else [x]
    label_splits = listify(label.split(num_splits, axis=0))
    prob_samples = []
    prob_targets = []
    samples = []
    for label_split in label_splits:
        label_split_2d = label_split.reshape((-1, 1))
        sampled_value = sampler.draw(label_split_2d)
        sampled_classes, exp_cnt_true, exp_cnt_sampled = sampled_value
        samples.append(sampled_classes.astype(np.float32))
        prob_targets.append(exp_cnt_true.astype(np.float32).reshape((-1, 1)))
        prob_samples.append(exp_cnt_sampled.astype(np.float32))
    return samples, prob_samples, prob_targets
<SYSTEM_TASK:> Returns a pre-defined model by name <END_TASK> <USER_TASK:> Description:
def get_model(name, **kwargs):
    """Returns a pre-defined model by name.

    Parameters
    ----------
    name : str
        Name of the model.
    pretrained : bool
        Whether to load the pretrained weights for the model.
    classes : int
        Number of classes for the output layer.
    ctx : Context, default CPU
        The context in which to load the pretrained weights.
    root : str, default '$MXNET_HOME/models'
        Location for keeping the model parameters.

    Returns
    -------
    HybridBlock
        The model.
    """
    models = {'resnet18_v1': resnet18_v1,
              'resnet34_v1': resnet34_v1,
              'resnet50_v1': resnet50_v1,
              'resnet101_v1': resnet101_v1,
              'resnet152_v1': resnet152_v1,
              'resnet18_v2': resnet18_v2,
              'resnet34_v2': resnet34_v2,
              'resnet50_v2': resnet50_v2,
              'resnet101_v2': resnet101_v2,
              'resnet152_v2': resnet152_v2,
              'vgg11': vgg11,
              'vgg13': vgg13,
              'vgg16': vgg16,
              'vgg19': vgg19,
              'vgg11_bn': vgg11_bn,
              'vgg13_bn': vgg13_bn,
              'vgg16_bn': vgg16_bn,
              'vgg19_bn': vgg19_bn,
              'alexnet': alexnet,
              'densenet121': densenet121,
              'densenet161': densenet161,
              'densenet169': densenet169,
              'densenet201': densenet201,
              'squeezenet1.0': squeezenet1_0,
              'squeezenet1.1': squeezenet1_1,
              'inceptionv3': inception_v3,
              'mobilenet1.0': mobilenet1_0,
              'mobilenet0.75': mobilenet0_75,
              'mobilenet0.5': mobilenet0_5,
              'mobilenet0.25': mobilenet0_25,
              'mobilenetv2_1.0': mobilenet_v2_1_0,
              'mobilenetv2_0.75': mobilenet_v2_0_75,
              'mobilenetv2_0.5': mobilenet_v2_0_5,
              'mobilenetv2_0.25': mobilenet_v2_0_25
              }
    name = name.lower()
    if name not in models:
        raise ValueError(
            'Model %s is not supported. Available options are\n\t%s' % (
                name, '\n\t'.join(sorted(models.keys()))))
    return models[name](**kwargs)
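A short usage sketch, assuming this is the Gluon model zoo's `get_model` and that pretrained weights can be downloaded in your environment:

    import mxnet as mx
    from mxnet.gluon.model_zoo.vision import get_model

    net = get_model('resnet18_v1', pretrained=True)
    x = mx.nd.random.uniform(shape=(1, 3, 224, 224))
    print(net(x).shape)  # (1, 1000) for the default ImageNet classifier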
<SYSTEM_TASK:> Return a new handle with specified storage type, shape, dtype and context. <END_TASK> <USER_TASK:> Description: def _new_alloc_handle(stype, shape, ctx, delay_alloc, dtype, aux_types, aux_shapes=None): """Return a new handle with specified storage type, shape, dtype and context. An empty handle is only used to hold results. Returns ------- handle A new empty ndarray handle """
hdl = NDArrayHandle() for aux_t in aux_types: if np.dtype(aux_t) != np.dtype("int64"): raise NotImplementedError("only int64 is supported for aux types") aux_type_ids = [int(_DTYPE_NP_TO_MX[np.dtype(aux_t).type]) for aux_t in aux_types] aux_shapes = [(0,) for aux_t in aux_types] if aux_shapes is None else aux_shapes aux_shape_lens = [len(aux_shape) for aux_shape in aux_shapes] aux_shapes = py_sum(aux_shapes, ()) num_aux = mx_uint(len(aux_types)) check_call(_LIB.MXNDArrayCreateSparseEx( ctypes.c_int(int(_STORAGE_TYPE_STR_TO_ID[stype])), c_array_buf(mx_uint, native_array('I', shape)), mx_uint(len(shape)), ctypes.c_int(ctx.device_typeid), ctypes.c_int(ctx.device_id), ctypes.c_int(int(delay_alloc)), ctypes.c_int(int(_DTYPE_NP_TO_MX[np.dtype(dtype).type])), num_aux, c_array_buf(ctypes.c_int, native_array('i', aux_type_ids)), c_array_buf(mx_uint, native_array('I', aux_shape_lens)), c_array_buf(mx_uint, native_array('I', aux_shapes)), ctypes.byref(hdl))) return hdl
<SYSTEM_TASK:> Prepare `source_array` so that it can be used to construct NDArray. <END_TASK> <USER_TASK:> Description: def _prepare_src_array(source_array, dtype): """Prepare `source_array` so that it can be used to construct NDArray. `source_array` is converted to a `np.ndarray` if it's neither an `NDArray` \ nor an `np.ndarray`. """
if not isinstance(source_array, NDArray) and not isinstance(source_array, np.ndarray):
    try:
        source_array = np.array(source_array, dtype=dtype)
    except Exception:
        raise TypeError('values must be an array-like object')
return source_array
<SYSTEM_TASK:> Prepare the value of dtype if `dtype` is None. If `src_array` is an NDArray, numpy.ndarray <END_TASK> <USER_TASK:> Description: def _prepare_default_dtype(src_array, dtype): """Prepare the value of dtype if `dtype` is None. If `src_array` is an NDArray, numpy.ndarray or scipy.sparse.csr.csr_matrix, return src_array.dtype. float32 is returned otherwise."""
if dtype is None: if isinstance(src_array, (NDArray, np.ndarray)): dtype = src_array.dtype elif spsp and isinstance(src_array, spsp.csr.csr_matrix): dtype = src_array.dtype else: dtype = mx_real_t return dtype
<SYSTEM_TASK:> check s1 == s2 if both are not None <END_TASK> <USER_TASK:> Description: def _check_shape(s1, s2): """check s1 == s2 if both are not None"""
if s1 and s2 and s1 != s2:
    raise ValueError("Shape mismatch detected. " + str(s1) + " vs. " + str(s2))
<SYSTEM_TASK:> Creates a `RowSparseNDArray`, a multidimensional row sparse array with a set of \ <END_TASK> <USER_TASK:> Description: def row_sparse_array(arg1, shape=None, ctx=None, dtype=None): """Creates a `RowSparseNDArray`, a multidimensional row sparse array with a set of \ tensor slices at given indices. The RowSparseNDArray can be instantiated in several ways: - row_sparse_array(D): to construct a RowSparseNDArray with a dense ndarray ``D`` - **D** (*array_like*) - An object exposing the array interface, an object whose \ `__array__` method returns an array, or any (nested) sequence. - **ctx** (*Context, optional*) - Device context \ (default is the current default context). - **dtype** (*str or numpy.dtype, optional*) - The data type of the output array. \ The default dtype is ``D.dtype`` if ``D`` is an NDArray or numpy.ndarray, \ float32 otherwise. - row_sparse_array(S) to construct a RowSparseNDArray with a sparse ndarray ``S`` - **S** (*RowSparseNDArray*) - A sparse ndarray. - **ctx** (*Context, optional*) - Device context \ (default is the current default context). - **dtype** (*str or numpy.dtype, optional*) - The data type of the output array. \ The default dtype is ``S.dtype``. - row_sparse_array((D0, D1 .. Dn)) to construct an empty RowSparseNDArray with shape ``(D0, D1, ... Dn)`` - **D0, D1 .. Dn** (*int*) - The shape of the ndarray - **ctx** (*Context, optional*) - Device context \ (default is the current default context). - **dtype** (*str or numpy.dtype, optional*) - The data type of the output array. \ The default dtype is float32. - row_sparse_array((data, indices)) to construct a RowSparseNDArray based on the definition of row sparse format \ using two separate arrays, \ where `indices` stores the indices of the row slices with non-zero entries and \ `data` stores their values. The corresponding NDArray ``dense`` represented by RowSparseNDArray ``rsp`` has \ ``dense[rsp.indices[i], :, :, :, ...] = rsp.data[i, :, :, :, ...]`` The row indices are expected to be **sorted in ascending order.** \ - **data** (*array_like*) - An object exposing the array interface, which \ holds all the non-zero row slices of the array. - **indices** (*array_like*) - An object exposing the array interface, which \ stores the row index for each row slice with non-zero elements. - **shape** (*tuple of int, optional*) - The shape of the array. The default \ shape is inferred from the indices and data arrays. - **ctx** (*Context, optional*) - Device context \ (default is the current default context). - **dtype** (*str or numpy.dtype, optional*) - The data type of the output array. \ The default dtype is float32. Parameters ---------- arg1 : NDArray, numpy.ndarray, RowSparseNDArray, tuple of int or tuple of array_like The argument to help instantiate the row sparse ndarray. See above for further details. shape : tuple of int, optional The shape of the row sparse ndarray. (Default value = None) ctx : Context, optional Device context (default is the current default context). dtype : str or numpy.dtype, optional The data type of the output array. (Default value = None) Returns ------- RowSparseNDArray A `RowSparseNDArray` with the `row_sparse` storage representation. Examples -------- >>> a = mx.nd.sparse.row_sparse_array(([[1, 2], [3, 4]], [1, 4]), shape=(6, 2)) >>> a.asnumpy() array([[ 0., 0.], [ 1., 2.], [ 0., 0.], [ 0., 0.], [ 3., 4.], [ 0., 0.]], dtype=float32) See Also -------- RowSparseNDArray : MXNet NDArray in row sparse format. """
# construct a row sparse array from (D0, D1 ..) or (data, indices)
if isinstance(arg1, tuple):
    arg_len = len(arg1)
    if arg_len < 2:
        raise ValueError("Unexpected length of input tuple: " + str(arg_len))
    elif arg_len > 2:
        # empty ndarray with shape
        _check_shape(arg1, shape)
        return empty('row_sparse', arg1, ctx=ctx, dtype=dtype)
    else:
        # len(arg1) = 2, is either shape or (data, indices)
        if isinstance(arg1[0], integer_types) and isinstance(arg1[1], integer_types):
            # empty ndarray with shape
            _check_shape(arg1, shape)
            return empty('row_sparse', arg1, ctx=ctx, dtype=dtype)
        else:
            # (data, indices) tuple
            return _row_sparse_ndarray_from_definition(arg1[0], arg1[1], shape=shape,
                                                       ctx=ctx, dtype=dtype)
else:
    # construct a row sparse ndarray from a dense / sparse array
    if isinstance(arg1, RowSparseNDArray):
        # construct a row sparse ndarray from RowSparseNDArray
        _check_shape(arg1.shape, shape)
        return array(arg1, ctx=ctx, dtype=dtype)
    elif isinstance(arg1, CSRNDArray):
        raise ValueError("Unexpected input type: CSRNDArray")
    else:
        # construct a row sparse ndarray from a dense one
        # prepare default dtype since mx.nd.array doesn't use default values
        # based on source_array
        dtype = _prepare_default_dtype(arg1, dtype)
        # create dns array with provided dtype. ctx is not passed since copy across
        # ctx requires dtype to be the same
        dns = _array(arg1, dtype=dtype)
        if ctx is not None and dns.context != ctx:
            dns = dns.as_in_context(ctx)
        _check_shape(dns.shape, shape)
        return dns.tostype('row_sparse')
<SYSTEM_TASK:> Creates a sparse array from any object exposing the array interface. <END_TASK> <USER_TASK:> Description: def array(source_array, ctx=None, dtype=None): """Creates a sparse array from any object exposing the array interface. Parameters ---------- source_array : RowSparseNDArray, CSRNDArray or scipy.sparse.csr.csr_matrix The source sparse array ctx : Context, optional The default context is ``source_array.context`` if ``source_array`` is an NDArray. \ The current default context otherwise. dtype : str or numpy.dtype, optional The data type of the output array. The default dtype is ``source_array.dtype`` if `source_array` is an `NDArray`, `numpy.ndarray` or `scipy.sparse.csr.csr_matrix`, \ `float32` otherwise. Returns ------- RowSparseNDArray or CSRNDArray An array with the same contents as the `source_array`. Examples -------- >>> import scipy.sparse as spsp >>> csr = spsp.csr_matrix((2, 100)) >>> mx.nd.sparse.array(csr) <CSRNDArray 2x100 @cpu(0)> >>> mx.nd.sparse.array(mx.nd.sparse.zeros('csr', (3, 2))) <CSRNDArray 3x2 @cpu(0)> >>> mx.nd.sparse.array(mx.nd.sparse.zeros('row_sparse', (3, 2))) <RowSparseNDArray 3x2 @cpu(0)> """
ctx = current_context() if ctx is None else ctx
if isinstance(source_array, NDArray):
    assert(source_array.stype != 'default'), \
        "Please use `tostype` to create RowSparseNDArray or CSRNDArray from an NDArray"
    # prepare dtype and ctx based on source_array, if not provided
    dtype = _prepare_default_dtype(source_array, dtype)
    # if both dtype and ctx are different from source_array, we cannot copy directly
    if source_array.dtype != dtype and source_array.context != ctx:
        arr = empty(source_array.stype, source_array.shape, dtype=dtype)
        arr[:] = source_array
        arr = arr.as_in_context(ctx)
    else:
        arr = empty(source_array.stype, source_array.shape, dtype=dtype, ctx=ctx)
        arr[:] = source_array
    return arr
elif spsp and isinstance(source_array, spsp.csr.csr_matrix):
    # TODO(haibin) implement `_sync_copy_from` with scipy csr object to reduce a copy
    # preprocess scipy csr to canonical form
    csr = source_array.sorted_indices()
    csr.sum_duplicates()
    dtype = _prepare_default_dtype(source_array, dtype)
    return csr_matrix((csr.data, csr.indices, csr.indptr), shape=csr.shape, \
                      dtype=dtype, ctx=ctx)
elif isinstance(source_array, (np.ndarray, np.generic)):
    raise ValueError("Please use mx.nd.array to create an NDArray with source_array of type %s"
                     % type(source_array))
else:
    raise ValueError("Unexpected source_array type: %s" % type(source_array))
<SYSTEM_TASK:> Return a copy of the array after casting to a specified type. <END_TASK> <USER_TASK:> Description: def astype(self, dtype, copy=True): """Return a copy of the array after casting to a specified type. Parameters ---------- dtype : numpy.dtype or str The type of the returned array. copy : bool Default `True`. By default, astype always returns a newly allocated ndarray on the same context. If this is set to `False`, and the dtype requested is the same as the ndarray's dtype, the ndarray is returned instead of a copy. Examples -------- >>> x = mx.nd.sparse.zeros('row_sparse', (2,3), dtype='float32') >>> y = x.astype('int32') >>> y.dtype <type 'numpy.int32'> """
if not copy and np.dtype(dtype) == self.dtype: return self res = zeros(shape=self.shape, ctx=self.context, dtype=dtype, stype=self.stype) self.copyto(res) return res
<SYSTEM_TASK:> Check whether the NDArray format is valid. <END_TASK> <USER_TASK:> Description: def check_format(self, full_check=True): """Check whether the NDArray format is valid. Parameters ---------- full_check : bool, optional If `True`, rigorous check, O(N) operations. Otherwise basic check, O(1) operations (default True). """
check_call(_LIB.MXNDArraySyncCheckFormat(self.handle, ctypes.c_bool(full_check)))
<SYSTEM_TASK:> A deep copy NDArray of the data array associated with the BaseSparseNDArray. <END_TASK> <USER_TASK:> Description: def _data(self): """A deep copy NDArray of the data array associated with the BaseSparseNDArray. This function blocks. Do not use it in performance critical code. """
self.wait_to_read() hdl = NDArrayHandle() check_call(_LIB.MXNDArrayGetDataNDArray(self.handle, ctypes.byref(hdl))) return NDArray(hdl)
<SYSTEM_TASK:> Get a deep copy NDArray of the i-th aux data array associated with the <END_TASK> <USER_TASK:> Description: def _aux_data(self, i): """ Get a deep copy NDArray of the i-th aux data array associated with the BaseSparseNDArray. This function blocks. Do not use it in performance critical code. """
self.wait_to_read() hdl = NDArrayHandle() check_call(_LIB.MXNDArrayGetAuxNDArray(self.handle, i, ctypes.byref(hdl))) return NDArray(hdl)
<SYSTEM_TASK:> Returns a ``scipy.sparse.csr.csr_matrix`` object with value copied from this array <END_TASK> <USER_TASK:> Description: def asscipy(self): """Returns a ``scipy.sparse.csr.csr_matrix`` object with value copied from this array Examples -------- >>> x = mx.nd.sparse.zeros('csr', (2,3)) >>> y = x.asscipy() >>> type(y) <type 'scipy.sparse.csr.csr_matrix'> >>> y <2x3 sparse matrix of type '<type 'numpy.float32'>' with 0 stored elements in Compressed Sparse Row format> """
if not spsp:
    raise ImportError("scipy is not available. "
                      "Please check if the scipy python bindings are installed.")
data = self.data.asnumpy()
indices = self.indices.asnumpy()
indptr = self.indptr.asnumpy()
return spsp.csr_matrix((data, indices, indptr), shape=self.shape, dtype=self.dtype)
<SYSTEM_TASK:> Return a copy of the array with chosen storage type. <END_TASK> <USER_TASK:> Description: def tostype(self, stype): """Return a copy of the array with chosen storage type. Returns ------- NDArray or RowSparseNDArray A copy of the array with the chosen storage stype """
# pylint: disable= no-member, protected-access if stype == 'csr': raise ValueError("cast_storage from row_sparse to csr is not supported") return op.cast_storage(self, stype=stype)
<SYSTEM_TASK:> Benchmarking both storage and dot <END_TASK> <USER_TASK:> Description: def bench_dot(lhs_row_dim, lhs_col_dim, rhs_col_dim, density, rhs_density, dot_func, trans_lhs, lhs_stype, rhs_stype, only_storage, distribution="uniform"): """ Benchmarking both storage and dot """
lhs_nd = rand_ndarray((lhs_row_dim, lhs_col_dim), lhs_stype, density, distribution=distribution) if not only_storage: rhs_nd = rand_ndarray((lhs_col_dim, rhs_col_dim), rhs_stype, density=rhs_density, distribution=distribution) out = dot_func(lhs_nd, rhs_nd, trans_lhs) mx.nd.waitall()
<SYSTEM_TASK:> Convert caffe mean <END_TASK> <USER_TASK:> Description: def convert_mean(binaryproto_fname, output=None): """Convert caffe mean Parameters ---------- binaryproto_fname : str Filename of the mean output : str, optional Save the mean into mxnet's format Returns ------- NDArray Mean in ndarray """
mean_blob = caffe_parser.caffe_pb2.BlobProto() with open(binaryproto_fname, 'rb') as f: mean_blob.ParseFromString(f.read()) img_mean_np = np.array(mean_blob.data) img_mean_np = img_mean_np.reshape( mean_blob.channels, mean_blob.height, mean_blob.width ) # swap channels from Caffe BGR to RGB img_mean_np[[0, 2], :, :] = img_mean_np[[2, 0], :, :] nd = mx.nd.array(img_mean_np) if output is not None: mx.nd.save(output, {"mean_image": nd}) return nd
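A hedged usage sketch; the filenames below are placeholders for a Caffe mean file and the desired MXNet output, not files shipped with the converter:

>>> mean_nd = convert_mean('mean.binaryproto')
>>> mean_nd = convert_mean('mean.binaryproto', output='mean.nd')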
<SYSTEM_TASK:> Build network symbol for training SSD <END_TASK> <USER_TASK:> Description: def get_symbol_train(network, num_classes, from_layers, num_filters, strides, pads, sizes, ratios, normalizations=-1, steps=[], min_filter=128, nms_thresh=0.5, force_suppress=False, nms_topk=400, **kwargs): """Build network symbol for training SSD Parameters ---------- network : str base network symbol name num_classes : int number of object classes not including background from_layers : list of str feature extraction layers, use '' to add extra layers For example: from_layers = ['relu4_3', 'fc7', '', '', '', ''] which means extract feature from relu4_3 and fc7, adding 4 extra layers on top of fc7 num_filters : list of int number of filters for extra layers, you can use -1 for extracted features, however, if normalization and scaling are applied, the number of filters for that layer must be provided. For example: num_filters = [512, -1, 512, 256, 256, 256] strides : list of int strides for the 3x3 convolution appended, -1 can be used for extracted feature layers pads : list of int paddings for the 3x3 convolution, -1 can be used for extracted layers sizes : list or list of list [min_size, max_size] for all layers or [[], [], []...] for specific layers ratios : list or list of list [ratio1, ratio2...] for all layers or [[], [], ...] for specific layers normalizations : int or list of int use this normalization value for all layers or [...] for specific layers, -1 indicates no normalization or scaling steps : list specify steps for each MultiBoxPrior layer; if left empty, steps are computed from layer dimensions min_filter : int minimum number of filters used in 1x1 convolution nms_thresh : float non-maximum suppression threshold force_suppress : boolean whether to suppress overlapping detections from different classes nms_topk : int apply NMS to top K detections Returns ------- mx.Symbol """
label = mx.sym.Variable('label') body = import_module(network).get_symbol(num_classes, **kwargs) layers = multi_layer_feature(body, from_layers, num_filters, strides, pads, min_filter=min_filter) loc_preds, cls_preds, anchor_boxes = multibox_layer(layers, \ num_classes, sizes=sizes, ratios=ratios, normalization=normalizations, \ num_channels=num_filters, clip=False, interm_layer=0, steps=steps) tmp = mx.symbol.contrib.MultiBoxTarget( *[anchor_boxes, label, cls_preds], overlap_threshold=.5, \ ignore_label=-1, negative_mining_ratio=3, minimum_negative_samples=0, \ negative_mining_thresh=.5, variances=(0.1, 0.1, 0.2, 0.2), name="multibox_target") loc_target = tmp[0] loc_target_mask = tmp[1] cls_target = tmp[2] cls_prob = mx.symbol.SoftmaxOutput(data=cls_preds, label=cls_target, \ ignore_label=-1, use_ignore=True, grad_scale=1., multi_output=True, \ normalization='valid', name="cls_prob") loc_loss_ = mx.symbol.smooth_l1(name="loc_loss_", \ data=loc_target_mask * (loc_preds - loc_target), scalar=1.0) loc_loss = mx.symbol.MakeLoss(loc_loss_, grad_scale=1., \ normalization='valid', name="loc_loss") # monitoring training status cls_label = mx.symbol.MakeLoss(data=cls_target, grad_scale=0, name="cls_label") det = mx.symbol.contrib.MultiBoxDetection(*[cls_prob, loc_preds, anchor_boxes], \ name="detection", nms_threshold=nms_thresh, force_suppress=force_suppress, variances=(0.1, 0.1, 0.2, 0.2), nms_topk=nms_topk) det = mx.symbol.MakeLoss(data=det, grad_scale=0, name="det_out") # group output out = mx.symbol.Group([cls_prob, loc_loss, cls_label, det]) return out
<SYSTEM_TASK:> Build network for testing SSD <END_TASK> <USER_TASK:> Description: def get_symbol(network, num_classes, from_layers, num_filters, sizes, ratios, strides, pads, normalizations=-1, steps=[], min_filter=128, nms_thresh=0.5, force_suppress=False, nms_topk=400, **kwargs): """Build network for testing SSD Parameters ---------- network : str base network symbol name num_classes : int number of object classes not including background from_layers : list of str feature extraction layers, use '' to add extra layers For example: from_layers = ['relu4_3', 'fc7', '', '', '', ''] which means extract feature from relu4_3 and fc7, adding 4 extra layers on top of fc7 num_filters : list of int number of filters for extra layers, you can use -1 for extracted features, however, if normalization and scaling are applied, the number of filters for that layer must be provided. For example: num_filters = [512, -1, 512, 256, 256, 256] strides : list of int strides for the 3x3 convolution appended, -1 can be used for extracted feature layers pads : list of int paddings for the 3x3 convolution, -1 can be used for extracted layers sizes : list or list of list [min_size, max_size] for all layers or [[], [], []...] for specific layers ratios : list or list of list [ratio1, ratio2...] for all layers or [[], [], ...] for specific layers normalizations : int or list of int use this normalization value for all layers or [...] for specific layers, -1 indicates no normalization or scaling steps : list specify steps for each MultiBoxPrior layer; if left empty, steps are computed from layer dimensions min_filter : int minimum number of filters used in 1x1 convolution nms_thresh : float non-maximum suppression threshold force_suppress : boolean whether to suppress overlapping detections from different classes nms_topk : int apply NMS to top K detections Returns ------- mx.Symbol """
body = import_module(network).get_symbol(num_classes, **kwargs) layers = multi_layer_feature(body, from_layers, num_filters, strides, pads, min_filter=min_filter) loc_preds, cls_preds, anchor_boxes = multibox_layer(layers, \ num_classes, sizes=sizes, ratios=ratios, normalization=normalizations, \ num_channels=num_filters, clip=False, interm_layer=0, steps=steps) cls_prob = mx.symbol.softmax(data=cls_preds, axis=1, name='cls_prob') out = mx.symbol.contrib.MultiBoxDetection(*[cls_prob, loc_preds, anchor_boxes], \ name="detection", nms_threshold=nms_thresh, force_suppress=force_suppress, variances=(0.1, 0.1, 0.2, 0.2), nms_topk=nms_topk) return out
<SYSTEM_TASK:> Get the output and gradients of output of a convolutional layer. <END_TASK> <USER_TASK:> Description: def get_conv_out_grad(net, image, class_id=None, conv_layer_name=None): """Get the output and gradients of output of a convolutional layer. Parameters: ---------- net: Block Network to use for visualization. image: NDArray Preprocessed image to use for visualization. class_id: int Category ID this image belongs to. If not provided, network's prediction will be used. conv_layer_name: str Name of the convolutional layer whose output and output's gradients need to be captured."""
return _get_grad(net, image, class_id, conv_layer_name, image_grad=False)
<SYSTEM_TASK:> Get the gradients of the image. <END_TASK> <USER_TASK:> Description: def get_image_grad(net, image, class_id=None): """Get the gradients of the image. Parameters: ---------- net: Block Network to use for visualization. image: NDArray Preprocessed image to use for visualization. class_id: int Category ID this image belongs to. If not provided, network's prediction will be used."""
return _get_grad(net, image, class_id, image_grad=True)
<SYSTEM_TASK:> Convert gradients of image obtained using `get_image_grad` <END_TASK> <USER_TASK:> Description: def grad_to_image(gradient): """Convert gradients of image obtained using `get_image_grad` into an image. This shows the parts of the image that most strongly activate the output neurons."""
# shift and scale gradients into [0, 1]
gradient = gradient - gradient.min()
gradient /= gradient.max()
# convert CHW float gradients to an HWC uint8 image
gradient = np.uint8(gradient * 255).transpose(1, 2, 0)
# reverse the channel order for OpenCV (RGB -> BGR)
gradient = gradient[..., ::-1]
return gradient
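A hedged end-to-end sketch tying `get_image_grad` and `grad_to_image` together; the `(1, 3, H, W)` gradient layout assumed here is an assumption about what `_get_grad` returns, not guaranteed by these snippets alone:

>>> grads = get_image_grad(net, image)            # hypothetical (1, 3, H, W) NDArray
>>> rgb_img = grad_to_image(grads[0].asnumpy())   # CHW gradients -> HWC uint8 image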
<SYSTEM_TASK:> Draw a heatmap on top of the original image using intensities from activation_map <END_TASK> <USER_TASK:> Description: def get_img_heatmap(orig_img, activation_map): """Draw a heatmap on top of the original image using intensities from activation_map"""
heatmap = cv2.applyColorMap(activation_map, cv2.COLORMAP_COOL) heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB) img_heatmap = np.float32(heatmap) + np.float32(orig_img) img_heatmap = img_heatmap / np.max(img_heatmap) img_heatmap *= 255 return img_heatmap.astype(int)
<SYSTEM_TASK:> Convert gradients to grayscale. This gives a saliency map. <END_TASK> <USER_TASK:> Description: def to_grayscale(cv2im): """Convert gradients to grayscale. This gives a saliency map."""
# How strongly does each position activate the output grayscale_im = np.sum(np.abs(cv2im), axis=0) # Normalize between min and 99th percentile im_max = np.percentile(grayscale_im, 99) im_min = np.min(grayscale_im) grayscale_im = np.clip((grayscale_im - im_min) / (im_max - im_min), 0, 1) grayscale_im = np.expand_dims(grayscale_im, axis=0) return grayscale_im
<SYSTEM_TASK:> Helper function for checking shape of label and prediction <END_TASK> <USER_TASK:> Description: def check_label_shapes(labels, preds, wrap=False, shape=False): """Helper function for checking shape of label and prediction Parameters ---------- labels : list of `NDArray` The labels of the data. preds : list of `NDArray` Predicted values. wrap : boolean If True, wrap labels/preds in a list if they are single NDArray shape : boolean If True, check the shape of labels and preds; Otherwise only check their length. """
if not shape: label_shape, pred_shape = len(labels), len(preds) else: label_shape, pred_shape = labels.shape, preds.shape if label_shape != pred_shape: raise ValueError("Shape of labels {} does not match shape of " "predictions {}".format(label_shape, pred_shape)) if wrap: if isinstance(labels, ndarray.ndarray.NDArray): labels = [labels] if isinstance(preds, ndarray.ndarray.NDArray): preds = [preds] return labels, preds
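A small sketch of the `wrap` path, assuming `import mxnet as mx`:

>>> label = mx.nd.array([1, 0, 1])
>>> pred = mx.nd.array([1, 1, 1])
>>> labels, preds = check_label_shapes(label, pred, wrap=True)
>>> isinstance(labels, list) and isinstance(preds, list)
True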
<SYSTEM_TASK:> Creates evaluation metric from metric names or instances of EvalMetric <END_TASK> <USER_TASK:> Description: def create(metric, *args, **kwargs): """Creates evaluation metric from metric names or instances of EvalMetric or a custom metric function. Parameters ---------- metric : str or callable Specifies the metric to create. This argument must be one of the below: - Name of a metric. - An instance of `EvalMetric`. - A list, each element of which is a metric or a metric name. - An evaluation function that computes custom metric for a given batch of labels and predictions. *args : list Additional arguments to metric constructor. Only used when metric is str. **kwargs : dict Additional arguments to metric constructor. Only used when metric is str Examples -------- >>> def custom_metric(label, pred): ... return np.mean(np.abs(label - pred)) ... >>> metric1 = mx.metric.create('acc') >>> metric2 = mx.metric.create(custom_metric) >>> metric3 = mx.metric.create([metric1, metric2, 'rmse']) """
if callable(metric): return CustomMetric(metric, *args, **kwargs) elif isinstance(metric, list): composite_metric = CompositeEvalMetric() for child_metric in metric: composite_metric.add(create(child_metric, *args, **kwargs)) return composite_metric return _create(metric, *args, **kwargs)
<SYSTEM_TASK:> Creates a custom evaluation metric that receives its inputs as numpy arrays. <END_TASK> <USER_TASK:> Description: def np(numpy_feval, name=None, allow_extra_outputs=False): """Creates a custom evaluation metric that receives its inputs as numpy arrays. Parameters ---------- numpy_feval : callable(label, pred) Custom evaluation function that receives labels and predictions for a minibatch as numpy arrays and returns the corresponding custom metric as a floating point number. name : str, optional Name of the custom metric. allow_extra_outputs : bool, optional Whether prediction output is allowed to have extra outputs. This is useful in cases like RNN where states are also part of output which can then be fed back to the RNN in the next step. By default, extra outputs are not allowed. Returns ------- float Custom metric corresponding to the provided labels and predictions. Example ------- >>> def custom_metric(label, pred): ... return np.mean(np.abs(label-pred)) ... >>> metric = mx.metric.np(custom_metric) """
def feval(label, pred): """Internal eval function.""" return numpy_feval(label, pred) feval.__name__ = numpy_feval.__name__ return CustomMetric(feval, name, allow_extra_outputs)
<SYSTEM_TASK:> Update the internal evaluation with named label and pred <END_TASK> <USER_TASK:> Description: def update_dict(self, label, pred): """Update the internal evaluation with named label and pred Parameters ---------- labels : OrderedDict of str -> NDArray name to array mapping for labels. preds : OrderedDict of str -> NDArray name to array mapping of predicted outputs. """
if self.output_names is not None: pred = [pred[name] for name in self.output_names] else: pred = list(pred.values()) if self.label_names is not None: label = [label[name] for name in self.label_names] else: label = list(label.values()) self.update(label, pred)
<SYSTEM_TASK:> Resets the internal evaluation result to initial state. <END_TASK> <USER_TASK:> Description: def reset(self): """Resets the internal evaluation result to initial state."""
self.num_inst = 0 self.sum_metric = 0.0 self.global_num_inst = 0 self.global_sum_metric = 0.0
<SYSTEM_TASK:> Gets the current evaluation result. <END_TASK> <USER_TASK:> Description: def get(self): """Gets the current evaluation result. Returns ------- names : list of str Name of the metrics. values : list of float Value of the evaluations. """
if self.num_inst == 0: return (self.name, float('nan')) else: return (self.name, self.sum_metric / self.num_inst)
<SYSTEM_TASK:> Gets the current global evaluation result. <END_TASK> <USER_TASK:> Description: def get_global(self): """Gets the current global evaluation result. Returns ------- names : list of str Name of the metrics. values : list of float Value of the evaluations. """
if self._has_global_stats: if self.global_num_inst == 0: return (self.name, float('nan')) else: return (self.name, self.global_sum_metric / self.global_num_inst) else: return self.get()
<SYSTEM_TASK:> Returns zipped name and value pairs. <END_TASK> <USER_TASK:> Description: def get_name_value(self): """Returns zipped name and value pairs. Returns ------- list of tuples A (name, value) tuple list. """
name, value = self.get() if not isinstance(name, list): name = [name] if not isinstance(value, list): value = [value] return list(zip(name, value))
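A quick sketch with a built-in metric that inherits this method (assuming the standard `mx.metric.Accuracy`):

>>> metric = mx.metric.Accuracy()
>>> metric.update([mx.nd.array([0, 1, 1])], [mx.nd.array([0, 1, 0])])
>>> metric.get_name_value()
[('accuracy', 0.6666666666666666)]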
<SYSTEM_TASK:> Returns zipped name and value pairs for global results. <END_TASK> <USER_TASK:> Description: def get_global_name_value(self): """Returns zipped name and value pairs for global results. Returns ------- list of tuples A (name, value) tuple list. """
if self._has_global_stats: name, value = self.get_global() if not isinstance(name, list): name = [name] if not isinstance(value, list): value = [value] return list(zip(name, value)) else: return self.get_name_value()
<SYSTEM_TASK:> Returns a new dataset with each sample transformed by the <END_TASK> <USER_TASK:> Description: def transform(self, fn, lazy=True): """Returns a new dataset with each sample transformed by the transformer function `fn`. Parameters ---------- fn : callable A transformer function that takes a sample as input and returns the transformed sample. lazy : bool, default True If False, transforms all samples at once. Otherwise, transforms each sample on demand. Note that if `fn` is stochastic, you must set lazy to True or you will get the same result on all epochs. Returns ------- Dataset The transformed dataset. """
trans = _LazyTransformDataset(self, fn) if lazy: return trans return SimpleDataset([i for i in trans])
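A minimal sketch using `SimpleDataset` from the same gluon data module:

>>> ds = SimpleDataset([1, 2, 3])
>>> doubled = ds.transform(lambda x: x * 2)
>>> [doubled[i] for i in range(len(doubled))]
[2, 4, 6]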
<SYSTEM_TASK:> Forward the image through the LSTM network model <END_TASK> <USER_TASK:> Description: def forward_ocr(self, img_): """Forward the image through the LSTM network model Parameters ---------- img_ : numpy.ndarray Grayscale image array to recognize Returns ---------- str The recognized text string """
img_ = cv2.resize(img_, (80, 30))
img_ = img_.transpose(1, 0)
img_ = img_.reshape((1, 80, 30))
# scale pixel values to [0, 1]
img_ = np.multiply(img_, 1 / 255.0)
self.predictor.forward(data=img_, **self.init_state_dict)
prob = self.predictor.get_output(0)
label_list = []
for p in prob:
    # pick the most likely class at each time step
    max_index = np.argsort(p)[::-1][0]
    label_list.append(max_index)
return self.__get_string(label_list)
<SYSTEM_TASK:> Iterate over all layers <END_TASK> <USER_TASK:> Description: def layer_iter(layers, layer_names): """Iterate over all layers"""
if use_caffe: for layer_idx, layer in enumerate(layers): layer_name = re.sub('[-/]', '_', layer_names[layer_idx]) layer_type = layer.type layer_blobs = layer.blobs yield (layer_name, layer_type, layer_blobs) else: for layer in layers: layer_name = re.sub('[-/]', '_', layer.name) layer_type = layer.type layer_blobs = layer.blobs yield (layer_name, layer_type, layer_blobs)
<SYSTEM_TASK:> Set up the profiler state to 'run' or 'stop'. <END_TASK> <USER_TASK:> Description: def set_state(state='stop', profile_process='worker'): """Set up the profiler state to 'run' or 'stop'. Parameters ---------- state : string, optional Indicates whether to run the profiler, can be 'stop' or 'run'. Default is `stop`. profile_process : string whether to profile kvstore `server` or `worker`. server can only be profiled when kvstore is of type dist. if this is not passed, defaults to `worker` """
state2int = {'stop': 0, 'run': 1} profile_process2int = {'worker': 0, 'server': 1} check_call(_LIB.MXSetProcessProfilerState(ctypes.c_int(state2int[state]), profile_process2int[profile_process], profiler_kvstore_handle))
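A short sketch, assuming the module is exposed as `mx.profiler` as in MXNet:

>>> mx.profiler.set_state('run')
>>> # ... run the workload to be profiled ...
>>> mx.profiler.set_state('stop')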
<SYSTEM_TASK:> Dump profile and stop profiler. Use this to save profile <END_TASK> <USER_TASK:> Description: def dump(finished=True, profile_process='worker'): """Dump profile and stop profiler. Use this to save profile in advance in case your program cannot exit normally. Parameters ---------- finished : boolean Indicates whether to stop statistic output (dumping) after this dump. Default is True profile_process : string whether to profile kvstore `server` or `worker`. server can only be profiled when kvstore is of type dist. if this is not passed, defaults to `worker` """
fin = 1 if finished is True else 0 profile_process2int = {'worker': 0, 'server': 1} check_call(_LIB.MXDumpProcessProfile(fin, profile_process2int[profile_process], profiler_kvstore_handle))
<SYSTEM_TASK:> Return a printable string of aggregate profile stats. <END_TASK> <USER_TASK:> Description: def dumps(reset=False): """Return a printable string of aggregate profile stats. Parameters ---------- reset: boolean Indicates whether to clear the aggregate statistical data collected up to this point """
debug_str = ctypes.c_char_p() do_reset = 1 if reset is True else 0 check_call(_LIB.MXAggregateProfileStatsPrint(ctypes.byref(debug_str), int(do_reset))) return py_str(debug_str.value)
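A hedged sketch; calling `set_config(aggregate_stats=True)` beforehand is assumed to be required so that there are aggregate statistics to print:

>>> mx.profiler.set_config(aggregate_stats=True, filename='profile.json')
>>> mx.profiler.set_state('run')
>>> mx.nd.waitall()               # run and finish some workload first
>>> print(mx.profiler.dumps(reset=True))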
<SYSTEM_TASK:> Pause profiling. <END_TASK> <USER_TASK:> Description: def pause(profile_process='worker'): """Pause profiling. Parameters ---------- profile_process : string whether to profile kvstore `server` or `worker`. server can only be profiled when kvstore is of type dist. if this is not passed, defaults to `worker` """
profile_process2int = {'worker': 0, 'server': 1} check_call(_LIB.MXProcessProfilePause(int(1), profile_process2int[profile_process], profiler_kvstore_handle))
<SYSTEM_TASK:> Resume paused profiling. <END_TASK> <USER_TASK:> Description: def resume(profile_process='worker'): """ Resume paused profiling. Parameters ---------- profile_process : string whether to profile kvstore `server` or `worker`. server can only be profiled when kvstore is of type dist. if this is not passed, defaults to `worker` """
profile_process2int = {'worker': 0, 'server': 1} check_call(_LIB.MXProcessProfilePause(int(0), profile_process2int[profile_process], profiler_kvstore_handle))
<SYSTEM_TASK:> Set counter value. <END_TASK> <USER_TASK:> Description: def set_value(self, value): """Set counter value. Parameters ---------- value : int Value for the counter """
check_call(_LIB.MXProfileSetCounter(self.handle, int(value)))
<SYSTEM_TASK:> Decrement counter value. <END_TASK> <USER_TASK:> Description: def decrement(self, delta=1): """Decrement counter value. Parameters ---------- delta : int Amount by which to subtract from the counter """
check_call(_LIB.MXProfileAdjustCounter(self.handle, -int(delta)))
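A brief sketch of the counter API around `set_value`/`decrement`, assuming the `Domain` and `Counter` classes of `mx.profiler`:

>>> domain = mx.profiler.Domain('app')
>>> counter = mx.profiler.Counter(domain, 'items_left', value=100)
>>> counter.decrement(5)   # counter now reads 95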
<SYSTEM_TASK:> Set up the profiler state to record operator. <END_TASK> <USER_TASK:> Description: def mark(self, scope='process'): """Set up the profiler state to record operator. Parameters ---------- scope : string, optional Indicates what scope the marker should refer to. Can be 'global', 'process', 'thread', 'task', and 'marker'. Default is `process`. """
check_call(_LIB.MXProfileSetMarker(self.domain.handle, c_str(self.name), c_str(scope)))
<SYSTEM_TASK:> r"""Get CUDA kernel from compiled module. <END_TASK> <USER_TASK:> Description: def get_kernel(self, name, signature): r"""Get CUDA kernel from compiled module. Parameters ---------- name : str String name of the kernel. signature : str Function signature for the kernel. For example, if a kernel is declared as:: extern "C" __global__ void axpy(const float *x, double *y, int alpha) Then its signature should be:: const float *x, double *y, int alpha or:: const float *, double *, int Note that `*` in signature marks an argument as array and `const` marks an argument as constant (input) array. Returns ------- CudaKernel CUDA kernels that can be launched on GPUs. """
hdl = CudaKernelHandle() is_ndarray = [] is_const = [] dtypes = [] pattern = re.compile(r"""^\s*(const)?\s*([\w_]+)\s*(\*)?\s*([\w_]+)?\s*$""") args = re.sub(r"\s+", " ", signature).split(",") for arg in args: match = pattern.match(arg) if not match or match.groups()[1] == 'const': raise ValueError( 'Invalid function prototype "%s". Must be in the ' 'form of "(const) type (*) (name)"'%arg) is_const.append(bool(match.groups()[0])) dtype = match.groups()[1] is_ndarray.append(bool(match.groups()[2])) if dtype not in _DTYPE_CPP_TO_NP: raise TypeError( "Unsupported kernel argument type %s. Supported types are: %s."%( arg, ','.join(_DTYPE_CPP_TO_NP.keys()))) dtypes.append(_DTYPE_NP_TO_MX[_DTYPE_CPP_TO_NP[dtype]]) check_call(_LIB.MXRtcCudaKernelCreate( self.handle, c_str(name), len(dtypes), c_array_buf(ctypes.c_int, array('i', is_ndarray)), c_array_buf(ctypes.c_int, array('i', is_const)), c_array_buf(ctypes.c_int, array('i', dtypes)), ctypes.byref(hdl))) return CudaKernel(hdl, name, is_ndarray, dtypes)
<SYSTEM_TASK:> Launch cuda kernel. <END_TASK> <USER_TASK:> Description: def launch(self, args, ctx, grid_dims, block_dims, shared_mem=0): """Launch cuda kernel. Parameters ---------- args : tuple of NDArray or numbers List of arguments for kernel. NDArrays are expected for pointer types (e.g. `float*`, `double*`) while numbers are expected for non-pointer types (e.g. `int`, `float`). ctx : Context The context to launch kernel on. Must be GPU context. grid_dims : tuple of 3 integers Grid dimensions for CUDA kernel. block_dims : tuple of 3 integers Block dimensions for CUDA kernel. shared_mem : integer, optional Size of dynamically allocated shared memory. Defaults to 0. """
assert ctx.device_type == 'gpu', "Cuda kernel can only be launched on GPU"
assert len(grid_dims) == 3, "grid_dims must be a tuple of 3 integers"
assert len(block_dims) == 3, "block_dims must be a tuple of 3 integers"
assert len(args) == len(self._dtypes), \
    "CudaKernel(%s) expects %d arguments but got %d"%(
        self._name, len(self._dtypes), len(args))
void_args = []
ref_holder = []
for i, (arg, is_nd, dtype) in enumerate(zip(args, self._is_ndarray, self._dtypes)):
    if is_nd:
        assert isinstance(arg, NDArray), \
            "The %d-th argument is expected to be a NDArray but got %s"%(
                i, type(arg))
        void_args.append(arg.handle)
    else:
        assert isinstance(arg, numeric_types), \
            "The %d-th argument is expected to be a number, but got %s"%(
                i, type(arg))
        ref_holder.append(np.array(arg, dtype=dtype))
        void_args.append(ref_holder[-1].ctypes.data_as(ctypes.c_void_p))
check_call(_LIB.MXRtcCudaKernelCall(
    self.handle, ctx.device_id, c_array(ctypes.c_void_p, void_args),
    mx_uint(grid_dims[0]), mx_uint(grid_dims[1]), mx_uint(grid_dims[2]),
    mx_uint(block_dims[0]), mx_uint(block_dims[1]), mx_uint(block_dims[2]),
    mx_uint(shared_mem)))
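An end-to-end sketch, assuming `mx.rtc.CudaModule` is the compilation entry point (as in MXNet's runtime-compilation module) and a GPU is available:

>>> source = r'''
... extern "C" __global__ void axpy(const float *x, float *y, int alpha) {
...     int i = threadIdx.x + blockIdx.x * blockDim.x;
...     y[i] += alpha * x[i];
... }'''
>>> module = mx.rtc.CudaModule(source)
>>> kernel = module.get_kernel('axpy', 'const float *x, float *y, int alpha')
>>> x = mx.nd.ones((10,), ctx=mx.gpu(0))
>>> y = mx.nd.zeros((10,), ctx=mx.gpu(0))
>>> kernel.launch((x, y, 3), mx.gpu(0), (1, 1, 1), (10, 1, 1))
>>> y.asnumpy()   # every element is 3.0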
<SYSTEM_TASK:> Clear the internal statistics to initial state. <END_TASK> <USER_TASK:> Description: def reset(self): """Clear the internal statistics to initial state."""
if getattr(self, 'num', None) is None: self.num_inst = 0 self.sum_metric = 0.0 else: self.num_inst = [0] * self.num self.sum_metric = [0.0] * self.num self.records = dict() self.counts = dict()
<SYSTEM_TASK:> get recall and precision from internal records <END_TASK> <USER_TASK:> Description: def _recall_prec(self, record, count): """ get recall and precision from internal records """
record = np.delete(record, np.where(record[:, 1].astype(int) == 0)[0], axis=0)
sorted_records = record[record[:, 0].argsort()[::-1]]
tp = np.cumsum(sorted_records[:, 1].astype(int) == 1)
fp = np.cumsum(sorted_records[:, 1].astype(int) == 2)
if count <= 0:
    recall = tp * 0.0
else:
    recall = tp / float(count)
prec = tp.astype(float) / (tp + fp)
return recall, prec
<SYSTEM_TASK:> calculate average precision <END_TASK> <USER_TASK:> Description: def _average_precision(self, rec, prec): """ calculate average precision Params: ---------- rec : numpy.array cumulated recall prec : numpy.array cumulated precision Returns: ---------- ap as float """
# append sentinel values at both ends mrec = np.concatenate(([0.], rec, [1.])) mpre = np.concatenate(([0.], prec, [0.])) # compute precision integration ladder for i in range(mpre.size - 1, 0, -1): mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) # look for recall value changes i = np.where(mrec[1:] != mrec[:-1])[0] # sum (\delta recall) * prec ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) return ap
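A worked toy example under this method's assumptions (cumulated `rec`/`prec` as produced by `_recall_prec`); the backward maximum pass makes the precision envelope monotone before integrating:

>>> rec = np.array([0.5, 1.0])
>>> prec = np.array([1.0, 0.5])
>>> # padded: mrec = [0, 0.5, 1, 1]; envelope: mpre = [1, 1, 0.5, 0]
>>> # ap = (0.5 - 0) * 1.0 + (1.0 - 0.5) * 0.5 = 0.75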
<SYSTEM_TASK:> Insert records according to key <END_TASK> <USER_TASK:> Description: def _insert(self, key, records, count): """ Insert records according to key """
if key not in self.records: assert key not in self.counts self.records[key] = records self.counts[key] = count else: self.records[key] = np.vstack((self.records[key], records)) assert key in self.counts self.counts[key] += count
<SYSTEM_TASK:> calculate average precision, override the default one, <END_TASK> <USER_TASK:> Description: def _average_precision(self, rec, prec): """ calculate average precision, override the default one, special 11-point metric Params: ---------- rec : numpy.array cumulated recall prec : numpy.array cumulated precision Returns: ---------- ap as float """
ap = 0. for t in np.arange(0., 1.1, 0.1): if np.sum(rec >= t) == 0: p = 0 else: p = np.max(prec[rec >= t]) ap += p / 11. return ap
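For contrast, the same toy inputs (rec = [0.5, 1.0], prec = [1.0, 0.5]) under this 11-point rule: the max precision is 1.0 at recall thresholds 0.0 through 0.5 (six points) and 0.5 at 0.6 through 1.0 (five points), so ap = (6 * 1.0 + 5 * 0.5) / 11 ≈ 0.773, slightly above the exact-integration value of 0.75.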
<SYSTEM_TASK:> Internal verbose print function <END_TASK> <USER_TASK:> Description: def _verbose_print(self, desc, init, arr): """Internal verbose print function Parameters ---------- desc : InitDesc or str name of the array init : str initializer pattern arr : NDArray initialized array """
if self._verbose and self._print_func: logging.info('Initialized %s as %s: %s', desc, init, self._print_func(arr))
<SYSTEM_TASK:> Legacy initialization method. <END_TASK> <USER_TASK:> Description: def _legacy_init(self, name, arr): """Legacy initialization method. Parameters ---------- name : str Name of corresponding NDArray. arr : NDArray NDArray to be initialized. """
warnings.warn( "\033[91mCalling initializer with init(str, NDArray) has been deprecated." \ "please use init(mx.init.InitDesc(...), NDArray) instead.\033[0m", DeprecationWarning, stacklevel=3) if not isinstance(name, string_types): raise TypeError('name must be string') if not isinstance(arr, NDArray): raise TypeError('arr must be NDArray') if name.startswith('upsampling'): self._init_bilinear(name, arr) elif name.startswith('stn_loc') and name.endswith('weight'): self._init_zero(name, arr) elif name.startswith('stn_loc') and name.endswith('bias'): self._init_loc_bias(name, arr) elif name.endswith('bias'): self._init_bias(name, arr) elif name.endswith('gamma'): self._init_gamma(name, arr) elif name.endswith('beta'): self._init_beta(name, arr) elif name.endswith('weight'): self._init_weight(name, arr) elif name.endswith("moving_mean"): self._init_zero(name, arr) elif name.endswith("moving_var"): self._init_one(name, arr) elif name.endswith("moving_inv_var"): self._init_zero(name, arr) elif name.endswith("moving_avg"): self._init_zero(name, arr) elif name.endswith('min'): self._init_zero(name, arr) elif name.endswith('max'): self._init_one(name, arr) else: self._init_default(name, arr)
<SYSTEM_TASK:> load class names from text file <END_TASK> <USER_TASK:> Description: def _load_class_names(self, filename, dirname): """ load class names from a text file Parameters: ---------- filename: str file that stores class names dirname: str directory containing the file """
full_path = osp.join(dirname, filename)
with open(full_path, 'r') as f:
    classes = [l.strip() for l in f.readlines()]
return classes