'Return a string representation (same as str(self)).'
def __repr__(self):
return str(self)
'An object representing the data type used by this space. For simple spaces, this will be a dtype string, as used by numpy, scipy, and theano (e.g. \'float32\'). For data-less spaces like NoneType, this will be some other string. For composite spaces, this will be a nested tuple of such strings.'
@property
def dtype(self):
raise NotImplementedError()
'Sets the dtype used by this space.'
@dtype.setter
def dtype(self, new_value):
raise NotImplementedError()
'Refuses to delete the dtype of a space; raises a RuntimeError instead.'
@dtype.deleter
def dtype(self):
raise RuntimeError('You may not delete the dtype of a space, though you can set it to None.')
'Returns the origin in this space. Returns origin : ndarray A NumPy array, with the shape of a single point in this space, representing the origin.'
def get_origin(self):
raise NotImplementedError()
'Returns a batch containing `batch_size` copies of the origin. Parameters batch_size : int The number of examples in the batch to be returned. dtype : WRITEME The dtype of the batch to be returned. Default = None. If None, use self.dtype. Returns batch : ndarray A NumPy array in the shape of a batch of `batch_size` points in this space (with points being indexed along the first axis), each `batch[i]` being a copy of the origin.'
def get_origin_batch(self, batch_size, dtype=None):
raise NotImplementedError()
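A minimal usage sketch of get_origin_batch, assuming the VectorSpace subclass defined later in this file and that its origin is the zero vector (as returned by get_origin):

import numpy as np
from pylearn2.space import VectorSpace

space = VectorSpace(dim=4, dtype='float32')
batch = space.get_origin_batch(batch_size=3)
assert batch.shape == (3, 4)          # one row per example
assert np.all(batch == 0.0)           # each row is a copy of the origin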
'Returns a Theano shared variable containing a batch of batch_size copies of the origin, formatted to lie in this space.'
def make_shared_batch(self, batch_size, name=None, dtype=None):
dtype = self._clean_dtype_arg(dtype)
origin_batch = self.get_origin_batch(batch_size, dtype)
return theano.shared(origin_batch, name=name)
'Returns a symbolic variable representing a batch of points in this space. Parameters name : str Variable name for the returned batch. dtype : str Data type for the returned batch. If omitted (None), self.dtype is used. batch_size : int Number of examples in the returned batch. Returns batch : TensorVariable, SparseVariable, or tuple thereof A batch with the appropriate number of dimensions and appropriate broadcast flags to represent a batch of points in this space.'
def make_theano_batch(self, name=None, dtype=None, batch_size=None):
raise NotImplementedError()
'An alias to make_theano_batch'
def make_batch_theano(self, name=None, dtype=None, batch_size=None):
return self.make_theano_batch(name=name, dtype=dtype, batch_size=batch_size)
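A short sketch of make_theano_batch on a concrete space, assuming the VectorSpace subclass defined later in this file:

from pylearn2.space import VectorSpace

space = VectorSpace(dim=10, dtype='float32')
x = space.make_theano_batch(name='x')   # symbolic batch: a 2D TensorVariable
assert x.ndim == 2
assert x.dtype == 'float32'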
'Returns a Python int (not a theano iscalar) representing the dimensionality of a point in this space. If you format a batch of examples in this space as a design matrix (i.e., VectorSpace batch) then the number of columns will be equal to the total dimension.'
def get_total_dimension(self):
raise NotImplementedError((str(type(self)) + ' does not implement get_total_dimension.'))
'Returns a numeric batch (e.g. a numpy.ndarray or scipy.sparse sparse array), formatted to lie in this space. This is just a wrapper around self._format_as, with an extra check to throw an exception if <batch> is symbolic. Should be invertible, i.e. batch should equal `space.np_format_as(self.np_format_as(batch, space), self)` Parameters batch : numpy.ndarray, or one of the scipy.sparse matrices. Array which lies in this space. space : Space Target space to format batch to. Returns batch : numpy.ndarray or scipy.sparse matrix The formatted batch'
def np_format_as(self, batch, space):
self._check_is_numeric(batch)
return self._format_as(is_numeric=True, batch=batch, space=space)
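A round-trip sketch of np_format_as between two concrete spaces, assuming the VectorSpace and Conv2DSpace subclasses defined later in this file (shapes below are hypothetical):

import numpy as np
from pylearn2.space import VectorSpace, Conv2DSpace

vec = VectorSpace(dim=2 * 2 * 3, dtype='float32')
conv = Conv2DSpace(shape=[2, 2], num_channels=3, axes=('b', 0, 1, 'c'),
                   dtype='float32')

design = np.random.rand(5, 12).astype('float32')   # a batch of 5 points
topo = vec.np_format_as(design, conv)               # shape (5, 2, 2, 3)
back = conv.np_format_as(topo, vec)                 # formatting is invertible
assert np.allclose(back, design)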
'Called by self._format_as(space), to check whether self and space have compatible sizes. Throws a ValueError if they don\'t.'
def _check_sizes(self, space):
my_dimension = self.get_total_dimension()
other_dimension = space.get_total_dimension()

if my_dimension != other_dimension:
    raise ValueError(str(self) + ' with total dimension ' +
                     str(my_dimension) +
                     " can't format a batch into " + str(space) +
                     ' because its total dimension is ' +
                     str(other_dimension))
'Returns a symbolic batch, formatted to lie in the target space. This is just a wrapper around self._format_as, with an extra check to throw an exception if <batch> is numeric.'
def format_as(self, batch, space):
self._check_is_symbolic(batch)
return self._format_as(is_numeric=False, batch=batch, space=space)
'The shared implementation of format_as() and np_format_as(). Agnostic to whether batch is symbolic or numeric, which avoids duplicating a lot of code between format_as() and np_format_as(). Calls the appropriate callbacks, then calls self._format_as_impl(). Should be invertible, i.e. batch should equal `space._format_as(self._format_as(batch, space), self)` Parameters is_numeric : bool Set to True to call np_validate_callbacks(). Set to False to call validate_callbacks(). batch : WRITEME space : Space WRITEME Returns WRITEME'
def _format_as(self, is_numeric, batch, space):
assert isinstance(is_numeric, bool)
self._validate(is_numeric, batch)
self._check_sizes(space)
return self._format_as_impl(is_numeric, batch, space)
'Actual implementation of format_as/np_format_as. Formats batch to target_space. Should be invertible, i.e. batch should equal `space._format_as_impl(self._format_as_impl(batch, space), self)` Parameters is_numeric : bool Set to True to treat batch as a numeric batch, False to treat it as a symbolic batch. This is necessary because sometimes a batch\'s numeric/symbolicness can be ambiguous, i.e. when it\'s the empty tuple (). batch : a numpy.ndarray, scipy.sparse matrix, theano symbol, or a nested tuple thereof Implementations of this method may assume that batch lies in this space (i.e. that it passed self._validate(batch) without throwing an exception). target_space : A Space subclass The space to transform batch into. Returns The batch, converted to the target_space.'
def _format_as_impl(self, is_numeric, batch, target_space):
raise NotImplementedError(('%s does not implement _format_as_impl().' % type(self)))
'Returns a numeric batch (e.g. a numpy.ndarray or scipy.sparse sparse array), with formatting from space undone. This is just a wrapper around space.np_format_as, with an extra check to throw an exception if <batch> is symbolic. Parameters batch : numpy.ndarray, or one of the scipy.sparse matrices. Array which lies in space. space : Space Space to undo formatting from. Returns numpy.ndarray or one of the scipy.sparse matrices The formatted batch.'
def undo_np_format_as(self, batch, space):
self._check_is_numeric(batch)
return space.np_format_as(batch=batch, space=self)
'Returns a symbolic batch (e.g. a theano.tensor or theano.sparse variable), with formatting from space undone. This is just a wrapper around self._undo_format_as_impl, with an extra check to throw an exception if <batch> is numeric. Parameters batch : a symbolic Theano variable Batch which lies in space. space : Space Space to undo formatting from. Returns A symbolic Theano variable The batch formatted as self.'
def undo_format_as(self, batch, space):
self._check_is_symbolic(batch)
space.validate(batch)
self._check_sizes(space)
batch = self._undo_format_as_impl(batch=batch, space=space)
self.validate(batch)
return batch
'Actual implementation of undo_format_as. Undoes the formatting done by target_space. Note that undo_np_format_as calls np_format_as. Parameters batch : a theano symbol, or a nested tuple thereof Implementations of this method may assume that batch lies in space (i.e. that it passed self._validate(batch) without throwing an exception). target_space : A Space subclass The space to undo batch formatting from. Returns A symbolic Theano variable The batch, converted from target_space, back to self.'
def _undo_format_as_impl(self, batch, target_space):
raise NotImplementedError(('%s does not implement _undo_format_as_impl().' % type(self)))
'Runs all validate_callbacks, then checks that batch lies in this space. Raises an exception if the batch isn\'t symbolic, or if any of these checks fails. Parameters batch : a symbolic (Theano) variable that lies in this space.'
def validate(self, batch):
self._check_is_symbolic(batch)
self._validate(is_numeric=False, batch=batch)
'Runs all np_validate_callbacks, then checks that batch lies in this space. Raises an exception if the batch isn\'t numeric, or if any of these checks fails. Parameters batch : a numeric (numpy/scipy.sparse) variable that lies in this space'
def np_validate(self, batch):
self._check_is_numeric(batch)
self._validate(is_numeric=True, batch=batch)
'Shared implementation of validate() and np_validate(). Calls validate_callbacks or np_validate_callbacks as appropriate, then calls self._validate_impl(batch) to verify that batch belongs to this space. Parameters is_numeric : bool. Set to True to call np_validate_callbacks, False to call validate_callbacks. Necessary because it can be impossible to tell from the batch whether it should be treated as a numeric or symbolic batch, for example when the batch is the empty tuple (), or a NullSpace batch (None). batch : a theano variable, numpy ndarray, scipy.sparse matrix or a nested tuple thereof Represents a batch belonging to this space.'
def _validate(self, is_numeric, batch):
if is_numeric:
    self._check_is_numeric(batch)
    callbacks_name = 'np_validate_callbacks'
else:
    self._check_is_symbolic(batch)
    callbacks_name = 'validate_callbacks'

if not hasattr(self, callbacks_name):
    raise TypeError('The ' + str(type(self)) + ' Space subclass is '
                    'required to call the Space superclass constructor '
                    'but does not.')
else:
    callbacks = getattr(self, callbacks_name)
    for callback in callbacks:
        callback(batch)

self._validate_impl(is_numeric, batch)
'Subclasses must override this method so that it throws an exception if the batch is the wrong shape or dtype for this Space. Parameters is_numeric : bool Set to True to treat batch as a numeric type (numpy.ndarray or scipy.sparse matrix). Set to False to treat batch as a symbolic (Theano) variable. Necessary because batch could be (), which could be numeric or symbolic. batch : A numpy ndarray, scipy.sparse matrix, theano variable or a nested tuple thereof. Must be a valid batch belonging to this space.'
def _validate_impl(self, is_numeric, batch):
raise NotImplementedError(('Class "%s" does not implement _validate_impl()' % type(self)))
'Returns the batch size of a symbolic batch. Parameters batch : WRITEME'
def batch_size(self, batch):
return self._batch_size(is_numeric=False, batch=batch)
'Returns the batch size of a numeric (numpy/scipy.sparse) batch. Parameters batch : WRITEME'
def np_batch_size(self, batch):
return self._batch_size(is_numeric=True, batch=batch)
'Shared implementation of batch_size() and np_batch_size(). Validates the batch, then returns its batch size.'
def _batch_size(self, is_numeric, batch):
self._validate(is_numeric, batch)
return self._batch_size_impl(is_numeric, batch)
'Returns the batch size of a batch. Parameters batch : WRITEME'
def _batch_size_impl(self, is_numeric, batch):
raise NotImplementedError(('%s does not implement batch_size' % type(self)))
'Returns a batch of data from index `start` to index `end`. Parameters data : WRITEME start : WRITEME end : WRITEME'
def get_batch(self, data, start, end):
raise NotImplementedError(((str(type(self)) + ' does not implement ') + 'get_batch'))
'Raises a TypeError if batch is not a numeric (numpy / scipy.sparse) variable.'
@staticmethod
def _check_is_numeric(batch):
if not is_numeric_batch(batch):
    raise TypeError('Expected batch to be a numeric variable, but '
                    'instead it was of type "%s"' % type(batch))
'Raises a TypeError if batch is not a symbolic (Theano) variable.'
@staticmethod
def _check_is_symbolic(batch):
if not is_symbolic_batch(batch):
    raise TypeError('Expected batch to be a symbolic variable, but '
                    'instead it was of type "%s"' % type(batch))
'Checks a dtype string for validity, and returns it if valid. If dtype is \'floatX\', returns the theano.config.floatX dtype (this will be either \'float32\' or \'float64\').'
def _clean_dtype_arg(self, dtype):
if isinstance(dtype, np.dtype):
    dtype = str(dtype)

if dtype == 'floatX':
    return theano.config.floatX

if dtype is None or dtype in tuple(x.dtype for x in theano.scalar.all_types):
    return dtype

raise TypeError('Unrecognized value "%s" (type %s) for dtype arg' %
                (dtype, type(dtype)))
'if dtype is None, checks that self.dtype is not None. Otherwise, same as superclass\' implementation.'
def _clean_dtype_arg(self, dtype):
if dtype is None:
    if self.dtype is None:
        raise TypeError('self.dtype is None, so you must provide a '
                        'non-None dtype argument to this method.')
    return self.dtype

return super(SimplyTypedSpace, self)._clean_dtype_arg(dtype)
'Checks that batch is a simple (non-composite) batch whose dtype can be safely cast to this space\'s dtype.'
def _validate_impl(self, is_numeric, batch):
if isinstance(batch, tuple):
    raise TypeError('This space only supports simple dtypes, but received '
                    'a composite batch.')

def is_complex(dtype):
    return np.issubdtype(dtype, np.complex)

def is_integral(dtype):
    return np.issubdtype(dtype, np.integer)

if self.dtype is not None:
    if ((is_complex(batch.dtype) and not is_complex(self.dtype)) or
            (not is_integral(batch.dtype) and is_integral(self.dtype))):
        raise TypeError("Cannot safely cast batch dtype %s to space's "
                        "dtype %s. " % (batch.dtype, self.dtype))
'Returns the dtype of this space.'
@property
def dtype(self):
return self._dtype
'Sets the dtype of this space, after validating it.'
@dtype.setter
def dtype(self, new_dtype):
self._dtype = super(SimplyTypedSpace, self)._clean_dtype_arg(new_dtype)
'Support for unpickling. Fills in a default dtype (theano.config.floatX) for pickles saved before the _dtype attribute was added.'
def __setstate__(self, state_dict):
self.__dict__.update(state_dict)

if '_dtype' not in state_dict:
    self._dtype = theano.config.floatX
'Return a string representation'
def __str__(self):
return ('%(classname)s(dim=%(dim)s, max_labels=%(max_labels)s, dtype=%(dtype)s)' % dict(classname=self.__class__.__name__, dim=self.dim, max_labels=self.max_labels, dtype=self.dtype))
'.. todo:: WRITEME'
def __eq__(self, other):
return ((type(self) == type(other)) and (self.max_labels == other.max_labels) and (self.dim == other.dim) and (self.dtype == other.dtype))
'.. todo:: WRITEME'
def __ne__(self, other):
return (not (self == other))
'.. todo:: WRITEME'
@functools.wraps(Space._validate_impl)
def _validate_impl(self, is_numeric, batch):
super(IndexSpace, self)._validate_impl(is_numeric, batch)

if is_numeric:
    if (not isinstance(batch, np.ndarray) and
            str(type(batch)) != "<type 'CudaNdarray'>"):
        raise TypeError('The value of a IndexSpace batch should be a '
                        'numpy.ndarray, or CudaNdarray, but is %s.' %
                        str(type(batch)))
    if batch.ndim != 2:
        raise ValueError('The value of a IndexSpace batch must be 2D, got '
                         '%d dimensions for %s.' % (batch.ndim, batch))
    if batch.shape[1] != self.dim:
        raise ValueError("The width of a IndexSpace batch must match with "
                         "the space's dimension, but batch has shape %s "
                         "and dim = %d." % (str(batch.shape), self.dim))
else:
    if not isinstance(batch, theano.gof.Variable):
        raise TypeError('IndexSpace batch should be a theano Variable, '
                        'got ' + str(type(batch)))
    if not isinstance(batch.type, (theano.tensor.TensorType,
                                   CudaNdarrayType)):
        raise TypeError('IndexSpace batch should be TensorType or '
                        'CudaNdarrayType, got ' + str(batch.type))
    if batch.ndim != 2:
        raise ValueError('IndexSpace batches must be 2D, got %d '
                         'dimensions' % batch.ndim)
    for val in get_debug_values(batch):
        self.np_validate(val)
'.. todo:: WRITEME'
def __str__(self):
return ('%s(dim=%d%s, dtype=%s)' % (self.__class__.__name__, self.dim, (', sparse' if self.sparse else ''), self.dtype))
'.. todo:: WRITEME'
def __eq__(self, other):
return ((type(self) == type(other)) and (self.dim == other.dim) and (self.sparse == other.sparse) and (self.dtype == other.dtype))
'.. todo:: WRITEME'
def __hash__(self):
return hash((type(self), self.dim, self.sparse, self.dtype))
'.. todo:: WRITEME'
@functools.wraps(Space._validate_impl)
def _validate_impl(self, is_numeric, batch):
super(VectorSpace, self)._validate_impl(is_numeric, batch)

if isinstance(batch, theano.gof.Variable):
    if self.sparse:
        if not isinstance(batch.type, theano.sparse.SparseType):
            raise TypeError('This VectorSpace is%s sparse, but the '
                            'provided batch is not. (batch type: "%s")' %
                            ('' if self.sparse else ' not', type(batch)))
    elif not isinstance(batch.type, (theano.tensor.TensorType,
                                     CudaNdarrayType)):
        raise TypeError('VectorSpace batch should be TensorType or '
                        'CudaNdarrayType, got ' + str(batch.type))

    if batch.ndim != 2:
        raise ValueError('VectorSpace batches must be 2D, got %d '
                         'dimensions' % batch.ndim)

    for val in get_debug_values(batch):
        self.np_validate(val)
else:
    if (not self.sparse and not isinstance(batch, np.ndarray) and
            str(type(batch)) != "<type 'CudaNdarray'>"):
        raise TypeError('The value of a VectorSpace batch should be a '
                        'numpy.ndarray, or CudaNdarray, but is %s.' %
                        str(type(batch)))
    if self.sparse:
        if not theano.sparse.enable_sparse:
            raise TypeError('theano.sparse is not enabled, cannot have a '
                            'value for a sparse VectorSpace.')
        if not scipy.sparse.issparse(batch):
            raise TypeError('The value of a sparse VectorSpace batch '
                            'should be a sparse scipy matrix, got %s of '
                            'type %s.' % (batch, type(batch)))
    if batch.ndim != 2:
        raise ValueError('The value of a VectorSpace batch must be 2D, '
                         'got %d dimensions for %s.' % (batch.ndim, batch))
    if batch.shape[1] != self.dim:
        raise ValueError("The width of a VectorSpace batch must match "
                         "with the space's dimension, but batch has shape "
                         "%s and dim = %d." % (str(batch.shape), self.dim))
'Return a string representation'
def __str__(self):
return ('%(classname)s(dim=%(dim)s, dtype=%(dtype)s)' % dict(classname=self.__class__.__name__, dim=self.dim, dtype=self.dtype))
'Return a string representation'
def __str__(self):
return ('%(classname)s(dim=%(dim)s, max_labels=%(max_labels)s, dtype=%(dtype)s)' % dict(classname=self.__class__.__name__, dim=self.dim, max_labels=self.max_labels, dtype=self.dtype))
'.. todo:: WRITEME'
def __eq__(self, other):
return ((type(self) == type(other)) and (self.max_labels == other.max_labels) and (self.dim == other.dim) and (self.dtype == other.dtype))
'.. todo:: WRITEME'
def __str__(self):
return ('%s(shape=%s, num_channels=%d, axes=%s, dtype=%s)' % (self.__class__.__name__, str(self.shape), self.num_channels, str(self.axes), self.dtype))
'.. todo:: WRITEME'
def __eq__(self, other):
assert isinstance(self.axes, tuple)

if isinstance(other, Conv2DSpace):
    assert isinstance(other.axes, tuple)

return (type(self) == type(other) and
        self.shape == other.shape and
        self.num_channels == other.num_channels and
        self.axes == other.axes and
        self.dtype == other.dtype)
'.. todo:: WRITEME'
def __hash__(self):
return hash((type(self), self.shape, self.num_channels, self.axes, self.dtype))
'Returns a view of tensor using the axis semantics defined by dst_axes. (If src_axes matches dst_axes, returns tensor itself) Useful for transferring tensors between different Conv2DSpaces. Parameters tensor : tensor_like A 4-tensor representing a batch of images src_axes : WRITEME Axis semantics of tensor dst_axes : WRITEME WRITEME'
@staticmethod
def convert(tensor, src_axes, dst_axes):
src_axes = tuple(src_axes)
dst_axes = tuple(dst_axes)
assert len(src_axes) == 4
assert len(dst_axes) == 4

if src_axes == dst_axes:
    return tensor

shuffle = [src_axes.index(elem) for elem in dst_axes]

if is_symbolic_batch(tensor):
    return tensor.dimshuffle(*shuffle)
else:
    return tensor.transpose(*shuffle)
'An alias to `convert`, for use on numeric (numpy) batches.'
@staticmethod
def convert_numpy(tensor, src_axes, dst_axes):
return Conv2DSpace.convert(tensor, src_axes, dst_axes)
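A short sketch of converting a numeric batch between axis orderings with Conv2DSpace.convert_numpy; the array contents and shapes below are hypothetical:

import numpy as np
from pylearn2.space import Conv2DSpace

# a batch of 10 images stored as ('b', 0, 1, 'c'): batch, rows, cols, channels
b01c = np.zeros((10, 32, 32, 3), dtype='float32')
c01b = Conv2DSpace.convert_numpy(b01c, ('b', 0, 1, 'c'), ('c', 0, 1, 'b'))
assert c01b.shape == (3, 32, 32, 10)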
'.. todo:: WRITEME'
def __eq__(self, other):
return ((type(self) == type(other)) and (len(self.components) == len(other.components)) and all(((my_component == other_component) for (my_component, other_component) in zip(self.components, other.components))))
'.. todo:: WRITEME'
def __hash__(self):
return hash((type(self), tuple(self.components)))
'.. todo:: WRITEME'
def __str__(self):
return ('%(classname)s(%(components)s)' % dict(classname=self.__class__.__name__, components=', '.join([str(c) for c in self.components])))
'Returns a nested tuple of dtype strings. NullSpaces will yield a bogus dtype string (see NullSpace.dtype).'
@property
def dtype(self):
def get_dtype_of_space(space):
    if isinstance(space, CompositeSpace):
        return tuple(get_dtype_of_space(c) for c in space.components)
    elif isinstance(space, NullSpace):
        return NullSpace().dtype
    else:
        return space.dtype

return get_dtype_of_space(self)
'If new_dtype is None or a string, it will be applied to all components (except any NullSpaces). If new_dtype is a (nested) tuple, its elements will be applied to corresponding components.'
@dtype.setter
def dtype(self, new_dtype):
if isinstance(new_dtype, tuple):
    for component, new_dt in safe_zip(self.components, new_dtype):
        component.dtype = new_dt
elif new_dtype is None or isinstance(new_dtype, str):
    for component in self.components:
        if not isinstance(component, NullSpace):
            component.dtype = new_dtype
'Returns a new Space containing only the components whose indices are given in subset. The new space will contain the components in the order given in the subset list. Parameters subset : WRITEME Notes The returned Space may not be a CompositeSpace if `subset` contains only one index.'
def restrict(self, subset):
assert isinstance(subset, (list, tuple))

if len(subset) == 1:
    (idx,) = subset
    return self.components[idx]

return CompositeSpace([self.components[i] for i in subset])
'Returns a batch containing only the components whose indices are present in subset. May not be a tuple anymore if there is only one index. Outputs will be ordered in the order that they appear in subset. Only supports symbolic batches. Parameters batch : WRITEME subset : WRITEME'
def restrict_batch(self, batch, subset):
self._validate(is_numeric=False, batch=batch)
assert isinstance(subset, (list, tuple))

if len(subset) == 1:
    (idx,) = subset
    return batch[idx]

return tuple([batch[i] for i in subset])
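A small sketch of restrict and restrict_batch on a CompositeSpace, assuming the classes defined in this file (component dimensions are hypothetical):

from pylearn2.space import CompositeSpace, VectorSpace

composite = CompositeSpace([VectorSpace(dim=2),
                            VectorSpace(dim=3),
                            VectorSpace(dim=4)])

sub = composite.restrict([0, 2])    # a CompositeSpace of components 0 and 2
single = composite.restrict([1])    # the bare VectorSpace(dim=3), not wrapped

batch = composite.make_theano_batch(name='b')
assert len(composite.restrict_batch(batch, [0, 2])) == 2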
'Supports formatting to a single VectorSpace, or to a CompositeSpace. CompositeSpace->VectorSpace: Traverses the nested components in depth-first order, serializing the leaf nodes (i.e. the non-composite subspaces) into the VectorSpace. CompositeSpace->CompositeSpace: Only works for two CompositeSpaces that have the same nested structure. Traverses both CompositeSpaces\' nested components in parallel, converting between corresponding non-composite components in <self> and <space> as: `self_component._format_as(is_numeric, batch_component, space_component)` Parameters batch : WRITEME space : WRITEME Returns WRITEME'
@functools.wraps(Space._format_as_impl)
def _format_as_impl(self, is_numeric, batch, space):
if isinstance(space, VectorSpace):
    pieces = []
    for component, input_piece in zip(self.components, batch):
        subspace = VectorSpace(dim=component.get_total_dimension(),
                               dtype=space.dtype,
                               sparse=space.sparse)
        pieces.append(component._format_as(is_numeric, input_piece,
                                           subspace))

    if len(pieces) > 0:
        for piece in pieces[1:]:
            if pieces[0].dtype != piece.dtype:
                assert space.dtype is None
                raise TypeError('Tried to format components with '
                                'differing dtypes into a VectorSpace with '
                                'no dtype of its own. dtypes: %s' %
                                str(tuple(str(p.dtype) for p in pieces)))

    if is_symbolic_batch(batch):
        if space.sparse:
            return theano.sparse.hstack(pieces)
        else:
            return tensor.concatenate(pieces, axis=1)
    elif space.sparse:
        return scipy.sparse.hstack(pieces)
    else:
        return np.concatenate(pieces, axis=1)

if isinstance(space, CompositeSpace):
    def recursive_format_as(orig_space, batch, dest_space):
        if (isinstance(orig_space, CompositeSpace) !=
                isinstance(dest_space, CompositeSpace)):
            raise TypeError("Can't convert between CompositeSpaces with "
                            "different tree structures")

        if isinstance(orig_space, CompositeSpace):
            return tuple(recursive_format_as(os, bt, ds)
                         for os, bt, ds in safe_zip(orig_space.components,
                                                    batch,
                                                    dest_space.components))
        else:
            return orig_space._format_as(is_numeric, batch, dest_space)

    return recursive_format_as(self, batch, space)

raise NotImplementedError(str(self) + ' does not know how to format as ' +
                          str(space))
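A sketch of flattening a composite batch into a single design matrix via np_format_as, assuming the CompositeSpace and VectorSpace classes in this file (shapes are hypothetical):

import numpy as np
from pylearn2.space import CompositeSpace, VectorSpace

composite = CompositeSpace([VectorSpace(dim=2, dtype='float32'),
                            VectorSpace(dim=3, dtype='float32')])
batch = (np.ones((4, 2), dtype='float32'),
         np.zeros((4, 3), dtype='float32'))

flat = composite.np_format_as(batch, VectorSpace(dim=5, dtype='float32'))
assert flat.shape == (4, 5)     # components concatenated along axis 1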
'Undoes the formatting to a single VectorSpace, or to a CompositeSpace. CompositeSpace->VectorSpace: Traverses the nested components in depth-first order, serializing the leaf nodes (i.e. the non-composite subspaces) into the VectorSpace. CompositeSpace->CompositeSpace: Only works for two CompositeSpaces that have the same nested structure. Traverses both CompositeSpaces\' nested components in parallel, converting between corresponding non-composite components in <self> and <space> as: `self_component._format_as(is_numeric, batch_component, space_component)` Parameters batch : WRITEME space : WRITEME Returns WRITEME'
@functools.wraps(Space._undo_format_as_impl)
def _undo_format_as_impl(self, batch, space):
if isinstance(space, VectorSpace): if space.sparse: owner = batch.owner assert (owner is not None) assert ('HStack' in str(owner.op)) batch = owner.inputs else: owner = batch.owner assert (owner is not None) assert (str(owner.op) == 'Join') batch = owner.inputs[1:] def extract_dtype(dtype): if isinstance(dtype, tuple): return extract_dtype(dtype[0]) else: return dtype def compose_batch(composite_space, batch_list): rval = () for (sp, bt) in safe_zip(composite_space.components, batch_list): if (False and isinstance(sp, CompositeSpace)): (composed, batch_list) = compose_batch(sp, batch_list) rval += (composed,) else: sparse = getattr(sp, 'sparse', False) dtype = extract_dtype(sp.dtype) new_sp = VectorSpace(dim=sp.get_total_dimension(), dtype=dtype, sparse=sparse) new_batch = sp.undo_format_as(bt, new_sp) rval += (new_batch,) return rval composed = compose_batch(self, batch) return composed if isinstance(space, CompositeSpace): def recursive_undo_format_as(orig_space, batch, dest_space): if (not (isinstance(orig_space, CompositeSpace) == isinstance(dest_space, CompositeSpace))): raise TypeError("Can't convert between CompositeSpaces with different tree structures") if isinstance(orig_space, CompositeSpace): return tuple((recursive_undo_format_as(os, bt, ds) for (os, bt, ds) in safe_zip(orig_space.components, batch, dest_space.components))) else: return orig_space.undo_format_as(batch, dest_space) return recursive_undo_format_as(self, batch, space) raise NotImplementedError(((str(self) + ' does not know how to format as ') + str(space)))
'Calls get_origin_batch on all subspaces, and returns a (nested) tuple containing their return values. Parameters batch_size : int Batch size. dtype : str the dtype to use for all the get_origin_batch() calls on subspaces. If dtype is None, or a single dtype string, that will be used for all calls. If dtype is a (nested) tuple, it must mirror the tree structure of this CompositeSpace.'
def get_origin_batch(self, batch_size, dtype=None):
dtype = self._clean_dtype_arg(dtype)
return tuple(component.get_origin_batch(batch_size, dt)
             for component, dt in safe_zip(self.components, dtype))
'Calls make_theano_batch on all subspaces, and returns a (nested) tuple containing their return values. Parameters name : str Name of the symbolic variable dtype : str The dtype of the returned batch. If dtype is a string, it will be applied to all components. If dtype is None, C.dtype will be used for each component C. If dtype is a nested tuple, its elements will be applied to corresponding elements in the components. batch_size : int Batch size.'
@functools.wraps(Space.make_theano_batch)
def make_theano_batch(self, name=None, dtype=None, batch_size=None):
if name is None:
    name = [None] * len(self.components)
elif not isinstance(name, (list, tuple)):
    name = ['%s[%i]' % (name, i) for i in xrange(len(self.components))]

dtype = self._clean_dtype_arg(dtype)

assert isinstance(name, (list, tuple))
assert isinstance(dtype, (list, tuple))

rval = tuple([x.make_theano_batch(name=n, dtype=d, batch_size=batch_size)
              for x, n, d in safe_zip(self.components, name, dtype)])
return rval
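A short sketch of CompositeSpace.make_theano_batch; the component names produced below follow the '%s[%i]' pattern in the body above:

from pylearn2.space import CompositeSpace, VectorSpace

composite = CompositeSpace([VectorSpace(dim=2), VectorSpace(dim=3)])
xs = composite.make_theano_batch(name='features')

assert isinstance(xs, tuple) and len(xs) == 2
assert xs[0].name == 'features[0]' and xs[1].name == 'features[1]'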
'If dtype is None or a string, this returns a nested tuple that mirrors the tree structure of this CompositeSpace, with dtype at the leaves. If dtype is a nested tuple, this checks that it has the same tree structure as this CompositeSpace.'
def _clean_dtype_arg(self, dtype):
super_self = super(CompositeSpace, self)

def make_dtype_tree(dtype, space):
    '''
    Creates a nested tuple tree that mirrors the tree structure of
    <space>, populating the leaves with <dtype>.
    '''
    if isinstance(space, CompositeSpace):
        return tuple(make_dtype_tree(dtype, component)
                     for component in space.components)
    else:
        return super_self._clean_dtype_arg(dtype)

def check_dtype_tree(dtype, space):
    '''
    Verifies that a dtype tree mirrors the tree structure of <space>,
    calling Space._clean_dtype_arg on the leaves.
    '''
    if isinstance(space, CompositeSpace):
        if not isinstance(dtype, tuple):
            raise TypeError('Tree structure mismatch.')
        return tuple(check_dtype_tree(dt, c)
                     for dt, c in safe_zip(dtype, space.components))
    else:
        if not (dtype is None or isinstance(dtype, str)):
            raise TypeError('Tree structure mismatch.')
        return super_self._clean_dtype_arg(dtype)

if dtype is None or isinstance(dtype, str):
    dtype = super_self._clean_dtype_arg(dtype)
    return make_dtype_tree(dtype, self)
else:
    return check_dtype_tree(dtype, self)
'.. todo:: WRITEME'
def __str__(self):
return 'NullSpace'
'.. todo:: WRITEME'
def __eq__(self, other):
return (type(self) == type(other))
'.. todo:: WRITEME'
def __hash__(self):
return hash(type(self))
'Returns a bogus dtype string, since a NullSpace batch contains no data.'
@property
def dtype(self):
return ("%s's dtype" % self.__class__.__name__)
'Rejects any dtype other than the bogus placeholder returned by the getter.'
@dtype.setter
def dtype(self, new_dtype):
if new_dtype != self.dtype:
    raise TypeError('%s can only take the bogus dtype "%s"' %
                    (self.__class__.__name__, self.dtype))
'.. todo:: WRITEME'
def minimize(self, *inputs):
if self.verbose: logger.info('minimizing') alpha_list = list(self.init_alpha) orig_obj = self.obj(*inputs) if self.verbose: logger.info(orig_obj) iters = 0 if self.reset_conjugate: norm = 0.0 else: norm = 1.0 while (iters != self.max_iter): if self.verbose: logger.info('batch gradient descent iteration {0}'.format(iters)) iters += 1 self._cache_values() if self.conjugate: self._store_old_grad(norm) self._compute_grad(*inputs) if self.conjugate: self._make_conjugate() norm = self._normalize_grad() if (self.line_search_mode is None): (best_obj, best_alpha, best_alpha_ind) = (self.obj(*inputs), 0.0, (-1)) prev_best_obj = best_obj for (ind, alpha) in enumerate(alpha_list): self._goto_alpha(alpha) obj = self.obj(*inputs) if self.verbose: logger.info(' DCTB {0} {1}'.format(alpha, obj)) if (obj <= best_obj): best_obj = obj best_alpha = alpha best_alpha_ind = ind if self.verbose: logger.info(best_obj) assert (not np.isnan(best_obj)) assert (best_obj <= prev_best_obj) self._goto_alpha(best_alpha) step_size = best_alpha if ((best_alpha_ind < 1) and (alpha_list[0] > self.tol)): alpha_list = [(alpha / 3.0) for alpha in alpha_list] if self.verbose: logger.info('shrinking the step size') elif (best_alpha_ind > (len(alpha_list) - 2)): alpha_list = [(alpha * 2.0) for alpha in alpha_list] if self.verbose: logger.info('growing the step size') elif ((best_alpha_ind == (-1)) and (alpha_list[0] <= self.tol)): if (alpha_list[(-1)] > 1): if self.verbose: logger.info('converged') break if self.verbose: logger.info('expanding the range of step sizes') for i in xrange(len(alpha_list)): for j in xrange(i, len(alpha_list)): alpha_list[j] *= 1.5 else: a = np.asarray(alpha_list) s = (a[1:] / a[:(-1)]) max_gap = 5.0 if (s.max() > max_gap): weight = 0.99 if self.verbose: logger.info('shrinking the range of step sizes') alpha_list = [((alpha ** weight) * (best_alpha ** (1.0 - weight))) for alpha in alpha_list] assert all([(second > first) for (first, second) in safe_zip(alpha_list[:(-1)], alpha_list[1:])]) else: assert (self.line_search_mode == 'exhaustive') if (self.verbose > 1): logger.info('Exhaustive line search') obj = self.obj(*inputs) if np.isnan(obj): logger.warning('Objective is NaN for these parameters.') results = [(0.0, obj)] for alpha in alpha_list: if (not (alpha > results[(-1)][0])): logger.error('alpha: {0}'.format(alpha)) logger.error('most recent alpha (should be smaller): {0}'.format(results[(-1)][0])) assert False self._goto_alpha(alpha) obj = self.obj(*inputs) if np.isnan(obj): obj = np.inf results.append((alpha, obj)) if (self.verbose > 1): for (alpha, obj) in results: logger.info(' DCTB {0} {1}'.format(alpha, obj)) logger.info(' DCTB -------') prev_improvement = 0.0 while True: alpha_list = [alpha for (alpha, obj) in results] obj = [obj for (alpha, obj) in results] mn = min(obj) idx = obj.index(mn) def do_point(x): self._goto_alpha(x) res = self.obj(*inputs) if (self.verbose > 1): logger.info(' DCTB {0} {1}'.format(x, res)) if np.isnan(res): res = np.inf for i in xrange(len(results)): elem = results[i] ex = elem[0] if (x == ex): raise AssertionError((str(ex) + 'is already in the list.')) if (x > ex): if (((i + 1) == len(results)) or (x < results[(i + 1)][0])): results.insert((i + 1), (x, res)) return (mn - res) assert False if (idx == 0): x = ((alpha_list[0] + alpha_list[1]) / 2.0) elif (idx == (len(alpha_list) - 1)): x = (2 * alpha_list[(-1)]) elif (obj[(idx + 1)] < obj[(idx - 1)]): x = ((alpha_list[idx] + alpha_list[(idx + 1)]) / 2.0) else: x = ((alpha_list[idx] + alpha_list[(idx - 1)]) / 2.0) if 
(x < 1e-20): break improvement = do_point(x) if (((improvement > 0) and (improvement < (0.01 * prev_improvement))) or (len(obj) > 10)): break prev_improvement = improvement alpha_list = [alpha for (alpha, ignore_obj) in results] obj = [obj_elem for (alpha, obj_elem) in results] mn = min(obj) idx = obj.index(mn) x = alpha_list[idx] self._goto_alpha(x) step_size = x if self.verbose: logger.info('best objective: {0}'.format(mn)) assert (not np.isnan(mn)) if (idx == 0): x = alpha_list[1] if (self.min_init_alpha is not None): x = max(x, (2.0 * self.min_init_alpha)) alpha_list = [(x / 2.0), x] best_obj = mn new_weight = self.new_weight.get_value() old = self.ave_step_size.get_value() update = ((new_weight * step_size) + ((1 - new_weight) * old)) update = np.cast[config.floatX](update) if (self.ave_step_size.dtype == 'float32'): assert (update.dtype == 'float32') self.ave_step_size.set_value(update) old = self.ave_grad_mult.get_value() update = ((new_weight * (step_size / norm)) + ((1.0 - new_weight) * old)) update = np.cast[config.floatX](update) self.ave_grad_mult.set_value(update) if (new_weight == 1.0): self.new_weight.set_value(0.01) if (not self.reset_alpha): self.init_alpha = alpha_list return best_obj
'.. todo:: WRITEME'
def _true_inputs(self, inputs):
return [elem for (elem, shared) in safe_zip(inputs, self._shared_mask) if (not shared)]
'.. todo:: WRITEME'
def _shared_inputs(self, inputs):
return [elem for (elem, shared) in safe_zip(inputs, self._shared_mask) if shared]
'.. todo:: WRITEME'
def _set_shared(self, inputs):
for elem, mask, shared in safe_zip(inputs, self._shared_mask, self._shared):
    if mask:
        shared.set_value(elem)
'.. todo:: WRITEME'
def __call__(self, *batches):
for batch in batches:
    if not isinstance(batch, list):
        raise TypeError('Expected each argument to be a list, but one '
                        'argument is ' + str(batch) + ' of type ' +
                        str(type(batch)))

total_examples = np.cast[config.floatX](sum([batch[0].shape[0]
                                             for batch in batches]))

if self.has_updates:
    self._clear()

augmented = self._true_inputs(batches[0]) + [total_examples]
self._set_shared(batches[0])
rval = self._func(*augmented)

for batch in batches[1:]:
    augmented = self._true_inputs(batch) + [total_examples]
    self._set_shared(batch)
    cur_out = self._func(*augmented)
    rval = [x + y for x, y in safe_zip(rval, cur_out)]

if len(rval) == 1:
    return rval[0]

return rval
'Temporary method to manage the deprecation'
def __new__(cls, filename, X=None, topo_view=None, y=None, load_all=False, cache_size=None, sources=None, spaces=None, aliases=None, use_h5py='auto', **kwargs):
if X is not None or topo_view is not None:
    warnings.warn('A dataset is using the old interface that is now '
                  'deprecated and will become officially unsupported as of '
                  'July 27, 2015. The dataset should use the new interface '
                  'that inherits from the dataset class instead of the '
                  'DenseDesignMatrix class. Please refer to '
                  'pylearn2.datasets.hdf5.py for more details on arguments '
                  'and details of the new interface.', DeprecationWarning)
    return HDF5DatasetDeprecated(filename, X, topo_view, y, load_all,
                                 cache_size, **kwargs)
else:
    return super(HDF5Dataset, cls).__new__(cls, filename, sources, spaces,
                                           aliases, load_all, cache_size,
                                           use_h5py, **kwargs)
'Class constructor'
def __init__(self, filename, sources, spaces, aliases=None, load_all=False, cache_size=None, use_h5py='auto', **kwargs):
assert isinstance(filename, string_types) assert isfile(filename), ('%s does not exist.' % filename) assert isinstance(sources, list) assert all([isinstance(el, string_types) for el in sources]) assert isinstance(spaces, list) assert all([isinstance(el, Space) for el in spaces]) assert (len(sources) == len(spaces)) if (aliases is not None): assert isinstance(aliases, list) assert all([isinstance(el, string_types) for el in aliases if (el is not None)]) assert (len(aliases) == len(sources)) assert isinstance(load_all, bool) assert ((cache_size is None) or isinstance(cache_size, py_integer_types)) assert (isinstance(use_h5py, bool) or (use_h5py == 'auto')) self.load_all = load_all self._aliases = (aliases if aliases else [None for _ in sources]) self._sources = sources self.spaces = alias_dict() for (i, (source, alias)) in enumerate(safe_zip(self._sources, self._aliases)): self.spaces[(source, alias)] = spaces[i] del spaces, aliases, sources if load_all: warnings.warn("You can load all the data in memory for speed, but DO NOT use modify all the dataset at once (e.g., reshape, transform, etc, ...) because your code will fail if at some point you won't have enough memory to store the dataset alltogheter. Use the iterator to reshape the data after you load it from the dataset.") datasetCache = cache.datasetCache filename = datasetCache.cache_file(filename) if (use_h5py == 'auto'): use_h5py = (True if (tables is None) else False) if use_h5py: if (h5py is None): raise RuntimeError('Could not import h5py.') if cache_size: propfaid = h5py.h5p.create(h5py.h5p.FILE_ACCESS) settings = list(propfaid.get_cache()) settings[2] = cache_size propfaid.set_cache(*settings) self._fhandler = h5py.File(h5py.h5f.open(filename, fapl=propfaid), mode='r') else: self._fhandler = h5py.File(filename, mode='r') else: if (tables is None): raise RuntimeError('Could not import tables.') self._fhandler = tables.openFile(filename, mode='r') self.data = self._read_hdf5(self._sources, self._aliases, load_all, use_h5py) assert (len(self.data) != 0), 'No dataset was loaded. Please make sure that sources is a list with at least one value and that the provided values are keys of the dataset you are trying to load.' super(HDF5Dataset, self).__init__(**kwargs)
'Loads elements from an HDF5 dataset using either h5py or tables. It can load either the whole object in memory or a reference to the object on disk, depending on the load_all parameter. Returns a list of objects. Parameters sources : list of str List of HDF5 keys corresponding to the data to be loaded. load_all : bool, optional (default False) If true, load dataset into memory. use_h5py: bool, optional (default True) If true uses h5py, else tables.'
def _read_hdf5(self, sources, aliases, load_all=False, use_h5py=True):
data = alias_dict()

if use_h5py:
    for s, a in safe_zip(sources, aliases):
        if load_all:
            data[(s, a)] = self._fhandler[s][:]
        else:
            data[(s, a)] = self._fhandler[s]
            data[s].ndim = len(data[s].shape)
else:
    for s, a in safe_zip(sources, aliases):
        if load_all:
            data[(s, a)] = self._fhandler.getNode('/', s)[:]
        else:
            data[(s, a)] = self._fhandler.getNode('/', s)

return data
'if data_specs is set to None, the aliases (or sources) and spaces provided when the dataset object has been created will be used.'
@wraps(Dataset.iterator, assigned=(), updated=(), append=True)
def iterator(self, mode=None, data_specs=None, batch_size=None,
             num_batches=None, rng=None, return_tuple=False, **kwargs):
if data_specs is None:
    data_specs = (self._get_sources, self._get_spaces)

[mode, batch_size, num_batches, rng, data_specs] = self._init_iterator(
    mode, batch_size, num_batches, rng, data_specs)

convert = None

return FiniteDatasetIterator(self,
                             mode(self.get_num_examples(), batch_size,
                                  num_batches, rng),
                             data_specs=data_specs,
                             return_tuple=return_tuple,
                             convert=convert)
'Returns the aliases (if defined, sources otherwise) provided when the HDF5 object was created Returns A string or a list of strings.'
def _get_sources(self):
return tuple([(alias if alias else source) for (alias, source) in safe_zip(self._aliases, self._sources)])
'Returns the Space(s) associated with the aliases (or sources) specified when the HDF5 object has been created. Returns A Space or a list of Spaces.'
def _get_spaces(self):
space = [self.spaces[s] for s in self._get_sources]
return space[0] if len(space) == 1 else tuple(space)
'Returns a tuple `(space, source)` for each one of the provided source_or_alias keys, if any. If no key is provided, it will use self.aliases, if not None, or self.sources.'
def get_data_specs(self, source_or_alias=None):
if source_or_alias is None:
    source_or_alias = self._get_sources()

if isinstance(source_or_alias, (list, tuple)):
    space = tuple([self.spaces[s] for s in source_or_alias])
    space = CompositeSpace(space)
else:
    space = self.spaces[source_or_alias]

return (space, source_or_alias)
'DEPRECATED Returns all the data, as it is internally stored. The definition and format of these data are described in `self.get_data_specs()`. Returns data : numpy matrix or 2-tuple of matrices The data'
def get_data(self):
return tuple([self.data[s] for s in self._get_sources()])
'Retrieves the requested elements from the dataset. Parameter sources : tuple A tuple of source identifiers indexes : slice or list A slice or a list of indexes Return rval : tuple A tuple of batches, one for each source'
def get(self, sources, indexes):
assert (isinstance(sources, (tuple, list)) and (len(sources) > 0)), 'sources should be an instance of tuple and not empty' assert all([isinstance(el, string_types) for el in sources]), 'sources elements should be strings' assert isinstance(indexes, (tuple, list, slice, py_integer_types)), 'indexes should be either an int, a slice or a tuple/list of ints' if isinstance(indexes, (tuple, list)): assert ((len(indexes) > 0) and all([isinstance(i, py_integer_types) for i in indexes])), 'indexes elements should be ints' rval = [] for s in sources: try: sdata = self.data[s] except ValueError as e: reraise_as(ValueError(('The requested source %s is not part of the dataset' % sources[s]), *e.args)) if (isinstance(indexes, (slice, py_integer_types)) or (len(indexes) == 1)): rval.append(sdata[indexes]) else: warnings.warn('Accessing non sequential elements of an HDF5 file will be at best VERY slow. Avoid using iteration schemes that access random/shuffled data with hdf5 datasets!!') val = [] [val.append(sdata[idx]) for idx in indexes] rval.append(val) return tuple(rval)
'Return the number of examples *OF THE FIRST SOURCE*. Note that this behavior will probably be deprecated in the future, returing a list of num_examples. Do not rely on this function unless unavoidable. Parameter source_or_alias : str, optional The source you want the number of examples of'
@wraps(Dataset.get_num_examples, assigned=(), updated=())
def get_num_examples(self, source_or_alias=None):
assert (source_or_alias is None or
        isinstance(source_or_alias, string_types))

if source_or_alias is None:
    alias = self._get_sources()
    alias = alias[0] if isinstance(alias, (list, tuple)) else alias
    data = self.data[alias]
else:
    data = self.data[source_or_alias]

return data.shape[0]
'Returns the item corresponding to a key or an alias. Parameter key_or_alias: any valid key for a dictionary A key or an alias.'
def __getitem__(self, key_or_alias):
assert isinstance(key_or_alias, string_types)

try:
    return super(alias_dict, self).__getitem__(key_or_alias)
except KeyError:
    return super(alias_dict, self).__getitem__(self.__a2k__[key_or_alias])
'Add an element to the dictionary Parameter keys: either a tuple `(key, alias)` or any valid key for a dictionary The key and optionally the alias of the new element. value: any input accepted as value by a dictionary The value of the new element. Notes You can add elements to the dictionary as follows: 1) my_dict[key] = value 2) my_dict[key, alias] = value'
def __setitem__(self, keys, value):
assert isinstance(keys, (list, tuple, string_types))

if isinstance(keys, (list, tuple)):
    assert all([el is None or isinstance(el, string_types) for el in keys])

if isinstance(keys, (list, tuple)):
    if keys[1] is not None:
        if (keys[0] in self.__a2k__ or
                keys[0] in super(alias_dict, self).keys()):
            raise Exception('The key is already used in the dictionary '
                            'either as key or alias')
        if (keys[1] in self.__a2k__ or
                keys[1] in super(alias_dict, self).keys()):
            raise Exception('The alias is already used in the dictionary '
                            'either as key or alias')
        self.__k2a__[keys[0]] = keys[1]
        self.__a2k__[keys[1]] = keys[0]
    keys = keys[0]

return super(alias_dict, self).__setitem__(keys, value)
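A small usage sketch of alias_dict, following the two assignment forms listed in the docstring above (the stored values are hypothetical, and this assumes the constructor initializes the internal key/alias maps):

d = alias_dict()
d['features', 'X'] = [[0.0, 1.0]]     # key 'features' with alias 'X'
d['targets'] = [0]                    # plain key, no alias

assert d['X'] is d['features']        # lookup works through key or alias
assert 'features' in d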
'Add an alias to a key of the dictionary that doesn\'t have already an alias. Parameter keys: any valid key for a dictionary A key of the dictionary. alias: any input accepted as key by a dictionary The alias.'
def set_alias(self, key, alias):
if alias is None:
    return

if key not in super(alias_dict, self).keys():
    raise NameError('The key is not in the dictionary')

if key in self.__k2a__ and alias != self.__k2a__[key]:
    raise NameError('The key is already associated to a different alias')

if ((alias in self.__a2k__ and key != self.__a2k__[alias]) or
        alias in super(alias_dict, self).keys()):
    raise Exception('The alias is already used in the dictionary either as '
                    'key or alias')

self.__k2a__[key] = alias
self.__a2k__[alias] = key
'Returns true if the key or alias is an element of the dictionary Parameter keys_or_alias: any valid key for a dictionary The key or the alias to look for.'
def __contains__(self, key_or_alias):
try:
    isalias = super(alias_dict, self).__contains__(self.__a2k__[key_or_alias])
except KeyError:
    isalias = False

return isalias or super(alias_dict, self).__contains__(key_or_alias)
'.. todo:: WRITEME'
def __init__(self, preprocessor=None):
self.class_names = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']

lines = iris_data.split('\n')

X = []
y = []

for line in lines:
    row = line.split(',')
    X.append([float(elem) for elem in row[:-1]])
    y.append(self.class_names.index(row[-1]))

X = np.array(X)
assert X.shape == (150, 4)
assert len(y) == 150

y = np.array([[y_i] for y_i in y])
assert min(y) == 0
assert max(y) == 2

super(Iris, self).__init__(X=X, y=y, y_labels=3, preprocessor=preprocessor)
'Reads the specified NORB dataset from a memmap cache. Creates this cache first, if necessary. Parameters which_norb : str Valid values: \'big\' or \'small\'. Chooses between the (big) \'NORB dataset\', and the \'Small NORB dataset\'. which_set : str Valid values: \'test\', \'train\', or \'both\'. Chooses between the testing set or the training set. If \'both\', the two datasets will be stacked together (testing data in the first N rows, then training data). image_dtype : str, or numpy.dtype The dtype to store image data as in the memmap cache. Default is uint8, which is what the original NORB files use.'
def __init__(self, which_norb, which_set, image_dtype='uint8'):
if (which_norb not in ('big', 'small')): raise ValueError(("Expected which_norb argument to be either 'big' or 'small', not '%s'" % str(which_norb))) if (which_set not in ('test', 'train', 'both')): raise ValueError(("Expected which_set argument to be either 'test' or 'train', not '%s'." % str(which_set))) image_dtype = numpy.dtype(image_dtype) self.label_index_to_name = ('category', 'instance', 'elevation', 'azimuth', 'lighting condition') if (which_norb == 'big'): self.label_index_to_name = (self.label_index_to_name + ('horizontal shift', 'vertical shift', 'lumination change', 'contrast', 'object scale', 'rotation')) self.label_name_to_index = {} for (index, name) in enumerate(self.label_index_to_name): self.label_name_to_index[name] = index self.label_to_value_funcs = (get_category_value, get_instance_value, get_elevation_value, get_azimuth_value, get_lighting_value) if (which_norb == 'big'): self.label_to_value_funcs = (self.label_to_value_funcs + (get_horizontal_shift_value, get_vertical_shift_value, get_lumination_change_value, get_contrast_change_value, get_scale_change_value, get_rotation_change_value)) image_length = (96 if (which_norb == 'small') else 108) def read_norb_files(norb_files, output): '\n Reads the contents of a list of norb files into a matrix.\n Data is assumed to be in row-major order.\n ' def read_norb_file(norb_file_path, debug=False): '\n Returns the numbers in a single NORB file as a 1-D ndarray.\n\n Parameters\n ----------\n\n norb_file_path : str\n A NORB file from which to read.\n Can be uncompressed (*.mat) or compressed (*.mat.gz).\n\n debug : bool\n Set to True if you want debug printfs.\n ' if (not (norb_file_path.endswith('.mat') or norb_file_path.endswith('.mat.gz'))): raise ValueError(("Expected norb_file_path to end in either '.mat' or '.mat.gz'. Instead got '%s'" % norb_file_path)) if (not os.path.isfile(norb_file_path)): raise IOError(("Could not find NORB file '%s' in expected directory '%s'." 
% reversed(os.path.split(norb_file_path)))) file_handle = (gzip.open(norb_file_path) if norb_file_path.endswith('.mat.gz') else open(norb_file_path)) def readNums(file_handle, num_type, count): '\n Reads some numbers from a file and returns them as a\n numpy.ndarray.\n\n Parameters\n ----------\n\n file_handle : file handle\n The file handle from which to read the numbers.\n\n num_type : str, numpy.dtype\n The dtype of the numbers.\n\n count : int\n Reads off this many numbers.\n ' num_bytes = (count * numpy.dtype(num_type).itemsize) string = file_handle.read(num_bytes) return numpy.fromstring(string, dtype=num_type) (elem_type, elem_size, _num_dims, shape, num_elems) = read_header(file_handle, debug) del _num_dims beginning = file_handle.tell() result = None if isinstance(file_handle, (gzip.GzipFile, bz2.BZ2File)): result = readNums(file_handle, elem_type, (num_elems * elem_size)).reshape(shape) else: result = numpy.fromfile(file_handle, dtype=elem_type, count=num_elems).reshape(shape) return result row_index = 0 for norb_file in norb_files: print(('copying NORB file %s' % os.path.split(norb_file)[1])) norb_data = read_norb_file(norb_file) norb_data = norb_data.reshape((-1), output.shape[1]) end_row = (row_index + norb_data.shape[0]) output[row_index:end_row, :] = norb_data row_index = end_row assert (end_row == output.shape[0]) if (which_norb == 'small'): training_set_size = 24300 testing_set_size = 24300 else: assert (which_norb == 'big') num_rows_per_file = 29160 training_set_size = (num_rows_per_file * 10) testing_set_size = (num_rows_per_file * 2) def load_images(which_norb, which_set, dtype): "\n Reads image data from memmap disk cache, if available. If not, then\n first builds the memmap file from the NORB files.\n\n Parameters\n ----------\n which_norb : str\n 'big' or 'small'.\n\n which_set : str\n 'test', 'train', or 'both'.\n\n dtype : numpy.dtype\n The dtype of the image memmap cache file. If a\n cache of this dtype doesn't exist, it will be created.\n " assert (type(dtype) == numpy.dtype) memmap_path = get_memmap_path(which_norb, ('images_%s' % str(dtype))) row_size = (2 * (image_length ** 2)) shape = ((training_set_size + testing_set_size), row_size) def make_memmap(): dat_files = get_norb_file_paths(which_norb, 'both', 'dat') memmap_dir = os.path.split(memmap_path)[0] if (not os.path.isdir(memmap_dir)): os.mkdir(memmap_dir) print(('Allocating memmap file %s' % memmap_path)) writeable_memmap = numpy.memmap(filename=memmap_path, dtype=dtype, mode='w+', shape=shape) read_norb_files(dat_files, writeable_memmap) if (not os.path.isfile(memmap_path)): print('Caching images to memmap file. This will only be done once.') make_memmap() images = numpy.memmap(filename=memmap_path, dtype=dtype, mode='r', shape=shape) if (which_set == 'train'): images = images[:training_set_size, :] elif (which_set == 'test'): images = images[training_set_size:, :] return images def load_labels(which_norb, which_set): '\n Reads label data (both category and info data) from memmap disk\n cache, if available. 
If not, then first builds the memmap file from\n the NORB files.\n ' memmap_path = get_memmap_path(which_norb, 'labels') dtype = numpy.dtype('int32') row_size = (5 if (which_norb == 'small') else 11) shape = ((training_set_size + testing_set_size), row_size) def make_memmap(): (cat_files, info_files) = [get_norb_file_paths(which_norb, 'both', x) for x in ('cat', 'info')] memmap_dir = os.path.split(memmap_path)[0] if (not os.path.isdir(memmap_dir)): os.mkdir(memmap_dir) print("allocating labels' memmap...") writeable_memmap = numpy.memmap(filename=memmap_path, dtype=dtype, mode='w+', shape=shape) print('... done.') cat_memmap = writeable_memmap[:, :1] info_memmap = writeable_memmap[:, 1:] for (norb_files, memmap) in safe_zip((cat_files, info_files), (cat_memmap, info_memmap)): read_norb_files(norb_files, memmap) if (not os.path.isfile(memmap_path)): print(('Caching images to memmap file %s.\nThis will only be done once.' % memmap_path)) make_memmap() labels = numpy.memmap(filename=memmap_path, dtype=dtype, mode='r', shape=shape) if (which_set == 'train'): labels = labels[:training_set_size, :] elif (which_set == 'test'): labels = labels[training_set_size:, :] return labels def get_norb_dir(which_norb): datasets_dir = os.getenv('PYLEARN2_DATA_PATH') if (datasets_dir is None): raise RuntimeError("Please set the 'PYLEARN2_DATA_PATH' environment variable to tell pylearn2 where the datasets are.") if (not os.path.isdir(datasets_dir)): raise IOError(("The PYLEARN2_DATA_PATH directory (%s) doesn't exist." % datasets_dir)) return os.path.join(datasets_dir, ('norb' if (which_norb == 'big') else 'norb_small')) norb_dir = get_norb_dir(which_norb) def get_memmap_path(which_norb, file_basename): assert (which_norb in ('big', 'small')) assert ((file_basename == 'labels') or file_basename.startswith('images')), file_basename memmap_dir = os.path.join(norb_dir, 'memmaps_of_original') return os.path.join(memmap_dir, ('%s.npy' % file_basename)) def get_norb_file_paths(which_norb, which_set, norb_file_type): "\n Returns a list of paths for a given norb file type.\n\n For example,\n\n get_norb_file_paths('big', 'test', 'cat')\n\n Will return the category label files ('cat') for the big NORB\n dataset's test set.\n " assert (which_set in ('train', 'test', 'both')) if (which_set == 'both'): return (get_norb_file_paths(which_norb, 'train', norb_file_type) + get_norb_file_paths(which_norb, 'test', norb_file_type)) norb_file_types = ('cat', 'dat', 'info') if (norb_file_type not in norb_file_types): raise ValueError(("Expected norb_file_type to be one of %s, but it was '%s'" % (str(norb_file_types), norb_file_type))) instance_list = ('01235' if (which_set == 'test') else '46789') if (which_norb == 'small'): templates = [('smallnorb-5x%sx9x18x6x2x96x96-%sing-%%s.mat' % (instance_list, which_set))] else: numbers = range(1, (3 if (which_set == 'test') else 11)) templates = [('norb-5x%sx9x18x6x2x108x108-%sing-%02d-%%s.mat' % (instance_list, which_set, n)) for n in numbers] original_files_dir = os.path.join(norb_dir, 'original') return [os.path.join(original_files_dir, (t % norb_file_type)) for t in templates] def make_view_converter(which_norb, which_set): image_length = (96 if (which_norb == 'small') else 108) datum_shape = (2, image_length, image_length, 1) axes = ('b', 's', 0, 1, 'c') return StereoViewConverter(datum_shape, axes) images = load_images(which_norb, which_set, image_dtype) labels = load_labels(which_norb, which_set) view_converter = make_view_converter(which_norb, which_set) super(NORB, 
self).__init__(X=images, y=labels, y_labels=(numpy.max(labels) + 1), view_converter=view_converter) self.X_memmap_info = None self.y_memmap_info = None
'Return a topological view. Parameters mat : ndarray A design matrix of images, one per row. single_tensor : bool If True, returns a single tensor. If False, returns separate tensors for the left and right stereo images. returns : ndarray, tuple If single_tensor is True, returns ndarray. Else, returns the tuple (left_images, right_images).'
@functools.wraps(DenseDesignMatrix.get_topological_view) def get_topological_view(self, mat=None, single_tensor=False):
result = super(NORB, self).get_topological_view(mat) if single_tensor: if ('s' not in self.view_converter.axes): raise ValueError(('self.view_converter.axes must contain "s" (stereo image index) in order to split the images into left and right images. Instead, the axes were %s.' % str(self.view_converter.axes))) assert isinstance(result, tuple) assert (len(result) == 2) axes = list(self.view_converter.axes) s_index = axes.index('s') assert (axes.index('b') == 0) num_image_pairs = result[0].shape[0] shape = ((num_image_pairs,) + self.view_converter.shape) mono_shape = ((shape[:s_index] + (1,)) + shape[(s_index + 1):]) result = tuple((t.reshape(mono_shape) for t in result)) result = numpy.concatenate(result, axis=s_index) return result
'Support method for pickling. Returns the complete state of this object as a dictionary, which is then pickled. This state does not include the memmaps\' contents. Rather, it includes enough info to find the memmap and re-load it from disk in the same state. Note that pickling a NORB will set its memmaps (self.X and self.y) to be read-only. This is to prevent the memmaps from accidentally being edited after the save. To make them writeable again, the user must explicitly call setflags(write=True) on the memmaps.'
def __getstate__(self):
_check_pickling_support() result = copy.copy(self.__dict__) assert isinstance(self.X, numpy.memmap), ('Expected X to be a memmap, but it was a %s.' % str(type(self.X))) assert isinstance(self.y, numpy.memmap), ('Expected y to be a memmap, but it was a %s.' % str(type(self.y))) del result['X'] del result['y'] def get_memmap_info(memmap): assert isinstance(memmap, numpy.memmap) if (not isinstance(memmap.filename, str)): raise ValueError(('Expected memmap.filename to be a str; instead got a %s, %s' % (type(memmap.filename), str(memmap.filename)))) result = {} def get_relative_path(full_path): '\n Returns the relative path to the PYLEARN2_DATA_PATH.\n ' data_dir = string_utils.preprocess('${PYLEARN2_DATA_PATH}') if (not memmap.filename.startswith(data_dir)): raise ValueError(('Expected memmap.filename to start with the PYLEARN2_DATA_PATH (%s). Instead it was %s.' % (data_dir, memmap.filename))) return os.path.relpath(full_path, data_dir) return {'filename': get_relative_path(memmap.filename), 'dtype': memmap.dtype, 'shape': memmap.shape, 'offset': memmap.offset, 'mode': ('r+' if (memmap.mode in ('r+', 'w+')) else 'r')} result['X_info'] = get_memmap_info(self.X) result['y_info'] = get_memmap_info(self.y) for memmap in (self.X, self.y): memmap.flush() memmap.setflags(write=False) return result
'Support method for unpickling. Takes a \'state\' dictionary and interprets it in order to set this object\'s fields.'
def __setstate__(self, state):
_check_pickling_support() X_info = state['X_info'] y_info = state['y_info'] del state['X_info'] del state['y_info'] self.__dict__.update(state) def load_memmap_from_info(info): data_dir = string_utils.preprocess('${PYLEARN2_DATA_PATH}') info['filename'] = os.path.join(data_dir, info['filename']) shape = info['shape'] offset = info['offset'] if (offset == 0): del info['offset'] return numpy.memmap(**info) else: del info['shape'] result = numpy.memmap(**info) return result.reshape(shape) self.X = load_memmap_from_info(X_info) self.y = load_memmap_from_info(y_info)
'The arguments describe how the data is laid out in the design matrix. Parameters shape : tuple A tuple of 4 ints, describing the shape of each datum. This is the size of each axis in <axes>, excluding the \'b\' axis. axes : tuple A tuple of the following elements in any order: \'b\' batch axis \'s\' stereo axis 0 image axis 0 (row) 1 image axis 1 (column) \'c\' channel axis'
def __init__(self, shape, axes=None):
shape = tuple(shape) if (not all((isinstance(s, int) for s in shape))): raise TypeError('Shape must be a tuple/list of ints') if (len(shape) != 4): raise ValueError(('Shape array needs to be of length 4, got %s.' % shape)) datum_axes = list(axes) datum_axes.remove('b') if (shape[datum_axes.index('s')] != 2): raise ValueError(("Expected 's' axis to have size 2, got %d.\n axes: %s\n shape: %s" % (shape[datum_axes.index('s')], axes, shape))) self.shape = shape self.set_axes(axes) def make_conv2d_space(shape, axes): shape_axes = list(axes) shape_axes.remove('b') image_shape = tuple((shape[shape_axes.index(axis)] for axis in (0, 1))) conv2d_axes = list(axes) conv2d_axes.remove('s') return Conv2DSpace(shape=image_shape, num_channels=shape[shape_axes.index('c')], axes=conv2d_axes, dtype=None) conv2d_space = make_conv2d_space(shape, axes) self.topo_space = CompositeSpace((conv2d_space, conv2d_space)) self.storage_space = VectorSpace(dim=numpy.prod(shape))
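A construction sketch for the stereo view converter described above; the image size is hypothetical, and the 's' (stereo) axis must have size 2:

converter = StereoViewConverter(shape=(2, 96, 96, 1),
                                axes=('b', 's', 0, 1, 'c'))

assert converter.view_shape() == (2, 96, 96, 1)
# each design-matrix row holds both images: 2 * 96 * 96 * 1 values
assert converter.storage_space.dim == 2 * 96 * 96 * 1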
'Returns a batch formatted to a space. Parameters batch : ndarray The batch to format space : a pylearn2.space.Space The target space to format to.'
def get_formatted_batch(self, batch, space):
return self.storage_space.np_format_as(batch, space)
'Called by DenseDesignMatrix.get_formatted_view(), get_batch_topo() Parameters design_mat : ndarray'
def design_mat_to_topo_view(self, design_mat):
return self.storage_space.np_format_as(design_mat, self.topo_space)
'Called by DenseDesignMatrix.get_weights_view() Parameters design_mat : ndarray'
def design_mat_to_weights_view(self, design_mat):
return self.design_mat_to_topo_view(design_mat)
'Used by DenseDesignMatrix.set_topological_view(), .get_design_mat() Parameters topo_batch : ndarray'
def topo_view_to_design_mat(self, topo_batch):
return self.topo_space.np_format_as(topo_batch, self.storage_space)
'TODO: write documentation.'
def view_shape(self):
return self.shape
'TODO: write documentation.'
def weights_view_shape(self):
return self.view_shape()
'Change the order of the axes. Parameters axes : tuple Must have length 5, must contain \'b\', \'s\', 0, 1, \'c\'.'
def set_axes(self, axes):
axes = tuple(axes) if (len(axes) != 5): raise ValueError(('Axes must have 5 elements; got %s' % str(axes))) for required_axis in ('b', 's', 0, 1, 'c'): if (required_axis not in axes): raise ValueError(("Axes must contain 'b', 's', 0, 1, and 'c'. Got %s." % str(axes))) if (axes.index('b') != 0): raise ValueError(("The 'b' axis must come first (axes = %s)." % str(axes))) def remove_b_axis(axes): axes = list(axes) axes.remove('b') return tuple(axes) if hasattr(self, 'axes'): assert hasattr(self, 'shape') old_axes = remove_b_axis(self.axes) new_axes = remove_b_axis(axes) new_shape = tuple((self.shape[old_axes.index(a)] for a in new_axes)) self.shape = new_shape self.axes = axes