Columns: desc (string, lengths 3 to 26.7k) | decl (string, lengths 11 to 7.89k) | bodies (string, lengths 8 to 553k)
'Reconstruction of input x. Parameters inputs : tuple Tuple (length 2) of Theano symbolics representing the input minibatch(es) to be encoded. Assumed to be 2-tensors, with the first dimension indexing training examples and the second indexing the two data dimensions (X, Y).'
def reconstructX(self, inputs):
    if self.act_dec is None:
        act_dec = lambda x: x
    else:
        act_dec = self.act_dec
    return act_dec(self.decodeX(inputs))
'Reconstruction of input y. Parameters inputs : tuple Tuple (length 2) of Theano symbolics representing the input minibatch(es) to be encoded. Assumed to be 2-tensors, with the first dimension indexing training examples and the second indexing the two data dimensions (X, Y).'
def reconstructY(self, inputs):
    if self.act_dec is None:
        act_dec = lambda x: x
    else:
        act_dec = self.act_dec
    return act_dec(self.decodeY(inputs))
'Reconstruction of both datasets. Parameters inputs : tuple Tuple (length 2) of Theano symbolics representing the input minibatch(es) to be encoded. Assumed to be 2-tensors, with the first dimension indexing training examples and the second indexing the two data dimensions (X, Y). Returns Reconstruction : tuple Tuple (length 2) of the tensor_like reconstructions of the datasets.'
def reconstructXY(self, inputs):
return (self.reconstructX(inputs), self.reconstructY(inputs))
'This just aliases the `_hidden_activation()` function for syntactic sugar/convenience.'
def __call__(self, inputs):
return self._hidden_activation(inputs)
'The filters have to be normalized in each update to increase their stability.'
@wraps(GatedAutoencoder._modify_updates)
def _modify_updates(self, updates):
    wxf = self.wxf
    wyf = self.wyf
    wxf_updated = updates[wxf]
    wyf_updated = updates[wyf]
    nwxf = (wxf_updated.std(0) + SMALL)[numpy.newaxis, :]
    nwyf = (wyf_updated.std(0) + SMALL)[numpy.newaxis, :]
    meannxf = nwxf.mean()
    meannyf = nwyf.mean()
    centered_wxf = wxf_updated - wxf_updated.mean(0)
    centered_wyf = wyf_updated - wyf_updated.mean(0)
    wxf_updated = centered_wxf * (meannxf / nwxf)
    wyf_updated = centered_wyf * (meannyf / nwyf)
    updates[wxf] = wxf_updated
    updates[wyf] = wyf_updated
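For orientation, a minimal NumPy sketch of the column-wise renormalization applied above; the helper function and the SMALL value are illustrative assumptions, not part of the model:

import numpy as np

SMALL = 1e-4  # assumed value; the module-level constant is not shown here

def renormalize_filters(w):
    # each filter column is mean-centred, then rescaled so every column's
    # (std + SMALL) matches the average scale across columns
    n = w.std(axis=0) + SMALL
    centered = w - w.mean(axis=0)
    return centered * (n.mean() / n)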
'Returns a topological view of the weights; the first half corresponds to wxf and the second half to wyf. Returns weights : ndarray Same as the return value of `get_weights` but formatted as a 4D tensor with the axes being (hidden/factor units, rows, columns, channels). The number of channels is either 1 or 3 (because they will be visualized as grayscale or RGB color). At the moment the function only supports factors whose sqrt is exact.'
def get_weights_topo(self):
    if ((not isinstance(self.input_space.components[0], Conv2DSpace)) or
            (not isinstance(self.input_space.components[1], Conv2DSpace))):
        raise NotImplementedError()
    wxf = self.wxf.get_value(borrow=False).T
    wyf = self.wyf.get_value(borrow=False).T
    convx = self.input_space.components[0]
    convy = self.input_space.components[1]
    vecx = VectorSpace(self.nvisx)
    vecy = VectorSpace(self.nvisy)
    wxf_view = vecx.np_format_as(
        wxf,
        Conv2DSpace(convx.shape, num_channels=convx.num_channels,
                    axes=('b', 0, 1, 'c')))
    wyf_view = vecy.np_format_as(
        wyf,
        Conv2DSpace(convy.shape, num_channels=convy.num_channels,
                    axes=('b', 0, 1, 'c')))
    h = int(numpy.ceil(numpy.sqrt(self.nfac)))
    new_weights = numpy.zeros((wxf_view.shape[0] * 2, wxf_view.shape[1],
                               wxf_view.shape[2], wxf_view.shape[3]),
                              dtype=wxf_view.dtype)
    t = 0
    while t < (self.nfac // h):
        filter_pair = numpy.concatenate((wxf_view[h * t:h * (t + 1), ...],
                                         wyf_view[h * t:h * (t + 1), ...]), 0)
        new_weights[(h * 2) * t:(h * 2) * (t + 1), ...] = filter_pair
        t += 1
    return new_weights
'Reconstruction of both datasets. Parameters inputs : tuple Tuple (length 2) of Theano symbolics representing the input minibatch(es) to be encoded. Assumed to be 2-tensors, with the first dimension indexing training examples and the second indexing the two data dimensions (X, Y). Returns Reconstruction : tuple Tuple (length 2) of the tensor_like reconstructions of the datasets. Notes Reconstructions from corrupted data.'
def reconstructXY(self, inputs):
    corrupted = self.corruptor(inputs)
    return (self.reconstructX(corrupted), self.reconstructY(corrupted))
'Method that returns the reconstruction without noise. Parameters inputs : tuple Tuple (length 2) of Theano symbolics representing the input minibatch(es) to be encoded. Assumed to be 2-tensors, with the first dimension indexing training examples and the second indexing the two data dimensions (X, Y). Returns Reconstruction : tuple Tuple (length 2) of the tensor_like reconstructions of the datasets.'
def reconstructXY_NoiseFree(self, inputs):
return (self.reconstructX(inputs), self.reconstructY(inputs))
'.. todo:: WRITEME'
def redo_everything(self):
    self.beta = sharedX(np.ones((self.nvis,)) * self.init_beta, 'beta')
    self.mu = sharedX(np.ones((self.nvis,)) * self.init_mu, 'mu')
    self.redo_theano()
'.. todo:: WRITEME'
def free_energy(self, X):
    diff = X - self.mu
    sq = T.sqr(diff)
    return 0.5 * T.dot(sq, self.beta)
'.. todo:: WRITEME'
def log_prob(self, X):
return ((- self.free_energy(X)) - self.log_partition_function())
'.. todo:: WRITEME'
def log_partition_function(self):
return (((float(self.nvis) / 2.0) * np.log((2 * np.pi))) - (0.5 * T.sum(T.log(self.beta))))
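free_energy and log_partition_function above together define a diagonal Gaussian with mean mu and precisions beta. A standalone NumPy/SciPy sanity check with illustrative values (not the model's shared variables):

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.RandomState(0)
nvis = 4
mu = rng.randn(nvis)
beta = rng.rand(nvis) + 0.5                      # precisions = 1 / variance
x = rng.randn(nvis)

free_energy = 0.5 * np.dot((x - mu) ** 2, beta)
log_Z = 0.5 * nvis * np.log(2 * np.pi) - 0.5 * np.log(beta).sum()
log_prob = -free_energy - log_Z                  # as in log_prob() above

reference = multivariate_normal(mean=mu, cov=np.diag(1.0 / beta)).logpdf(x)
assert np.allclose(log_prob, reference)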
'.. todo:: WRITEME'
def redo_theano(self):
    init_names = dir(self)
    self.censored_updates = {}
    for param in self.get_params():
        self.censored_updates[param] = set([])
    final_names = dir(self)
    self.register_names_to_del([name for name in final_names
                                if name not in init_names])
'.. todo:: WRITEME'
def _modify_updates(self, updates):
    if ((self.beta in updates) and
            (updates[self.beta] not in self.censored_updates[self.beta])):
        updates[self.beta] = T.clip(updates[self.beta],
                                    self.min_beta, self.max_beta)
    params = self.get_params()
    for param in updates:
        if param in params:
            self.censored_updates[param] = \
                self.censored_updates[param].union(set([updates[param]]))
'.. todo:: WRITEME'
def get_params(self):
return [self.mu, self.beta]
'Returns the log probability of a batch of examples. Parameters X : WRITEME The examples whose log probability should be computed. Returns log_prob : WRITEME The log probability of the examples.'
def log_prob(self, X):
return ((- self.ebm.free_energy(X)) - (self.logZ_driver * self.logZ_lr_scale))
'Returns the free energy of a batch of examples. Parameters X : WRITEME The examples whose free energy should be computed. Returns free_energy : WRITEME The free energy of the examples.'
def free_energy(self, X):
return self.ebm.free_energy(X)
'Returns a SufficientStatistics .. todo:: WRITEME properly Parameters needed_stats : WRITEME a set of string names of the statistics to include V : WRITEME a num_examples x nvis matrix of input examples H_hat : WRITEME a num_examples x nhid matrix of \hat{h} variational parameters S_hat : WRITEME variational parameters for expectation of s given h=1 var_s0_hat : WRITEME variational parameters for variance of s given h=0 (only a vector of length nhid, since this is the same for all inputs) var_s1_hat : WRITEME variational parameters for variance of s given h=1 (again, a vector of length nhid)'
@classmethod
def from_observations(cls, needed_stats, V, H_hat, S_hat, var_s0_hat,
                      var_s1_hat):
m = T.cast(V.shape[0], config.floatX) H_name = make_name(H_hat, 'anon_H_hat') S_name = make_name(S_hat, 'anon_S_hat') assert (H_hat.dtype == config.floatX) mean_h = T.mean(H_hat, axis=0) assert (H_hat.dtype == mean_h.dtype) assert (mean_h.dtype == config.floatX) mean_h.name = (('mean_h(' + H_name) + ')') mean_v = T.mean(V, axis=0) mean_sq_v = T.mean(T.sqr(V), axis=0) mean_s1 = T.mean(S_hat, axis=0) mean_sq_S = ((H_hat * (var_s1_hat + T.sqr(S_hat))) + ((1.0 - H_hat) * var_s0_hat)) mean_sq_s = T.mean(mean_sq_S, axis=0) mean_HS = (H_hat * S_hat) mean_hs = T.mean(mean_HS, axis=0) mean_hs.name = ('mean_hs(%s,%s)' % (H_name, S_name)) mean_s = mean_hs mean_D_sq_mean_Q_hs = T.mean(T.sqr(mean_HS), axis=0) mean_sq_HS = (H_hat * (var_s1_hat + T.sqr(S_hat))) mean_sq_hs = T.mean(mean_sq_HS, axis=0) mean_sq_hs.name = ('mean_sq_hs(%s,%s)' % (H_name, S_name)) mean_sq_mean_hs = T.mean(T.sqr(mean_HS), axis=0) mean_sq_mean_hs.name = ('mean_sq_mean_hs(%s,%s)' % (H_name, S_name)) sum_hsv = T.dot(mean_HS.T, V) sum_hsv.name = 'sum_hsv<from_observations>' mean_hsv = (sum_hsv / m) d = {'mean_h': mean_h, 'mean_v': mean_v, 'mean_sq_v': mean_sq_v, 'mean_s': mean_s, 'mean_s1': mean_s1, 'mean_sq_s': mean_sq_s, 'mean_hs': mean_hs, 'mean_sq_hs': mean_sq_hs, 'mean_sq_mean_hs': mean_sq_mean_hs, 'mean_hsv': mean_hsv} final_d = {} for stat in needed_stats: final_d[stat] = d[stat] final_d[stat].name = ('observed_' + stat) return SufficientStatistics(final_d)
'.. todo:: WRITEME'
def reset_rng(self):
self.rng = make_np_rng(self.seed, [1, 2, 3], which_method='uniform')
'.. todo:: WRITEME'
def redo_everything(self):
if (self.init_W is not None): W = self.init_W.copy() else: W = self.rng.uniform((- self.irange), self.irange, (self.nvis, self.nhid)) if (self.constrain_W_norm or self.init_unit_W): norms = numpy_norms(W) W /= norms self.W = sharedX(W, name='W') self.bias_hid = sharedX((np.zeros(self.nhid) + self.init_bias_hid), name='bias_hid') self.alpha = sharedX((np.zeros(self.nhid) + self.init_alpha), name='alpha') self.mu = sharedX((np.zeros(self.nhid) + self.init_mu), name='mu') if self.tied_B: self.B_driver = sharedX((0.0 + self.init_B), name='B') else: self.B_driver = sharedX((np.zeros(self.nvis) + self.init_B), name='B') if self.recycle_q: self.prev_H = sharedX(np.zeros((self.recycle_q, self.nhid)), name='prev_H') self.prev_S = sharedX(np.zeros((self.recycle_q, self.nhid)), name='prev_S') if self.debug_m_step: warnings.warn('M step debugging activated-- this is only valid for certain settings, and causes a performance slowdown.') self.energy_functional_diff = sharedX(0.0) if (self.momentum_saturation_example is not None): self.params_to_incs = {} for param in self.get_params(): self.params_to_incs[param] = sharedX(np.zeros(param.get_value().shape), name=(param.name + '_inc')) self.momentum = sharedX(self.init_momentum, name='momentum') if self.monitor_norms: self.debug_norms = sharedX(np.zeros(self.nhid)) self.redo_theano()
'.. todo:: WRITEME'
@classmethod
def energy_functional_needed_stats(cls):
return S3C.expected_log_prob_vhs_needed_stats()
'.. todo:: WRITEME Returns the energy functional for a single batch of data. `stats` is assumed to be computed from, and only from, the same data points that yielded H.'
def energy_functional(self, H_hat, S_hat, var_s0_hat, var_s1_hat, stats):
    entropy_term = self.entropy_hs(H_hat=H_hat, var_s0_hat=var_s0_hat,
                                   var_s1_hat=var_s1_hat).mean()
    likelihood_term = self.expected_log_prob_vhs(stats, H_hat=H_hat,
                                                 S_hat=S_hat)
    energy_functional = likelihood_term + entropy_term
    assert len(energy_functional.type.broadcastable) == 0
    return energy_functional
'.. todo:: WRITEME Returns the energy functional for a single batch of data. `stats` is assumed to be computed from, and only from, the same data points that yielded H.'
def energy_functional_batch(self, V, H_hat, S_hat, var_s0_hat, var_s1_hat):
    entropy_term = self.entropy_hs(H_hat=H_hat, var_s0_hat=var_s0_hat,
                                   var_s1_hat=var_s1_hat)
    assert len(entropy_term.type.broadcastable) == 1
    likelihood_term = self.expected_log_prob_vhs_batch(
        V=V, H_hat=H_hat, S_hat=S_hat,
        var_s0_hat=var_s0_hat, var_s1_hat=var_s1_hat)
    assert len(likelihood_term.type.broadcastable) == 1
    energy_functional = likelihood_term + entropy_term
    assert len(energy_functional.type.broadcastable) == 1
    return energy_functional
'.. todo:: WRITEME'
def set_monitoring_channel_prefix(self, prefix):
self.monitoring_channel_prefix = prefix
'.. todo:: WRITEME'
def get_monitoring_channels(self, data):
(space, source) = self.get_monitoring_data_specs() space.validate(data) V = data try: self.compile_mode() if (self.m_step != None): rval = self.m_step.get_monitoring_channels(V, self) else: rval = {} if (self.momentum_saturation_example is not None): rval['momentum'] = self.momentum from_e_step = self.e_step.get_monitoring_channels(V) rval.update(from_e_step) if self.debug_m_step: rval['m_step_diff'] = self.energy_functional_diff monitor_stats = (len(self.monitor_stats) > 0) if (monitor_stats or self.monitor_functional): obs = self.infer(V) needed_stats = set(self.monitor_stats) if self.monitor_functional: needed_stats = needed_stats.union(S3C.expected_log_prob_vhs_needed_stats()) stats = SufficientStatistics.from_observations(needed_stats=needed_stats, V=V, **obs) H_hat = obs['H_hat'] S_hat = obs['S_hat'] var_s0_hat = obs['var_s0_hat'] var_s1_hat = obs['var_s1_hat'] if self.monitor_functional: energy_functional = self.energy_functional(H_hat=H_hat, S_hat=S_hat, var_s0_hat=var_s0_hat, var_s1_hat=var_s1_hat, stats=stats) rval['energy_functional'] = energy_functional if monitor_stats: for stat in self.monitor_stats: stat_val = stats.d[stat] rval[(stat + '_min')] = T.min(stat_val) rval[(stat + '_mean')] = T.mean(stat_val) rval[(stat + '_max')] = T.max(stat_val) if (len(self.monitor_params) > 0): for param in self.monitor_params: param_val = getattr(self, param) rval[(param + '_min')] = full_min(param_val) rval[(param + '_mean')] = T.mean(param_val) mx = full_max(param_val) assert (len(mx.type.broadcastable) == 0) rval[(param + '_max')] = mx if (param == 'mu'): abs_mu = abs(self.mu) rval['mu_abs_min'] = full_min(abs_mu) rval['mu_abs_mean'] = T.mean(abs_mu) rval['mu_abs_max'] = full_max(abs_mu) if (param == 'W'): norms = theano_norms(self.W) rval['W_norm_min'] = full_min(norms) rval['W_norm_mean'] = T.mean(norms) rval['W_norm_max'] = T.max(norms) if self.monitor_norms: rval['post_solve_norms_min'] = T.min(self.debug_norms) rval['post_solve_norms_max'] = T.max(self.debug_norms) rval['post_solve_norms_mean'] = T.mean(self.debug_norms) new_rval = {} for key in rval: new_rval[(self.monitoring_channel_prefix + key)] = rval[key] rval = new_rval return rval finally: self.deploy_mode()
'Get the data_specs describing the data for get_monitoring_channels. This implementation returns a specification corresponding to unlabeled inputs. WRITEME: Returns section'
def get_monitoring_data_specs(self):
return (self.get_input_space(), self.get_input_source())
'.. todo:: WRITEME This is the symbolic transformation for the Block class'
def __call__(self, V):
    if not hasattr(self, 'w'):
        self.make_pseudoparams()
    obs = self.infer(V)
    return obs['H_hat']
'If any shared variables need to have batch-size dependent sizes, sets them all to the sizes used for interactive debugging during graph construction'
def compile_mode(self):
    if self.recycle_q:
        self.prev_H.set_value(np.cast[self.prev_H.dtype](
            np.zeros((self._test_batch_size, self.nhid)) +
            (1.0 / (1.0 + np.exp(-self.bias_hid.get_value())))))
        self.prev_S.set_value(np.cast[self.prev_S.dtype](
            np.zeros((self._test_batch_size, self.nhid)) +
            self.mu.get_value()))
'If any shared variables need to have batch-size dependent sizes, sets them all to their runtime sizes'
def deploy_mode(self):
    if self.recycle_q:
        self.prev_H.set_value(np.cast[self.prev_H.dtype](
            np.zeros((self.recycle_q, self.nhid)) +
            (1.0 / (1.0 + np.exp(-self.bias_hid.get_value())))))
        self.prev_S.set_value(np.cast[self.prev_S.dtype](
            np.zeros((self.recycle_q, self.nhid)) +
            self.mu.get_value()))
'.. todo:: WRITEME'
def get_params(self):
return [self.W, self.bias_hid, self.alpha, self.mu, self.B_driver]
'.. todo:: WRITEME H MUST be binary'
def energy_vhs(self, V, H, S):
    h_term = -T.dot(H, self.bias_hid)
    assert len(h_term.type.broadcastable) == 1
    s_term_1 = T.dot(T.sqr(S), self.alpha) / 2.0
    s_term_2 = -T.dot((S * self.mu) * H, self.alpha)
    s_term_3 = T.dot(T.sqr(self.mu) * H, self.alpha) / 2.0
    s_term = s_term_1 + s_term_2 + s_term_3
    assert len(s_term.type.broadcastable) == 1
    recons = T.dot(H * S, self.W.T)
    v_term_1 = T.dot(T.sqr(V), self.B) / 2.0
    v_term_2 = T.dot((-V) * recons, self.B)
    v_term_3 = T.dot(T.sqr(recons), self.B) / 2.0
    v_term = v_term_1 + v_term_2 + v_term_3
    assert len(v_term.type.broadcastable) == 1
    rval = h_term + s_term + v_term
    assert len(rval.type.broadcastable) == 1
    return rval
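The three s-terms above reduce, for binary h, to a per-unit Gaussian penalty on s centred at mu * h. A standalone NumPy check with illustrative values (not part of the model):

import numpy as np

rng = np.random.RandomState(0)
nhid = 6
alpha = rng.rand(nhid) + 0.1
mu = rng.randn(nhid)
h = (rng.rand(nhid) > 0.5).astype('float64')     # h must be binary
s = rng.randn(nhid)

three_terms = (np.dot(np.square(s), alpha) / 2.0
               - np.dot(s * mu * h, alpha)
               + np.dot(np.square(mu) * h, alpha) / 2.0)
closed_form = 0.5 * np.dot(alpha, np.square(s - mu * h))
assert np.allclose(three_terms, closed_form)     # relies on h**2 == h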
'.. todo:: WRITEME This is not the same as negative expected log prob, which includes the constant term for the log partition function'
def expected_energy_vhs(self, V, H_hat, S_hat, var_s0_hat, var_s1_hat):
var_HS = ((H_hat * var_s1_hat) + ((1.0 - H_hat) * var_s0_hat)) half = as_floatX(0.5) HS = (H_hat * S_hat) sq_HS = (H_hat * (var_s1_hat + T.sqr(S_hat))) sq_S = (sq_HS + ((1.0 - H_hat) * var_s0_hat)) presign = T.dot(H_hat, self.bias_hid) presign.name = 'presign' h_term = (- presign) assert (len(h_term.type.broadcastable) == 1) precoeff = T.dot(sq_S, self.alpha) precoeff.name = 'precoeff' s_term_1 = (half * precoeff) assert (len(s_term_1.type.broadcastable) == 1) presign2 = T.dot(HS, (self.alpha * self.mu)) presign2.name = 'presign2' s_term_2 = (- presign2) assert (len(s_term_2.type.broadcastable) == 1) s_term_3 = (half * T.dot(H_hat, (T.sqr(self.mu) * self.alpha))) assert (len(s_term_3.type.broadcastable) == 1) s_term = ((s_term_1 + s_term_2) + s_term_3) v_term_1 = (half * T.dot(T.sqr(V), self.B)) assert (len(v_term_1.type.broadcastable) == 1) term6_factor1 = (V * self.B) term6_factor2 = T.dot(HS, self.W.T) v_term_2 = (- (term6_factor1 * term6_factor2).sum(axis=1)) assert (len(v_term_2.type.broadcastable) == 1) term7_subterm1 = T.dot(T.sqr(T.dot(HS, self.W.T)), self.B) assert (len(term7_subterm1.type.broadcastable) == 1) term7_subterm2 = (- T.dot(T.dot(T.sqr(HS), T.sqr(self.W.T)), self.B)) term7_subterm3 = T.dot(T.dot(sq_HS, T.sqr(self.W.T)), self.B) v_term_3 = (half * ((term7_subterm1 + term7_subterm2) + term7_subterm3)) assert (len(v_term_3.type.broadcastable) == 1) v_term = ((v_term_1 + v_term_2) + v_term_3) rval = ((h_term + s_term) + v_term) return rval
'.. todo:: WRITEME'
def entropy_h(self, H_hat):
    for H_hat_v in get_debug_values(H_hat):
        assert H_hat_v.min() >= 0.0
        assert H_hat_v.max() <= 1.0
    return entropy_binary_vector(H_hat)
'.. todo:: WRITEME'
def entropy_hs(self, H_hat, var_s0_hat, var_s1_hat):
    half = as_floatX(0.5)
    one = as_floatX(1.0)
    two = as_floatX(2.0)
    pi = as_floatX(np.pi)
    for H_hat_v in get_debug_values(H_hat):
        assert H_hat_v.min() >= 0.0
        assert H_hat_v.max() <= 1.0
    term1_plus_term2 = self.entropy_h(H_hat)
    assert len(term1_plus_term2.type.broadcastable) == 1
    term3 = T.sum(H_hat * (half * (T.log(var_s1_hat) + T.log(two * pi) + one)),
                  axis=1)
    assert len(term3.type.broadcastable) == 1
    term4 = T.dot(1.0 - H_hat,
                  half * (T.log(var_s0_hat) + T.log(two * pi) + one))
    assert len(term4.type.broadcastable) == 1
    for (t12, t3, t4) in get_debug_values(term1_plus_term2, term3, term4):
        debug_assert(not contains_nan(t12))
        debug_assert(not contains_nan(t3))
        debug_assert(not contains_nan(t4))
    rval = term1_plus_term2 + term3 + term4
    assert len(rval.type.broadcastable) == 1
    return rval
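The per-unit Gaussian entropy used in term3 and term4 above is 0.5 * (log var + log(2*pi) + 1). A standalone Monte Carlo check with illustrative values:

import numpy as np

rng = np.random.RandomState(0)
var = 0.7
closed_form = 0.5 * (np.log(var) + np.log(2 * np.pi) + 1.0)

samples = rng.normal(0.0, np.sqrt(var), size=1000000)
log_q = -0.5 * np.log(2 * np.pi * var) - 0.5 * np.square(samples) / var
assert abs(closed_form - (-log_q.mean())) < 1e-2    # entropy = -E[log q]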
'.. todo:: WRITEME'
def infer(self, V, return_history=False):
return self.e_step.infer(V, return_history)
'WRITEME Parameters V : tensor_like A symbolic design matrix WRITEME: Returns section'
def make_learn_func(self, V):
hidden_obs = self.infer(V) stats = SufficientStatistics.from_observations(needed_stats=self.m_step.needed_stats(), V=V, **hidden_obs) H_hat = hidden_obs['H_hat'] S_hat = hidden_obs['S_hat'] learning_updates = self.m_step.get_updates(self, stats, H_hat, S_hat) if self.recycle_q: learning_updates[self.prev_H] = H_hat learning_updates[self.prev_S] = S_hat self.modify_updates(learning_updates) if self.debug_m_step: energy_functional_before = self.energy_functional(H=hidden_obs['H'], var_s0_hat=hidden_obs['var_s0_hat'], var_s1_hat=hidden_obs['var_s1_hat'], stats=stats) tmp_bias_hid = self.bias_hid tmp_mu = self.mu tmp_alpha = self.alpha tmp_W = self.W tmp_B_driver = self.B_driver self.bias_hid = learning_updates[self.bias_hid] self.mu = learning_updates[self.mu] self.alpha = learning_updates[self.alpha] if (self.W in learning_updates): self.W = learning_updates[self.W] self.B_driver = learning_updates[self.B_driver] self.make_pseudoparams() try: energy_functional_after = self.energy_functional(H_hat=hidden_obs['H_hat'], var_s0_hat=hidden_obs['var_s0_hat'], var_s1_hat=hidden_obs['var_s1_hat'], stats=stats) finally: self.bias_hid = tmp_bias_hid self.mu = tmp_mu self.alpha = tmp_alpha self.W = tmp_W self.B_driver = tmp_B_driver self.make_pseudoparams() energy_functional_diff = (energy_functional_after - energy_functional_before) learning_updates[self.energy_functional_diff] = energy_functional_diff logger.info('compiling s3c learning function...') t1 = time.time() rval = function([V], updates=learning_updates) t2 = time.time() logger.debug('... compilation took {0} seconds'.format((t2 - t1))) logger.debug('graph size: {0}'.format(len(rval.maker.fgraph.toposort()))) return rval
'.. todo:: WRITEME'
def _modify_updates(self, updates):
assert (self.bias_hid in self.censored_updates) def should_censor(param): return ((param in updates) and (updates[param] not in self.censored_updates[param])) if should_censor(self.W): if self.disable_W_update: del updates[self.W] elif self.constrain_W_norm: norms = theano_norms(updates[self.W]) updates[self.W] /= norms.dimshuffle('x', 0) if should_censor(self.alpha): updates[self.alpha] = T.clip(updates[self.alpha], self.min_alpha, self.max_alpha) if should_censor(self.mu): updates[self.mu] = T.clip(updates[self.mu], self.min_mu, self.max_mu) if should_censor(self.B_driver): updates[self.B_driver] = T.clip(updates[self.B_driver], self.min_B, self.max_B) if should_censor(self.bias_hid): updates[self.bias_hid] = T.clip(updates[self.bias_hid], self.min_bias_hid, self.max_bias_hid) model_params = self.get_params() for param in updates: if (param in model_params): self.censored_updates[param] = self.censored_updates[param].union(set([updates[param]]))
'.. todo:: WRITEME Parameters H_sample : optional A matrix of values of H. If none is provided, one is sampled from the prior. (H_sample is used if you want to see what samples due to specific hidden units look like, or when sampling from a larger model that S3C is part of.)'
def random_design_matrix(self, batch_size, theano_rng=None, H_sample=None, S_sample=None, full_sample=True, return_all=False):
if (theano_rng is None): assert (H_sample is not None) assert (S_sample is not None) assert (full_sample == False) if (not hasattr(self, 'p')): self.make_pseudoparams() hid_shape = (batch_size, self.nhid) if (H_sample is None): H_sample = theano_rng.binomial(size=hid_shape, n=1, p=self.p, dtype=self.W.dtype) assert (H_sample.dtype == 'float32') if hasattr(H_sample, '__array__'): assert (len(H_sample.shape) == 2) else: assert (len(H_sample.type.broadcastable) == 2) if (S_sample is None): S_sample = theano_rng.normal(size=hid_shape, avg=self.mu, std=T.sqrt((1.0 / self.alpha))) assert (S_sample.dtype == 'float32') final_hs_sample = (H_sample * S_sample) assert (len(final_hs_sample.type.broadcastable) == 2) V_mean = T.dot(final_hs_sample, self.W.T) if (not full_sample): warnings.warn('showing conditional means (given sampled h and s) on visible units rather than true samples') if return_all: return (H_sample, S_sample, V_mean) return V_mean V_sample = theano_rng.normal(size=V_mean.shape, avg=V_mean, std=T.sqrt((1.0 / self.B))) assert (V_sample.dtype == 'float32') if return_all: assert (H_sample is not None) assert (S_sample is not None) assert (V_sample is not None) return (H_sample, S_sample, V_sample) return V_sample
'.. todo:: WRITEME'
@classmethod
def expected_log_prob_vhs_needed_stats(cls):
    h = S3C.expected_log_prob_h_needed_stats()
    s = S3C.expected_log_prob_s_given_h_needed_stats()
    v = S3C.expected_log_prob_v_given_hs_needed_stats()
    union = h.union(s).union(v)
    return union
'.. todo:: WRITEME'
def expected_log_prob_vhs(self, stats, H_hat, S_hat):
    expected_log_prob_v_given_hs = self.expected_log_prob_v_given_hs(
        stats, H_hat=H_hat, S_hat=S_hat)
    expected_log_prob_s_given_h = self.expected_log_prob_s_given_h(stats)
    expected_log_prob_h = self.expected_log_prob_h(stats)
    rval = (expected_log_prob_v_given_hs + expected_log_prob_s_given_h +
            expected_log_prob_h)
    assert len(rval.type.broadcastable) == 0
    return rval
'.. todo:: WRITEME'
def log_partition_function(self):
    half = as_floatX(0.5)
    two = as_floatX(2.0)
    pi = as_floatX(np.pi)
    N = as_floatX(self.nhid)
    term1 = -half * T.sum(T.log(self.B))
    term2 = half * N * T.log(two * pi)
    term3 = -half * T.log(self.alpha).sum()
    term4 = half * N * T.log(two * pi)
    term5 = T.nnet.softplus(self.bias_hid).sum()
    return term1 + term2 + term3 + term4 + term5
'.. todo:: WRITEME'
def expected_log_prob_vhs_batch(self, V, H_hat, S_hat, var_s0_hat, var_s1_hat):
    half = as_floatX(0.5)
    two = as_floatX(2.0)
    pi = as_floatX(np.pi)
    N = as_floatX(self.nhid)
    negative_log_partition_function = -self.log_partition_function()
    assert len(negative_log_partition_function.type.broadcastable) == 0
    negative_energy = -self.expected_energy_vhs(
        V=V, H_hat=H_hat, S_hat=S_hat,
        var_s0_hat=var_s0_hat, var_s1_hat=var_s1_hat)
    assert len(negative_energy.type.broadcastable) == 1
    rval = negative_log_partition_function + negative_energy
    return rval
'V, H, S are SAMPLES (i.e., H must be LITERALLY BINARY). Return value is a vector of length batch size. Parameters V : WRITEME H : WRITEME S : WRITEME Returns WRITEME'
def log_prob_v_given_hs(self, V, H, S):
    half = as_floatX(0.5)
    two = as_floatX(2.0)
    pi = as_floatX(np.pi)
    N = as_floatX(self.nhid)
    term1 = half * T.sum(T.log(self.B))
    term2 = -half * N * T.log(two * pi)
    mean_HS = H * S
    recons = T.dot(H * S, self.W.T)
    residuals = V - recons
    term3 = -half * T.dot(T.sqr(residuals), self.B)
    rval = term1 + term2 + term3
    assert len(rval.type.broadcastable) == 1
    return rval
'.. todo:: WRITEME'
@classmethod
def expected_log_prob_v_given_hs_needed_stats(cls):
    return set(['mean_sq_v', 'mean_hsv', 'mean_sq_hs', 'mean_sq_mean_hs'])
'Return value is a SCALAR-- expectation taken across batch index too Parameters stats : WRITEME H_hat : WRITEME S_hat : WRITEME Returns WRITEME'
def expected_log_prob_v_given_hs(self, stats, H_hat, S_hat):
'\n E_v,h,s \\sim Q log P( v | h, s)\n = sum_k [ E_v,h,s \\sim Q log sqrt(B/2 pi) exp( - 0.5 B (v- W[v,:] (h*s) )^2) ]\n = sum_k [ E_v,h,s \\sim Q 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) - 0.5 B_k sum_i sum_j W[k,i] W[k,j] h_i s_i h_j s_j ]\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ] - 0.5 sum_k B_k sum_i,j W[k,i] W[k,j] < h_i s_i h_j s_j >\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ] - (1/2T) sum_k B_k sum_i,j W[k,i] W[k,j] sum_t <h_it s_it h_jt s_t>\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ] - (1/2T) sum_k B_k sum_t sum_i,j W[k,i] W[k,j] <h_it s_it h_jt s_t>\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_k B_k sum_t sum_i W[k,i] sum_{j\neq i} W[k,j] <h_it s_it> <h_jt s_t>\n - (1/2T) sum_k B_k sum_t sum_i W[k,i]^2 <h_it s_it^2>\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_k B_k sum_t sum_i W[k,i] <h_it s_it> sum_j W[k,j] <h_jt s_t>\n + (1/2T) sum_k B_k sum_t sum_i W[k,i]^2 <h_it s_it>^2\n - (1/2T) sum_k B_k sum_t sum_i W[k,i]^2 <h_it s_it^2>\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_k B_k sum_t sum_i W[k,i] <h_it s_it> sum_j W[k,j] <h_jt s_t>\n + (1/2T) sum_k B_k sum_t sum_i W[k,i]^2 (<h_it s_it>^2 - <h_it s_it^2>)\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_k B_k sum_t sum_i W_ki HS_it sum_j W_kj HS_tj\n + (1/2T) sum_k B_k sum_t sum_i sq(W)_ki ( sq(HS)-sq_HS)_it\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_k B_k sum_t sum_i W_ki HS_it sum_j W_kj HS_tj\n + (1/2T) sum_k B_k sum_t sum_i sq(W)_ki ( sq(HS)-sq_HS)_it\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_k B_k sum_t sum_i W_ki HS_it sum_j W_kj HS_tj\n + (1/2T) sum_k B_k sum_t sum_i sq(W)_ki ( sq(HS)-sq_HS)_it\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_k B_k sum_t (HS_t: W_k:^T) (HS_t: W_k:^T)\n + (1/2) sum_k B_k sum_i sq(W)_ki ( mean_sq_mean_hs-mean_sq_hs)_i\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2T) sum_t sum_k B_k (HS_t: W_k:^T)^2\n + (1/2) sum_k B_k sum_i sq(W)_ki ( mean_sq_mean_hs-mean_sq_hs)_i\n = sum_k [ 0.5 log B_k - 0.5 log 2 pi - 0.5 B_k v_k^2 + v_k B_k W[k,:] (h*s) ]\n - (1/2) mean( (HS W^T)^2 B )\n + (1/2) sum_k B_k sum_i sq(W)_ki ( mean_sq_mean_hs-mean_sq_hs)_i\n ' half = as_floatX(0.5) two = as_floatX(2.0) pi = as_floatX(np.pi) N = as_floatX(self.nhid) mean_sq_v = stats.d['mean_sq_v'] mean_hsv = stats.d['mean_hsv'] mean_sq_mean_hs = stats.d['mean_sq_mean_hs'] mean_sq_hs = stats.d['mean_sq_hs'] term1 = (half * T.sum(T.log(self.B))) term2 = (((- half) * N) * T.log((two * pi))) term3 = ((- half) * T.dot(self.B, mean_sq_v)) term4 = T.dot(self.B, (self.W * mean_hsv.T).sum(axis=1)) HS = (H_hat * S_hat) recons = T.dot(HS, self.W.T) sq_recons = T.sqr(recons) weighted = T.dot(sq_recons, self.B) assert (len(weighted.type.broadcastable) == 1) term5 = ((- half) * T.mean(weighted)) term6 = (half * T.dot(self.B, T.dot(T.sqr(self.W), (mean_sq_mean_hs - mean_sq_hs)))) rval = (((((term1 + term2) + term3) + term4) + term5) + term6) assert (len(rval.type.broadcastable) == 0) return rval
'.. todo:: WRITEME'
@classmethod
def expected_log_prob_s_given_h_needed_stats(cls):
    return set(['mean_h', 'mean_hs', 'mean_sq_s'])
'.. todo:: WRITEME E_h,s\sim Q log P(s|h) = E_h,s\sim Q log sqrt( alpha / 2pi) exp(- 0.5 alpha (s-mu h)^2) = E_h,s\sim Q log sqrt( alpha / 2pi) - 0.5 alpha (s-mu h)^2 = E_h,s\sim Q 0.5 log alpha - 0.5 log 2 pi - 0.5 alpha s^2 + alpha s mu h + 0.5 alpha mu^2 h^2 = E_h,s\sim Q 0.5 log alpha - 0.5 log 2 pi - 0.5 alpha s^2 + alpha mu h s + 0.5 alpha mu^2 h = 0.5 log alpha - 0.5 log 2 pi - 0.5 alpha mean_sq_s + alpha mu mean_hs - 0.5 alpha mu^2 mean_h'
def expected_log_prob_s_given_h(self, stats):
    mean_h = stats.d['mean_h']
    mean_sq_s = stats.d['mean_sq_s']
    mean_hs = stats.d['mean_hs']
    half = as_floatX(0.5)
    two = as_floatX(2.0)
    N = as_floatX(self.nhid)
    pi = as_floatX(np.pi)
    term1 = half * T.log(self.alpha).sum()
    term2 = -half * N * T.log(two * pi)
    term3 = -half * T.dot(self.alpha, mean_sq_s)
    term4 = T.dot(self.mu * self.alpha, mean_hs)
    term5 = -half * T.dot(T.sqr(self.mu), self.alpha * mean_h)
    rval = term1 + term2 + term3 + term4 + term5
    assert len(rval.type.broadcastable) == 0
    return rval
'.. todo:: WRITEME'
@classmethod
def expected_log_prob_h_needed_stats(cls):
    return set(['mean_h'])
'Returns the expected log probability of the vector h under the model when the data is drawn according to stats: E_h\sim Q log P(h) = E_h\sim Q log exp( bh) / (1+exp(b)) = E_h\sim Q bh - softplus(b) Parameters stats : WRITEME Returns WRITEME'
def expected_log_prob_h(self, stats):
    mean_h = stats.d['mean_h']
    term1 = T.dot(self.bias_hid, mean_h)
    term2 = -T.nnet.softplus(self.bias_hid).sum()
    rval = term1 + term2
    assert len(rval.type.broadcastable) == 0
    return rval
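Because Q factorizes over the binary units, the expectation above only needs mean_h. A standalone brute-force check with illustrative values:

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

rng = np.random.RandomState(0)
nhid = 5
b = rng.randn(nhid)
mean_h = rng.rand(nhid)                            # E_Q[h_i]
expected = np.dot(b, mean_h) - softplus(b).sum()   # as computed above

brute = 0.0
for bits in range(2 ** nhid):                      # enumerate all h in {0,1}^nhid
    h = np.array([(bits >> i) & 1 for i in range(nhid)], dtype=float)
    q = np.prod(np.where(h == 1.0, mean_h, 1.0 - mean_h))
    brute += q * (np.dot(b, h) - softplus(b).sum())
assert np.allclose(expected, brute)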
'.. todo:: WRITEME'
def make_pseudoparams(self):
    if self.tied_B:
        self.B = self.B_driver + as_floatX(np.zeros(self.nvis))
        self.B.name = 'S3C.tied_B'
    else:
        self.B = self.B_driver
    self.w = T.dot(self.B, T.sqr(self.W))
    self.w.name = 'S3C.w'
    self.p = T.nnet.sigmoid(self.bias_hid)
    self.p.name = 'S3C.p'
'.. todo:: WRITEME'
def reset_censorship_cache(self):
    self.censored_updates = {}
    self.register_names_to_del(['censored_updates'])
    for param in self.get_params():
        self.censored_updates[param] = set([])
'.. todo:: WRITEME'
def redo_theano(self):
    self.reset_censorship_cache()
    if not self.autonomous:
        return
    try:
        self.compile_mode()
        init_names = dir(self)
        self.make_pseudoparams()
        self.e_step.register_model(self)
        self.get_B_value = function([], self.B)
        X = T.matrix(name='V')
        X.tag.test_value = np.cast[config.floatX](
            self.rng.randn(self._test_batch_size, self.nvis))
        self.learn_func = self.make_learn_func(X)
        final_names = dir(self)
        self.register_names_to_del([name for name in final_names
                                    if name not in init_names])
    finally:
        self.deploy_mode()
'.. todo:: WRITEME'
def train_batch(self, dataset, batch_size):
    if self.set_B_to_marginal_precision:
        assert not self.tied_B
        var = dataset.X.var(axis=0)
        self.B_driver.set_value(1.0 / (var + 0.01))
    if self.stop_after_hack is not None:
        if self.monitor.examples_seen > self.stop_after_hack:
            logger.error('stopping due to too many examples seen')
            quit(-1)
    self.learn_mini_batch(dataset.get_batch_design(batch_size))
    return True
'.. todo:: WRITEME'
def print_status(self):
    b = self.bias_hid.get_value(borrow=True)
    assert not contains_nan(b)
    p = 1.0 / (1.0 + np.exp(-b))
    logger.info('p: ({0}, {1}, {2})'.format(p.min(), p.mean(), p.max()))
    B = self.B_driver.get_value(borrow=True)
    assert not contains_nan(B)
    logger.info('B: ({0}, {1}, {2})'.format(B.min(), B.mean(), B.max()))
    mu = self.mu.get_value(borrow=True)
    assert not contains_nan(mu)
    logger.info('mu: ({0}, {1}, {2})'.format(mu.min(), mu.mean(), mu.max()))
    alpha = self.alpha.get_value(borrow=True)
    assert not contains_nan(alpha)
    logger.info('alpha: ({0}, {1}, {2})'.format(alpha.min(), alpha.mean(),
                                                alpha.max()))
    W = self.W.get_value(borrow=True)
    assert isfinite(W)
    logger.info('W: ({0}, {1}, {2})'.format(W.min(), W.mean(), W.max()))
    norms = numpy_norms(W)
    logger.info('W norms: ({0}, {1}, {2})'.format(norms.min(), norms.mean(),
                                                  norms.max()))
'.. todo:: WRITEME'
def learn_mini_batch(self, X):
    self.learn_func(X)
    if self.momentum_saturation_example is not None:
        alpha = (float(self.monitor.get_examples_seen()) /
                 float(self.momentum_saturation_example))
        alpha = min(alpha, 1.0)
        self.momentum.set_value(np.cast[config.floatX](
            (1.0 - alpha) * self.init_momentum + alpha * self.final_momentum))
    if (self.monitor.get_examples_seen() % self.print_interval) == 0:
        self.print_status()
    if self.debug_m_step:
        if self.energy_functional_diff.get_value() < 0.0:
            warnings.warn('m step decreased the em functional')
            if self.debug_m_step != 'warn':
                quit(-1)
'.. todo:: WRITEME'
def get_weights_format(self):
return ['v', 'h']
'.. todo:: WRITEME'
def get_weights(self):
    W = self.W.get_value()
    x = input('multiply weights by mu? (y/n) ')
    if x == 'y':
        return W * self.mu.get_value()
    elif x == 'n':
        return W
    assert False
'.. todo:: WRITEME'
def get_monitoring_channels(self, V):
rval = {} if self.autonomous: if (self.monitor_kl or self.monitor_energy_functional or self.monitor_s_mag or self.monitor_ranges): obs_history = self.model.infer(V, return_history=True) assert isinstance(obs_history, list) final_vals = obs_history[(-1)] S_hat = final_vals['S_hat'] H_hat = final_vals['H_hat'] HS = (H_hat * S_hat) hs_max = T.max(HS, axis=0) hs_min = T.min(HS, axis=0) hs_range = (hs_max - hs_min) rval['hs_range_min'] = T.min(hs_range) rval['hs_range_mean'] = T.mean(hs_range) rval['hs_range_max'] = T.max(hs_range) h_max = T.max(H_hat, axis=0) h_min = T.min(H_hat, axis=0) h_range = (h_max - h_min) rval['h_range_min'] = T.min(h_range) rval['h_range_mean'] = T.mean(h_range) rval['h_range_max'] = T.max(h_range) for i in xrange(1, (2 + len(self.h_new_coeff_schedule))): obs = obs_history[(i - 1)] if self.monitor_kl: if (i == 1): rval[('trunc_KL_' + str(i))] = self.truncated_KL(V, obs=obs).mean() else: coeff = self.h_new_coeff_schedule[(i - 2)] rval[(((('trunc_KL_' + str(i)) + '.2(h ') + str(coeff)) + ')')] = self.truncated_KL(V, obs=obs).mean() obs = {} for key in obs_history[(i - 1)]: obs[key] = obs_history[(i - 1)][key] obs['H_hat'] = obs_history[(i - 2)]['H_hat'] coeff = self.s_new_coeff_schedule[(i - 2)] rval[(((('trunc_KL_' + str(i)) + '.1(s ') + str(coeff)) + ')')] = self.truncated_KL(V, obs=obs).mean() obs = obs_history[(i - 1)] if self.monitor_energy_functional: rval[('energy_functional_' + str(i))] = self.energy_functional(V, self.model, obs).mean() if self.monitor_s_mag: rval[('s_mag_' + str(i))] = T.sqrt(T.sum(T.sqr(obs['S_hat']))) return rval
'Return value is a scalar Parameters V : WRITEME model : WRITEME obs : WRITEME Returns WRITEME'
def energy_functional(self, V, model, obs):
    needed_stats = S3C.expected_log_prob_vhs_needed_stats()
    stats = SufficientStatistics.from_observations(needed_stats=needed_stats,
                                                   V=V, **obs)
    H_hat = obs['H_hat']
    S_hat = obs['S_hat']
    var_s0_hat = obs['var_s0_hat']
    var_s1_hat = obs['var_s1_hat']
    entropy_term = model.entropy_hs(H_hat=H_hat, var_s0_hat=var_s0_hat,
                                    var_s1_hat=var_s1_hat).mean()
    likelihood_term = model.expected_log_prob_vhs(stats, H_hat=H_hat,
                                                  S_hat=S_hat)
    energy_functional = entropy_term + likelihood_term
    return energy_functional
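Note: entropy_term + likelihood_term above is the usual variational (evidence) lower bound, log p(v) >= E_{(h,s)~Q}[log p(v, h, s)] + H[Q(h, s)]. Maximizing it over the variational parameters tightens the bound, and maximizing it over the model parameters performs approximate maximum likelihood.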
'.. todo:: WRITEME'
def register_model(self, model):
self.model = model
'KL divergence between the variational and true posterior, dropping terms that don\'t depend on the variational parameters. Parameters V : WRITEME Y : WRITEME obs : WRITEME Returns WRITEME'
def truncated_KL(self, V, Y=None, obs=None):
    assert Y is None
    assert obs is not None
    H_hat = obs['H_hat']
    var_s0_hat = obs['var_s0_hat']
    var_s1_hat = obs['var_s1_hat']
    S_hat = obs['S_hat']
    model = self.model
    for H_hat_v in get_debug_values(H_hat):
        assert H_hat_v.min() >= 0.0
        assert H_hat_v.max() <= 1.0
    entropy_term = -model.entropy_hs(H_hat=H_hat, var_s0_hat=var_s0_hat,
                                     var_s1_hat=var_s1_hat)
    energy_term = model.expected_energy_vhs(V, H_hat=H_hat, S_hat=S_hat,
                                            var_s0_hat=var_s0_hat,
                                            var_s1_hat=var_s1_hat)
    for (entropy, energy) in get_debug_values(entropy_term, energy_term):
        debug_assert(not contains_nan(entropy))
        debug_assert(not contains_nan(energy))
    KL = entropy_term + energy_term
    return KL
'.. todo:: WRITEME'
def init_H_hat(self, V):
    if self.model.recycle_q:
        rval = self.model.prev_H
        if config.compute_test_value != 'off':
            if rval.get_value().shape[0] != V.tag.test_value.shape[0]:
                raise Exception('E step given wrong test batch size',
                                rval.get_value().shape,
                                V.tag.test_value.shape)
    else:
        value = T.nnet.sigmoid(self.model.bias_hid)
        rval = T.alloc(value, V.shape[0], value.shape[0])
    for (rval_value, V_value) in get_debug_values(rval, V):
        if rval_value.shape[0] != V_value.shape[0]:
            debug_error_message("rval.shape = %s, V.shape = %s, "
                                "element 0 should match but doesn't",
                                str(rval_value.shape), str(V_value.shape))
    return rval
'.. todo:: WRITEME'
def init_S_hat(self, V):
    if self.model.recycle_q:
        rval = self.model.prev_S_hat
    else:
        value = self.model.mu
        assert self.model.mu.get_value(borrow=True).shape[0] == self.model.nhid
        rval = T.alloc(value, V.shape[0], value.shape[0])
    return rval
'.. todo:: WRITEME'
def infer_S_hat(self, V, H_hat, S_hat):
for (Vv, Hv) in get_debug_values(V, H_hat): if (Vv.shape != (self.model._test_batch_size, self.model.nvis)): raise Exception((((('Well this is awkward. We require visible input test tags to be of shape ' + str((self.model._test_batch_size, self.model.nvis))) + ' but the monitor gave us something of shape ') + str(Vv.shape)) + ". The batch index part is probably only important if recycle_q is enabled. It's also probably not all that realistic to plan on telling the monitor what size of batch we need for test tags. the best thing to do is probably change self.model._test_batch_size to match what the monitor does")) assert (Vv.shape[0] == Hv.shape[0]) if (not (Hv.shape[1] == self.model.nhid)): raise AssertionError(('Hv.shape[1] is %d, does not match self.model.nhid, %d' % (Hv.shape[1], self.model.nhid))) mu = self.model.mu alpha = self.model.alpha W = self.model.W B = self.model.B w = self.model.w BW = (B.dimshuffle(0, 'x') * W) BW.name = 'infer_S_hat:BW' HS = (H_hat * S_hat) HS.name = 'infer_S_hat:HS' mean_term = (mu * alpha) mean_term.name = 'infer_S_hat:mean_term' assert (V.dtype == config.floatX) assert (BW.dtype == config.floatX), ('Expected %s, got %s' % (config.floatX, BW.dtype)) data_term = T.dot(V, BW) data_term.name = 'infer_S_hat:data_term' iterm_part_1 = (- T.dot(T.dot(HS, W.T), BW)) iterm_part_1.name = 'infer_S_hat:iterm_part_1' assert (w.name is not None) iterm_part_2 = (w * HS) iterm_part_2.name = 'infer_S_hat:iterm_part_2' interaction_term = (iterm_part_1 + iterm_part_2) interaction_term.name = 'infer_S_hat:interaction_term' for (i1v, Vv) in get_debug_values(iterm_part_1, V): assert (i1v.shape[0] == Vv.shape[0]) assert (mean_term.dtype == config.floatX) assert (data_term.dtype == config.floatX) assert (interaction_term.dtype == config.floatX) debug_interm = (mean_term + data_term) debug_interm.name = 'infer_S_hat:debug_interm' numer = (debug_interm + interaction_term) numer.name = 'infer_S_hat:numer' assert (numer.dtype == config.floatX) alpha = self.model.alpha w = self.model.w denom = (alpha + w) assert (denom.dtype == config.floatX) denom.name = 'infer_S_hat:denom' S_hat = (numer / denom) return S_hat
'.. todo:: WRITEME'
def infer_var_s0_hat(self):
return (1.0 / self.model.alpha)
'Returns the variational parameter for the variance of s given h=1. This is data-independent, so it\'s just a vector of size (nhid,) and doesn\'t take any arguments. Returns WRITEME'
def infer_var_s1_hat(self):
    rval = 1.0 / (self.model.alpha + self.model.w)
    rval.name = 'var_s1'
    return rval
'Computes the value of H_hat prior to the application of the sigmoid function. This is a useful quantity to compute for larger models that influence h with top-down terms. Such models can apply the sigmoid themselves after adding the top-down interactions Parameters V : WRITEME H_hat : WRITEME S_hat : WRITEME Returns WRITEME'
def infer_H_hat_presigmoid(self, V, H_hat, S_hat):
half = as_floatX(0.5) alpha = self.model.alpha w = self.model.w mu = self.model.mu W = self.model.W B = self.model.B BW = (B.dimshuffle(0, 'x') * W) HS = (H_hat * S_hat) t1f1t1 = V t1f1t2 = (- T.dot(HS, W.T)) iterm_corrective = ((w * H_hat) * T.sqr(S_hat)) t1f1t3_effect = (((- half) * w) * T.sqr(S_hat)) term_1_factor_1 = (t1f1t1 + t1f1t2) term_1 = (((T.dot(term_1_factor_1, BW) * S_hat) + iterm_corrective) + t1f1t3_effect) term_2_subterm_1 = (((- half) * alpha) * T.sqr(S_hat)) term_2_subterm_2 = ((alpha * S_hat) * mu) term_2_subterm_3 = (((- half) * alpha) * T.sqr(mu)) term_2 = ((term_2_subterm_1 + term_2_subterm_2) + term_2_subterm_3) term_3 = self.model.bias_hid term_4 = ((- half) * T.log((alpha + self.model.w))) term_5 = (half * T.log(alpha)) arg_to_sigmoid = ((((term_1 + term_2) + term_3) + term_4) + term_5) return arg_to_sigmoid
'.. todo:: WRITEME'
def infer_H_hat(self, V, H_hat, S_hat, count=None):
    arg_to_sigmoid = self.infer_H_hat_presigmoid(V, H_hat, S_hat)
    H = T.nnet.sigmoid(arg_to_sigmoid)
    V_name = make_name(V, anon='anon_V')
    if count is not None:
        H.name = 'H_hat(%s, %d)' % (V_name, count)
    return H
'.. todo:: WRITEME Parameters V : WRITEME return_history : bool If True, returns a list of dictionaries showing the history of the variational parameters throughout the fixed point updates. If False, returns a dictionary containing the final variational parameters. Returns WRITEME'
def infer(self, V, return_history=False):
if (not self.autonomous): raise ValueError('Non-autonomous model asked to perform inference on its own') alpha = self.model.alpha var_s0_hat = (1.0 / alpha) var_s1_hat = self.infer_var_s1_hat() H_hat = self.init_H_hat(V) S_hat = self.init_S_hat(V) def check_H(my_H, my_V): if (my_H.dtype != config.floatX): raise AssertionError(('my_H.dtype should be config.floatX, but they are %s and %s, respectively' % (my_H.dtype, config.floatX))) allowed_v_types = ['float32'] if (config.floatX == 'float64'): allowed_v_types.append('float64') assert (my_V.dtype in allowed_v_types) if (config.compute_test_value != 'off'): from theano.gof.op import PureOp Hv = PureOp._get_test_value(my_H) Vv = my_V.tag.test_value assert (Hv.shape[0] == Vv.shape[0]) check_H(H_hat, V) def make_dict(): return {'H_hat': H_hat, 'S_hat': S_hat, 'var_s0_hat': var_s0_hat, 'var_s1_hat': var_s1_hat} history = [make_dict()] count = 2 h_new_coeff_schedule = self.h_new_coeff_schedule s_new_coeff_schedule = self.s_new_coeff_schedule assert isinstance(s_new_coeff_schedule, (list, tuple)) assert isinstance(h_new_coeff_schedule, (list, tuple)) for (new_H_coeff, new_S_coeff) in zip(h_new_coeff_schedule, s_new_coeff_schedule): new_H_coeff = as_floatX(new_H_coeff) new_S_coeff = as_floatX(new_S_coeff) assert (V.dtype == config.floatX) assert (H_hat.dtype == config.floatX) assert (S_hat.dtype == config.floatX) new_S_hat = self.infer_S_hat(V, H_hat, S_hat) assert (new_S_hat.type.dtype == config.floatX) if self.clip_reflections: clipped_S_hat = reflection_clip(S_hat=S_hat, new_S_hat=new_S_hat, rho=self.rho) else: clipped_S_hat = new_S_hat assert (clipped_S_hat.dtype == config.floatX) assert (S_hat.type.dtype == config.floatX) assert (new_S_coeff.dtype == config.floatX) S_hat = damp(old=S_hat, new=clipped_S_hat, new_coeff=new_S_coeff) S_hat.name = ('S_hat_' + str(count)) assert (S_hat.type.dtype == config.floatX) new_H = self.infer_H_hat(V, H_hat, S_hat, count) assert (new_H.type.dtype == config.floatX) count += 1 H_hat = damp(old=H_hat, new=new_H, new_coeff=new_H_coeff) check_H(H_hat, V) history.append(make_dict()) if return_history: return history else: return history[(-1)]
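The fixed-point loop above interleaves damped updates of S_hat and H_hat. A minimal sketch of the damping pattern, assuming damp(old, new, new_coeff) returns the convex combination new_coeff * new + (1 - new_coeff) * old (the real damp and reflection_clip helpers live elsewhere in pylearn2):

import numpy as np

def damp_sketch(old, new, new_coeff):
    # assumed semantics: new_coeff = 1.0 means no damping at all
    return new_coeff * new + (1.0 - new_coeff) * old

# toy illustration: damped iteration toward a fixed point at 3.0
x = np.array(0.0)
for coeff in [0.5, 0.5, 0.7, 1.0]:
    proposed = np.array(3.0)           # stand-in for infer_S_hat / infer_H_hat
    x = damp_sketch(x, proposed, coeff)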
'.. todo:: WRITEME'
def __setstate__(self, d):
    if 'autonomous' not in d:
        d['autonomous'] = True
    self.__dict__.update(d)
'.. todo:: WRITEME'
def get_updates(self, model, stats, H_hat, S_hat):
assert self.autonomous params = model.get_params() obj = (((model.expected_log_prob_vhs(stats, H_hat, S_hat) - (T.mean(model.p) * self.p_penalty)) - (T.mean(model.B) * self.B_penalty)) - (T.mean(model.alpha) * self.alpha_penalty)) constants = set(stats.d.values()).union([H_hat, S_hat]) grads = T.grad(obj, params, consider_constant=constants) updates = OrderedDict() for (param, grad) in zip(params, grads): learning_rate = self.learning_rate if (param is model.W): learning_rate = (learning_rate * self.W_learning_rate_scale) if (param is model.B_driver): learning_rate = (learning_rate * self.B_learning_rate_scale) if (param is model.alpha): learning_rate = (learning_rate * self.alpha_learning_rate_scale) if (model.momentum_saturation_example is None): if ((param is model.W) and model.constrain_W_norm): g_k = (learning_rate * grad) h_k = (g_k - ((g_k * model.W).sum(axis=0) * model.W)) theta_k = T.sqrt((1e-08 + T.sqr(h_k).sum(axis=0))) u_k = (h_k / theta_k) updates[model.W] = ((T.cos(theta_k) * model.W) + (T.sin(theta_k) * u_k)) else: pparam = param inc = (learning_rate * grad) updated_param = (pparam + inc) updates[param] = updated_param else: inc = model.params_to_incs[param] updates[inc] = ((model.momentum * inc) + (learning_rate * grad)) updates[param] = (param + inc) return updates
'.. todo:: WRITEME'
def needed_stats(self):
return S3C.expected_log_prob_vhs_needed_stats()
'.. todo:: WRITEME'
def get_monitoring_channels(self, V, model):
    hid_observations = model.infer(V)
    stats = SufficientStatistics.from_observations(
        needed_stats=S3C.expected_log_prob_vhs_needed_stats(),
        V=V, **hid_observations)
    H_hat = hid_observations['H_hat']
    S_hat = hid_observations['S_hat']
    obj = model.expected_log_prob_vhs(stats, H_hat, S_hat)
    return {'expected_log_prob_vhs': obj}
'WRITEME Parameters V : WRITEME return_history : bool If True, returns a list of dictionaries showing the history of the variational parameters throughout the fixed point updates. If False, returns a dictionary containing the final variational parameters. Returns WRITEME'
def infer(self, V, return_history=False):
if (not self.autonomous): raise ValueError('Non-autonomous model asked to perform inference on its own') alpha = self.model.alpha var_s0_hat = (1.0 / alpha) var_s1_hat = self.infer_var_s1_hat() H_hat = self.init_H_hat(V) S_hat = self.init_S_hat(V) def inner_function(new_H_coeff, new_S_coeff, H_hat, S_hat): orig_H_dtype = H_hat.dtype orig_S_dtype = S_hat.dtype new_S_hat = self.infer_S_hat(V, H_hat, S_hat) if self.clip_reflections: clipped_S_hat = reflection_clip(S_hat=S_hat, new_S_hat=new_S_hat, rho=self.rho) else: clipped_S_hat = new_S_hat S_hat = damp(old=S_hat, new=clipped_S_hat, new_coeff=new_S_coeff) new_H = self.infer_H_hat(V, H_hat, S_hat) H_hat = damp(old=H_hat, new=new_H, new_coeff=new_H_coeff) assert (H_hat.dtype == orig_H_dtype) assert (S_hat.dtype == orig_S_dtype) return (H_hat, S_hat) ((H_hats, S_hats), _) = scan(fn=inner_function, sequences=[self.h_new_coeff_schedule, self.s_new_coeff_schedule], outputs_info=[H_hat, S_hat]) if return_history: hist = [{'H_hat': H_hats[i], 'S_hat': S_hats[i], 'var_s0_hat': var_s0_hat, 'var_s1_hat': var_s1_hat} for i in xrange(self.h_new_coeff_schedule.get_value().shape[0])] hist.insert(0, {'H_hat': H_hat, 'S_hat': S_hat, 'var_s0_hat': var_s0_hat, 'var_s1_hat': var_s1_hat}) return hist return {'H_hat': H_hats[(-1)], 'S_hat': S_hats[(-1)], 'var_s0_hat': var_s0_hat, 'var_s1_hat': var_s1_hat}
'Run the k-means algorithm on the input to locate clusters. Parameters dataset : WRITEME mu : WRITEME Returns rval : bool WRITEME'
def train_all(self, dataset, mu=None):
X = dataset.get_design_matrix() (n, m) = X.shape k = self.k if (milk is not None): (cluster_ids, mu) = milk.kmeans(X, k) else: if (mu is not None): if (not (len(mu) == k)): raise Exception(('You gave %i clusters, but k=%i were expected' % (len(mu), k))) else: indices = numpy.random.randint(X.shape[0], size=k) mu = X[indices] try: dists = numpy.zeros((n, k)) except MemoryError as e: improve_memory_error_message(e, 'dying trying to allocate dists matrix for {0} examples and {1} means'.format(n, k)) old_kills = {} iter = 0 mmd = prev_mmd = float('inf') while True: if self.verbose: logger.info('kmeans iter {0}'.format(iter)) if contains_nan(mu): logger.info('nan found') return X for i in xrange(k): dists[:, i] = numpy.square((X - mu[i, :])).sum(axis=1) if (iter > 0): prev_mmd = mmd min_dists = dists.min(axis=1) mmd = min_dists.mean() logger.info('cost: {0}'.format(mmd)) if ((iter > 0) and ((iter >= self.max_iter) or (abs((mmd - prev_mmd)) < self.convergence_th))): break min_dist_inds = dists.argmin(axis=1) i = 0 blacklist = [] new_kills = {} while (i < k): b = (min_dist_inds == i) if (not numpy.any(b)): killed_on_prev_iter = True if (i in old_kills): d = (old_kills[i] - 1) if (d == 0): d = 50 new_kills[i] = d else: d = 5 mu[i, :] = 0 for j in xrange(d): idx = numpy.argmax(min_dists) min_dists[idx] = 0 mu[i, :] += X[idx, :] blacklist.append(idx) mu[i, :] /= float(d) dists[:, i] = numpy.square((X - mu[i, :])).sum(axis=1) min_dists = dists.min(axis=1) for idx in blacklist: min_dists[idx] = 0 min_dist_inds = dists.argmin(axis=1) i += 1 else: mu[i, :] = numpy.mean(X[b, :], axis=0) if contains_nan(mu): logger.info('nan found at {0}'.format(i)) return X i += 1 old_kills = new_kills iter += 1 self.mu = sharedX(mu) self._params = [self.mu]
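For orientation, the loop above is Lloyd's algorithm plus ad-hoc repair of empty ('killed') clusters. A compact NumPy reference for the core iterations only, with assumed names and no dead-cluster handling:

import numpy as np

def lloyd_kmeans(X, k, max_iter=100, convergence_th=1e-6, rng=np.random):
    X = np.asarray(X, dtype=float)
    mu = X[rng.randint(X.shape[0], size=k)].copy()       # random initial means
    prev_cost = np.inf
    for _ in range(max_iter):
        # squared distances from every example to every mean
        dists = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        cost = dists.min(axis=1).mean()
        for i in range(k):
            if np.any(assign == i):                       # leave empty clusters alone
                mu[i] = X[assign == i].mean(axis=0)
        if abs(prev_cost - cost) < convergence_th:
            break
        prev_cost = cost
    return mu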
'.. todo:: WRITEME'
def get_params(self):
    if not hasattr(self.mu, 'get_value'):
        self.mu = sharedX(self.mu)
    if not hasattr(self, '_params'):
        self._params = [self.mu]
    return [param for param in self._params]
'Compute, for each sample, its probability of belonging to a cluster. Parameters X : numpy.ndarray Matrix of samples of shape (n, d) Returns WRITEME'
def __call__(self, X):
    (n, m) = X.shape
    k = self.k
    mu = self.mu
    dists = numpy.zeros((n, k))
    for i in xrange(k):
        dists[:, i] = numpy.square(X - mu[i, :]).sum(axis=1)
    return dists / dists.sum(axis=1).reshape(-1, 1)
'.. todo:: WRITEME'
def get_weights(self):
return self.mu
'.. todo:: WRITEME'
def get_weights_format(self):
return ['h', 'v']
'Don\'t let subclasses use censor_updates.'
def _disallow_censor_updates(self):
    if self._overrides_censor_updates():
        raise TypeError(str(type(self)) +
                        ' overrides Model.censor_updates, which is no longer '
                        'in use. Change this to _modify_updates. This check '
                        'may quit being performed after 2015-05-13.')
'Makes sure the model has an "extensions" field.'
def _ensure_extensions(self):
    if not hasattr(self, 'extensions'):
        raise TypeError('The ' + str(type(self)) +
                        ' Model subclass is required to call the Model '
                        'superclass constructor but does not.')
        self.extensions = []
'An implementation of __setstate__ that patches old pickle files.'
def __setstate__(self, d):
    self._disallow_censor_updates()
    self.__dict__.update(d)
    if 'extensions' not in d:
        self.extensions = []
'Returns the default cost to use with this model. Returns default_cost : Cost The default cost to use with this model.'
def get_default_cost(self):
raise NotImplementedError((str(type(self)) + ' does not implement get_default_cost.'))
'If implemented, performs one epoch of training. Parameters dataset : pylearn2.datasets.dataset.Dataset Dataset object to draw training data from Notes This method is useful for models with highly specialized training algorithms for which it does not make much sense to factor the training code into a separate class. It is also useful for implementors that want to make their model trainable without enforcing compatibility with pylearn2 TrainingAlgorithms.'
def train_all(self, dataset):
raise NotImplementedError((str(type(self)) + ' does not implement train_all.'))
'If train_all is used to train the model, this method is used to determine when the training process has converged. This method is called after the monitor has been run on the latest parameters. Returns rval : bool True if training should continue'
def continue_learning(self):
raise NotImplementedError((str(type(self)) + ' does not implement continue_learning.'))
'If implemented, performs an update on a single minibatch. Parameters dataset: pylearn2.datasets.dataset.Dataset The object to draw training data from. batch_size: int Size of the minibatch to draw from dataset. Returns rval : bool True if the method should be called again for another update. False if convergence has been reached.'
def train_batch(self, dataset, batch_size):
raise NotImplementedError()
'Returns the shape `PatchViewer` should use to display the weights. Returns shape : tuple A tuple containing two ints. These are used as the `grid_shape` argument to `PatchViewer` when displaying the weights of this model. Notes This can be useful when there is some geometric significance to the order of your weight vectors. For example, the `Maxout` model makes sure that all of the filters for the same hidden unit appear on the same row of the display.'
def get_weights_view_shape(self):
raise NotImplementedError((str(type(self)) + ' does not implement get_weights_view_shape (perhaps by design)'))
'Get monitoring channels for this model. Parameters data : tensor_like, or (possibly nested) tuple of tensor_likes, This is data on which the monitoring quantities will be calculated (e.g., a validation set). See `self.get_monitoring_data_specs()`. Returns channels : OrderedDict A dictionary with strings as keys, mapping channel names to symbolic values that depend on the variables in `data`. Notes You can make any channel names you want, just try to make sure they won\'t collide with names made by the training Cost, etc. Anything you think is worth monitoring during training can be added here. You probably want to control which channels get added with some config option for your model.'
def get_monitoring_channels(self, data):
    (space, source) = self.get_monitoring_data_specs()
    space.validate(data)
    return OrderedDict()
'Get the data_specs describing the data for get_monitoring_channels. This implementation returns an empty data_specs, appropriate for when no monitoring channels are defined, or when none of the channels actually need data (for instance, if they only monitor functions of the model\'s parameters). Returns data_specs : TODO WRITEME TODO WRITEME'
def get_monitoring_data_specs(self):
return (NullSpace(), '')
'Sets the batch size used by the model. Parameters batch_size : int If None, allows the model to use any batch size.'
def set_batch_size(self, batch_size):
pass
'Returns the weights (of the first layer if more than one layer is present). Returns weights : ndarray Returns any matrix that is analogous to the weights of the first layer of an MLP, such as the dictionary of a sparse coding model. This implementation raises NotImplementedError. For models where this method is not conceptually applicable, do not override it. Format should be compatible with the return value of self.get_weights_format.'
def get_weights(self):
raise NotImplementedError((str(type(self)) + ' does not implement get_weights (perhaps by design)'))
'Returns a description of how to interpret the return value of `get_weights`. Returns format : tuple Either (\'v\', \'h\') or (\'h\', \'v\'). (\'v\', \'h\') means self.get_weights returns a matrix of shape (num visible units, num hidden units), while (\'h\', \'v\') means it returns the transpose of this.'
def get_weights_format(self):
return ('v', 'h')
'Returns a topological view of the weights. Returns weights : ndarray Same as the return value of `get_weights` but formatted as a 4D tensor with the axes being (hidden units, rows, columns, channels). Only applicable for models where the weights can be viewed as 2D-multichannel, and the number of channels is either 1 or 3 (because they will be visualized as grayscale or RGB color).'
def get_weights_topo(self):
raise NotImplementedError((str(type(self)) + ' does not implement get_weights_topo (perhaps by design)'))
'Compute a "score function" for this model, if this model has probabilistic semantics. Parameters V : tensor_like, 2-dimensional A batch of i.i.d. examples with examples indexed along the first axis and features along the second. This is data on which the monitoring quantities will be calculated (e.g., a validation set). Returns score : tensor_like The gradient of the negative log probability of the model on the given data. Notes If the model implements a probability distribution on R^n, this method should return the gradient of the log probability of the batch with respect to V, or raise an exception explaining why this is not possible.'
def score(self, V):
return T.grad((- self.free_energy(V).sum()), V)
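As a concrete instance, for the diagonal-Gaussian free energy defined earlier in this section the score is -beta * (v - mu); an illustrative finite-difference check (assumed names, NumPy only):

import numpy as np

rng = np.random.RandomState(0)
mu = rng.randn(3)
beta = rng.rand(3) + 0.5
v = rng.randn(3)

def free_energy(x):
    return 0.5 * np.dot((x - mu) ** 2, beta)

eps = 1e-6
numeric = np.array([(free_energy(v + eps * np.eye(3)[i]) -
                     free_energy(v - eps * np.eye(3)[i])) / (2 * eps)
                    for i in range(3)])
analytic_score = -beta * (v - mu)               # gradient of -free_energy
assert np.allclose(-numeric, analytic_score, atol=1e-5)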
'Specify how to rescale the learning rate on each parameter. Returns lr_scalers : OrderedDict A dictionary mapping the parameters of the model to floats. The learning rate will be multiplied by the float for each parameter. If a parameter does not appear in the dictionary, it will use the global learning rate with no scaling.'
def get_lr_scalers(self):
return OrderedDict()
'Returns true if the model overrides censor_updates. (It shouldn\'t do so because it\'s deprecated, and we have to take special action to handle this case)'
def _overrides_censor_updates(self):
return (type(self).censor_updates != Model.censor_updates)
'Deprecated method. Callers should call modify_updates instead. Subclasses should override _modify_updates instead. This method may be removed on or after 2015-05-25. Parameters updates : dict A dictionary mapping shared variables to symbolic values they will be updated to.'
def censor_updates(self, updates):
raise TypeError('Model.censor_updates has been replaced by Model.modify_updates.')
'Modifies the parameters before a learning update is applied. Behavior is defined by subclass\'s implementation of _modify_updates and any ModelExtension\'s implementation of post_modify_updates. Parameters updates : dict A dictionary mapping shared variables to symbolic values they will be updated to Notes For example, if a given parameter is not meant to be learned, a subclass or extension should remove it from the dictionary. If a parameter has a restricted range, e.g., if it is the precision of a normal distribution, a subclass or extension should clip its update to that range. If a parameter has any other special properties, its updates should be modified to respect that here, e.g. a matrix that must be orthogonal should have its update value modified to be orthogonal here. This is the main mechanism used to make sure that generic training algorithms such as those found in pylearn2.training_algorithms respect the specific properties of the models passed to them.'
def modify_updates(self, updates):
    self._modify_updates(updates)
    self._ensure_extensions()
    for extension in self.extensions:
        extension.post_modify_updates(updates, self)
'Subclasses may override this method to add functionality to modify_updates. Parameters updates : dict A dictionary mapping shared variables to symbolic values they will be updated to.'
def _modify_updates(self, updates):
self._disallow_censor_updates()
'Returns an instance of pylearn2.space.Space describing the format of the vector space that the model operates on (this is a generalization of get_input_dim)'
def get_input_space(self):
return self.input_space