Please provide a description of the function:def gumbel_softmax_discrete_bottleneck(x, bottleneck_bits, beta=0.25, decay=0.999, epsilon=1e-5, temperature_warmup_steps=150000, hard=False, summary=True): bottleneck_size = 2**bottleneck_bits x_shape = common_layers.shape_list(x) hidden_size = x_shape[-1] means, ema_means, ema_count = get_vq_codebook(bottleneck_size, hidden_size) x = tf.reshape(x, [-1, hidden_size]) bottleneck_size = common_layers.shape_list(means)[0] x_norm_sq = tf.reduce_sum(tf.square(x), axis=-1, keepdims=True) means_norm_sq = tf.reduce_sum(tf.square(means), axis=-1, keepdims=True) scalar_prod = tf.matmul(x, means, transpose_b=True) dist = x_norm_sq + tf.transpose(means_norm_sq) - 2 * scalar_prod class_probs = tf.nn.softmax(dist) log_class_probs = tf.nn.log_softmax(dist) gumbel_samples = gumbel_sample(common_layers.shape_list(dist)) steps = temperature_warmup_steps gumbel_samples *= common_layers.inverse_exp_decay(steps // 5) * 0.5 temperature = 1.2 - common_layers.inverse_lin_decay(steps) # 10% of the time keep reasonably high temperature to keep learning. temperature = tf.cond( tf.less(tf.random_uniform([]), 0.9), lambda: temperature, lambda: tf.random_uniform([], minval=0.5, maxval=1.0)) gumbel_softmax_samples = tf.nn.softmax( (log_class_probs + gumbel_samples) / temperature) # Calculate KL between q and a uniform prior. kl = tf.reduce_sum( class_probs * (log_class_probs - tf.log(1.0 / bottleneck_size)), -1) if summary: tf.summary.histogram("KL", tf.reshape(kl, [-1])) # Straight-through gradient estimation when we're using hard assignments. if hard: x_means_idx = tf.reshape(tf.argmax(gumbel_softmax_samples, axis=-1), [-1]) x_means_hot = tf.one_hot(x_means_idx, bottleneck_size) x_means_assignments = gumbel_softmax_samples + tf.stop_gradient( x_means_hot - gumbel_softmax_samples) else: x_means_assignments = gumbel_softmax_samples x_means_assignments_flat = tf.reshape(x_means_assignments, [-1, bottleneck_size]) x_means = tf.matmul(x_means_assignments_flat, means) commitment_loss = tf.reduce_mean( tf.squared_difference(x, tf.stop_gradient(x_means))) # Update the ema variables. updated_ema_count = moving_averages.assign_moving_average( ema_count, tf.reduce_sum( tf.reshape(x_means_assignments, shape=[-1, bottleneck_size]), axis=0), decay, zero_debias=False) dw = tf.matmul(x_means_assignments, x, transpose_a=True) updated_ema_means = tf.identity( moving_averages.assign_moving_average( ema_means, dw, decay, zero_debias=False)) n = tf.reduce_sum(updated_ema_count, axis=-1, keepdims=True) updated_ema_count = ( (updated_ema_count + epsilon) / (n + bottleneck_size * epsilon) * n) updated_ema_means /= tf.expand_dims(updated_ema_count, axis=-1) with tf.control_dependencies([commitment_loss]): update_means = means.assign(updated_ema_means) with tf.control_dependencies([update_means]): loss = beta * commitment_loss # Add KL loss. loss += tf.reduce_mean(kl) x_means_assignments = tf.reshape(x_means_assignments, x_shape[:-1] + [bottleneck_size]) return x_means_assignments, loss
[ "VQ-VAE using Gumbel-Softmax.\n\n Different from `gumbel_softmax()` function as\n this function calculates the KL by using the discrete entropy\n instead of taking the argmax, and it also uses an exponential moving average\n to update the codebook while the `gumbel_softmax()` function includes no\n codebook update.\n\n Args:\n x: A `float`-like `Tensor` containing the latent vectors to be compared to\n the codebook, whose squared difference is used as the Gumbel-Softmax\n logits.\n bottleneck_bits: An `int` that sets the size of the bottleneck in `log_2`.\n beta: Beta factor for commitment loss (Default: 0.25).\n decay: Decay factor for exponential moving average (Default: 0.999).\n epsilon: Small value to avoid dividing by zero in EMA update\n (Default: 1e-5).\n temperature_warmup_steps: Number of steps it takes to decay temperature to 0\n (Default: 150000).\n hard: When `True`, we use hard Gumbel-Softmax samples and force\n discrete latents by taking the argmax. When `False`, we use soft samples,\n which we treat as codebook weights (Default: False).\n summary: When `True`, we save histogram summaries of the KL term (Default:\n True).\n\n Returns:\n x_means_assignments: A `float`-like `Tensor` containing the codebook\n assignments. When `hard == True`, this is one-hot, containing the arg-max\n of the Gumbel-Softmax samples (and we use the straightthrough gradient).\n Otherwise, it contains the Gumbel-Softmax samples exactly, which are\n values from the `(K-1)`-simplex where `K` is the bottleneck size.\n loss: The loss, which is the sum of the KL between the Gumbel-Softmax and\n the uniform prior and the commitment loss multiplied by the beta factor.\n We approximate the KL by using the entropy of a categorical distribution\n instead of the Gumbel Softmax.\n\n " ]
Please provide a description of the function:def tanh_discrete_bottleneck(x, bottleneck_bits, bottleneck_noise, discretize_warmup_steps, mode): x = tf.layers.dense(x, bottleneck_bits, name="tanh_discrete_bottleneck") d0 = tf.stop_gradient(2.0 * tf.to_float(tf.less(0.0, x))) - 1.0 if mode == tf.estimator.ModeKeys.TRAIN: x += tf.truncated_normal( common_layers.shape_list(x), mean=0.0, stddev=0.2) x = tf.tanh(x) d = x + tf.stop_gradient(2.0 * tf.to_float(tf.less(0.0, x)) - 1.0 - x) if mode == tf.estimator.ModeKeys.TRAIN: noise = tf.random_uniform(common_layers.shape_list(x)) noise = 2.0 * tf.to_float(tf.less(bottleneck_noise, noise)) - 1.0 d *= noise d = common_layers.mix(d, x, discretize_warmup_steps, mode == tf.estimator.ModeKeys.TRAIN) return d, d0
[ "Simple discretization through tanh, flip bottleneck_noise many bits." ]
Please provide a description of the function:def tanh_discrete_unbottleneck(x, hidden_size): x = tf.layers.dense(x, hidden_size, name="tanh_discrete_unbottleneck") return x
[ "Simple un-discretization from tanh." ]
Please provide a description of the function:def isemhash_bottleneck(x, bottleneck_bits, bottleneck_noise, discretize_warmup_steps, mode, isemhash_noise_dev=0.5, isemhash_mix_prob=0.5): with tf.variable_scope("isemhash_bottleneck"): x = tf.layers.dense(x, bottleneck_bits, name="dense") y = common_layers.saturating_sigmoid(x) if isemhash_noise_dev > 0 and mode == tf.estimator.ModeKeys.TRAIN: noise = tf.truncated_normal( common_layers.shape_list(x), mean=0.0, stddev=isemhash_noise_dev) y = common_layers.saturating_sigmoid(x + noise) d = tf.to_float(tf.less(0.5, y)) + y - tf.stop_gradient(y) d = 2.0 * d - 1.0 # Move from [0, 1] to [-1, 1]. if mode == tf.estimator.ModeKeys.TRAIN: # Flip some bits. noise = tf.random_uniform(common_layers.shape_list(x)) noise = 2.0 * tf.to_float(tf.less(bottleneck_noise, noise)) - 1.0 d *= noise d = common_layers.mix( d, 2.0 * y - 1.0, discretize_warmup_steps, mode == tf.estimator.ModeKeys.TRAIN, max_prob=isemhash_mix_prob) return d, 0.0
[ "Improved semantic hashing bottleneck." ]
Please provide a description of the function:def isemhash_unbottleneck(x, hidden_size, isemhash_filter_size_multiplier=1.0): filter_size = int(hidden_size * isemhash_filter_size_multiplier) x = 0.5 * (x - 1.0) # Move from [-1, 1] to [0, 1]. with tf.variable_scope("isemhash_unbottleneck"): h1a = tf.layers.dense(x, filter_size, name="hidden1a") h1b = tf.layers.dense(1.0 - x, filter_size, name="hidden1b") h2 = tf.layers.dense(tf.nn.relu(h1a + h1b), filter_size, name="hidden2") return tf.layers.dense(tf.nn.relu(h2), hidden_size, name="final")
[ "Improved semantic hashing un-bottleneck." ]
Please provide a description of the function:def parametrized_bottleneck(x, hparams): if hparams.bottleneck_kind == "tanh_discrete": d, _ = tanh_discrete_bottleneck( x, hparams.bottleneck_bits, hparams.bottleneck_noise * 0.5, hparams.discretize_warmup_steps, hparams.mode) return d, 0.0 if hparams.bottleneck_kind == "isemhash": return isemhash_bottleneck( x, hparams.bottleneck_bits, hparams.bottleneck_noise * 0.5, hparams.discretize_warmup_steps, hparams.mode, hparams.isemhash_noise_dev, hparams.isemhash_mix_prob) if hparams.bottleneck_kind == "vq": return vq_discrete_bottleneck(x, hparams.bottleneck_bits, hparams.vq_beta, hparams.vq_decay, hparams.vq_epsilon) if hparams.bottleneck_kind == "em": return vq_discrete_bottleneck( x, hparams.bottleneck_bits, hparams.vq_beta, hparams.vq_decay, hparams.vq_epsilon, soft_em=True, num_samples=hparams.vq_num_samples) if hparams.bottleneck_kind == "gumbel_softmax": return gumbel_softmax_discrete_bottleneck( x, hparams.bottleneck_bits, hparams.vq_beta, hparams.vq_decay, hparams.vq_epsilon, hparams.temperature_warmup_steps, hard=False, summary=True) raise ValueError( "Unsupported hparams.bottleneck_kind %s" % hparams.bottleneck_kind)
[ "Meta-function calling all the above bottlenecks with hparams." ]
Please provide a description of the function:def parametrized_unbottleneck(x, hidden_size, hparams): if hparams.bottleneck_kind == "tanh_discrete": return tanh_discrete_unbottleneck(x, hidden_size) if hparams.bottleneck_kind == "isemhash": return isemhash_unbottleneck(x, hidden_size, hparams.isemhash_filter_size_multiplier) if hparams.bottleneck_kind in ["vq", "em", "gumbel_softmax"]: return vq_discrete_unbottleneck(x, hidden_size) raise ValueError( "Unsupported hparams.bottleneck_kind %s" % hparams.bottleneck_kind)
[ "Meta-function calling all the above un-bottlenecks with hparams." ]
Please provide a description of the function:def iaf_hparams(hidden_size=512, filter_size=4096): hparams = common_hparams.basic_params1() # Attention hyperparameters. hparams.hidden_size = hidden_size hparams.add_hparam("attention_key_channels", None) hparams.add_hparam("attention_value_channels", None) hparams.add_hparam("num_heads", 4) hparams.add_hparam("attention_dropout", 0.1) hparams.add_hparam("shared_rel", False) hparams.add_hparam("block_width", 1) hparams.add_hparam("block_length", 1) hparams.add_hparam("q_filter_width", 1) hparams.add_hparam("kv_filter_width", 1) # Preprocessing and postprocesing hyperparameters. hparams.layer_preprocess_sequence = "n" hparams.layer_prepostprocess_dropout = 0.1 hparams.norm_type = "layer" hparams.norm_epsilon = 1e-06 hparams.layer_prepostprocess_dropout_broadcast_dims = "" hparams.layer_postprocess_sequence = "da" # Feedforward neural network hyperparameters. hparams.add_hparam("filter_size", filter_size) hparams.add_hparam("ffn_layer", "conv_hidden_relu") hparams.add_hparam("relu_dropout", 0.1) return hparams
[ "Create hyperpameters for inverse autoregressive flows.\n\n Args:\n hidden_size: Width of attention layers and neural network output layer.\n filter_size: Hidden layer width for neural network.\n\n Returns:\n hparams: Hyperpameters with basic presets for inverse autoregressive flows.\n " ]
Please provide a description of the function:def _original_vocab(tmp_dir): vocab_url = ("http://download.tensorflow.org/models/LM_LSTM_CNN/" "vocab-2016-09-10.txt") vocab_filename = os.path.basename(vocab_url + ".en") vocab_filepath = os.path.join(tmp_dir, vocab_filename) if not os.path.exists(vocab_filepath): generator_utils.maybe_download(tmp_dir, vocab_filename, vocab_url) return set([ text_encoder.native_to_unicode(l.strip()) for l in tf.gfile.Open(vocab_filepath) ])
[ "Returns a set containing the original vocabulary.\n\n This is important for comparing with published results.\n\n Args:\n tmp_dir: directory containing dataset.\n\n Returns:\n a set of strings\n " ]
Please provide a description of the function:def _replace_oov(original_vocab, line): return u" ".join( [word if word in original_vocab else u"UNK" for word in line.split()])
[ "Replace out-of-vocab words with \"UNK\".\n\n This maintains compatibility with published results.\n\n Args:\n original_vocab: a set of strings (The standard vocabulary for the dataset)\n line: a unicode string - a space-delimited sequence of words.\n\n Returns:\n a unicode string - a space-delimited sequence of words.\n " ]
Please provide a description of the function:def _maybe_download_corpus(tmp_dir): corpus_url = ("http://www.statmt.org/lm-benchmark/" "1-billion-word-language-modeling-benchmark-r13output.tar.gz") corpus_filename = os.path.basename(corpus_url) corpus_filepath = os.path.join(tmp_dir, corpus_filename) if not os.path.exists(corpus_filepath): generator_utils.maybe_download(tmp_dir, corpus_filename, corpus_url) with tarfile.open(corpus_filepath, "r:gz") as corpus_tar: corpus_tar.extractall(tmp_dir)
[ "Download and unpack the corpus.\n\n Args:\n tmp_dir: directory containing dataset.\n " ]
Please provide a description of the function:def lossfn(real_input, fake_input, compress, hparams, lsgan, name): eps = 1e-12 with tf.variable_scope(name): d1 = discriminator(real_input, compress, hparams, "discriminator") d2 = discriminator(fake_input, compress, hparams, "discriminator", reuse=True) if lsgan: dloss = tf.reduce_mean( tf.squared_difference(d1, 0.9)) + tf.reduce_mean(tf.square(d2)) gloss = tf.reduce_mean(tf.squared_difference(d2, 0.9)) loss = (dloss + gloss)/2 else: # cross_entropy dloss = -tf.reduce_mean( tf.log(d1 + eps)) - tf.reduce_mean(tf.log1p(eps - d2)) gloss = -tf.reduce_mean(tf.log(d2 + eps)) loss = (dloss + gloss)/2 return loss
[ "Loss function." ]
Please provide a description of the function:def cycle_gan_internal(inputs, targets, _, hparams): with tf.variable_scope("cycle_gan"): # Embed inputs and targets. inputs_orig, targets_orig = tf.to_int32(inputs), tf.to_int32(targets) inputs = common_layers.embedding( inputs_orig, hparams.vocab_size, hparams.hidden_size, "embed") targets = common_layers.embedding( targets_orig, hparams.vocab_size, hparams.hidden_size, "embed", reuse=True) x, _ = split_on_batch(inputs) _, y = split_on_batch(targets) # Y --> X y_fake = generator(y, hparams, "Fy", reuse=False) y_to_x_loss = lossfn(y, y_fake, True, hparams, True, "YtoX") # X --> Y x_fake = generator(x, hparams, "Gx", reuse=False) x_to_y_loss = lossfn(y, x_fake, True, hparams, True, "XtoY") # Cycle-Consistency y_fake_ = generator(y_fake, hparams, "Gx", reuse=True) x_fake_ = generator(x_fake, hparams, "Fy", reuse=True) x_to_x_loss = hparams.cycle_loss_multiplier1 * tf.reduce_mean( tf.abs(x_fake_ - x)) y_to_y_loss = hparams.cycle_loss_multiplier2 * tf.reduce_mean( tf.abs(y_fake_ - y)) cycloss = x_to_x_loss + y_to_y_loss sample_generated = generator(inputs, hparams, "Gx", reuse=True) sample_generated = tf.layers.dense( sample_generated, hparams.vocab_size, name="softmax", reuse=None) sample_generated = tf.stop_gradient( tf.expand_dims(sample_generated, axis=2)) losses = {"cycloss": cycloss, "y_to_x_loss": y_to_x_loss, "x_to_y_loss": x_to_y_loss} return sample_generated, losses
[ "Cycle GAN, main step used for training." ]
Please provide a description of the function:def cycle_gan_small(): hparams = transformer_vae.transformer_ae_small() hparams.batch_size = 2048 hparams.bottom = { "inputs": modalities.identity_bottom, "targets": modalities.identity_bottom, } hparams.top = { "targets": modalities.identity_top, } hparams.weight_decay = 3.0 hparams.learning_rate = 0.05 hparams.kl_warmup_steps = 5000 hparams.learning_rate_warmup_steps = 3000 hparams.add_hparam("vocab_size", 66) # Vocabulary size, need to set here. hparams.add_hparam("cycle_loss_multiplier1", 10.0) hparams.add_hparam("cycle_loss_multiplier2", 10.0) return hparams
[ "Set of hyperparameters." ]
Please provide a description of the function:def decode_hparams(overrides=""): hparams = decoding.decode_hparams() # Number of interpolations between [0.0, 1.0]. hparams.add_hparam("num_interp", 11) # Which level(s) to interpolate. hparams.add_hparam("level_interp", [0, 1, 2]) # "all" or "ranked": interpolate all channels or only a ranked subset. hparams.add_hparam("channel_interp", "all") # Interpolate channels ranked according to squared L2 norm. hparams.add_hparam("rank_interp", 1) # Whether or not to save frames as summaries. hparams.add_hparam("save_frames", True) hparams.parse(overrides) return hparams
[ "Hparams for decoding." ]
Please provide a description of the function:def preprocess_frame(frame): # Normalize from [0.0, 1.0] -> [-0.5, 0.5] frame = common_layers.convert_rgb_to_real(frame) frame = frame - 0.5 frame, _ = glow_ops.uniform_binning_correction(frame) return frame
[ "Preprocess frame.\n\n 1. Converts [0, 255] to [-0.5, 0.5]\n 2. Adds uniform noise.\n\n Args:\n frame: 3-D Tensor representing pixels.\n Returns:\n frame: 3-D Tensor with values in between [-0.5, 0.5]\n " ]
Please provide a description of the function:def frame_to_latents(frame, hparams): # Preprocess frame = preprocess_frame(frame) # Encode [X_t] to [z^1_t, z^2_t .. z^l_t] glow_vals = glow_ops.encoder_decoder( "codec", frame, hparams, eps=None, reverse=False) z_top, _, level_eps, _, _ = glow_vals return z_top, level_eps
[ "Encode frames to latents." ]
Please provide a description of the function:def latents_to_frames(z_top_interp, level_eps_interp, hparams): # Decode [z^1_t, z^2_t .. z^l_t] to [X_t] images, _, _, _ = glow_ops.encoder_decoder( "codec", z_top_interp, hparams, eps=level_eps_interp, reverse=True) images = glow_ops.postprocess(images) return images
[ "Decodes latents to frames." ]
Please provide a description of the function:def interpolate(features, hparams, decode_hp): inputs, targets = features["inputs"], features["targets"] inputs = tf.unstack(inputs, axis=1) targets = tf.unstack(targets, axis=1) coeffs = np.linspace(0.0, 1.0, decode_hp.num_interp) # (X_1, X_t) -> (z_1, z_t) first_frame, last_frame = inputs[0], targets[-1] first_top_z, first_level_eps = frame_to_latents(first_frame, hparams) last_top_z, last_level_eps = frame_to_latents(last_frame, hparams) # Interpolate latents at all levels. first_lats = first_level_eps + [first_top_z] last_lats = last_level_eps + [last_top_z] interp_lats = [] lat_iterator = enumerate(zip(first_lats, last_lats)) for level_ind, (first_lat, last_lat) in lat_iterator: if level_ind in decode_hp.level_interp: if decode_hp.channel_interp == "all": interp_lat = glow_ops.linear_interpolate(first_lat, last_lat, coeffs) else: interp_lat = glow_ops.linear_interpolate_rank( first_lat, last_lat, coeffs, decode_hp.rank_interp) else: interp_lat = tf.tile(first_lat, [decode_hp.num_interp, 1, 1, 1]) interp_lats.append(interp_lat) level_eps_interp = interp_lats[:hparams.n_levels-1] z_top_interp = interp_lats[-1] images = latents_to_frames(z_top_interp, level_eps_interp, hparams) return images, first_frame, last_frame
[ "Interpolate between the first input frame and last target frame.\n\n Args:\n features: dict of tensors\n hparams: HParams, training hparams.\n decode_hp: HParams, decode hparams.\n Returns:\n images: interpolated images, 4-D Tensor, shape=(num_interp, H, W, C)\n first_frame: image, 3-D Tensor, shape=(1, H, W, C)\n last_frame: image, 3-D Tensor, shape=(1, H, W, C)\n " ]
Please provide a description of the function:def get_summaries_log_dir(decode_hp, output_dir, dataset_split): child_dir = decode_hp.summaries_log_dir level_dir = "".join([str(level) for level in decode_hp.level_interp]) if decode_hp.channel_interp == "all": rank_dir = "all" else: rank_dir = "rank_%d" % decode_hp.rank_interp child_dir = "%s/%s_%s" % (child_dir, level_dir, rank_dir) if dataset_split is not None: child_dir += "_{}".format(dataset_split) return os.path.join(output_dir, child_dir)
[ "Get nested summaries_log_dir based on decode_hp." ]
Please provide a description of the function:def interpolations_to_summary(sample_ind, interpolations, first_frame, last_frame, hparams, decode_hp): parent_tag = "sample_%d" % sample_ind frame_shape = hparams.problem.frame_shape interp_shape = [hparams.batch_size, decode_hp.num_interp] + frame_shape interpolations = np.reshape(interpolations, interp_shape) interp_tag = "%s/interp/%s" % (parent_tag, decode_hp.channel_interp) if decode_hp.channel_interp == "ranked": interp_tag = "%s/rank_%d" % (interp_tag, decode_hp.rank_interp) summaries, _ = common_video.py_gif_summary( interp_tag, interpolations, return_summary_value=True, max_outputs=decode_hp.max_display_outputs, fps=decode_hp.frames_per_second) if decode_hp.save_frames: first_frame_summ = image_utils.image_to_tf_summary_value( first_frame, "%s/first" % parent_tag) last_frame_summ = image_utils.image_to_tf_summary_value( last_frame, "%s/last" % parent_tag) summaries.append(first_frame_summ) summaries.append(last_frame_summ) return summaries
[ "Converts interpolated frames into tf summaries.\n\n The summaries consists of:\n 1. Image summary corresponding to the first frame.\n 2. Image summary corresponding to the last frame.\n 3. The interpolated frames as a gif summary.\n\n Args:\n sample_ind: int\n interpolations: Numpy array, shape=(num_interp, H, W, 3)\n first_frame: Numpy array, shape=(HWC)\n last_frame: Numpy array, shape=(HWC)\n hparams: HParams, train hparams\n decode_hp: HParams, decode hparams\n Returns:\n summaries: list of tf Summary Values.\n " ]
Please provide a description of the function:def next_frame_epva(): hparams = basic_deterministic_params.next_frame_basic_deterministic() hparams.video_num_input_frames = 4 hparams.video_num_target_frames = 4 hparams.bottom = { "inputs": modalities.video_raw_bottom, "targets": modalities.video_raw_targets_bottom, } hparams.loss = { "targets": modalities.video_l2_raw_loss, } hparams.top = { "targets": modalities.video_raw_top, } hparams.learning_rate_schedule = "constant" hparams.learning_rate_constant = 1e-05 hparams.batch_size = 2 hparams.clip_grad_norm = 0.01 # TODO(msaffar): disentangle EPVA from SV2P hparams.add_hparam("reward_prediction", False) hparams.add_hparam("clip_pixel_values", True) hparams.add_hparam("context_frames", 5) hparams.add_hparam("enc_learning_rate", 1e-5) hparams.add_hparam("enc_pred_loss_scale", 0.1) hparams.add_hparam("enc_pred_loss_scale_delay", 6e5) hparams.add_hparam("enc_size", 64) hparams.add_hparam("enc_keep_prob", .65) hparams.add_hparam("enc_pred_use_l1_loss", False) hparams.add_hparam("enc_pred_use_l2norm", False) hparams.add_hparam("van_learning_rate", 3e-5) hparams.add_hparam("van_keep_prob", .9) hparams.add_hparam("sequence_length", 64) hparams.add_hparam("skip_num", 2) hparams.add_hparam("pred_noise_std", 0) hparams.add_hparam("lstm_state_noise_stddev", 0) return hparams
[ "EPVA hparams." ]
Please provide a description of the function:def _create_slots(self, var_list): super(MultistepAdamOptimizer, self)._create_slots(var_list) first_var = min(var_list, key=lambda x: x.name) self._create_non_slot_variable(initial_value=0 if self._n == 1 else 1, name="iter", colocate_with=first_var) for v in var_list: self._zeros_slot(v, "grad_acc", self._name)
[ "Create slot variables for Adam with accumulated gradients." ]
Please provide a description of the function:def _apply_cond(self, apply_fn, grad, var, *args, **kwargs): grad_acc = self.get_slot(var, "grad_acc") def apply_adam(grad_acc, apply_fn, grad, var, *args, **kwargs): total_grad = (grad_acc + grad) / tf.cast(self._n_t, grad.dtype) adam_op = apply_fn(total_grad, var, *args, **kwargs) with tf.control_dependencies([adam_op]): grad_acc_to_zero_op = grad_acc.assign(tf.zeros_like(grad_acc), use_locking=self._use_locking) return tf.group(adam_op, grad_acc_to_zero_op) def accumulate_gradient(grad_acc, grad): assign_op = tf.assign_add(grad_acc, grad, use_locking=self._use_locking) return tf.group(assign_op) # Strip return value return tf.cond( tf.equal(self._get_iter_variable(), 0), lambda: apply_adam(grad_acc, apply_fn, grad, var, *args, **kwargs), lambda: accumulate_gradient(grad_acc, grad))
[ "Apply conditionally if counter is zero." ]
Please provide a description of the function:def _finish(self, update_ops, name_scope): iter_ = self._get_iter_variable() beta1_power, beta2_power = self._get_beta_accumulators() with tf.control_dependencies(update_ops): with tf.colocate_with(iter_): def update_beta_op(): update_beta1 = beta1_power.assign( beta1_power * self._beta1_t, use_locking=self._use_locking) update_beta2 = beta2_power.assign( beta2_power * self._beta2_t, use_locking=self._use_locking) return tf.group(update_beta1, update_beta2) maybe_update_beta = tf.cond( tf.equal(iter_, 0), update_beta_op, tf.no_op) with tf.control_dependencies([maybe_update_beta]): update_iter = iter_.assign(tf.mod(iter_ + 1, self._n_t), use_locking=self._use_locking) return tf.group( *update_ops + [update_iter, maybe_update_beta], name=name_scope)
[ "Updates beta_power variables every n batches and incrs counter." ]
Please provide a description of the function:def transformer_revnet_encoder(encoder_input, encoder_self_attention_bias, hparams, name="encoder"): def f(x, side_input): encoder_self_attention_bias = side_input[0] old_hid_size = hparams.hidden_size hparams.hidden_size = old_hid_size // 2 with tf.variable_scope("self_attention"): y = common_attention.multihead_attention( common_layers.layer_preprocess( x, hparams), None, encoder_self_attention_bias, hparams.attention_key_channels or hparams.hidden_size, hparams.attention_value_channels or hparams.hidden_size, hparams.hidden_size, hparams.num_heads, hparams.attention_dropout) y = common_layers.layer_postprocess(x, y, hparams) hparams.hidden_size = old_hid_size return y def g(x): old_hid_size = hparams.hidden_size hparams.hidden_size = old_hid_size // 2 with tf.variable_scope("ffn"): y = transformer.transformer_ffn_layer( common_layers.layer_preprocess(x, hparams), hparams) y = common_layers.layer_postprocess(x, y, hparams) hparams.hidden_size = old_hid_size return y x1, x2 = tf.split(encoder_input, 2, axis=-1) with tf.variable_scope(name): y1, y2 = tf.contrib.layers.rev_block( x1, x2, f, g, num_layers=hparams.num_hidden_layers, f_side_input=[encoder_self_attention_bias], is_training=hparams.mode == tf.estimator.ModeKeys.TRAIN) y = tf.concat([y1, y2], axis=-1) return common_layers.layer_preprocess(y, hparams)
[ "A stack of transformer layers.\n\n Args:\n encoder_input: a Tensor\n encoder_self_attention_bias: bias Tensor for self-attention\n (see common_attention.attention_bias())\n hparams: hyperparameters for model\n name: a string\n\n Returns:\n y: a Tensors\n ", "f(x) for reversible layer, self-attention layer.", "g(x) for reversible layer, feed-forward layer." ]
Please provide a description of the function:def transformer_revnet_decoder(decoder_input, encoder_output, decoder_self_attention_bias, encoder_decoder_attention_bias, hparams, name="decoder"): def f(x, side_input): decoder_self_attention_bias = side_input[0] encoder_decoder_attention_bias = side_input[1] encoder_output = side_input[2] old_hid_size = hparams.hidden_size hparams.hidden_size = old_hid_size // 2 with tf.variable_scope("self_attention"): y = common_attention.multihead_attention( common_layers.layer_preprocess( x, hparams), None, decoder_self_attention_bias, hparams.attention_key_channels or hparams.hidden_size, hparams.attention_value_channels or hparams.hidden_size, hparams.hidden_size, hparams.num_heads, hparams.attention_dropout) y = common_layers.layer_postprocess(x, y, hparams) if encoder_output is not None: with tf.variable_scope("encdec_attention"): y = common_attention.multihead_attention( common_layers.layer_preprocess( x, hparams), encoder_output, encoder_decoder_attention_bias, hparams.attention_key_channels or hparams.hidden_size, hparams.attention_value_channels or hparams.hidden_size, hparams.hidden_size, hparams.num_heads, hparams.attention_dropout) y = common_layers.layer_postprocess(x, y, hparams) hparams.hidden_size = old_hid_size return y def g(x): old_hid_size = hparams.hidden_size hparams.hidden_size = old_hid_size // 2 with tf.variable_scope("ffn"): y = transformer.transformer_ffn_layer( common_layers.layer_preprocess(x, hparams), hparams) y = common_layers.layer_postprocess(x, y, hparams) hparams.hidden_size = old_hid_size return y x1, x2 = tf.split(decoder_input, 2, axis=-1) with tf.variable_scope(name): y1, y2 = tf.contrib.layers.rev_block( x1, x2, f, g, num_layers=hparams.num_hidden_layers, f_side_input=[ decoder_self_attention_bias, encoder_decoder_attention_bias, encoder_output ], is_training=hparams.mode == tf.estimator.ModeKeys.TRAIN) y = tf.concat([y1, y2], axis=-1) return common_layers.layer_preprocess(y, hparams)
[ "A stack of transformer layers.\n\n Args:\n decoder_input: a Tensor\n encoder_output: a Tensor\n decoder_self_attention_bias: bias Tensor for self-attention\n (see common_attention.attention_bias())\n encoder_decoder_attention_bias: bias Tensor for encoder-decoder attention\n (see common_attention.attention_bias())\n hparams: hyperparameters for model\n name: a string\n\n Returns:\n y: a Tensors\n ", "f(x) for reversible layer, self-attention and enc-dec attention.", "g(x) for reversible layer, feed-forward layer." ]
Please provide a description of the function:def transformer_revnet_base(): hparams = transformer.transformer_big() # Use settings from transformer_n_da hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" hparams.learning_rate = 0.4 return hparams
[ "Base hparams for TransformerRevnet." ]
Please provide a description of the function:def transformer_revnet_big(): hparams = transformer_revnet_base() # The TransformerRevnet uses significantly less memory than the Transformer. # Increase batch size and model size. hparams.batch_size *= 2 hparams.hidden_size *= 2 hparams.num_heads *= 2 hparams.num_hidden_layers += 1 return hparams
[ "Base hparams for TransformerRevnet." ]
Please provide a description of the function:def data_parallelism_from_flags(daisy_chain_variables=True, all_workers=False): dp_arg_names = inspect.getargspec(data_parallelism).args blacklist = ["daisy_chain_variables", "all_workers"] kwargs = {} for arg in dp_arg_names: if arg in blacklist: continue kwargs[arg] = getattr(tf.flags.FLAGS, arg) return data_parallelism( daisy_chain_variables=daisy_chain_variables, all_workers=all_workers, **kwargs)
[ "Over which devices do we split each training batch.\n\n In old-fashioned async mode, we split the batch over all GPUs on the\n current worker.\n\n In sync mode, we split the batch over all the parameter server GPUs.\n\n This function returns an expert_utils.Parallelism object, which can be used\n to build the model. It is configured in a way that any variables created\n by `tf.get_variable` will be assigned to the parameter servers and shared\n between datashards.\n\n Args:\n daisy_chain_variables: whether to copy variables in a daisy chain on GPUs.\n all_workers: whether the devices are all async workers or just this one.\n\n Returns:\n a expert_utils.Parallelism.\n " ]
Please provide a description of the function:def data_parallelism(daisy_chain_variables=True, all_workers=False, ps_replicas=0, ps_job="/job:ps", ps_gpu=0, schedule="continuous_train_and_eval", sync=False, worker_gpu=1, worker_replicas=1, worker_id=0, gpu_order="", worker_job="/job:localhost", no_data_parallelism=False): tf.logging.info("schedule=%s" % schedule) tf.logging.info("worker_gpu=%s" % worker_gpu) tf.logging.info("sync=%s" % sync) def _ps_replicas(all_workers=False): if all_workers: return list(range(ps_replicas)) # Worker K will be using replicas {0,...n-1} + K*n if we have n replicas. num_replicas = ps_replicas // worker_replicas return [d + worker_id * num_replicas for d in range(num_replicas)] def _gpu_order(num_gpus): if gpu_order: ret = [int(s) for s in gpu_order.split(" ")] if len(ret) == num_gpus: return ret return list(range(num_gpus)) def _ps_gpus(all_workers=False): ps_gpus = [] for d in _ps_replicas(all_workers=all_workers): ps_gpus.extend([(d, gpu) for gpu in _gpu_order(ps_gpu)]) return ps_gpus def ps_devices(all_workers=False): if ps_replicas > 0: if ps_gpu > 0: return [ ps_job + "/task:%d/GPU:%d" % (d, gpu) for (d, gpu) in _ps_gpus(all_workers=all_workers) ] else: return [ ps_job + "/task:%d" % d for d in _ps_replicas(all_workers=all_workers) ] else: if worker_gpu > 0: return ["gpu:%d" % d for d in _gpu_order(worker_gpu)] else: return [""] def _replica_device_setter(worker_device): if ps_replicas == 0: return worker_device return tf.train.replica_device_setter( worker_device=worker_device, ps_tasks=ps_replicas, ps_device=ps_job + "/GPU:0" if ps_gpu > 0 else ps_job) is_single_machine = ps_replicas == 0 and worker_replicas == 1 if no_data_parallelism: datashard_devices = [""] caching_devices = None elif is_single_machine: tf.logging.warn( "Schedule=%s. Assuming that training is running on a single machine.", schedule) datashard_devices = ["gpu:%d" % d for d in _gpu_order(worker_gpu)] if worker_gpu < 1: datashard_devices += ["cpu:0"] caching_devices = None elif sync and ps_replicas > 0: # compute on ps datashard_devices = [ _replica_device_setter(d) for d in ps_devices(all_workers=all_workers) ] if ps_gpu > 0 and ps_replicas > 1: caching_devices = [ ps_job + "/task:%d/cpu:0" % d for (d, _) in _ps_gpus(all_workers=all_workers) ] else: caching_devices = None else: # compute on worker - this is either a single-worker setup or asynchronous # with parameter servers. if worker_gpu > 1: datashard_devices = [ _replica_device_setter(worker_job + "/GPU:%d" % d) for d in _gpu_order(worker_gpu) ] caching_devices = None else: datashard_devices = [_replica_device_setter(worker_job)] caching_devices = None tf.logging.info("datashard_devices: %s", datashard_devices) tf.logging.info("caching_devices: %s", caching_devices) tf.logging.info("ps_devices: %s", ps_devices(all_workers=all_workers)) return eu.Parallelism( datashard_devices, caching_devices=caching_devices, daisy_chain_variables=daisy_chain_variables, ps_devices=ps_devices(all_workers=all_workers))
[ "See data_parallelism_from_flags.", "List of ps devices (where to put the experts).\n\n Args:\n all_workers: whether the list is for all async workers or just this one.\n\n Returns:\n a list of device names\n " ]
Please provide a description of the function:def concat_generator(filename, up_threshold, low_threshold=10): txt = "" for line in tf.gfile.Open(filename): line = line.strip() if len(txt) + len(line) + 1 >= up_threshold: ret = txt txt = "" # We don't yield very short or very long parts to prevent noisy examples. if len(ret) > low_threshold and len(ret) < up_threshold: yield {"targets": ret} if not txt: txt = line else: txt = " ".join([txt, line])
[ "Generate concatenated lines from a file, up to up_threshold characters." ]
Please provide a description of the function:def mix_generators(generator_list): i = 0 l = len(generator_list) stopiters_seen = 0 while stopiters_seen <= l: try: yield six.next(generator_list[i % l]) i += 1 stopiters_seen = 0 except StopIteration: i += 1 stopiters_seen += 1
[ "Given python generators, generate from one, then from another, etc." ]
Please provide a description of the function:def compute_bleu_summaries(hook_args): decode_hparams = hook_args.decode_hparams if not (decode_hparams.decode_reference and decode_hparams.decode_to_file): return None values = [] bleu = 100 * bleu_hook.bleu_wrapper( decode_hparams.decode_reference, decode_hparams.decode_to_file) values.append(tf.Summary.Value(tag="BLEU", simple_value=bleu)) tf.logging.info("%s: BLEU = %6.2f" % (decode_hparams.decode_to_file, bleu)) if hook_args.hparams.mlperf_mode: current_step = decode_hparams.mlperf_decode_step mlperf_log.transformer_print( key=mlperf_log.EVAL_TARGET, value=decode_hparams.mlperf_threshold) mlperf_log.transformer_print( key=mlperf_log.EVAL_ACCURACY, value={ "epoch": max(current_step // decode_hparams.iterations_per_loop - 1, 0), "value": bleu }) mlperf_log.transformer_print(key=mlperf_log.EVAL_STOP) if bleu >= decode_hparams.mlperf_threshold: decode_hparams.set_hparam("mlperf_success", True) return values
[ "Compute BLEU core summaries using the decoder output.\n\n Args:\n hook_args: DecodeHookArgs namedtuple\n Returns:\n A list of tf.Summary values if hook_args.hparams contains the\n reference file and the translated file.\n " ]
Please provide a description of the function:def _preprocess_sgm(line, is_sgm): if not is_sgm: return line # In SGM files, remove <srcset ...>, <p>, <doc ...> lines. if line.startswith("<srcset") or line.startswith("</srcset"): return "" if line.startswith("<doc") or line.startswith("</doc"): return "" if line.startswith("<p>") or line.startswith("</p>"): return "" # Strip <seg> tags. line = line.strip() if line.startswith("<seg") and line.endswith("</seg>"): i = line.index(">") return line[i + 1:-6]
[ "Preprocessing to strip tags in SGM files." ]
Please provide a description of the function:def compile_data(tmp_dir, datasets, filename, datatypes_to_clean=None): datatypes_to_clean = datatypes_to_clean or [] filename = os.path.join(tmp_dir, filename) lang1_fname = filename + ".lang1" lang2_fname = filename + ".lang2" if tf.gfile.Exists(lang1_fname) and tf.gfile.Exists(lang2_fname): tf.logging.info("Skipping compile data, found files:\n%s\n%s", lang1_fname, lang2_fname) return filename with tf.gfile.GFile(lang1_fname, mode="w") as lang1_resfile: with tf.gfile.GFile(lang2_fname, mode="w") as lang2_resfile: for dataset in datasets: url = dataset[0] compressed_filename = os.path.basename(url) compressed_filepath = os.path.join(tmp_dir, compressed_filename) if url.startswith("http"): generator_utils.maybe_download(tmp_dir, compressed_filename, url) if dataset[1][0] == "tmx": cleaning_requested = "tmx" in datatypes_to_clean tmx_filename = os.path.join(tmp_dir, dataset[1][1]) if tmx_filename.endswith(".gz"): with gzip.open(tmx_filename, "rb") as tmx_file: _tmx_to_source_target(tmx_file, lang1_resfile, lang2_resfile, do_cleaning=cleaning_requested) else: with tf.gfile.Open(tmx_filename) as tmx_file: _tmx_to_source_target(tmx_file, lang1_resfile, lang2_resfile, do_cleaning=cleaning_requested) elif dataset[1][0] == "tsv": _, src_column, trg_column, glob_pattern = dataset[1] filenames = tf.gfile.Glob(os.path.join(tmp_dir, glob_pattern)) if not filenames: # Capture *.tgz and *.tar.gz too. mode = "r:gz" if compressed_filepath.endswith("gz") else "r" with tarfile.open(compressed_filepath, mode) as corpus_tar: corpus_tar.extractall(tmp_dir) filenames = tf.gfile.Glob(os.path.join(tmp_dir, glob_pattern)) for tsv_filename in filenames: if tsv_filename.endswith(".gz"): new_filename = tsv_filename.strip(".gz") generator_utils.gunzip_file(tsv_filename, new_filename) tsv_filename = new_filename with tf.gfile.Open(tsv_filename) as tsv_file: for line in tsv_file: if line and "\t" in line: parts = line.split("\t") source, target = parts[src_column], parts[trg_column] source, target = source.strip(), target.strip() clean_pairs = [(source, target)] if "tsv" in datatypes_to_clean: clean_pairs = cleaner_en_xx.clean_en_xx_pairs(clean_pairs) for source, target in clean_pairs: if source and target: lang1_resfile.write(source) lang1_resfile.write("\n") lang2_resfile.write(target) lang2_resfile.write("\n") else: lang1_filename, lang2_filename = dataset[1] lang1_filepath = os.path.join(tmp_dir, lang1_filename) lang2_filepath = os.path.join(tmp_dir, lang2_filename) is_sgm = ( lang1_filename.endswith("sgm") and lang2_filename.endswith("sgm")) if not (tf.gfile.Exists(lang1_filepath) and tf.gfile.Exists(lang2_filepath)): # For .tar.gz and .tgz files, we read compressed. mode = "r:gz" if compressed_filepath.endswith("gz") else "r" with tarfile.open(compressed_filepath, mode) as corpus_tar: corpus_tar.extractall(tmp_dir) if lang1_filepath.endswith(".gz"): new_filepath = lang1_filepath.strip(".gz") generator_utils.gunzip_file(lang1_filepath, new_filepath) lang1_filepath = new_filepath if lang2_filepath.endswith(".gz"): new_filepath = lang2_filepath.strip(".gz") generator_utils.gunzip_file(lang2_filepath, new_filepath) lang2_filepath = new_filepath for example in text_problems.text2text_txt_iterator( lang1_filepath, lang2_filepath): line1res = _preprocess_sgm(example["inputs"], is_sgm) line2res = _preprocess_sgm(example["targets"], is_sgm) clean_pairs = [(line1res, line2res)] if "txt" in datatypes_to_clean: clean_pairs = cleaner_en_xx.clean_en_xx_pairs(clean_pairs) for line1res, line2res in clean_pairs: if line1res and line2res: lang1_resfile.write(line1res) lang1_resfile.write("\n") lang2_resfile.write(line2res) lang2_resfile.write("\n") return filename
[ "Concatenates all `datasets` and saves to `filename`." ]
Please provide a description of the function:def get_or_create_vocab(self, data_dir, tmp_dir, force_get=False): # We assume that the vocab file is present in the data_dir directory where the # generated data will be stored. vocab_filepath = os.path.join(data_dir, self.vocab_filename) encoder = text_encoder.SubwordTextEncoder(vocab_filepath) return encoder
[ "Get vocab for distill problems." ]
Please provide a description of the function:def set_hparams_from_args(args): if not args: return hp_prefix = "--hp_" tf.logging.info("Found unparsed command-line arguments. Checking if any " "start with %s and interpreting those as hparams " "settings.", hp_prefix) pairs = [] i = 0 while i < len(args): arg = args[i] if arg.startswith(hp_prefix): pairs.append((arg[len(hp_prefix):], args[i+1])) i += 2 else: tf.logging.warn("Found unknown flag: %s", arg) i += 1 as_hparams = ",".join(["%s=%s" % (key, val) for key, val in pairs]) if FLAGS.hparams: as_hparams = "," + as_hparams FLAGS.hparams += as_hparams
[ "Set hparams overrides from unparsed args list." ]
Please provide a description of the function:def create_hparams(): if FLAGS.use_tpu and "tpu" not in FLAGS.hparams_set: tf.logging.warn("Not all hyperparameter sets work on TPU. " "Prefer hparams_sets with a '_tpu' suffix, " "e.g. transformer_tpu, if available for your model.") hparams_path = os.path.join(FLAGS.output_dir, "hparams.json") return trainer_lib.create_hparams(FLAGS.hparams_set, FLAGS.hparams, hparams_path=hparams_path)
[ "Create hparams." ]
Please provide a description of the function:def create_run_config(hp, output_dir=None): save_ckpt_steps = max(FLAGS.iterations_per_loop, FLAGS.local_eval_frequency) save_ckpt_secs = FLAGS.save_checkpoints_secs or None if save_ckpt_secs: save_ckpt_steps = None assert FLAGS.output_dir or FLAGS.checkpoint_path tpu_config_extra_kwargs = {} if FLAGS.tpu_job_name is not None: tpu_config_extra_kwargs["tpu_job_name"] = FLAGS.tpu_job_name if getattr(hp, "mtf_mode", False): save_ckpt_steps = None # Disable the default saver save_ckpt_secs = None # Disable the default saver tpu_config_extra_kwargs = { "num_cores_per_replica": 1, "per_host_input_for_training": tpu_config.InputPipelineConfig.BROADCAST, } # the various custom getters we have written do not play well together yet. # TODO(noam): ask rsepassi for help here. daisy_chain_variables = ( hp.daisy_chain_variables and hp.activation_dtype == "float32" and hp.weight_dtype == "float32") return trainer_lib.create_run_config( model_name=FLAGS.model, model_dir=output_dir or os.path.expanduser(FLAGS.output_dir), master=FLAGS.master, iterations_per_loop=FLAGS.iterations_per_loop, num_shards=FLAGS.tpu_num_shards, log_device_placement=FLAGS.log_device_placement, save_checkpoints_steps=save_ckpt_steps, save_checkpoints_secs=save_ckpt_secs, keep_checkpoint_max=FLAGS.keep_checkpoint_max, keep_checkpoint_every_n_hours=FLAGS.keep_checkpoint_every_n_hours, num_gpus=FLAGS.worker_gpu, gpu_order=FLAGS.gpu_order, num_async_replicas=FLAGS.worker_replicas, gpu_mem_fraction=FLAGS.worker_gpu_memory_fraction, enable_graph_rewriter=FLAGS.enable_graph_rewriter, use_tpu=FLAGS.use_tpu, use_tpu_estimator=FLAGS.use_tpu_estimator, xla_jit_level=FLAGS.xla_jit_level, schedule=FLAGS.schedule, no_data_parallelism=hp.no_data_parallelism, optionally_use_dist_strat=FLAGS.optionally_use_dist_strat, daisy_chain_variables=daisy_chain_variables, ps_replicas=FLAGS.ps_replicas, ps_job=FLAGS.ps_job, ps_gpu=FLAGS.ps_gpu, sync=FLAGS.sync, worker_id=FLAGS.worker_id, worker_job=FLAGS.worker_job, random_seed=FLAGS.random_seed, tpu_infeed_sleep_secs=FLAGS.tpu_infeed_sleep_secs, inter_op_parallelism_threads=FLAGS.inter_op_parallelism_threads, log_step_count_steps=FLAGS.log_step_count_steps, intra_op_parallelism_threads=FLAGS.intra_op_parallelism_threads, tpu_config_extra_kwargs=tpu_config_extra_kwargs, cloud_tpu_name=FLAGS.cloud_tpu_name)
[ "Create a run config.\n\n Args:\n hp: model hyperparameters\n output_dir: model's output directory, defaults to output_dir flag.\n\n Returns:\n a run config\n " ]
Please provide a description of the function:def save_metadata(hparams): output_dir = os.path.expanduser(FLAGS.output_dir) if not tf.gfile.Exists(output_dir): tf.gfile.MakeDirs(output_dir) # Save FLAGS in txt file if hasattr(FLAGS, "flags_into_string"): flags_str = FLAGS.flags_into_string() t2t_flags_str = "\n".join([ "--%s=%s" % (f.name, f.value) for f in FLAGS.flags_by_module_dict()["tensor2tensor.utils.flags"] ]) else: flags_dict = FLAGS.__dict__["__flags"] flags_str = "\n".join( ["--%s=%s" % (name, str(f)) for (name, f) in flags_dict.items()]) t2t_flags_str = None flags_txt = os.path.join(output_dir, "flags.txt") with tf.gfile.Open(flags_txt, "w") as f: f.write(flags_str) if t2t_flags_str: t2t_flags_txt = os.path.join(output_dir, "flags_t2t.txt") with tf.gfile.Open(t2t_flags_txt, "w") as f: f.write(t2t_flags_str) # Save hparams as hparams.json new_hparams = hparams_lib.copy_hparams(hparams) # Modality class is not JSON serializable so remove. new_hparams.del_hparam("modality") hparams_fname = os.path.join(output_dir, "hparams.json") with tf.gfile.Open(hparams_fname, "w") as f: f.write(new_hparams.to_json(indent=0, sort_keys=True))
[ "Saves FLAGS and hparams to output_dir." ]
Please provide a description of the function:def residual_block(x, hparams): k = (hparams.kernel_height, hparams.kernel_width) dilations_and_kernels = [((1, 1), k) for _ in range(3)] y = common_layers.subseparable_conv_block( x, hparams.hidden_size, dilations_and_kernels, padding="SAME", separability=0, name="residual_block") x = common_layers.layer_norm(x + y, hparams.hidden_size, name="lnorm") return tf.nn.dropout(x, 1.0 - hparams.dropout)
[ "A stack of convolution blocks with residual connection." ]
Please provide a description of the function:def xception_internal(inputs, hparams): with tf.variable_scope("xception"): cur = inputs if cur.get_shape().as_list()[1] > 200: # Large image, Xception entry flow cur = xception_entry(cur, hparams.hidden_size) else: # Small image, conv cur = common_layers.conv_block( cur, hparams.hidden_size, [((1, 1), (3, 3))], first_relu=False, padding="SAME", force2d=True, name="small_image_conv") for i in range(hparams.num_hidden_layers): with tf.variable_scope("layer_%d" % i): cur = residual_block(cur, hparams) return xception_exit(cur)
[ "Xception body." ]
Please provide a description of the function:def xception_entry(inputs, hidden_dim): with tf.variable_scope("xception_entry"): def xnet_resblock(x, filters, res_relu, name): with tf.variable_scope(name): y = common_layers.separable_conv_block( x, filters, [((1, 1), (3, 3)), ((1, 1), (3, 3))], first_relu=True, padding="SAME", force2d=True, name="sep_conv_block") y = common_layers.pool(y, (3, 3), "MAX", "SAME", strides=(2, 2)) return y + common_layers.conv_block( x, filters, [((1, 1), (1, 1))], padding="SAME", strides=(2, 2), first_relu=res_relu, force2d=True, name="res_conv0") tf.summary.image("inputs", inputs, max_outputs=2) x = common_layers.conv_block( inputs, 32, [((1, 1), (3, 3))], first_relu=False, padding="SAME", strides=(2, 2), force2d=True, name="conv0") x = common_layers.conv_block( x, 64, [((1, 1), (3, 3))], padding="SAME", force2d=True, name="conv1") x = xnet_resblock(x, min(128, hidden_dim), True, "block0") x = xnet_resblock(x, min(256, hidden_dim), False, "block1") return xnet_resblock(x, hidden_dim, False, "block2")
[ "Xception entry flow.", "Resblock." ]
Please provide a description of the function:def xception_exit(inputs): with tf.variable_scope("xception_exit"): x = inputs x_shape = x.get_shape().as_list() if x_shape[1] is None or x_shape[2] is None: length_float = tf.to_float(tf.shape(x)[1]) length_float *= tf.to_float(tf.shape(x)[2]) spatial_dim_float = tf.sqrt(length_float) spatial_dim = tf.to_int32(spatial_dim_float) x_depth = x_shape[3] x = tf.reshape(x, [-1, spatial_dim, spatial_dim, x_depth]) elif x_shape[1] != x_shape[2]: spatial_dim = int(math.sqrt(float(x_shape[1] * x_shape[2]))) if spatial_dim * spatial_dim != x_shape[1] * x_shape[2]: raise ValueError("Assumed inputs were square-able but they were " "not. Shape: %s" % x_shape) x = tf.reshape(x, [-1, spatial_dim, spatial_dim, x_depth]) x = common_layers.conv_block_downsample(x, (3, 3), (2, 2), "SAME") return tf.nn.relu(x)
[ "Xception exit flow." ]
Please provide a description of the function:def get_text_from_html(html): try: soup = bs4.BeautifulSoup(html, "html.parser") except: # pylint: disable=bare-except # Some docs don't parse return "" # Remove script and style tags for s in soup(["script", "style"]): s.decompose() return "\n".join([s for s in _soup_strings(soup)])
[ "Returns a plaintext representation of HTML content." ]
Please provide a description of the function:def _soup_strings(soup): paragraph_tags = set([ "caption", "details", "h1", "h2", "h3", "h4", "h5", "h6", "li", "p", "td", "div", "span" ]) skip_children = None for descendant in soup.descendants: # If we've treated a tag as a contiguous paragraph, don't re-emit the # children (see below). if skip_children is not None: try: in_skip = descendant in skip_children # pylint: disable=unsupported-membership-test except RecursionError: # pylint: disable=undefined-variable # Possible for this check to hit a nasty infinite recursion because of # BeautifulSoup __eq__ checks. in_skip = True if in_skip: continue else: skip_children = None # Treat some tags as contiguous paragraphs, regardless of other tags nested # inside (like <a> or <b>). if isinstance(descendant, bs4.Tag): if descendant.name in paragraph_tags: if descendant.find_all(paragraph_tags): # If there are nested paragraph tags, don't treat it as a single # contiguous tag. continue skip_children = list(descendant.descendants) text = " ".join(descendant.get_text(" ", strip=True).split()) if text: yield text continue if (isinstance(descendant, bs4.Comment) or not isinstance(descendant, bs4.NavigableString)): continue text = " ".join(descendant.strip().split()) if text: yield text
[ "Return text strings in soup." ]
Please provide a description of the function:def image_transformer_base(): hparams = common_hparams.basic_params1() hparams.hidden_size = 512 hparams.batch_size = 4 hparams.max_length = 3075 hparams.dropout = 0.0 hparams.clip_grad_norm = 0. # i.e. no gradient clipping hparams.optimizer_adam_epsilon = 1e-9 hparams.learning_rate_decay_scheme = "noam" hparams.learning_rate = 0.1 hparams.learning_rate_warmup_steps = 4000 hparams.initializer_gain = 0.2 hparams.num_hidden_layers = 6 hparams.initializer = "uniform_unit_scaling" hparams.weight_decay = 0.0 hparams.optimizer_adam_beta1 = 0.9 hparams.optimizer_adam_beta2 = 0.98 hparams.label_smoothing = 0.0 hparams.bottom["targets"] = modalities.image_channel_embeddings_bottom hparams.top["targets"] = modalities.identity_top hparams.norm_type = "layer" hparams.layer_prepostprocess_dropout = 0.0 hparams.add_hparam("filter_size", 512) # Add new ones like this. # attention-related flags hparams.add_hparam("num_heads", 8) hparams.add_hparam("attention_key_channels", 0) hparams.add_hparam("attention_value_channels", 0) hparams.add_hparam("ffn_layer", "conv_hidden_relu") # All hyperparameters ending in "dropout" are automatically set to 0.0 # when not in training mode. hparams.add_hparam("attention_dropout", 0.0) hparams.add_hparam("relu_dropout", 0.0) hparams.add_hparam("pos", "timing") # timing, none hparams.add_hparam("nbr_decoder_problems", 1) hparams.add_hparam("num_output_layers", 3) hparams.add_hparam("block_size", 1) # dilated attention based flags hparams.add_hparam("gap_sizes", [2, 4, 8, 16, 32, 64, 2, 4, 8, 16, 32, 64]) # image size related flags # assuming that the image has same height and width hparams.add_hparam("img_len", 32) hparams.add_hparam("num_channels", 3) # Local attention params hparams.add_hparam("local_and_global_att", False) hparams.add_hparam("block_length", 256) hparams.add_hparam("block_width", 128) hparams.add_hparam("num_encoder_layers", 4) hparams.add_hparam("num_decoder_layers", 12) hparams.add_hparam("dec_attention_type", cia.AttentionType.LOCAL_1D) hparams.add_hparam("block_raster_scan", False) # multipos attention params hparams.add_hparam("q_filter_width", 1) hparams.add_hparam("kv_filter_width", 1) hparams.add_hparam("likelihood", cia.DistributionType.CAT) hparams.add_hparam("unconditional", False) # unconditional generation # parameters of discretized mixture of logistics loss from pixel cnn++ hparams.add_hparam("num_mixtures", 10) # These parameters are only used when ffn_layer=="local_moe_tpu" hparams.add_hparam("moe_overhead_train", 1.0) hparams.add_hparam("moe_overhead_eval", 2.0) hparams.moe_num_experts = 8 hparams.moe_loss_coef = 1e-3 # These parameters are for relative attention hparams.add_hparam("shared_rel", False) # share relative embeddings return hparams
[ "Set of hyperparameters." ]
Please provide a description of the function:def imagetransformer_cifar10_base(): hparams = image_transformer_base() hparams.batch_size = 4 hparams.num_heads = 4 hparams.num_decoder_layers = 12 hparams.block_length = 256 hparams.hidden_size = 512 hparams.filter_size = 2048 hparams.learning_rate = 0.5 hparams.learning_rate_warmup_steps = 4000 hparams.layer_preprocess_sequence = "none" hparams.layer_postprocess_sequence = "dan" hparams.layer_prepostprocess_dropout = 0.3 hparams.unconditional = True return hparams
[ "Best config for 2.90 bits/dim on CIFAR10 using cross entropy." ]
Please provide a description of the function:def imagetransformer_cifar10_base_dmol(): hparams = image_transformer_base() hparams.likelihood = cia.DistributionType.DMOL hparams.num_channels = 1 hparams.bottom["targets"] = modalities.image_channel_compress_targets_bottom hparams.top["targets"] = modalities.identity_top hparams.num_heads = 8 hparams.batch_size = 8 hparams.sampling_method = "random" hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" hparams.summarize_grads = True hparams.hidden_size = 256 hparams.filter_size = 512 hparams.attention_key_channels = 512 hparams.attention_value_channels = 512 hparams.num_decoder_layers = 12 hparams.layer_prepostprocess_dropout = 0.1 hparams.learning_rate = 0.1 hparams.layer_preprocess_sequence = "none" hparams.layer_postprocess_sequence = "dan" hparams.pos = "emb" hparams.unconditional = True return hparams
[ "Best config for 2.90 bits/dim on CIFAR10 using DMOL." ]
Please provide a description of the function:def imagetransformer_base_tpu(): hparams = imagetransformer_bas8l_8h_big_uncond_dr03_imgnet() update_hparams_for_tpu(hparams) hparams.batch_size = 4 hparams.num_heads = 4 # heads are expensive on tpu hparams.num_decoder_layers = 12 hparams.block_length = 128 hparams.hidden_size = 512 hparams.filter_size = 2048 hparams.learning_rate = 0.2 hparams.learning_rate_warmup_steps = 6000 hparams.layer_preprocess_sequence = "none" hparams.layer_postprocess_sequence = "dan" hparams.layer_prepostprocess_dropout = 0.3 return hparams
[ "Transformer base params for cifar-10." ]
Please provide a description of the function:def imagetransformer_base_imagenet_tpu(): hparams = imagetransformer_base_tpu() hparams.batch_size = 4 hparams.num_heads = 4 # heads are expensive on tpu hparams.num_decoder_layers = 12 hparams.block_length = 128 hparams.layer_preprocess_sequence = "none" hparams.layer_postprocess_sequence = "dan" hparams.layer_prepostprocess_dropout = 0.1 return hparams
[ "Transformer base params for cifar-10." ]
Please provide a description of the function:def imagetransformer_sep_channels(): hparams = imagetransformer_base() hparams.num_heads = 4 hparams.attention_key_channels = hparams.attention_value_channels = 0 hparams.hidden_size = 256 hparams.filter_size = 512 hparams.num_hidden_layers = 6 return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformer_sep_channels_8l(): hparams = imagetransformer_base() hparams.num_heads = 4 hparams.attention_key_channels = hparams.attention_value_channels = 0 hparams.hidden_size = 256 hparams.filter_size = 256 hparams.num_hidden_layers = 8 hparams.sampling_method = "random" return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformer_base_8l_8h_big_cond_dr03_dan(): hparams = imagetransformer_sep_channels_8l() hparams.block_width = 256 hparams.block_length = 256 hparams.hidden_size = 512 hparams.num_heads = 8 hparams.filter_size = 2048 hparams.batch_size = 4 hparams.max_length = 3075 hparams.layer_preprocess_sequence = "none" hparams.layer_postprocess_sequence = "dan" hparams.num_decoder_layers = 8 hparams.layer_prepostprocess_dropout = 0.3 return hparams
[ "big 1d model for conditional image generation.2.99 on cifar10." ]
Please provide a description of the function:def imagetransformer_base_10l_8h_big_uncond_dr03_dan_64():
  hparams = imagetransformer_base_10l_8h_big_cond_dr03_dan()
  hparams.unconditional = True
  hparams.max_length = 14000
  hparams.batch_size = 1
  hparams.img_len = 64
  hparams.layer_prepostprocess_dropout = 0.1
  return hparams
[ "big 1d model for unconditional generation on imagenet." ]
Please provide a description of the function:def imagetransformerpp_sep_channels_8l_8h():
  hparams = imagetransformer_base()
  hparams.likelihood = cia.DistributionType.DMOL
  hparams.num_channels = 1
  hparams.bottom["targets"] = modalities.image_channel_compress_targets_bottom
  hparams.top["targets"] = modalities.identity_top
  hparams.num_heads = 8
  hparams.batch_size = 4
  hparams.attention_key_channels = hparams.attention_value_channels = 0
  hparams.hidden_size = 512
  hparams.filter_size = 512
  hparams.num_hidden_layers = 8
  hparams.sampling_method = "random"
  hparams.layer_preprocess_sequence = "n"
  hparams.layer_postprocess_sequence = "da"
  hparams.summarize_grads = True
  hparams.learning_rate = 0.1
  return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformerpp_base_8l_8h_big_cond_dr03_dan():
  hparams = imagetransformerpp_sep_channels_8l_8h()
  hparams.hidden_size = 512
  hparams.num_heads = 8
  hparams.filter_size = 2048
  hparams.batch_size = 4
  hparams.max_length = 3075
  hparams.layer_prepostprocess_dropout = 0.3
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  hparams.summarize_grads = True
  hparams.learning_rate = 0.01
  return hparams
[ "big 1d model for conditional image generation.2.99 on cifar10." ]
Please provide a description of the function:def imagetransformerpp_base_14l_8h_big_uncond_dr03_dan_p():
  hparams = imagetransformerpp_base_12l_8h_big_uncond_dr03_dan_l()
  hparams.num_decoder_layers = 14
  hparams.batch_size = 8
  hparams.layer_prepostprocess_dropout = 0.2
  return hparams
[ "Gets to 2.92 in just under 4 days on 8 p100s." ]
Please provide a description of the function:def imagetransformerpp_base_5l_8h_big_uncond_dr00_dan_g_bs1():
  hparams = imagetransformerpp_base_10l_8h_big_uncond_dr03_dan_g()
  # TODO(trandustin): I forgot to set this in the runs! Maybe it's not used in
  # image transformer training implementation?
  # hparams.img_len = 256
  hparams.max_length = 66000  # allow for 256x256
  hparams.batch_size = 1
  hparams.num_decoder_layers = 5
  hparams.hidden_size = 128
  hparams.filter_size = 128
  hparams.attention_key_channels = 64
  hparams.attention_value_channels = 64
  hparams.layer_prepostprocess_dropout = 0.0
  return hparams
[ "For 256x256." ]
Please provide a description of the function:def imagetransformer_base_8l_8h_big_cond_dr03_dan_dilated():
  hparams = imagetransformer_base_8l_8h_big_cond_dr03_dan()
  hparams.gap_sizes = [0, 16, 64, 0, 16, 64, 128, 0]
  hparams.dec_attention_type = cia.AttentionType.DILATED
  hparams.block_length = 128
  hparams.block_width = 128
  hparams.add_hparam("num_memory_blocks", 1)
  return hparams
[ "Dilated hparams." ]
Please provide a description of the function:def imagetransformer_base_12l_8h_big():
  hparams = imagetransformer_sep_channels_8l_8h()
  hparams.filter_size = 1024
  hparams.num_decoder_layers = 12
  hparams.batch_size = 1
  hparams.hidden_size = 512
  hparams.learning_rate_warmup_steps = 4000
  hparams.sampling_method = "random"
  hparams.beam_size = 1
  hparams.block_width = 256
  return hparams
[ "big 1d model for conditional image generation." ]
Please provide a description of the function:def imagetransformer1d_base_8l_64by64():
  hparams = image_transformer_base()
  hparams.num_heads = 8
  hparams.hidden_size = 512
  hparams.filter_size = 2048
  hparams.num_decoder_layers = 8
  hparams.batch_size = 1
  hparams.block_length = 512
  hparams.block_width = 768
  hparams.layer_prepostprocess_dropout = 0.1
  hparams.max_length = 14000
  hparams.unconditional = int(False)
  return hparams
[ "hparams fo 12 layer big 1d model for imagenet 64x64." ]
Please provide a description of the function:def imagetransformer_sep_channels_12l_16h_imagenet_large():
  hparams = imagetransformer_sep_channels_8l_8h()
  hparams.num_hidden_layers = 12
  hparams.batch_size = 1
  hparams.filter_size = 2048
  hparams.num_heads = 16
  hparams.learning_rate_warmup_steps = 16000
  hparams.sampling_method = "random"
  hparams.learning_rate = 0.1
  return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformer_sep_channels_16l_16h_imgnet_lrg_loc():
  hparams = imagetransformer_sep_channels_12l_16h_imagenet_large()
  hparams.num_hidden_layers = 16
  hparams.local_attention = True
  hparams.batch_size = 1
  hparams.block_length = 256
  return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformer_sep_channels_16l_16h_imgnet_lrg_loc_128():
  hparams = imagetransformer_sep_channels_12l_16h_imagenet_large()
  hparams.num_hidden_layers = 16
  hparams.local_attention = True
  hparams.batch_size = 1
  hparams.block_length = 128
  return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformer_base_10l_16h_big_uncond_dr01_imgnet():
  hparams = imagetransformer_base_14l_8h_big_dr01()
  # num_hidden_layers
  hparams.num_decoder_layers = 10
  hparams.num_heads = 16
  hparams.hidden_size = 1024
  hparams.filter_size = 4096
  hparams.batch_size = 1
  hparams.layer_prepostprocess_dropout = 0.1
  return hparams
[ "big 1d model for conditional image generation." ]
Please provide a description of the function:def imagetransformer_base_10l_16h_big_dr01_imgnet():
  hparams = imagetransformer_base_14l_8h_big_dr01()
  # num_hidden_layers
  hparams.num_decoder_layers = 10
  hparams.num_heads = 16
  hparams.hidden_size = 1024
  hparams.filter_size = 4096
  hparams.batch_size = 1
  hparams.unconditional = False
  hparams.layer_prepostprocess_dropout = 0.1
  return hparams
[ "big 1d model for conditional image generation." ]
Please provide a description of the function:def imagetransformer_sep_channels_8l_8h():
  hparams = imagetransformer_base()
  hparams.num_heads = 8
  hparams.batch_size = 1
  hparams.attention_key_channels = hparams.attention_value_channels = 0
  hparams.hidden_size = 512
  hparams.filter_size = 512
  hparams.num_hidden_layers = 8
  hparams.sampling_method = "random"
  return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformer_sep_channels_8l_8h_local_and_global_att():
  hparams = imagetransformer_sep_channels_8l_8h()
  hparams.num_heads = 8
  hparams.batch_size = 1
  hparams.attention_key_channels = hparams.attention_value_channels = 0
  hparams.hidden_size = 256
  hparams.filter_size = 256
  hparams.num_hidden_layers = 4
  hparams.sampling_method = "random"
  hparams.local_and_global_att = True
  return hparams
[ "separate rgb embeddings." ]
Please provide a description of the function:def imagetransformer_bas8l_8h_big_uncond_dr03_imgnet():
  hparams = imagetransformer_base_14l_8h_big_dr01()
  # num_hidden_layers
  hparams.num_decoder_layers = 8
  hparams.num_heads = 8
  hparams.hidden_size = 512
  hparams.filter_size = 2048
  hparams.layer_prepostprocess_dropout = 0.3
  return hparams
[ "big 1d model for conditional image generation." ]
Please provide a description of the function:def imagetransformer_base_10l_16h_big_dr01_moe_imgnet():
  hparams = imagetransformer_base_10l_16h_big_dr01_imgnet()
  hparams.initializer = "orthogonal"
  hparams.learning_rate_warmup_steps = 16000
  hparams.add_hparam("moe_layers_decoder", "2,7")  # Which layer is MoE.
  hparams.moe_hidden_sizes = "4096"  # Hidden layer sizes (comma-separated).
  hparams.moe_num_experts = 64  # Number of experts in each MoE layer.
  hparams.moe_k = 4  # How many experts to use per batch element (try 2 or 4).
  hparams.moe_loss_coef = 3e-2  # MoE loss coefficient (1e-2 is usually ok).
  hparams.scheduled_sampling_prob = 0.1
  hparams.scheduled_sampling_warmup_steps = 200000
  return hparams
[ "big 1d model for conditional image generation." ]
Please provide a description of the function:def imagetransformer_moe_tiny():
  hparams = imagetransformer_tiny()
  hparams.hidden_size = 64
  hparams.batch_size = 1
  hparams.num_hidden_layers = 3
  hparams.dec_attention_type = cia.AttentionType.MOE_LOCAL_1D
  hparams.add_hparam("moe_layers_decoder", "1")  # Which layer is MoE.
  hparams.moe_hidden_sizes = "1024"  # Hidden layer sizes (comma-separated).
  hparams.moe_num_experts = 16  # Number of experts in each MoE layer.
  hparams.moe_k = 2  # How many experts to use per batch element (try 2 or 4).
  hparams.moe_loss_coef = 1e-2  # MoE loss coefficient (1e-2 is usually ok).
  return hparams
[ "Set of hyperparameters for a very small imagetransformer with MoE." ]
Please provide a description of the function:def imagetransformer_sep_channels_8l_tpu():
  hparams = imagetransformer_sep_channels_8l()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 4
  hparams.num_heads = 4  # heads are expensive on tpu
  hparams.shared_embedding_and_softmax_weights = False
  return hparams
[ "Hparams for training imagetransformer on tpu." ]
Please provide a description of the function:def imagetransformer_b10l_4h_big_uncond_dr03_tpu():
  hparams = imagetransformer_bas8l_8h_big_uncond_dr03_imgnet()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 4
  hparams.num_heads = 4  # heads are expensive on tpu
  hparams.num_decoder_layers = 10
  hparams.block_length = 128
  hparams.hidden_size = 512
  hparams.filter_size = 1024
  hparams.learning_rate = 0.2
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  return hparams
[ "Small model for tpu cifar 10." ]
Please provide a description of the function:def imagetransformer_b10l_dr03_moe_tpu():
  hparams = imagetransformer_b10l_4h_big_uncond_dr03_tpu()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 4
  hparams.num_heads = 4  # heads are expensive on tpu
  hparams.num_decoder_layers = 10
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  hparams.ffn_layer = "local_moe_tpu"
  return hparams
[ "Moe tpu params." ]
Please provide a description of the function:def imagetransformer_b10l_4h_big_uncond_dr03_lr025_tpu():
  hparams = imagetransformer_bas8l_8h_big_uncond_dr03_imgnet()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 4
  hparams.num_heads = 4  # heads are expensive on tpu
  hparams.num_decoder_layers = 10
  hparams.learning_rate = 0.25
  hparams.learning_rate_warmup_steps = 8000
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  # hparams.unconditional = True
  return hparams
[ "TPU related small model." ]
Please provide a description of the function:def imagetransformer_b12l_4h_b256_uncond_dr03_tpu():
  hparams = imagetransformer_bas8l_8h_big_uncond_dr03_imgnet()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 4
  hparams.num_heads = 4  # heads are expensive on tpu
  hparams.num_decoder_layers = 12
  hparams.block_length = 256
  hparams.hidden_size = 512
  hparams.filter_size = 2048
  hparams.learning_rate = 0.5
  hparams.learning_rate_warmup_steps = 4000
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  hparams.layer_prepostprocess_dropout = 0.3
  hparams.unconditional = True
  return hparams
[ "works very well on 4x4." ]
Please provide a description of the function:def imagetransformer_b12l_4h_b256_uncond_dr03_rel_tpu():
  hparams = imagetransformer_b12l_4h_b256_uncond_dr03_tpu()
  hparams.shared_rel = True
  hparams.dec_attention_type = cia.AttentionType.RELATIVE_LOCAL_1D
  return hparams
[ "works very well on 4x4." ]
Please provide a description of the function:def imagetransformer_cifar_tpu_range(rhp):
  # After starting from base, set intervals for some parameters.
  rhp.set_float("learning_rate", 0.01, 1.0, scale=rhp.LOG_SCALE)
  rhp.set_discrete("num_decoder_layers", [8, 10, 12, 14, 16])
  rhp.set_discrete("hidden_size", [256, 512, 1024])
  rhp.set_discrete("block_length", [128, 256, 512])
  rhp.set_categorical("dec_attention_type", [
      cia.AttentionType.RELATIVE_LOCAL_1D, cia.AttentionType.LOCAL_1D])
[ "Range of hyperparameters for vizier." ]
Please provide a description of the function:def imagetransformer_b12l_4h_b128_h512_uncond_dr01_im():
  hparams = imagetransformer_b12l_4h_b256_uncond_dr03_tpu()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 4
  hparams.optimizer = "Adafactor"
  hparams.learning_rate_schedule = "rsqrt_decay"
  hparams.learning_rate_warmup_steps = 6000
  hparams.layer_prepostprocess_dropout = 0.1
  return hparams
[ "TPU related imagenet model." ]
Please provide a description of the function:def imagetransformer_b12l_4h_uncond_dr03_tpu():
  hparams = imagetransformer_b12l_4h_b256_uncond_dr03_tpu()
  hparams.learning_rate = 0.2
  hparams.learning_rate_warmup_steps = 4000
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  hparams.layer_prepostprocess_dropout = 0.3
  return hparams
[ "TPU related small model." ]
Please provide a description of the function:def imagetransformer_b12l_4h_b128_uncond_dr03_tpu():
  hparams = imagetransformer_bas8l_8h_big_uncond_dr03_imgnet()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 2
  hparams.num_heads = 4  # heads are expensive on tpu
  hparams.num_decoder_layers = 12
  hparams.block_length = 128
  hparams.hidden_size = 256
  hparams.filter_size = 2048
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  hparams.layer_prepostprocess_dropout = 0.1
  hparams.optimizer = "Adafactor"
  hparams.learning_rate_schedule = "rsqrt_decay"
  hparams.learning_rate_warmup_steps = 10000
  return hparams
[ "TPU config for cifar 10." ]
Please provide a description of the function:def imagetransformer_b12l_8h_b256_uncond_dr03_tpu():
  hparams = imagetransformer_bas8l_8h_big_uncond_dr03_imgnet()
  update_hparams_for_tpu(hparams)
  hparams.batch_size = 2
  hparams.num_heads = 8  # heads are expensive on tpu
  hparams.num_decoder_layers = 12
  hparams.block_length = 256
  hparams.hidden_size = 512
  hparams.filter_size = 2048
  hparams.layer_preprocess_sequence = "none"
  hparams.layer_postprocess_sequence = "dan"
  hparams.layer_prepostprocess_dropout = 0.3
  return hparams
[ "TPU related 12 layer 8 heads model." ]
Please provide a description of the function:def imagetransformer_b10l_4h_big_uncond_dr01_tpu():
  hparams = imagetransformer_b12l_4h_big_uncond_dr03_tpu()
  # num_hidden_layers
  hparams.num_decoder_layers = 10
  hparams.num_heads = 4
  hparams.hidden_size = 1024
  hparams.filter_size = 4096
  hparams.batch_size = 1
  hparams.layer_prepostprocess_dropout = 0.1
  return hparams
[ "big 1d model for conditional image generation." ]
Please provide a description of the function:def training_loop(self):
  if not self.restarting:
    self._write_counters(self._local_step_at_start, self._global_step)
  tf.logging.info(
      "Training %s up to %d, %d to go",
      self.model_mode, self.target_local_step, self.steps_to_go
  )
  yield
  self._write_counters(self.target_local_step, -1)
[ "Context manager wrapping the training loop, updates step counters." ]
Please provide a description of the function:def _read_words(filename):
  with tf.gfile.GFile(filename, "r") as f:
    if sys.version_info[0] >= 3:
      return f.read().replace("\n", " %s " % EOS).split()
    else:
      return f.read().decode("utf-8").replace("\n", " %s " % EOS).split()
[ "Reads words from a file." ]
Please provide a description of the function:def _build_vocab(filename, vocab_path, vocab_size):
  data = _read_words(filename)
  counter = collections.Counter(data)
  count_pairs = sorted(counter.items(), key=lambda x: (-x[1], x[0]))
  words, _ = list(zip(*count_pairs))
  words = words[:vocab_size]
  with open(vocab_path, "w") as f:
    f.write("\n".join(words))
[ "Reads a file to build a vocabulary of `vocab_size` most common words.\n\n The vocabulary is sorted by occurrence count and has one word per line.\n Originally from:\n https://github.com/tensorflow/models/blob/master/tutorials/rnn/ptb/reader.py\n\n Args:\n filename: file to read list of words from.\n vocab_path: path where to save the vocabulary.\n vocab_size: size of the vocabulary to generate.\n " ]
Please provide a description of the function:def _get_token_encoder(vocab_dir, vocab_name, filename):
  vocab_path = os.path.join(vocab_dir, vocab_name)
  if not tf.gfile.Exists(vocab_path):
    _build_vocab(filename, vocab_path, 10000)
  return text_encoder.TokenTextEncoder(vocab_path)
[ "Reads from file and returns a `TokenTextEncoder` for the vocabulary." ]
Please provide a description of the function:def _maybe_download_corpus(tmp_dir, vocab_type):
  filename = os.path.basename(PTB_URL)
  compressed_filepath = generator_utils.maybe_download(
      tmp_dir, filename, PTB_URL)
  ptb_files = []
  ptb_char_files = []
  with tarfile.open(compressed_filepath, "r:gz") as tgz:
    files = []
    # Selecting only relevant files.
    for m in tgz.getmembers():
      if "ptb" in m.name and ".txt" in m.name:
        if "char" in m.name:
          ptb_char_files += [m.name]
        else:
          ptb_files += [m.name]
        files += [m]
    tgz.extractall(tmp_dir, members=files)
  if vocab_type == text_problems.VocabType.CHARACTER:
    return ptb_char_files
  else:
    return ptb_files
[ "Download and unpack the corpus.\n\n Args:\n tmp_dir: directory containing dataset.\n vocab_type: which vocabulary are we using.\n\n Returns:\n The list of names of files.\n " ]
Please provide a description of the function:def resize(att_mat, max_length=None):
  for i, att in enumerate(att_mat):
    # Add extra batch dim for viz code to work.
    if att.ndim == 3:
      att = np.expand_dims(att, axis=0)
    if max_length is not None:
      # Sum across different attention values for each token.
      att = att[:, :, :max_length, :max_length]
      row_sums = np.sum(att, axis=2)
      # Normalize
      att /= row_sums[:, :, np.newaxis]
    att_mat[i] = att
  return att_mat
[ "Normalize attention matrices and reshape as necessary." ]
Please provide a description of the function:def _get_attention(inp_text, out_text, enc_atts, dec_atts, encdec_atts):
  def get_full_attention(layer):
    enc_att = enc_atts[layer][0]
    dec_att = dec_atts[layer][0]
    encdec_att = encdec_atts[layer][0]
    enc_att = np.transpose(enc_att, [0, 2, 1])
    dec_att = np.transpose(dec_att, [0, 2, 1])
    encdec_att = np.transpose(encdec_att, [0, 2, 1])
    # [heads, query_length, memory_length]
    enc_length = enc_att.shape[1]
    dec_length = dec_att.shape[1]
    num_heads = enc_att.shape[0]
    first = np.concatenate([enc_att, encdec_att], axis=2)
    second = np.concatenate(
        [np.zeros((num_heads, dec_length, enc_length)), dec_att], axis=2)
    full_att = np.concatenate([first, second], axis=1)
    return [ha.T.tolist() for ha in full_att]

  def get_inp_inp_attention(layer):
    att = np.transpose(enc_atts[layer][0], (0, 2, 1))
    return [ha.T.tolist() for ha in att]

  def get_out_inp_attention(layer):
    att = np.transpose(encdec_atts[layer][0], (0, 2, 1))
    return [ha.T.tolist() for ha in att]

  def get_out_out_attention(layer):
    att = np.transpose(dec_atts[layer][0], (0, 2, 1))
    return [ha.T.tolist() for ha in att]

  def get_attentions(get_attention_fn):
    num_layers = len(enc_atts)
    return [get_attention_fn(i) for i in range(num_layers)]

  attentions = {
      'all': {
          'att': get_attentions(get_full_attention),
          'top_text': inp_text + out_text,
          'bot_text': inp_text + out_text,
      },
      'inp_inp': {
          'att': get_attentions(get_inp_inp_attention),
          'top_text': inp_text,
          'bot_text': inp_text,
      },
      'inp_out': {
          'att': get_attentions(get_out_inp_attention),
          'top_text': inp_text,
          'bot_text': out_text,
      },
      'out_out': {
          'att': get_attentions(get_out_out_attention),
          'top_text': out_text,
          'bot_text': out_text,
      },
  }
  return attentions
[ "Compute representation of the attention ready for the d3 visualization.\n\n Args:\n inp_text: list of strings, words to be displayed on the left of the vis\n out_text: list of strings, words to be displayed on the right of the vis\n enc_atts: numpy array, encoder self-attentions\n [num_layers, batch_size, num_heads, enc_length, enc_length]\n dec_atts: numpy array, decoder self-attentions\n [num_layers, batch_size, num_heads, dec_length, dec_length]\n encdec_atts: numpy array, encoder-decoder attentions\n [num_layers, batch_size, num_heads, dec_length, enc_length]\n\n Returns:\n Dictionary of attention representations with the structure:\n {\n 'all': Representations for showing all attentions at the same time.\n 'inp_inp': Representations for showing encoder self-attentions\n 'inp_out': Representations for showing encoder-decoder attentions\n 'out_out': Representations for showing decoder self-attentions\n }\n and each sub-dictionary has structure:\n {\n 'att': list of inter attentions matrices, one for each attention head\n 'top_text': list of strings, words to be displayed on the left of the vis\n 'bot_text': list of strings, words to be displayed on the right of the vis\n }\n ", "Get the full input+output - input+output attentions." ]
Please provide a description of the function:def decode(tokens):
  token_is_alnum = [t[0] in _ALPHANUMERIC_CHAR_SET for t in tokens]
  ret = []
  for i, token in enumerate(tokens):
    if i > 0 and token_is_alnum[i - 1] and token_is_alnum[i]:
      ret.append(u" ")
    ret.append(token)
  return "".join(ret)
[ "Decode a list of tokens to a unicode string.\n\n Args:\n tokens: a list of Unicode strings\n Returns:\n a unicode string\n " ]
Please provide a description of the function:def _read_filepattern(filepattern, max_lines=None, split_on_newlines=True):
  filenames = sorted(tf.gfile.Glob(filepattern))
  lines_read = 0
  for filename in filenames:
    with tf.gfile.Open(filename) as f:
      if split_on_newlines:
        for line in f:
          yield line.strip()
          lines_read += 1
          if max_lines and lines_read >= max_lines:
            return
      else:
        if max_lines:
          doc = []
          for line in f:
            doc.append(line)
            lines_read += 1
            if max_lines and lines_read >= max_lines:
              yield "".join(doc)
              return
          yield "".join(doc)
        else:
          yield f.read()
[ "Reads files matching a wildcard pattern, yielding the contents.\n\n Args:\n filepattern: A wildcard pattern matching one or more files.\n max_lines: If set, stop reading after reading this many lines.\n split_on_newlines: A boolean. If true, then split files by lines and strip\n leading and trailing whitespace from each line. Otherwise, treat each\n file as a single string.\n\n Yields:\n The contents of the files as lines, if split_on_newlines is True, or\n the entire contents of each file if False.\n " ]
Please provide a description of the function:def corpus_token_counts(
    text_filepattern, corpus_max_lines, split_on_newlines=True):
  counts = collections.Counter()
  for doc in _read_filepattern(
      text_filepattern,
      max_lines=corpus_max_lines,
      split_on_newlines=split_on_newlines):
    counts.update(encode(_native_to_unicode(doc)))
  mlperf_log.transformer_print(
      key=mlperf_log.PREPROC_VOCAB_SIZE, value=len(counts))
  return counts
[ "Read the corpus and compute a dictionary of token counts.\n\n Args:\n text_filepattern: A pattern matching one or more files.\n corpus_max_lines: An integer; maximum total lines to read.\n split_on_newlines: A boolean. If true, then split files by lines and strip\n leading and trailing whitespace from each line. Otherwise, treat each\n file as a single string.\n\n Returns:\n a dictionary mapping token to count.\n " ]
Please provide a description of the function:def vocab_token_counts(text_filepattern, max_lines):
  ret = {}
  for i, line in enumerate(
      _read_filepattern(text_filepattern, max_lines=max_lines)):
    if "," not in line:
      tf.logging.warning("Malformed vocab line #%d '%s'", i, line)
      continue
    token, count = line.rsplit(",", 1)
    ret[_native_to_unicode(token)] = int(count)
  return ret
[ "Read a vocab file and return a dictionary of token counts.\n\n Reads a two-column CSV file of tokens and their frequency in a dataset. The\n tokens are presumed to be generated by encode() or the equivalent.\n\n Args:\n text_filepattern: A pattern matching one or more files.\n max_lines: An integer; maximum total lines to read.\n\n Returns:\n a dictionary mapping token to count.\n " ]
Please provide a description of the function:def _make_example(input_ids, problem, input_feature_name="inputs"):
  features = {
      input_feature_name:
          tf.train.Feature(int64_list=tf.train.Int64List(value=input_ids))
  }
  # Fill in dummy values for any other required features that presumably
  # will not actually be used for prediction.
  data_fields, _ = problem.example_reading_spec()
  for fname, ftype in data_fields.items():
    if fname == input_feature_name:
      continue
    if not isinstance(ftype, tf.FixedLenFeature):
      # Only FixedLenFeatures are required
      continue
    if ftype.default_value is not None:
      # If there's a default value, no need to fill it in
      continue
    num_elements = functools.reduce(lambda acc, el: acc * el, ftype.shape, 1)
    if ftype.dtype in [tf.int32, tf.int64]:
      value = tf.train.Feature(
          int64_list=tf.train.Int64List(value=[0] * num_elements))
    if ftype.dtype in [tf.float32, tf.float64]:
      value = tf.train.Feature(
          float_list=tf.train.FloatList(value=[0.] * num_elements))
    if ftype.dtype == tf.string:
      # Fixed from `tf.bytes`: string features use the tf.string dtype;
      # tf.bytes is not a TensorFlow dtype.
      value = tf.train.Feature(
          bytes_list=tf.train.BytesList(value=[""] * num_elements))
    tf.logging.info("Adding dummy value for feature %s as it is required by "
                    "the Problem.", fname)
    features[fname] = value
  return tf.train.Example(features=tf.train.Features(feature=features))
[ "Make a tf.train.Example for the problem.\n\n features[input_feature_name] = input_ids\n\n Also fills in any other required features with dummy values.\n\n Args:\n input_ids: list<int>.\n problem: Problem.\n input_feature_name: name of feature for input_ids.\n\n Returns:\n tf.train.Example\n " ]
Please provide a description of the function:def make_grpc_request_fn(servable_name, server, timeout_secs):
  stub = _create_stub(server)

  def _make_grpc_request(examples):
    request = predict_pb2.PredictRequest()
    request.model_spec.name = servable_name
    request.inputs["input"].CopyFrom(
        tf.make_tensor_proto(
            [ex.SerializeToString() for ex in examples],
            shape=[len(examples)]))
    response = stub.Predict(request, timeout_secs)
    outputs = tf.make_ndarray(response.outputs["outputs"])
    scores = tf.make_ndarray(response.outputs["scores"])
    assert len(outputs) == len(scores)
    return [{  # pylint: disable=g-complex-comprehension
        "outputs": output,
        "scores": score
    } for output, score in zip(outputs, scores)]

  return _make_grpc_request
[ "Wraps function to make grpc requests with runtime args.", "Builds and sends request to TensorFlow model server." ]
Please provide a description of the function:def make_cloud_mlengine_request_fn(credentials, model_name, version):
  def _make_cloud_mlengine_request(examples):
    api = discovery.build("ml", "v1", credentials=credentials)
    parent = "projects/%s/models/%s/versions/%s" % (
        cloud.default_project(), model_name, version)
    input_data = {
        "instances": [{  # pylint: disable=g-complex-comprehension
            "input": {
                "b64": base64.b64encode(ex.SerializeToString())
            }
        } for ex in examples]
    }
    prediction = api.projects().predict(body=input_data, name=parent).execute()
    return prediction["predictions"]

  return _make_cloud_mlengine_request
[ "Wraps function to make CloudML Engine requests with runtime args.", "Builds and sends requests to Cloud ML Engine." ]
Please provide a description of the function:def predict(inputs_list, problem, request_fn):
  assert isinstance(inputs_list, list)
  fname = "inputs" if problem.has_inputs else "targets"
  input_encoder = problem.feature_info[fname].encoder
  input_ids_list = [
      _encode(inputs, input_encoder, add_eos=problem.has_inputs)
      for inputs in inputs_list
  ]
  examples = [_make_example(input_ids, problem, fname)
              for input_ids in input_ids_list]
  predictions = request_fn(examples)
  output_decoder = problem.feature_info["targets"].encoder
  outputs = [
      (_decode(prediction["outputs"], output_decoder), prediction["scores"])
      for prediction in predictions
  ]
  return outputs
[ "Encodes inputs, makes request to deployed TF model, and decodes outputs." ]