st205500
Interesting, I wonder how they trained the VGG19 in keras.applications. Here it is in mid 2016: github.com keras-team/keras/blob/90d0eb9b88c5ef6f756574f60d314c0aa7916f2c/keras/layers/convolutional.py#L369 — the Convolution2D constructor already defaulted to init='glorot_uniform'. It's probably one of those things that got set at one point when it made sense and then got locked in by backwards-compatibility guarantees. Aside from updating keras.applications to allow initializers as arguments, another possible solution would be for Keras to implement a global "default_initializer" or something like that. Either one would take some work.
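In the meantime, a rough workaround sketch (not an official API, just an illustration): re-initialize the kernels of an untrained keras.applications model after construction, since the constructors themselves don't take an initializer argument.

```python
import tensorflow as tf

# Build the architecture without pretrained weights, then overwrite the
# default glorot_uniform kernels with a different initializer of your choice.
model = tf.keras.applications.VGG19(weights=None)

new_init = tf.keras.initializers.HeNormal()
for layer in model.layers:
    # Only layers with a kernel (Conv2D, Dense) are touched.
    if hasattr(layer, "kernel") and layer.kernel is not None:
        layer.kernel.assign(new_init(layer.kernel.shape, dtype=layer.kernel.dtype))
```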
st205501
markdaoust: Another possible solution would be for Keras to implement a global "default_initializer" or something like that.

But if I remember correctly something similar didn't pass in 2019: github.com/tensorflow/tensorflow — "Changing the default initializer globally for tf.keras.layers" (opened Mar 2, 2019, closed Mar 9, 2019). The request was to add an endpoint to globally change the default initializer, since in V2 tf.keras.layers the default is hardcoded to initializers.glorot_uniform() and there is no equivalent of V1's `with tf.variable_scope(name, initializer=...):`. The only current alternatives are passing an initializer to every layer (inconvenient for large models) or a manual assignment loop (with no clean way to tell a default initializer apart from one explicitly passed by the developer).
st205502
markdaoust: Ah, so scratch that one. Thanks. …Or, as the team changed in 2021, we could have a different evaluation.
st205503
Hi, recently I have been researching a lot (a reply I did while researching) and learning about text summarization and NLP in general. But everything that I have read and implemented has been built around gigabytes of either word embeddings or huge pretrained models. I am not an expert in ML by any means, so I have no idea how to do this feasibly on a mobile device. I know there is always the option of hosting these huge files on a server, but it would be awesome if it were possible without that, because one of my goals is to make an internet connection unnecessary for my project. Any help regarding this would be very appreciated, as I am totally clueless about how I should approach this.
st205504
Hello everyone! I am in the process of experimenting with different Tensorflow Serving batching configurations and I am looking for a way to speed up my experiments. Currently, I have a model running in a pod within a Kubernetes cluster, and each time I want to change the values set in the batching.config file I have to edit the file in cloud storage and restart the pods so that they pull the updated version. After reading some of the docs I am under the impression that this can be done programmatically (on the fly, without having to restart the pod) with python, however, I cannot figure out how to do that. I tried to read through the source code but don’t really understand how to translate what I see in the source code into python and more so how to actually achieve what I am trying to do. Any help would be greatly appreciated!
st205505
Hey masters, maybe this question is impossible to answer: does the exam grader judge the code or just the last uploaded models? Because I am a pro user of Colab, I think I can use it to complete the exam fully; however, I am really not sure: must I copy the code back to the IDE (exam environment)? Thank you
st205506
Maybe @Jocelyn_Becker or @Laurence_Moroney might be able to shine some light here
st205507
The grader uses your model by running test data through it to see how it performs. Please refer to the exam handbook for everything else: https://www.tensorflow.org/extras/cert/TF_Certificate_Candidate_Handbook.pdf 3
st205508
Hello, I am trying to run a SavedModel that I have exported from a checkpoint using 'exporter_main_v2.py', and during conversion the 'function_optimizer' pass seems to return an empty graph (0 nodes). Is there any reason why it does that? Thanks
st205509
More context: I am trying to convert a MobileNetV2 SSD re-trained SavedModel into another framework format. I have successfully re-trained the model and exported the checkpoint to a SavedModel using exporter_main_v2.py. However, when I try to run the conversion script, it calls 'meta_optimizer.cc', which eventually results in an empty graph that cannot be converted. The following is the output that I get: (screenshot: empty_nodes_grappler)
st205510
I cannot provide that because it is proprietary. However, I believe the issue is with the export from the re-trained checkpoints to the SavedModel. The pre-trained object-detection models in the TF2 Zoo contain both the checkpoints and the saved_model.pb. If I try to convert the pre-trained saved_model.pb, I face no issues at all. However, if I export the pre-trained checkpoint (that I have just downloaded) into a SavedModel and then try to convert that, I get the error I just mentioned. So something must be going wrong in the checkpoint-to-SavedModel step (exporter_main_v2.py), right?
st205511
Do you mean that saved_model.pb in the official repo is not the same as the one generated with exporter_main_v2.py using the checkpoint in the official repo?
st205512
That’s the only explanation! Because the saved_model.pb that exists in the official repo can be converted with no issues and doesn’t result in 0 nodes and 0 edges when I try to convert it to the other framework, unlike the saved_model.pb that is generated with exporter_main_v2.py using the checkpoint in the official repo.
st205513
Is it the TF2.x model zoo or the TF1.x one? github.com/tensorflow/models (master/research/object_detection)
st205514
Any thoughts on why this could be happening? It might be something that I have to change in the config file perhaps, but I’m using the config file in the official repo so I wouldn’t know what I would need to change. Do the saved_model.pb files in the official repos use those same config files? And do they also use the same exact exporter_main_v2.py command? It would be really useful if I could know what command is used exactly along with its arguments.
st205515
@thea I would really appreciate any help or guidance that I can get for this. It’s either that I’m performing the exporting process from checkpoints to a SavedModel wrong (using exporter_main_v2.py), or that some kind of post-exporting optimization is done to the final product which is then used in the official TF2 Model Zoo repos. I’m not really sure what to do from here.
st205516
@Bhack @thea no input on this at all? I would really appreciate any help I can get with this, please.
st205517
Honestly I don't know if any maintainer is on the forum: github.com/tensorflow/models (master/research/object_detection — maintainers). /cc @markdaoust What do you think?
st205518
Ooof. There are a dozen different files there called "export_*.py": github.com/tensorflow/models (master/research/object_detection).

My first guess here is that the provided saved_models are TF1 format and you're saving in TF2 format, and something in your conversion script isn't quite translating. Throwing out all the nodes sounds like something it would do if the inputs/outputs weren't properly registered in the saved model signature.

Have you tried inspecting the saved_model? Can you reload and run it correctly from Python?

There's a similar bug here where it breaks in a while-loop, but IIRC Lite has fixed their control-flow support since then: github.com/tensorflow/tensorflow — "Conversion error when trying to convert model using BeamSearchDecoder from tensorflow-addons [RNN]" (opened Jun 16, 2020, closed Jul 9, 2020). In that issue, converting a Seq2Seq model (a Keras model using tfa.seq2seq's BeamSearchDecoder, LuongAttention and LSTM layers) to TFLite with TFLiteConverter.from_keras_model and SELECT_TF_OPS produced grappler function_optimizer logs for the while-loop and cond branches and then failed inside convert_to_constants with `KeyError: 'beamsearchdecoderstep_cond_input_1_0'`, even though the model ran fine in Python.
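A quick way to check the point above about signatures is to reload the exported model and print what is registered; this is just a sketch, and the path is a placeholder for wherever exporter_main_v2.py wrote your model:

```python
import tensorflow as tf

# Sanity-check the exported SavedModel's serving signature.
loaded = tf.saved_model.load("exported_model/saved_model")
print(list(loaded.signatures.keys()))        # e.g. ['serving_default']

infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)      # registered input specs
print(infer.structured_outputs)              # registered output specs
```

From the command line, `saved_model_cli show --dir exported_model/saved_model --all` prints the same information.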
st205519
I made sure to be using models from the TF2 Zoo with the tutorial for TF2 OD. I am able to perform inference on my re-trained saved_model.pb and properly load it with no issues at all. However the difficulty I am facing is with converting it to a different proprietary format. As mentioned before, I can convert the saved_model.pb in the official repo, but not my trained saved_model.pb.
st205520
I suggest you open a ticket in the models repository and post the link to the ticket here, as we need to understand how the file was exported for the zoo. I don't think the Model Garden maintainers are (still?) subscribed to the model_garden tag in these threads.
st205521
I have posted this issue on both the 'tensorflow/tensorflow' and 'tensorflow/models' repositories. Here are the links for both:

github.com/tensorflow/models — "exporter_main_v2.py on official TF2 OD checkpoints produces saved_model.pb different than official saved_model.pb" (#50653, opened Jul 9, 2021)
github.com/tensorflow/tensorflow — "exporter_main_v2.py on official TF2 OD checkpoints produces saved_model.pb different than official saved_model.pb" (opened Jul 7, 2021)

In short (from the issue body): Linux Ubuntu 18.04, TensorFlow 2.3.0 installed from pip, Python 3.6.9, CUDA 11.4, GeForce RTX 3090, no custom code. Converting the official saved_model.pb from the TF2 Zoo to the proprietary framework works, but a saved_model.pb exported from the official checkpoint with exporter_main_v2.py produces an empty graph (0 nodes, 0 edges) in the converter's 'meta_optimizer.cc' pass. The model used is ssd_mobilenet_v2_320x320_coco17_tpu-8 (download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz), and the export command was:

python ~/models/research/object_detection/exporter_main_v2.py --input_type image_tensor --pipeline_config_path pipeline.config --trained_checkpoint_dir checkpoint/ --output_directory exported_model/
st205522
I think that, for now, the one in models is enough, as it is better not to open duplicate issues in the ecosystem. Since all the ecosystem repositories (if we exclude Keras) are under the same GitHub org, we can move the tickets across repos if required.
st205523
Okay, I closed the issue under ‘tensorflow/tensorflow’. Will be waiting for a response. Thank you
st205524
Hello Community,

1st) Please tell me if I am completely in the wrong place with my request/question.
2nd) My question is: what is the easiest way to set up a TF/CUDA environment on Debian Buster?

Let me describe my difficulties first. There is a very specific dependency chain when setting both up on a (Debian) Linux system, and investigating it does not always lead to success: the GCC version requested by TF, the specific GCC version CUDA works with (which can be in conflict with TF), and the standard GCC installed on the Linux system, which very often differs from both.

→ How do I find out which CUDA (with its required GCC) works with which version of TF and with the GCC version on the Linux system? And what are the traps/pitfalls I may stumble into?

There are nearly the same dependencies concerning the required/already installed Python/pip versions.

→ So, the same question applies to Python.

There is also a peculiarity regarding the way/prerequisites for running the '.run' file of CUDA. Once done wrong, it is very hard to find all the places where traces of a former setup were left, get the system cleaned up completely, and then get a second run of the CUDA '.run' installer to succeed.

Since I am not a developer, it is very hard for me to set up CUDA together with TF successfully on Debian Buster so that I have it right at hand for teaching purposes. So, which setup path, including the correct dependencies, would you suggest to be successful in this matter?

Thank you very much for any helpful response, which will be highly appreciated.

Kind regards from Switzerland, Roger
st205525
Personally I prefer to directly use or build derived images from the official Dockerhub repo: https://hub.docker.com/r/tensorflow/tensorflow/ 5
st205526
Thank you for your response. Since I am not that familiar with 'building derived images': could you please provide me with some information on how to do this? And, in the images you mentioned via the link you gave, are CUDA, Python, TensorFlow and all the other necessary stuff integrated into one image? Or do I have to run each of them in a separate container? Regards Roger
st205527
You can create derived images with the FROM directive: Docker Documentation – Best practices for writing Dockerfiles (hints, tips and guidelines for writing clean, reliable Dockerfiles).

"And, in the images you mentioned by the link you gave, is there CUDA, Python, Tensorflow and all necessary stuff integrated into its/one image?"

Yes; for GPU support use the GPU images: TensorFlow Docker | TensorFlow
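For example, a minimal derived image could look like the sketch below. The extra pip packages are just placeholders for whatever you need for teaching; everything CUDA/cuDNN-related is already baked into the base GPU image, so on the Debian host you only need the NVIDIA driver and the NVIDIA container toolkit.

```dockerfile
# Start from the official TensorFlow GPU image (CUDA and cuDNN already included).
FROM tensorflow/tensorflow:latest-gpu-jupyter

# Add whatever extra Python packages you need on top of TensorFlow.
RUN pip install --no-cache-dir matplotlib pandas scikit-learn
```

Build and run it with something like `docker build -t my-tf-gpu .` and `docker run --gpus all -p 8888:8888 my-tf-gpu`.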
st205528
In the following snippet, I construct a very simple neural network and evaluate it on some synthetic data. I don't train the network, just evaluate it with the initial weights. I'm using binary cross entropy, and I compute the loss in two ways:

1. By calling model.evaluate().
2. By calling model.predict() and manually computing the loss.

I expect to get the same result in each case, but I consistently get very different values.

```python
import keras
import numpy as np

input_dimension = 100

rng = np.random.default_rng(0)
train_xs = rng.uniform(0, 255, (1, input_dimension))
train_ys = rng.uniform(0, 1, (1, 1))

keras_model = keras.Sequential([
    keras.layers.Dense(1000, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid")
])
keras_model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.0, nesterov=False),
    loss=keras.losses.binary_crossentropy,
    metrics=["accuracy"]
)
keras_model.build(input_shape=(1, input_dimension))

y = keras_model.predict(train_xs)
loss, _ = keras_model.evaluate(train_xs, train_ys, verbose=0)

print(loss)
print(keras.losses.binary_crossentropy(train_ys, y))
```

If the activation in the hidden layer is changed from ReLU to sigmoid, the behavior goes away. What's going on?
st205529
I believe I'm closer to figuring it out. Here's a simpler case:

```python
import keras
import numpy as np
import tensorflow as tf

def binary_cross_entropy(y_hat, y):
    epsilon = 1e-7
    y_hat = np.clip(y_hat, epsilon, 1 - epsilon)
    bce = y * np.log(y_hat + epsilon)
    bce += (1 - y) * np.log(1 - y_hat + epsilon)
    return -np.mean(bce)

train_xs = np.ones((1, 1000)) * 10000
train_ys = np.zeros((1, 1))

tf.random.set_seed(2)
keras_model = keras.Sequential([
    keras.layers.Dense(1000, activation="linear"),
    keras.layers.Dense(1, activation="sigmoid")
])
keras_model.compile(
    loss=keras.losses.binary_crossentropy,
    metrics=[]
)

y = keras_model.predict(train_xs)
assert y.shape == train_ys.shape
metrics = keras_model.evaluate(train_xs, train_ys, verbose=0, return_dict=True)

print("Outputs:", y)
print(metrics["loss"])
print(keras.losses.binary_crossentropy(train_ys, y))
print(binary_cross_entropy(y, train_ys))
```

So now I'm using non-random, constant inputs, and my target is zero. I also changed the activation in the hidden layer to linear (it also works with ReLU, just not with sigmoid). I'm computing the BCE in three ways:

1. model.evaluate()
2. model.predict() and Keras's implementation of BCE
3. model.predict() and my own implementation of BCE

(2) and (3) match, but (1) gives a very different value. What surprises me is that the loss should depend only on the outputs of the model, but if I change the constant in the train_xs line from 1000 to 10000 or 100000, the value of (1) continues to increase, even though the output of the model is constant the whole time (equal to 1). For example, when the constant is 1000:

```
Outputs: [[1.]]
909.5416870117188
tf.Tensor([15.333239], shape=(1,), dtype=float32)
15.33323860168457
```

When it's 10000:

```
Outputs: [[1.]]
9095.416015625
tf.Tensor([15.333239], shape=(1,), dtype=float32)
15.33323860168457
```

Notice the output of model.evaluate() was scaled by a factor of 10, just like the inputs to the model.

This leads me to the following hypothesis. Presumably, inside model.evaluate(), the model's predictions are computed using a higher precision than what I'm getting by running model.predict() myself. Because the sigmoid exponentially decays towards 1 at infinity, when I scale the inputs by 10, this scales the inputs to the sigmoid by 10 (because I'm using linear or ReLU activations in the hidden layer), which causes the sigmoid's output to be, basically, 1 - exp(-10t) instead of 1 - exp(-t). Because BCE is logarithmic, this undoes that and we get a loss which is ten times higher. However, when I do this same calculation in (2) or (3), the output of the model is saturated at 1 and so nothing changes.

This difference might seem trivial, but I think it actually has dramatic consequences. When I train certain networks using model.fit(), which apparently uses the same higher-precision calculations that model.evaluate() is using, I get 90% test accuracy. At the lower precision of the manual model.predict() method (implemented using another library, namely JAX), in the same scenario I get 50% accuracy! I believe this discrepancy is what's ultimately behind issue #7171 I raised on the JAX issue tracker (sorry, I'm not allowed to post a link).

What I would like to know is: how does model.evaluate() compute the model's predictions and the BCE loss? I tried debugging it, but the source code is completely opaque to me, jumping through layer after layer of wrappers and compilation steps.
How can I, for example, compute my model's predictions in the same way that model.evaluate() does it, and then compute the loss in the same way that model.evaluate() does it?
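If the hypothesis above holds, recomputing the loss from the pre-sigmoid logits (which sidesteps the saturation in the float32 outputs of model.predict()) should land much closer to model.evaluate()'s number than (2) and (3) do. A rough sketch, assuming keras_model, train_xs and train_ys from the snippet above:

```python
import tensorflow as tf

# Reconstruct the pre-sigmoid logits of the output layer by hand:
# hidden activations, then (kernel, bias) of the final Dense(1, sigmoid) layer.
hidden = keras_model.layers[0](train_xs)
kernel, bias = keras_model.layers[1].get_weights()
logits = hidden.numpy() @ kernel + bias

# Let the loss apply the sigmoid internally, in a numerically stable way.
stable_bce = tf.keras.losses.binary_crossentropy(train_ys, logits, from_logits=True)
print(stable_bce.numpy())
```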
st205530
Hello together, I am currently working on training an object detection model using an SSD MobileNet v2 configuration in TensorFlow 2.5. This in general works ok, with the training finishing at around ~0.1 loss. Loading the model results in good detections with which I can work so far. The problem is: my current test cases all run on single images. In the final application this model is supposed to do the object detection live on incoming camera images. So the plan was to use a TPU and convert the trained model to TFLite so the Coral edgetpu compiler can handle it. For this to work the TFLite model needs to be 8-bit integer quantized, but the model becomes unusable after being converted to TFLite with 8-bit quantization. If my understanding of my model configuration is correct, I even used quantization aware training to reduce the model accuracy loss, but it still just results in garbage detections. I use the following configuration for an SSD MobileNet v2:

```
model {
  ssd {
    inplace_batchnorm_update: true
    freeze_batchnorm: false
    num_classes: 3
    box_coder {
      faster_rcnn_box_coder { y_scale: 10.0 x_scale: 10.0 height_scale: 5.0 width_scale: 5.0 }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator { iou_similarity { } }
    encode_background_as_zeros: true
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer { height: 514 width: 614 }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        class_prediction_bias_init: -4.6
        conv_hyperparams {
          activation: RELU_6,
          regularizer { l2_regularizer { weight: 0.00004 } }
          initializer { random_normal_initializer { stddev: 0.01 mean: 0.0 } }
          batch_norm { train: true, scale: true, center: true, decay: 0.97, epsilon: 0.001, }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v2_keras'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer { l2_regularizer { weight: 0.00004 } }
        initializer { truncated_normal_initializer { stddev: 0.03 mean: 0.0 } }
        batch_norm { train: true, scale: true, center: true, decay: 0.97, epsilon: 0.001, }
      }
      override_base_feature_extractor_hyperparams: true
    }
    loss {
      classification_loss { weighted_sigmoid_focal { alpha: 0.75, gamma: 2.0 } }
      localization_loss { weighted_smooth_l1 { delta: 1.0 } }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    normalize_loc_loss_by_codesize: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 5
        max_total_detections: 15
      }
      score_converter: SIGMOID
    }
  }
}
train_config: {
  batch_size: 8
  sync_replicas: true
  startup_delay_steps: 0
  replicas_to_aggregate: 8
  num_steps: 40000
  data_augmentation_options { random_horizontal_flip { } }
  data_augmentation_options {
    random_crop_image {
      min_object_covered: 0.0
      min_aspect_ratio: 0.75
      max_aspect_ratio: 3.0
      min_area: 0.75
      max_area: 1.0
      overlap_thresh: 0.0
    }
  }
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: .13
          total_steps: 40000
          warmup_learning_rate: .026666
          warmup_steps: 1000
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
}
train_input_reader: {
  label_map_path: "...\\label_map.pbtxt"
  tf_record_input_reader { input_path: "...\\training.tfrecord" }
}
eval_config: {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
}
eval_input_reader: {
  label_map_path: "...\\label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader { input_path: "...\\eval.tfrecord" }
}
graph_rewriter {
  quantization {
    delay: 1000
    weight_bits: 8
    activation_bits: 8
  }
}
```

I'd like to show some images in here but the forum won't let me; I will try to figure out a way to post some example images. As a description of what you'd see: the TFLite model creates around 5 detections with ~50% confidence and horrible bounding boxes (2-3x too large, with the center of the object somewhere on the border line), sometimes even with the wrong label. Meanwhile the original model is always 100% spot on, with at least 99% confidence and perfectly fitting bounding boxes (max 2-3 pixels off).

I am fully aware that a quantized model will never reach the same accuracy as a float model, but the TensorFlow documentation led me to believe that the accuracy loss should be somewhere around <3%. While I do not know how the term "accuracy loss" is defined there, I would not think that my results can be described as a <3% accuracy loss.

To complete the description of my workflow: I convert the model from checkpoints to SavedModel format using the script export_tflite_graph_tf2.py from the Object Detection API. After this I use the following Python script to load the model, convert it and save it:

```python
def representative_dataset_gen():
    data = tf_training_helper.load_images_in_folder_to_numpy_array(IMAGE_BASE_PATH)
    (count, x, y, c) = data.shape
    for i in range(count):
        yield [data[i, :, :, :].reshape(1, x, y, c).astype(np.float32)]

input_data = tf_training_helper.load_image_into_numpy_array(IMAGE_PATHS)

converter = tf.lite.TFLiteConverter.from_saved_model(MODEL_PATH)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.target_spec.supported_types = [tf.int8]
tflite_model = converter.convert()
print("model conversion finished. Starting validation... ")

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.set_tensor(input_details[0]['index'], input_data.astype(np.uint8))
interpreter.invoke()
out_boxes = interpreter.get_tensor(output_details[0]['index'])
out_classes = interpreter.get_tensor(output_details[1]['index'])
out_scores = interpreter.get_tensor(output_details[2]['index'])

with tf.io.gfile.GFile(MODEL_SAVE_PATH, 'wb') as f:
    f.write(tflite_model)
```

(I redacted some unnecessary parts to make it a bit shorter, e.g. path definitions.)

If you have any idea how this loss in accuracy can be prevented, or even just why it is there, please let me know. Any ideas or suggestions are welcome (e.g. is SSD MobileNet the correct thing to go for? Are other models more robust to conversion?). Please also let me know if I missed adding any information; I'd be happy to provide it.

Have a great weekend, Cheers Georg
st205531
Hello, how exactly did you perform QAT? Did you fine-tune the model or train it from scratch? Fine-tuning is better. Are you quantizing all layers in the model? There might be operations in the first layers that, once quantized, make the model's generalization performance decrease. I think that once you have done QAT, you don't need a representative dataset to calibrate the range of the activations; just run the converter with `converter.optimizations = [tf.lite.Optimize.DEFAULT]`, like it is done in the docs: Quantization aware training comprehensive guide. You can check the model layer details in the Netron app; I use it to debug and see if the layer inputs/outputs and weights make sense after quantization.
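For reference, the conversion pattern in that QAT guide is roughly the following. This is only a sketch: the tiny stand-in model is a placeholder for your actual quantization-aware-trained Keras model, it assumes the tensorflow-model-optimization package, and note that this Keras/TFMOT flow is a different mechanism from the Object Detection API's graph_rewriter-based QAT.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Tiny stand-in model; in practice this would be your fine-tuned QAT model.
base = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(4,), activation="relu"),
    tf.keras.layers.Dense(1),
])
quant_aware_model = tfmot.quantization.keras.quantize_model(base)

# Convert without a representative dataset: the fake-quant nodes already carry
# the activation ranges learned during quantization-aware training.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
```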
st205532
Hey, thank you for your thoughts on this!

To 1: As far as I am aware, QAT (assuming this means quantization aware training) is done solely by adding the following to the training config at the bottom:

```
graph_rewriter {
  quantization {
    delay: 1000
    weight_bits: 8
    activation_bits: 8
  }
}
```

which should activate the quantization effects after the first 1000 steps of the training run. I do not know if or how I could tell whether all layers are quantized by this, as I was not able to find much information about it (if you can point me to a good read on this topic, I'd gladly take it). I train the model from scratch, as my application is really narrow and other detections are not needed/unwanted. From what I can tell, fine-tuning should have nothing to do with this, no? In the end, the model itself works perfectly before TFLite conversion.

To 2: Yes, that was my thought as well, but as far as I know the full quantization process needed for the Coral edgetpu-compiler is only achieved with this configuration:

```python
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.target_spec.supported_types = [tf.int8]
```

This, however, fails with an error demanding a representative dataset if none is provided. Following the advice you gave, I ran a conversion test using only `converter.optimizations = [tf.lite.Optimize.DEFAULT]` instead. Sadly this leads to even worse results, with the highest confidence being 21%, with the wrong label and a bounding box that is not even close to the object.

Thank you for pointing me towards Netron. The network setup seems valid to me, and the in-between tensor sizes could be valid as well. As for the weights, I am not able to see them in the TFLite model for some reason. The only thing I can confirm from this is that the quantized TFLite model from my original post is indeed quantized, as all factors in all layers seem to be integers and the input and output each run through a (de)quantization block.

Do you think that the workflow I use is in general not suited to achieve the desired result? I read some GitHub issues suggesting that this should work. Cheers
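One way to narrow down where the accuracy disappears is to convert the same SavedModel twice, once without quantization and once fully quantized, and run both TFLite models on the same image: if the float TFLite model already produces garbage, the problem is in the export rather than in the 8-bit quantization itself. A rough sketch, where MODEL_PATH and the image array are placeholders and each interpreter's input details should be checked for the expected dtype/scaling:

```python
import numpy as np
import tensorflow as tf

MODEL_PATH = "exported_model/saved_model"               # placeholder path
image = np.zeros((1, 514, 614, 3), dtype=np.float32)    # placeholder: one real camera frame

def run_tflite(model_content, data):
    interpreter = tf.lite.Interpreter(model_content=model_content)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    print("input expects:", inp["dtype"], inp["quantization"])
    interpreter.set_tensor(inp["index"], data.astype(inp["dtype"]))
    interpreter.invoke()
    return [interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()]

# Float32 conversion (no quantization) as a reference point.
float_model = tf.lite.TFLiteConverter.from_saved_model(MODEL_PATH).convert()
float_outputs = run_tflite(float_model, image)

# Then compare against the int8 `tflite_model` produced by the script above:
# quant_outputs = run_tflite(tflite_model, image)
```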
st205533
System information

OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
TensorFlow installed from (source or binary): Colab
TensorFlow version (use command below): 2.5.0
Python version: Python 3.7
GPU model and memory: Tesla T4

Error

TypeError: Input 'y' of 'Sub' Op has type float16 that does not match type float32 of argument 'x'

Current behaviour

While using mixed precision and building a Keras Functional API model (EfficientNetB0), it shows the above error (screenshots attached in the original post).

Describe the expected behaviour

The global policy I set in the previous cell was mixed_float16. The code works fine when running on TensorFlow 2.4.1, so the bug is with TensorFlow 2.5.0. You can reproduce the same error using this notebook: colab.research.google.com (Google Colaboratory)
st205534
Can you post the code in the screenshot? It seems that the Colab is a more complex and slower example to reproduce.
st205535
For some reason, I can’t include images or links in this reply. The above screenshots are enough to get the gist of the problem, but if still confused, please check out the GitHub Repo/Issues of Tensorflow. I’ve reported the same issue there too.
st205536
You can include inline code in the reply. From the screenshot I see that there are just a few lines.
st205537
Code used when I set the global policy:

```python
from tensorflow.keras import mixed_precision
mixed_precision.set_global_policy(policy='mixed_float16')
```

And the dataset I used was in float32.
st205538
Code used for building the model:

```python
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Create base model
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False  # freeze base model layers

# Create Functional model
inputs = layers.Input(shape=input_shape, name="input_layer")
# Note: EfficientNetBX models have rescaling built-in, but if your model didn't you could have a layer like below
# x = preprocessing.Rescaling(1./255)(x)
x = base_model(inputs, training=False)  # set base_model to inference mode only
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
x = layers.Dense(len(class_names))(x)  # want one output neuron per class

# Separate activation of output layer so we can output float32 activations
outputs = layers.Activation("softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Compile the model
model.compile(loss="sparse_categorical_crossentropy",  # use sparse_categorical_crossentropy when labels are *not* one-hot
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

model.summary()
```
st205539
You can simply reproduce this with 3 lines:

```python
import tensorflow as tf
tf.keras.mixed_precision.set_global_policy('mixed_float16')
model = tf.keras.applications.EfficientNetB0()
```

I think that there is an issue with the autocasting in the preprocessing normalization layer. You could try to open a bug on GitHub.
st205540
I was working with mixed precision earlier today and things seemed to be working smoothly, but when I tried to run the same block of code again it threw an error (see the code and error screenshots). The TensorFlow version I am running is 2.5.0; downgrading to version 2.4.1 works fine. Any help with this? Also, reading some of the old threads, it seems this had been fixed after a version update.
st205541
The ticket is: github.com/tensorflow/tensorflow — "Dtype error when using Mixed Precision and building EfficientNetB0 Model" (opened May 26, 2021 by gauravreddy08; labels TF 2.5, comp:keras, type:bug). It reports the same TypeError: Input 'y' of 'Sub' Op has type float16 that does not match type float32 of argument 'x' when building EfficientNetB0 under a mixed_float16 global policy on TF 2.5.0 (works on 2.4.1), with a Colab notebook to reproduce.
st205542
So does it mean there is an issue with the EfficientNetB0 model? Because just now I built a ResNet101 model with mixed precision and it works fine. (screenshot attached)
st205543
I am facing the same issue on my custom model while using mixed precision. I am using TF 2.5.0. This issue is visible on Windows and Ubuntu.

```python
x = self.conv4(y1) + self.conv3(y2)
```

Here x, y1, y2 have dtype = float16; self.conv3 and self.conv4 are mixed-precision layers.
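While the upstream bug is open, a common workaround for dtype mismatches like this is to cast the offending branch to a common dtype explicitly before combining. A sketch, assuming a subclassed layer similar to the one described above:

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

class Block(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        self.conv3 = tf.keras.layers.Conv2D(32, 3, padding="same")
        self.conv4 = tf.keras.layers.Conv2D(32, 3, padding="same")

    def call(self, y1, y2):
        a = self.conv4(y1)
        b = self.conv3(y2)
        # Under mixed precision both branches should already be float16, but if one
        # comes back float32, casting to a common compute dtype avoids the Add/Sub error.
        return a + tf.cast(b, a.dtype)

block = Block()
out = block(tf.zeros((1, 8, 8, 3)), tf.zeros((1, 8, 8, 3)))
print(out.dtype)  # float16
```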
st205544
Hi all, what is the best way to learn pure TensorFlow without using any libraries? Also, if there are research papers or projects that used this approach, please mention them. Thank you.
st205545
Hi there, Before I provide a good source to learn TensorFlow code-first, I want to ask why exactly the constraint of not using any libraries? Other than that, there is a beautiful TensorFlow course that teaches in a code-first approach, meaning you learn by doing. I think this is a perfect way to be introduced to this topic. https://dev.mrdbourke.com/tensorflow-deep-learning/ 11 It’s written by Daniel Bourke, who is sometimes also roaming these forums Hope this helps!
st205546
I'm also curious to understand the "without using any libraries". Another good start is the main documentation: Tutorials | TensorFlow Core — it goes from the basics to more advanced uses.
st205547
When you say "without using any libraries", are you referring to the low-level API (no Keras)? If that's the case, you can always build things using plain Python and construct models with tf.Module | TensorFlow Core v2.5.0. For example, this is the code to implement a simple conv layer without Keras.

Weights initialization:

```python
def weights(name, shape, mean=0.0, stddev=0.02):
    var = tf.Variable(tf.random_normal_initializer(mean=mean, stddev=stddev)(shape), name=name)
    return var
```

Layer:

```python
class Conv2D(tf.Module):
    def __init__(self, out_feats=8, ks=3, use_relu=False, name=None):
        super(Conv2D, self).__init__(name=name)
        self.use_relu = use_relu
        self.out_feats = out_feats
        self.ks = ks

    @tf.Module.with_name_scope
    def __call__(self, input):
        # Lazily create the kernel on first call, once the input depth is known.
        if not hasattr(self, 'weights'):
            self.weights = weights('weights', (self.ks, self.ks, input.get_shape()[-1], self.out_feats))
        conv = tf.nn.conv2d(input, self.weights, strides=[1, 1, 1, 1], padding='SAME')
        if self.use_relu:
            output = tf.nn.relu(conv)
        else:
            output = conv
        return output
```
st205548
Hi all! I have to implement a CNN on multi-dimensional signal data whose shape is somehow similar to that of a colorful image: each of my data points has a dimension of [65 * 1 * 1100] (the analogy with a colored image is that instead of 3 color channels I have 65 channels, and for (length * width) of an image I have (1 * 1100)). These data are currently in Python list form, because I first extract and convert them to Python lists from a raw CSV file. I just wanted to know how exactly they can be converted from a list to a form suitable for TensorFlow, so that tasks like the train/test split can be done easily. I know this seems to be a silly problem, but I'm relatively new to TF and I really don't know what to do here. Any help is really appreciated!
st205549
Hello, you can check the tf.data.Dataset documentation here: tf.data.Dataset | TensorFlow Core v2.5.0. If you want to build a TensorFlow dataset from a list, you can use:

```python
data = tf.data.Dataset.from_tensor_slices(your_list)
```

Once you have this dataset, you can apply any other tf.data.Dataset methods and pass the dataset to your model, either with model.fit or by looping through the elements of the dataset like in the documentation example (in that case you will need to create your custom training loop and training/test step).
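Since the question also mentions the train/test split, a minimal sketch of going from a Python list of [65, 1, 1100] samples (plus labels) to shuffled, batched train/test pipelines might look like this; the dummy data, the 80/20 split and the batch size are just placeholders:

```python
import numpy as np
import tensorflow as tf

# Placeholders: 100 samples of shape (65, 1, 1100) and binary labels.
samples = [np.zeros((65, 1, 1100), dtype=np.float32) for _ in range(100)]
labels = [0] * 50 + [1] * 50

xs = np.stack(samples)          # shape (100, 65, 1, 1100)
ys = np.array(labels)

dataset = tf.data.Dataset.from_tensor_slices((xs, ys)).shuffle(len(xs), seed=0)

train_size = int(0.8 * len(xs))
train_ds = dataset.take(train_size).batch(16)
test_ds = dataset.skip(train_size).batch(16)
```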
st205550
I need to deploy a RandomForestModel regression model, then consume it. I have made a zip file of the entire folder containing the trained RandomForestModel regression model for deployment. In a separate notebook, I unzipped and loaded the model:

```python
loadedModel = tf.saved_model.load('/RF_model')
```

With the loaded model, I need to make predictions on a new dataset. However, I can't find any information about which methods I have to use. Any help is greatly appreciated.
st205551
Hi Frank, you can try something like this:

```python
reloaded_model = tf.saved_model.load('./my_model')
result = reloaded_model(your_data)
```
st205552
Hi, adding some more details on top of @lgusm's reply. In TensorFlow/Keras, there are two ways to load and use a model:

The SavedModel API: tf.saved_model.load('./my_model').
The Keras API: tf.keras.models.load_model('./my_model').

The Keras API calls the SavedModel API and adds some extra utility functions. If you can choose, I would recommend using the Keras API. Once a model is loaded, you can use it as follows:

```python
# Generate some predictions. Works with all APIs.
# dataset can be a tensor, array, structure of tensors or structure of arrays.
# All tensors need to be of rank >= 2.
predictions = model(dataset)

# Generate some predictions. Works only with the Keras API.
# dataset can be anything as before. Also supports dataset being a tf.data.Dataset.
# There are no constraints on the tensor ranks.
predictions = model.predict(dataset)

# Evaluate the model.
metrics = model.evaluate(dataset)
```

Cheers, M.
st205553
I am trying to copy the weights from a PyTorch model to a TF2 model. I can successfully run inference, and I output the feature map to have a look. However, the feature map after a Conv2D has black bars on the top and left-hand side of the map (i-stack-imgur-com/E45K6.png). The PyTorch feature map does not have these black bars.

The PyTorch conv2d is:

```python
nn.Conv2d(1, 56, kernel_size=5, padding=5//2)
```

The TensorFlow layer is built as:

```python
Conv2D(56, 5, padding="same")(inp)
```

Copying the weights from the PyTorch model to TF2:

```python
onnx_1_w_num = onnx_l.weight.data.permute(2, 3, 1, 0).numpy()
onnx_1_b_num = onnx_l.bias.data.numpy()
tf_l.set_weights([onnx_1_w_num, onnx_1_b_num])
```

What is wrong? Thanks.
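One thing worth ruling out is a padding mismatch between the two frameworks. A sketch of making the TF padding explicit and symmetric, so it matches PyTorch's padding=2 exactly (assumes channels-last input):

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(None, None, 1))
# Pad 2 pixels on every side explicitly, then convolve with 'valid' padding.
# With stride 1 and kernel 5 this matches PyTorch's padding=2. If the black bars
# disappear, padding was the culprit; if not (likely here, since with stride 1
# 'same' padding is already symmetric), look at the weight permutation and the
# channels-first vs channels-last data layout instead.
x = tf.keras.layers.ZeroPadding2D(padding=2)(inp)
out = tf.keras.layers.Conv2D(56, 5, padding="valid")(x)
model = tf.keras.Model(inp, out)
```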
st205554
We're having difficulty identifying which object or parameter we could use to deliver a URL (e.g., the chatbot transcript) to our user at the end of a chatbot session. Does anyone have any suggestions? What we've tried so far just overlays the chatbot window we've created in Python. Example from our JSON structure:

```json
{"intents": [
  {"tag": "greeting",
   "patterns": ["Hi", "How are you", "Is anyone there?", "Hello", "Good day", "Whats up"],
   "responses": ["Hello!", "Good to see you again!", "Hi there, how can I help?"],
   "context_set": ""
  },
  {"tag": "goodbye",
   "patterns": ["cya", "See you later", "Goodbye", "I am Leaving", "Have a Good day"],
   "responses": ["Sad to see you go :(", "Talk to you later", "Goodbye!"],
   "context_set": ""
  },
```
st205555
Hi Dave, I didn't understand your question; can you clarify a little bit? That would make it easier for the community to help.
st205556
Hi! I’m a beginner when it comes to TensorFlow and machine learning in general, and I want to try and create a model that can syllabify words, for example: Input data: 'syllable', 'tensorflow' Output data: 'syl-la-ble', 'ten-sor-flow' I think that a Recurrent Neural Network with LSTM would be most appropriate for this, but I am unsure as to what the model’s structure would be because the output (syllabified words) would be variable-length based on the length of the word that is inputted, and I have not encountered anything like this in the course of my self-study.
st205557
To get an initial idea, you can look at an older reference implementation of language-agnostic syllabification with neural sequence labeling: github.com jacobkrantz/lstm-syllabify (breaks a word into syllables using an LSTM-based neural network). A minimal sketch of that sequence-labeling framing is below.
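For illustration only (not taken from that repository), one way to frame syllabification so the output length matches the input length is to predict, for each character, whether a syllable boundary follows it. A minimal Keras sketch, where the vocabulary size and sequence length are placeholder values:

import tensorflow as tf

vocab_size = 64   # number of distinct characters + padding/OOV (placeholder)
max_len = 30      # longest word after padding (placeholder)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, 32, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    # Per-character probability that a hyphen should be inserted after this character.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

Because the label sequence has the same length as the input word, this sidesteps the variable-length output problem that a full encoder-decoder model would have.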
st205558
I'm working on a Transformer based model and I followed the great example of the positional encoding from: Since the original implementation relies heavily on NumPy, I've created a pure TF variation that runs 100 times faster. Hope it helps others too:

import tensorflow as tf

def get_angles(pos, i, d_model):
    return pos * (1 / tf.math.pow(10000.0, (2 * (i // 2)) / d_model))

@tf.function
def positional_encoding(pos, d_model):
    angle_rads = get_angles(tf.range(pos, dtype=tf.float32)[:, tf.newaxis],
                            tf.range(d_model, dtype=tf.float32)[tf.newaxis, :],
                            d_model)
    return tf.reshape(tf.concat([tf.expand_dims(tf.sin(angle_rads[:, ::2]), axis=-1),
                                 tf.expand_dims(tf.cos(angle_rads[:, 1::2]), axis=-1)],
                                axis=-1),
                      [1, pos, -1])

n, d = 2048, 512
timeit pos_encoding = positional_encoding(n, d)
137 µs ± 2.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
st205559
Sorry, I omitted the link path (text/tutorials/transformer). And the original timing of the tutorial implementation is:

timeit pos_encoding = positional_encoding(n, d)
17.8 ms ± 582 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
st205560
I am getting this error when executing my code: AttributeError: module 'tensorflow.core.framework.types_pb2'. Please help me.
st205561
Deep Learning tuning and optimization. Yesterday my TensorFlow setup was working, but today the TensorFlow library is not running. I am working on this code for "Deep Learning tuning and optimization".
st205562
TF version 2.5. Running import tensorflow as tf via Jupyter shows this error:

AttributeError                            Traceback (most recent call last)
----> 1 import tensorflow as tf

...\site-packages\tensorflow\__init__.py
---> 41 from tensorflow.python.tools import module_util as _module_util

...\site-packages\tensorflow\python\__init__.py
---> 46 from tensorflow.python import data

(the import chain continues through tensorflow.python.data -> data.experimental -> experimental.service ->
 data_service_ops -> compression_ops -> data.util.structure -> data.util.nest -> framework.sparse_tensor ->
 framework.constant_op -> eager.execute -> framework.ops)

...\site-packages\tensorflow\python\framework\ops.py
---> 54 from tensorflow.python.framework import cpp_shape_inference_pb2

...\site-packages\tensorflow\python\framework\cpp_shape_inference_pb2.py
--> 190 _CPPSHAPEINFERENCERESULT_HANDLESHAPEANDTYPE.fields_by_name['specialized_type'].enum_type = tensorflow_dot_core_dot_framework_dot_types__pb2._SPECIALIZEDTYPE

AttributeError: module 'tensorflow.core.framework.types_pb2' has no attribute '_SPECIALIZEDTYPE'
st205563
Thank you very much, my TensorFlow is now working, but I am getting another error:

AttributeError                            Traceback (most recent call last)
      2 tf.compat.v1.ConfigProto
      3 tf.executing_eagerly()
----> 4 he_init = tf.contrib.layers.variance_scaling_initializer()
      5 hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, kernel_initializer=he_init, name="hidden1")

AttributeError: module 'tensorflow.compat.v1' has no attribute 'contrib'
st205564
tf.contrib was removed. You can use tf.keras.initializers.VarianceScaling (TensorFlow Core v2.5.0) instead: an initializer capable of adapting its scale to the shape of weights tensors.
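For example, a rough TF2 equivalent of the failing lines above (a sketch only; the 300 units stand in for n_hidden1, and the old contrib defaults of scale 2.0, fan-in mode and a truncated normal distribution are spelled out explicitly):

import tensorflow as tf

# Replacement for tf.contrib.layers.variance_scaling_initializer() (He initialization).
he_init = tf.keras.initializers.VarianceScaling(
    scale=2.0, mode="fan_in", distribution="truncated_normal")

# tf.layers.dense is also gone in TF2; tf.keras.layers.Dense is the replacement.
hidden1 = tf.keras.layers.Dense(
    300, activation="relu", kernel_initializer=he_init, name="hidden1")

tf.keras.initializers.HeNormal() (or the string "he_normal") is essentially a shorthand for the same configuration.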
st205565
from tensorflow.examples.tutorials.mnist import input_data
#from tensorflow.keras.datasets.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")

This code gives an error in TensorFlow 2.5:

ModuleNotFoundError                       Traceback (most recent call last)
----> 1 from tensorflow.examples.tutorials.mnist import input_data
      2 #from tensorflow.keras.datasets.mnist import input_data
      3 mnist = input_data.read_data_sets("/tmp/data/")

...\site-packages\tensorflow\examples\tutorials\mnist\__init__.py
---> 21 from tensorflow.examples.tutorials.mnist import input_data

...\site-packages\tensorflow\examples\tutorials\mnist\input_data.py
---> 29 from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
st205566
from tensorflow.examples.tutorials.mnist import input_data I don’t think we have this in TF 2.5.
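If the goal is simply to get MNIST, tf.keras.datasets is the usual replacement in TF 2.x (a sketch; the flattening/normalization steps are optional and only mimic what the old helper did):

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Optional: flatten to 784-dim vectors and scale to [0, 1].
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0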
st205567
# for each observation in the training set, as test sample
# initialize a dataframe to save all the test run results
df_loocv = pd.DataFrame()

for index, row in df_iris.iterrows():
    # save the current row in the processed df_loocv
    df_loocv = df_loocv.append(df_iris.iloc[index: index+1])

    # drop the current row chosen as test sample
    # save it in temp df
    df_iris_trim = df_iris.drop(index)

    test_sepal_length = float(df_iris.loc[index, ['sepal_length']])
    test_sepal_width = float(df_iris.loc[index, ['sepal_width']])
    test_petal_length = float(df_iris.loc[index, ['petal_length']])
    test_petal_width = float(df_iris.loc[index, ['petal_width']])

    # for each row in the dataframe, calculate the distance
    for index1, row1 in df_iris_trim.iterrows():
        eucDist = sqrt(((test_sepal_length - float(row1['sepal_length'])) ** 2 +
                        (test_sepal_width - float(row1['sepal_width'])) ** 2 +
                        (test_petal_length - float(row1['petal_length'])) ** 2 +
                        (test_petal_width - float(row1['petal_width'])) ** 2))
        df_iris_trim.loc[index1, 'distance'] = eucDist

    # sort on distance, ascending.
    df_iris_trim.sort_values('distance', ascending=True, inplace=True)

    # select the first K rows, into a new df
    K = int(K1)
    df_iris_trim_K = df_iris_trim.iloc[0:K, :]

    # The resulting object will be in descending order so that the first element
    # is the most frequently-occurring element. Excludes NA values by default.
    df_iris_trim_K_grouped = df_iris_trim_K['class'].value_counts()

    # get the first index of the resulting pandas series above (value_counts)
    pred_class = df_iris_trim_K_grouped.index[0]

    # save the predicted class in the test data frame
    df_loocv.loc[index, 'pred_class'] = pred_class

I am getting this error for sepal_length and sepal_width: KeyError: "None of [Index(['sepal_length'], dtype='object')] are in the [index]"
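Not from the thread, but this KeyError usually means the label passed to .loc does not exactly match the DataFrame's column names (or the row index after dropping rows). A quick check along those lines:

# Confirm the exact column names (spelling, case, separators) and index values.
print(df_iris.columns.tolist())
print(df_iris.index[:5])

# If the CSV header uses different names (e.g. 'sepal.length'), rename them once:
# df_iris = df_iris.rename(columns={'sepal.length': 'sepal_length'})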
st205568
I'm using TensorFlow 1.15 on Anaconda. However, the warning "The TensorFlow contrib module will not be included in TensorFlow 2.0." occurs. Why? I'm not using TensorFlow 2.0. Is there any reason it does that? Thanks
st205569
It’s just a warning message that informs you that the tf.contrib module won’t be included in the next version of TensorFlow (the first 2.0 release) and you should upgrade your code for making it work with the new version. If you don’t plan to upgrade it, you can safely ignore that warning
st205570
What is the num_steps variable in the pipeline.config file in TFOD? I understand epochs and batch_size from pure TF, but this num_steps variable confuses me. EDIT: I think I found the answer to this. num_steps sets how many training steps will be run with the given batch_size. A batch_size of 100 and num_steps set to 50 is equal to processing 5000 images in total. Thanks for any help!
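In other words (a small illustration of the arithmetic; the dataset size is a made-up number):

batch_size = 100        # train_config.batch_size in pipeline.config
num_steps = 50          # train_config.num_steps in pipeline.config
dataset_size = 1000     # hypothetical number of training images

examples_seen = batch_size * num_steps        # 5000 images processed in total
approx_epochs = examples_seen / dataset_size  # roughly 5 passes over the dataset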
st205571
Hi folks, I am delighted to share my latest example on keras.io - Self-supervised contrastive learning with SimSiam: keras.io Keras documentation: Self-supervised contrastive learning with SimSiam. This one introduces you to the world of self-supervised learning for computer vision and at the same time walks you through a decent self-supervised learning method (SimSiam) for the field. While self-supervision has been predominant in the NLP world for a long time, it was only in the past year that these methods showed real progress for vision systems. We have evidence that these methods can often beat their supervised counterparts, while their pre-training does not require any labeled data (though it does require far more inductive priors). Happy to address any feedback on the post.
st205572
Thank you for posting this tutorial! I’ve been trying to use it as a baseline for something I’m working on and I am having trouble reproducing the original paper’s results with the example. In the supplemental section D of the Simsiam paper, they report their results on Cifar10 after training for what I presume is 100 epochs. By carefully looking at your code and the paper they seem to be the same besides a few hyperparameter changes and the data augmentation. I added a true random cropping function to the code you provided and have tried training for up to 800 epochs. Despite all this, I never get a linear evaluation higher than 35%. Any idea what else might be missing in the tutorial?
st205573
Please note that Keras Examples are primarily meant for demonstrating workflows as noted here 12. I added a true random cropping function to the code you provided Not sure what you meant by true random cropping function. Is your backbone architecture the same as the paper? The original one is ResNet-18. Even if that is the case, you’d need to ensure it’s following the same hyperparameters (like zero bias in the BatchNorm layers before the identity, etc.). Following what’s noted under the supplementary A. Implementation Details section is important here barring the CIFAR-10 specific changes.
st205574
Thank you for your quick reply. I am well aware that tutorials are not meant to be benchmark implementations but I was surprised that it did not train usable representations at all. By true random cropping I mean a random crop and resize as mentioned in the paper. This crops a random percentage of the original image, given as hyperparameters, and then resizes the cropped image to the original size. “tf.image.random_crop” as used in the tutorial, randomly crops an image at the size given, since we’re just cropping a 32x32 image to 32x32 it is effectively doing nothing. I thought that the lack of cropping was causing the tutorial code to not work, but fixing this did not improve the performance. I did not match the backbone from the paper and their hyperparameters. I used what you used in the tutorial as it is very close. I’ll try changing the backbone and maybe it’ll help but I’d be very surprised if a slightly different backbone changes the linear transfer score from 35% to the 90% reported in the paper. It would also be pretty damning for SimSiam if it turned out the backbone was the primary contributor to benchmark success. Thanks for any help. I’m just trying to get a clean, simple, implementation that performs roughly similar to the paper and hopefully find a few tweaks that could allow a person to get a usable trained model from the tutorial.
st205575
pbontrager: Thank you for your quick reply. I am well aware that tutorials are not meant to be benchmark implementations but I was surprised that it did not train usable representations at all. Wanted to reiterate one of the statements regarding these examples again: NOTE THAT THIS COMMAND WILL ERROR OUT IF ANY CELLS TAKES TOO LONG TO EXECUTE. In that case, make your code lighter/faster. Remember that examples are meant to demonstrate workflows, not train state-of-the-art models. They should stay very lightweight. So, in the case of this example, we get to 35% which is better than random chance (10%). But I do understand your point and I will try my best to suggest from my experience (that’s the whole purpose of having a discussion forum, isn’t it :)). pbontrager: By true random cropping I mean a random crop and resize as mentioned in the paper. This crops a random percentage of the original image, given as hyperparameters, and then resizes the cropped image to the original size. “tf.image.random_crop” as used in the tutorial, randomly crops an image at the size given, since we’re just cropping a 32x32 image to 32x32 it is effectively doing nothing. I thought that the lack of cropping was causing the tutorial code to not work, but fixing this did not improve the performance. Random-resized crops. You are right. I should have included a note about this in the tutorial itself. I was a bit worried about the lines of code. But refer to this implementation 6 and see if this works. It’s a bit different from PyTorch’s RandomResizedCrop() but it has resulted well in the other experiments I have performed in this area. pbontrager: I did not match the backbone from the paper and their hyperparameters. I used what you used in the tutorial as it is very close. I’ll try changing the backbone and maybe it’ll help but I’d be very surprised if a slightly different backbone changes the linear transfer score from 35% to the 90% reported in the paper. It would also be pretty damning for SimSiam if it turned out the backbone was the primary contributor to benchmark success. I agree here. But what happens is the relation between your dataset’s complexity and architectural complexity matters. In some methods like SimCLR 1, this effect is very pronounced. But in methods like Barlow Twins, it’s lesser. You can find an implementation of Barlow Twins here 3 and within just 100 epochs of pre-training, I was able to get to 62.61% in terms of linear evaluation on CIFAR-10. pbontrager: Thanks for any help. I’m just trying to get a clean, simple, implementation that performs roughly similar to the paper and hopefully find a few tweaks that could allow a person to get a usable trained model from the tutorial. I appreciate this so much. The field is relatively new, so as contributors we wanted to get the workflow out as soon as possible without missing out on the important bits. So, I am here to suggest anything I can to better your results. What you are currently running into (poor performance) is likely because of a phenomenon called representation collapse. This is when your backbone predicts the same output for a given image and self-supervised methods for vision easily latch into this problem. More on this later. One thing to note this, for linear evaluation, often a separate configuration of hyperparameters and augmentation pipeline are incorporated. For SimSiam’s case, they are specified under the A. Implementation Details section. Unfortunately, they did not state these for CIFAR-10. 
So, this leaves us fair ground for further experimentation. At this point, you'd be totally correct to think that self-supervision (for computer vision) is often sensitive to hyperparameters. But methods like Barlow Twins and VICReg help to eliminate that to some extent. They also gracefully mitigate the problem of representation collapse (DINO as well). I hope this helps.
st205576
I’ve not tested this personally but you can check: github.com PaperCodeReview/SimSiam-TF 7 TF 2.x implementation of SimSiam (Exploring Simple Siamese Representation Learning, CVPR 2021)
st205577
Thank you for this reply. I’ll have time to experiment with this more this week. For context, I am running the code outside of a notebook on my own machine so I can train 100 epochs pretty quick. I train the model, save it, and then load into an eval script that uses the code from the tutorial to train on linear evaluation but allows me to use different hyperparameters. I have also made my own random crop and resize using TensorFlow’s sampled_distorted_bounding_box. def random_crop_and_resize(image, size, area_range): # Crop randomly smaller and resize to size begin, shape, _ = tf.image.sample_distorted_bounding_box( tf.shape(image), [[[0.0, 0.0, 1.0, 1.0]]], area_range=area_range) image = tf.image.crop_to_bounding_box(image, begin[0], begin[1], shape[0], shape[1]) image = tf.image.resize(image, (size, size)) return image Thanks for the Barlow Twins recommendation, the paper seemed neat but I didn’t realize that it is empirically less prone to representation collapse. Currently, I need SimSiam for some work I want to do extending it, so getting my own working version of it that I understand is important. I also believe that this can shed some light on how important different aspects of the core algorithm are vs training tricks. I hadn’t realized from any of the papers that any augmentations were used during the linear evaluation training step. I will experiment with this a bit more. I’ll also inspect the output representations from training to see if they’ve collapsed to a fixed output. I’ll post here if the tuning is successful or if I get stuck again. Thank you.
st205578
Thanks, I’m trying to build my own example right now with the minimal important parts so that’s why I started with the tutorial. But I’ll inspect this soon to see if there are any important parts the keras code is missing.
st205579
Sure. It will only make the knowledge interchange better. I would also manually inspect the outputs produced by random_crop_and_resize() (in case you haven’t already). pbontrager: also believe that this can shed some light on how important different aspects of the core algorithm are vs training tricks. From most of the papers, the training tricks are important. But I see your point.
st205580
Hi, could you figure it out? I am facing the same issue. The representations are collapsing.
st205581
@Arnab_Mondal check the suggestions I shared in my comments. They are likely gonna help with that. For SimSiam the following factors collectively contribute to preventing the representation collapse: stop_gradient() Augmentations The specific use of Batch Norm The use of an Autoencoder like predictor network All of these are reflected in the initial example I posted barring augmentations and the exact number of dense units in the predictor. As I mentioned previously in my comments, I am happy to help here with all the suggestions I can provide from my little experience of working with SSL. So, please keep this thread posted with anything you find.
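For reference, a minimal sketch of SimSiam's negative cosine similarity loss, showing where stop_gradient enters (consistent with the paper, though not copied verbatim from the Keras example):

import tensorflow as tf

def negative_cosine_similarity(p, z):
    # p: output of the predictor head, z: output of the projector for the other view.
    # The stop_gradient on z is the piece that prevents the trivial (collapsed) solution.
    z = tf.stop_gradient(z)
    p = tf.math.l2_normalize(p, axis=1)
    z = tf.math.l2_normalize(z, axis=1)
    return -tf.reduce_mean(tf.reduce_sum(p * z, axis=1))

# Symmetrized over the two augmented views, as in the paper:
# loss = 0.5 * negative_cosine_similarity(p1, z2) + 0.5 * negative_cosine_similarity(p2, z1)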
st205582
Hi, when I am doing deconvolution for super-resolution, Conv2DTranspose generates a black bar on the top (2xh in pixels) and left (2xw in pixels). The photo: drive-google-com/file/d/12Ob2gVrd52j2mDbQ8_wMSUDkISmA37ZA/view?usp=sharing The input before the Conv2DTranspose layer is (600, 720, 1), and it should be 4x bigger, (2400, 2880, 1), after the Conv2DTranspose. The layer is: out = Conv2DTranspose(num_channels, 9, strides=scale_factor, padding='same')(x) Is anyone familiar with this? Thanks.
st205583
Update: the layer is out = Conv2DTranspose(1, 9, strides=4, padding='same')(x)
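Not a confirmed diagnosis of this particular case, but one alternative that is often suggested when strided Conv2DTranspose layers produce edge or checkerboard artifacts is to upsample first and then apply a normal convolution, for example:

import tensorflow as tf

# Resize-then-convolve alternative to Conv2DTranspose(1, 9, strides=4, padding='same').
x_up = tf.keras.layers.UpSampling2D(size=4, interpolation="bilinear")(x)
out = tf.keras.layers.Conv2D(1, 9, padding="same")(x_up)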
st205584
The Movenet model is not (yet) available in ml5.js however Movenet can still be used directly. When using a video stream on mobile the speed is very slow unless the input size is reduced. This works great in the demo but when reducing the size of the input video in p5.js only the reduced input region seems to be used, almost like the input video is cropped. Looking at detector.js there is a initCropRegion to initially set the region, I wonder if this is using the wrong input size and therefore using a cropped region as an input? Example here: p5.js Web Editor 43 (Try moving your nose past half way…)
st205585
It seems you are working with 2 diff sizes in your code. 1 is 1280x720 and the other is 640x480. In order to render the correct position to the larger canvas you will need to scale all resulting points. Assuming input video was 640x480 from getUserMedia then you can take the co-ordinates that come back and convert them to percentage offsets instead. For example, if nose point was at 320,240 then it can be represented as 0.5,0.5 which you then simply multiply by the new width/height eg 1280x720 to get a new enlarged co-ordinate of 640,360 - and then it should overlap correctly on your new video size.
st205586
Yep exactly, the first set of variables is my canvas size and the second is the video size, and I am doing this conversion. If you set the video input size to 1280x720 or whatever your camera is, the code works. The scaling also works with a smaller size, but only in the clipped zone… When the video input size is set to something smaller, e.g. 360x180, only this portion of the video is used as input. It is as if the clipped portion has been used rather than the scaled input. The video DOM element has the correct size.
st205587
So for me, if I change your videoReady() function to take the sampled video width/height and assign them to your global wv/hv vars, it works for me.

async function videoReady() {
  console.log("Capture loaded... or has it?");
  console.log("Capture: " + video.width + ", " + video.height);
  console.log("Video element: " + video.elt.videoWidth + ", " + video.elt.videoHeight);
  wv = video.elt.videoWidth;
  hv = video.elt.videoHeight;
  console.log("video ready");
  await getPoses();
}

In other news though, this p5 code is running very slow, which is odd as I usually get much higher FPS on my old laptop which I am using right now (e.g. 45 FPS, which I just confirmed with the original MoveNet demo code). There may be some code you are using that is not efficient here, which I advise checking for - however the points are indeed rendered at the correct position now at least.
st205588
Ah I found why your code was running slow. You were using setTimeout with 100ms delay. Be sure to use requestAnimationFrame for production when not debugging to get buttery smooth performance like this:

async function getPoses() {
  if (detector) {
    poses = await detector.estimatePoses(video.elt);
  }
  //console.log(poses);
  requestAnimationFrame(getPoses);
}
st205589
Hi, thanks for the clarification. I was wondering, isn't there an inbuilt library for displaying the skeleton overlay? Another question: I tried to run this code in the mobile browser of a smartphone, activating the front camera, but didn't seem to get any detection. I was under the impression it was independent of front camera/back camera to make a detection. Sample p5.js sketch snippet:

let detector;
let poses;
let video;
let wv = 1280; //the default input size of the camera
let hv = 720;
let w = 640; //the size of the canvas
let h = 480;
let f = 2; //the amount we want to scale down the video input
var capture;

async function init() {
  const detectorConfig = {
    modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING,
  };
  detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.PoseNet,
    detectorConfig
  );
  //detector.initCropRegion(w,h); //not callable
}

async function videoReady() {
  console.log("Capture loaded... or has it?");
  console.log("Capture: " + video.width + ", " + video.height);
  console.log("Video element: " + video.elt.videoWidth + ", " + video.elt.videoHeight);
  wv = video.elt.videoWidth;
  hv = video.elt.videoHeight;
  //console.log("video ready");
  await getPoses();
}

function setup() {
  createCanvas(displayWidth, displayHeight);
  var constraints = {
    audio: false,
    video: {
      facingMode: { exact: "environment" }
    }
    //video: {
    //  facingMode: "user"
    //}
  };
  capture = createCapture(constraints, videoReady);
  capture.hide();
}

async function getPoses() {
  if (detector) {
    poses = await detector.estimatePoses(video.elt);
  }
  console.log(poses);
  requestAnimationFrame(getPoses);
}

function draw() {
  image(capture, 0, 0);
}
st205590
Thanks Jason, this makes a big difference to the speed (I realised setTimeout was not ideal ). The problem still persists after assigning the width/ height in videoReady. I think it just appears like it is working for you as the capture size is equal to the canvas (e.g. 1280/2) or greater. Try it with a smaller capture size e.g. 360x180 or f=4 and it doesn’t work. I have tested on Chrome, Safari, Firefox. Try moving your head to the right half of the screen, you’ll see what I mean…
st205591
The machine learning happens on the raw data from the video frame from the camera I believe (which on my laptop defaults to 640x480), not the CSS resizing of the video, which is just cosmetic. When you grab the data from the video itself it gives you the camera data I think (640x480). Therefore the co-ordinates coming back will be in that co-ordinate space and you will need to assume that to do any transformation after on canvas circle drawing. Thus you should be having some math that figures out the percentage x,y based on the true video width/height, which you can then multiply by the canvas width/height to find the new co-ordinates you need to render at. Also it seems P5 does some rendering magic of its own - it draws the video frame to canvas and then the dots on top of that, which is not terribly efficient as you are sampling the video frame twice - you can just absolute position the canvas on top of the video element and draw the circles only on top of the already playing video, saving you pushing twice the number of video pixel data around and only needing to worry about rendering dots to canvas based on the rendered size of the canvas. f does not need to exist. This is how I would do it:

let detector;
let poses;
let video;
let wv = 1280; //the default input size of the camera
let hv = 720;
let w = 640; //the size of the canvas
let h = 480;

async function init() {
  const detectorConfig = {
    modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING,
  };
  detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.PoseNet,
    detectorConfig
  );
}

async function videoReady() {
  console.log("Capture loaded... or has it?");
  console.log("Capture: " + video.width + ", " + video.height);
  console.log("Video element: " + video.elt.videoWidth + ", " + video.elt.videoHeight);
  wv = video.elt.videoWidth;
  hv = video.elt.videoHeight;
  console.log("video ready");
  await getPoses();
}

async function setup() {
  video = createCapture(VIDEO, videoReady);
  let cnv = createCanvas(w, h);
  cnv.style('position', 'absolute');
  await init();
}

async function getPoses() {
  if (detector) {
    poses = await detector.estimatePoses(video.elt);
  }
  //console.log(poses);
  requestAnimationFrame(getPoses);
}

function draw() {
  if (poses && poses.length > 0) {
    clear();
    text('nose x: ' + poses[0].keypoints[0].x.toFixed(2) + ' ' + (100 * poses[0].keypoints[0].x / video.width).toFixed(1) + '%', 10, 10);
    text('nose y: ' + poses[0].keypoints[0].y.toFixed(2) + ' ' + (100 * poses[0].keypoints[0].y / video.height).toFixed(1) + '%', 10, 20);
    text('nose s: ' + poses[0].keypoints[0].score.toFixed(2), 10, 30);
    for (let kp of poses[0].keypoints) {
      let { x, y, score } = kp;
      x = x / wv;
      y = y / hv;
      if (score > 0.5) {
        fill(255);
        stroke(0);
        strokeWeight(4);
        circle(x * w, y * h, 16);
      }
    }
  }
}
st205592
Thanks Jason. My understanding was that on a mobile device/camera, which is far higher resolution than 480p, the video needs to be scaled down for it to be performant (as is possible in the settings of the MoveNet demo). I was trying to resize the video using video.size, which worked previously using ml5.js and PoseNet; I was under the impression this reduced the input frame size that the model uses. With my PoseNet projects on mobile, if the video size was not reduced they ran at less than 5 FPS. With your approach, how would I go about reducing the input size (e.g. for a mobile device)? Thanks
st205593
If you are using our pre-made models like the above, we do all this resizing for you, as the input must fit the model's supported input tensor sizes correctly to use the ML model behind the scenes. To make these pre-made models easy to use we handle all of that. Of course, if you are using a raw TFJS model yourself (loading in the model.json and *.bin files manually), not our JS wrapper classes, then you would need to do this resizing and conversion to tensors yourself to perform an inference; but in the above, there is no need as we handle it all for you. To resize images I recommend checking out the TensorFlow.js API, which has functions for this purpose: TensorFlow.js API. Note the differences between tf.image.resizeBilinear and tf.image.resizeNearestNeighbor, as this can affect your image data. See this image for a quick visual comparison of what different types of resize do to an image. Gant's new TensorFlow.js book goes into a lot more detail on these methods if you are curious to learn more: Learning TensorFlow.js [Book]
st205594
I am porting some of my tf1 code to tf2 and am wondering how I can define a model for training and then reuse that same model at intermediary steps for predictions? Example:

x = tf.keras.Input(shape=(5,))
s = tf.keras.Sequential([...])(x)
# now do several things with s independently
a = tf.keras.Dense(...)(s)
b = tf.keras.Sequential(...)(s)
# do some more things with a and b and calculate the output for the model
output = tf.keras.Sequential([tf.keras.Dense(tf.concat([a,b])), ...])
model = tf.keras.Model(inputs=x, outputs=output)

Now, I want to be able to just calculate a and b (without calculating output) but also including the output. I could write models for a, a and b, and another for b, where all the models get input x. But what if I want to calculate a, b and output? Now if I would do that using all the models I have previously defined, I would forward pass the graph multiple times, once for each output that I need. I feel like I am missing something here. Alternatively, I could consecutively define models that don't have the original input as input, but instead layer multiple input nodes within my code every time I am working on the next part of the graph, like so:

...
a = tf.keras.Dense(...)(s)
b = tf.keras.Sequential(...)(s)
model_a = tf.keras.Model(inputs=input, output=a)
model_b = tf.keras.Model(inputs=input, output=b)
input_a = tf.keras.Input(shape_of_a)
input_b = tf.keras.Input(shape_of_b)
output = tf.keras.Sequential([tf.keras.Dense(tf.concat([a,b])), ...])
model = tf.keras.Model(inputs=[input_a, input_b], outputs=output)

This is all so complicated and ugly… Now during training I have to first pass forward through each model on the way and so on. It seems that I am in a bind. Either define multiple models using the same input, each time rewriting at least some of my code (which is in itself ugly) and accept the inefficiency during non-training when calculating a lot of nodes; or have tons of input nodes and complicated feed-forward structures. Coming from tf1, where all of this was trivial, I feel like I am misunderstanding something very fundamental about how tf2 is supposed to work. Please feel free to answer generally what I am missing; my code was just for illustration.
st205595
You can modify the last line of your first code snippet to: model = tf.keras.Model(inputs=x, outputs=[output, a, b]) This will allow you to access the intermediate tensors when calling the forward pass. Internally this will not call your layers multiple times. Please let me know if this was your intended point.
st205596
Yes, thank you very much… I kind of overlooked that not thinking that a possible loss added would fit the pattern by adding None values. Anyway, this solves my issue
st205597
I'm training a model which inputs and outputs images with the same shape (H, W, C). My loss function is MSE over these images, but in another color space. The color space conversion is defined by a transform_space function, which takes and returns one image. I'm inheriting tf.keras.losses.Loss to implement this loss. The call method however takes images not one by one, but in batches of shape (None, H, W, C). The problem is that the first dimension of this batch is None. I was trying to iterate through these batches, but got the error "iterating over tf.Tensor is not allowed". So, how should I calculate my loss? The reasons I can't use the new color space as input and output for my model: the model is using one of the pretrained tf.keras.applications which works with RGB; the reverse transformation can't be done because part of the information is lost during transformation. I'm using tf.distribute.MirroredStrategy if it matters.

# Takes an image of shape (H, W, C),
# converts it to a new color space
# and returns a new image with shape (H, W, C)
def transform_space(image):
    # ...color space transformation...
    return image_in_a_new_color_space

class MyCustomLoss(tf.keras.losses.Loss):
    def __init__(self):
        super().__init__()
        # The loss function is defined this way
        # due to the fact that I use "tf.distribute.MirroredStrategy"
        mse = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)
        self.loss_fn = lambda true, pred: tf.math.reduce_mean(mse(true, pred))

    def call(self, true_batch, pred_batch):
        # Since shape of true/pred_batch is (None, H, W, C)
        # and transform_space expects shape (H, W, C)
        # the following transformations are impossible:
        true_batch_transformed = transform_space(true_batch)
        pred_batch_transformed = transform_space(pred_batch)
        return self.loss_fn(true_batch_transformed, pred_batch_transformed)
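Not from the thread, but one common way to lift a per-image function over an unknown batch dimension is tf.map_fn (tf.vectorized_map can be faster when the transform uses only element-wise or broadcastable ops). A sketch, assuming transform_space is built from TensorFlow ops:

import tensorflow as tf

def transform_space_batched(images):
    # images: (None, H, W, C); applies the (H, W, C) -> (H, W, C) transform per image.
    return tf.map_fn(transform_space, images)

# Inside MyCustomLoss.call:
#   true_t = transform_space_batched(true_batch)
#   pred_t = transform_space_batched(pred_batch)
#   return self.loss_fn(true_t, pred_t)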
st205598
Hello everyone! I am trying to implement an inference function with TensorFlow. I find that tf.keras.applications is particularly slow in my case. This is my main file.

import argparse
import tensorflow as tf
import tensorflow.keras as keras
from util import load_data, Timer

def main(use_cuda):
    model_name = 'resnet152'
    with Timer("step 0: dummy input", use_cuda):
        dummy_input = tf.zeros([1])
    with Timer('step 1: load model', use_cuda):
        model = keras.applications.ResNet152(weights=None)
        model.load_weights(f'models/{model_name}.h5')
    with Timer('step 2: preprocess data', use_cuda):
        img_files, img_data = load_data(use_tf=True)
        data = img_data
    with Timer('step 3: predict', use_cuda):
        pred = model(data)
        result = tf.argmax(pred, axis=1).numpy()
    return dict(zip(img_files, result))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--use_cuda", default="0", type=int)
    args = parser.parse_args()
    main(use_cuda=args.use_cuda)

This is the util file.

import os
import time
from contextlib import ContextDecorator

import numpy as np
from PIL import Image

def load_data(root='./data', use_tf=False):
    pass

class Timer(ContextDecorator):
    def __init__(self, description, cuda):
        self.description = description
        self.cuda = cuda

    def __enter__(self):
        self.start_time = time.perf_counter()

    def __exit__(self, *args):
        self.end_time = time.perf_counter()
        print(f'>>>>>>>>>> {self.description:25s}---{(self.end_time - self.start_time) * 1000:8.2f} ms <<<<<<<<<<')

This is the output. I cannot believe it takes around 2 seconds in the load model step.

2021-07-07 18:29:31.453675: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
>>>>>>>>>> step 0: dummy input       ---    1.34 ms <<<<<<<<<<
>>>>>>>>>> step 1: load model        --- 1994.82 ms <<<<<<<<<<
>>>>>>>>>> step 2: preprocess data   ---   29.40 ms <<<<<<<<<<
>>>>>>>>>> step 3: predict           --- 2787.95 ms <<<<<<<<<<

Thanks!
st205599
Can you check if it is running on the GPU? See: Use a GPU  |  TensorFlow Core
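For a quick check along those lines (a small sketch; both calls are standard TF 2.x APIs):

import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # an empty list means TF only sees the CPU
print(tf.test.is_built_with_cuda())            # False means a CPU-only TensorFlow build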