st205100
Okay! For this version of Model Maker I do not see an option to change the optimizer. So do it the old way and convert the SavedModel with the TF Lite converter.
st205101
Hi @jeri_shanks , currently TensorFlow Lite Model Maker doesn’t provide such customization. We are only targeting an easy-to-use interface for creating a model, rather than letting you swap every module. Usually, changing learning_rate and epochs is the way to tune model quality. (BTW, if you are interested in adding this feature, you can contribute code to Model Maker’s repo and make it work in the nightly version.)
st205102
[screenshot of optimizer_v2.py, 760×229] I have a question about the optimizer: can I change the optimizer? If I can, where can I find the code?
st205103
optimizer_v2.py in the screenshot you shared contains the base class (tensorflow/optimizer_v2.py at v2.5.0 · tensorflow/tensorflow · GitHub). As it says in the docstring: "You should not use this class directly, but instead instantiate one of its subclasses such as tf.keras.optimizers.SGD, tf.keras.optimizers.Adam, etc." You can use a number of built-in optimizers from this list of APIs: Module: tf.keras.optimizers | TensorFlow Core v2.5.0 (built-in optimizer classes), and the Keras documentation on Optimizers. Hope this helps.
st205104
The ML Basics (with Keras) section in the core TensorFlow tutorials may be a good start: Basic classification: Classify images of clothing | TensorFlow Core. Note that in the beginner tutorials, when you specify the optimizer inside the compile method, you use the alias (e.g. 'adam'). Later on, as you get to the advanced tutorials, you may notice you can state the optimizers using the full API names, such as optimizer = tf.keras.optimizers.SGD(learning_rate=0.01): Custom training: walkthrough | TensorFlow Core. A quick illustration of both forms is sketched below.
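To make the difference concrete, here is a minimal sketch (the tiny one-layer model is only a placeholder so compile() has something to attach the optimizer to):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])

# Beginner style: pass the alias; the optimizer uses its default settings.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Advanced style: build the optimizer object so you can set its parameters.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])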
st205105
Sorry, but I’m new to this. Recently I’ve been trying to learn image classification from the tutorial “Image classification with TensorFlow Lite Model Maker”, and there is code like the one below: “model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)”. If the code is like that, how can I add tf.keras.optimizers.Adam to it?
st205106
Sorry, but I still can’t understand how to run the code below from the “Image classification with TensorFlow Lite Model Maker” tutorial with a different optimizer. Can you please tell me how? “model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)”
st205107
Hi @Yirmeyahu_Pakpahan , currently TensorFlow Lite Model Maker doesn’t provide such customization. We are only targeting an easy-to-use interface for creating a model, rather than letting you swap every module. Usually, changing learning_rate and epochs is the way to tune model quality. (BTW, if you are interested in adding this feature, you can contribute code to Model Maker’s repo and make it work in the nightly version.)
st205108
Hello guys, I’m trying to implement a web application for object detection on images using TensorFlow.js. I’ve trained my model and loaded it using tf.loadLayersModel(). Then I made predictions using the model.predict() function. The problem is that I don’t understand how to interpret predict, which is the model.predict() output. The output from console.log(predict) is:
Array(3) [ {…}, {…}, {…} ]
0: Object { kept: false, isDisposedInternal: false, dtype: "float32", … }
1: Object { kept: false, isDisposedInternal: false, dtype: "float32", … }
2: Object { kept: false, isDisposedInternal: false, dtype: "float32", … }
length: 3
The output from console.log(predict[0].dataSync()) is:
Float32Array(18) [ -0.933699369430542, 2.0597338676452637, -8.093562126159668, -1.653637170791626, 29.91468048095703, -2.5104284286499023, -1.6961907148361206, 5.431970596313477, -8.678093910217285, 1.1465117931365967, … ]
The output from console.log(predict[1].dataSync()) is:
Float32Array(72) [ -3.040828227996826, 19.110309600830078, -0.057641081511974335, -1.872375249862671, -124.70093536376953, 2.4474217891693115, 11.702408790588379, -9.773043632507324, -3.7172935009002686, 5.327491283416748, … ]
The output from console.log(predict[2].dataSync()) is:
Float32Array(288) [ -36.274513244628906, -19.568172454833984, 12.820204734802246, -21.152021408081055, -225.57720947265625, 1.2267011404037476, 6.923470497131348, 26.43770408630371, 2.0712578296661377, -11.071444511413574, … ]
The aim is to get the box locations. Thanks!
st205109
Welcome to the forum. Which model are you loading exactly? Typically whoever wrote the model architecture you are using will be able to tell you how to interpret the tensor output of the model. There is usually some level of documentation, if you are retraining an existing model, about what the inputs/outputs should be. As you correctly discovered, you need to read the data from the resulting returned Tensor, or you will get Object printed as you saw above. Typically for multibox detection the output of the model is a prediction map. There will be tonnes of predictions, many even overlapping, and then you will need to do some post-processing to filter through all of that and figure out what is worth actually drawing. This article has a nice write-up: "Tips for implementing SSD Object Detection (with TensorFlow code)" on the Lambda blog. It gives a brief introduction to the history of object detection, explains the idea behind Single-Shot Detection (SSD), and discusses a number of implementation details that will make or break the performance.
st205110
Hi all, I was following the tutorial below and tried converting the saved model (TF SavedModel) to TensorFlow.js format (model.json) using tensorflowjs_converter. The reason I am doing this rather than running MobileBERT directly from TF Hub in TensorFlow.js is that I plan to train this model on my own dataset. https://www.tensorflow.org/text/tutorials/classify_text_with_bert The conversion works, and I loaded my converted model in JavaScript using tf.loadGraphModel. However, doing so I am met with the error “TypeError: Cannot read property ‘producer’ of undefined” in the browser console. I did my Googling and found out it was due to the usage of the Functional API. Some said using tf.loadLayersModel works, but when I tried it, it gave me an error saying it “could be due to the Keras Functional API usage”. Any ideas how I can convert the model to be runnable in the browser? I am not too sure how I can change the Functional API usage to Sequential etc. as I am pretty new to this.
st205111
Welcome to the community and thanks for posting your question. Let me check and get back to you on what may be causing this. In the meantime, do you have your code hosted anywhere so we can replicate the error and inspect it, e.g. Glitch.com or Codepen.io?
st205112
Thank you! Sorry for the late response, I had internet issues yesterday. I have uploaded my training script, TFJS script and my models into my GitHub repo here: https://github.com/jonathanlawhh/tfjs-producer Nothing fancy, and nothing much changed from either the TFJS or the BERT tutorial, except for the data loading part in Python.
st205113
Hello again! Sorry for the slight delay. I was discussing with our wider team to figure out what to do here. Looking at the codelab you linked to, it seems to be using hub.KerasLayer, which we do not support currently. The team asked me if you could open a new issue at github.com/tensorflow/tfjs/issues/new/choose to help us track this. Depending on what you are trying to do precisely, you may find my other learning paths on text classification for TFJS useful here: Path 1: Get started with comment-spam detection | Google Developers, and Path 2: Go further with comment spam detection | Google Developers. Thank you.
st205114
Hi Jason, no worries. Thank you for looking into it. I have opened a new issue accordingly here: https://github.com/tensorflow/tfjs/issues/5409 The 2 paths you shared have definitely helped me a lot! I will be trying them out to understand more. Thank you once again.
st205115
Most welcome and good luck with your future hacking! Look forward to seeing your future creations - do share with us what you end up making if you are able
st205116
I installed tensorflow-gpu 2.4.1 with the command ‘conda install tensorflow-gpu’, and it could recognize my GPU, that is, it returned True when I called tf.test.is_gpu_available(). But when I installed tensorflow_federated with ‘pip install tensorflow_federated’, I found that the tensorflow-gpu installed by conda was uninstalled. And after the pip installation, TensorFlow could not recognize my GPU anymore. How can I use tensorflow_federated on GPU?
st205117
Hi Jason, what I think might have happened is that you installed version 0.19 of TF Federated, which depends on TF 2.5, and it might have updated your 2.4 installation (versions table: GitHub - tensorflow/federated: A framework for implementing federated learning). You can verify this by printing tf.__version__ and tff.__version__ in your env. If you install the TF 2.5 GPU build, everything should work.
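A quick way to check, sketched below (the exact version numbers you should see depend on the TF Federated compatibility table, and the pip command is only one possible way to restore a matching GPU-enabled TensorFlow):

import tensorflow as tf
import tensorflow_federated as tff

print(tf.__version__)   # e.g. 2.5.x if TFF 0.19 replaced your conda install
print(tff.__version__)  # e.g. 0.19.x
print(tf.config.list_physical_devices("GPU"))  # should list your GPU

# If the versions mismatch, reinstalling a matching TF release may help, e.g.:
# pip install tensorflow==2.5.0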
st205118
Hi, I’m a STEM teacher in Hong Kong. Recently, I’ve been studying the potential of TinyML in Hong Kong. I would like to launch a project on TinyML, to design a series of activities and promotion for K12 students. May I know whether the TensorFlow team could give us some support on this project? At the very beginning stage, maybe we just need a representative from the TensorFlow team to introduce what TinyML is to Hong Kong teachers. Is this feasible? Could any TensorFlow team member give me a hand? Cheers, Jason Pang
st205119
Thank you Laurence! Great to see you here!!! I am taking your TinyML course on edX. Would you mind leaving your email address with me so we can go deeper? Thank you so much!!! My email address: Removed by moderator
st205120
Hello, Laurence! I’m trying to explain my plan briefly. I am a secondary school STEM teacher and also a leader of a local educators community. I am going to introduce the concept and the bright future of TinyML. I am going to apply to be a speaker at the local educational exhibition and talk about this. Due to the COVID measures, it is difficult to invite you to come in person. However, we can have a video playback or live online meeting session during the exhibition. This is only the very beginning stage, so we may just give a brief intro to what TinyML is in the session, and what the goal is from the point of view of Google/the TensorFlow team. I can also do a live demo on stage to inspire educators/vendors during the session. Please let me know your thoughts about this. Thank you again for your reply😉
st205121
Is there a way to introduce a sparsity/Fourier/Cosine (or other) constraint on the weights of an autoencoder to achieve compression? I want to use the encoder part to replicate a better basis with the STL-10 dataset (Autoencoders | Kaggle). The features learned should be more optimized than compression algorithms like JPEG.
st205122
I suggest taking an overview at "Learning End-to-End Lossy Image Compression: A Benchmark" on arXiv (image compression is one of the most fundamental techniques and commonly used applications in the image and video processing field). Then you can take a look at the CLIC 21 tasks/leaderboard: http://compression.cc/ See also our repository: GitHub - tensorflow/compression: Data compression in TensorFlow.
st205123
Thanks, will look into it. I am curious if I can replace L1 regularization with autoencoder?
st205124
The first two links are specifically related to image (or video) compression tasks. The TF compression repository instead covers multiple data compression use cases. "I am curious if I can replace L1 regularization with autoencoder?" Generally, you can also use KL-divergence as a sparsity penalty.
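For reference, a minimal sketch of a KL-divergence sparsity penalty on an autoencoder's code layer (the layer sizes, the 0.05 target sparsity and the 1e-3 penalty weight are illustrative; a sigmoid activation on the code layer is assumed so the penalty is well defined):

import tensorflow as tf

class KLSparsityRegularizer(tf.keras.regularizers.Regularizer):
    """Penalize the mean activation of a layer for deviating from rho."""
    def __init__(self, rho=0.05, beta=1e-3):
        self.rho = rho    # target average activation
        self.beta = beta  # penalty weight

    def __call__(self, activations):
        rho_hat = tf.reduce_mean(activations, axis=0)  # per-unit mean activation
        kl = (self.rho * tf.math.log(self.rho / (rho_hat + 1e-10)) +
              (1.0 - self.rho) * tf.math.log((1.0 - self.rho) / (1.0 - rho_hat + 1e-10)))
        return self.beta * tf.reduce_sum(kl)

inputs = tf.keras.Input(shape=(96 * 96 * 3,))      # flattened STL-10 image
code = tf.keras.layers.Dense(
    256, activation="sigmoid",
    activity_regularizer=KLSparsityRegularizer())(inputs)
outputs = tf.keras.layers.Dense(96 * 96 * 3, activation="sigmoid")(code)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")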
st205125
Hey community, I have a data set where each sample has its own label. For instance: I have 10000 samples, each with one word as its label, and each label is unique to that sentence; this makes 10000 training samples with 10000 labels. Does anyone here have an idea how to do this, or toy code? Thank you so much for your help.
st205126
I don’t think that a single sample per label is going to learn well. But, in general, assigning a label to a single word falls within the scope of Named Entity Recognition (NER). You can find many tutorials with Keras/TF, like: Keras documentation: Named Entity Recognition using Transformers.
st205127
I don’t know exactly what your domain is, but the problem setup seems similar to NER. With just 1 sample per class it looks like one-shot NER. So the most similar thing I can think of for your data is finding some ideas in few-shot NER approaches: "Few-Shot Named Entity Recognition: A Comprehensive Study" (arXiv), a comprehensive study on efficiently building NER systems when only a small number of in-domain labeled examples is available, and "Few-NERD: A Few-Shot Named Entity Recognition Dataset" (arXiv).
st205128
Started learning Python and TensorFlow about a month ago, reading from the web. I like Python as it is intuitive. TensorFlow is another animal! Too much syntax to get right. Using PyCharm 21.2, Python 3.9 and TensorFlow 2.5.0. Reading "Learn TensorFlow 2.0: Implement Machine Learning And Deep Learning Models With Python". I tried the programs in the book to learn, but get errors when running them. Ditto with programs from the web. Used compat.v1 for the different versions but still get errors here and there. Try this one from the book and see if you can get it running without any error: colab.research.google.com Google Colaboratory A. Maybe it is IDE dependent? Hope not. B. Easy to understand book on TensorFlow? C. New and better TensorFlow 2.6 coming soon? D. Best alternative to TensorFlow? Scikit-learn? Thanks.
st205129
The best place to start for modern examples is tensorflow.org/tutorials and tensorflow.org/guide; we work to keep those examples tested and up to date with best practices. That example runs fine in Colab, what error are you getting? Regarding compat.v1 and tf.estimator: if you’re learning, or writing new code, don’t use these. If you need tree models see TensorFlow Decision Forests, which implements them in Keras.
st205130
Noted and many thanks. I will try the tutorial and guides you mentioned. Cheerio.
st205131
Hi, I need help with my FYP, where I train a text-to-speech model in TFLite format. I want to implement this TFLite model in Android Studio, but the TensorFlow guide is a bit confusing. Is it possible to run a text-to-speech TFLite model in an Android app?
st205132
It is possible. If you already have the TFLite model, please provide a link to download it and we will tell you our opinion.
st205133
Thank you so much for replying, but I don’t have a TFLite model file yet; I just need to write an SRS for my FYP. I just need to understand whether it is feasible to run text to speech on an Android device using TensorFlow Lite. I know it is possible to build text to speech for English in TensorFlow, but I am not sure whether TensorFlow Lite is able to run text to speech for the Urdu language on Android.
st205134
The TensorFlow Lite procedure is no different from TensorFlow’s: you have pre-processing, execution and post-processing. If it works in TensorFlow and you can convert it to TFLite, then you can apply the same procedures. You can use custom code or the TensorFlow Lite Support Library to help you with these steps. A rough Python sketch of the three steps is below.
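As a minimal sketch of that pre-process / execute / post-process flow with the Python TFLite interpreter ("model.tflite" and the dummy input are placeholders for your own converted model and real pre-processing; on Android the equivalent steps use the TFLite Interpreter / Support Library in Java or Kotlin):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# 1. Pre-processing: build the tensor your model expects (here just zeros).
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

# 2. Execution.
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# 3. Post-processing: read and interpret the output tensors.
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data.shape)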
st205135
Thank you, the above info will help me with my FYP. But what about text to speech in Urdu? Do TensorFlow and TensorFlow Lite provide any functionality or support for machine learning on Urdu-language text? Basically I am building a text-to-speech model in which Urdu text is the input and the output is Urdu speech for the given text. I am planning to run this model on an Android device. I saw many YouTube videos about text to speech, but they only support English. I just need confirmation about Urdu-language text as a feature input in machine learning.
st205136
I really do not know about the Urdu language. It is true that a lot of examples are in English. You have to search specifically for this, and if there is not an example you have to create it yourself.
st205137
You can take a look at the steps for adapting to your target language in: GitHub - TensorSpeech/TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for TensorFlow 2 (supports English, Korean, Chinese, German and is easy to adapt to other languages).
st205138
I got these results during training, and I set epochs to 100. Will the results change for the better, or does this indicate overfitting?
1 loss: 4.5090 - accuracy: 0.2570 - val_loss: 3.8487 - val_accuracy: 0.3179
2 loss: 3.7380 - accuracy: 0.3306 - val_loss: 3.6578 - val_accuracy: 0.3398
3 loss: 3.5954 - accuracy: 0.3449 - val_loss: 3.6016 - val_accuracy: 0.3481
4 loss: 3.5336 - accuracy: 0.3519 - val_loss: 3.5808 - val_accuracy: 0.3527
5 loss: 3.4969 - accuracy: 0.3563 - val_loss: 3.5751 - val_accuracy: 0.3555
6 loss: 3.4720 - accuracy: 0.3595 - val_loss: 3.5733 - val_accuracy: 0.3574
7 loss: 3.4528 - accuracy: 0.3619 - val_loss: 3.5809 - val_accuracy: 0.3585
8 loss: 3.4388 - accuracy: 0.3638 - val_loss: 3.5888 - val_accuracy: 0.3595
9 loss: 3.4283 - accuracy: 0.3655 - val_loss: 3.5965 - val_accuracy: 0.3606
10 loss: 3.4207 - accuracy: 0.3666 - val_loss: 3.6070 - val_accuracy: 0.3612
11 loss: 3.4155 - accuracy: 0.3675 - val_loss: 3.6225 - val_accuracy: 0.3615
12 loss: 3.4112 - accuracy: 0.3688 - val_loss: 3.6362 - val_accuracy: 0.3622
13 loss: 3.4085 - accuracy: 0.3696 - val_loss: 3.6467 - val_accuracy: 0.3623
14 loss: 3.4024 - accuracy: 0.3697 - val_loss: 3.6510 - val_accuracy: 0.3617
15 loss: 3.3850 - accuracy: 0.3702 - val_loss: 3.6485 - val_accuracy: 0.3622
16 loss: 3.3729 - accuracy: 0.3709 - val_loss: 3.6563 - val_accuracy: 0.3622
Can I share the model here?
st205139
This is a little bit hard to say without more information. It could be that your model is not converging to a solution, as the accuracy is not moving much, but that could also be specific to your use case. Maybe your model is too simple and cannot capture the complexity of your data. To know if your model is overfitting, the easiest way is to finish your training and then evaluate your model on your dataset’s test split. If the results are much worse than the training accuracy, that’s a good indicator of overfitting. Some more resources here: Overfit and underfit | TensorFlow Core
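Concretely, the check can be as small as the sketch below (model, history and test_ds stand in for your own trained model, its fit() history and your test split):

test_loss, test_acc = model.evaluate(test_ds)
print("train accuracy (last epoch):", history.history["accuracy"][-1])
print("test accuracy:", test_acc)
# A test accuracy far below the training accuracy suggests overfitting.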
st205140
Hello! When I first ran the object detection example app with the provided model (detect.tflite), it had almost 90 classes and the inference time was 30 ms. But I trained with a small number of images, like 100-200, and I have just 3 classes. I used MobileNet v2 640x640 and didn’t fine-tune. What makes the inference time smaller? I quantized with float, so my model size is reduced from 11 MB to 4 MB, but the inference time remains the same.
st205141
This is hard to answer with only this information. Are you running the model on a phone? Is it using the GPU? Are you using multiple threads? Usually what I did in the past to find out why a model is slow in my app was to run the benchmark tool: Performance measurement | TensorFlow Lite. One thing that could help is decreasing the image input size; 640x640 is quite big, and object detection is a complex task by itself.
st205142
Hello everyone and anyone who is reading this, I have a question about my training output. I am new to TensorFlow, especially using the Object Detection API. My question: is the image linked here the correct output I should be receiving? It is displaying the output twice when the 100 steps are done, so is this correct or am I doing something wrong? Everywhere online I see only one instance of the response. Please help me.
st205143
I can’t find a detailed tutorial about multi-label classification on TensorFlow site. Any resources?
st205144
I recommend this video tutorial on Medium (26 Jul 21): "How to solve Multi-Class Classification Problems in Deep Learning with…". It focuses on how to select accuracy metrics, activation and loss functions in multi-class classification problems.
st205145
Thanks @Takashi_Futada, but I’m looking for a multi-label classification tutorial, not multi-class.
st205146
He mentions multi-label as well as multi-class in this article. There is no complete code, but some snippets with multi-hot encoding: Medium (26 Jul 21), "How to solve Classification Problems in Deep Learning with Tensorflow &…", on selecting the correct label encoding, activation and loss functions, along with accuracy metrics, in a deep neural network.
st205147
AFAIK, the closest thing we have on tensorflow.org right now is the segmentation tutorial, just because it, effectively, runs a separate classifier per pixel. There’s nothing to stop you from using an extra dimension in your outputs and labels to run a bunch of classifiers in parallel. But also note that if your model returns a dictionary of tensors, Keras’s model.fit will also accept a dictionary of losses and loss_weights (it optimizes the weighted sum of the losses). So if you need 10 binary classifiers, 3 different-N N-way classifiers, and an M-element regression, you can do that from a single model.fit.
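A minimal sketch of that multi-head idea (the input size, head sizes and loss weights are illustrative only):

import tensorflow as tf

inputs = tf.keras.Input(shape=(64,))
x = tf.keras.layers.Dense(128, activation="relu")(inputs)

outputs = {
    # 10 independent binary classifiers (the multi-label head)
    "tags": tf.keras.layers.Dense(10, activation="sigmoid", name="tags")(x),
    # one 5-way classifier
    "category": tf.keras.layers.Dense(5, activation="softmax", name="category")(x),
    # a 3-element regression head
    "box": tf.keras.layers.Dense(3, name="box")(x),
}
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer="adam",
    loss={
        "tags": "binary_crossentropy",
        "category": "sparse_categorical_crossentropy",
        "box": "mse",
    },
    loss_weights={"tags": 1.0, "category": 1.0, "box": 0.5},
)
# model.fit(features, {"tags": ..., "category": ..., "box": ...}) then takes a
# matching dictionary of label arrays.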
st205148
Recently, I’ve been trying to use BERT from TF Hub for some contrastive learning. The dropout rate is an important parameter in contrastive learning (for NLP tasks), but I can’t find any method to set the dropout rate in the TF Hub API. Is there some way to set dropout in TF Hub? Thanks!
st205149
Hi Ryan, I don’t think you can change dropout as a parameter of the models on Hub. What you could do instead is create a model yourself (e.g. Classify text with BERT | Text | TensorFlow) based on the BERT encoders on TF Hub and define the dropout rate you want. With this you have full control of the parameter. Does that work for you?
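A rough sketch of that approach (the hub handles and the 0.3 rate are illustrative; note this only controls dropout applied on top of the encoder, not inside the pretrained layers):

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the ops needed by the preprocessing model

preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1",
    trainable=True)

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
encoder_outputs = encoder(preprocess(text_input))
pooled = encoder_outputs["pooled_output"]
pooled = tf.keras.layers.Dropout(0.3)(pooled)  # the dropout rate you control
logits = tf.keras.layers.Dense(1)(pooled)
model = tf.keras.Model(text_input, logits)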
st205150
Thanks for your reply. But in contrastive learning I must set the dropout of the pretrained part, so the tutorial Classify text with BERT | Text | TensorFlow can’t help me. Is there any solution or new feature for contrastive learning in TF Hub? (Reference: [2104.08821] SimCSE: Simple Contrastive Learning of Sentence Embeddings)
st205151
I think dropout was removed e.g. in ALBERT v2; see the issue "no dropout on v2 models" (google-research/ALBERT, opened Dec 18, 2019, closed Dec 27, 2019): "You say that you are using 'no dropout' on the TFHub v2-models. However, looking at the albert_config.json files there seems to be dropout on most models (https://tfhub.dev/google/albert_base/2). Only on the xxlarge is there no dropout (https://tfhub.dev/google/albert_xxlarge/2). What is correct?"
st205152
I tried googling around, but there doesn’t seem to be any elegant ways to manually edit TF checkpoints. For TF1 checkpoints, I found a dirty hack, but for TF2 checkpoints, the dirty hack does not work. It seems that TF2 checkpoints have an additional variable “_CHECKPOINTABLE_OBJECT_GRAPH” (which contains a byte string?). I tried saving a checkpoint without it, but when TF2 tries to load the checkpoint, I get a ton of “WARNING:tensorflow:Unresolved object in checkpoint: (root).chip_layer.layer_with_weights-0” errors (aside from the “root” part, the rest of the name is right). However, I can’t create/edit a variable by that name, and hence can’t save a new checkpoint with it. This doesn’t even preclude the possibility of the checkpoint containing other stuff that’s much harder to access. I really do not want to use a hex editor… Context: I recently converted a large project from TF1 → TF2 with keras interface. However, doing so also drastically changed all the names of all the variables. I would like to continue using previously trained models, and not train from scratch. As a desperate workaround, the best I can come up with is to load the model in TF2, and manually load in the variables from the old checkpoint 1 var at a time.
st205153
Hi all, when I train a model, I’m wondering what the effect of use_bias is. The model is:

base_model = VGG19(input_shape=(config.img_size, config.img_size, 3), include_top=False)
gap = GlobalAveragePooling2D()(base_model.output)

# class output
dense_b1_1 = Dense(256, use_bias=False)(gap)
relu_b1_2 = Activation(tf.nn.relu)(dense_b1_1)
dense_b1_3 = Dense(256, use_bias=False)(relu_b1_2)
relu_b1_4 = Activation(tf.nn.relu)(dense_b1_3)
cls_output = Dense(len(config.class_dict), activation='softmax')(relu_b1_4)

# regression output
dense_b2_1 = Dense(128, use_bias=False)(gap)
relu_b2_2 = Activation(tf.nn.relu)(dense_b2_1)
dense_b2_3 = Dense(128, use_bias=False)(relu_b2_2)
relu_b2_4 = Activation(tf.nn.relu)(dense_b2_3)
reg_output = Dense(4, activation='sigmoid')(relu_b2_4)

concat = Concatenate()([cls_output, reg_output])
model = Model(inputs=base_model.inputs, outputs=concat)

I used the same network twice. The result with use_bias=False is:

Epoch 1/200 - 69s 384ms/step - loss: 0.5087 - val_loss: 0.4395
Epoch 2/200 - 62s 386ms/step - loss: 0.3718 - val_loss: 0.4238
Epoch 3/200 - 59s 372ms/step - loss: 0.3471 - val_loss: 0.3383
Epoch 4/200 - 60s 377ms/step - loss: 0.3177 - val_loss: 0.3598
Epoch 5/200 - 61s 371ms/step - loss: 0.3153 - val_loss: 0.3069
Epoch 6/200 - 59s 372ms/step - loss: 0.3128 - val_loss: 0.3124
Epoch 7/200 - 59s 372ms/step - loss: 0.2946 - val_loss: 0.2869
Epoch 8/200 - 60s 376ms/step - loss: 0.2702 - val_loss: 0.3102
Epoch 9/200 - 62s 376ms/step - loss: 0.2888 - val_loss: 0.2878
Epoch 10/200 - 60s 376ms/step - loss: 0.2674 - val_loss: 0.3123
Epoch 00010: ReduceLROnPlateau reducing learning rate to 9.999999747378753e-11.
Epoch 11/200 - 60s 377ms/step - loss: 0.3003 - val_loss: 0.3123
Epoch 12/200 - 59s 370ms/step - loss: 0.2878 - val_loss: 0.3123
Restoring model weights from the end of the best epoch.
Epoch 00012: early stopping

But with use_bias=True, the result is:

Epoch 1/200 - 71s 396ms/step - loss: 1.1255 - val_loss: 1.1355
Epoch 2/200 - 61s 380ms/step - loss: 1.1305 - val_loss: 1.1355
Epoch 3/200 - 61s 383ms/step - loss: 1.1266 - val_loss: 1.1355
Epoch 4/200 - 61s 383ms/step - loss: 1.1267 - val_loss: 1.1355
Epoch 00004: ReduceLROnPlateau reducing learning rate to 9.999999747378753e-11.
Epoch 5/200 - 62s 380ms/step - loss: 1.1328 - val_loss: 1.1355
Epoch 6/200 - 61s 383ms/step - loss: 1.1141 - val_loss: 1.1355
Epoch 7/200 - 61s 385ms/step - loss: 1.1289 - val_loss: 1.1355
Epoch 00007: ReduceLROnPlateau reducing learning rate to 9.99999943962493e-16.
Epoch 8/200 - 61s 379ms/step - loss: 1.1305 - val_loss: 1.1355
Restoring model weights from the end of the best epoch.
Epoch 00008: early stopping

Both networks are built without batch normalization. But why does the use_bias=False one have a lower loss than the use_bias=True one?
st205154
A Dense layer will try to find the weights such that y = Wx + b, where W are the weights and b is the bias. When x is a matrix of values, W is also a matrix and b is a vector. The use_bias parameter tells the layer whether you want to add (and learn) this vector in your results. My understanding of the bias is that it enables the model to find weights for a function closer to the ground truth. How it affects your results depends on the data complexity. To get an even better understanding, I’d suggest you take a look at this colab: https://bit.ly/tf_basic_01
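A tiny illustration of what the flag changes in practice (the layer sizes are arbitrary):

import tensorflow as tf

with_bias = tf.keras.layers.Dense(4, use_bias=True)      # y = Wx + b
without_bias = tf.keras.layers.Dense(4, use_bias=False)  # y = Wx

x = tf.zeros((1, 3))
with_bias(x)
without_bias(x)  # calling the layers builds their weights

print(with_bias.count_params())     # 3*4 weights + 4 biases = 16
print(without_bias.count_params())  # 3*4 weights = 12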
st205155
I’m working on an application where I’d like to retrieve the standard deviation of the predictions made by the trees within an ensemble (currently a tfdf.keras.RandomForestModel) to use as an estimate of the confidence of a given prediction. These are regression predictions rather than categorical so I’m assuming the best way would be to look at the distribution of predictions within the ensemble, but other ideas very welcome! It looks like I could do this by running a prediction on each individual tree with inspector.iterate_on_nodes() but is there a better way to do this via the main predict method that I’ve missed in the documentation, or some other recommended way?
st205156
Hi Jamie, I am copying the answer from GitHub. As you correctly noted, the API does not allow obtaining the individual tree predictions directly. Please feel free to create a feature request :). If we see traction, we will prioritize it. In the meantime, there are two alternative solutions: 1. Training multiple Random Forest models, each with one tree (while making sure to change the random seed). 2. Training a single Random Forest model and dividing it per tree using the model inspector and model builder. Using the model builder to generate the individual trees might be easier than running the inference manually in Python. While faster than solution 1, solution 2 can still be slow on large models and datasets, as the model deserialization + re-serialization in Python is relatively slow. It would look like this:

# Train a Random Forest with 10 trees
model = tfdf.keras.RandomForestModel(num_trees=10)
model.fit(train_ds)

# Extract each of the 10 trees into a separate model.
inspector = model.make_inspector()

# TODO: Run in parallel.
models = []
for tree_idx, tree in enumerate(inspector.extract_all_trees()):
  print(f"Extract and export tree #{tree_idx}")

  # Create a RF model with a single tree.
  path = os.path.join(f"/tmp/model/{tree_idx}")
  builder = tfdf.builder.RandomForestBuilder(
      path=path,
      objective=inspector.objective(),
      import_dataspec=inspector.dataspec)
  builder.add_tree(tree)
  builder.close()

  models.append(tf.keras.models.load_model(path))

# Compute the predictions of all the trees together.
class CombinedModel(tf.keras.Model):

  def call(self, inputs):
    # We assume that we have a binary classification model that returns a single
    # probability. In case of multi-class classification, use tf.stack instead.
    return tf.concat([submodel(inputs) for submodel in models], axis=1)

print("Prediction of all the trees")
combined_model = CombinedModel()
all_trees_predictions = combined_model.predict(test_with_cast_ds)

See this colab for a full example. Ps: Make sure to use all_trees_predictions correctly to compute the prediction confidence interval, for example using the method of Wager et al. Cheers, M. Edit: Added the import_dataspec constructor argument in the model builder. This will help with some of the situations involving categorical features. See this page for some explanations.
st205157
Thanks @Mathieu - incredibly helpful and appreciate you taking the time to put together the example! I’ll give your #2 route a go with my application since I’m already training a Random Forest with multiple trees. Also thanks for the link to Wagner et al paper - I was aware of the technique as I’ve seen reference in the R ranger package and this Scikit Learn contrib package 1, but hadn’t seen the source paper so look forward to taking a look to understand more deeply. Sorry for asking on both GitHub and the forum - after posting there I wasn’t sure if GitHub was only for code focused issues and the forum for help requests so thought here might be more appropriate. Will add a feature request to GitHub. Do you prefer that they sit at the TFDF level? In this case I think it would probably involve changes to YDF (i.e. a custom reducer similar to here 1) but it looks like most activity is on the TFDF repository.
st205158
Happy to help Yes. Posting the feature request in TF-DF is better for the reasons you mentioned. Ps: Happy to see a R user.
st205159
Hi, I am new to TF 2.x. When a BatchNormalization layer is created in TensorFlow 2.x, I am unable to read the BN parameters (such as momentum, epsilon, etc.) the way we used to in TF 1.x. Is there a way to work around this? Specifically, if we have a Keras BN op with no training argument, how do we read the associated parameters such as beta, gamma, epsilon, momentum, moving mean and moving average from it?
st205160
Hi @codingrat, the image related to your post came through as preformatted text. Kindly upload the image so we can suggest better solutions.
st205161
I am trying to implement a WGAN loss function with gradient penalty on TPU. After training, the result is not what I expected it to be (see the loss graph). What I expected: a continuous decrease in both generator and discriminator loss, with the values staying under a certain limit. My code for the generator and critic (discriminator) losses:

class CriticLoss(object):
  """Critic loss.

  Args:
    discriminator: Discriminator model
    Dx: Output of the discriminator on the real images
    Dx_hat: Output of the discriminator on the generated (fake) images
    x_interpolated: combined fake and real images
  """

  def __init__(self, gp_lambda=10):
    self.gp_lambda = gp_lambda

  def __call__(self, discriminator, Dx, Dx_hat, x_interpolated):
    # original critic loss
    d_loss = tf.reduce_mean(Dx_hat) - tf.reduce_mean(Dx)

    # calculate gradient penalty
    with tf.GradientTape() as tape:
      tape.watch(x_interpolated)
      dx_inter = discriminator(x_interpolated, training=True)
    gradients = tape.gradient(dx_inter, [x_interpolated])[0]
    grad_l2 = tf.sqrt(tf.reduce_sum(tf.square(gradients), axis=[1, 2, 3]))
    grad_penalty = tf.reduce_mean(tf.square(grad_l2 - 1.0))

    # final discriminator loss
    d_loss += self.gp_lambda * grad_penalty
    return d_loss


class GeneratorLoss(object):
  """Generator loss."""

  def __call__(self, Dx_hat):
    return tf.reduce_mean(-Dx_hat)

Since I already checked my DCGAN model with a cross-entropy loss and it works perfectly fine, my model is not at fault here. It could be that, because of how the TPU distribution strategy works, the loss values calculated on the individual TPU devices might not add up to suitable values. Also, I should point out that the loss values in the graph are calculated in the following way:

gen_loss.update_state(g_loss * tpu_strategy.num_replicas_in_sync)
disc_loss.update_state(d_loss * tpu_strategy.num_replicas_in_sync)

where gen_loss and disc_loss are defined as tf.keras.metrics.Mean() inside tpu_strategy.scope(), while g_loss and d_loss are the output values from GeneratorLoss and CriticLoss respectively in the step_fn.
st205162
Hi, I tried to apply clustering after pruning a model, using the example pruning code from TensorFlow: Pruning in Keras example | TensorFlow Model Optimization. I am getting this error:

/usr/local/lib/python3.7/dist-packages/tensorflow_model_optimization/python/core/clustering/keras/cluster_wrapper.py in build(self, input_shape)
    165       # stripping
    166       position_original_weight = next(
--> 167           i for i, w in enumerate(self.layer.weights) if w is original_weight)
    168       self.position_original_weights[position_original_weight] = weight_name
    169
StopIteration

Can someone please help me understand the issue and whether it is possible to do clustering after pruning?
st205163
I guess you didn’t strip the pruning wrapper layers. You can apply clustering to the vanilla model without the pruning wrapper; see the end of the sparsity-preserving clustering Keras example. In any case, you can follow the instructions there to run sparsity + clustering, as David suggested. A rough sketch of the strip-then-cluster flow is below.
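For reference, a hedged sketch of stripping the pruning wrapper and then clustering (the tiny stand-in model, the cluster count and the centroid initialization are illustrative; see the sparsity-preserving clustering example if you need the pruned zeros to survive clustering):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# A stand-in model wrapped for pruning, as in the Keras pruning example.
base = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])
pruned = tfmot.sparsity.keras.prune_low_magnitude(base)
# ... train `pruned` with the tfmot.sparsity.keras.UpdatePruningStep callback ...

# 1. Strip the pruning wrapper so clustering sees a vanilla Keras model.
stripped = tfmot.sparsity.keras.strip_pruning(pruned)

# 2. Apply clustering to the stripped model and fine-tune as usual.
clustered = tfmot.clustering.keras.cluster_weights(
    stripped,
    number_of_clusters=16,
    cluster_centroids_init=tfmot.clustering.keras.CentroidInitialization.LINEAR,
)
clustered.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)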
st205164
This could be a bug with the WASM backend, or it could be me doing something silly. In the following snippet https://replit.com/@epi-morphism/tfjs-bug2#index.html tfjs produces garbage output, as shown in the screenshot. Using the WebGL or CPU backend, however, produces the expected result (see https://replit.com/@epi-morphism/tfjs-bug#index.html). If this were caused by using some unsupported op, I’d expect a warning or error to be logged, but currently nothing out of the ordinary is being written to the console. Does anyone have any idea what the matter is here?
st205165
Thanks for reaching out and welcome to the forum. After consulting with the team, it seems that you may be using an op that is only partially implemented in WASM, which is why you do not get an "op not supported" error; technically it is supported, but not fully implemented yet, so some parameters that the WebGL version uses are thrown away / never used in the WASM version. It seems your model is composed of the following ops:
Add: 51, AddN: 20, AddV2: 45, Const: 504, Conv2D: 22, Conv2DBackpropInput: 2, DepthwiseConv2dNative: 18, GatherV2: 40, Identity: 21, Maximum: 1, Mean: 82, Merge: 20, Minimum: 1, MirrorPad: 20, Mul: 86, Neg: 20, Pack: 4, Pad: 4, Placeholder: 1, Relu: 14, Rsqrt: 41, Shape: 23, Slice: 22, SquaredDifference: 41, StridedSlice: 14, Sub: 53, Switch: 60
This list was generated from your model.json file by executing, within the tfjs-converter directory:
yarn model-summary path_to_model_json_file
My understanding is that one of these ops may be "partial", which is why the error occurs. For now, if you are unable to change the code to avoid the offending op, your best bet is to force WebGL to ensure it is always used for execution, assuming WebGL is available (which it should be on most regular laptops/smartphones: 97.9% support on CanIUse):
tf.setBackend('webgl');
st205166
Thank you for the prompt response Jason! Yeah, using the WebGL backend seems like the way to go. As this is targeted towards smartphones, it unfortunately doesn’t solve my problem, since on both Safari and Chrome on iOS it doesn’t run because WebGL 2.0 isn’t supported (see the attached screenshots). But that seems like something I’ll just have to wait for until the TensorFlow.js WebGL backend is more mature.
st205167
So WebGL 2.0 support on iOS would be a request for the Apple Safari / iOS folks. That being said, the above is just a warning: TensorFlow.js will fall back to WebGL 1.0 in this case, so you can still use WebGL on these browsers on iOS; it will just use the 1.0 version instead.
st205168
Hi all, I would like to build an LSTM for forecasting using a dataset where I have a value for each date/hour, like below:
2021-01-04 00:00 AM;0.00042
2021-01-04 01:00 AM;0.00375
2021-01-04 03:00 AM;0.00021
2021-01-04 05:00 AM;0.00164
2021-01-04 06:00 AM;0.06367
2021-01-04 07:00 AM;0.05686
2021-01-04 08:00 AM;0.816
2021-01-04 09:00 AM;1.84477
2021-01-04 10:00 AM;2.14783
2021-01-04 11:00 AM;1.87455
2021-01-04 12:00 AM;1.3206
2021-01-04 01:00 PM;1.41993
2021-01-04 02:00 PM;1.76163
....... until 2021-30-04 11:00 PM
My data starts in January 2021 and runs until now, month by month. I want to predict the next month; initially the full month, but if predicting only the first 15 days is better, no problem - after that I can predict the next 15 days. Does somebody have an example with code (link, blog, article) where I can learn the windowing method using an LSTM in TensorFlow?
st205169
Perhaps the article Time series forecasting | TensorFlow Core can help you. A bare-bones windowing sketch is below.
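As a minimal sketch of the windowing idea from that tutorial (the random series is a placeholder for your hourly values, and the 24-step window, layer sizes and epoch count are illustrative; on older TF versions the same helper lives at tf.keras.preprocessing.timeseries_dataset_from_array):

import numpy as np
import tensorflow as tf

values = np.random.rand(3000).astype("float32")  # your hourly values go here
window = 24

ds = tf.keras.utils.timeseries_dataset_from_array(
    data=values.reshape(-1, 1),   # inputs: windows of `window` consecutive values
    targets=values[window:],      # target: the value right after each window
    sequence_length=window,
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(ds, epochs=5)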
st205170
Hello. I want to use Keras for time series forecasting. When forecasting, I would like the forecast result at time t to be used as the input at time t+1, and likewise during training I want to feed the output at time t in as the input at time t+1. May I ask how this can be implemented in Keras, or how it can be achieved with TensorFlow?
st205171
Hi, I am using XLA AOT to compile my model (using the tf_library macro). All works well and the .h and object files are generated. Now I would like to get the LLVM IR that was internally generated by tfcompile, but I cannot find a way to do so, nor any flag for tfcompile that would allow me to get the LLVM IR. Is there any way to get the LLVM IR when using XLA AOT and tfcompile? Thanks
st205172
Have you tried setting the ENVS in: XLA: Optimizing Compiler for Machine Learning  |  TensorFlow 7 ?
st205173
Hi Bhack, thanks for the tip. Yes, I tried that, but it did not work. It seems that it may work when you are using JIT (not AOT) in your TensorFlow program (Python), at least that is how I understand that example. In my case I am using AOT and the tf_library macro, so I use bazel build to call the macro, and in this case I do not know how to pass the XLA_FLAGS. tf_library has an input option called flags, but XLA_FLAGS is not one of those. I did set the XLA_FLAGS env variable, just in case, but it did not work (I don't think bazel reads Linux env variables). I am literally following this to get the graph compiled: Using AOT compilation | XLA | TensorFlow, and this gist: https://gist.github.com/carlthome/6ae8a570e21069c60708017e3f96c9fd (tfcompile.ipynb). In short, after preparing the frozen graph, creating the graph.config.pbtxt and updating the BUILD with the tf_library macro info, you call:
bazel build --show_progress_rate_limit=600 @org_tensorflow//:graph
That works - the header file and the cc_library are generated - but I cannot get the LLVM IR, and I do not know how to pass the XLA_FLAGS in this case. Any ideas? Maybe there is another way to use AOT while being able to pass that flag. Thanks
st205174
I don’t know if this is still available or has been replaced with something else; see the issue "Retrieving LLVM IR from AOT tfcompile" (tensorflow/tensorflow, opened Jul 13, 2017, closed Jul 15, 2017): "Is it possible to get LLVM intermediate representation (.ll) files from tfcompile instead of object code? If so, how can it be done? Alternatively, can one get source code instead of object code from tfcompile?"
st205175
Yes, I tried that too. In the latest master branch the parameter is called --xla_dump_to (see tensorflow/compiler/xla/service/cpu/cpu_compiler.cc), but it did not work. However, I am not sure I am passing the parameter correctly. What I did was:
bazel build --show_progress_rate_limit=100 --xla_dump_to='mypath' @org_tensorflow//:graph
But the option is unrecognized [ERROR]. Maybe that is just not the correct way to pass the option (it certainly looks like that), but I do not know how else to pass the parameter. Do you have any idea? Thanks again for your help!
st205176
Have you already tried passing it via tfcompile_flags, as in tensorflow/compiler/aot/tests/BUILD?

tf_library(
    name = "test_graph_tfmatmulandadd",
    testonly = 1,
    config = "test_graph_tfmatmulandadd.config.pbtxt",
    cpp_class = "::foo::bar::MatMulAndAddComp",
    graph = "test_graph_tfmatmulandadd.pb",
    mlir_components = "None",
    tags = ["manual"],
    tfcompile_flags = "--gen_name_to_index --gen_program_shape",
)
st205177
Yes, I also tried that (--xla_dump_to='mypath'), but it did not work either. I also took a look at the flags.cc file (tensorflow/compiler/aot/flags.cc), and it did not look like --xla_dump_to is supported there. So I can see that cpu_compiler.cc seems to handle that flag, but I am banging my head against the wall trying to provide it.
st205178
I’ve recompiled TF on a fresh master version:

bazel build tensorflow/compiler/aot:tfcompile
export XLA_FLAGS="--xla_hlo_profile --xla_dump_to=/tmp/foo --xla_dump_hlo_as_text" ; bazel-bin/tensorflow/compiler/aot/tfcompile --graph=./tensorflow/compiler/aot/test_graph_tfadd.pbtxt --config=tensorflow/compiler/aot/test_graph_tfadd.config.pbtxt --cpp_class="myns::test"

ls -1 /tmp/foo/
1627458376191776.module_0000.tfcompile.8.before_optimizations.txt
1627458376191776.module_0000.tfcompile.8.cpu_after_optimizations-buffer-assignment.txt
1627458376191776.module_0000.tfcompile.8.cpu_after_optimizations.txt
execution_options.txt
module_0000.tfcompile.8.buffer_assignment
module_0000.tfcompile.8.ir-no-opt-noconst.ll
module_0000.tfcompile.8.ir-no-opt.ll
module_0000.tfcompile.8.ir-with-opt-noconst.ll
module_0000.tfcompile.8.ir-with-opt.ll
module_0000.tfcompile.8.o

It seems to work. So with bazel, have you tried to set the envs as described in "Specifying environment variables" in the Bazel docs?
st205179
Hi Bhack, calling tfcompile through the BUILD macro still did not work with the LLVM flag, even after following the Bazel doc you provided to pass the env variables (I'm still quite new to Bazel, so maybe I'm the one to blame here, not sure). Anyway, I followed your example calling tfcompile directly, and that DID work: I can see the lovely .ll files. So thank you very much for your help and for the detailed explanation of how to get the LLVM dump.
st205180
Did you use something like bazel build --action_env=XLA_FLAGS="--xla_hlo_profile --xla_dump_to=/tmp/foo --xla_dump_hlo_as_text" ...?
st205181
Hi Bhack, yeap, that is what I did. I also tried exporting the variable with its value first and then just passing --action_env=XLA_FLAGS, but that did not work either. Anyhow, I got it working, not via the BUILD macro but by calling tfcompile directly (as in your example).
st205182
I don’t know the details. But have you seen the experimental_get_compiler_ir method on the GenericFunction 1 class returned by tf.function 1? Is there maybe a way that it could help here?
st205183
markdaoust: "But have you seen the experimental_get_compiler_ir method on the GenericFunction class returned by tf.function? Is there maybe a way that it could help here?" I think this is for runtime JIT and not for AOT, right? If you know someone who is working on this area, it could be nice to know how users could use this with bazel build and to add the command to that doc page.
st205184
Bhack: I think this is at runtime jit and not for aot, right? Yes. You can do anything in a bazel genrule 2 but hopefully there’s a cleaner option somewhere. If you know someone that is working on this area I’ll try.
st205185
def define_model(vocab_size, max_length, curr_shape):
    inputs1 = Input(shape=curr_shape)
    fe1 = Dropout(0.5)(inputs1)
    fe2 = Dense(256, activation='relu')(fe1)
    model = tf.keras.models.Sequential()
    inputs2 = Input(shape=(max_length,))
    se1 = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
    se2 = Dropout(0.5)(se1)
    se3 = LSTM(256)(se2)
    decoder1 = Concatenate()([fe2, se3])
    decoder2 = Dense(256, activation='relu')(decoder1)
    outputs = Dense(vocab_size, activation='softmax')(decoder2)
    model = Model(inputs=[inputs1, inputs2], outputs=outputs)
    model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
    model.summary()
    return model

The model summary is as follows:

Layer (type)                 Output Shape      Param #    Connected to
input_2 (InputLayer)         [(None, 49)]      0
input_1 (InputLayer)         [(None, 1120)]    0
embedding (Embedding)        (None, 49, 256)   6235648    input_2[0][0]
dropout (Dropout)            (None, 1120)      0          input_1[0][0]
dropout_1 (Dropout)          (None, 49, 256)   0          embedding[0][0]
dense (Dense)                (None, 256)       286976     dropout[0][0]
lstm (LSTM)                  (None, 256)       525312     dropout_1[0][0]
concatenate (Concatenate)    (None, 512)       0          dense[0][0], lstm[0][0]
dense_1 (Dense)              (None, 256)       131328     concatenate[0][0]
dense_2 (Dense)              (None, 24358)     6260006    dense_1[0][0]

I set

history = model.fit(train_generator, epochs=1, steps_per_epoch=train_steps, verbose=1, callbacks=[checkpoint], validation_data=val_generator, validation_steps=val_steps)

and got one sentence from model.predict() every time. How can I choose the number of epochs or the learning rate well, to make the model better? My dataset is COCO, with a training set of 82700 images and a test set of 40500. The goal of the model is image captioning.
st205186
Hi, for good hyperparameter tuning, I’d suggest you use Keras Tuner: Introduction to the Keras Tuner | TensorFlow Core. This will help you; a small sketch is below.
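As a hedged sketch of how a Keras Tuner search might look (the search space, the stand-in model and the data shapes are illustrative, not specific to the captioning model above):

import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    lr = hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
    units = hp.Int("units", min_value=128, max_value=512, step=128)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(units, activation="relu", input_shape=(2048,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=10)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
# best_model = tuner.get_best_models(num_models=1)[0]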
st205187
I’m working on my first machine learning project in Python - using TensorFlow to try and syllabify words using the Moby Hyphenator II dataset. I am treating this as a multi-label classification problem in which words and their syllables are encoded in the following format:

T e n - s o r - f l o w
0 0 1 0 0 1 0 0 0 0

When reading through this guide as a starting point, I saw that the author used a custom function - they averaged weighted binary cross-entropy with the root mean squared error in PyTorch as such:

def bce_rmse(pred, target, pos_weight = 1.3, epsilon = 1e-12):
    # Weighted binary cross entropy
    loss_pos = target * torch.log(pred + epsilon)
    loss_neg = (1 - target) * torch.log(1 - pred + epsilon)
    bce = torch.mean(torch.neg(pos_weight * loss_pos + loss_neg))

    # Root mean squared error
    mse = (torch.sum(pred, dim = 0) - torch.sum(target, dim = 0)) ** 2
    rmse = torch.mean(torch.sqrt(mse + epsilon))

    return (bce + rmse) / 2

I have tried to implement this in TensorFlow in the following way:

def weighted_bce_mse(y_true, y_prediction):
    # Binary crossentropy with weighting
    epsilon = 1e-12
    positive_weight = 4.108897148948174
    loss_positive = y_true * tf.math.log(y_prediction + epsilon)
    loss_negative = (1 - y_true) * tf.math.log(1 - y_prediction + epsilon)
    bce_loss = np.mean(tf.math.negative(positive_weight * loss_positive + loss_negative))

    # Mean squared error
    mse = tf.keras.losses.MeanSquaredError()
    mse_loss = mse(y_true, y_prediction)

    averaged_bce_mse = (bce_loss + mse_loss) / 2
    return averaged_bce_mse

On doing so, I receive the error ValueError: 'outputs' must be defined before the loop. and I'm not sure why, as I define this function before I build and compile my model. I'm using the Keras Functional API, and my compilation and fit stages are:

model.compile(optimizer="adam", loss=weighted_bce_mse, metrics=["accuracy"], steps_per_execution=64)
history = model.fit(padded_inputs, padded_outputs, validation_data=(validation_inputs, validation_outputs), epochs=10, verbose=2)
st205188
I’d like to program an environment that has a changing action_spec, related to a game that swaps between 2 alternating phases. The decision to be made is quite different. How would one go about that?
st205189
Some additional info and my own thoughts on that:
- The observation returns 0 or 1 to signify the phase, and in phase 1 also the dice result.
- If the phase is 0, action 0 is "pass and take your share", action 1 is "go on".
- If the phase is 1, actions 0-13 are interpreted as the corresponding actions in that game.
I could easily treat actions 2-13 as "no-action" in phase 0, or all of 1-13 as "go on". But I expect that would make convergence much slower and more unlikely, since the DQN would have to learn these additional unnecessary relations.
st205190
I realized now that my question is not necessary for this game. I can just roll the dice and present them as the observation, ask for one of the 0-13 possible actions to take with them (which have deterministic consequences) AND, as a 15th action, whether to "pass" or "go on" afterwards. Nevertheless, there definitely are games out there which have different (repeating) phases with stochastic elements between them, like dice or the interaction of other players, so the actions for the different phases cannot simply be put together in one vector like that. So the question remains to be answered, even though I can continue my project now.
st205191
Hi, I was trying to figure out the capacity of a neural network using TensorFlow. The project I was using to test the number of neurons and layers is the TinyML sine wave project (Intro to TinyML Part 2: Deploying a TensorFlow Lite Model to Arduino | Digi-Key Electronics - YouTube). I tried 1 layer with 256 neurons, 2 layers with 128, 4x64, 8x32, 16x16, 32x8, 64x4, 128x2. I had a problem with the models with fewer than 16 neurons in a layer. I also noticed that the number of layers had to decrease in order to use fewer neurons per layer and still get a sine wave prediction. When I did 32x8, I ended up with a straight line as the prediction. I am wondering why this is happening?
st205192
Richie_C: "I had a problem with the models with neurons lower than 16 in a layer." The more hidden layers, the more parameters there are to learn (it depends on what kind of layers you have used). The higher the number of parameters, the more memory required. Please share standalone code for further support. Thanks!
st205193
I am using Google Colab and found this error. Please help me; I want to import rnn and slim through tensorflow.contrib.
st205194
@Swati_Zambre Did you try the following way of importing slim?
!pip install tf_slim
import tf_slim as slim
Check this response for more details. Please also share standalone code to reproduce your error. Thanks!
st205195
Hi, the new TF-DF library is quite impressive! However, it seems there is a problem with plotting trees. The following code does not work outside of Colab (for example in Kaggle kernels): tfdf.model_plotter.plot_model(model)
st205196
Maybe there’s some configuration issue with D3 (the JS library used to display this kind of visualization). @Mathieu, I tried changing the imports and playing with the JS usage, but I couldn’t figure out what the issue is.
st205197
Hi, thanks @nncv for reporting the issue and thanks @lgusm for looking into it. The method tfdf.model_plotter.plot_model is expected to be independent of Colab, but there is probably an issue. I’ll take a look and come back to you. ETA: tomorrow.
st205198
Hi there, I want to replicate some nodes on different devices in a Grappler optimization pass. In this situation, there are cases where I need to share a Variable node across devices. Let’s say I have a node x on GPU:0 whose type is an _Arg node. I want to replicate the same node on GPU:1; let’s call it x1. How can I share this variable in the graph rewrite pass? If I directly replicate the x node and keep everything but the name the same, I get an error along the lines of "variable is not initialized". In addition, TF1 could produce a graph showing how a variable is assigned - it has a graph with Variable/Assign nodes - but TF2 seems to have removed this. How can I find out how a variable is initialized and passed to a device?
st205199
Hello, Is the conversion from a trained Keras model (.h5) to a TensorFlow model (either TF1 or TF2) possible/supported by TensorFlow? Thanks, Ahmad