st205000
Hi, If you remove the Keras Tuner code, does your model train without issues? The error seems to be pointing to a discrepancy between your input layer and your data format.
st205001
Hi, Thank you for the prompt response. I can try but I need the tuner to select the best model parameters. Still, let me select some parameters myself and try running the model without the tuner code.
st205002
I know, but you usually use the tuner after your model architecture is set and working. After you are sure that everything connects properly, then you try hyperparameter tuning.
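A minimal sketch of that sanity check, assuming a simple classifier; the layer sizes, input shape and the x_train/y_train names are placeholders for your own setup:
import tensorflow as tf

# Fix the hyperparameters by hand first and make sure the model compiles,
# the input layer matches the data format, and fit() runs end to end.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Once this trains cleanly, layer the Keras Tuner search space on top.
# model.fit(x_train, y_train, epochs=2, validation_split=0.2)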
st205003
Acknowledged. Thank you for the response again. Let me try this and get back to you.
st205004
Hello, I have a Keras model (model.h5) of a tiny YOLOv3 object-detection model that results in two outputs, conv2d_2 and conv2d_5 (convolution layers), after implementing the following tutorial: GitHub - david8862/keras-YOLOv3-model-set: end-to-end YOLOv4/v3/v2 object detection pipeline, implemented on tf.keras with different technologies 4 In the evaluation/testing script provided, eval.py, every image is fed into an internal method of the Keras model, .predict(), and two Numpy arrays are returned. I would like to understand how these arrays are produced (what the method takes as input data from the model) and what exactly the data that is in them means. I tried to go through the source code for the .predict() method (tensorflow/training.py at v2.5.0 · tensorflow/tensorflow · GitHub 2), but I honestly couldn’t really grasp how it works or what it’s doing exactly. I would appreciate any help I can get with this. Thanks, Ahmad
st205005
I am refactoring my codebase so that I can use MLImage for all the models that I am using (MLKit PoseDetection and a custom TFLite detection model). This was a success; however, I noticed that Google just introduced MlImageAdapter, which basically converts an MLImage to TensorImage, as the TFLite model requires the image in TensorImage format. A code snippet from one of their latest commits is below: public List<Detection> detect(MlImage image, ImageProcessingOptions options) { image.getInternal().acquire(); TensorImage tensorImage = MlImageAdapter.createTensorImageFrom(image); List<Detection> result = detect(tensorImage, options); image.close(); return result; } But as I try to use MlImageAdapter in my codebase, it is unable to import it. And the dependencies in their BUILD file are puzzling for me, so I cannot add the relevant dependencies to my gradle file. The link is below: github.com/tensorflow/tflite-support, commit "Make OSS Task library support ODML image" (committed Jul 13, 2021 by xunkai55, PiperOrigin-RevId: 384370940). Any help would be awesome. Thanks!!!
st205006
Hi @Sohail_Zia Provide us with the dependencies you have used for the Android project. Take a look here where you can find the latest releases, and there if you want to use nightly builds. Use: mavenCentral // should be already there maven { // add this repo to use snapshots name 'ossrh-snapshot' url 'http://oss.sonatype.org/content/repositories/snapshots' } } in the project's build.gradle file, and: implementation("org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly-SNAPSHOT") in the app's build.gradle file. Get back with some feedback on the solution; it will be helpful for the TFLite team. Best
st205007
Currently, I am using the following: implementation 'org.tensorflow:tensorflow-lite:2.5.0' implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly' implementation 'org.tensorflow:tensorflow-lite-support:0.2.0' I tried adding implementation("org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly-SNAPSHOT"), but it didn't work. Probably it's caching the 0.2.0 which shadows 0.0.0-nightly. I also tried 0.0.0-nightly-SNAPSHOT!! to make it strict, but this also won't do. I see that MlImageAdapter is currently in the snapshot build. So, if you guys have any plans to add this to a stable build, then I can wait till then.
st205008
I can’t share the code as it’s a private repo. But I can share screenshots. https://drive.google.com/drive/folders/13BpwH-zom-GF-G-r3e5FPgIaj-1pLqqK?usp=sharing 4
st205009
BTW I heard that you guys will be releasing this in a stable minor version this quarter. So, I can probably wait till then. Not a big deal.
st205010
Sohail_Zia: TensorImage tensorImage = MlImageAdapter.createTensorImageFrom(image); Hi again, it seems that it works for me (see attached screenshot).
st205011
Everything is perfect now. I updated my dependencies to the following: implementation "org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly-SNAPSHOT" implementation 'org.tensorflow:tensorflow-lite:2.5.0' implementation 'org.tensorflow:tensorflow-lite-gpu:2.5.0'
st205012
I have 102 filters of size 32 x 32. How do I regularize the weights of the neural network to become {0, 1} or {-1, 1}? If a weight is more than 0, then w = 1; else, w = -1 or 0. Dense(inputs=1024, units=102, activation= , kernel_initializer=tf.random_normal_initializer(stddev=0.05))
st205013
You can have a look at the Quantization aware training comprehensive guide. You can then write a custom quantization algorithm that clamps the values of your weights to the desired range (and the activations too). Then you annotate the layers you want with the custom quantizer and perform Quantization Aware Training (QAT) as in the tutorial. A rough sketch of the clamping idea is shown below.
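One possible illustration of the clamping idea (not the full QAT route described above): a hard weight constraint that snaps each weight to -1 or +1 after every gradient update. This is only a sketch of the binarization step; a complete binary-network setup usually also needs something like a straight-through estimator, or the custom quantizer/QAT flow from the guide:
import tensorflow as tf

class SignConstraint(tf.keras.constraints.Constraint):
    """Snap each weight to +1 if it is positive, otherwise to -1."""
    def __call__(self, w):
        return tf.where(w > 0, tf.ones_like(w), -tf.ones_like(w))

layer = tf.keras.layers.Dense(
    units=102,
    kernel_initializer=tf.random_normal_initializer(stddev=0.05),
    kernel_constraint=SignConstraint())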
st205014
Hey everyone, I was looking at the L2 Normalization routine for TFLite, and noticed the use of GetInvSqrtQuantizedMultiplierExp function here 2. While I understand the high level goal of this routine is to find a multiplier which acts as 1 / sqrt(sum_of_squared_inputs) normalizing factor, I’m having trouble understanding how exactly is this computation being done, and what is the purpose and formulation for Newton-Raphson in this function? Any help would be appreciated. Thanks in advance.
st205015
Hi guys, I was trying to convert a custom-trained yolov5s model to a TensorFlow model for prediction only. First, converting yolov5s to an ONNX model was successful by running export.py, and converting to the TensorFlow representation worked too. A pb folder was created, containing assets (just an empty folder), a variables folder and a saved_model.pb file. With them, I used tf.keras.models.load_model, and the type of the model was _UserObject. I cannot use summary or predict because the model is _UserObject. Is there something wrong with it? [version] tensorflow-gpu: 2.3.0, onnx: 1.8.1, onnx-tf: 1.8.0
st205016
Hi @human Can you share some of the code here, so the community can try to debug it with you? So the issue, based on your description, is that after the onnx-tf conversion you can't use either tf.keras.Model's summary or predict.
st205017
[yolov5s → onnx] run in Google Colab:
!python /content/drive/MyDrive/yolov5/models/export.py --train --weights /content/drive/MyDrive/yolov5/runs/yolov5_results6/weights/best.pt --img 512 --batch 1
[onnx → tensorflow representation → pb folder]
import onnx
from onnx_tf.backend import prepare
onnx_model = onnx.load(r'test\convert_pt_to_tf\weights\best2.onnx')  # load onnx model
tf_rep = prepare(onnx_model)  # prepare tf representation
tf_rep.export_graph(r'test\convert_pt_to_tf\best2_pb')  # export the model
[pb folder → tensorflow model]
import tensorflow as tf
pb_model = tf.keras.models.load_model(r'best2_pb')
st205018
Okay, so there are two flavors of saved_model: "vanilla" and "keras". Vanilla only has the basic TensorFlow constructs (functions, variables). The "keras" flavored ones also have all the metadata required to rebuild the keras objects. It can't automatically uncompile the low-level representation up into a higher-level keras representation. tf.keras.models.load_model should be printing a warning that there's no keras metadata available in that saved_model. IIRC, future versions of tensorflow will fail if you use tf.keras.models.load_model on a saved_model that doesn't have it. Use tf.saved_model.load. It will return the same _UserObject. Inspect that to find your functions. Does it have a .signatures attribute?
st205019
Fortunately, .signatures returns something:
pb_model = tf.keras.models.load_model(r'best2_pb')
pb_model.signatures
[return] _SignatureMap({'serving_default': <ConcreteFunction signature_wrapper(images) at 0x1E30B312910>})
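From here, a minimal sketch of running inference through that signature; the input name 'images' comes from the printed ConcreteFunction, while the input shape and dtype below are only placeholders and must match what the exported graph actually expects:
import tensorflow as tf

loaded = tf.saved_model.load(r'best2_pb')
infer = loaded.signatures['serving_default']
dummy = tf.zeros([1, 512, 512, 3], dtype=tf.float32)  # hypothetical input shape
outputs = infer(images=dummy)
print({name: t.shape for name, t in outputs.items()})  # named output tensors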
st205020
I am facing the same issue after converting onnx model to tensorflow, and trying to get model.summary(), it doesn't work:
import onnx
from onnx_tf.backend import prepare
# Load the ONNX model
onnx_model = onnx.load("model_1.onnx")
tf_rep = prepare(onnx_model)  # prepare tf representation
tf_rep.export_graph("./model_1")  # export the model as .pb
# Load tf model and use
from tensorflow import keras
keras_model = keras.models.load_model('model_1')
keras_model.summary()
Output:
AttributeError Traceback (most recent call last)
<ipython-input-298-81ed83d75a0f> in <module>
----> 1 keras_model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
st205021
Facing the same issue as above. The GitHub issue has not yet been resolved so I have little hope for this working. github.com/tensorflow/models: "'_UserObject' object has no attribute 'summary'", opened Jul 29, 2020 by nicholasguimaraes (models:research:odapi, type:support): Hello, I'm trying to load a ssd_resnet50_v1_fpn_640x640_coco17_tpu-8 I just fine-tuned, but I'm coming across this error: '_UserObject' object has no attribute 'summary'. Here are the 4 lines of code I have: import tensorflow as tf; model_dir = 'C:/Users/Windows/Documents/Tensorflow_Obj_Det_API/models/research/object_detection/inference_graph/saved_model'; trained_model = tf.saved_model.load(model_dir); trained_model.summary(). I've tried including the saved_model.pb in the path to the model, but then I get this error: SavedModel file does not exist at: C:\Users\Windows\Documents\Tensorflow_Obj_Det_API\models\research\object_detection\inference_graph\saved_model\saved_model.pb/{saved_model.pbtxt|saved_model.pb} Anyone know how to load a trained model to do inference?
st205022
Hey, I trained a model in Hugging Face (wav2vec). It seems like pure TF doesn't support it and I don't want to use Keras, so any help with how I would go about loading it? Thank you in advance.
st205023
Hi Ahmed, do you have a saved_model file? You don’t need to use the Keras API to load and use a model. You can find more information here: Using the SavedModel format  |  TensorFlow Core 11
st205024
Hey, I have an h5 file at the moment. Hugging Face hasn't updated wav2vec to the saved_model format; they said they will update it, but for now I am stuck with h5.
st205025
h5 is a Keras-specific format (as far as I know). You can still use the tutorial I shared earlier to load it.
st205026
Thank you! Is there a way to convert h5 to the format I can use with TensorFlow Serving?
st205027
Yes you can. There is some of that information on the page I posted earlier. You can load the h5, save it as a saved_model and maybe add a serving signature (if the default is not good enough) and then do the serving. A minimal sketch is below. There's some more info here: Save and load Keras models  |  TensorFlow Core
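A minimal sketch of the h5 → SavedModel step, assuming 'tf_model.h5' (the file name from this thread) loads cleanly with plain Keras; Hugging Face models often need custom_objects or their own loader, and the export directory name here is just an example:
import tensorflow as tf

# Load the Keras H5 file and re-export it in the SavedModel layout that
# TensorFlow Serving expects (a numeric version subfolder).
model = tf.keras.models.load_model('tf_model.h5')
tf.saved_model.save(model, 'serving/wav2vec/1')

# Optional: check which serving signature was exported.
loaded = tf.saved_model.load('serving/wav2vec/1')
print(list(loaded.signatures.keys()))  # typically ['serving_default']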
st205028
Thanks! I opened another question where I am having issues loading the h5 files: Problems loading h5 model (General Discussion) import os import tensorflow as tf from tensorflow import keras from google.colab import drive drive.mount('/content/gdrive/') reconstructed_model = keras.models.load_model("/content/gdrive/MyDrive/TF/tf_model.h5") error: Drive already mounted at /content/gdrive/; to attempt to forcibly remount, call drive.mount("/content/gdrive/", force_remount=True). ValueError Traceback (most recent call last) in () 6 drive.mount('/content/gdrive/') 7 ----> 8 recons…
st205029
In most machine learning tutorials, when it comes to labeling classes in object detection, it usually starts with 1 as below. Is ordering important when labeling classes and giving them numbers such as id:1, id:2, etc.? Can we give them different numbers like id:5000 where there are only a few classes? Can we start with id:1000, for example? Does the engine expect ordered numbers for classes? item { id:1 name:'platenumber' } item { id:2 name:'carframe_black' } item { id:3 name:'carframe_blue' } I'm using transfer learning in TensorFlow to train on my own data. I have some objects to train; when I labeled them, I didn't give them ordered numbers starting from 1, I gave them different/random numbers like 89, 100. When the training finished and I started to do inference, the detected classes don't show the numbers which I used for labeling; they show ordered numbers such as 1, 2, 4, etc. Those ordered numbers are not in my labels, and they are annotating random areas in the image. I started to think this is the original model structure before I did the transfer learning.
st205030
Hi, I don't know exactly which tutorial you are using, but I don't think it will work with random numbers. What I'd expect is that the model will return an array with probabilities: each position of the array is the probability for that class. If you mention id 5000, that would mean position 5000 of the array. I think that the process is normalizing the ids to be 1, 2, 3 and this might result in weird results. I'd first try using a proper sequence of numbers and test with it; a small illustration follows below. Does it make sense?
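An illustration of that indexing point, with hypothetical scores and a contiguous label map (class names taken from the question):
import numpy as np

# The detector reports class indices (or a score vector indexed by class), so a
# contiguous label map acts as a simple lookup table. With arbitrary ids like 89
# or 5000, the reported index no longer lines up with your own labels.
label_map = {1: 'platenumber', 2: 'carframe_black', 3: 'carframe_blue'}
scores = np.array([0.1, 0.7, 0.2])         # hypothetical per-class scores
predicted_id = int(np.argmax(scores)) + 1  # +1 because ids start at 1 here
print(predicted_id, label_map[predicted_id])  # 2 carframe_black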
st205031
Sometimes id 0 is marked as background, but I'm not sure for this API specifically.
st205032
I think he is talking about our label_map_util.py in the model garden: github.com tensorflow/models/blob/master/research/object_detection/utils/label_map_util.py As you can see in this section and in the next one: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#create-label-map
st205033
(Reposted from stackoverflow 4 due to non response there; similar unresolved issues from other users are also referenced) Win 10 64-bit 21H1; TF2.5, CUDA 11 installed in environment (Python 3.9.5 Xeus) I am not the only one seeing this error; see also (unanswered) here 1 and here 3. The issue is obscure and the proposed resolutions are unclear/don’t seem to work (see e.g. here 3) Issue Using the TF Linear_Mixed_Effects_Models.ipynb example (download from TensorFlow github here 1) execution reaches the point of performing the “warm up stage” then throws the error: InternalError: libdevice not found at ./libdevice.10.bc [Op:__inference_one_e_step_2806] The console contains this output showing that it finds the GPU but XLA initialisation fails to find the - existing! - libdevice in the specified paths 2021-08-01 22:04:36.691300: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9623 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1) 2021-08-01 22:04:37.080007: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. 2021-08-01 22:04:54.122528: I tensorflow/compiler/xla/service/service.cc:169] XLA service 0x1d724940130 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2021-08-01 22:04:54.127766: I tensorflow/compiler/xla/service/service.cc:177] StreamExecutor device (0): NVIDIA GeForce GTX 1080 Ti, Compute Capability 6.1 2021-08-01 22:04:54.215072: W tensorflow/compiler/tf2xla/kernels/random_ops.cc:241] Warning: Using tf.random.uniform with XLA compilation will ignore seeds; consider using tf.random.stateless_uniform instead if reproducible behavior is desired. 2021-08-01 22:04:55.506464: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice. 2021-08-01 22:04:55.512876: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74] Searched for CUDA in the following directories: 2021-08-01 22:04:55.517387: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin 2021-08-01 22:04:55.520773: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 2021-08-01 22:04:55.524125: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] . 2021-08-01 22:04:55.526349: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:79] You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions. For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work. Now the interesting thing is that the paths searched includes “C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin” the content of that folder includes all the (successfully loaded at TF startup) DLLs, including cudart64_110.dll, dudnn64_8.dll… and of course libdevice.10.bc Question Since TF says it is searching this location for this file and the file exists there, what is wrong and how do I fix it? 
(NB C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 does not exist… CUDA is intalled in the environment; this path must be a best guess for an OS installation) Info: I am setting the path by aPath = '--xla_gpu_cuda_data_dir=C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin' print(aPath) os.environ['XLA_FLAGS'] = aPath but I have also set an OS environment variable XLA_FLAGS to the same string value… I don’t know which one is actually working yet, but the fact that the console output says it searched the intended path is good enough for now
st205034
Hi @Julian_Moore, Since Anaconda is not officially supported by the TensorFlow team, I would suggest looking through the GPU doc and possibly the Docker GPU doc and verifying that NVIDIA's software is installed correctly (can be tricky, I know). And you might check the Windows pip install doc and make sure you have the Microsoft Visual C++ Redistributable installed and all that. Then you can start to narrow down if it's a GPU driver issue, an Anaconda installation issue, or a TensorFlow issue.
st205035
Hi @billy Many thx for the input! The GPU software is otherwise just fine: many models have been run, all cuda DLLs load. And Anaconda is used only to build the base environment and install Python & CUDA. Several working TF2 GPU envs have been built this way without issue (obviously though this case has never been exercised before) Whilst I will take a look at the C++ aspect, the key point for me is that AFAICT a correct location is being searched for a file that exists there, yet is not “found”. If you have an explanation for that it might suggest specific mitigations (whatever the C++ redistributable situation is, I don’t see how it would account for the observed behaviour)
st205036
Just confirming @billy that I did download and install the latest VC_redist v14.29.30040.0 (I have vs 2015, 2017), installation fine, rebooted, reran the notebook and the libdevice not found error persists. FYI this previously noted error, also still occurs. (relevance?) 2021-08-04 18:48:06.587240: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.
st205037
Ah, sorry. I found a similar issue filed with JAX (which also depends on XLA). Try taking a look through some of the comments and see if those symlinks help.
st205038
@Billy I’ve fixed it, but the debug info is deeply unhelpful and something is way off generally. I had seen similar threads to the one you linked; they were where I got XLA_FLAGS suggestions & info. Of course it’s all complicated by my OS being windows and the other discussions being linux oriented and the whole thing being at a technical level generally way over my head - I just want to code in Python. However, the issue was resolved by providing the file (as a copy) at this path C:\Users\Julian\anaconda3\envs\TF250_PY395_xeus\Library\bin\nvvm\libdevice\ Note that C:\Users\Julian\anaconda3\envs\TF250_PY395_xeus\Library\bin was the path given to XLA_FLAGS, but it seems it is not looking for the libdevice file there it is looking for the \nvvm\libdevice\ path This means that I can’t just set a different value in XLA_FLAGS to point to the actual location of the libdevice file because, to coin a phrase, it’s not (just) the file it’s looking for. This is really annoying as I will have to hand patch every environment I create… and I have yet to see whether this would also avoid libdevice issues with JAX as well The debug info earlier: 2021-08-05 08:38:52.889213: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:73] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice. 2021-08-05 08:38:52.896033: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74] Searched for CUDA in the following directories: 2021-08-05 08:38:52.899128: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin 2021-08-05 08:38:52.902510: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 2021-08-05 08:38:52.905815: W tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] . is incorrect insofar as there is no “CUDA” in the search path; and FWIW I think a different error should have been given for searching in C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 since there is no such folder (there’s an old V10.0 folder there, but no OS install of CUDA 11) There is a Windows environment variable CUDA_PATH that points to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0 but that is not referenced by any error messages either. The searched path C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 seems to be a guess, and as such a) unsuccessful b) unhelpful c) misleading I note that the info also included the message “You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule’s DebugOptions” but I could not work out how to do that, and again, a level of technical complexity that seems inappropriate. Rhetorical question: since TensorFlow generally is quite happy with the cuda installation in the (anaconda) environment, why is other stuff looking elsewhere in this idiosyncratic way? So… you have some new input & 1 new question for sometime: are such libdevice issues something that will get sorted out (for the benefit of JAX too)? Hope this has been helpful. Cheers, Julian
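A small sketch of the workaround described above, so it does not have to be patched by hand every time: set XLA_FLAGS (before TensorFlow initializes) and copy libdevice.10.bc into the nvvm/libdevice subfolder that XLA actually probes. The paths are the ones from this particular conda environment and are only an example:
import os
import shutil

# Directory handed to XLA via --xla_gpu_cuda_data_dir (example path from this thread).
cuda_dir = r'C:/Users/Julian/anaconda3/envs/TF250_PY395_xeus/Library/bin'
os.environ['XLA_FLAGS'] = f'--xla_gpu_cuda_data_dir={cuda_dir}'

# XLA looks for <dir>/nvvm/libdevice/libdevice.10.bc, not <dir>/libdevice.10.bc,
# so mirror the file into that subfolder.
target = os.path.join(cuda_dir, 'nvvm', 'libdevice')
os.makedirs(target, exist_ok=True)
shutil.copy(os.path.join(cuda_dir, 'libdevice.10.bc'), target)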
st205039
Hi, I have a problem statement where I have to train an action recognizer using videos, but due to the non-availability of a good dataset there is some misclassification. Is there any setting or function where I can train on the misclassified (weak) examples by assigning them more weight? It sounds similar to XGBoost, but I want something which can work with deep learning frameworks and on image and video datasets. Thanks.
st205040
I have scored 5/5 for the first four questions. The last question troubled me a lot with the grading. It took about 80 minutes (two attempts of 40 minutes each) to test the model and it never returned a response. It just showed "Testing model…" for 2 × 40 minutes. I had to cancel the testing and retest the model, which again did not work. And then I got a notification in PyCharm (bottom-right corner in red), and it seemed like something crashed during the process. Within 2 minutes of completing the examination, I received an email saying I did not pass the test. I've checked my code and method with 2 deep learning professionals, and they have told me that my code was OK. I sent an email to [email protected] about 40 hrs ago, and I'm still waiting for a reply.
st205041
Hi, currently we are using Python 3.6 by Enthought with TensorFlow 2.2.2. We are going to update to Python 3.8 with TensorFlow 2.5+. Are there any improvements (speed/performance) between these versions of TensorFlow or Python? I can't find any relevant information about this topic, so maybe you can just point me to the right place.
st205042
Some time ago there was a proposal at: github.com/tensorflow/build "How easy it is to externalize the visibility of the benchmarks we already run continuously?" opened Jul 3, 2020 by bhack: This is a pinpoint to not lose the benchmarking subthread started at https://github.com/tensorflow/tensorflow/pull/33945#issuecomment-652471742 Quoting @zongweiz > We have [TF2 benchmarks] (https://github.com/tensorflow/models/tree/master/official/benchmark) from every official model garden model and they continuously running internally, using PerfZero framework. We should be able to select a set of benchmarks and hook it up with github CI (jenkins). Added @sganeshb to this thread for information and we will follow up. Another thought is to start CI tests using a selected set of tf_benchmarks /cc @alextp @naveenjha123 @sganeshb As you can see, this is a topic that could also interest the community more generally for some specific types of contribution: github.com/tensorflow/tensorflow "Lifting variable on retrace" (tensorflow:master ← bhack:patch-18, opened May 19, 2021 by bhack): Explore the effect on tests to fix: https://github.com/tensorflow/tensorflow/issues/27120 /cc @markdaoust I don't know if you have a specific team member to mention here as I don't have a one-to-one mapping between our github nicks involved in a thread and the ones on the forums
st205043
Hello community, I am a beginner and have a lot of general questions to understand TensorFlow. For example, are frozen models interoperable between TFv1 and TFv2? I.e., can I create a model, freeze it in TFv1 and then restore it with TFv2? And also the other way around, creating and freezing in TFv2 and restoring in TFv1? Thanks in advance
st205044
Have you already checked: TensorFlow Migrate your TensorFlow 1 code to TensorFlow 2  |  TensorFlow Core 2
st205045
I am new to TensorFlow, and not sure if this is an already answered question. Here is my problem: I need to convert TensorFlow HLO to the mlir-hlo dialect, and then convert it to my customized MLIR dialect. I need to link both the TensorFlow library and LLVM/MLIR. I know during the TensorFlow build process, a version of LLVM is pulled over and built from source. I have a customized module which wants to use the TensorFlow library and the LLVM library built in the building process. Is it possible? In other words, how do I build the intermediate libraries used by TensorFlow into static libraries so other components can use them?
st205046
Are you using Bazel for your build? If so you could just add normal build deps between these - I've at least always only done this by way of workspaces and external deps. I have not checked what the produced .a libraries look like. I believe some have been able to just build and then link as normal. What issues are you running into?
st205047
Integrating your build with TF can be painful. What we do in IREE is use some import binaries to get from TF models to an MLIR text representation. Then we pass off the MLIR text to our own ecosystem, which builds with both Bazel and CMake (getting TF to interop with CMake was a no-go). So you could do something similar and use tf-opt (or your own standalone tool integrated with the TF Bazel build system) to get to an MHLO representation and then hand it off from there. In IREE our input to the core compiler is now linalg, but we used to use MHLO. MHLO, specifically, does not need TF to build and has CMake support. There's also a standalone repository if you want to use that instead.
st205048
Hi tensorflow team and friends, I am working on a project to build tensorflow models based on tensorflow MLIR IR. According to one of your video (https://youtu.be/R5LLIj8EMxw?t=1226 1), TFLite converter leverages the MLIR IR to generate TFLite models from tensorflow models. Therefore, I assume that the converter has integrated the tool that I need. My questions are: (1) if I have generated tensorflow dialect IR (Dialect 'tf' definition  |  TensorFlow MLIR 2), how could I build a tensorflow model or the protobuf file? (2) Similarly, if I have the TFLite dialect IR, could I generate the tflite flatbuf file? (3) Is there any existing tool that I can leverage or any resource that I can refer to? Thanks!
st205049
Hey, You would want to look at the translate tools here. tf-mlir-translate has the conversions to/from external formats. For exporting back to graphdef you'd want to first run the island breakup pass too (there is a tf-graph-export registered pipeline which you can use with tf-opt for that and some other passes). The translate tool is mostly for testing/development so the user experience is intended for that audience (e.g., it has sharp edges). It does show the paths to invoke (and indeed for the tflite converter and the new bridge we wrap these up in a more user-facing package). Let me know if it doesn't work. Best, Jacques
st205050
Hi, so recently I have been researching BERT (ALBERT specifically) and its related works, and while working with them I have a few questions (which I have tried to get answers to, but I probably have knowledge gaps). How is the preprocessing done for BERT and ALBERT alike? So far I have been able to preprocess the text using albert_en_preprocess and the sentencepiece tokenizer, but it's like a genie in a bottle which I don't really understand: it's like calling a function and boom, it's done. I skimmed through ALBERT's paper 1909.11942.pdf (arxiv.org) and still didn't find it. It works but I don't get it. Yes, I did try looking at sentencepiece's source code for a sec, but even its code structure went over my head at mach 5. Output vectors: The output ALBERT vectors contain 2 vectors, one is pooled_output and the other is sequence_output. The pooled_output is the sentence embedding of dimension 1x768 and the sequence_output is the token-level embedding of dimension 1x(token_length)x768. This is pretty clear about what is what. But I couldn't find a reason for the fixed x768 thingy; it probably is my lack of research at this point. Other than that I have no problems working with the models. It would be awesome if someone with more experience could tell me the details of why; I am pretty sure the x768 answer will be short and OH! I SEE THAT. Thanks
st205051
Solved by lgusm in post #3 Hi Sid, regarding the preprocess model, To understand what’s going on behind the scene, I’d look on how to use a BERT model without the preprocessing help (Fine-tuning a BERT model  |  Text  |  TensorFlow). As you can see, there’s a lot of boilerplate code to transform text to the proper input. That…
st205052
Hi Sid, regarding the preprocess model: to understand what's going on behind the scenes, I'd look at how to use a BERT model without the preprocessing help (Fine-tuning a BERT model  |  Text  |  TensorFlow). As you can see, there's a lot of boilerplate code to transform text into the proper input. That's all wrapped in the preprocessing models using regular TF operations (mainly from the tensorflow_text package); a small sketch of running the preprocessor on its own follows below. Regarding the output, the 768 is part of the model parameters and my guess (citation needed) is that this is to keep the model memory usage under some constraints. The 768 specifically is because the ALBERT you're using is based on a BERT with the same output size. If you need an ALBERT with a bigger output size (for more accuracy) you can choose another one from this collection. Of course, the larger the output (and the model), the more resources you'll need to fine-tune and use it later. Does it make sense?
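A small sketch of peeking inside that preprocessing step by running the preprocessor model on its own; the hub handle is the one named in the question and the exact version may differ, and the output key names listed are the usual ones for this family of preprocessors rather than something verified against this exact model:
import tensorflow as tf
import tensorflow_hub as hub

# Load the standalone preprocessing SavedModel and apply it to raw text.
preprocess = hub.KerasLayer('https://tfhub.dev/tensorflow/albert_en_preprocess/3')
encoder_inputs = preprocess(tf.constant(['the quick brown fox']))

# Typically this yields input_word_ids, input_mask and input_type_ids:
# the token ids, the padding mask and the segment ids that the fine-tuning
# tutorial otherwise builds by hand.
for name, tensor in encoder_inputs.items():
    print(name, tensor.shape, tensor.dtype)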
st205053
Thanks for the direction on the preprocessing part, I will definitely look into it. Also thanks for the arbitrary-768 explanation; before, I was just looking at it as something that I missed, without realising it might be a design choice. I really appreciate you answering.
st205054
Is there a clear implementation of multivariate data into TFP’s distribution.HiddenMarkovModel? Despite repeated attempts I have yet to find any example in official documentation or publicly available repositories of the hidden Markov model in TensorFlow Probability being used with correlated data (Specifically for my use case, multiple time-series). I know of similar implementations in other libraries (Pyro, pymc3, etc.) however it would be preferable for my situation to stay in the TensorFlow environment. Furthermore, going through the source code for the HMM, it does seem event_shape for ‘observation_distribution’ is utilized, but more in relation to num_steps than for the purpose of interpreting multivariate data? Any help would be greatly appreciated
st205055
@markdaoust We don't have a TFP tag or a TFP-subscribed member. Can you reach someone internally?
st205056
It should ‘just work’ to build an HMM using an observation distribution with multivariate (vector, matrix, etc) events. If the observation distribution has event shape [d] at each timestep, then the HMM as a whole will have event shape [num_steps, d]. I threw together a quick example of fitting an HMM with multivariate normal emissions here (code also copied below): colab.research.google.com Google Colaboratory 1 import tensorflow as tf import tensorflow_probability as tfp from matplotlib import pylab as plt tfb = tfp.bijectors tfd = tfp.distributions # Generate 'ground truth' data from a known HMM as test input: true_initial_logits = [1., 0., 0.5] true_transition_logits = [[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]] true_emission_locs = tf.Variable([[0., 0.], [2., 2.], [-2., -2.]]) true_emission_scale_trils = tf.eye(2) true_hmm = tfd.HiddenMarkovModel( initial_distribution=tfd.Categorical(true_initial_logits), transition_distribution=tfd.Categorical(true_transition_logits), observation_distribution=tfd.MultivariateNormalTriL( loc=true_emission_locs, scale_tril=true_emission_scale_trils), num_steps=10) print(true_hmm.event_shape) # [10, 2] ys = true_hmm.sample(500) print(ys.shape) # [500, 10, 2] # Define trainable variables for HMM parameters: num_states = 3 initial_logits = tf.Variable(tf.zeros([num_states])) transition_logits = tf.Variable(tf.zeros([num_states, num_states])) emission_locs = tf.Variable(tf.random.stateless_normal([num_states, 2], seed=(42, 42))) emission_scale_trils = tfp.util.TransformedVariable( tf.eye(2, batch_shape=[num_states]), tfb.FillScaleTriL()) hmm = tfd.HiddenMarkovModel( initial_distribution=tfd.Categorical(initial_logits), transition_distribution=tfd.Categorical(transition_logits), observation_distribution=tfd.MultivariateNormalTriL( loc=emission_locs, scale_tril=emission_scale_trils), num_steps=10) print(hmm.event_shape) # [10, 2] # Maximize the log-prob of observed samples: losses = tfp.math.minimize( lambda: -hmm.log_prob(ys), num_steps=200, optimizer=tf.optimizers.Adam(0.1))
st205057
Hello, I am trying to create my own custom dataset for the training of the model efficientdet_lite3, but when I run this code (The data csv file has the same format as the one specified on this link Object Detection with TensorFlow Lite Model Maker): train_data, validation_data, test_data = object_detector.DataLoader.from_csv(‘data.csv’) The error that i get from the above code: Traceback (most recent call last): File “”, line 1, in File “/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_bundle/pydev_umd.py”, line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File “/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py”, line 18, in execfile exec(compile(contents+"\n", file, ‘exec’), glob, loc) File “/Users/maryam/PycharmProjects/sp/newmodel.py”, line 23, in train_data, validation_data, test_data = object_detector.DataLoader.from_csv(‘data.csv’) File “/Users/maryam/PycharmProjects/sp/venv/lib/python3.8/site-packages/tensorflow_examples/lite/model_maker/core/data_util/object_detector_dataloader.py”, line 292, in from_csv cache_writer.write_files(cache_files, csv_lines=csv_lines) File “/Users/maryam/PycharmProjects/sp/venv/lib/python3.8/site-packages/tensorflow_examples/lite/model_maker/core/data_util/object_detector_dataloader_util.py”, line 247, in write_files for idx, xml_dict in enumerate(self._get_xml_dict(*args, **kwargs)): File “/Users/maryam/PycharmProjects/sp/venv/lib/python3.8/site-packages/tensorflow_examples/lite/model_maker/core/data_util/object_detector_dataloader_util.py”, line 380, in _get_xml_dict xml_dict = _get_xml_dict_from_csv_lines(self.images_dir, image_filename, File “/Users/maryam/PycharmProjects/sp/venv/lib/python3.8/site-packages/tensorflow_examples/lite/model_maker/core/data_util/object_detector_dataloader_util.py”, line 337, in _get_xml_dict_from_csv_lines xmin, ymin = float(line[3]) * width, float(line[4]) * height ValueError: could not convert string to float: ‘’ Is there a possible answer for this problem? Thank you
st205058
Hi Marz, From the error ("ValueError: could not convert string to float: ") it looks like the column has some values with the wrong format (maybe in columns 3 and 4). Maybe the float separator is not the expected one. Hope it helps
st205059
The third and fourth elements on the line are the x and y min values of the images that need to be trained, and I made sure that they are of float data type, and I still get the same error.
st205060
I understand. It's a little bit hard to know what is wrong, but maybe the lib is trying to load an empty string: if you try a = float("") it returns exactly the same error. It could be an extra "\n" at the end or some empty line in the file. Sorry for the vague answer, but it might need some debugging to find the value that can't be converted; a quick sketch of that check is below.
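A possible way to locate the offending row, repeating the same conversion the loader performs (the traceback above points at float(line[3]) and float(line[4])); 'data.csv' is the file from the question:
import csv

# Print every row whose xmin/ymin columns cannot be parsed as floats,
# mimicking what the Model Maker loader does before it crashes.
with open('data.csv', newline='') as f:
    for line_no, row in enumerate(csv.reader(f), start=1):
        for col in (3, 4):
            try:
                float(row[col])
            except (ValueError, IndexError):
                print(f'line {line_no}: {row!r}')
                break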
st205061
Maybe it is because the test data has no numbers and this is effectively treated as float(""). I don't know how to fix it.
st205062
Yes, that could be an option. Can't you split your train data to get something in the test split? At least to unblock you and get the full process going.
st205063
Hi All, very new to ML and TensorFlow so just finding my feet. Google has its own Natural Language API and specifically the classifyText method. This is where a text, read from a file or given as a variable, is assessed and assigned a content category or taxonomy. Here is an example of returned output: Query: Google Home enables users to speak voice commands to interact with services through the Home's intelligent personal assistant called Google Assistant. A large number of services, both in-house and third-party, are integrated, allowing users to listen to music, look at videos or photos, or receive news updates entirely by voice. Category: /Internet & Telecom, confidence: 0.509999990463 Category: /Computers & Electronics/Software, confidence: 0.550000011921 Is there an open source version of this? I have tried looking through the libraries and cannot seem to find anything.
st205064
We have some basic examples of text classification. But nothing industrial-strength like this. It looks like it might be a binary classifier per-topic.
st205065
(screenshot of the error, 555×732) I am using Deep Learning with Python by François Chollet to learn Keras and I have been trying to do the deepdream example. I am getting this error and I am not sure what to do.
st205066
I am trying to read a .pbtxt file and have two questions regarding the following section: node { name: "bert/embeddings/Slice/begin" op: "Const" input: "^bert/embeddings/assert_less_equal/Assert/Assert" ... attr { key: "value" value { tensor { dtype: DT_INT32 tensor_shape { dim { size: 2 } } tensor_content: "\000\000\000\000\000\000\000\000" } } } } Why does a Const node have an input? Is the value not defined by the attr part of this node? What does the ^ mean at the beginning of the input field? Is there documentation available for the TensorFlow pbtxt format? I was not able to find anything. Many thanks for your help in advance.
st205067
I am working on the simple_audio command recognition model provided by TensorFlow. I have successfully converted the model into a tflite model, but the accuracy of the converted model has downgraded drastically. I used the same test dataset for both models, and the latter performed poorly. I have also tried quantisation-aware training, but there isn't much change. I used signal.stft for deriving the spectrogram of the given audio file, as I cannot use tf.stft in the inference code. I have tried a lot of different ways to debug it, but am facing issues. The only difference I can think of is in the process of generating the spectrogram. In the TF model it's done like this:
def get_spectrogram(waveform):
    zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32)
    waveform = tf.cast(waveform, tf.float32)
    equal_length = tf.concat([waveform, zero_padding], 0)
    spectrogram = tf.signal.stft(equal_length, frame_length=255, frame_step=128)
    spectrogram = tf.abs(spectrogram)
    return spectrogram
While in the TFLite inference code it's written as:
def get_spectrogram1(mySound):
    y = np.shape(mySound)
    res = int(''.join(map(str, y)))
    zero_padding = np.zeros((16000) - res, dtype=np.float32)
    equal_length = np.concatenate((mySound, zero_padding), axis=0)
    f, t, Zw = signal.stft(equal_length, nperseg=247, noverlap=122)
    Zw = np.absolute(Zw)
    return Zw
Can someone point out some options or suggestions?
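One way to narrow this down, as a sketch: compute both spectrograms on the same waveform and compare them directly. Note that tf.signal.stft returns a (frames, bins) array while scipy.signal.stft returns (bins, frames), and the two libraries do not necessarily apply the same window length, padding and scaling, so the shapes and magnitudes below are expected to differ until those parameters are matched:
import numpy as np
import tensorflow as tf
from scipy import signal

# Same synthetic 1-second waveform fed through both pipelines.
waveform = np.random.uniform(-1.0, 1.0, 16000).astype(np.float32)

tf_spec = tf.abs(tf.signal.stft(waveform, frame_length=255, frame_step=128)).numpy()
_, _, scipy_spec = signal.stft(waveform, nperseg=247, noverlap=122)
scipy_spec = np.abs(scipy_spec).T  # transpose to (frames, bins) for comparison

print('tf.signal.stft   :', tf_spec.shape, float(tf_spec.max()))
print('scipy.signal.stft:', scipy_spec.shape, float(scipy_spec.max()))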
st205068
There was a mega thread at: github.com/tensorflow/tensorflow Need tf.signal.rfft op in TFLite 2 opened Mar 30, 2019 jpangburn comp:lite comp:signal type:feature **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04…): macOS 10.14.4 - TensorFlow installed from (source or binary): binary - TensorFlow version (or github SHA if from source): 1.13.1 **Provide the text output from tflite_convert** If I pass the SELECT_TF_OPS option then I get: ``` Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, DIV, EXPAND_DIMS, FLOOR_DIV, FULLY_CONNECTED, GATHER, LOG, MAXIMUM, MINIMUM, MUL, PACK, PAD, RANGE, RESHAPE, SHAPE, SPLIT, SPLIT_V, STRIDED_SLICE, SUB, TRANSPOSE. Here is a list of operators for which you will need custom implementations: RFFT. ``` If I don't pass the SELECT_TF_OPS option then I get: ``` Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, CAST, CONCATENATION, DIV, EXPAND_DIMS, FLOOR_DIV, FULLY_CONNECTED, GATHER, LOG, MAXIMUM, MINIMUM, MUL, PACK, PAD, RANGE, RESHAPE, SHAPE, SPLIT, SPLIT_V, STRIDED_SLICE, SUB, TRANSPOSE. Here is a list of operators for which you will need custom implementations: ComplexAbs, Cos, LinSpace, RFFT. ``` Also, please include a link to a GraphDef or the model if possible. The code to create the model is pretty short since I hardcoded a bunch of parameters for now: with tf.Graph().as_default(), tf.Session() as sess: # input sound data as a waveform waveform = tf.placeholder(tf.float32, [None]) # A Tensor of [batch_size, num_samples] mono PCM samples in the range [-1, 1]. pcm = tf.math.scalar_mul(1/32768.0, waveform) # compute Short Time Fourier Transform stft = tf.signal.stft(pcm, frame_length=400, frame_step=160, fft_length=512) spectrogram = tf.abs(stft) # Warp the linear scale spectrograms into the mel-scale. num_spectrogram_bins = stfts.shape[-1].value lower_edge_hertz, upper_edge_hertz, num_mel_bins = 125.0, 7500.0, 64 linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix( num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz, upper_edge_hertz) mel_spectrogram = tf.tensordot( spectrogram, linear_to_mel_weight_matrix, 1) mel_spectrogram.set_shape(spectrogram.shape[:-1].concatenate( linear_to_mel_weight_matrix.shape[-1:])) # Compute a stabilized log to get log-magnitude mel-scale spectrograms. 
log_mel_spectrogram = tf.log(mel_spectrogram + 1e-6) # with the model loaded and input/output tensors defined, convert to tf.lite converter = tf.lite.TFLiteConverter.from_session(sess, [waveform], [log_mel_spectrogram]) converter.target_ops = [tf.lite.OpsSet.SELECT_TF_OPS, tf.lite.OpsSet.TFLITE_BUILTINS] tflite_model = converter.convert() **Any other info / logs** Assuming the SELECT_TF_OPS option produces a model that will work on TFLite on iOS, then I guess all I need is RFFT. Thank you! In the end it seems that there is a workaround.
st205069
Did you enable any optimizations? Post the code you used to convert to tflite.
st205070
Yes, I have used optimisation during the conversion of the tflite model. The code used for the conversion is attached below:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
Let me know if you want more information.
st205071
You can try without optimizations, but I do not think that is the issue. Have you tried showing the result of get_spectrogram and checking if it is the same value? You can also truncate your model and check after each layer whether the results are equal, to find the layer that produces the error:
modeltruncate = Model(inputs=model.input, outputs=model.layers[x].output)  # x is the number of the layer where you truncate
modeltruncate.summary()
predict = modeltruncate.predict(matrixTest)  # your data set
print(predict)
st205072
OSError: SavedModel file does not exist at: facefeatures_new_model_final.h5{saved_model.pbtxt|saved_model.pb}
from tensorflow.keras.preprocessing import image
model = tf.keras.models.load_model('facefeatures_new_model_final.h5')
This is the code I used. I can see the file is there, but on opening it shows this (screenshot attached).
st205073
Maybe you should try using the complete model path, like r'c:\user…\facefeatures_new_model_final.h5'
st205074
How did you save the model? Does it have custom ops? What is your TF version? If you want to use the SavedModel format and H5, do it in this way. Create the model and save it:
import os
model = FancyCNN()
model.save(os.path.join(your_model_path, 'model.kerasmodel'))
model.save(os.path.join(your_model_path, 'model.h5'))
That will create an h5 file containing the model and other objects, and a folder named model.kerasmodel with 3 things (a folder called variables, another called assets, and the saved_model.pb). When you load the SavedModel you have to point to the folder model.kerasmodel like this:
model = tf.keras.models.load_model(os.path.join(your_model_path, 'model.kerasmodel'))
model = tf.keras.models.load_model(os.path.join(your_model_path, 'model.h5'))
Remember, if you have custom layers you have to pass the argument custom_objects in this call to tf.keras.models.load_model(); custom_objects should be a dictionary where the keys are the names of the custom layers and the values are the objects themselves.
st205075
Hello, I have the following questions related to Quantization Aware Training (QAT). In the documentation for QAT configs (tfmot.quantization.keras.QuantizeConfig) it says: "In most cases, a layer outputs only a single tensor so it should only have one quantizer." In the case where I have multiple outputs in one layer, should I return a list where every element inside is the respective quantizer for every ordered output? How can I simulate exactly the way tflite performs the quantization during training? You have different options here (Last Value, All Values and Moving Average): Module: tfmot.quantization.keras.quantizers. In the case of convolutions, dense, batch normalization or simple max layers, which of these techniques is used by tflite to apply the quantization for the respective layer, and how do you configure this properly? Why would I need Moving Average for the activations and Last Value for the weights; do you have any documentation I can read? Does it make sense to annotate certain operations? For example, suppose I have element-wise max activations instead of relu: does it make sense to quantize the output of these? The output of the previous layer probably is int8, so if there is no posterior transformation of the values but only a slice, selection or max, do I necessarily need to quantize these ops? What about maxpooling, or paddings?
st205076
Currently, the QAT API doesn't support multiple output quantizers on QuantizeConfig. (We have to support that at some point.) Our default QAT API scheme follows the practice of the paper at the bottom of this doc: Quantization aware training  |  TensorFlow Model Optimization. But the QAT API supports custom quantization schemes, so you can change the quantizer by creating a new scheme or QuantizeConfig if you want to. The QAT default scheme is only for TFLite int8 quantization. (But the API supports other deployment logic by implementing a scheme.) This QAT default scheme is sometimes changed when we add optimized kernels to TFLite (e.g. if we added an FC-Relu fused layer to TFLite, then the scheme would also reflect that and remove the fake-quant between FC and relu), and documentation for this underlying logic is not ready yet. It also depends on the TFLite converter and kernels. I know it's not easy to find where you should add a quantizer. There are some tips for knowing where to add quantizers: A. Run PTQ and see the tflite structure (a sketch is below). In most common cases, the result from QAT is also very similar except for the weights. You may have to add quantizers between PTQ ops. B. Add a quantizer and then run QAT. B-1. If the resulting TFLite contains a useless quantize op, then remove the quantizer at that location. B-2. If some op is not quantized, then we have to add more quantizers around that op.
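A sketch of tip A under some assumptions: run plain post-training int8 quantization on a stand-in float model and list the tensors the converter produced, to see where quantization parameters (and hence fake-quant nodes in QAT) are expected. The model and input shape here are placeholders for your own:
import tensorflow as tf

# Stand-in for your own float Keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    for _ in range(100):
        yield [tf.random.uniform([1, 224, 224, 3])]  # replace with real calibration samples

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

# Inspect which tensors carry quantization parameters in the PTQ result.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
for detail in interpreter.get_tensor_details():
    print(detail['name'], detail['dtype'], detail['quantization'])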
st205077
Dear community, In my opinion with version 2.5.0 the dependencies for tensorflow-mkl are wrong (at least with Linux) as can be seen on https://anaconda.org/anaconda/tensorflow-mkl/files/modal/info/60bf9a1b45c8b2069eef15f3 2 depends _tflow_select ==2.2.0 eigen, tensorflow 2.5.0 To me this should read depends _tflow_select ==2.3.0 mkl, tensorflow 2.5.0 instead. Has anyone else noticed this, is there an explanation and is this to be fixed? It was not that way up to and including version 2.4.1. Thx Franz
st205078
Does anyone have any idea why JPEG images decoded from [TFRecord]s using tf.io.decode_jpeg (TFRecord and tf.train.Example  |  TensorFlow Core 1)s look different from the original images? For example: Original image: Webp.net-resizeimage600×600 79.4 KB The same image after being decoded and reshaped to 600x600: decoded_image600×600 41.9 KB Is this normal? I created a dataset of sharded TFRecords converted from a dataset of images, of shape 600x600x3, using the Keras image_dataset_from_directory function. This function automatically converts images to tensors, of dimensions 600x600x3 in this case, and each tensor was encoded to a byte string using the tf.io.encode_jpeg just like this: image = tf.image.convert_image_dtype(image_tensor, dtype=tf.uint8) image = tf.io.encode_jpeg(image) Each TFRecord example was created like this: def make_example(encoded_image, label): image_feature = tf.train.Feature( bytes_list=tf.train.BytesList(value=[ encoded_image ]) ) label_feature = tf.train.Feature( int64_list=tf.train.Int64List(value=[ label ]) ) features = tf.train.Features(feature={ 'image': image_feature, 'label': label_feature }) example = tf.train.Example(features=features) return example.SerializeToString() And below is the code that loads the TFRecords Dataset, using tf.image.decode_jpeg to decode the images back to tensors of shape 600x600x3, and then saves one image to disk using PIL: def read_tfrecord(example): tfrecord = { "image": tf.io.FixedLenFeature([], tf.string), "label": tf.io.FixedLenFeature([], tf.int64), } example = tf.io.parse_single_example(example, tfrecord) image = tf.image.decode_jpeg(example['image'], channels=3) label = tf.cast(example['label'], tf.int32) return image, label I have absolutely no idea what is causing this apparent loss of image information, so any help would be much appreciated! Notes: I’m using Tensorflow v2.5.0. and Pillow v8.0.1 Find entire source code here: Images2TFRecords.py · GitHub 1
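One way to isolate the problem, as a sketch: round-trip a single tensor through the same encode/decode path and compare it with what went in. One detail worth checking is that tf.image.convert_image_dtype assumes float inputs are already scaled to [0, 1], while image_dataset_from_directory yields float32 pixels in the [0, 255] range, so the tensor may need to be divided by 255 (or cast with tf.cast) before that call:
import numpy as np
import tensorflow as tf

# Stand-in for one image tensor from the dataset (float32, values in [0, 255]).
image_tensor = tf.random.uniform([600, 600, 3], maxval=255.0)

# Scale to [0, 1] before convert_image_dtype, then encode/decode as in the pipeline.
image_uint8 = tf.image.convert_image_dtype(image_tensor / 255.0, tf.uint8)
encoded = tf.io.encode_jpeg(image_uint8)
decoded = tf.io.decode_jpeg(encoded, channels=3)

# Small differences are expected from JPEG compression; large ones point at scaling.
diff = np.abs(decoded.numpy().astype(int) - image_uint8.numpy().astype(int))
print('max abs pixel difference:', diff.max())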
st205079
Thanks in advance for any guidance on this issue and I apologize if I am missing something in the docs. However, despite attempting related solutions (e.g., #1108, #1885, #1906 etc.) I have failed to successfully create two export signature defs one that would allow of raw text predictions via ai platform’s prediction service and one that would allow for the the examplegen component example output (example_gen.outputs[‘examples’]) to be used for model evaluation in the evaluator component. For reference, I am following the taxi example closely, with the main difference being that I am pulling my own data via BigQuery with BiqQueryExampleGen. dependencies tfx[kfp]==0.30.0 python 3.7 My latest attempt follows this solution which integrates a separate export signature for raw data via a MyModule(tf.Module) class. Similar to the author @jason-brian-anderson, we avoided the use of tf.reshape because we use the _fill_in_missing operation in our preprocessing_fn which expects and parses sparsetensors. Below is the code embedded within the scope of the run_fn. class MyModule(tf.Module): def __init__(self, model, tf_transform_output): self.model = model self.tf_transform_output = tf_transform_output self.model.tft_layer = self.tf_transform_output.transform_features_layer() @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')]) def serve_tf_examples_fn(self, serialized_tf_examples): feature_spec = self.tf_transform_output.raw_feature_spec() feature_spec.pop(features.LABEL_KEY) parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec) transformed_features = self.model.tft_layer(parsed_features) return self.model(transformed_features) @tf.function(input_signature=[tf.TensorSpec(shape=(None), dtype=tf.string, name='raw_data')]) def tf_serving_raw_input_fn(self, raw_data): raw_data_sp_tensor = tf.sparse.SparseTensor( indices=[[0, 0]], values=raw_data, dense_shape=(1, 1) ) parsed_features = {'raw_data': raw_data_sp_tensor, } transformed_features = self.model.tft_layer(parsed_features) return self.model(transformed_features) module = MyModule(model, tf_transform_output) signatures = {"serving_default": module.serve_tf_examples_fn, "serving_raw_input": module.tf_serving_raw_input_fn, } tf.saved_model.save(module, export_dir=fn_args.serving_model_dir, signatures=signatures, options=None, ) The keras model expects 3 inputs: 1 DENSE_FLOAT_FEATURE_KEY and 2 VOCAB_FEATURE_KEYS. The error I am currently experiencing is “can only concatenate str (not “SparseTensor”) to str” which is occurring at parsed_features = {‘raw_data’: raw_data_sp_tensor, } I also attempted to manually create the feature spec re:(test by copybara-service · Pull Request #1906 · tensorflow/tfx · GitHub 1) but there were naming convention issues and I was unable to expose schema2tensorspec to ensure similar expected input names/data types. Any and all help is welcome.
st205080
I don’t really have a complete answer, but here are some ideas. parsed_features = {‘raw_data’: raw_data_sp_tensor, } That’s just creating a dict, so I think the error is really with: transformed_features = self.model.tft_layer(parsed_features) Does that line work in serve_tf_examples_fn? If so, then that’s a clue to the problem. Fundamentally though you’re giving it a sparse tensor and it wants a string.
st205081
I suggest taking a look at: GitHub - nengo/keras-spiking: Spiking neuron integration for Keras
st205082
Hi, I am new to TensorFlow and am currently reading about Keras. I see code like this:

    tf.keras.layers.Dense(
        units,
        activation=None,
        use_bias=True,
        kernel_initializer="glorot_uniform",
        bias_initializer="zeros",
        kernel_regularizer=None,
        bias_regularizer=None,
        activity_regularizer=None,
        kernel_constraint=None,
        bias_constraint=None,
        **kwargs
    )

I see that there are a couple of initializers, regularizers and constraints, BOTH for the bias as well as the kernel. I have searched the internet but could not find information on the calling sequence. For example, is the calling sequence like this: use_bias → bias_initializer → bias_regularizer → bias_constraint AND kernel_initializer → kernel_regularizer → kernel_constraint, or is the constraint called before the regularizer? Also, is the kernel equal to the activation function, or is the kernel something that runs the activation function?
st205083
Hi, a couple of days ago I talked about something very similar here: What is effect of use_bias - #3 by lgusm

The Dense layer computes:

    output = activation(dot(input, kernel) + bias)

In other words, a dense layer is trying to calculate y = Ax + b, where:
input (the x in the equation) is the data used for training
kernel (the A in the equation) is the set of weights your layer is trying to find
bias (b in the equation) is a vector

The use_bias parameter configures the layer to calculate a bias vector together with the weights. The bias initializer, regularizer and constraint are only used when the bias calculation is enabled.

For more information: tf.keras.layers.Dense | TensorFlow Core v2.5.0
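If it helps to see this in code, here is a minimal sketch (the shapes, unit counts and activation below are just placeholders) showing that a Dense layer and the manual matmul-plus-bias computation line up:

    import tensorflow as tf

    x = tf.random.normal([2, 4])                    # batch of 2 samples, 4 features
    layer = tf.keras.layers.Dense(3, activation='relu', use_bias=True)
    y_layer = layer(x)                              # activation(dot(x, kernel) + bias)

    # The kernel (A) and bias (b) are created when the layer is first called.
    y_manual = tf.nn.relu(tf.matmul(x, layer.kernel) + layer.bias)
    # y_layer and y_manual contain the same values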
st205084
[Architecture diagram: 1004×448] I want to build the autoencoder model shown in the image above. The model consists of fully connected layers. How am I supposed to build the 2D convolution, BN (batch normalization), ReLU unit?
st205085
I'd suggest you watch this video: Modern Keras design patterns | Ses 4. It will give you some insights.
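In the meantime, here is a minimal sketch of one 2D convolution, BN, ReLU unit; the filter count, kernel size and input shape are placeholders, not values from your diagram:

    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_bn_relu(x, filters, kernel_size=3):
        # Conv2D -> BatchNormalization -> ReLU, the unit shown in the figure
        x = layers.Conv2D(filters, kernel_size, padding='same', use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        return x

    inputs = tf.keras.Input(shape=(64, 64, 3))      # placeholder input shape
    outputs = conv_bn_relu(inputs, 32)
    model = tf.keras.Model(inputs, outputs)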
st205086
According to GPU support | TensorFlow, I should install TensorRT 6.0. However, looking at https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-support-matrix/index.html, the most recent version of CUDA that TensorRT 6.0 supports is 10.2, yet the former web page says I should install CUDA 11.2. According to Documentation Archives :: NVIDIA Deep Learning TensorRT Documentation, the oldest version of TensorRT that supports CUDA 11.2 is 7.2.2. My question is, which version of TensorRT should I install? Should I install version 6.0, which is most likely not compatible with CUDA 11.2, or should I install a later version of TensorRT, which might not be compatible with TensorFlow 2.5.0? If so, which one?
st205087
The latest TensorRT version, 8.0.1, supports the following CUDA versions:
11.3 update 1
11.2 update 2
11.1 update 1
11.0 update 1
10.2
TensorRT 8.0.1 is tested with TensorFlow 1.15.5.
Take a look at the compatibility section in this link.
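As a side note (this is just a convenience check, not something from the compatibility tables above), you can print the CUDA and cuDNN versions your installed TensorFlow build was compiled against:

    import tensorflow as tf

    # Shows keys such as 'cuda_version' and 'cudnn_version' for the current build
    print(tf.sysconfig.get_build_info())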
st205088
When you add a dropout layer to a model (like below), does the dropout only apply to the preceding layer, or does it apply to all the hidden layers?

    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
st205089
It really depends on the architecture. E.g., this was the position some years ago for the convolutional layers: http://mipal.snu.ac.kr/images/1/16/Dropout_ACCV2016.pdf
More generally, one interesting recent work co-authored by Google is AutoDropout, but I don't know why the code isn't available:
arXiv.org: AutoDropout: Learning Dropout Patterns to Regularize Deep Networks
github.com/google-research/google-research issue: "Why cannot find the code for 'AutoDropout: Learning Dropout Patterns to Regularize Deep Networks'?" (opened Jun 7, 2021)
st205090
Thanks. I guess my question is more specific to tf.keras.layers.Dropout(). If I want to use dropout regularization throughout my model, do I need to add a second Dropout layer after tf.keras.layers.Dense(128, activation='relu')?
st205091
When using the tf.keras.layers.Dropout layer, the Dropout operation is applied only to the preceding layer.
st205092
Yes. If you want to apply dropout throughout the model, you need to add a Dropout layer separately after each layer it should act on. In addition, a Dropout layer can also be used for the Input layer. Moreover, the drop rate ("drop_level" in Dropout(drop_level)) is a hyperparameter.
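For example, a rough sketch of the model from the question with one Dropout layer after each hidden Dense layer (the 0.2 rates are placeholders you would tune):

    import tensorflow as tf

    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.2),   # acts only on the 128-unit layer's outputs
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dropout(0.2),   # acts only on the 64-unit layer's outputs
        tf.keras.layers.Dense(10, activation='softmax')
    ])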
st205093
Some examples: The original Transformers paper - Attention is all you need - mentioned the following:

"Residual Dropout: We apply dropout [27] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks."

Here's an example with code that uses a Transformer-based architecture: Timeseries classification with a Transformer model. Notice how dropout is applied throughout the architecture:

    Model: "model"
    Layer (type)  Output Shape  Param #  Connected to
    ==================================================
    input_1 (InputLayer)  [(None, 500, 1)]  0
    layer_normalization (LayerNorma  (None, 500, 1)  2  input_1[0][0]
    multi_head_attention (MultiHead  (None, 500, 1)  7169  layer_normalization[0][0] layer_normalization[0][0]
    dropout (Dropout)  (None, 500, 1)  0  multi_head_attention[0][0]
    tf.__operators__.add (TFOpLambd  (None, 500, 1)  0  dropout[0][0] input_1[0][0]
    layer_normalization_1 (LayerNor  (None, 500, 1)  2  tf.__operators__.add[0][0]
    conv1d (Conv1D)  (None, 500, 4)  8  layer_normalization_1[0][0]
    dropout_1 (Dropout)  (None, 500, 4)  0  conv1d[0][0]
    conv1d_1 (Conv1D)  (None, 500, 1)  5  dropout_1[0][0]
    tf.__operators__.add_1 (TFOpLam  (None, 500, 1)  0  conv1d_1[0][0] tf.__operators__.add[0][0]
    layer_normalization_2 (LayerNor  (None, 500, 1)  2  tf.__operators__.add_1[0][0]
    multi_head_attention_1 (MultiHe  (None, 500, 1)  7169  layer_normalization_2[0][0] layer_normalization_2[0][0]
    dropout_2 (Dropout)  (None, 500, 1)  0  multi_head_attention_1[0][0]
    tf.__operators__.add_2 (TFOpLam  (None, 500, 1)  0  dropout_2[0][0] tf.__operators__.add_1[0][0]
    layer_normalization_3 (LayerNor  (None, 500, 1)  2  tf.__operators__.add_2[0][0]
    conv1d_2 (Conv1D)  (None, 500, 4)  8  layer_normalization_3[0][0]
    dropout_3 (Dropout)  (None, 500, 4)  0  conv1d_2[0][0]
    conv1d_3 (Conv1D)  (None, 500, 1)  5  dropout_3[0][0]
    tf.__operators__.add_3 (TFOpLam  (None, 500, 1)  0  conv1d_3[0][0] tf.__operators__.add_2[0][0]
    layer_normalization_4 (LayerNor  (None, 500, 1)  2  tf.__operators__.add_3[0][0]
    multi_head_attention_2 (MultiHe  (None, 500, 1)  7169  layer_normalization_4[0][0] layer_normalization_4[0][0]
    dropout_4 (Dropout)  (None, 500, 1)  0  multi_head_attention_2[0][0]
    tf.__operators__.add_4 (TFOpLam  (None, 500, 1)  0  dropout_4[0][0] tf.__operators__.add_3[0][0]
    layer_normalization_5 (LayerNor  (None, 500, 1)  2  tf.__operators__.add_4[0][0]
    conv1d_4 (Conv1D)  (None, 500, 4)  8  layer_normalization_5[0][0]
    dropout_5 (Dropout)  (None, 500, 4)  0  conv1d_4[0][0]
    conv1d_5 (Conv1D)  (None, 500, 1)  5  dropout_5[0][0]
    tf.__operators__.add_5 (TFOpLam  (None, 500, 1)  0  conv1d_5[0][0] tf.__operators__.add_4[0][0]
    layer_normalization_6 (LayerNor  (None, 500, 1)  2  tf.__operators__.add_5[0][0]
    multi_head_attention_3 (MultiHe  (None, 500, 1)  7169  layer_normalization_6[0][0] layer_normalization_6[0][0]
    dropout_6 (Dropout)  (None, 500, 1)  0  multi_head_attention_3[0][0]
    tf.__operators__.add_6 (TFOpLam  (None, 500, 1)  0  dropout_6[0][0] tf.__operators__.add_5[0][0]
    layer_normalization_7 (LayerNor  (None, 500, 1)  2  tf.__operators__.add_6[0][0]
    conv1d_6 (Conv1D)  (None, 500, 4)  8  layer_normalization_7[0][0]
    dropout_7 (Dropout)  (None, 500, 4)  0  conv1d_6[0][0]
    conv1d_7 (Conv1D)  (None, 500, 1)  5  dropout_7[0][0]
    tf.__operators__.add_7 (TFOpLam  (None, 500, 1)  0  conv1d_7[0][0] tf.__operators__.add_6[0][0]
    global_average_pooling1d (Globa  (None, 500)  0  tf.__operators__.add_7[0][0]
    dense (Dense)  (None, 128)  64128  global_average_pooling1d[0][0]
    dropout_8 (Dropout)  (None, 128)  0  dense[0][0]
    dense_1 (Dense)  (None, 2)  258  dropout_8[0][0]
    ==================================================

A discussion on StackOverflow: machine learning - Where Dropout should be inserted.? Fully Connected Layer.? Convolutional Layer.? or Both.? - Stack Overflow
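As a rough illustration of the "residual dropout" pattern quoted above (this is a simplified sketch rather than the exact Keras example; the head count, key dimension and rate are placeholders), dropout is applied to the sub-layer output before the residual add and normalization:

    import tensorflow as tf
    from tensorflow.keras import layers

    def attention_sublayer(x, num_heads=4, key_dim=64, rate=0.1):
        attn = layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(x, x)
        attn = layers.Dropout(rate)(attn)             # dropout on the sub-layer output
        return layers.LayerNormalization()(x + attn)  # residual add, then normalize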
st205094
Dropout is generally used after a Dense or Convolutional layer. It affects the hidden neurons passed to the following layer. A Convolutional layer outputs a set of feature maps. In 2D image processing, these feature maps are gray-scale images that correspond to common shapes in the image set: cat eyes vs. cat noses, for example. Applying Dropout after Convolutional layers does not do what you would expect, because the values in feature maps are strongly correlated: it is like putting a slice of Swiss cheese over a picture, you can still see the picture through the holes! Convolutional layers, Dropout and BatchNormalization interact in complex ways. The best discussion that I have found on this topic is right here: https://stackoverflow.com/questions/59634780/correct-order-for-spatialdropout2d-batchnormalization-and-activation-function
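To make that concrete, here is a minimal sketch (all shapes, filter counts and rates are placeholders) contrasting SpatialDropout2D, which drops whole feature maps, with ordinary Dropout on dense activations:

    import tensorflow as tf
    from tensorflow.keras import layers

    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
    x = layers.SpatialDropout2D(0.2)(x)    # zeroes entire feature maps
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation='relu')(x)
    x = layers.Dropout(0.5)(x)             # zeroes individual activations
    outputs = layers.Dense(10, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)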
st205095
[Screenshot: Capture, 1024×503] I'm new to image classification. Recently I tried a tutorial from TensorFlow, Image classification with TensorFlow Lite Model Maker. I have a question about the optimizer: can I change the optimizer? How can I use another optimizer, based on the code in the image?
st205096
Hi @jeri_shanks, reading the API I do not see a way to change the optimizer. Are you not achieving good accuracy?
st205097
No, I just need it for comparison. I have a task from my college about image classification, and I was asked to do it with different optimizers to evaluate which one is better.
st205098
So I presume that you do not need it for a mobile or IoT device. For now, you will have to do it without using Model Maker, then.
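If you do go outside Model Maker, a rough sketch of the same kind of setup (a MobileNetV2 base with a swappable optimizer; the input shape, class count and learning rate below are placeholders) could look like this:

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights='imagenet')
    base.trainable = False                       # use the base as a feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation='softmax'),   # 5 classes, as an example
    ])

    # Swap in any optimizer you want to compare: Adam, SGD, RMSprop, ...
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])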
st205099
I also have to do it on Android; that's why I need to train the model using MobileNet.