st205300
It’s usually removed and expanded. Here you can find a non-exhaustive list of all the operations to which this behavior applies.
st205301
Dear all, I am trying to compile the Hello World example sketch included in the TensorFlow Lite library, which can be installed within the Arduino IDE. I experienced some problems. On the first try it gave me an error saying that a cmsis_gcc.h was missing in some directory. I added this file manually from the ARM GitHub. After this another error occurred, which seems to be about a missing bracket "}". The last "if()" statement seems to be missing the brackets {}. If I fix this syntax error, the compiler will still give the error about "}" or just crash. It does not matter how many "}"s I add, it will still give this error. Is this a well-known problem? I am not posting the code here since it is simply the built-in HelloWorld example. For anybody who can comment on this, you have my gratitude! Owen
st205302
Hello George, this is the link to the GitHub page of this example: github.com/tensorflow/tflite-micro, under tensorflow/lite/micro/examples/hello_world (main branch). TensorFlow Lite for Microcontrollers. For some reason this page does not contain the actual .ino file required to run the code within the Arduino IDE, so I will post it (the original version with the faulty syntax) below. As I mentioned above, the example is included in the TensorFlow Lite Arduino library.

/* Copyright 2020 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

#include <TensorFlowLite.h>

#include "main_functions.h"

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "constants.h"
#include "model.h"
#include "output_handler.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 2000;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging. Google style is to avoid globals or statics because of
  // lifetime uncertainty, but since this has a trivial destructor it's okay.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);

  // Keep track of how many inferences we have performed.
  inference_count = 0;
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x = position * kXrange;

  // Quantize the input from floating-point to integer
  int8_t x_quantized = x / input->params.scale + input->params.zero_point;
  // Place the quantized input in the model's input tensor
  input->data.int8[0] = x_quantized;

  // Run inference, and report any error
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed on x: %f\n",
                         static_cast<double>(x));
    return;
  }

  // Obtain the quantized output from model's output tensor
  int8_t y_quantized = output->data.int8[0];
  // Dequantize the output from integer to floating-point
  float y = (y_quantized - output->params.zero_point) * output->params.scale;

  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(error_reporter, x, y);

  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}

Best regards, Owen
st205303
An if statement does not need {} if it has only one instruction, for example:

    if (a > 0)
      b++;  // only done if a > 0
    d++;    // always done

Also check your #includes: if an included file has a { that is not closed, the compiler reports the error in the "main" file.
st205304
Hi, I run a CNN (with TensorFlow Lite), but the input data is a sliding window, and each time I send data to my CNN I send the full window. My question: is it possible to send only the new data and process only the new data, since the other data has only been shifted one position to the left? An example is easier to understand, so I made this example (see the attached image, 566×883). You can see that the data in red and green has only moved to the left, so it is useless to process it again. If you have an idea that could help me save a lot of processing resources, I would appreciate it. Sorry for my English.
st205305
Should @tf.function only be used during training, or should it also be used during inference? The function I use during inference calls the model, sends the output to a softmax, and then calls argmax. Adding @tf.function to this function works well, but does it provide any benefits?
st205306
@tf.function provides some nice performance benefits, and I can’t think of any particular reason not to use it at inference unless your input shapes change often or for some reason your inference logic forces recompilation. It’s also a requirement to use tf.function to even export models to the SavedModel format, so I think it makes sense: Better performance with tf.function | TensorFlow Core
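For what it’s worth, here is a minimal sketch of the inference pattern being discussed (the model and shapes are placeholders, not the actual model from the question); pinning an input_signature also avoids the retracing issue mentioned above when input shapes vary:

```python
import tensorflow as tf

# Placeholder model standing in for the one described in the question.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(8,))])

@tf.function(input_signature=[tf.TensorSpec(shape=[None, 8], dtype=tf.float32)])
def predict(x):
    logits = model(x, training=False)
    probs = tf.nn.softmax(logits, axis=-1)
    return tf.argmax(probs, axis=-1)

# The first call traces a graph; later calls with compatible shapes reuse it.
print(predict(tf.random.normal([4, 8])))
```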
st205307
Can anyone share the process to save a TensorFlow model in PB and pbtxt format?
st205308
Hi Ragu, here are some resources to help you: Save and load models | TensorFlow Core, and Using the SavedModel format | TensorFlow Core.
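To make those links concrete, a minimal sketch (the toy model and the paths are placeholders): tf.saved_model.save produces the binary saved_model.pb, and tf.io.write_graph can emit a text .pbtxt of a concrete function’s graph if a pbtxt is really what is needed:

```python
import tensorflow as tf

# Toy model; paths are placeholders.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# SavedModel format: writes saved_model.pb (binary) plus a variables/ directory.
tf.saved_model.save(model, "exported/my_model")

# Text (.pbtxt) dump of a concrete function's GraphDef.
concrete = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([None, 4], tf.float32))
tf.io.write_graph(concrete.graph.as_graph_def(), "exported", "model.pbtxt", as_text=True)
```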
st205309
I am trying to build TF with ASAN flag enabled , using the command : bazel build --config=dbg --config=asan --copt -fsanitize=address --linkopt -fsanitize=address //tensorflow/tools/pip_package:build_pip_package --verbose_failures but this would give me an error about the third party library: ERROR: /home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/external/libjpeg_turbo/BUILD.bazel:238:8: Executing genrule @libjpeg_turbo//:simd_x86_64_assemblage23 failed (Exit 1): bash failed: error executing command (cd /home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow && \ exec env - \ LD_LIBRARY_PATH=/usr/local/cuda-11.3/lib64 \ PATH=/home/user/bin:/usr/local/cuda-11.3/bin:/home/user/anaconda3/envs/tfenv/bin:/home/user/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \ /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; for out in bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jccolor-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jccolor-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jcgray-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jcgray-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jchuff-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jcphuff-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jcsample-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jcsample-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jdcolor-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jdcolor-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jdmerge-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jdmerge-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jdsample-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jdsample-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jfdctflt-sse.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jfdctfst-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jfdctint-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jfdctint-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jidctflt-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jidctfst-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jidctint-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jidctint-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jidctred-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jquantf-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jquanti-avx2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jquanti-sse2.o bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/simd/x86_64/jsimdcpu.o; do bazel-out/k8-dbg/bin/external/nasm/nasm -f elf64 -DELF -DPIC -D__x86_64__ -I $(dirname bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/jconfig.h)/ -I $(dirname bazel-out/k8-opt-exec-50AE0418/bin/external/libjpeg_turbo/jconfigint.h)/ -I $(dirname 
external/libjpeg_turbo/simd/nasm/jsimdcfg.inc.h)/ -I $(dirname external/libjpeg_turbo/simd/x86_64/jccolext-sse2.asm)/ -o $out $(dirname external/libjpeg_turbo/simd/x86_64/jccolext-sse2.asm)/$(basename ${out%.o}.asm) done') Execution platform: @local_execution_config_platform//:platform ================================================================= ==479927==ERROR: LeakSanitizer: detected memory leaks Direct leak of 5888 byte(s) in 32 object(s) allocated from: #0 0x7f0adb6bcbc8 in malloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10dbc8) #1 0x55d56e9d76d1 in nasm_malloc (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x1d96d1) #2 0x55d56e9b9c2e in do_directive (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x1bbc2e) #3 0x55d56e9c5d60 in pp_getline (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x1c7d60) #4 0x55d56e99d13f in assemble_file.constprop.8 (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x19f13f) #5 0x55d56e97b275 in main (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x17d275) #6 0x7f0adb2020b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2) Direct leak of 768 byte(s) in 192 object(s) allocated from: #0 0x7f0adb6bcbc8 in malloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10dbc8) #1 0x55d56e9d76d1 in nasm_malloc (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x1d96d1) #2 0x55d56e9ac010 in new_Token.constprop.9 (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x1ae010) #3 0x55d56e9c706d in pp_getline (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x1c906d) #4 0x55d56e99d13f in assemble_file.constprop.8 (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x19f13f) #5 0x55d56e97b275 in main (/home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/execroot/org_tensorflow/bazel-out/k8-dbg/bin/external/nasm/nasm+0x17d275) #6 0x7f0adb2020b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2) So I tried to add the following to the third_party/jpec/BUILD.bazel : # line 238 genrule( name = "simd_x86_64_assemblage23", + tags = [ + "noasan", + "no_cuda_asan", + "nomsan", + "notsan", + ], This still does not help. Is there tips of how to deal with this?
st205310
Just to link your ticket to the thread: github.com/tensorflow/tensorflow Tensorflow could not build with `config=asan` flag 8 opened Jul 22, 2021 Stonepia type:build/install <em>Please make sure that this is a build/installation issue. As per our [GitHub… Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template</em> **System information** OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04 Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: TensorFlow installed from (source or binary): source TensorFlow version: master Python version: 3.8.8 Installed using virtualenv? pip? conda?: cinda Bazel version (if compiling from source): 3.7.2 GCC/Compiler version (if compiling from source): clang-11 CUDA/cuDNN version: CUDA 11.3, CuDNN 8 GPU model and memory: GTX 1080 Ti **Describe the problem** **Provide the exact sequence of commands / steps that you executed before running into the problem** I am trying to build TF with Address Sanitizer enabled. TF could not built with the flag `--config asan`. I tried to use the following command: ```Batch CC=/usr/lib/llvm-11/bin/clang TMP=~/tmp bazel build --config asan --config=dbg --per_file_copt=//tensorflow/core/grappler/./.*\.cc@-g //tensorflow/tools/pip_package:build_pip_package --verbose_failures ``` and it failed with the following ```Log tensorflow git:(master) ✗ CC=/usr/lib/llvm-11/bin/clang TMP=~/tmp bazel build --config asan --config=dbg --per_file_copt=//tensorflow/core/grappler/./.*\.cc@-g //tensorflow/tools/pip_package:build_pip_package --verbose_failures WARNING: The following configs were expanded more than once: [cuda_clang, cuda]. For repeatable flags, repeats are counted twice and may lead to unexpected behavior. 
INFO: Options provided by the client: Inherited 'common' options: --isatty=1 --terminal_columns=149 INFO: Reading rc options for 'build' from /home/user/tensorflow/.bazelrc: Inherited 'common' options: --experimental_repo_remote_exec INFO: Reading rc options for 'build' from /home/user/tensorflow/.bazelrc: 'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true INFO: Reading rc options for 'build' from /home/user/tensorflow/.tf_configure.bazelrc: 'build' options: --action_env PYTHON_BIN_PATH=/home/user/anaconda3/envs/tfenv/bin/python3 --action_env PYTHON_LIB_PATH=/home/user/anaconda3/envs/tfenv/lib/python3.8/site-packages --python_path=/home/user/anaconda3/envs/tfenv/bin/python3 --action_env CUDA_TOOLKIT_PATH=/usr/local/cuda-11.3 --action_env TF_CUDA_COMPUTE_CAPABILITIES=6.1,6.1,6.1,6.1 --action_env LD_LIBRARY_PATH=/usr/local/cuda-11.3/lib64 --config=cuda_clang --action_env CLANG_CUDA_COMPILER_PATH=/usr/lib/llvm-11/bin/clang --config=cuda_clang INFO: Reading rc options for 'build' from /home/user/tensorflow/.bazelrc: 'build' options: --verbose_failures INFO: Found applicable config definition build:short_logs in file /home/user/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING INFO: Found applicable config definition build:v2 in file /home/user/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1 INFO: Found applicable config definition build:cuda_clang in file /home/user/tensorflow/.bazelrc: --config=cuda --repo_env TF_CUDA_CLANG=1 --@local_config_cuda//:cuda_compiler=clang INFO: Found applicable config definition build:cuda in file /home/user/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda INFO: Found applicable config definition build:cuda_clang in file /home/user/tensorflow/.bazelrc: --config=cuda --repo_env TF_CUDA_CLANG=1 --@local_config_cuda//:cuda_compiler=clang INFO: Found applicable config definition build:cuda in file /home/user/tensorflow/.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda INFO: Found applicable config definition build:asan in file /home/user/tensorflow/.bazelrc: --strip=never --copt -fsanitize=address --copt -DADDRESS_SANITIZER --copt -g --copt -O3 --copt -fno-omit-frame-pointer --linkopt -fsanitize=address INFO: Found applicable config definition build:dbg in file /home/user/tensorflow/.bazelrc: -c dbg --per_file_copt=+.*,-tensorflow.*@-g0 --per_file_copt=+tensorflow/core/kernels.*@-g0 --cxxopt -DTF_LITE_DISABLE_X86_NEON --copt -DDEBUG_BUILD INFO: Found applicable config definition build:linux in file /home/user/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false --experimental_guard_against_concurrent_changes INFO: Found applicable config definition 
build:dynamic_kernels in file /home/user/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS WARNING: Download from http://mirror.tensorflow.org/github.com/tensorflow/toolchains/archive/d781e89e2ee797ea7afd0c8391e761616fc5d50d.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found WARNING: Download from http://mirror.tensorflow.org/github.com/tensorflow/runtime/archive/c362c993e4cca885ba6afef155d864a2dfd21f85.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/bcf6f641acdbeb208ea07a9e8ded37cd5b796d26.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (0 packages loaded, 0 targets configured). INFO: Found 1 target... ERROR: /home/user/.cache/bazel/_bazel_user/7562dd0a46bcb6cada6ae19650738275/external/highwayhash/BUILD.bazel:23:11: undeclared inclusion(s) in rule '@highwayhash//:arch_specific': this rule is missing dependency declarations for the following files included by 'highwayhash/highwayhash/arch_specific.cc': '/usr/lib/llvm-11/lib/clang/11.1.0/share/asan_blacklist.txt' Target //tensorflow/tools/pip_package:build_pip_package failed to build INFO: Elapsed time: 0.680s, Critical Path: 0.40s INFO: 17 processes: 17 internal. FAILED: Build did NOT complete successfully ``` Also tried the command of `--copt -fsanitize=address --linkopt -fsanitize=address`: ```Batch CC=/usr/lib/llvm-11/bin/clang TMP=~/tmp bazel build --copt -fsanitize=address --linkopt -fsanitize=address --config=dbg --per_file_copt=//tensorflow/core/grappler/./.*\.cc@-g //tensorflow/tools/pip_package:build_pip_package --verbose_failures ``` and get the similar error. **Any other info / logs** Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. 
```Batch $env USER=user LOGNAME=user HOME=/home/user PATH=/home/user/bin:/usr/local/cuda-11.3/bin:/home/user/anaconda3/envs/tfenv/bin:/home/user/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin SHELL=/usr/bin/zsh TERM=xterm-256color XDG_SESSION_ID=99 XDG_RUNTIME_DIR=/run/user/1000 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus XDG_SESSION_TYPE=tty XDG_SESSION_CLASS=user MOTD_SHOWN=pam LANG=en_US.UTF-8 LC_NUMERIC=zh_CN.UTF-8 LC_TIME=zh_CN.UTF-8 LC_MONETARY=zh_CN.UTF-8 LC_PAPER=zh_CN.UTF-8 LC_NAME=zh_CN.UTF-8 LC_ADDRESS=zh_CN.UTF-8 LC_TELEPHONE=zh_CN.UTF-8 LC_MEASUREMENT=zh_CN.UTF-8 LC_IDENTIFICATION=zh_CN.UTF-8 SSH_CLIENT=10.249.174.114 49269 22 SSH_CONNECTION=10.249.174.114 49269 10.239.44.39 22 SSH_TTY=/dev/pts/3 SHLVL=1 PWD=/home/user/tensorflow OLDPWD=/home/user ZSH=/home/user/.oh-my-zsh PAGER=less LESS=-R LSCOLORS=Gxfxcxdxbxegedabagacad LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36: LD_PRELOAD=/home/user/anaconda3/envs/dlrm/lib/libiomp5.so CONDA_EXE=/home/user/anaconda3/bin/conda _CE_M= _CE_CONDA= CONDA_PYTHON_EXE=/home/user/anaconda3/bin/python CONDA_SHLVL=2 CONDA_PREFIX=/home/user/anaconda3/envs/tfenv CONDA_DEFAULT_ENV=tfenv CONDA_PROMPT_MODIFIER=(tfenv) LD_LIBRARY_PATH=/usr/local/cuda-11.3/lib64 CONDA_PREFIX_1=/home/user/anaconda3 _=/usr/bin/env ```
st205311
Hi all. Currently I am working on porting a trained PyTorch model to the TensorFlow 2.3 platform. For the Conv2d layers, the feature map outputs of PyTorch and TensorFlow are the same, so the conversion of the conv2d layers from PyTorch to TF2 is fine now. However, when I add a depth_to_space layer to the model, the output contains lots of mosaic, whereas the output of PixelShuffle does not have this problem. I am curious whether the mechanism of depth_to_space is different from PixelShuffle, or whether my code has a problem. Here is part of my code.

PyTorch model (after some linear mapping layers with nn.Conv2d):

    self.pslayers = [nn.Conv2d(d, 4*12, 3, 1, 3//2)]
    self.pslayers.extend([nn.PixelShuffle(2)])
    self.pslayers.extend([nn.Conv2d(12, 4*num_channels, 3, 1, 3//2)])
    self.pslayers.extend([nn.PixelShuffle(2)])
    self.pslayers = nn.Sequential(*self.pslayers)

The TF2 model (after some linear mapping layers with Conv2D):

    x = Conv2D(4*12, 3, padding="same")(x)
    x = tf.nn.depth_to_space(x, 2, data_format='NHWC')
    x = Conv2D(4, 3, padding="same")(x)
    out = tf.nn.depth_to_space(x, 2, data_format='NHWC')

The conversion of the Conv2d weights from torch to TF2:

    onnx_1_w_num = onnx_l.weight.data.numpy().transpose(2, 3, 1, 0)
    onnx_1_b_num = onnx_l.bias.data.numpy()
    tf_l.set_weights([onnx_1_w_num, onnx_1_b_num])

I have been struggling with this problem for a long while. I would very much appreciate any help!
st205312
Probably the organization of the channels in your code is problematic. You can refer to the following example for proper usage: keras.io, Image Super-Resolution using an Efficient Sub-Pixel CNN.
st205313
Thanks for the reply. May I ask whether the organization of the channels you mean is the output of the Conv2D layer, or the weights for Conv2D when I copy them from the PyTorch model to the TensorFlow model? For the Conv2d layer, I am quite sure that the output of nn.Conv2d(d, 4x12, 3, 1, 3//2) is equal to Conv2D(4x12, 3, padding="same")(x), as I checked their feature map outputs. Does the problem occur in tf.nn.depth_to_space? May I have further hints on that? Thanks.
st205314
Hi! I found that depth_to_space works fine (the mosaic is gone) when I reduce the upsampling to a single step (x4 upsampling once), i.e.

    x = Conv2D(4*4, 3, padding="same")(x)
    x = tf.nn.depth_to_space(x, 4, data_format='NHWC')

However, the mosaic is still there when upsampling in two steps (x2, x2). Do you have any idea why? Thanks.
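One hypothesis worth checking (verify it on a small tensor first): torch.nn.PixelShuffle reads the channel dimension grouped as (C, r, r), while the example in the tf.nn.depth_to_space docs corresponds to an (r, r, C) grouping for NHWC. That would explain the observations above: in the single x4 test the output has only one channel (C = 1), so the two orderings coincide, while the two-step x2/x2 path (C = 12, then C = num_channels) would scramble channels and produce mosaic. If that is the cause, the fix is to permute the output channels of each conv that feeds depth_to_space when porting its weights. A sketch (the function and variable names are mine, not part of the original code):

```python
import numpy as np

def reorder_for_depth_to_space(kernel, bias, out_channels, r):
    """Permute a conv's output channels so tf.nn.depth_to_space (NHWC)
    reproduces torch.nn.PixelShuffle.

    kernel: TF kernel of shape (kh, kw, cin, out_channels * r * r), already
            transposed from PyTorch's OIHW layout as in the post above.
    out_channels: number of channels *after* the shuffle (C).
    """
    # PixelShuffle groups the depth as (C, r, r); depth_to_space reads (r, r, C).
    idx = np.arange(out_channels * r * r).reshape(out_channels, r, r)
    idx = idx.transpose(1, 2, 0).reshape(-1)
    return kernel[..., idx], bias[idx]

# e.g. for the first upsampling stage (12 channels after a x2 shuffle):
# w, b = reorder_for_depth_to_space(onnx_1_w_num, onnx_1_b_num, out_channels=12, r=2)
# tf_l.set_weights([w, b])
```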
st205315
I have found the official introductory example guides on the official site (Tutorials | TensorFlow Core), but I could not find more example code until I bumped into https://www.tensorflow.org/text/tutorials/. So are there any other full example projects specialized by topic? (I love their format, and the added Colab just makes it easier to learn.)
st205316
keras.io: Keras documentation, Developer guides. keras.io: Keras documentation, Code examples. TensorFlow Guide | TensorFlow Core: Learn basic and advanced concepts of TensorFlow such as eager execution, Keras high-level APIs and flexible model building.
st205317
Hello, As you’ve seen, TensorFlow has a lot of projects. We encourage writing Jupyter (Colab) notebooks for all guides and tutorials since we can 1. test the docs to make sure everything works on tensorflow.org, and 2. provide a nice user experience for interactive learning. Here’s a blog post about that. The “core” TensorFlow docs live together under /guide and /tutorials and these are mostly notebooks. There are additional projects listed under the Resources section in the top nav: Models & datasets, Tools, and Libraries & extensions. They may link to other sections of the website or maybe just out to a GitHub project; it depends on whether the project is for production or for research, and on its stability. There’s not a good way to see all technical docs and notebooks across the site but, in the translations repo, you can find a nightly snapshot of all project docs aggregated together. Could be interesting browsing if you’re just looking for notebooks and want to get a sense of the larger structure.
st205318
Hi! I have trained my model using the MobileNetV3 architecture:

    def get_training_model(trainable=False):
        # Load the MobileNetV3Small model but exclude the classification layers
        EXTRACTOR = MobileNetV3Small(weights="imagenet", include_top=False,
                                     input_shape=(IMG_SIZE, IMG_SIZE, 3))
        # We will set it to both True and False
        EXTRACTOR.trainable = trainable

        # Construct the head of the model that will be placed on top of
        # the base model
        class_head = EXTRACTOR.output
        class_head = GlobalAveragePooling2D()(class_head)
        class_head = Dense(1024, activation="relu")(class_head)
        class_head = Dense(512, activation="relu")(class_head)
        class_head = Dropout(0.5)(class_head)
        class_head = Dense(NUM_CLASSES, activation="softmax", dtype="float32")(class_head)

        # Create the new model
        classifier = tf.keras.Model(inputs=EXTRACTOR.input, outputs=class_head)

        # Compile and return the model
        classifier.compile(loss="sparse_categorical_crossentropy",
                           optimizer="adam",
                           metrics=["accuracy"])

        return classifier

But when I am doing quantization-aware training it gives me an error (screenshot attached, 1028×762).
st205319
Can you try without setting dtype to float as a string? By default the dtype is set to float32.

Sayan_Nath: class_head = Dense(NUM_CLASSES, activation="softmax")(class_head)

Also, if you can provide a reproducible code example that would be great.
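Since the error screenshot is not readable here, for reference this is roughly how QAT wrapping is usually applied with the model-optimization toolkit (a sketch with a stand-in classifier head, not the MobileNetV3 model from the post):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Stand-in head; the real model comes from get_training_model() above.
base = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(1024,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),  # no dtype="float32" override
])

qat_model = tfmot.quantization.keras.quantize_model(base)
qat_model.compile(loss="sparse_categorical_crossentropy",
                  optimizer="adam",
                  metrics=["accuracy"])
```

If the error comes from the MobileNetV3 backbone itself rather than the head, keep in mind that quantize_model only supports a subset of built-in layers; unsupported parts typically need tfmot.quantization.keras.quantize_annotate_layer together with a custom QuantizeConfig.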
st205320
When trying to run a custom component on the Vertex AI platform I get the above error, although the component is defined. Following is some example code:

class ImageExampleGenExecutor(BaseExampleGenExecutor):

  def GetInputSourceToExamplePTransform(self) -> beam.PTransform:
    """Returns PTransform for image to TF examples."""
    return ImageToExample

# --------------------------------------------------------------------------------

import tfx
from tfx import v1
#import custom_examplegen_trainer
#from custom_examplegen_trainer import ImageExampleGenExecutor

def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
                     module_file: str, serving_model_dir: str,
                     ) -> tfx.v1.dsl.Pipeline:
  """Creates a three component penguin pipeline with TFX."""
  # Brings data into the pipeline.
  #example_gen = tfx.components.CsvExampleGen(input_base=data_root)
  output = example_gen_pb2.Output(
      split_config=example_gen_pb2.SplitConfig(splits=[
          example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=4),
          example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1)
      ]))
  input_config = example_gen_pb2.Input(splits=[
      example_gen_pb2.Input.Split(name='images', pattern='/.jpg'),
  ])
  data_root = os.path.join("content/", 'PetImages')
  #input_artifact = tfx.types.standard_artifacts.Examples()
  #input_artifact.uri = data_root
  #input_channel = tfx.types.channel_utils.as_channel(artifacts=[input_artifact])
  example_gen = FileBasedExampleGen(
      input_base=data_root,
      input_config=input_config,
      output_config=output,
      custom_executor_spec=executor_spec.ExecutorClassSpec(ImageExampleGenExecutor))
  #print (example_gen)

  # Uses user-provided Python function that trains a model.
  # The following three components will be included in the pipeline.
  components = [
      example_gen,
      #trainer,
      #pusher,
  ]

  return tfx.v1.dsl.Pipeline(
      pipeline_name=pipeline_name,
      pipeline_root=pipeline_root,
      components=components)

# --------------------------------------------------------------------------------

import os
import tfx
from tfx import v1
#import custom_examplegen_trainer

PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'

runner = tfx.v1.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.v1.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename=PIPELINE_DEFINITION_FILE)

# The following call will write the pipeline definition to PIPELINE_DEFINITION_FILE.
outpipe = _create_pipeline(
    pipeline_name=PIPELINE_NAME,
    pipeline_root=PIPELINE_ROOT,
    data_root=DATA_ROOT,
    module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
    serving_model_dir=SERVING_MODEL_DIR)
_ = runner.run(outpipe)

# --------------------------------------------------------------------------------

from kfp.v2.google import client

pipelines_client = client.AIPlatformClient(
    project_id=GOOGLE_CLOUD_PROJECT,
    region=GOOGLE_CLOUD_REGION,
)
_ = pipelines_client.create_run_from_job_spec(PIPELINE_DEFINITION_FILE)

Not sure if I am missing any configuration here. Thanks, Subhasish
st205321
I always get this error when performing training in the latest TFX version! I need a solution; I have been working on this for the past few weeks and found nothing. I am trying to train my TFX model training pipeline on the inequality dataset but always get this error:

ValueError: Got non-flat/non-unique argument names for SavedModel signature 'serving_default': more than one argument to '__inference_signature_wrapper_3681' was named 'fixed acidity'. Signatures have one Tensor per named input, so to have predictable names Python functions used to generate these signatures should avoid *args and Tensors in nested structures unless unique names are specified for each. Use tf.TensorSpec(..., name=...) to provide a name for a Tensor input.
st205322
… more than one argument to ‘__inference_signature_wrapper_3681’ was named ‘fixed acidity’. Could you check your data? Which TFX component is giving this error?
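The error text itself points at the usual requirement: every input to the signature must have a unique, flat name, and feature names containing spaces such as 'fixed acidity' may collide after sanitization. Renaming the columns in the data (e.g. fixed_acidity) is the simplest thing to try; for a hand-written serving signature, explicit tf.TensorSpec names look roughly like this (a self-contained sketch with made-up features, not the actual pipeline code):

```python
import tensorflow as tf

# Tiny stand-in model; the feature names here are assumptions, not the real schema.
in_a = tf.keras.Input(shape=(1,), name="fixed_acidity")
in_b = tf.keras.Input(shape=(1,), name="volatile_acidity")
out = tf.keras.layers.Dense(1)(tf.keras.layers.concatenate([in_a, in_b]))
model = tf.keras.Model({"fixed_acidity": in_a, "volatile_acidity": in_b}, out)

@tf.function(input_signature=[{
    "fixed_acidity": tf.TensorSpec([None, 1], tf.float32, name="fixed_acidity"),
    "volatile_acidity": tf.TensorSpec([None, 1], tf.float32, name="volatile_acidity"),
}])
def serve(features):
    # One uniquely named Tensor per input keeps the exported signature flat.
    return {"outputs": model(features)}

model.save("exported_model",
           signatures={"serving_default": serve.get_concrete_function()})
```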
st205323
What is the latest version of TensorFlow supported by the Raspberry Pi 32-bit OS? I have successfully compiled and installed TF 2.2.0 from source on 32-bit RPi OS, but I need at least TF 2.3.0 or 2.4.0 for my model to run on an RPi 4B.
st205324
Why do we apply a standard compression algorithm after pruning or weight clustering? To use the model we need to unzip it anyhow, right? So what difference does it make?
st205325
Hi Mohanish. This is because the tflite model stores the pruned elements without changing their data type, but modern compression algorithms compress the zero values very efficiently, so compression can reduce the model size considerably. Yes, the compression might need to be undone before execution; however, considering that TFLite is designed for mobile/IoT devices, the model is usually compressed while it is being delivered to and stored on the device, so the reduced (compressed) model size is meaningful. Hope this helps; any further discussion is welcome!
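To see this effect in practice, the usual check (as in the model-optimization examples) is to compare the file size before and after standard compression. A small sketch, where "pruned_model.tflite" is a placeholder path for the pruned or clustered model file:

```python
import os
import tempfile
import zipfile

def gzipped_size(path):
    """Size of the file after standard (DEFLATE) compression, as it would ship."""
    _, zipped_path = tempfile.mkstemp(".zip")
    with zipfile.ZipFile(zipped_path, "w", compression=zipfile.ZIP_DEFLATED) as z:
        z.write(path)
    return os.path.getsize(zipped_path)

print("raw size   :", os.path.getsize("pruned_model.tflite"))
print("zipped size:", gzipped_size("pruned_model.tflite"))
```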
st205326
Hello @Rino_Lee, so please correct my understanding of your reply: are the models compressed automatically by TFLite when they are stored on a microcontroller, or should that be done manually?
st205327
That’s not done by tflite. It should be done manually by the application developer.
st205328
But when converting the tflite model to a C array for deployment on microcontrollers, the zip is not used. So is the gzip step only for theoretical use/results, with no practical meaning? I cannot find any practical flow for this.
st205329
An example of a practical flow is ARM’s “VELA” tooling for the U55 NPU. It post-processes tflite flatbuffers, compressing constant weight tensors (the U55 supports on-the-fly HW weight decompression).
st205330
Happily using tfmot in various projects for a while now… but one aspect of the current design puzzles the heck out of me: why are there disjoint APIs for QAT and pruning? After all: you can model pruning as “just another kind of Quantizer” (one that maps some values to 0.0 and leaves the rest unchanged). Though less vital than for pruning, supporting scheduling of the degree of quantization applied by Quantizers can be useful in QAT (especially when quantizing down to sub-8-bit bit-widths). Pruning-as-a-kind-of-QAT also avoids the need for a special-cased two-step training process if QAT and pruning are to be combined. A quick PoC implementation (a Quantizer composition operator plus an incremental pruning “Quantizer”), created to simplify porting models from a legacy internal library, seems to work just fine. On a similar note: the pruning API seems to go to some trouble to prune by over-writing pruned weight variables rather than simply “injecting” a masking operation (with a straight-through estimator for the gradient). Surely, due to the constant folding applied when models are optimized for inference (e.g. the tflite converter), the “end result” of masking would be the same for less coding effort? What am I missing? Anyone from the tfmot team(s) care to shed any light?
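For illustration only, here is a rough sketch of what such a pruning-as-a-Quantizer PoC could look like, assuming the tfmot.quantization.keras.quantizers.Quantizer interface (build/__call__); this is a guess at the idea, not the poster’s actual implementation:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantizers = tfmot.quantization.keras.quantizers

class PruningQuantizer(quantizers.Quantizer):
    """Zero out the smallest-magnitude values, then hand off to a real quantizer."""

    def __init__(self, sparsity=0.5, inner=None):
        self.sparsity = sparsity
        self.inner = inner or quantizers.LastValueQuantizer(
            num_bits=8, per_axis=False, symmetric=True, narrow_range=False)

    def build(self, tensor_shape, name, layer):
        # Reuse whatever state the wrapped quantizer needs (min/max variables etc.).
        return self.inner.build(tensor_shape, name, layer)

    def __call__(self, inputs, training, weights, **kwargs):
        flat = tf.reshape(tf.abs(inputs), [-1])
        k = tf.cast(tf.cast(tf.size(flat), tf.float32) * self.sparsity, tf.int32)
        k = tf.minimum(k, tf.size(flat) - 1)
        threshold = tf.sort(flat)[k]
        mask = tf.cast(tf.abs(inputs) >= threshold, inputs.dtype)
        # Straight-through estimator: the forward pass uses the masked weights,
        # while gradients flow back as if no mask were applied.
        masked = inputs + tf.stop_gradient(inputs * mask - inputs)
        return self.inner(masked, training, weights, **kwargs)

    def get_config(self):
        return {"sparsity": self.sparsity}
```

Scheduling the sparsity over training steps (the "incremental" part) would be a small extension on top of this.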
st205331
I am doing QAT and then full-integer conversion in TFLite. I realize that for some reason TFLite requires that 0.0 always be in the tensor/weight’s [min, max] range. This is commented on in quant_ops.py and tf.quantization. I wonder why TFLite forces min <= 0 <= max? I have encountered cases where the weights are all positive (or all negative) and observed a significant loss in quantization accuracy. Is there any way to work around it?
st205332
Is there any way to specify the cache folder for the TF build, so that I don’t have to rebuild everything after the computer is rebooted? I tried to set the TMP folder, but it does not help:

TMP=~/tmp bazel build //tensorflow/tools/pip_package:build_pip_package

This does not seem to work; the ~/tmp folder is empty all the time.
st205333
The default is ~/.cache/bazel so it is available after reboot. You can add an extra cache with: https://docs.bazel.build/versions/main/remote-caching.html#disk-cache 8 But you need to consider also the cache invalidation effect of large regular changes like this one: LLVM updates and bazel cache Build As we are updating LLVM two times at day I’ve tried to query bazel with: bazel aquery "rdeps(//tensorflow:*,//third_party/llvm:*)" --include_aspects 2>/dev/null | grep Compiling | wc -l I am not a bazel ninja so probably the query could be wrong or improved but I currently see 9938 files on master (CPU only). What is the effect of this bi-daily rolling update on the average community contributor compiling workflow/environment and his bazel cache? More in general you could be interested in: github.com/tensorflow/build Provide Bazel cache for TensorFlow builds 6 opened May 15, 2020 angerson Providing a TensorFlow build cache could be very helpful to external developers,… and lower the barrier to entry of contributing to TF. Some ideas for this we've discussed before are: - Offer [Bazel RBE](https://docs.bazel.build/versions/master/remote-execution.html) resources on behalf of SIG Build. This service is in alpha on GCP. - Provide a read-only [build cache](https://docs.bazel.build/versions/master/remote-caching.html#google-cloud-storage) in a GCP bucket. - Provide `devel_cache` Docker images containing a build cache (these could be very large) - Provide code-and-cache volumes for the docker `devel` images. See also: - https://github.com/tensorflow/tensorflow/issues/39560 - https://github.com/tensorflow/tensorflow/issues/4116 - https://github.com/tensorflow/addons/issues/1414
st205334
I am currently working on a project which uses Hugging Face. I created the Hugging Face dataset and converted it to TensorFlow. The method of conversion is not from_tensor_slices(), the one shown in their documentation, but from_generator(). I found this method a lot faster, but at training time with TFTrainer() I encounter an error:

ValueError: The training dataset must have an asserted cardinality

I checked and found the reason was from_generator(). In order to verify this, I created a very basic dataset using the from_generator() method and checked its cardinality:

dumm_ds = tf.data.Dataset.from_generator(lambda: [tf.constant(1)]*1000,
                                         output_signature=tf.TensorSpec(shape=[None], dtype=tf.int64))
tf.data.experimental.cardinality(dumm_ds)

Output: <tf.Tensor: shape=(), dtype=int64, numpy=-2>

where '-2' means UNKNOWN_CARDINALITY. I would like to know whether this is a bug or not, and if not, how can I change the cardinality?
st205335
Check TypeError: dataset length is unknown tensorflow General Discussion Just to extend the official example in the documentation import tensorflow as tf def gen(): ragged_tensor = tf.ragged.constant([[1, 2], [3]]) yield 42, ragged_tensor dataset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(), dtype=tf.int32), tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32))) print(dataset.cardinality()) You will see that cardinality is -2 that is tf.data.experimental.UNKNOWN_CARDINALITY. So it can’t quickly e…
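If the length of the generator is known up front, tf.data also lets you assert it explicitly, which is usually enough for consumers that require a known cardinality (a sketch; the length 1000 is just the value from the example above):

```python
import tensorflow as tf

def gen():
    for _ in range(1000):
        yield tf.constant([1], dtype=tf.int64)

dumm_ds = tf.data.Dataset.from_generator(
    gen, output_signature=tf.TensorSpec(shape=[None], dtype=tf.int64))

# from_generator cannot infer the length, so declare it explicitly.
dumm_ds = dumm_ds.apply(tf.data.experimental.assert_cardinality(1000))
print(tf.data.experimental.cardinality(dumm_ds))  # tf.Tensor(1000, ...)
```

Note that the assertion is checked while iterating: if the generator actually yields a different number of elements, the dataset raises an error at that point.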
st205336
Hi, when we use the C APIs to create kernels, we can create a new TF_Tensor or get one from the context. For ops using generic data types such as float or int8, we can get the raw buffer pointer and do what we want. But when we get a tensor whose type is TF_VARIANT, is there any way we can get the data in it and do computation? Based on the current C APIs, we can only get some metadata of the tensor, such as shape info, data type and so on. Thanks
st205337
From the documentation of tf.keras.layers.experimental.preprocessing.Normalization: "What happens in adapt: Compute mean and variance of the data and store them as the layer’s weights. adapt should be called before fit, evaluate, or predict." As one can imagine, when dealing with large datasets, figuring out how to properly use and call adapt() is not obvious. The tutorial below might be useful here: Simple audio recognition: Recognizing keywords | TensorFlow Core. From the tutorial:

norm_layer = preprocessing.Normalization()
norm_layer.adapt(spectrogram_ds.map(lambda x, _: x))

spectrogram_ds returns tuples containing (spectrogram, label_id).
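A small end-to-end sketch of that pattern (the shapes and sizes are placeholders); for very large datasets, adapt can also be run on a representative subset such as a .take(...) of the dataset, since it only needs enough data for stable mean/variance estimates:

```python
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

# Stand-in for spectrogram_ds.map(lambda x, _: x): features only, no labels.
feature_ds = tf.data.Dataset.from_tensor_slices(
    tf.random.normal([1024, 16])).batch(64)

norm_layer = preprocessing.Normalization()
norm_layer.adapt(feature_ds)   # one pass over the data to compute mean/variance

inputs = tf.keras.Input(shape=(16,))
outputs = tf.keras.layers.Dense(1)(norm_layer(inputs))
model = tf.keras.Model(inputs, outputs)
```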
st205338
Yep, the holiday season came early for TensorFlow.js devs with this new model from a collaboration with Google Research: a super fast native TensorFlow.js model (like 120 FPS fast on my modern desktop at peak) for body pose estimation that runs in your web browser. What will you make? (demo image, 810×480) I am sure you want to try it out and learn more, so without further ado here is what you need to know. Read more and learn about MoveNet: Next-Generation Pose Detection with MoveNet and TensorFlow.js — The TensorFlow Blog. Try the demo: https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=movenet. Try the first community-made demo that uses MoveNet in a game, just 2 days after launch. You will have a lot of fun with this one: https://twitter.com/devdevcharlie/status/1395102385387843584
st205339
You can use NPM to import via Node: import * as poseDetection from '@tensorflow-models/pose-detection'; See the blog post above for more details. The rest of the code should be the same.
st205340
Yes, I have read the post and looked at the example and source, but I don’t understand how to give a stream to the pose estimation method if I run my code on the server side.
st205341
This tutorial may be of use to you; it shows you how to use imagery with one of our other models in Node, but the principle would be similar: Medium (12 Apr 21), "Image Classification API With NodeJS, TensorflowJS, And MobileNet Model". Now, with TensorFlow.js developers can easily run a machine learning model using JavaScript, and with pre-trained models you can complete…
st205342
I was quite interested to know whether there are similar tutorials for TensorFlow.js in React Native. I am going through some general React Native tutorials. Do you have any suggestions on that? It seems like not many tutorials are written on MoveNet.
st205343
This guy is probably making some of the best React Native stuff on YouTube right now that also uses TensorFlow.js:
st205344
Hi Jason, thanks for the info. I think most of the videos made by him run in React (JavaScript) but perhaps not React Native Expo. I saw somewhere a React Native TensorFlow implementation of some models (like object detection) but nothing specifically for pose estimation, as far as I have dug into it.
st205345
@Gant_Laborde may have some experience with React Native? Have you seen anyone use PoseNet or MoveNet in React Native, Gant? Off the top of my head I can not think of any examples using this right now. @DLC_Trial That being said, if you are going to be the first to try, I would recommend trying our much newer "MoveNet" model, which is far more accurate and faster as detailed in this thread, if you do decide to use it within React Native. Also, you may find this tutorial useful on how someone ported an arbitrary TFJS model to run in React Native, which is what you may need to do for this too: https://heartbeat.fritz.ai/image-classification-on-react-native-with-tensorflow-js-and-mobilenet-48a39185717c
st205346
Thanks @Jason, I tried the link for image classification and could compile it. I was thinking an intermediate step could be to try p5.js for MoveNet. Yes, starting with MoveNet would be ideal as you suggested, but BlazePose/MediaPipe definitely has more keypoints to track (though I think the results are about 12 FPS) from what I read. Thanks
st205347
Here’s a repo that uses TFJS on RN Expo: github.com amandeepmittal/mobilenet-tfjs-expo (⚛️ 📱 MobileNet Image Classification with TensorFlow and Expo). Using the TFJS platform adapter for React Native works on raw RN and Expo apps as far as I know.
st205348
Yep it depends what you want. More keypoints (33 vs 17) or a model that has less “jitter” when predicting etc - movenet was designed for fast movements and more extreme poses I believe and when I last tested it runs at 60 FPS on a current gen iPhone, and around 120 FPS on a desktop with 1070 Nvidia GPU (which is a few generations old now). We have had companies in physiotherapy use MoveNet for real world production use cases for this reason. So choose what is best for your use case and try them both out! Both run well in browser so you can see what works best for your use case and then once you know you can decide which one you want to port to React Native. Good luck!
st205349
Welcome to the community @Takuya. I am unsure of this at present, if I hear of any updates though I shall post something once I hear more details around this area.
st205350
@Takuya So I can confirm that this multi-person detection is currently being investigated. Once there are demos and such to share I will be posting them to our usual social channels for all to see so keep an eye out for that when it is launched. If you have not seen any updates in a month or two around this, please feel free to reach out to me again to get a new ETA.
st205351
Thanks @Jason Sorry I asked update schedule here. I look forward to the update at social channels.
st205352
No problems @Takuya Feel free to post if you have anything on your mind related to TensorFlow.js! It is great to hear from you and excited to see what you create
st205353
Basically, I am training my model after some feature engineering on the raw data. The feature engineering includes converting an Order_date column (from a pandas dataframe, dtype: object) into four separate features, i.e. day of month, month, year and weekday (see the following screenshot). It is a simple task using pandas functionality, but recently I read in some Keras and TensorFlow documentation that the best models are end to end, i.e. they take the raw data and implement all feature engineering as layers before the neural network. So I am trying to adhere to that philosophy, but being somewhat of a noob with the TensorFlow API, I am not sure if this is possible here. I also looked up tf.data pipelines and the methods available in tf.feature_column, but they do not seem to fit my use case. So any help regarding the cleanest way to achieve this while sticking with the TensorFlow layer philosophy? I could potentially use a custom layer, subclassing the base layer class, which takes a batch of yyyy-mm-dd strings and spits out the four features as output, using pandas timestamps (which I want to avoid), but I have not implemented that yet. There should still be a cleaner way using the TF APIs.
st205354
I think you want something like the second bullet point in: github.com/tensorflow/transform Transforming date values 11 opened Apr 23, 2019 Malonl stat:awaiting tensorflower type:feature Hi. I have a use case where I want to use date features as input values for a… predictive model. I need to transform the date features to be useful. Examples I would like to be able to do: - Given two date columns, generate a new column with the difference between them in days. The days can be days, weeks, or even years apart - From a column with dates, create three different columns with extracted data, one for day, one for month, one for year If we could use some python library (datetime for example), it would be trivial. Without the library, we would need to implement the knowledge about the calendar (number of days in each month etc) I believe we cannot use a conventional python library because if we use it, the transformation would not be written to the graph, and thus we would not be able to have it at serving time. My question is if it is possible to do this within TF Transform and if not, is this a feature that is coming up? If there is no way such operations to the graph, we would need to implement a piece of pipeline transforming the data both before training and before serving, outside of the graph. Thanks in advance! PS: For transparancy, I have asked this question on [tensorflow/tensorflow](https://github.com/tensorflow/tensorflow/issues/27893#issuecomment-484983991) and [tensorflow/tfx](https://github.com/tensorflow/tfx/issues/41#issuecomment-485289180) as well, but have been redirected to this forum.
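Building on that, here is a minimal sketch of the "custom layer" route mentioned in the question, using only graph-friendly string ops (the layer name and shapes are mine; weekday is left out because it needs extra calendar arithmetic that is easier to precompute outside the graph or implement separately):

```python
import tensorflow as tf

class DateFeatures(tf.keras.layers.Layer):
    """Split 'yyyy-mm-dd' strings into numeric year/month/day features."""

    def call(self, dates):
        parts = tf.strings.split(dates, "-")           # ragged [year, month, day]
        parts = parts.to_tensor(shape=[None, 3])
        nums = tf.strings.to_number(parts, out_type=tf.float32)
        return {"year": nums[:, 0:1], "month": nums[:, 1:2], "day": nums[:, 2:3]}

# The date column enters the model as a plain string feature.
date_in = tf.keras.Input(shape=(), dtype=tf.string, name="Order_date")
date_feats = DateFeatures()(date_in)
```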
st205355
Hi everyone, I’m trying to implement a simple feed-forward neural network with a modification based on this paper, arXiv:2007.11207 (see figure 3b). Basically the inputs (x) are scaled by a vector, e.g. (1,2,3), producing new inputs (1x, 2x, 3x). These are then passed into 3 sub-networks that are independent of each other. The outputs of these networks are then summed together at the end to produce an output. I have the following TF2 code below, but when I try to plot the model using tf.keras.utils.plot_model(model) I get the following error:

AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute '_keras_history'

The code I am using to build the model is (for simplicity the sub-networks just have a single layer):

import tensorflow as tf

def build_model():
    input_shape = 1
    output_shape = 1
    units = 128

    scales = tf.convert_to_tensor([[1], [2], [3]], dtype=tf.float32)

    input_layer = tf.keras.Input(shape=(input_shape,))

    # create sub-networks
    xs = []
    for scale in scales:
        scaled_input = tf.keras.layers.multiply([input_layer, scale])
        xs.append(tf.keras.layers.Dense(units, activation='relu')(scaled_input))

    # add the outputs of each sub-network
    output_layer = tf.keras.layers.add([x for x in xs])
    output_layer = tf.keras.layers.Dense(output_shape, activation="linear")(output_layer)

    model = tf.keras.models.Model(inputs=input_layer, outputs=output_layer)
    return model

model = build_model()

# then the following throws the error:
tf.keras.utils.plot_model(model)

Thanks in advance for any help!
st205356
Hi all, I think I managed to track down the problem to this line:

scaled_input = tf.keras.layers.multiply([input_layer, scale])

TF2 doesn’t like this multiplication. Is it because scale is not a layer? I guess to fix this I could make a new layer, or use a Lambda layer that does the multiplication between the inputs and a scalar. Thanks!
st205357
I managed to get this working now; there was a similar problem on SO here. I’ll post the working code here in case anyone is interested.

import tensorflow as tf

# create a new layer that simply multiplies the input by a scalar
# got this from
# https://www.tensorflow.org/api_docs/python/tf/keras/layers/Lambda
class ScaleLayer(tf.keras.layers.Layer):
    def __init__(self, scale):
        super(ScaleLayer, self).__init__()
        self.scale = scale

    def call(self, inputs):
        return inputs * self.scale

# need a function to create sub-networks
def build_subnetwork(input_tensor, units, activation='relu', n_hidden=1):
    x = tf.keras.layers.Dense(units, activation=activation)(input_tensor)
    for _ in range(1, n_hidden):
        x = tf.keras.layers.Dense(units, activation=activation)(x)
    # single output here?
    x = tf.keras.layers.Dense(1, activation='linear')(x)
    return x

def build_model(
    input_shape=1,
    output_shape=1,
    units=128,
    activation='relu',
    n_hidden_subnetwork=2,
    scales=[1]
):
    input_layer = tf.keras.Input(shape=(input_shape,))

    # create sub-networks
    xs = []
    for scale in scales:
        scaled_input = ScaleLayer(scale)(input_layer)
        xs.append(build_subnetwork(scaled_input, units, activation, n_hidden_subnetwork))

    if len(xs) > 1:
        output_layer = tf.keras.layers.add([x for x in xs])
    else:
        output_layer = xs[0]

    output_layer = tf.keras.layers.Dense(output_shape, activation="linear")(output_layer)

    model = tf.keras.models.Model(inputs=input_layer, outputs=output_layer)
    return model
st205358
Hello, I was wondering if it is possible in any way to perform optimizations, such as pruning, on a trained TF2 Object-Detection SavedModel? Thanks, Ahmad
st205359
It’s not easy to apply pruning in PTQ fashion, so I’d say pruning should be there from the training phase.
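For reference, this is roughly what training-time pruning looks like with the model-optimization toolkit on a plain Keras model (a sketch with a toy model; the TF2 Object Detection API models are not plain Keras models, so this does not drop in directly for them):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy stand-in model for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])

pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000))

pruned.compile(optimizer="adam",
               loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# pruned.fit(...) must also pass callbacks=[tfmot.sparsity.keras.UpdatePruningStep()].
```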
st205360
You can upvote and check: github.com/tensorflow/model-optimization Object detection API 5 opened Oct 22, 2019 dbrazey technique:pruning Hello all, I was wondering if your pruning tools could be used on my object …detection models (SSD / Faster RCNN, ...) trained with the tensorflow object detection API. When training, I don't use directly Keras, but I follow tutorials of the object detection API. Thanks for your help :)
st205361
Getting started, so all the naive questions come out. Can I embed TF in my C program using the C API and not have to deal with Python at all?
st205362
The C API is very minimal: Install TensorFlow for C | TensorFlow. What is your use case?
st205363
I want to merge a DNA sequence alignment program (minimap2) with patterns in gene alignments. Since the bulk of the work is in C code, I’d prefer to stay in C.
st205364
I am trying to modify the layers used in the TF Lite backend, but I’m not sure how to recompile just the file(s) I modified without recompiling the entire codebase. To install from source, I followed the instructions here, but this is my first time using Bazel to build something (I’ve only used make or CMake before). Also, the instructions ask to install TF once it is built. So my three related questions are:

1. When I make a change to a layer in a C file (e.g., tensorflow/lite/kernels/internal/reference/fully_connected.h), how do I recompile just this file? When I run the bazel build again, it seems to recompile all of TF. (I did see that the official instructions are to make a new op and register it etc., which I do want to do, but for now modifying an existing kernel seemed like a simpler place to start; then I can try out my custom op separately before adding a new op in TF.)
2. Is there a way to run TF without installing it each time, since I will be making a lot of changes?
3. I am so used to being able to print stuff for debugging, but I don’t seem to be able to do that within this layer. Is there a way within TF to print something as a debug message?

Thank you!
st205365
Let’s say I have an array (final) of shape (2048, 128). I have another array called indices of shape (2048, 1). If I wanted to scatter a value of 1 into final w.r.t. indices, how would I do that? What I have tried:

indices = tf.constant(np.random.randint(1000, size=(2048, 1)))
output = tf.scatter_nd(indices, 1, [2048, 128])

This results in:

InvalidArgumentError: Updates shape must have rank at least one. Found:[] [Op:ScatterNd]

What is the recommended solution?
st205366
Oh, I guess I can just do:

indices = tf.constant(np.random.randint(1000, size=(2048, 1)))
output = tf.one_hot(indices, 128)
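Two caveats with the one_hot route, worth double-checking: with indices of shape (2048, 1) the result has shape (2048, 1, 128), so a squeeze or reshape is needed, and any index value >= 128 (possible with randint(1000)) simply produces an all-zero row. For completeness, the original scatter_nd attempt can also be made to work by supplying full (row, column) index pairs and a rank-1 updates tensor; a sketch:

```python
import numpy as np
import tensorflow as tf

col = tf.constant(np.random.randint(128, size=(2048, 1)), dtype=tf.int64)  # one column per row
row = tf.reshape(tf.range(2048, dtype=tf.int64), [-1, 1])
indices = tf.concat([row, col], axis=1)        # shape (2048, 2): full (row, col) coordinates

updates = tf.ones([2048])                      # rank 1, unlike the scalar `1` that caused the error
output = tf.scatter_nd(indices, updates, shape=[2048, 128])
```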
st205367
How can I log the AUC score / custom metrics at every tree? I tried:

model = tfdf.keras.GradientBoostedTreesModel(num_trees=3000, features=all_features,
                                             exclude_non_specified_features=True,
                                             early_stopping_num_trees_look_ahead=30,
                                             task=tfdf.keras.Task.CLASSIFICATION,
                                             verbose=True, apply_link_function=False)
model.compile(metrics=tf.keras.metrics.AUC())

Though, when calling model.make_inspector().training_logs(), only loss and accuracy are logged (for the classification task).
st205368
Hi, I also tried the same but it only calculates metrics when training is finished.
st205369
Hi An Tran, The metrics returned by the inspector’s training logs are hardcoded. The TF-DF API does not currently allow customizing them. If this is important to you, please file a feature request. In the meantime, a (relatively inefficient) solution is to remove trees from the model (using the model inspector and model builder) and then evaluate each intermediate model with metrics=[tf.keras.metrics.AUC()].
st205370
Thanks for the info. I’m trying to find a way to make the model log aucs metric when calling model.make_inspector().training_logs(). Currently, the output is like this for the classification task: TrainLog(num_trees=11, evaluation=Evaluation(num_examples=233, accuracy=0.9570815450643777, loss=0.5311337378147846, rmse=None, ndcg=None, aucs=None)),
st205371
TL;DR: It is not possible to output AUC in model.make_inspector().training_logs() with the current code (without adding a new parameter and implementing the related logic).
st205372
I am working on an app that recognizes text from images. For that I am using a .tflite model (lite-model_east-text-detector_dr_1.tflite). After processing the input image I get a tensor buffer that contains a float array. How can I convert this output float array to text? Is there any better .tflite model for text recognition from images?
st205373
Hi @Nison_Sunny, this float array indicates the position of inputs in a text file. Check the project you have for a .txt file; inside it you will see an input for every position. Check this project at the specific line. Regards
st205374
Hi @George_Soloupis, I am using the model linked below for text recognition. I can’t find a text file with this model. Actually, I am looking for a text file that can map the output array, similar to your keras-ocr project.
st205375
The CTC decoder is removed from these models, so I think you also need to run a decoder on your model output, and then you will get the text.
st205376
According to the issue “CTC tensorflow lite conversion problem” (tensorflow/tensorflow#33494 on GitHub), CTC is supported in TF 2.4. So maybe these models are already updated. What is the output shape from the model?
st205377
Hi @Kzyh, I think float32[1, 26, 37] is the output shape of the .tflite file that I used. The TFLite model that I used is linked below. https://tfhub.dev/tulasiram58827/lite-model/rosetta/dr/1
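For an output of shape [1, 26, 37] (26 time steps, 37 classes), a common way to get text back is a greedy CTC decode: take the best class per time step, collapse consecutive repeats, and drop the blank symbol. The alphabet string and the assumption that the blank is the last class below are illustrative; they must match how the model was actually trained.

import numpy as np

# Assumed character set: 36 symbols + 1 CTC blank = 37 classes.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"
BLANK_INDEX = len(ALPHABET)  # assumed: blank is the last class

def greedy_ctc_decode(logits):
    """logits: float array of shape (1, time_steps, num_classes) -> decoded string."""
    best_path = np.argmax(logits[0], axis=-1)  # best class per time step
    chars = []
    previous = -1
    for index in best_path:
        # CTC rule: collapse consecutive repeats and skip the blank symbol.
        if index != previous and index != BLANK_INDEX:
            chars.append(ALPHABET[index])
        previous = index
    return "".join(chars)

# Example with random scores just to show the expected call shape:
dummy_output = np.random.rand(1, 26, 37).astype(np.float32)
print(greedy_ctc_decode(dummy_output))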
st205378
@Kzyh Yes, I tried keras-ocr. It’s working fine for me, but I think that model is trained for a single word only. My requirement is more like reading data from images such as bills, cards, etc. The CRAFT model is a TF model similar to my requirement, but I can’t decode its output to text.
st205379
Yeah. It seems like you need to detect text first, then run OCR on the detections. I don’t think there is a single model on TF Hub that does all of that at once. You could use craft-text-detector, cut out the detected text boxes, and feed them to keras-ocr.
st205380
Please help me convert a .pb file to .tflite format. I can’t convert it any more after the latest TensorFlow update.
st205381
George_Soloupis: code snippet? I am using Google Colab. This is my code; please help me with Section 12, where I fail to convert the .pb file to .tflite. https://colab.research.google.com/drive/1D7WrDIvuXmrQStHnXBJHBzQlo-1S8UvQ
st205382
Aulya_Yarzuki: the Google Colaboratory notebook needs access. Please add [removed by moderator].
st205383
So what is the error? I do not see executed cells, so I can’t tell what is wrong. It seems that it is a standard procedure with scripts that have been created and tested by Google, like “export_tflite_graph_tf2.py”. Because executing all the cells is time consuming, please train the model and use the “export_tflite_graph_tf2.py” script to export the TFLite file, or train the model and execute the command to convert the SavedModel to TFLite with the TensorFlow Lite converter to see the error log. It would also be helpful if you create the SavedModel folder with all its contents and share it. This way we can reproduce the issue at any time.
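For reference, the SavedModel-to-TFLite conversion mentioned above is usually only a few lines; the paths below are placeholders:

import tensorflow as tf

# Path to the exported SavedModel directory (placeholder).
saved_model_dir = "exported_model/saved_model"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# A suspiciously tiny output file usually means the converter did not pick up the real graph.
print(len(tflite_model), "bytes written")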
st205384
Hi, may I know which model you are using: classification or detection, and which architecture? I have faced a similar issue. When I tried to look into your code, it said access was denied.
st205385
There is no error, but the .tflite model ends up only 2 KB in size, while the .pb file is 11 MB. When I load the .tflite file, it raises an error.
st205386
Hello, please help me understand why the .tflite file always ends up at 2 KB. Here is the saved_model.zip (shared on Google Drive).
st205387
Hi @Aulya_Yarzuki, the SavedModel file converts successfully to a .tflite file. I used a simple Colab notebook and the method from here. The file that is produced is about 10 MB and loads successfully in Netron, showing the object detection model that you want to build. Check the code below (attached as a screenshot in the original post). Ping me if you want anything else. Regards
st205388
My question is about how Keras implements weight constraints, in particular in dense layers. I mainly use R and its keras implementation, so I will use their notation here, but I would also be happy with a solution in Python. As far as I know, Keras constraints like constraint_maxnorm() can be applied to a layer_dense() with the arguments kernel_constraint or bias_constraint, where the kernel refers to the weight matrix without the bias terms. However, in order to test new approaches I think it would be useful to also be able to apply constraints to the full weight matrix, where the bias vector forms an extra row of the weight matrix. An example of this kind of constraint would be to restrict to 1 the L1 norm of the weight vector incident to a neuron (considering the bias at that neuron also as an element of said vector). Custom constraints can be created, but the problem is that they receive only the kernel weights as input, not both the kernel weights and the bias, and the output is also only the kernel weights, not the full weight matrix. Is it possible to implement constraints in Keras or TensorFlow that affect both the kernel and the bias at the same time? It would be enough to be able to create custom constraints that accept both the kernel weights and the bias as input, and then apply the desired constraint across both the kernel and the bias.
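There is no built-in constraint hook that receives both the kernel and the bias, but one workaround is to project the weights yourself after each update. Below is a Python sketch of that idea as a Keras callback; the layer name, the L1 limit, and the projection rule are illustrative assumptions, not an official API, and it only mimics how built-in constraints are applied after each optimizer step.

import tensorflow as tf

class JointL1Constraint(tf.keras.callbacks.Callback):
    """After every training batch, rescale each unit's incoming weights and its
    bias so that their joint L1 norm is at most max_value."""

    def __init__(self, layer_name, max_value=1.0):
        super().__init__()
        self.layer_name = layer_name
        self.max_value = max_value

    def on_train_batch_end(self, batch, logs=None):
        layer = self.model.get_layer(self.layer_name)
        kernel, bias = layer.kernel, layer.bias
        # Joint L1 norm per output unit: incoming weights plus the bias term.
        norms = tf.reduce_sum(tf.abs(kernel), axis=0) + tf.abs(bias)
        scale = tf.minimum(1.0, self.max_value / (norms + 1e-7))
        layer.kernel.assign(kernel * scale)  # scale broadcasts over the input axis
        layer.bias.assign(bias * scale)

# Usage sketch: name the dense layer and pass the callback to fit().
# model.fit(x, y, callbacks=[JointL1Constraint("dense_1", max_value=1.0)])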
st205389
Hi everybody, I’d like to discuss the correctness of the ResNet and ResNetV2 implementations. I recently noticed a few flaws in the Keras Applications implementation, starting with this one. Now I want to discuss the identity mappings; they seem to make a huge difference, according to the ResNetV2 paper. The way ResNetV1 is implemented, the skip connections are not identity mappings at all: there are BN and activation layers in the way, which defeats the idea ResNets were designed around. ResNetV2 tries to correct those issues, but I still see some MaxPooling layers in the skip connections. I have compared my results training a binary classifier with ResNetV1, ResNetV2, and a modified ResNetV1 with all identity mappings. The modified ResNetV1 gives me the best results. My dataset contains images with a high number of channels, which prevents the use of ImageNet weights. See the results here (scroll the image to the right to see ResNetV2): ResNet_test/networks_precision_recall.png at main · ririya/ResNet_test · GitHub
st205390
Sorry all, my current forum member level does not allow me to embed pictures and only allows me to share two urls, that’s why I had to create a single image.
st205391
I think this is a good observation. Your points are great, but I would still want to try those modifications on a dataset of natural images just to confirm the hunches. Why don’t you discuss this in the repository: github.com/keras-team/keras. /cc: @fchollet
st205392
Hello, I want to take the Tensorflow certification and I want to know if I can work in the USA with this certification knowing that I am not currently living there. Sincerely
st205393
I’m trying to deploy my image classification model on Streamlit, but I encounter this warning message when I try to make inferences using the app. (Two screenshots were attached: one showing platform information messages and one showing a tf.function retracing warning.)
st205394
The first image is just showing information about your platform. You can pretty much safely ignore them. About the second image…Your inference function is a tf.function which constructs, optimizes, and compiles a static graph at runtime (by “tracing” the annotated function, outside the scope of this answer). The warning is telling you that each call to your predict function has to reconstruct, optimize, and compile a new graph. That ends up being really expensive, and you miss out on the performance benefits of tf.function. To break it down, here’s what it’s telling you could be happening: creating @tf.function repeatedly in a loop while True: my_predict_function = lambda x: x * x my_tf_predict_function = tf.function(func=my_predict_function) # uh oh, this gets called every loop iteration my_tf_predict_function(tf.random.uniform((2, 2))) If you’re not defining / redefining your model or predict function in a loop, this is unlikely to be the case. passing tensors with different shapes my_predict_function = lambda x: x * x my_tf_predict_function = tf.function(func=my_predict_function) my_tf_predict_function(tf.random.uniform((2, 2))) my_tf_predict_function(tf.random.uniform((3, 2))) # different shapes, our graph was previously optimized on a shape of (2, 2) If you’re passing images of different dimensions, this is likely the case. passing Python objects instead of tensors my_predict_function = lambda x, flag: x * x if flag else x my_tf_predict_function = tf.function(func=my_predict_function) input = tf.random.uniform((2, 2)) my_tf_predict_function(input, True) # the graph compiled is x * x my_tf_predict_function(input, False) # we have to recompile because it takes a different path based on flag! Notice how the graph changes based on flag which is a Python object and not a tensor. So even though the shapes are the same, the graph is different. If you control your model with some hyperparameters which are Python scalars or other Python objects, then it will need to recompile the graph every time you change a parameter.
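Building on the second point above, one common fix for image models is to resize inputs to a fixed shape and pin the traced signature so the graph is built only once. A minimal sketch (the tiny model and the 224x224 size are placeholders):

import tensorflow as tf

# Placeholder classifier; substitute your own loaded model.
model = tf.keras.Sequential([
    tf.keras.layers.GlobalAveragePooling2D(input_shape=(224, 224, 3)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
def predict(images):
    return model(images, training=False)

# Both calls reuse the same traced graph: the shapes differ only in the
# batch axis, which the signature leaves as None.
predict(tf.random.uniform((1, 224, 224, 3)))
predict(tf.random.uniform((4, 224, 224, 3)))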
st205395
Are there any ways to get embeddings from one-hot vectors? As far as I know, embedding layers only accept integer tokens. But is there a way to input one-hot vectors and get embeddings?
st205396
Is there a reason you need to use one-hot vectors? If there’s no explicit constraint, you can just convert the vectors back to integer tokens:

import tensorflow as tf
from tensorflow.keras import backend as K

# Create random integer tensor, 128 tokens with vocab size of 100
integer_embed = tf.random.uniform((32, 128), minval=0, maxval=100, dtype=tf.int64)

# Create one-hot encoding
one_hot = tf.keras.utils.to_categorical(integer_embed)

model = tf.keras.layers.Embedding(100, 32, input_shape=(128,))
model(K.argmax(one_hot, axis=-1))

If you do have a constraint, here’s an example of a one-hot embedding using Layer subclassing (from: Is there anyway to pass one-hot binary vector as input and embed it? · Issue #2505 · keras-team/keras · GitHub):

class OnehotEmbedding(tf.keras.layers.Layer):
    def __init__(self, Nembeddings, **kwargs):
        self.Nembeddings = Nembeddings
        super(OnehotEmbedding, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[2], self.Nembeddings),
                                      initializer='uniform',
                                      trainable=True)
        super(OnehotEmbedding, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1], self.Nembeddings)

# Per-sample input shape is (sequence_length, vocab_size) = (128, 100)
model = OnehotEmbedding(32, input_shape=(128, 100))
model(one_hot)
st205397
I was trying to implement YOLO with 5 anchor boxes. I wanted to use keras.backend.max as described here 1 but it’s not functioning properly or I am misunderstanding it. The code looks something like this: ` from keras import backend as K with tf.compat.v1.Session() as test_a: example = tf.random.normal([4, 4, 2, 2], mean=1, stddev=4, seed = 1) print(example.eval()) # test = K.argmax(example, axis=-1) # print(test.eval()) another_test = K.max(example, axis = -1) print(another_test.eval()) ` The output is this: ` [[[[-2.2452729 6.938395 ] [ 1.2613175 -8.770817 ]] [[ 1.3969936 3.3648973 ] [ 3.3712919 -7.4917183 ]] [[-1.8915889 0.7749185 ] [ 3.5741792 -0.05729628]] [[ 8.426533 3.2713668 ] [-0.5313436 -4.9413733 ]]] [[[ 6.0470843 0.8987757 ] [-0.05851877 7.131255 ]] [[-5.9719086 -0.7515718 ] [-1.26404 2.2826772 ]] [[ 5.531324 -8.113029 ] [ 2.9312482 -4.250835 ]] [[ 2.4274013 -5.9211335 ] [ 0.83932906 4.5986476 ]]] [[[-4.5223565 6.9258494 ] [ 0.01802081 -1.9305887 ]] [[ 0.21641421 1.2868321 ] [ 3.5319235 -5.284763 ]] [[ 6.317525 -3.693468 ] [ 1.1261784 2.9082098 ]] [[ 2.747768 -0.26723564] [-0.8030013 -6.2242627 ]]] [[[ 1.4995985 -2.0826168 ] [-1.9849663 -0.12781298]] [[-6.835262 -0.35044277] [ 5.1207933 7.053607 ]] [[ 1.9006321 -0.14264834] [ 2.0753016 7.984844 ]] [[ 4.695484 -7.2363987 ] [-0.25753224 5.841353 ]]]] [[[-0.0205164 5.3679433 ] [ 3.9440122 4.684675 ] [ 4.1706614 1.3993862 ] [ 9.333472 3.0931065 ]] [[ 6.7779894 5.3317556 ] [-3.0122952 -1.5482914 ] [-0.5402719 -5.03914 ] [ 3.0354424 2.9042442 ]] [[ 4.730604 2.9970727 ] [ 3.8026135 0.04406112] [ 5.1083784 9.928441 ] [-1.8228705 1.4313807 ]] [[ 7.075215 -4.0160027 ] [ 2.046008 0.94906276] [ 0.6468721 1.6242697 ] [-0.2219665 5.4110713 ]]] ` As far as I can comprehend it should return the max value but what is it returning?
st205398
First thing to point out: while I’m not as well-versed in TF 1, I believe your example will end up evaluating the maximum of a completely different random tensor than the one you inspect, because eval runs the graph again each time, e.g. tf.random.normal will get called again to evaluate the max. So (I think) you won’t be able to follow the results by inspecting them in this way. Otherwise, this is basically the result you should expect. keras.backend.max returns the maximum value along a specified axis (or axes), effectively reducing that dimension away. Your example starts by printing a (4, 4, 2, 2) tensor. After the call to keras.backend.max with axis=-1 (which wraps to the last axis in the tensor), the result is a tensor with shape (4, 4, 2). That’s because you’re telling the function to “walk” the last axis and find the maximum value of the elements along it. If you want the absolute maximum value of the entire tensor, axis should be set to None. Here are some examples that demonstrate keras.backend.max with different axis arguments, and one with keepdims set to True:

import tensorflow as tf
from tensorflow.keras import backend as K

with tf.compat.v1.Session() as test_a:
    example = tf.random.normal([4, 4, 2, 2], mean=1, stddev=4, seed=1)

    # Absolute maximum value (shape: ())
    abs_max = K.max(example, axis=None)
    print(abs_max.shape)

    # Maximum values along axes 0 and 2 (shape: (4, 2))
    max_axis_0_2 = K.max(example, axis=(0, 2,))
    print(max_axis_0_2.shape)

    # Maximum values along axis 1 (shape: (4, 2, 2))
    max_axis_1 = K.max(example, axis=(1,))
    print(max_axis_1.shape)

    # Maximum values along axes 0 and 2, keeping the reduced axes
    # (reduced axes will have size 1)
    max_axis_0_2_keep = K.max(example, axis=(0, 2,), keepdims=True)
    print(max_axis_0_2_keep.shape)
st205399
Thanks for replying! Is there any way I can inspect both the original tensor and the tensor returned by K.max?
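One way, sticking with the TF1-style session used in the snippets above, is to evaluate both tensors in a single run so they come from the same random draw (in TF2 eager mode you could simply print both values directly); this is a sketch rather than the only option:

import tensorflow as tf
from tensorflow.keras import backend as K

with tf.compat.v1.Session() as sess:
    example = tf.random.normal([4, 4, 2, 2], mean=1, stddev=4, seed=1)
    max_last_axis = K.max(example, axis=-1)

    # A single run evaluates both tensors from the same random sample.
    example_val, max_val = sess.run([example, max_last_axis])
    print(example_val)
    print(max_val)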