st207200 | Thanks to everyone who attended this month’s meeting. Here’s a summary of some of the main points:
About 20 vulnerabilities are expected to be patched on top of TF 2.6; there will be patch releases for previous releases as well.
The DevInfra team is looking at bumping some dependencies, but it’s slow going because of how much testing is necessary.
manylinux2014 or perennial manylinux is a requirement for CUDA 11.4. The DevInfra team is working on this.
I’m looking at Docker containers again (finally!) and will be checking out the PyPA containers to see if we can use those directly instead of using our own custom toolchain, which we needed for Ubuntu previously.
Check out the notes, linked in the first post, for full details. |
st207201 | I want to create a neural network with a locally connected layer but without summation over the 3rd dimension (colors) of the inputs.
I saw in the docs of LocallyConnected2D that there is no “group” argument as in Conv2D.
Is there a way to do that? |
st207202 | Do you mean something like this?
github.com/keras-team/keras
Feature request: In-plane locally-connected 2D convolutions 6
opened
Mar 29, 2018
closed
Jun 25, 2021
tsoernes
I'm requesting a convolution that works on each channel separately using the same filter (as https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/contrib/layers/conv2d_in_plane), but uses different filters spatially (like LocallyConnected2D).
One approach would be to modify `LocallyConnected2D`:
```
individual_channels = tf.split(inputs, inputs.shape[3], -1)
convs = []
for channel in individual_channels:
    conv = K.local_conv2d(channel, self.kernel, self.kernel_size, self.strides,
                          (self.output_row, self.output_col), self.data_format)
    convs.append(conv)
outputs = tf.concat(convs, -1)
```
where
```
self.kernel_shape = (output_row * output_col,
                     self.kernel_size[0] * self.kernel_size[1], 1)
```
But the above approach is very slow. |
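For illustration, here is a rough standalone sketch of the same per-channel idea expressed with stock Keras layers: split the input into single-channel tensors, reuse one LocallyConnected2D instance on each channel so the spatially varying filter is shared across channels, and concatenate. The function name, shapes, and kernel size below are assumptions for the example, and it shares the performance concern noted above:

```python
import tensorflow as tf

def per_channel_locally_connected(inputs, kernel_size=(3, 3)):
    # One LocallyConnected2D with a single output filter; reusing the same
    # layer instance on every channel shares its spatially varying weights
    # across channels, so each channel is filtered independently but identically.
    layer = tf.keras.layers.LocallyConnected2D(filters=1, kernel_size=kernel_size)
    channels = tf.split(inputs, num_or_size_splits=inputs.shape[-1], axis=-1)
    return tf.concat([layer(c) for c in channels], axis=-1)

# Example usage on a dummy RGB batch.
x = tf.random.normal((2, 8, 8, 3))
y = per_channel_locally_connected(x)
print(y.shape)  # (2, 6, 6, 3) with the default 'valid' padding and a 3x3 kernel
```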
st207203 | My python code snippet is like:
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.eager import context
from tensorflow.python.framework import dtypes
from tensorflow.python.platform import test
from tensorflow.python.framework import config
import tensorflow.compat.v1 as tf

config.enable_mlir_bridge()
tf.config.experimental.enable_mlir_bridge()

class CustomModule(tf.Module):
    def __init__(self):
        super(CustomModule, self).__init__()
        self.condition = tf.Variable(np.array([[True, False, False], [False, True, False], [True, True, True]]), dtype=tf.bool)
        self.x = tf.Variable(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), dtype=tf.int32)
        self.y = tf.Variable(np.array([[11, 12, 13], [14, 15, 16], [17, 18, 19]]), dtype=tf.int32)

    @tf.function
    def __call__(self, x):
        r = tf.where(self.condition, self.x, self.y)
        m = tf.where(self.condition, self.x, self.y)
        c = tf.sets.intersection(tf.expand_dims(r, 0), tf.expand_dims(m, 0))
        return c

module = CustomModule()
module_with_signature_path = os.path.join("/data/aruna/tf_ops", 'sets_intersection')
call = module.__call__.get_concrete_function(tf.TensorSpec(shape=(), dtype=tf.int32))
signatures = {'predict': call}
tf.saved_model.save(module, module_with_signature_path, signatures=call)
print('Saving model...')

if __name__ == '__main__':
    test.main()
I ran this python code and got saved_model.pb.
Then I used following commands:
tensorflow/compiler/mlir/tf-mlir-translate --savedmodel-objectgraph-to-mlir --tf-savedmodel-exported-names=predict -tf-enable-shape-inference-on-import=true $PWD -o sample.mlir
tensorflow/compiler/mlir/tf-opt --tf-executor-to-functional-conversion --tf-shape-inference -xla-legalize-tf --print-ir-before-all sample.mlir
TF dialect looks like:
// -----// IR Dump Before LegalizeTF //----- //
builtin.func private @__inference___call___750(%arg0: tensor {tf._user_specified_name = "x"}, %arg1: tensor<!tf_type.resource>, %arg2: tensor<!tf_type.resource>, %arg3: tensor<!tf_type.resource>) -> (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) attributes {tf._construction_context = "kEagerRuntime", tf._input_shapes = [#tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>], tf.signature.is_stateful} {
  %cst = "tf.Const"() {device = "", value = dense<0> : tensor} : () -> tensor
  %cst_0 = "tf.Const"() {device = "", value = dense<0> : tensor} : () -> tensor
  %0 = "tf.ReadVariableOp"(%arg2) {device = ""} : (tensor<!tf_type.resource>) -> tensor<3x3xi32>
  %1 = "tf.ReadVariableOp"(%arg2) {device = ""} : (tensor<!tf_type.resource>) -> tensor<3x3xi32>
  %2 = "tf.ReadVariableOp"(%arg3) {device = ""} : (tensor<!tf_type.resource>) -> tensor<3x3xi32>
  %3 = "tf.ReadVariableOp"(%arg3) {device = ""} : (tensor<!tf_type.resource>) -> tensor<3x3xi32>
  %4 = "tf.ReadVariableOp"(%arg1) {device = ""} : (tensor<!tf_type.resource>) -> tensor<3x3xi1>
  %5 = "tf.Select"(%4, %0, %2) {device = ""} : (tensor<3x3xi1>, tensor<3x3xi32>, tensor<3x3xi32>) -> tensor<3x3xi32>
  %6 = "tf.ExpandDims"(%5, %cst) {device = ""} : (tensor<3x3xi32>, tensor) -> tensor<1x3x3xi32>
  %7 = "tf.ReadVariableOp"(%arg1) {device = ""} : (tensor<!tf_type.resource>) -> tensor<3x3xi1>
  "tf.NoOp"() {_acd_function_control_output = true, device = ""} : () -> ()
  %8 = "tf.Select"(%7, %1, %3) {device = ""} : (tensor<3x3xi1>, tensor<3x3xi32>, tensor<3x3xi32>) -> tensor<3x3xi32>
  %9 = "tf.ExpandDims"(%8, %cst_0) {device = ""} : (tensor<3x3xi32>, tensor) -> tensor<1x3x3xi32>
  %10:3 = "tf.DenseToDenseSetOperation"(%6, %9) {T = i32, device = "", set_operation = "intersection", validate_indices = true} : (tensor<1x3x3xi32>, tensor<1x3x3xi32>) -> (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>)
  %11 = "tf.Identity"(%10#0) {device = ""} : (tensor<?x3xi64>) -> tensor<?x3xi64>
  %12 = "tf.Identity"(%10#1) {device = ""} : (tensor<?xi32>) -> tensor<?xi32>
  %13 = "tf.Identity"(%10#2) {device = ""} : (tensor<3xi64>) -> tensor<3xi64>
  return %11, %12, %13 : tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>
}
Error is:
sample.mlir:5:3: error: The following operations cannot be legalized: tf.DenseToDenseSetOperation (count: 1); tf.NoOp (count: 1); tf.ReadVariableOp (count: 6). These legalization failure(s) may be due to missing TF to HLO lowerings and/or unsupported attributes, etc.
builtin.func private @__inference___call___340(%arg0: tensor {tf._user_specified_name = “x”}, %arg1: tensor<!tf_type.resource>, %arg2: tensor<!tf_type.resource>, %arg3: tensor<!tf_type.resource>) → (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) attributes {tf._construction_context = “kEagerRuntime”, tf._input_shapes = [#tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>], tf.signature.is_stateful} {
^
sample.mlir:5:3: error: Emitting more detail about one op that failed to legalize…
builtin.func private @__inference___call___340(%arg0: tensor {tf._user_specified_name = “x”}, %arg1: tensor<!tf_type.resource>, %arg2: tensor<!tf_type.resource>, %arg3: tensor<!tf_type.resource>) → (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) attributes {tf._construction_context = “kEagerRuntime”, tf._input_shapes = [#tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>, #tf_type.shape<>], tf.signature.is_stateful} {
^
sample.mlir:20:61: error: ‘tf.DenseToDenseSetOperation’ op is not legalizable
%outputs_23:3, %control_24 = tf_executor.island wraps “tf.DenseToDenseSetOperation”(%outputs_14, %outputs_21) {T = i32, device = “”, set_operation = “intersection”, validate_indices = true} : (tensor<1x3x3xi32>, tensor<1x3x3xi32>) → (tensor<?x3xi64>, tensor<?xi32>, tensor<3xi64>) |
st207204 | tf.CropAndResize
tf.StridedSlice
tf.Unique
tf.Where
tf.SparseToDense
tf.NonMaxSuppressionV4
tf.TensorListFromTensor
tf.TensorListGetItem
tf.DenseToDenseSetOperation
tf.TensorListReserve
tf.TensorListSetItem
tf.TensorListStack
tf.TopKV2
For the above ops, I am also getting the same error while lowering them to HLO |
st207205 | Hello
In Keras, model.compile only takes a single learning rate, but I need multiple learning rates for my model. For example, my model includes a backbone with a learning rate of 10^-4 and a transformer with a learning rate of 10^-3. How can I set these two learning rates inside model.compile with the Adam optimizer (or any other optimizer)? |
st207206 | It seems that TensorFlow Addons has an optimizer with this capability: tfa.optimizers.MultiOptimizer | TensorFlow Addons |
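For reference, a minimal sketch of how tfa.optimizers.MultiOptimizer can be wired up for two learning rates; the layer names and sizes below are placeholders, not from the original question:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Hypothetical two-part model: a "backbone" block and a "head" block.
inputs = tf.keras.Input(shape=(16,))
backbone = tf.keras.layers.Dense(64, activation="relu", name="backbone")
head = tf.keras.layers.Dense(1, name="head")
model = tf.keras.Model(inputs, head(backbone(inputs)))

# One optimizer (and learning rate) per group of layers.
optimizers_and_layers = [
    (tf.keras.optimizers.Adam(learning_rate=1e-4), backbone),
    (tf.keras.optimizers.Adam(learning_rate=1e-3), head),
]
optimizer = tfa.optimizers.MultiOptimizer(optimizers_and_layers)

model.compile(optimizer=optimizer, loss="mse")
```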
st207207 | As we are updating LLVM twice a day, I’ve tried to query Bazel with:
bazel aquery "rdeps(//tensorflow:*,//third_party/llvm:*)" --include_aspects 2>/dev/null | grep Compiling | wc -l
I am not a Bazel ninja, so the query could probably be wrong or improved, but I currently see 9938 files on master (CPU only).
What is the effect of this twice-daily rolling update on the average community contributor’s compile workflow/environment and their Bazel cache? |
st207208 | I’d assume that Bazel is smart enough to only recompile the pieces that actually changed in there, so the impact will vary depending on the actual updates we’re picking up.
As a workflow, when I used to develop LLVM on a laptop, I would have a cron script that would run a git pull and build at 7am before I showed up at the office, so that when I arrived I had the most recent copy of the code with the build cache up to date. |
st207209 | Mehdi_AMINI:
I’d assume that Bazel is smart enough to only recompile the pieces that actually changed in there, so the impact will vary depending on the actual updates we’re picking up.
If you execute the query on your laptop you will see that we have many targets depending on this. Are we going to invalidate the cache for all these targets?
Mehdi_AMINI:
As a workflow, when I use to develop in LLVM on a laptop, I would have a cron script that would run a git pull and build at 7am before I show up in the office so that when I arrive I have the most recent copy of the code with the build cache up-to-date
This could work for you or any other TF team member as a daily routine.
But I think that is really not the case for an occasional/episodic contributor who invests their sparse time to contribute a TF PR.
What do you think? |
st207210 | Bhack:
If you execute the query on your laptop you will see that we have many targets depending on this. Are we going to invalidate the cache for all these targets?
Bazel is supposed to cache things based on the content hash of the files: so updating LLVM matters only for the actual files changed in there.
Bhack:
This could work for you or any other TF team member as a daily routine.
But I think that is really not the case for an occasional/episodic contributor who invests their sparse time to contribute a TF PR.
Yes, but if you’re an occasional contributor, whether we update some code twice a day or every other week shouldn’t matter: you’ll have to (re)build it, won’t you? |
st207211 | Mehdi_AMINI:
Yes, but if you’re an occasional contributor, whether we update some code twice a day or every other week shouldn’t matter: you’ll have to (re)build it, won’t you?
Without an available nightly cache, probably.
But it really depends on how many targets LLVM will invalidate in the case the contribution is not too sparse.
Is an LLVM update going to invalidate all the targets that depend on LLVM? |
st207212 | Bhack:
Is an LLVM update going to invalidate all the targets that depend on LLVM?
LLVM isn’t monolithic, it depends what is changed in LLVM. Bazel tracks finer grain dependencies. If LLVM LibSupport is changed, then most things will get rebuilt, but if there is a fix in the X86 backend I would expect only the JIT dependencies to be rebuilt for example. |
st207213 | I am not worried about LLVM itself, but about all the targets in its chain, if the query was correct.
Is a small change in LLVM going to invalidate all these targets?
bazel aquery "rdeps(//tensorflow:*,//third_party/llvm:*)" --include_aspects 2>/dev/null | grep Compiling |
st207214 | So as Mehdi was saying, you can’t just look at all TF dependencies on anything in LLVM: it’s going to matter what in LLVM changes. A small change in LLVM is going to invalidate things that depend on the specific target in LLVM that changed. That said, TF’s monolithic structure is pretty likely to create unnecessary dependencies and cause unnecessary recompiles.
Your query also doesn’t work for me. Maybe because of recent changes to use the upstream LLVM Bazel build files? //third_party/llvm is empty. I think you want @llvm-project. Everything in the repo would be @llvm-project//.... Similarly, your query for //tensorflow:* is I believe only capturing rules directly in that package and you’d need //tensorflow/... to get everything. But for reasons that aren’t really clear to me doing an aquery with either of those wildcards fail in different ways. Fun. Anyway, so we’ll limit ourselves to individual packages for now.
If llvm:Support changes:
$ bazel aquery "rdeps(//tensorflow:*, @llvm-project//llvm:Support)" --include_aspects 2>/dev/null | grep Compiling | wc -l
4930
but if llvm:Symbolize changes, then nothing needs to be recompiled
$ bazel aquery "rdeps(//tensorflow:*, @llvm-project//llvm:Symbolize)" --include_aspects 2>/dev/null | grep Compiling | wc -l
0
for an in-between example:
$ bazel aquery "rdeps(//tensorflow:*, @llvm-project//llvm:ProfileData)" --include_aspects 2>/dev/null | grep Compiling | wc -l
2818
and don’t forget about MLIR:
$ bazel aquery "rdeps(//tensorflow:*, @llvm-project//mlir:Support)" --include_aspects 2>/dev/null | grep Compiling | wc -l
1924
Another important component is that if you’re using a cache like --disk_cache, Bazel will only rerun a compile command if the inputs actually change because it’s hashing the content. So if you have a change to llvm:Support that adds something hidden behind a preprocessor macro but that doesn’t actually result in a different compiled output, then Bazel will not recompile things that depend on it, instead using the cache hit. |
st207215 | gcmn:
Maybe because of recent changes to use the upstream LLVM Bazel build files?
Yes probably as the query was tested in June.
gcmn:
That said, TF’s monolithic structure is pretty likely to create unnecessary dependencies and cause unnecessary recompiles.
This is true, but I am not sure how such low-level components as LLVM and MLIR, without versioned releases, could really be isolated as modular dependencies.
gcmn:
Another important component is that if you’re using a cache like --disk_cache
To freeze TF only, can you just analyze the cache hits/misses before and after a few of the LLVM update commits? |
st207216 | Bhack:
To freeze TF only, can you just analyze the cache hits/misses (probably something like “Debugging remote cache hits for remote execution” in the Bazel 4.2.0 docs) before and after a few of the LLVM update commits?
I’m not really sure. But you could also look at the command profile after an LLVM update: https://source.cloud.google.com/results/invocations/db44e648-2768-4e83-85fc-e63e092c880b/artifacts/
Just bazel analyze-profile on that doesn’t offer any insight on cache hit rate but perhaps extracting the data with --dump=raw would. |
st207217 | Just ran an experiment checking a commit before the recent LLVM bump on a newly created disk cache:
...$ git checkout f696b566f40baa87d776c92a12b03cca1d83bfd1
...$ bazel clean --expunge
...$ time bazel build --disk_cache=~/bazel_cache //tensorflow/tools/pip_package:build_pip_package
...
INFO: Elapsed time: 1176.812s, Critical Path: 251.94s
INFO: 11114 processes: 1289 internal, 9825 local.
INFO: Build completed successfully, 11114 total actions
real 19m36.954s
user 0m0.359s
sys 0m0.380s
Next, switch to the LLVM bump and compile again:
...$ git checkout a868b0d057b34dbd487a1e3d2b08d5489651b3ff
...$ time bazel build --disk_cache=~/bazel_cache //tensorflow/tools/pip_package:build_pip_package
...
INFO: Elapsed time: 523.303s, Critical Path: 208.88s
INFO: 3273 processes: 166 internal, 3107 local.
INFO: Build completed successfully, 3273 total actions
real 8m43.377s
user 0m0.166s
sys 0m0.171s
Note, this is on a fast machine, OSS contributors likely won’t have access to these specs:
...$ cat /proc/cpuinfo # 56 CPUs
...
processor : 55
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
stepping : 1
microcode : 0xb00003e
cpu MHz : 3192.602
cache size : 35840 KB
...
...$ free -h # 126 GB RAM
total used free shared buff/cache available
Mem: 125Gi 42Gi 61Gi 1.5Gi 21Gi 80Gi
Swap: 127Gi 741Mi 127Gi
This experiment could be repeated for other LLVM bumps (or TFRT ones too).
So, the most recent LLVM bump caused a recompile of almost 50% (time-wise) of a full compile. This impacts anyone developing in OSS but also prevents us from using caching and GitHub Actions for GitHub presubmits. Maybe both of our teams can invest some time to reduce this (assuming this happens on other LLVM bumps)? |
st207218 | Thank you @mihaimaruseac for this small test on a random LLVM update.
Also, occasional contributors will not reach this resource configuration even with the top Codespaces profile (32 cores). |
st207219 | A different set of commits, a new test:
...$ git checkout b836deac35cd58c271aebbebdc6b0bd13a058585
...$ rm -rf ~/bazel_cache/
...$ bazel clean --expunge
...$ time bazel build --disk_cache=~/bazel_cache //tensorflow/tools/pip_package:build_pip_package
...
INFO: Elapsed time: 1164.101s, Critical Path: 308.54s
INFO: 11114 processes: 1289 internal, 9825 local.
INFO: Build completed successfully, 11114 total actions
real 19m24.243s
user 0m0.360s
sys 0m0.336s
...$ git checkout b71106370c45bd584ffbdde02be21d35b882d9ee
Previous HEAD position was b836deac35c Remove TensorShape dependency from ScopedMemoryDebugAnnotation.
HEAD is now at b71106370c4 Integrate LLVM at llvm/llvm-project@bd7ece4e063e
...$ time bazel build --disk_cache=~/bazel_cache //tensorflow/tools/pip_package:build_pip_package
...
INFO: Elapsed time: 262.158s, Critical Path: 120.47s
INFO: 1567 processes: 149 internal, 1418 local.
INFO: Build completed successfully, 1567 total actions
real 4m22.240s
user 0m0.095s
sys 0m0.092s
This is around the LLVM bump from 23 hours ago, so the two tests run so far cover the bumps that occur in a single day.
This is slightly faster than before, at only 20% of a full compile. |
st207220 | mihaimaruseac:
This experiment could be repeated for other LLVM bumps (or TFRT ones too).
I think that all these bumps, with this cache miss rate, are also going to impact the disk-cache size on disk, as it could be quite common to have more than one branch/PR waiting for review, so you need to switch back and forth between branches on different bumps (or do we need to constantly rebase/merge when possible?).
As we know, cleaning up the disk cache over time is really not such a straightforward task:
github.com/bazelbuild/bazel
local caching: implement --disk_cache_size=GiB
opened
May 2, 2018
buchgr
type: feature request
P2
team-Remote-Exec
Break out from https://github.com/bazelbuild/bazel/issues/4870.
Bazel can use a local directory as a remote cache via the `--disk_cache` flag.
We want it to also be able to automatically clean the cache after a size threshold
has been reached. It probably makes sense to clean based on least recently used
semantics.
@RNabel would you want to work on this?
@RNabel @davido |
st207221 | Actually, let’s run a longer experiment and track changes since the first commit of this week:
...$ bazel clean --expunge
...$ rm -rf ~/bazel_cache/
...$ for commit in $(git log --first-parent --pretty=oneline ...623ed300b593b368e665899be3cf080c5a0e3ebe | tac | cut -d' ' -f1)
> do
> git checkout ${commit} &> /dev/null
> echo "Building at ${commit}"
> (time bazel build --disk_cache=~/bazel_cache //tensorflow/tools/pip_package:build_pip_package >/dev/null) 2>&1 | grep real
> done | tee ~/build_log
This is 273 commits. At the end of the job, the build cache has 124GB (something most people in OSS cannot afford either):
...$ du -sh ~/bazel_cache/
124G /usr/local/google/home/mihaimaruseac/bazel_cache/
Anyway, let’s look at the timing info from the log:
# concatenate the lines, transform xmys time into (60*x+y) seconds
...$ awk '!(NR%2) { print p, $2} { p = $3 }' ~/build_log | sed -e 's/m/ /' -e 's/s$//' | awk '{ print $1, ($2 * 60 + $3) }' > ~/times_in_seconds
# get a histogram binned for every 10 seconds
...$ sort -rnk2 ~/times_in_seconds | cut -d. -f1 | cut -d' ' -f2 | sed -e 's/.$//' | uniq -c | perl -lane 'print $F[1], "0-", $F[1], "9\t", "=" x ($F[0] / 2)'
1180-1189
580-589
570-579
550-559 =
520-529 =
430-439
360-369
280-289
270-279
240-249
230-239
210-219 =
170-179 =
160-169 =
140-149
120-129 ==
110-119 =
90-99
80-89 =
70-79 ===
60-69 ==
50-59 ===
40-49 ==========
30-39 =====================================================================================================
# also print the values instead of just ====s
...$ sort -rnk2 ~/times_in_seconds | cut -d. -f1 | cut -d' ' -f2 | sed -e 's/.$//' | uniq -c
1 118
1 58
1 57
2 55
2 52
1 43
1 36
1 28
1 27
1 24
1 23
2 21
2 17
3 16
1 14
4 12
3 11
1 9
3 8
7 7
5 6
7 5
20 4
202 3
As you see, most incremental builds take 30-40 seconds (202 out of 273!) but there are some that take much longer. Let’s look into them
# longest 20 times
...$ sort -rnk2 ~/times_in_seconds | head -n20
699c63cf6b0136a330ae8c5f56a2087361f6701e 1184.46
b836deac35cd58c271aebbebdc6b0bd13a058585 585.35
abcced051cb1bd8fb05046ac3b6023a7ebcc4578 574.188
b49d731332e5d9929acc9bfc9aed88ace61b6d18 556.711
42f72014a24e218a836a87452885359919866b0b 553.296
982608d75d1493b4e351ef84d58bc0fdf78203c8 527.231
a868b0d057b34dbd487a1e3d2b08d5489651b3ff 523.162
c432f62159879d83e62d72afc9ef80cb6cdbe1e5 433.18
8b05b58c7c9cb8d1ed838a3157ddda8694c028f4 366.548
36931bae2a36efda71f96c9e879e91b087874e89 280.591
b71106370c45bd584ffbdde02be21d35b882d9ee 272.807
86fb36271f9068f84ddcecae74fe0b7df9ce83ee 242.273
1848375d184177741de4dfa4b65e497b868283cd 239.788
9770c84ea45587524e16de233d3cf8b258a9bd77 219.21
61bcb9df099b3be7dfbbbba051ca007032bfb777 214.006
d3a17786019d534fb7a112dcda5583b8fd6e7a62 172.092
e8dc63704c88007ee4713076605c90188d66f3d2 170.582
ddcc48f003e6fe233a6d63d3d3f5fde9f17404f1 169.959
2035c4acc478b475c149f9be4f2209531d3d2d0d 169.84
3edbbc918a940162fc9ae4d69bba0fff86db9ca2 167.948
# what are the commits for each one
...$ for commit in $(sort -rnk2 ~/times_in_seconds | head -n20 | awk '{ print $1 }'); do git log -n1 --pretty=oneline ${commit}; done
699c63cf6b0136a330ae8c5f56a2087361f6701e use tensorflow==2.5.0 to temporarily solve the failure of `evaluate_tflite` function.
b836deac35cd58c271aebbebdc6b0bd13a058585 Remove TensorShape dependency from ScopedMemoryDebugAnnotation.
abcced051cb1bd8fb05046ac3b6023a7ebcc4578 Prevent crashes when loading tensor slices with unsupported types.
b49d731332e5d9929acc9bfc9aed88ace61b6d18 Integrate LLVM at llvm/llvm-project@955b91c19c00
42f72014a24e218a836a87452885359919866b0b Remove experimental flag `fetch_remote_devices_in_multi_client`.
982608d75d1493b4e351ef84d58bc0fdf78203c8 Switched to OSS llvm build rules instead of scripts imported from third_party.
a868b0d057b34dbd487a1e3d2b08d5489651b3ff Integrate LLVM at llvm/llvm-project@fe611b1da84b
c432f62159879d83e62d72afc9ef80cb6cdbe1e5 Integrate LLVM at llvm/llvm-project@b52171629f56
8b05b58c7c9cb8d1ed838a3157ddda8694c028f4 Integrate LLVM at llvm/llvm-project@8c3886b0ec98
36931bae2a36efda71f96c9e879e91b087874e89 Integrate LLVM at llvm/llvm-project@4b4bc1ea16de
b71106370c45bd584ffbdde02be21d35b882d9ee Integrate LLVM at llvm/llvm-project@bd7ece4e063e
86fb36271f9068f84ddcecae74fe0b7df9ce83ee Integrate LLVM at llvm/llvm-project@fda176892e64
1848375d184177741de4dfa4b65e497b868283cd Merge pull request #51511 from PragmaTwice:patch-1
9770c84ea45587524e16de233d3cf8b258a9bd77 Integrate LLVM at llvm/llvm-project@cc4bfd7f59d5
61bcb9df099b3be7dfbbbba051ca007032bfb777 Integrate LLVM at llvm/llvm-project@8e284be04f2c
d3a17786019d534fb7a112dcda5583b8fd6e7a62 Fix and resubmit subgroup change
e8dc63704c88007ee4713076605c90188d66f3d2 Add BuildTensorSlice for building from unvalidated TensorSliceProtos.
ddcc48f003e6fe233a6d63d3d3f5fde9f17404f1 [XLA:SPMD] Improve partial manual sharding handling. - Main change: make sharding propagation work natively with manual subgroup sharding. There were some problems when propagating with tuple shapes. This also avoids many copies, which is important for performance since the pass runs multiple times. - Normalize HloSharding::Subgroup() to merge the same type of subgroup dims. - Handle tuple-shaped ops (e.g., argmax as reduce, sort) in SPMD partitioner. - Make SPMD partitioner to handle pass-through ops (e.g., tuple) natively, since they can mix partial and non-partial elements in a tuple.
2035c4acc478b475c149f9be4f2209531d3d2d0d Legalizes GatherOp via canonicalization to GatherV2Op; i.e. Providing default values of 0 for the axis parameter and the batch_dims attribute.
3edbbc918a940162fc9ae4d69bba0fff86db9ca2 Internal change
10 of these 20 commits are LLVM hash bumps. In total, there are 11 such commits in the 273 considered:
...$ for commit in $(cat ~/times_in_seconds | awk '{ print $1 }'); do git log -n1 --pretty=oneline ${commit}; done | grep LLVM | wc -l
11
So, almost all LLVM commits result in large compile times. Half of the top 20 longest compile times are LLVM hash bumps
I’d say this is quite costly and we need to find a plan to handle this in a way that helps OSS users.
Edit: Actually ALL LLVM hash bumps are included in the longest compiles, the missing one is just the conversion to upstream files:
...$ for commit in $(cat ~/times_in_seconds | awk '{ print $1 }'); do git log -n1 --pretty=oneline ${commit}; done | grep LLVM
b49d731332e5d9929acc9bfc9aed88ace61b6d18 Integrate LLVM at llvm/llvm-project@955b91c19c00
3487b91d529f2cbc412121d60845cda014e0db7d Integrate LLVM at llvm/llvm-project@9cdd4ea06f09
c432f62159879d83e62d72afc9ef80cb6cdbe1e5 Integrate LLVM at llvm/llvm-project@b52171629f56
86fb36271f9068f84ddcecae74fe0b7df9ce83ee Integrate LLVM at llvm/llvm-project@fda176892e64
36931bae2a36efda71f96c9e879e91b087874e89 Integrate LLVM at llvm/llvm-project@4b4bc1ea16de
e624ad903f9c796a98bd309268ccfca5e7a9c19a Use upstream LLVM Bazel build rules
8b05b58c7c9cb8d1ed838a3157ddda8694c028f4 Integrate LLVM at llvm/llvm-project@8c3886b0ec98
9770c84ea45587524e16de233d3cf8b258a9bd77 Integrate LLVM at llvm/llvm-project@cc4bfd7f59d5
b71106370c45bd584ffbdde02be21d35b882d9ee Integrate LLVM at llvm/llvm-project@bd7ece4e063e
a868b0d057b34dbd487a1e3d2b08d5489651b3ff Integrate LLVM at llvm/llvm-project@fe611b1da84b
61bcb9df099b3be7dfbbbba051ca007032bfb777 Integrate LLVM at llvm/llvm-project@8e284be04f2c |
st207222 | mihaimaruseac:
So, almost all LLVM commits result in large compile times. Half of the top 20 longest compile times are LLVM hash bumps
I’d say this is quite costly and we need to find a plan to handle this in a way that helps OSS users.
Thank you for this extended analysis.
Can you limit the number of cores/ram on one of these builds just to understand what we are talking about on an expected OSS user hw configuration? |
st207223 | I can run the experiment at home from a different computer. But only compiling before and after a LLVM bump, I’ll pick one that was already considered in the thread |
st207224 | I don’t know if you can constrain it with these args; is it enough also on your standard build machine:
https://docs.bazel.build/versions/main/user-manual.html#flag--local_{ram,cpu}_resources 3 |
st207225 | I think this is a nice start, but we need to go deeper: how much of the recompilation is due to which target dependencies changing? E.g., if we didn’t have the monolithic :core target, what would have needed recompilation vs what is recompiled today? Caching only works if the dependencies are untangled; if independent parts are intermingled for historical reasons then there’ll be a lot of rebuilding. Also, how does the proposed work on proper shared library support affect this? (These may be intermingled, as to get the best shared libraries one would need better deps, but if everything is statically linked together then cache hits will be lower and link times higher.)
But I think that goes to the above question: who is the user that this is affecting? E.g., using the git development model, I’d be on a branch getting my work done up until sending it for review, and would hit this only if I manually pull (so I choose the frequency). At the point where I do that (and I think Mihai’s numbers may be with RBE, as I can’t build TF in under an hour on my machine without a populated cache), I context-switch and work on something else. So updating affects a user that is pulling frequently for some reason, but not quickly enough to get enough cache reuse? |
st207226 | Jacques_Pienaar:
So updating affects a user that is pulling frequently for some reason, but not quickly enough to get enough cache reuse
I think it will affect external contributors that have more than one PR waiting for review, or when they need to rebase because of conflicts against master.
These two cases really depend on how fast our review process is vs the number of bumps and the internal “direct commit/copybara” activities.
Also, it will affect the case where I contribute a new PR, e.g. just a week or a few days after my last merged one.
Then, but this is a separate topic, there is also the question of what we are asking of the external contributor: executing a clean-room build to populate the first cache, and how to manage garbage collection for the --disk_cache, as it is currently unmanaged in Bazel, especially if we are growing the disk size too fast with these bumps. |
st207227 | Hi,
We’re calling SavedModelBundle.exporter("/data/").withSession(sess).withSignature(mySig).export to save a model. It seems to save out fine (according to the CLI tool). But when we’re loading it from the Python side using tf.saved_model.load("/data/"), we’re getting the following error: ValueError: Feeds must be tensors.
Any ideas ?
more details…
Signature (according to saved_model_cli)
The given SavedModel SignatureDef contains the following input(s):
inputs['_input_0'] tensor_info:
dtype: DT_FLOAT
shape: (10)
name: Placeholder_1:0
The given SavedModel SignatureDef contains the following output(s):
outputs['_out_0'] tensor_info:
dtype: DT_FLOAT
shape: (10)
name: Relu_1:0
Method name is:
And full python stacktrace
wrap_function.py.prune(/apps/python3/lib/python3.7/site-packages/tensorflow/python/eager/wrap_function.py:262)
load_v1_in_v2.py._extract_saver_restore(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py:105)
load_v1_in_v2.py.load(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py:211)
load_v1_in_v2.py.load(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load_v1_in_v2.py:263)
load.py.load_internal(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py:613)
load.py.load(/apps/python3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py:578) |
st207228 | What version of TF-Java exported the model, and what version of TF Python is loading it in?
It’s a little weird it’s coming in as a TF v1 model, we might need to set a bit somewhere to have it load using the appropriate path. |
st207229 | We’re exporting from the latest nightly of TF-Java. And loading into Python TF 2.2 |
st207230 | Just to clarify, there is no nightly build for TF-Java, so I guess you mean the latest version of the 0.4.0 snapshot? |
st207231 | One thing to check would be to see if it loads into TF Python 2.5 as that’s the version of the native TF library that is in TF-Java 0.4.0-SNAPSHOT at the moment. I’m not sure how good TF’s compatibility is for loading in saved models from the future. |
st207232 | Just tried in TF 2.6 (loading from Python side). And getting same error + stacktrace. |
st207233 | Ok, I can replicate this using a simple CNN model I trained in TF-Java 0.3.1 and TF Python 2.6.0. Guess we’ll need to trace through the Python code and figure out what it is looking for. |
st207234 | I opened this issue to track it - TF-Java SavedModels can't be loaded into TF Python · Issue #372 · tensorflow/java · GitHub 11 |
st207235 | @Aish_Fenton , were you able to complete successfully this process of loading in Python a model that has been trained and saved in Java with a previous version of TF Java? If so, which version was that? |
st207236 | I may have found the problem, see this comment 11. Looks like Python is more sensitive on how to pass the path of a tensor in the SaverDef. |
st207237 | Hi @Aish_Fenton , please note that this bug should now be fixed in version 0.3.3, which has just been released, and I’ve merged the fix into the current snapshot as well.
Let us know if you are still facing this or other issues, thank you
Karl |
st207238 | Good evening, my problem is that I want to train a Keras CNN that can tell me whether or not there is a sewer in an image.
I have 2 dataset files (one with positive images containing a sewer and another with no sewer), each an 8000x60 matrix of decoded depth images; each image is 80x60, so each dataset has about 100 images.
My problem is that I don’t know how to encode that input to train the CNN. I have always worked with PNG datasets, not this type. If you have questions, just ask.
Thanks in advance. |
st207239 | If your images are already decoded into a matrix, you can try to use tf.data.Dataset.from_tensor_slices() method (tf.data.Dataset | TensorFlow Core v2.6.0 2) to create inputs for your model. You pass a tuple into this method: the first element is decoded image matrix (could be numpy array or other array-like types), the second element is at array of integer labels (0 and 1 in your case). |
st207240 | That could be nice, but how do I tell tf.data.Dataset.from_tensor_slices() to slice my dataset every 80 lines, or is it going to do that automatically because there is a blank-line separator between the image matrices?
Also, I didn’t understand “the second element is an array of integer labels (0 and 1 in your case)”: how do I tell the CNN that the first dataset, for example, is label 1 (positive sewer) and the second one is label 0 (no sewer)?
Thank you so much for your response. |
st207241 | If your image data is an np.array of shape=(8000, 60) and each 80 rows represent a separate image, you can do: new_data = data.reshape((100, 80, 60))
Then you create two arrays with target values (for each of your original arrays): y_1 = np.zeros(100) and y_2 = np.ones(100)
You create a dataset passing a tuple where the first element is your input data and the second element contains target values: ds = tf.data.Dataset.from_tensor_slices((new_data, y_1))
In your case you’ll have to create two datasets and then concatenate them and shuffle. |
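To make that concrete, a small self-contained sketch with dummy arrays (the 80x60 image size and the 100-images-per-class count are assumptions taken from this thread, and random data stands in for the real depth matrices):

```python
import numpy as np
import tensorflow as tf

img_width, img_height = 80, 60
n_images = 100  # assumed images per class

# Stand-ins for the decoded depth data: one (8000, 60) matrix per class.
positive_raw = np.random.rand(n_images * img_width, img_height).astype("float32")
negative_raw = np.random.rand(n_images * img_width, img_height).astype("float32")

# Reshape so every 80 rows becomes one 80x60 image.
positives = positive_raw.reshape((n_images, img_width, img_height))
negatives = negative_raw.reshape((n_images, img_width, img_height))

# Label 1 for "sewer", 0 for "no sewer".
ds_pos = tf.data.Dataset.from_tensor_slices((positives, np.ones(n_images)))
ds_neg = tf.data.Dataset.from_tensor_slices((negatives, np.zeros(n_images)))

# Concatenate, shuffle, and batch for training.
dataset = ds_pos.concatenate(ds_neg).shuffle(2 * n_images).batch(16)
```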
st207242 | Thank you so much Ekaterina for your help, it let me move my work forward a lot, and I have coded this:
img_width, img_height = 80, 60
n_positives_img, n_negatives_img = 17874, 26308
ds_negatives = ["negative_depth.txt"]
ds_positives = ["positive_depth.txt"]
arrayceros = np.zeros(n_negatives_img)
arrayunos = np.ones(n_positives_img)
arraynegativos= ds_negatives.reshape(( n_negatives_img, img_width, img_height))
arraypositivos= ds_positives.reshape((n_positives_img, img_width, img_height))
ds_negatives_target = tf.data.Dataset.from_tensor_slices((arraynegativos, arrayceros))
ds_positives_target = tf.data.Dataset.from_tensor_slices((arraypositivos, arrayunos))
dataset = pd.concat(ds_negatives_target, ds_positives_target)
datasetfinal = np.random.shuffle(dataset)
I’m uploading the files to Google Colab right now to try this. Do you think this is good or do I have to change something? Love your work.
Thanks in advance |
st207243 | You should concatenate tensorflow datasets directly and then randomly shuffle the result:
ds_combined = ds1.concatenate(ds2).shuffle(n_samples)
n_samples should be total number of images in two datasets. |
st207244 | Thank you for your clarification, but when I run the code it gives me this error:
25 arraynegativos= ds_negatives.reshape(( n_negatives_img, img_width, img_height))
26 arraypositivos= ds_positives.reshape((n_positives_img, img_width, img_height))
AttributeError: 'list' object has no attribute 'reshape'
So I converted my ds_negatives to a numpy array like this:
ds_negatives1 = np.array(ds_negatives)
But it gives me this error:
cannot reshape array of size 1 into shape (26308,80,60)
So now I’m a bit confused: how do I transform my dataset so it can be reshaped into that shape?
Thanks in advance.
Link to Google Colab script: Google Colab |
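One hedged way past that error is to load the actual numeric contents of the files before reshaping; ds_negatives above is only a list of filenames, which is why .reshape() fails on it. The sketch below assumes the .txt files hold whitespace-separated numbers that np.loadtxt can parse and that the row counts match the stated image counts:

```python
import numpy as np

img_width, img_height = 80, 60
n_negatives_img, n_positives_img = 26308, 17874

# Load the file contents (not the filenames) into numeric arrays.
negatives_raw = np.loadtxt("negative_depth.txt", dtype=np.float32)
positives_raw = np.loadtxt("positive_depth.txt", dtype=np.float32)

# Each array should now have n_images * 80 rows of 60 values and can be reshaped.
arraynegativos = negatives_raw.reshape((n_negatives_img, img_width, img_height))
arraypositivos = positives_raw.reshape((n_positives_img, img_width, img_height))
```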
st207245 | SIG-JVM’s next meeting is tomorrow, 27th August at 9am PDT. The meeting call-in details and agenda are here 3, feel free to add additional topics. Among other things, we will be planning the next release of TF Java (0.4.0). |
st207246 | Hi - following discussion in the SIG Build, we are planning to upgrade TensorFlow to CUDA 11.4 and cuDNN 8.2 and will be releasing new binaries via tf-nightly builds soon. These new libraries will be available for Ubuntu and Windows, and the tracking issue for this upgrade process is #51659 58.
Please let us know in this forum if you have any questions or feedback! |
st207247 | When creating large models (a couple thousand nodes) in graph mode, initializing the metrics can take a very long time. The following toy example takes ~30 seconds on my machine (TF 2.6) to start training:
import tensorflow as tf
import numpy as np
from tensorflow.python.keras import backend as K

with K.get_session() as sess:
    print("DEF")
    model = tf.keras.Sequential(
        [tf.keras.layers.Dense(1) for _ in range(500)]
    )
    print("METRICS")
    metrics = [tf.keras.metrics.Accuracy(str(i)) for i in range(100)]
    print("COMPILE")
    model.compile(loss="mse", metrics=metrics, run_eagerly=False)
    x, y = np.zeros((2, 1000), dtype=np.float32)
    print("FIT")
    model.fit(x=x, y=y)
Most of the startup time is spent in this loop initializing the metrics.
In the actual model I am currently investigating, startup takes ~20 minutes since it’s quite a large model with data loading included in the graph and ~400 metrics. The latter is due to having 4 per-class metrics for ~100 classes. This time quadruples when adding another GPU with MirroredStrategy. What could I do to improve startup time in this case? So far, I’ve tried:
running in eager mode, which works fine on a single GPU, but scaling out is going to be more challenging
Creating one metric-class for all classes so that I only need to register 4 metrics. But it doesn’t seem to be possible for metrics to return arrays. |
st207248 | Turns out it’s only a problem with Tensorflow 1.x graph mode. Removing the line with K.get_session() as sess: fixes it. |
st207249 | By Considering these Chart of Complete Neural Networks Types:
ImgBB
Network-Types 8
Image Network-Types hosted in ImgBB
And here is a list of commonly used neural network types in Keras that I found, for which I also noted the most common practice cases (you can correct me if I am wrong):
1 DNN (Deep Neural Network) → Most Practice Cases: Time series + predicting a plain response y variable (flexible on numeric, integer, and categorical data)
2 CNN (Convolutional Neural Network) → Most Practice Cases: Time Series + Image Classification/Computer Vision
3 LSTM (Long Short-Term Memory) → Most Practice Cases: Time Series + NLP → Recommended for a long set of training data (more complex structure than GRU; LSTM has three gates, namely input, output and forget gates)
4 RNN (Recurrent Neural Network) → Most Practice Cases: Time Series + NLP → Recommended for sequence data (faster training, computationally less expensive)
5 GRU (Gated Recurrent Unit) → Most Practice Cases: Time Series + NLP → Recommended for a shorter set of training data (less complex structure than LSTM; GRU has two gates, reset and update)
6 Autoencoders (AE) → Most Practice Cases: Image Classification/Computer Vision (noising & denoising); autoencoders are basically a stack of CNN, Pooling2D, and CNN-Transpose layers
Finally, my questions:
Are there any types of neural network in the above chart whose structure is currently not possible to build with Keras components?
If any network isn’t possible, could you point out which types, and why?
Are there any more commonly used neural networks aside from what I noted above? I’d appreciate any improvements added to the list.
Appreciate any effort put into this question, thank you! |
st207250 | Hi Jovan,
I’d associate GRU, RNN and LSTM with sequences instead of time series. It’s broader and easier to remember. That’s why NLP is in the same group (sequence of characters/words)
As far as I know, Keras can build all the Neural Networks presented and if the layer you need is not available directly, you can customize or create your own with the same API.
The only detail on your image that I could spot is the Support Vector Machine (SVM). I don’t know if that can be considered a Neural Network and I don’t know if that can be built using Keras. |
st207251 | You can also implement models with attention, including the Transformer-based models. Here are some examples:
Neural machine translation with attention | Text | TensorFlow 2
Transformer model for language understanding | Text | TensorFlow 3
Text classification with Transformer
Image classification with Vision Transformer 2
API docs, including:
tf.keras.layers.Attention | TensorFlow Core v2.5.0
tf.keras.layers.MultiHeadAttention | TensorFlow Core v2.5.0 2 |
st207252 | lgusm:
The only detail on your image that I could spot is the Support Vector Machine (SVM).
Just to play a little bit with the concept we accepted a tutorial
keras.io
Keras documentation: A Quasi-SVM in Keras 5
if the layer you need is not available directly, you can customize or create your own with the same API.
I think this is the main point.
More generally, extending the evaluation beyond that specific cheat-sheet: you could need to write a custom op inside a brand new layer, and the compositional performance of your Python code for the TF ops may be too slow to practically train a model.
In that case, as Keras is a standalone Python-only project, you need to add your own custom ops/kernels in TF or TF Addons, or maintain them in your own repository.
This could create a management overhead in case you need more general model portability (TFLite, TPU, other GPUs, etc.).
We can always collaborate with the XLA/MLIR team to better improve the compiler stack lowering driven by new model requests (IMHO this process could be driven by TF model garden and TF Addons with a better interplay with the compiler stack team).
Just to give a practical example with a quite popular operation like deformable convolution, see:
Deformable convolution and other custom ops MLIR
Recently we had a refresh of a deformable convolution WIP PR in Addons.
I’ve cherry-picked this as an example as it requires us to maintain almost 3k lines of new code in the repository.
This maintainership overhead is also quite similar to what we have with other custom kernel PRs.
As Addons is one of the few Ecosystem repositories to support custom (c++) ops and the related CI infra it is quite normal that we have this kind of proposed PRs.
But as the codeownership of these compon… |
st207253 | Thanks for the feedback! So basically, to create a custom op (operator), it is recommended to do it in C++ and then test the op in Python, according to the OP Guide? |
st207254 | You can create custom layers in Python:
keras.io
Keras documentation: Making new layers and models via subclassing 2
keras.io
Keras documentation: Simple custom layer example: Antirectifier 3
The problem is that sometimes, when you combine different TF native ops in an implementation, the performance can be too low (memory/FLOPs) to reach efficient training/inference, as some part of your specific codepath may not be explicitly optimized by the TF compiler stack (e.g. missing op fusion, etc.).
Generally you can try to benchmark your specific implementation using tf.function, but if you can’t directly interact with/contribute to the compiler stack to improve that specific codepath, you will probably need to write a C++/CUDA custom op/kernel to support your Python custom layer.
See also:
github.com
tensorflow/community/blob/master/rfcs/20190814-kernel-and-op-registration.md 4
# Kernel and Op Implementation and Registration API
| Status | Accepted |
:-------------- |:---------------------------------------------------- |
| **Author(s)** | James Ring ([email protected]). |
| **Sponsor** | Günhan Gülsoy ([email protected]) |
| **Updated** | 2020-06-02 |
## Objective
Tensorflow (TF) currently provides a C++ API for implementing kernels and ops.
The Voltron project aims to create a modular/plugin-based TF implementation with
API and ABI surfaces. Plugins will be able to create and register custom kernel
and op implementations.
In order to provide a stable ABI, the Voltron team has chosen to provide C APIs
to plugin authors. This document introduces the C API for op and kernel
registration. For authors who wish to continue using C++ to interface with
TensorFlow, an ABI-stable C++ header-only API is provided.
This file has been truncated. |
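As a concrete illustration of the subclassing route linked above, here is a minimal custom layer; it is a toy example of the general pattern, not related to deformable convolutions or any specific custom op:

```python
import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    """Toy custom layer: a dense projection followed by a learnable scale."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            name="w", shape=(input_shape[-1], self.units),
            initializer="glorot_uniform", trainable=True)
        self.scale = self.add_weight(
            name="scale", shape=(1,), initializer="ones", trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) * self.scale

# Usage in a model:
model = tf.keras.Sequential([tf.keras.Input(shape=(8,)), ScaledDense(4)])
```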
st207255 | Hi folks:
I haven’t done any TFJS since version 2.3.0; it looks like we are now at 3.8.0.
Just want to introduce myself. My name is Jeremy Ellis, social media @rocksetta. I have spent almost 5 years simplifying TensorflowJs for High School students.
https://www.rocksetta.com/tensorflowjs/ 11
My favorite basic layers example is at
https://www.rocksetta.com/tensorflowjs/beginner-keras/20keras-xOr.html 7
I am now getting back to TFJS.
Anyone want to comment on possible changes, or if they are interested in what I do? |
st207256 | Indeed Jeremy and I have crossed paths on Twitter a few times before - welcome to the forum too!
It is great to see people like yourself making TFJS more accessible, including to those at high school. I have also seen folk in Japan integrate it with Scratch, going one step higher-level for younger kids.
I would be curious to know more about results of using this in any curriculum etc if you have managed to get results from such activities yet? Do you have any plans to teach this formally at a specific (or region) of high schools?
Would love to hear about your plans to take your work and put it into action. Maybe others on the forum have connections with folk teaching ML at high schools who may be interested in using the materials Jeremy has produced and may be worth connecting with them to try this out?
Let me know how it goes! |
st207257 | @Jason
@jbgordon
So my issue is, that what I have made with TFJS and TF-micro would be better suited for a University 100/101 level Robotics and Machine Learning course for non-Engineers. It is a bit advanced for all but a few of my High School students. What I did with TFJS gives a good understanding of what is happening when training and using Machine Learning in the browser and with TinyML.
Probably better explained on my Github main README.md hpssjellis (Jeremy Ellis) · GitHub 3
The Maker100 2 course (ready for Jan 2022) is more what I am doing at the High School level for Robotics, sensors, actuators and ML mainly using Edge Impulse. The Maker101 course is what I could be working on with TFJS and TF-micro, but presently have no audience to test it with. |
st207258 | I want to try to explore a little bit the collaboration over the CI/infra.
After evaluating the recent positive impact of having a more transparent and reproducible linting env with a public GitHub Action (Actions · tensorflow/tensorflow · GitHub) in the repository, why don’t we progressively expand this pilot a little bit with self-hosted GitHub runners?
I’ve already talked about this in PR #48421 6 and I’ve mentioned different solutions. As you can see in comments some of the mentioned repo are maintained by Google Cloud team members.
As the dev-infra team is always quite busy, having GitHub Actions workflows hosted in the main repository with self-hosted GitHub runners could open up collaboration and partial co-maintainership of the CI infra, minimizing as much as possible the delta between the internal testing perimeter/env and the public testing space, and increasing transparency and community collaboration.
E.g., as you can see in the NumPy 1.20 issue/PRs, people often don’t understand the full picture of our CI (internal + external).
Minimizing the delta with an expanded public space could help us to build up a more conscious community which is one of the prerequisites for attracting more contributions also in this specific domain.
/cc @thea @Joana @yarri-oss |
st207259 | We have an internal draft proposal on this but any kind of public feedback is welcome.
Thanks |
st207260 | SIG Build’s next meeting will be tomorrow, Tuesday, August 3, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 5, and feel free to suggest new agenda items. |
st207261 | Starting from a set of objects, I need to build a single object containing the most frequently used values across the given JavaScript object collection.
Example of set of objects:
[{
name: "Pedro",
product: "Caneta"
}, {
name: "Ana",
product: "Caderno"
}, {
name: "Maria",
product: "Boracha"
}, {
name: "Pedro",
product: "Caneta"
}, {
name: "Pedro",
product: "Caneta"
}]
Single object with the most used values in the above collections:
{
name: "Pedro",
product: "Caneta"
}
That’s what I need: based on the JavaScript collection array above, you can see that the most used values for the fields (name and product) were name: "Pedro" and product: "Caneta".
That’s the kind of analysis I need.
What function could I use, or how could I do this, using TensorFlow.js? If you have any examples I’d be very grateful, because I’m new to this. |
st207262 | Hi, I’m currently using TensorBoard to analyse experiments involving the training and evaluation of time series forecasting models. I currently create plots of various metrics in TensorBoard (MSE, MAE etc) but am interested in improving the analysis I do through TensorBoard.
Does anyone have any recommendations of:
Useful plugins to add
Extra logging to add
Ways of customising the TensorBoard plots
Anything else that has helped you get the most out of TensorBoard
Thanks in advance! |
st207263 | Hi, have you visited TensorBoard’s official Get started page Get started with TensorBoard | TensorFlow 8 ? On the left-hand side - and you may have to expand the browser window to see it - there is a list of different guides. Let us know what you think. In addition, there’s a long tutorial video made by a YouTuber about TensorBoard: TensorFlow Tutorial 17 - Complete TensorBoard Guide - YouTube 3 you may want to check out. |
st207264 | Thank you for your answer, that video was very helpful. I’ve read through the get started guides for TensorBoard, but was more wondering if the community had any more advanced tips.
Do you know if there is any documentation for the Time Series plugin that was added to TensorBoard? It sounds pretty relevant to what I’m working on |
st207265 | Hi there,
I’m Leo, an ML GDE from Mainland China. Months ago, I tried TFLite for Microcontrollers with the OpenMV H7 but faced some issues:
when converter.optimizations = [tf.lite.Optimize.DEFAULT] is set, it raises “hybrid model is not supported”; or when converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] and the related options are set, it raises “Currently, only float32 input type is supported.”
More information can be found here.
I’m wondering if anyone has met the same issue, or could someone kindly share a possible solution?
thanks in advance
Leo |
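For reference, the usual full-integer quantization recipe looks roughly like the sketch below; the representative_dataset generator is the piece that is easy to miss. The model and input shape here are assumptions, and whether this resolves the OpenMV-specific errors depends on the TF version:

```python
import numpy as np
import tensorflow as tf

# A placeholder model; in practice this would be your trained tf.keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Yield a few batches of real (or realistic) float32 inputs so the
    # converter can calibrate quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
```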
st207266 | SIG-JVM’s next meeting is tomorrow, 23rd July at 9am PDT. The meeting call-in details and agenda are here 4, feel free to add additional topics. Our current focus is on improving the inference experience in TensorFlow-Java, but we’re also looking longer term at improving the training support. |
st207267 | I am trying to plot the progression of my model’s weights during training using add_scalar; however, the TensorBoard plot only shows the weights for a single epoch.
What I mean by this is, when I load tensorboard, I only see “epoch 0” in the scalars section even if I have run my model for 10 epochs. However, I dont have this issue while plotting histograms in the same code.
My code is as follows:
for epoch in total_epochs:
    # train model
    # calculate error
    optimizer.step()
    # calculate average loss
    for name, weight in model.named_parameters():
        SummaryWriter.add_histogram(name, weight, epoch)
        SummaryWriter.add_scalar(str(name), weight, epoch)
Here 1 is an example of what I mean. I had run the model for 10 epochs, the graph only shows epoch 0 and 1. However, the histogram (not pictured) contains the progression of all 10 epochs. |
st207268 | For custom training loops and TensorBoard, have you tried the method described in Using TensorBoard with other methods 2 (tf.summary)? |
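Concretely, that approach boils down to creating a file writer and calling tf.summary yourself inside the loop. A minimal sketch with a placeholder model (the log directory and tag names are arbitrary):

```python
import tensorflow as tf

# Placeholder model standing in for the real one.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(2)])

writer = tf.summary.create_file_writer("logs/weights")
num_epochs = 10

for epoch in range(num_epochs):
    # ... run the training step(s) for this epoch ...
    with writer.as_default():
        for var in model.trainable_variables:
            # One scalar curve per variable (its mean), plus the full histogram.
            tf.summary.scalar(var.name + "/mean", tf.reduce_mean(var), step=epoch)
            tf.summary.histogram(var.name, var, step=epoch)
    writer.flush()
```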
st207269 | Hi everyone,
I saw the following tweet last week about splitting Keras into a separate pip package.
Does this mean that, on my local virtual environment I need to do ‘pip install --upgrade keras’ in order to get the latest tf.keras?
Up until now, keras is updated as part of my:
pip install --upgrade tensorflow
or
pip install --upgrade tf-nightly
Thanks!
Fadi |
st207270 | pip install -U tf-nightly will auto update the keras-nightly as the dependency.
When tf 2.6 releases, pip install -U tensorflow will update keras as the dependency. |
st207271 | SIG Build’s next meeting will be tomorrow, Tuesday the 13th of July, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 10, and feel free to suggest new agenda items. |
st207272 | SIG Build’s next meeting will be on Tuesday, June 8th, at 2pm Pacific time. Find the meeting details here 5. Please feel free to suggest your own agenda items.
Normally the meeting would be on June 1st, but that’s right after Memorial Day, so @perfinion and I decided to move it a week later. |
st207273 | The meeting is happening tomorrow, on Tuesday the 8th! Find the meeting details, how to join, and agenda here 5. |
st207274 | Thanks to everyone who attended this meeting. Here’s a summary of the big discussion points:
Microsoft and TensorFlow will be looking into whether or not Microsoft can adopt some of the TensorFlow builds on Windows, which could improve QoL for Windows TensorFlow users.
Contributor Experience has gotten some more internal prioritization, which is a good sign for progress on various SIG Build tools.
SIG Build dockerfiles can build manylinux2010-compatible images with little effort. I am looking into using them to build the tf-nightly packages.
Our next meeting will be July 13th at 2pm PST. |
st207275 | Hi, Rizal here.
I am following this topic: GitHub - AntonMu/TrainYourOwnYOLO: Train a state-of-the-art yolov3 object detector from scratch!
but when I do the training, I get the error
2021-06-03 17:39:14.545998: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Using TensorFlow backend.
Traceback (most recent call last):
File “Train_YOLO.py”, line 28, in
import keras.backend as K
File "/home/rizal/.local/lib/python3.8/site-packages/keras/__init__.py", line 12, in
from . import initializers
File "/home/rizal/.local/lib/python3.8/site-packages/keras/initializers/__init__.py", line 126, in
populate_deserializable_objects()
File "/home/rizal/.local/lib/python3.8/site-packages/keras/initializers/__init__.py", line 51, in populate_deserializable_objects
LOCAL.GENERATED_WITH_V2 = tf.__internal__.tf2.enabled()
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2'
Any hint for me to move forward? Many thanks in advance, guys.
Cheers. |
st207276 | Hey all,
Just wondering if anyone has any interest in joining up for the reproducibility challenge this year?
The primary goal of this event is to encourage the publishing and sharing of scientific results that are reliable and reproducible. In support of this, the objective of this challenge is to investigate reproducibility of papers accepted for publication at top conferences by inviting members of the community at large to select a paper, and verify the empirical results and claims in the paper by reproducing the computational experiments, either via a new implementation or using code/data or other information provided by the authors.
The submission deadline is July 15th, 2021, so we don’t have a lot of time, but with some effort we can pull it off. It could be fun to tackle a paper that has shown promise and would be a useful addition to TensorFlow.
Comment below if you think you’ll have a bit of time to spare and what paper you think could be worth reproducing. |
st207277 | This is a nice idea!
Unfortunately I can’t participate at the moment
I’d go even further and publish the model on TFHub when ready! |
st207278 | I am currently using the Infineon TriCore controller. I would like to include a TF Lite model in this controller. Is it possible? How do I compile the C/C++ code specifically for the Infineon TriCore compiler? |
st207279 | There is, alas, currently no “out of the box” Tricore support. You’d need to talk to Aurix™ product support/ technical marketing folks for the official roadmap/plans in this area.
However it is possible - I know of at least one Infineon customer active in this area. I can’t “name names” as I’m not sure this activity is public information. |
st207280 | Hi,
I wasn’t aware of the availability of TensorFlow for Java, whose GitHub repository is available here:
GitHub - tensorflow/java: Java bindings for TensorFlow 3
This is great news. Is there any good place to find code examples for training, predictions, model building and the likes?
Thank you,
OD |
st207281 | We have a small set of example models here: GitHub - tensorflow/java-models: Models in Java 12
TensorFlow Java doesn’t have access to all of the gradients available in TensorFlow as many are implemented in python, so some models can’t be specified entirely in Java yet, but we’re working to build out the set. |
st207282 | Can we have a SIG for GDL/3D deep learning? It would also be a good idea to create a tag and a resources thread for GDL. |
st207283 | There was a plan for TF Graphics to start a SIG:
github.com/tensorflow/graphics — issue “Add Roadmap.md 1” (opened Nov 14, 2019, closed May 15, 2020) by bhack: “Is there a roadmap for this TF subproject? It seems to be very low on resources, so it would be interesting if you could share TF’s plans/expectations for it.”
But I don’t know what the current status is.
We also have TF 3D, but on GitHub it is still under the Google Research org:
Google AI Blog — 3D Scene Understanding with TensorFlow 3D 8: “Posted by Alireza Fathi, Research Scientist, and Rui Huang, AI Resident, Google Research. The growing ubiquity of 3D sensors (e.g., Lidar), ...”
/cc @thea |
st207284 | concaption:
Can we have a SIG for GDL/3D deep learning? It would also be a good idea to create a tag and a resources thread for GDL.
Hi @concaption - great idea. We first worked to create a SIG at the beginning of COVID, and a lot of the initial members were no longer able to prioritize participating in the SIG. This is a good reminder to check back in to see whether they have bandwidth to restart discussions. We generally wait to create new SIGs until there is enough community interest and a core group on-board to drive collaboration.
In the meantime, adding tags for those topics is a great idea. Will take a look at that. |
st207285 | Interested in this SIG.
I have research interest in using Deep Learning for 2D geometry (profiles) as well. Example: GitHub - yogeshhk/MidcurveNN: Computation of Midcurve of Thin Polygons using Neural Networks 2 |
st207286 | Hi TF.js Folks!
We are looking forward to catching up with you in our next SIG meeting. The meeting has been moved to Tuesday June 8th from 6:00-7:00 PM PST. The TF team would like to expand on some recent launches; TFLite Web API, Pose API, Task API. We would also like to discuss how we can form working groups with members to drive forward solutions. Please feel free to add any questions or topics for discussion to the agenda.
Resources:
Previous meeting notes and agenda 4
Shared drive for SIG resources and materials 2
Talk soon!
TFJS Team |
st207287 | Adding here a discussion around the TFX ML Metadata Client proposal: TFX MLMD Client Library proposal by casassg · Pull Request #23 · tensorflow/tfx-addons · GitHub 6
Currently under review in the PR above. |
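For context, this is roughly what working with MLMD looks like through the existing ml-metadata API today — a sketch of the current low-level usage that a client library could wrap, not the proposed client API itself (the store path and type names here are made up for illustration):

```python
# Low-level ml-metadata usage of the kind a TFX MLMD client library could simplify.
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Local SQLite-backed store; a real deployment would point at the pipeline's metadata DB.
config = metadata_store_pb2.ConnectionConfig()
config.sqlite.filename_uri = "/tmp/mlmd.sqlite"
config.sqlite.connection_mode = (
    metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE)
store = metadata_store.MetadataStore(config)

# Register an artifact type and record one artifact of that type.
model_type = metadata_store_pb2.ArtifactType()
model_type.name = "SavedModel"
model_type.properties["version"] = metadata_store_pb2.INT
type_id = store.put_artifact_type(model_type)

model = metadata_store_pb2.Artifact()
model.type_id = type_id
model.uri = "/tmp/model/1"
model.properties["version"].int_value = 1
[model_id] = store.put_artifacts([model])
print("stored artifact id:", model_id)
```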
st207288 | Hey everyone,
TFA 0.13 built against TF2.5 and CUDA11.2 has been released:
github.com
Release TensorFlow Addons v0.13.0 · tensorflow/addons 13
//github.com/tensorflow/addons/releases/tag/v0.13.0
Release Notes
Built against TensorFlow 2.5
CUDA kernels are compiled with CUDA11.2 and cuDNN 8.1.0
API docs found on the website
Changelog
Add python3.9 support (#2204)
Fixed build on ppc (#2438...
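If you want a quick smoke test that the new wheel works against TF 2.5, something like this should do (just a sketch — the choice of op is arbitrary):

```python
# Smoke test for TensorFlow Addons 0.13 against TF 2.5.
# Install with: pip install -U tensorflow==2.5.0 tensorflow-addons==0.13.0
import tensorflow as tf
import tensorflow_addons as tfa

print("TF:", tf.__version__, "| TFA:", tfa.__version__)

# Exercise a simple Addons op to confirm the install works.
image = tf.random.uniform([1, 32, 32, 3])
rotated = tfa.image.rotate(image, angles=0.5)
print(rotated.shape)
```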
Let us know if there are any issues! |
st207289 | Currently, TFLite Micro prefers what in DSP would be referred to as fixed-point representations for all of its integer tensors. This is very acceptable for situations where the expected dynamic range is low, e.g. NNs with batch norm. I’m interested in using Micro for general audio/other DSP, where I’m used to using block floating point.
What are my options here? Any plans to support it/is it supported?
My ultimate goal is to move the audio front-end DSP code of an audio pipeline into TF so as to consolidate our DSP (block floating point) and ML (fixed/floating point) frameworks.
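To make the request concrete, here is a rough numpy illustration of the block-floating-point idea I mean — one shared exponent per block of values — purely illustrative, not something TFLite Micro provides today:

```python
# Block floating point: quantize each block of values to integers that share a
# single per-block exponent, rather than one fixed-point scale for a whole tensor.
import numpy as np

def bfp_quantize(x, block_size=8, mantissa_bits=8):
    x = np.asarray(x, dtype=np.float64).ravel()
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # Shared exponent per block, chosen so the largest magnitude fits in the mantissa.
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    exponents = np.ceil(np.log2(np.maximum(max_mag, 1e-38))).astype(np.int32)
    scale = 2.0 ** (exponents - (mantissa_bits - 1))

    mantissas = np.clip(np.round(blocks / scale),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1).astype(np.int32)
    return mantissas, exponents

def bfp_dequantize(mantissas, exponents, mantissa_bits=8):
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    return mantissas * scale

# A signal with wide dynamic range, where a single fixed-point scale struggles.
signal = np.sin(np.linspace(0, 20, 64)) * np.logspace(0, 3, 64)
m, e = bfp_quantize(signal)
recovered = bfp_dequantize(m, e).ravel()[:64]
print("max reconstruction error:", np.max(np.abs(recovered - signal)))
```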
Thanks |
st207290 | I just saw ’ * Dynamic quantized models support’ on TensorFlow Lite Roadmap 2
Is this what I am asking for? |
st207291 | My ultimate goal is to move the audio front-end DSP code of an audio pipeline into TF so as to consolidate our DSP (block floating point) and ML (fixed/floating point) frameworks.
Hi @asj! Looping in @Advait_Jain |
st207292 | Is anyone using a custom/secondary flatbuffer schema for CustomOptions to ops instead of using FlexBuffers? If so then how has it gone? Thanks |
st207293 | Is anyone using a custom/secondary flatbuffer schema for CustomOptions to ops instead of using FlexBuffers? If so then how has it gone? Thanks
Looping in @Advait_Jain (TF Lite Micro team) |
st207294 | Hi TF.js Community!
We are looking forward to catching up with you in our next SIG meeting on Tuesday May 11th from 6-7PM PST.
We are really excited to present an overview of the new Task API and TF Lite model execution in TF.js. Based on feedback, we will allocate the second half of the meeting to discussion and Q&A. Joana will also introduce the new TF contributor forum. Please feel free to add any questions or topics for discussion to the agenda.
The meeting agenda, meeting calendar details, and previous meeting notes:
docs.google.com
[Public] SIG TF.js Meeting Notes 5
SIG TF.js Meeting Notes Note: This is a public document. Meeting: 2021-05-04 Tuesday, May 4, 2021, 6:00 – 7:00 pm Pacific Time Meeting Recording - TBD Please join this link: meet.google.com/dac-ngjq-okc (US) +1 617-675-4444 PIN: 298 636 519...
Shared drive for SIG resources and materials:
https://drive.google.com/corp/drive/folders/1Fh2RcrlCfn3sefMCLFc09TX1XfIYxKXw 1
The meeting is open to everyone - hope to see you there,
Sandeep Gupta,
PM, TensorFlow.js, on behalf of the TensorFlow.js team |
st207295 | Do you have any ideas for new components, examples, or tools that can be added to TFX and shared with the world? If so, then you’re in luck! You can add your idea as a TFX-Addons community project idea by creating an issue in the repo 6 and tagging it with Project: Idea. We’ll include it in the projects that we discuss in meetings, and someone (maybe you?) can write a proposal 2 to implement it!
Join SIG TFX-Addons by joining the Google Group 3, and remember to also read the guidelines in the README 1 for more information on how the group operates. |
st207296 | Hi SIG,
As a reminder we’ll have our monthly meeting tomorrow @ 11am PT
Please add anything to the agenda:
docs.google.com
[Public] SIG Addons Notes 6
TensorFlow SIG Addons Notes This is a public document 2021-05-06 Thursday, May 6th, 11am-12pm PT meet.google.com/gne-cewk-epe [log into Google with the email you use for [email protected]] +1 920-605-5984 PIN: 517 795# Review previous action...
Thursday, May 6th, 11am-12pm PT
meet.google.com/gne-cewk-epe 2 |
st207297 | Discuss state-of-the-art ML models both in and out of the TensorFlow model garden. |
st207298 | Two models I would immediately like to see in Model Garden:
WideResNets (very helpful for benchmarking purposes; a rough block sketch follows after this list)
Any colorization model
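For the WideResNet item, here is a rough Keras sketch of the pre-activation wide basic block and the WRN-28-10 layout — purely illustrative, assuming CIFAR-sized inputs, and not a Model Garden implementation:

```python
# Pre-activation WideResNet basic block: BN -> ReLU -> 3x3 Conv, twice,
# with a 1x1 projection shortcut when the channel count or stride changes.
import tensorflow as tf
from tensorflow.keras import layers

def wide_basic_block(x, filters, stride=1):
    shortcut = x
    out = layers.BatchNormalization()(x)
    out = layers.ReLU()(out)
    if stride != 1 or x.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride, use_bias=False)(out)
    out = layers.Conv2D(filters, 3, strides=stride, padding="same", use_bias=False)(out)
    out = layers.BatchNormalization()(out)
    out = layers.ReLU()(out)
    out = layers.Conv2D(filters, 3, padding="same", use_bias=False)(out)
    return layers.Add()([out, shortcut])

def wrn(depth=28, k=10, num_classes=10):
    # WRN-depth-k: depth = 6n + 4, with n blocks per stage and widths 16k/32k/64k.
    n = (depth - 4) // 6
    inputs = tf.keras.Input((32, 32, 3))  # CIFAR-sized inputs assumed
    x = layers.Conv2D(16, 3, padding="same", use_bias=False)(inputs)
    for i, width in enumerate([16 * k, 32 * k, 64 * k]):
        for j in range(n):
            x = wide_basic_block(x, width, stride=2 if (i > 0 and j == 0) else 1)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes)(x)
    return tf.keras.Model(inputs, outputs)

model = wrn(depth=28, k=10)
model.summary()
```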
I had worked on a Colorization model 9 in the past months with a collaborator. It gave decent results but we could not get the distributed training part right. We are up for collaborations! Our pipeline is pretty fast so far utilizing the best practices from tf.data and is fully compatible with TPUs. |
st207299 | I haven’t found vision transformer models in any TensorFlow repos so far.
I think it’s important to develop a repository full of transformer models made with TensorFlow, just like we have pytorch image models 2.
I am currently working on this and would really appreciate collaborations on this project. |
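To give a flavour of what such a repository could start from, here is a small Keras sketch of the ViT patch-embedding stage (the hyperparameters are assumed ViT-Base-style defaults, not tied to any existing repo):

```python
# Split an image into non-overlapping patches and linearly project each one,
# which is the input stage of a vision transformer.
import tensorflow as tf
from tensorflow.keras import layers

class PatchEmbedding(layers.Layer):
    def __init__(self, patch_size=16, embed_dim=768, **kwargs):
        super().__init__(**kwargs)
        self.patch_size = patch_size
        self.projection = layers.Dense(embed_dim)

    def call(self, images):
        # Extract flattened patch pixels; assumes statically known spatial dims.
        patches = tf.image.extract_patches(
            images=images,
            sizes=[1, self.patch_size, self.patch_size, 1],
            strides=[1, self.patch_size, self.patch_size, 1],
            rates=[1, 1, 1, 1],
            padding="VALID",
        )
        batch = tf.shape(images)[0]
        num_patches = patches.shape[1] * patches.shape[2]
        patches = tf.reshape(patches, [batch, num_patches, patches.shape[-1]])
        return self.projection(patches)

tokens = PatchEmbedding()(tf.random.uniform([2, 224, 224, 3]))
print(tokens.shape)  # (2, 196, 768)
```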