st205200 | You can try to read the Keras model (.h5) and rewrite it as a SavedModel (saved_model.pb) |
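A minimal sketch of that conversion (not from the original post; it assumes a file named my_model.h5 already exists):
```python
import tensorflow as tf

# Load the existing Keras HDF5 model...
model = tf.keras.models.load_model('my_model.h5')
# ...and re-save it in the SavedModel format (creates saved_model.pb plus a variables/ folder).
model.save('my_saved_model', save_format='tf')
```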
st205201 | # Here is an example from keras.io
from keras.models import load_model

# assuming `model` is an already-built and compiled Keras model
model.save('my_model.h5')  # creates a HDF5 file 'my_model.h5'
del model  # deletes the existing model

# returns a compiled model
# identical to the previous one
model = load_model('my_model.h5')
Read more: Model saving & serialization APIs |
st205202 | Adding up to the discussion, these links can help you get a deeper understanding if you want:
Using the SavedModel format | TensorFlow Core
Save and load models | TensorFlow Core
Save and load Keras models | TensorFlow Core
I'd go over these tutorials (they are short); they will give you a good understanding. |
st205203 | You can convert the model by freezing variables; this is possible with TF < 2.0. TF 2.0 and above may give you an error.
stackoverflow.com
How to export Keras .h5 to tensorflow .pb? 26
python, tensorflow, keras
answered by
jdehesa
on 04:33PM - 02 Aug 17 UTC |
st205204 | Hi, I'm trying to implement the SOLO architecture for instance segmentation in TensorFlow (decoupled version).
https://arxiv.org/pdf/1912.04488.pdf
Right now, I need to compute the loss function and multiply each output map from conv2d with each other.
xi = Conv2D(...)(input) # output is (batch, None, None, 24)
yi = Conv2D(...)(input) # output is (batch, None, None, 24)
I need to multiply the output filters (element-wise) of xi with yi in a way that gives an output of shape (batch, None, None, 24*24).
I tried to do this with for loops but get the error "OperatorNotAllowedInGraphError: iterating over tf.Tensor is not allowed: AutoGraph did convert this function".
Any advice on how to achieve this? |
st205205 | I've not personally verified this loss implementation against the original SOLO paper, but check if it could help you as a baseline:
github.com
quanghona/SOLO_tf2/blob/master/train/loss.py 2
import tensorflow as tf
class SOLOLoss(tf.keras.losses.Loss):
    """Loss for SOLO network

    Usage: This class can be use for built-in training method (keras) or custom
    training procedure
    - For built-in method, get the loss functions by invoke `get_category_loss`
      and `get_mask_loss` method and loss.weights to get the weights
    - For custom training function, Just invoke the functional method to get all
      losses at the same time

    Note: if call method is apply instead of __call__, please add `reduction`
    parameter to the constructor
    """
    def __init__(self, mask_weight=3, d_mask='dice', focal_gamma=2.0, focal_alpha=0.25, name='solo_loss'):
        super(SOLOLoss, self).__init__(name=name)
        self.mask_weight = mask_weight
        self.weights = [1, mask_weight]
This file has been truncated. show original |
st205206 | Thank you for the answer. I already looked into the mentioned implementation, but I cannot find any answer for the multiplication of two Conv2D layers in a decoupled way. |
st205207 | Do you have a very small isolated example of what you want to achieve? E.g. two dummy input Tensors and the expected output? |
st205208 | According to the paper I need element-wise multiplication of the output maps of two conv layers. In numpy something like this:
batch_size = 3
xi = np.ones((batch_size, 10, 10, 24))
yi = np.ones((batch_size, 10, 10, 24))
results = []
for i in range(24):
    for k in range(24):
        results.append(xi[:,:,:,i]*yi[:,:,:,k])
In TensorFlow 2 I can run a simple example like this, but it fails during training:
a = np.zeros((3, 10, 10, 24))
b = np.zeros((3, 10, 10, 24))
mask_preds = []
for i in range(24):
    for j in range(24):
        mask_pred = tf.multiply(a[:,:,:,i], b[:,:,:,j])
        mask_preds.append(mask_pred.numpy())
mask_preds = tf.constant(mask_preds)
mask_preds = tf.transpose(mask_preds, [1,2,3,0]) # (batch, 10, 10, 24*24) |
st205209 | Michal_Lukac:
but it fails during training:
If you mean that it is failing with fit etc., that is because you are then running this loop in graph mode, and you are in the same case as:
github.com/tensorflow/tensorflow
autograph fails inside keras model train_step including a for loop over a tensor 4
opened
Aug 7, 2020
klaimans
TF 2.3
comp:autograph
comp:keras
stat:awaiting tensorflower
type:bug
<em>Please make sure that this is a bug. As per our
[GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md),
we only address code/doc bugs, performance issues, feature requests and
build/installation issues on GitHub. tag:bug_template</em>
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linus Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: no
- TensorFlow installed from (source or binary): binary (docker image latest-gpu-py3)
- TensorFlow version (use command below): 2.3
- Python version: Python 3.6.9
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory: V100
**Describe the current behavior**
When writing a python "for" loop inside a tf.keras.Model.train_step I get the following error:
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
The same function works correctly when outside of a keras model but still decorated with tf.function.
**Describe the expected behavior**
autograph should support iterating over a tensor also inside a keras model
**Standalone code to reproduce the issue**
```
import tensorflow as tf
import numpy as np

t = tf.Variable(0)

@tf.function()
def foo():
    for n in tf.range(tf.constant(10)):
        t.assign_add(n)
    return t

nt = foo()
nt  # <tf.Tensor: shape=(), dtype=int32, numpy=45>

class mymodel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.t = tf.Variable(0)

    def train_step(self, data):
        for n in tf.range(tf.constant(10)):
            t.assign_add(n)
        return {"loss": t}

mm = mymodel()
mm.compile()
mm.fit(np.random.random((5)), steps_per_epoch=1)  # this doesn't work, see trace below
```
**Other info / logs**
OperatorNotAllowedInGraphErrorTraceback (most recent call last)
<ipython-input-18-c68155fbb474> in <module>
----> 1 mm.fit(np.random.random((5)), steps_per_epoch=1)
~usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
--> 108 return method(self, *args, **kwargs)
109
110 # Running inside `run_distribute_coordinator` already.
~usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1096 batch_size=batch_size):
1097 callbacks.on_train_batch_begin(step)
-> 1098 tmp_logs = train_function(iterator)
1099 if data_handler.should_sync:
1100 context.async_wait()
~usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
778 else:
779 compiler = "nonXla"
--> 780 result = self._call(*args, **kwds)
781
782 new_tracing_count = self._get_tracing_count()
~usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
821 # This is the first call of __call__, so we have to initialize.
822 initializers = []
--> 823 self._initialize(args, kwds, add_initializers_to=initializers)
824 finally:
825 # At this point we know that the initialization is complete (or less
~usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
695 self._concrete_stateful_fn = (
696 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 697 *args, **kwds))
698
699 def invalid_creator_scope(*unused_args, **unused_kwds):
~usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2853 args, kwargs = None, None
2854 with self._lock:
-> 2855 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2856 return graph_function
2857
~usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3211
3212 self._function_cache.missed.add(call_context_key)
-> 3213 graph_function = self._create_graph_function(args, kwargs)
3214 self._function_cache.primary[cache_key] = graph_function
3215 return graph_function, args, kwargs
~usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3073 arg_names=arg_names,
3074 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3075 capture_by_value=self._capture_by_value),
3076 self._function_attributes,
3077 function_spec=self.function_spec,
~usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
984 _, original_func = tf_decorator.unwrap(python_func)
985
--> 986 func_outputs = python_func(*func_args, **func_kwargs)
987
988 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
598 # __wrapped__ allows AutoGraph to swap in a converted function. We give
599 # the function a weak reference to itself to avoid a reference cycle.
--> 600 return weak_wrapped_fn().__wrapped__(*args, **kwds)
601 weak_wrapped_fn = weakref.ref(wrapped_fn)
602
~usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
OperatorNotAllowedInGraphError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
<ipython-input-12-62f0dcb0797d>:6 train_step
for n in tf.range(tf.constant(10)):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:503 __iter__
self._disallow_iteration()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:496 _disallow_iteration
self._disallow_when_autograph_enabled("iterating over `tf.Tensor`")
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:474 _disallow_when_autograph_enabled
" indicate you are trying to use an unsupported feature.".format(task))
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
Probably if you pass run_eagerly=True it will run fine, but it will be slower in eager mode with these nested loops.
TensorFlow
tf.keras.Model | TensorFlow Core v2.5.0
Model groups layers into an object with training and inference features.
Did you find a similar loop in the official Solo implementation at:
github.com
WXinlong/SOLO
SOLO and SOLOv2 for instance segmentation, ECCV 2020 & NeurIPS 2020. |
st205210 | I tried also with run_eagerly=True, but got AttributeError: 'Tensor' object has no attribute 'numpy'. And when not using numpy() I got an error on tf.constant(mask_preds) of TypeError: Expected any non-tensor type, got a tensor instead, and other problems.
The mentioned WXinlong/SOLO repo is implemented in PyTorch, and looking into it they have a bit different approach. They use a lot of loops...
Maybe I will just skip the decoupled head and use the coupled head, which should be more straightforward to implement as I don't need to multiply two Conv2D layers.
I just thought that multiplying two |
st205211 | Have you checked the decoupled head in:
github.com
quanghona/SOLO_tf2/blob/master/model/layers/head.py#L47
self.cat_convs.append(cat_conv)
cat_conv_out = tf.keras.layers.Conv2D(self.num_class, (3,3), 1, padding="same",
kernel_initializer=tf.keras.initializers.glorot_uniform(),
activation="sigmoid")
self.cat_convs.append(cat_conv_out)
# Mask branch
if head_style == 'vanilla':
num_mask_branch = 1
mask_out_num_channel = grid_sizes[0]*grid_sizes[0]
elif head_style == 'decoupled':
num_mask_branch = 2
mask_out_num_channel = grid_sizes[0]
else:
raise ValueError(f"Head style {head_style} not supported")
mask_conv_channels = [[self.num_channel, self.num_channel]] * num_mask_branch # number of channels for all sub-branchs in mask branchs
for branch_channels in mask_conv_channels:
branch_conv = []
branch_out = {grid_size: None for grid_size in grid_sizes}
coordconv = CoordConv2D(filters=branch_channels[0], kernel_size=(3,3), strides=1, padding="same", |
st205212 | If somebody is looking for the answer, here it is:
@tf.function
def outerprodflatten(x, y, channel_dims):
"""
According to StackOverflow:
https://stackoverflow.com/questions/68361071/multiply-outputs-from-two-conv2d-layers-in-tensorflow-2
"""
return tf.repeat(x,channel_dims,-1)*tf.tile(y,[1,1,1,channel_dims]) |
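Hypothetical usage of the helper above, just to confirm the output shape (the input sizes match the earlier numpy example):
```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.rand(3, 10, 10, 24), dtype=tf.float32)
y = tf.constant(np.random.rand(3, 10, 10, 24), dtype=tf.float32)

out = outerprodflatten(x, y, 24)
print(out.shape)  # (3, 10, 10, 576), i.e. (batch, 10, 10, 24*24)
```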
st205213 | I am trying to train a model on human skeleton data and was able to achieve good accuracy over training, but the validation loss reaches a point and starts to increase again. Model validation accuracy doesn't decrease over time. Clearly, it is overfitting, and I totally understand. To reduce this I tried most of the techniques but was not able to decrease the validation loss. I have tried dropouts, reducing model capacity, and adding loss for layers, but no luck.
The log graph can be seen below.
Any ideas to improve the model??
[training/validation logs graph, 1920×953] |
st205214 | Yep, I have tried that too. But I feel that for this model's parameters it's more a matter of data. Maybe I could be wrong. |
st205215 | No, when I did data augmentation the training seemed to be fine, but the validation loss only increased again |
st205216 | I don't know your dataset, but if you cannot collect more training data to cover your validation distribution, you can try some interesting augmentation approach like:
arXiv.org
PoseAug: A Differentiable Pose Augmentation Framework for 3D Human Pose... 3
Existing 3D human pose estimators suffer poor generalization performance to
new datasets, largely due to the limited diversity of 2D-3D pose pairs in the
training data. To address this problem, we present PoseAug, a new
auto-augmentation framework... |
st205217 | I am using the NTU-RGBD dataset for training. According to your idea, what should the validation distribution be? My dataset size is around 18000 samples, split 80:10:10. Also, the model has around 210,864 parameters. |
st205218 | In your graph, was the loss/accuracy on the action recognition or on the keypoints? |
st205219 | It was on action recognition, because I have considered 20 classes for prediction |
st205220 | Have you tried to build the confusion matrix or the classification error for each class to check how it is distributed? |
st205221 | This is data distribution, numbers respond to classes.
Counter({14: 850, 16: 849, 9: 848, 18: 848, 5: 848, 4: 847, 6: 847, 17: 846, 1: 845, 19: 845, 3: 845, 15: 845, 12: 844, 8: 844, 10: 844, 2: 843, 0: 841, 11: 840, 13: 837, 7: 834}) |
st205222 | Yes but I meant how the validation error is distributed over classes.
Before debugging your custom model have you tried to reproduce approximate results with any well known model on this dataset?
paperswithcode.com
Papers with Code - NTU RGB+D Benchmark (Skeleton Based Action Recognition) 25
The current state-of-the-art on NTU RGB+D is PoseC3D (w. HRNet 2D skeleton). See a full comparison of 85 papers with code. |
st205223 | How can I get the validation error over the distributed classes? I mean, how do I visualise the loss over classes? |
st205224 | You can build the confusion matrix by preparing the validation ground-truth labels and predictions |
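A minimal sketch of that (model, x_val and the integer labels y_val are placeholder names, not code from the thread):
```python
import numpy as np
import tensorflow as tf

y_pred = np.argmax(model.predict(x_val), axis=-1)                   # predicted class ids
cm = tf.math.confusion_matrix(y_val, y_pred, num_classes=20).numpy()
print(cm)                                                           # rows = ground truth, columns = predictions
per_class_error = 1.0 - np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
print(per_class_error)                                              # misclassification rate per class
```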
st205225 | Hey guys!
I'm facing a problem where I've got a bunch of form scans. The problem is that an OCR tool has difficulties reading them because of the overlapping text. I would like to delete the form schema and leave only the input provided by people.
What do you guys think would be the best solution for that problem?
Thanks in advance,
regards |
st205226 | I don't know if the user input is handwritten or not, but generally check if you find some useful paper in DI@KDD2021 or in the previous edition of this workshop. |
st205227 | Check also this NeurIPS 2020 survey:
arXiv.org
A Survey of Deep Learning Approaches for OCR and Document Understanding 1
Documents are a core part of many businesses in many fields such as law,
finance, and technology among others. Automatic understanding of documents such
as invoices, contracts, and resumes is lucrative, opening up many new avenues
of business. The... |
st205228 | Suppose I have two features f1, f2 with corresponding embedding vectors e1, e2 built by a Keras Embedding layer; how could I update each vector with some custom operation instead of only using the gradient?
For example , I would like to update e1 as follows:
e1 = 0.9 * (e1 + learning_rate * d e1) + 0.1 * e2 |
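One possible way to do this (a sketch, not an answer from the thread) is to compute the gradient yourself in a custom training step and then write the mixed vector back into the Embedding layer's weight matrix; f1_id, f2_id, model, loss_fn and the layer name 'embedding' are all assumed:
```python
import tensorflow as tf

emb = model.get_layer('embedding').embeddings   # the tf.Variable holding all embedding rows
lr = 0.01

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    d_emb = tf.convert_to_tensor(tape.gradient(loss, emb))   # gradient w.r.t. the whole table
    e1, e2 = emb[f1_id], emb[f2_id]
    new_e1 = 0.9 * (e1 + lr * d_emb[f1_id]) + 0.1 * e2       # the custom rule from the question
    emb.scatter_nd_update([[f1_id]], [new_e1])               # write the updated row back
    return loss
```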
st205229 | I typically run a custom training loop and never associate my optimizer with my model(s) via the compile function. I would like to save my optimizer so I can more easily resume training. What is the best way to save an optimizer in its own file, independent of any model. |
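One option (a sketch, with placeholder names and paths) is to wrap just the optimizer in a tf.train.Checkpoint, which does not require compile or a model:
```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(1e-3)

ckpt = tf.train.Checkpoint(optimizer=optimizer)
save_path = ckpt.save('./ckpts/optimizer')   # saves slot variables and the iteration count

# Later, recreate the optimizer the same way and restore before resuming training:
ckpt.restore(save_path)
```
Note that an optimizer only creates its slot variables after the first apply_gradients call, so saving after at least one training step captures the full state.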
st205230 | I have a model with an input layer of shape (None, 128, 128, 1)
Model: "model"
Layer (type) Output Shape Param #
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 126, 126, 16) 160
max_pooling2d (MaxPooling2D) (None, 63, 63, 16) 0
conv2d_1 (Conv2D) (None, 61, 61, 32) 4640
max_pooling2d_1 (MaxPooling2 (None, 30, 30, 32) 0
flatten (Flatten) (None, 28800) 0
dense (Dense) (None, 128) 3686528
dense_1 (Dense) (None, 10) 1290
Total params: 3,692,618
Trainable params: 3,692,618
Non-trainable params: 0
Now I have trained and saved the model, and later loaded the model using
new_model = tf.keras.models.load_model('saved_model/my_model')
Now I want to change the input layer to shape (256,256,1)
and use the rest of the model weights the same.
How can I do this?
What I have tried:
```python
new_model.layers.pop(0)
mode = tf.keras.Sequential()
mode.add(tf.keras.Input(shape=(256,256,1)))
for lay in new_model.layers[1:]:
    mode.add(lay)
```
None of these work for me.
Is there a way? |
st205231 | I think the approach you followed should work. I tried it with the mnist example as shown in the following gist.
The only thing is to make sure that changing the input shape does not affect the layers after the input layer. Please share the entire code (with any dummy data) for further support.
new_model = tf.keras.Sequential(tf.keras.layers.Flatten(input_shape=(14, 56)))
for layer in loaded_model.layers[1:]:
    new_model.add(layer)
As you are changing one of the layers, you need to train the model again to get updated weights. Before training, you need to compile the model and then train. You could also freeze the layers and fine-tune the model. |
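A rough sketch of another way to do it (not from the gist above): rebuild the graph with a (256, 256, 1) input by cloning each layer's config, then copy weights only for layers whose shapes still match. With this particular architecture the Flatten output grows, so the first Dense layer cannot reuse its old weights and must be retrained:
```python
import tensorflow as tf

old_model = tf.keras.models.load_model('saved_model/my_model')

inputs = tf.keras.Input(shape=(256, 256, 1))
x = inputs
for layer in old_model.layers[1:]:
    x = layer.__class__.from_config(layer.get_config())(x)   # clone each layer with its config
new_model = tf.keras.Model(inputs, x)

for old_layer, new_layer in zip(old_model.layers[1:], new_model.layers[1:]):
    old_w = old_layer.get_weights()
    new_w = new_layer.get_weights()
    if old_w and all(o.shape == n.shape for o, n in zip(old_w, new_w)):
        new_layer.set_weights(old_w)   # conv kernels transfer; the mismatched Dense is skipped
```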
st205232 | Hi everybody,
I want to enable the GPU to use TensorFlow on my notebook, but it does not seem to be recognized by Jupyter.
I have:
NVIDIA GeForce GT 540M
Windows 7 64-bit
CUDA 9.0
cuDNN 7.6.5
Anaconda
tensorflow-gpu 1.14.0
keras 2.4.3
Could anyone help me? I found a topic that says the minimum GPU CUDA compute capability needed for TensorFlow is 3.0 |
st205233 | Help with cuDNN installation: GPU With Tensorflow on Jupyter Notebook General Discussion
Hi guys, I'm trying to use my GPU (Nvidia GeForce 540M) with TensorFlow.
I've installed CUDA 6.5 but it seems not to be supported by TensorFlow.
I've seen that it is mandatory to have cuDNN.
Can anyone help me?
I think this is kind of the same issue. But the problem seems to be that the GPU is not supported, as per people in that thread. I have no problem believing that, considering the GPU is over 10 years old now. I would recommend using Google Colaboratory for a very similar experience to Jupyter notebook. Colab Pro is also a viable option when you are ready to do things fast. Even I use it although I have a powerful GPU locally.
Also, there are tons of tutorials and getting-started videos that show you how to use it.
Hope this helps |
st205234 | @Luca_Zanetti, I think the problem here is with the CUDA & cuDNN versions.
You can see here the corresponding supported CUDA & cuDNN versions for each version of tensorflow.
After installing CUDA & cuDNN, you can check whether your tensorflow uses the GPU or not with the command below.
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())" |
st205235 | I've already seen this compatibility table, but the GPU doesn't work with some configurations of CUDA and cuDNN.
Python doesn't recognize it. |
st205236 | Hello, a 5xx-class GPU is more than obsolete. I know that we are suffering the full force of the silicon crisis, but this is a bit exaggerated; think about two things: a desktop and Linux.
A desktop is stable and updates easily.
A desktop GPU costs much less than a laptop one, which has a limited lifespan.
Linux is stable and updates easily, and the command line is easy to learn, not to mention the millions of tutorials on YouTube |
st205237 | Tell me how to roughly assemble the model.
Input data:
Xt = [ # only one 1 in an element of 5 values
[[0,0,0,0,1], [0,0,1,0,0], [0,0,0,0,1], [1,0,0,0,0], [0,0,1,0,0], [0,0,1,0,0]], # 1 (6 elements)
... ...
[[0,0,0,0,1], [0,0,0,0,1], [0,0,0,0,1], [0,0,0,0,1], [0,0,0,0,1], [0,0,1,0,0]] # 500 (6 elements)
]
Yt = [ # only one 1 in an element of 5 values
[[0,0,0,0,1], [0,0,1,0,0], [0,0,0,0,1]], # 1 (3 elements)
... ...
[[0,0,0,0,1], [0,0,0,0,1], [0,0,0,0,1]] # 500 (3 elements)
]
DOES NOT WORK:
act = 'relu'
model = Sequential()
model.add(LSTM(6, activation=act, batch_input_shape=(500, 6, 5), return_sequences=True))
model.add(LSTM(3, activation='softmax')) # assuming this is the last layer
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
md = model.fit(Xt, Yt, epochs=500, batch_size=500, verbose=1, shuffle=False)
ValueError: Shapes (500, 6, 5) and (500, 3) are incompatible |
st205238 | Hi, Saha. You can add more details about your problem and data so we can see where to help. |
st205239 | As you are using softmax activation in the last layer, your code expects Yt to be of size [500, 3], whereas you are providing an array of size [500, 3, 5]. When I updated your code with that change, the model trains without any error. Please note that the data I am using is only for demonstration.
Here is a gist for reference. Thanks!
Check this example for more details. |
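Not from the thread, but if you would rather keep Yt with shape (500, 3, 5) — i.e. predict 3 one-hot steps of 5 classes each — one possible alternative is an encoder/decoder-style model whose output has exactly that shape (the data below is dummy):
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

Xt = np.random.randint(0, 2, size=(500, 6, 5)).astype('float32')   # dummy stand-ins for the real data
Yt = np.random.randint(0, 2, size=(500, 3, 5)).astype('float32')

model = tf.keras.Sequential([
    layers.LSTM(32, input_shape=(6, 5)),                            # encode the 6-step input
    layers.RepeatVector(3),                                         # one copy of the encoding per output step
    layers.LSTM(32, return_sequences=True),
    layers.TimeDistributed(layers.Dense(5, activation='softmax')),  # 5-way softmax per step
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
model.fit(Xt, Yt, epochs=5, batch_size=500, verbose=1)
```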
st205240 | tf.divide() - On dividing two tensors, or a tensor and an integer,
produces a tensor of datatype float64 by default.
Doesn't int32 / int32 result in an int32 value?
Should I use tf.cast( , tf.int32) always to get this done?
Sadly < 2 > is right according to TensorFlow's library,
which is against a general equation in mathematics:
x * (y/y) = x [where both x and y variables are of type int on the LHS]
In terms of TensorFlow:
tensor = tf.multiply(tf.divide(x, y), y) # x - tensor (DT - int32), y - tensor or a variable (DT - int32)
print(tensor)
Should I stop overthinking this and just stick with these rules?
a.) tf.divide(tensor of DT int, tensor or variable of DT int) = tensor of value < tensor/tensor or variable > of DT float64 always
b.) The same rule < a > while dividing a tensor by a variable or another tensor of DT int gives a tensor < tensor/tensor or variable > of DT float64 always.
If anyone agrees this has to be addressed and I was right, do support this to let it be addressed by the TensorFlow team, and to help me move on to further studies!
Thank you, Dev_Friends!! |
st205241 | Hello Saravanan_R,
According to the TensorFlow doc,
1. If the given x, y are int32, the output of tf.divide() is of float64 datatype by default.
For example, if x = tf.constant([16, 11]) and y = tf.constant([4, 2]),
tf.divide output:
16/4 = 4 and 11/2 = 5.5, which is a float value.
That's the reason the tf.divide output is set to a datatype of float64 by default.
If you are looking for the TensorFlow API which takes integer input and results in an integer,
use tf.math.floordiv(x, y), where x and y are of datatype int32. The result will have a datatype of int32. For more details refer to this link.
import tensorflow as tf
x = tf.constant([16, 12, 11])
y = tf.constant([4, 6, 2])
tf.math.floordiv(x, y)
Output:
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([4, 2, 5], dtype=int32)>
In the case of tf.divide, if the inputs are of float32, the output will be of float32. The LHS and RHS datatypes are the same.
import tensorflow as tf
x = tf.constant([16.0, 12.0, 11.0])
y = tf.constant([4.0, 6.0, 2.0])
print(x)
tf.divide(x,y)
Output:
tf.Tensor([16. 12. 11.], shape=(3,), dtype=float32)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([4. , 2. , 5.5], dtype=float32)> |
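A small illustration of the two options discussed above: keep an integer dtype with floordiv, or cast tf.divide's float result back to int32:
```python
import tensorflow as tf

x = tf.constant([16, 12, 11])
y = tf.constant([4, 6, 2])

print(x // y)                              # same as tf.math.floordiv -> dtype int32
print(tf.cast(tf.divide(x, y), tf.int32))  # float64 result truncated back to int32
```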
st205242 | I am trying to install the TensorFlow Object Detection API, however when I run the installation it seems to get stuck in a dependency/compatibility loop (running Ubuntu 20.04). This is the last part of the installation:
`
Downloading psutil-5.6.3.tar.gz (435 kB)
|ββββββββββββββββββββββββββββββββ| 435 kB 11.4 MB/s
Downloading psutil-5.6.2.tar.gz (432 kB)
|ββββββββββββββββββββββββββββββββ| 432 kB 11.4 MB/s
Downloading psutil-5.6.1.tar.gz (427 kB)
|ββββββββββββββββββββββββββββββββ| 427 kB 11.3 MB/s
Downloading psutil-5.6.0.tar.gz (426 kB)
|ββββββββββββββββββββββββββββββββ| 426 kB 11.6 MB/s
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Downloading psutil-5.5.1.tar.gz (426 kB)
|ββββββββββββββββββββββββββββββββ| 426 kB 11.5 MB/s
Downloading psutil-5.5.0.tar.gz (425 kB)
|ββββββββββββββββββββββββββββββββ| 425 kB 11.4 MB/s
Downloading psutil-5.4.8.tar.gz (422 kB)
|ββββββββββββββββββββββββββββββββ| 422 kB 11.3 MB/s
Downloading psutil-5.4.7.tar.gz (420 kB)
|ββββββββββββββββββββββββββββββββ| 420 kB 11.0 MB/s
Downloading psutil-5.4.6.tar.gz (418 kB)
|ββββββββββββββββββββββββββββββββ| 418 kB 9.9 MB/s
Downloading psutil-5.4.5.tar.gz (418 kB)
|ββββββββββββββββββββββββββββββββ| 418 kB 13.5 MB/s
Downloading psutil-5.4.4.tar.gz (417 kB)
|ββββββββββββββββββββββββββββββββ| 417 kB 11.4 MB/s
Downloading psutil-5.4.3.tar.gz (412 kB)
|ββββββββββββββββββββββββββββββββ| 412 kB 11.3 MB/s
INFO: pip is looking at multiple versions of proto-plus to determine which version is compatible with other requirements. This could take a while.
Collecting proto-plus>=1.10.0
Downloading proto_plus-1.18.1-py3-none-any.whl (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 1.3 MB/s
Downloading proto_plus-1.18.0-py3-none-any.whl (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 836 kB/s
Downloading proto_plus-1.17.0-py3-none-any.whl (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 1.7 MB/s
Downloading proto_plus-1.16.0-py3-none-any.whl (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 1.7 MB/s
Downloading proto_plus-1.15.0-py3-none-any.whl (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 1.7 MB/s
Downloading proto_plus-1.14.2-py3-none-any.whl (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 615 kB/s
Downloading proto-plus-1.13.0.tar.gz (44 kB)
|ββββββββββββββββββββββββββββββββ| 44 kB 3.0 MB/s
INFO: pip is looking at multiple versions of proto-plus to determine which version is compatible with other requirements. This could take a while.
Downloading proto-plus-1.11.0.tar.gz (44 kB)
|ββββββββββββββββββββββββββββββββ| 44 kB 2.8 MB/s
Downloading proto-plus-1.10.2.tar.gz (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 1.2 MB/s
Downloading proto-plus-1.10.1.tar.gz (42 kB)
|ββββββββββββββββββββββββββββββββ| 42 kB 2.2 MB/s
Downloading proto-plus-1.10.0.tar.gz (24 kB)
INFO: pip is looking at multiple versions of packaging to determine which version is compatible with other requirements. This could take a while.
Collecting packaging>=14.3
Downloading packaging-20.9-py2.py3-none-any.whl (40 kB)
|ββββββββββββββββββββββββββββββββ| 40 kB 5.6 MB/s
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this
`
Any way to solve this issue? |
st205243 | I had this issue when trying to install it after making the following updates to my environment:
tensorflow 2.4.1 -> 2.5
python 3.7.10 -> 3.8.10
Cuda toolkit 10.1 -> 11.2
This was the error message:
WARNING: --use-feature=2020-resolver no longer has any effect, since it is now the default dependency resolver in pip. This will become an error in pip 21.0.
Processing /home/labuser/tlt2/Tensorflow/models/research
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
Collecting avro-python3
Using cached avro-python3-1.10.2.tar.gz (38 kB)
Collecting apache-beam
Using cached apache_beam-2.31.0-cp38-cp38-manylinux2010_x86_64.whl (11.5 MB)
Requirement already satisfied: pillow in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from object-detection==0.1) (8.3.1)
Collecting lxml
Using cached lxml-4.6.3-cp38-cp38-manylinux2014_x86_64.whl (6.8 MB)
Requirement already satisfied: matplotlib in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from object-detection==0.1) (3.4.2)
Collecting Cython
Using cached Cython-0.29.24-cp38-cp38-manylinux1_x86_64.whl (1.9 MB)
Collecting contextlib2
Using cached contextlib2-21.6.0-py2.py3-none-any.whl (13 kB)
Collecting tf-slim
Using cached tf_slim-1.1.0-py2.py3-none-any.whl (352 kB)
Requirement already satisfied: six in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from object-detection==0.1) (1.16.0)
Collecting pycocotools
Using cached pycocotools-2.0.2-cp38-cp38-linux_x86_64.whl
Collecting lvis
Using cached lvis-0.5.3-py3-none-any.whl (14 kB)
Collecting scipy
Using cached scipy-1.7.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (28.4 MB)
Collecting pandas
Using cached pandas-1.3.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (10.6 MB)
Collecting tf-models-official
Using cached tf_models_official-2.5.0-py2.py3-none-any.whl (1.6 MB)
Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from apache-beam->object-detection==0.1) (3.7.4.3)
Collecting oauth2client<5,>=2.0.1
Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting dill<0.3.2,>=0.3.1.1
Using cached dill-0.3.1.1-py3-none-any.whl
Collecting avro-python3
Using cached avro_python3-1.9.2.1-py3-none-any.whl
Collecting pytz>=2018.3
Using cached pytz-2021.1-py2.py3-none-any.whl (510 kB)
Collecting future<1.0.0,>=0.18.2
Using cached future-0.18.2.tar.gz (829 kB)
Requirement already satisfied: protobuf<4,>=3.12.2 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from apache-beam->object-detection==0.1) (3.17.3)
Collecting pyarrow<5.0.0,>=0.15.1
Using cached pyarrow-4.0.1-cp38-cp38-manylinux2014_x86_64.whl (21.9 MB)
Collecting hdfs<3.0.0,>=2.1.0
Using cached hdfs-2.6.0-py3-none-any.whl (33 kB)
Collecting pydot<2,>=1.2.0
Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB)
Requirement already satisfied: requests<3.0.0,>=2.24.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from apache-beam->object-detection==0.1) (2.26.0)
Collecting fastavro<2,>=0.21.4
Using cached fastavro-1.4.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.7 MB)
Collecting crcmod<2.0,>=1.7
Using cached crcmod-1.7-cp38-cp38-linux_x86_64.whl
Requirement already satisfied: grpcio<2,>=1.29.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from apache-beam->object-detection==0.1) (1.34.1)
Collecting httplib2<0.20.0,>=0.8
Using cached httplib2-0.19.1-py3-none-any.whl (95 kB)
Collecting pymongo<4.0.0,>=3.8.0
Using cached pymongo-3.12.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (545 kB)
Requirement already satisfied: python-dateutil<3,>=2.8.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from apache-beam->object-detection==0.1) (2.8.2)
Collecting numpy<1.21.0,>=1.14.3
Using cached numpy-1.20.3-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.4 MB)
Collecting docopt
Using cached docopt-0.6.2-py2.py3-none-any.whl
Requirement already satisfied: pyparsing<3,>=2.4.2 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from httplib2<0.20.0,>=0.8->apache-beam->object-detection==0.1) (2.4.7)
Requirement already satisfied: pyasn1>=0.1.7 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from oauth2client<5,>=2.0.1->apache-beam->object-detection==0.1) (0.4.8)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from oauth2client<5,>=2.0.1->apache-beam->object-detection==0.1) (0.2.8)
Requirement already satisfied: rsa>=3.1.4 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from oauth2client<5,>=2.0.1->apache-beam->object-detection==0.1) (4.7.2)
Requirement already satisfied: idna<4,>=2.5 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from requests<3.0.0,>=2.24.0->apache-beam->object-detection==0.1) (3.2)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from requests<3.0.0,>=2.24.0->apache-beam->object-detection==0.1) (1.26.6)
Requirement already satisfied: certifi>=2017.4.17 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from requests<3.0.0,>=2.24.0->apache-beam->object-detection==0.1) (2021.5.30)
Requirement already satisfied: charset-normalizer~=2.0.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from requests<3.0.0,>=2.24.0->apache-beam->object-detection==0.1) (2.0.3)
Collecting opencv-python>=4.1.0.25
Using cached opencv_python-4.5.3.56-cp38-cp38-manylinux2014_x86_64.whl (49.9 MB)
Requirement already satisfied: kiwisolver>=1.1.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from lvis->object-detection==0.1) (1.3.1)
Requirement already satisfied: cycler>=0.10.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from lvis->object-detection==0.1) (0.10.0)
Requirement already satisfied: setuptools>=18.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from pycocotools->object-detection==0.1) (52.0.0.post20210125)
Collecting tensorflow-model-optimization>=0.4.1
Using cached tensorflow_model_optimization-0.6.0-py2.py3-none-any.whl (211 kB)
Collecting tensorflow-hub>=0.6.0
Using cached tensorflow_hub-0.12.0-py2.py3-none-any.whl (108 kB)
Collecting sacrebleu
Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)
Collecting pyyaml>=5.1
Using cached PyYAML-5.4.1-cp38-cp38-manylinux1_x86_64.whl (662 kB)
Collecting gin-config
Using cached gin_config-0.4.0-py2.py3-none-any.whl (46 kB)
Collecting kaggle>=1.3.9
Using cached kaggle-1.5.12-py3-none-any.whl
Collecting psutil>=5.4.3
Using cached psutil-5.8.0-cp38-cp38-manylinux2010_x86_64.whl (296 kB)
Collecting py-cpuinfo>=3.3.0
Using cached py_cpuinfo-8.0.0-py3-none-any.whl
Collecting seqeval
Using cached seqeval-1.2.2-py3-none-any.whl
Collecting opencv-python-headless
Using cached opencv_python_headless-4.5.3.56-cp38-cp38-manylinux2014_x86_64.whl (37.1 MB)
Collecting tensorflow-addons
Using cached tensorflow_addons-0.13.0-cp38-cp38-manylinux2010_x86_64.whl (679 kB)
Collecting tensorflow-datasets
Using cached tensorflow_datasets-4.3.0-py3-none-any.whl (3.9 MB)
Collecting google-cloud-bigquery>=0.31.0
Using cached google_cloud_bigquery-2.22.1-py2.py3-none-any.whl (195 kB)
Requirement already satisfied: tensorflow>=2.5.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from tf-models-official->object-detection==0.1) (2.5.0)
Collecting sentencepiece
Using cached sentencepiece-0.1.96-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)
Collecting google-api-python-client>=1.6.7
Using cached google_api_python_client-2.14.0-py2.py3-none-any.whl (7.1 MB)
Requirement already satisfied: google-auth<3.0.0dev,>=1.16.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from google-api-python-client>=1.6.7->tf-models-official->object-detection==0.1) (1.33.1)
Collecting uritemplate<4dev,>=3.0.0
Using cached uritemplate-3.0.1-py2.py3-none-any.whl (15 kB)
Collecting google-api-core<3.0.0dev,>=1.21.0
Using cached google_api_core-1.31.0-py2.py3-none-any.whl (93 kB)
Collecting google-auth-httplib2>=0.1.0
Using cached google_auth_httplib2-0.1.0-py2.py3-none-any.whl (9.3 kB)
Collecting googleapis-common-protos<2.0dev,>=1.6.0
Using cached googleapis_common_protos-1.53.0-py2.py3-none-any.whl (198 kB)
Collecting packaging>=14.3
Using cached packaging-21.0-py3-none-any.whl (40 kB)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from google-auth<3.0.0dev,>=1.16.0->google-api-python-client>=1.6.7->tf-models-official->object-detection==0.1) (4.2.2)
Collecting google-resumable-media<3.0dev,>=0.6.0
Using cached google_resumable_media-1.3.1-py2.py3-none-any.whl (75 kB)
Collecting google-cloud-core<3.0.0dev,>=1.4.1
Using cached google_cloud_core-1.7.1-py2.py3-none-any.whl (28 kB)
Collecting proto-plus>=1.10.0
Using cached proto_plus-1.19.0-py3-none-any.whl (42 kB)
Collecting grpcio<2,>=1.29.0
Using cached grpcio-1.39.0-cp38-cp38-manylinux2014_x86_64.whl (4.3 MB)
Collecting google-crc32c<2.0dev,>=1.0
Using cached google_crc32c-1.1.2-cp38-cp38-manylinux2014_x86_64.whl (38 kB)
Collecting cffi>=1.0.0
Using cached cffi-1.14.6-cp38-cp38-manylinux1_x86_64.whl (411 kB)
Collecting pycparser
Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting tqdm
Using cached tqdm-4.61.2-py2.py3-none-any.whl (76 kB)
Collecting python-slugify
Using cached python_slugify-5.0.2-py2.py3-none-any.whl (6.7 kB)
Collecting six
Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Requirement already satisfied: tensorboard~=2.5 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from tensorflow>=2.5.0->tf-models-official->object-detection==0.1) (2.5.0)
Requirement already satisfied: astunparse~=1.6.3 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from tensorflow>=2.5.0->tf-models-official->object-detection==0.1) (1.6.3)
Requirement already satisfied: h5py~=3.1.0 in /home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages (from tensorflow>=2.5.0->tf-models-official->object-detection==0.1) (3.1.0)
INFO: pip is looking at multiple versions of pyyaml to determine which version is compatible with other requirements. This could take a while.
Collecting pyyaml>=5.1
Using cached PyYAML-5.4-cp38-cp38-manylinux1_x86_64.whl (662 kB)
Using cached PyYAML-5.3.1.tar.gz (269 kB)
Using cached PyYAML-5.3.tar.gz (268 kB)
Using cached PyYAML-5.2.tar.gz (265 kB)
Using cached PyYAML-5.1.2.tar.gz (265 kB)
Using cached PyYAML-5.1.1.tar.gz (274 kB)
Using cached PyYAML-5.1.tar.gz (274 kB)
INFO: pip is looking at multiple versions of pyyaml to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of py-cpuinfo to determine which version is compatible with other requirements. This could take a while.
Collecting py-cpuinfo>=3.3.0
Using cached py-cpuinfo-7.0.0.tar.gz (95 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Using cached py-cpuinfo-6.0.0.tar.gz (145 kB)
Using cached py-cpuinfo-5.0.0.tar.gz (82 kB)
Using cached py-cpuinfo-4.0.0.tar.gz (79 kB)
Using cached py-cpuinfo-3.3.0.tar.gz (76 kB)
INFO: pip is looking at multiple versions of psutil to determine which version is compatible with other requirements. This could take a while.
Collecting psutil>=5.4.3
Using cached psutil-5.7.3.tar.gz (465 kB)
INFO: pip is looking at multiple versions of py-cpuinfo to determine which version is compatible with other requirements. This could take a while.
Using cached psutil-5.7.2.tar.gz (460 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Using cached psutil-5.7.1.tar.gz (460 kB)
Using cached psutil-5.7.0.tar.gz (449 kB)
Using cached psutil-5.6.7.tar.gz (448 kB)
Using cached psutil-5.6.6.tar.gz (447 kB)
Using cached psutil-5.6.5.tar.gz (447 kB)
INFO: pip is looking at multiple versions of psutil to determine which version is compatible with other requirements. This could take a while.
Using cached psutil-5.6.4.tar.gz (447 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Using cached psutil-5.6.3.tar.gz (435 kB)
Using cached psutil-5.6.2.tar.gz (432 kB)
Using cached psutil-5.6.1.tar.gz (427 kB)
Using cached psutil-5.6.0.tar.gz (426 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Using cached psutil-5.5.1.tar.gz (426 kB)
Using cached psutil-5.5.0.tar.gz (425 kB)
Using cached psutil-5.4.8.tar.gz (422 kB)
Using cached psutil-5.4.7.tar.gz (420 kB)
Using cached psutil-5.4.6.tar.gz (418 kB)
Using cached psutil-5.4.5.tar.gz (418 kB)
Using cached psutil-5.4.4.tar.gz (417 kB)
Using cached psutil-5.4.3.tar.gz (412 kB)
INFO: pip is looking at multiple versions of proto-plus to determine which version is compatible with other requirements. This could take a while.
Collecting proto-plus>=1.10.0
Using cached proto_plus-1.18.1-py3-none-any.whl (42 kB)
Using cached proto_plus-1.18.0-py3-none-any.whl (42 kB)
Using cached proto_plus-1.17.0-py3-none-any.whl (42 kB)
Using cached proto_plus-1.16.0-py3-none-any.whl (42 kB)
Using cached proto_plus-1.15.0-py3-none-any.whl (42 kB)
Using cached proto_plus-1.14.2-py3-none-any.whl (42 kB)
Using cached proto-plus-1.13.0.tar.gz (44 kB)
INFO: pip is looking at multiple versions of proto-plus to determine which version is compatible with other requirements. This could take a while.
Using cached proto-plus-1.11.0.tar.gz (44 kB)
Using cached proto-plus-1.10.2.tar.gz (42 kB)
Using cached proto-plus-1.10.1.tar.gz (42 kB)
Using cached proto-plus-1.10.0.tar.gz (24 kB)
INFO: pip is looking at multiple versions of packaging to determine which version is compatible with other requirements. This could take a while.
Collecting packaging>=14.3
Using cached packaging-20.9-py2.py3-none-any.whl (40 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Downloading packaging-20.8-py2.py3-none-any.whl (39 kB)
Downloading packaging-20.7-py2.py3-none-any.whl (35 kB)
Downloading packaging-20.5-py2.py3-none-any.whl (35 kB)
Downloading packaging-20.4-py2.py3-none-any.whl (37 kB)
Downloading packaging-20.3-py2.py3-none-any.whl (37 kB)
Downloading packaging-20.2-py2.py3-none-any.whl (37 kB)
INFO: pip is looking at multiple versions of packaging to determine which version is compatible with other requirements. This could take a while.
Downloading packaging-20.1-py2.py3-none-any.whl (36 kB)
Downloading packaging-20.0-py2.py3-none-any.whl (36 kB)
Downloading packaging-19.2-py2.py3-none-any.whl (30 kB)
Downloading packaging-19.1-py2.py3-none-any.whl (30 kB)
Collecting attrs
Downloading attrs-21.2.0-py2.py3-none-any.whl (53 kB)
|ββββββββββββββββββββββββββββββββ| 53 kB 2.9 MB/s
Collecting packaging>=14.3
Downloading packaging-19.0-py2.py3-none-any.whl (26 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
Downloading packaging-18.0-py2.py3-none-any.whl (21 kB)
Downloading packaging-17.1-py2.py3-none-any.whl (24 kB)
Downloading packaging-17.0-py2.py3-none-any.whl (23 kB)
Downloading packaging-16.8-py2.py3-none-any.whl (23 kB)
Downloading packaging-16.7-py2.py3-none-any.whl (22 kB)
Downloading packaging-16.6-py2.py3-none-any.whl (22 kB)
Downloading packaging-16.5-py2.py3-none-any.whl (22 kB)
Downloading packaging-16.4-py2.py3-none-any.whl (22 kB)
Downloading packaging-16.3-py2.py3-none-any.whl (22 kB)
Downloading packaging-16.2-py2.py3-none-any.whl (22 kB)
Downloading packaging-16.1-py2.py3-none-any.whl (21 kB)
Downloading packaging-16.0-py2.py3-none-any.whl (19 kB)
Downloading packaging-15.3-py2.py3-none-any.whl (18 kB)
Downloading packaging-15.2-py2.py3-none-any.whl (18 kB)
Downloading packaging-15.1-py2.py3-none-any.whl (17 kB)
Downloading packaging-15.0-py2.py3-none-any.whl (17 kB)
Downloading packaging-14.5-py2.py3-none-any.whl (16 kB)
Downloading packaging-14.4-py2.py3-none-any.whl (16 kB)
Downloading packaging-14.3-py2.py3-none-any.whl (16 kB)
INFO: pip is looking at multiple versions of kaggle to determine which version is compatible with other requirements. This could take a while.
Collecting kaggle>=1.3.9
Downloading kaggle-1.5.10.tar.gz (59 kB)
|ββββββββββββββββββββββββββββββββ| 59 kB 2.4 MB/s
Downloading kaggle-1.5.9.tar.gz (58 kB)
|ββββββββββββββββββββββββββββββββ| 58 kB 1.4 MB/s
Collecting slugify
Downloading slugify-0.0.1.tar.gz (1.2 kB)
ERROR: Exception:
Traceback (most recent call last):
File "/home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 180, in _main
status = self.run(options, args)
File "/home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 318, in run
requirement_set = resolver.resolve(
File "/home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 127, in resolve
result = self._result = resolver.resolve(
File "/home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 473, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/home/labuser/anaconda3/envs/tf/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 384, in resolve
raise ResolutionTooDeep(max_rounds)
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 2000000
It took multiple hours to time out. However, it worked the next time I ran the installation command, and only took a minute. I would recommend just letting it go through the dependencies until it works out, as pip recommends. |
st205244 | I was trying, within a function definition, to run a specific operation on each element in the batch. This operation has to be performed independently (i.e. it can't be run on the whole batch at once). I tried using for loops, map functions, and for-in-range loops.
I have attached a mini colab. I put a dummy function here, but in reality it won't work across the batch.
Any strategies that would be helpful would be great.
Google Colaboratory 1. |
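One common way to do this (a minimal sketch; per_example_op is a placeholder for the real per-sample computation) is tf.map_fn, which applies a function to each element along the batch dimension and also works in graph mode:
```python
import tensorflow as tf

def per_example_op(x):             # x is a single example of shape (H, W, C)
    return tf.reduce_mean(x) * x   # dummy per-sample operation

@tf.function
def process_batch(batch):          # batch has shape (B, H, W, C)
    return tf.map_fn(per_example_op, batch)

out = process_batch(tf.random.normal((4, 8, 8, 3)))
print(out.shape)                   # (4, 8, 8, 3)
```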
st205245 | train_data = object_detector.DataLoader.from_pascal_voc("/content/drive/MyDrive/10ImagesTesting/Train10imagesXML", "/content/drive/MyDrive/10ImagesTesting/Train10imagesXML", label_map={1: 'Circle K Donut', 2: 'Chocolate Heart', 3: 'Chocolate Heart With Sprinkles', 4: 'Chocolate Ring', 5: 'Chocolate Ring With Sprinkles', 6: 'Apple Turnover', 7: 'Red Velvet Cake', 8: 'Chocolate Old Fashioned', 8: 'Chocolate Old Fashioned', 9: 'Concha', 10: 'Crumb Cake', 11: 'Strawberry Heart With Sprinkles promo', 12: 'Blueberry Fritter', 13: 'Apple Fritter', 14: 'Cinnamon Roll', 15: 'Frosted Cinnamon Roll', 16: 'Butter Croissant', 17: 'Raspberry Filled', 18: 'Boston cream shell', 19: 'Cotton Candy Ring Promo', 20: 'Glazed Ring', 21: 'Sugared Ring', 22: 'Glazed Twist', 23: 'Sugared Twist', 24: 'Glazed Pull Apart', 25: 'Sprinkled Pull Apart', 26: 'Strawberry Ring With Sprinkles', 27: 'Death by Chocolate', 28: 'Boston Cream Bar', 29: 'Jelly Filled Bar', 30: 'Brownie Batter Bar', 31: 'Chocolate Bar', 32: 'Maple Bar'})
ERROR
/usr/local/lib/python3.7/dist-packages/tensorflow_examples/lite/model_maker/third_party/efficientdet/dataset/create_pascal_tfrecord.py in dict_to_tf_example(data, images_dir, label_map_dict, unique_id, ignore_difficult_instances, ann_json_dict)
172 area.append((xmax[-1] - xmin[-1]) * (ymax[-1] - ymin[-1]))
173 classes_text.append(obj['name'].encode('utf8'))
--> 174 classes.append(label_map_dict[obj['name']])
175 truncated.append(int(obj['truncated']))
176 poses.append(obj['pose'].encode('utf8'))
KeyError: '23 |
st205246 | I'm having the same issue. Any luck?
Edit: Just figured it out myself. In the xml files, some files were missing the <pose> field. I just added <pose>Undefined</pose> after </name> and got it to work. |
st205247 | I'm using 2 generators to generate input for each step and need to update the input at a certain training step, but the prefetched data often gets a bit messed up when changing input. I can think of 2 ways to solve the problem:
Remove data that has already been pulled into the input pipeline by prefetch (since I use generators and need to update the input at a certain training step)
Stop prefetching once it has prefetched enough data at training step k
Is there any straightforward way to solve one of the 2 problems above? |
st205248 | In my project I have several different classes of videos with thousands of videos in any one class. The videos are too big to all be loaded into memory. My model loads batches of videos from each class and creates synthetic videos by applying a function to a batch of videos from different classes. My problem is: how do I efficiently load the data? I have created a generator which samples some class ids, loads videos from each class and creates the synthetic video. This works, but even using the recommended dataset pipeline, data loading and manipulation still takes 90% of training time. I tried creating a separate dataset for each class and sampling from them in parallel using a zip dataset, which also works and is faster. However, I'm doing distributed training across multiple GPUs, and thus need a distributed dataset, so how can I create a distributed dataset of datasets? The goal is to have a dataset that takes in a batch of video batches and applies a function to that batch to produce a meta-batch. |
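A rough sketch of one possible layout (make_class_dataset, synthesize, class_ids and global_batch_size are placeholders, not code from the post): build one dataset per class, sample between them, batch, apply the synthesis function, and only then hand the finished dataset to the strategy for distribution:
```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

class_datasets = [make_class_dataset(c).repeat() for c in class_ids]
ds = tf.data.experimental.sample_from_datasets(class_datasets)    # mix samples across classes
ds = ds.batch(global_batch_size)
ds = ds.map(synthesize, num_parallel_calls=tf.data.AUTOTUNE)      # build the synthetic meta-batch
ds = ds.prefetch(tf.data.AUTOTUNE)

dist_ds = strategy.experimental_distribute_dataset(ds)            # a single distributed dataset
```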
st205249 | Hello guys,
I'd like to ask if there is some info, examples, sample code or recent benchmarking data for XLA/tfcompile for inference of float and quantized models?
I'd like to find out and compare tflite vs. XLA/AOT.
Thank you |
st205250 | There is XLA and there is tfcompile() - I am looking for a way to use it for ARM-based devices; is it possible to do AOT (Ahead-of-Time) compilation?
From what is indicated, tfcompile() can be used as a standalone tool, which converts a TensorFlow graph into executable code, but for x86-64 CPU only.
I'd like to try out XLA/tfcompile()/AOT for an ARM-based device and I'd like to have better ways to improve performance. As indicated, this can reduce binary size and also avoid runtime overhead, so I'd like to achieve this. |
st205251 | github.com/tensorflow/tensorflow
[feature] Support Cross Compiling with tfcompile 9
opened
May 4, 2017
cancan101
stat:contributions welcome
type:feature
Tensorflow (using XLA) is able to AOT compile a graph using `tfcompile`. There does not seem to be a way to, or it is not documented how to, cross compile the graph (i.e. compile on OS X for deployment on iOS). (Related [SO](http://stackoverflow.com/questions/43508105/using-tfcompile-to-aot-compile-tensorflow-graph-for-ios) question).
I suggest adding a means of performing this cross compilation. |
st205252 | I am trying to understand the code used in:
TensorFlow
Time series forecasting Β |Β TensorFlow Core 3
They define further below what a 'multi_window' is, in which several inputs have a label of several outputs. The WindowGenerator class is defined as:
class WindowGenerator():
    def __init__(self, input_width, label_width, shift,
                 train_df=train_df, val_df=val_df, test_df=test_df,
                 label_columns=None):
        # Store the raw data.
        self.train_df = train_df
        self.val_df = val_df
        self.test_df = test_df

        # Work out the label column indices.
        self.label_columns = label_columns
        if label_columns is not None:
            self.label_columns_indices = {name: i for i, name in
                                          enumerate(label_columns)}
        self.column_indices = {name: i for i, name in
                               enumerate(train_df.columns)}

        # Work out the window parameters.
        self.input_width = input_width
        self.label_width = label_width
        self.shift = shift

        self.total_window_size = input_width + shift

        self.input_slice = slice(0, input_width)
        self.input_indices = np.arange(self.total_window_size)[self.input_slice]

        self.label_start = self.total_window_size - self.label_width
        self.labels_slice = slice(self.label_start, None)
        self.label_indices = np.arange(self.total_window_size)[self.labels_slice]

    def __repr__(self):
        return '\n'.join([
            f'Total window size: {self.total_window_size}',
            f'Input indices: {self.input_indices}',
            f'Label indices: {self.label_indices}',
            f'Label column name(s): {self.label_columns}'])
But this is then used as an input when calling the 'model.fit' function:
def compile_and_fit(model, window, patience=2):
    early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                      patience=patience,
                                                      mode='min')

    model.compile(loss=tf.losses.MeanSquaredError(),
                  optimizer=tf.optimizers.Adam(),
                  metrics=[tf.metrics.MeanAbsoluteError()])

    history = model.fit(window.train, epochs=MAX_EPOCHS,
                        validation_data=window.val,
                        callbacks=[early_stopping])
    return history
You can see this in the initialization of 'history', where the training set of the raw data is returned.
From what I can make sense of, the input_width and label_width of the window do not affect what goes into the LSTM at all, as the only thing that is being called is the validation and test sets. |
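For what it's worth, the slices defined above are exactly what determine the model's inputs and labels: later in that tutorial the window's train/val/test properties build a tf.data.Dataset that cuts each window of total_window_size time steps into an (inputs, labels) pair using input_slice and labels_slice. A tiny illustration with made-up sizes (not the tutorial's actual window settings):
```python
import numpy as np

input_width, label_width, shift = 6, 1, 1
total_window_size = input_width + shift
input_slice = slice(0, input_width)
labels_slice = slice(total_window_size - label_width, None)

window = np.arange(total_window_size)   # stand-in for 7 consecutive time steps
print(window[input_slice])              # [0 1 2 3 4 5] -> fed to the LSTM
print(window[labels_slice])             # [6]           -> used as the label
```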
st205253 | Hi, I have been trying to run MoveNet with p5.js. I have managed to get the detector working and can log out the poses. However, I am trying to draw a skeleton on top of the connected circles for visualization purposes. I have looked into PoseNet as it has similar 17 keypoints. However, I noticed that some functions such as posenet.getAdjacentKeypoints are not implemented in MoveNet. I tried to look into the TFJS models --> src --> demo but could not get my head around how to implement the skeleton. My sample trial code is below.
let detector;
let poses;
let video;

async function init() {
  const detectorConfig = {
    modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING,
  };
  detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet,
    detectorConfig
  );
}

async function videoReady() {
  console.log("video ready");
  await getPoses();
}

async function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO, videoReady);
  video.hide();
  await init();
  //createButton('pose').mousePressed(getPoses)
}

async function getPoses() {
  poses = await detector.estimatePoses(video.elt);
  setTimeout(getPoses, 0);
}

function draw() {
  background(220);
  image(video, 0, 0);
  if (poses && poses.length > 0) {
    //console.log(poses[0].keypoints.length)
    //console.log(poses[0].keypoints[0].x);
    for (let kp of poses[0].keypoints) {
      const { x, y, score } = kp;
      console.log(kp);
      if (score > 0.5) {
        fill(255);
        stroke(0);
        strokeWeight(4);
        circle(x, y, 16);
      }
    }
    for (let i = 0; i < poses[0].keypoints.length; i++) {
      // Get adjacent keypoints (Start with nose and left_eye)
      let x = poses[0].keypoints.length.nose
    }
  }
}
If I understand correctly, the line function needs both the (x1, y1) and (x2, y2) coordinates to join two points. I was wondering if anyone has managed to overlay the skeleton. |
st205254 | It seems you are trying to use:
let x = poses[0].keypoints.length.nose
But my understanding is that the results come back in the following form (there is no '.nose' property on the array's length). Instead, if you want the nose you would simply refer to poses[0].keypoints[0].x and poses[0].keypoints[0].y, as described below.
[
  {
    score: 0.8,
    keypoints: [
      {x: 230, y: 220, score: 0.9, name: "nose"},
      {x: 212, y: 190, score: 0.8, name: "left_eye"},
      ...
    ]
  }
]
So using the names of the points (or the array offset as shown in the image below) you can plot them on the canvas as normal. If you draw out the points as dots first you can figure out which 2 points you want to connect eg left_shoulder to right_shoulder to get a line across the top of the body for example. It is up to you to decide what lines you want to draw though. You can use this diagram as reference:
[Image: COCO keypoint indices diagram, 500×511]
Which was taken from the docs available here: tfjs-models/pose-detection at master · tensorflow/tfjs-models · GitHub
Hence, to draw a line from the left shoulder to the right shoulder, you would take array elements 5 and 6 and draw a line between those two x,y co-ordinates. You can add logic to check the confidence score if that is important to you, to omit lines where either point does not have confidence > SOME_THRESHOLD. |
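For example, in p5.js that could look something like this (a rough sketch using the poses array from the code above; the 0.5 threshold is arbitrary):
// Rough sketch: draw one skeleton segment (left_shoulder -> right_shoulder).
const leftShoulder = poses[0].keypoints[5];
const rightShoulder = poses[0].keypoints[6];
if (leftShoulder.score > 0.5 && rightShoulder.score > 0.5) {
  stroke(0, 255, 0);
  strokeWeight(2);
  line(leftShoulder.x, leftShoulder.y, rightShoulder.x, rightShoulder.y);
}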
st205255 | Hello @Jason, thank you. A rookie mistake by me; that helps me better understand how the keypoints are accessed. Three additional questions:
Is there a way to get the keypoints already normalized, or does one have to normalize them afterwards?
I tried a full-screen mode (that is, making the canvas size equal to the width and height of the video), but I noticed that the overlaid skeleton is not up to the mark; it seems the skeleton is displayed at a different scale. I have seen that you explained it in another thread but I could not quite understand it. It feels like the skeleton did not get scaled. Is there a way to overcome this issue? Attaching the snippet and also the picture. (Drawing function: image(video, 0, 0, windowWidth, windowHeight);)
Any suggestions to overcome the jitter? Increasing the confidence threshold sometimes does not seem to work out that well.
let detector;
let poses;
let video;

async function init() {
  const detectorConfig = {
    modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING,
    enableSmoothing: 1,
  };
  // Note: this config is a MoveNet config, but the detector below is created
  // with SupportedModels.PoseNet; the two probably need to match.
  detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.PoseNet,
    detectorConfig
  );
}

async function videoReady() {
  console.log("video ready");
  await getPoses();
}

async function setup() {
  createCanvas(windowWidth, windowHeight);
  video = createCapture(VIDEO, videoReady);
  console.log(video.height);
  video.hide();
  await init();
}

async function getPoses() {
  poses = await detector.estimatePoses(video.elt);
  //console.log(poses);
  setTimeout(getPoses, 0);
}

function draw() {
  background(220);
  image(video, 0, 0, windowWidth, windowHeight);
  if (poses && poses.length > 0) {
    for (let kp of poses[0].keypoints) {
      const { x, y, score } = kp;
      if (score > 0.5) {
        fill(255);
        stroke(0);
        strokeWeight(4);
        circle(x, y, 16);
      }
    }
  }
}
st205256 | The keypoints are simply the x,y co-ordinates found in the image you sent for classification. If you want to do any extra processing then that would be up to you.
It seems you have two different-sized things: the image and the canvas. You can solve this by taking the x,y co-ordinates that are returned and dividing the x by the original image width and the y by the original image height. You then have a ratio you can use to find its new position on a larger version of the image (e.g. the canvas). Then simply multiply the canvas width by the x ratio to find the new relative x, and do the same with the canvas height for y.
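A minimal sketch of that scaling (variable names are taken from the snippet above and are only placeholders):
// Scale a keypoint from the source video frame to the full-size canvas.
const xRatio = kp.x / video.width;
const yRatio = kp.y / video.height;
const canvasX = xRatio * windowWidth;   // same relative position on the larger canvas
const canvasY = yRatio * windowHeight;
circle(canvasX, canvasY, 16);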
From your static image I do not see any jitter and I cannot replicate this on my side. It may be your camera/lighting causing a lack of accuracy in the predictions. If that does not solve it, you could investigate a smoothing algorithm, e.g. a moving average, to reduce jitter at the cost of latency. |
st205257 | I am not entirely sure why but I found the results more jittery using p5.js vs direct, especially on the edges of the video, so I ended up dropping it for my project and going direct. Could be related to this point @Jason mentioned?
Also, it seems p5 does some rendering magic of its own: it draws the video frame to the canvas and then the dots on top of that, which is not terribly efficient since you are pushing the video frame's pixels around twice. You can instead absolutely position a canvas on top of the video element and draw only the circles on it, on top of the already-playing video. That saves moving twice the amount of video pixel data around and leaves you only needing to render dots to the canvas based on its rendered size. |
st205258 | @inuit, I don't think so; I think the rescaling works, but you are correct that it is not as smooth in p5.js. I am quite new to JS and I thought it could be the easier way to experiment a bit.
What do you mean by 'going direct'? Are you doing something in react-native-expo? Last I checked it was not even loading the models and a lot of the functionality seemed to be broken. That was my understanding; I am not sure though. |
st205259 | Hello. I'm new to NLP and I'm trying to follow TensorFlow's tutorial on Image Captioning (Image captioning with visual attention | TensorFlow Core), but I ran into a problem when trying to preprocess the images with InceptionV3. As said in the tutorial, I have a preprocessing function
def load_image(image_path):
    img = tf.io.read_file(image_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (299, 299))
    img = tf.keras.applications.inception_v3.preprocess_input(img)
    return img, image_path
Then I use it to get a BatchDataset (I get a <BatchDataset shapes: ((None, 299, 299, 3), (None,)), types: (tf.float32, tf.string)>)
# Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE).batch(16)
Up to this point, everything works, but then when I try
for img, path in image_dataset:
    # Do something
Either nothing happens or the kernel dies. Is there a way to fix or circumvent that issue? |
st205260 | Hi,
I've just tried the colab you linked to and it passed the part you mentioned (and finished fine).
Is your runtime configured to use GPU?
When the kernel dies it might be an out-of-memory issue; did you notice that on the RAM bar in the top right corner? |
st205261 | Thanks for the reply, @lgusm.
I don't think this is a memory issue. I purposefully reduced the number of images down to 64 so that such a problem can't happen. I also checked my task manager and didn't see any problem there either.
Could there be a problem with the TensorFlow version that I'm using? It's not likely the problem, but I think I have an earlier version than the one used on Colab. |
st205262 | Given that:
WINTERSDORFF_Raphael:
Could there be a problem with the TensorFlow version that I'm using? It's not likely the problem, but I think I have an earlier version than the one used on Colab.
you may try running the tutorial with v2.5 on your local machine and check if that fixes your issue.
The notebook example is runnable in Colab end-to-end and Colab is loaded with the current latest version (TF 2.5).
WINTERSDORFF_Raphael:
Either nothing happens or the kernel dies. Is there a way to fix or circumvent that issue?
This might be due to the compute/memory requirements of the example, but we don't have all the information about your setup to be able to completely debug this. As you're probably aware, it's not a small dataset for a typical demo:
"… large download ahead. You'll use the training set, which is a 13GB file…"
And the Caching the features extracted from InceptionV3 step can be compute intensive. It comes with a warning in the tutorial:
"You will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but also memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this exceeds the memory limitations of Colab (currently 12GB of memory)."
Also keep in mind that, as the doc says:
"Performance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random access disk I/O), but that would require more code."
Maybe some or all of that is contributing to the issue.
Let us know if upgrading to TF 2.5 fixes anything. |
st205263 | Yes, the amount of memory needed is probably very large, which is why I restricted the number of images to 64 just to see if the rest of the code works.
I managed to upgrade tensorflow to the latest version (TF2.5), and now if I write
for img, path in image_dataset:
    pass
the kernel doesn't die anymore, but I get an error:
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-20-55361e928603> in <module>
----> 1 for img, path in image_dataset:
2 pass
~\anaconda3\envs\TF2_5\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py in __next__(self)
759 def __next__(self):
760 try:
--> 761 return self._next_internal()
762 except errors.OutOfRangeError:
763 raise StopIteration
~\anaconda3\envs\TF2_5\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py in _next_internal(self)
745 self._iterator_resource,
746 output_types=self._flat_output_types,
--> 747 output_shapes=self._flat_output_shapes)
748
749 try:
~\anaconda3\envs\TF2_5\lib\site-packages\tensorflow\python\ops\gen_dataset_ops.py in iterator_get_next(iterator, output_types, output_shapes, name)
2722 _result = pywrap_tfe.TFE_Py_FastPathExecute(
2723 _ctx, "IteratorGetNext", name, iterator, "output_types", output_types,
-> 2724 "output_shapes", output_shapes)
2725 return _result
2726 except _core._NotOkStatusException as e:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 117: invalid continuation byte |
st205264 | I fixed the problem, so in case someone has the same problem, here's how I solved it:
I found out that the file 'captions_train2014.json' contains image IDs that do not exist in the 'train2014' folder, so the error occurred when trying to iterate over the images. More precisely, there are 82783 different IDs, but I have only 74891 images. I fixed that by verifying that the image path exists before opening the image. I have no idea why it works in Colab though (but maybe my download just went wrong). |
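A small sketch of that kind of check, assuming img_name_vector and train_captions are the lists built earlier in the tutorial (your variable names may differ):
import os

# Keep only the annotation entries whose image file actually exists on disk.
filtered_paths, filtered_captions = [], []
for path, caption in zip(img_name_vector, train_captions):
    if os.path.exists(path):
        filtered_paths.append(path)
        filtered_captions.append(caption)

img_name_vector, train_captions = filtered_paths, filtered_captions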
st205265 | import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.models import Model

(stl_train, stl_test, stl_unlabelled), ds_info = tfds.load('stl10', split=['train', 'test', 'unlabelled'], shuffle_files=True, as_supervised=True, with_info=True)

base_model = ResNet50(input_shape=ds_info.features['image'].shape, weights=None)

def reshapeImage(image, label):
    return tf.reshape(image, (1, 96, 96, 3)), tf.convert_to_tensor(onehot_encoded[label.numpy()])

stl_train = stl_train.map(reshapeImage, num_parallel_calls=tf.data.experimental.AUTOTUNE)
The labels look like tf.Tensor(4, shape=(), dtype=int64). I want the 4 in this example, which I will use to build the one-hot encoded label, but I am unable to do so. |
st205266 | tf.convert_to_tensor(onehot_encoded[label.numpy()])
The problem is actually in this part of the return statement. If I simply keep the labels as categorical numbers, it works, but then I will be having problems with softmax |
st205267 | Update: it worked with stl_train = stl_train.map(lambda x, y: (x, tf.one_hot(integer_encoded,depth=10)[y]))
Can you explain why it did ? y is literally a tensor! |
st205268 | onehot_encoded is a numpy array generated from sklearn.preprocessing.OneHotEncoder |
st205269 | I've run into similar problems before when part of the input is not a Tensor. I think tf.data builds a graph and might have problems with NumPy arrays. What happens if you wrap your function in tf.numpy_function? (tf.numpy_function | TensorFlow Core v2.5.0)
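A rough sketch of that idea (untested; it assumes onehot_encoded is the NumPy array produced by sklearn's OneHotEncoder mentioned above):
def np_lookup(label):
    # Runs outside the graph, so plain NumPy indexing into the sklearn output is fine here.
    return onehot_encoded[label].astype(np.uint8)

def reshapeImage(image, label):
    onehot = tf.numpy_function(np_lookup, [label], tf.uint8)
    onehot.set_shape([10])  # numpy_function loses static shape information
    return tf.reshape(image, (1, 96, 96, 3)), onehot

stl_train = stl_train.map(reshapeImage, num_parallel_calls=tf.data.experimental.AUTOTUNE)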
st205270 | Hi, I converted my saved model to model json using tensorflowjs_converter
tensorflowjs_converter \
--input_format=tf_saved_model \
--output_format=tfjs_graph_model \
--saved_model_tags=serve \
--signature_name=serving_default \
/saved_model \
/json-model
model.predict({input_tensor: inputTensor});
throws the following errors
Error: The shape of dict['input_tensor'] provided in model.execute(dict) must be [1,-1,-1,3], but was [1,600,800,4]
at Object.assert (/object-detection/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:337:15)
at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7478:28
at Array.forEach (<anonymous>)
at GraphExecutor.checkInputShapeAndType (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7470:29)
at GraphExecutor.<anonymous> (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7272:34)
at step (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:81:23)
at Object.next (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:62:53)
at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:55:71
at new Promise (<anonymous>)
at __awaiter (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:51:12)
If I add --control_flow_v2=True to the conversion, loadGraphModel then fails to load the result.
I'm running @tensorflow/tfjs-node v3.7.0 and I'm still getting the error "Cannot read property 'outputs' of undefined" when I try to load the model.json that was converted from the saved model using tensorflowjs_converter.
When I changed to @tensorflow/tfjs-node@next, it would throw "Cannot read property 'children' of undefined"
model = await tf.loadGraphModel(modelPath);
2021-07-10 13:26:00.618147: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
TypeError: Cannot read property 'outputs' of undefined
at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3851:31
at Array.forEach ()
at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3848:29
at Array.forEach ()
at OperationMapper.mapFunction (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3846:18)
at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3679:56
at Array.reduce ()
at OperationMapper.transformGraph (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3678:48)
at GraphModel.loadSync (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7763:68)
at GraphModel. (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7737:52) |
st205271 | If this is an image model are you loading RGBA when it expects RGB? That would maybe explain the 4 vs 3 error you have in the first case? |
st205272 | @Jason, I'm not sure. It works when I load it with tfnode.node.loadSavedModel().
Maybe it's the way I'm loading the model.json? Can you elaborate? |
st205273 | Was the original Python model Keras saved model (.h5 file) or regular TF savedModel (typically .pb file)?
For Keras saved model you would use:
tf.loadLayersModel(MODEL_URL);
For TF SavedModel you would use:
tf.loadGraphModel(MODEL_URL);
Looking at your error message though in the first instance it seems your passing a tensor of the wrong shape. Check the tensor of the data you are sending into the model and figure out why it is 4 instead of 3 in that last dimension. What image did you load? PNG? What exactly is your input_tensor? Do you have this on a Glitch or Codpen somewhere for me to see running? |
st205274 | It's TF SavedModel.
using loadGraphModel()
loading png image
const image = fs.readFileSync(imageFile);
let decodedImage = tfnode.node.decodeImage(image);
let inputTensor = decodedImage.expandDims();
model.predict()
the repo is here GitHub - playground/tfjs-object-detection 3 |
st205275 | Can you try with JPG? I think it may be because PNG has 4 channels while JPG will have 3. If that is the case then you simply need to force the channels to be 3 and ignore the alpha channel used for transparency on PNGs.
There is a parameter to force it to be 3: the optional channels argument of decodeImage, documented on js.tensorflow.org. |
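Something like this, as a sketch (reusing the tfjs-node reading code from the snippet above):
// Force 3 channels so PNGs with an alpha channel still produce a [h, w, 3] tensor.
const image = fs.readFileSync(imageFile);
let decodedImage = tfnode.node.decodeImage(image, 3); // second argument = channels
let inputTensor = decodedImage.expandDims();          // shape [1, h, w, 3]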
st205276 | with jpg image, I get Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32 |
st205277 | In that case you will need to convert the resulting tensor to be integer values and not floating point.
You can use something like:
tf.cast(data, 'int32')
However, before doing that, inspect the resulting Tensor to see what the values are after the image read. Depending on how it decoded, if it took the RGB values and normalized them (e.g. instead of numbers from 0-255 it gives numbers from 0-1 as floating point), then you would want to convert back to whole numbers by multiplying by 255 first. Otherwise 0.1239803 would just become 0 in a cast to integer, which is not what you want!
However, if you are using the decode method I listed above, it automatically returns an int32 tensor, so it is probably better to use that if you know your images are always PNG (see the decodeImage docs on js.tensorflow.org).
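As a rough sketch of that cast (the 255 rescale is only needed if the decode step normalized values to the 0-1 range):
// If values are already 0-255 stored as floats, a plain cast is enough.
let intTensor = tf.cast(decodedImage, 'int32');
// If the decode step normalized to 0-1, rescale first (hypothetical case):
// let intTensor = tf.cast(decodedImage.mul(255), 'int32');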
st205278 | Now, it's getting Image:
kept: false,
isDisposedInternal: false,
shape: [ 1, 600, 800, 3 ],
dtype: 'int32',
size: 1440000,
strides: [ 1440000, 2400, 3 ],
dataId: {},
id: 918,
rankType: '4',
scopeId: 2
}
Error: This execution contains the node 'StatefulPartitionedCall/map/while/exit/_727', which has the dynamic op 'Exit'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [StatefulPartitionedCall/map/TensorArrayV2Stack_1/TensorListStack]
If I use loadSavedModel(savedModel) without the conversion, it works fine for both jpg and png images. |
st205279 | playground:
Please use model.executeAsync() instead
Did you try what it recommended? E.g. don't use predict but use executeAsync() instead? |
st205280 | Actually, I have tried that earlier and this is what throws with model.executeAsync
with jpg image
Image: 1440000 bytes with shape: Tensor {
kept: false,
isDisposedInternal: false,
shape: [ 1, 600, 800, 3 ],
dtype: 'int32',
size: 1440000,
strides: [ 1440000, 2400, 3 ],
dataId: {},
id: 918,
rankType: '4',
scopeId: 2
}
Error: Invalid TF_Status: 3
Message: In[0] and In[1] has different ndims: [1,8,8,64,2] vs. [2,1]
With png image
Error: The shape of dict['input_tensor'] provided in model.execute(dict) must be [1,-1,-1,3], but was [1,600,800,4] |
st205281 | Hello again! Sorry for the slight delay here. I have been discussing with our software engineers on the team. It seems we may have found a bug here and they would like you to submit an issue on the TFJS github which you can find here: GitHub - tensorflow/tfjs: A WebGL accelerated JavaScript library for training and deploying ML models. 4
Once you have submitted a bug, feel free to let me know the link and I can also send that link to the SWE directly who is looking into this to kick that process off.
Thanks for your patience here and for being part of the community. |
st205282 | playground:
Error: The shape of dict['input_tensor'] provided in model.execute(dict) must be [1,-1,-1,3], but was [1,600,800,4]
Hi @Jason,
Thanks for the update, I have submitted a bug here: "Error: The shape of dict['input_tensor'] provided in model.execute(dict) must be [1,-1,-1,3]" · Issue #5366 · tensorflow/tfjs · GitHub.
Also, this might be a regression bug with the recent release of tfjs-node v3.8.0; I have posted that here: "Failed to find bogomips warning" · Issue #38260 · tensorflow/tensorflow · GitHub.
A different question: is there an example of training a SavedModel using Node.js that you can point me to, similar to the mnist-node example? |
st205283 | This is the only codelab for Node that we have right now: "TensorFlow.js Training in Node.js Codelab" on Google Codelabs.
In this codelab, you will learn how to build and train a baseball pitch estimation model using TensorFlow.js in a Node.js server, and serve metrics to a client.
If you do get this working it may be a good blog writeup if you are interested! |
st205284 | Hi @Jason. Another question if I may. The reason I'm asking for a Node.js version for training models is that the environment I have been using to train models on my Mac has, for some reason, been acting up over the last couple of weeks and is not producing the proper results like it did before. I'm not sure what changed: the trained SavedModel, instead of recognizing the proper objects, now produces many random bounding boxes. In the last couple of days I have been trying to dockerize the training pipeline using Ubuntu 20.04 and 21.04 with different Python versions (3.7, 3.8 and 3.9), but the training keeps failing with the same errors:
WARNING:tensorflow:Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
W0723 03:05:32.545922 140238382945856 utils.py:78] Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
W0723 03:05:44.013801 140238382945856 utils.py:78] Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
Killed
at ChildProcess.exithandler (node:child_process:397:12)
at ChildProcess.emit (node:events:394:28)
at maybeClose (node:internal/child_process:1067:16)
at Socket.<anonymous> (node:internal/child_process:453:11)
at Socket.emit (node:events:394:28)
at Pipe.<anonymous> (node:net:662:12) {
killed: false,
code: 137,
signal: null,
cmd: 'python /server/models/research/object_detection/model_main_tf2.py --pipeline_config_path=/server/data-set/ssd_efficientdet_d0_512x512_coco17_tpu-8.config --model_dir=/server/data-set/training --alsologtostderr'
Any ideas or suggestions? |
st205285 | Hi, I'm new to machine learning, so excuse me if this is a foolish question. I'm working on a data science project in R, where I attempt to predict whether a Swift code sample is correct or not. The data comes from IBM's Project CodeNet, which has 14 million code samples from code problem websites (6000 of which are in Swift). Each code sample is annotated with whether it was accepted or rejected.
I was thinking of parsing all of the code samples into their Abstract Syntax Tree (in this case the Swift Abstract Syntax Tree), which is basically an ultra-labelled version of the source code, and then passing the parsed AST into a neural net, so that it can learn how to predict whether a code sample is right or wrong.
Is this a text classification problem? The Abstract Syntax Tree is all text, and I'm trying to classify it into accepted / rejected, but it's also extremely structured. So can I consider it like one, or is there some other tool which would be better suited for this situation? |
st205286 | As I've been looking around, I've come to the realisation that this is a supervised binary graph classification problem, so a graph convolutional network using the stellargraph library should do the trick. Let me know if you think I'm totally wrong about this! |
st205287 | After following the transfer learning tutorial on TensorFlow's site, I have a question about how model.evaluate() works in comparison to calculating accuracy by hand.
At the very end, after fine-tuning, in the Evaluation and prediction section, we use model.evaluate() to calculate the accuracy on the test set as follows:
loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
6/6 [==============================] - 2s 217ms/step - loss: 0.0516 - accuracy: 0.9740
Test accuracy : 0.9739583134651184
Next, we generate predictions manually from one batch of images from the test set as part of a visualization exercise:
# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
However, it's also possible to extend this functionality to calculate predictions across the entire test set and compare them to the actual values to yield an average accuracy:
all_acc = tf.zeros([], tf.int32)  # initialize array to hold all accuracy indicators (single element)
for image_batch, label_batch in test_dataset.as_numpy_iterator():
    predictions = model.predict_on_batch(image_batch).flatten()  # run batch through model and return logits
    predictions = tf.nn.sigmoid(predictions)  # apply sigmoid activation function to transform logits to [0,1]
    predictions = tf.where(predictions < 0.5, 0, 1)  # round down or up accordingly since it's a binary classifier
    accuracy = tf.where(tf.equal(predictions, label_batch), 1, 0)  # correct is 1 and incorrect is 0
    all_acc = tf.experimental.numpy.append(all_acc, accuracy)
all_acc = all_acc[1:]  # drop first placeholder element
avg_acc = tf.reduce_mean(tf.dtypes.cast(all_acc, tf.float16))
print('My Accuracy:', avg_acc.numpy())
My Accuracy: 0.974
Now, if model.evaluate() generates predictions by applying a sigmoid to the logit model outputs and using a threshold of 0.5 like the tutorial suggests, my manually-calculated accuracy should equal the accuracy output of TensorFlow's model.evaluate() function. This is indeed the case for the tutorial: My Accuracy: 0.974 = accuracy from the model.evaluate() function. However, when I try this same code with a model trained using the same convolutional base as the tutorial, but different Gabor images (not cats & dogs like the tutorial), my accuracy no longer equals the model.evaluate() accuracy:
current_set = set17  # define set to process.
all_acc = tf.zeros([], tf.float64)  # initialize array to hold all accuracy indicators (single element)
loss, acc = model.evaluate(current_set)  # now test the model's performance on the test set
for image_batch, label_batch in current_set.as_numpy_iterator():
    predictions = model.predict_on_batch(image_batch).flatten()  # run batch through model and return logits
    predictions = tf.nn.sigmoid(predictions)  # apply sigmoid activation function to transform logits to [0,1]
    predictions = tf.where(predictions < 0.5, 0, 1)  # round down or up accordingly since it's a binary classifier
    accuracy = tf.where(tf.equal(predictions, label_batch), 1, 0)  # correct is 1 and incorrect is 0
    all_acc = tf.experimental.numpy.append(all_acc, accuracy)
all_acc = all_acc[1:]  # drop first placeholder element
avg_acc = tf.reduce_mean(all_acc)
print('My Accuracy:', avg_acc.numpy())
print('Tf Accuracy:', acc)
My Accuracy: 0.832
Tf Accuracy: 0.675000011920929
Does anyone know why there would be a discrepancy? Does model.evaluate() not use a sigmoid? Or does it use a different threshold than 0.5? Or perhaps it's something else I'm not considering? Please note, my new model was trained using Gabor images, which are different than the cats and dogs from the tutorial, but the code was the same.
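One rough way to cross-check which of these is happening (a diagnostic sketch of my own, not from the tutorial) is to feed the same predictions through tf.keras.metrics.BinaryAccuracy with and without the sigmoid and see which number matches model.evaluate():
metric_logits = tf.keras.metrics.BinaryAccuracy(threshold=0.5)
metric_probs = tf.keras.metrics.BinaryAccuracy(threshold=0.5)
for image_batch, label_batch in current_set.as_numpy_iterator():
    logits = model.predict_on_batch(image_batch).flatten()
    metric_logits.update_state(label_batch, logits)                # accuracy computed on raw logits
    metric_probs.update_state(label_batch, tf.nn.sigmoid(logits))  # accuracy computed on sigmoid outputs
print('accuracy on raw logits:', metric_logits.result().numpy())
print('accuracy on sigmoid   :', metric_probs.result().numpy())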
Thank you in advance for any insight! |
st205288 | I am working with a small team of amateur radio astronomers in the UK (British Astronomical Association | Supporting amateur astronomers since 1890) to set up a citizen-science scheme to record meteors. This is achieved using radio reflections off the plasma tails left by meteors entering the Earth's atmosphere. Currently, analysing these reflections is pretty manual and we would like to automate some of this, possibly using ML. There are two problems we would like to overcome: 1) counting actual meteor hits, and 2) rejecting aircraft echoes.
The latter is one of the main challenges and is currently being done manually, as well as on a zoo basis.
I am sure, looking at the work others (and I myself) have done with ML in TensorFlow, that the aircraft echoes could certainly be identified, and hopefully the actual meteors too. Here is a link to some of the data we are collecting, with examples of what we are looking for: Zooniverse.
My questions are: 1) Do you think this would be possible? 2) Are there any of you who may be interested in helping out on the project? |
st205289 | Nice initiative!
I think it would certainly be possible to have a classification task, given that you already have quite a lot of data. To reject the aircraft echos, I was thinking about anomaly detection using auto-encoders, again given the fact that you probably have a lot of data on actual meteor reflections.
Either way, I think there are several ways to tackle this project |
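As a very rough illustration of the auto-encoder idea (a sketch only; the 64x64 spectrogram input and layer sizes are placeholder assumptions, not tuned for this data):
import tensorflow as tf

# Placeholder: each echo represented as a flattened 64x64 spectrogram patch.
inputs = tf.keras.Input(shape=(64 * 64,))
encoded = tf.keras.layers.Dense(128, activation='relu')(inputs)
encoded = tf.keras.layers.Dense(32, activation='relu')(encoded)
decoded = tf.keras.layers.Dense(128, activation='relu')(encoded)
decoded = tf.keras.layers.Dense(64 * 64, activation='sigmoid')(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

# Train only on known meteor echoes; at inference time a high reconstruction error
# suggests the sample (e.g. an aircraft echo) does not look like a meteor reflection.
# autoencoder.fit(meteor_spectrograms, meteor_spectrograms, epochs=50, batch_size=32)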
st205290 | Hello, I'm trying to load the CIFAR dataset.
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
from tensorflow.keras import datasets, layers
from glob import glob
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_paths = glob('dataset/cifar/train/*.png')
test_paths = glob('dataset/cifar/train/*.png')
def get_label_name(path):
    lbl_name = path.split('_')[-1].replace('.png', '')
    return lbl_name

classes = np.unique([get_label_name(path) for path in train_paths])

def onehot_encoding(label_name):
    onehot_encoding = tf.cast(classes == label_name, tf.uint8)
    return onehot_encoding

def read_dataset(path):
    gfile = tf.io.read_file(path)
    image = tf.io.decode_image(gfile)
    class_name = get_label_name(path)
    label = onehot_encoding(class_name)
    return image, label
train_dataset = tf.data.Dataset.from_tensor_slices(train_paths)
train_dataset = train_dataset.map(read_dataset)
train_dataset = train_dataset.batch(batch_size=32)
train_dataset = train_dataset.shuffle(buffer_size=len(train_paths))
train_dataset = train_dataset.repeat()
However, the error message "AttributeError: 'Tensor' object has no attribute 'split'" comes up.
Is there any way to solve this problem?
I am looking forward to any help. Thanks in advance.
Kind regards,
Yoon Ho |
st205291 | The issue is with this line: class_name = get_label_name(path). You are trying to get the class_name, but path is a Tensor value there rather than a string, so it should instead work from the list of classes.
Find the working code below:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
from tensorflow.keras import datasets, layers
from glob import glob
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_paths = glob('dataset/cifar/train/*.png')
test_paths = glob('dataset/cifar/train/*.png')

def get_label_name(path):
    lbl_name = path.split('/')[-1].replace('.png', '')
    return lbl_name

classes = np.unique([get_label_name(path) for path in train_paths])
classes

def onehot_encoding(label_name):
    onehot_encoding = tf.cast(classes == label_name, tf.uint8)
    return onehot_encoding

def read_dataset(path):
    gfile = tf.io.read_file(path)
    image = tf.io.decode_image(gfile)
    class_name = [get_label_name(path) for path in train_paths]
    label = onehot_encoding(class_name)
    return image, label

train_dataset = tf.data.Dataset.from_tensor_slices(train_paths)
train_dataset = train_dataset.map(read_dataset)
train_dataset = train_dataset.batch(batch_size=32)
train_dataset = train_dataset.shuffle(buffer_size=len(train_paths))
train_dataset = train_dataset.repeat()
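For what it's worth, a tensor-friendly alternative (my own untested sketch; adapt the filename pattern to your data) is to extract the label with tf.strings ops so it also works on the path Tensor inside Dataset.map:
class_tensor = tf.constant(classes)

def read_dataset_v2(path):
    image = tf.io.decode_image(tf.io.read_file(path))
    # e.g. '.../123_frog.png' -> 'frog', using string ops that accept Tensors
    lbl_name = tf.strings.regex_replace(tf.strings.split(path, '_')[-1], '\\.png$', '')
    label = tf.cast(tf.equal(class_tensor, lbl_name), tf.uint8)
    return image, label

train_dataset = tf.data.Dataset.from_tensor_slices(train_paths).map(read_dataset_v2)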
st205292 | Hi folks.
If you are a beginner in Machine Learning, it might be hard for you to brainstorm project ideas to supplement your learning. At least, I have faced this challenge and to some extent, I still face it. This is why I put together some points I feel to be important and may be of help:
docs.google.com: "Your first machine learning project" (YFP - Sayak), by Sayak Paul, Deep Learning Associate at PyImageSearch, DSC WOW, December 07, 2020.
Let me know what you think. I am also open to adding more relevant pointers to the above deck. |
st205293 | Hi,
I am also learning machine learning and I am working on my live project now! |
st205294 | I'm excited to uncover this page. I need to thank you for your time for this, particularly fantastic read!! I definitely really liked every part of it and I also have you saved to fav to look at new information in your site. |
st205295 | I tried to convert my PyTorch models to TensorFlow Lite via ONNX, but my inference time with TensorFlow Lite is twice as slow as TensorFlow and PyTorch. I run the TensorFlow Lite model in Google Colab and this is my first time using TensorFlow Lite.
Here is my code to convert from Tensorflow to TensorFlow Lite:
converter = tf.lite.TFLiteConverter.from_saved_model("model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
model_lite = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(model_lite)
I used Python's time module to measure the latency of the frameworks. I don't know why my Lite version is slower than the others. Any suggestions will help me a lot. |
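(For anyone reproducing this, a minimal TFLite timing sketch looks roughly like the following; the random input is just a placeholder for the real data.)
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.random.rand(*input_details[0]['shape']).astype(input_details[0]['dtype'])
start = time.perf_counter()
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
_ = interpreter.get_tensor(output_details[0]['index'])
print('latency (s):', time.perf_counter() - start)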
st205296 | TFLite is not actually meant to perform well on commodity hardware. Its opset is optimized to run faster primarily on mobile hardware. However, if you build TFLite for your platform (preferably with XNNPACK enabled) then you may get some benefits.
Here's some more information: "Build TensorFlow Lite with CMake" on tensorflow.org.
st205297 | So when I deploy it to mobile this problem may be fixed? I am testing it with Colab but will deploy it to a mobile app in the future. |
st205298 | Yes. You should test it on a real mobile device to set the expectations right.
You could also set up benchmarks on Firebase Test Lab, which lets you test your app on devices hosted in a Google data center. |
st205299 | I converted a Keras model that includes a batch normalization layer to TFLite.
Opening the .tflite file in Netron, the batch normalization operation is split into two operations: a multiplication and an addition.
When doing inference on a couple of test samples with TFLite, however, the values in the batch normalization layer do not match a plain multiply-and-add. There seem to be more operations than just a simple multiplication and addition.
Does anyone know if TensorFlow Lite applies any other operation when doing batch normalization? |
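For reference, the standard inference-time fold of batch normalization into one multiply and one add looks like this (a sketch of the usual arithmetic; it is an assumption that the converter used exactly these constants for this particular model):
import numpy as np

# y = gamma * (x - moving_mean) / sqrt(moving_var + eps) + beta
# folds into y = scale * x + offset with:
def fold_batchnorm(gamma, beta, moving_mean, moving_var, eps=1e-3):
    scale = gamma / np.sqrt(moving_var + eps)
    offset = beta - moving_mean * scale
    return scale, offset  # the Mul and Add constants you would expect to see in Netron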