st205600 | I want to implement a grappler graph optimization pass that splits the data input node only. For example, a node computing y = wx + b may have 3 input nodes: w, x and b.
I only want to do tf.split(x), not touch w and b.
So, is there any way to pick x out from a list of nodes (for example, to get x from node->inputs())?
Similarly, if the above w is trainable and b is not trainable, how could I get this information from the graph?
Thanks in advance |
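An actual grappler pass is written in C++, but the idea can be sketched in Python over a GraphDef: distinguish the data input from the weights by the op type of each input's producer node. The op set below is an assumption for illustration. Trainability itself is not stored on the GraphDef node (in TF1 graphs it lives in the MetaGraphDef's trainable_variables collection), so a pass would have to receive that as extra metadata.

import tensorflow as tf

# Producers treated as weights/constants rather than data (assumed set, adjust as needed)
WEIGHT_OPS = {"VarHandleOp", "VariableV2", "ReadVariableOp", "Const"}

def data_inputs(graph_def, node):
    """Return the inputs of `node` whose producer is not a variable or constant."""
    producers = {n.name: n for n in graph_def.node}
    result = []
    for inp in node.input:
        name = inp.split(":")[0].lstrip("^")  # drop output port and control-dep marker
        prod = producers.get(name)
        if prod is not None and prod.op not in WEIGHT_OPS:
            result.append(inp)
    return result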
st205601 | Hi, I encountered an accuracy issue when computing the backprop of some layers through TensorFlow ops. The gradients were computed in two different ways:
tf.gradients
compute the gradients directly through TF APIs; taking softmax as an example, we can compute the gradients as follows:
sum_channels = math_ops.reduce_sum(grad_softmax * softmax, -1, keepdims=True)
grad = (grad_softmax - sum_channels) * softmax
But I found that the results from the two implementations are not exactly the same. Does anyone know what the problem is?
Another question: when training in TensorFlow, are the gradients the same as those computed through tf.gradients? Thanks.
The entire testing code is as follows (tested with TF 1.15):
import numpy as np
import tensorflow as tf

batch_size = 20
num_heads = 8
from_seq_len = 50
to_seq_len = 50

class testSoftmaxBackprop:
    def __init__(self,
                 batch_size,
                 num_heads,
                 from_seq_len,
                 to_seq_len):
        self.batch_size = batch_size
        self.num_heads = num_heads
        self.from_seq_len = from_seq_len
        self.to_seq_len = to_seq_len
        self.input_data = tf.placeholder(tf.float32, shape=[
            self.num_heads * self.batch_size,
            self.from_seq_len,
            self.to_seq_len
        ])
        self.out = tf.nn.softmax(self.input_data)
        # self.out = tf.identity(softmax)

    def forward(self, np_data):
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            out = sess.run([self.out],
                           feed_dict={
                               self.input_data: np_data
                           })
        return out[0]

    def back_auto(self, np_grads, np_data, np_softmax):
        grads = tf.placeholder(tf.float32,
                               shape=[
                                   self.num_heads * self.batch_size,
                                   self.from_seq_len,
                                   self.to_seq_len
                               ])
        g = tf.gradients(self.out, [self.input_data], grad_ys=grads)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            g_out = sess.run(g,
                             feed_dict={
                                 self.input_data: np_data,
                                 grads: np_grads,
                             })
        return g_out

    def back_api(self, np_grads, np_data, np_softmax):
        tf_grads = tf.constant(np_grads, dtype=tf.float32)
        # tf_data = tf.constant(np_data, dtype=tf.float32)
        tf_softmax = tf.constant(np_softmax, dtype=tf.float32)
        sum_channels = tf.reduce_sum(tf_grads * tf_softmax, axis=-1, keepdims=True)
        d_out = (tf_grads - sum_channels) * tf_softmax  # [h*N, T_q, T_k]
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            grad = sess.run([d_out])
        return grad

def main():
    np.random.seed(0)
    np_data = np.random.rand(num_heads * batch_size, from_seq_len, to_seq_len)
    np_data = np_data.astype(np.float32)
    np_grad = np.random.rand(num_heads * batch_size, from_seq_len, to_seq_len)
    np_grad = np_grad.astype(np.float32)
    test_back = testSoftmaxBackprop(batch_size, num_heads, from_seq_len, to_seq_len)
    np_softmax = test_back.forward(np_data)
    grad_auto = test_back.back_auto(np_grad, np_data, np_softmax)
    grad_api = test_back.back_api(np_grad, np_data, np_softmax)
    api_data = grad_api[0]
    auto_data = grad_auto[0]
    api_save = api_data.reshape(-1)
    auto_save = auto_data.reshape(-1)
    np.savetxt("api_data.txt", api_save)
    np.savetxt("auto_data.txt", auto_save)
    print("Results:")
    print("Comparison: " + str(np.allclose(api_data, auto_data, atol=5e-6)))
    print("max diff " + str(np.fabs(api_data - auto_data).max()))

if __name__ == "__main__":
    main() |
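A quick way to check whether the mismatch is just float32 rounding (rather than a wrong formula) is to evaluate the same hand-written backprop in float64 with NumPy and compare both float32 results against it. A minimal sketch, assuming the arrays (np_grad, np_softmax, api_data, auto_data) from the script above:

import numpy as np

def softmax_grad_f64(grad, softmax):
    """Analytic softmax backprop, evaluated in float64 as a rounding-free reference."""
    grad = grad.astype(np.float64)
    softmax = softmax.astype(np.float64)
    s = np.sum(grad * softmax, axis=-1, keepdims=True)
    return (grad - s) * softmax

# ref = softmax_grad_f64(np_grad, np_softmax)
# print(np.abs(ref - api_data).max())   # error of the hand-written float32 version
# print(np.abs(ref - auto_data).max())  # error of the tf.gradients version
# If both errors are on the order of 1e-7 (float32 epsilon), the difference between
# the two implementations is just summation order: float32 reduce_sum is not associative.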
st205602 | I have trained a PyTorch model, which I converted to Keras using the pytorch2keras lib.
I am using this Keras model to convert to tflite, and I want to run the tflite on Coral devices.
Things I noticed:
Keras model size: 57.6 MB
Using dynamic range quantization, the generated tflite is 15 MB
Using integer-only quantization, the generated tflite is also 15 MB
Ideally, we should be able to reduce the model size even further as we convert fp32 to int8.
Can anyone help me understand why this is happening?
Sharing my conversion notebook and Keras model: data.zip - Google Drive |
st205603 | For the most part the model is dominated by weights. Using dynamic range quantization the weights are stored in int8, the same as in integer-only quantization. So we wouldn't expect the full-int8 model to be dramatically smaller than the dynamic-range-quantized model (only the bias terms should make up the difference). |
st205604 | Actually, you should make sure that your model is actually fully integer (did you provide a representative dataset?) |
st205605 | David_Rim:
should make sure that your model is actually fully integer (did you provide a representative dataset?)
Yes @David_Rim, I had provided the representative dataset |
st205606 |
def representative_data_gen():
    dataset_list = tf.data.Dataset.list_files('/home/ubuntu/livesense/lane_detection/GCO_BDD/bdd_images/bbox_images/*')
    for i in range(100):
        image = next(iter(dataset_list))
        image = tf.io.read_file(image)
        image = tf.io.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, [img_h, img_w])
        image = tf.cast(image / 255., tf.float32)
        image = tf.expand_dims(image, 0)
        image = tf.reshape(image, (1, 3, 288, 352))
        print(" reshape shape :", image.shape)
        print(i)
        yield [image]

def frozen_to_tflite_quant(fname):
    path = "./frozen_models/" + fname + "_frozen_graph.pb"
    filename = fname + "_frozen_tflite_quant.tflite"
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        path,                        # TensorFlow frozen-graph .pb model file
        input_arrays=['x'],          # name of input arrays as defined in torch.onnx.export before
        output_arrays=['Identity']   # name of output arrays as defined in torch.onnx.export before
    )
    converter.optimizations = [tf.compat.v1.lite.Optimize.DEFAULT]
    # And this sets the representative dataset so we can quantize the activations
    converter.representative_dataset = representative_data_gen
    # converter.experimental_new_converter = True
    # This ensures that if any ops can't be quantized, the converter throws an error
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    # These set the input and output tensors to uint8
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tf_lite_model = converter.convert()
    # Save the model.
    with open(filename, 'wb') as f:
        f.write(tf_lite_model) |
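To double-check that a converted model really is fully integer, one can list the tensor dtypes with the TFLite Interpreter. A minimal sketch (the model path is hypothetical):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_frozen_tflite_quant.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_tensor_details():
    print(detail["name"], detail["dtype"])
# A fully integer-quantized model should only contain int8/uint8 tensors
# (plus int32 biases); any float32 tensors point to ops the converter left in float.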
st205607 | I’m very new to coding so please explain in layman’s terms what to do about this error. Thank you!!
Traceback (most recent call last):
File "C:\Users\Maggie\Desktop\Manatees\Tensorflow Object Detection\tfod\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Tensorflow\models\research\object_detection\builders\model_builder_tf2_test.py", line 22, in <module>
import tensorflow.compat.v1 as tf
File "C:\Users\Maggie\Desktop\Manatees\Tensorflow Object Detection\tfod\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as module_util
File "C:\Users\Maggie\Desktop\Manatees\Tensorflow Object Detection\tfod\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
File "C:\Users\Maggie\Desktop\Manatees\Tensorflow Object Detection\tfod\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Maggie\Desktop\Manatees\Tensorflow Object Detection\tfod\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
Failed to load the native TensorFlow runtime. |
st205608 | TensorFlow release binaries are prebuilt with AVX instruction sets. Therefore, on any CPU that does not have these instruction sets, both the CPU and GPU versions of TF will fail to load. For other possible reasons you can check here. |
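A quick way to check whether a CPU exposes AVX is the third-party py-cpuinfo package (one option among many; on Linux, grepping /proc/cpuinfo works too):

# pip install py-cpuinfo
import cpuinfo

flags = cpuinfo.get_cpu_info().get("flags", [])
print("avx:", "avx" in flags, "avx2:", "avx2" in flags)
# If avx is False, the stock TensorFlow wheels will not load; an older wheel
# (pre-1.6 builds did not require AVX) or a from-source build is needed.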
st205609 | GradientTape cannot compute the gradients for the model. How can I debug this code?
class Training(keras.Model):
    def __init__(self, model):
        super(Training, self).__init__()
        self.model = model

    def compute_loss(self, texts, labels):
        texts = tf.math.l2_normalize(texts, axis=0)
        losses = tf.Variable(tf.zeros_like(labels, dtype=tf.float32), trainable=True, dtype=tf.float32)
        for index, label in enumerate(labels):
            pos_pairs = texts[labels == label]
            neg_pairs = texts[labels != label]
            if len(pos_pairs) > 1:
                p_list = tf.Variable(tf.zeros(pos_pairs.shape[0], dtype=tf.float32), trainable=True, dtype=tf.float32)
                i = 0
                for pos_pair in pos_pairs:
                    p_list[i].assign(keras.losses.cosine_similarity(texts[index], pos_pair))
                    i += 1
                p_list = tf.exp(p_list)
                p_list = p_list / tf.reduce_sum(p_list)
                p_loss = tf.reduce_sum(-tf.math.log(p_list))
            else:
                p_loss = 0.0
            if len(neg_pairs) > 1:
                n_list = tf.Variable(tf.zeros(neg_pairs.shape[0], dtype=tf.float32), trainable=True, dtype=tf.float32)
                i = 0
                for neg_pair in neg_pairs:
                    n_list[i].assign(keras.losses.cosine_similarity(texts[index], neg_pair))
                    i += 1
                n_list = tf.exp(n_list)
                n_list = n_list / tf.reduce_sum(n_list)
                n_loss = tf.reduce_sum(tf.math.log(n_list))
            else:
                n_loss = 0.0
            loss_on_sentence = p_loss + n_loss
            losses[index].assign(loss_on_sentence)
        loss = tf.reduce_mean(losses)
        return loss

    def train_step(self, data):
        texts = data[0]
        labels = data[1]
        # print(labels, texts)
        with tf.GradientTape() as tape:
            texts = self.model(texts)
            loss = self.compute_loss(texts, labels)
        print(loss)
        trainable_vars = self.trainable_variables
        # print(trainable_vars)
        gradients = tape.gradient(loss, trainable_vars)
        print(gradients)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        loss_tracker.update_state(loss)
        return {"loss": loss_tracker.result()}

    @property
    def metrics(self):
        return [loss_tracker]

trainer = Training(model)
trainer.compile(optimizer='adam', run_eagerly=True)
trainer.fit(train_dataset, callbacks=[tensorboard_callback])
Error:
ValueError: No gradients provided for any variable: ['dense1/kernel:0', 'dense1/bias:0', 'dense2/kernel:0', 'dense2/bias:0', 'bn1/gamma:0', 'bn1/beta:0', 'dense3/kernel:0', 'dense3/bias:0', 'dense4/kernel:0', 'dense4/bias:0', 'bn2/gamma:0', 'bn2/beta:0', 'dense5/kernel:0', 'dense5/bias:0', 'dense6/kernel:0', 'dense6/bias:0', 'bn3/gamma:0', 'bn3/beta:0', 'dense7/kernel:0', 'dense7/bias:0', 'dense8/kernel:0', 'dense8/bias:0', 'bn4/gamma:0', 'bn4/beta:0', 'dense9/kernel:0', 'dense9/bias:0', 'bn5/gamma:0', 'bn5/beta:0'].
Can anyone help to debug this error!!
Thanking you in advance. |
st205610 | Which part breaks the gradient flow? This type of code works in PyTorch, but here it breaks the flow somewhere and I cannot figure out which part causes the problem. |
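A likely answer, based only on the code shown above: the combination of tf.Variable(...) and .assign(...) inside the loss cuts the graph, because assignments are not differentiable, so nothing connects the final loss back to the model outputs. A hedged rewrite of compute_loss that keeps the same math but builds the losses as plain tensors (which stay on the tape) might look like this:

import tensorflow as tf
from tensorflow import keras

def compute_loss(texts, labels):
    """Same math as the original, but with no tf.Variable / .assign:
    everything remains a tensor, so GradientTape can differentiate it."""
    texts = tf.math.l2_normalize(texts, axis=0)
    losses = []
    for index in range(len(labels)):  # a Python loop is fine under run_eagerly=True
        label = labels[index]
        pos_pairs = texts[labels == label]
        neg_pairs = texts[labels != label]
        loss = 0.0
        if len(pos_pairs) > 1:
            # one differentiable op instead of assigning into a Variable slot by slot
            p = keras.losses.cosine_similarity(texts[index], pos_pairs)
            p = tf.nn.softmax(p)  # equals exp(p) / sum(exp(p))
            loss += tf.reduce_sum(-tf.math.log(p))
        if len(neg_pairs) > 1:
            n = keras.losses.cosine_similarity(texts[index], neg_pairs)
            n = tf.nn.softmax(n)
            loss += tf.reduce_sum(tf.math.log(n))
        losses.append(loss)
    return tf.reduce_mean(tf.stack(losses))

The same replacement (a Python list plus tf.stack) removes the outer losses Variable as well.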
st205611 | For the tfdf RandomForestModel regression model, is there any way to generate an R2 value and a SHAP value plot? |
st205612 | Hi Frank,
Keras does not have a native R2 score implementation (see the list of Keras' regression metrics). However, the R2 score can be computed using a custom metric; see an example.
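A minimal custom R2 metric might look like this (a sketch, not the example linked above):

import tensorflow as tf

def r_squared(y_true, y_pred):
    """R2 = 1 - SS_res / SS_tot, computed per batch."""
    ss_res = tf.reduce_sum(tf.square(y_true - y_pred))
    ss_tot = tf.reduce_sum(tf.square(y_true - tf.reduce_mean(y_true)))
    return 1.0 - ss_res / (ss_tot + tf.keras.backend.epsilon())

# model_7.compile(metrics=[r_squared])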
TF-DF does not have a native TreeSHAP implementation (yet?). If this is of interest, please create a feature request here so we can prioritize new features. In the meantime, I would use a model-agnostic implementation of SHAP.
Cheers,
M. |
st205613 | I'm working on a project to train cars via a simulator and use an RNN to train the model for self-driving capabilities. I'm using Unreal Engine for the simulation part, and I've set up the environment and the car. I would like any resources on how this can be implemented, both the data collection part and the training part. I'm used to training 2D models, but with 3D models I have no idea. |
st205614 | I have been spending a lot of time trying to understand the TensorFlow internals for a personal project by looking at source code and messing around in GDB. I know that Tensors are stored in the tensorflow::Tensor class which contains a shape, element count and a pointer to a tensorflow::TensorBuffer. TensorBuffer has a reference count for the actual data in the tensor and a pointer to the data. I have been able to find tensorflow::Tensors c++ data structures when messing around in GDB and am able to access their underlying buffers. However, I am struggling to find the data structures that manage the tensorflow::Tensors and how they are kept track of internally in a model.
For example, how do I associate a trainable variable in a model to its underlying in memory buffer?
When looking at how model.predict() works I know that the inputs to the execute function (TFE_Py_Execute) in quick_execute in tensorflow/python/eager/execute.py are passed in as tf.Tensor. When looking at the C++ code for TFE_Py_Execute in GDB the inputs are passed into that function as TFE_InputTensorHandles and an individual TFE_TensorHandle can be accessed with inputs->at(n) where n is any number below input size. Casting an individual TFE_TensorHandle to a tensorflow::TensorHandle in gdb lets me access the methods of the TensorHandle class for each TensorHandle.
However, there is no method that lets me pull out the data buffer for the TensorHandle, and I seem to be missing some connection: when I go to access the data_ field of the TensorHandle, which should contain a LocalTensorHandleData, it is filled with info that I was not expecting. Whereas a typical tensorflow::Tensor has a shape, element count, and pointer to a TensorBuffer, which subsequently contains a pointer to an in-memory buffer filled with tensor elements, the data_ for these TensorHandles is filled with a bunch of strings such as a device string, localhost, a couple of byte sections with 2c in them, etc. I am not sure what type this is and why this isn't just an in-memory buffer.
So I would really like a clarification on what I am missing to more easily access the underlying tensorflow::Tensor data structure and its buffer as well as how I can maybe get from a model and its trainable variables to its underlying buffer.
Thanks! |
st205615 | I recently started implementing custom ops, and I agree TF C++ is a lot harder to grapple with than the Python side, and the current state of the docs does not help at all.
That said, grep or Search in VS Code is your ally: searching class TensorHandle in the TF repo, you can pull up tensorflow/cc/experimental/base/public/tensorhandle.h and see the Tensor Resolve(Status* status) method, from which you could then get the buffer? |
st205616 | Yup, grepping is what I've been doing for about a month now.
That tensorflow::experimental::tensorhandle class is very interesting, but unfortunately it looks to be something a bit different from the TensorHandles returned during execution. There is a Resolve method in the tensor_handle located in core/common_runtime/eager/tensor_handle, but it doesn't unwrap to a Tensor; instead it unwraps to an AbstractTensorInterface, which is something I haven't looked into yet.
An ideal scenario for me would be a path from the tf.Variables for model variables in the model dictionary to the actual C++ Tensors themselves if pointers are followed, but that probably isn't a thing |
st205617 | It seems you can call tensor_handle.Resolve().Data() to get a void *. From the AbstractTensorInterface you could also use down_cast? There are examples such as
Status TF_TensorToTensor(const TF_Tensor* src, Tensor* dst) {
  return tensorflow::down_cast<const tensorflow::TensorInterface*>(src->tensor)
      ->ToTensor(dst);
}
which might help. |
st205618 | Gotcha, that makes sense. The biggest issue though is that this still doesn't clear up the problem I described in the initial post: execution for, let's say, the predict function of a model starts in Python at execute.py, which seems to pass all of its model variables into TFE_Py_Execute as inputs; these get read as TFE_TensorHandleInputs in C space and added to the operation. But those tensor handles don't have a data_ field containing a Tensor; instead they contain debug information.
I guess my main point is: am I missing that the context contains the handles to the tensors with data? Because the inputs to the execute function definitely don't. And if so, how do I pull those TensorHandles from the context (EagerContext, not OpKernelContext)? |
st205619 | Hi, first question here, sorry if wrong format/category:
I'm writing a custom op (using the custom-op Docker images mentioned in the tutorial/repo) which tries to allocate a temporary during the op but hits a segfault, which I have a hard time understanding. The snippet where this happens is inside the ::Compute method of an OpKernel, where I've got
Tensor *q_t;
const TensorShape &sh = TensorShape({sdr->sht->nlm});
OP_REQUIRES_OK(context, context->allocate_temp(DT_COMPLEX128, sh, q_t));
And that allocation results in a segfault:
Thread 1 "spharde_ops_tes" received signal SIGSEGV, Segmentation fault.
0x00007fe3c216b973 in tensorflow::OpKernelContext::allocate_tensor(tensorflow::DataType, tensorflow::TensorShape const&, tensorflow::Tensor*, tensorflow::AllocatorAttributes, tensorflow::AllocationAttributes const&) ()
from /usr/local/lib/python3.6/dist-packages/tensorflow/python/../libtensorflow_framework.so.2
with a backtrace like so
#0 0x00007fe3c216b973 in tensorflow::OpKernelContext::allocate_tensor(tensorflow::DataType, tensorflow::TensorShape const&, tensorflow::Tensor*, tensorflow::AllocatorAttributes, tensorflow::AllocationAttributes const&) ()
from /usr/local/lib/python3.6/dist-packages/tensorflow/python/../libtensorflow_framework.so.2
#1 0x00007fe3c216e696 in tensorflow::OpKernelContext::allocate_temp(tensorflow::DataType, tensorflow::TensorShape const&, tensorflow::Tensor*, tensorflow::AllocatorAttributes, tensorflow::AllocationAttributes const&) ()
from /usr/local/lib/python3.6/dist-packages/tensorflow/python/../libtensorflow_framework.so.2
#2 0x00007fe3a5895190 in tensorflow::OpKernelContext::allocate_temp (this=0x7ffe37e7d710, type=tensorflow::DT_COMPLEX128, shape=..., out_temp=0x5727690, allocator_attr=...)
at /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/op_kernel.h:1025
#3 0x00007fe3a5895217 in tensorflow::OpKernelContext::allocate_temp (this=0x7ffe37e7d710, type=tensorflow::DT_COMPLEX128, shape=..., out_temp=0x5727690) at /usr/local/lib/python3.6/dist-packages/tensorflow/include/tensorflow/core/framework/op_kernel.h:1029
#4 0x00007fe3a5899ae2 in ApplyShtDiffOp<Eigen::ThreadPoolDevice>::Compute (this=0x574e380, context=0x7ffe37e7d710) at tf_spharde/cc/kernels/shtdiff_kernels.cc:101
I could understand an out-of-memory or whatever but why the segfault? Any tips would be greatly appreciated. |
st205620 | I seem to have fixed it by using
Tensor q_t;
const TensorShape &sh = TensorShape({sdr->sht->nlm});
OP_REQUIRES_OK(context, context->allocate_temp(DT_COMPLEX128, sh, &q_t));
but it’s not clear to me why that works. When running the op, I get an abort, double-free or corruption.
From reading use of allocate_temp in the TF codebase, this seems like it should work. |
st205621 | maedoc:
get an abort, double-free or corruption.
this was just the rest of the kernel misbehaving. I am still curious about the segfault on alloc… |
st205622 | maedoc:
Tensor *q_t;
Oops, this is a dumb question: the above just declares a pointer, but there's nothing for allocate_temp to initialize. Other routines would return a usable pointer with a Tensor ** type argument, but allocate_temp takes just a Tensor *, which needs to point to a usable Tensor instance (even if not allocated). That's why declaring a Tensor on the stack and then allocating it via its address works:
Tensor x;
context->allocate_temp(..., &x); |
st205623 | In allocate_temp it's failing at:
*out_temp = new_temp;
which makes sense, because in the first example you're passing in an uninitialized pointer, so out_temp effectively becomes a pointer to a pointer, I am pretty sure? Whereas in the second example you did everything correctly. |
st205624 | No worries, I guess I shouldn't have asked the question in the first place. I sort of forgot some C++ fundamentals while trying to get my head around the TF API… |
st205625 | Hi, I want to visualize the image in my data but I’m getting an error
(screenshots of the code and the error message were attached as images) |
st205626 | It might have to do with the fact that labels is a list, not an array. Therefore you cannot use array slicing on it. Try to convert labels to an array first:
labels_array = np.array(labels)
Get back to me if it does or doesn’t work. |
st205627 | I also tried that, but it didn't work. However, when I change the label_mode to binary, it can visualize the image. I don't know why |
st205628 | Hi,
I work on a model based on this tutorial:
Keras documentation: Variational AutoEncoder (keras.io)
I want to make this VAE an annealed beta-VAE, so I have defined a tf.keras.backend.variable which is updated every epoch in a custom callback. This variable is then applied as a factor to the latent loss in the train_step function.
My first concern is that if I’m not forcing eager mode, the value of the variable is never updated in the train_step() function.
My second concern is that if I force eager mode in .compile() with run_eagerly=True, the value is now correctly updated in the train_step() function but the impact on runtime is HUGE : it’s twice the time for each epoch.
Do you have any idea of what is going on here ?
Thanks |
st205629 | Twice the time is really nice. For me, it's more like ten times.
Eager execution: Most of the ops are placed on host instead of device General Discussion
Running Keras model in eager execution causes most of the ops to be placed on the host instead of device. Obviously, it causes eager execution to be much slower. Is it some issue, or that’s how eager execution works?
TensorFlow Profiler output:
Code : keras-io/mnist_convnet.py at master · keras-team/keras-io · GitHub
[tf_profile_graph]
Same code with run_eagerly=True in the model.compile().
[tf_profile_eager]
system: 5.10.42-1-MANJARO
version: tensorflow 2.5 (Manjaro repository) |
st205630 | Finally, I found the solution.
In the train_step() function, my custom variable must be converted to a tensor with tf.convert_to_tensor(my_variable). Now the variable is correctly updated at each callback, even in non-eager mode! |
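Putting the thread together, a minimal sketch of the pattern (the variable names and the linear warm-up schedule are assumptions for illustration):

import tensorflow as tf

beta = tf.keras.backend.variable(0.0)  # annealed KL weight

class BetaAnnealing(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # hypothetical schedule: linear warm-up over 10 epochs
        tf.keras.backend.set_value(beta, min(1.0, epoch / 10.0))

# Inside the VAE's train_step, read the variable as a tensor so that
# graph-mode traces pick up the updated value:
#   kl_weight = tf.convert_to_tensor(beta)
#   total_loss = reconstruction_loss + kl_weight * kl_loss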
st205631 | OSError: SavedModel file does not exist at: C:\Users\Xiang\AppData\Local\Temp\tfhub_modules\a96501569a79510824b3ed96d5158fae078cd465{saved_model.pbtxt|saved_model.pb}
Could someone advise how to solve this?
(a screenshot of the error was attached) |
st205632 | Hi,
Do you have a small code snippet to reproduce the error?
It looks like the model you tried to use isn't present on disk.
If you could try hub.resolve("URL_OF_THE_MODEL"), that would show where the model is cached, and you can then verify whether it's indeed there. |
st205633 | I am trying to load data inside flat_map. Inside the map_func used in flat_map, I'm unable to get the file name from the input argument.

filenames = ["data4/0/31", "data4/0/32"]
dataset = tf.data.Dataset.from_tensor_slices(filenames)

def parse_fn(filename):
    print(filename)
    return tf.data.experimental.load(filename)

dataset = dataset.interleave(lambda x: parse_fn(x),
                             cycle_length=4, block_length=16)

for item in dataset.as_numpy_iterator():
    print(item)
I already posted a question on Stack Overflow:
python - Trying to load dataset inside flat_map got error 'TypeError: expected str, bytes or os.PathLike object, not Tensor' - Stack Overflow |
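The error itself is informative: inside interleave the argument is a symbolic Tensor, while tf.data.experimental.load expects a plain Python string. One hedged workaround is to load the datasets eagerly in Python, while the paths are still ordinary strings, and then combine them:

import tensorflow as tf

filenames = ["data4/0/31", "data4/0/32"]
# load eagerly; each path is a real Python string here
datasets = [tf.data.experimental.load(f) for f in filenames]

# round-robin between the loaded datasets
choice = tf.data.Dataset.range(len(datasets)).repeat()
dataset = tf.data.experimental.choose_from_datasets(datasets, choice)

for item in dataset.as_numpy_iterator():
    print(item)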
st205634 | I have a question about the regression model in the tutorials (model_7): Build, train and evaluate models with TensorFlow Decision Forests
I want to log the accuracy, but the output is blank. I have tried:
model_7.compile(metrics=[“accuracy”])
or
model_7.compile(metrics=[“mae”, “acc”])
After evaluation, the accuracy is very low:
evaluation = model_7.evaluate(test_ds, return_dict=True)
print()
loss: 0.0000e+00 - mae: 0.9490 - acc: 3.4258e-04
The log has no accuracy data:
logs = model_7.make_inspector().training_logs()
Do I miss something?
Thank you! |
st205635 | The accuracy metric is used for classification (binary and multi-class) tasks. For regression, use one of the regression metrics.
model_7.make_inspector().training_logs() should contain the RMSE metric.
model_7.compile(metrics=["acc"]) will evaluate a regression output (whose scale depends on the label) as if it were a probability. In other words, you will get garbage :). |
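For completeness, a minimal sketch of compiling with regression metrics instead:

import tensorflow as tf

model_7.compile(metrics=[tf.keras.metrics.RootMeanSquaredError(),
                         tf.keras.metrics.MeanAbsoluteError()])
evaluation = model_7.evaluate(test_ds, return_dict=True)
print(evaluation)  # expect keys like 'root_mean_squared_error' and 'mean_absolute_error'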
st205636 | Hi again
Thanks to all of your help, I can build Faster-RCNN model.
But everything goes well except the training step, and I've hit a wall.
I debugged the functions and found a suspicious part, but I can't identify the root cause.
First, the version is:
tensorflow-gpu==2.5.0
CUDA==11.2.0
cuDNN==8.1.0.77
The whole stack trace is:
2021-06-17 16:46:58.163220: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-06-17 16:47:04.565562: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll
2021-06-17 16:47:04.610494: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1660 Ti computeCapability: 7.5
coreClock: 1.59GHz coreCount: 24 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 268.26GiB/s
2021-06-17 16:47:04.618787: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
2021-06-17 16:47:04.634109: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-06-17 16:47:04.638019: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
2021-06-17 16:47:04.647980: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll
2021-06-17 16:47:04.655237: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll
2021-06-17 16:47:04.670023: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll
2021-06-17 16:47:04.679152: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll
2021-06-17 16:47:04.685911: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2021-06-17 16:47:04.690109: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-06-17 16:47:04.693839: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-17 16:47:04.703798: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1660 Ti computeCapability: 7.5
coreClock: 1.59GHz coreCount: 24 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 268.26GiB/s
2021-06-17 16:47:04.711944: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-06-17 16:47:05.263458: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-06-17 16:47:05.268143: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0
2021-06-17 16:47:05.270931: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N
2021-06-17 16:47:05.273788: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3983 MB memory) → physical GPU (device: 0, name: GeForce GTX 1660 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
D:\dev\anaconda3\envs\dl_env\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py:3703: UserWarning: Even though the tf.config.experimental_run_functions_eagerly option is set, this option does not apply to tf.data functions. To force eager execution of tf.data functions, please use tf.data.experimental.enable.debug_mode().
warnings.warn(
WARNING:tensorflow:input_shape is undefined or non-square, or rows is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.
2021-06-17 16:47:06.528692: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll
2021-06-17 16:47:07.040698: I tensorflow/stream_executor/cuda/cuda_dnn.cc:359] Loaded cuDNN version 8100
2021-06-17 16:47:07.690211: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll
2021-06-17 16:47:08.195097: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll
WARNING:tensorflow:From D:\dev\anaconda3\envs\dl_env\lib\site-packages\tensorflow\python\ops\array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.
Instructions for updating:
The validate_indices argument has no effect. Indices are always validated on CPU and never validated on GPU.
2021-06-17 16:47:09.868833: I tensorflow/core/profiler/lib/profiler_session.cc:126] Profiler session initializing.
2021-06-17 16:47:09.872503: I tensorflow/core/profiler/lib/profiler_session.cc:141] Profiler session started.
2021-06-17 16:47:09.875622: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1611] Profiler found 1 GPUs
2021-06-17 16:47:09.886018: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library ‘cupti64_112.dll’; dlerror: cupti64_112.dll not found
2021-06-17 16:47:09.898400: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library ‘cupti.dll’; dlerror: cupti.dll not found
2021-06-17 16:47:09.902934: E tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1661] function cupti_interface_->Subscribe( &subscriber_, (CUpti_CallbackFunc)ApiCallback, this)failed with error CUPTI could not be loaded or symbol could not be found.
2021-06-17 16:47:09.910278: I tensorflow/core/profiler/lib/profiler_session.cc:159] Profiler session tear down.
2021-06-17 16:47:09.914039: E tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1752] function cupti_interface_->Finalize()failed with error CUPTI could not be loaded or symbol could not be found.
2021-06-17 16:47:09.953551: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
Epoch 1/100
214/214 [==============================] - 60s 275ms/step - loss: 7.0047 - rpn_reg_loss: 0.0160 - rpn_cls_loss: 0.1079 - frcnn_reg_loss: 6.5493 - frcnn_cls_loss: 0.3315 - val_loss: 5.5020 - val_rpn_reg_loss: 0.0161 - val_rpn_cls_loss: 0.1088 - val_frcnn_reg_loss: 5.2288 - val_frcnn_cls_loss: 0.1484
And the error message is:
2021-06-17 16:48:09.849872: W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
The suspicious code is:

@tf.function
def rpn_generator(dataset, anchors):
    while True:
        for data in dataset:
            image, gt_boxes, gt_labels = data
            bbox_deltas, bbox_labels = calculate_rpn_actual_outputs(anchors, gt_boxes, gt_labels)
            yield image, (bbox_deltas, bbox_labels)
Last, I referenced GitHub - FurkanOM/tf-faster-rcnn
It must be hitting the same problem, because when I run that code I still get the error. |
st205637 | I tried downgrading TensorFlow to 2.4.0 and the error still occurred.
The strange thing is that the number of epochs trained before the failure is not consistent between runs.
For example, on the first run training stopped at epoch 2, and on the next run it stopped at epoch 13.
I thought the problem was my GPU (GTX 1660 Ti) memory, but running the code only takes about 55% of GPU memory. |
st205638 | I found it.
I passed a wrong parameter to the tf.keras.callbacks.ReduceLROnPlateau callback of model.fit(); that's why training stopped when the epoch ended.
Thanks to all again.
But I still don't know why the referenced code caused the error… mysterious… |
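For reference, typical usage of that callback looks like this (the monitored quantity and values here are assumptions, not taken from the thread):

import tensorflow as tf

reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss",  # quantity to watch
    factor=0.5,          # multiply the learning rate by this on a plateau
    patience=3,          # epochs without improvement before reducing
    min_lr=1e-6,
)

# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[reduce_lr])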
st205639 | Trying to install tensorflow_graphics I receive the following error from my command prompt!
Running setup.py clean for OpenEXR
Failed to build OpenEXR
Then I received a lot of red lines about the problem.
I do have the latest TensorFlow and Python installed, plus Windows 10 with all updates.
Hope some one has a suggestion.
Thanks
Ronald |
st205640 | I receive the warning
WARNING: skip full serialization of Keras
It happens to me with several downloaded and retrained models from the TensorFlow 2 model zoo |
st205641 | Hello. Have you found a solution to this yet? I have been stuck on the same problem for a while now. |
st205642 | Hello,
Is it possible to install the TensorFlow Object Detection API with a specific version (TF2.3)? I have TensorFlow 2.3 installed and I need to be using the API with that specific version, but when I try to install it it automatically uninstalls TF2.3 and installs TF2.5.
Any advice? |
st205643 | You can clone the repo with the 2.3 tag:
github.com/tensorflow/models at v2.3.0/research/object_detection |
st205644 | You can download the object detection repo with the specific tag, 2.3.0, then install it with the setup.py file:
github.com/tensorflow/models at v2.3.0/research/object_detection |
st205645 | The link to this repo says that TensorFlow 2.x is not supported. It seems like it only supports TensorFlow 1.x. |
st205646 | Hello, I have a code-switching project in which I have to build a classifier to recognize 3 classes: language1, language2, or language1-2 (mixed). Which algorithm and methodology should I use? |
st205647 | I have a model that runs successfully on TensorFlow Serving. Then I convert it with the saved_model_cli command; below is the detailed command line:
docker run --rm --user 3004 --gpus all -it \
-v /path/to/tensorflow_serving:/work/tf_model \
-e CUDA_VISIBLE_DEVICES=1 \
harbor.private.com/dev/tf:1.15.5-gpu /usr/local/bin/saved_model_cli convert \
--dir /work/tf_model/buyer_sent_model_pb_02/01 \
--output_dir /work/tf_model/buyer_sent_model_trt/02 \
--tag_set serve \
tensorrt --precision_mode FP32 --max_batch_size 16 --is_dynamic_op True
Then I serve it with tensorflow-serving, command line:
docker run -d --gpus all -p 8501:8501 --mount type=bind,source=/path/to/tensorflow_serving/my_model_dir,target=/models/my_model_dir \
-e MODEL_NAME=my_model_name -e CUDA_VISIBLE_DEVICES=1 \
-e TF_FORCE_GPU_ALLOW_GROWTH='true' \
-t harbor.private.com/dev/tf-serving:2.4.1-gpu
my input:
{
"inputs": {
"Input-Token": data1,
"Input-Segment": data2
}
}
data1 and data2 are both lists, length is 16.
data1:
[
[101, 3766, 752, 8024, 6814, 3341, 6760, 6760, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 2218, 3221, 8238, 697, 1259, 1408, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 2769, 6206, 743, 2643, 5948, 1947, 6163, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 930, 702, 6963, 3221, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 2218, 3221, 2769, 6821, 6804, 791, 1921, 1157, 2802, 2458, 3341, 4500, 749, 671, 833, 6230, 2533, 679, 1916, 3265, 102, 0, 0, 0],
[101, 2769, 3221, 6206, 2864, 4706, 5296, 3890, 5011, 4638, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 1962, 4638, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 1355, 749, 1557, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 6843, 3819, 4706, 3344, 1408, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 872, 1962, 6435, 7309, 6821, 702, 743, 671, 6843, 671, 3221, 2582, 720, 702, 6843, 3791, 102, 0, 0, 0, 0, 0, 0, 0],
[101, 1119, 3247, 2458, 1993, 8043, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 6716, 7770, 8725, 8175, 1408, 8043, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 2769, 743, 749, 6821, 702, 121, 119, 8146, 4638, 4385, 1762, 4684, 2970, 4802, 6371, 3119, 6573, 2218, 1377, 809, 749, 511, 1968, 102],
[101, 4692, 1168, 928, 2622, 1726, 1908, 678, 1521, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 155, 4772, 7027, 7481, 1377, 809, 3022, 679, 6585, 6716, 4638, 3688, 6132, 720, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[101, 2571, 6853, 4157, 3766, 3300, 2571, 6853, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
]
data2:
[
[0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
[0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1],
[0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1],
[1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1],
[0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
]
This set of data works fine.
But when I change data2 to:
[
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]
This set of data1 and data2 runs into trouble.
On the server side it has log as below:
2021-07-01 08:29:53.363285: W external/org_tensorflow/tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:587] Running native segment forTRTEngineOp_26 due to failure in verifying input shapes: Input shapes are inconsistent on the batch dimension, for TRTEngineOp_26: [[16,25,768], [1,25,768]]
2021-07-01 08:29:58.734463: W external/org_tensorflow/tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:587] Running native segment forTRTEngineOp_26 due to failure in verifying input shapes: Input shapes are inconsistent on the batch dimension, for TRTEngineOp_26: [[16,25,768], [1,25,768]]
2021-07-01 08:29:58.863914: W external/org_tensorflow/tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:587] Running native segment forTRTEngineOp_26 due to failure in verifying input shapes: Input shapes are inconsistent on the batch dimension, for TRTEngineOp_26: [[16,30,768], [1,30,768]]
on the client, I got:
{'error': 'Timed out waiting for notification'}
It seems TensorFlow compresses data2 from a list of length 16 to length 1?
What is the problem in my case? Am I missing something?
Environment
Nvidia Driver Version : 455.38 in Host
GPU Type : 2080ti, both convert and serving
tensorflow:1.15.5-gpu for converting
tensorflow-serving: 2.4.1-gpu for serving
Both Docker images are pulled from the official site on Docker Hub |
st205648 | I am trying to build a hierarchical sequence model for time series classification (refer to the paper: hierarchical attention networks for document classification). But I’m very confused about how to mask the hierarchical sequences.
My data is a hierarchical time series. Specifically, each sample is composed of multiple sub-sequences and each sub-sequence is a multivariate time series (just like word → sentence → document in NLP). So I need to pad and mask it twice. This is critical, as a document will often not have the same number of sentences (nor all sentences the same number of words). Finally, I get data as follows:
array([[[[0.21799476, 0.26063576],
[0.2170655 , 0.53772384],
[0.18505535, 0.30702454],
[0.22714901, 0.17020395],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]],
[[0.2160176 , 0.23789616],
[0.2675753 , 0.21807681],
[0.26932836, 0.21914595],
[0.26932836, 0.21914595],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]]],
[[[0.03941338, 0.3380829 ],
[0.04766269, 0.3031088 ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]],
[[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ],
[0. , 0. ]]]], dtype=float32)
Then I build a hierarchical model as follows:
inputs = Input(shape=(maxlen_event, maxlen_seq, 2))
x = TimeDistributed(
Sequential([
Masking(),
LSTM(units=8, return_sequences=False)
])
)(inputs)
x = LSTM(units=32, return_sequences=False)(x)
x = Dense(16, activation='relu')(x)
output = Dense(16, activation='sigmoid')(x)
As my data is padded on both dimensions, I don't know how to mask it correctly. I have two questions about it:
Q1: In TimeDistributed, am I using the Masking layer correctly to mask the first (inner) padding?
Q2: How do I mask the second (outer) padding?
Thank you. |
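One hedged way to handle both paddings, assuming (as in the sample above) that padded sub-sequences are entirely zero: keep the inner Masking layer for Q1, and for Q2 compute the outer mask explicitly from the input and pass it to the outer LSTM:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, LSTM, Masking, TimeDistributed

maxlen_event, maxlen_seq = 2, 8  # matching the sample data above

inputs = Input(shape=(maxlen_event, maxlen_seq, 2))

# Q1: Masking inside TimeDistributed skips zero-padded timesteps of each sub-sequence
x = TimeDistributed(Sequential([Masking(), LSTM(units=8)]))(inputs)

# Q2: treat a sub-sequence as padding when every value in it is zero,
# and pass that mask explicitly to the outer LSTM
outer_mask = tf.reduce_any(tf.not_equal(inputs, 0.0), axis=[-2, -1])  # (batch, maxlen_event)
x = LSTM(units=32)(x, mask=outer_mask)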
st205649 | Hey folks, I have never really posted anything online but at the moment, I’m all out of ideas.
The thing is, I am trying to run FISM (a classifier or recommender system algorithm) — specifically, this implementation: GitHub - yushuai/FISM: implementation for the paper "FISM: Factored Item Similarity Models for Top-N Recommender Systems" by Tensorflow 1.2. When I apply it to the provided data set (dataset 1: ml_train & ml_test), it works. When I apply it to dataset 2, which I processed myself so that the format is equal to dataset 1, it also works like expected. But, when I apply it to dataset 3, which also has equal formatting, I get an error in tf.multiply: Incompatible shapes: [99,1] and [100,64].
It is so strange to me that the code works for datasets 1 and 2, but not for 3. I have checked the entire dataset and there is nothing out of the ordinary. I have also scoured the internet for answers, but I haven’t been able to find any. So, here’s my plea for help. This particular part is crucial for the success of my research…
I have already tried reducing the size of dataset 3 but to no avail.
Does anyone know why this error is occurring?
=====================================================
Traceback (most recent call last):
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [99,1] vs. [100,64]
[[{{node Mul}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "model.py", line 194, in <module>
model.train()
File "model.py", line 103, in train
hit_ratio, ndcg = self.evaluate(sess)
File "model.py", line 163, in evaluate
self.neighbour_num: [neighbour_number for _ in x_test]
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [99,1] vs. [100,64]
[[node Mul (defined at C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
Original stack trace for 'Mul':
File "model.py", line 193, in <module>
model.build_graph()
File "model.py", line 130, in build_graph
user_repr = tf.multiply(inverse_rated_num, sumvec)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 331, in multiply
return gen_math_ops.mul(x, y, name)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 6701, in mul
"Mul", x=x, y=y, name=name)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "C:\Users\semye\anaconda3\envs\py3.5\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack() |
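Given the shapes in the error ([99,1] vs [100,64]), a plausible cause is that at least one evaluation instance in dataset 3 has 99 candidate items where the code expects 100 (FISM-style test files typically list one positive plus a fixed number of negatives per line). A quick hedged check, with a hypothetical file name and whitespace-separated format:

# count how many whitespace-separated fields each line of the test file has
lengths = set()
with open("dataset3_test.txt") as f:  # hypothetical file name and format
    for line in f:
        lengths.add(len(line.split()))
print(lengths)  # more than one value, or an unexpected one, flags a malformed line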
st205650 | Hey there,
Preparing for the exam.
The handbook states that TF 2.4.1 is required; however, I have installed 2.5 on my machine.
GPU support is impossible on Mac according to the official docs. Has anyone who has taken the exam tried it there: was it too heavy to train the exam models on CPU?
thank you 🤷🏼 |
st205651 | (1) You should be fine – the test uses PyCharm, and PyCharm projects use a virtualized Python environment, into which 2.4.1 can be installed
(2) My advice here would be to practice practice practice with the syllabus the handbook shares, and see if you hit any walls with using CPU instead of GPU. |
st205652 | Hello,
I am trying to re-train an object detection model for a project. The end goal is for me to convert the ‘saved_model.pb’ model into another format through an SDK provided.
I have no problem at all converting the pre-trained object-detection SavedModels provided by TensorFlow on TensorFlow Hub. However, when I retrain from a checkpoint of one of the models, following every step in the TensorFlow Object Detection API training tutorial, eventually I start to get errors when I try to convert my re-trained result.
Does re-training cause anything significant in the format or components of the saved_model.pb to change? What can I do to simply re-train but not change anything important?
I would really appreciate any help with this, please.
Thanks,
Ahmad |
st205653 | Hi,
When you follow those tutorials, as far as I know, they don’t change the model structure, they will only change weights to be customized to your data.
what’s the error? |
st205654 | Hello Gus,
Yes that’s exactly what I though too, that’s why I’m not sure I’m facing issues.
The error I am getting is that the conversion tool that I am using to convert to the proprietary framework that I want to use goes through the whole network and just gives the following warning for every layer: “Layer … was not consumed by converter.” And then says ‘After pruning, model is empty’ when it’s done going through the whole model, so the conversion fails.
I’m pretty sure it is some compatibility issue between the SDK and the TensorFlow version I used to train (I used GPU training with TF 2.5). Is it possible to know what TF version the models in TF2 Zoo were trained with? That would help me narrow the cause of the issue. |
st205655 | Update: I ended up figuring it out! I re-trained models and tested with many different TF versions until I found one that was compatible! Thank you. |
st205656 | That's great!
It would be nice to post about this on their GitHub repo and let them know, and maybe get them to update to TF 2.5, or at least document this somewhere. |
st205657 | I will definitely work on doing that as soon as I can! I’ve been trying to work through this issue for about a week now so I’m sure it would be helpful for others. Thank you for your time and help! |
st205658 | Hello,
I am trying to work with tfp.experimental.mcmc.particle_filter, but I do not understand how the 'observations' input is supposed to be formatted when my observations are tensors of shape 2. Suppose we have 5 states in the model. What I am giving as input in this case is a tensor of shape (5, 2), where each row is an observation and each column is a dimension of the state, but this is not correct, as it forces me to set num_particles=2.
Is anyone able to help me figure this out? Thank you. |
st205659 | How do I save the images inside the bounding boxes, so that I can use them for OCR?
I came across this tutorial; at the end I found a code sample for this:
output_directory = 'some dir'

# get label and coordinates of detected objects
output = []
for index, score in enumerate(output_dict['detection_scores']):
    label = category_index[output_dict['detection_classes'][index]]['name']
    ymin, xmin, ymax, xmax = output_dict['detection_boxes'][index]
    output.append((label, int(xmin * image_width), int(ymin * image_height), int(xmax * image_width), int(ymax * image_height)))

# Save images and labels
for l, x_min, y_min, x_max, y_max in output:
    array = cv2.cvtColor(np.array(image_show), cv2.COLOR_RGB2BGR)
    image = Image.fromarray(array)
    cropped_img = image.crop((x_min, y_min, x_max, y_max))
    file_path = output_directory + '/images/' + str(len(df)) + '.jpg'
    cropped_img.save(file_path, "JPEG", icc_profile=cropped_img.info.get('icc_profile'))
    df.loc[len(df)] = [datetime.datetime.now(), file_path]
    df.to_csv(output_directory + '/results.csv', index=None)
But the above code is unclear to me. I am new to computer vision and the TensorFlow Object Detection API; here is my query:
How can I leverage the above code to save multiple images? (In my case, I am doing invoice extraction, where I have to save the invoice number, date, table, etc.)
st205660 | The code seems to be saving multiple images based on the detections. The output should be on the images folder. |
st205661 | I think this code you posted does what you want already.
In summary what it does:
go over all the predicted boxes and create an output list with them ((label, box))
using the boxes, go over all the images and crop the internals of the box and save it to a file
you get used to this kind of code after a while, take your time to experiment with it. |
st205662 | Thank you, yes, I was able to understand it, and the code was also updated by the author. Now I have another doubt:
# Get data (label, xmin, ymin, xmax, ymax)
output = []
for index, score in enumerate(output_dict['detection_scores']):
    if score < threshold:
        continue
    label = category_index[output_dict['detection_classes'][index]]['name']
    ymin, xmin, ymax, xmax = output_dict['detection_boxes'][index]
    output.append((label, int(xmin * image_width), int(ymin * image_height), int(xmax * image_width), int(ymax * image_height)))

# Save incident (could be extended to send an email or something)
for l, x_min, y_min, x_max, y_max in output:
    if l == label_to_look_for:
        array = cv2.cvtColor(np.array(image_show), cv2.COLOR_RGB2BGR)
        image = Image.fromarray(array)
        cropped_img = image.crop((x_min, y_min, x_max, y_max))
        file_path = output_directory + '/images/' + str(len(df)) + '.jpg'
        cropped_img.save(file_path, "JPEG", icc_profile=cropped_img.info.get('icc_profile'))
        df.loc[len(df)] = [datetime.datetime.now(), file_path]
        df.to_csv(output_directory + '/results.csv', index=None) |
What does the score mean? I guessed it is from the detection_scores that are returned, i.e. output_dict['detection_scores']. When I apply a threshold of 0.5, no images are saved, but when I apply 0.0 as my threshold, nearly 2000 images are cropped from the single image and saved. So I checked my output_dict['detection_scores']; it has many values.
Here are my detection scores:
Detection scores
0.9999884
0.9875551
0.9298591
0.18066546
0.06862515
0.060081333
0.05767244
0.043635964
0.040076256
0.037350416
0.033092856
0.03055805
0.030125767
0.029847085
0.029215574
0.028655708
0.027012408
0.025616944
0.02515155
0.023829997
0.023615092
0.02239129
0.021808654
0.021342427
0.020629108
0.01946026
0.01930508
0.019111484
0.018848777
0.017635822
0.017435431
0.016988814
0.016978234
0.01697129
0.01664561
0.016387343
0.016295582
0.016104639
0.016039342
0.015885413
0.01586929
0.015589744
0.015241742
0.015219361
0.015110254
0.015015632
0.014730513
0.014715463
0.01455313
0.0144896805
0.014403313
0.014309466
0.01429531
0.01426512
0.014217079
0.014211059
0.014092535
0.013988614
0.013938546
0.013933927
0.01387459
0.013772488
0.013516575
0.0134027
0.013376057
0.013336897
0.01318419
0.013004512
0.0129831135
0.01276961
0.012724757
0.012371838
0.012347668
0.012268215
0.0122665465
0.012233138
0.01222229
0.012182564
0.012130201
0.0121108
0.012091279
0.012085319
0.0120278895
0.011973709
0.0119514465
0.011933267
0.011857897
0.011782587
0.011546642
0.011545628
0.011477649
0.011402994
0.011328131
0.011262983
0.011066496
0.010975838
0.010870099
0.010821551
0.010576516
0.01054436
So I wonder: is there any problem with my model, or are the scores correct and my code wrong?
Here is my code:
def run_inference_and_extract(model, category_index, threshold, label_to_look_for,
                              output_dir):
    # create output dir if not already created
    os.makedirs(output_dir, exist_ok=True)
    # os.makedirs(output_dir, '/images', exist_ok=True)
    if os.path.exists(output_dir + '/results.csv'):
        df = pd.read_csv(output_dir + '/results.csv')
    else:
        df = pd.DataFrame(columns=['timestamp', 'image_path'])

    image_show = np.copy(image_np)
    image_height, image_width, _ = image_np.shape

    # Actual detection
    output_dict = run_inference_for_single_image(model, image_np)
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks_reframed', None),
        use_normalized_coordinates=True,
        line_thickness=8)
    # cv2.imshow('object_detection', cv2.resize(image_np, (600, 600)))

    # get data (label, xmin, ymin, xmax, ymax)
    output = []
    for index, score in enumerate(output_dict['detection_scores']):
        if score < threshold:
            continue
        label = category_index[output_dict['detection_classes'][index]]['name']
        ymin, xmin, ymax, xmax = output_dict['detection_boxes'][index]
        output.append((label, int(xmin * image_width), int(ymin * image_height),
                       int(xmax * image_width), int(ymax * image_height)))

    # save incident
    for l, x_min, y_min, x_max, y_max in output:
        if l == label_to_look_for:
            array = cv2.cvtColor(np.array(image_show), cv2.COLOR_RGB2BGR)
            image = Image.fromarray(array)
            cropped_img = image.crop((x_min, y_min, x_max, y_max))
            file_path = output_dir + '/images/' + str(len(df)) + '.jpg'
            cropped_img.save(file_path, 'JPEG', icc_profile=cropped_img.info.get('icc_profile'))
            df.loc[len(df)] = [datetime.datetime.now(), file_path]
            df.to_csv(output_dir + '/results.csv', index=None) |
And I call the function like this:

output_dir = '/content/sample_data'
label_to_look_for = "INVOICE NO"
threshold = 0.5
run_inference_and_extract(model, category_index, threshold, label_to_look_for,
                          output_dir)
It would be helpful if you could shed some light on this. Thanks in advance! |
st205663 | I’d expect the score is how confident the model is that it found something on the region.
The code for the threshold seems to be correct, but from your code, if the 3 super-confident values are not from the label that you are looking for (if l == label_to_look_for:), then it won’t save any images.
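To illustrate, the two checks combine into something like this (a minimal sketch reusing the names from the snippet above, with save_crop as a hypothetical stand-in for the crop-and-save block):

for index, score in enumerate(output_dict['detection_scores']):
    label = category_index[output_dict['detection_classes'][index]]['name']
    # a detection is only saved when it is BOTH confident enough AND the class being looked for
    if score >= threshold and label == label_to_look_for:
        save_crop(index)  # hypothetical helper standing in for the crop-and-save code

With threshold 0.5 and only 3 scores above 0.5 in your list, at most 3 crops could ever be saved. |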
st205664 | If I understand correctly, the score tells us how confident is our model is right!?, so why does it saves all the images(like 2000) |
st205665 | Well, I got it so, if the first 3 scores are not for my label, then it won’t save any images
Also, if I want to save all the predicted regions then I should have a model whose output scores should be high enough.
I guess I should improve my model, Thanks for your valuable inputs! |
st205666 | I am still having this issue, what is exactly is output_dict['detection_scores'] exactly!?, When I pass on my image and see individual predictions, I get 93% , 99% as probability for each classes but when I try to crop the image, many images gets cropped
is there any other way to crop the images inside the bounding boxes!? |
st205667 | The output_dict['detection_scores'] describes the confidence the model has in a specific prediction/region. In the script I provided you 10, a threshold can be passed using -t or --threshold. Only if the detection_scores of a detection is higher than the threshold, the image is being saved. |
st205668 | Thank you for the response, your tutorials are very helpful to me
Let’s say I have four classes INVOICE NO, INVOICE DATE, PO NUMBER, TABLE
From the code in the script, when I pass 0.5 for threshold and TABLE in label_to_look_for, the image of table is cropped and saved.
But if I try for other lables, the images are not cropped for the 0.5 threshold(but it still shows 93% in the bounding box) If I lower the threshold, many parts of my images are saved(outside the bounding boxes)
So where am I doing my mistake!?..I have trained my model with 200 images on colab |
st205669 | I am having some issues while getting output in TensorFlow Model(tflite).
This is my input:
var inputInfo: [Float32] = [-1.0291401 , 1.6121695 , 0.58366895, -0.25974554, 2.6633718 ,
0.39398468, 1.2648116 , -1.0617405 , 1.0997621 , -0.01813432,
-0.02543107, 1.9113901 , 0.30188444, 0.3199759 , 0.07759953,
0.23082322, 2.0959156 , -0.42658705, 0.08775132, 3.4258583 ,
-1.0573974 , 0.7249298 , -1.1119401 , -0.72663903, -0.74873704,
-0.387724 , -0.14288527, -0.39554232, -0.10774904, -0.0911286 ,
0.40389383, -0.169619 , -1.1736624 ]
let inputData = Data(bytes: &inputInfo, count: inputInfo.count * MemoryLayout<Float32>.stride)
print(inputData)
try interpreter?.copy(inputData, toInputAt: 0)
After passing this array of Float32 type as input, I am getting the result below as output:
Output:
TensorFlowLite.Tensor(name: "Identity", dataType: TensorFlowLite.Tensor.DataType.float32, shape: TensorFlowLite.Tensor.Shape(rank: 2, dimensions: [1, 1]), data: 4 bytes, quantizationParameters: nil)
For getting the expected output result I am using this code:
let outputTensor = try self.interpreter?.output(at: 0)
let finalOutput = [Float32](unsafeData: outputTensor!.data)
print(finalOutput)
4.6533377e+33
Here the expected output should be some number between 0 & 1 (like 0.8, 0.9), but my final output is way beyond that. I am stuck here, please help. |
st205670 | Hi i will be honest i dont have a clue in coding(yet) nor ML subject
And im wondering if someone heard or get his/her knowledge by the next sites which offer some data(which im not sure how much good it is)
Datacamp(with their paid version, for now im kind of doing their free trial course)
Pluralsight
Codeacademy
Udacity(which is more expensive then those 3 above,but the knowledge is important enough to me to think of it as an option)
I want to hear some “PRO” thoughts on what to do next i mean i can start with something but i really prefer not do waste my time on something bad so…
THANKS |
st205671 | Hi Yovel,
There’s a lot of great free content for you to start before you need/want to pay for a course.
I’d start with those.
Something like Machine Learning Crash Course | Google Developers
There’s also this thread:
How to audit courses on Coursera for Free
Any suggestions to start with TensorFlow? - #2 by 8bitmp3
I hope this helps! |
st205672 | Do you currently have any models in production, being used for a product, service, or something else important? Do you ever plan to have any? If so, how do you create and manage your production deployment? How do you train your models? How do you serve your models? |
st205673 | Sir I am planning to build a smart bot that talks and guides people towards a more healthy mental physique, I being an engineer start with the very first tools, pen and paper plan out the model, its requirements (software and hardware) and then do the coding part, I plan to soon make it on production. TF helps to a great deal in doing complex coding in just few lines. |
st205674 | Kind of an old post, but I like the idea, so let me try to give it some love
Yes, at my company I am working on bringing a fire-prediction ML solution into production. The prediction service is part of an IoT implementation in a large forestry area. We gather on the ground data with IoT sensors and collect the data. The prediction service runs as its own docker container which retrieves data from the Prometheus DB and runs it through a classifier and regressor. The latter two are trained by the training container, once every few months with updated data.
The results are shown to the customer (government forestry org.) by means of Grafana Dashboard.
All in all, training the models was 20% of the work, engineering it into production took way more effort. |
st205675 | TimoKer:
All in all, training the models was 20% of the work, engineering it into production took way more effort.
Thanks TimoKer! I think that the division of the work that you outlined is all too common, especially for the first model that you put into production. I’m assuming that was the case for you? |
st205676 | Yes indeed you are right. Although I can imagine the production side becomes much more complex for larger scale deployments. So even though one gets better in navigating the deployment side of ML, as the applications get more scale and complexity, so does deployment. Therefore, perhaps the 80% remains a good rough estimate? What do you think, as you have more experience with complex deployments at large scale? |
st205677 | XNNPACK backend is supported and enabled via a build-time opt-in mechanism. Can be enabled by default? |
st205678 | I am trying to prune my pre-trained model and for that It is mandatory to use UpdatePruningStep() in the callbacks while fitting the model. When I do so, I am getting the error as follows -
ValueError: Error processing property ‘_dropout_mask_cache’ of <ContextValueCache at 0x1c2604d02e0>
code for pruning is as follows -
import tempfile
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow_model_optimization.sparsity import keras as sparsity

num_train_samples = X_train.shape[0]
batch_size = 128
epochs = 4
end_step = np.ceil(1 * num_train_samples / batch_size).astype(np.int32) * epochs
print(end_step)
new_pruning_params = {
    'pruning_schedule': sparsity.PolynomialDecay(initial_sparsity=0.50,
                                                 final_sparsity=0.90,
                                                 begin_step=0,
                                                 end_step=end_step,
                                                 frequency=100)
}
new_pruned_model = sparsity.prune_low_magnitude(loaded_model, **new_pruning_params)
new_pruned_model.summary()
new_pruned_model.compile(loss=tf.keras.losses.categorical_crossentropy,
                         optimizer='adam',
                         metrics=['accuracy'])
logdir = tempfile.mkdtemp()
callbacks = [sparsity.UpdatePruningStep(),
             sparsity.PruningSummaries(log_dir=logdir),
             keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)]
new_pruned_model.fit(X_train, Y_train.values, batch_size=batch_size, epochs=epochs, verbose=1,
                     validation_data=(X_test, Y_test.values), callbacks=callbacks)
score = new_pruned_model.evaluate(X_test, Y_test.values, verbose=0)
print('Test Loss : ', score[0])
print('Test Accuracy: ', score[1]) |
st205679 | This seems similar to Error processing property '_dropout_mask_cache' when using PrunableLayer with DropoutRNNCellMixin · Issue #753 · tensorflow/model-optimization · GitHub 22
Does the fix there work for you? |
st205680 | I am looking for an alternative for QNNPACK to execute quantized DNN effectively on CPU, ARM-based devices. In other words, what is the recommended approach to effectively interpret quantized NN? Is there QNNPACK available for TensorFlow Lite?
Please can I have some examples?
Thank you. |
st205681 | Hi all,
It is my first topic in this forum.
I m trying to follow this → GitHub - TannerGilbert/Tensorflow-2-Object-Counting: Cumulative object counting with Tensorflow 2 7
where it uses
“tensorflow_cumulative_object_counting.py” and “tflite_cumulative_object_counting.py” for counting pedestrians. “tensorflow_cumulative_object_counting.py” works just fine, But when try to use “tflite_cumulative_object_counting.py” it throws (ImportError: generic_type: type “InterpreterWrapper” is already registered!) error. Since, Im new to tensorflow I couldn’t figure it out. What might be the reason for this error? Any help would be appreciated. |
st205682 | I have one question about markdown in git, and how it rendered in website.
for example,
github.com
tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/operation_semantics.md#allgather
# Operation Semantics
The following describes the semantics of operations defined in the
[`XlaBuilder`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h)
interface. Typically, these operations map one-to-one to operations defined in
the RPC interface in
[`xla_data.proto`](https://www.tensorflow.org/code/tensorflow/compiler/xla/xla_data.proto).
A note on nomenclature: the generalized data type XLA deals with is an
N-dimensional array holding elements of some uniform type (such as 32-bit
float). Throughout the documentation, *array* is used to denote an
arbitrary-dimensional array. For convenience, special cases have more specific
and familiar names; for example a *vector* is a 1-dimensional array and a
*matrix* is a 2-dimensional array.
## AfterAll
See also
[`XlaBuilder::AfterAll`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_builder.h).
The source code is:
| Arguments        | Type                 | Semantics                   |
| ---------------- | -------------------- | --------------------------- |
| operand          | XlaOp                | Array to concatenate across |
:                  :                      : replicas.                   :
| all_gather_dim   | int64                | Concatenation dimension.    |
| replica_groups   | vector of vectors of | Groups between which the    |
:                  : int64                : concatenation is performed. :
| channel_id       | optional int64       | Optional channel ID for     |
:                  :                      : cross-module communication. :
Rendered in GitHub it is wrong:
[screenshot of the incorrectly rendered table on GitHub, 772×480]
but rendered on the website it is right.
I read Contribute to the TensorFlow documentation,
but found nothing about the format.
Can anyone help resolve my confusion?
How do I follow the Markdown format for tables?
Is there any formatting tool?
Any suggestions are welcome,
thanks in advance. |
st205683 | Hello. Tables aren’t part of the original Markdown spec so different systems implemented their own syntax. For the TensorFlow docs, we optimize viewing for the webpage and use GitHub as a preview for convenience—but there are some Markdown discrepancies. Some pages can also be viewed in Colab.
Since the webpage renders correctly, please use that Markdown syntax for the table on that page. And, depending on the data and format, you might just prefer a regular ole’ HTML table that renders everywhere. We use those, too.
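As a sketch of that fallback (a hypothetical two-column example, not taken from any real doc page), an HTML table renders the same on GitHub and on the website:

<table>
  <tr><th>Arguments</th><th>Semantics</th></tr>
  <tr><td>operand</td><td>Array to concatenate across replicas.</td></tr>
</table>

The tradeoff is a more verbose source, but no renderer-specific table syntax. |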
st205684 | Hello billy
thanks for your reply.
I am still confused after read your message.
I want to contribute to tensorflow doc, but I have no idea about the markdown format in github.
there are 2 questions confused me.
1
the tensorflow website use html.
the doc in github use markdown.
who/how convert markdown to html?
I followed below commands, I get new files with suffix .md, but the .md file is html format actually.
and the .md file (html format) is different with the source code of website.
git clone https://github.com/tensorflow/tensorflow tensorflow
cd tensorflow/tensorflow/tools/docs
pip install tensorflow
python generate2.py --output_dir=/tmp/out
for example:
TensorFlow
tf.xla.experimental.jit_scope | TensorFlow Core v2.5.0
Enable or disable JIT compilation of operators within the scope.
This page’s source code is not the same as what is generated by the above commands.
2
If I contribute to the docs, I have to follow the Markdown format and style,
but I didn’t find any documented Markdown rules for the TensorFlow GitHub repo.
Is there any formatter for Markdown files to run before a git push or PR on GitHub?
BR.
Mingting |
st205685 | Hi billy
I find there is _toc.yaml file in code. After investigate the _toc.yaml file.
I guess tensorflow website is generated with docfx tool, if this is true, this might answer my question#1.
I tried docfx, I can convert md to html, but I cannot get right table in html.
I guess, is there any privately markdown table extension used in google?
BR.
Mingting |
st205686 | The Markdown-to-HTML conversion for the website uses an internal system. As noted, there are a few discrepancies between the Markdown syntaxes, but the GitHub Markdown previewer and the Colab (Jupyter) preview will get you mostly there.
The generate2.py config script is used to generate the TensorFlow API reference documentation from Python docstrings. This Markdown should be viewable in GitHub and then we convert it to HTML in our docs publishing pipeline. It uses the api_generator module in our tensorflow-docs package. It’s actually a pretty nice Python API documentation system (and not TensorFlow specific), but not really documented. Many prefer it to Sphinx. But, yeah, this is for API docs only and not narrative docs. |
st205687 | Hello,
I want to run a 1D CNN on some time series data and have questions at several steps in the process. I include code and info on the data below, and am seeking any assistance with understanding how to set the shape of the input training data and the test data.
I also seek info on how to run the model and predict classes using the test data with 10 examples of each of the 10 classes and look at the predicted classes.
The code below runs, but it does not seem to work like I would expect, and I cannot interpret the results of predictions to tell if I have it configured properly.
I have questions inserted at several steps below.
Any suggestions greatly appreciated.
First, some info on my data:
dftrainin and dftestin are the train and test data that come from .CSV files with
column 1 = sampleID (text) to identify the examples / rows
columns 2 to 128 = time series data (floating point numbers, scaled 0 to 100)
column 129 = labels with column name ch_id
the 10 class labels (ch_id) are coded 0 to 9
each of the time series examples (rows) belong to one of 10 classes (ch_id)
in the training data there are 200 rows which are 20 examples each of 10 classes
in the test data there are 100 rows which are 10 examples each of 10 classes
# after importing many tf, keras, and other libraries / utils
# work on a copy of the input data
dftrain = dftrainin
dftest = dftestin
# pop off the labels
train_labels = dftrain.pop('ch_id')
test_labels = dftest.pop('ch_id')
# questions about the shape of the input training data and the test data?
# how to ID and format the 20 example rows for each of the 10 classes?
# how to ID and format the 10 example rows for each of the 10 classes?
# try using the 3rd dim as classes
dftrain_rs = tf.reshape(dftrain, [20, 127, 10])
dftest_rs = tf.reshape(dftest, [10, 127, 10])
# one-hot encode the labels
train_hot = np_utils.to_categorical(train_labels)
test_hot = np_utils.to_categorical(test_labels)
# set up the model
# this section runs without error
num_classes = 10
model = Sequential([
    layers.Conv1D(filters=64, kernel_size=8, activation='relu', input_shape=(127, 10)),
    layers.Conv1D(filters=64, kernel_size=8, activation='relu'),
    layers.Dropout(0.5),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(96, activation='relu'),
    layers.Dense(num_classes, activation='softmax')
])
# compile the model
# this section runs without error
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# train the model
# use 20% for validation via an argument passed to model.fit()
# this section runs without error, but the accuracy goes to 1 after 2 epochs,
# which seems too fast to get to 100% accuracy
epochs = 20
history = model.fit(
    dftrain_rs, train_hot,
    validation_split=0.2,
    epochs=epochs
)
# Visualize training results
# Create plots of loss and accuracy on the training and validation sets.
# this section runs without errors, but the graphs don't look like typical
# training curves
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
# run the model for the test data
# how to test with 10 examples of each of 10 classes and get predicted classes?
test_predictions = model.predict(dftest_rs).flatten()
test_scores = tf.nn.softmax(test_predictions)
# how to see the predicted classes?
print(test_scores)
print(np.argmax(test_scores))
st205688 | I have a sequential keras model using dense and lstm layers. After training the model, I saved in .h5 format. I am trying to convert this model to a tensorflow lite model with 8-bit integer quantization to run it on the Coral Dev board. I can perform the conversion to a lite model just fine, but when i try to quantize i get the “ValueError: Failed to parse the model: Only models with a single subgraph are supported, model had 3 subgraphs.”.
System Information:
Ryzen 5 3600
AMD 5700xt
Tensorflow version: TF nightly
Model design:
self.model = tf.keras.Sequential([
    InputLayer(input_shape=(WINDOW_SIZE // WINDOW_STEP, 1), name='input'),
    Dense(DENSE_LAYERS, activation='relu'),
    LSTM(LSTM_LAYERS),
    Dense(len(CLASSIFICATION.keys()), activation='softmax', name='output')
])
self.model.compile(optimizer='adam',
                   loss='categorical_crossentropy',
                   metrics=METRICS)
To reproduce the error:
Clone GitHub - jboothby/LSTM_Error_Report and run convert_to_lite.py
I used the example code from: Post-training integer quantization | TensorFlow Lite for integer-only quantization. My representative data is included in the .csv file in the repository.
The error seems to be coming from the representative dataset line. If I change the current code
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = tf.lite.RepresentativeDataset(representative_data_gen)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
to
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
Then it executes fine, but doesn’t do full integer quantization.
This is my first time posting a help question on this forum, so please let me know what else I can add to clarify. |
st205689 | This really seems:
github.com/tensorflow/tensorflow
Failed to convert weights to 8 bit precision: "Quantize weights tool only supports tflite models with one subgraph"
opened Dec 18, 2019 · closed May 19, 2021 · rutrilla · labels: TF 2.1, comp:lite, stalled, stat:awaiting response, type:feature
### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: Yes
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Google Colab (GPU)
- **TensorFlow installed from (source or binary)**: Binary
- **TensorFlow version (use command below)**: 2.1.0-dev20191217
- **Python version**: 3
- **Exact command to reproduce**:
```bash
!pip install tf-nightly
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense, Dropout
model = Sequential()
model.add(GRU(100, activation='relu', return_sequences=False, input_shape=(128,2)))
model.add(Dropout(0.2))
model.add(Dense(11, activation='softmax'))
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
tflite_model = converter.convert()
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model_quant = converter.convert()
```
### Error message
```bash
E tensorflow/lite/tools/optimize/quantize_weights.cc:351] Quantize weights tool only supports tflite models with one subgraph.
```
### Describe the problem
First, I used the new converter (with the experimental flag converter.experimental_new_converter = True) to convert an RNN model from TensorFlow to TensorFlow Lite's flat buffer format, as suggested in the issue #32608.
This works correctly, but then when I try to perform a post-training weight quantization, I got an error saying that the quantize weights tool only supports tflite models with one subgraph.
Is there a problem with my procedure? Or is that feature not yet supported? In that case, I would like to request this feature.
Thanks in advance for your help.
### Source code / logs
The attached file can be used to reproduce the error with a trained model (.h5).
[GRU_1L.zip](https://github.com/tensorflow/tensorflow/files/3974453/GRU_1L.zip) |
st205690 | I suggest you to subscribe to:
[RNN] Rolled SimpleRNN and GRU TFLite conversion · Issue #50226 · tensorflow/tensorflow · GitHub 55 |
st205691 | Thank you for your responses. If I’m understanding this correctly, there just isn’t support for quantizing LSTM models right now. I’ll subscribe to this and watch for developments. |
st205692 | Hello, I’m running Tensorflow 2.4.1 on a Windows 10 computer (Python 3.8.5 with Anaconda). I’m using a NVIDIA GeForce RTX 2060 GPU with TF. I’m having problems trying to launch a CNN model using the ImageDataGenerator feeding from a Pandas dataframe. After defining the data generator and the model, I get the following error when running model.fit: “Image transformations require SciPy. Install SciPy.” However, when I can confirm that Scipy is installed by running “import scipy” without errors.
The definition of the image generator is as follows:
train_datagen = ImageDataGenerator(horizontal_flip=False,
vertical_flip=False,
rescale=1/255.0).flow_from_dataframe(dataframe=X,
x_col='image_name',
y_col='response',
shuffle=False,
directory=src_path,
target_size=(128, 128),
class_mode=None
)
And the error comes when running:
model.fit(train_datagen,
epochs=5)
Please help! |
st205693 | PS. I believe the error results from my GPU configuration. I am running a CPU version of the code and training is running (although extremely slow). I had to change my imagegenerator declaration to:
class_mode='raw',
as I am running a regression model.
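For anyone hitting the same thing, the adjusted generator then looks roughly like this (a sketch of my working CPU setup; class_mode='raw' is the key change so the float targets pass through unchanged):

train_datagen = ImageDataGenerator(rescale=1/255.0).flow_from_dataframe(
    dataframe=X,
    x_col='image_name',
    y_col='response',
    directory=src_path,
    target_size=(128, 128),
    class_mode='raw')  # 'raw' hands y_col values to the model as-is

With that in place, the fit call runs — just very slowly on CPU. |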
st205694 | It looks like you’ve worked it out. If you need help with getting the code to train faster with your RTX 2060, let us know. |
st205695 | If you can share some of the code, users here may be able to see what can be optimized |
st205696 | I’ve got a subclassed model which I’m trying to run model.save() on, but I get the following error:
Model <__main__.ReId object at 0x7fe554658590> cannot be saved because the input shapes have not been set. Usually, input shapes are automatically determined from calling .fit() or .predict(). To manually set the shapes, call model.build(input_shape).
This is despite explicitly calling model.build(input_shape=(256,256,3)) to set the input shape.
I’ve realised that this only happens when I use my custom BatchDataset. When I run model.fit() on a dataset generated by an ImageDataGenerator the model saves normally.
The full code is available at the link below:
https://vehiclereidjupyternotebook.s3.eu-west-2.amazonaws.com/broken_saving_tf.html |
st205697 | Do you have already tried with:
github.com/tensorflow/tensorflow
`tf.keras.Model.save` does not support subclassed model when saving model as SavedModel format
opened Jul 26, 2019 · closed Aug 10, 2019 · zakizhou · labels: TF 2.0, comp:keras, type:support
*Please make sure that this is a bug. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template*
**System information**
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): NA
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: None
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): 2.0.0-dev20190724
- Python version: 3.6
- Bazel version (if compiling from source): None
- GCC/Compiler version (if compiling from source): None
- CUDA/cuDNN version: None
- GPU model and memory: None
You can collect some of this information using our environment capture
[script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh)
You can also obtain the TensorFlow version with: 1. TF 1.0: `python -c "import
tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"` 2. TF 2.0: `python -c
"import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"`
**Describe the current behavior**
`tf.keras.Model.save` **DOES NOT** support subclassed model when saving model as SavedModel format
**Describe the expected behavior**
`tf.keras.Model.save` **SHOULD** support subclassed model when saving model as SavedModel format
**Code to reproduce the issue**
```
import tensorflow as tf

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.d = tf.keras.layers.Dense(2)

    @tf.function
    def call(self, x, training=True, mask=None):
        return self.d(x)

model = Model()
model(tf.random.normal((2, 3)))
# next line raises errors
model.save("save/model", save_format="tf")
```
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
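In particular, the repro above hints at the usual workaround: run one real forward pass (with a batch dimension) before saving, so the input shapes get recorded. A minimal sketch, assuming the (256, 256, 3) input from the question:

import tensorflow as tf

model = ReId()                              # the subclassed model from the question
dummy = tf.random.normal((1, 256, 256, 3))  # note the leading batch dimension
_ = model(dummy)                            # one call records the input shapes
model.save('saved_reid')                    # SavedModel format

Note that model.build() would also need the batch dimension, i.e. (None, 256, 256, 3) rather than (256, 256, 3). |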
st205698 | history = model.fit_generator(train_generator, epochs=epochs, steps_per_epoch=train_steps, verbose=1, callbacks=[checkpoint], validation_data=val_generator, validation_steps=val_steps)
from numpy import array
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

def create_sequences(tokenizer, max_length, desc_list, photo, vocab_size):
X1, X2, y = list(), list(), list()
for desc in desc_list:
seq = tokenizer.texts_to_sequences([desc])[0]
for i in range(1, len(seq)):
in_seq, out_seq = seq[:i], seq[i]
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
X1.append(photo)
X2.append(in_seq)
y.append(out_seq)
return array(X1), array(X2), array(y)
def data_generator(descriptions, photos, tokenizer, max_length, imgsIds, vocab_size):
while 1:
for ind in range(len(imgsIds)):
photo = photos[ind]
key = imgsIds[ind]
desc_list = descriptions[str(key)]
in_img, in_seq, out_word = create_sequences(
tokenizer, max_length, desc_list, photo, vocab_size)
yield [in_img, in_seq], out_word
I got:
Failed to convert a NumPy array to a Tensor (Unsupported object type dict).
If there is anything I should add, please comment … Thanks
Traceback (most recent call last):
File "fit.py", line 271, in <module>
main(sys.argv)
File "fit.py", line 268, in main
fit_model(train, train_descriptions, train_rnn_input, val, val_descriptions, val_rnn_input)
File "fit.py", line 255, in fit_model
history = model.fit_generator(train_generator, epochs=epochs, steps_per_epoch=train_steps, verbose=1, callbacks=[checkpoint], validation_data=val_generator, validation_steps=val_steps)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1479, in fit_generator
initial_epoch=initial_epoch)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 872, in fit
return_dict=True)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1057, in evaluate
model=self)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1112, in __init__
model=model)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 775, in __init__
peek = _process_tensorlike(peek)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1013, in _process_tensorlike
inputs = nest.map_structure(_convert_numpy_and_scipy, inputs)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 617, in map_structure
structure[0], [func(*x) for x in entries],
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/util/nest.py", line 617, in <listcomp>
structure[0], [func(*x) for x in entries],
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/data_adapter.py", line 1008, in _convert_numpy_and_scipy
return ops.convert_to_tensor(x, dtype=dtype)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1341, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/framework/tensor_conversion_registry.py", line 52, in _default_conversion_function
return constant_op.constant(value, dtype, name=name)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 262, in constant
allow_broadcast=True)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 270, in _constant_impl
t = convert_to_eager_tensor(value, ctx, dtype)
File "/path/.local/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type dict).
2021-06-27 04:46:22.936001: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]] |
st205699 | Why don’t you check the type of X1, X2, y?
As the error message, I think tensorflow tried to convert a numpy array which is dict() in fact to tensor, so failed converting.
If you don’t make any function array() besides, how about using numpy.array()? |
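As a quick sketch of that check (assuming the generator above is in scope), pulling one batch and printing the types should reveal where the dict sneaks in:

inputs, out_word = next(data_generator(descriptions, photos, tokenizer,
                                       max_length, imgsIds, vocab_size))
in_img, in_seq = inputs
for name, arr in [('in_img', in_img), ('in_seq', in_seq), ('out_word', out_word)]:
    print(name, type(arr), getattr(arr, 'dtype', None))  # an object dtype hints at dict elements

If one of them prints an object dtype, the offending entries are probably coming from photos or descriptions. |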