id | text |
---|---|
st205800 | Thanks for your reply. tf.sparse only supports the COO format. However, I need the CSR format. |
st205801 | If you need CSR you can take a look to this tests:
github.com
tensorflow/tensorflow/blob/master/tensorflow/python/kernel_tests/linalg/sparse/csr_sparse_matrix_ops_test.py
"""CSR sparse matrix tests."""
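For the conversion itself (COO SparseTensor to CSR), here is a minimal sketch using the raw op that these tests exercise; note this op is experimental and its API may change:
import tensorflow as tf

st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[1.0, 2.0],
                            dense_shape=[2, 3])
# COO SparseTensor -> CSR sparse matrix handle (a variant tensor).
csr = tf.raw_ops.SparseTensorToCSRSparseMatrix(
    indices=st.indices, values=st.values, dense_shape=st.dense_shape) |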
st205802 | There is a machine learning challenge on "Machine Learning Based Feature Extraction of Electrical Substations from Satellite Data Using Open-Source Tools". Details are available at
http://ietcint.com/user/electrical_substation_detection |
st205803 | The deadline is approaching. Please download the datasets and submit your outputs. Need more info? Please message me. |
st205804 | The official webpage of Neural Structured Learning provides three tutorials, all of which focus only on classification problems. So, can NSL and its graph regularization also be applied to regression problems? If so, is there an example showing how that works? |
st205805 | Yes, check:
github.com/tensorflow/neural-structured-learning
NSL for Regression
opened
Apr 8, 2020
closed
Apr 11, 2020
santhoshkolloju
question
I was able to integrate adversarial loss into my TensorFlow model. My input data has a combination of categorical and continuous attributes.
Is there any way to specify not to perturb categorical data while creating adversarial examples?
```
import numpy as np
import tensorflow as tf
import neural_structured_learning as nsl

x = tf.placeholder(tf.float32, shape=(None, 20))
y = tf.placeholder(tf.float32, shape=(None, 1))

# Build a sample model.
def model(x, y, is_training):
    with tf.variable_scope("regression_model", reuse=tf.AUTO_REUSE) as scope:
        layer0 = tf.layers.Dense(units=64, activation=tf.nn.relu)(x)
        layer1 = tf.layers.Dense(units=128, activation=tf.nn.relu)(layer0)
        output = tf.layers.Dense(units=1)(layer1)
        error = tf.subtract(y, output)
        print(error)
        # Note: without tf.abs this is the mean error, not the mean absolute error.
        loss = tf.reduce_mean(error, axis=0)
        return loss

# Normal loss (intended as mean absolute error).
regular_loss = model(x, y, True)
adv_config = nsl.configs.AdvRegConfig()
adv_input, adv_weights = nsl.lib.gen_adv_neighbor(
    x, regular_loss, config=adv_config.adv_neighbor_config)
adv_loss = model(adv_input, y, True)
overall_loss = regular_loss + 0.2 * adv_loss
optim = tf.train.AdamOptimizer()  # the optimizer was not defined in the original snippet
train_step = optim.minimize(overall_loss)

tf.random.set_random_seed(100)
sess = tf.Session()
init_op = tf.global_variables_initializer()
sess.run(init_op)
X = np.random.random((32, 20))  # batch of 32
Y = np.random.random((32, 1))
sess.run([train_step, adv_loss, regular_loss, overall_loss], feed_dict={x: X, y: Y})
```
Medium – 15 Apr 20
Neural Structured Learning
Neural structured learning is a framework used for training neural networks with structured signals. This can be applied to NLP, vision or… |
st205806 | Hello. Trying to learn TensorFlow, so apologies if I'm missing something obvious. I can't seem to make predictions with a loaded model. My original model seems to work just fine and I can get a prediction without issues. However, saving and loading that same model doesn't seem to do what I would expect.
Predicting with the original model gives me appropriate values while the loaded model gives me an error:
model.save("Model")
s = testFrame.head(1).drop(['classification', 'cloudProb'], axis=1).loc[0].to_dict()
loadedModel = tf.saved_model.load('Model')
model(s)
loadedModel(s)
The above code works when passing 's' into model but fails for loadedModel.
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
Positional arguments (2 total):
* {'band2': 346.0, 'band3': 625.0, 'band4': 443.0, 'band8': 2688.0}
* False
Keyword arguments: {}
Expected these arguments to match one of the following 4 option(s):
Option 1:
Positional arguments (2 total):
* {'band4': TensorSpec(shape=(None, 1), dtype=tf.float32, name='band4'), 'band8': TensorSpec(shape=(None, 1), dtype=tf.float32, name='band8'), 'band3': TensorSpec(shape=(None, 1), dtype=tf.float32, name='band3'), 'band2': TensorSpec(shape=(None, 1), dtype=tf.float32, name='band2')}
* False
Keyword arguments: {}
… it goes on to give me all the options. What am I missing here? Thanks for the help. |
st205807 | Hi Barrett,
Sorry for the late answer. Thanks @lgusm for the alert :).
Quick answer
Can you try applying expand_dims on the features before calling the model?
Full example:
model.save("/tmp/my_model")
loaded_model = tf.keras.models.load_model("/tmp/my_model")
for features,label in test_ds:
# Add a batch dimension.
features = tf.nest.map_structure(lambda v : tf.expand_dims(v,axis=0), features)
# Make sure the model is feed with rank 2 features.
features = tf.nest.map_structure(lambda v : tf.expand_dims(v,axis=1), features)
print(loaded_model(features))
Explanations
There are two separate issues:
Part 1
In your example {'band2': 346.0, 'band3': 625.0, 'band4': 443.0, 'band8': 2688.0}, the values are of rank 0, i.e. single values. Instead, models should be fed with a batch of examples, so you need to make sure the examples are of rank >= 1. The result of tf.expand_dims(v, axis=0) will be: {'band2': [346.0], 'band3': [625.0], 'band4': [443.0], 'band8': [2688.0]}.
If you are using the tf.data.Dataset API, you can also use the batch method. The pd_dataframe_to_tf_dataset method also does that for you.
Part 2
Internally, Keras reshapes all the features to rank 2. However, when calling the model directly (i.e. model(s)), this logic is skipped, which creates a shape mismatch. In your error, you can see that tensors of rank 2 are expected, e.g. TensorSpec(shape=(None, 1)). If the model was just trained (model in your example), Keras is able to resolve the issue. But if the model was serialized and deserialized (loadedModel in your example), it is not.
The second call, tf.expand_dims(v, axis=1), will reshape your features as follows: {'band2': [[346.0]], 'band3': [[625.0]], 'band4': [[443.0]], 'band8': [[2688.0]]}
This is of course not user friendly :), and we are working with Keras, hoping to solve the problem soon. In the meantime, users have to make sure that direct model calls are fed inputs of rank 2.
Note: model.predict and model.evaluate do not suffer from this issue i.e. loaded_model.predict(test_ds) works fine in all cases. |
st205808 | Hey, thanks for the comprehensive answer! Okay so rank 2 is key - got it. I played around a bit:
print(testSet)
for features,label in testSet:
features = tf.nest.map_structure(lambda v: tf.expand_dims(v, axis=0), features)
features = tf.nest.map_structure(lambda v: tf.expand_dims(v, axis=1), features)
print(features)
Gives me
<BatchDataset shapes: ({band2: (None,), band3: (None,), band4: (None,), band8: (None,)}, (None,)), types: ({band2: tf.float64, band3: tf.float64, band4: tf.float64, band8: tf.float64}, tf.int64)>
{'band2': <tf.Tensor: shape=(1, 1, 64), dtype=float64, numpy=
array([[[260., 261., 237., 242., 227., 227., 207., 217., 227., 238.,
225., 211., 226., 237., 245., 216., 218., 200., 247., 284.,
223., 209., 199., 200., 231., 216., 222., 195., 213., 213.,
197., 253., 261., 197., 200., 210., 208., 223., 244., 199.,
208., 199., 161., 147., 124., 120., 135., 124., 144., 160.,
141., 111., 145., 134., 122., 116., 155., 163., 164., 151.,
180., 153., 123., 131.]]])>, 'band3': <tf.Tensor: shape=(1, 1, 64), dtype=float64, numpy=
array([[[569., 555., 502., 507., 514., 533., 512., 491., 524., 517.,
513., 509., 505., 509., 542., 516., 527., 507., 518., 554.,
527., 502., 504., 516., 538., 561., 568., 549., 577., 517.,
531., 549., 575., 523., 472., 506., 509., 538., 548., 524.,
527., 504., 359., 294., 343., 345., 328., 299., 339., 320.,
285., 322., 338., 318., 286., 329., 336., 374., 379., 381.,
397., 342., 339., 321.]]])>, 'band4': <tf.Tensor: shape=(1, 1, 64), dtype=float64, numpy=
array([[[449., 437., 409., 414., 422., 376., 343., 349., 364., 376.,
367., 379., 383., 394., 400., 381., 342., 339., 386., 393.,
370., 343., 302., 295., 301., 321., 314., 279., 311., 285.,
321., 361., 344., 318., 268., 287., 307., 350., 377., 339.,
337., 332., 239., 200., 224., 202., 198., 188., 219., 215.,
219., 212., 229., 226., 200., 219., 240., 248., 244., 270.,
266., 231., 233., 213.]]])>, 'band8': <tf.Tensor: shape=(1, 1, 64), dtype=float64, numpy=
array([[[2166., 2166., 2228., 2220., 2222., 2412., 2632., 2498., 2400.,
2448., 2466., 2498., 2556., 2522., 2442., 2442., 2556., 2534.,
2460., 2498., 2570., 2652., 2802., 3034., 3040., 2962., 3008.,
3090., 3072., 2872., 2834., 2752., 2630., 2532., 2850., 3068.,
2920., 2514., 2382., 2436., 2498., 2346., 2092., 2474., 2890.,
2906., 2790., 2568., 2712., 2878., 2690., 2712., 2702., 2688.,
2624., 2764., 2900., 2916., 2684., 2604., 2894., 2828., 2754.,
2614.]]])>}
And the same error as before. BUT if I comment out the expand_dims call with axis=1, I actually get a result but it’s only 1 result for the entire set. I feel like I should be getting more. OR I’ve somehow trained my model with the completely wrong data…
for features,label in testSet:
features = tf.nest.map_structure(lambda v: tf.expand_dims(v, axis=0), features)
# features = tf.nest.map_structure(lambda v: tf.expand_dims(v, axis=1), features)
print(features)
print(loadedModel.predict(features))
{'band2': <tf.Tensor: shape=(1, 64), dtype=float64, numpy=
array([[260., 261., 237., 242., 227., 227., 207., 217., 227., 238., 225.,
211., 226., 237., 245., 216., 218., 200., 247., 284., 223., 209.,
199., 200., 231., 216., 222., 195., 213., 213., 197., 253., 261.,
197., 200., 210., 208., 223., 244., 199., 208., 199., 161., 147.,
124., 120., 135., 124., 144., 160., 141., 111., 145., 134., 122.,
116., 155., 163., 164., 151., 180., 153., 123., 131.]])>, 'band3': <tf.Tensor: shape=(1, 64), dtype=float64, numpy=
array([[569., 555., 502., 507., 514., 533., 512., 491., 524., 517., 513.,
509., 505., 509., 542., 516., 527., 507., 518., 554., 527., 502.,
504., 516., 538., 561., 568., 549., 577., 517., 531., 549., 575.,
523., 472., 506., 509., 538., 548., 524., 527., 504., 359., 294.,
343., 345., 328., 299., 339., 320., 285., 322., 338., 318., 286.,
329., 336., 374., 379., 381., 397., 342., 339., 321.]])>, 'band4': <tf.Tensor: shape=(1, 64), dtype=float64, numpy=
array([[449., 437., 409., 414., 422., 376., 343., 349., 364., 376., 367.,
379., 383., 394., 400., 381., 342., 339., 386., 393., 370., 343.,
302., 295., 301., 321., 314., 279., 311., 285., 321., 361., 344.,
318., 268., 287., 307., 350., 377., 339., 337., 332., 239., 200.,
224., 202., 198., 188., 219., 215., 219., 212., 229., 226., 200.,
219., 240., 248., 244., 270., 266., 231., 233., 213.]])>, 'band8': <tf.Tensor: shape=(1, 64), dtype=float64, numpy=
array([[2166., 2166., 2228., 2220., 2222., 2412., 2632., 2498., 2400.,
2448., 2466., 2498., 2556., 2522., 2442., 2442., 2556., 2534.,
2460., 2498., 2570., 2652., 2802., 3034., 3040., 2962., 3008.,
3090., 3072., 2872., 2834., 2752., 2630., 2532., 2850., 3068.,
2920., 2514., 2382., 2436., 2498., 2346., 2092., 2474., 2890.,
2906., 2790., 2568., 2712., 2878., 2690., 2712., 2702., 2688.,
2624., 2764., 2900., 2916., 2684., 2604., 2894., 2828., 2754.,
2614.]])>}
[[0. 0. 0. 0.99666584 0.00333333]]
What’s odd about this is that only calling expand_dims once seems to give a rank 2 result as I currently see it. At least I’m assuming by the double ‘[[’ in each of those nparrays. Why aren’t I getting 64 results from that input structure? Thanks again |
st205809 | Hello,
I'm working on training a neural network with data from an Elasticsearch cluster. I've followed this tutorial: Streaming structured data from Elasticsearch using Tensorflow-IO,
but I'm getting an error.
python3.9 main.py
Type Age Breed1 Gender Color1 Color2 MaturitySize FurLength Vaccinated Sterilized Health Fee Description PhotoAmt AdoptionSpeed
0 Cat 3 Tabby Male Black White Small Short No No Healthy 100 Nibble is a 3+ month old ball of cuteness. He … 1 2
1 Cat 1 Domestic Medium Hair Male Black Brown Medium Medium Not Sure Not Sure Healthy 0 I just found it alone yesterday near my apartm… 2 0
2 Dog 1 Mixed Breed Male Brown White Medium Medium Yes No Healthy 0 Their pregnant mother was dumped by her irresp… 7 3
3 Dog 4 Mixed Breed Female Black Brown Medium Short Yes No Healthy 150 Good guard dog, very alert, active, obedience … 8 2
4 Dog 1 Mixed Breed Male Black No Color Medium Short No No Healthy 0 This handsome yet cute boy is up for adoption… 3 2
11537 14
Number of training samples: 8075
Number of testing sample: 3462
/home/cdt/.local/lib/python3.9/site-packages/elasticsearch/connection/base.py:208: ElasticsearchWarning: Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See Set up minimal security for Elasticsearch | Elasticsearch Guide [7.13] | Elastic 2 to enable security.
warnings.warn(message, category=ElasticsearchWarning)
deleting the ‘train’ index.
Response from server: {‘acknowledged’: True}
creating the ‘train’ index.
Response from server: {‘acknowledged’: True, ‘shards_acknowledged’: True, ‘index’: ‘train’}
bulk index the data
/home/cdt/.local/lib/python3.9/site-packages/elasticsearch/connection/base.py:208: ElasticsearchWarning: [types removal] Specifying types in bulk requests is deprecated.
warnings.warn(message, category=ElasticsearchWarning)
Errors: False, Num of records indexed: 8075
deleting the ‘test’ index.
Response from server: {‘acknowledged’: True}
creating the ‘test’ index.
Response from server: {‘acknowledged’: True, ‘shards_acknowledged’: True, ‘index’: ‘test’}
bulk index the data
Errors: False, Num of records indexed: 3462
Connection successful: http://172.16.238.221:9200/_cluster/health
Connection successful: http://172.16.238.221:9200/_cluster/health
{'Type': b'Cat', 'Age': 2, 'Breed1': b'Domestic Short Hair', 'Gender': b'Male', 'Color1': b'Black', 'Color2': b'White', 'MaturitySize': b'Small', 'FurLength': b'Short', 'Vaccinated': b'No', 'Sterilized': b'No', 'Health': b'Healthy', 'Fee': 0, 'PhotoAmt': 1}
Traceback (most recent call last):
File "/home/cdt/stream/elk/main.py", line 149, in
File "/home/cdt/stream/elk/main.py", line 108, in get_normalization_layer
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/keras/engine/base_preprocessing_layer.py", line 242, in adapt
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/eager/def_function.py", line 889, in call
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/eager/def_function.py", line 917, in _call
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/eager/function.py", line 3023, in call
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/eager/function.py", line 1960, in _call_flat
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/eager/function.py", line 591, in call
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensorflow.python.framework.errors_impl.FailedPreconditionError: Corrupted response from the server null
[[{{node IO>ElasticsearchReadableNext}}]]
[[IteratorGetNext]] [Op:__inference_adapt_step_829]
Function call stack:
adapt_step
Can you help me? |
st205810 | import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow_io as tfio
import tensorflow as tf
from tensorflow.keras import layers

dataset = tfio.experimental.elasticsearch.ElasticsearchIODataset(
    nodes=["XXX.XXX.XXX.XXX:9200"],
    index="test")
dataset = dataset.enumerate(start=0)
for element in dataset.as_numpy_iterator():
    print(element)
There are 3462 records, and this code works until record 3379:
(3379, {'Type': b'Cat', 'Age': 4, 'Breed1': b'Domestic Medium Hair', 'Gender': b'Female', 'Color1': b'Black', 'Color2': b'No Color', 'MaturitySize': b'Medium', 'FurLength': b'Medium', 'Vaccinated': b'No', 'Sterilized': b'No', 'Health': b'Healthy', 'Fee': 0, 'PhotoAmt': 2, 'target': 1})
Traceback (most recent call last):
File "/home/cdt/temp/elk.py", line 13, in
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4194, in next
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 761, in next
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 744, in _next_internal
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 2728, in iterator_get_next
File "/home/cdt/.local/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 6897, in raise_from_not_ok_status
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.FailedPreconditionError: Corrupted response from the server null
[[{{node IO>ElasticsearchReadableNext}}]] [Op:IteratorGetNext] |
st205811 | I have a dataset which is a big matrix of shape (100 000, 2 000).
I would like to train the neural network with all the possible sliding windows/submatrices of shape (16, 2000) of this big matrix.
I use:
from skimage.util.shape import view_as_windows
A.shape # (100000, 2000) ie 100k x 2k matrix
X = view_as_windows(A, (16, 2000)).reshape((-1, 16, 2000, 1))
X.shape # (99985, 16, 2000, 1)
...
model.fit(X, Y, batch_size=4, epochs=8)
Unfortunately, this leads to a memory problem:
W tensorflow/core/framework/allocator.cc:122] Allocation of … exceeds 10% of system memory.
This is normal, since X has ~ 100k * 16 * 2k coefficients, i.e. more than 3 billion coefficients!
But in fact, it is a waste of memory to load X in memory because it is highly redundant: it is made of sliding windows of shape (16, 2000) over A.
Question: how to train a neural network with input being all sliding windows of width 16 over a 100k x 2k matrix, without wasting memory?
The documentation of skimage.util.view_as_windows 3 states indeed that it’s costly in memory:
One should be very careful with rolling views when it comes to memory usage. Indeed, although a ‘view’ has the same memory footprint as its base array, the actual array that emerges when this ‘view’ is used in a computation is generally a (much) larger array than the original, especially for 2-dimensional arrays and above.
For example, let us consider a 3-dimensional array of size (100, 100, 100) of float64. […] the hypothetical size of the rolling view (if one was to reshape the view for example) would be 8 * (100 - 3 + 1)^3 * 3^3, which is about 203 MB! The scaling becomes even worse as the dimension of the input array becomes larger.
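One lazy alternative is to keep the matrix in memory once and slice each window out inside tf.data. A minimal sketch (the variable names and the missing label pairing are illustrative):
import numpy as np
import tensorflow as tf

A = np.random.rand(100000, 2000).astype(np.float32)  # stand-in for the real matrix
window = 16
A_t = tf.constant(A)                 # stored once; windows are sliced out on demand
n_windows = A.shape[0] - window + 1  # 99985

def get_window(i):
    w = tf.slice(A_t, [tf.cast(i, tf.int32), 0], [window, -1])
    return tf.expand_dims(w, -1)     # -> (16, 2000, 1)

ds = (tf.data.Dataset.range(n_windows)
      .map(get_window, num_parallel_calls=tf.data.experimental.AUTOTUNE)
      .batch(4)
      .prefetch(tf.data.experimental.AUTOTUNE))
# Zip with a dataset of labels before calling model.fit(ds, epochs=8).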
Edit: timeseries_dataset_from_array is exactly what I’m looking for except that it works only for 1D sequences:
import tensorflow
import tensorflow.keras.preprocessing
x = list(range(100))
x2 = tensorflow.keras.preprocessing.timeseries_dataset_from_array(x, None, 10, sequence_stride=1, sampling_rate=1, batch_size=128, shuffle=False, seed=None, start_index=None, end_index=None)
for b in x2:
print(b)
and it doesn’t work for 2D arrays:
x = np.array(range(90)).reshape(6, 15)
print(x)
x2 = tensorflow.keras.preprocessing.timeseries_dataset_from_array(x, None, (6, 3), sequence_stride=1, sampling_rate=1, batch_size=128, shuffle=False, seed=None, start_index=None, end_index=None)
# does not work |
st205812 | Hi, I am running a dozen models with different architectures in Google Colab and saving the log files to my Google Drive folder. Now I would like to use TensorBoard to visualize the accuracy and loss of the models, but I can't call it from my Google Drive folder.
According to the sample provided by TensorFlow (Google Colaboratory), the code is shown below:
%tensorboard --logdir logs
But I saved the logs to my Google Drive in case they get lost when Colab crashes or is interrupted.
I tried the following code but it is still not working:
%tensorboard --logdir content/drive/MyDrive/CNN/logs
%tensorboard --logdir 'content/drive/MyDrive/CNN/logs'
Hope someone can help me on this issue |
st205813 | Just to make sure, can you access the logs from a cell, e.g. with !ls on the dir?
Even with the logs on Drive, if you were disconnected from Colab at some point and come back, you might need to re-mount the Drive folder for it to be accessible.
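For reference, a minimal cell that remounts Drive and points TensorBoard at an absolute path (note the leading /content, which the snippets above omit):
from google.colab import drive
drive.mount('/content/drive')

%load_ext tensorboard
%tensorboard --logdir /content/drive/MyDrive/CNN/logs |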
st205814 | hello, I’m having trouble running tensorboard as well:
Is there a way to pause/unpause execution of code cells in google colab? I know we can interrupt and stop code cells but can we pause/unpause?
Maybe I need to give some background on what I’m trying to do:
I have TensorBoard watching a folder on Google Drive (my log dir) for changes to my model's metrics. It's on a separate Colab instance, since I can't run two code cells at the same time in the same Colab notebook (i.e. run TensorBoard in one cell in parallel with another cell running my training and eval script). I realized that having TensorBoard on a separate instance sort of works, but when TensorFlow updates log files, the files aren't updated in real time across Google Drive endpoints. For the latest log dir files to be written and synced, the code has to stop; that's when they sync across Google Drive, and that's when TensorBoard detects a change and sees the files. So this is why I was wondering if I could "stop" and "resume" code cells: just to have those log directory files written and synced, then unpause the code cells and continue training. Or is there another way to do this?
I know that on a local machine this isn’t a problem… just run all the python scripts in parallel… just wondering how to do it in google colab. Thanks for any advice! |
st205815 | As far as I know you can’t pause the cell execution on colab.
Maybe there might be some possible customization here: TensorBoard 12
but if you are using this callback already, it will write the logs every epoch by default. If it’s not being flushed at the pace you’d like, I don’t know if there’s much that can be done. |
st205816 | Hi everyone, I'm a newbie in TensorFlow. Currently I'm using TensorFlow 2.3.0. My project needs to reinitialize the input pipeline regularly, but each time I reinitialize my input the memory sometimes increases rapidly and I hit OOM. Has anyone met this issue before?
Below is what my input pipeline looks like:
train_holder, label_holder = tf.placeholder(tf.float32, [None, 32, 32, 3], name="train_holder"), tf.placeholder(tf.int32, [None], name="label_holder")
input_tuple = (self.train_holder, self.label_holder)
ds = tf.data.Dataset.from_tensor_slices(data)
map_fn = lambda x, y: (cifar_process(x, is_train), y)
train_dataflow = ds.map(map_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_ds = train_dataflow.repeat().batch(
    self.batch_size, drop_remainder=True
).map(
    autoaug_batch_process_map_fn,
    num_parallel_calls=tf.data.experimental.AUTOTUNE).prefetch(
    buffer_size=tf.data.experimental.AUTOTUNE)
train_input_iterator = (
    self.strategy.experimental_distribute_dataset(
        train_ds).make_initializable_iterator())
Each time I reinitialize this pipeline with new input via feed_dict:
self.sess.run([train_input_iterator.initializer], feed_dict=…)
Sometimes I get OOM. |
st205817 | I may be wrong, but there is no tf.placeholder in TF 2.3.0, only tf.compat.v1.placeholder.
Are you sure it's 2.3.0? |
st205818 | I am using the resnet model from keras.
I can use
tf.keras.applications.resnet50.ResNet50(include_top=True, weights=None, input_tensor=None,
input_shape=(224,224,3), classes=2 )
and my output will be two classes
predictions (Dense) (None, 2) 4098 avg_pool[0][0]
Is there any way to not have the Dense layer and just get a two-class prediction via average pooling?
I thought this code should do the trick:
tf.keras.applications.resnet50.ResNet50(
include_top=False, weights=None, input_tensor=None,
input_shape=(224,224,3), pooling='avg' , classes=2 )
but my output is still 2048:
avg_pool (GlobalAveragePooling2 (None, 2048) 0 conv5_block3_out[0][0] |
st205819 | Hi,
The output you get is the feature vector from resnet50.
You need now to add your classification head with the two classes you want.
To see complete code, you can check this link: Transfer learning and fine-tuning | TensorFlow Core
The main difference is that in the tutorial the base model is MobileNetV2 instead of ResNet50, but it's the same idea. |
st205820 | Thanks for your reply. I already saw the link, but it is not clear to me exactly how I should add that to my model, since I am very new to tf.keras.
so I have the model as defined above in the post.
now I define this new dense layer to get predictions for two classes:
prediction_layer = tf.keras.layers.Dense(2)
How can I add it to the model so that when I print it, it shows up in model.summary()?
For clarification, I dont want to do it like:
inputs = tf.keras.Input(shape=(160, 160, 3))
x = data_augmentation(inputs)
x = preprocess_input(x)
x = base_model(x, training=False)
x = global_average_layer(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = prediction_layer(x)
model = tf.keras.Model(inputs, outputs)
but I like to do it so my model.summary() show them all. would that be possible? |
st205821 | Why don't you want to do it the way you showed?
That's one valid solution using the Functional API.
The summary will show your layer at the bottom as expected but the base model will be one line only, is that the problem?
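If the goal is a single model object with the head attached, one option is a Sequential wrapper; a minimal sketch, assuming TF 2.x and the same settings as your snippet:
base = tf.keras.applications.resnet50.ResNet50(
    include_top=False, weights=None,
    input_shape=(224, 224, 3), pooling='avg')
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation='softmax'),  # the 2-class head
])
model.summary()  # shows the ResNet base as one line plus the Dense head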
you can still access the summary of the base model by using: model.get_layer(index=4).summary() |
st205822 | so the code that I am using right now doesn't have access to the input in the same class where the model is defined. so I'd like to finalize the model before calling it in the function that gets the input.
Plus, I don't want to add the dense layer in the function where the input is available, because it just makes the code less clean; I'd rather add every necessary part to the model and just send it to the other function.
hmmmm, so there is no way to do it without using the input? |
st205823 | the main reason is that the code I am using right now doesn't have access to the input in the same class where the model is defined. so I'd like to finalize the model before calling it in the function that gets the input.
Plus, I don't want to add the dense layer in the function where the input is available, because it just makes the code less clean; I'd rather add every necessary part to the model and just send it to the other function.
hmmmm, so there is no way to do it without using the input? |
st205824 | Humm, your reason for the summary issue is not clear to me.
For example, if you use any feature vector from TensorFlow Hub (which is basically what you seem to be doing, but via the Keras applications), you won't be able to see the internal summary of the model either, and that is a very common use. |
st205825 | Somebody please explain to me exactly what is going on in this code:
model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], activation='relu'))
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, input_dim=x.shape[1], activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.add(Dense(y.shape[1],activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=2,epochs=10)
What is input_dim exactly, what are the numbers 10, 50, 10, 1, and why is y.shape[1] in the last layer?
I know activation functions, but what is kernel_initializer='normal'?
Thanks in advance |
st205826 | Hi Tornike,
the numbers you are asking about are the numbers of neurons in each of those layers.
For kernel_initializer (docs), this defines how the starting random weights of that layer will be initialized. The 'normal' option might (I don't know for sure) generate the values from a normal distribution.
y.shape[1] is the second dimension size of y, for example:
a = [[1, 2, 3],[1, 2, 3]]
b = tf.constant(a)
b.shape
>>> TensorShape([2, 3])
b.shape[1]
>>> 3
you can find more information here: tf.shape | TensorFlow Core v2.5.0 1 |
st205827 | Hi lgusm, thank you for your reply,
but I wonder: are these neuron counts arbitrary? I think that y.shape[1] is the output (labels), but I can't understand how the numbers of neurons in those layers are chosen. |
st205828 | The number of neurons will affect the model's ability to learn and its computation requirements. There is usually some room for experimentation there, so they are arbitrary but can be changed to try to get better performance. The only size that is important is the output layer, since it needs to be the same size as the labels it is being compared to in the loss function.
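As a tiny illustration (the sizes here are made up; only the last layer is constrained):
import tensorflow as tf

num_classes = 3  # = y.shape[1] when labels are one-hot with 3 classes
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_dim=8),  # hidden width: tunable
    tf.keras.layers.Dense(num_classes, activation='softmax'),   # must match label width
]) |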
st205829 | for some additional resources, you might want to check out this video series:
it will give a better understanding of the neurons and layers.
The first colab is a gem to play with and experiment until you can get the idea. |
st205830 | Great questions.
tornike_amaghlobeli:
what is kernel_initializer =‘normal’ .
As @lgusm mentioned, “this defines how the starting random weights of that layer will be initialized”. And there are a lot of initializers to choose from ( Module: tf.keras.initializers | TensorFlow Core v2.5.0 2).
From Experiments on learning by back propagation 2 (Plaut, Nowlan & Hinton (1986)) (see also Learning representations by back-propagating errors (Rumelhart, Hinton & Williams (1986)):
[the learning procedure] repeatedly adjusts the weights in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector given the current input vector…
…
The aim of the learning procedure is to find a set of weights which ensures that for each input vector the output vector produced by the network is the same as (or sufficiently close to) the desired output vector.
…
To minimize [the error] by gradient descent it is necessary to compute the partial derivative of E with respect to each weight in the network
…
The learning procedure is entirely deterministic, so if two units within a layer start off with the same connectivity and weights, there is nothing to make them ever differ from each other. We break this symmetry by starting with small random weights.
tornike_amaghlobeli:
i can’t understand how the numbers of neurons are chosen in those layers.
The input shape depends on the data you’re feeding into the network’s input layer. When building a model (at least in a supervised learning setting like basic classification with labelled data), you choose or design an initializer, a network architecture, an optimizer, etc - so it’s kind of an art form.
[1611.02167] Designing Neural Network Architectures using Reinforcement Learning (2016):
While constructing a CNN, a network designer has to make numerous design choices: the number of layers of each type, the ordering of layers, and the hyperparameters for each type of layer, e.g., the receptive field size, stride, and number of receptive fields for a convolution layer. The number of possible choices makes the design space of CNN architectures extremely large and hence, infeasible for an exhaustive manual search.
There are niche research fields - e.g. AutoML and meta-learning - where you use ML for ML to optimize the tasks that you’d normally do manually. For example, from [1611.01578] Neural Architecture Search with Reinforcement Learning :
Our experiments show that Neural Architecture Search can design good models from scratch, an
achievement considered not possible with other methods.
@tornike_amaghlobeli There are a lot of good courses that teach ML and deep learning theory with TensorFlow and Keras, such as Coursera’s deeplearning.ai and Udacity - you can check them out here Basics of machine learning | TensorFlow if you’re interested.
In addition, Kaggle has good material for learning ML and deep learning - Learn Intro to Deep Learning Tutorials | Kaggle, A Single Neuron | Kaggle, Deep Neural Networks | Kaggle. |
st205831 | input_dim=x.shape[1]
Those repeated input_dim arguments don't make any sense on the hidden layers; only the first layer needs an input shape. I don't think they should be there.
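A cleaned-up sketch of the same model with the stray arguments removed (keeping the layer sizes from the question; the lone Dense(1) before the softmax layer also looks unintended, so it is dropped here as an assumption):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], activation='relu'))  # only the first layer needs input_dim
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))  # output width = number of label columns
model.compile(loss='categorical_crossentropy', optimizer='adam') |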
st205832 | Hi,
I checked some TFJS examples and noticed that the page loads the model.json and weight files the first time. Later, if I refresh the page, the browser uses the cached model.json, weight files and so on. So I would love to know: if the model is updated, will the browser get the latest model automatically? If so, how does it work, or how do I make it work? |
st205833 | Welcome to the TensorFlow.js community! That is a great question. So in the web world, on the server side, you can set in the header of the HTTP response, when you deliver the file, how long to cache it for and the rules for caching.
When the file changes on the server side, the browser can see that the file has changed (without needing to download the whole file to check) and invalidate the local cached copy to force a new download. Some resources on this topic, if of interest:
developer.mozilla.org
Cache-Control - HTTP | MDN 1
The Cache-Control HTTP header holds directives (instructions) for caching in both requests and responses. A given directive in a request does not mean the same directive should be in the response.
And then also ETag:
developer.mozilla.org
ETag - HTTP | MDN
The ETag HTTP response header is an identifier for a specific version of a resource. It lets caches be more efficient and save bandwidth, as a web server does not need to resend a full response if the content has not changed.
A more friendly read of the above can be found here:
Medium – 25 Jul 17
A Web Developer's Guide to Browser Caching
Along with this write up:
jakearchibald.com
Caching best practices & max-age gotchas 1
How to get the most out of caching without nasty race conditions
That being said, TensorFlow.js will use the browser cache detailed above (not local storage) unless you explicitly use the API to save the model to local storage, I believe; and if you use local storage, it is up to you to re-save new files when you find new versions are available. The save API is here:
js.tensorflow.org
TensorFlow.js 6
A WebGL accelerated, browser based JavaScript library for training and deploying ML models
Hope that helps! |
st205834 | Hi all,
I apologize if I’ve formatted this wrong or posted it in the wrong area, I’m new to the forum.
I’ve been browsing the tutorials of both tensorflow and tensorflow.js today, and saw that with tensorflow, policies can be written (and in this way, known and extracted). This is documented here → Policies | TensorFlow Agents 5. However, I didn’t see this described in tutorials on tensorflow.js. As I want to use tensorflow.js for my dissertation (as it has to be web-based), I was hoping someone could advise me as to whether policies could still be written.
Thanks so much for any advice you can give! |
st205835 | See Is there a javascript version of this? · Issue #420 · tensorflow/agents · GitHub 4
/cc @Jason if there is any fresh update on that thread |
st205836 | There is no JS version of TF agents yet as far as I know but maybe someone wants to make? |
st205837 | I guess the need is more on the TF Agents side to support this, and they can own it to ensure the API is aligned with their future goals. However, "both" is probably the preferred choice, as they may need advice from the JS point of view too, to account for the needs of web engineers and JS devs, to ensure it works well with existing web tech, and it may need changes to TFJS to make some things work. |
st205838 | @Sandeep_Gupta
I have spoken to the TF Agents team about JS support, but they do not have any JS engineers on their team to take it on right now. Yes, it's possible as a SIG effort if they can help guide it, as I'm not an expert in this area. |
st205839 | Hi all,
Thank you for all the advice! I apologize for the slow response, I’ve been super busy with coursework.
If I’m reading your comments correctly, tensorflow.js does not currently support policies, but it may be up for discussion/implementation in the future? |
st205840 | We are fully open source so welcome contributions to add such functionality to the project in the future. We also have a special interests group which is where people like yourself from other companies/community work together on key future areas of TensorFlow.js. Reinforcement learning could be one of them if enough people interested in hacking on that together and willing to commit to milestones etc. |
st205841 | When I refer to the official TensorFlow tutorials at https://www.tensorflow.org/text/tutorials I see tf.experimental being used in many places. From the name it is clear that these are experimental APIs, something that should not be used in production code because things might change. However, I don't understand why they are so widely used in official tutorials. Can we have these tutorials built using more stable APIs, something people can refer to and use in their production code? |
st205842 | Hi @dhavalsays ,
I guess you’re mentioning
tensorflow.keras.layers.experimental
correct?
thanks for the feedback.
Do you have any specific task in mind? |
st205843 | You could build your own versions of all these layers using low level string ops, tensorflow_text and tf.lookup.
That approach is verbose and hard to get right. Our overall assessment here was that using these experimental symbols now helps people get the tasks done, with minimal code changes expected in the future. The team that owns these layers is planning to get them out of experimental ASAP. I don’t expect significant changes. |
st205844 | It Is also useful for the API stability as the namespace is exposed earlier to collect more users feedback also from newcomers. |
st205845 | That’s an even better answer than mine .
If you find a mistake in the API design, if it’s in experimental we can still fix it. Outside of experimental we have to live with it. |
st205846 | Yes, I am referring to tensorflow.keras.layers.experimental. I am going through the text_classification_rnn tutorial and I see the experimental layer used. I also see it being used in many other tutorials. When is this experimental API going to become stable? Also, I would not feel comfortable deploying this code to production. I also make coding videos on YouTube, and when I start seeing "experimental" it makes me nervous that my coding video will become outdated pretty soon because I am using an unstable API. Hope you understand my concern. |
st205847 | In general all APIs not under an experimental namespace can only change when changing the major version of TF (like TF 1.x → TF 2.x). We are guaranteeing that code written for TF 2.1 would still work in TF 2.42 if it only uses APIs outside of experimental (there are some exceptions where APIs that are not frequently used but are very broken might break in between minor versions but we will notify this change well in advance) |
st205848 | I think the concern is not so much about the experimental namespace, rather its prevalence in the official tutorials on tensorflow.org. And it's a valid concern, especially if you're building educational resources that need a longer shelf life.
We don't have an official editorial policy on this other than that the content on tensorflow.org represents TensorFlow "as it is", meaning the site docs should reflect the latest TF release (though we host older API reference versions and archive old docs). Notebook tutorials are executed and tested regularly to ensure everything works and the site is up-to-date, or at least doesn't break. Most usage of experimental APIs in the TF docs is an improvement in API usability that, for whatever reason, failed to make the stable window. For an existing doc with an experimental API, you can always look through the docs version branches to see how something was done before the experimental API was introduced.
Generally, if an experimental API is in an official tutorial there's a strong chance a decision was made that this is the best way forward. While tensorflow.org has a lot of content and docs for many packages (some more stable than others), maybe we can be more thoughtful about how we introduce experimental APIs in the "core" guide and tutorials to make sure we provide a solid foundation for the larger TensorFlow ecosystem to build on. |
st205849 | Billy, you conveyed my concern most accurately. I run a youtube channel called codebasics (320k subscribers) and I mainly teach AI, data science etc. In my present deep learning series I am having this feeling of regret that I should have selected PyTorch because when I see experimental all over the places in TF official tutorials I get a feeling that “TF is probably work in progress framework and I should opt for more stable framework such as PyTorch” I have a huge following on youtube and people rely on my videos for their learning as well as their real life usage. I am feeling reluctant to use any experimental API nowadays. I have lot of respect for Google as a company and I really wish you will understand the whole problem from your API consumer standpoint and try to resolve these issues sooner than later. |
st205850 | Billy, is there a way to get in touch with you or some other folks who work in tensorflow team? I ran a youtube channel called “codebasics” (320k subscribers) and my ML,deep learning videos are popular on youtube. If someone searches “tensorflow tutorial” in youtube my videos come in first 5 search results. I would like to collaborate with tensorflow team so that I can provide a best quality education to people for free and at the same time market tensorflow framework. My email id is removed Please send me an email and I would like to discuss some collaboration ideas that can even help you guys make your documentation better as well as produce help in video format. |
st205851 | Hello to all, I hope you are doing well. I want to start research work on transfer learning in deep learning, applied to plant disease detection. What areas do I need to research for this work, and how can I get started using TensorFlow? Thanks |
st205852 | Hello,
Start with Tensorflow basics, learn how to write neural networks in Tensorflow.
Then learn about computer vision and transfer learning. |
st205853 | hello, language detection is also missing, but I would advise you to read the official documentation for use cases |
st205854 | Just adding up to @Rohan_Raj , here are some links for you:
If you don't know anything about ML, there are many good courses to do; a good free one is the crash course: Machine Learning Crash Course | Google Developers
If you know the basics of ML, then this tutorial is a very good start for TensorFlow:
TensorFlow
TensorFlow 2 quickstart for beginners | TensorFlow Core
After you feel comfortable with the API, this tutorial on transfer learning will give you the idea of how TL works: Transfer learning and fine-tuning | TensorFlow Core
Finally, for transfer learning on plant disease, detected from pictures of the leaves, this tutorial has you covered: CropNet: Cassava Disease Detection | TensorFlow Hub
hope this helps |
st205855 | How do I change the keypoints in a model?
I wanted to use something like centernet_resnet50_v2_512x512_kpts_coco17_tpu-8,
which does object detection and keypoints.
I have learned how to change the classes, and I know how to add the keypoints to the label maps and the pictures themselves.
But how do I add them to the premade TF model above?
I don't want to reinvent the wheel here, just to be able to alter it for my own needs.
That model does the face and body; I want to do birds.
Is there anyone who knows how to change the keypoints? |
st205856 | There was already a thread at:
github.com/tensorflow/models
Train a CenterNet model using custom keypoints 29
opened
Mar 30, 2021
bileki
models:research
type:docs
# Prerequisites
Please answer the following question for yourself before submitting an issue.
- [*] I checked to make sure that this issue has not been filed already.
## 1. The entire URL of the documentation with the issue
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/configuring_jobs.md
## 2. Describe the issue
I am trying to train a CenterNet model using custom keypoints.
However, I can't find any documentation or example on how to configure the CenterNet pipeline file.
In https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md, the examples are just about a human pose and there is not even an explanation on how to set the **keypoint_estimation_task**, or what the **value** parameter of **keypoint_label_to_std** means or how to calculate it. The same for **keypoint_label_to_sigmas**.
Example of **centernet_mobilenetv2fpn_512x512_coco17_kpts.tar.gz**:
> keypoint_label_map_path: "PATH_TO_BE_CONFIGURED/label_map.txt"
> keypoint_estimation_task {
> task_name: "human_pose"
> task_loss_weight: 1.0
> loss {
> localization_loss {
> l1_localization_loss {
> }
> }
> classification_loss {
> penalty_reduced_logistic_focal_loss {
> alpha: 2.0
> beta: 4.0
> }
> }
> }
> keypoint_class_name: "/m/01g317"
> keypoint_label_to_std {
> key: "left_ankle"
> value: 0.89
> }
> keypoint_label_to_std {
> key: "left_ear"
> value: 0.35
> }
> keypoint_label_to_std {
> key: "left_elbow"
> value: 0.72
> }
> ....
> keypoint_regression_loss_weight: 0.1
> keypoint_heatmap_loss_weight: 1.0
> keypoint_offset_loss_weight: 1.0
> offset_peak_radius: 3
> per_keypoint_offset: true
> } |
st205857 | Hello, I’m receiving an error while importing TensorFlow. I have AMD Radeon graphics and python 3.8 installed on my pc. Can anyone help me with these? Will these errors affect my work in any way further?
Error:-
W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. |
st205858 | For AMD GPU check:
Use AMD GPU for model training General Discussion
Hello folks,
I have got an AMD GPU on my system and I want to use it for model training. After several efforts I have not been able to do so.
Since CUDA is only supported on NVIDIA GPU, it doesn’t work for me.
Any suggestions?
Thank you. |
st205859 | Hi.
This is just a warning (W) and an information (I) message saying that the CUDA libraries cannot be found.
The I message says to ignore the W message above it if no CUDA GPU is installed on your machine.
The only effect of this is that training will happen on CPU only.
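To confirm what TensorFlow sees, you can list the visible GPUs (an empty list means training runs on the CPU):
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # [] on a machine without a supported GPU |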
st205860 | Hi all,
Novice question here. I have a very large dataset that I want to feed into a TF-DF model. How can I couple tfdf.keras.pd_dataframe_to_tf_dataset() to a generator feeding the data to it? Of course, if there are better methods for feeding a generator into a TF-DF dataset than using this method, I'd be interested to know.
Many thanks in advance,
Doug |
st205861 | Hi Doug,
tfdf.keras.pd_dataframe_to_tf_dataset is just a convenience method for when you already have a dataframe. If you are starting off with a generator, you can use it to create a tf.data.Dataset directly (tf.data.Dataset | TensorFlow Core v2.5.0), which might be a better fit; TF-DF datasets are not special, and any dataset object created for another Keras model should work.
There are the following caveats:
Your dataset still needs to fit in memory at training time. If this is not the case, you can do something like dataset = dataset.take(10000000) to subsample 10 million rows (the exact number will depend on the number of features + memory capacity)
There are a few data sanitization steps happening in pandas_dataframe_to_tf_dataset like making sure feature names don’t have spaces or other forbidden characters, you might want to consult the source code 3 and copy that logic as is appropriate.
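For illustration, a minimal from_generator sketch feeding TF-DF (the feature names, shapes, and sizes here are made up):
import numpy as np
import tensorflow as tf
import tensorflow_decision_forests as tfdf

def gen():
    for _ in range(1000):  # stream examples from any source
        yield ({"f1": np.float32(np.random.rand()),
                "f2": np.float32(np.random.rand())},
               np.int64(np.random.randint(2)))

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        {"f1": tf.TensorSpec(shape=(), dtype=tf.float32),
         "f2": tf.TensorSpec(shape=(), dtype=tf.float32)},
        tf.TensorSpec(shape=(), dtype=tf.int64))).batch(64)

model = tfdf.keras.RandomForestModel()
model.fit(ds)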
Hope that helps!
Arvind |
st205862 | def to_one_hot(image, label):
    return image, tf.one_hot(classes_to_indices[label], depth=14)
train_ds = train_ds.map(to_one_hot)
classes_to_indices is a simple Python dictionary containing { label_name: index }.
This code raises an error:
Tensor is unhashable. Instead, use tensor.ref() as the key.
Is there any way to do one-hot encoding while using the tf.data API? |
st205863 | Can you check:
One Hot Encoding General Discussion
Hey Community i hope you’re doing fine, i have data frame with more than 7000 rows and a 4 columns, and the label column with categorical labels and i want to convert them to numerical labels, and i don’t know how to do it!
is there any tensorflow function that can do this?
any help please. |
st205864 | Ya, I have checked it, but it is for NumPy data. I want to convert my string labels to integer labels using the Python dictionary classes_to_indices, but we cannot use tensor data as a key in a Python dictionary. |
st205865 | That is because dataset map transforms are executed in graph mode, so a plain Python dict lookup on a tensor key fails.
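One graph-compatible workaround is a tf.lookup.StaticHashTable (a sketch; the label names and depth are illustrative):
import tensorflow as tf

keys = tf.constant(["cat", "dog"])           # your string label names
vals = tf.constant([0, 1], dtype=tf.int64)   # their integer indices
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(keys, vals), default_value=-1)

def to_one_hot(image, label):
    return image, tf.one_hot(table.lookup(label), depth=14)

train_ds = train_ds.map(to_one_hot)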
More details at:
github.com/tensorflow/tensorflow
Eager: Eager execution of tf.data pipelines 18
opened
Nov 21, 2017
closed
Apr 22, 2018
hsm207
comp:eager
type:feature
# System information
Tensorflow version:
1.5.0-dev20171120
Python version: python 3.6.3 |Anaconda, Inc.| (default, Nov 8 2017, 15:10:56) [MSC v.1900 64 bit (AMD64)]
# Problem
When debugging, calling the `numpy()` method on a `Tensor` object results in `AttributeError: 'Tensor' object has no attribute 'numpy' ` in certain situations.
# Steps to reproduce
1. Put this code in a script:
```
import tensorflow as tf
import tensorflow.contrib.eager as tfe
import numpy as np
import tensorflow as tf
from collections import defaultdict, Counter
tfe.enable_eager_execution()
class_probs = dict(
a=0.15,
b=0.3,
c=0.8,
d=0.9,
e=0.2,
f=0.02
)
num_classes = len(class_probs)
class_probs = {k: v / sum(class_probs.values()) for k, v in class_probs.items()}
class_mapping = {n: i for i, n in enumerate(class_probs.keys())}
class_names = list(class_probs.keys())
class_weights = list(class_probs.values())
sampled_dataset = np.random.choice(class_names, size=1000, p=class_weights)
dataset_data = defaultdict(list)
for i, d in enumerate(sampled_dataset):
dataset_data['class_name'].append(d)
dataset_data['class_id'].append(class_mapping[d])
dataset_data['data'].append(np.array([i]))
dataset_data['class_prob'].append(class_probs[d])
dataset_data['class_target_prob'].append(1 / num_classes)
for k, v in dataset_data.items():
dataset_data[k] = np.array(dataset_data[k])
class_counts = Counter(sampled_dataset)
oversampling_coef = 0.9
def oversample_classes(example):
"""
Returns the number of copies of given example
"""
class_prob = example['class_prob']
class_target_prob = example['class_target_prob']
prob_ratio = tf.cast(class_target_prob / class_prob, dtype=tf.float32)
prob_ratio = prob_ratio ** oversampling_coef
prob_ratio = tf.maximum(prob_ratio, 1)
# Breakpoint 1
repeat_count = tf.floor(prob_ratio)
repeat_residual = prob_ratio - repeat_count # a number between 0-1
residual_acceptance = tf.less_equal(
tf.random_uniform([], dtype=tf.float32), repeat_residual
)
residual_acceptance = tf.cast(residual_acceptance, tf.int64)
repeat_count = tf.cast(repeat_count, dtype=tf.int64)
return repeat_count + residual_acceptance
dataset = tf.data.Dataset.from_tensor_slices(dict(dataset_data))
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensors(x).repeat(oversample_classes(x))
)
i = tfe.Iterator(dataset)
x = i.next()['class_id']
# Breakpoint 2
print('end')
```
2. Insert the breakpoints in the lines following the comments Breakpoint 1 and Breakpoint 2
3. Debug the script
4. When breakpoint 1 is reached, evaluate the following:
`prob_ratio.numpy()`
This will result in the attribute error message.
5. When breakpoint 2 is reached, evaluate the following:
`x.numpy()`
This will not result in the attribute error message. |
st205866 | Hi TensorFlow community,
After a long search, none of the solutions I found for my problem work for me. I hope that you can help me overcome this problem so I can continue my project.
The problem is that while doing post-training integer quantization of a GRU model, I get the following error:
ValueError: Failed to parse the model: pybind11::init(): factory function returned nullptr.
I use the following code for quantization:
converter = tf.lite.TFLiteConverter.from_saved_model(GRUMODEL_TF)

def representative_dataset_gen():
    for sample in XX_data:
        sample = np.expand_dims(sample.astype(np.float32), axis=0)
        yield [sample]

# Set the optimization flag and force full-integer quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = representative_dataset_gen  # the original assigned 'repr_data_gen', a name mismatch
model_tflite = converter.convert()
open(GRUMODEL_TFLITE, "wb").write(model_tflite)
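(For completeness: a less strict conversion path that sometimes succeeds for RNN/GRU models keeps float I/O and allows a TF-op fallback. This is a sketch only, untested against this particular nullptr failure.)
converter = tf.lite.TFLiteConverter.from_saved_model(GRUMODEL_TF)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops where needed
]
model_tflite = converter.convert() |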
st205867 | Hi,
I would like to implement Instance segmentation in TensorFlow 2+ based on CondInst + CenterNet. CondInst is currently implemented only in PyTorch. I saw the implementation in pytorch and I’m not sure if this is possible in tensorflow …
The paper is interesting https://arxiv.org/pdf/2003.05664.pdf 3
Has anyone worked on it? Any tips?
Thank you! |
st205868 | You need to write your CondConv custom layer like this (not personally tested):
github.com
prstrive/CondConv-tensorflow/blob/master/network/condconv.py 4
import tensorflow as tf
from config import WEIGHT_DECAY
from tensorflow.keras import layers
def conv2d(kernel_size, stride, filters, kernel_regularizer=tf.keras.regularizers.l2(WEIGHT_DECAY), padding="same", use_bias=False,
kernel_initializer="he_normal", **kwargs):
return layers.Conv2D(kernel_size=kernel_size, strides=stride, filters=filters, kernel_regularizer=kernel_regularizer, padding=padding,
use_bias=use_bias, kernel_initializer=kernel_initializer, **kwargs)
class Routing(layers.Layer):
def __init__(self, out_channels, dropout_rate, temperature=30, **kwargs):
super(Routing, self).__init__(**kwargs)
self.avgpool = layers.GlobalAveragePooling2D()
self.dropout = layers.Dropout(rate=dropout_rate)
self.fc = layers.Dense(units=out_channels)
self.softmax = layers.Softmax()
self.temperature = temperature
Centernet models are available in the Model Gardens research section:
github.com/tensorflow/models
master/research/object_detection |
st205869 | Thanks for the suggestion, really appreciate it.
Actually, this implementation that you posted is more like Mixture of Experts than CondConv: see the discussion at github.com/prstrive/CondConv-tensorflow/issues/1.
And instead of doing the CondConv on previous convolution outputs (where the number of outputs is always the same), we need to do it on instances of detected objects (the number of objects is different for every image). |
st205870 | You can create your CondConv custom layer from the original implementation:
github.com
aim-uofa/AdelaiDet/blob/master/adet/modeling/condinst/condinst.py 3
# -*- coding: utf-8 -*-
import logging
from skimage import color
import torch
from torch import nn
import torch.nn.functional as F
from detectron2.structures import ImageList
from detectron2.modeling.proposal_generator import build_proposal_generator
from detectron2.modeling.backbone import build_backbone
from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY
from detectron2.structures.instances import Instances
from detectron2.structures.masks import PolygonMasks, polygons_to_bitmask
from .dynamic_mask_head import build_dynamic_mask_head
from .mask_branch import build_mask_branch
from adet.utils.comm import aligned_bilinear
Or request the off-the-shelf model at:
github.com/tensorflow/models
📄 Community requests: New paper implementations
opened
Jun 6, 2020
jaeyounkim
models:official
type:support
This issue contains **all open requests for paper implementations requested by the community**.
We cannot guarantee that we can fulfill community requests for specific paper implementations.
If you'd like to contribute, **please add a comment to the relevant GitHub issue to express your interest in providing your paper implementation**.
Awesome external contributors will be nominated for [Google Open Source Peer Bonus](https://opensource.google/docs/growing/peer-bonus/).
Please also see our [contribution guidelines](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution) and [paper selection criteria](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution#model-selection).
## Computer Vision
| Paper | Conference | GitHub issue | Note |
|--------|------------|--------------|------|
| ResNeXt: [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431) | CVPR 2017 | #6752 | |
| DenseNet: [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993) | CVPR 2017 | #8278 | |
| [Density estimation using Real NVP](https://arxiv.org/abs/1605.08803) | ICLR 2017 | #7848 | Need to migrate [TF 1 code](https://github.com/tensorflow/models/tree/master/research/real_nvp) to TF 2 |
| [Spatiotemporal Contrastive Video Representation Learning](https://arxiv.org/abs/2008.03800) | CVPR 2021 | #9993 | In progress (Internally) | |
st205871 | Hi, Is it possible to use a custom loss function for the DecisionForest models (any of them), like you would with any other Keras model? If so, is there an example?
Thanks! |
st205872 | Hi,
Unfortunately, it is not currently possible to define a custom loss in the python code like you would do with classical Keras models.
Instead, custom losses are created by extending the C++ AbstractLoss class 3 and implementing the corresponding methods (e.g. loss value, gradient). This 8 is the list of the currently available losses.
Specific losses can be requested by posting an issue 3 with the “Enhancement” tag.
Cheers,
M. |
st205873 | I have been working on an image classification problem wherein the objective is to train a predefined neural network model on a set of TFRecords and run inference. This all works with reasonable accuracy in Colab.
Subsequent to this, I converted the saved_model.pb into a model.tflite file. I have checked it against the Netron app and it is seemingly taking the correct inputs (an image tensor).
After this I called interpreter.invoke().
Following this, when I try to decipher the output tensor, I should be able to at least render the output image, but I am having difficulty doing this.
This is the link of the colab notebook Google Colaboratory 2 where I have maintained the code.
I have other colab notebooks where similar code was run with training for up to 7500 iterations, but I am stuck in every case at the interpreter level, since I have to port this app onto the Android platform |
st205874 | Hi @Vishal_Virmani,
Welcome to the community.
I see at the end of your colab notebook that you try to print the output of the model, which is an array of [1,10,4]. So why are you doing that, since this is a classification problem as you mentioned? Or is this an object detection problem?
From the previous cells I see that with the model you do:
input_tensor = tf.convert_to_tensor(
np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)
and then you do:
label_id_offset = 1
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'][0].numpy(),
(detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
detections['detection_scores'][0].numpy(),
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.5,
agnostic_mode=False,
)
plt.figure(figsize=(12,16))
plt.imshow(image_np_with_detections)
plt.show()
So you take the model output, post-process it and then visualize the boxes on the image.
You can think of the output of the Interpreter as the output of the saved model. So you have to follow the same procedure there too, e.g. preprocess the image, make the predictions with the Interpreter, post-process and draw the boxes on the image.
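As a rough sketch of that workflow with the Python Interpreter (the path and the dummy input below are placeholders, and the exact outputs depend on how the model was exported):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Preprocess exactly as during training (resize, normalize), then feed it in.
image = np.zeros(input_details[0]['shape'], dtype=np.float32)  # placeholder
interpreter.set_tensor(input_details[0]['index'], image)
interpreter.invoke()

# A detection model returns boxes, classes, scores and a count, not an image,
# so these tensors must be post-processed and drawn onto the image yourself.
for out in output_details:
    print(out['name'], interpreter.get_tensor(out['index']).shape)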
If this is indeed an object detection project, check out a good example here:
github.com
tensorflow/examples 1
master/lite/examples/object_detection/android
TensorFlow examples. Contribute to tensorflow/examples development by creating an account on GitHub. |
st205875 | George_Soloupis:
…So you have to follow the same procedure there too, e.g. preprocess the image, make the predictions with the Interpreter, post-process and draw the boxes on the image.
If this is indeed an object detection project, check out a good example here:
Hi @George_Soloupis The objective of the above colab notebook is to do face recognition by retraining existing models (in this case I took ssd_mobilenet_v2_320x320_coco17_tpu-8, after testing models like SSD MobileNet v2 320x320 and EfficientDet D0 512x512).
I have created a notebook where I am doing correct inference with the images in Colab.
Now my effort was to create a model.tflite file, which initially I was unable to do. After further searching I found that the mechanism for creating the tflite is to run
!python /content/gdrive/MyDrive/TFLite_Check/models/research/object_detection/export_tflite_graph_tf2.py \
--pipeline_config_path {pipeline_file} \
--trained_checkpoint_dir {last_model_path} \
--output_directory {outd}
followed by
converter = tf.lite.TFLiteConverter.from_saved_model('/content/gdrive/MyDrive/TFLite_Check/freezetflite/saved_model/')
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.allow_custom_ops = True
tflite_model = converter.convert()
I implemented these commands in the Colab notebook which I shared earlier, not in the one where I am doing the inferences, as they appear in the notebook.
I was trying to check whether the output of the interpreter call returns an image, as I had seen in the Selfie2Anime colab notebook maintained by @Sayak_Paul.
This is why I was calling plt.imshow(output()[0]).
So how do I go ahead now…
I am trying to understand the notebook link you have shared |
st205876 | Vishal_Virmani:
I implemented these commands in Colab notebook which i shared earlier, not in the one where i am doing the inferences as they appear in the notebook
I am really lost here. I have not understood the procedure. If you may upload the correct colab notebook to take a look it will be fine! The previous one is just showing an Object detection procedure. |
st205877 | Yes, I am doing that.
Here I have correctly inferred 2 people; the model was trained only for these 2 people.
In this notebook I failed to create the tflite model, which I did later in the earlier notebook |
st205878 | @George_Soloupis I keep getting a prompt that the link cannot be shared in the post. So I have shared it via the email notification which I received in my Gmail account. |
st205879 | https://drive.google.com/file/d/1JyO0aPIL_gStEuh8fL2jTrqXlO1wy1RA/view?usp=sharing |
st205880 | Vishal_Virmani:
run.ipynb - Google Drive
@Vishal_Virmani I took a look at the notebook. It seems that it is the same object detection procedure with square boxes over the image.
Convert the model with the same procedure as the previous notebook. Then if you want to check the output of the interpreter use this:
TensorFlow
TensorFlow Lite inference
and if you want to port it in android the example here:
github.com
tensorflow/examples
master/lite/examples/object_detection/android
TensorFlow examples. Contribute to tensorflow/examples development by creating an account on GitHub.
is what you need! |
st205881 | @George_Soloupis I am going through the same.
Initially I thought that just by calling interpreter.invoke() and catching the output tensor, I could display the image back.
I will try to understand and see if I can implement it with the links you have shared |
st205882 | I was wondering how I can convert a TensorFlow Lite object detection model created with the TFLite Model Maker to OpenVINO. I don’t think this is possible after exporting the model to TensorFlow Lite, but it should work if the model is exported as a saved model.
Any help is greatly appreciated. |
st205883 | Hi @Gi_T
After training the model with the create() method you can do:
serving_model = model.create_serving_model()
print(f'Model\'s input shape and type: {serving_model.inputs}')
print(f'Model\'s output shape and type: {serving_model.outputs}')
and then save it to saved model format as always:
saved_model_path = './object_model_maker'
serving_model.save(saved_model_path, include_optimizer=False)
Check a workflow here with an audio classification example (it is still an ongoing project). |
st205884 | @George_Soloupis’s workflow is the recommended one. As long as you can create a SavedModel for your OD network and ensure it’s supported in OpenVINO you should be good to go. However, I must mention that OpenVINO’s TensorFlow 2 support is still very experimental and limited. So you might want to keep that in mind. |
st205885 | Thanks for the replies. I tried the code suggested by @George_Soloupis, but I couldn’t get the conversion to work. I’m currently getting the following error:
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/content/object_model_maker/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)
Unfortunately, it seems like I can’t embed any links, so I can’t share my current notebook with you, but after creating the saved model, I’m using the following code to convert the model:
output_dir = '/content/output'
!source /opt/intel/openvino_2021/bin/setupvars.sh && \
python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
--input_model /content/object_model_maker/saved_model.pb \
--transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
--reverse_input_channels \
--output_dir {output_dir} \
Any help is greatly appreciated. |
st205886 | It seems like even when someone saves the model maker model in saved_model format with the provided code here 6:
model.export(export_dir='.', export_format=[ExportFormat.SAVED_MODEL, ExportFormat.LABEL])
and then tries to reload it like:
reloaded_model = tf.saved_model.load('./object_detection_model_maker_saved_model/saved_model')
reloaded_model.summary()
it throws an error. I have also checked it with the audio classifier example.
I hope @Yuqi_Li can shed some light here. |
st205887 | Yes, please just use model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL) to export as saved_model. |
st205888 | I now used model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL) as suggested but when running
reloaded_model = tf.saved_model.load('/content/saved_model')
reloaded_model.summary()
I’m getting the following error:
AttributeError Traceback (most recent call last)
<ipython-input-13-38f2d2297174> in <module>()
1 reloaded_model = tf.saved_model.load('/content/saved_model')
----> 2 reloaded_model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
Also, while trying to convert the model to OpenVINO:
output_dir = '/content/output'
!source /opt/intel/openvino_2021/bin/setupvars.sh && \
python /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
--input_model /content/saved_model/saved_model.pb \
--transformations_config /opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/automl_efficientdet.json \
--reverse_input_channels \
--output_dir {output_dir}
I’m still getting the following error message:
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf2.sh
Note that install_prerequisites scripts may install additional components.
[ FRAMEWORK ERROR ] Cannot load input model: TensorFlow cannot read the model file: "/content/saved_model/saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43) |
st205889 | Tagging @Yuqi_Li on your previous answer.
This is also what I have noticed! summary() does not work after reloading the model and throws the same error:
AttributeError: '_UserObject' object has no attribute 'summary' |
st205890 | I see. Not sure why that happened. Still, it can be used to load the model and run inference. |
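For reference, a small sketch of running inference despite the missing summary(), assuming the model was exported with the default serving signature:

import tensorflow as tf

loaded = tf.saved_model.load('/content/saved_model')
# tf.saved_model.load returns a generic _UserObject, so Keras methods such as
# summary() are unavailable; call the serving signature directly instead.
infer = loaded.signatures['serving_default']
print(infer.structured_outputs)  # inspect the output tensors in place of summary()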
st205891 | Gi_T:
Unfortunately, it seems like I can’t embed any links…
I think you can track this at:
github.com/tensorflow/models
'_UserObject' object has no attribute 'summary' 6
opened
Jul 29, 2020
nicholasguimaraes
models:research
type:support
Hello, I'm trying to load a ssd_resnet50_v1_fpn_640x640_coco17_tpu-8 I just fine-tuned but I'm coming across this error:
'_UserObject' object has no attribute 'summary'
Here are the 4 lines of code I have;
import tensorflow as tf
model_dir = 'C:/Users/Windows/Documents/Tensorflow_Obj_Det_API/models/research/object_detection/inference_graph/saved_model'
trained_model = tf.saved_model.load(model_dir)
trained_model.summary()
I've tried including the save_model.pb on the path to the model but then I get this error:
SavedModel file does not exist at: C:\Users\Windows\Documents\Tensorflow_Obj_Det_API\models\research\object_detection\inference_graph\saved_model\saved_model.pb/{saved_model.pbtxt|saved_model.pb}
Anyone knows how to load a trained model to do inference? |
st205892 | Hi @1118 , I tried to follow the link you posted but the page is not available anymore. I’d suggest you post the full question so that people might be able to help you |
st205893 | I am curious about how to set the device in TF. I want to implement a custom distributed data-parallel algorithm, and I want to, for example, split an input tensor x into three parts and transfer them to three devices.
So basically, I want to:
x0, x1, x2 = tf.split(x, num_or_size_splits=3, axis=1)
x0 = x0.to('device:0')
x1 = x1.to('device:1')
x2 = x2.to('device:2')
But this seems quite impossible in TF.
I found something about colocation_graph; should I use that? |
st205894 | You can do that using the with tf.device(...) context manager:
TensorFlow
Use a GPU | TensorFlow Core 4
does it help? |
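A minimal sketch of the Python-level placement that guide describes, assuming three visible GPUs:

import tensorflow as tf

x = tf.random.normal([6, 9])
x0, x1, x2 = tf.split(x, num_or_size_splits=3, axis=1)

# Pin each shard to a device; tf.identity materializes the copy there.
with tf.device('/GPU:0'):
    x0 = tf.identity(x0)
with tf.device('/GPU:1'):
    x1 = tf.identity(x1)
with tf.device('/GPU:2'):
    x2 = tf.identity(x2)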
st205895 | Thanks for the reply, and sorry for the unclear question. The with context manager only works in Python, IMHO.
However, if I want to implement data parallelism, I would have to rewrite TF's default passes; in that case, how would I handle this in C++? Because as far as I know, TF tensors do not carry device information.
st205896 | Humm, I don’t know.
Is this for the training step?
I lack the background but maybe this Distributed training with TensorFlow | TensorFlow Core 17 might be able to give some insights |
st205897 | Are you looking to create your own custom distribution strategy?
Because I don’t think that we officially support this:
github.com/tensorflow/tensorflow
Custom Distributed Strategy 1
opened
Sep 9, 2019
closed
Jun 25, 2020
jpadrao
TF 1.14
comp:dist-strat
type:feature
Please make sure that this is a feature request. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template
**System information**
- TensorFlow version (you are using): 1.14.0
- Are you willing to contribute it (Yes/No): Yes
**Describe the feature and the current behavior/state.**
As Tensorflow stands, there is no easy and intuitive way to implement a new distribution strategy. The ones available, (MirroredStrategy, MultiWorkerMirroredStrategy, ...), work fine but the code seems very complex, and there isn't a tutorial/guide on how to develop a new one
**Will this change the current api? How?**
The api cloud be restructured to ease the support of new distributed strategy. A tutorial/guide on how to develop one would also be appreciated
**Who will benefit with this feature?**
Researchers who want to reduce the time spent training distributed tensorflow models
**Any Other info.** |
st205898 | Thanks for the reply! Yes, I am trying to create my own custom distributed strategy, but it seems that doing this in TF is causing a lot of trouble… |
st205899 | You can try to look at
github.com
tensorflow/tensorflow/blob/c35883e15a675767c15b8f3c5ed619bd9e051af4/tensorflow/python/distribute/distribute_lib.py#L16:L25 2
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# pylint: disable=line-too-long
"""Library for running a computation across multiple devices.
The intent of this library is that you can write an algorithm in a stylized way
and it will be usable with a variety of different `tf.distribute.Strategy`
implementations. Each descendant will implement a different strategy for
distributing the algorithm across multiple devices/machines. Furthermore, these
changes can be hidden inside the specific layers and other library classes that
need special treatment to run in a distributed setting, so that most users'
model definition code can run unchanged. The `tf.distribute.Strategy` API works
the same way with eager and graph execution. |