id | text
---|---|
st207100 | Correct. However, the installation without the container is not working for me.
Thanks. |
st207101 | Without container you need to double check yourself the required python, CUDA, cudnn versions etc. for the specific TF version. |
st207102 | I recently created an object detection example 2 using tfjs-tflite 1, which uses the ObjectDetector class 1 to load and use the Object Detector.
Now I wanted to create an object detection without using the ObjectDetector class. I managed to load the model into memory and to prepare the image to make predictions by following the ‘Test model runner’ example, but I’m having problems postprocessing the predictions since the dataSync method that is used in the CodePen example throws an error.
index.js:20 Uncaught (in promise) TypeError: result.dataSync is not a function
at detect (index.js:20)
Without the dataSync() method I’m getting the following output:
{TFLite_Detection_PostProcess: e, TFLite_Detection_PostProcess:1: e, TFLite_Detection_PostProcess:2: e, TFLite_Detection_PostProcess:3: e}
TFLite_Detection_PostProcess: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: 'float32', size: 40, …}
TFLite_Detection_PostProcess:1: e {kept: false, isDisposedInternal: false, shape: Array(2), dtype: 'float32', size: 10, …}
TFLite_Detection_PostProcess:2: e {kept: false, isDisposedInternal: false, shape: Array(2), dtype: 'float32', size: 10, …}
TFLite_Detection_PostProcess:3: e {kept: false, isDisposedInternal: false, shape: Array(1), dtype: 'float32', size: 1, …}
[[Prototype]]: Object
Code:
index.js:
const img = document.querySelector("img");
const resultEle = document.querySelector(`.result`);
let objectDetector;
/** Detect objects in image. */
async function detect() {
resultEle.textContent = "Loading...";
if (!objectDetector) {
objectDetector = await tflite.loadTFLiteModel(
"https://tfhub.dev/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2?lite-format=tflite"
);
}
const start = Date.now();
let input = tf.image.resizeBilinear(tf.browser.fromPixels(img), [300, 300]);
input = tf.cast(tf.sub(tf.div(tf.expandDims(input), 127.5), 1), 'int32');
// Run the inference and get the output tensors.
let result = objectDetector.predict(input);
console.log(result)
const latency = Date.now() - start;
renderDetectionResult(result);
resultEle.textContent = `Latency: ${latency}ms`;
}
/** Render detection results. */
function renderDetectionResult(result) {
const boxesContainer = document.querySelector(".boxes-container");
boxesContainer.innerHTML = "";
for (let i = 0; i < result.length; i++) {
const curObject = result[i];
const boundingBox = curObject.boundingBox;
const name = curObject.classes[0].className;
const score = curObject.classes[0].probability;
if (score > 0.5) {
const boxContainer = createDetectionResultBox(
boundingBox.originX,
boundingBox.originY,
boundingBox.width,
boundingBox.height,
name,
score
);
boxesContainer.appendChild(boxContainer);
}
}
}
/** Create a single detection result box. */
function createDetectionResultBox(left, top, width, height, name, score) {
const container = document.createElement("div");
container.classList.add("box-container");
const box = document.createElement("div");
box.classList.add("box");
container.appendChild(box);
const label = document.createElement("div");
label.classList.add("label");
label.textContent = `${name} (${score.toFixed(2)})`;
container.appendChild(label);
container.style.left = `${left - 1}px`;
container.style.top = `${top - 1}px`;
box.style.width = `${width + 1}px`;
box.style.height = `${height + 1}px`;
return container;
}
document.querySelector(".btn").addEventListener("click", () => {
detect();
});
index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>TFLITE Web API Example</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<h1>TFLITE Web API Object Detection Example</h1>
<div class="img-container">
<img src="https://storage.googleapis.com/tfweb/demos/static/obj_detection.jpeg" crossorigin="anonymous" />
<div class="boxes-container"></div>
</div>
<div class="btn">Detect</div>
<div class="result"></div>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-cpu"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf-tflite.min.js"></script>
<script src="index.js"></script>
</body>
</html>
Any help is highly appreciated. Kind regards,
Gilbert Tanner |
st207103 | So when you call predict on the model, that operation is async, so you need to await it before you attempt to access the result object, which may not have resolved yet.
For example:
let result = await objectDetector.predict(input);
let data = await result.data();
console.log(data); |
st207104 | Thanks for the reply @Jason. I couldn’t get it to work by calling the data method on the result directly as this gave me an error but it worked, when calling it on the individual results:
let boxes = Array.from(await result["TFLite_Detection_PostProcess"].data());
let classes = Array.from(await result["TFLite_Detection_PostProcess:1"].data())
let scores = Array.from(await result["TFLite_Detection_PostProcess:2"].data())
let n = Array.from(await result["TFLite_Detection_PostProcess:3"].data())
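For reference, the standard SSD post-process op typically emits normalized boxes as [ymin, xmin, ymax, xmax], class indices, scores, and the number of valid detections, in that order. Assuming that output order, the flattened arrays above can be combined into detection objects roughly like this (a sketch, with img being the input image element):
const numDetections = n[0];
const detections = [];
for (let i = 0; i < numDetections; i++) {
  // each box occupies 4 consecutive values in the flattened boxes array
  const [ymin, xmin, ymax, xmax] = boxes.slice(i * 4, i * 4 + 4);
  detections.push({
    boundingBox: {
      originX: xmin * img.width,
      originY: ymin * img.height,
      width: (xmax - xmin) * img.width,
      height: (ymax - ymin) * img.height,
    },
    classIndex: classes[i],
    score: scores[i],
  });
}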
The complete code is available on my Github 3. If you find anything that can be improved please let me know.
Kind regards,
Gilbert Tanner |
st207105 | Glad you got it working with the await! Indeed the function can be called on Tensors, so as long as what is returned is a Tensor you should be able to call data() on it. See the TF Tensor API here:
js.tensorflow.org
TensorFlow.js
A WebGL accelerated, browser based JavaScript library for training and deploying ML models |
st207106 | I keep getting the Error No Operation named [StatefulPartitionedCall_2:0] in the Graph
when Using SavedModelBundle.exporter to save the model
Tensorflow Python Version : 2.4.1
Tensorflow Java Version: 0.3.1
Os: Windows 10
GPU/CPU: CPU version
ConcreteFunction serveFunction = savedModel.function("serve_model");
SavedModelBundle.exporter(exportDir)
.withFunction(serveFunction)
.export();
To access and inspect Graph operations, i can see the StatefulPartitionedCall_2
But without the “:0” output-index suffix at the end of the operation name.
Iterator<Operation> operationIterator = serveFunction.graph().operations();
while(operationIterator.hasNext()){
System.out.println(operationIterator.next().name());
}
code snippet output
Adam/iter
Adam/iter/Read/ReadVariableOp
Adam/beta_1
Adam/beta_1/Read/ReadVariableOp
Adam/beta_2
...
...
...
train_model_labels
StatefulPartitionedCall_1
saver_filename
StatefulPartitionedCall_2
StatefulPartitionedCall_3
It works fine when invoking the Op directly from session.runner():
String checkpointPath = "...";
session.runner()
.feed("saver_filename:0", checkpointPath)
.fetch("StatefulPartitionedCall_2:0").run() ;
The error can be reproduced using this script, which defines and then saves the model (credits to Thierry Herrmann):
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
def make_model():
    class CustomLayer(keras.layers.Layer):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            l2_reg = keras.regularizers.l2(0.1)
            self.dense = layers.Dense(1, kernel_regularizer=l2_reg,
                                      name='my_layer_dense')
        def call(self, data):
            return self.dense(data)
    inputs = keras.Input(shape=(8,))
    x1 = layers.Dense(30, activation="relu", name='my_dense')(inputs)
    outputs = CustomLayer()(x1)
    return keras.Model(inputs=inputs, outputs=outputs)
class CustomModule(tf.Module):
    def __init__(self):
        super(CustomModule, self).__init__()
        self.model = make_model()
        self.opt = keras.optimizers.Adam(learning_rate=0.001)
    @tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32)])
    def __call__(self, X):
        return self.model(X)
    # the my_train function processes one batch (one step): computes the loss and applies the
    # loss gradient to update the model weights
    @tf.function(input_signature=[tf.TensorSpec([None, 8], tf.float32), tf.TensorSpec([None], tf.float32)])
    def my_train(self, X, y):
        with tf.GradientTape() as tape:
            logits = self.model(X, training=True)
            main_loss = tf.reduce_mean(keras.losses.mean_squared_error(y, logits))
            # self.model.losses contains the regularization loss (see l2_reg above)
            loss_value = tf.add_n([main_loss] + self.model.losses)
        grads = tape.gradient(loss_value, self.model.trainable_weights)
        self.opt.apply_gradients(zip(grads, self.model.trainable_weights))
        return loss_value
# instantiate the module
module = CustomModule()
def save_module(module, model_dir):
    tf.saved_model.save(module, model_dir,
                        signatures={
                            'serve_model':
                                module.__call__.get_concrete_function(tf.TensorSpec([None, 8], tf.float32)),
                            'train_model':
                                module.my_train.get_concrete_function(tf.TensorSpec([None, 8], tf.float32),
                                                                      tf.TensorSpec([None], tf.float32))})
MODEL_OUTPUT_DIR = "..."
save_module(module, MODEL_OUTPUT_DIR) |
st207107 | For those interested to follow this topic, the discussion is happening on this GitHub issue 2. |
st207108 | env:
python = 3.8.12
tensorflow = 2.6.0
keras = 2.6.0
The problem is that I am trying to train on highly unbalanced data, so I tried to use sample_weight as part of model.fit(), but I always get the same error:
ValueError: Can not squeeze dim[4], expected a dimension of 1, got 4 for '{{node categorical_crossentropy/weighted_loss/Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1]](Cast)' with input shapes: [?,48,48,80,4].
so this is the shape of the data, where the y_s were converted using tf.keras.utils.to_categorical, where num_classes = 4 :
x_train (54, 48, 48, 80)
y_train (54, 48, 48, 80, 4)
x_test (18, 48, 48, 80)
y_test (18, 48, 48, 80, 4)
x_val (18, 48, 48, 80)
y_val (18, 48, 48, 80, 4)
the architecture is U-NET:
inputs = Input((number_of_layers, height, width, 1))
c1 = Conv3D(filters=16, kernel_size=3, activation='relu', kernel_initializer='he_normal', padding='same')(inputs)
c1 = Dropout(0.1)(c1)
c1 = Conv3D(16, kernel_size=3, activation='relu', kernel_initializer='he_normal', padding='same')(c1)
p1 = MaxPooling3D(pool_size=2)(c1)
...
outputs = Conv3D(num_classes, kernel_size=1, activation='softmax')(u9)
model = Model(inputs=[inputs], outputs=[outputs])
regarding the compile part, it’s like the following:
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'], sample_weight_mode="temporal")
NOTE: I’m not using metrics=[‘accuracy’] for evaluation, I’m using some IOU
The problem comes here, when I am using:
from sklearn.utils.class_weight import compute_sample_weight
weights = compute_sample_weight(class_weight='balanced', y=y_train.flatten())
weights = weights.reshape(y_train.shape)
weights.shape # => (54, 48, 48, 80, 4) (same as y_train)
so till here it’s working, without any errors, but when I added weights to the following dataset:
tf_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train, weights)).batch(4)
and after that I tried to run model.fit:
model.fit(x=tf_ds, verbose=1, epochs=5, validation_data=(x_val, y_val))
I got the following error:
ValueError: Can not squeeze dim[4], expected a dimension of 1, got 4 for ‘{{node categorical_crossentropy/weighted_loss/Squeeze}} = SqueezeT=DT_FLOAT, squeeze_dims=[-1]’ with input shapes: [?,48,48,80,4].
Any ideas, how to solve this ? |
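One detail worth checking, based on the shapes in the error: when the loss is applied per voxel, Keras expects the sample weights to match y_train without the final one-hot class axis, i.e. shape (54, 48, 48, 80), which is why it tries to squeeze a last dimension of 1 and fails on 4. A minimal sketch of building per-voxel weights that way (the per-class weight values below are placeholders):
import numpy as np

class_ids = np.argmax(y_train, axis=-1)                  # (54, 48, 48, 80) integer class per voxel
class_weight_lut = np.array([0.5, 1.0, 2.0, 4.0])        # hypothetical per-class weights
weights = class_weight_lut[class_ids].astype("float32")  # same shape as class_ids

tf_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train, weights)).batch(4)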
st207109 | Hi everyone - I’m re-sharing the message sent by Yong Tang (SIG IO Lead) on the SIG’s mailing list.
Hi All,
Our monthly meeting call will be hold again tomorrow. With TensorFlow 2.7 branch cut out and 2.7 final release coming soon, we are looking into having tensorflow-io 0.22.0 release when 2.7 is out. Please attend if interested.
The SIG IO monthly meeting is scheduled for tomorrow 10/14 Thursday, 11:00 AM -12:00 Pacific Time.
Below is the link to the meeting docs we can build up, feel free to update:
docs.google.com
[Public] SIG IO Meeting Notes 5
SIG IO Meeting Notes This is a public document 2021-10-14 Thursday, Oct 14th, 11:00 – 11:55 am Pacific Time Meeting recording meet.google.com/aqg-bowh-ykx +1 929-324-9972 PIN: 406 631 103# Notes Roll call Yong Tang (MobileIron) Agenda Actions...
Thanks
Yong
–
Please join us! |
st207110 | The tfjs-tflite library allows you to run TFLite models on the web.
Example:
const tfliteModel = await tflite.loadTFLiteModel('url/to/your/model.tflite');
or
const objectDetector = await tflite.ObjectDetector.create(
"https://storage.googleapis.com/tfhub-lite-models/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2.tflite"
);
I’m currently working on a simple Object Detection example 2, which works fine for the example models that are stored on Google Cloud, but I couldn’t get it to work with a custom model stored on Github.
This gives me a CORS error:
Access to fetch at 'https://github.com/TannerGilbert/TFLite-Object-Detection-with-TFLite-Model-Maker/raw/master/model.tflite' from origin 'null' has been blocked by CORS policy: The 'Access-Control-Allow-Origin' header contains multiple values 'https://render.githubusercontent.com https://viewscreen.githubusercontent.com https://viewscreen-lab.githubusercontent.com', but only one is allowed. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Therefore, I wanted to ask what’s the simplest way to solve the error or perhaps what the best platform is to store your model on the web for free.
Any help is highly appreciated. Kind regards,
Gilbert Tanner |
st207111 | Solved by Jing_Jin in post #4
Thanks @Jason for the detailed explanations!
@Gi_T, for your use case, it is possible to use the “raw.githubusercontent.com” link to get the model file which has “access-control-allow-origin” header set to “*”.
Try:
https://raw.githubusercontent.com/TannerGilbert/TFLite-Object-Detection-with-TFLi… |
st207112 | So CORS stands for Cross Origin Resource Sharing, and in order to use anything in JavaScript on the client side you must serve the asset from a server that sets the correct CORS headers so that the code is allowed to consume it - otherwise people could take content from any site and use it without permission, which could lead to unintended consequences.
If the asset is on the same domain as the webpage you are hosting then this will not be an issue. However if the asset you are trying to bring in is on a different domain (eg subdomain too) then you must explicitly set these headers with your server configuration. You also need to set the “crossorigin” attribute on the HTML too for images etc:
developer.mozilla.org
HTML attribute: crossorigin - HTML: HyperText Markup Language | MDN 1
The crossorigin attribute, valid on the <audio>, <img>, <link>, <script>, and <video> elements…
More details on CORS can be found here:
developer.mozilla.org
Cross-Origin Resource Sharing (CORS) - HTTP | MDN 1
Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources. CORS also relies on a mechanism by...
If you use a website like Glitch.com 1 to host your demo CORS headers are set automatically.
It should also be noted that you probably need to serve your assets over https too as mixed protocols eg some assets on http and some on https will also lead to issues. |
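As a rough illustration of the header described above (assuming you control the server that hosts the model file, e.g. a small Node/Express server; the folder name and port are placeholders):
const express = require('express');
const app = express();

// allow any origin to fetch the files served below
app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', '*'); // or a specific origin
  next();
});
app.use(express.static('models')); // folder containing model.tflite

app.listen(8080);
Together with crossorigin="anonymous" on the relevant HTML tags, this lets the browser consume the asset from another origin.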
st207113 | Thanks @Jason for the detailed explanations!
@Gi_T, for your use case, it is possible to use the “raw.githubusercontent.com” link to get the model file which has “access-control-allow-origin” header set to “*”.
Try:
https://raw.githubusercontent.com/TannerGilbert/TFLite-Object-Detection-with-TFLite-Model-Maker/master/model.tflite 5
Thanks! |
st207114 | Oh nice that is good to know you can use that domain to get the correct headers! Thanks for sharing Jing! |
st207115 | Hello,
I am trying to apply the command tf-opt --promote-resources-to-args on the following TF/MLIR code.
func @counter(%arg0: tensor) -> tensor {
  %1 = "tf.VarHandleOp"() {container = "", shared_name = "x"} : () -> tensor<!tf_type.resource<tensor>>
  %2 = "tf.ReadVariableOp"(%1) : (tensor<!tf_type.resource<tensor>>) -> tensor
  %3 = "tf.Add"(%arg0, %2) : (tensor, tensor) -> tensor
  "tf.AssignVariableOp"(%1, %3) {device = ""} : (tensor<!tf_type.resource<tensor>>, tensor) -> ()
  %4 = "tf.ReadVariableOp"(%1) : (tensor<!tf_type.resource<tensor>>) -> tensor
  return %4 : tensor
}
It does nothing because the function’s name is not @main. Is there a way to apply this command on every function whatever its name is ?
The result I need is :
func @counter(%arg0: tensor, %arg1: tensor {tf.aliasing_output = 1 : i64, tf.resource_name = "x"}) -> (tensor, tensor) {
  %0 = "tf.Add"(%arg0, %arg1) : (tensor, tensor) -> tensor
  return %0, %0 : tensor, tensor
}
Best regards,
HP |
st207116 | Unfortunately this isn’t possible as-is since it is hard-coded here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/tensorflow/transforms/promote_resources_to_args.cc#L348-L349 6
Feel free to send a patch that would make it an option of the pass to specify the entrypoint name! |
st207117 | I would like to add a constant tensor of values to a graph. Ideally, I want to describe the tensor as a floating point representation and have either the optimiser replace it or a helper builder do it for me. What is the correct way to do this? Thanks |
st207118 | [what i want to do]
I created a rank-3 tensor.
Then I got a flattened 1D array of all the values in this tensor by using tensor.data() method.
I want to assign each element of this flattened array as value of an object.
[what is the problem]
I’m unable to obtain the individual elements of the array using a.data().then((data) => { card.value = data[i] }); .
console.log(card.value) returns undefined.
However, using card.value = a.dataSync()[i]; seems to work, instead.
[main.js]
import * as tf from "@tensorflow/tfjs";
import Card from "./Card.js";
// create a rank-3 tensor
const a = tf.randomNormal([4, 3, 2]);
a.print();
// assign values in the tensor to a series of div object
for (let i = 0; i < a.size; i += 1) {
// create card object
const card = new Card(i, "card " + String(i), "96px", "96px");
// assign a value to card
// [method 1] using synchronous method works
// card.value = a.dataSync()[i];
// [method 2] using asynchronous method is not working ...
a.data().then((data) => { card.value = data[i] });
console.log(card.value);
[Card.js]
export default class Card {
// constructor
constructor(_idx, _name, _width, _height, _posx, _posy, _posz, _value) {
this.idx = _idx;
this.name = _name;
this.width = _width;
this.height = _height;
this.posx = _posx;
this.posy = _posy;
this.posz = _posz;
this.value = _value;
}
} |
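A sketch of how the asynchronous variant could be structured: data() returns a Promise, so the value has to be awaited before it is read, otherwise the console.log runs first and sees undefined.
// inside main.js, after creating the tensor `a`
async function assignValues() {
  const data = await a.data();           // resolves to a flat TypedArray
  for (let i = 0; i < a.size; i += 1) {
    const card = new Card(i, "card " + String(i), "96px", "96px");
    card.value = data[i];
    console.log(card.value);             // defined here, because we awaited
  }
}
assignValues();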
st207119 | Could anybody kindly help to take a look of the issue shown above and advise what is missing? Thanks. |
st207120 | SIG Build’s next meeting will be tomorrow, Tuesday, October 5, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 5, and feel free to suggest new agenda items. |
st207121 | Thanks to everyone who attended the meeting yesterday. Here are a couple of notable points from the discussion:
Our November meeting on Nov. 2 may be affected by Daylight Savings Time if you are in Europe.
The TF DevInfra team is ramping up a couple of new members with projects including manylinux2014 support for our build environment and author notifications when a GitHub PR gets rolled back.
manylinux development continues to be complicated. See the notes for details.
Thanks, and see you in November! |
st207122 | I have a Keras .pb model which I trained with TensorFlow 1.15.0 and Keras 2.3.1. I want to convert this model to a TensorRT engine. I tried using TF-TRT:
from tensorflow.python.compiler.tensorrt import trt_convert as trt
input_saved_model_dir = "my_model.pb"
output_saved_model_dir = "my_model.engine"
converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
and I am getting this error:
converter.convert()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 548, in convert
self._convert_saved_model()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 494, in _convert_saved_model
self._input_saved_model_dir)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 268, in load
loader = SavedModelLoader(export_dir)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 284, in __init__
self._saved_model = parse_saved_model(export_dir)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/loader_impl.py", line 83, in parse_saved_model
constants.SAVED_MODEL_FILENAME_PB))
OSError: SavedModel file does not exist at: my_model.pb/{saved_model.pbtxt|saved_model.pb}
the name of the model is correct and it’s there in the path…
Any guess why i am getting this error? |
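One thing the traceback hints at (a guess, not a confirmed fix): TrtGraphConverter expects a SavedModel directory — the folder containing saved_model.pb and the variables/ subfolder — rather than a path to a single .pb file. Something along these lines, where my_saved_model_dir is a placeholder for that folder:
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "my_saved_model_dir"    # directory with saved_model.pb inside
output_saved_model_dir = "my_saved_model_trt"

converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)
A frozen Keras .pb graph would first need to be re-exported as a SavedModel for this path to work.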
st207123 | Hi Everyone,
we propose some exciting improvements in KerasTuner to be implemented in the near future.
It allows you to:
Tune the hyperparameters in any step in a ML workflow, including data preprocessing, model building, and training.
Tune your existing Keras code with little modification.
Your feedback is valuable to us!
Please let us know what you think of these features by commenting on the pull request. [Link 2]
To view the proposal: [Link 4] |
st207124 | I am currently working on integrating a tf.recommender 1 model into an existing TFX 1 pipeline to provide on-service recommendations. I am quite new to both TFX and tf.recommender and am not seeing any resources on integrating the two. I want to be sure I am implementing best practices–in particular with TFT 1 and TFMA 1. Does anyone know of existing docs that may help me with this, or better yet, existing example pipelines?
Thanks! |
st207125 | @Robert_Crowe Do you happen to have any relevant info on this topic or know where I could look? |
st207126 | Just to note that this Is in the scope of Recommenders Addons:
github.com
GitHub - tensorflow/recommenders-addons: Additional utils and helpers to extend... - Scope 4
Additional utils and helpers to extend TensorFlow when build recommendation systems, contributed and maintained by SIG Recommenders. - GitHub - tensorflow/recommenders-addons: Additional utils and ... |
st207127 | Thanks for pointing this out @Bhack. This is what prompted me to reach out as it indicates that there are existing docs. |
st207128 | Hello, I’m working on training a CNN with two datasets that I labeled manually as negative and positive (80x60 depth images in each matrix).
# dimensions of our images.
img_width, img_height = 80, 60
n_positives_img, n_negatives_img = 17874, 26308
n_total_img = 44182
#Imports of datasets inside Drive
ds_negatives = np.loadtxt('/content/drive/MyDrive/Colab Notebooks/negative_depth.txt')
ds_positives = np.loadtxt('/content/drive/MyDrive/Colab Notebooks/positive_depth.txt')
#Labeled arrays for datasets
arrayceros = np.zeros(n_negatives_img)
arrayunos = np.ones(n_positives_img)
#Reshaping of datasets to convert separate them
arraynegativos= ds_negatives.reshape(( n_negatives_img, img_width, img_height))
arraypositivos= ds_positives.reshape((n_positives_img, img_width, img_height))
#Labeling datasets with the arrays
ds_negatives_target = tf.data.Dataset.from_tensor_slices((arraynegativos, arrayceros))
ds_positives_target = tf.data.Dataset.from_tensor_slices((arraypositivos, arrayunos))
#Concatenate 2 datasets and shuffle them
ds_concatenate = ds_negatives_target.concatenate(ds_positives_target)
datasetfinal = ds_concatenate.shuffle(n_total_img)
But when I try to separate my dataset 80/20 to validate my CNN:
trainingdataset, validatedataset = train_test_split(datasetfinal, test_size=0.2, random_state=25)
I get this error:
TypeError: Singleton array arrayshapes: ((80, 60), ()), types: (tf.float64, tf.float64)>, dtype=object) cannot be considered a valid collection.
Any ideas? Thanks in advance!!! |
st207129 | It’s impossible to split tensorflow dataset object by passing it to train_test_split from sklearn. You can choose the number of validation samples, which should be int number, and use the following example:
valid_ds = datasetfinal.take(n_samples)
train_ds = datasetfinal.skip(n_samples)
It does what the method says: takes first n_samples from the dataset and skips all the rest or skips the first n_samples and takes all the rest. |
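One caveat that may matter here: shuffle() reshuffles on every iteration by default, so a take/skip split taken from a shuffled dataset can mix samples between the two subsets across epochs. A sketch of shuffling once before splitting (n_valid is simply 20% of the total):
datasetfinal = ds_concatenate.shuffle(n_total_img, reshuffle_each_iteration=False)

n_valid = int(0.2 * n_total_img)
valid_ds = datasetfinal.take(n_valid)
train_ds = datasetfinal.skip(n_valid)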
st207130 | I fixed that, but then when I build the model and fit it:
model = Sequential()
model.add(Conv2D(5, kernel_size=(5, 5),activation='linear',input_shape=(80,60,1),padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2),padding='same'))
model.add(Conv2D(5, (5, 5), activation='linear',padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2),padding='same'))
model.add(Conv2D(5, (5, 5), activation='linear',padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2),padding='same'))
model.add(Flatten())
model.add(Dense(100, activation='linear'))
model.add(Dense(1, activation='linear'))
#Compiling model
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
model.fit(train_ds, validation_data=valid_ds, batch_size=32, epochs=10)
I get this error:
ValueError: Input 0 of layer sequential_6 is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (80, 60) |
st207131 | If your data has shape (80, 60), the input layer of the network should also have this input shape. But you defined input_shape=(80, 60, 1), which is 3D.
When the error says that it expected 4 dimensions, it means that it expects 3D input as you defined it + batch dimension of the dataset. You probably did not apply .batch(batch_size) method to your datasets before passing them to the model.
You can read about preparing data and various dataset methods here (tf.data: Build TensorFlow input pipelines | TensorFlow Core). You could probably benefit from using .cache() method as well. |
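A minimal sketch of those two suggestions applied to the datasets from this thread (the batch size and the added channel axis are illustrative):
# add a channel axis so each sample is (80, 60, 1), then batch the datasets
def add_channel(x, y):
    return tf.expand_dims(x, axis=-1), y

train_ds = train_ds.map(add_channel).batch(32).prefetch(tf.data.AUTOTUNE)
valid_ds = valid_ds.map(add_channel).batch(32).prefetch(tf.data.AUTOTUNE)

model.fit(train_ds, validation_data=valid_ds, epochs=10)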
st207132 | Ekaterina_Dranitsyna:
split tensorflow dataset object
I have followed your reply and trained my CNN correctly, but now that I want to compute a confusion matrix of my results, it’s impossible because I don’t have my TensorFlow dataset separated into x_ds and y_ds with the label array, so I can’t compare it with the predicted one.
How could I do that?
THANKS! |
st207133 | To extract validation labels from a batched dataset you can do this:
valid_labels = list(valid_ds.flat_map(lambda x, y: tf.data.Dataset.from_tensor_slices((x, y))).as_numpy_iterator())
valid_labels = [y for x, y in valid_labels]
Probably, there is some easier way to do it. But now I can’t think of anything else.
Then you get predicted values from the model:
pred_labels = model.predict(valid_ds)
And use this to get a confusion matrix:
conf_m = tf.math.confusion_matrix(valid_labels, pred_labels) |
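One extra step is usually needed here: model.predict returns continuous scores, while tf.math.confusion_matrix expects integer class labels, so the predictions generally have to be converted first. A sketch, depending on whether the model's head is softmax or a single sigmoid:
import numpy as np

pred_scores = model.predict(valid_ds)
# for a multi-class (softmax) output:
pred_labels = np.argmax(pred_scores, axis=-1)
# or, for a single sigmoid output:
# pred_labels = (pred_scores.ravel() > 0.5).astype("int32")

conf_m = tf.math.confusion_matrix(valid_labels, pred_labels)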
st207134 | It gives me this error:
InvalidArgumentError: `predictions` contains negative values
Condition x >= 0 did not hold element-wise:
x (shape=(20000, 10) dtype=int64) =
['3', '3', '-8', '...']
It makes no sense because my labels are just 0 and 1.
This is what pred_labels contains:
[[ 6.11138 2.9243512 -11.660926 … -11.982912 -12.400366
-12.061557 ]
[ 6.1406865 2.6330147 -11.452074 … -11.73517 -12.167985
-11.924534 ]
[ 6.0676413 2.5402145 -11.498982 … -11.899355 -12.084745
-11.687552 ]
…
[ 6.1307297 2.6329107 -11.447449 … -11.732571 -12.161408
-11.918484 ]
[ 6.056893 3.654493 -10.960058 … -11.586772 -11.884876
-11.678584 ]
[ 6.1401978 2.6324804 -11.449804 … -11.733837 -12.1662035
-11.92291 ]]
It seems to be one image not predicted labels |
st207135 | Check the final layer of the model. It should have an activation suitable to classification task (sigmoid in this case) and 1 neuron. Loss function should be BinaryCrossentropy.
You can see this example of a binary image classification:
keras.io
Keras documentation: 3D image classification from CT scans 2 |
st207136 | Firstly, thank you that worked!!
But, this is the confusion matrix i get
[[7829 4138]
[5346 2687]]
And this is the precision 52% and accuracy 39%.
Why do I get these values if model.evaluate and model.fit report around 93% accuracy?
colab.research.google.com
Google Colaboratory 1 |
st207137 | SIG-JVM’s monthly meeting is 9am PDT today. The agenda is here 3. We’re going to discuss plans for the next release, issues with training gradients, strategies for patch releases now upstream TF is more consistently releasing them, and our longer term plans for inference/training. |
st207138 | In most callbacks, when logging information, print is used. But the standard logging library is also used for warnings or errors. Why is logging.info not used instead of print?
For example, in the ReduceLROnPlateau callback, on_epoch_end 1, it is the following :
print('\nEpoch %05d: ReduceLROnPlateau reducing learning rate to %s.' % (epoch + 1, new_lr))
But for the warning, the standard logging library is used :
logging.warning('Learning rate reduction mode %s is unknown, fallback to auto mode.', self.mode)
I’m currently trying to redirect the print output to a logger, and it would have been much simpler if logging.info was used. Any ideas on why this is done like this and whether I should bother to make a PR to change it? |
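One possible workaround while the callbacks still use print (a sketch, not an official Keras hook; model, x_train, and the callback are placeholders from your own code): redirect stdout to a small file-like object that forwards everything to a logger.
import contextlib
import logging

logger = logging.getLogger("training")

class LoggerWriter:
    """Minimal file-like object that forwards print() output to logging."""
    def write(self, message):
        message = message.strip()
        if message:
            logger.info(message)
    def flush(self):
        pass

with contextlib.redirect_stdout(LoggerWriter()):
    model.fit(x_train, y_train, epochs=10, callbacks=[reduce_lr])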
st207139 | AttributeError Traceback (most recent call last) in 14 sys.path.append(ROOT_DIR) # To find local version of the library 15 from mrcnn import utils —> 16 import mrcnn.model as modellib 17 from mrcnn import visualize 18 # Import COCO config
~\ComputerVisionProject\Mask_RCNN_CustomDataset\Mask_RCNN-master\Mask_RCNN-master\mrcnn\model.py in 253 254 → 255 class ProposalLayer(KE.Layer): 256 “”"Receives anchor scores and selects a subset to pass as proposals 257 to the second stage. Filtering is done based on anchor scores and
AttributeError: module ‘keras.engine’ has no attribute 'Layer’ |
st207140 | Are you using this repo? I think it’s for TF version 1.x
github.com
matterport/Mask_RCNN 50
Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow |
st207141 | @rohan_Singh If you’re using tf 2. x you can use the following implementation. It’s an extended version of the matterport and provides many backbones including efficientnets.
github.com
GitHub - alexander-pv/maskrcnn_tf2: Mask R-CNN for object detection and instance... 162
Mask R-CNN for object detection and instance segmentation with Keras and TensorFlow V2 and ONNX and TensorRT optimization support. - GitHub - alexander-pv/maskrcnn_tf2: Mask R-CNN for object detec... |
st207142 | Good morning everyone,
I’ll try to briefly explain the context and then the problem I’m facing:
Context: I am using and testing a collaborative Robot. This Robot has been provided to me with a library in python that allows to acquire signals from the robot (currents, velocity, positions, I/O etc) and to command it, in joint and end-effector (EE) coordinates. There are also available the functions of Direct and Inverse Kinematics (DK and IK).
For my curiosity, I was interested in generating a trajectory (in end-effector coordinates) in order to move it within a conical area [I attach a link to the video that shows the movement in question].
LINK: https://www.youtube.com/watch?v=CExtMfvRabo 4
From the robot, moreover, it is possible to save the .csv file containing the trajectories, in joint coordinates, of the single joints.
Initially, not knowing the “shape” that should have the trajectory (in end-effector coordinates) of the movement that I was interested in reproducing, I was able, manually moving the robot in gravity compensation mode, to acquire the trajectories of the individual joints. At this point, using the Direct Kinematics algorithm, I obtained the movement of the consequent end-effector [I attach photos of 2 3D graphs: the first in which I plot the 3 coordinates x,y,z and the second, in which I plot roll, pitch, yaw].
End Effector Angular Displacement 1
End Effector Position Displacement
Here the problem was born.
Problem: out of curiosity, I tried to use the Inverse Kinematics algorithm on the points obtained from the DK and the algorithm returned the error: “Singular Trajectory”. But the robot was able to move according to that trajectory, the problem should be in the calculation of the IK, which probably finds multiple/infinite solutions.
To overcome this limitation I used a Neural Network developed in Python using Tensorflow (Keras) to try to approximate the IK. I will preface this by saying that I have never used Keras or Tensorflow, so I may have made some conceptual errors. I have consulted the API of Keras and also the guide proposed in this link
LINK: https://machinelearningmastery.com/deep-learning-models-for-multi-output-regression/ 1
In my PC I use:
Visual Studio Code for programming in python;
python 3.9.5
Keras 2.6.0;
I thought of the neural network this way: 6 input nodes (corresponding to the 6 coordinates of the end-effector) and 6 output nodes (the 6 coordinates of the joints). The training set consists of a .csv file containing the 6 coordinates of the end-effector computed via the DK run on a .csv file containing the trajectories of the 6 joints. The file containing the joint coordinates is the Label file.
Below I attach the code of the network implementation.
from numpy import loadtxt
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
import tensorflow as tf
from numpy import array
# Model definition
def get_model(I_N_L_1, I_N_L_2, I_N_L_3, I_N_L_4, I_N_L_5, n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(I_N_L_1, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(I_N_L_2, activation='relu'))
    model.add(Dense(I_N_L_3, activation='relu'))
    model.add(Dense(I_N_L_4, activation='relu'))
    model.add(Dense(I_N_L_5, activation='relu'))
    model.add(Dense(n_outputs))
    model.compile(loss='mae', optimizer='adam', metrics=["mae"])
    return model
# Load Training set csv
dataset_EF = loadtxt('WeldingProve.csv', delimiter=',')
x_train = dataset_EF[0:1700,0:6]
print('shape: ',x_train.shape)
# Load Label set csv
dataset_joints = loadtxt('EF_from_WeldingProve.csv', delimiter=',')
y_train = dataset_joints[0:1700,0:6]
print('shape: ',y_train.shape)
# Test set definition
x_test = dataset_EF[1701:,0:6]
print('shape: ',x_test.shape)
# Label of the test set definition
y_test = dataset_joints[1701:,0:6]
print('shape: ',y_test.shape)
# Number of nodes in the hidden layers
I_N_L_1 = 192
I_N_L_2 = 36
I_N_L_3 = 6
I_N_L_4 = 36
I_N_L_5 = 192
# Number of nodes in the input and output layers
n_inputs = 6
n_outputs = 6
# calling the "get_model" function
model = get_model(I_N_L_1, I_N_L_2, I_N_L_3, I_N_L_4, I_N_L_5 ,n_inputs, n_outputs)
es = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=5)
# fit model
model.fit(x_train, y_train, verbose=1, epochs=600)
# saving the model
model.save("Test_Model.h5")
# Testing procedure
pred = []
# Computing the Prediction on the Test Set
for i in range(len(x_test)-1):
    b = [x_test[i][0], x_test[i][1], x_test[i][2], x_test[i][3], x_test[i][4], x_test[i][5]]
    ToBePredicted = array([b])
    Prediction = model.predict(ToBePredicted)
    a = [Prediction[0][0], Prediction[0][1], Prediction[0][2], Prediction[0][3], Prediction[0][4], Prediction[0][5]]
    pred.append(a)
# Computing the mean vector of the error for each predicted joint trajectory
average_vector = []
sum = 0
average = 0
for j in range(6):  # columns
    for i in range(len(y_test)-1):  # rows
        sum = sum + (pred[i][j] - y_test[i][j])
    average = sum/len(y_test)
    average_vector.append(average)
    average = 0
    sum = 0
print('average_vector: ', average_vector)
# Computing the standard deviation vector of the error for each predicted joint trajectory
sum = 0
std_vector = []
for j in range(6):  # columns
    for i in range(len(y_test)-1):  # rows
        sum = sum + ((pred[i][j] - y_test[i][j]) - average_vector[j])**2
    std = (sum/len(y_test))**(0.5)
    std_vector.append(std)
    std = 0
    sum = 0
print('std_vector: ', std_vector)
My questions are the following:
once I have trained the neural network, even using a very large training set, I get predictions that are not good. Can you suggest me how to improve these predictions, perhaps going to act on the parameters of the network,
Is it necessary to pre-process the training data and its labels? If yes, which technique should I apply?
Trying to change the number of nodes in the various layers of the network, I saw that the performance changes, even a lot. Do you have advice on the “shape” to give to the network ?
Are there any other solutions that can be used to estimate the IK of the robot ? |
st207143 | I’m not an expert in robotics. So my comments are only regarding the model architecture and training.
When you call model.fit() you can pass you test data to the argument “validation_data”, and the model will automatically calculate loss and metrics for both train and validation set. TensorFlow has MeanSquaredError and MeanAbsolutePercentageErrror in addition to MAE that you use.
In the EarlyStopping callback you should define monitor=‘val_loss’ and restore_best_weights=True. In this case the training will be stopped, when validation loss starts worsening, and the model will roll back to the optimal state, when best val_loss was reached. At present you monitor training loss, which does not say anything about overfitting.
Check the scale of the coordinates used as input features. If they are not in range 0-1, input data requires normalization. Keras has Normalization layer, which could be used to ensure that all data passed to the model is normalized identically.
Usually number of units in the dense layers gradually decreases. You defined 5 layers with units decreasing and then increasing like V-shape.
If all this does not improve the result, probably you should add more features like previous positions of the object, or it’s speed, or something else. |
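A condensed sketch of those suggestions put together (layer sizes here are illustrative, and the Normalization layer is adapted to the training inputs so scaling is applied consistently):
import tensorflow as tf

normalizer = tf.keras.layers.experimental.preprocessing.Normalization()
normalizer.adapt(x_train)   # learn mean/variance from the training inputs

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),
    normalizer,
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(6),
])
model.compile(optimizer='adam', loss='mae')

es = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=20, restore_best_weights=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=600, callbacks=[es])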
st207144 | I really thank you very much for these tips. It is my interest to implement them as soon as possible and see if I can get any improvements.
Thanks again for your kindness and availability. |
st207145 | I think you can also Explore a RL approach.
This is large scale library so probably it doesn’t for your use case:
github.com
GitHub - google-research/tensor2robot: Distributed machine learning... 1
Distributed machine learning infrastructure for large-scale robotics research - GitHub - google-research/tensor2robot: Distributed machine learning infrastructure for large-scale robotics research
But as it is suggested in the Readme you could try to to look to Dopamine of tf-agents repos.
I also suggest to take a look at:
github.com
GitHub - AndrejOrsula/drl_grasping: Deep Reinforcement Learning for Robotic... 3
Deep Reinforcement Learning for Robotic Grasping from Octrees - GitHub - AndrejOrsula/drl_grasping: Deep Reinforcement Learning for Robotic Grasping from Octrees |
st207146 | I think I have implemented the changes you suggested and below I attach the updated code.
From the first results I’ve seen that the prediction has improved dramatically, but I need some clarification regarding some parameters and functions in use in the neural network.
Note: I have a dataset of 16400 samples (.csv file consisting of a matrix 16400x6)
from numpy import loadtxt
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from numpy.core.einsumfunc import einsum
import tensorflow as tf
from keras.layers import LayerNormalization
from numpy import array
# Model definition
def get_model(I_N_L_1, I_N_L_2, I_N_L_3, I_N_L_4, I_N_L_5, I_N_L_6, n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(I_N_L_1, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(LayerNormalization(axis=-1, center=True, scale=True))
    model.add(Dense(I_N_L_2, activation='relu'))
    model.add(Dense(I_N_L_3, activation='relu'))
    model.add(Dense(I_N_L_4, activation='relu'))
    model.add(Dense(I_N_L_5, activation='relu'))
    model.add(Dense(I_N_L_6, activation='relu'))
    model.add(Dense(n_outputs))
    model.compile(loss='mae', optimizer='adam', metrics=["mae"])
    return model
print('start reading CSV')
# Load Training set csv
dataset_EF = loadtxt('EF_from_Welding2.csv', delimiter=',')
x_train = dataset_EF[0:12000,0:6]
print('shape: ',x_train.shape)
# Load Label set csv
dataset_joints = loadtxt('Welding2.csv', delimiter=',')
y_train = dataset_joints[0:12000,0:6]
print('shape: ',y_train.shape)
# Validation set definition
x_val = dataset_EF[12001:14000,0:6]
print('shape: ',x_val.shape)
# Label of the validation set definition
y_val = dataset_joints[12001:14000,0:6]
print('shape: ',y_val.shape)
# Test set definition
x_test = dataset_EF[14001:,0:6]
print('shape: ',x_test.shape)
# Label of the test set definition
y_test = dataset_joints[14001:,0:6]
print('shape: ',y_test.shape)
print('end reading CSV')
# Number of nodes in the hidden layers
I_N_L_1 = 700
I_N_L_2 = 450
I_N_L_3 = 300
I_N_L_4 = 150
I_N_L_5 = 75
I_N_L_6 = 15
# Number of nodes in the input and output layers
n_inputs = 6
n_outputs = 6
print('start model and training')
# calling the "get_model" function
model = get_model(I_N_L_1, I_N_L_2, I_N_L_3, I_N_L_4, I_N_L_5, I_N_L_6, n_inputs, n_outputs)
# fit model
es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20, mode='min', restore_best_weights=True, verbose=1)
model.fit(x_train, y_train, validation_data=(x_val, y_val), verbose=1, epochs=100, callbacks=[es])
print('end model and training')
print('start saving model')
# saving the model
model.save("ModelloDiProva1.h5")
print('end saving model')
print('start predictions')
# Testing procedure
pred = []
# Computing the Prediction on the Test Set
for i in range(len(x_test)-1):
    b = [x_test[i][0], x_test[i][1], x_test[i][2], x_test[i][3], x_test[i][4], x_test[i][5]]
    ToBePredicted = array([b])
    Prediction = model.predict(ToBePredicted)
    a = [Prediction[0][0], Prediction[0][1], Prediction[0][2], Prediction[0][3], Prediction[0][4], Prediction[0][5]]
    pred.append(a)
print('end predictions')
print('start validation')
# Computing the mean vector of the error for each predicted joint trajectory
average_vector = []
sum = 0
average = 0
for j in range(6):  # columns
    for i in range(len(y_test)-1):  # rows
        sum = sum + (pred[i][j] - y_test[i][j])
    average = sum/len(y_test)
    average_vector.append(average)
    average = 0
    sum = 0
print('average_vector: ', average_vector)
# Computing the standard deviation vector of the error for each predicted joint trajectory
sum = 0
std_vector = []
for j in range(6):  # columns
    for i in range(len(y_test)-1):  # rows
        sum = sum + ((pred[i][j] - y_test[i][j]) - average_vector[j])**2
    std = (sum/len(y_test))**(0.5)
    std_vector.append(std)
    std = 0
    sum = 0
print('std_vector: ', std_vector)
print('end validation')
as per your advice I have changed the “shape” of the neural network, starting with an initial layer of 700 nodes to arrive at the penultimate layer with 15 nodes. Do you have any advice or rule of thumb to use on the number of layers and the number of nodes per layer, in relation to the position of the layer within the network?
I have divided my dataset in the following way: 12000 values for the training set, 2000 values for the validation set and the remaining values for the test set. Is this a sensible choice or is it better to have all 3 sets with the same size?
Until now, each layer has the activation function “relu”. I have tried to use other activation functions, getting less precise results; in several examples on the internet I see that, for example, in the output layer a different activation function is used than in the previous layers. Is there a way to choose the best activation function based on the problem you are using? Why is a different activation function used in the output layer than in the other layers?
always searching between the examples in internet I have seen that the function “DropOut()” is used. I have understood that it is used to avoid the overfitting of the network by acting randomly on the weights stored in a particular layer of the network. Could it be useful to insert it also in my network? If yes, is it necessary to insert it between two specific layers or is it necessary to “go by attempts”?
Relatively to the normalization of the input, I have used the function “LayerNormalization”:
is it necessary to insert it only once, or in multiple layers of the network?
There is also a normalization function called “BatchNormalization”, but I could not understand the difference between the first and the second.
Thanks in advance for your attention |
st207147 | Hi! I’m glad you got some positive results.
When I wrote about normalization of input data, I meant preprocessing.Normalization layer. You can find an example in this tutorial: Classify structured data using Keras Preprocessing Layers
This layer should be initialized and adapted to the training subset of your data. Then it should be used inside a model as the first layer or as a second layer following layer Input.
Training and validation sets should not be of equal size. Using 10% to 15% of the data for validation is normal, especially if you have a very small data set.
Before using any techniques to prevent overfitting you need to find out if the model actually overfits. For that you should plot train and validation loss and inspect the chart. Here is a tutorial on this subject: Overfit and underfit | TensorFlow Core
If you use Dropout layers they are added after all or some of the inner dense layers (not after the final dense layer).
As for activations, they depend on the position of the dense layer and the task. In the inner dense layers you can use “relu”, “elu” or “selu”. In the final layer for regression task you do not specify activation, which is equivalent to “linear” activation. If you had a classification task, the final layer would have “softmax” or “sigmoid” activation depending of the number of classes.
The optimal architecture of the network is a result of trial and error. You can experiment manually or use KerasTuner to explore parameter combinations automatically. |
st207148 | @Aristide_Martello
hi you, You can share WeldingProve.csv and EF_from_WeldingProve.csv
thanks |
st207149 | Good morning,
sorry for the late reply. I have problem in sharing the files. Can I send you via Email ? |
st207150 | Is there a quick and easy import tool for custom voice data?
Is there a free local training Speech Recognition to text tool (including exporting model for tf.js) for custom raw voice data?
Can I run tf.js to automatically learn unknown speech sounds and integrate them into existing model examples?
ps: I don’t want to train my custom data through a cloud-based paid service |
st207151 | Welcome to the community. If you just need sound recognition you can try Teachable Machine that makes it easy to recognize short form sounds eg 1 second in length. I have not seen a full voice recognition conversion yet as those tend to be quite large in file size, but sound recognition is most certainly possible. check:
teachablemachine.withgoogle.com
Teachable Machine 16
Train a computer to recognize your own images, sounds, & poses.
A fast, easy way to create machine learning models for your sites, apps, and more – no expertise or coding required.
And then select audio project. If you like what it trains in browser you can click download on top right and save the model files generated to your computer. All training is done in browser using TensorFlow.js so no server is used here other than to deliver the initial webpage so your sounds are never sent to a server.
If you want to do voice recognition in JavaScript it actually exists via the WebSpeech API:
developer.mozilla.org
Using the Web Speech API - Web APIs | MDN 11
The Web Speech API provides two distinct areas of functionality — speech recognition, and speech synthesis (also known as text to speech, or tts) — which open up interesting new possibilities for accessibility, and control mechanisms. This article...
You do not need TensorFlow.js to use that. It is part of the browser implementation and will use whatever OS level voice recognition exists.
Good luck! |
st207152 | Because I have a hearing impairment and the recognition rate of such products in real life is very low and there is no self-learning enhanced training feature.
So I want to research if tf.js has a self-learning unsupervised function. And improve the recognition rate.
If there are only short voice commands, it is not helpful for hearing impaired people. |
st207153 | I see! Thank you for the context.
So our short form audio detection would be good to inform you of sounds like a fire alarm, a gunshot, a doorbell etc - things that repeat or distinct. So in that sense it could be useful for that sort of a task to then trigger a push alert on your phone to notify you something needs attention which may otherwise be missed if one can not hear them.
In terms of voice recognition, right now, the API above is the best bet for JavaScript as the on device voice models to the best of my knowledge are Gigabytes in size I believe? Maybe @lgusm knows more on that voice recognition models or knows someone who does? |
st207154 | It is a sort of Google project euphonia but with TF.js
https://sites.research.google/euphonia/about 10 |
st207155 | See also Conformer Parrotron: a Faster and Stronger End-to-end SpeechConversion and Recognition Model for Atypical Speech – Google Research 7 |
st207156 | Thank you for your reply.
The fact is that I need to communicate with normal people.
I can’t use short speech to understand what normal people use to say.
I would like tf.js to provide a voice training version of long sentences. |
st207157 | Hi Flash,
We’ve just published this community tutorial: Fine-tuning Wav2Vec2 with an LM head | TensorFlow Hub 12
It’s not going to help you directly as it’s an English model, but it could give you some kind of start.
I’ll keep looking for better options and let you know if I find something. |
st207158 | hi, thanks for your community tutorial link.
MY PC is CPU i5-3470, and no GPU.
OS: windows 10 pro
env: miniconda
I wrote the code according to the instruction (GitHub - flashlin/deep_learning 2)
but it show the error message
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.UnimplementedError: Fused conv implementation does not support grouped convolutions for now.
[[{{node StatefulPartitionedCall/wav2vec2/encoder/pos_conv_embed/conv/Conv1DWithWeightNorm}}]] [Op:__inference_restored_function_body_39909]
Function call stack:
restored_function_body |
st207159 | did you install the dependencies?
!pip3 install -q git+https://github.com/vasudevgupta7/gsoc-wav2vec2@main
!sudo apt-get install -y libsndfile1-dev
!pip3 install -q SoundFile
Can you try that on Google Colab please? that’s easier to get the env working fine it’s free. |
st207160 | But if I use google colab,
How do I automatically collect unrecognizable sounds on the client side?
and perform training automatically
Enhance the learning and merge it into the trained model. |
st207161 | Google Colab is typically to try Python code out via browser - it seems lgusm’s suggestion above is Python based not JavaScript - it actually fires up a server to execute so may be trickier than using JS to gather sensor data from device as it is not front end on device.
If you want to do the data collection on the client side you would need to make your own custom version of Teachable Machine so that it could generate data in the right form you could use to retrain the model @lgusm suggested which you could then maybe convert to TensorFlow.js format via our converter? Do you know if that one is compatible for conversion @lgusm or has a JS implementation? |
st207162 | Jason:
own custom version of Teachable Machine
I like how easy Teachable Machine is to use,
However, Teachable Machine has no place to upload Teachable Machine trained models so that I can enhance them.
How do I view the Teachable Machine Audio Project Source Code?
Or can I customize a project? |
st207163 | Teachable Machines repo is here: GitHub - googlecreativelab/teachablemachine-community: Example code snippets and machine learning code 3
it can give you some insights but I think it’s focused on short commands (like this tutorial: Simple audio recognition: Recognizing keywords | TensorFlow Core 4)
That model I shared has just been published, there’s no TFJS version yet and it’s a little big (+200MB). I shared it because it’s a state of the art for Automatic Speech Recognition and can give some ideas. |
st207164 | Is it available in the XLSR version? As probably It could be easier to finetune that one in a low resource regime.
huggingface.co
Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers 4 |
st207165 | They have organized a nice fine-tuning community week a few months ago:
Hugging Face Forums – 17 Mar 21
[Open-to-the-community] XLSR-Wav2Vec2 Fine-Tuning Week for Low-Resource... 15
🤗 Speech-To-Text in 60 languages 🌎 🌍 🌏 Hi all, We organize a community week (Mar 22th to Mar 29th) to fine-tune the cross-lingual speech recognition model XLSR-Wav2Vec2 on all languages of the crowd-sourced Common Voice dataset. What it is about...
Reading time: 28 mins 🕑
Likes: 501 ❤
It could be nice to involve also our community on initiatives like these e.g. with TFHub /cc @thea @Joana @yarri-oss |
st207166 | So in terms of uploading previously saved training data to Teachable Machine I believe it does allow you to open arbitrary data saved from other TM produced models etc if you have access to them. You just need to click on the 3 lines at the top left to access the file menu to do so. Eg on this page: Teachable Machine 2
Do this:
(screenshot: the Teachable Machine menu)
Check out @lgusm suggestions for acessing the raw code of TM though and there is also a fun codelab on how to make your own Teachable Machine for images here - but as audio classification is an image problem it may also help you out:
Google Codelabs
TensorFlow.js Transfer Learning Image Classifier | Google Codelabs 4
In this codelab, you will learn how to build a “Teachable machine”, a custom image classifier that you will train on the fly in the browser using TensorFlow.js. |
st207167 | I spent a lot of time setting up windows 10 to run tensorflow environments.
Just now I finally managed to run the tutorial you provided. (like this tutorial: Simple audio recognition: Recognizing keywords | TensorFlow Core 8)
If I lengthen the contents of commands,
Is it possible to train a language with variable length sentences? |
st207168 | Hi,
Looking at Making new layers and models via subclassing 2 I see that the derived class:
class VariationalAutoEncoder(keras.Model):
...
does not accept the ‘inputs’ keyword argument. I wondered why that is? I see that the code https://github.com/keras-team/keras/blob/v2.6.0/keras/utils/generic_utils.py#L1137 2
forbids it, but then, why would inputs and outputs need to be defined for the plain tf.keras.Model, but not for a derived class? |
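A small illustration of the difference: inputs/outputs describe a graph of layers, which only exists for the functional API; a subclassed model defines its forward pass imperatively in call(), so there is nothing to pass as inputs/outputs at construction time.
import tensorflow as tf

# Functional API: the Model is built *from* inputs and outputs.
inputs = tf.keras.Input(shape=(8,))
outputs = tf.keras.layers.Dense(1)(inputs)
functional_model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Subclassing: the computation lives in call(), so no inputs/outputs kwargs.
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.dense(x)

subclassed_model = MyModel()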
st207169 | Hi tensors,
I have seen chrome extension 6 in the tfjs-examples, I am trying to make it for my model. But for that also, I am not able to get required tfjs.min.js and tfjs.js lib into extension.
I have added lib links to popup.html and tried to load the model in background.js.
Uncaught (in promise) ReferenceError: tflite is not defined
I have added package.json file —
{
  "name": "xxx",
  "version": "0.0.1",
  "description": "Use tfjs model.predict in a chrome extension",
  "scripts": {
    "copy": "copy content.js dist/",
    "build": "parcel build background.js -d dist/ -o background --no-minify && npm run copy",
    "watch": "npm run copy && parcel watch background.js --hmr-hostname localhost -d dist/ -o background"
  },
  "license": "Apache 2.0",
  "devDependencies": {
    "babel-core": "^6.26.3",
    "babel-plugin-transform-runtime": "^6.23.0",
    "babel-polyfill": "^6.26.0",
    "babel-preset-env": "^1.6.1",
    "clang-format": "^1.2.3",
    "parcel-bundler": "^1.9.4"
  },
  "resolutions": {
    "is-svg": "4.3.1",
    "node-fetch": "2.6.1",
    "vega": "5.17.3",
    "glob-parent": "5.1.2",
    "postcss": "8.2.10"
  },
  "dependencies": {
    "@tensorflow/tfjs": "^3.9.0"
  }
}
The manifest.json is like this:
{
  "name": "xxxxx",
  "description": "xxxxxx",
  "version": "1.0",
  "manifest_version": 2,
  "browser_action": {
    "default_icon": "icon.png",
    "default_popup": "popup.html",
    "default_title": "Chrome Extension"
  },
  "permissions": [
    "<all_urls>",
    "activeTab"
  ],
  "background": {
    "scripts": ["background.js"],
    "persistent": true
  },
  "content_scripts": [
    {
      "matches": ["http://*/*", "https://*/*"],
      "js": ["content.js"],
      "all_frames": true,
      "run_at": "document_start"
    }
  ],
  "commands": {
    "_execute_browser_action": {
      "suggested_key": {
        "default": "Ctrl+Shift+F",
        "mac": "MacCtrl+Shift+F"
      },
      "description": "Opens popup.html"
    }
  },
  "content_security_policy": "script-src 'self' https://cdn.jsdelivr.net 'unsafe-eval'; object-src 'self'"
}
background.js is like this:
// Copyright 2018 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//import * as tf from 'tfjs';
/**
 * Function called when the extension is installed into the browser.
 */
chrome.runtime.onInstalled.addListener(function() {
  // load TFLite model into browser
  async function load_tflite_model() {
    const tfliteModel = await tflite.loadTFLiteModel(
      "https://storage.googleapis.com/tfweb/models/cartoongan_fp16.tflite"
    );
    console.log("tfliteModel...", tfliteModel);
  }
  load_tflite_model();
});
Thanks,
Neha Soni |
st207171 | Here is a list of editors that you can use to develop Keras.
The steps to set up each of them are provided.
Feel free to reply to this topic to add more.
GitHub Codespaces
This is the easiest option. It helps you setup the environment with one click.
You can click “Code → new codespace” on your fork’s web page to open it in GitHub Codespaces 15.
You can start coding and running the tests there right away.
However, Codespaces is only available in beta. You need to request early access to use it.
Visual Studio Code
This is also an easy option for beginners.
Clone your fork of the repo to your computer.
Open Visual Studio Code 2.
Install the Remote-Containers extension 8.
Open the cloned folder and click “Reopen in Container” in the popup notification.
You can start coding and running the tests there right away. |
st207172 | Thank you,
I think we are missing the linting tools in keras/requirements.txt at master · keras-team/keras · GitHub 1
See my comment at https://github.com/keras-team/keras/pull/15006#pullrequestreview-716500818 1
Once it is complete you can close my June contribution offer at Vscode/Github codespaces.
We are also waiting for the same for TF core at Tensorflow with Github Codespaces 1.
For TF Addons it was rejected more than 1 year ago, but we could probably re-evaluate it:
github.com/tensorflow/addons
Add initial vscode devcontainer support 2
tensorflow:master ← bhack:vscode_devcontainer
opened
Mar 15, 2020
bhack
+61
-3
Initial support for Vscode devcontainer. See https://github.com/tensorflow/addons/issues/1305
Since May 2020 we have maintained the .devcontainers for TF, Keras (not standalone) and TF Addons at:
github.com
vscode-dev-containers/repository-containers/github.com/tensorflow at main ·... 7
main/repository-containers/github.com/tensorflow
A repository of development container definitions for the VS Code Remote - Containers extension and GitHub Codespaces - vscode-dev-containers/repository-containers/github.com/tensorflow at main · m... |
st207174 | I've tried the new Keras .devcontainer.
I think using it locally with Remote-Containers is not very usable with a local Keras checkout, because by default all the build files and new files are created with root ownership/permissions on the host directory. So when you are back on your host you will find many root-owned files and folders.
This is why we have already discussed with SIG Build adding a standard UID/GID to the Docker image:
github.com/tensorflow/build
(Experimental) Single Multi-Stage Dockerfile
tensorflow:master ← tensorflow:dockerfiles
opened
Jan 27, 2021
angerson
+1002
-0
Here's my progress on a multi-stage Dockerfile (see https://github.com/tensorflow/tensorflow/issues/46062).
I based it on a mixture of Sean's suggested layout, the current official dockerfiles, and the current unofficial RBE dockerfiles. Since I made progress, I thought I'd put it into a PR so I can get some early feedback on the general ideas. Some points and questions:
1. I built and tested these with build.sh.
1. Most notably, the `devel` target can't build TF, and I'm not sure why. It fails pretty late at a weird spot.
1. The idea for backwards compatibility would be: every time TF cuts a branch, the `master` directory gets cloned into a folder with the same name as the new branch, and any direct references would be updated (git clone, etc.).
1. These need some kind of verification test suite.
1. I didn't bother trying to recreate the manual package install in the runtime container.
1. I still don't know how the shims really work, and some things from cleanup_cuda_install.sh are only there because they've always been in our images, but I don't know why.
1. Are all these pyenv installations actually useful?
1. DevInfra would need another layer with more tools for our own CI, but presumably we'd want to keep the official blessed build environment image pretty slim.
1. Setting up pyenv correctly looks like it could be tricky and will be different in runtime and devel.
1. Ideally we'd provide an official suite of bazelrc files to go alongside this for the devel containers.
- Then we could also maybe offer access to a RO cache the same as our real builds use, eventually
You can check it out yourself if you clone this PR and run the commands in build.sh yourself.
@seanpmorgan @perfinion @yongtang @bhack
I also think that requirements.txt is quite heavy, so it slows down the Codespace/container bootstrap, as it requires installing tf-nightly (i.e. the GPU wheel) and its dependencies on every new Codespace/Docker container instance that you launch.
Codespaces are also CPU-only VMs, so you also have the useless overhead of GPU image + GPU wheel downloads before you can start to code anything.
Isn't it better to rely on Keras nightly Docker images, CPU-only for Codespaces or for local CPU-only machines, where these dependencies are already installed?
Just a side note: I don't know if anyone here could get in contact with the Bazel team.
We are suffering a bit on usability in Codespaces/VSCode due to the missing test integration in the official Google VSCode Bazel extension:
github.com/bazelbuild/vscode-bazel
[Feature request] Integrate with Test Explorer
opened
Feb 20, 2020
nicolasnoble
There is a [Test Explorer UI extension](https://marketplace.visualstudio.com/items?itemName=hbenl.vscode-test-explorer) available, which itself provides a [Test Adapter API to discover and run tests](https://github.com/hbenl/vscode-example-test-adapter). An example of such test adapter is the [Catch2 and GoogleTest adapter](https://marketplace.visualstudio.com/items?itemName=matepek.vscode-catch2-test-adapter).
It would be very neat if the vscode-bazel extension was integrating with either Test Explorer directly, or with the GoogleTest adapter, in order to provide the list of tests, and the ability to spawn a [Visual Studio Code Debugger instance](https://code.visualstudio.com/docs/cpp/cpp-debug) through the Test Explorer UI. |
st207175 | SIG Build to add a standard UID/GID to the Docker image…
@Bhack +1 for the standard UID/GID. Is it "user google"?
usable with a Keras local checkout…
Some anecdata… I was able to set up Keras on a local Docker image, build Keras and run a test, following the Keras Contributing.md guides from @Scott_Zhu in 12 min with TF2.6. This compares to 4 hrs plus for full TF. Wow.
Caveats… 2.6GHz MBP with git, VSCode and Docker preinstalled; I started in the GitHub UI by cloning keras-team/keras, then pressing "." in the browser to launch the VSCode web UI; this allows for local install & build of the devcontainer via local VSCode. |
st207176 | yarri-oss:
for the standard UID/GID
I have already a “default” user at:
github.com/tensorflow/tensorflow
Vscode devcontainer
tensorflow:master ← bhack:vscode_devcontainer
opened
Aug 12, 2021
bhack
+116
-0
I know we closed https://github.com/tensorflow/tensorflow/pull/48679 but as Codespaces is GA now please keep this open so that in the meantime we have a PR where the user could test and bootstrap Github Codespaces and we could collect some feedback.
We could give it the name that we want.
I don't think we have many alternative solutions now, as this has been open since 2013:
github.com/moby/moby
Add ability to mount volume as user other than root
opened
Oct 17, 2013
mingfang
area/api
area/kernel
exp/expert
kind/enhancement
area/volumes
Use case: mount a volume from host to container for use by apache as www user.
The problem is currently all mounts are mounted as root inside the container.
For example, this command
docker run -v /tmp:/var/www ubuntu stat -c "%U %G" /var/www
will print "root root"
I need to mount it as user www inside the container.
yarri-oss:
This compares to 4 hrs plus for full TF
It is really different, as TensorFlow is Python, C++ and all the third_party dependencies that we compile from source (e.g. LLVM etc.).
We need to invest time in this to have a similar experience in Codespaces/VSCode remote containers:
github.com/tensorflow/build
Provide Bazel cache for TensorFlow builds 1
opened
May 15, 2020
angerson
Providing a TensorFlow build cache could be very helpful to external developers, and lower the barrier to entry of contributing to TF.
Some ideas for this we've discussed before are:
- Offer [Bazel RBE](https://docs.bazel.build/versions/master/remote-execution.html) resources on behalf of SIG Build. This service is in alpha on GCP.
- Provide a read-only [build cache](https://docs.bazel.build/versions/master/remote-caching.html#google-cloud-storage) in a GCP bucket.
- Provide `devel_cache` Docker images containing a build cache (these could be very large)
- Provide code-and-cache volumes for the docker `devel` images.
See also:
- https://github.com/tensorflow/tensorflow/issues/39560
- https://github.com/tensorflow/tensorflow/issues/4116
- https://github.com/tensorflow/addons/issues/1414 |
st207177 | We are indeed missing the linting tools.
I am working on that.
Will update the contributing guide afterwards for the linting instructions. |
st207178 | The Keras nightly Docker image sounds like a good solution.
I will see if it works or not.
UPDATE: I found a tf-nightly image, we will see if that works.
There is no keras-nightly image.
Any suggestions for the file owner permission issue? |
st207179 | haifeng:
Any suggestions for the file owner permission issue?
As I have already mentioned in the previous post, we don't have many solutions at the Docker upstream level:
github.com/moby/moby
Add ability to mount volume as user other than root 1
opened
Oct 17, 2013
mingfang
area/api
area/kernel
exp/expert
kind/enhancement
area/volumes
Use case: mount a volume from host to container for use by apache as www user.
The problem is currently all mounts are mounted as root inside the container.
For example, this command
docker run -v /tmp:/var/www ubuntu stat -c "%U %G" /var/www
will print "root root"
I need to mount it as user www inside the container.
As you can see in my PR mentioned in the previous posts, I just used the official trick to add a user, as we already had in other official Microsoft devcontainers on GitHub. |
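For reference, a minimal sketch of that kind of non-root-user setup (the username and IDs below are placeholders, and this is not necessarily the exact pattern used in the linked PR): the image gets a non-root user whose UID/GID can be aligned with the host user at build time, so files created on a mounted volume are not root-owned.

```dockerfile
# Illustrative only: create a non-root user at build time.
ARG USERNAME=dev
ARG USER_UID=1000
ARG USER_GID=$USER_UID

RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME

# Run subsequent commands (and the container) as that user.
USER $USERNAME
```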
st207180 | haifeng:
The Keras nightly docker image sounds a good solution.
I will see if it works or not.
Yes, and consider that the size difference between the GPU and CPU images also has a noticeable download impact when you need to quickly open a Codespace or a container just to contribute a PR.
Also, using `postCreateCommand` instead of a final layer in the nightly image will create an overhead on every new container bootstrap, even when the image is already available on the host. |
st207181 | I am not sure if the -e flag also works with a devcontainer.
I used it to map the host user and group IDs into the Docker container so that created files have the same owner (this was for my Docker vim env, not VSCode).
github.com
haifeng-jin/ide/blob/master/keras/run.sh#L3-L4
-e HOST_USER_ID=$(id -u $USER) \
-e HOST_GROUP_ID=$(id -g $USER) \ |
st207182 | We "talked" about this some time ago at:
github.com/tensorflow/build
User in devel container
opened
Jun 18, 2020
bhack
We still tell to users to use `HOST_PERMS` like in https://www.tensorflow.org/install/source#cpu-only_2 but it is used marginally and it doesn't solve permission on your volume mount.
I don't remember if it worked in the devcontainer, but generally it could be a problem without having the real user in the image/container.
If I remember correctly, I had a specific problem with Bazel cache permissions on a persistent volume, also on a local setup (VSCode + Remote-Containers extension). |
st207183 | Check also the official documentation:
code.visualstudio.com
Advanced Container Configuration 2
Advanced setup for using the VS Code Remote - Containers extension |
st207184 | In the meantime I’ve created a small fix:
github.com/keras-team/keras
Exclude default bazel paths for VsCode 2
keras-team:master ← bhack:patch-1
opened
Sep 10, 2021
bhack
+10
-1
This is required for performance and also to avoid having confusing ghost duplicated GitHub repositories in the VSCode git tab.
But you need to tell me what you want to do with the two other discussed issues:
no root-user file permissions on the source mounted volume
use a Keras nightly image instead of manually installing and updating tf-nightly every time in every container
P.S. in the long term we will probably have a native solution 1 for the first point with kernels >= 5.12, but in the meantime I think we could use the standard solution of adding a non-root user. |
st207185 | @Bhack
Sure we can add a non-root user, as long as it works well for both vscode and standalone docker env.
For the nightly image, I don’t think we have a keras-nightly.
We can use tf-nightly, but I am not sure if it works well with the SSH authentication for GitHub when using Codespaces.
Would you help us make these changes? I feel you are more familiar with this setup than me. :) |
st207186 | Sure we can add a non-root user, as long as it works well for both vscode and standalone docker env.
I've updated the PR.
For the nightly image, I don’t think we have a keras-nightly.
I think the tf-nightly TensorFlow image (the GPU version) is a little too large just for Keras.
You could turn the postCreateCommand into a Dockerfile layer, but requirements.txt is outside the Dockerfile context.
If we don't have clear knowledge of the breaking changes on the Keras tf-nightly dependency, we need to install the tf-nightly wheel every day.
At least you could find a solution to install tf-nightly-cpu to lower this daily overhead when we are on Codespaces or on a CPU-only machine, as not all PRs have GPU requirements. |
st207187 | Yes, I think it is a good idea.
If the contributor doesn’t make any GPU related change, they can always uninstall tf-nightly and install tf-nightly-cpu for future updates.
I don’t think we can change anything in requirements.txt. I believe it has to be the GPU version of TF to run some of the tests.
So is the large tf-nightly image itself causing any issue?
It is like either use a large image, or install a large package on startup. |
st207188 | haifeng:
It is like either use a large image, or install a large package on startup.
The difference is that with the postCreateCommand you have this overhead/lag for every container you launch, whereas once the image/layer has been downloaded/cached the first time you don't have this overhead anymore.
This advantage is partially invalidated if we ask contributors to update tf-nightly, or a tf-nightly layer in the image, every single day. |
st207189 | I have built the model and trained it. Then I was trying to set weights on the model from a file saved on my local machine, but I was getting errors. So, what are the things that we should take care of while setting weights on trained models? Thank you |
st207190 | What methods did you use to save the model weights and use them again?
If you only need to reuse the weights, this is how you can save and reload them:
model.save_weights('name_or_path.h5')
new_model.load_weights('name_or_path.h5')
Architecture of “new_model” should be identical to “model”. |
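For completeness, a small self-contained sketch of that flow (layer sizes and the file name are arbitrary, not from the question):

```python
import numpy as np
from tensorflow import keras

def build_model():
    # Both models must come from the same architecture definition
    # for the weight file to be compatible.
    return keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        keras.layers.Dense(1),
    ])

model = build_model()
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(32, 8), np.random.rand(32, 1), epochs=1, verbose=0)
model.save_weights("my_weights.h5")

new_model = build_model()            # identical architecture
new_model.load_weights("my_weights.h5")
```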
st207191 | For tf.keras.applications.MobileNetV3 (large or small), there’s been a slight change to the architecture from TF <=2.5 to TF 2.6. Specifically the GlobalAveragePooling2D layer happens before “Conv_2” in TF2.6, but after “Conv_2” (and it’s non-linear activation) in TF2.5.
These operations don’t commute, so the architectures are slightly different. Both versions point to the same pre-trained weights, so their architectures ought to be the same.
I haven’t checked if this degrades the performance of the pretrained models.
My interest in this is mostly that it’s a breaking change to the API: MobileNetV3Large(include_top=False) will output a tensor of shape [?, 1, 1, 1280] starting with TF2.6 compared to a tensor of shape [?, 7, 7, 1280] with TF <=2.5 (assuming an input of shape [?, 224, 224, 3]). |
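A quick way to see the difference is to print the output shape of the headless model; the shapes in the comment are the ones reported above, not something re-verified here:

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV3Large(
    include_top=False, input_shape=(224, 224, 3), weights=None)

print(base.output_shape)
# Reportedly (None, 7, 7, 1280) on TF <= 2.5 and (None, 1, 1, 1280) on TF 2.6.
```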
st207192 | This is a bug fix.
The two ops don’t quite commute but they commute well enough that both versions of the model do well with the weights.
You are correct about the change in the feature vector shape. The new version is the one that is “correct”. |
st207193 | For context, here’s the original change from GitHub: https://github.com/tensorflow/tensorflow/pull/48542 8
MobileNetV3Large(include_top=False) will output a tensor of shape [?, 1, 1, 1280] starting with TF2.6 compared to a tensor of shape [?, 7, 7, 1280] with TF <=2.5 (assuming an input of shape [?, 224, 224, 3]).
This is indeed a problem. I think we should change the location of if include_top: to return the feature map before pooling. |
st207194 | For my use-case, I would like to preserve spatial information when setting include_top=False, but it’s also really not that big a deal: I can grab the layer immediately before pooling.
There is something a bit off with the TF2.6 version where there’s the argument pooling, which could be 'avg' or 'max', but it doesn’t do anything because average pooling has already happened. |
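A hedged sketch of that workaround, assuming the built model contains a GlobalAveragePooling2D layer (the lookup is by layer type rather than by name, since layer names may differ between versions):

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV3Large(
    include_top=False, input_shape=(224, 224, 3), weights=None)

# Take the input of the pooling layer: the last tensor that still has
# spatial resolution.
pool = next(layer for layer in base.layers
            if isinstance(layer, tf.keras.layers.GlobalAveragePooling2D))
spatial_features = tf.keras.Model(inputs=base.input, outputs=pool.input)

print(spatial_features.output_shape)  # expected to keep the 7x7 spatial grid
```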
st207195 | You have a point there. That looks like an unintended change of behavior. Could you please file a bug with a quick repro in Colab + reference to the PR that caused this? Or even better, suggest fix as a PR?
https://github.com/keras-team/keras 1 |
st207196 | Hello all,
I want to do Image Data Augmentation for an Semantic Segmentation task. Therefore, I want to use the ImageDataGenerator from Keras, together with the flow() method, because my data is in Numpy arrays and does not need to be loaded from a folder. Since this is a segmentation task, I need to augment the image and the corresponding mask. I do this by following the last example in the API reference (ImageDataGenerator 4 ) and accordingly using two different generators for image and mask with the same data_gen_args. I only want to rotate, flip and move my images, so I want to use the arguments rotation_range, width_shift_range, height_shift_range,horizontal_flip, vertical_flip.
Accordingly, I want to get masks that are 8-bit images of shape (128,128,1), like the input mask, and that contain only the classes of the input mask (all integer values). And this is exactly where the problem lies: the masks I get are 32-bit floats, which do not contain integer values at all. Even when specifying the argument dtype="uint8", the code always returns only float32 masks. I have not found an example that fixes this problem. Is there a trick that can be used?
Another problem in connection with the ImageDataGenerator is sample_weight. As my dataset is quite unbalanced, I would like to use them. In a segmentation task, I think the sample_weight parameter in the flow() method would have to correspond to another mask containing the respective class_weight for the class of each pixel in the original mask. If I do it this way, I get sample_weight back as well, but it seems to me that these weights, similar to the mask, are not correct either, as my UNet does not train well with them anymore. In the meantime I use a third ImageDataGenerator only for the sample_weight, so the training works better, but I hardly think this is the right approach. However, I have not found an example for the correct use. Therefore I hope that the community can help me with their experience.
Thank you.
Kind regards,
Jonas |
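For reference, a sketch of the paired-generator setup described above; `images` and `masks` stand for the existing NumPy arrays, the `interpolation_order=0` argument (if your Keras version exposes it) keeps the mask transforms nearest-neighbour, and the explicit cast at the end forces the labels back to integers, since flow() yields float32 by default:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

data_gen_args = dict(
    rotation_range=90,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
)
seed = 1

image_datagen = ImageDataGenerator(**data_gen_args)
# Nearest-neighbour transforms avoid interpolated, non-integer class values.
mask_datagen = ImageDataGenerator(**data_gen_args, interpolation_order=0)

image_gen = image_datagen.flow(images, batch_size=8, seed=seed)
mask_gen = mask_datagen.flow(masks, batch_size=8, seed=seed)

img_batch = next(image_gen)
mask_batch = np.rint(next(mask_gen)).astype("uint8")  # back to integer labels
```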
st207197 | Hi Jonas
ImageDataGenerator has been superseded by Keras Preprocessing Layers 5 for data preprocessing, to be used together with the tf.data 2 API. However, at this time, you cannot yet do joint preprocessing of the image and mask using Keras Preprocessing Layers so I cannot recommend that route yet.
In my experience, the following data augmentation frameworks support image segmentation use cases directly:
Albumentations 7
ImgAug 7
Your best way for now is to use one of these libraries and then format your dataset as a Python generator (or tf.data.Dataset through tf.data.Dataset.from_generator 4)
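To make that concrete, here is a hedged sketch using Albumentations with a generator-backed tf.data.Dataset; array shapes, dtypes and the transform choices are illustrative, and `images`/`masks` stand for your existing NumPy arrays. Albumentations applies the same geometric transform to the image and the mask when both are passed in one call (masks are interpolated with nearest-neighbour by default):

```python
import albumentations as A
import numpy as np
import tensorflow as tf

transform = A.Compose([
    A.Rotate(limit=90),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.0, rotate_limit=0),
])

def augmented_pairs():
    # images: (N, 128, 128, 3), masks: (N, 128, 128, 1) uint8 class labels
    for img, mask in zip(images, masks):
        out = transform(image=img, mask=mask)
        yield out["image"], out["mask"]

dataset = tf.data.Dataset.from_generator(
    augmented_pairs,
    output_signature=(
        tf.TensorSpec(shape=(128, 128, 3), dtype=tf.uint8),
        tf.TensorSpec(shape=(128, 128, 1), dtype=tf.uint8),
    ),
).batch(8)
```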
The limitation of these approaches is that they do the data transformations in Python rather than with TF operations, and therefore they cannot be saved to a SavedModel and deployed in production.
Until we have a segmentation-compatible, Keras Preprocessing Layer implemented with TF ops, I advise you to special-case inference in your model setup. You can use Python libraries for data preprocessing for training and evaluation, but implement the minimal necessary inference-time data transformations (JPEG decompression, size, scale, …) using TF functions and Keras Preprocessing Layers. For example tf.io.decode_image and tf.keras.layers.Resizing. |
st207198 | martin_gorner_tf:
However, at this time, you cannot yet do joint preprocessing of the image and mask using Keras Preprocessing Layers so I cannot recommend that route yet.
Do we have a small example at:
github.com
keras-team/keras/blob/master/keras/preprocessing/image.py#L745-L776 41
Example of transforming images and masks together.
```python
# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=90,
width_shift_range=0.1,
height_shift_range=0.1,
zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)
# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
image_datagen.fit(images, augment=True, seed=seed)
mask_datagen.fit(masks, augment=True, seed=seed)
image_generator = image_datagen.flow_from_directory(
'data/images',
class_mode=None,
seed=seed)
st207199 | SIG Build’s next meeting will be today, Tuesday, September 7, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 6, and feel free to suggest new agenda items.
I hope those of you in the USA had a relaxing Labor Day! |