id | text
---|---
st206900 | TensorFlow’s Special Interest Groups (SIGs) support community collaboration on particular project focuses.
Our SIGs include:
SIG I/O
SIG JVM
SIG MLIR
SIG TFjs
SIG Recommenders
SIG Models
SIG Build
SIG Addons
SIG TensorBoard
SIG Micro
SIG Keras
SIG Swift
SIG Rust |
st206901 | I think we have 18 SIGs now: community/sigs at master · tensorflow/community · GitHub |
st206902 | Our next meeting is today, Tuesday, February 1, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes.
I have another update on Docker, and we’ll look at the results of the when-is-a-good-time-for-the-meetings form, which you can still fill out: SIG Build Monthly Meeting Time 2022 |
st206903 | I am interested in applying loss-function weights to a multi-target model using the class_weight parameter in .fit, but it appears it can no longer be used past version 2.1. In 2.1, it looks like you could pass a dictionary mapping the classes to their corresponding loss weights. Does anyone know why this was removed, or is it a bug? Are there any workarounds to apply this kind of weighting? |
st206904 | There is quite a long story in:
github.com/tensorflow/tensorflow: "tf.keras cannot weight classes when using multiple outputs" (opened Jul 16, 2020 by maxpv; labels: stat:awaiting tensorflower, type:bug, comp:keras, TF 2.5)
This post is a mirror of https://github.com/keras-team/keras/issues/11735, showing the need to handle class weights for multiple outputs.
Version 2.2.0 used.
------
This is a minimal source code, by @GalAvineri, to reproduce the issue (please comment/uncomment the class weight line):
```python
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import Input, Dense
from tensorflow.python.data import Dataset
import tensorflow as tf
import numpy as np


def preprocess_sample(features, labels):
    label1, label2 = labels
    label1 = tf.one_hot(label1, 2)
    label2 = tf.one_hot(label2, 3)
    return features, (label1, label2)


batch_size = 32
num_samples = 1000
num_features = 10

features = np.random.rand(num_samples, num_features)
labels1 = np.random.randint(2, size=num_samples)
labels2 = np.random.randint(3, size=num_samples)

train = Dataset.from_tensor_slices((features, (labels1, labels2))).map(preprocess_sample).batch(batch_size).repeat()

# Model
inputs = Input(shape=(num_features, ))
output1 = Dense(2, activation='softmax', name='output1')(inputs)
output2 = Dense(3, activation='softmax', name='output2')(inputs)
model = Model(inputs, [output1, output2])
model.compile(loss='categorical_crossentropy', optimizer='adam')

class_weights = {'output1': {0: 1, 1: 10}, 'output2': {0: 5, 1: 1, 2: 10}}
model.fit(train, epochs=10, steps_per_epoch=num_samples // batch_size,
          # class_weight=class_weights
          )
```
Uncommenting yields this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-38-d137ff6fb3f9> in <module>
33 class_weights = {'output1': {0: 1, 1: 10}, 'output2': {0: 5, 1: 1, 2: 10}}
34 model.fit(train, epochs=10, steps_per_epoch=num_samples // batch_size,
---> 35 class_weight=class_weights
36 )
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
813 workers=workers,
814 use_multiprocessing=use_multiprocessing,
--> 815 model=self)
816
817 # Container that configures and calls `tf.keras.Callback`s.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weight, batch_size, steps_per_epoch, initial_epoch, epochs, shuffle, class_weight, max_queue_size, workers, use_multiprocessing, model)
1115 dataset = self._adapter.get_dataset()
1116 if class_weight:
-> 1117 dataset = dataset.map(_make_class_weight_map_fn(class_weight))
1118 self._inferred_steps = self._infer_steps(steps_per_epoch, dataset)
1119 self._dataset = strategy.experimental_distribute_dataset(dataset)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py in _make_class_weight_map_fn(class_weight)
1233 "Expected `class_weight` to be a dict with keys from 0 to one less "
1234 "than the number of classes, found {}").format(class_weight)
-> 1235 raise ValueError(error_msg)
1236
1237 class_weight_tensor = ops.convert_to_tensor_v2(
ValueError: Expected `class_weight` to be a dict with keys from 0 to one less than the number of classes, found {'output1': {0: 1, 1: 10}, 'output2': {0: 5, 1: 1, 2: 10}}
``` |
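One workaround that is sometimes used for this limitation (an illustrative sketch, not an official Keras API for multi-output class_weight): fold the per-class weights into a custom loss for each output head. The weight values below are simply the ones from the example above, and `model` refers to the two-output model defined there.
```python
import tensorflow as tf

def weighted_cce(class_weights):
    """Categorical cross-entropy scaled by the weight of each sample's true class."""
    w = tf.constant(class_weights, dtype=tf.float32)

    def loss(y_true, y_pred):
        # y_true is one-hot, so this picks the weight of the true class per sample.
        per_sample_weight = tf.reduce_sum(y_true * w, axis=-1)
        cce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        return cce * per_sample_weight

    return loss

# One weighted loss per output replaces the unsupported multi-output class_weight dict.
model.compile(
    optimizer='adam',
    loss={'output1': weighted_cce([1.0, 10.0]),
          'output2': weighted_cce([5.0, 1.0, 10.0])})
```
fit() can then be called exactly as before, without the class_weight argument.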
st206905 | I was writing some simple models and I did not want the model loss to be the sum of all L2 regularization terms; I wanted it to be the mean instead. My reasoning is that having 3 L2 losses has a large regularization impact, and taking the mean reduces that impact. Most courses take the mean as well.
Any idea on how to approach this in a way that generalizes well?
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Dense(100, input_shape=(8,), kernel_regularizer=tf.keras.regularizers.L2(0.01)))
model.add(Dense(80, kernel_regularizer=tf.keras.regularizers.L2(0.01)))
model.add(Dense(30, kernel_regularizer=tf.keras.regularizers.L2(0.01)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
print(model.losses)
[<tf.Tensor: shape=(), dtype=float32, numpy=0.15066518>, <tf.Tensor: shape=(), dtype=float32, numpy=0.883246>, <tf.Tensor: shape=(), dtype=float32, numpy=0.4300898>]
I would want the loss to add (0.15066518 + 0.883246 + 0.4300898)/3 instead of (0.15066518 + 0.883246 + 0.4300898) |
st206906 | Do you just want to use Reduction.NONE, like in
github.com
keras-team/keras/blob/master/keras/losses.py#L546-L547
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,
... reduction=tf.keras.losses.Reduction.NONE)
And then apply your custom operation? |
st206907 | I want the binary cross-entropy to be added to the loss function.
I essentially want my loss function to be
L = BCE + Avg(Regularization)
The current implementation in Keras is L = BCE + Sum(Regularization) |
st206908 | For the regularization loss penalization, the sum is embedded in the code when you compile the loss:
github.com
keras-team/keras/blob/master/keras/engine/compile_utils.py#L231
if (loss_obj.reduction == losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE or
    loss_obj.reduction == losses_utils.ReductionV2.AUTO):
  loss_value = losses_utils.scale_loss_for_distribution(loss_value)
loss_values.append(loss_value)
loss_metric_values.append(loss_metric_value)

if regularization_losses:
  regularization_losses = losses_utils.cast_losses_to_common_dtype(
      regularization_losses)
  reg_loss = tf.add_n(regularization_losses)
  loss_metric_values.append(reg_loss)
  loss_values.append(losses_utils.scale_loss_for_distribution(reg_loss))

if loss_values:
  loss_metric_values = losses_utils.cast_losses_to_common_dtype(
      loss_metric_values)
  total_loss_metric_value = tf.add_n(loss_metric_values)
  self._loss_metric.update_state(
      total_loss_metric_value, sample_weight=batch_dim)
As a workaround, you could probably create a custom regularizer that you scale yourself (if you know the total number of regularizers), or you can control your loss in more detail with a custom training loop:
TensorFlow
Writing a training loop from scratch | TensorFlow Core |
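To illustrate the first suggestion above (a regularizer you scale yourself, assuming you know up front how many layers are regularized), here is a minimal sketch that reuses the model from the question:
```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

NUM_REGULARIZED_LAYERS = 3  # assumed known for this model

def scaled_l2(l2=0.01):
    # Dividing each penalty by the number of regularized layers makes the
    # summed penalty equal to the mean of the individual L2 losses.
    return tf.keras.regularizers.L2(l2 / NUM_REGULARIZED_LAYERS)

model = Sequential([
    Dense(100, input_shape=(8,), kernel_regularizer=scaled_l2()),
    Dense(80, kernel_regularizer=scaled_l2()),
    Dense(30, kernel_regularizer=scaled_l2()),
    Dense(1),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```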
st206909 | Thanks, I was hoping there would be a simple way. Thanks for letting me know
Probably the best way would be the training step as I am not sure how many layers require regularization. I want the solution to be generic and not specific |
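For the more generic route mentioned above (a custom training step, so the number of regularized layers does not need to be known), here is a minimal sketch following the standard "customize what happens in fit" pattern; this is not a built-in Keras option:
```python
import tensorflow as tf
from tensorflow import keras


class MeanRegModel(keras.Model):
    """Total loss = data loss + mean of the regularization losses."""

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            data_loss = self.compiled_loss(y, y_pred)
            # self.losses holds one tensor per regularized layer, however many there are.
            reg_loss = tf.add_n(self.losses) / len(self.losses) if self.losses else 0.0
            total_loss = data_loss + reg_loss
        grads = tape.gradient(total_loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```
The same stack of Dense layers can be rebuilt with the functional API and wrapped as model = MeanRegModel(inputs, outputs), then compiled and fit as usual.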
st206910 | Hi everyone,
Let’s join to our first community call of the year. Now that TF Java 0.4.0 is available, it’s time to plan what will come next in our next release. I’m attaching here the current agenda but please feel free to add your own suggestions: [PUBLIC] TensorFlow SIG JVM Notes - Google Docs 1
See you tomorrow!
Karl |
st206911 | I got this function that builds a DeeplabV3+ model with a ResNet50 backend for semantic segmentation
def DeepLabV3_ResNet50(size, classes):
    input = keras.Input(shape=(size, size, 3))
    resnet50 = keras.applications.ResNet50(weights="imagenet", include_top=False, input_tensor = input)
    x = resnet50.get_layer("conv4_block6_2_relu").output
    x = DSP_pooling(x)
    a = layers.UpSampling2D(size=(size // 4 // x.shape[1], size // 4 // x.shape[2]),interpolation="bilinear",)(x)
    b = resnet50.get_layer("conv2_block3_2_relu").output
    b = block(b, filters = 48, kernel = 1)
    x = layers.Concatenate(axis=-1)([a, b])
    x = block(x)
    x = block(x)
    x = layers.UpSampling2D(size=(size // x.shape[1], size // x.shape[2]),interpolation="bilinear",)(x)
    output = layers.Conv2D(classes, kernel_size=(1, 1), padding="same")(x)
    return keras.Model(inputs = input, outputs = output)

model = DeepLabV3_ResNet50(size = image_size, classes = labels)
model.summary()
To improve my validation accuracy, I figured that switching the backend from ResNet50 to ResNet101 might be a good try. Changing resnet50.get_layer() to resnet101.get_layer() is not enough. How do I know which convolutional blocks I should pick for my resnet101.get_layer() function? |
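Not an authoritative recipe, but one practical way to find the matching layers: the keras.applications ResNets share the convN_blockM_* naming scheme, only with more blocks per stage in ResNet101 (23 blocks in the conv4 stage instead of 6), so you can list the layer names and pick the analogous activations:
```python
from tensorflow import keras

resnet101 = keras.applications.ResNet101(weights="imagenet", include_top=False,
                                         input_shape=(512, 512, 3))

# Inspect candidate feature layers for the low- and high-level skip connections.
conv2_names = [l.name for l in resnet101.layers if l.name.startswith("conv2_block")]
conv4_names = [l.name for l in resnet101.layers if l.name.startswith("conv4_block")]
print(conv2_names[-3:])   # ends with the last conv2 block (block 3, as in ResNet50)
print(conv4_names[-3:])   # ends with the last conv4 block (block 23 in ResNet101)

# With that naming scheme, the ResNet101 analogues of the two layers above would be
# "conv2_block3_2_relu" (unchanged) and "conv4_block23_2_relu" (23 instead of 6).
```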
st206912 | I am trying to get a tfjs-tflite demo from here
to work on codepen here.
The goal is to cartoonise the image on the top row and generate its corresponding cartoon via a GAN on the second row.
For some reason, it does not seem to produce a result. However, when I run it with yarn it works fine.
Any clue as to why this is would be greatly appreciated (I am a JS newbie). |
st206913 | So first of all you are using:
.resizeBilinear(tf.browser.fromPixels(document.querySelector("img4")[0]), [
224,
224
], true)
Here you are using document.querySelector when you should be using document.getElementById as img4 is an id. Currently you are not sending any pixels to tf.browser.fromPixels.
Secondly, once you fix that, the images you are using are not hosted correctly for CORS Access. You will then see you will get a canvas tainted issue which you need to fix by using CORS on the server/CDN that hosts the images to allow access from other domains. Glitch.com offers a CDN for free that actually sets the headers correctly and then on your tag you just need to set the crossorigin attribute to force it to use that.
See my glitch demo here for object detection for the index.html page that uses an image:
glitch.com
Glitch Code Editor ・゚✧
Simple, powerful, free tools to create and use millions of apps.
Then it should have a chance to work!
PS for more information on CORS see my reply here:
Use custom model with tfjs-tflite CORS issue TF.js
The tfjs-tflite library allows you to run TFLite models on the web.
Example:
const tfliteModel = await tflite.loadTFLiteModel('url/to/your/model.tflite');
or
const objectDetector = await tflite.ObjectDetector.create(
"https://storage.googleapis.com/tfhub-lite-models/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2.tflite"
);
I’m currently working on a simple Object Detection example, which works fine for the example models that are stored on Google Cloud, but I couldn’t get it to wor… |
st206914 | Hey there everyone,
I, along with my team were having some issues in getting consistent results from a MobileNet V3 based model (both .h5 and .tflite models) on different machines. The model architecture and the compilation details:
base_model = tf.keras.applications.MobileNetV3Small(input_shape=(256, 256, 3),
                                                    include_top=False,
                                                    weights='imagenet',
                                                    minimalistic=True)
base_model.trainable = True

model = tf.keras.Sequential([
    base_model,
    #pretrained_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, dtype=tf.float32)
])

model.compile(
    loss = tf.keras.losses.MeanAbsoluteError(),
    optimizer = tf.keras.optimizers.Adam(),
    metrics = [tf.keras.metrics.MeanAbsoluteError(), tf.keras.metrics.MeanSquaredError()],
    steps_per_execution=64
)
It is an image regression model and we have noticed that the outputs vary by about ~0.2 when the ground truth labels’ range was [0, 3] which makes it difficult to determine thresholds. For additional context, we tested them on the Intel Mac, Mac M1 as well as windows machines with the exact same TF, Keras, numpy and opencv versions and different results were obtained on every system.
Any idea why it might be happening and how it can be dealt with, would mean a lot.
Thanks! |
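Not a guaranteed fix, but two things commonly checked in this situation are (a) seeding every source of randomness and enabling deterministic ops, and (b) making sure the preprocessing path (OpenCV vs. tf.image decoding, resize interpolation, BGR vs. RGB, normalization) produces byte-identical inputs on every machine, since small preprocessing differences easily explain output shifts of ~0.2. A minimal sketch of the seeding part (API availability noted in the comments):
```python
import random
import numpy as np
import tensorflow as tf

SEED = 42
random.seed(SEED)
np.random.seed(SEED)
tf.keras.utils.set_random_seed(SEED)             # TF >= 2.7: seeds Python, NumPy and TF together
tf.config.experimental.enable_op_determinism()   # TF >= 2.8: deterministic (but slower) kernels
```
Even with this, exact bit-wise equality across different hardware (Intel vs. M1, CPU vs. GPU) is generally not guaranteed, so comparing the preprocessed input tensors across machines is a good first diagnostic step.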
st206915 | This is a question related to projects that would like to depend on mlir-hlo and set it up as a submodule (while using a cmake setup).
https://github.com/tensorflow/mlir-hlo
As far as I could tell, mlir-hlo doesn’t include something equivalent to MLIRHLOConfig.cmake.in that would allow a user project to find it and see all its targets. As a result, one approach would be to set it up with proper target_link options to point to the submodule’s build dir. This creates issues with transitive dependencies and rebuilding after updates. I haven’t tried the approaches of cmake external projects ExternalProject — CMake 3.22.1 Documentation or simply an add_subdirectory.
Is there a recommended setup for projects that would like to depend on mlir-hlo via a submodule? Would a contribution of Config.cmake be acceptable in the repo or does one already exist somewhere? Thanks! |
st206916 | Here it is:
github.com/polymage-labs/mlir-hlo: "Add cmake configuration for external projects" (polymage-labs:polymage ← polymage-labs:uday/add_cmake_for_mhlo, opened Jan 24, 2022 by bondhugula, +97 -0)
Add cmake configuration for mlir-hlo so that external projects that want to depend on it can import its cmake targets (via -DMHLO_DIR=... for example).
I can raise a PR on the official tensorflow repo. |
st206917 | When using the TensorFlow profiler for memory footprint analysis, the profiler keeps up to 1000 snapshots. This default value prevents us from doing more detailed memory analysis. I would like to know why this default is set to 1000.
The relevant code is as follows:
tensorflow/tensorflow/core/profiler/convert/xplane_to_memory_profile.h
Line 30 in f0df570
MemoryProfile ConvertXPlaneToMemoryProfile(const XPlane& host_plane,
tensorflow/tensorflow/core/profiler/convert/xplane_to_memory_profile.cc
Line 550 in f0df570
MemoryProfile memory_profile = ConvertXPlaneToMemoryProfile(*host_plane); |
st206918 | I'm trying to use a generator-based dataset:
def gen():
    return zip(samples, feature)

ds = tf.data.Dataset.from_generator(gen, output_types=tf.dtypes.float32)
model.fit(ds,
          epochs=150,
          #callbacks=[tensorboard_callback]
          )
model.save("/sda/anyone/imagenet-in-np/transformer")
where samples is a numpy.ndarray (2D array)
and feature is a numpy.ndarray (4D array).
And I get the following error:
TypeError: Target data is missing. Your model has `loss`: BinaryCrossentropy, and therefore expects target data to be passed in `fit()`.
which is strange, as the target data is actually present.
Whenever I separate the dataset into two:
def gen():
    return samples

ds = tf.data.Dataset.from_generator(gen, output_types=tf.dtypes.float32)

def gen2():
    return feature

ds2 = tf.data.Dataset.from_generator(gen2, output_types=tf.dtypes.float32)
model.fit(ds, ds2,
          epochs=150,
          #callbacks=[tensorboard_callback]
          )
model.save("/sda/anyone/imagenet-in-np/transformer")
I get:
raise ValueError("`y` argument is not supported when using "
ValueError: `y` argument is not supported when using dataset as input.
Which means that TF doesn’t accept this split.
I tried
def gen():
    for element in zip(samples, feature):
        yield element

ds = tf.data.Dataset.from_generator(gen(), output_types=tf.dtypes.float32)
I get
TypeError: generator must be a Python callable.
So I tried to swap it to:
def gen():
    for element in zip(samples, feature):
        yield element

ds = tf.data.Dataset.from_generator(gen, output_types=tf.dtypes.float32)
I get again:
TypeError: Target data is missing. Your model has `loss`: BinaryCrossentropy, and therefore expects target data to be passed in `fit()`.
python-BaseException
So how should I use the generator API? |
st206919 | I actually got the same error and my mistake was that the model was expecting the input and labels in a different format while I was passing them in an incorrect format.
Additionally can you provide a Minimum working example (MWE)? It would be easier to find out the problem that way. |
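For what it's worth, a minimal sketch of the usual pattern for this case (assuming samples are the model inputs and feature the targets, both NumPy arrays): yield (x, y) pairs from the generator and describe them with output_signature, so tf.data, and therefore fit(), knows which element is the target.
```python
import tensorflow as tf

def gen():
    for x, y in zip(samples, feature):
        yield x, y

ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=samples.shape[1:], dtype=tf.float32),  # inputs
        tf.TensorSpec(shape=feature.shape[1:], dtype=tf.float32),  # targets
    ),
).batch(32)

model.fit(ds, epochs=150)
```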
st206920 | Hi,
I'm trying to make predictions with a Keras model but I face an issue when I use fit. My goal is to predict the next 30 minutes of the BNB/USDT pair.
The error I get is
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Incompatible shapes: [32,30] vs. [32,30,1]
[[{{node loss/dense_loss/SquaredDifference}}]]
[[training/Adam/gradients/gradients/lstm_1/while/ReadVariableOp/Enter_grad/b_acc_3/_125]]
(1) Invalid argument: Incompatible shapes: [32,30] vs. [32,30,1]
[[{{node loss/dense_loss/SquaredDifference}}]]
Here’s the code
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from binance.client import Client
import csv
import tensorflow as tf
pd.options.mode.chained_assignment = None
tf.random.set_random_seed(0)
api = {'key':'...','secret':'...'}
# client = Client(api['key'], api['secret'])
# length_data = "2 day"
# klines = client.get_historical_klines("BNBUSDT", Client.KLINE_INTERVAL_1MINUTE, length_data + " UTC")
# with open('./bnbusdt_price_train_test.csv', 'w') as f:
# writer = csv.writer(f)
# writer.writerow(['timestamp','open','max','min','close'])
# for sub in klines:
# writer.writerow([sub[0], sub[1], sub[2], sub[3], sub[4]])
df = pd.read_csv('./bnbusdt_price_train_test.csv')
df['Date'] = pd.to_datetime(df.timestamp, unit='ms')
df.sort_values('Date')
y = df['close'].fillna(method='ffill')
y = y.values.reshape(-1, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(y)
y = scaler.transform(y)
n_lookback = 60
n_forecast = 30
X = []
Y = []
for i in range(n_lookback, len(y) - n_forecast + 1):
    X.append(y[i - n_lookback: i])
    Y.append(y[i: i + n_forecast])
X = np.array(X)
Y = np.array(Y)
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(n_lookback, 1)))
model.add(LSTM(units=50))
model.add(Dense(n_forecast))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X, Y, epochs=1, batch_size=32, verbose=2)
The CSV I load up contains :
timestamp (ms)
open price
max price
min price
close price
I tried to change my 3D inputs to 2D but got another error on model.add.
Do you have any idea? |
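A guess based on the reported shapes ([32,30] vs. [32,30,1]): Y is built from y, which was reshaped to (-1, 1), so Y ends up with shape (num_windows, 30, 1) while Dense(n_forecast) outputs (batch, 30). Dropping the trailing dimension of Y should make the shapes agree, for example:
```python
import numpy as np

X = np.array(X)                           # (num_windows, 60, 1)
Y = np.array(Y).reshape(-1, n_forecast)   # (num_windows, 30), matching Dense(n_forecast)

model.fit(X, Y, epochs=1, batch_size=32, verbose=2)
```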
st206921 | Hello,
I notice at least about half a dozen tensorflow transforms passes (like tensor-list-ops-decomposition) hardcoded to work only on a single function named “main”.
(tensor_list_ops_decomposition.cc on github)
void TensorListOpsDecompositionPass::runOnOperation() {
  auto module = getOperation();
  auto main = module.lookupSymbol<FuncOp>("main");
  if (!main) return;
  if (failed(DecomposeTensorListOps(&main.front(), module))) {
    signalPassFailure();
  }
}
Is there an assumption that the canonical form is one where the “entry function” is named “main”? This isn’t true for an import/translation from a tf.function where the entry function has the tf.function’s name with a suffix/prefix. Should this check instead be for a function with the attribute “tf.entry_function” and should this be patched like this or better with a common utility to update all passes with such checks?
-  auto main = module.lookupSymbol<FuncOp>("main");
-  if (!main) return;
-  if (failed(DecomposeTensorListOps(&main.front(), module))) {
-    signalPassFailure();
+  for (auto func_op : module.getOps<FuncOp>()) {
+    // Just run on the entry function.
+    if (!func_op->getAttr("tf.entry_function") && func_op.getName() != "main")
+      continue;
+    if (failed(DecomposeTensorListOps(&func_op.front(), module))) {
+      signalPassFailure();
+    }
+    break;
   }
Related to this are also several instances of “main” and “tf.entry_function” hardcoded in “transforms/” and “translate/”. |
st206922 | We likely should provide a helper for this instead of a raw loop.
Also it isn’t clear to me what a public function (from an MLIR symbol visibility point of view) that isn’t an entry function would mean? And if so why not just filter on public ones? |
st206923 | It’s possible that passes generated additional functions but missed marking them as private. If the canonical form is one where there is a single entry function marked in a defined way, these passes could be called on that one. If not, they could be called on all those that are visible. Another alternative which is least surprising I feel is to call it on everything. The current behavior isn’t really correct or in line with any of the standard forms being used. |
st206924 | The private/public and entry function concept is a bit confusing here. The function that is called main is the function corresponding to the Graph (so during conversion one has 1 Graph with a function library with multiple functions). For an execution of a model (which these passes were developed for and run) one therefore has this situation. The place where we have the clearest indication of public or private is during SavedModel conversion. But even there in the workflows supported (TFlite converter and TFRT serving conversion) we have single entry point AFAIK. Could you show the python code corresponding to the tf.function example? |
st206925 | If you just take the simplest example like this one,
@tf.function(
    input_signature=(
        tf.TensorSpec(shape=(M, K), dtype=tf.float32),
        tf.TensorSpec(shape=(K, N), dtype=tf.float32),
    )
)
def matmul(lhs, rhs):
    return tf.matmul(lhs, rhs)
and use tensorflow.python.pywrap_mlir to do a:
import_function(
func.get_concrete_function(), pass_pipeline="", show_debug_info=False)
the MLIR you get won’t have a “main” but just something like matmul in its name. Most of the tensorflow MLIR transforms would just end up being no-ops on them. |
st206926 | That is a experimental/testing API that is not along any execution or conversion paths. It is something that shows a part of the import (this shows how a function in the flib would be imported) and can be used for “visualization”. There are multiple ways to indicate entry, the convention we follow is to call it main, in particular as during execution we have a nameless Graph at the point where we import during execution. |
st206927 | Oh, can I know what the recommended API method to import the decorated tf.function into MLIR then would be? I think your note then confirms that the canonical and expected form is one where the entry function is named “main” – that was my original question. |
st206928 | uday:
and use tensorflow.python.pywrap_mlir to do a:
import_function(
func.get_concrete_function(), pass_pipeline="", show_debug_info=False)
the MLIR you get won’t have a “main” but just something like matmul in its name. Most of the tensorflow MLIR transforms would just end up being no-ops on them.
We could change this import to generate a main function instead?
But in general it’s not clear to me why we don’t use “public” for most passes / why do we filter on “main”? |
st206929 | Continuing a thread started on Gitter:
Hello, I want to run a Tensorflow model I found with a Java app, but I am having difficulty with getting the input just right. Below you can see the result from the layer analysis. I found a few examples for one-dimensional input (mnist) and I got another model working that required integers, but creating Tensor with dimensions {batch, height, width, channels} is a difficult task. I would like some help. The input is just a JPG, basically BufferedImage as I want to keep my options open.
Often TF Java users are looking for a snippet showing how this can be done easily, I’m sharing one here written in Kotlin (warning, I did not test it out after modifying it, but basically the logic should be good):
fun preprocess(sourceImages: List<BufferedImage>, imageHeight: Int, imageWidth: Int, imageChannels: Int): TFloat32 {
    val imageShape = Shape.of(sourceImages.size.toLong(), imageHeight.toLong(), imageWidth.toLong(), imageChannels.toLong())
    return TFloat32.tensorOf(imageShape) { tensor ->
        // Copy all images to the tensor
        sourceImages.forEachIndexed { imageIdx, sourceImage ->
            // Scale the image to required dimensions if needed
            val image = if (sourceImage.width != imageWidth || sourceImage.height != imageHeight) {
                val scaledImage = BufferedImage(imageWidth, imageHeight, BufferedImage.TYPE_3BYTE_BGR)
                scaledImage.createGraphics().apply {
                    setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR)
                    drawImage(sourceImage, 0, 0, imageWidth, imageHeight, null)
                    dispose()
                }
                scaledImage
            } else {
                sourceImage
            }
            // Converts the image to floats and normalize by subtracting mean values
            var i = 0
            for (h in 0L until imageHeight) {
                for (w in 0L until imageWidth) {
                    // "caffe"-style normalization
                    tensor.setFloat(image.data.dataBuffer.getElemFloat(i++) - 103.939f, imageIdx.toLong(), h, w, 0)
                    tensor.setFloat(image.data.dataBuffer.getElemFloat(i++) - 116.779f, imageIdx.toLong(), h, w, 1)
                    tensor.setFloat(image.data.dataBuffer.getElemFloat(i++) - 123.68f, imageIdx.toLong(), h, w, 2)
                }
            }
        }
    }
}
So the idea is simply to resample your image if it is not already of the right size and to normalize its pixel values when feeding the tensor. The "caffe"-style normalization is the one used by default by Keras in Python, so the mean values to subtract were picked directly from the Keras sources.
UPDATED: here's the Java version
TFloat32 preprocess(List<BufferedImage> sourceImages, int imageHeight, int imageWidth, int imageChannels) {
    Shape imageShape = Shape.of(sourceImages.size(), imageHeight, imageWidth, imageChannels);
    return TFloat32.tensorOf(imageShape, tensor -> {
        // Copy all images to the tensor
        int imageIdx = 0;
        for (BufferedImage sourceImage : sourceImages) {
            // Scale the image to required dimensions if needed
            BufferedImage image;
            if (sourceImage.getWidth() != imageWidth || sourceImage.getHeight() != imageHeight) {
                image = new BufferedImage(imageWidth, imageHeight, BufferedImage.TYPE_3BYTE_BGR);
                Graphics2D graphics = image.createGraphics();
                graphics.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
                graphics.drawImage(sourceImage, 0, 0, imageWidth, imageHeight, null);
                graphics.dispose();
            } else {
                image = sourceImage;
            }
            // Converts the image to floats and normalize by subtracting mean values
            int i = 0;
            for (long h = 0; h < imageHeight; ++h) {
                for (long w = 0; w < imageWidth; ++w) {
                    // "caffe"-style normalization
                    tensor.setFloat(image.getData().getDataBuffer().getElemFloat(i++) - 103.939f, imageIdx, h, w, 0);
                    tensor.setFloat(image.getData().getDataBuffer().getElemFloat(i++) - 116.779f, imageIdx, h, w, 1);
                    tensor.setFloat(image.getData().getDataBuffer().getElemFloat(i++) - 123.68f, imageIdx, h, w, 2);
                }
            }
            ++imageIdx;
        }
    });
} |
st206930 | Sorry I can’t add links but there’s also some example java code in the tensorflow-java models github repository. You need to drill down to the cnn FasterRcnnInception directory |
st206931 | The example @Keith_Hall is referring to is here - java-models/tensorflow-examples/src/main/java/org/tensorflow/model/examples/cnn/fastrcnn at master · tensorflow/java-models · GitHub 21 |
st206932 | Yes this other example is valid also but takes a different approach, it uses TensorFlow to decode and resize the images. The goal of my previous example is to demonstrate how to do it when using image utilities coming with the JDK. |
st206933 | I have no experience with Kotlin, but it does look like it is a step in the right direction. I would like to take you up on your offer Karl to try and convert this to Java. |
st206934 | Please @James2026 , see above my initial post, I’ve added the same snippet but in Java |
st206935 | Hello,
I am moving to this thread from a github issue since this topic isn’t germane to the reason it was opened, but is one that I am still working on. I have had some discussion with Craig, Keith and Karl there already about converting BufferedImage to Tensors.
I have a JavaFX app, which uses the AWT Robot class to take BufferedImage screenshots of a game, which I want to feed into a TF2 model to make predictions on. I then want to capture the bounding box information, send it back to the JavaFX portion of my app and draw them onto the image. The goal of this is to get it as close to real time as possible. I also want to use the bounding box information to feed coordinates of objects back to some other handler that will use the Robot class key presses to avoid them. Disclaimer: I am working on this for a Thesis project and not as a botter.
I was able to get the reading/writing to file example to work, but I believe instead of being able to send the bounding box coordinates back, it wrote a new image with bounding boxes on it to a file. I’m hesitant to go this route because it seems like it will be a lot of writing to disk.
I have also tried to add the preprocessing solution @karllessard has come up with and have attempted to use threading to speed it up but run into memory access errors. (context: I am inexperienced with concurrency).
Is there a solution, where I can use something like the DecodePng feature but instead of having it read from a file, just have it take in a Buffered Image? Or, is there a concurrent solution to doing it just within the JDK methodology? |
st206936 | If you need this to be fast, you’ll want to avoid BufferedImage as much as possible. If all that you need from the Robot class is taking screenshots, that can be achieved a lot more efficiently with FFmpeg and JavaCV. There is some sample code for that here, among other places:
github.com/bytedeco/javacv: "ScreenCapture on Windows using new FFmpegFrameGrabber("screen-capture-recorder");" (opened Aug 24, 2014 by Nackloose; labels: enhancement, help wanted)
Apparently ffmpeg supports screen capture on Windows according to this: https://trac.ffmpeg.org/wiki/Capture/Desktop
Attempting to do so using:
```
FrameGrabber grabber = new FFmpegFrameGrabber("screen-capture-recorder");// As well as FFmpegFrameGrabber("UScreenCapture");
grabber.setFormat("dshow");
grabber.setFrameRate(30);
grabber.start();
```
Results in:
```
run:
Exception in thread "main" org.bytedeco.javacv.FrameGrabber$Exception: avformat_open_input() error -5: Could not open input "screen-capture-recorder". (Has setFormat() been called?)
at org.bytedeco.javacv.FFmpegFrameGrabber.startUnsafe(FFmpegFrameGrabber.java:368)
at org.bytedeco.javacv.FFmpegFrameGrabber.start(FFmpegFrameGrabber.java:318)
at testarea.JavaCVDemo.main(JavaCVDemo.java:40)
[dshow @ 154dc160] Malformed dshow input string.
[dshow @ 154dc160] Malformed dshow input string.
Java Result: 1
BUILD SUCCESSFUL (total time: 9 seconds)
```
This may possibly have something to do with(maybe???): https://trac.ffmpeg.org/wiki/DirectShow#Specifyinginputframerate
This is here as a resource in case other people experience this error and wish to look into it, as well as a reminder to myself to do so in the future.
If you find any solutions or fixes, please let me know.
FFmpegFrameGrabber.grab() returns Frame objects, but what you want from them is Frame.image[0], which is typically just a ByteBuffer in BGR24 format from which we can easily create a Tensor.
And while you’re at it, you may want to try TF Lite since it is probably going to give you lower latency than TF Core:
github.com
javacpp-presets/tensorflow-lite at master · bytedeco/javacpp-presets 1
master/tensorflow-lite
The missing Java distribution of native C++ libraries |
st206937 | If you already have your image in memory and can access easily its raw pixels (no matter if it’s a BufferedImage or something else), you can certainly feed them directly to your tensor without passing through a file. The technique above shows only one way to do it, using AWT.
That being said, when you allocate a Tensor, you have direct access to its memory using the Java NdArray library. There are many accessors that allow you to transfer your pixel data to your tensor. Depending on your model, you'll want to feed your tensor in BGR or RGB. Also, pixel data need to be normalized as floats between 0 and 1, while your PNG will probably have integer values.
If performance matter, you can try to apply these transformations first on the raw data of original image (e.g. normalization + channel reordering), using any Java techniques for doing it, then you could transfer that data directly to your tensor buffer like this:
byte[] normalizedPixels = ....;
try (TFloat32 tensor = TFloat32.tensorOf(Shape.of(w, h), t -> t.asRawTensor().data().write(normalizedPixels))) {
...
}
That’s the most direct way I can think of right now, but there are also other ways to achieve something close if that doesn’t work for you.
About normalization, I gave the "caffe-style" one as an example, as it is the default used by Keras, but there are other valid ways to do it, e.g. float f = (x/127.5 - 1). Pick the best approach for your needs.
it wrote a new image with bounding boxes on it to a file. I’m hesitant to go this route because it seems like it will be a lot of writing to disk.
You definitely don’t need to do this. You can again read directly the data from your detectionBoxes and other output tensors and pass it any other handle you have or tool for drawing efficiently the bounding boxes in your frame. Again, check at the various read operations available for float buffers from the NdArray library. |
st206938 | Sometimes you might need to reverse the BGR to RGB as well with a tensor. The Reverse op will do this for you.
e.g.
Reverse reverse = tf.reverse(tf.constant(someImageTensor), tf.constant(new long[]{2L})); |
st206939 | I am building my project: data is fetched from my database for a specific Project_id and then my model is trained using an LSTM. The epochs clearly run, but after that it shows an Internal Server Error.
admin.py
def build(self, request, queryset):
    count = 0
    for p in queryset:
        if build_id(p.project_management.id):
            count += 1
        else:
            messages.warning(request, f"Could not build model for {p}")
    messages.success(
        request, f"Successfully built models for {count} projects")

build.short_description = "Build models for selected Projects"
build.py
Here the model is built for a specific Project_id. Only model.pkl gets stored, and not completely; the other files, scaler_in and scaler_out, are not saved in the folder.
def build_id(project_id):
    # get directory path to store models in
    path = fetch_model_path(project_id, True)
    # train model
    model, scaler_in, scaler_out = train_project_models(project_id)
    # ensure model was trained
    if model is None:
        return False
    # store models
    store_model(f'{path}/model.pkl', model)
    store_model(f'{path}/scaler_in.pkl', scaler_in)
    store_model(f'{path}/scaler_out.pkl', scaler_out)
    # clear current loaded model from memory
    keras_clear()
    return True
utils.py
with open(path, 'wb') as f:
    model_file = File(f)
    pickle.dump(model, model_file)
When I comment out pickle.dump(model, model_file), then model.pkl, scaler_in.pkl, and scaler_out.pkl are saved as files with 0 KB of data. If the .pkl files already exist with data, it removes them and builds the project successfully. I debugged this code and the Django debug toolbar shows that the page is temporarily moved.
output
Epoch 1/4
11/11 [==============================] - 9s 302ms/step - loss: 0.4594 - val_loss: 0.2777
Epoch 2/4
11/11 [==============================] - 2s 177ms/step - loss: 0.1039 - val_loss: 0.0395
Epoch 3/4
11/11 [==============================] - 2s 170ms/step - loss: 0.0545 - val_loss: 0.0361
Epoch 4/4
11/11 [==============================] - 2s 169ms/step - loss: 0.0414 - val_loss: 0.0551
Internal Server Error: /turboai/turboAI/jaaiparameters/
Traceback (most recent call last):
File "E:\.Space\project\venv\lib\site-packages\django\core\handlers\exception.py", line 47, in inner
response = get_response(request)
File "E:\.Space\project\venv\lib\site-packages\django\core\handlers\base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\options.py", line 616, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\utils\decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\views\decorators\cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\sites.py", line 232, in inner
return view(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\utils\decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\utils\decorators.py", line 130, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\options.py", line 1723, in changelist_view
response = self.response_action(request, queryset=cl.get_queryset(request))
File "E:\.Space\project\venv\lib\site-packages\django\contrib\admin\options.py", line 1408, in response_action
response = func(self, request, queryset)
File "E:\.Space\project\TurboAnchor\turboAI\admin.py", line 125, in build
if build_id(p.project_management.id):
File "E:\.Space\project\TurboAnchor\turboAI\build.py", line 48, in build_id
store_model(f'{path}/model.pkl', model)
File "E:\.Space\project\TurboAnchor\turboAI\utils.py", line 154, in store_model
pickle.dump(model, model_file)
TypeError: can't pickle weakref objects
[29/Oct/2021 17:50:31] "POST /turboai/turboAI/jaaiparameters/ HTTP/1.1" 500 126722 |
st206940 | Please look at:
github.com/tensorflow/tensorflow: "Keras model pickle-able but tf.keras model not pickle-able" (opened Nov 29, 2019, closed Oct 4, 2021, by Edwin-Koh1; labels: stat:awaiting response, stat:awaiting tensorflower, type:bug, stalled, comp:keras, TF 2.5)
**System information**
- Windows 10
- Tensorflow 2.0 (CPU)
- joblib 0.14.0
- Python 3.7.5
- Keras 2.3.1
Hello everybody! This is my first post so please forgive me if I have missed something. So I'm trying to use a genetic algorithm to train and evaluate multiple NN architectures so I need to parallelize them on a multi-core CPU. Therefore I have used joblib to try to parallelize this. However, I was stuck on my tf.keras code because it wasn't pickleable. After many hours of debugging I finally realised that the tf.keras models are not pickleable whereas keras models are.
**Describe the current behavior**
The code below works but if you replaced keras with tf.keras, there will be an error:
**Could not pickle the task to send it to the workers.**
**Describe the expected behavior**
Moving forward, tf.keras should be replacing keras and therefore tf.keras should also be pickleable.
**Code to reproduce the issue**
```
#The following is a simple code to illustrate the problem:
from joblib import Parallel, delayed
import keras
import tensorflow as tf
def test():
    model = keras.models.Sequential()
    return

Parallel(n_jobs=8)(delayed(test)(i) for i in range(10)) #this works as intended

def test_tf():
    model = tf.keras.models.Sequential()
    return

Parallel(n_jobs=8)(delayed(test_tf)(i) for i in range(10)) #this will spit out the error above
```
**Other comments**
I guess a quick fix would just be to replace all the existing code with tf.keras to just keras but seeing as keras support will be discontinued and absorbed by Tensorflow 2.0, I think this should be fixed.
I suggest testing this with TF 2.6.x or the TF 2.7 RC |
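As a practical workaround while the pickling limitation stands, a common pattern is to avoid pickling the Keras model at all and use its native saving API, keeping pickle only for the scikit-learn scalers. A sketch of how the storage step in build_id could look (paths and names follow the snippets above):
```python
import pickle

# Save the Keras model with its own serialization instead of pickle.
model.save(f'{path}/model')        # SavedModel directory
# model.save(f'{path}/model.h5')   # or a single HDF5 file

# The scalers are plain scikit-learn objects and pickle without trouble.
for name, obj in (('scaler_in', scaler_in), ('scaler_out', scaler_out)):
    with open(f'{path}/{name}.pkl', 'wb') as f:
        pickle.dump(obj, f)

# Later: model = tf.keras.models.load_model(f'{path}/model')
```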
st206941 | Hi
I’ve written a custom function which takes 4 inputs and returns 2 outputs. In my application, both these outputs can be computed in parallel and I want to use static graphs in tf to automate this (code shown below).
@tf.function
def forward(X, dX, W1, W2):
    Z1 = tf.matmul(X, tf.transpose(W1))
    dZ1 = tf.matmul(dX, tf.transpose(W1))
    A1 = tf.tanh(Z1)
    dA1 = tf.multiply(tf.expand_dims(1-tf.square(Z1), axis=1), dZ1)
    Z2 = tf.matmul(A1, tf.transpose(W2))
    dZ2 = tf.matmul(dA1, tf.transpose(W2))
    return Z2, dZ2
To make sure that the computations are done in parallel I wanted to visualise the graph. However when I launch tensorboard, it doesn’t show me computations corresponding to dZ2 (code used shown below).
%load_ext tensorboard
from datetime import datetime
from packaging import version
# Set up logging.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = 'logs/func/%s' % stamp
writer = tf.summary.create_file_writer(logdir)
# Bracket the function call with
# tf.summary.trace_on() and tf.summary.trace_export().
tf.summary.trace_on(graph=True, profiler=True)
# Call only one tf.function when tracing.
Z2, dZ2 = forward(X, dX, W1, W2)
with writer.as_default():
    tf.summary.trace_export(
        name="my_func_trace",
        step=0,
        profiler_outdir=logdir)
%tensorboard --logdir logs/func
[Screenshot from 2022-01-16 11-27-08, 1018×668, 28.1 KB]
st206942 | Hi everyone,
I am facing a problem while trying to save my model that has a custom layer. I followed the same method the François Chollet book uses, but I got this error:
ValueError: Unable to create a dataset (name already exists).
Can anyone help, please? |
st206943 | Hi, can you share the chapter from the Deep Learning with Python (v1 or v2) book you’re referring to, as well as some code? Are you saving the model with Keras ModelCheckpoint callbacks? I’m sure we’ll be able to help. |
st206944 | I did a quick look at the tools folder and I am interested in maintaining this folder since I have some experience with Docker. My GitHub nickname is @vulkomilev. |
st206945 | You can start to create and review PRs related to that folder. At some point we will add you as a codeowner. |
st206946 | Recently we had a refresh of a Deformable Convolution WIP PR in Addons.
I've cherry-picked this as an example because it requires us to maintain almost 3k lines of new code in the repository.
This maintainership overhead is also quite similar to what we have with other custom-kernel PRs.
As Addons is one of the few ecosystem repositories to support custom (C++) ops and the related CI infra, it is quite normal that we receive this kind of proposed PR.
But as the code ownership of these components is generally not very stable over time, we would prefer, where possible, not to merge these custom-op PRs, also to achieve broader hardware coverage.
What are the alternatives? How could we collaborate when a compositional implementation has huge performance gaps?
Often these kinds of issues are shared across the "extend" ecosystem, e.g. for EmbeddingBag:
github.com/pytorch/xla: "lowering embeddingbag to XLA" (opened Aug 5, 2020 by shz0116; labels: nostale, op lowering)
The embeddingbag operation has not been lowered to XLA. I saw aten:embeddingbag from the profiling.
github.com/tensorflow/addons: "EmbeddingBag and Product-Key Memory Layers" (opened Oct 14, 2020 by Rocketknight1; labels: Feature Request, layers)
**Describe the feature and the current behavior/state.**
FAIR have a cool paper where they introduce [Product-Key Memory Layers](https://arxiv.org/abs/1907.05242) - these are layers that can add a huge number of parameters (100M-1B) to a network with a very minimal compute overhead.
Unfortunately, implementing them efficiently depends on the [EmbeddingBag layer](https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html) from Pytorch. This layer basically does a gather op followed by a weighted sum across the final dimension of the gather indices.
It is trivial to implement this op as a composition of two or three ops in Tensorflow, but doing so requires you to materialize the output of the gather, which in the case of Product-Key Memory layers is enormous, and usually blows out my GPU RAM. By combining these ops into a single efficient call, EmbeddingBag avoids ever materializing the extremely large pre-sum gather output. There's no efficient way to do the same in Tensorflow without a custom op.
I've already gotten a CUDA and (single-threaded) CPU implementation of EmbeddingBag working locally using the custom-op repo and associated docker image. I've verified correctness by comparing outputs and gradients to those from the manual composition of ops, and speed and memory usage are vastly improved. I could also contribute a TF implementation of the Product-Key Memory layer itself if desired.
**Relevant information**
- Are you willing to contribute it (yes/no): yes
- Are you willing to maintain it going forward? (yes/no): yes
- Is there a relevant academic paper? (if so, where): https://arxiv.org/abs/1907.05242
- Is there already an implementation in another framework? (if so, where): Yes, EmbeddingBag is already a PyTorch layer
- Was it part of tf.contrib? (if so, where):
**Which API type would this fall under (layer, metric, optimizer, etc.)**
Layer
**Who will benefit with this feature?**
People who want to squeeze loads of parameters into their model while maintaining fast throughput and aren't worried about overfitting. The paper used it for big autoregressive NLP Transformers, but I suspect you could deploy it in a lot of other places too.
**Any other info.**
I have only implemented the portions of EmbeddingBag necessary for Product-Key Memory layers.
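For reference, the compositional form the issue describes (the one that materializes the large gather output) is roughly the following sketch; it shows why a fused kernel helps and is not the Addons implementation itself:
```python
import tensorflow as tf

def embedding_bag_sum(params, ids, per_sample_weights):
    """Compositional EmbeddingBag (sum mode with per-sample weights).

    params: (vocab, dim), ids: (batch, bag), per_sample_weights: (batch, bag)
    """
    gathered = tf.gather(params, ids)  # (batch, bag, dim) -- fully materialized
    return tf.einsum('bnd,bn->bd', gathered, per_sample_weights)
```
The intermediate (batch, bag, dim) tensor is exactly what blows up memory for Product-Key Memory layers; the fused op in the linked PR avoids ever materializing it.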
EmbeddingBag op and layer by Rocketknight1 · Pull Request #2352 · tensorflow/addons · GitHub (1k lines)
github.com/tensorflow/tensorflow: "embedding_lookup cause ran out of memory" (opened Oct 5, 2020 by shz0116; labels: TF 2.3, comp:tpus, comp:xla, stat:awaiting tensorflower, type:bug)
I am running the following code to test embedding_lookup.
```python
# command:
# python3 -m pdb embtest.py --features=1000 --nnz=30 --batch=128
#
# error:
# *** tensorflow.python.framework.errors_impl.ResourceExhaustedError:
# Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
#
import tensorflow as tf
import numpy as np
import sys
import os
import time
def measure(params, sp_ids, steps, thr):
    res = tf.nn.embedding_lookup([params[0:thr],params[thr:]], sp_ids, None, name="TEST1")
    print("Finished test")
    return res

if __name__ == "__main__":
    import sys
    import argparse
    parser = argparse.ArgumentParser(
        description="Measure the performance of tensorflow embeddingbag using tf.nn.embedding" )
    parser.add_argument("--features", type=int, default=10)
    parser.add_argument("--em", type=int, default=2)
    parser.add_argument("--nnz", type=int, default=2)
    parser.add_argument("--batch", type=int, default=4)
    parser.add_argument("--steps", type=int, default=1)
    parser.add_argument("--warmups", type=int, default=0)
    args = parser.parse_args()
    features = args.features
    em = args.em
    nnz = args.nnz
    batch = args.batch
    steps = args.steps
    warmups = args.warmups
    sp_ids = np.random.randint(0, features, (batch * nnz,))
    res = tf.zeros([batch, em])
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://"+os.environ["TPU_IP"])
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    print(" ")
    tpus = tf.config.list_logical_devices('TPU')
    print("There are {} tpu logical devices".format(len(tpus)))
    print(tpus[0])
    with tf.device('TPU:0'):
        params = tf.random.uniform([features, em])
        res = measure(params, sp_ids, tf.constant(steps), features//2)
    print(res)
```
But got the following error:
```bash
hongzhang@shan-tf1:~$ python embtest.py --features=1000 --nnz=30 --batch=128
Eager execution : True
2020-10-05 08:23:42.244623: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-05 08:23:42.250601: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2300000000 Hz
2020-10-05 08:23:42.251595: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4c1dde0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-05 08:23:42.251631: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-05 08:23:42.263068: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.178.175.58:8470}
2020-10-05 08:23:42.263113: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:38651}
2020-10-05 08:23:42.279709: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.178.175.58:8470}
2020-10-05 08:23:42.279743: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:38651}
2020-10-05 08:23:42.280176: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:405] Started server with target: grpc://localhost:38651
There are 8 tpu logical devices
LogicalDevice(name='/job:worker/replica:0/task:0/device:TPU:7', device_type='TPU')
Traceback (most recent call last):
File "embtest.py", line 84, in <module>
t1 = measure(params, sp_ids, tf.constant(steps), features//2)
File "embtest.py", line 15, in measure
res = tf.nn.embedding_lookup([params[0:thr],params[thr:]], sp_ids, None, name="TEST1")
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/embedding_ops.py", line 394, in embedding_lookup_v2
return embedding_lookup(params, ids, "div", name, max_norm=max_norm)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/embedding_ops.py", line 328, in embedding_lookup
transform_fn=None)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/embedding_ops.py", line 246, in _embedding_lookup_and_transform
ret.set_shape(ids.get_shape().concatenate(element_shape_s))
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1206, in set_shape
if not self.shape.is_compatible_with(shape):
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1167, in shape
self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple())
tensorflow.python.framework.errors_impl.ResourceExhaustedError: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: register allocator spill slots
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
2020-10-05 08:23:59.826142: W tensorflow/core/distributed_runtime/eager/remote_tensor_handle_data.cc:76] Unable to destroy remote tensor handles. If you are running a tf.function, it usually indicates some op in the graph gets an error: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: register allocator spill slots
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
XLA label: %concatenate.724 = f32[3840,2]{0,1:T(2,128)} concatenate(f32[1,2]{0,1:T(2,128)}, f32[3,2]{0,1:T(2,128)}, f32[5,2]{0,1:T(2,128)}, f32[1,2]{0,1:T(2,128)}, ...(+2400)), dimensions={0}
Allocation type: scoped
```
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
os: Linux
os kernel version: #1 SMP Debian 4.19.146-1 (2020-09-17)
os release version: 4.19.0-11-cloud-amd64
os platform: Linux-4.19.0-11-cloud-amd64-x86_64-with-debian-10.6
linux distribution: ('debian', '10.6', '')
linux os distribution: ('debian', '10.6', '')
mac version: ('', ('', '', ''), '')
uname: uname_result(system='Linux', node='shan-tf1', release='4.19.0-11-cloud-amd64', version='#1 SMP Debian 4.19.146-1 (2020-09-17)', machine='x86_64', processor='')
architecture: ('64bit', 'ELF')
machine: x86_64
- TensorFlow installed from (source or binary):
- TensorFlow version (use command below):
tf.version.VERSION = 2.3.0-dev20200620
tf.version.GIT_VERSION = v1.12.1-34769-gfd2d4cdb70
tf.version.COMPILER_VERSION = 7.3.1 20180303
- Python version:
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
github.com/google/jax: "np.take and np.einsum aren't fused properly" (opened May 26, 2020 by AranKomat; labels: P2 (eventual), performance, xla_issue)
I'm trying to translate [Product Key Memory](https://arxiv.org/abs/1907.05242) in PyTorch into JAX, and this requires the translation of [nn.EmbeddingBag](https://pytorch.org/docs/master/generated/torch.nn.EmbeddingBag.html) with per_sample_weights, as I haven't found any counterpart in JAX (but if you know, please let me know). For this, I wrote scatter_weighted_sum, the weighted sum version of scatter_add, in hope that it'll be efficient with fusing. However, jit didn't fuse np.take, reshape and np.einsum properly, which resulted in a huge intermediate object. Since #1979 concluded this sort of ops will be fused on GPU, I was wondering what is causing this problem. If by any chance this isn't supported on GPU, should this work on TPU? I'm using JAX ver. 0.1.67 with various Colab GPUs.
```python
hidden_dim = 512
n_keys = 512
batch = 2 ** 15
knn = 32
heads = 4
key = random.PRNGKey(0)
values = random.normal(key, (n_keys ** 2, hidden_dim))
indices = random.randint(key, (batch*heads, knn), 0, n_keys ** 2)
weights = random.normal(key, (batch*heads, knn))
@jit
def scatter_weighted_sum(inputs, indices, weights):
    num_bags = weights.shape[-1]
    dim = inputs.shape[-1]
    indices = indices.reshape(-1)
    tmp = inputs.take(indices, axis=0).reshape(-1, num_bags, dim)
    return np.einsum('ind, in -> id', tmp, weights)
```
Thanks,
Stefano |
st206947 | @kristen Is the MLIR team registered on this Discourse instance, or are they only on the LLVM MLIR Discourse instance?
Because generally we don't have TF-specific threads in the LLVM MLIR instance. |
st206948 | Ok I’ve cross posted in the MLIR llvm forum instance.
I hope that at least some TF-MLIR team members could be subscribed to their tags and subcategory. |
st206949 | /cc @Jacques_Pienaar let me know if you want to move this into another category or if you want to use only the XLA tag. |
st206950 | Hey Stefano,
Here is fine, thanks (all necessary tags). I'm pinging a couple of folks who have been looking at interfacing/third-party backends, as I don't think they've seen this yet.
Best,
Jacques |
st206951 | [I’ll speculate based on previous conversations while we wait]
One of the parts we have discussed is “keeping” multiple levels of abstraction around, enabling backends to hook/match at appropriate level to enable the “mega” op while exposing the decomposed forms where there is no support. It is also true that the compositional representation has been too rigid and hasn’t composed as well (“just rewrite your computation as convolutions if you want performance” being in effect the indirect suggestion) and should be revised (which is happening albeit slowly). These are great examples to highlight - a common problem is that folks find a case where compositional form does poorly, special cases a transformation and then moves on and without such overarching examples it is easy to miss that the problem isn’t being addressed. |
st206952 | Jacques_Pienaar:
a common problem is that folks find a case where compositional form does poorly, special cases a transformation and then moves on and without such overarching examples it is easy to miss that the problem isn’t being addressed.
IMHO this is exactly the point.
And I think that is why some specific reusable components (keras-nlp, keras-cv, tf-addons) that serve e2e models, as well as the selected models in the model garden, could be one of the drivers for understanding what we expect from the compiler stack.
Just take a look at our current threshold in TF Addons:
we need at least 50 citations to accept a feature related to a paper, so it is never something totally brand new.
If we need a custom C++ op to reach good-enough performance for a new layer, but then the code owner disappears after one or two months, or people ask to use it in Colab / Google Cloud TPU, isn’t it better to discuss these use cases directly with the compiler stack team, so we understand how to handle our end-to-end performance requests and can better evaluate alternatives to maintaining a large custom op with partial hardware coverage?
Just my 2¢ |
st206953 | We can see the same in Keras, now that it is again a Python-only repo:
github.com/keras-team/keras
Support 3D Pre-trained Model (DepthwiseConv3D and SeparableConv3D) 3
opened
Aug 3, 2021
innat
type:feature
stat:awaiting response
**System information**
TensorFlow version (you are using): TF 2.5
Are you willing to contribute it (Yes/No): No
3. **Describe the feature and the current behavior/state**.
Describe the feature clearly here. Be sure to convey here why the requested feature is needed. Any brief description of the use-case would help.
It will be useful to enhance the research of medical imaging (3D modeling) and work on video data and more. There are available 2D classification models but unfortunately not a single 3D model for classification.
For that, DepthwiseConv3D and [SeparableConv3D](https://github.com/keras-team/keras/issues/5639) official implementation is also needed.
4. **Will this change the current api? How?**
It will enhance the API.
5. **Who will benefit from this feature?**
Researcher on medical imaging and with video data and possibly all 3D format data.
6. **[Contributing](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md)**
- Do you want to contribute a PR? (yes/no): no
- If yes, please read [this page](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md) for instructions
- Briefly describe your candidate solution(if contributing):
**Others**
There is (AFAIK) only one non-official code-bases for 3D modeling in `tf.keras`
- [ZFTurbo/efficientnet_3D](https://github.com/ZFTurbo/efficientnet_3D)
- [ZFTurbo/classification_models_3D](https://github.com/ZFTurbo/classification_models_3D) |
st206954 | Jacques_Pienaar:
I’m pinging a couple of folks who has been looking at interfacing/third party backends as i don’t think they’ve seen this yet.
@Jacques_Pienaar Any news? I would like to keep this thread alive
/cc @yarri-oss @thea |
st206955 | Not yet (I have a meeting soon that is semi-relevant, but higher level and a couple next week where I could raise it again). There are a few efforts I’m aware of, but they are at various stages.
I do like driving these with specific components. I would also ideally have it be such that the compiler team need not be a bottleneck here as that also doesn’t scale. And I believe separable convolutions have been on your list for a long time |
st206956 | Just a keep alive message for this thread.
Can we find someone in the TF or MLIR team that can give us any feedback/roadmap or just a rough outlook on this topic?
Thanks |
st206957 | @markdaoust Could you help us to find someone, on the TF side, that could give us an overview on this thread about the custom ops roadmap with the new compiler infra and TF runtime?
Thanks |
st206958 | I’ll see if I can find someone.
Aside: For embedding-bag, the docs describe this as merging “embedding lookup” and “reduce”. But for the sum and mean combiners, isn’t it sufficient to implement this as a sparse tensor (the ids and weights) times a dense matrix (the embedding vectors)? Doesn’t that cover all the cases except combiner=max? I think it would be possible to implement an efficient combiner=max if the sparse_segment_* series were complete and included a sparse_segment_max.
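To make that concrete, here is a minimal sketch (shapes and values are illustrative assumptions, not any existing TF EmbeddingBag API) of the sum/mean combiners expressed as a sparse-times-dense matmul:

```python
import tensorflow as tf

num_embeddings, dim = 16, 4
embeddings = tf.random.normal([num_embeddings, dim])

ids = tf.constant([[0, 3], [7, 2]], dtype=tf.int64)   # two bags, two ids each
weights = tf.constant([[0.5, 1.0], [2.0, 0.1]])       # per-sample weights

# Build a sparse [num_bags, num_embeddings] weight matrix and multiply it by
# the dense embedding table.
num_bags, bag_size = ids.shape
rows = tf.repeat(tf.range(num_bags, dtype=tf.int64), bag_size)
sparse_w = tf.sparse.SparseTensor(
    indices=tf.stack([rows, tf.reshape(ids, [-1])], axis=1),
    values=tf.reshape(weights, [-1]),
    dense_shape=[num_bags, num_embeddings])
sparse_w = tf.sparse.reorder(sparse_w)                 # canonical index ordering

bag_sum = tf.sparse.sparse_dense_matmul(sparse_w, embeddings)   # combiner="sum"
bag_mean = bag_sum / tf.sparse.reduce_sum(sparse_w, axis=1, keepdims=True)  # combiner="mean"
```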
TensorFlow
Migrating feature_columns to TF2's Keras Preprocessing Layers |
st206959 | Thanks,
yes, the topic is more generally about what the perspective is when the compositional path doesn’t perform well.
Do we need to interact more closely with the compiler team on the TF side before introducing custom ops (it is often hard to collect feedback)? I think new ops are interesting use cases to stress-test the compositional approach and the compiler stack transformations.
Will we have a new way to use the new compiler and runtime infra to write more portable high level custom ops?
If we are in a Python-only ecosystem repo, like keras*, where do we need to contribute these “missing pieces”?
P.s.
For the embedding bag case (Addons, Pytorch TPU, JAX) at some point we had a sparse proposal at:
github.com/tensorflow/addons
EmbeddingBag op and layer
tensorflow:master ← Rocketknight1:master
opened
Jan 19, 2021
Rocketknight1
+1100
-0
# Description
Brief Description of the PR:
This is a PR for the EmbeddingBag… op. Please don't merge it yet! Although it works, testing is incomplete and the file structure needs to be cleaned up. I'm opening it now just to get some initial feedback. I'll keep working on several of these issues (particularly 1, 3, 4 and 6 see below), but I'll need some feedback on 2) and 5), plus any other feedback you have for the rest of it!
Fixes # (issue)
#2201
## Type of change
New layer and associated C++/CUDA op
# Comments
There are a few issues that need to be resolved before I'd feel comfortable with this being merged. In no particular order, they are:
1) The CUDA/C++ code is split with the forward and backward passes in separate files, which is not how other Tensorflow or Addons ops do it. This is just a style thing - I'll merge them soon.
2) There are really two different entrypoints for users here, the function/op (analogous to tf.gather) and the layer (analogous to tf.keras.layers.Embedding). Like Embedding, the layer instantiates its own embeddings tensor and expects to be passed only indices and weights, whereas the function needs to be passed embeddings as well. Following PyTorch's naming conventions, I called the op embeddingbag and the layer EmbeddingBag, but this is almost certainly not what you want. What is the right way to name these two? Should I make the function/op a stateless Layer rather than just a function?
3) No support for float16/bfloat16 yet.
4) Because context->AllocateTemp continuously segfaulted for me when I was compiling in the custom-op repo, I used AllocateOutput to make some dummy outputs and then just used them as temp arrays. Compiling in tensorflow_addons itself seems much more stable, but I still need to go back and set that properly to AllocateTemp.
5) The CUDA/C++ ops expect a weight tensor. When no weights are passed, the Python wrapper instantiates dummy weights with `tf.ones_like()`. Is this acceptable?
6) More tests! I don't have any gradient tests at all yet, and I should probably add additional tests with weird shapes.
But then the custom op was merged in Addons (+1,100 lines for CPU/CUDA) |
st206960 | I want to refresh this topic for the new year.
Can we collect a somewhat clearer vision on this topic? |
st206961 | This may be one where we could set up an impromptu virtual meeting to discuss it. Some folks aren’t back yet, but let me see. |
st206962 | SIG Build’s next meeting will be tomorrow, Tuesday, January 11, at 2pm Pacific time. Find the meeting details at bit.ly/tf-sig-build-notes 4, and feel free to suggest new agenda items. |
st206963 | Here’s a summary of some of the major points from the meeting.
Please fill out this form: SIG Build Monthly Meeting Time 2022. We’re considering a new meeting time every other month for the sake of worldwide members for whom 2pm PST is not ideal.
I’ve merged configuration files that build TF’s Nightly test suite in Docker, and am working on changing our internal CI to use these containers.
Gentoo now packages ROCm!
Many of TF DevInfra team’s projects are delayed – manylinux2014 and Python 3.10 support are both stuck on build/test failures.
Our next meeting is February 1st at 2pm PST. See you then! |
st206964 | Hi everyone,
I’m having an issue with model.predict() causing OOM errors. Strangely, this doesn’t happen while training, only while predicting on the val dataset.
I’m using a TFRecordDataset with batches of size 512
val_dataset = (tf.data.TFRecordDataset(filenames=[val_files])
               .map(tf_parse, num_parallel_calls=6)
               .batch(BATCH_SIZE)
               .prefetch(16)
               .repeat(1))

def tf_parse(eg):
    example = tf.io.parse_example(
        eg[tf.newaxis],
        {"features": tf.io.FixedLenFeature(shape=(1050,), dtype=tf.float32),
         "targets": tf.io.FixedLenFeature(shape=(6,), dtype=tf.float32)})
    return example["features"][0], (example["features"][0], example["targets"][0], example["targets"][0])
As I stated above, training works fine, but when I try to predict on the entire val_dataset I get an OOM error. Trying smaller bits of the dataset with model.predict(val_dataset.take(50)), for example, works fine, but not with the entire val_dataset. Even specifying batch_size=1 in predict doesn’t help at all.
The input data is 1050 columns of numeric data, and there are about 540k rows of data. During training, GPU memory usage is around 2.5/8.0GB.
Does anyone have any suggestions?
EDIT
I’ve run some more tests. model.evaluate() works fine as well. Does Tensorflow cache results on the GPU, then send it down to the CPU in one big set, or does it send results down per batch, flush buffers, and continue? I suspect it’s the former, because my outputs end up being (1050+1050+6) x num_rows due to the architecture |
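A minimal per-batch workaround sketch (assuming the model and val_dataset above, and that the model has the three outputs described): call the model one batch at a time and pull each result to host memory immediately, so nothing accumulates on the GPU.

```python
import numpy as np

all_outputs = []
for features, _ in val_dataset:
    batch_outputs = model(features, training=False)        # list of output tensors for this batch
    all_outputs.append([np.asarray(t) for t in batch_outputs])  # move to host memory right away
```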
st206965 | Hi, I’m a student studying data science. I took an interest in CF when I was learning about recommender systems, and I would like to build a matrix factorization based CF. But I’m seeing the rise of contextual and implicit recommender systems, so I was wondering: is it possible to build a matrix factorization based CF that incorporates contextual information and implicit data? If so, can someone help me figure out how to build it? Thanks in advance. |
st206966 | Hello, I have been using TFJS for a while for pose recognition (with PoseNet) and it is magnificent! Now I wonder if I can use TF for a different use case: I want to scan printed assessments with multiple questions, where the options are circles that should be filled in with a black pen by the user.
I imagine I need a way to recognize the frame containing all the answers, similar to the way QR-Codes corners are used… I appreciate any suggestion about what would be a good approach to this problem.
thanks! |
st206967 | Welcome to the forum and thanks for being part of the TensorFlow.js community!
A few things.
Glad you were enjoying PoseNet; however, please consider upgrading to MoveNet, which is almost 100x faster and much more accurate. PoseNet was good when it came out, but MoveNet is now the standard. Learn more here:
blog.tensorflow.org
Next-Generation Pose Detection with MoveNet and TensorFlow.js
MoveNet is a human pose detection architecture developed at Google that is ultra fast and accurate. It was designed to detect difficult poses
So for your new problem I would first ask if you need machine learning for this task at all. Regular computer vision may be adequate depending on your data: e.g., if it is well scanned and just black and white, it may be fairly trivial to find all black areas of a certain size and then check which of those contain more filled pixels than others.
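As a rough illustration of that baseline idea (shown in Python/NumPy for brevity since the logic ports directly to JavaScript; the names, positions and thresholds are all assumptions):

```python
import numpy as np

# Toy check: a bubble centered at (row, col) is "filled" when enough of its pixels are dark.
def bubble_filled(gray_image, row, col, radius=10, darkness=128, fill_ratio=0.5):
    patch = gray_image[row - radius:row + radius, col - radius:col + radius]
    return (patch < darkness).mean() > fill_ratio

scan = np.random.randint(0, 256, size=(1000, 800))  # stand-in for a real grayscale scan
print(bubble_filled(scan, row=120, col=240))
```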
That being said, if you do want to use machine learning, you will probably want to retrain some sort of object detection model to find objects of interest and their positions, e.g. a filled box vs. an unfilled box.
For that I highly recommend following this great tutorial by Hugo:
blog.tensorflow.org
Custom object detection in the browser using TensorFlow.js
Train a custom MobileNetV2 using the TensorFlow 2 Object Detection API and Google Colab for object detection, convert the model to TensorFlow.js
You can then run that resulting trained model in the browser to detect custom objects like you need and find their locations in the given image. |
st206968 | hi @Jason thank you for your answer!
I had not idea MoveNet was out, I will check it out by sure.
About the question 2, if I understood well, probably my problem is most on the side of “regular computing vision” rather than machine learning itself:
I can control the printed forms: I can add QRCodes or AR markers if needed…
I know in advance the layout of the form to process.
I guess the main challenge is that the capture will be done with a phone or in front of a webcam rather than with a flatbed scanner. That’s why I imagined AR markers may be of help: to determine the rotation of the sheet, apply some kind of “inverse transformation” to “flatten” the captured picture, and then compare the circles with the answers. |
st206969 | Well, my main point is that you may be able to solve your problem using regular computer vision techniques, depending on how clean your image data is. If you find that is not working well, then I would try solving it with machine learning using TensorFlow.js, but also ensure the machine learning solution beats your “baseline” non-ML solution, if that makes sense.
What you want to do is certainly achievable with TensorFlow.js though, and in real time too. Check out this interview I did with one member of the community who explains really well how he made a Sudoku solver, which is actually a very similar problem to yours:
If you check the description links of the video too there is one more video where he goes even deeper about the preprocessing steps he takes (non ML tasks) along with his TensorFlow.JS implementation that will help you too:
Browser-Based Augmented Reality Sudoku Solver using TensorFlow and Image Processing |
st206970 | I was wondering how I should weight the data as my accuracy stops at around 82%.
Test Classes Actual: [138, 156, 407, 450, 9334, 2192, 12087, 987, 828]
Test Classes Classified by Model: [81, 16, 252, 12, 9069, 0, 15339, 982, 0]
How should I weight the values, because when I make the weights all equal, I get this?
Test Classes Classified by Model: [430, 141, 934, 912, 6619, 7395, 8332, 1050, 0]
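For reference, a minimal sketch of one common scheme, inverse class frequency, using the class counts listed above (assuming the weights are passed to Keras via the class_weight argument of model.fit):

```python
import numpy as np

counts = np.array([138, 156, 407, 450, 9334, 2192, 12087, 987, 828])
weights = counts.sum() / (len(counts) * counts)   # rarer classes get larger weights
class_weight = dict(enumerate(weights.tolist()))
# model.fit(..., class_weight=class_weight)
```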
Any help would be appreciated.
Thanks,
Rishav |
st206971 | The TensorFlow OSS DevInfra Team and TF SIG Build are developing new Dockerfiles in the SIG Build GitHub repo 8 that we want to be used for all of TensorFlow’s official build and test environments. They are published to SIG Build on DockerHub 6. Our first milestone is to use the Dockerfiles to build the TF Nightly packages with the following goals:
Container-built packages are functionally identical to the current package
Developers (you!) can build the same packages that we do with minimal effort
That milestone is ready for verification. I’ve set up internal CI jobs that use the containers to build tf-nightly packages that are very similar to the current ones, and I’d like your help to evaluate them for functional differences. Starting on Monday the 30th, we’ve been using the containers to build our official tf-nightly packages.
Here is a set of packages we built at the same commits for advance comparison. There are minor cosmetic differences but we’d like your help to find out if there are any functional differences between packages on the same row of the table below.
| Short Git Hash | Old Non-Docker Builds | New Docker Builds |
| --- | --- | --- |
| 5af3afc559 | GPU Python 3.9 | GPU Python 3.9 |
| 5af3afc559 | GPU Python 3.8 | GPU Python 3.8 |
| 5af3afc559 | GPU Python 3.7 | GPU Python 3.7 |
| 1d51452b18 | CPU Python 3.9 | CPU Python 3.9 |
| 1d51452b18 | CPU Python 3.8 | CPU Python 3.8 |
| 1d51452b18 | CPU Python 3.7 | CPU Python 3.7 |
Here’s how you can help us make the containers useful for you:
Install and compare the sample packages above. If you compare the two wheels for any of the rows, do they have any differences that would affect your workflow? (A small comparison sketch follows this list.)
Check out the containers on DockerHub 6 and the tf-nightly build instructions at the SIG Build repository 8. Are you able to build TensorFlow with them? If you use the same git hashes as above, how is your package different?
With the new packages that came out starting on Nov. 30, is anything different about them in a way that affects your workflow?
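A small comparison sketch (not an official tool; the wheel filenames below are placeholders) for checking whether two wheels from the same row differ in their file listings:

```python
import zipfile

def wheel_contents(path):
    # Map each file inside the wheel to its size.
    with zipfile.ZipFile(path) as wheel:
        return {info.filename: info.file_size for info in wheel.infolist()}

old = wheel_contents("tf_nightly-non-docker.whl")   # placeholder filename
new = wheel_contents("tf_nightly-docker.whl")       # placeholder filename

print("Only in old build:", sorted(set(old) - set(new)))
print("Only in new build:", sorted(set(new) - set(old)))
print("Files that changed size:",
      sorted(name for name in set(old) & set(new) if old[name] != new[name]))
```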
Please give all feedback in this thread. Thank you for your help! |
st206972 | If you have Docker (and nvidia-docker 1 if you want to run GPU TensorFlow) set up already, here’s how to test out one of the packages linked in the OP from inside the containers:
CPU:
docker pull tensorflow/build:latest-python3.9
docker run -it --rm tensorflow/build:latest-python3.9 bash
wget https://storage.googleapis.com/tensorflow-nightly/prod/tensorflow/nightly_release/ubuntu_tfdkr/cpu_py39/6/20211117-000455/pkg/tf_nightly_cpu-2.8.0.dev20211117-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
pip install ./tf_nightly*
python
import tensorflow as tf
GPU with nvidia-docker:
docker pull tensorflow/build:latest-python3.9
docker run --gpus=all -it --rm tensorflow/build:latest-python3.9 bash
wget https://storage.googleapis.com/tensorflow-nightly/prod/tensorflow/nightly_release/ubuntu_tfdkr/gpu_py39/6/20211117-000458/pkg/tf_nightly-2.8.0.dev20211117-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
pip install ./tf_nightly*
python
import tensorflow as tf
tf.config.list_physical_devices('GPU') |
st206973 | Here’s a parallel topic for SIG Build contributors we can also discuss here: container extensibility and collaboration. With regards to “what should be in the containers?”, I am strongly for saying that the “Officially Supported” Dockerfiles should only contain code that the DevInfra team has pledged to commit. We still need to decide exactly what this is, but here are some of the user stories I’ve considered and my own thoughts on whether they’ll get official support, based on matching our current testing needs:
Yes: Build and test on x86 (test targets need better definition)
Yes: Contributor utilities like pylint, clang-tidy rules
Yes: Support currently-receiving-security-upgrades release branches
Needs decision: custom-op functions for SIG Addons, IO, etc. (I want to get backing from leadership to guarantee support)
Needs decision: TF Lite tests / mobile builds (DevInfra is disconnected from this)
No: Other platforms like ARM, PowerPC, etc. DevInfra can’t support this.
I want the Dockerfiles to be good enough such that interested parties could copy the directory into a separate project that they maintain for their special needs (for example: minimized containers, containers for non-x86 platforms, containers for accelerators other than CUDA).
Any thoughts on any of this? |
st206974 | I’ve launched the TensorFlow Addons (WIP) GitHub Actions CI again with these images at:
github.com/tensorflow/addons
[WIP] Testing new build image
tensorflow:master ← seanpmorgan:new-build-image
opened
Jul 1, 2021
seanpmorgan
+2
-3
# Description
Just testing the py37 build
angerson:
Needs decision: custom-op functions for SIG Addons, IO, etc. (I want to get backing from leadership to guarantee support)
Currently we have some issues because the custom-ops images are unmaintained in TF. See more at:
github.com/tensorflow/addons
Use multipython image for dev container
tensorflow:master ← seanpmorgan:new-dev-base
opened
Nov 9, 2021
seanpmorgan
+3
-3
# Description
Previous container only supports up to py3.6 which TF2.7 is not… built for. TF team has stopped publishing these custom-op images since TF2.5.
Using the CUDA image makes the container larger, but it still uses the cpu only version of TF so it's accurate to call it dev_cpu. Building the gpu image would be pretty trivial now, but also this limits the maintenance burden on our team.
There is no multipython container without GPU installs:
https://console.cloud.google.com/gcr/images/tensorflow-testing/GLOBAL
angerson:
Here’s a parallel topic for SIG Build contributors we can also discuss here:
What I see is that with these images we are losing the small runtime and devel images: compared with the currently “officially” published images (~400/600 MB), the GPU/CUDA layers are not optional.
I suppose/hope that we also want to propose these images as a reproducible environment to prepare, compile, lint and test PRs for our community contributions. Since with these images the environments could be as close as possible to what we use in the TF CI (if we use these images), there is some hope that we could be almost exactly on the same page.
If that is the case, I suppose that also having low-overhead images, for when you want to contribute a quick PR, would still be valuable. Often you don’t need a GPU, or don’t want to consume a large GPU image, to contribute to TF, or you just want to minimize cloud costs and bootstrap waiting time when you are requesting cloud resources for a new development env, e.g. GitHub Codespaces or a Kubernetes devcontainer pod.
As a side note I still hope that we could integrate pre-commits hooks from:
github.com/tensorflow/tensorflow
Update TF official devel images to Ubuntu 20.04 and precommit hooks
tensorflow:master ← bhack:devel_ubuntu_20.04
opened
Apr 7, 2021
bhack
+79
-17
Update TF official devel images to Ubuntu 20.04 LTS and install pytlint.
`ci_…build.sh` is currently broker for `ci_sanity.sh` caused by a misalignment between CI sanity and local images on developer machine (See https://github.com/tensorflow/tensorflow/issues/47989).
Now that https://github.com/tensorflow/tensorflow/pull/48291 is merged this will let developer/contributors to locally execute python linting steps in the docker devel container with:
```
docker run --rm -it -v $PWD:/tensorflow -w /tensorflow tensorflow/tensorflow:devel tensorflow/tools/ci_build/ci_sanity.sh --pylint --incremental
```
It could be nice if later the full CI sanity could be run exactly on the same image where developer are working every day to avoid other potential misalignment.
If this works we could change the documentation for linting.
I could expand this PR or with a new one to introduce other steps (e.g. include `--clang_format` ?) so that we could have a working minimal configuration that could be also contributed in the repository as a pre-commit hook considering that the the whole `ci_sanity.sh` is too heavy to be run as a git hook.
/cc @mihaimaruseac @angerson
We have many formatting-request comments in PRs that require manual intervention. I hope that we could enforce these checks a little bit more on the local dev machine with pre-commit hooks, to lower the number of linting round-trip comments on the PR itself and the CI run cycles.
I hope that these new reproducible envs will enable read-only cache sharing from TensorFlow master, so that we can build TF in these envs in a reasonable amount of time on our local dev hardware or cloud resources when we want to contribute a PR to TensorFlow.
st206975 | If we check the layers’ size distribution, we have a single very large layer:
Every time step 25 is invalidated, we will again have a very large download and extraction time overhead (and probably disk usage?)
st206976 | I’ve also tried to make a build with the CPU recipe + remote cache inside these images at commit f3c361931fe449302953596e8f99fcd737b9390c (master):
bazel --bazelrc=/usertools/cpu.bazelrc build --config=sigbuild_remote_cache tensorflow/tools/pip_package:build_pip_package
I already see that we are getting many cache misses. How frequently is the remote cache updated? Is it only updated by the CI job orchestration for the nightly release?
I cannot see the orchestration script/scheduling as I can for GitHub Actions, but I suppose that if we only update it nightly we will be strongly conditioned by:
LLVM updates and bazel cache Build
As we are updating LLVM twice a day, I’ve tried to query bazel with:
bazel aquery "rdeps(//tensorflow:*,//third_party/llvm:*)" --include_aspects 2>/dev/null | grep Compiling | wc -l
I am not a bazel ninja so probably the query could be wrong or improved but I currently see 9938 files on master (CPU only).
What is the effect of this bi-daily rolling update on the average community contributor compiling workflow/environment and his bazel cache?
Would it be plausible to schedule your internal job (I mean the one without --remote_upload_local_results=false) on every master commit, or at least after any LLVM update/sync?
If not:
can we publish somewhere the updated nightly master commit so we know on which commit the remote cache is positioned?
Is it safe enough to contribute a PR starting from the “nightly commit”?
Edit:
Just the last point from the first quick pass.
For a developer image, or if we want to derive an official developer image, it is quite annoying to end up with root-owned files, created inside the container, on your host-mounted TF source path when you are back on the host. It could be OK if you always create a temporary source checkout, or if we suggest keeping the TF source in a named volume, but I suppose this will not be the main use case.
We already discussed this in the Build repository some months ago, and we have now also introduced a default user in the official Keras developer image.
We don’t have many upstream solutions, so I think we could introduce a default user. |
st206977 | the containers gain 4GB from CUDA
I’m punting this until we have usage data and rationale. Splitting the containers preemptively would add a lot of maintenance burden. Our internal CI, which I’m targeting first, doesn’t need it.
the remote cache is not consistently useful yet
I have been wondering about how this will turn out. Right now, the cache gets pushed once per day with an inconsistent commit. Using nightly won’t come until the next milestone of work is done. I think what we’ll probably do in the future is to make sure we push the cache with every day’s nightly tag and encourage developers to start from there. Most of the time, I think that should give a good improvement over the current situation.
the container creates root-owned files
This is a low-priority task for the moment. For my work I am currently focusing on our internal CI, where the permissions are not a problem. Feel free to work on a PR, though. I don’t want to accept the hassle of maintaining a user in the image unless it can very easily match the user and group inside the container to the permissions on the volumes, e.g. if my user is 396220:89939 then I shouldn’t end up with files owned by 1000:1000.
formatting and precommit hooks aren’t available yet
Those are still on the roadmap, but not until Q1 at the earliest. |
st206978 | I’m punting this until we have usage data and rationale. Splitting the containers preemptively would add a lot of maintenance burden. Our internal CI, which I’m targeting first, doesn’t need it.
If we still think that these two needs conflict instead of converging, we are probably missing a great opportunity in this refactoring. I hope we can find the right equilibrium so it stays easy to be on the same page between the local environment we distribute, in which contributors prepare PRs, and the automation that validates them (CI).
I think this CI-vs-dev-env approach could create some friction quite soon, as the CI use case will be dominant once it is “in production”. IMHO it is better to co-design earlier if possible.
I have been wondering about how this will turn out. Right now, the cache gets pushed once per day with an inconsistent commit. Using nightly won’t come until the next milestone of work is done. I think what we’ll probably do in the future is to make sure we push the cache with every day’s nightly tag and encourage developers to start from there. Most of the time, I think that should give a good improvement over the current situation.
Not all commits are the same, as you can see in the mentioned forum thread. It also seems, from the small experiments @mihaimaruseac reported in the same thread, that the LLVM “daily syncs” invalidate many targets (and so the cache).
Working with a GitHub PR on top of “the last” nightly could be OK, but we really need to understand what is required when we need to resolve conflicts, or if and when we ask a developer to rebase or to merge master into a PR under review.
I don’t want to accept the hassle of maintaining a user in the image unless it can very easily match the user and group inside the container to the permissions on the volumes, e.g. if my user is 396220:89939 then I shouldn’t end up with files owned by 1000:1000.
I suppose you know that this is not possible, as was explored in a quite long thread upstream:
github.com/moby/moby
Add ability to mount volume as user other than root
opened
Oct 17, 2013
mingfang
area/api
area/kernel
exp/expert
kind/enhancement
area/volumes
Use case: mount a volume from host to container for use by apache as www user.
T…he problem is currently all mounts are mounted as root inside the container.
For example, this command
docker run -v /tmp:/var/www ubuntu stat -c "%U %G" /var/www
will print "root root"
I need to mount it as user www inside the container.
If we really don’t want a standard 1000 user probably it could be better to not suggest, when we will have again an official devel image, to use an host path with the TF source mounted in the container.
We could suggest to checkout the source in a namedVolume directly so that we don’t mix host path permission with the root permission in the container.
EDIT:
An alternative:
we could suggest to the user to use Docker rootless with uidmap as now it is not experimental anymore.
I still think that having a default user will cover more frequent use cases also for the standard docker root installation.
But if we don’t want to support this we could at least strongly suggest to use Docker rootless as it will not go to create all the permission problem on new files created in the container on an host shared path as we have with the current setup/docs.
Note:
These two alternative solutions are mutually incompatible as with the rootless currently the host user is mapped only with root in the container.
See more:
github.com/moby/moby
Flexible userns uid / subuid mapping
opened
Sep 26, 2020
foresto
kind/enhancement
area/security/userns
area/rootless
I use rootless docker, and often need the unprivileged user running dockerd to s…hare resources (e.g. bind mounted files) with an unprivileged container user. Unfortunately, I can't do this by simply mapping the host uid into the container's user namespace, because docker seems to always map my host uid to the container's uid 0.
Running my container application as uid 0 (in the userns) would allow it to share resources with the host user, but that practice is a significant security risk. The kernel has many parts that were not designed with namespaces in mind, and uid 0 in a userns has been exploited to gain privilege escalation over and over again. [(example)](http://www.halfdog.net/Security/2015/UserNamespaceOverlayfsSetuidWriteExec/) [(example)](https://www.cvedetails.com/cve/CVE-2018-18955/) [(examples)](https://www.debian.org/security/2017/dsa-4073)
For shared files, I have been working around the problem as follows:
* Start the container in order to create the user namespace
* Examine the container process to determine which subuid & subgid docker uses for it
* Change ownership of all shared files and directories to that subuid & subgid
* Make all shared files and directories group-writable
* Make all shared directories setgid, so new files/dirs will have the same subgid
* Set up a script to set umask 002 in the container
* Restart the container with bind mounts for the shared directories
* Remember to set umask 002 in the host whenever working with shared files (and sometimes forget, leading to permissions errors some time later)
* Make the host user a member of the subgid group
* (optional) Name the subgid in /etc/group, for easy identification in `ls -l` output
That works, but it's an awful lot of annoying hoops to jump through, and it's error-prone.
All this would be much easier if docker allowed configurable mappings between host and userns uids/gids. LXC has offered this functionality for years, via the `lxc.idmap` config file option.
Even a very limited approach would we a welcome relief. Perhaps making only one uid mapping configurable. Or perhaps allowing only the host user's uid to have a configurable target uid in the userns (instead of always being mapped to uid 0). Podman offers this via the `--userns=keep-id` option.
#27285 seems related, but I think the OP there is asking for a simpler interface to existing functionality (choosing whose subuid ranges are used), which is not the same thing. I'm asking for something new: the ability to map at least one non-subuid into the container as a configurable non-root uid. |
st206979 | In the meantime I have prepared two PRs and a suggestion for the ctrl+c issue in the Docs:
Baseline/CPU and CUDA stage separation
Separate dev baseline and cuda layers by bhack · Pull Request #47 · tensorflow/build · GitHub
A PR to monitor the required time to build with remote cache on Standard_DS2_v2 machine:
Compile cpu by bhack · Pull Request #48 · tensorflow/build · GitHub
Docker exec/Ctrl+c proposals:
Docker exec ctrl+c suggestion · Issue #49 · tensorflow/build · GitHub |
st206980 | Update: I switched TensorFlow’s tf-nightly build process over last night, and the resulting tf-nightly packages 2 were built with Docker. |
st206981 | Can we tag the nightly commit on the Github repository?
Because on GitHub it is not possible to shallow-clone a specific commit hash, but it is possible to shallow-clone a tag. It would also help to quickly identify the last nightly wheel related to a specific tag/commit in the repo.
Currently we need to clone the whole TF repository every time and then hard-reset and check out the specific commit.
Also, it is not clear how to quickly identify the commit hash related to a nightly other than by installing the nightly wheels.
This is why in the new Docker docs we claim:
The nightly tag on GitHub is not related to the tf-nightly packages. |
st206982 | angerson:
Update: I switched TensorFlow’s tf-nightly build process over last night, and the resulting tf-nightly packages were built with Docker.
I don’t know if the commit is correct but I tried locally this:
docker run --rm -it tensorflow/build:latest-python3.9 /bin/bash -c "git clone https://github.com/tensorflow/tensorflow.git --single-branch /tf/tensorflow && cd /tf/tensorflow/ && git reset --hard 13adf6272a4 && bazel --bazelrc=/usertools/cpu.bazelrc build --config=sigbuild_remote_cache tensorflow/tools/pip_package:build_pip_package"
At some point at 13790 processed I got:
INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (434 packages loaded, 27055 targets configured).
INFO: Found 1 target...
WARNING: Reading from Remote Cache:
BulkTransferException
ERROR: /tf/tensorflow/tensorflow/compiler/mlir/tensorflow/BUILD:572:11: C++ compilation of rule '//tensorflow/compiler/mlir/tensorflow:tensorflow_ops' failed (Exit 4): crosstool_wrapper_driver_is_not_gcc failed: error executing command external/ubuntu18.04-gcc7_manylinux2010-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc -MD -MF ... (remaining 196 argument(s) skipped)
gcc: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 4408.405s, Critical Path: 1140.54s
INFO: 13790 processes: 8985 remote cache hit, 3772 internal, 1033 local.
FAILED: Build did NOT complete successfully |
st206983 | I’ve tried again with today’s nightly, using a totally reproducible command (I suppose you are writing the cache at the same commit as the available nightly wheel):
docker run --rm -it tensorflow/build:latest-python3.9 /bin/bash -c "git clone --depth 200 https://github.com/tensorflow/tensorflow.git --single-branch /tf/tensorflow && pip install tf-nightly-cpu && python -c \"import tensorflow as tf; import re; print(re.search('g(\S+)',tf.__git_version__).group(1))\" | GIT_DIR=/tf/tensorflow/.git xargs git reset --hard && cd /tf/tensorflow && bazel --bazelrc=/usertools/cpu.bazelrc build --config=sigbuild_remote_cache --verbose_failures tensorflow/tools/pip_package:build_pip_package"
But I start to not hit the cache around 5000/6000 actions… e.g.:
tensorflow/compiler/xla/service/* |
st206984 | I’ve added an execution-log PR to debug these cache misses: on your side, in the CI, between two runs of a build at the same commit; and on our side, on a different machine but with the same Docker environment.
github.com/tensorflow/build
Add an execution log
tensorflow:master ← bhack:execution_log
opened
Dec 2, 2021
bhack
+12
-0
This will store the execution log in the mounted volume `/tf/pkg/` like the new… stored build profile.
This will help debug cache misses on subsequent CI builds on the same or different commit and on local dev machines.
See more at step 2 and 4 in:
https://docs.bazel.build/versions/main/remote-caching-debug.html
I hope that you could open this log from the CI at some point.
Thanks |
st206985 | Thanks @angerson! Just FYI SIG Addons has switched to using these images for our build:
github.com/tensorflow/addons
Utilize SIG Build Docker Images 1
tensorflow:master ← seanpmorgan:new-build-image
opened
Jul 1, 2021
seanpmorgan
+15
-25
# Description
Move to managed SIG Build images.
Do want to echo Stefano’s comments w.r.t a CPU and GPU image since we’re unable to support GitHub code spaces with such a large docker image. |
st206986 | What I currently have and trying to do:
When I receive a request from a client to the model in tensorflow-serving, I first need to process the text using 13 regexes, then pass it through tf.keras.preprocessing.text.Tokenizer to convert it to numbers (tokens), and then pass it to tf.keras.preprocessing.sequence.pad_sequences to add 0s at the end of each array (for the sentences whose length doesn’t match the input the model expects, in a batch of inputs). Then this (a single sentence or a batch of sentences, as tokens) is fed to a tf.keras model to get some probabilities as outputs. I then need to map these probabilities (with different thresholds for different units) to texts and return them to the client.
What problems am I currently facing trying to accomplish above:
While trying to put together all that to be able to serve the model using tensorflow-serving, I learned that some parts can be converted to tensorflow functions, but not all of it.
regexes: I still couldn’t figure out where and how to put my regexes to be able to manipulate the text.
tokenizer: I learned from some blogs and SO questions, that tf.lookup.StaticHashTable can be used for this purpose.
pad_sequences: no help with this too.
post-processing: I could find very little information to do this.
I read the beginner and advanced blogs on the tensorflow-transform tutorials page, but neither of them mentioned how to link those tft functions to the tf.keras model while saving it. I could also find some information about adding pre-processing for serving, but it all involved TensorFlow code and some workarounds, and it didn’t cover what I am trying to achieve, even indirectly.
I can provide more information as required.
How do I add these steps to the graph, while saving the model? |
st206987 | Solved by jeongukjae in post #5
(truncated preview of the solution; the full answer is quoted below) |
st206988 | You can use this function (tf.strings.regex_replace) if you want to manipulate texts using regular expressions in the TF graph (see the sketch after this list).
SentencePiece and WordPiece tokenizers are useful for me. I recommend these tokenizers to you.
text.pad_model_inputs 1 is very useful to pad model inputs.
I don’t know what task you want to solve, so I’m just guessing the task. If you are solving a sequence tagging task, I think you can use tensorflow-text’s tokenizers and use tokenize_with_offsets to get the offsets of each token. Then you can use those offsets to map probabilities to texts. (For example, you can use tokenize_with_offsets in WordpieceTokenizer.) |
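A short sketch of points 1 and 3 above (the pattern and token ids are toy examples only):

```python
import tensorflow as tf
import tensorflow_text as tf_text

# Point 1: regex manipulation inside the TF graph.
texts = tf.constant(["Hello,   world!", "some   text"])
cleaned = tf.strings.regex_replace(texts, r"\s+", " ")   # collapse whitespace

# Point 3: padding ragged token ids to a fixed length, with a mask.
token_ids = tf.ragged.constant([[2, 7, 9], [4, 5]], dtype=tf.int64)
padded, mask = tf_text.pad_model_inputs(token_ids, max_seq_length=4)
# padded -> [[2, 7, 9, 0], [4, 5, 0, 0]], mask -> [[1, 1, 1, 0], [1, 1, 0, 0]]
```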
st206989 | thanks for the answer and suggestions.
1 & 3. I will try to adopt tf.strings.regex_replace for the regex operations on my text, and text.pad_model_inputs. But how do I put this inside the graph when doing tf.keras.models.save_model(), or tell TensorFlow that I have some regexes in variables that have to be included in the graph?
4. Yes, I have been doing sequence tagging, multi-label classification and multi-class classification, and this question is aimed at learning to serve those models with tf-serving. So, for example, with multi-label, I want to use the logits from the tf.keras model and, if a value is above a threshold of 0.5, label the input text as belonging to a label (texts from a dictionary); and I also have different thresholds for different labels. Like the previous comment, where and how do I include the logic/code for this while saving the model?
2. I didn’t know about SentencePiece and WordPiece tokenizers. You mean these packages/libraries have been useful for you? Sure, I will adopt them.
st206990 | 1 & 3 & 4. After training the model, you can save graph with pre-processing and post-processing steps like below
...
...
# some training steps
model = ...
model.compile(...)
model.fit(...)
@tf.function
def inference_function(text):
# do some preprocessing
text = tf.strings.regex_replace(text, # ... some regex patterns...)
token_ids, starts, ends = tokenizer.tokenize_with_offsets(text)
model_inputs = # prepare model inputs using token_ids
# inference model
model_outputs = model(model_inputs)
outputs = # do some post-processing with starts, ends, and model_outputs
return outputs
# https://www.tensorflow.org/api_docs/python/tf/keras/Model#save
model.save(
"some path to save the model",
signatures={
"inference_fn": inference_function.get_concrete_function(tf.TensorSpec([None], dtype=tf.string)),
}
)
Yes! After training the sentencepiece model, you can load and use it with text.SentencepieceTokenizer 1 in TF graph. |
st206991 | thanks for the code.
some small questions:
If I write more such complex functions decorated with @tf.function, and as long as I stick to functions and classes from TensorFlow and its libraries (like tf, tf.keras, tf-addons, tf-text, tf-transform, etc.), will the SavedModel be loadable in other environments? If not, where can I find which parts of TensorFlow code can and can’t be used in these functions?
Are you telling me that, if I had trained and used SentencePiece tokenizers, I can use them in pre-processing functions and in the tf-serving graph using text.SentencepieceTokenizer?
st206992 | Yes! But you have to register the required ops. If you used tf-text’s operations in the SavedModel, you have to register tf-text’s ops to load it(example 2).
Yes, exactly! |
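A minimal loading sketch (the path and the signature name follow the earlier example; the key point is that importing tensorflow_text registers its ops before the SavedModel is loaded):

```python
import tensorflow as tf
import tensorflow_text as tf_text  # noqa: F401 -- the import registers the tf-text ops

loaded = tf.saved_model.load("some path to save the model")
inference_fn = loaded.signatures["inference_fn"]
outputs = inference_fn(text=tf.constant(["some input text"]))
```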
st206993 | What’s the proper/best architecture at server side to serve deep-learning models written mostly in tf.keras with tensorflow-serving? Is it advisable to use any specifiic web-framework(e.g., fastAPI, Flask)? Do I use tf-serving along with a WSGI(e.g,. gunicorn) with some set configuration(like worker type)? Is it advised to put tf-serving(or gunicron with tf-serving or gunicorn with fastapi with tf-serving) behind some web server or reverse proxy like nginx(or nginx API Gateway)?
I would like to know what type of configurations have worked for most people, to take deployment decisions at my side accordingly!!
I need to serve multiple deep-learning models at the same time, some on AWS EC2, some using Kubernetes on AWS EKS. The models change pretty quickly (some need to be changed as often as every week, while some go on for months or even a couple of years). Some models will be accessed by hundreds of thousands of people every second, while other models can be served on a single machine. So TF model usage is relatively atypical where I need to deploy. |
st206994 | Hi everyone,
For a personal project I’m trying to recreate the NBeats architecture into Keras, and I don’t think I’m doing it correctly but am not sure why.
The page I’m working off of as a ground truth can be found here: https://github.com/ElementAI/N-BEATS/blob/master/models/nbeats.py
Here’s the starter PyTorch code that I’m trying to convert:
class NBeatsBlock(t.nn.Module):
def __init__(self,
input_size,
theta_size: int,
basis_function: t.nn.Module,
layers: int,
layer_size: int):
super().__init__()
self.layers = t.nn.ModuleList([t.nn.Linear(in_features=input_size, out_features=layer_size)] +
[t.nn.Linear(in_features=layer_size, out_features=layer_size)
for _ in range(layers - 1)])
self.basis_parameters = t.nn.Linear(in_features=layer_size, out_features=theta_size)
self.basis_function = basis_function
def forward(self, x: t.Tensor) -> Tuple[t.Tensor, t.Tensor]:
block_input = x
for layer in self.layers:
block_input = t.relu(layer(block_input))
basis_parameters = self.basis_parameters(block_input)
return self.basis_function(basis_parameters)
class NBeats(t.nn.Module):
def __init__(self, blocks: t.nn.ModuleList):
super().__init__()
self.blocks = blocks
def forward(self, x: t.Tensor, input_mask: t.Tensor) -> t.Tensor:
residuals = x.flip(dims=(1,))
input_mask = input_mask.flip(dims=(1,))
forecast = x[:, -1:]
for i, block in enumerate(self.blocks):
backcast, block_forecast = block(residuals)
residuals = (residuals - backcast) * input_mask
forecast = forecast + block_forecast
return forecast
class GenericBasis(t.nn.Module):
def __init__(self, backcast_size: int, forecast_size: int):
super().__init__()
self.backcast_size = backcast_size
self.forecast_size = forecast_size
def forward(self, theta: t.Tensor):
return theta[:, :self.backcast_size], theta[:, -self.forecast_size:]
Here’s the Keras code I have to translate:
class NBeatsBlock(keras.layers.Layer):
def __init__(self,
theta_size: int,
basis_function: keras.layers.Layer,
layer_size: int = 4):
super(NBeatsBlock, self).__init__()
self.layers_ = [keras.layers.Dense(layer_size, activation = 'relu')
for i in range(layer_size)]
self.basis_parameters = keras.layers.Dense(theta_size)
self.basis_function = basis_function
def call(self, inputs):
x = self.layers_[0](inputs)
for layer in self.layers_[1:]:
x = layer(x)
x = self.basis_parameters(x)
return self.basis_function(x)
class NBeats(keras.layers.Layer):
def __init__(self,
blocksize: int,
theta_size: int,
basis_function: keras.layers.Layer):
super(NBeats, self).__init__()
self.blocks = [NBeatsBlock(theta_size = theta_size, basis_function = basis_function) for i in range(blocksize)]
def call(self, inputs):
residuals = K.reverse(inputs, axes = 0)
forecast = inputs[:, -1:]
for block in self.blocks:
backcast, block_forecast = block(residuals)
residuals = residuals - backcast
forecast = forecast + block_forecast
return forecast
class GenericBasis(keras.layers.Layer):
def __init__(self, backcast_size: int, forecast_size: int):
super().__init__()
self.backcast_size = backcast_size
self.forecast_size = forecast_size
def call(self, inputs):
return inputs[:, :self.backcast_size], inputs[:, -self.forecast_size:]
If I try and make a model from the Keras code it works, but I don’t think it’s constructed correctly.
Here’s a simple model:
inputs = Input(shape = (1, ))
nbeats = NBeats(blocksize = 4, theta_size = 7, basis_function = GenericBasis(7, 7))(inputs)
out = keras.layers.Dense(7)(nbeats)
model = Model(inputs, out)
My concern is that the internal NBeatsBlock layers are not actually being used in the model I just created.
My model summary reads like this:
(model summary screenshot)
And as you can see there’s nothing that indicates the internal Dense layers are there.
And if I plot the model I get the following diagram:
So I don’t think I’m doing things correctly but I’m also not sure where I’m going wrong with how I’m constructing it. I’m guessing there are small differences in how PyTorch & Keras work that I’m not picking up on. |
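One quick sanity check (a sketch, assuming the model built just above) is to confirm that the nested blocks still own trainable weights even though summary() hides them:

```python
nbeats_layer = model.layers[1]                 # the NBeats layer from the model above
print(len(nbeats_layer.trainable_weights))     # counts weights across all nested blocks
print(model.count_params())                    # total parameters, including the nested Dense layers
```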
st206995 | I’ve not personally verified the correct implementation but there was a parallel Pytorch and Keras impl at:
GitHub
GitHub - philipperemy/n-beats: Keras/Pytorch implementation of N-BEATS:... 1
Keras/Pytorch implementation of N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. - GitHub - philipperemy/n-beats: Keras/Pytorch implementation of N-BEATS: Neural ... |
st206996 | Also please check how to print summaries with nested models:
github.com/keras-team/keras
Print expanded nested layers feature in models.summary() 2
keras-team:master ← krishrustagi:changes
opened
Aug 26, 2021
krishrustagi
+191
-19
Issue #15250. I am making this PR.
The changes have been made using the recursi…on approach. All the nested layers get summarized one by one.
Or plot_model:
github.com/keras-team/keras
[P] [RELNOTES] Add ability to visualize wrapped models with plot_model function 1
keras-team:master ← yoks:master
opened
Oct 19, 2018
yoks
+73
-20
### Summary
Added two new optional arguments to `plot_model` function in `vis_u…tils.py`, to be able to visualize nested (wrapped) models. Also adds `dpi` param to have ability to produce high resolution graphs.
For example giving this model:
```python
sentence_input = Input(shape=(2, 3), dtype='float32', name="input2")
l_lstm = Bidirectional(LSTM(16))(sentence_input)
sent_encoder = Model(sentence_input, l_lstm)
review_input = Input(shape=(5, 2, 3), dtype='float32')
review_encoder = TimeDistributed(sent_encoder)(review_input)
l_lstm_sent = LSTM(16)(review_encoder)
preds = Dense(5, activation='softmax')(l_lstm_sent)
model = Model(review_input, preds)
vis_utils.plot_model(model, to_file='model3.png', show_shapes=True,
expand_nested=True, dpi=300)
```
Will produce:

While calling it without new arguments will produce plot as before:
```python
vis_utils.plot_model(model, to_file='model3.png', show_shapes=True)
```

### Related Issues
#5937
### PR Overview
- [x] This PR requires new unit tests [y/n] (make sure tests are included)
- [x] This PR requires to update the documentation [y/n] (make sure the docs are up-to-date)
- [x] This PR is backwards compatible [y/n]
- [x] This PR changes the current API [y/n] (all API changes need to be approved by fchollet) |
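A short usage sketch of the options referenced above (expand_nested for Model.summary is only available in recent Keras releases, so check your version; plot_model also needs pydot and graphviz installed):

```python
import tensorflow as tf

# Toy nested model just to demonstrate the flags.
inner = tf.keras.Sequential([tf.keras.layers.Dense(4), tf.keras.layers.Dense(2)])
inputs = tf.keras.Input(shape=(8,))
outer = tf.keras.Model(inputs, inner(inputs))

outer.summary(expand_nested=True)              # shows the layers inside `inner`
tf.keras.utils.plot_model(outer, show_shapes=True, expand_nested=True)
```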
st206997 | Not an actual solution but some pointer.
stackoverflow.com
Converting PyTorch Code to Keras Code for Neural Net With Nested Layers 2
tensorflow, keras, deep-learning, pytorch
answered by
M.Innat
on 06:29PM - 21 Dec 21 UTC |
st206998 | Hi everyone, I’m new to the Tensorflow Keras API, and thought I would use it with the tensorflow-metal plugin from Apple to train a custom MobileNetV3Small model on my M1 Pro MacBook for the task of image classification. This is for my app DeTeXt, that classifies drawings into LaTeX symbols. Currently I’m using a MobileNetV2 model that I had trained on a GPU cluster using the PyTorch API (code here 1).
Here is the code I use to train my custom network from scratch on the images I have:
import tensorflow as tf
import pdb
EPOCHS = 5
BATCH_SIZE = 128
LEARNING_RATE = 0.003
SEED=1220
if __name__ == '__main__':
# Load train and validation data
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
'/Volumes/detext/drawings/',
color_mode="grayscale",
seed=SEED,
batch_size=BATCH_SIZE,
labels='inferred',
label_mode='int',
image_size=(200,300),
validation_split=0.1,
subset='training')
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
'/Volumes/detext/drawings/',
color_mode="grayscale",
seed=SEED,
batch_size=BATCH_SIZE,
labels='inferred',
label_mode='int',
image_size=(200,300),
validation_split=0.1,
subset='validation')
# Get the class names
class_names = train_ds.class_names
num_classes = len(class_names)
# Create model
model = tf.keras.applications.MobileNetV3Small(
input_shape=(200,300,1), alpha=1.0, minimalistic=False,
include_top=True, weights=None, input_tensor=None, classes=num_classes,
pooling=None, dropout_rate=0.2, classifier_activation="softmax",
include_preprocessing=True)
# Compile model
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
# Training
model.fit(train_ds, epochs=EPOCHS, validation_data=val_ds)
model.save('./saved_model3/')
While the training runs smooth and fast with the metal plugin, the validation accuracy is very low after 5 epochs, and I suspect it is either predicting the same class every time, or there is an error somewhere in my setup above. I have tried rescaling the inputs myself (and removing rescaling layer from model), but no matter what I try, the validation accuracy it outputs is really low. Here is the output (warnings and all) after 2 epochs:
Found 210454 files belonging to 1098 classes.
Using 189409 files for training.
Metal device set to: Apple M1 Pro
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
2021-12-16 10:02:46.369476: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-12-16 10:02:46.369603: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 210454 files belonging to 1098 classes.
Using 21045 files for validation.
Epoch 1/2
2021-12-16 10:02:50.610564: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2021-12-16 10:02:50.619328: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2021-12-16 10:02:50.619628: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1480/1480 [==============================] - ETA: 0s - loss: 1.7621 - sparse_categorical_accuracy: 0.57022021-12-16 10:12:58.720162: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1480/1480 [==============================] - 626s 422ms/step - loss: 1.7621 - sparse_categorical_accuracy: 0.5702 - val_loss: 9.5837 - val_sparse_categorical_accuracy: 0.0052
Epoch 2/2
1480/1480 [==============================] - 622s 420ms/step - loss: 1.0791 - sparse_categorical_accuracy: 0.6758 - val_loss: 7.3651 - val_sparse_categorical_accuracy: 0.0423
2021-12-16 10:23:40.260143: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
/Users/venkat/miniforge3/envs/tf-metal/lib/python3.9/site-packages/keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override '
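A minimal sketch of the "same class every time" check mentioned above (not part of the original script; it assumes the model and val_ds objects defined earlier and would be run after fit):

import numpy as np

# Collect the argmax prediction for every validation batch and look at how the
# predicted labels are spread across the classes.
pred_classes = []
for images, _ in val_ds:
    probs = model.predict_on_batch(images)          # softmax outputs
    pred_classes.append(np.argmax(probs, axis=-1))
pred_classes = np.concatenate(pred_classes)

unique, counts = np.unique(pred_classes, return_counts=True)
print("distinct classes predicted:", len(unique))
print(f"most frequent class covers {counts.max() / counts.sum():.1%} of the validation set")

If a single class dominates here, the problem is a genuine prediction collapse rather than a metric or bookkeeping issue.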
For reference, I was getting a validation micro-F1 (which is the same as accuracy here) of over 60% with MobileNetV2 in PyTorch. Anyone have any idea what I’m doing wrong here? |
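Aside on the metric claim above: for single-label multiclass predictions, micro-averaged F1 does reduce to plain accuracy, so the PyTorch number is directly comparable. A tiny check, assuming scikit-learn is available:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 10, size=1_000)   # exactly one label per sample
y_pred = rng.integers(0, 10, size=1_000)

# With one true and one predicted label per sample, the global TP/FP/FN counts
# make micro-F1 equal to the fraction of correct predictions, i.e. accuracy.
assert np.isclose(f1_score(y_true, y_pred, average="micro"),
                  accuracy_score(y_true, y_pred))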
st206999 | The issue seems to be specific to certain types of operations/layers in TensorFlow, and it shows up specifically in the validation accuracy (similar to this issue 2). When I build my own custom model with convolutions like so:
from tensorflow.keras import Sequential, layers

# Assumes IMG_HEIGHT, IMG_WIDTH and num_classes match the data pipeline above
# (200x300 grayscale images and the inferred class count).
model = Sequential([
layers.Rescaling(1./255, input_shape=(IMG_HEIGHT, IMG_WIDTH, 1)),
layers.Conv2D(16, 1, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 1, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 1, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes),
layers.Softmax()
])
training proceeds as expected, with a high validation accuracy as well. Below is the output for the above model:
Found 210454 files belonging to 1098 classes.
Metal device set to: Apple M1 Pro
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
2021-12-21 12:27:24.005759: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-12-21 12:27:24.006206: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 210454 files belonging to 1098 classes.
Using 31568 files for validation.
2021-12-21 12:27:26.965648: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2021-12-21 12:27:26.968717: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2021-12-21 12:27:26.969214: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - ETA: 0s - loss: 2.1246 - sparse_categorical_accuracy: 0.5273
2021-12-21 12:32:57.475358: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - 353s 214ms/step - loss: 2.1246 - sparse_categorical_accuracy: 0.5273 - val_loss: 1.3041 - val_sparse_categorical_accuracy: 0.6558
2021-12-21 12:33:19.600146: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
However, the very same code with the MobileNetV3Small model (instead of my custom model) produces the following output:
Found 210454 files belonging to 1098 classes.
Metal device set to: Apple M1 Pro
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
2021-12-21 12:34:46.754598: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2021-12-21 12:34:46.754793: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Found 210454 files belonging to 1098 classes.
Using 31568 files for validation.
2021-12-21 12:34:49.742015: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
2021-12-21 12:34:49.747397: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2021-12-21 12:34:49.747606: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - ETA: 0s - loss: 2.4072 - sparse_categorical_accuracy: 0.4672
2021-12-21 12:41:28.137948: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.
1645/1645 [==============================] - 415s 252ms/step - loss: 2.4072 - sparse_categorical_accuracy: 0.4672 - val_loss: 21.6091 - val_sparse_categorical_accuracy: 0.0131
2021-12-21 12:41:46.017580: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
/Users/venkat/miniforge3/envs/tf-metal/lib/python3.9/site-packages/keras/utils/generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override '
The validation loss/accuracy is hilariously bad, and I find that the model constantly predicts the same class. My guess is that MobileNetV3Small contains some operations/layers that don’t work well with tensorflow-metal for whatever reason, and only Apple engineers can fix this problem at a low level. Two quick checks that could narrow this down are sketched below. |
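Here is a minimal sketch of those two checks (not from the original posts; it assumes the model, train_ds and val_ds objects from the training script above):

import tensorflow as tf

# Check 1: evaluate the *training* split in inference mode. MobileNetV3Small
# contains BatchNormalization layers while the custom CNN above does not, so a
# large gap between the training-progress accuracy and this number points at a
# training- vs inference-mode discrepancy rather than at the data pipeline.
train_loss, train_acc = model.evaluate(train_ds)
print(f"train split, inference mode: accuracy={train_acc:.3f}")

# Check 2: repeat the validation pass with ops pinned to the CPU. If accuracy
# recovers, the metal GPU kernels are the likely culprit. Pinning via tf.device
# is best-effort once the model has already run on the GPU; a stricter test is
# to rerun the whole script in a fresh process after calling
# tf.config.set_visible_devices([], "GPU") before any other TensorFlow work.
with tf.device("/CPU:0"):
    val_loss, val_acc = model.evaluate(val_ds)
print(f"validation split on CPU: accuracy={val_acc:.3f}")

If check 1 already looks bad while the progress-bar accuracy is high, the problem is in the network’s inference-mode behaviour; if only check 2 recovers, that points at tensorflow-metal and is worth reporting to Apple.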