st206500
Good question @Harsh_Banka. I wanted to ask the same thing. I was checking out the ML content under Google Developers on YouTube; it seemed pretty solid and easy to understand. I'm sure it will be a great start.
st206501
First, thanks for the forum. It was much needed. Second, there could be another tag for TFLite. Third, we have a TFLite working group that consists of ML GDEs, Googlers like Khanh and Hoi, and other community contributors who are passionate about the TinyML revolution. The working group has emerged as one of the most active ones, and together we have been able to:
- Develop mobile applications that demonstrate best and advanced practices of TFLite.
- Publish state-of-the-art TFLite models (from different data modalities) on TensorFlow Hub.
- Write in-depth technical tutorials.
To better recognize these efforts, I think it might be a great option to also host something similar to what Jason does with TF.js Show & Tell. Looking forward to hearing what others have to say.
st206502
I'm not able to include links in my post. But in case someone is interested in seeing where we track all of the aforementioned efforts, head over to ml-gde/e2e-tflite-tutorials on GitHub.
st206503
This is a great idea, Sayak! It would be awesome to feature outstanding works from the TF Lite Community on a regular basis. I can support this effort if the team agrees to launch it.
st206504
I liked the idea too! The TFLite community is doing some very cool things! About the tag, that's also a good idea. Lastly, about the link, I don't know why you can't add it, but I edited it for you. Hope that's not a problem.
st206505
Probably because I haven't been promoted to the ML-GDE category while being one. Maybe one of the moderators could help.
st206506
I like the idea. I guess the format is pretty open, so you just need to start hosting it. You could even use the ML GDE YouTube channel for such content.
st206507
@Sayak_Paul sorry about the delay, fixed! I think I’ll also lift the restriction on links for new users for the time being since we’re still approving all signups. This is primarily a spam mitigation measure.
st206508
Thanks for the nice suggestions! For the tags, it would be nice to have tags for TensorFlow Lite Micro and TensorFlow Model Optimization as well.
st206509
I think starting it from the official channels would help us to garner some initial recognition. I agree the format is pretty wide open.
st206510
@Sayak_Paul I agree. And if it is not considered to launch from official channels, I think we can get some promotion support from Google Developers and TensorFlow social media accounts.
st206511
We already have “model_optimization” and “microcontroller” tags. Just created a “tflite_micro” tag.
st206512
Have you already tested GitHub - SarderLab/tf-WSI-dataset-utils: An optimized pipeline for working with Whole Slide Image (WSI) data in TensorFlow?
st206513
Hey guys, I want to build a recommendation system using TF or TF.js for my web application. Can I get suggestions on blogs or videos I could refer to?
st206514
Have you already explored TensorFlow Recommenders (TFRS)? blog.tensorflow.org: "Introducing TensorFlow Recommenders", a library for building flexible and powerful recommender models.
st206515
I have a TF 1.14 model which uses tf.contrib.rnn.LSTMBlockFusedCell, which I am trying to replicate in TF 2.4. It is a variant of DeepSpeech v0.5.1. Both models have one LSTM and five Dense layers. The layer weights are loaded from a DeepSpeech v0.5.1 checkpoint into the TF 2.4 model, taking care to split kernel from recurrent_kernel and re-ordering the blocks (i,c,f,o) → (i,f,c,o) as suggested by a kind person here. The models take the same input, and all other layers (the five Dense layers) have the same inputs and outputs; only the LSTM layers have different outputs in the two models. The final outputs are of the same order of magnitude, but the TF 2.4 result is not close to correct, that is: it does not translate audio to text, which the TF 1.14 model does almost satisfactorily. Does anyone here know whether tf.keras.layers.LSTM and tf.contrib.rnn.LSTMBlockFusedCell are in fact designed to work identically? Am I wasting my time trying to get the same results?
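[Editor's note: as a minimal sketch of the split-and-reorder procedure described above, assuming the fused kernel has shape [input_dim + units, 4 * units] as in LSTMBlockFusedCell; the helper name is hypothetical, not from the original post:]

import numpy as np

def convert_fused_lstm_weights(fused_kernel, fused_bias, input_dim):
    # Hypothetical helper: split a LSTMBlockFusedCell kernel into Keras'
    # kernel/recurrent_kernel and reorder gate blocks (i, c, f, o) -> (i, f, c, o).
    # Rows [0, input_dim) act on the input; the remaining rows act on the hidden state.
    kernel = fused_kernel[:input_dim]
    recurrent_kernel = fused_kernel[input_dim:]

    def reorder_gates(w):
        # Split the four gate blocks along the last axis and swap c and f.
        i, c, f, o = np.split(w, 4, axis=-1)
        return np.concatenate([i, f, c, o], axis=-1)

    return (reorder_gates(kernel),
            reorder_gates(recurrent_kernel),
            reorder_gates(fused_bias))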
st206516
I don't know if you might be interested in exploring the upstream diff on how they are refactoring the model (to also remove the old contrib): github.com/mozilla/DeepSpeech, "Low touch upgrade to TensorFlow 2.3" (mozilla:master ← mozilla:low-touch-r2.3, opened Jan 2, 2021 by reuben, +476 -1252): "Keeps changes to a minimum by leveraging the fact that under a `tfv1.Session` object, TensorFlow v1 style meta-graph construction works normally, including placeholders. This lets us keep changes to a minimum. The main change comes in the model definition code: the previous LSTMBlockCell/static_rnn/CudnnRNN parametrized RNN implementation gets replaced by `tf.keras.layers.LSTM` which is supposed to use the most appropriate implementation given the layer configuration and host machine setup. This is a graph breaking change and so GRAPH_VERSION is bumped."
st206517
The title describes the gist of the issue; details and code are here: python - Getting "Function call stack: train_function -> train_function" error when training tensorflow2 LSTM RNN - Stack Overflow. Does anyone know why this error is happening? I cannot for the life of me figure it out.
st206518
Guy_Berreby: python - Getting "Function call stack: train_function → train_function" error when training tensorflow2 LSTM RNN - Stack Overflow

Hi @Guy_Berreby, what type of music data are you working on? For instance, if it's in a MIDI format, then I can see why you're using the LSTM architecture. I also noticed you're using tfio.audio.AudioIOTensor (tfio.audio.AudioIOTensor | TensorFlow I/O) - maybe your two datasets are waveform-based. Can you please share some info and how you're loading the data (code)? I've summarized your code and the task below with some formatting, based on the information in the StackOverflow post you shared. @Guy_Berreby do let me know if the spaces and other info are correct, I had to make some minor adjustments:

Your ML task: music genre classification, two different genres - bm and dm

RNN (LSTM) model:

model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=maxLen, output_dim=2, mask_zero=True))
# model.add(layers.Masking())
# Add an LSTM layer with 128 internal units.
# model.add(layers.Input(shape=[1, None]))
model.add(layers.LSTM(8, return_sequences=True))
model.add(layers.Dropout(0.2))
model.add(layers.LSTM(8))
model.add(layers.Dropout(0.2))
# Add a Dense layer with 10 units.
model.add(layers.Dense(16, activation="relu"))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(2, activation="softmax"))

model.compile(loss='categorical_crossentropy', optimizer='adam')

Generator:

def modelTrainGen(maxLen):
    # One type of music - training set
    bmTrainDirectory = '/content/drive/.../...train/'
    # Another type of music - training set
    dmTrainDirectory = '/content/drive/.../...train/'
    dmTrainFileNames = os.listdir(dmTrainDirectory)
    bmTrainFileNames = os.listdir(bmTrainDirectory)
    maxAudioLen = maxLen
    bmTensor = tf.convert_to_tensor([[1], [0]])
    dmTensor = tf.convert_to_tensor([[0], [1]])
    allFileNames = []
    for fileName in zip(bmTrainFileNames, dmTrainFileNames):
        bmFileName = fileName[0]
        dmFileName = fileName[1]
        allFileNames.append((bmFileName, 1))
        allFileNames.append((dmFileName, 0))
    random.shuffle(allFileNames)

    for fileNameVal in allFileNames:
        fileName = fileNameVal[0]
        val = fileNameVal[1]
        if val == 1:
            bmFileName = fileName
            audio = tfio.audio.AudioIOTensor(bmTrainDirectory + bmFileName)
            audio_slice = tf.reduce_max(tf.transpose(audio[0:]), 0)
            del audio
            print(audio_slice.shape)
            padded_x = tf.keras.preprocessing.sequence.pad_sequences(
                [audio_slice], padding="post", dtype=float, maxlen=maxAudioLen)
            del audio_slice
            converted = tf.convert_to_tensor(padded_x[0])
            del padded_x
            print("A")
            print(converted.shape)
            yield (converted, bmTensor)
            print("B")
            del converted
        else:
            dmFileName = fileName
            audio = tfio.audio.AudioIOTensor(dmTrainDirectory + dmFileName)
            audio_slice = tf.reduce_max(tf.transpose(audio[0:]), 0)
            del audio
            print(audio_slice.shape)
            padded_x = tf.keras.preprocessing.sequence.pad_sequences(
                [audio_slice], padding="post", dtype=float, maxlen=maxAudioLen)
            del audio_slice
            converted = tf.convert_to_tensor(padded_x[0])
            del padded_x
            print("C")
            print(converted.shape)
            yield (converted, dmTensor)
            print("D")
            del converted

(The following TensorFlow docs are for waveform-based data - could be useful in future: Audio Data Preparation and Augmentation | TensorFlow I/O; Simple audio recognition: Recognizing keywords | TensorFlow Core; Transfer Learning with YAMNet for environmental sound classification.)
st206519
Hi, thanks for the response! The data is mp3 files of songs, not MIDI files, which as you noticed I am loading with tfio.audio.AudioIOTensor.
st206520
We've done 12 courses on TensorFlow at Coursera for TensorFlow: In Practice; TensorFlow: Data and Deployment; and TensorFlow: Advanced Techniques. We have an upcoming specialization on MLE covering TFX, etc. We also influenced their NLP and Medical AI specializations. What would you want to see next?
st206521
How about a project-based specialization that goes from beginning to end and ties the concepts from previous specializations together? You could go in-depth on the decisions that you make as a developer and suggest tools that students can use while demonstrating the way that Google handles an ML project. At the end of the specialization, the students would have an example of a functional, deployable application that they know how to maintain.
st206522
@Jeff_Corpac @Laurence_Moroney - This is an excellent idea! I completed the Deep Learning Specialization on Coursera and am on my way to completing the TensorFlow in Practice specialization. I plan to take the MLOps specialization next. While these specializations are good and have hands-on examples, those examples are nowhere near real-world, and even when they are, the knowledge is scattered across various places in the specializations. It would be a great idea to have a few real-world case studies bundled into a specialization where the problem framing and design approaches at various stages are discussed and brainstormed, and the corresponding implementation is shown. Extending the same example to include MLOps lifecycle steps like labeling, experiment management, deployment, and monitoring would also be a great idea.
st206523
I was playing with the tf.data function tf.data.Dataset.from_generator(), which takes an ImageDataGenerator object and turns it into a Dataset object. Everything was fine, but when I fit my model I can't use steps_per_epoch and validation_steps; including them throws an error: TypeError: dataset length is unknown. Then I commented them out and continued to fit the model, but it trains seemingly infinitely; there is no stopping. I have attached my screenshots. (edited) When I use .from_generator it automatically converts the ImageDataGenerator to a Dataset object, but I'm not sure why it's still being treated as a generator and throwing the error. Any help on this?
st206524
ashikshafi0: "tf.data.Dataset.from_generator()" Looping in @Andrew_Audibert. I also checked for similar issues here (keyword: tf.data.Dataset.from_generator - Issues · tensorflow/tensorflow · GitHub), in case someone has encountered a similar issue.
st206525
Just to extend the official example in the documentation:

import tensorflow as tf

def gen():
    ragged_tensor = tf.ragged.constant([[1, 2], [3]])
    yield 42, ragged_tensor

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(), dtype=tf.int32),
        tf.RaggedTensorSpec(shape=(2, None), dtype=tf.int32)))

print(dataset.cardinality())

You will see that the cardinality is -2, that is, tf.data.experimental.UNKNOWN_CARDINALITY. So it can't quickly estimate the cardinality of datasets built with from_generator. You can use tf.data.experimental.assert_cardinality to manually set the cardinality.
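[Editor's note: a minimal sketch of the assert_cardinality suggestion, continuing the dataset from the example above; the sample count of 1 is just illustrative since the generator yields a single element:]

# Manually declare how many elements the generator yields so that
# len(dataset) and Keras' progress bars work as expected.
dataset = dataset.apply(tf.data.experimental.assert_cardinality(1))
print(dataset.cardinality())  # tf.Tensor(1, shape=(), dtype=int64)
print(len(dataset))           # 1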
st206526
Thanks, will do that. Also, the weird thing is I got it working by setting steps_per_epoch and validation_steps manually:

model.fit(train_dataset_gen,
          epochs=3,
          steps_per_epoch=62.5,
          validation_data=valid_dataset_gen,
          validation_steps=10)

Log:

Epoch 1/3
Found 2000 images belonging to 2 classes.
63/62 [==============================] - ETA: -1s - loss: 0.6973 - accuracy: 0.4990Found 1000 images belonging to 2 classes.
62/62 [==============================] - 189s 3s/step - loss: 0.6973 - accuracy: 0.4990 - val_loss: 0.6936 - val_accuracy: 0.5063
Epoch 2/3
63/62 [==============================] - ETA: -1s - loss: 0.6958 - accuracy: 0.4830Found 1000 images belonging to 2 classes.
62/62 [==============================] - 188s 3s/step - loss: 0.6958 - accuracy: 0.4830 - val_loss: 0.6921 - val_accuracy: 0.5250
Epoch 3/3
63/62 [==============================] - ETA: -1s - loss: 0.6950 - accuracy: 0.5025Found 1000 images belonging to 2 classes.
62/62 [==============================] - 188s 3s/step - loss: 0.6950 - accuracy: 0.5025 - val_loss: 0.6921 - val_accuracy: 0.5188
<tensorflow.python.keras.callbacks.History at 0x7f427aacfdd0>

But I'm not sure whether it's the right way of doing things. All I know is we can't use the len() function on a generator, but using tf.data.Dataset.from_generator() returns a dataset, right?
st206527
ashikshafi0: "tf.data.Dataset.from_generator() returns a dataset, right?" Yes, it returns a dataset.
st206528
The error about len(): https://discuss.tensorflow.org/uploads/default/original/1X/f07a1537b7a14c89137f8615566a695a1c6470a8.png
st206529
It is the same thing explained with cardinality: you don't have len available with from_generator.

github.com tensorflow/tensorflow/blob/master/tensorflow/python/data/ops/dataset_ops.py#L419-L425

def __len__(self):
    """Returns the length of the dataset if it is known and finite.

    This method requires that you are running in eager mode, and that the
    length of the dataset is known and non-infinite. When the length may be
    unknown or infinite, or if you are running in graph mode, use
    `tf.data.Dataset.cardinality` instead.
st206530
Thanks a lot, Bhack, with your help I was able to fix it. It seems that during the conversion of the generator to the dataset object, the length of the dataset is unknown. By using tf.data.experimental.cardinality() we can check the number of samples in our dataset. As I said before, during the conversion the length is unknown, so it will return -2. We can fix this by setting the number of samples explicitly on our dataset using tf.data.experimental.assert_cardinality(num_of_samples), and now we can even use the len() function. I have shared the link to the complete notebook here: Google Colaboratory. Best, Ashik
st206531
I want to start a TensorFlow User Group in my area. I found an application for it, but it asks for a website/meetup link. Before investing time and money in a meetup group/website, I would like an official affirmation. What should I do?
st206532
Hi @JuHyung_Son. Swift for TensorFlow has been archived, and is no longer being updated. However, the API documentation and binary downloads will continue to be accessible, as well as the Open Design Review meeting recordings. You can read more on the tensorflow/swift README.
st206533
How are you guys going to greet TensorFlow newbies like me? PS: This post is open for sharing suggestions to help newbies learn TensorFlow efficiently with experienced professionals.
st206534
Check out this thread, which covers a lot of material for new TensorFlow users: "Introducing TensorFlow to high school students" (General Discussion): "If your students know and use Python, I created an 'ML Foundations' course on YouTube. You can find it on the YT channel, and it might work well for them. Hope this helps!" cc @Laurence_Moroney @jbgordon @lgusm @Jason
st206535
I would like to get some help to make TF.js run in this environment/setup: GraalVM (https://www.graalvm.org/) and TypeScript (https://www.typescriptlang.org/). Do you think it is possible? I'm still pretty new to TypeScript, so a sample code/project would be very helpful. Here's my current tsconfig if it helps (note: the glob patterns are reconstructed, since the original asterisks were eaten by the forum's formatting):

{
  "compilerOptions": {
    "module": "amd",
    "target": "es5",
    "moduleResolution": "node",
    "allowJs": true,
    "sourceMap": false,
    "newLine": "LF",
    "esModuleInterop": true,
    "baseUrl": ".",
    "rootDir": "./src/TypeScript/",
    "outDir": "./src/FileCabinet/SuiteApps/com.netsuite.unittestreference/src",
    "lib": ["es2015", "dom"],
    "skipLibCheck": true,
    "paths": {
      "N": ["node_modules/@hitc/netsuite-types/N"],
      "N/*": ["node_modules/@hitc/netsuite-types/N/*"],
      "n": ["./src/TypeScript/types/n"],
      "n/*": ["./src/TypeScript/types/n/*"]
    }
  },
  "exclude": ["./test/**/*", "./src/TypeScript/types/**/*"],
  "include": ["./src/TypeScript/**/*"]
}

I tried using the CDN version and I got this error: Error: Could not find a global object [at Rg (/SuiteApps/com.netsuite.unittestreference/src/app/HelloTypeScriptPage/main/tf.min.js:17:125032)]. I also tried the npm version; tf.js doesn't seem to be automatically built by TypeScript. I have some experience with webpack. Should I use a bundler?
st206536
This is primarily for the folks working on TensorFlow Lite and on-device ML. Today, I gave a short talk at this meetup: Machine Learning Developers Meetup [EMEA/APAC] | Google Developer Groups (https://gdg.community.dev/e/mnrmef/). I talked about the collaborative efforts the TensorFlow Lite community has been making over the last year. Slides: docs.google.com - "Community Ecosystem of TensorFlow Lite", Sayak Paul (PyImageSearch, @RisingSayak). @lgusm @Laurence_Moroney
st206537
I really loved the talk and it was pretty great seeing so much awesome work by the TFLite Community. Thanks for sharing the slides!
st206538
The TF Lite updates from Google I/O 2021 look outstanding too. Big things happening in the on-device space!
st206539
Hi guys, I'm trying to optimize my model with 8-bit integer quantization for performance. From what I learned from Post-training quantization | TensorFlow Model Optimization, the only way for TF to run an integer-quantized model is through the TFLite runtime. I'm trying to deploy the service on the cloud with a powerful CPU server and a bunch of HW accelerators. Right now we are running with the native TF runtime and TF Serving, and it's working well. It sounds like TFLite is not designed for this scenario; also, some articles say the TFLite implementation of CPU kernels is not the best fit for servers. Please let me know what the legitimate method is to run a quantized model on the cloud. Thank you very much. Kevin
st206540
Hello, I'm trying to convert existing PyTorch code to TensorFlow. In the original code, they use dtype=torch.cuda.LongTensor to specify using the GPU. Is there any alternative in TensorFlow? I have tried to use classic TensorFlow types such as dtype=tf.dtypes.int64, but the code is very slow. Thank you for your help.
st206541
Hello, the device placement can be specified using tf.device in this way:

with tf.device("/gpu:0"):
    x = tf.Variable(tf.zeros(shape, dtype=tf.int64))  # shape is a variable defined somewhere

In this case the x variable is placed on the first GPU, and the type is a 64-bit integer.
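[Editor's note: as a minimal aside, not part of the original reply, one way to confirm the placement is to enable device placement logging before creating the variable:]

import tensorflow as tf

# Log the device each op/variable is placed on.
tf.debugging.set_log_device_placement(True)

with tf.device("/gpu:0"):
    x = tf.Variable(tf.zeros((3, 3), dtype=tf.int64))
# The log should show the variable ops assigned to /device:GPU:0.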
st206542
Thank you for your answer. I converted my list into a tf.Variable and then changed my list values within a loop, but I got the following error: TypeError: 'ResourceVariable' object does not support item assignment. Any idea?
st206543
Hi folks, I always wanted to use RandAugment to improve the robustness of my vision models, but I never found a straightforward example that showed how to use it in the context of TensorFlow and Keras. Now, together with the community, we have one such example showing a clear advantage of RandAugment for improving the robustness of vision models: keras.io - "RandAugment for Image Classification for Improved…". The example shows how to use the imgaug library together with tf.py_function inside tf.data pipelines. Of course, it has its own demerits. But hey, that's life!
st206544
Nice, but it could be useful to expose to the user what we have at models/augment.py (tensorflow/models on GitHub). Probably TF ops could have better performance than tf.py_function with imgaug.
st206545
Certainly yes, and I note that in the example itself. However, the example is more focused on readability and ease of use. I didn't find the official one that you shared that easy to use. But I am interested to know more. Feel free to provide a minimal example of using the official one if you have time.
st206546
Yeah, I am aware of it. I'll take your suggestion and include a link to it so that readers are aware.
st206547
Both of these are pretty cool. augment.py seems like something we could mention in the image data augmentation tutorial. I'm sure some users end up on that page looking for RandAugment or another higher-level function and are disappointed.
st206548
I think the main issue there is that externally, to users, it is not clear whether we want tf-models to be usable as a library, or whether we are waiting for keras, keras-cv, and keras-nlp to become standalone.
st206549
markdaoust: "That seems like something we could mention in the image data augmentation tutorial" - That is a really good idea. I have seen that tutorial evolve over time and I really appreciate the effort. Bhack: "it is not clear whether we want tf-models to be usable as a library, or whether we are waiting for keras, keras-cv, and keras-nlp to become standalone" - Yes, exactly, that seems to be the case.
st206550
Here's how we can take advantage of the RandAugment class from tf-models. First, install the nightly build: !pip install -q tf-models-nightly. Then:

from official.vision.beta.ops import augment

# Recommended is m=2, n=9
augmenter = augment.RandAugment(num_layers=3, magnitude=10)

dataset = load_dataset(filenames)
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.map(augmenter.distort, num_parallel_calls=AUTO)
...

load_dataset() takes image filepaths, reads images from those paths, and creates a tf.data.Dataset object.
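[Editor's note: for completeness, a minimal sketch of what a load_dataset() helper like the one mentioned above might look like; the JPEG format, image size, and casting are assumptions, not from the original post:]

import tensorflow as tf

AUTO = tf.data.AUTOTUNE

def load_dataset(filenames, image_size=(224, 224)):
    # Hypothetical sketch: build a dataset of decoded, resized uint8 images
    # from a list of JPEG file paths.
    def read_image(path):
        image = tf.io.read_file(path)
        image = tf.image.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, image_size)
        return tf.cast(image, tf.uint8)

    return tf.data.Dataset.from_tensor_slices(filenames).map(
        read_image, num_parallel_calls=AUTO)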
st206551
I’m one of those people! +1 for RandAugment being added to the data augmentation tutorial
st206552
On a related note, here's a tutorial that shows the use of RandAugment from TensorFlow Model Garden, thereby allowing for faster training with TPUs (since it is implemented using native TF ops): keras.io - "Consistency Training with Supervision".
st206553
I would like to retrain a YOLOv4 model, but I prefer a more mainstream environment than Darknet. I looked at the catalog of models at TensorFlow Hub and Model Garden, but there are no YOLO models there. YOLO achieves the fastest frame rates for object detection in many benchmarks. What is the reason for the lack of official Google support? Is it just the sheer quantity of various models to support, or are superior models already available at TensorFlow Hub and Model Garden?
st206554
Hi Paul, publishing models on TFHub or Model Garden depends on the model creator deciding to do so. Researchers from Google published multiple Object Detection models that can be fine-tuned using Model Maker. For YOLO specifically, I guess the problem is that the creator didn't want to publish it on TFHub yet. It would be great if they did.
st206555
The models are still in beta in our garden: tensorflow/models/blob/master/official/vision/beta/projects/yolo/README.md: "YOLO Object Detectors, You Only Look Once. This repository is the unofficial implementation of the following papers. However, we spent painstaking hours ensuring that every aspect that we constructed was the exact same as the original paper and the original repository. YOLOv3: An Incremental Improvement (arXiv:1804.02767); YOLOv4: Optimal Speed and Accuracy of Object Detection (arXiv:2004.10934). Description: Yolo v1, the original implementation, was released in 2015, providing a groundbreaking algorithm that would quickly process images and locate objects in a single pass through the detector. The original implementation used a backbone derived from state-of-the-art object classifiers of the time, like…" (file truncated)
st206556
In the past, I've had issues debugging TensorFlow where the problem was somewhere in the C++ code base and I was using gdb. These included debug builds being too large (using -O0) and running out of space, recompile times, etc. Does anyone have recommendations for handling debugging in TensorFlow?
st206557
I think that some of these problems are well known. For a recent experience you can follow this: github.com/tensorflow/tensorflow, "cannot build TensorFLow with --config=dbg" (opened May 6, 2021 by bas-aarts): when building open-source TensorFlow with bazel build --config=dbg --config=cuda --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" //tensorflow/tools/pip_package:build_pip_package (for SM 7.0 only), the build dies at link time with: ERROR: /home/baarts/tensorflow-GH/tensorflow/python/BUILD:3373:24: Linking… In the end there is a draft proposal, so if you have something technical to share about your experience, please leave a comment in the ticket.
st206558
I found that the best way to debug is printf-debugging without syncing to the head of the repository again (because that would result in longer compile times). If possible, building with ASAN also helps. The OSS-Fuzz docker container allows that.
st206559
What do you think about adding a "Docs contributions and support links" footer on every docs webpage, like in GitHub Docs?
st206560
Nice idea. Adding a footer link to either the /community landing page or the /community/contribute page is straightforward. It's not docs contribution-specific, but we could highlight it on the landing page. For individual pages, we link to the GitHub location for API and notebook guides in the top buttons.
st206561
Yes, in GitHub Docs when we click "Make a Contribution" we go directly to editing the page. E.g., I think that right now we cannot go with one click from the website to the GitHub locations of pages like Contribute to TensorFlow.
st206562
Ah, I see what you mean. docs.github.com looks like it's organized under a central repo, so this is a bit easier to pull off. Our docs are spread out across many repos (mostly to make it easier for teams to update) and not everything is in GitHub. For notebooks, we lint check if there are buttons in the source files. And we could potentially require this in plain Markdown files (though notebooks are the dominant documentation format on the website).
st206563
billy: "Our docs are spread out across many repos (mostly to make it easier for teams to update) and not everything is in GitHub." Unfortunately, I know this well, and that was the point. In the case of TF, just as it is difficult for us to put a button, it is also hard for a new user to manually chase the various GitHub repositories to understand where to contribute and what source generated the page they are reading.
st206564
"it is difficult for us to put a button, it is also hard for a new user to manually chase the various GitHub repositories" Not sure I follow: all the notebooks on the website include (or should include) links to their source location in GitHub. For example, the GitHub button in this Addons tutorial points you to the correct repo location. But we can't use a specific link in the footer, since the footer is site-wide.
st206565
Yes, I know, but I am still talking about the Markdown sources that we have on the website. E.g. docs/code.md at master · tensorflow/docs · GitHub, or Configuring Visual Studio Code | TensorFlow I/O. I don't know, with the generating scripts, if it is easy to automatically know how many Markdown sources we are still consuming when generating the TensorFlow website.
st206566
Yes, I agree, buttons in the Markdown files would be useful. We've been adding these buttons directly to the notebook file so they're visible wherever the notebook is rendered: Colab, GitHub, the website, or unrelated projects that aggregate notebooks. We could do something similar for Markdown files. This is probably easiest and can be enforced in CI tests (GitHub Actions) using nblint or another check. We can also think about auto-adding buttons in the docs pipeline when publishing to the website. It would be nice to standardize this across the site, but that requires docs infra work, and I'm not sure we'd gain much over CI tests.
st206567
I think both solutions are viable. In the meantime, is there a quick trick to know how many Markdown files we currently use when we generate the TF website?
st206568
Bhack: "is there a quick trick to know how many Markdown files we currently use when we generate the TF website?" Not across the entire site, since it is assembled from multiple repos. But you can check an individual repo's doc set:

$ cd tensorflow/docs
$ find site/en/ -name '*.md' \
  | grep -v 'README.md\|sitemap.md' | grep -v '/r1/'

Since we're unlikely to update the docs pipeline in the near future, we'll have to explicitly add buttons to the top of the Markdown files. To enforce this, we would need to add CI checks to each repo, which may be handled differently across projects, but it would be straightforward for the tensorflow/docs repo. We also need to update the docs style guide. Do we have any ticket in the repo to track this? Don't think so, but I can file a feature request.
st206569
"Not across the entire site, since it is assembled from multiple repos. But you can check an individual repo's doc set." Can we grep/print directly from the generator/assembler log? I suppose that we have a loop somewhere in the docs scripts.
st206570
Jobs run independently and everything is converted to Markdown, so it can be difficult to determine the source format. But we do have the en-snapshot directory in the tensorflow/docs-l10n repo; this is probably the closest thing to a single point of aggregation across docs projects.
st206571
OK, so if it is all so distributed, I think that CI could check the Markdown files for the button and ping the source owner.
st206572
P.S. In en-snapshot we have 234 Markdown files. So we probably have almost as many not-directly-editable pages on the website.
st206573
"if it is all so distributed, I think that CI could check the Markdown files for the button and ping the source owner" CI on the translations repo (or docs pipeline) would run post-submit and would hold up publishing until the source issue is fixed. This would also require some sort of notification system. Not impossible, but it would likely need to be implemented on the OSS side.
st206574
Hi folks. Currently, I have a requirement for a batch of data that should have an equal number of samples from each of the given classes. I am implementing it the naive way for CIFAR10:

def support_sampler():
    idx_dict = dict()
    for class_id in np.arange(0, 10):
        subset_labels = sampled_labels[sampled_labels == class_id]
        random_sampled = np.random.choice(len(subset_labels), 16)
        idx_dict[class_id] = random_sampled
    return np.concatenate(list(idx_dict.values()))

def get_support_ds():
    random_balanced_idx = support_sampler()
    temp_train, temp_labels = sampled_train[random_balanced_idx], \
        sampled_labels[random_balanced_idx]
    support_ds = tf.data.Dataset.from_tensor_slices((temp_train, temp_labels))
    support_ds = (
        support_ds
        .shuffle(BATCH_SIZE * 1000)
        .map(agumentation, num_parallel_calls=AUTO)
        .batch(BATCH_SIZE)
    )
    return support_ds

Is there a better way? Particularly using pure TF ops with tf.data?
st206575
Here the approach I used was to make a dataset for each class and then merge them: Classification on imbalanced data | TensorFlow Core. I used sample_from_datasets, so it's approximately equal. But you could also zip the datasets and then .map a function to stack all the zipped tensors.
st206576
Thanks Mark. I later revisited that tutorial and found out about that neat method. Solved my purpose. I think having a separate sampler utility for tf.data pipelines might be better from a usability standpoint.
st206577
There is this "rejection resample" function: tf.data: Build TensorFlow input pipelines | TensorFlow Core.
st206578
Oh my. This is really neat. Thanks for sharing. I need to extend the example for my use case.
st206579
@markdaoust here's what I tried:

def class_func(image, label):
    return label

SUPPORT_BATCH_SIZE = 640

(x_train, y_train), (_, _) = tf.keras.datasets.cifar10.load_data()
sampled_idx = np.random.choice(len(x_train), 4000)
sampled_train, sampled_labels = x_train[sampled_idx], y_train[sampled_idx].squeeze()
sampled_labels = sampled_labels.astype("int32")

support_ds = tf.data.Dataset.from_tensor_slices((sampled_train, sampled_labels))

distribution = Counter(sampled_labels)
counts = np.array(list(distribution.values()))
fractions = counts / counts.sum().astype("float64")
target_distribution = np.array([0.1] * 10).astype("float64")

resampler = tf.data.experimental.rejection_resample(
    class_func,
    target_dist=target_distribution,
    initial_dist=fractions)

support_ds = support_ds.apply(resampler).batch(SUPPORT_BATCH_SIZE)

Here's the root error: TypeError: Input 'y' of 'Less' Op has type float64 that does not match type float32 of argument 'x'. Any idea what I might have missed? Here's the Colab if you want to give it a shot.
st206580
To get your code to work, replace:

fractions = counts / counts.sum().astype("float64")
target_distribution = np.array([0.1] * 10).astype("float64")

with:

fractions = counts / counts.sum()
fractions = fractions.astype("float32")
target_distribution = np.array([0.1] * 10).astype("float32")

The implementation is just being a bit careless with the dtypes. Here it does a random_ops.random_uniform([], seed=seed) < p. That uniform random returns a float32, so p needs to be float32, or it should say random_ops.random_uniform([], seed=seed, dtype=p.dtype). Or it should assert that all those arguments are float32, or cast them to float32.
st206581
Although the code is working fine, the distribution is not what I would expect (the expectation here is a uniform distribution across the labels). Here's a batch-wise summary:

Counter({6: 73, 1: 72, 7: 71, 5: 67, 0: 65, 8: 64, 9: 63, 4: 57, 3: 55, 2: 53})
Counter({9: 74, 0: 70, 4: 70, 2: 69, 3: 68, 1: 66, 7: 62, 6: 56, 5: 53, 8: 52})
Counter({0: 75, 3: 71, 6: 70, 1: 69, 8: 64, 9: 63, 4: 63, 2: 60, 7: 55, 5: 50})
Counter({4: 74, 0: 72, 7: 72, 1: 67, 5: 66, 6: 65, 3: 63, 9: 59, 2: 52, 8: 50})
Counter({2: 78, 7: 78, 6: 75, 1: 68, 4: 62, 5: 62, 9: 56, 0: 56, 3: 55, 8: 50})

For 640 samples in each batch, I would expect it to give 64 per class.
st206582
I tried another approach:

sampled_idx = np.random.choice(len(x_train), 4000)
sampled_train, sampled_labels = x_train[sampled_idx], y_train[sampled_idx].squeeze()
sampled_labels = sampled_labels.astype("int32")

support_ds = tf.data.Dataset.from_tensor_slices((sampled_train, sampled_labels))

ds = []
for i in np.arange(0, 10):
    ds_label = (
        support_ds
        .filter(lambda image, label: label == i)
        .repeat())
    ds.append(ds_label)

balanced_ds = tf.data.experimental.sample_from_datasets(
    ds, [0.1] * 10).batch(SUPPORT_BATCH_SIZE)

But here also, when I do:

for samples, labels in balanced_ds.take(10):
    print(Counter(labels.numpy()))

the distribution does not come out as expected:

Counter({9: 74, 0: 73, 3: 71, 8: 70, 1: 70, 5: 67, 7: 64, 2: 55, 6: 51, 4: 45})
Counter({2: 76, 3: 70, 4: 68, 1: 67, 6: 64, 0: 62, 7: 62, 8: 60, 9: 56, 5: 55})
Counter({1: 78, 2: 75, 7: 74, 0: 68, 9: 67, 3: 61, 5: 58, 8: 55, 4: 54, 6: 50})
Counter({6: 82, 9: 69, 5: 68, 4: 64, 1: 63, 3: 62, 7: 62, 8: 61, 2: 56, 0: 53})
Counter({6: 76, 2: 69, 5: 69, 8: 68, 4: 67, 0: 66, 1: 59, 3: 59, 9: 55, 7: 52})
Counter({8: 77, 9: 71, 4: 68, 0: 66, 2: 66, 6: 66, 7: 64, 5: 62, 1: 60, 3: 40})
Counter({8: 86, 9: 66, 4: 65, 1: 64, 2: 62, 5: 61, 0: 60, 6: 60, 3: 58, 7: 58})
Counter({7: 75, 8: 73, 6: 70, 5: 70, 3: 68, 9: 64, 4: 61, 0: 55, 2: 53, 1: 51})
Counter({6: 78, 1: 70, 5: 67, 0: 66, 2: 66, 4: 64, 8: 60, 3: 58, 9: 56, 7: 55})
Counter({9: 75, 7: 70, 8: 69, 3: 67, 4: 65, 5: 63, 2: 62, 1: 57, 0: 57, 6: 55})

@markdaoust
st206583
Don't trust a person's ability to evaluate a probability distribution at a glance. Here's an independent implementation that gets equivalent results:

import numpy as np

for _ in range(10):
    d = np.zeros(10)
    for n in range(640):
        d[np.random.randint(10)] += 1
    print(sorted(d, reverse=True))

[79.0, 71.0, 69.0, 68.0, 65.0, 61.0, 60.0, 59.0, 58.0, 50.0]
[78.0, 70.0, 70.0, 68.0, 67.0, 64.0, 62.0, 57.0, 56.0, 48.0]
[78.0, 73.0, 70.0, 69.0, 67.0, 62.0, 59.0, 57.0, 53.0, 52.0]
[74.0, 71.0, 70.0, 68.0, 66.0, 61.0, 61.0, 60.0, 56.0, 53.0]
[77.0, 70.0, 67.0, 65.0, 65.0, 63.0, 62.0, 60.0, 57.0, 54.0]
[76.0, 73.0, 68.0, 67.0, 66.0, 61.0, 59.0, 58.0, 56.0, 56.0]
[74.0, 74.0, 70.0, 69.0, 68.0, 67.0, 65.0, 59.0, 48.0, 46.0]
[85.0, 69.0, 68.0, 66.0, 62.0, 61.0, 61.0, 59.0, 56.0, 53.0]
[73.0, 71.0, 67.0, 67.0, 65.0, 63.0, 61.0, 58.0, 58.0, 57.0]
[72.0, 70.0, 68.0, 67.0, 65.0, 63.0, 62.0, 60.0, 59.0, 54.0]

I'm not sure what the right statistical test is (something Dirichlet), but use a bigger sample size and you'll see that it's converging. With 1e6 samples everything's within 1%:

d = np.random.randint(10, size=int(1e6))
counts, _ = np.histogram(d, bins=range(11))
counts
array([100254,  99351, 100098, 100162,  99747, 100369,  99793, 100247,
       100039,  99940])

If you want to force exact balance, then with one dataset per class you can:

import tensorflow as tf

datasets = tuple(tf.data.Dataset.from_tensors(n).repeat() for n in range(10))
zipped = tf.data.Dataset.zip(datasets)
stacked = zipped.map(lambda *args: tf.stack(args, axis=0))

stacked.element_spec
# TensorSpec(shape=(10,), dtype=tf.int32, name=None)

tf.data.experimental.get_single_element(stacked.take(1))
# <tf.Tensor: shape=(10,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)>
st206584
Also: github.com/tensorflow/tensorflow, "Fix dtype mismatches in rejection_resample." (committed May 11, 2021 by MarkDaoust, +34 -3): "Cast `initial_dist` and `target_dist` to `float32`. Normally TensorFlow doesn't cast things for you, but I think in this case it's reasonable because: -…"
st206585
@markdaoust it just keeps getting interesting: Google Colaboratory (colab.research.google.com). What I exactly wanted:

Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})
Counter({0: 64, 1: 64, 2: 64, 3: 64, 4: 64, 5: 64, 6: 64, 7: 64, 8: 64, 9: 64})

Crux of the code:

def dataset_for_class(i):
    i = tf.cast(i, tf.uint8)
    return support_ds.filter(lambda image, label: label == i).repeat()

sampled_idx = np.random.choice(len(x_train), 4000)
sampled_train, sampled_labels = x_train[sampled_idx], y_train[sampled_idx].squeeze()
support_ds = tf.data.Dataset.from_tensor_slices((sampled_train, sampled_labels))

stratified_ds = tf.data.Dataset.range(10).interleave(dataset_for_class, cycle_length=10)
stratified_ds = stratified_ds.batch(640)

Notes: The dataset is CIFAR10. I made sure that the images getting batched are different, as you will notice in the notebook provided above.
st206586
Yeah, that interleave is basically equivalent to the zip.

def dataset_for_class(i):
    i = tf.cast(i, tf.uint8)
    return support_ds.filter(lambda image, label: label == i).repeat()

Just remember that if you're splitting a dataset like that, the dataset for each class loads the whole dataset and throws out all but 1/n of it. So if you have a larger dataset with a larger number of classes, you'll probably want to cache each of the class-datasets (but there might also be a way to fix it with queues).
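[Editor's note: a minimal sketch of the caching suggestion, reusing the dataset_for_class helper from above; where exactly to cache is an assumption:]

def dataset_for_class(i):
    i = tf.cast(i, tf.uint8)
    # Cache the filtered per-class dataset so each repeat cycle doesn't
    # re-scan (and mostly discard) the full dataset. This uses an in-memory
    # cache; passing a filename to .cache() would spill to disk instead.
    return support_ds.filter(lambda image, label: label == i).cache().repeat()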
st206587
True that. Let's just continue putting together our hacks and benchmark them. Who knows, future readers may find these incredibly useful. On a slightly related note, as you may already know, this kind of stratified sampling is pretty common for few-shot classification tasks (particularly for models like Prototypical Networks). Might be a good idea to work on a tutorial covering this topic.
st206588
I currently have a virtual environment inside which I have installed all the packages that I want (tensorflow, ...) and I use it to work locally. I constantly update the packages to the latest releases; however, I am usually stuck on Python 3.6. Is this the best approach to work locally? Or are containers a better approach, where I can easily update my packages, including Python's version? If so, is there any guideline on how to create my environment inside containers? Thanks! Fadi
st206589
I found that while containers have more of a learning curve, they are much easier in the long run. TensorFlow even has containers with source, so you can build the TensorFlow versions you need without installing a bunch of tooling on your system. If you use a GPU, nvidia-docker is by far the easiest way to get up and running with TensorFlow on a GPU. Best of all, you avoid the "it works on my machine" issues, since you can ship your container to the cloud or provide it to other developers to pick up and join your project.
st206590
Venvs are very different from containers. One of the main differences is that a container/image relates to a whole OS, not only Python. If you want to explore containers with TF, you can follow our official Docker guide and share with us any feedback or issues related to the docs.
st206591
I'm currently working on a project where, of course, the question of neural networks comes up. We are working on a fairly simple regression problem that has some noise in the data. My current approach is testing a few different models, including Random Forest and XGBoost. I did run some tests with TensorFlow, but I didn't see much better results to justify the increased training time. However, I figured I should ask people a bit more seasoned than myself: why should I use TensorFlow for a simple regression algorithm on structured data? Thanks!
st206592
If the problem is the model itself, have you tried exploring the model space with AutoKeras?
st206593
In my previous project, I needed to frame an image classification task as a regression problem. I implemented the regression model using TensorFlow, with a standard Sequential model whose last layer is a 1-node Dense layer with no activation function. In order to measure performance, I needed to use standard classification metrics, such as accuracy and Cohen's kappa. However, I can't directly use those metrics because my model is a regression model, so I need to clip and round the output before feeding it to the metrics. I used a workaround by defining my own metric, but that workaround is not practical. Therefore, I'm thinking about contributing to TensorFlow by implementing a custom transformation_function to transform y_pred with a Tensor lambda function before storing it in the update_state method. After reading the source code, I have doubts regarding this idea. So I'm asking you, fellow TensorFlow users/contributors: what is the best practice for transforming y_pred before feeding it to a metric? Is this functionality already implemented in the newest version? Thank you!
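[Editor's note: for context, a minimal sketch of the kind of custom-metric workaround described above, clipping and rounding y_pred before delegating to a standard metric; the class name and the 10-class label range are assumptions for illustration, not from the original post:]

import tensorflow as tf

class RoundedAccuracy(tf.keras.metrics.Metric):
    # Hypothetical wrapper: turns regression outputs into class ids
    # before delegating to a standard classification metric.

    def __init__(self, num_classes=10, name="rounded_accuracy", **kwargs):
        super().__init__(name=name, **kwargs)
        self.num_classes = num_classes
        self.inner = tf.keras.metrics.Accuracy()

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Clip to the valid label range, then round to the nearest integer.
        y_pred = tf.round(tf.clip_by_value(y_pred, 0, self.num_classes - 1))
        self.inner.update_state(y_true, y_pred, sample_weight)

    def result(self):
        return self.inner.result()

    def reset_state(self):
        self.inner.reset_state()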
st206594
Since logging is important for ML (and any software system in general), I think it would be a good idea to have an introductory tutorial on this topic. It could be an opportunity to also introduce tf.get_logger() to users and demonstrate its basic capabilities more explicitly.
st206595
Are you a book person? A give-me-a-sample-that-I-can-take-apart person? A video tutorial watcher? Someone who loves to parse deep into the technical docs? Or some combination of all of the above? What has worked for you in learning TensorFlow, Machine Learning, or indeed anything?
st206596
For a scientific topic like Machine Learning (or any academic topic, from fluid dynamics to literary linguistics), my approach is usually full-length video courses plus textbooks. For a practical platform like TensorFlow, I tend to start (phase 1) with a couple (2-5) of video tutorials for a high-level overview. I then move to (phase 2) written tutorials that show code samples and explain the code; the more samples and the smoother the progression from simple to more complex, the better. After a couple (again, 2-5) of tutorials, I move to (phase 3) trying to implement a few things relying on documentation (official guides/API docs). Books can sometimes be part of phase 2, as some books are basically written as very good tutorials (like these 3 here).
st206597
- A book to go through. Currently I am going through Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron. Next is your book, @Laurence_Moroney, "AI and Machine Learning for Coders".
- Examples and tutorials, especially the ones on Code examples.
- Competitions.
For me, a book is the most consistent way, as you get more academic knowledge. However, it's the slowest and should be coupled with checking examples and best practices, and especially with rolling up your sleeves, starting to code, and finding your way with examples and Kaggle competitions.
st206598
Fadi_Badine: "A book to go through. Currently I am going through Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow by Aurélien Géron. Next is your book, @Laurence_Moroney, 'AI and Machine Learning for Coders'." Excellent choice. I also highly recommend for beginner, intermediate, and advanced users: Deep Learning with Python (version 1; version 2 is out soon) by @fchollet, and Coursera and Udacity MOOCs to learn TF 2 (check out https://www.coursera.org/instructor/lmoroney by @Laurence_Moroney).
st206599
Another excellent book: Deep Learning Design Patterns by Andrew Ferlitsch (Google Cloud AI). From the publisher: "Deep Learning Design Patterns distills models from the latest research papers into practical design patterns applicable to enterprise AI projects. Using diagrams, code samples, and easy-to-understand language, Google Cloud AI expert Andrew Ferlitsch shares insights from state-of-the-art neural networks."