st206800
@Jason So I fixed my tfjs converter installation issue with a few minor changes. This is what finally worked: pip3 install --upgrade pip, then pip3 install tensorflowjs. It seems that installing tensorflow first or tf-nightly was messing something up; tensorflowjs seems to install the correct tensorflow. Also, pip3 helped over just trying pip. The next issue is converting an Edge Impulse GRAYSCALE 120 x 120 TensorFlow SavedModel to TFJS. Conversion created the necessary files; now how do I run it in the browser? Typically I would take a MobileNet demo and upload my new saved file, but that might prove tricky. So far I am getting errors just loading the saved layers model in the browser. I will have to dig a bit deeper. Anyone got any GRAYSCALE TFJS webcam working demos they could share? Perhaps an easier route is just to change my Edge Impulse model to RGB?
st206801
These 2 codelabs I wrote may have some overlap and be of interest. Converting and using Python models: "TensorFlow.js: Convert a Python SavedModel to TensorFlow.js format". In this codelab, you'll learn how to take an existing Python ML model that is in the SavedModel format and convert it to the TensorFlow.js format so it can run in a web browser, whilst also learning how to address common issues that may occur in conversion. Hosting via Firebase: "TensorFlow.js: Use Firebase Hosting to deploy and host a machine learning model". In this codelab, you'll learn how to use the Firebase infrastructure to deploy an ML model so it can be used and consumed on your website using TensorFlow.js.
st206802
Thanks @Jason I forgot you have to change the "paths" in the model.json file. Still got issues with loading my Edge Impulse converted model, but making some headway. By the way, we should chat before school starts about CodePen, Glitch, CodeSandbox, Repl.it… I have probably done most of them and have some opinions.
st206803
No worries! Glad that helped find the issue! Yes, feel free to drop me a DM on the forum if you want to discuss your thoughts on those privately.
st206804
So I am having issues loading a model.json file from my website. Best to start from scratch: @Jason does anyone know if there is an up-to-date MobileNet webcam example? I have these old single-web-page vanilla JavaScript ones which work on multiple devices: https://hpssjellis.github.io/beginner-tensorflowjs-examples-in-javascript/tfjs-models/blazeface/index.html and https://hpssjellis.github.io/beginner-tensorflowjs-examples-in-javascript/tfjs-models/mobilenet/index.html (my older version, which does not work well on iPhone browsers). I have lots more, but everything is a few years old. Has anyone put together a simple MobileNet webcam version? I can do it, it would just be nice to see what others have done, and I would love to see something modern, simple, and easy that loads a TFJS layers model and tests it using a webcam.
st206805
That is certainly one route, which uses our COCO-SSD model; however, if you wanted a MobileNet example, all my working code is available on Glitch.com and Codepen.io, e.g. my MobileNet demo on glitch.com, where I show how to apply the model to a single image and also do webcam classification. MobileNet on its own tends to give interesting results, so I much prefer to use COCO-SSD for object detection, or to retrain MobileNet for specific things like Teachable Machine does. Anyhow, my minimal COCO-SSD demo is also on glitch.com.
st206806
Just checking something I read. Can a TensorFlow SavedModel be converted to a TFJS Layers Model? I think I read somewhere that it can only be converted to a TFJS Graph Model. I really want a layers model from Edge Impulse. Could a TFLite model be converted to a TFJS layers model?
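For reference, the converter documentation distinguishes the two targets: a generic TF SavedModel converts to a TFJS graph model, while a layers model is produced from a Keras model. If the original Keras model is available, a minimal sketch of writing a layers model directly from Python might look like this (the model path and output directory below are placeholders):

    import tensorflow as tf
    import tensorflowjs as tfjs

    # Load (or build) the original Keras model.
    model = tf.keras.models.load_model("my_keras_model.h5")  # hypothetical path

    # Write the model out in the TFJS Layers format (model.json + weight shards).
    tfjs.converters.save_keras_model(model, "tfjs_layers_model")

The resulting directory can then be loaded in the browser with tf.loadLayersModel().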
st206807
Rare, but occasionally I make something that is useful: https://hpssjellis.github.io/my-examples-of-edge-impulse/public/edge-models/single-heart-rock/forweb/index-just-load.html A website to check that your TFJS layers model loads, hopefully from any https site, and prints a summary. It also handles a vision-only graph model, either from TF Hub or an https site, which it tests with a zeros-filled tensor. Kind of like the API examples.
st206808
Things are going well. Anyone know how to directly take a video webcam element and convert it to shape 1, 224, 224, 3? I can do it with a canvas but wonder if there is an easier way. I am testing this:

    const input_tensor = await tf.browser.fromPixels(video).toFloat().sub(255 / 2).div(255 / 2).reshape([1, 224, 224, 3]);

… ooooooo I see the issue, I need to crop the video. Can that be done using the video element or do I have to convert it to a canvas? Also, does anyone know the code to grab the middle pixels? Presently the webcam is 640 x 480 and I will need the middle 224 x 224. … Ok, getting there:

    const input_tensor = await tf.data.webcam(video, {resizeWidth: 224, resizeHeight: 224, centerCrop: true});
    console.log('input_tensor');
    console.log(input_tensor);
    const myInputImage = await input_tensor.capture();

This seems to give me tensors, now I just have to add a dimension. The tensor is 224,224,3 and I need 1,224,224,3. .expandDims(0); does not seem to work.
st206809
Anyone following along: what is working for loading vision data from a canvas into a model predict function is

    const image = await tf.browser.fromPixels(document.getElementById('my224x224Canvas')).toFloat().reshape([1, 224, 224, 3]);

The TFJS webpage now runs, but does not analyze correctly. The problem is most likely the input data from the canvas being formatted incorrectly. Anyone got any suggestions? My last working web page demo should be here (since it is vanilla JavaScript, just right-click to view all the source code). My active model is here too, but I routinely break it.
st206810
What if you could monitor your Machine Learning jobs on Google Colab or Kaggle or pretty much anywhere, in real time, on your mobile phone📱? You can now monitor your ML jobs even while taking a walk🚶, with <5 lines of code to get started! Presenting our project TF Watcher: GitHub - Rishit-dagli/TF-Watcher: Monitor your ML jobs on mobile devices📱, especially for Google Colab / Kaggle.
st206811
Hello everyone! I made an Android app called Pocket AutoML that trains a deep learning model for image classification right on your phone, and made a tutorial on how to export a model from it and make your custom Android app with an exported model, e.g. an app for identifying plants or sorting LEGO bricks. Some of its features: Pocket AutoML lets AI enthusiasts, even without any prior machine learning expertise, train a deep learning image classification model, export it in TensorFlow Lite format, and make a custom Android app based on it following the provided tutorial. Computer vision or deep learning professionals can also use it as a tool to create a quick proof-of-concept for transfer learning on their tasks without a single line of code. It trains a model right on your device in seconds (for a dataset with dozens of images). It respects your privacy: your images are never uploaded anywhere, as both training and prediction happen on your device (apps made with exported models have the same advantages). It does not need an internet connection for training and predicting (an internet connection is needed for TF Lite export though). Just a few images per class can be enough to train a model that accurately classifies objects (what is known as few-shot learning). Pocket AutoML does nothing magical, it just uses transfer learning, which you can use directly, as described at the end of the tutorial above, so no vendor lock-in is imposed. The above-mentioned app creation tutorial on GitHub includes a working example Android app that classifies images from a phone camera and steps to customize that app with your model trained in Pocket AutoML. Since Pocket AutoML exports models in TensorFlow Lite format, they can also be used to create apps for platforms other than Android, like iOS, embedded Linux devices like Raspberry Pi or Coral, and microcontrollers. You can compare this app with other no-code or low-code deep learning solutions: CreateML from Apple, Lobe from Microsoft, Teachable Machine from Google, Google AutoML Vision, Azure Custom Vision from Microsoft, TensorFlow Lite Model Maker from Google; they are either free like Pocket AutoML or have a trial period. I will be glad to discuss the app here and help in case of potential technical issues with the app itself or when following the tutorial above.
st206812
Sentence 1: Sourav Ganguly is the greatest captain in BCCI. Sentence 2: Ricky Ponting is the greatest captain in Cricket Australia. Do these two sentences contradict/entail each other or are they neutral? In NLP, this problem is known as textual entailment and is a part of the GLUE benchmark for language understanding. On social media platforms, to better curate and moderate content, we often need to utilize multiple sources of data to understand their semantic behavior. This is where multimodal entailment can be useful. In my latest post, I introduce the basics of this topic and present a set of baseline models for the Multimodal Entailment dataset recently introduced by Google. Some recipes include "modality dropout", cross-attention, and class-imbalance mitigation. Blog post: Multimodal entailment. Code: https://git.io/JR0HU Fun fact: this marks the 100th example on keras.io.
st206813
Thanking @markdaoust, @lgusm, and @jbgordon for the amazing TensorFlow tutorial "Solve GLUE tasks using BERT on TPU". My post on multimodal entailment uses a fair bit of code from that post (of course with due citation). With that, I wanted to take the opportunity to thank you folks for the tutorial, since it DEFINITELY makes solving GLUE tasks more accessible and readily approachable.
st206814
Hello everyone, we would like to share with you our recently published Kaggle notebook for the ongoing Brain Tumor Classification multi-modal problem in TensorFlow/Keras. Here one can find the 3D modeling approach and 3D augmentation data pipelines with various packages: [TF]: 3D & 2D Model for Brain Tumor Classification.
st206815
Our new Keras example just got published: 3D volumetric rendering with NeRF. In this joint venture with Ritwik Raha, we present a minimal implementation of the research paper "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" by Ben Mildenhall et al. The authors have proposed an ingenious way to synthesize novel views of a scene by modelling the volumetric scene function through a neural network. We try to make concepts like 3D volumetric rendering and ray tracing as visual as possible. We would like to thank @Sayak_Paul for his thorough review of the first draft. We also want to acknowledge @fchollet for his guidance through and through. The GIFs in the example are made with manim. We like how the animations have turned out.
st206816
This looks very interesting. Sorry for my ignorance, but after you train the model, can you apply it to new data easily?
st206817
Thanks for your interest @lgusm. To answer your question, the model trained here is very specific to the scene that we want to synthesize. You can think of the model as encoding a specific scene in itself; it cannot help you with the generation of a completely new scene.
st206818
Where convolutions have been doing great at what they do, involution symmetrically inverts the inherent properties of convolution. Where convs are spatial-agnostic and channel-specific operations, invs are spatial-specific and channel-agnostic operations. My take on involutions: GitHub - ariG23498/involution-tf: TensorFlow implementation of involution. Here one can find the Involution layer, which has all the necessary code to build the kernel dynamically and also apply it to the input feature space. One can also pick the code up and apply the layer to any tf-based architecture.
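For readers who want a feel for the mechanics without opening the repository, here is a rough stride-1 sketch of the idea (an illustration, not the repository's exact code): a small bottleneck generates a per-pixel kernel, the input is unfolded into K x K neighbourhoods, and the kernel is shared across channel groups.

    import tensorflow as tf

    class SimpleInvolution(tf.keras.layers.Layer):
        """Minimal stride-1 involution sketch: per-pixel kernels shared across channel groups."""

        def __init__(self, kernel_size=3, groups=4, reduction=4, **kwargs):
            super().__init__(**kwargs)
            self.kernel_size = kernel_size
            self.groups = groups
            self.reduction = reduction

        def build(self, input_shape):
            self.channels = input_shape[-1]  # must be divisible by `groups`
            # Kernel-generation bottleneck: reduce, then expand to K*K*groups per pixel.
            self.reduce = tf.keras.layers.Conv2D(self.channels // self.reduction, 1, activation="relu")
            self.span = tf.keras.layers.Conv2D(self.kernel_size * self.kernel_size * self.groups, 1)

        def call(self, x):
            b, h, w = tf.shape(x)[0], tf.shape(x)[1], tf.shape(x)[2]
            k, g, c = self.kernel_size, self.groups, self.channels

            # 1. Generate a (K*K) kernel per spatial location, shared within each group.
            kernel = self.span(self.reduce(x))                   # (B, H, W, K*K*G)
            kernel = tf.reshape(kernel, [b, h, w, k * k, g, 1])

            # 2. Unfold the input into K x K neighbourhoods around each pixel.
            patches = tf.image.extract_patches(
                x, sizes=[1, k, k, 1], strides=[1, 1, 1, 1],
                rates=[1, 1, 1, 1], padding="SAME")              # (B, H, W, K*K*C)
            patches = tf.reshape(patches, [b, h, w, k * k, g, c // g])

            # 3. Multiply-and-sum over the K*K neighbourhood (the "involution").
            out = tf.reduce_sum(kernel * patches, axis=3)        # (B, H, W, G, C//G)
            return tf.reshape(out, [b, h, w, c])

A real implementation would also want strides, proper initialization, and serialization support, which the linked repository handles.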
st206819
Hey folks, I have also had the opportunity to write on this topic as a Keras example. Link: Involutional neural networks. Any feedback is welcome.
st206820
Hi folks, today I am pleased to open-source the code for implementing the recipes from "Knowledge distillation: A good teacher is patient and consistent" (function matching) and reproducing their results on three benchmark datasets: Pet37, Flowers102, and Food101. Importance: the importance of knowledge distillation lies in its practical usefulness. With the recipes from "function matching", we can now perform knowledge distillation using a principled approach, yielding student models that can actually match the performance of their teacher models. This essentially allows us to compress bigger models into (much) smaller ones, thereby reducing storage costs and improving inference speed. Some features of the repository I wanted to highlight: the code is provided as Kaggle Kernel Notebooks to allow the usage of free TPU v3-8 hardware (this is important because the training schedules are comparatively longer); there's a notebook on distributed hyperparameter tuning, which is often not included in the public release of an implementation; and for reproducibility and convenience, I have provided pre-trained models and TFRecords for all the datasets I used. Here's a link to the repository: GitHub - sayakpaul/FunMatch-Distillation: TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237. I'd like to sincerely thank Lucas Beyer (first author of the paper) for providing crucial feedback on the earlier implementations, the ML-GDE program for the GCP support, and TRC for providing TPU access. For any questions, either create an issue in the repository directly or email me. Thank you for reading!
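As a quick illustration of the general idea (a hedged sketch, not the repository's exact recipe): the core of the "function matching" objective is to train the student only on the teacher's temperature-softened predictions for identically augmented images.

    import tensorflow as tf

    kl = tf.keras.losses.KLDivergence()

    def distillation_loss(teacher_logits, student_logits, temperature=1.0):
        """KL divergence between softened teacher and student distributions."""
        teacher_probs = tf.nn.softmax(teacher_logits / temperature, axis=-1)
        student_probs = tf.nn.softmax(student_logits / temperature, axis=-1)
        # The temperature**2 factor keeps gradient magnitudes comparable across temperatures.
        return kl(teacher_probs, student_probs) * (temperature ** 2)

    @tf.function
    def train_step(images, teacher, student, optimizer, temperature=1.0):
        # Function matching: the teacher and student see the *same* augmented crop;
        # no ground-truth labels are used in the loss.
        teacher_logits = teacher(images, training=False)
        with tf.GradientTape() as tape:
            student_logits = student(images, training=True)
            loss = distillation_loss(teacher_logits, student_logits, temperature)
        grads = tape.gradient(loss, student.trainable_variables)
        optimizer.apply_gradients(zip(grads, student.trainable_variables))
        return loss

The paper's full recipe also relies on long training schedules and aggressive, consistent augmentation (e.g. mixup), which the notebooks cover.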
st206821
Thanks, also from Google. It seems that knowledge distillation is something that we need to handle with care: "Does Knowledge Distillation Really Work?" (arXiv.org). Knowledge distillation is a popular technique for training a small student network to emulate a larger teacher model, such as an ensemble of networks; the authors show that while knowledge distillation can improve student generalization, it does not typically…
st206822
Yes, I am aware of this paper since it came out around the same time. Here's another from MSFT Research in case you want to dive even further: "3 deep learning mysteries: Ensemble, knowledge- and self-distillation" (Microsoft Research, 19 Jan 2021), where Microsoft and CMU researchers begin to unravel three mysteries in deep learning related to ensembles, knowledge distillation & self-distillation. There are also works that show how knowledge distillation may ignore the skewed portions of the dataset. But business needs and objectives drive the carefulness, I think. Knowledge distillation has the promise to account for both without incorporating too much technical debt or complication in the overall pipeline. Amongst the three popular compression schemes (quantization, pruning, and distillation), distillation is my favourite.
st206823
I think distillation and pruning are more similar, as they both rely on overparametrization, and in distillation you control the prior of the student model architecture. But if pruning could start to be efficiently structured, I think that at some point the two approaches could converge into something else. See some early DeepMind findings: "A Generalized Lottery Ticket Hypothesis" (arXiv.org), which introduces a generalization of the lottery ticket hypothesis in which the notion of "sparsity" is relaxed by choosing an arbitrary basis in the space of parameters.
st206824
Vector-Quantized VAEs were proposed in 2017. Since their inception, they have pushed the field of high-quality image generation to a great extent. Their recipes, like discrete latent space optimization and codebook sampling, have gone on to become essential blocks for modern models like VQ-GAN, DALL-E, etc. In my latest Keras example, I present an implementation of VQ-VAEs, including the subsequent PixelCNN part for image reconstruction and generation. I've included crucial pieces of visualization as well to make it fun and interesting. Here is the link to my example: Keras documentation: Vector-Quantized Variational Autoencoders.
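For the curious, the heart of a VQ-VAE is the vector quantizer: snap each encoder output to its nearest codebook entry and pass gradients straight through. A simplified sketch of that layer (not the full example, which also covers the PixelCNN prior) could look like this:

    import tensorflow as tf

    class VectorQuantizer(tf.keras.layers.Layer):
        def __init__(self, num_embeddings, embedding_dim, beta=0.25, **kwargs):
            super().__init__(**kwargs)
            self.embedding_dim = embedding_dim
            self.num_embeddings = num_embeddings
            self.beta = beta  # commitment loss weight
            init = tf.random_uniform_initializer()
            self.embeddings = tf.Variable(
                init(shape=(embedding_dim, num_embeddings), dtype="float32"),
                trainable=True, name="codebook")

        def call(self, x):
            # Flatten everything except the embedding dimension.
            input_shape = tf.shape(x)
            flat = tf.reshape(x, [-1, self.embedding_dim])

            # Nearest codebook entry for each vector (squared Euclidean distance).
            distances = (
                tf.reduce_sum(flat ** 2, axis=1, keepdims=True)
                - 2 * tf.matmul(flat, self.embeddings)
                + tf.reduce_sum(self.embeddings ** 2, axis=0, keepdims=True))
            indices = tf.argmin(distances, axis=1)
            one_hot = tf.one_hot(indices, self.num_embeddings)
            quantized = tf.matmul(one_hot, self.embeddings, transpose_b=True)
            quantized = tf.reshape(quantized, input_shape)

            # Codebook loss pulls embeddings toward encoder outputs; commitment loss does the reverse.
            codebook_loss = tf.reduce_mean((quantized - tf.stop_gradient(x)) ** 2)
            commitment_loss = tf.reduce_mean((tf.stop_gradient(quantized) - x) ** 2)
            self.add_loss(codebook_loss + self.beta * commitment_loss)

            # Straight-through estimator: gradients flow from decoder input to encoder output.
            return x + tf.stop_gradient(quantized - x)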
st206825
I have been building my muscle memory for MLOps and the progress has been good so far, thanks to Coursera's Specialization, the ML Design Patterns book, and Vertex AI's neat examples. I wanted to build a simple Vertex AI pipeline that trains a custom model and deploys it. TFX pipelines seemed like a way easier choice for this than KFP pipelines. I am now referring to this stock example on colab.research.google.com. I see loads of argument parsing here and there, especially in the model-building utilities. For reference, here's a snippet that creates ExampleGen and Trainer in the initial TFX pipeline:

    # Brings data into the pipeline.
    example_gen = tfx.components.CsvExampleGen(input_base=data_root)

    # Uses user-provided Python function that trains a model.
    trainer = tfx.components.Trainer(
        module_file=module_file,
        examples=example_gen.outputs['examples'],
        train_args=tfx.proto.TrainArgs(num_steps=100),
        eval_args=tfx.proto.EvalArgs(num_steps=5))

The run_fn only takes fn_args as its argument. I am wondering how the arguments are passed and mapped inside penguin_trainer.py? I will be grateful for an elaborate answer.
st206826
Take a look at these lines in the Trainer component for how the run_fn is invoked. The private function _GetFnArgs is used to gather and create the fn_args, which get passed through Trainer to the run_fn.
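To make that concrete, here is a rough sketch of the shape of a trainer module (simplified from the penguin example; the exact set of FnArgs fields you use depends on your pipeline, and the helper functions below are hypothetical placeholders):

    from tfx import v1 as tfx

    def run_fn(fn_args: tfx.components.FnArgs):
        """TFX calls this single entry point; everything arrives bundled in fn_args."""
        # A few of the fields the Trainer fills in from the pipeline definition:
        #   fn_args.train_files / fn_args.eval_files -> file patterns from ExampleGen
        #   fn_args.train_steps / fn_args.eval_steps -> from the TrainArgs / EvalArgs protos
        #   fn_args.serving_model_dir                -> where the trained model must be saved
        train_ds = _make_dataset(fn_args.train_files)   # user-defined helper (hypothetical)
        eval_ds = _make_dataset(fn_args.eval_files)

        model = _build_keras_model()                    # user-defined helper (hypothetical)
        model.fit(
            train_ds,
            steps_per_epoch=fn_args.train_steps,
            validation_data=eval_ds,
            validation_steps=fn_args.eval_steps)

        model.save(fn_args.serving_model_dir, save_format="tf")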
st206827
Thanks Robert! Appreciate it. Maybe a brief note about this in the tutorial would be helpful for curious readers.
st206828
I reimplemented "ResMLP: Feedforward networks for image classification with data-efficient training" in Keras. This is my first open-source work ever: https://github.com/yeyinthtoon/tf2-resmlp
st206829
New example on conditional generation using GANs. I think it's an important recipe to know about if you're into generative deep learning. P.S.: This is NOT SoTA stuff. The aim of the example is to walk you through a standard workflow that you can extend to high-fidelity datasets; I look forward to seeing those. Keras documentation: Conditional GAN.
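The core recipe, roughly speaking, is to feed the class label to both networks. A minimal sketch of the conditioning step (an illustration, not the exact example code) for a 10-class image dataset:

    import tensorflow as tf

    num_classes, latent_dim = 10, 128
    image_size, channels = 28, 1

    def conditioned_generator_input(noise, labels):
        """Concatenate one-hot labels to the latent vector before the generator."""
        one_hot = tf.one_hot(labels, num_classes)             # (batch, num_classes)
        return tf.concat([noise, one_hot], axis=-1)           # (batch, latent_dim + num_classes)

    def conditioned_discriminator_input(images, labels):
        """Tile the one-hot labels into extra image channels for the discriminator."""
        one_hot = tf.one_hot(labels, num_classes)             # (batch, num_classes)
        maps = tf.reshape(one_hot, (-1, 1, 1, num_classes))
        maps = tf.tile(maps, (1, image_size, image_size, 1))  # (batch, H, W, num_classes)
        return tf.concat([images, maps], axis=-1)             # (batch, H, W, channels + num_classes)

    # The generator then takes latent_dim + num_classes inputs, the discriminator takes
    # channels + num_classes input channels, and the rest of the GAN training loop is unchanged.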
st206830
How does a commoner train a Transformer-based model on small and medium datasets like CIFAR-10 or ImageNet-1k and still attain competitive results, without the luxury of a modern GPU cluster or TPUs? You use Compact Convolutional Transformers (CCT). In this example, I walk you through the concept of CCTs and also present their implementation in Keras, demonstrating their performance on CIFAR-10: Keras documentation: Compact Convolutional Transformers. A traditional ViT model would take about 4 million parameters to get to 78% on CIFAR-10 with 100 epochs. CCTs take 30 epochs and 0.4 million parameters to get there.
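One of the main ingredients in CCT is swapping ViT's patchify stem for a small convolutional tokenizer, which bakes in some useful inductive bias. A rough sketch of that piece (an illustration, not the exact example code):

    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_tokenizer(image_size=32, channels=3, embed_dim=128):
        """Turn an image into a sequence of tokens with convolutions instead of patch slicing."""
        inputs = layers.Input((image_size, image_size, channels))
        x = layers.Conv2D(embed_dim // 2, 3, strides=1, padding="same", activation="relu")(inputs)
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
        x = layers.Conv2D(embed_dim, 3, strides=1, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
        # Flatten the spatial grid into a token sequence: (batch, h*w, embed_dim).
        tokens = layers.Reshape((-1, embed_dim))(x)
        return tf.keras.Model(inputs, tokens, name="conv_tokenizer")

    # The token sequence then goes through standard Transformer encoder blocks, and CCT
    # pools the output sequence (sequence pooling) instead of relying on a class token.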
st206831
Thanks, Gus! Happy to also collaborate with the Hub team to train this on ImageNet-1k and democratize its use.
st206832
As usual, Sayak writes tutorials faster than I can read them. This is not the implementation I would expect for stochastic depth. Isn't this more like… dropout with a funny noise-shape? I expected:

    class Stochastic(keras.layers.Layer):
        def __init__(self, wrapped, drop_prob, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.wrapped = wrapped
            self.drop_prob = drop_prob

        def call(self, x):
            if tf.random.uniform(shape=()) > self.drop_prob:
                x = self.wrapped(x)
            return x

Otherwise you don't get all the training speed improvements they talk about in the stochastic-depth paper, since you run the layer anyway. Also, the dropout-like implementation… this only works to kill the output before the add on a residual branch, right? Does the dropout 1/keep_prob scaling still make sense used like this?
st206833
markdaoust: Also the dropout-like implementation… this only works to kill the output before the add on a residual branch, right? You are right. This implementation of Stochastic Depth is only cutting the outputs at the residual block. markdaoust: Does the dropout 1/keep_prob scaling still make sense used like this? No, I am not scaling the dropped out blocks with the inverse dropout probabilities. They are simply not dropped during inference. markdaoust: self.wrapped = wrapped Is wrapped the block we are applying?
st206834
Sorry I jumped straight to the criticism; I am a big fan, keep up the good work.

"No, I am not scaling the dropped out blocks with the inverse dropout probabilities. They are simply not dropped during inference."

Right, but it is being applied during training, and it's the difference between training and inference that I wonder about. My intuition for why dropout uses that scaling factor is so that the mean value of the feature is the same before and after the dropout. But each example is independent, they don't share statistics. In training, the next layer sees layer(x)/keep_prob when the layer is kept, and 0 otherwise. No mixing. So the average across samples is preserved, but maybe the average value seen for any sample is not realistic. And I see slightly better results (just 1 run) after dropping that factor.

Before:

    Epoch 28/30
    352/352 [==============================] - 13s 37ms/step - loss: 0.9469 - accuracy: 0.8020 - top-5-accuracy: 0.9899 - val_loss: 1.0130 - val_accuracy: 0.7766 - val_top-5-accuracy: 0.9870
    Epoch 29/30
    352/352 [==============================] - 13s 36ms/step - loss: 0.9326 - accuracy: 0.8079 - top-5-accuracy: 0.9901 - val_loss: 1.0455 - val_accuracy: 0.7674 - val_top-5-accuracy: 0.9844
    Epoch 30/30
    352/352 [==============================] - 13s 37ms/step - loss: 0.9296 - accuracy: 0.8097 - top-5-accuracy: 0.9902 - val_loss: 0.9982 - val_accuracy: 0.7822 - val_top-5-accuracy: 0.9838
    313/313 [==============================] - 2s 8ms/step - loss: 1.0239 - accuracy: 0.7758 - top-5-accuracy: 0.9837
    Test accuracy: 77.58%
    Test top 5 accuracy: 98.37%

After:

    Epoch 28/30
    352/352 [==============================] - 13s 37ms/step - loss: 0.9268 - accuracy: 0.8117 - top-5-accuracy: 0.9908 - val_loss: 0.9599 - val_accuracy: 0.8050 - val_top-5-accuracy: 0.9872
    Epoch 29/30
    352/352 [==============================] - 13s 37ms/step - loss: 0.9255 - accuracy: 0.8136 - top-5-accuracy: 0.9910 - val_loss: 0.9751 - val_accuracy: 0.7942 - val_top-5-accuracy: 0.9868
    Epoch 30/30
    352/352 [==============================] - 13s 37ms/step - loss: 0.9132 - accuracy: 0.8181 - top-5-accuracy: 0.9923 - val_loss: 0.9745 - val_accuracy: 0.7952 - val_top-5-accuracy: 0.9870
    313/313 [==============================] - 3s 9ms/step - loss: 0.9976 - accuracy: 0.7867 - top-5-accuracy: 0.9855
    Test accuracy: 78.67%
    Test top 5 accuracy: 98.55%

Interesting. IIUC, here's that factor in the code from the original paper (yueatsprograms/Stochastic_Depth/blob/master/ResidualDrop.lua#L52), so maybe I'm wrong:

    function ResidualDrop:updateOutput(input)
       local skip_forward = self.skip:forward(input)
       self.output:resizeAs(skip_forward):copy(skip_forward)
       if self.train then
          if self.gate then -- only compute convolutional output when gate is open
             self.output:add(self.net:forward(input))
          end
       else
          self.output:add(self.net:forward(input):mul(1-self.deathRate))
       end
       return self.output
    end

    function ResidualDrop:updateGradInput(input, gradOutput)
       self.gradInput = self.gradInput or input.new()
       self.gradInput:resizeAs(input):copy(self.skip:updateGradInput(input, gradOutput))
       if self.gate then
          self.gradInput:add(self.net:updateGradInput(input, gradOutput))
       end

"Is wrapped the block we are applying?" Yes, that's what I meant here: this layer is like "maybe apply the wrapped layer" (it should also check the training flag…). Either way, fun stuff. Thanks again Sayak!
st206835
markdaoust: Sorry I jumped straight to the criticism, I am a big fan, keep up the good work. I think this is the need of the hour. Please keep’em coming. Thanks very much for the resources. If you feel like you could add those to the example, feel free to submit a PR (maybe)? I’d be more than happy to give you co-author credits. markdaoust: But each example is independent, they don’t share statistics. I think for single domain learning tasks like vanilla image classification batch stats are fine. markdaoust: And I see slightly better results (just 1 run) after dropping that factor: Which factor? Could you provide a short snippet?
st206836
Sayak_Paul: "If you feel like you could add those to the example, feel free to submit a PR (maybe)? I'd be more than happy to give you co-author credits."

Yes, absolutely. But I guess the main thing is that it turns out I just don't really understand stochastic depth. Your code and the original implementation are doing the same thing, and now I'm just trying to understand why. Original:

    if self.train then
       if self.gate then -- only compute convolutional output when gate is open
          self.output:add(self.net:forward(input))
       end
    else
       self.output:add(self.net:forward(input):mul(1-self.deathRate))

Yours:

    if training:
        keep_prob = 1 - self.drop_prob
        shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1)
        random_tensor = keep_prob + tf.random.uniform(shape, 0, 1)
        random_tensor = tf.floor(random_tensor)
        return (x / keep_prob) * random_tensor
    return x

The part I don't understand is the keep_prob scaling:

    # inference branch
    self.output:add(self.net:forward(input):mul(1-self.deathRate))
    # training branch
    return (x / keep_prob) * random_tensor

These are equivalent, but I don't understand why this line is there. I understand the argument for this scaling in dropout: "Show me half the pixels twice as bright during training and then all the pixels for inference." But I'm less comfortable with applying this logic to an entire example: "Skip the operation or do it twice as hard, and for inference do it with regular strength." But maybe I can understand it with the "the layers of a resnet are like a gradient vector field pushing the embedding towards the answer" interpretation. I guess if I'm taking fewer steps, each could be larger.

"Which factor? Could you provide a short snippet?"

My little experiment was, in your code, to just replace this line:

    return (x / keep_prob) * random_tensor

with:

    return x * random_tensor

I'll run it a few more times and see what happens.
st206837
I’ll run it a few more times and see what happens. It looks like the difference between those two runs was not important. Validation accuracy seems to come out anywhere from 76-79%.
st206838
markdaoust: "But maybe I can understand it with the 'the layers of a resnet are like a gradient vector field pushing the embedding towards the answer' interpretation. I guess if I'm taking fewer steps, each could be larger." Right on. Thanks so much, @markdaoust, for the conversation and for looking into this.
st206839
I created a repository with EfficientNet-lite model variants adapted to Keras (functional API): GitHub - sebastian-sz/efficientnet-lite-keras. The main goal was to mimic tf.keras.applications usability as much as possible. The lite model variants were only available in TensorFlow 1.x; adapting them to Keras allowed for more flexibility and made them more consistent with the existing API and documentation. According to the original repository, the lite variants: use ReLU6 instead of Swish; do not use SE blocks; have a fixed stem and head while scaling up the models. Hope that it helps somebody!
st206840
Hello wonderful people, this is my first post here. Let me begin by thanking all the support this forum has provided. I recently built an emotion recognition system that can detect the emotions of people from a live camera feed. I used a Raspberry Pi for this, with a pre-trained model to recognize the facial expression of a person from a real-time video stream. The "FER2013" dataset is used to train the model with the help of a VGG-like Convolutional Neural Network (CNN). To implement expression recognition on the Raspberry Pi, we have to follow the three steps mentioned below. Step 1: Detect the faces in the input video stream. Step 2: Find the Region of Interest (ROI) of the faces. Step 3: Apply the facial expression recognition model to predict the expression of the person. We are using six classes here: 'Angry', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise'. So, the predicted images will be among these classes. I have already documented my learnings in the article linked below so that others don't have to go through the problems I faced: Emotion Recognition on Raspberry Pi using TensorFlow. Enjoy! and do let me know your feedback, thanks.
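For readers who want to see roughly how those three steps fit together in code, here is a hedged sketch of a TFLite inference loop (the model path, input size, and labels below are placeholders; the linked article has the author's actual implementation):

    import cv2
    import numpy as np
    import tensorflow as tf

    LABELS = ['Angry', 'Fear', 'Happy', 'Neutral', 'Sad', 'Surprise']

    # Hypothetical paths / sizes, purely for illustration.
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    interpreter = tf.lite.Interpreter(model_path="fer_model.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Step 1: detect the faces in the frame.
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Step 2: crop the ROI and resize it to the model's expected input.
            roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)).astype("float32") / 255.0
            roi = roi.reshape(input_details[0]["shape"])

            # Step 3: run the expression classifier on the ROI.
            interpreter.set_tensor(input_details[0]["index"], roi)
            interpreter.invoke()
            probs = interpreter.get_tensor(output_details[0]["index"])[0]
            cv2.putText(frame, LABELS[int(np.argmax(probs))], (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

        cv2.imshow("FER", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()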
st206841
Hello Community, I have just developed an algorithm (I could say a family of algorithms) whose performance shows fast convergence while maintaining generalization. I would appreciate any help and advice: I want to join a company as a researcher in the field of artificial intelligence, having access to very expensive hardware (like GPUs) and more… Here are the results of one of my algorithms on MNIST and IMDB data (conducted on a personal computer). Sincerely.
st206842
Hi there, This seems interesting. I do observe, however, that certainly for a low number of epochs, your algorithm has lower accuracy compared to the industry standards. Perhaps your approach has some other benefits that are not represented on these graphs? (time wise?) Cheers, Timo
st206843
Hi there. As we know, there are two types of complexity (timing and memory requirements). In this algorithm, I managed to achieve both qualities: less memory needed compared to Adam, Adagrad, and RMSProp, and high accuracy in finite time, even though I trained my algorithm with some time-consuming epochs on a personal computer. I also reduced the number of hyperparameters used in many algorithms. Do you suggest any further research in this field? Cheers.
st206844
I suggest taking a look at some strong benchmark protocols, like this one from ICLR 2021: GitHub - SirRob1997/Crowded-Valley---Results. This repository contains the results for the paper "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers".
st206845
Awesome work. There are a few other notable optimizers that may be worth comparing your work to, although I'm no expert: AdaBelief: [2010.07468] AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients (fast convergence/stability) (2020); Adafactor: [1804.04235] Adafactor: Adaptive Learning Rates with Sublinear Memory Cost (saves memory/fast training) (2018); Fromage: [2002.03432] On the distance between two neural networks and the stability of learning (no learning rate tuning/works on GANs and Transformers) (2020); LAMB: [1904.00962] Large Batch Optimization for Deep Learning: Training BERT in 76 minutes (consistent training performance/Transformers & ResNets); SM3 (Square-root of Minima of Sums of Maxima of Squared-gradients Method): [1901.11150] Memory-Efficient Adaptive Optimization (memory efficient & adaptive/designed to decrease memory overhead especially with large models like Transformer-based BERT, etc.) (2019); Yogi: http://www.sanjivk.com/yogi_nips2018.pdf (similar to Adam/convergence and generalisation/focuses on Adam and RMSprop's issues) (2020)
st206846
Hey everyone, I would like to share with you all a small project that I worked on in my spare time. I called it "BERT as a service"; the goal is to build an end-to-end TFX pipeline for sentiment analysis using BERT. This project is educational and is also aimed at providing a simple and easier reference for people that are looking to get familiar with MLOps using TFX and GCP (as was my case), but it should not be that hard to tweak it to be more "production-ready". Here is an overview of what you will find: an end-to-end ML pipeline, from data ingestion, data validation, transformation, and training, to deployment; usage of specific components for infrastructure validation and hyperparameter tuning; orchestration using KubeFlow and the new Vertex AI; the new version of TFX (1.0.0). The Colab version has a Tuner component that uses KerasTuner for HP search. The KubeFlow version has an InfraValidator component to validate infrastructure before blessing the model for deployment. This was a very cool experience to get more familiar with MLOps concepts applied to the GCP ecosystem. If you have a similar project or any feedback, I would love to know more.
st206847
This looks cool @DimitreOliveira ! @Robert_Crowe might have some insights on your project!
st206848
Hey @lgusm thank you very much! This was a great experience, especially regarding Vertex AI, next, I will build an application with more real-world potential, and a more professional look.
st206849
This new code walkthrough by me on keras.io talks about Gradient Centralization, a simple trick that can markedly speed up model convergence and that is implemented in Keras in literally 10 lines of code. It can both speed up the training process and improve the final generalization performance of DNNs. Further, this code example also shows the improvements from using Gradient Centralization while training on @Laurence_Moroney's Horses v Humans dataset: Keras documentation: Gradient Centralization for Better Training Performance. This was also my first time contributing to Keras examples, and many thanks to @fchollet and @Sayak_Paul for helping all along!
st206850
Interesting! That worked really well in the example. I’m not quite getting the intuition for what this does/why it works. What’s your understanding? I think I understand why you keep the last axis, and what this would do with SGD. But it’s less clear when applied through one of these more complex optimizers. Can you summarize your understanding of it (without using the word “Lipschitzness” ).
st206851
Hi @markdaoust, thanks so much for taking a look at this example! Here is my intuition behind this and why it works in the first place, after reading the paper, especially for this example. As I understand it, GC calculates the mean of the column vectors of the gradient matrix and subtracts this mean from each column so that the columns have zero mean, because, unlike the first thing we probably thought of, simply normalizing the gradients does not work very well. About how I understand this works, we could say, in an intuitive way (without notation): gradient (with GC) = gradient of L with respect to W (the standard term) minus the mean of that gradient. We can see that the modified gradient can now be seen as the gradient projected along a unit vector whose dimension equals the number of columns in the weight matrix. And, similar to the intuition behind batch normalization, this constrains the weights to a hyperplane. So this allows us to regularize the weight space and improve the generalization capability, especially when there are fewer examples (as shown in the example). And this works across all kinds of optimizers, right? The paper also talks about regularizing the output feature space: if we use GC to update weights for SGD-based optimizers, then for a feature x and x plus some scalar, the paper derives that the resulting change in output activation depends only on the scalar and the mean of the initial weights, not the final weights. So if the mean of the initial weights is very close to 0, we end up making the output feature space more robust to variations in the training data. This also works for more complex optimizers derived from SGD. Apart from this, in my opinion, another major aspect shown by the original paper in section 4.2 is when they compare the original loss and the constrained loss to show how the optimization can be smoothed out, reducing training time; however, they derive that directly with Lipschitzness, unlike the other points for which I shared my geometric understanding.
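To make the trick concrete, here is a minimal sketch of centralizing gradients inside a custom training step (an illustration of the idea rather than the exact keras.io example, which subclasses the optimizer instead):

    import tensorflow as tf

    def centralize_gradient(grad):
        """Subtract the mean over all axes except the last one (only for rank > 1 tensors)."""
        if grad is None or len(grad.shape) <= 1:
            return grad
        axes = list(range(len(grad.shape) - 1))
        return grad - tf.reduce_mean(grad, axis=axes, keepdims=True)

    @tf.function
    def train_step(model, optimizer, loss_fn, images, labels):
        with tf.GradientTape() as tape:
            preds = model(images, training=True)
            loss = loss_fn(labels, preds)
        grads = tape.gradient(loss, model.trainable_variables)
        grads = [centralize_gradient(g) for g in grads]  # gradient centralization
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss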
st206852
In semi-supervised learning (SSL), we use a small amount of labeled data together with a bigger unlabeled dataset to train models. Quite common in practice. In unsupervised domain adaptation (UDA), we have access to a labeled source dataset and an unlabeled target dataset. The task is then to learn a model that can generalize well to the target dataset. Again, quite practical stuff. In my latest example on keras.io, I present an implementation and walkthrough of AdaMatch, which beautifully unifies SSL and UDA. I also introduce a couple of preliminaries to make it easier for folks who are not familiar with the relevant concepts. Expect plots, figures, code, and illustrative code comments. Keras documentation: Semi-supervision and domain adaptation with AdaMatch.
st206853
"Instance-aware image colorization" by Jheng-Wei Su, Hung-Kuo Chu, and Jia-Bin Huang proposes a brilliant idea: a model colorizes a black-and-white image while being aware of the specific objects in the image as well as the entire image. We (@ayush_thakur and I) have come up with a minimal reproduction of the paper in TensorFlow. Repository: GitHub - ariG23498/instance-aware-colorization-TF: Reproduction of instance-aware image colorization in TensorFlow. In the repository, we have tried encapsulating all the main features of the training process as suggested in the paper. The repository consists of Jupyter notebooks only; this was done to help readers read the code better and also execute and experiment with it. What we have covered: the three-stage training process; ablations on the RGB and LAB color spaces. You can also check this Weights & Biases report for a quick paper summary. Report: wandb.me/instcolorization-report Official Colab Notebook: wandb.me/instcolorization-colab Paper: [2005.10825] Instance-aware Image Colorization
st206854
Hi all. I am pleased to let you all know that the majority of the materials for our CVPR 2021 tutorial "Practical Adversarial Robustness in Deep Learning: Problems and Solutions" have been released. This tutorial is organized and presented by Pin-Yu Chen (IBM Research) and myself. The focus of this tutorial is not just to survey different attack types but also to show how to employ them in practice and how to mitigate them with SOTA methods. A detailed outline of the tutorial can be found on the official website (which includes code and slides too): CVPR 2021 Tutorial, by Pin-Yu Chen (IBM Research) & Sayak Paul (Carted), June 20, 2021. Our code uses the following libraries extensively: TensorFlow, Keras, Neural Structured Learning, Foolbox. The tutorial will take place today starting from 10 AM ET. We will host a live QnA at 7:30 PM ET. Tutorial videos will be made available on YouTube soon.
st206855
Congratulations to our latest winner @Jitesh_helloworld! You can learn more about the project on his GitHub page. From the TensorFlow Twitter account: "🏆 Our latest #TFCommunitySpotlight winner, Jitesh Saini, created a tool using Python, Web-dev and TensorFlow Lite on Raspberry Pi to run pre-trained ML Models from Coral. Check out his GitHub → https://t.co/VKdMpE9TJk" If you have a TensorFlow project you'd like us to review for a chance to be featured on our #TFCommunitySpotlight channel and win some cool swag, you can submit it here: goo.gle/tfcs
st206856
How do we process videos to feed them to a deep learning model and train it? Can we borrow concepts from image and text models and combine them to train a video classification model? Yes, we can. My latest example on keras.io shows you how: Keras documentation: Video Classification with a CNN-RNN Architecture.
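The gist of the recipe, sketched very roughly below (the real example adds masking, caps on frame counts, and proper data loading), is to treat a video as a sequence of per-frame CNN features and hand that sequence to a recurrent model:

    import tensorflow as tf
    from tensorflow.keras import layers

    NUM_FRAMES, IMG_SIZE, NUM_CLASSES = 20, 224, 10  # illustrative values

    # Frame-level feature extractor: a pre-trained CNN with global average pooling.
    feature_extractor = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(IMG_SIZE, IMG_SIZE, 3))
    feature_dim = feature_extractor.output_shape[-1]

    def extract_video_features(frames):
        """frames: (NUM_FRAMES, IMG_SIZE, IMG_SIZE, 3) float array -> (NUM_FRAMES, feature_dim)."""
        frames = tf.keras.applications.inception_v3.preprocess_input(frames)
        return feature_extractor.predict(frames, verbose=0)

    # Sequence model over the per-frame features.
    sequence_model = tf.keras.Sequential([
        layers.Input((NUM_FRAMES, feature_dim)),
        layers.GRU(64, return_sequences=True),
        layers.GRU(32),
        layers.Dropout(0.4),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    sequence_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                           metrics=["accuracy"])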
st206857
Recently we have added MoViNets for Action Recognition on Mobile: github.com/tensorflow/models, under official/vision/beta/projects/movinet.
st206858
Hi folks, here is my latest Keras example showing how to build a video classifier using a hybrid Transformer model. First, we process the video frames using a pre-trained CNN, and then we use a Transformer-based model to operate on the CNN feature maps for modeling the temporal relationships. Keras documentation: Video Classification with Transformers. The example shows the model's predictions alongside a GIF of the input video. P.S.: I am a Cricket fanatic. Sachin Tendulkar is my favorite batsman and Shane Warne is my favorite bowler.
st206859
Last year, with a couple of folks from the TFLite community, I put together the following repository to show non-trivial TFLite conversion workflows: GitHub - sayakpaul/Adventures-in-TensorFlow-Lite. This repository contains notebooks that show the usage of TensorFlow Lite for quantizing deep neural networks. Many things are possible with just a few TFLite API calls (yes, it's that well-designed). The best part is the code won't even scare you into thinking about the non-trivial nature of these conversion workflows. Two other repositories you should look into in this regard: tulasiram58827/ocr_tflite and tulasiram58827/TTS_TFLite. The above two are probably the first of their kind.
st206860
Easyflow is a simple interface containing easy Keras/TF model building blocks and feature encoding pipelines. This module contains functionality similar to what sklearn does with its Pipeline, FeatureUnion and ColumnTransformer. See link below for pypi package: Easy Tensorflow: An interface containing easy tensorflow model building blocks and feature encoding pipelines 15 Hope this can be useful.
st206861
I've created a 14-hour tutorial introduction to deep learning fundamentals + TensorFlow. It covers: the ins and outs of Google Colab; TensorFlow basics (tensor creation, tensor manipulation, tensor aggregation); deep learning fundamentals (preparing data, fitting a model, loss functions, optimization functions); regression models; classification models (non-linearities, loss functions, optimizers). It's designed to be as hands-on as possible, apprenticeship-style, with as few prerequisites as possible. A watcher of the videos learns concepts whilst coding along live. So far the videos have been viewed nearly 100,000 times. I think they're a great introduction to deep learning and TensorFlow. Links: Code on GitHub (the videos cover notebooks 00, 01, 02); Videos on YouTube.
st206862
We created PerceptiLabs, a TensorFlow-based visual modeling tool. It’s free to use and can be installed by: $ pip install perceptilabs Here is a tutorial/demo on building a model that predicts the age of a human face: PerceptiLabs Live Coding: Age prediction (again) For more information, please visit our documentation: Welcome - PerceptiLabs 7 Would love to get your feedback
st206863
This year, Google introduced MLP-Mixer, an architecture based on multilayer perceptrons (MLPs) and Mixer layers. Each Mixer layer consists of two MLPs: one for token mixing (mixing information across spatial locations) and another for channel mixing (mixing per-location features). This architecture yields competitive results against models which use convolutions and vision transformers. Using a similar approach, I've tried using MLP-Mixers for text classification, applying them to embeddings of shape max_length * embedding_dims. The architecture is similar to what is mentioned in the paper, except for some changes in how the text sequences are fed to the Mixer layers. I've used this model in the Kaggle competition "Natural Language Processing with Disaster Tweets" and it achieves an accuracy of 73.95%, which is comparable to that of a model using 1D convolutions. The Kaggle notebooks: "Tweet Classification With MLP-Mixers (TF-Keras)" and "Tweet Classification with 1D Convolutions".
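For anyone who wants the shape of a Mixer block at a glance, here is a rough Keras sketch of one block operating on a (batch, tokens, channels) tensor (an illustration, not the notebook's exact code; the sizes are placeholders):

    import tensorflow as tf
    from tensorflow.keras import layers

    def mlp(x, hidden_units, out_units):
        x = layers.Dense(hidden_units, activation="gelu")(x)
        return layers.Dense(out_units)(x)

    def mixer_block(x, tokens, channels, token_hidden=64, channel_hidden=256):
        # Token-mixing MLP: acts across the token axis (spatial mixing).
        y = layers.LayerNormalization()(x)
        y = layers.Permute((2, 1))(y)        # (batch, channels, tokens)
        y = mlp(y, token_hidden, tokens)     # mix along the token dimension
        y = layers.Permute((2, 1))(y)        # back to (batch, tokens, channels)
        x = layers.Add()([x, y])

        # Channel-mixing MLP: acts across the channel axis (per-token mixing).
        y = layers.LayerNormalization()(x)
        y = mlp(y, channel_hidden, channels)
        return layers.Add()([x, y])

    # Example: embeddings of shape (max_length, embedding_dims) from an Embedding layer.
    max_length, embedding_dims, vocab_size = 128, 64, 20000
    inputs = layers.Input((max_length,), dtype="int32")
    x = layers.Embedding(vocab_size, embedding_dims)(inputs)
    x = mixer_block(x, tokens=max_length, channels=embedding_dims)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)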
st206864
Karl Weinmeister and I co-authored this blog post that discusses important concepts in Vertex AI [1]. It also shows you how to run a simple TensorFlow training job using Vertex AI: "Streamline your ML training workflow with Vertex AI" on the Google Cloud Blog. It's really nice to see how well TensorFlow integrates with Google Cloud. First, there's AI Platform. Second, there's Vertex AI, which provides simpler APIs with more flexibility. Third, there's TensorFlow Cloud. It's even nicer that TFX can fit into most of these workflows. [1] Vertex AI  |  Google Cloud
st206865
Great post Sayak! There's also this great one about using #TFHub models with Vertex AI for inference: "Serve a TensorFlow Hub model in Google Cloud with Vertex AI" on the Google Cloud Blog, which shows how to make open-source TensorFlow Hub models ready for production by hosting them with Google Cloud's Vertex AI. I'm very happy with how easy it is to create a REST API. What do you think?
st206866
Thank you for sharing this. Loved reading this. I really like the way even these things are getting easier day by day for anyone to pick up. Loving Vertex AI’s unified offerings.
st206867
You don't have to use AI tech like TensorFlow.js for AI only. I enjoyed adding GPU acceleration to my JavaScript to make a wild project. Want to know what it is? Watch the promo video I made: TimeWarpScan.me - Fun with TensorFlow.js. Building with JavaScript is quite rewarding. I hope you get the feel of it, too. Just watch us enjoy the site on a Walkthrough Wednesday with friends. I've been fortunate enough to talk about this at Magnolia JS conf and other places! It's a fantastic open-source project for people learning and looking to contribute during their #100DaysOfCode. The site is here: http://timewarpscan.me/ The source code is here: timewarp/timewarpscanme at main · GantMan/timewarp · GitHub. The site is hosted on AWS Amplify and written in React + TensorFlow.js. And of course, if you want to be a master of TensorFlow.js and make your own fun projects, here's the obligatory book link on amazon.com: Learning TensorFlow.js
st206868
I am pleased to present to the community a series of reports on "Attention". Here we tackle the beautiful concept of attention and try to visualize it. The list of reports in order: https://bit.ly/att-one https://bit.ly/att-two https://bit.ly/att-three https://bit.ly/att-four Repository: GitHub - cr0wley-zz/nmt: Neural Machine Translation. There one can find interactive plots which make the concept as intuitive as possible. The work is in collaboration with @Devjyoti_Chakraborty
st206869
I'm happy to share that my first paper got published in the International Journal of Scientific Research and Technology (Volume 7, Issue 2, March-April 2021). The paper was about detecting masks using TensorFlow.js and Arduino. Read more about it here: https://ijsret.com/wp-content/uploads/2021/03/IJSRET_V7_issue2_261.pdf Thanks!
st206870
Q Learning is a Reinforcement Learning algorithm in which an agent performs a specific action and gets a reward for it. The goal of the agent is to maximize the reward. Most of us would have tried Q Learning with OpenAI’s Gym, a Python package which provides various environments for training RL agents. Trying a different approach, I have implemented Q Learning in Kotlin ( which can be used in an Android app ) where our agent is trained in an environment identical to OpenAI Gym’s Frozen Lake Environment. The FrokenLakeEnv.kt class provides methods similar to OpenAI Gym, like step(), reset() and actionSpaceSample(). One can see the agent’s progress trying to reach its goal. The detailed discussion on the project and also on some basics of Q Learning is included in the story, Medium – 14 Oct 20 Q Learning With The Frozen Lake Environment In Android 7 Explore Q Learning with the Frozen Lake environment, all in Android! Reading time: 10 min read The GitHub project, github.com shubham0204/QLearning_With_FrozenLakeEnv_Android 3 Explore Q Learning with the Frozen Lake Environment 🥶 in Android. As a ML + Android developer, I consistently try to bring ML models into languages like Kotlin and Java so that edge devices can make use of them efficiently.
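The project itself is in Kotlin, but the core update it implements is the standard tabular Q-learning rule, which is language-agnostic. A minimal Python sketch for a Frozen-Lake-style environment (the env object is assumed to expose Gym-like reset/step methods):

    import numpy as np

    def q_learning(env, n_states, n_actions, episodes=2000,
                   alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = np.zeros((n_states, n_actions))
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                # Epsilon-greedy action selection.
                if np.random.rand() < epsilon:
                    action = np.random.randint(n_actions)
                else:
                    action = int(np.argmax(Q[state]))
                next_state, reward, done, _ = env.step(action)
                # Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
                target = reward + gamma * np.max(Q[next_state]) * (not done)
                Q[state, action] += alpha * (target - Q[state, action])
                state = next_state
        return Q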
st206871
The FaceNet model has been widely adopted by the ML community for face recognition tasks. A number of Python packages are available which can be used to leverage the powers of FaceNet. We have used the FaceNet model to produce 128-D embeddings for each face captured in the live camera feed, so as to perform face recognition in an Android app. Recognition follows the traditional approach of computing the Euclidean distance between the embeddings (or computing the cosine of the angle between them). The Keras model of FaceNet is first converted to a TensorFlow Lite model (using the TFLiteConverter API), which is then used in the Android app. To perform face detection, we use Firebase ML Kit's FaceDetector. Here's the GitHub project: shubham0204/FaceRecognition_With_FaceNet_Android - Face Recognition using FaceNet and Firebase MLKit on Android.
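The comparison step described above is simple enough to sketch independently of the app code (the distance threshold below is a placeholder you would tune on your own data):

    import numpy as np

    def euclidean_distance(emb_a, emb_b):
        return float(np.linalg.norm(emb_a - emb_b))

    def cosine_similarity(emb_a, emb_b):
        return float(np.dot(emb_a, emb_b) /
                     (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

    def is_same_person(emb_a, emb_b, dist_threshold=1.0):
        """Two 128-D FaceNet embeddings are declared a match if they are close enough."""
        return euclidean_distance(emb_a, emb_b) < dist_threshold

    # Usage: compare the embedding from the live camera frame against the stored
    # embeddings of known people and pick the closest one under the threshold.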
st206872
Awesome! Thank you for taking the time to share this useful information. I am learning how to set up and coordinate IoT sensors and switches for controlling humidity, air exchange, and lighting in my fruiting chamber for gourmet mushrooms. I intend to make this network of sensors and controllers secure, so FaceNet would be perfect for faster access when I connect from outside the LAN. I will definitely do more research about the FaceNet model. Thanks for the link!
st206873
Here's my implementation of MLP-Mixer, the all-MLP architecture for computer vision without any use of convolutions or self-attention: GitHub - sayakpaul/MLP-Mixer-CIFAR10 - Implements MLP-Mixer (https://arxiv.org/abs/2105.01601) with the CIFAR-10 dataset. Here's what is included: distributed training with mixed precision; visualization of the token-mixing MLP weights; a TensorBoard callback to keep track of the learned linear projections of the image patches. Results are quite competitive, with room for improvement in interpretability.
st206874
I made a basic model in TensorFlow.js to show me how to make dice art. I've been able to make basic logos out of dice! If anyone is interested in making a more advanced model with the data, I'd love to collaborate. Until then, check this out! Here's the blog post, clap if you like it: "Dicify AI - Making art from science using AI in TensorFlow.js" (Medium, 19 May 2021).
st206875
Oh yeah! I forgot to mention, it's hanging on @Jason's wall! I loved this project. I do think it could be even better! I have lots of JavaScript code for the project, and it's the capstone of my book.
st206876
Spatial Transformer Networks (STN) have been around since 2015, but I haven't found an easy-to-follow example of them for #Keras. On the other hand, Kevin Zakka's implementation of STN is by far one of the cleanest ones, but it's purely in TensorFlow 1. So, I decided to take the utility functions from his implementation and prepare an end-to-end example in #Keras out of it. You can find it here: GitHub - sayakpaul/Spatial-Transformer-Networks-with-Keras. This repository provides a Colab Notebook that shows how to use Spatial Transformer Networks inside CNNs built in Keras, along with a TensorBoard callback that helps visualize the progression of the transformations learned by the STN during training. Notice how the STN module is able to figure out transformations for the dataset that may be helpful to boost its performance: https://user-images.githubusercontent.com/22957388/115120399-e8084b80-9fca-11eb-97e1-c72228c3edc4.mov
st206877
Thanks, Sayak! More concrete examples of how to combine the myriad techniques and tools help the folks who come after you. Re: the TensorBoard line in the notebook, have you played with tensorboard.dev? If you changed that line, then anyone could upload to a public hosted TensorBoard and share links to their specific runs. Here's a deep link to a demo Colab showing how to upload to tensorboard.dev directly, if you're interested.
st206878
ben: Have you played with tensorboard.dev? If you changed that line then anyone could upload to a public hosted TensorBoard and they could share links to their specific runs. here’s a deep link to a demo colab showing how to upload to tensorboard.dev directly if you’re interested. The last time I tried it (which is not very long ago) image data wasn’t supported.
st206879
I believe it is now (but am not an expert). Here are some relevant docs: Displaying image data in TensorBoard  |  TensorFlow 20
st206880
It's supported in TensorBoard, which is what I have demonstrated in my example above. At the time of that development, it wasn't supported on tensorboard.dev.
st206881
We were talking about this tensorboard issue: "Tensorboard dev support image display" (github.com/tensorflow/tensorboard, opened May 5, 2020, closed May 6, 2020, type:support): "Currently I am writing gifs to my Tensorboard and am able to write and load them successfully using tf.summary.experimental.write_raw_pb when running Tensorboard locally. However upon trying Tensorboard dev, it will only display my scalars and there is no tab available for images that I would have normally."
st206882
Probably there is still some hope that it is on the roadmap. You could upvote and subscribe to this tensorboard issue: "No images in plugin for tensorboard.dev" (github.com/tensorflow/tensorboard, opened Sep 8, 2020, type:feature): "I am trying to upload and host my tensorboard results, but I cannot upload images to Tensorboard.dev. This is vital to my work; when is this planned to be implemented?"
st206883
One popular approach for reducing the resource requirements at test time is neural network pruning. This means systematically removing parameters (neurons, connections, etc.) from an existing network to try to reduce its size. The TensorFlow Model Optimization Toolkit makes it very easy to apply various optimization strategies such as weight pruning, quantization, and weight clustering. For example, this code snippet can be used to prune the model weights to 30% sparsity:

    pruning_params = {
        'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.3, 0),
        'block_size': (1, 1),
        'block_pooling_type': 'AVG'
    }

    model_thirty = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)
    log_dir_thirty = tempfile.mkdtemp()

    callbacks = [
        tfmot.sparsity.keras.UpdatePruningStep(),
        tfmot.sparsity.keras.PruningSummaries(log_dir=log_dir_thirty),
        WandbCallback(data_type="image", validation_data=(x_valid, y_valid), save_model=True)
    ]

    model_thirty.compile(...)
    model_thirty.fit(...)

But what about its impact on model performance? Not only top-level metrics such as top-k accuracy, but also its performance on the underrepresented classes in the dataset. Well, in a paper titled "What Do Compressed Deep Neural Networks Forget?" by Sara Hooker et al., the authors tackled this question. Check out my minimal reproducibility study verifying the claims of the paper. To minimally reproduce the results, instead of using the ResNet-18, I ran multiple experiments with the InceptionV3 architecture with a pruning schedule of constant sparsity s ∈ {0, 0.3, 0.5, 0.7, 0.9, 0.99}, block size of (1, 1), and average block pooling, implemented using the TensorFlow Model Optimization Toolkit. The models were trained for binary image classification (blonde vs non-blonde), which is an under-represented group in the CelebA dataset (what is sometimes referred to as the "long tail" in the literature). I did not experiment with quantization in my work. GitHub repository: SauravMaheshkar/Compressed-DNNs-Forget - Experiments with Compression of Deep Neural Networks. There is also a Weights & Biases report and a web application. Would love to hear some feedback from the community.
st206884
This is a thread to collect and recognize outstanding (unofficial) TensorFlow tutorials (or blog posts that are mainly about specific usage of TensorFlow) from around the web. I’ll start with a seed. I found Effective TensorFlow 2 14 to be a solid and concise tutorial. It has a nice flow and covers interesting gotchas and pitfalls.
st206885
I’d like to add this tutorial from keras.io 3: Natural language image search with a Dual Encoder 8. It implements a model inspired by CLIP 5 using TensorFlow and TF Hub.
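For readers who want the gist before opening the tutorial, here is a rough sketch (my own, not the tutorial's exact code) of the symmetric contrastive objective a dual encoder like this typically optimizes; the embedding tensors and the temperature value are assumptions:

```python
import tensorflow as tf

def contrastive_loss(image_embeddings, text_embeddings, temperature=0.05):
    # L2-normalize so the dot product is a cosine similarity.
    image_embeddings = tf.math.l2_normalize(image_embeddings, axis=1)
    text_embeddings = tf.math.l2_normalize(text_embeddings, axis=1)

    # Pairwise similarities; the matching image/text pair sits on the diagonal.
    logits = tf.matmul(image_embeddings, text_embeddings, transpose_b=True) / temperature
    labels = tf.range(tf.shape(logits)[0])

    loss_i2t = tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)
    loss_t2i = tf.keras.losses.sparse_categorical_crossentropy(labels, tf.transpose(logits), from_logits=True)
    return tf.reduce_mean(loss_i2t + loss_t2i) / 2.0
```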
st206886
I have a 14-hour introduction to TensorFlow & deep learning series on YouTube I’d love to share but I can’t share links here yet haha
st206887
Awesome idea, @deeb! Deep reinforcement learning tutorials, such as this one: Policy Gradients are Easy in Tensorflow 2 | Complete Deep Reinforcement Learning Tutorial | - YouTube 6 (check out the Machine Learning with Phil channel on YouTube). The instructor also covers various other policy-gradient-based methods, Soft Actor Critic (SAC) methods, and deep Q-learning algorithms with TensorFlow.
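To give a flavor of what those tutorials cover, here is a minimal REINFORCE-style policy-gradient update sketched in TensorFlow 2 (my own illustration, not code from the channel); the `policy` network, state size, action count, and the rollout tensors are all placeholders:

```python
import tensorflow as tf

policy = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),  # logits over 2 discrete actions
])
optimizer = tf.keras.optimizers.Adam(1e-3)

def reinforce_step(states, actions, returns):
    # states: [N, 4] floats, actions: [N] integer indices, returns: [N] floats.
    with tf.GradientTape() as tape:
        logits = policy(states)
        neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=actions, logits=logits)
        # Weight each action's negative log-probability by its return.
        loss = tf.reduce_mean(neg_log_prob * returns)
    grads = tape.gradient(loss, policy.trainable_variables)
    optimizer.apply_gradients(zip(grads, policy.trainable_variables))
    return loss
```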
st206888
If you are into self-supervised learning, you already know that “representation collapse” is a real problem and is difficult to get around with simple methods. But not anymore! Barlow Twins introduces a simple training objective that implicitly guarantees representation collapse does not happen. Here’s my TensorFlow implementation: github.com sayakpaul/Barlow-Twins-TF 21 TensorFlow implementation of Barlow Twins (https://arxiv.org/abs/2103.03230). With a ResNet20 as the trunk, a 3-layer MLP (each layer containing 2048 units) and 100 epochs of pre-training, I got 62.61% accuracy on the CIFAR10 test set. The pre-training takes a total of ~23 minutes on a single Tesla V100. Note that this pre-training does not make use of any labeled samples. There’s a Colab Notebook inside, so feel free to tweak it, experiment with it, and let me know. Happy to address any feedback.
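For context, the core of that training objective fits in a few lines. Below is a hedged sketch of the Barlow Twins loss (see the repo for the exact implementation), where `z_a` and `z_b` are the projector outputs for the two augmented views and `lambda_coeff` is the usual off-diagonal weight:

```python
import tensorflow as tf

def barlow_twins_loss(z_a, z_b, lambda_coeff=5e-3):
    batch_size = tf.cast(tf.shape(z_a)[0], z_a.dtype)

    # Normalize each embedding dimension across the batch.
    z_a_norm = (z_a - tf.reduce_mean(z_a, axis=0)) / (tf.math.reduce_std(z_a, axis=0) + 1e-8)
    z_b_norm = (z_b - tf.reduce_mean(z_b, axis=0)) / (tf.math.reduce_std(z_b, axis=0) + 1e-8)

    # Cross-correlation matrix (D x D) between the two views.
    c = tf.matmul(z_a_norm, z_b_norm, transpose_a=True) / batch_size

    # Push the diagonal toward 1 and the off-diagonal toward 0.
    on_diag = tf.reduce_sum(tf.square(1.0 - tf.linalg.diag_part(c)))
    off_diag = tf.reduce_sum(tf.square(c)) - tf.reduce_sum(tf.square(tf.linalg.diag_part(c)))
    return on_diag + lambda_coeff * off_diag
```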
st206889
Thank you for the kind words, Daniel. Have been following your work since late 2018 and you have been an inspiring figure.
st206890
PAWS 3 introduces a way to combine a small fraction of labeled samples with unlabeled ones during the pre-training of vision models. With its simple and unique approach, it sets SOTA in semi-supervised learning, and does so with far less compute and far fewer parameters. Here’s my implementation of PAWS in TensorFlow: github.com sayakpaul/PAWS-TF 15 Minimal implementation of PAWS (https://arxiv.org/abs/2104.13963) in TensorFlow.

For the benefit of the community, I have included all the major bits needed to make PAWS work. These recipes are largely applicable for training self-supervised and semi-supervised models at scale:

- Multi-crop augmentation policy (helps a network systematically learn local-to-global mappings)
- Class-stratified sampling
- WarmUpCosine LR schedule
- Training with the LARS optimizer (with the correct hyperparameter choices)

Additionally, I have included a Colab Notebook 8 that walks through the multi-crop augmentation method, since it can seem daunting when you work it out for the first time. The results are pretty promising. I encourage you, folks, to check it out.
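To illustrate just the multi-crop idea mentioned above, here is a simplified sketch (my own, not the repo's code, which uses a fuller random-resized-crop policy); it assumes the input image is at least `global_size` pixels along each spatial dimension:

```python
import tensorflow as tf

def multi_crop(image, global_size=224, local_size=96, num_global=2, num_local=6):
    """Return a few large 'global' crops and several small 'local' crops of one image."""
    global_crops = [
        tf.image.random_flip_left_right(
            tf.image.random_crop(image, (global_size, global_size, 3)))
        for _ in range(num_global)
    ]
    local_crops = [
        tf.image.random_flip_left_right(
            tf.image.random_crop(image, (local_size, local_size, 3)))
        for _ in range(num_local)
    ]
    return global_crops, local_crops
```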
st206891
Hi folks, I wanted to share my new work with Pin-Yu Chen 11 (IBM Research), “Vision Transformers are Robust Learners”. For some time now, Transformers have taken the vision world by storm. In this work, we question the robustness aspects of Vision Transformers. Specifically, we investigate the question: by virtue of self-attention, can Vision Transformers provide improved robustness to common corruptions, perturbations, etc.? If so, why? We build on top of existing works & investigate the robustness aspects of ViT. Through a series of six systematically designed experiments, we present analyses that provide both quantitative & qualitative indications to explain why ViTs are indeed more robust learners. Paper: [2105.07581] Vision Transformers are Robust Learners 6 Code: https://git.io/J3VO0 4
st206892
Sayak_Paul: [2105.07581] Vision Transformers are Robust Learners Congratulations - amazing work on ViT research.
st206893
It is a common belief that if we constrain vision models to perceive things as humans do, their performance can be improved. For example, in this work 9, Geirhos et al. showed that vision models pre-trained on the ImageNet-1k dataset are biased toward texture, whereas human beings mostly use shape cues to develop a common perception. But does this belief always apply, especially when it comes to improving the performance of vision models? Learn more in this post: keras.io Keras documentation: Learning to Resize in Computer Vision 23
st206894
Interesting! There’s something funny with the visualizations; they shouldn’t be that saturated. convert_image_dtype isn’t working because your input is already a float. Fixed: Copy_of_learnable_resizer.ipynb - Google Drive 10 Also: It’s clearer if you show the before and after for each image. It’s easier to follow if you create a get_resizer function that returns a concrete resizer model, then just use that.
st206895
Thanks, Mark! I can make the changes and create a PR. There’s something funny with the visualizations; they shouldn’t be that saturated. convert_image_dtype isn’t working because your input is already a float. Thanks for catching it. I didn’t realize that if my inputs are already floats, I can’t use convert_image_dtype() to scale the pixels to [0, 1]. Maybe casting the dtype to int after the resizing step (since tf.image.resize() casts to float) would be easier.
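For anyone reading along, here is a small sketch of the gotcha being discussed (my own illustration, not from the notebook): tf.image.convert_image_dtype only rescales when the dtype actually changes, so passing an already-float image through it is effectively a no-op.

```python
import tensorflow as tf

# Float pixels in [0, 255], e.g. what comes out of tf.image.resize() on uint8 images.
image = tf.random.uniform((64, 64, 3), maxval=255.0)

# No rescaling happens here because the input is already float32.
same = tf.image.convert_image_dtype(image, tf.float32)

# Two ways to get something a plotting function will render correctly:
scaled = image / 255.0                                             # explicit rescale to [0, 1]
as_uint8 = tf.cast(tf.clip_by_value(image, 0.0, 255.0), tf.uint8)  # cast back to integer pixels
```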
st206896
@markdaoust FYI: github.com/keras-team/keras-io [Learnable Resizing] Fixes to the resizer utility and better visualization keras-team:master ← sayakpaul:resizing opened May 13, 2021 sayakpaul +132 -56
st206897
Happy to open-source my TensorFlow implementation of Denoised Smoothing. It provides provable robustness for pre-trained image classification models (including cloud APIs) against L2 attacks. github.com sayakpaul/Denoised-Smoothing-TF 12 Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.
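At a high level, the idea looks roughly like the sketch below (my own simplification; see the repo for the faithful implementation): a trained denoiser is prepended to a fixed classifier, and predictions are averaged over Gaussian-noised copies of the input. `denoiser` and `classifier` are placeholder Keras models, and the classifier is assumed to output probabilities.

```python
import tensorflow as tf

def smoothed_predict(denoiser, classifier, image, sigma=0.25, num_samples=100):
    """Monte Carlo estimate of the smoothed classifier's prediction for one image."""
    # image: a single HWC tensor with pixel values in [0, 1].
    images = tf.repeat(image[tf.newaxis, ...], num_samples, axis=0)
    noisy = images + tf.random.normal(tf.shape(images), stddev=sigma)
    denoised = denoiser(noisy, training=False)
    probs = classifier(denoised, training=False)
    return tf.reduce_mean(probs, axis=0)
```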
st206898
Hi folks, I hope you are doing well. I wanted to let you know about my new example on keras.io 8: keras.io Keras documentation: Consistency Training with Supervision 7 It shows how to perform consistency regularization to improve the performance of vision models under distribution shifts. It also provides a template for conducting semi-supervised / weakly supervised learning. With this minimal example, I got promising improvements on CIFAR-10-C 1. My full-scale experiments are here 6. As always, happy to address feedback.
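As a rough orientation before diving into the example, here is a hedged sketch of what the consistency objective typically looks like (not the exact keras.io code): a teacher model sees weakly augmented images, the student sees strongly augmented ones, and the student is trained to match both the labels and the teacher's predictions. Both models and the augmented batches are placeholders.

```python
import tensorflow as tf

ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
kl = tf.keras.losses.KLDivergence()

def consistency_training_loss(teacher_model, student_model, weak_images,
                              strong_images, labels, consistency_weight=1.0):
    # Teacher predictions act as soft targets and receive no gradients.
    teacher_probs = tf.stop_gradient(teacher_model(weak_images, training=False))
    student_probs = student_model(strong_images, training=True)

    supervised = ce(labels, student_probs)          # usual supervised term
    consistency = kl(teacher_probs, student_probs)  # agreement between the two views
    return supervised + consistency_weight * consistency
```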
st206899
Here’s my implementation of Generalized ODIN, a framework to detect OOD samples without exposing a network to outliers during training. Code’s in TensorFlow and comes with several Colab Notebooks for folks to try out. Happy to address feedback. github.com sayakpaul/Generalized-ODIN-TF 26 TensorFlow 2 implementation of the paper Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data (https://arxiv.org/abs/2002.11297). - sayakpaul/General...
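For readers unfamiliar with the method, below is a hedged sketch of the dividend/divisor ("DeConf") head that Generalized ODIN builds on (the repo has the faithful version): class scores h(x) are divided by a scalar confidence g(x), and at test time max h(x) can serve as the OOD score. The specific layer choices here are placeholders.

```python
import tensorflow as tf

def deconf_head(features, num_classes):
    """Build the h(x) / g(x) decomposition on top of penultimate features."""
    # h(x): per-class scores.
    h = tf.keras.layers.Dense(num_classes, name="dividend_h")(features)

    # g(x): a scalar confidence squashed into (0, 1).
    g = tf.keras.layers.Dense(1)(features)
    g = tf.keras.layers.BatchNormalization()(g)
    g = tf.keras.layers.Activation("sigmoid", name="divisor_g")(g)

    # Final logits, trained with the usual softmax cross-entropy.
    logits = tf.keras.layers.Lambda(lambda t: t[0] / t[1], name="logits")([h, g])
    return logits, h, g
```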