st206700
In this case, should we refrain from adding these models to keras.applications?
st206701
I think that adding a model to keras.applications, the Model Garden, or TF Hub involves different scope requirements.
st206702
Going on with my series of blog posts about Advent of Code 2021 in pure TensorFlow. Here's how I solved the day 11 problem with a bit of computer vision and a queue. P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 11. The day 11 problem has lots in common with day 9. In fact, we'll re-use some computer vision concepts like the pixel neighborhood, and we'll be able to solve both parts in pure TensorFlow by using only a tf.queue as a support data structure.
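For readers who haven't used tf.queue before, here is a minimal hedged sketch (my own illustration, not the article's code) of how a FIFO queue can back this kind of traversal: enqueue coordinates to visit, dequeue until the queue is empty. The shapes and dtypes are illustrative assumptions.
'''
import tensorflow as tf

# A FIFO queue holding (row, col) coordinates still to be processed.
queue = tf.queue.FIFOQueue(capacity=1024, dtypes=[tf.int64], shapes=[(2,)])
queue.enqueue(tf.constant([0, 0], dtype=tf.int64))
queue.enqueue_many(tf.constant([[1, 2], [3, 4]], dtype=tf.int64))

# Drain the queue: queue.size() returns a scalar tensor.
while tf.greater(queue.size(), 0):
    coord = queue.dequeue()
    tf.print("visiting", coord)
'''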
st206703
I'm pleased to share that it's now possible to share ANY Keras model on the Hub. The integration with huggingface_hub is pretty easy to use; here's how it looks for the Sequential API: [code screenshot]. In addition, I put together a video walking you through how to use it to push and pull the denoising autoencoder example from the docs! You can follow along in this Colab notebook to try it out for yourself. Let me know what you think!
st206704
[Figures: augmentation examples, small-model benchmark, total benchmark, benchmark, convergence curve, decoupled head, speed and accuracy]
st206705
Going on with the Advent of Code solutions in pure TensorFlow. Here's how I solved the day 10 puzzle, about syntax checking and autocomplete, in a very straightforward way. P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 10. The day 10 challenge projects us into the world of syntax checkers and autocomplete tools. In this article, we'll see how TensorFlow can be used as a generic programming language for implementing a toy syntax checker and autocomplete. TensorFlow used as a generic programming language is really versatile!
st206706
Here is my implementation of the "Conformer: Convolution-augmented Transformer" paper. It compensates for the Transformer's limited local inductive bias and achieves the best of both worlds (Transformers for content-based global interactions and CNNs to exploit local features) by combining convolutional neural networks and Transformers to model both local and global dependencies. GitHub: Rishit-dagli/Conformer - an implementation of Conformer: Convolution-augmented Transformer for Speech Recognition, a Transformer variant, in TensorFlow/Keras.
st206707
Hi! I am unable to figure out how to use the cars196 labeled dataset for object detection. Can you please help me out?
st206708
Hi folks, excited to share what Soumik and I have been working on for the past few days. We present an implementation of GauGAN [1] in Keras: GauGAN for conditional image generation. Our core focus was on readability, but we are happy to see it performing quite well too. We are also announcing this repository, where we will publish results with bigger datasets: GitHub - soumik12345/tf2_gans: Implementations of GANs in TensorFlow 2.x. During training, GauGAN learns to generate images that are conditioned on semantic segmentation maps and latents learned from cue images ("Reference Image" in the GIF in the post). What a great way to extend Variational Autoencoders, isn't it! For those who are wondering: yes, this is the architecture we saw a few years back, with which people drew some simple paintings and generated images from them [2]. Thanks to @fchollet for helping us with the reviews. References: [1] https://arxiv.org/abs/1903.07291 [2] SPADE Project Page
st206709
The day 9 challenge can be seen as a computer vision problem. TensorFlow contains some computer vision utilities that we'll use - like the image gradient - but it's not a complete framework for computer vision (like OpenCV). Anyway, the framework offers primitive data types like tf.TensorArray and tf.queue that we can use for implementing a flood-fill algorithm in pure TensorFlow and solve the problem. P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 9.
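As a small illustrative sketch (not the article's code), tf.image.image_gradients computes per-pixel dy/dx on a 4-D batch, which is the kind of utility mentioned above; the toy height map below is a made-up input.
'''
import tensorflow as tf

# Toy 1-channel "height map" with batch and channel dims added (4-D input required).
heightmap = tf.random.uniform((1, 10, 10, 1), maxval=10, dtype=tf.float32)
dy, dx = tf.image.image_gradients(heightmap)

# Gradient magnitude highlights basin edges; the article combines this with a flood fill.
magnitude = tf.abs(dy) + tf.abs(dx)
tf.print("mean gradient magnitude:", tf.reduce_mean(magnitude))
'''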
st206710
I end the year by sharing my solution to the day 8 puzzle of #AdventOfCode 2021. The day 8 challenge is the most boring challenge faced so far; day 9, instead, has been fun! I solved it with lots of computer vision concepts, so stay tuned for the next one! P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 8. The day 8 challenge is, so far, the most boring challenge faced 😅. Designing a TensorFlow program - hence reasoning in graph mode - would have been too complicated since the solution requires lots of conditional branches. A known AutoGraph limitation... Any feedback is appreciated.
st206711
Just to share a project where I use TensorFlow Keras to create and train a convolutional neural network for analyzing grayscale video images and estimating the position of objects on the floor. This model is then ported to an ARM Cortex-M7 microcontroller board with a low-resolution camera. The system can be used in small mobile robots, as detailed in the blog below. [Note: the controller I am using, Microchip's SAMS70J20, is not supported by TFLite, so I need to create my own workflow.] fkeng.blogspot.com: Implementing Convolutional Neural Network (CNN) in DIY Machine Vision Module... In this post I will share my journey in implementing a convolutional neural network (CNN) in my DIY machine vision module (MVM). The MVM mo...
st206712
The Advent of Code day 7 challenge is easily solvable in pure TensorFlow with the help of ragged tensors and a couple of observations that simplify the problem. Here's my article on how I solved the puzzle. P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 7. The day 7 challenge is easily solvable with the help of TensorFlow ragged tensors. In this article, we'll solve the puzzle while learning what ragged tensors are and how to use them.
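If you haven't met ragged tensors yet, here is a tiny standalone sketch (illustrative values, not the puzzle input) showing why they are handy for rows of different lengths:
'''
import tensorflow as tf

# Rows with different lengths: a regular tf.Tensor cannot hold this without padding.
positions = tf.ragged.constant([[16, 1, 2], [0, 4], [2, 7, 1, 2, 14]])

print(positions.row_lengths())           # [3, 2, 5]
print(tf.reduce_sum(positions, axis=1))  # per-row sums, no padding tricks needed
'''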
st206713
I'm going on with the challenge of solving Advent of Code puzzles in pure TensorFlow. Here's how I solved the day 6 puzzle. P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 6. The day 6 challenge has been the first one that obliged me to completely redesign, for part 2, the solution I developed for part 1. For this reason, in this article, we'll see two different approaches to the problem. The former will be computationally... The tutorial shows how to use the experimental mutable hashtables and how to face the problem in two different ways. I write the articles some days after solving the problems; so far I have solved problems 1 to 10 in pure TensorFlow, so stay tuned for - at least - 4 more articles!
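For context, here is a minimal sketch of the experimental mutable hashtable API mentioned above (the keys and values are made up, not the puzzle's):
'''
import tensorflow as tf

# A mutable key -> value store usable inside pure-TensorFlow programs.
table = tf.lookup.experimental.MutableHashTable(
    key_dtype=tf.int64, value_dtype=tf.int64, default_value=-1)

keys = tf.constant([0, 1, 2], dtype=tf.int64)
values = tf.constant([10, 20, 30], dtype=tf.int64)
table.insert(keys, values)

print(table.lookup(tf.constant([1, 5], dtype=tf.int64)))  # [20, -1]
'''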
st206714
Hello there everyone, I hope all are doing well! Check out my implementation of the paper "Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation": single-image low-light enhancement with deep learning. The paper proposes zero-reference loss functions which do not require any reference enhanced image; the model learns to enhance the image directly, with only the low-light image as input. The Zero-DCE++ model can be used for real-time low-light video enhancement. Get the detailed implementation and results on my GitHub repo. GitHub repo: Zero-DCE and Zero-DCE++. [Enhanced result images with alpha maps: Zero-DCE and Zero-DCE++ Lite] If you find this work useful, drop a star and do cite this repository if it helps in your project or research.
st206715
I'm going on with the challenge. The day 5 challenge is easily solvable in pure TensorFlow thanks to its support for various distance functions and the power of the tf.math package. The problem only requires some basic math knowledge to be completely solved - and a little bit of computer vision experience doesn't hurt. Any feedback is welcome! P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 5.
st206716
Glad to share that our paper (w/ @ali) is now out: "CPPE-5: Medical Personal Protective Equipment Dataset", in which we introduce a new challenging image dataset with the goal of allowing the study of subordinate categorization of medical PPE, unlike any other existing dataset. Furthermore, you can easily get started using this dataset with the tutorials in the code repository, or use one of the 15+ SoTA models (TF Lite and TFJS variants too) from the model zoo for this dataset (some top-performing models are in the process of being contributed to TF Hub). Repo: GitHub - Rishit-dagli/CPPE-Dataset: CPPE-5 (Medical Personal Protective Equipment) is a new challenging object detection dataset. Paper: https://arxiv.org/abs/2112.09569
st206717
Using tensors for representing and manipulating data is very convenient. This representation allows changing shape, organizing, and applying generic transformations to the data. TensorFlow - by design - executes all the data manipulation in parallel whenever possible. The day 4 challenge is a nice showcase of how choosing the correct data representation can easily simplify a problem… and play bingo! P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 4.
st206718
In a ViT setup, dividing an image of (256, 256) into patches of (8, 8) gets you 1024 patches to deal with. In our (w/ @Sayak_Paul) new Keras example we implement Adaptive Space-Time Tokenization for Videos by Ryoo et al., which helps reduce the token count from 1024 to 8. What makes it even more interesting is that there is no performance drop from reducing the number of tokens. The paper (and our example) reports fewer FLOPs along with a boost in downstream results.
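To make the token-count arithmetic concrete, here is a small sketch (independent of the example's code) that extracts non-overlapping 8x8 patches from a 256x256 image with tf.image.extract_patches, yielding the 1024 tokens mentioned above:
'''
import tensorflow as tf

images = tf.random.uniform((1, 256, 256, 3))  # a dummy batch of one RGB image
patches = tf.image.extract_patches(
    images=images,
    sizes=[1, 8, 8, 1],
    strides=[1, 8, 8, 1],
    rates=[1, 1, 1, 1],
    padding="VALID",
)
# (256 / 8) ** 2 = 1024 patches, each flattened to 8 * 8 * 3 = 192 values.
print(patches.shape)  # (1, 32, 32, 192)
'''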
st206719
Sensoria. Video link. Link to Sensoria: https://sensoria.herokuapp.com/ GitHub: enter AlexShafir/Sensoria in the GH search field. This project is a demo of P2P WebRTC communication in 3D space using TFJS Face-Landmarks-Detection for face & iris tracking. What you will see is the unfiltered result of TensorFlow Facemesh, with the WASM (CPU) computation backend and the MediaPipe Facemesh model. Concept: once I was at a massive online Zoom event, and all the video "rectangles" made me feel a bit disconnected. So I became curious whether there are alternative solutions that do not require VR goggles and still provide an immersive experience. Projecting video as a 2D rectangle into 3D space still breaks immersion, so I opted for TFJS Face-Landmarks-Detection for face & iris tracking. How to try: you can open Sensoria in another browser tab to simulate a conversation (it will run 2x slower though, due to two processing threads). The Chrome browser is recommended. The model was originally trained for smartphones, so you should be close to the camera for the best result. The virtual camera has a fixed location, while the face mesh moves freely around it. Eyes currently have a fixed size due to scaling problems.
st206720
As I anticipated, I started this series on how to use TensorFlow for solving the Advent of Code 2021 puzzles. In this article, I show how I solved the day 3 puzzle. You'll learn about TensorArrays, how to use them, and I'll also present a huge limitation this data type has when used inside a static-graph context. Moreover, I'll show how and why to use a tf.function experimental (but very useful) feature for avoiding useless retraces and reusing the same graph with tensors of different shapes. I have already solved days 4 and 5, hence two new articles will come soon! P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 3.
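As a hedged illustration of the retracing problem discussed in the article (this is my own sketch, not the blog's code, and the function name is made up): giving tf.function an input signature with an unknown dimension is one way to reuse a single graph for tensors of different lengths instead of retracing for each new shape.
'''
import tensorflow as tf

# One graph reused for 1-D int64 tensors of any length, so no retrace per new shape.
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.int64)])
def most_common_bit(column):
    ones = tf.reduce_sum(column)
    return tf.cast(2 * ones >= tf.size(column, out_type=tf.int64), tf.int64)

print(most_common_bit(tf.constant([1, 0, 1], dtype=tf.int64)))
print(most_common_bit(tf.constant([1, 1, 0, 0, 1], dtype=tf.int64)))  # same graph
'''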
st206721
I am glad to present a TensorFlow implementation of DeepMind's new "Perceiver: General Perception with Iterative Attention" model, which builds on top of Transformers but solves the quadratic scaling problem without making the data-specific assumptions of previous approaches. This means you can use the same model on images, audio, videos, etc.! The model also achieves state-of-the-art results on some tasks! Find it here: github.com/Rishit-dagli/Perceiver - Implementation of Perceiver, General Perception with Iterative Attention, in TensorFlow
st206722
Nice, we also have a tutorial/example for image classification with Perceiver on the Keras docs website: keras.io - Image Classification with Perceiver. If you have any feedback/PRs to improve the tutorial, they are very appreciated.
st206723
As I anticipated, I started this series on how to use TensorFlow for solving the Advent of Code 2021 puzzles. In this article, I show how to use Python enums in TensorFlow (and the limitation we face, since we can only use TensorFlow-compatible data types), how type annotations for TensorFlow programs are limited, and I also briefly introduce the DeepMind project TensorAnnotations. I solved the puzzle too. The day 2 challenge was very similar to the challenge of day 1, but I have already solved the challenges for days 3 and 4 and they are way more interesting, so two new articles will come soon! P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 2. A solution to the AoC day 2 puzzle in pure TensorFlow: how to use enums in TensorFlow programs and the limitations of tf.Tensor used for type annotation.
st206724
Hi everyone! Even if it's a bit late (today is the 11th day, but I'm starting from day 1), I decided to start solving all the puzzles of the Advent of Code (AoC) using TensorFlow, designing pure TensorFlow programs. I'm doing this for fun, but also to show that solving a coding puzzle with TensorFlow doesn't mean throwing fancy machine learning stuff (without any reason) at the problem. On the contrary, I want to demonstrate the flexibility - and the limitations - of the framework, showing that TensorFlow can be used to solve any kind of problem and that the produced solutions have tons of advantages with respect to solutions developed in any other programming language. Here's the article I wrote about my solution for the day 1 challenge; more will come. P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 1. Let me know your thoughts!
st206725
@deep-diver and I have worked on an MLOps project for the past couple of months. It shows how "Continuous Adaptation for ML Systems to Data Changes" can be done by building and interconnecting two separate pipelines (note this project is done with TFX and various GCP services). We have written a blog post about some of the internal implementation details, and it is published on the TensorFlow Blog. Please find it here: blog.tensorflow.org - Continuous Adaptation for Machine Learning System to Data Changes - learn how ML models can continuously adapt as the world changes, avoid issues, and take advantage of new realities in this guest blog post. Also, we have open-sourced all the materials to reproduce this project, including in-depth explanations within a set of Jupyter notebooks. You can find the repo here: GitHub - deep-diver/Continuous-Adaptation-for-Machine-Learning-System-to-Data-Changes. @Robert_Crowe, huge thanks for your help on this one. Thanks for your valuable time reading this; we hope this will be helpful.
st206726
Thanks, it is a nice tutorial. It could be interesting one day to expand this to cover:
- some state-of-the-art continual learning approaches instead of retraining the whole model;
- handling the drift in an open-set/open-world context instead of just a misclassification threshold on the closed-set classes.
st206727
Thanks for the suggestions! As you likely know, many SoTA approaches don't hold up that well when exposed to real-world data, but we will investigate and dig deeper. We acknowledge (we do this in the post itself too) that JS divergence (just a measurement) could have been used to capture the drift too, but we wanted to follow another path. In the meantime, PRs are welcome.
st206728
Yes, and also in the not-so-"extreme" cases like continual learning and open-set recognition, the active learning topic is always around the corner, even with a more "static" model but dynamic data pipelines: Jacob Gildenblat - 21 Feb 20 - Overview of Active Learning for Deep Learning - an overview of different active learning algorithms for deep learning.
st206729
An overview of some of the required features is in https://arxiv.org/abs/2106.03122.
st206730
Thanks @Bhack for the further information. The materials you shared will definitely expand our knowledge space and let us think about the next project to work on. We are recently interested in two topics:
- monitoring data drift by comparing datasets (without model predictions), for instance with JS divergence (a quick sketch follows below);
- combining two CI/CD MLOps systems to open-source a complete MLOps system. Note that this doesn't mean covering every use case, but providing a complete kickstarter for one specific use case: CI/CD to adapt to codebase changes (done in the previous project) and CI/CD to adapt to data changes.
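A rough sketch of the drift measurement mentioned in the first bullet (my own illustration, assuming SciPy is available): compare normalized histograms of a feature across two dataset snapshots with the Jensen-Shannon distance.
'''
import numpy as np
from scipy.spatial.distance import jensenshannon

def feature_drift(reference, current, bins=30):
    # Histogram both snapshots on a shared binning, then compare the distributions.
    edges = np.histogram_bin_edges(np.concatenate([reference, current]), bins=bins)
    p, _ = np.histogram(reference, bins=edges, density=True)
    q, _ = np.histogram(current, bins=edges, density=True)
    return jensenshannon(p, q)  # 0 = identical distributions, larger = more drift

print(feature_drift(np.random.normal(0, 1, 5000), np.random.normal(0.5, 1, 5000)))
'''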
st206731
I also think that some state-of-the-art learning challenges are going to impact MLOps: https://arxiv.org/abs/2111.01956
st206732
While there is a lot of ground to cover in "distributed training with TPUs", I have written a blog post which helps anyone get started. My latest PyImageSearch blog post covers the following details:
- hardware used for DL (CPUs, GPUs, and TPUs);
- an efficient data pipeline for TPUs (using tf.data);
- a primer on distributed training.
Link: Fast Neural Network Training with Distributed Training and Google TPUs - PyImageSearch
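For anyone who wants the gist before reading the post, here is a minimal hedged sketch of the usual TPU setup in TensorFlow (Colab/Cloud TPU style; the dataset and model below are placeholders, not the post's code):
'''
import tensorflow as tf

# Connect to the TPU cluster and build a distribution strategy.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# An efficient tf.data input pipeline: large batches, prefetching.
features = tf.random.uniform((1024, 32))
labels = tf.cast(tf.random.uniform((1024, 1)) > 0.5, tf.float32)
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(1024)
           .batch(128, drop_remainder=True)
           .prefetch(tf.data.AUTOTUNE))

# Variables must be created under the strategy scope.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(dataset, epochs=2)
'''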
st206733
I implemented the recent NeurIPS '21 paper "Transformer in Transformer" in TensorFlow, which uses attention inside local patches, essentially pairing pixel-level attention with patch-level attention. This also achieves SoTA performance on image classification, beating ViT and DeiT at similar computational cost. GitHub: Rishit-dagli/Transformer-in-Transformer - an implementation of Transformer in Transformer for image classification, attention inside local patches.
st206734
github.com/keras-team/keras-io: PR "Update an example on transformer_in_transformer" (keras-team:master ← czy00000:master, opened Nov 6, 2021 by czy00000, +357 -0): solves the problem of PR #687.
st206735
Hi folks, I hope you are doing well. Since last year, the computer vision community has experienced a boom in self-supervised pretraining methods for images. While most of these methods share common recipes (augmentation, projection head, LR schedules, etc.), reconstruction-based pretraining methods differ from them and make the process simpler and more scalable. One such method is Masked Autoencoders, released a couple of days ago by FAIR. Today, we (@ariG23498 and myself) are happy to share a pure TensorFlow implementation of the method along with commentary and promising results: Masked image modeling with Autoencoders. It is along similar lines to BERT's pretraining objective: masked language modeling. Some advantages of this method:
- Does not rely upon sophisticated augmentation transforms
- Easy to implement (barring a few nuts and bolts)
- Considerably faster pre-training
- Implicit handling of representation collapse
- On par with SoTA for self-supervision in the field of computer vision
I hope you folks will find the article useful and, as always, we are happy to answer questions.
st206736
Great work on this, Sayak; this will definitely be useful for many. Small thing, but the Keras page is future-dated. But this is so cool it might as well be in the future.
st206737
Have you ever wondered how to convert your dark images into better photos? Now you can do it in one second, with the help of the Zero-DCE model. The TF Lite model is available on TensorFlow Hub, so you can effortlessly deploy this deep learning model on your edge devices to get amazing images as output. Thanks to Soumik Rakshit for building the Zero-DCE model, and to Sayak Paul and Tulasi Ram Laghumavarapu for guiding me while contributing to TensorFlow Hub. TensorFlow Hub link: TensorFlow Hub. References: Zero-DCE paper: https://arxiv.org/pdf/2001.06826.pdf; Zero-DCE original repository: GitHub - soumik12345/Zero-DCE: PyTorch implementation of Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement; Zero-DCE Keras example: Zero-DCE for low-light image enhancement; Zero-DCE TF-Lite repository: GitHub - sayannath/Zero-DCE-TFLite: Conversion of TF-Lite model from the Zero-DCE model. PS: My first contribution to TensorFlow Hub!
st206738
Neural style transfer as proposed by Gatys et al. was a slow and iterative method that could not transfer style in real time. With Adaptive Instance Normalization we achieve arbitrary style transfer in real time. Our (with @Ritwik_Raha) Keras blog post on AdaIN got published. Link: Neural Style Transfer with AdaIN. The post is also accompanied by a Hugging Face demo for you all to try the models out. Hugging Face demo: Neural Style Transfer using AdaIN - a Hugging Face Space by ariG23498. Hope this example turns out to be a good addition for the community. Results: [style transfer results image]
st206739
Thanks @lgusm for your kind words and suggestion. Because this was an example, I did not train the model with enough data to get high-quality results (as one might notice in the Hugging Face demo). I will definitely contribute to the Hub once I have a better model to share. Thanks again!
st206740
Hi folks, continuous integration and deployment (CI/CD) is a common topic of discussion when it comes to DevOps. No wonder it has also become so for MLOps. With MLOps, though, we have another piece of continuity: continuous re-training and evaluation. In our latest two-part article from the GCP blog, @deep-diver and I dive deep into incorporating CI/CD for ML with TensorFlow, TFX, and Vertex AI, along with other services from GCP. We take the scenario where we need to incorporate code changes (be it for better training techniques or better model architectures) into an ML system and perform CI/CD in a meaningful manner. Below are the links: Model training as a CI/CD system (Part I): Code | Blog Post; Model training as a CI/CD system (Part II): Code | Blog Post. Happy to answer questions.
st206741
Solved by deep-diver in post #9.
st206742
Thanks for sharing, this is very helpful! I have a question: in partial-pipeline-deployment.yaml, in the "Create Pipeline" step that uses the TFX CLI, why use the create command and not update? If the pipeline has already been created, it will raise an error that the pipeline already exists and will not update it; the run create would then be the same, or no?
st206743
@Sayak_Paul The CI/CD tool (Cloud Build or GitHub Actions here) always starts from a clean state with a fresh container. That means there is no existing pipeline, but you can use update if you are using a dedicated stateful server.
st206744
Active learning is an incremental process of learning. In this process, we initially annotate and train on a small subset of the unlabeled data pool and then query the model for what data it would want to train on in the future. This is done iteratively until business metrics are met. In my recent tutorial on Keras, I implement a ratio-based sampling technique and demonstrate its usefulness on a toy IMDB dataset. Some salient features of this tutorial:
- The active learning method of training achieved similar results to a standard training loop while eliminating the unnecessary annotation of ~10,000 labels!
- This method of sampling slightly balances out the false positives and false negatives, which is beneficial for businesses that require balanced performance for both labels.
- The tutorial serves as a comprehensive introduction to active learning, with plenty of resources for beginners.
If you're interested, then check it out: Review Classification using Active Learning
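To illustrate the query step in isolation (this is a generic uncertainty heuristic on made-up data, not the tutorial's ratio-based sampler), the loop boils down to: train, score the unlabeled pool, and send the least-confident samples for annotation.
'''
import numpy as np

rng = np.random.default_rng(0)
unlabeled_pool = rng.normal(size=(1000, 16))        # dummy unlabeled features

def predict_proba(x):
    # Stand-in for model.predict(...); returns P(positive class).
    return 1 / (1 + np.exp(-x @ rng.normal(size=16)))

probs = predict_proba(unlabeled_pool)
uncertainty = -np.abs(probs - 0.5)                  # closest to 0.5 = least confident
query_indices = np.argsort(uncertainty)[-100:]      # 100 samples to annotate next

print("samples to send for labeling:", query_indices[:10])
'''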
st206745
Adaptive Instance Normalization was a great read. Here the authors argue that instance normalization is indeed style normalization. With that in mind, they provide a faster and more versatile approach to neural style transfer. My take: GitHub - ariG23498/AdaIN-TF: Minimal implementation of AdaIN with TensorFlow
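For readers new to the idea, here is a compact sketch of the AdaIN operation itself (my own minimal version, not the repo's code): normalize the content features with their instance statistics, then rescale them with the style features' statistics.
'''
import tensorflow as tf

def adain(content, style, eps=1e-5):
    # Instance statistics: per-sample, per-channel mean/variance over H and W.
    c_mean, c_var = tf.nn.moments(content, axes=[1, 2], keepdims=True)
    s_mean, s_var = tf.nn.moments(style, axes=[1, 2], keepdims=True)
    normalized = (content - c_mean) / tf.sqrt(c_var + eps)
    return normalized * tf.sqrt(s_var + eps) + s_mean

content_feats = tf.random.normal((1, 32, 32, 64))
style_feats = tf.random.normal((1, 32, 32, 64))
print(adain(content_feats, style_feats).shape)  # (1, 32, 32, 64)
'''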
st206746
Built a Hugging Face Space for this project. Link: Neural Style Transfer using AdaIN - a Hugging Face Space by ariG23498. Worked on it with @Ritwik_Raha.
st206747
Hi folks, I am pleased to share my latest blog post with you: Distributed Training in TensorFlow with AI Platform & Docker (Sayak Paul, 6 Apr 21). It will walk you through the steps of running distributed training in TensorFlow with AI Platform training jobs and Docker. Below, I explain the motivation behind this blog post. If you are conducting large-scale training, it is likely that you are using a powerful remote machine via SSH access. So, even if you are not using Jupyter Notebooks, problems like SSH pipe breakage, network teardown, etc. can easily occur. Consider using a powerful virtual machine on the cloud as your remote. The problem gets far worse when there's a connection loss but you somehow forget to turn off that virtual machine, which keeps consuming resources. You get billed for practically nothing when the breakdown happens, unless you have set up some amount of alerting and fault tolerance. To resolve these kinds of problems, we would want the following things in the pipeline:
- A training workflow that is fully managed by a secure and reliable service with high availability.
- The service should automatically provision and de-provision the resources we ask it to configure, allowing us to only get charged for what's been truly consumed.
- The service should also be very flexible. It must not introduce too much technical debt into our existing pipelines.
Happy to address any feedback.
st206748
Sure. I should add this to my post. But then again, the moment one would try to change to a different framework, tensorflow-cloud would likely break.
st206749
Hi there! I also would like to share a great article which can be useful for you. It is on cloud-agnostic vs cloud-native: how to get the most out of your cloud adoption approach. Here is the link: https://www.avenga.com/magazine/cloud-agnostic-vs-cloud-native/
st206750
In the recent Google Landmark Recognition 2021 Kaggle competition, the 1st-place winner Dieter shared the winning solution. One of the models he used in his approach was the EfficientNet-DOLG model, shown below: [architecture diagram]. Official PyTorch implementation: Code. Unofficial Keras implementation: Code.
st206751
I contributed this collection containing 6 different ConvMixer models that were pre-trained on the ImageNet-1k dataset, available for fine-tuning as well as image classification. Further, the models are accompanied by a tutorial to help you get started in <5 minutes. ConvMixer is a simple model that uses only standard convolutions to achieve the mixing steps. Despite its simplicity, ConvMixer outperforms ViT and MLP-Mixer. ConvMixer relies directly on patches as input, separates the mixing of spatial and channel dimensions, and maintains equal size and resolution throughout the network. https://tfhub.dev/rishit-dagli/collections/convmixer The associated GitHub repo can be found here: Rishit-dagli/ConvMixer-torch2tf - this repository hosts code for converting the original ConvMixer models (PyTorch) to TensorFlow. You might want to take a look at a ConvMixer implementation by @Sayak_Paul here and by @sayannath235 here.
st206752
This is very cool, Rishit! Thanks for contributing to TF Hub and helping the community get access to newer models!
st206753
Combining the benefits of convolutions (for spatial relationships) and transformers (for global relationships) is an emerging research trend in computer vision. In my latest example, I present the MobileViT architecture (Mehta et al.), which offers a simple yet unique way to reap the benefits of the two. With about a million parameters, it achieves a top-1 accuracy of ~86% on the tf_flowers dataset at 256x256 resolution. Furthermore, the training recipes are simple and the model runs efficiently on mobile devices (which is atypical for transformer-based models). keras.io: MobileViT: A mobile-friendly Transformer-based model for...
st206754
A "point cloud" is an important type of data structure for storing geometric shape data. Due to its irregular format, it's often transformed into regular 3D voxel grids or collections of images before being used in deep learning applications, a step that makes the data unnecessarily large. In our latest example (Soumik and myself), we present PointNet (from 2017), which solves this problem by directly consuming point clouds, respecting the permutation-invariance property of the point data. Additionally, we're working on a comprehensive repository on performing point cloud segmentation at scale with full TPU support. Keras example: Point cloud segmentation with PointNet. GitHub repo: https://git.io/Jidna
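The permutation-invariance trick mentioned above boils down to applying a shared point-wise MLP followed by a symmetric aggregation (max pooling) over the point axis; a minimal hedged sketch (layer sizes and class count are arbitrary, not the example's):
'''
import tensorflow as tf

num_points = 2048
inputs = tf.keras.Input(shape=(num_points, 3))           # (x, y, z) per point

# Shared MLP: a Conv1D with kernel size 1 applies the same weights to every point.
x = tf.keras.layers.Conv1D(64, kernel_size=1, activation="relu")(inputs)
x = tf.keras.layers.Conv1D(128, kernel_size=1, activation="relu")(x)

# Symmetric function over points: the output is identical for any point ordering.
global_feature = tf.keras.layers.GlobalMaxPooling1D()(x)

outputs = tf.keras.layers.Dense(10, activation="softmax")(global_feature)
model = tf.keras.Model(inputs, outputs)
model.summary()
'''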
st206755
Here goes my next Keras example, all about implementing Swin Transformers, a general-purpose backbone for computer vision. Swin Transformer is a Transformer-based vision model that uses local (windowed) self-attention to make self-attention on images linear in complexity. I go on to demonstrate using it for image classification on CIFAR-100. keras.io: Image classification with Swin Transformers
st206756
PS: After your wonderful suggestion, @lgusm, I am already working on publishing the trained model on TF Hub.
st206757
How would I use my own small dataset for binary classification instead of the CIFAR-100 dataset?
st206758
I have recently finalized my project and decided to share it with you. It's a rock, paper, scissors game powered by the MobileNetV2 model and deployed completely serverless with TensorFlow.js: romaglushko.com - Rock, Paper, Scissors Game - Lab by Roman Glushko - a rock, paper, scissors online game powered by machine learning. There is a demo of the game where you can try it, and the full story of how I built the project. Hope you will have a great time checking it out!
st206759
Welcome to the forum and thank you for sharing your TensorFlow.js demo! Always great to see what folks are up to. I actually did something similar a few years back (though not with TensorFlow.js) where I recorded a GIF of myself and was able to make it look like you were playing a real person - you may be able to update your version to do something similar too: Rock Paper Scissors - Machine Learning Style using TensorFlow. You can see in the bottom left the live view in the last second when you show your hands, and the large GIF is the "computer". My version used websockets to transmit the final frame to a web server for classification, which is certainly less elegant than a pure TensorFlow.js solution, but that did not exist back when I made this. Looking forward to seeing how your project (or future projects) evolve! Thanks for being part of the #madeWithTFJS community!
st206760
What happens when we apply similar pure convolution blocks on patches of images? We can train a network with 0.8 million parameters for 10 epochs on CIFAR-10 and get ~83% top-1 test accuracy without having to use any fancy regularization. ConvMixer (the architecture recently talked about on Twitter): keras.io - Image classification with ConvMixer. There are a few visualizations of the internals of ConvMixer that might be useful for the community: the learned patch embeddings, and a convolution kernel from the middle of the network showing varying locality spans.
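For reference, the mixing step the example builds on is roughly the following block (a hedged paraphrase of the paper's idea, not the exact code from the Keras example): a residual depthwise convolution for spatial mixing followed by a pointwise convolution for channel mixing.
'''
import tensorflow as tf
from tensorflow.keras import layers

def conv_mixer_block(x, filters, kernel_size=5):
    # Spatial mixing: depthwise convolution with a residual connection.
    residual = x
    x = layers.DepthwiseConv2D(kernel_size=kernel_size, padding="same")(x)
    x = layers.Activation("gelu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([x, residual])

    # Channel mixing: 1x1 (pointwise) convolution.
    x = layers.Conv2D(filters, kernel_size=1)(x)
    x = layers.Activation("gelu")(x)
    x = layers.BatchNormalization()(x)
    return x

inputs = tf.keras.Input(shape=(32, 32, 256))  # already patch-embedded features
outputs = conv_mixer_block(inputs, filters=256)
print(tf.keras.Model(inputs, outputs).output_shape)
'''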
st206761
It is a common practice to use the same input image resolution while training and testing vision models. However, as investigated in Fixing the train-test resolution discrepancy (Touvron et al.), this practice leads to suboptimal performance. Data augmentation is an indispensable part of the training process of deep neural networks. For vision models, we typically use random resized crops during training and center crops during inference. This introduces a discrepancy in the object sizes seen during training and inference. As shown by Touvron et al., if we can fix this discrepancy, we can significantly boost model performance. In the following example, we implement the FixRes techniques introduced by Touvron et al. to fix this discrepancy. keras.io: FixRes: Fixing train-test resolution discrepancy
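The crux of the discrepancy can be seen in a few lines; a hedged sketch (resolutions and preprocessing are illustrative, not the example's exact pipeline): random resized crops at train time versus a plain resize at a larger resolution at test time.
'''
import tensorflow as tf

TRAIN_RES, TEST_RES = 224, 280  # illustrative values; FixRes fine-tunes at the larger one

def train_preprocess(image):
    # Random crop after resize: objects appear larger on average than at test time.
    image = tf.image.resize(image, (256, 256))
    image = tf.image.random_crop(image, (TRAIN_RES, TRAIN_RES, 3))
    return tf.image.random_flip_left_right(image)

def test_preprocess(image):
    # A plain resize at a higher resolution narrows the train/test object-size gap.
    return tf.image.resize(image, (TEST_RES, TEST_RES))

dummy = tf.random.uniform((512, 512, 3))
print(train_preprocess(dummy).shape, test_preprocess(dummy).shape)
'''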
st206762
Congrats to our latest TensorFlow Community Spotlight winners, @deep-diver and @Sayak_Paul! Check out their project here: goo.gle/3AnYwEO. From the TensorFlow Twitter account (8 Oct 2021): "🏅👏 Let's give it up for our #TFCommunitySpotlight Winners and ML GDEs, @algo_diver and @risingsayak! Chansung and Sayak's project demonstrates a workflow to cover dual model deployment scenarios using TFX, Kubeflow and Vertex AI. Learn more → https://t.co/4YFfmJfi1S" If you have a cool TensorFlow project you'd like us to review for a chance to be featured on our #TFCommunitySpotlight channel and win some TF swag, you can submit it here: goo.gle/tfcs
st206763
Sharing a project which exposes TFLite models via a REST API provided by FastAPI, with the goal of making TFLite easier to integrate with existing projects. I used this to integrate TFLite models into the home automation platform Home Assistant by recreating the API endpoints provided by Deepstack. I also have a fork where the Coral TPU stick is supported. Check out the links below: GitHub - robmarkcole/tensorflow-lite-rest-server: Expose tensorflow-lite models via a rest API. GitHub - robmarkcole/coral-pi-rest-server: Perform inferencing of tensorflow-lite models on an RPi with acceleration from a Coral USB stick.
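For anyone curious what such a server looks like at its core, here is a minimal hedged sketch (the endpoint name, input format, and model path are my own placeholders, not the repo's actual API):
'''
from typing import List

import numpy as np
import tensorflow as tf
from fastapi import FastAPI

app = FastAPI()

# Load the TFLite model once at startup; "model.tflite" is a placeholder path.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

@app.post("/predict")
def predict(values: List[float]):
    # Reshape the JSON payload to the model's expected input and run inference.
    data = np.array(values, dtype=np.float32).reshape(input_details[0]["shape"])
    interpreter.set_tensor(input_details[0]["index"], data)
    interpreter.invoke()
    return {"output": interpreter.get_tensor(output_details[0]["index"]).tolist()}
'''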
st206764
Hi everyone, as part of Kaggle's "Tabular Playground Series - Sep 2021", I created a notebook using TensorFlow Decision Forests. If you're interested, please check it out and let me know if you have any ideas on how to make it better and enhance it. colab.research.google.com: Google Colaboratory. Regards, Fadi Badine
st206765
Hi Fadi, nice colab! Since you asked about it, here are a couple of nits that might be interesting to try:
1. Installing TF-DF (i.e. pip install tensorflow_decision_forests) prints a lot of things. You can mask some of it as follows:
!pip install tensorflow_decision_forests -U -q
2. By default, Colabs run on two small CPUs (try running !cat /proc/cpuinfo). However, by default, TF-DF trains on 6 threads (see the "num_threads" constructor argument). It would be interesting to see the speed of training with only 2 threads.
3. The two approaches differ in two ways: different l1_regularization values and the replacement of missing values by the mean in the second approach. Apart from this, both approaches are equivalent and are expected to give similar results within training noise (which might already be the case: 0.81345 ~= 0.81343). You can compute the confidence bounds or a t-test to be fancy :).
4. For long training, it might be interesting to print the training logs (while training). This can be done as follows:
!pip install wurlitzer -U -q
from wurlitzer import sys_pipes
with sys_pipes():
  model.fit(...)
Note that at some point, this will be done automatically depending on the verbose parameter.
Thanks for sharing the colab. Since you had hands-on practice with the library, do you mind me asking you about your experience? For example, did you face some hard-to-debug errors, or was some of the library's behavior surprising? Cheers, M.
st206766
Mathieu: "!pip install tensorflow_decision_forests -U -q"
Thanks Mathieu! I will apply the changes that you proposed. However, regarding point 2, I did not quite understand: do you mean I should set num_threads = 2? As for my experience with the library, it was smooth and easy. I did not face any unusual behaviour, and the tutorials and online documentation helped me a lot. The only thing I faced and was unable to solve was when using KerasTuner to search for the best hyperparameters: the kernel kept crashing (locally, on Colab and on Kaggle). It did not crash at the beginning but started happening later, so I was not able to figure it out because nothing had changed. But I think this is more related to KerasTuner.
st206767
@Mathieu, I do not know how to calculate the confidence bounds or t-test … any hint or example if possible please?
st206768
"I should set num_threads = 2?" Yes. And make sure this is faster (otherwise revert back).
"Tuner" Good point. I'll see what I can do. Maybe TF-DF could catch some of these issues (maybe caused by an incompatible configuration) and give informative error messages… A nice example was created by a user.
"Confidence bounds or t-test" Confidence intervals (CI) on the accuracy can be computed with the Wilson score interval, and a CI on the AUC can be computed with the Hanley et al. method. Alternatively, a CI can be computed on any metric using bootstrapping (i.e. you resample your predictions with replacement and estimate the CI empirically; I am a big fan of this). Note that those CIs will not contain the training noise (unless you use some form of repeated training / cross-validation). The McNemar test is probably suited for accuracy on paired data (which should be the case here, i.e. you use the same test dataset on both candidate models).
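Since a concrete snippet was asked for, here is a tiny bootstrap sketch (my own illustration with fake labels and predictions, not TF-DF code) for putting a confidence interval on accuracy:
'''
import numpy as np

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)                          # fake ground truth
y_pred = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)  # ~85%-accurate fake model

accuracies = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))        # resample with replacement
    accuracies.append(np.mean(y_true[idx] == y_pred[idx]))

low, high = np.percentile(accuracies, [2.5, 97.5])
print(f"accuracy = {np.mean(y_true == y_pred):.3f}, 95% CI = [{low:.3f}, {high:.3f}]")
'''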
st206769
Thanks @Mathieu for your reply and sorry for my belated response. Yes, I have seen the example; in fact, Ekaterina created it based on a question that I started (link). I will try to set the number of threads to 2 and will add the McNemar test. Thanks!! Regards, Fadi Badine
st206770
Hi there. I developed a model to find similar questions or answers via contrastive learning, and I think I got a good result in a new way, so I want to share my code and results here. The idea is very simple: similar questions should be answered similarly. So I built a model that encodes question texts and finds appropriate answers with a contrastive objective. After training, I could find similar questions using the trained encoder. GitHub: jeongukjae/question-similarity - find similar questions via contrastive learning.
st206771
@Divvya_Saxena I removed the help_request tag because this post is not for requesting help. But thanks for updating the other tags!
st206772
I am releasing a comprehensive repository containing 30+ notebooks on Python programming, data manipulation, data analysis, data visualization, data cleaning, classical machine learning, computer vision, and natural language processing (NLP). Here is the link for the repo; the easiest way of reviewing the notebooks is through Nbviewer. For the Deep Learning with TensorFlow-specific repo, here is the link, and you can review those notebooks here.
st206773
Are you doing great things with TensorFlow? If you have a cool TensorFlow project you'd like us to review for a chance to be featured on our #TFCommunitySpotlight channel and win some TF swag, you can submit it here: goo.gle/tfcs. Check out our most recent winner's project from @Sayak_Paul at goo.gle/3AxVn5Y. From the TensorFlow Twitter account (19 Aug 2021): "🏅Congratulations to #TFCommunitySpotlight winner, Sayak Paul (ML GDE)! Sayak's project provides a systematic way to compress bigger models into smaller ones allowing developers to serve them at lower costs. Great job! Check it out → https://t.co/R4zeX6oOOn"
st206774
I didn’t know that it received the award :o I thought I’d get tagged. But anyway, thank you!
st206775
@deep-diver and I have been constantly working on MLOps projects, and we want to share one of our latest works with you: "Dual Deployments on Vertex AI". We leverage several components from the ML tooling provided by Google, such as TensorFlow, TFX, Vertex AI, Cloud Build, and so on. Dual deployment is a common machine learning design pattern. As described in the blog post, it can be applied to two scenarios: online/offline predictions and layered predictions. In the blog post, we introduce two different approaches to writing a machine learning pipeline that realizes the dual deployment pattern:
- TFX + custom model based approach: DenseNet for cloud and MobileNet V3 for mobile deployments.
- KFP + GCP's AutoML based approach: AutoML for cloud and AutoML Edge for mobile deployments.
In both cases, you can find out how custom components can be written. At the time of writing the blog post, TFX didn't support uploading/hosting trained models on Vertex AI, so we wrote a component for that ourselves, and for mobile deployment we wrote a custom component to publish the TFLite model to Firebase ML. If you find this brief description interesting, please find more information here: GCP Blog Post: Dual deployments on Vertex AI | Google Cloud Blog; GitHub Repo: https://github.com/sayakpaul/Dual-Deployments-on-Vertex-AI
st206776
Congrats Sayak and Chansung for the detailed post!! The TF community is just amazing!!!
st206777
To get good-quality language-agnostic sentence embeddings, LaBSE is a good choice. But due to its parameter count (a BERT-base-sized architecture, yet 471M parameters), it is hard to fine-tune and deploy appropriately on a small GPU/machine. So I applied the method from the paper "Load What You Need: Smaller Versions of Multilingual BERT" to get a smaller version of LaBSE, and I was able to reduce LaBSE's parameters to 47% of the original without a big performance drop, using TF Hub and tensorflow/models. GitHub: https://github.com/jeongukjae/smaller-labse Related links: Language-agnostic BERT Sentence Embedding (LaBSE) (Paper: [2007.01852] Language-agnostic BERT Sentence Embedding, TF Hub: TensorFlow Hub); Load What You Need: Smaller Versions of Multilingual BERT (Paper: [2010.05609] Load What You Need: Smaller Versions of Multilingual BERT, GitHub: https://github.com/Geotrend-research/smaller-transformers)
st206778
Nice work, Jeon!! Does the preprocessing model still work with your model, or is there still a need for it? Did you think about publishing this to TF Hub too?
st206779
Thank you! The preprocessing model is exported using the modified vocab file, so this model can be used with the updated preprocessing model, not the original one (you can check make_smaller_labse.py#L37). I hadn't thought about publishing this model, because I didn't train it, just patched it. Is it okay to publish?
st206780
Yes, I think you should!! Of course, mention the base model in the description and so on. I'd also publish the updated preprocessing model to keep consistency.
st206781
This is perfect!! Thanks! Keep me posted, I'd love to try it on this Colab: Classify text with BERT | Text | TensorFlow
st206782
And it's live: TensorFlow Hub. Well done!! I'd update the documentation to link to your preprocessing model too! Great work!
st206783
Hi there. I want to introduce a Korean text datasets library based on TensorFlow Datasets. I added several Korean text datasets and built a catalog page using GitHub Pages and Jekyll. GitHub: https://github.com/jeongukjae/tfds-korean Dataset Catalog: Dataset Catalog | tfds-korean
st206784
New example on building a near-duplicate image search utility. It comprises an image classifier, bit-wise LSH (locality-sensitive hashing), random projection, and TensorRT-optimized inference to drastically reduce the query time. LSH and random projection are shown with from-scratch implementations so readers can better understand them. keras.io: Near-duplicate image search
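The random-projection hashing at the heart of this kind of utility fits in a few lines; a minimal standalone sketch (random data, arbitrary hash size, not the example's code):
'''
import numpy as np

rng = np.random.default_rng(0)
embedding_dim, hash_size = 2048, 8

# Random hyperplanes: the sign of each projection gives one bit of the hash.
hyperplanes = rng.normal(size=(embedding_dim, hash_size))

def lsh_hash(embedding):
    bits = (embedding @ hyperplanes > 0).astype(int)
    return "".join(map(str, bits))  # e.g. "10110010" -> bucket key

query = rng.normal(size=embedding_dim)
near_duplicate = query + 0.01 * rng.normal(size=embedding_dim)
print(lsh_hash(query), lsh_hash(near_duplicate))  # usually land in the same bucket
'''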
st206785
With sincere guidance from Arjun Gopalan of the Neural Structured Learning team, we just published a tutorial on using structural similarity to regularize the training of deep neural networks for image data: github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/graph_keras_cnn_flowers.ipynb. Under the hood, we represent the images of a given dataset as a synthesized graph wherein the nodes are the images, edges connect each image to its neighbors, and edge weights denote their similarity. It works with Keras and introduces minimal technical overhead.
st206786
Hi Weizhi. Can you explain in more detail what issue you are facing? Maybe with some code.
st206787
With a considerable amount of help from Willi Gierke, we were able to port the original .npz weights of the new BiT-ResNet models [1] to TensorFlow and Keras. A Colab notebook showing the porting and all the other details is available here [2]. The converted models are available on TF Hub as well: [TF Hub collection screenshot]. These models are a part of the recent work done by Google Brain on knowledge distillation [3]. References [1] Big Transfer (BiT): General Visual Representation Learning by Kolesnikov et al. [2] BiT-jax2tf: https://github.com/sayakpaul/BiT-jax2tf [3] Knowledge distillation: A good teacher is patient and consistent by Beyer et al.
st206788
I am glad to present my implementation of the "Fastformer: Additive Attention Can Be All You Need" paper. This is a Transformer variant based on additive attention that can handle long sequences efficiently with linear complexity. Fastformer is much more efficient than many existing Transformer models while achieving comparable or even better long-text modeling performance. GitHub: Rishit-dagli/Fast-Transformer - an implementation of Fastformer: Additive Attention Can Be All You Need in TensorFlow.
st206789
Nice work! If you have the trained model, maybe you could also publish it on TensorFlow Hub.
st206790
Thanks, @lgusm I haven’t yet worked on training it but this is a really great idea, let me get started on this as soon as possible.
st206791
Hello fellow scholars, I implemented the research paper "PraNet: Parallel Reverse Attention Network for Polyp Segmentation": https://arxiv.org/pdf/2006.11392v4.pdf The original paper is implemented in PyTorch, and I re-implemented it in TensorFlow and Keras. I have implemented 2 variants of PraNet: 1. PraNet + ResNet50 (as feature extractor); 2. PraNet + MobileNetV2 (as feature extractor). Do check out the GitHub repo (GitHub - Thehunk1206/PRANet-Polyps-Segmentation). Any type of contribution is welcome.
st206792
Happy to share our (@Nain and my) latest collaboration on building models for handwriting recognition. We show how a simple CNN+RNN model can be used for this task with the CTC loss. The model is able to handle variable-length sequences as well. Another key takeaway is our implementation of edit distance (as a callback) to evaluate the recognition model. keras.io: Handwriting recognition
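For readers unfamiliar with the metric, tf.edit_distance operates on sparse tensors; a small sketch of the idea behind such a callback (toy strings, unrelated to the dataset and the example's code):
'''
import tensorflow as tf

predictions = tf.constant(["helo world", "kerass"])
labels = tf.constant(["hello world", "keras"])

# Decode to per-character codepoints (ragged), then convert to the sparse form
# that tf.edit_distance expects.
pred_sparse = tf.strings.unicode_decode(predictions, "UTF-8").to_sparse()
label_sparse = tf.strings.unicode_decode(labels, "UTF-8").to_sparse()

# normalize=True divides by the label length, giving a character error rate per sample.
print(tf.edit_distance(pred_sparse, label_sparse, normalize=True))
'''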
st206793
Hi everyone, I would like to share my small project. I recently built a website to inspect saved_model.pb files. Specifically, this website deserializes the selected saved_model.pb file and shows the exported Keras objects (with metadata), signature defs, and serialized concrete functions. It can be used when you want to know whether a model was serialized properly, or how an arbitrary model is composed. GitHub: https://github.com/jeongukjae/saved-model-inspector Website: https://jeongukjae.github.io/saved-model-inspector/ Example images: [screenshots of the inspector UI]
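For those who prefer doing a similar inspection in Python (a rough equivalent of part of what the website shows, not its code; "my_model/" is a placeholder path), the exported signatures and their input/output structure can be listed directly:
'''
import tensorflow as tf

loaded = tf.saved_model.load("my_model/")  # directory containing saved_model.pb

# Exported signatures and their input/output structure.
for name, fn in loaded.signatures.items():
    print(name)
    print("  inputs:", fn.structured_input_signature)
    print("  outputs:", fn.structured_outputs)
'''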
st206794
Edgeimpulse.com (tinyML machine learning) supports exports to TensorFlow, TensorFlow Lite, and TensorFlow Lite Micro, but not TensorFlow.js. A model can easily be converted from one of the other formats, but that typically takes a Python script, which is not JavaScript. Perhaps a few people could "like" the following feature request at EdgeImpulse.com to increase its chances of adoption. Edge Impulse - 5 Aug 21 - Feature Request: export TensorFlow.js: Could Edge Impulse please fully support all versions of TensorFlow, specifically by adding dashboard export support for TensorFlow.js? It would show a zipped model.json with binary shard files, preferably float32 and also with int8 quantization. ...
st206795
So it looks like Edge Impulse will not be helping with exporting to TFJS. Does anyone have opinions on how to convert a TensorFlow SavedModel to a TensorFlow.js layers model? I used to do it all the time, but my code doesn't seem to work anymore. Does anyone have up-to-date conversion examples and installation code? I used to have several steps fully automated on this GitHub repo to take a model.json to several other forms, but I don't even think the installation is working anymore. Any suggestions? github.com: GitHub - hpssjellis/Gitpod-auto-tensorflowJS-to-arduino
'''
pip install tf-nightly
pip install tensorflowjs
pip install netron "dask[delayed]"
tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras_saved_model ./model.json ./
tflite_convert --keras_model_file ./ --output_file ./model.tflite
xxd -i model.tflite model.h
'''
st206796
What is the error though? Did the model you are trying to convert change? On our side I am not aware of any major updates to the converter itself so it should still work assuming same model architecture / ops being used. If the issue is with the TFLite converter then maybe someone from the TFLite team can comment on that?
st206797
Hi Jason (sorry, 2 different laptops seem to have 2 different logins). The issue seems to be that the following are not installing correctly:
pip install tf-nightly
pip install tensorflowjs
They worked really well 12 months ago, but now I get errors from these commands:
tflite_convert --help
tensorflowjs_converter --help
Errors like:
tflite_convert --help
2021-08-08 21:03:08.629865: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
...
The above was on a Gitpod cloud Debian server; when I try to install directly onto my Ubuntu machine, I have installation issues. My GitHub repo that loads on Gitpod is here, and the auto Gitpod load URL is here. This all worked great a year ago, so I wonder what is different.
st206798
I see. If you believe this is a bug and not an error with your environment, etc., then please submit a bug to the TFJS repo so it can be triaged: github.com rflow/tfjs/issues/new/choose. We have had a few people using the converter recently on regular vanilla Ubuntu cloud instances, and this issue has not been reported before, AFAIK. Please confirm first that there are no issues with missing dependencies or file access permissions in your local environment. If it still persists after checking those on your end, feel free to open a bug using the link above. Thanks!
st206799
Thanks @Jason, I am presently away from my computer lab, so I have a minimal number of computers to test the issues on. I will look into it when I can. Does anyone else have ideas about possible changes to any dependencies for the converters over the last year?