st206600
Another great resource: Deep Learning with TensorFlow 2 and Keras: Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API (2nd Edition) by Antonio Gulli (Google), Amita Kapoor, Sujit Pal
st206601
@Sayak_Paul has a curated list of resources for learning TF 2.0. Here it is: GitHub - sayakpaul/TF-2.0-Hacks: Contains my explorations of TensorFlow 2.x
st206602
The TensorFlow team is doing a pass on the contributor documentation we have on tensorflow/tensorflow, tensorflow/community, and the website. Based on your experience in contributing to TensorFlow, are there any docs that could use improvements for a better experience? Are there any docs that are missing? Looking forward to your suggestions! cc @billy for visibility
st206603
In tensorflow/docs, I think it would be better to add an explanation of nbfmt and nblint. They are very useful tools for contributors and are applied automatically to each pull request by GitHub Actions.
st206604
Thank you, Sugiyama-san. Would you mind sharing your typical contribution workflow? We realize it’s not so easy for contributions to be submitted, and so we appreciate your view on the need for format & lint checks on notebooks. Are there other techniques you use to streamline your contributions? Thanks!
st206605
Of course! I mainly contribute to tensorflow/docs-l10n and have made some contributions to other repos (tensorflow/docs, tensorflow/tfx, and tensorflow/tensorboard) with almost the same workflow:

Find an issue
- Run example notebooks in Colab (I mostly use them for my own learning)
- Find unexpected errors
- (tensorflow/docs-l10n only) Find untranslated or outdated notebooks
- Search the repo's issues to check whether the issue is already reported
- Try to find the cause of the issue so I understand it well enough to describe

Report an issue
- Check README.md and CONTRIBUTING.md to find the right way to report the issue
- Take a look at the issue templates to find the most suitable one (each repo can have its own rules and templates)
- Fill in the issue template

Fix the issue
- Fork the repo
- Set up my local development environment following the instructions in README.md, CONTRIBUTING.md, setup.py, and other resources (if required; I mostly use Colab because …)
- Fix the issue and push it to my forked repo
- Clone it into my local environment
- Apply the formatter and linter using nbfmt and nblint following this document
- Push fix commits to my forked repo

Make a pull request
- Make a pull request and, if I created the issue, link the PR to the issue
- Pray that the CI passes (and fix my PR if needed)
- Pray that it gets merged

I think this is a generic workflow for contributing to OSS hosted on GitHub. For tensorflow/docs-l10n, I expect this PR-based workflow will soon change into a GitLocalize-based one.
st206606
How do you want to run your production ML infrastructure? Do you prefer cloud or on-prem deployments? For cloud, do you prefer managed or self-managed?
st206607
This is a great question for a thorough discussion. I usually prefer the cloud and decide whether or not to use a managed service based on several considerations. First, if the service is supposed to run infrequently at unpredictable times (usually in the early stages of an application), I prefer managed services, as they give me a predictable cost overview. It also helps me avoid issues caused by cold starts in serverless setups. I simply dockerize my services and docker-compose-up them with different .env files in dev and prod environments. This is a bit hacky but yields great productivity in the early stages. As my application starts to get attention and develops a more regular usage pattern, I migrate/upgrade the Docker containers to managed services one by one. Finally, I try to implement MLOps best practices for CI/CD/CT. This workflow helps me remain experimental, fast-prototyping, and cost-effective in the early stages, and then become gradually more robust, tested, scalable, and manageable. I'd love to hear about others' experiences with this.
st206608
It is very important for us to use cloud-managed services. We mainly use AI Platform Pipelines (Kubeflow Pipelines) for production. In most cases, we start prototyping in a local notebook environment. After prototyping, we design and develop an ML pipeline on AI Platform Pipelines. This works for us because we build relatively simple batch pipelines. If our task were more complex or needed online evaluation, we might have taken a different approach.
st206609
What is the status of the TFR compositional op project? Will it be proposed as a valid alternative to custom ops in some use cases?
st206610
Hi folks. I have a use case for binary segmentation, i.e. the per-pixel categories can only be one of two given classes. The presence of these classes in the training images is skewed. This is essentially a class imbalance problem, but in a 3D space, which is a bit complicated to handle. So, instead of setting sample_weight (which is the recommended way to deal with this problem), I did some research and found the following to be a pretty elegant way of dealing with it: when feeding a batch of samples to the model, always ensure the number of images containing the positive class is above a prefixed ratio. The ground-truth segmentation masks contain 0s and/or 1s, so one way to check that a mask has some presence of the positive class is to look at its mean: masks containing no positive-class pixels have a mean of 0. This is a tried and tested method (see here and here). I am looking for snippets/pointers/approaches on how to realize this inside a tf.data pipeline.
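To make the ask concrete, here is a rough sketch of the kind of pipeline I have in mind, assuming the dataset yields (image, mask) pairs; POSITIVE_RATIO is a placeholder. Note this only balances batches in expectation rather than enforcing a hard per-batch minimum:

```python
import tensorflow as tf

# Masks contain only 0s and 1s, so a mean > 0 means the positive
# class is present somewhere in the mask.
def has_positive(image, mask):
    return tf.reduce_mean(tf.cast(mask, tf.float32)) > 0.0

positives = dataset.filter(has_positive).repeat()
negatives = dataset.filter(
    lambda image, mask: tf.logical_not(has_positive(image, mask))).repeat()

# Draw from the two streams at a prefixed ratio, then batch. On older TF
# versions this lives under tf.data.experimental.sample_from_datasets.
POSITIVE_RATIO = 0.5  # assumed; tune to taste
balanced = tf.data.Dataset.sample_from_datasets(
    [positives, negatives], weights=[POSITIVE_RATIO, 1.0 - POSITIVE_RATIO])
balanced = balanced.batch(batch_size)
```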
st206611
I wanted to share a sneak peek of a deck I have prepared for an upcoming talk. The deck focuses on five key trends in computer vision in 2021. Of course, it's not an exhaustive summary, and it largely reflects what I have been working on these days. Here's where you can find the deck: https://bit.ly/trends-cv. Folks who are part of the ML-GDE Group can comment on the deck directly. Appreciate any feedback.
st206612
Nice deck, Sayak! Well done! I guess you could also add EfficientNet V2 to slide 9 later.
st206613
Hi folks. I am currently implementing a custom training loop by overriding the train_step() function. I am also not using the default compile() method, so I believe loss scaling needs to be implemented manually. Here's how the fundamental loop is implemented (it runs as expected on a single GPU):

```python
with tf.GradientTape() as tape:
    fake_colorized = self.gen_model(grayscale)
    fake_input = tf.concat([grayscale, fake_colorized], axis=-1)
    predictions = self.disc_model(fake_input)
    misleading_labels = tf.ones_like(predictions)

    g_loss = -self.loss_fn(misleading_labels, predictions)
    l1_loss = tf.keras.losses.mean_absolute_error(colorized, fake_colorized)
    final_g_loss = g_loss + self.reg_strength * l1_loss
```

self.loss_fn is binary cross-entropy. Here's how the distributed variant is implemented:

```python
with tf.GradientTape() as tape:
    fake_colorized = self.gen_model(grayscale)
    fake_input = tf.concat([grayscale, fake_colorized], axis=-1)
    predictions = self.disc_model(fake_input)
    misleading_labels = tf.ones_like(predictions)

    g_loss = -self.loss_fn(misleading_labels, predictions)
    g_loss /= tf.cast(
        tf.reduce_prod(tf.shape(misleading_labels)[1:]), tf.float32)
    g_loss = tf.nn.compute_average_loss(g_loss, self.global_batch_size)

    l1_loss = tf.keras.losses.MeanAbsoluteError(
        reduction=tf.keras.losses.Reduction.NONE)(colorized, fake_colorized)
    l1_loss /= tf.cast(
        tf.reduce_prod(tf.shape(colorized)[1:]), tf.float32)
    l1_loss = tf.nn.compute_average_loss(l1_loss, self.global_batch_size)

    final_g_loss = g_loss + (l1_loss * self.reg_strength)
```

self.loss_fn is binary cross-entropy, but in this case it's initialized without any reduction. This loop is not behaving as expected: the losses are way off. Am I missing something?
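For reference, here is roughly the loss-computation pattern from the official distributed custom-training guide (a sketch, adapted to a binary cross-entropy loss; note that global_batch_size goes in as a keyword argument, since the second positional argument of tf.nn.compute_average_loss is sample_weight):

```python
import tensorflow as tf

# Per-example losses, with reduction disabled so the averaging is explicit.
loss_object = tf.keras.losses.BinaryCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions, global_batch_size):
    per_example_loss = loss_object(labels, predictions)
    # Average over the *global* batch size, not the per-replica one.
    return tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=global_batch_size)
```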
st206614
Are you in the same case as Custom training with tf.distribute.Strategy | TensorFlow Core, or is it something different?
st206615
I mean, currently that is the official tutorial we propose for users/devs who want to use distributed training with a custom training loop. So I was just asking whether you have specific needs and whether we could expand that page.
st206616
No. I was actually asking if the distributed variant of my training loop is correctly implemented.
st206617
I am currently using the RandAugment class from tf-models (from official.vision.beta.ops import augment). RandAugment().distort(), however, does not allow batched inputs, and it is computationally expensive as well (especially when you have more than two augmentation operations). So, following suggestions from this guide, I want to be able to map RandAugment().distort() after my dataset is batched. Any workaround for that? Here's how I am building my input pipeline for now:

```python
# Recommended is m=2, n=9
augmenter = augment.RandAugment(num_layers=3, magnitude=10)

dataset = load_dataset(filenames)
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.map(augmenter.distort, num_parallel_calls=AUTO)
```
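One workaround I have been toying with (untested, and it only fans the per-image op out over the batch rather than truly vectorizing it) is something like:

```python
import tensorflow as tf

AUTO = tf.data.AUTOTUNE

def batched_distort(images):
    # distort() is a per-image op, so apply it image-by-image inside the
    # graph; this lets the map() come after batch() but does not make the
    # augmentation itself any cheaper.
    return tf.map_fn(augmenter.distort, images)

dataset = load_dataset(filenames)            # helper from the snippet above
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.batch(batch_size)
dataset = dataset.map(batched_distort, num_parallel_calls=AUTO)
```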
st206618
Yes, the issue is that it seems to me we also have duplicated ops: e.g. cutout exists non-batched in the official.vision namespace and batched in TFA. These are the origins of the current status:

github.com/tensorflow/addons: "Migrate AutoAugment and RandAugment to TensorFlow Addons" (opened Mar 5, 2020, closed May 28, 2020, by dynamicwebpaige): RandAugment and AutoAugment are both policies for enhanced image preprocessing that are included in EfficientNet.

github.com/tensorflow/community: "Ask contribution to Tensorflow addons for general scope utils, loss, layers, ops" (opened Apr 1, 2020, by bhack): As we have just refreshed the model repo as Model Garden, I would enforce the contribution policies for components of general use.
st206619
My opinion is that we just need to decide how we want to standardize our image-processing ops across the ecosystem. I think these duplicates are going to create confusion.
st206620
To get this started, here are some of my favourite resources for getting started with Machine Learning in TensorFlow.js. Please do comment below with other resources you have found helpful for ML in JS too.

Books:

Learning TensorFlow.js by Gant Laborde (published by O'Reilly): Learning TensorFlow.js [Book]. This book is great for folk who are new to machine learning but familiar with JavaScript and looking to learn the essentials to get started and be productive. From understanding how to manipulate data into Tensors to quickly progressing to real-world applications, this book is a superb introduction that will have you feeling much more confident loading models, passing data to them, and interpreting the data that comes out after digesting these 12 wonderful chapters. Available for pre-order right now (as of March 2021), and having been the technical reviewer for this book, I feel it is a very solid introduction for JS devs looking to upskill.

Deep Learning with JavaScript (published by Manning): Deep Learning with JavaScript. This book was written by folk on the TensorFlow + TensorFlow.js teams here at Google. It also aims to be a solid foundation to the world of Machine Learning and TensorFlow.js, and at times gets pretty low level. For me personally this is a solid follow-up to read after being familiar with some of the concepts introduced in the O'Reilly book above. From linear regression to GANs, this book goes deeper in many of these areas.

Courses:

Google AI for JavaScript developers with TensorFlow.js by Jason Mayes (Developer Relations Engineer for TensorFlow.js), on edX: Google AI for JavaScript Developers with TensorFlow.js. Get productive with TensorFlow.js - Google's Machine Learning library for JavaScript. From pre-made off-the-shelf models to writing or training your own, learn how to create next-gen web apps. What you'll learn:
- Common terms and what they mean
- How Machine Learning works (without formal mathematical definitions)
- Overview of the TensorFlow.js library
- Advantages of using ML in JavaScript
- Ways to consume or create Machine Learning models
- How to use pre-made "off the shelf" models
- What Tensors are in Machine Learning
- How to use Tensors with ML models
- How to write a simple custom model
- Perceptrons (artificial neurons) and how they work
- Linear regression to predict numbers using a single neuron
- Multi-layered perceptrons for handling more complex data
- How to use models that use Convolutional Neural Networks for images
- How to convert Python models to JavaScript
- Transfer learning - reusing existing trained models with your own data
- Inspiring projects others are creating to seed your own future ideas

Browser-based Models with TensorFlow.js by Laurence Moroney, on Coursera: Browser-based Models with TensorFlow.js. Offered by DeepLearning.AI. Bringing a machine learning model into the real world involves a lot more than just modeling. This Specialization will teach you how to navigate various deployment scenarios and use data more effectively to train your model.
Building Machine Learning Solutions with TensorFlow.js by Abhishek Kumar, on pluralsight.com: Building Machine Learning Solutions with TensorFlow.js. In this course, you'll learn about the TensorFlow.js ecosystem and how to set it up on the client side in the browser and on the server side with Node.js. First, you'll discover how to use the environment to build an end-to-end machine learning application that uses natural language processing (NLP) under the hood to detect toxic elements in unstructured text. Next, you'll learn how to import and process data; build, train, and export a model; and finally predict using the trained model. Finally, you'll explore how to use existing models trained in Python on the client side using TensorFlow.js, and even retrain a pre-trained model using transfer learning.

Inspiration:

TensorFlow.js Community Show & Tell - live every 3 months. Previous episodes can be found in this YouTube playlist.

The #MadeWithTFJS hashtag on social media - a great way to get inspiration and find others who may be working on a similar problem to ask questions of. Right now Twitter and LinkedIn are pretty active, with new projects coming out every week from people all around the world:
- Twitter search: https://twitter.com/hashtag/MadeWithTFJS?src=hashtag_click&f=live
- LinkedIn search: just search the hashtag via the LinkedIn search box when logged in.

Seen other great learning resources? Comment below!
st206621
How do you decide when to retrain your production models? Do you just always retrain them on a schedule, whether they need it or not, or do you monitor and evaluate your model’s performance in production?
st206622
One common practice I've seen is to collect a new (typically small) test set every few weeks (weekly is not uncommon) and use it to monitor whether model performance is dropping consistently over time. If it is, it is typical to set a threshold on the drop. Once that threshold is reached, a new training set is compiled and new models are trained. Both old and new test sets can be used to compare the new models with the current production model. I've also seen teams adopt a fixed retraining schedule, but only when they know from long experience that such drift does happen for their particular application. There are some fancy methods for monitoring distribution drift, but they can be deceiving, as some drifts won't actually affect model performance.
st206623
operations has been a big thing for a lot of my clients lately; there are a number of things i think about when deciding on retraining… how stationary is the input? sometimes it’s clear that a problem is stationary in the input; e.g. a vision model trained on a large diverse natural set images from a phone is usually pretty stable whereas a time series problem often is not. i’ve found rapid retraining of a time series problem on a small recent window can give better results than a larger model trained on a longer window across all data. how stationary is the output? the big thing i’ve found here to consider is feedback loops. when building a model that has outputs very close to the user experience; e.g. P(click|impression) then retraining can be critical to ensure the feedback loop doesn’t push things out of distribution too quickly. monitoring in any case, expectations about feedback loops and stationarity are often wrong so the main thing is to be able to monitor drift. it’s not just about knowing when to retrain but, more importantly in the operational sense, when to occasionally turn off and fall back to whatever graceful degradation plan is in place ( you do have a graceful degradation plan don’t you? ) [related] active learning loops more and more i’ve seen clients wanting to make use of unlabelled data; a key thing being to direct labelling effort. things like uncertainty and diversity sampling, which are needed for active learning loops, end up being super useful for monitoring too. challenger/champion sometimes the question isn’t “when do i retrain?” but “when do i expose users to my latest trained model?” in which case it’s more about retraining as frequently as you can and then focusing instead on how you want to slowly expose your model through techniques like shadow releases. the big question here is whether the added complexity is justified to get empirical info about a new model’s performance. so many interesting problems in this space!
st206624
Two years ago there was an interesting seminal paper, "Continual Learning in Practice", at the 2018 NeurIPS workshop on continual learning. I think that while we are still exploring full continual-learning systems in research, frameworks like TFX/Kubeflow could in the meantime try to offer some kind of "zero-touch ML" features. Recently we approved TFX Periodic Training, and I think we can expand in that neighborhood when we are ready to explore/offer some pluggable automations.
st206625
Hi! I would like to recognize a sequence of human poses with a predefined timing. For example: recognize a tennis serve, a soccer kick, a ballet move, etc. I have looked at pose similarity for single-pose comparison here (https://blog.tensorflow.org/2018/07/move-mirror-ai-experiment-with-pose-estimation-tensorflow-js.html). Is there a recommended model for a sequence of poses (an LSTM?)? I would also like to identify deviations from the ideal poses and timing (i.e. too early/late for this pose). Thanks!
st206626
It's a great question, and I'd love to know the answer too! I'm thinking multiple models would have to be used. One is pose detection, which captures a sequence of poses over time. The other is a sequence model, trained on sequences of poses to classify them as 'good' or 'bad' or some other label. At runtime you then capture the sequence using PoseNet, pull out the relevant parts of the skeleton, and feed that into the sequence model to get a classification.
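A minimal sketch of what I mean by that second-stage sequence model (the window length, keypoint layout, and labels are all assumptions):

```python
import tensorflow as tf

# Assumed shapes: 17 COCO-style keypoints with (x, y, score) per frame,
# over a fixed window of frames; both numbers are placeholders.
NUM_FRAMES, NUM_FEATURES = 60, 17 * 3

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. 'good' vs 'bad'
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```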
st206627
Good news: someone has actually done this already for sign language in TFJS, and it worked pretty well and is generalizable to any time-based gesture detection, not just sign language. I believe they used a custom-trained PoseNet (so that they could have slightly differently positioned key points returned, e.g. in the center of the palm instead of the wrist, to be more accurate for hand gestures), combined with the handpose model (running twice, as handpose is currently a single-hand detector), and then facemesh on top of all of that too - so actually multiple models running concurrently in the browser to get a solid idea of what the human body is doing at any given time, as facial expression/movement was important for sign language too. Several frames of each model's output over time are recorded, and the outputs of all of these are then fed into a higher-level Graph Convolutional Network which predicts the gesture it saw based on all of the prior model outputs over time.
st206628
I found the general "temporal self-similarity matrix" (TSM) concept from RepNet gave me lots of ideas about this kind of temporal problem. I've been using it, plus some alignment ideas from CTC, for a few things.
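The core of a TSM is tiny. A sketch of how it can be computed, assuming per-frame embeddings have already been extracted (RepNet then applies a row-wise softmax on top of this):

```python
import tensorflow as tf

def temporal_self_similarity(embeddings):
    """`embeddings` is a [num_frames, dim] tensor of per-frame features.

    Repeated or aligned motion shows up as periodic structure in the
    resulting [num_frames, num_frames] matrix.
    """
    sq = tf.reduce_sum(tf.square(embeddings), axis=-1, keepdims=True)
    # Pairwise squared Euclidean distances, negated so that similar
    # frames score high.
    dists = (sq - 2.0 * tf.matmul(embeddings, embeddings, transpose_b=True)
             + tf.transpose(sq))
    return -dists
```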
st206629
A multitask network (pose + action recognition) could be a good baseline for exploring the task: GitHub - dluvizon/deephar: Deep human action recognition and pose estimation
st206630
Would love to hear funny stories about your ML implementations. What kind of bugs have you encountered? One of my favorites – not sure if it’s true or not – was that a few years back the US Army wanted to build a computer vision model to detect camouflaged tanks. They got some data scientists to build a model, and these folks got to borrow a tank for a couple of days, drive it around the woods, and take lots of pictures. One day they did it without camo and labelled it. The next day they got the camo nets and did the same. They built a model with these pictures, and did everything right – holding back a portion for testing and validation. When they were done, their model was incredibly accurate. They had succeeded! Then they went into the field to test it. And it failed, miserably. They couldn’t figure out why. Their test set and validation sets were properly selected and randomized. It should work. And then somebody pointed out that the weather was sunny on day 1 (no camo), and cloudy on day 2 (with camo) , so instead of a camo detector, they had actually built a cloudy sky detector instead…
st206631
This reminds me of an actual experience I had a few years ago. I was training a reinforcement learning robot to navigate a (simulated) maze. This was for a competition at my university. The organizers provided us with an API where we would get a maze layout and the positions of the robot, several exit points, and "treasure" locations. The goal was to have the robot leave the maze before a timer expired, but the ranking was based on how much treasure you got. After several hours of training, I got a robot which could navigate all the mazes generated by the API, with a good exploration-exploitation balance. Then we decided to test the pipeline so that instead of using the maze as provided by the API, we used a rotation of it. It turns out the robot was getting stuck most of the time. The reason? The maze generation algorithm provided in the API was biased toward mazes that had long horizontal corridors and very short passages to the next rows, so our robot was overfitting on this feature. It was easy to fix, and it turns out the organizers also fixed this in the generator for the actual competition. So only a few robots managed to score points during the actual event, as those teams put extra care toward preventing overfitting to the training data.
st206632
Wow, that's gonna be my favorite thread. When TensorFlow was open-sourced for the first time in 2015, I got really excited about it, but I couldn't even compile it after a week of effort. I gave up until I finally devoted one weekend to getting up and running with it. Prior to that I was a big fan of Node.js and did crazy things like writing really ugly ML code in JS. My first project with TF was a custom clothing-color recognizer for personal use (I'm blind), so I scraped images of clothing with descriptions from popular e-commerce websites, quickly labelled them with some regexes matching color names, and trained my model. Everything was great until I naively tested it in the real world and learned (with sorrow, tears, and blood) about the challenges of "ML in the wild" vs. a controlled environment, and the necessity that train and test datasets be sampled from the same distribution.
st206633
Preprocessing utilities have always bugged me. Back in the day, there was nothing like tf.keras.layers.experimental.preprocessing.Rescaling. To better streamline the preprocessing steps, I was wrapping my utilities inside a Lambda layer and then inserting that layer into my final model. As expected, the model's performance was terrible when it was compiled as a graph. So, we decided to do the preprocessing externally in a separate job. During staging, I had mistakenly altered a single digit in our preprocessing utility, and this led our model to predict the same thing for everything. It took us ~4 days to figure out what was going wrong (yes, I acknowledge our codebase wasn't organized that well). But I learned my lesson. Fast-forward to late 2020, when TensorFlow released the preprocessing layers - it really came as a relief.
st206634
We had a large piece of work that was funded partially by the fact that it "introduced machine learning". So we did what I always do: start with the introduction of an evaluation system, the goal being to baseline the incumbent deterministic business rules. Just doing the evaluation meant that we discovered some problems with upstream data, which meant we could improve things overall just by a data-processing change. This was enough to actually move onto something else for a bit. When we came back to it later, the next thing was to improve the business rules, still without introducing anything anyone would call ML. I'll never forget, after that release, a key exec stakeholder pulling me aside and asking "are we doing machine learning yet?". It always makes me laugh. So much hype that for them the key result wasn't a better user outcome, it was whether we were "using machine learning".
st206635
We have lots of courses on Coursera, and we get lots of questions from folks who don't want to sign in and pay for a certificate, and just want to audit. This is possible. It's subtle, but possible - here are the details. The audit option gives you the full content of the courses, but you will not earn the certificates for completing them. Note the Coursera terminology: a specialization is a collection of courses. Specializations may not be audited; however, each of the individual courses within a specialization may be! Below are links to each of these courses within their specialization. To access them in audit mode, select the course, and then click on 'Enroll for free'.

[Screenshot: the course page with the 'Enroll for free' button]

You'll see a dialog like this. Important: note the 'Audit the course' link at the bottom. Do not select 'Start Free Trial'.

[Screenshot: the enrollment dialog with the 'Audit the course' link]

This will give you full access to the content. Below are the specializations, with direct links to the courses within them. Follow the above process with these links to access them at no cost.

TensorFlow: In Practice Specialization
- Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning
- Convolutional Neural Networks in TensorFlow
- Natural Language Processing in TensorFlow
- Sequences, Time Series and Prediction

TensorFlow: Data and Deployment Specialization
- Browser-based Models with TensorFlow.js
- Device-based Models with TensorFlow Lite
- Data Pipelines with TensorFlow Data Services
- Advanced Deployment Scenarios with TensorFlow

AI for Medicine Specialization
- AI for Medical Diagnosis
- AI for Medical Prognosis
- AI For Medical Treatment

Natural Language Processing Specialization
- Natural Language Processing with Classification and Vector Spaces
- Natural Language Processing with Probabilistic Models
- Natural Language Processing with Sequence Models
- Natural Language Processing with Attention Models
st206636
Also the Advanced TensorFlow specialization: TensorFlow: Advanced Techniques Specialization
- Custom Models, Layers, and Loss Functions with TensorFlow
- Custom and Distributed Training with TensorFlow
- Advanced Computer Vision with TensorFlow
- Generative Deep Learning with TensorFlow
st206637
This is so helpful! Are there any courses or YouTube series in Spanish or Portuguese that you could share? We are hosting a couple of TensorFlow Everywhere events in LATAM, and it would be great to share more resources!
st206638
Portuguese: Coding TensorFlow em português - YouTube. Spanish: Coding TensorFlow en Español - YouTube.
st206639
Having used many unsupervised learning algorithms as part of my development pipeline in the past, I was wondering whether there are good implementations out there built on top of TensorFlow (I've seen a couple of k-means implementations in tutorial format, but not much more). What tools other than TensorFlow do you use for your unsupervised learning needs?
st206640
I have been using TensorFlow to code up a number of different self-supervised models for vision, and the experience has been great (and easy). Feel free to take a look at the following minimal implementations of popular self-supervised methods for vision: SimCLR, SwAV, SimSiam. Sorry, I could not post links because the forum does not yet allow it.
st206641
Let me start with a very good one if you are starting with Machine Learning and are already a developer: AI and Machine Learning for Coders. [Photo of the book cover] It's very easy to follow and shows some good use cases and a lot of hands-on code to try.
st206642
Another great suggestion is Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow. [Photo of the book cover] This book has a lot of important theory about AI and ML, along with the technical perspective, using TensorFlow through most of the book.
st206643
I suggest two more books: "Deep Learning with Python" for learning the Keras API and TF (I guess the 2nd edition will be out this year), and "Deep Learning with JavaScript" for TF.js. I translated Hands-On ML and DL with Python into Korean, and I will translate DL with JavaScript and AI and Machine Learning for Coders this year. All four books are really excellent!
st206644
Deep Learning with Python by Francois Chollet. Apart from that, the one by Geron is my favorite.
st206645
What operating system do you use TensorFlow on? (poll)
- Linux
- MacOS
- Windows
- Other
st206646
I'd add Colab as an option (or maybe the browser). I know it's not an OS, but it's my main IDE these days.
st206647
Same here. Either Colab, AI Platform Notebooks, or Kaggle Kernels. The IDE experience of Colab could be improved, though: when we hit Tab, we don't get suggestions. We do get them while typing out code, but I think the UX of that is not great. On the other hand, this option comes in handy in Kaggle Kernels and AI Platform Notebooks.
st206648
Got questions on how to convert a TensorFlow Python SavedModel to TensorFlow.js for a project you are working on? Come discuss your progress with us here and see how you can get even more people using your model with the reach and scale of the web. To get started, you can try this codelab, which will walk you step by step through the process if you are new to it.
st206649
So I just wanted to start a thread with a list of the premade models the TensorFlow.js team has launched for your consideration in your next project. In addition to these, we would love to hear what models you may have created or converted to run in TensorFlow.js! We are starting to see a lot of folk convert Python SavedModels to the TFJS format for use in the browser, to get not only the reach and scale of the web for people to try their models, but also privacy and lower serving costs, as everything can execute completely in the browser on the client side, on device. To get started, here are some official models the team has created, along with a few conversions we have seen out in the wild that you may want to check out - but please do reply with others you have seen or made yourself.

Official TensorFlow.js premade models:
- Image Classification
- Object Detection
- Body Segmentation
- Face Landmark Detection
- Pose Estimation
- Hand Pose Detection
- Natural Language Question / Answer
- Text Toxicity Detection
- Universal Sentence Encoder
- Sound Classification
- KNN Classifier
- Semantic Segmentation

Python to TFJS conversions:
- Custom Object Detection using YOLO
- Depth Estimation
- 3D Pose Estimation
- U^2-Net portrait drawing

Also a treasure trove of conversions made by this GitHub user. Seen more? Reply below - we would love to see what you have found or made!
st206650
How deep do you get into evaluating your model performance? Do you slice your data and evaluate the slices? Do you try to measure fairness or accuracy for different subsets of your users or use-cases?
st206651
Once per quarter we meet with several members from the TensorFlow.js community to learn more about what amazing projects they have created. This thread is the one stop place to see the latest interviews to get inspiration of what is possible using Machine Learning in JavaScript across front end (browser), back end (Node.js), React Native (native app), Electron (desktop), and even IOT (Raspberry Pi via Node). Check back regularly! To kick things off, here is our first video interview: Enjoying the show with Gant Laborde who explains how he solved a problem when presenting digitally to an audience where he was unable to know if they were interested in the content being presented. Learn how Gant created an innovative, real-time, and scalable system to better understand his audience using machine learning in the browser using TensorFlow.js.
st206652
Real-time semantic segmentation in the browser with Hugo Zanini - a Python developer who was looking to use the latest cutting edge research from the TensorFlow community in the browser using JavaScript. Join us as Hugo takes us through his learning experiences in using SavedModels in an efficient way in JavaScript directly enabling you to get the reach and scale of the web for your new research.
st206653
In our next episode of Made with TensorFlow.js we head to Denmark to join Anders Jessen, who has been investigating powerful touchless interfaces powered by our TensorFlow.js hand pose model. Finally, our sci-fi-like interaction dreams can become reality!
st206654
In our next episode we head to Spain to join Cristina Maillo, to talk about her experience using machine learning for Yoga instruction. Cristina created an easy to use website using TensorFlow’s PoseNet that can guide you through your Yoga poses and time you as you hold each one! If you lose the pose the countdown stops and waits for you to readjust yourself.
st206655
Next we are heading to Amsterdam to join Charlie Gerard, a Senior Front End Developer at Netlify to talk about her latest creations. Join us as Charlie walks us through WashOS - a web based system that can detect how long you have been washing your hands for, and “splat”, a fruit ninja styled game powered by TensorFlow.js that enables you to use your hands and arms to chop fruit from anywhere you wish!
st206656
In our 6th episode of Made with TensorFlow.js we head to Australia to join Benson Ruan, who has used Natural Language Processing to understand the sentiment of tweets and is able to visualize the results. Now we can monitor in real time user sentiment as people react to any given topic.
st206657
In our 7th episode of Made with TensorFlow.js we head to India to meet Shivay Lamba who has created a virtual physio assistant to help you perform your daily exercises. With this system you can check you are doing the correct stretch using our PoseNet model live in the browser.
st206658
In our 8th episode of Made with TensorFlow.js we head to the USA to meet with James Seo who has created a visually stunning mixed reality demo to help us understand human pose over space and time that can be inspected from any angle you desire using WebXR.
st206659
On episode 9 of Made with TensorFlow.js we’re joined by Shan Huang from China, who’s built upon her previous Pose Animator project to make Scroobly, a fun app which brings doodles (SVG images) to life in real-time using your camera. Scroobly uses Facemesh and PoseNet to map your live motion and updates the animation as you move!
st206660
Next on Made with TensorFlow.js for episode 10 we’re joined by Chris Greening from the UK, who’s built an augmented reality web app to solve Sudoku puzzles in real-time. Chris breaks down his problem-solving techniques in building a complex app like this, including methods for image processing and character recognition.
st206661
This time on Made With TensorFlow.js we’re joined by Samarth Gulati and Praveen Sinha from India, to hear how they’ve used TensorFlow.js and Facemesh model to create a system that can recreate digital face masks based on cultural events around their country.
st206662
Today on Made with TensorFlow.js we’re joined by Andreas Schallwig from Shanghai. Andreas has been hacking on some pretty impressive demos for touchless interfaces on public smart displays such as photo booths and games. Check them out!
st206663
Today on Made with TensorFlow.js we’re joined by Emily Xie from New York, who’s managed to bring paintings to life using a combination of TensorFlow.js and TensorFlow Core.
st206664
Today on Made with TensorFlow.js we’re joined by Paul Jessop from London, who’s made some custom hardware powered by machine learning, that’s capable of tracking custom objects for sport videography and more - our very first #MadeWithTFJS powered Kickstarter project!
st206665
In this episode of Made with TensorFlow.js we’re joined by Yining Shi and Bomani Oseni McClendon who are working on the ml5.js library that is built upon TensorFlow.js to try and make machine learning even more usable by everyone. From creative coding to hardware experiments, ml5.js can enable you to do many advanced things with just a few lines of code. Learn more and have a go yourself!
st206666
In this episode of Made With TensorFlow.js we’re joined by Kenny Song, an active researcher on security and reliability, where he shows you how to break neural networks in your web browser in real-time by changing inputs, such as pixels in an image, to fool a machine learning model. Watch as he turns a photo of a “dog” which is initially classified correctly to be misclassified as a “hotdog” - even though to you, as a human, the image still looks the same. Learn how he does it in this educational video so you can make your models even more robust to such attacks in the future.
st206667
This time on Made With TensorFlow.js we're joined by Paul Van Eck, a software developer with IBM, who shows how to use Node-RED, an open-source visual programming tool that supports machine learning with TensorFlow.js and can even deploy to a Raspberry Pi and more. Watch as Paul uses this system to keep his cat off the table, open his garage door when the correct car is recognized by its number plate, and more! Take command of the physical world with TensorFlow.js and Node-RED in this episode! Happy hacking.
st206668
Today on Made With TensorFlow.js we’re joined by Michelle Sun, an interaction designer, who solved a problem she had - never having a guitar tuner nearby when she needed one. Learn how Michelle created a system to tune any instrument (even your voice) live in the web browser using a pitch detection model known as Crepe without the need for any specialist hardware:
st206669
In this episode of #MadeWithTFJS, Jason is joined by Vivien Tran-Thien, Director of AI at Dataiku, who has used TensorFlow.js to create an impressive motion parallax effect with face tracking in the browser. This allows you to view any 3D scene on your regular 2D screen as if you had a 3D monitor - no special glasses needed. By moving your head you automatically change the 3D scene's perspective, as it tracks your eyes' position, giving the illusion of 3D to the viewer.
st206670
Join Rishit Dagli in this episode of Made With TensorFlow.js as he turns nighttime into daytime. Learn how he managed to convert cutting edge research from Python, specifically the MIRNet machine learning model, to run in the web browser via Node.js. Now anyone can see in the dark.
st206671
Got a cool project? Share and show off your work. Feel free to share links to demos, GitHub and posts for your project. You can also submit your project 13 to the TensorFlow Community Spotlight program for the chance to be featured on the TensorFlow Twitter handle.
st206672
I worked with various pre-trained Machine Learning models on Raspberry Pi. These are computer vision models (Inception and MobileNets) provided by the Google Coral team. Though they provide examples to run these models using sample scripts, I thought it would be a nice idea to make a simple tool using Python and web dev that can run these 20+ models without having to stop and restart the Python script every time you need to test a different model. Further, the tool handles object detection and image classification methods dynamically and provides output accordingly. To simplify the installation process, I have written a bash script with which anyone can configure their Raspberry Pi in all respects to run this project. The script automatically installs TensorFlow Lite, OpenCV, all the models, and the source code of this project on a Raspberry Pi. Basically, a beginner can start seeing output without having to look at the code or worry about it. I named the tool 'Model Garden' and I think it will be useful for students/hobbyists to get started with Machine Learning on a Raspberry Pi, or at least get a feel for these wonderful models without much hassle. Check it out on GitHub: jiteshsaini/model_garden
st206673
I'm Merve, a GDE in ML, and I'm working at Hugging Face, a company that democratizes responsible open-source machine learning. At Hugging Face I'm focusing on contributing to TensorFlow Keras (though we are known for transformers, we host non-transformers models from various libraries as well). For this we've made a couple of integrations that help you easily push any Keras model with one line of code (you can see it on the working group page). We're demonstrating these and many other TF-specific features in our new example contributed to Keras Examples on keras.io, a tutorial on Question Answering using Hugging Face Transformers. We host the model here with an interactive widget and a generated model card for reproducibility. I've also built this UI. We have a Keras Working Group that works to improve the Keras ecosystem; we host Keras examples and our own demos here. You can reach out to me if you are interested or have any feedback.
st206674
In my latest Keras example I minimally implement "Augmenting Convolutional networks with attention-based aggregation" by Touvron et al. The main idea is to use a non-pyramidal convnet architecture and to swap the pooling layer for a transformer block. The transformer block acts like a cross-attention layer that helps in attending to the feature maps that are useful for a classification decision. The attention maps from the transformer block help with the interpretability of the model: they let us know which part (patch) of the image the model is really focused on when making a classification decision. Link to the tutorial: Augmenting convnets with aggregated attention. @Ritwik_Raha, @Devjyoti_Chakraborty and I have built a Hugging Face demo around this example for all of you to try. In the demo we use a model that was trained on the Imagenette dataset. [Screenshot of the demo] Link to the demo: Augmenting CNNs with attention-based aggregation - a Hugging Face Space by keras-io. I would like to thank JarvisLabs.ai for providing me with GPU credits for this project.
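For those curious how the aggregation block looks in code, here is an illustrative sketch (names and sizes are mine, not the tutorial's exact code):

```python
import tensorflow as tf

class AttentionPooling(tf.keras.layers.Layer):
    """A single learned class token cross-attends over the convnet's
    feature map (flattened to [batch, h * w, channels]); the returned
    attention scores are the interpretability maps mentioned above."""

    def __init__(self, num_heads=1, key_dim=256, **kwargs):
        super().__init__(**kwargs)
        self.attn = tf.keras.layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=key_dim)

    def build(self, input_shape):
        self.cls_token = self.add_weight(
            shape=(1, 1, input_shape[-1]), initializer="zeros",
            trainable=True, name="cls_token")

    def call(self, patches):
        # Broadcast the class token across the batch, then cross-attend.
        cls = tf.tile(self.cls_token, [tf.shape(patches)[0], 1, 1])
        pooled, scores = self.attn(
            cls, patches, return_attention_scores=True)
        return pooled[:, 0], scores
```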
st206675
Glad you like it! All credits to the authors of the paper for their wonderful research
st206676
An article that briefly recaps all the challenges faced solving half of the Advent of Code 2021 puzzles in pure TensorFlow, and lets you browse them easily. P. Galeone's blog: Wrap up of Advent of Code 2021 in pure TensorFlow - a wrap-up of my solutions to the Advent of Code 2021 puzzles in pure TensorFlow.
st206677
This is very interesting! I thought it was already a challenge to solve all the problems with regular Python!
st206678
"This is very interesting!" - Happy to hear that! This is something I find really useful, especially because the pre/post-processing phases around the forward pass of an ML model can benefit a lot (IMHO) from the SavedModel format. It removes every dependency on third-party libraries, and you can guarantee the input will always be processed as expected (how many times have I used a wrong resize and ended up with completely wrong results).
st206679
Yes, being able to do complex pre/post processing and then later have that on a SavedModel format is very powerful!
st206680
Hi everyone, I recently put the finishing touches on my Faster R-CNN self-learning exercise. My goal was to replicate the model from scratch using only the paper. That was a bit ambitious, and I eventually had to relent and peek at some existing implementations to understand a few things the paper is unclear on. The repo is here: GitHub - trzy/FasterRCNN: Clean and readable implementations of Faster R-CNN in PyTorch and TensorFlow 2 with Keras. I wrote both a PyTorch and a TensorFlow implementation. I'd like to think they are pretty clean, readable, and easy to use. I also documented some of my struggles and takeaways in the README.md file. One thing that continues to bother me is the need for an additional tf.stop_gradient() in the regression loss functions, surrounding a tf.less statement. The function itself is differentiable, and the PyTorch version doesn't need this. I might make a post about it on one of the other sub-forums, because I stumbled upon the solution by accident. Without the explicit stop_gradient, the model still learns, but achieves significantly lower precision. I would love to learn how others would approach debugging such an issue. Thanks, Bart
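P.S. For anyone curious, the pattern in question looks roughly like this (a reconstruction of the idea, not the exact code from the repo):

```python
import tensorflow as tf

def smooth_l1_loss(y_true, y_pred, beta=1.0):
    # The branch-selection mask produced by tf.less is wrapped in
    # tf.stop_gradient so that only the branch *values*, not the branch
    # selection itself, contribute to the gradient.
    diff = tf.abs(y_true - y_pred)
    is_small = tf.stop_gradient(
        tf.cast(tf.less(diff, beta), dtype=tf.float32))
    quadratic = 0.5 * diff * diff / beta
    linear = diff - 0.5 * beta
    return tf.reduce_sum(is_small * quadratic + (1.0 - is_small) * linear)
```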
st206681
@Bart, great - you put a lot of effort into this; starred. A small request: your README contains a lot of development details that could be separated into another .md file. That way, the front README.md can give more top-level highlights, for example how to reproduce, how to fine-tune, or how to train from scratch on custom data, etc.
st206682
Thanks for the suggestions! I can certainly split the Development Learnings into a separate document. Re: fine-tuning and training from scratch on custom data, I suppose new data would have to be provided in the same format as the VOC dataset (which should be fairly simple to do). Do you think it would be worthwhile to elaborate on this point in the README.md or a separate attached document? I was thinking about making an annotation program for generating custom data (probably just an HTML5/JS thing) if the need arises for me.
st206683
My project on GitHub: SubhranshuSharma/shitty_gaze_mouse_controller. The cursor moves left if I look left, right if I look right, up if I look at the camera, and down if I look down; a left-eyebrow raise enables/disables the script, and blinking the left eye does a double click.
st206684
Videos are sequences of images. Modelling video clips requires image representation models (CNNs) and sequence models (RNNs, LSTMs, etc.) working together. While this approach is intuitive, how about a single model that takes care of the two modalities? In our (with @ayush_thakur) latest keras.io example, we minimally implement ViViT: A Video Vision Transformer by Arnab et al., a pure Transformer-based model for video classification. The authors propose a novel embedding scheme and a number of Transformer variants to model video clips. arXiv: https://arxiv.org/abs/2103.15691. Tutorial: Video Vision Transformer
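The embedding scheme boils down to patching across space and time jointly. A quick sketch of the "tubelet" idea (sizes are assumed, not the tutorial's exact hyperparameters):

```python
import tensorflow as tf

# One 3D convolution patches the clip across space *and* time at once,
# producing the token sequence fed to the Transformer.
PROJECTION_DIM = 128
tubelet_embed = tf.keras.layers.Conv3D(
    filters=PROJECTION_DIM,
    kernel_size=(2, 16, 16),   # (frames, height, width) per tubelet
    strides=(2, 16, 16),
    padding="valid",
)

video = tf.random.normal([1, 8, 64, 64, 3])  # (batch, T, H, W, C)
tokens = tf.reshape(tubelet_embed(video), [1, -1, PROJECTION_DIM])
print(tokens.shape)  # (1, 64, 128): 4 temporal x 4 x 4 spatial tubelets
```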
st206685
I was checking the Grad-CAM of a pure CNN and a hybrid model (CNN + Swin Transformer). After passing an intermediate layer from the CNN to the Swin Transformer, it looks like the transformer blocks are able to refine the feature activation globally across the relevant object, unlike the CNN, which operates more locally. (Left: input - middle: CNN - right: CNN + Transformer / hybrid.) Code example: TF: Hybrid EfficientNet Swin-Transformer: GradCAM | Kaggle
st206686
Nice. It could be interesting to visualize this as well: GitHub - google-research/vmoe
st206687
P.s. see also: Do Vision Transformers See Like Convolutional Neural Networks? | Paper Explained
st206688
Thanks for sharing this info. The paper explained in the video is super interesting (printout!). Ref: Do Vision Transformers See Like Convolutional Neural Networks?
st206689
Here's my last article about my solutions to the Advent of Code 2021 puzzles in pure TensorFlow - the last, because I have no more time to dedicate to this topic, but it has been fun! Day 12 demonstrated a limitation of TensorFlow (no recursion!) while working on graphs. P. Galeone's blog: Advent of Code 2021 in pure TensorFlow - day 12. Day 12's problem projects us into the world of graphs. TensorFlow can be used to work on graphs pretty easily, since a graph can be represented as an adjacency matrix, and thus we can have a tf.Tensor containing our graph. However, the…
st206690
ViTs are data hungry: pretraining a ViT on a large dataset like JFT-300M and fine-tuning it on medium-sized datasets (like ImageNet) is the only way to beat state-of-the-art Convolutional Neural Network models. The self-attention layer of a ViT lacks locality inductive bias (the notion that image pixels are locally correlated and that their correlation maps are translation-invariant). This is the reason why ViTs need more data. CNNs, on the other hand, look at images through spatial sliding windows, which helps them get better results with smaller datasets. In my latest Keras example I minimally implement the academic paper Vision Transformer for Small-Size Datasets. Here the authors set out to tackle the problem of locality inductive bias in ViTs by introducing two novel ideas:

Shifted Patch Tokenization (SPT): a tokenization scheme which allows a greater receptive field for the transformer. [Figure: Shifted Patch Tokenization]

Locality Self Attention (LSA): a tweaked version of the multi-head self-attention mechanism. Applying a diagonal mask and a learnable temperature quotient to regular self-attention gives us LSA. Inheriting from tf.keras.layers.MultiHeadAttention and tweaking the API was my greatest win while implementing LSA. [Figure: Locality Self Attention]

Tutorial: Train a Vision Transformer on small datasets
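To give a flavor of LSA, here is a single-head functional sketch (my own simplification; the tutorial itself tweaks tf.keras.layers.MultiHeadAttention instead):

```python
import tensorflow as tf

def locality_self_attention(q, k, v, tau):
    """q, k, v: [batch, num_tokens, dim]; tau is a learnable temperature
    replacing the fixed 1/sqrt(d_k) scaling."""
    scores = tf.matmul(q, k, transpose_b=True) / tau
    # Diagonal mask: tokens may not attend to themselves, which sharpens
    # the attention distribution over the *other* tokens.
    num_tokens = tf.shape(scores)[-1]
    diag_mask = tf.eye(num_tokens) * -1e9
    weights = tf.nn.softmax(scores + diag_mask, axis=-1)
    return tf.matmul(weights, v)
```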
st206691
Hello Community! I'm sharing a personal project of mine: rewriting the ResNet-RS models from TPUEstimator to TensorFlow/Keras. GitHub - sebastian-sz/resnet-rs-keras: ResNet-RS models rewritten in Tensorflow / Keras functional API.

Features:
- Automatic weights download.
- Transfer learning possible.
- pip install directly from GitHub.
- keras.applications-like usage.
- Use like any other Tensorflow/Keras model!

Other links: original repository, arXiv link. Let me know what you think!
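If you are wondering what "keras.applications-like usage" means in practice, it should look roughly like this (a hypothetical sketch; the import path and class name are assumed, so check the repo for the exact API):

```python
# Assumed names based on the README's "keras.applications like usage" claim.
from resnet_rs import ResNetRS50  # hypothetical import path

# Download pretrained weights automatically and build the model,
# just like tf.keras.applications classifiers.
model = ResNetRS50(weights="imagenet", include_top=True)
model.summary()
```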
st206692
Nice work, I really appreciate the “thoroughness”, especially TFLite support and Docker
st206693
@sebastian-sz would you be interested in the following? keras-team/keras: "Updating the ResNet-* weights" (opened Dec 12, 2021 by sayakpaul; type:feature, contributions welcome). The gist: ResNets are arguably among the most influential architectures in deep learning, and their ImageNet-1k performance has improved a lot since their inception, so the issue proposes updating the ResNets under tf.keras.applications to reflect ResNet-RS (https://arxiv.org/abs/2103.07579), e.g. as tf.keras.applications.ResNet50RS.
st206694
@sebastian-sz cc @Sayak_Paul - amazing job. There's an option to contribute to the core API for these model families (as mentioned by brother Sayak). It would be great if you'd consider it, like you did for EfficientNet-V2: GitHub - sebastian-sz/efficientnet-v2-keras: Efficientnet V2 adapted to Keras functional API.
st206695
@Sayak_Paul @innat Thank you for thinking of me. Do you know whether it would be possible to simply use my implementation's model file directly (linted + slightly modified)? If yes, I could probably submit a PR by the end of this month.
st206696
Thanks for considering! I quickly took a look at your implementation and it seems very structured. I think barring a few formatting and Keras-specific nits, you’d be well on your way.
st206697
Please check keras-team/keras-cv: "Including All Components to Train State of the Art Imagenet-1k models" (opened Jan 10, 2022 by LukeWood), referencing https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives and two papers: ResNet-RS and ResNet Strikes Back.
st206698
@Bhack Do you mean that the PR regarding this model family should be opened in keras-cv instead of keras? Or that the ResNet-RS models should be retrained from scratch with components created in keras-cv? The repository I created is simply a port of the model architecture + weights. I did not include training code (but none of the models under keras.applications do, either). As for training components like DropBlock / drop connect: they are present in the original repository, but upon closer inspection their parameters are always either 0 or null, so I decided not to include them.
st206699
Yes, that is where we will (or would) have all the reusable components to build and train these models.