st206400
We have a ticket at github.com/tensorflow/tensorflow: "Groups parameter of Conv2d and Conv2dTranspose ('deconvolution') not Working?", opened Nov 27, 2020 by NguyenThaiHoc1 (TF 2.3, comp:keras, stat:awaiting tensorflower, type:bug).

Hello Authors, I'm a TensorFlow user. Recently I needed a deconvolution, which I know is called "Conv2DTranspose" in TensorFlow. I want to reduce the number of parameters in my model by using groups. I found the groups property in PyTorch, and I see it in the Conv2D class (Conv2DTranspose inherits from Conv2D). But when I use it, I get a result I don't expect. This is my code:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, BatchNormalization, ReLU, MaxPool2D

model = tf.keras.Sequential([
    Conv2D(512, kernel_size=(3, 3), strides=(1, 1), padding='same', use_bias=False, name='conv'),
    BatchNormalization(),
    ReLU(),
    Conv2DTranspose(256, kernel_size=(4, 4), strides=(2, 2), padding='same', use_bias=False,
                    groups=256, kernel_initializer='he_normal'),
    BatchNormalization(),
    ReLU()
])
model.build((32, 256, 192, 3))
model.summary()
```

This is the summary I get:

```
Layer (type)                  Output Shape           Param #
conv (Conv2D)                 (32, 256, 192, 512)    13824
batch_normalization (BatchNo  (32, 256, 192, 512)    2048
re_lu (ReLU)                  (32, 256, 192, 512)    0
conv2d_transpose (Conv2DTran  (32, 512, 384, 256)    2097152
batch_normalization_1 (Batch  (32, 512, 384, 256)    1024
re_lu_1 (ReLU)                (32, 512, 384, 256)    0
Total params: 2,114,048
Trainable params: 2,112,512
Non-trainable params: 1,536
```

I think conv2d_transpose should have 2097152 / 256 (groups) = 8192 params? Sorry, my writing is not good. Thank you for reading. Thai Hoc
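To spell out where that 8192 figure comes from, here is a quick sketch of the kernel parameter count (assuming the layer actually honored groups=256 the way PyTorch does, and with use_bias=False):

```python
# Hypothetical parameter count for the transposed convolution above (no bias).
kernel_h, kernel_w = 4, 4
in_channels, out_channels, groups = 512, 256, 256

ungrouped = kernel_h * kernel_w * in_channels * out_channels
grouped = ungrouped // groups  # each group only connects a slice of the channels

print(ungrouped)  # 2097152 -- what model.summary() reports
print(grouped)    # 8192    -- what you'd expect if groups=256 were applied
```

The fact that the summary shows the ungrouped number is why it looks like the groups argument is being ignored rather than failing loudly.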
st206401
@Bhack Exactly. We used to have a ticket on grouped convolution too, and it took more than 2.5 years to close. (Issue #3332 on GitHub, can't post link here). This one is 6 months old already (it could have been filed much earlier, to be honest) and I am just concerned history will repeat itself. PyTorch has had this since at least v0.2 (July 2017, around the time "Attention Is All You Need" was published, to give an idea of how long that is in DL terms). My understanding is that even JAX can do it, using conv_general_dilated and some paracetamol.
st206402
Yes, but the grouped conv was contributed by the community in the end with: github.com/tensorflow/tensorflow "Add support for cudnn's group convolution." (tensorflow:master ← ppwwyyxx:master, opened Feb 18, 2019 by ppwwyyxx, +251 -65)

This PR enables group convolution in cudnn, a feature that's been highly desired for many years (#3332, https://github.com/tensorflow/tensorflow/issues/12052#issuecomment-320465264, #11662, #10482). With this PR, it's now allowed to call `tf.nn.conv2d(inputs, filters)` where the depth of `inputs` is not necessarily equal to `filters.shape[2]`, but is a multiple of `filters.shape[2]`. The core of this PR is only two lines of code (https://github.com/tensorflow/tensorflow/issues/3332#issuecomment-464308902) which remove the shape check. Then I added some extra checks and tests. This benchmark script:

```python
import tensorflow as tf
import time
import os

N = 64
C = 256
G = 32
H, W = 64, 64
print("N, C, H, W:", [N, C, H, W])

def benchmark_all(use_loop, format):
    shape4d = [N, C, H, W] if format == 'NCHW' else [N, H, W, C]
    tf.reset_default_graph()
    input = tf.get_variable('input', shape=shape4d, dtype=tf.float32)
    filter = tf.get_variable('filter', shape=[3, 3, C // G, C], dtype=tf.float32)
    if use_loop:
        inputs = tf.split(input, G, axis=1 if format == 'NCHW' else 3)
        filters = tf.split(filter, G, axis=3)
        output = tf.concat(
            [tf.nn.conv2d(i, f, strides=[1, 1, 1, 1], padding='SAME', data_format=format)
             for i, f in zip(inputs, filters)],
            axis=1 if format == 'NCHW' else 3)
    else:
        output = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME', data_format=format)
    forward_op = output.op
    cost = tf.reduce_sum(output)
    backward_op = tf.train.GradientDescentOptimizer(0.1).minimize(cost)

    def benchmark(op, nr_iter=200, nr_warmup=10):
        for k in range(nr_warmup):
            op.run()
        start = time.perf_counter()
        for k in range(nr_iter):
            op.run()
        end = time.perf_counter()
        itr_per_sec = nr_iter * 1. / (end - start)
        return itr_per_sec

    sess = tf.Session()
    with sess.as_default():
        sess.run(tf.global_variables_initializer())
        spd_forward = benchmark(forward_op)
        print("Loop={}, Format={}, Forward: {} itr/s".format(use_loop, format, spd_forward))
        spd_backward = benchmark(backward_op)
        print("Loop={}, Format={}, Backward: {} itr/s".format(use_loop, format, spd_backward))

formats = ['NHWC', 'NCHW']
for format in formats:
    for use_loop in [True, False]:
        benchmark_all(use_loop, format)
```

Executed on V100, cuda10, cudnn 7.4.2, it prints:

```
N, C, H, W: [64, 256, 64, 64]
Loop=True, Format=NHWC, Forward: 65.49446747235214 itr/s
Loop=True, Format=NHWC, Backward: 32.26484275606916 itr/s
Loop=False, Format=NHWC, Forward: 117.40288830454352 itr/s
Loop=False, Format=NHWC, Backward: 50.051492362319074 itr/s
Loop=True, Format=NCHW, Forward: 98.8428390274372 itr/s
Loop=True, Format=NCHW, Backward: 35.672312085388455 itr/s
Loop=False, Format=NCHW, Forward: 152.24726060851506 itr/s
Loop=False, Format=NCHW, Backward: 56.21414524041962 itr/s
```

which shows around a 50~80% speed-up over a naive loop-based implementation.

I could ask @thea to check whether the transposed group conv ticket could be labeled as "contributions welcome", or whether it is already on the internal roadmap. Also, that specific ticket doesn't have many community upvotes.
st206403
Thanks, I didn’t know this feature was eventually implemented by a contribution from the community. I am even more concerned about the implementation of this one now. I get your remark about community upvotes. At the same time depthwise/grouped (transposed) convolutions are so ubiquitous, it is hard for me to understand the reluctance.
st206404
> @Bhack Exactly. We used to have a ticket on grouped convolution too, and it took more than 2.5 years to close. (Issue #3332 on GitHub, can't post link here). This one is 6 months old already (it could have been filed much earlier, to be honest) and I am just concerned history will repeat itself.

We're currently working through our issue backlog. Ultimately, implementing changes comes down to bandwidth and the volume of requests from the community. It might be more difficult to implement than it looks on the surface. Thanks for flagging this thread. I think it's important to also ensure that further discussion validating this bug happens on the bug thread – that way our triagers and maintainers have full context in a single place. I'll leave this thread open for now in case maintainers would like to respond here, but may close it in the future to ensure that relevant details and use cases are added to the bug.

> I could ask @thea to check whether the transposed group conv ticket could be labeled as "contributions welcome", or whether it is already on the internal roadmap.

@Bhack Can you add a mention on the bug that it might be a good candidate for "contributions welcome", so that can be looked at as part of the ongoing issue grooming work?
st206405
We already have a thread at: "Calculate Flops in Tensorflow and Pytorch are not equal?" (General Discussion): Given the same model, I found that the calculated flops in PyTorch and TensorFlow are different. I used keras_flops (keras-flops · PyPI) in TensorFlow, and ptflops (ptflops · PyPI) in PyTorch to calculate flops. It seems that the flops in PyTorch are close to my own calculation by hand. Does TensorFlow have some tricks to speed up the computation so that fewer flops are measured? My model in tensorflow: d=56 s=12 inp = Input((750, 750, 1)) x = Conv2D(d, (5,5), padding='same')(inp) x = PReLU()… /cc @markdaoust @thea
st206406
Calculate Flops in Tensorflow and Pytorch are not equal? General Discussion

There's too much going on in the initial post. Start by comparing individual layers, not whole models. That will make things easier to untangle. My first impression is that you're not measuring the same thing. Do we know why, in the PyTorch model, 90% of the GMac comes from the final ConvTranspose2d layer, but that layer isn't listed for TensorFlow? "MAC" is "multiply-add calculations". The Conv2D layers are 9 GFLOPs (TF) or ~4.5 GMac (PT); 2:1 is the exchange rate, so that part makes sense.
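As a rough illustration of that 2:1 exchange rate, here is a back-of-the-envelope count for a single Conv2D layer (a sketch only; it ignores bias, strides and padding edge effects, and the helper name is made up):

```python
# Rough operation count for one Conv2D layer with 'same' padding and stride 1.
def conv2d_macs(h, w, c_in, c_out, k):
    # one multiply-accumulate per kernel tap, per input channel, per output position
    return h * w * c_in * c_out * k * k

# First layer of the model from the linked thread: 750x750x1 input, 56 filters, 5x5 kernel.
macs = conv2d_macs(750, 750, 1, 56, 5)
flops = 2 * macs  # each MAC is one multiply plus one add
print(f"{macs / 1e9:.2f} GMac, {flops / 1e9:.2f} GFLOPs")
```

For layers dominated by multiply-adds this mapping holds; element-wise ops like the exp inside softmax don't split as cleanly.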
st206407
Hi Bhack, yes, that's the basic idea, but look at the softmax activation function: it contains the calculation of e to the power x, so that will be counted as FLOPs, not MACs. My understanding is that one cannot simply divide FLOPs by 2 to get MACs. Please correct me if I am wrong.
st206408
I want to use the following file: from tensorflow.python.grappler import model_analyzer. But TensorFlow by default does not ship model_analyzer.py in the pip packages; I can't find the file under .../envs/tfenv/lib/python3.8/site-packages/tensorflow/python/grappler/. However, model_analyzer.py is indeed in TensorFlow's source tree at /tensorflow/tensorflow/blob/master/tensorflow/python/grappler/model_analyzer.py. Why does TensorFlow exclude these files under the grappler directory? How should I build TensorFlow to be able to use them?
st206409
Stonepia: model_analyzer

I'm assuming that you need to analyze the model, judging by the file name. I don't know whether it is the same thing as TFMA, but I've been using TFMA for model analysis from the beginning. If you want to give it a try, you can install it with: pip install tensorflow-model-analysis. Hope it helps.
st206410
Thanks for the reply! Actually, I want something slightly different from that: I want to modify some of the code and compile it to see what happens. That's why I cannot just use the pip package.
st206411
Is this optimizer currently unmaintained? There is little documentation about it. Also, judging from the code in tensorflow/core/grappler/optimizers, it only applies to a few gradient ops. The autoparallel pass is quite straightforward and can only apply to a few scenarios, I think? Could someone offer a little more information about this optimizer?
st206412
I think that the development of these optimizations is going to migrate to MLIR: tf.config.experimental.enable_mlir_graph_optimization - Enables experimental MLIR-based TensorFlow compiler optimizations.
st206413
What's the meaning of loss & acc? I mean, they are the loss and accuracy on the training data, but why doesn't their sum equal 1?
st206414
Accuracy is a method for measuring performance based on the actual value and the predicted value. Loss is a performance measure that is based on how much the predicted value varies from the actual value. As they are different performance measures, their sum is not 1 (in most cases).
st206415
acc refers to how often your model predicted the correct class. loss refers to how close to the label your predictions were. Consider a single binary classification where your prediction is 0.9 and your label is 1 (I’ll use mean absolute error as my loss function). Your prediction was closer to 1 than it was to 0, so it predicted correctly 1 out of 1 times, and your acc is 1.0 (100%). Your loss is |0.9 - 1| = 0.1, so you were close, but your model will use that loss to make its next prediction closer.
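Here is that single-prediction example written out with Keras metrics, just to make the two numbers concrete (a small sketch; BinaryAccuracy's default 0.5 threshold does the rounding):

```python
import tensorflow as tf

y_true = tf.constant([[1.0]])
y_pred = tf.constant([[0.9]])

# Accuracy: 0.9 rounds to class 1, which matches the label, so 1/1 correct.
acc = tf.keras.metrics.BinaryAccuracy()
acc.update_state(y_true, y_pred)
print("acc:", acc.result().numpy())          # 1.0

# Loss: mean absolute error measures how far 0.9 is from the label 1.
mae = tf.keras.losses.MeanAbsoluteError()
print("loss:", mae(y_true, y_pred).numpy())  # ~0.1
```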
st206416
Got it, they are their own respective values, not probabilities or percentages that could be obtained from each other by simple subtraction.
st206417
Hi. I’ve been working on my first TF project for a few weeks now in my spare time. I’ve learned a ton but I’m still running into a few gaps in my understanding before I think I can get everything working. I’m trying to use a CNN to predict a continuous value using images as input. I’m using the functions from keras.preprocessing.image to load the datasets but they expect the labels to be categorical. Is it possible to transform the labels into continuous values before training? Like if I have categories [“1”, “2”, “3”] can I transform those labels to values [0.0, 0.333, 1.0] and run some kind of regression? I’m having trouble finding resources for something like this so any help is super appreciated!
st206418
For further clarification, if it's helpful: my goal is to build a CNN regression model. I'd like to predict a continuous value with an image as input. Reading the documentation, I'm not clear on how to use the image-dataset functions provided in keras.preprocessing.image to load a dataset of images with numeric, continuous labels. Thanks!
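One approach I'm considering (just a sketch; the folder layout, image size and target values below are assumptions) is to load the dataset with integer labels and then remap them to continuous targets with tf.data:

```python
import tensorflow as tf

# Assumes images live in sub-folders named "1", "2", "3" (one per category).
ds = tf.keras.preprocessing.image_dataset_from_directory(
    "images/",
    labels="inferred",
    label_mode="int",       # class indices 0, 1, 2 in alphanumeric folder order
    image_size=(224, 224),
    batch_size=32,
)

# Map each class index to the continuous value you actually want to regress on.
class_to_value = tf.constant([0.0, 0.333, 1.0])

def to_regression_target(images, labels):
    return images, tf.gather(class_to_value, labels)

regression_ds = ds.map(to_regression_target)
# Then train with a regression head and loss, e.g. model.compile(loss="mse", ...)
```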
st206419
While using tf.reduce_sum with ragged tensors, I stumbled upon an issue where autograd produces an exception in graph mode. The following code fails: @tf.function() def f(x): return tf.reduce_sum(x, axis=-1) def test_autograd(): values = tf.random.uniform((8,), seed=213) sizes = tf.constant([4, 2, 2]) x = tf.RaggedTensor.from_row_lengths(values, sizes) with tf.GradientTape() as tape: tape.watch(x.flat_values) y = f(x) grad = tape.gradient(y, x.flat_values) If I run test_autograd, I get an error: self = <tf.Operation 'RaggedReduceSum/UnsortedSegmentSum' type=UnsortedSegmentSum> name = '_XlaCompile' def get_attr(self, name): """Returns the value of the attr of this op with the given `name`. Args: name: The name of the attr to fetch. Returns: The value of the attr, as a Python object. Raises: ValueError: If this op does not have an attr with the given `name`. """ fields = ("s", "i", "f", "b", "type", "shape", "tensor", "func") try: with c_api_util.tf_buffer() as buf: > pywrap_tf_session.TF_OperationGetAttrValueProto(self._c_op, name, buf) E tensorflow.python.framework.errors_impl.InvalidArgumentError: Operation 'RaggedReduceSum/UnsortedSegmentSum' has no attr named '_XlaCompile'. ../../venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:2328: InvalidArgumentError During handling of the above exception, another exception occurred: scope = 'gradients' op = <tf.Operation 'RaggedReduceSum/UnsortedSegmentSum' type=UnsortedSegmentSum> func = None grad_fn = <function _GradientsHelper.<locals>.<lambda> at 0x7f206004f4d0> def _MaybeCompile(scope, op, func, grad_fn): """Compile the calculation in grad_fn if op was marked as compiled.""" scope = scope.rstrip("/").replace("/", "_") if func is not None: xla_compile = func.definition.attr["_XlaCompile"].b xla_separate_compiled_gradients = func.definition.attr[ "_XlaSeparateCompiledGradients"].b xla_scope = func.definition.attr["_XlaScope"].s.decode() else: try: > xla_compile = op.get_attr("_XlaCompile") ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/gradients_util.py:331: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <tf.Operation 'RaggedReduceSum/UnsortedSegmentSum' type=UnsortedSegmentSum> name = '_XlaCompile' def get_attr(self, name): """Returns the value of the attr of this op with the given `name`. Args: name: The name of the attr to fetch. Returns: The value of the attr, as a Python object. Raises: ValueError: If this op does not have an attr with the given `name`. """ fields = ("s", "i", "f", "b", "type", "shape", "tensor", "func") try: with c_api_util.tf_buffer() as buf: pywrap_tf_session.TF_OperationGetAttrValueProto(self._c_op, name, buf) data = pywrap_tf_session.TF_GetBuffer(buf) except errors.InvalidArgumentError as e: # Convert to ValueError for backwards compatibility. > raise ValueError(str(e)) E ValueError: Operation 'RaggedReduceSum/UnsortedSegmentSum' has no attr named '_XlaCompile'. 
../../venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:2332: ValueError During handling of the above exception, another exception occurred: def test_autograd(): values = tf.random.uniform((8,), seed=213) sizes = tf.constant([4, 2, 2]) x = tf.RaggedTensor.from_row_lengths(values, sizes) with tf.GradientTape() as tape: tape.watch(x.flat_values) > y = f(x) graphs/tf2_sandwich_model_test.py:170: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py:580: in __call__ result = self._call(*args, **kwds) ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py:650: in _call return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py:1665: in _filtered_call self.captured_inputs) ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py:1751: in _call_flat forward_function, args_with_tangents = forward_backward.forward() ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py:1477: in forward self._inference_args, self._input_tangents) ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py:1233: in forward self._forward_and_backward_functions(inference_args, input_tangents)) ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py:1385: in _forward_and_backward_functions outputs, inference_args, input_tangents) ../../venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py:943: in _build_functions_for_outputs src_graph=self._func_graph) ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/gradients_util.py:669: in _GradientsHelper lambda: grad_fn(op, *out_grads)) ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/gradients_util.py:336: in _MaybeCompile return grad_fn() # Exit early ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/gradients_util.py:669: in <lambda> lambda: grad_fn(op, *out_grads)) ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/math_grad.py:470: in _UnsortedSegmentSumGrad return _GatherDropNegatives(grad, op.inputs[1])[0], None, None ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/math_grad.py:438: in _GatherDropNegatives dtype=is_positive_shape.dtype)], ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2967: in ones output = _constant_if_small(one, shape, dtype, name) ../../venv/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:2662: in _constant_if_small if np.prod(shape) < 1000: <__array_function__ internals>:6: in prod ??? ../../venv/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3031: in prod keepdims=keepdims, initial=initial, where=where) ../../venv/lib/python3.7/site-packages/numpy/core/fromnumeric.py:87: in _wrapreduction return ufunc.reduce(obj, axis, dtype, out, **passkwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <tf.Tensor 'gradients/RaggedReduceSum/UnsortedSegmentSum_grad/sub:0' shape=() dtype=int32> def __array__(self): raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy" > " array.".format(self.name)) E NotImplementedError: Cannot convert a symbolic Tensor (gradients/RaggedReduceSum/UnsortedSegmentSum_grad/sub:0) to a numpy array. 
../../venv/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:749: NotImplementedError If I call tf.reduce_sum directly in the test (without going through f), it does work though. How can I avoid this problem?
st206420
It is working fine for me. Here is the gist. @bennofs, can you please check again and, if possible, provide the gist in Colab.
st206421
Wow, I copied the example from the Colab to my local Python REPL and it prints the same TensorFlow version but still fails with the error from the original post. I cannot reproduce this error on Colab.
st206422
It also happens with both a system-wide installed tensorflow and a tensorflow installed into a local venv. I don’t have any GPUs (except integrated).
st206423
Can you try to reproduce the error on your machine with Docker? I cannot reproduce it with this command on my local setup:

```
docker run tensorflow/tensorflow:2.5.0 python -c "
import tensorflow as tf

@tf.function()
def f(x):
    return tf.reduce_sum(x, axis=-1)

def test_autograd():
    values = tf.random.uniform((8,), seed=213)
    sizes = tf.constant([4, 2, 2])
    x = tf.RaggedTensor.from_row_lengths(values, sizes)
    with tf.GradientTape() as tape:
        tape.watch(x.flat_values)
        y = f(x)
    grad = tape.gradient(y, x.flat_values)

test_autograd()
print('Ok')
"
```
st206424
Bhack:

```
docker run tensorflow/tensorflow:2.5.0 python -c "
import tensorflow as tf

@tf.function()
def f(x):
    return tf.reduce_sum(x, axis=-1)

def test_autograd():
    values = tf.random.uniform((8,), seed=213)
    sizes = tf.constant([4, 2, 2])
    x = tf.RaggedTensor.from_row_lengths(values, sizes)
    with tf.GradientTape() as tape:
        tape.watch(x.flat_values)
        y = f(x)
    grad = tape.gradient(y, x.flat_values)

print('Ok')
"
```

You are missing a call to test_autograd there. But I can confirm that it doesn't reproduce in Docker for me either. I'll debug further.
st206425
I’ve updated the example adding the function call. But as it is running ok in Docker you need to investigate your local env/install.
st206426
I reinstalled TensorFlow (with pip -U in the existing venv) and now the error is gone, which is great. Unfortunately, I didn't make a backup of the venv before running the pip operation, so I cannot debug any further to find the root cause.
st206427
Don't worry, that's something for next time. However, it is always better to double-check with Docker in order to have a reproducible environment and rule out env/installation issues in a local setup.
st206428
Hi! Joined today during the TensorFlow Community Team meet. I have found that learning new things is easier when preparing lessons for a local high school I volunteer at. I see the following resources and have also gone through the freecodecamp resource: Machine learning education  |  TensorFlow - Start your TensorFlow training by building a foundation in four learning areas: coding, math, ML theory, and how to build an ML project from start to finish. Any tips or suggestions on your favorite intro course (accessible to learners aged 14-18) would be much appreciated, either from anyone who has undertaken this themselves or has ideas on how they might do it. Thanks, and excited to be here.
st206429
If your students know and use Python, I created an “ML Foundations” course on YouTube. You can find it on the YT channel, and it might work well for them.
st206430
Hi @dan To add to what @Laurence_Moroney said, you can also check out the MOOCs mentioned on this page: Basics of machine learning  |  TensorFlow 12 featuring @Laurence_Moroney and @Magnus Also, if you haven’t already, check out the notebooks you can run in Colab: Machine Learning Basics with Keras (under TensorFlow.org 7 Tutorials: Basic classification: Classify images of clothing  |  TensorFlow Core 2) and TensorFlow Basics (under TensorFlow.org 7 Guides: Eager execution  |  TensorFlow Core 3) - made by @billy , @markdaoust , @Anirudh_Sriram , and many more from the team. And, there’s this awesome course: Machine Learning Crash Course  |  Google Developers 6 (You can find more resources at Machine learning education  |  TensorFlow 2).
st206431
Hi @dan, here are two book recommendations you can use to learn more (both are excellent). Manning | Deep Learning with Python, Second Edition 8 Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition [Book] 8
st206432
Hi @dan , I am also a high school student working with TensorFlow. I personally started off learning Machine Learning from Andrew Ng’s courses and strongly think they could build the basics for students. To learn specifically about TensorFlow I used and would recommend: TF in Practice (now called TensorFlow Developer) TF: Data and Deployment by @Laurence_Moroney . Apart from this, I think the official TensorFlow Guide is a quite systematic approach to learn. Being in High school myself, another thing that I think has helped me quite a lot to learn is OSS, maybe your students could make small contributions to TensorFlow while learning it; I started off with something as simple as fixing an error in a TensorFlow example! PS: In case your students want to validate their skills they could also take up the TensorFlow Developer certificate exam; TensorFlow also provides stipends for students giving the exam (which is also how I gave the exam)!
st206433
@Laurence_Moroney @jbgordon I want to first apologise for my question being this long, but this is a burning question I always have, and I feel like this is the right place to ask. It's related to the original question. I am hugely interested in learning the inner workings of ML algorithms, which I know the TensorFlow team packages into high-level APIs. But somewhere down the line, I also wish to use these helpful APIs and tools in my personal projects/ideas. So which of these two would be a better way to start my journey into the world of AI: (1) properly approach the topic conceptually and then implement it using TensorFlow libraries, OR (2) understand my project needs and learn only the related information on how to use the APIs and their associated libraries. This question arises because the APIs are an offering for developers to get their job done without doing any heavy lifting in the AI/ML domain, but I wish to make some original contribution in this domain some day. However, I also understand how powerful these new APIs and tools from the TensorFlow team are, and how they can accelerate the development of the projects/ideas I have. Awaiting an answer, and hoping to have a great time here with fellow AI enthusiasts.
st206434
For me, my approach is a hybrid of both, but usually starting with (2). There’s no substitute for just getting something to just ‘work’, as your starting point, even if you don’t fully understand it. You can then peel it apart little by little to see what’s going on under-the-hood. That’s part of the magic of open source, and I’ve learned so much by building simple things, and then using step-by-step debugging to break ‘in’ to the framework to see what it’s doing.
st206435
Laurence_Moroney: ML Foundations Laurence, these are great (sat the first two at 2x) thank you for all of your efforts! I think following along with your video and the first two lessons is a great approach to close the year out on. It will kickstart a nice conversation about Python for a focus for next school year too. @leo It has always been my belief that solving a “real problem” you have always makes learning something new easier. You’ll need to figure out how you learn best. I am a reader first, so the books for me with Laurence’ video lessons for the classroom I think will be best. Thank you all for all of the fantastic answers, I am really excited to share this with our learners. And @rishit_dagli thank you for sharing your experience, the certificate is very appealing! I will check out Andrew Ng’s courses (looks like he is on Coursera).
st206436
@Laurence_Moroney @dan Thank you for your individual perspectives. I would definitely consider your words to decide on a proper learning path suited to me.
st206437
I also suggest a general introduction to AI that could be useful: a free online introduction to artificial intelligence for non-experts. Learn more about Reaktor's and the University of Helsinki's AI course; no programming or complicated math required.
st206438
Hi @dan, apart from all the above ML and Python resources, I would suggest enrolling in an advanced mathematics course (you can use platforms like Khan Academy, Brilliant, Udemy) or starting with the "Mathematics for Machine Learning" book (search for it on Google, you'll find it). I've read this book and it will take you through every mathematical concept with easy and concise explanations. Combining this book with 3Blue1Brown and Wikipedia will help you cement your concepts. There are also other textbooks for linear algebra, probability and statistics, which are the building blocks of ML. Machine learning is a math-heavy field. Although you can start with ML (with a little math), especially with scikit-learn, you'll need strong mathematical thinking to understand the underlying concepts and ideas. A good start will help you with advanced concepts, and maybe you'll invent a new algorithm yourself!
st206439
Hello All I am looking for a mentor in my journey of becoming a machine learning engineer. I would like to contact GDEs for this. What is the best way to contact them ? Thanks Balu
st206440
Adding @Soonson_Kwon who might have a better insight on who to get in touch with from the GDE Program! Also @Sayak_Paul fyi!
st206441
Thanks for mentioning, Joana. Balu has reached out to me and I have provided some pointers.
st206442
Hi Balu: all the best to your journey to become ML engineer. I also recommend you to join local community and find relevant folks who might be interested in helping you. Due to the personal nature of mentoring, we currently don’t officially provide mentoring from our end but there should be some folks who are eager to help you if you also act as a good community citizen. Good luck!
st206443
@Joana - thank you very much for pointing me to the right people. @Sayak_Paul - thank you very much for the pointers. I will start looking at them and will follow up with you. Appreciate it. @Soonson_Kwon - Sure. I will try to engage with the local TFX community here as well. Sorry for the delayed reply. Has been away from work for a few days. Thanks Balu
st206444
Thanks for starting this thread @balumotukuru! I was going to post the same thing, requesting mentorship from an experienced person here. I have been trying to become a fully fledged ML engineer for quite a long time (on and off). But with a day job taking up 8-10 hours of the day, it has become quite challenging for me. For now, I am going to take the TF Developer Certificate exam soon. I have a detailed journey planned out for myself (gathered from multiple online sources). It would definitely help to get a second opinion and some guidance. I will surely reach out for help soon after I clear the exam.
st206445
Hello! I am trying to perform model conversions on a TensorFlow2.0 SavedModel and I need to know what the input nodes, input dimensions, and output nodes are. I am trying to use the ‘graph_transforms:summarize_graph’ tool but I keep getting the following error: Can’t parse saved_model.pb as binary proto (both text and binary parsing failed for file saved_model.pb) I also tried to visualize the graph using Tensorboard, but it provides complicated graphs that are not so obvious. Any recommendations? Thanks, Ahmad
st206446
Have you tried using GitHub - lutzroeder/netron: Visualizer for neural network, deep learning, and machine learning models to visualize your model? You may try the browser version, which allows you to quickly upload your model and displays the op-level graph.
st206447
I did try that! It just displays the graph as complicatedly as Tensorboard does.
st206448
So you are trying to view conceptual graph and not op level graph. TensorBoard allows you to view conceptual graph as well. Does this help Examining the TensorFlow Graph  |  TensorBoard 8 ?
st206449
For some reason I don’t have the option to view that when I load my model into Tensorboard. I am working with a pre-trained model and loading the model into Tensorboard using the ‘import_pb_to_tensorboard.py’ script provided by TensorFlow rather than writing into logs while training using a callback. Could that be the issue?
st206450
I see, that can be the reason. How about using the tf.keras.utils.plot_model() function? See tf.keras.utils.plot_model  |  TensorFlow Core v2.5.0:

```python
tf.keras.utils.plot_model(
    model=your_keras_model,
    to_file='model.png',
    show_shapes=True,
    show_dtype=True,
    show_layer_names=True,
    rankdir='TB',
    expand_nested=True,
    dpi=96
)
```

expand_nested=True will expand your pre-trained model.
st206451
Hello, TensorFlow is, it seems to me, under the MIT license. I saw that there is TensorFlow.js, but like a lot of people I have an allergy to JavaScript. Is there an official PHP version planned? There are many forks, but hey… and I prefer to limit JSON as much as possible.
st206452
Not as a wrapper. Instead if you need to consume model inference with PHP you could use the serving REST API TensorFlow RESTful API  |  TFX  |  TensorFlow 10
st206453
I have used this method; I find it inelegant and, above all, not very efficient. JavaScript gives me hives, but thank you for the answer.
st206454
Excuse me, does anybody know how to apply transfer learning to a Faster R-CNN on my own dataset? Or is there any reference for that? I still don't understand whether it is included as the base network in my model or what. Thank you very much.
st206455
How about the following, though I’ve not tested this yet: In TensorFlow Hub (a repository of pre-trained TensorFlow models), click on/search for “object detection” models: (link to Image Object Detection results: TensorFlow Hub 25) In the results, there is faster_rcnn/openimages_v4/inception_resnet_v2 (link: TensorFlow Hub 17) Object detection model trained on Open Images V4 with ImageNet pre-trained Inception Resnet V2 as image feature extractor. FasterRCNN+InceptionResNetV2 network trained on Open Images V4. Then, check out the Transfer learning with TensorFlow Hub  |  TensorFlow Core 28 tutorial. Looping in @lgusm (DevRel) and the docs team - notebook co-authors @billy @markdaoust
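For what it's worth, running that hub detector for inference looks roughly like this (a sketch I haven't run here; the random image is a placeholder and the output keys are examples from the module's documentation). This is inference only, fine-tuning is a separate question:

```python
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load(
    "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
).signatures["default"]

# The module expects a float32 image in [0, 1] with shape [1, height, width, 3].
image = tf.random.uniform([1, 480, 640, 3])
result = detector(image)
print(list(result.keys()))  # e.g. detection_boxes, detection_scores, detection_class_entities
```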
st206456
TensorFlow HUB currently doesn’t support fine-tuning on Faster RCNN and generally on other models in the Object Detection API. But it is a quite requested feature. See Make Object detection models fine-tunable · Issue #678 · tensorflow/hub · GitHub 25. Let’s see if there is any other progress to share on this.
st206457
Might be available (no deadline I guess) under TensorFlow Model Garden: github.com tensorflow/models 28 master/official Models and examples built with TensorFlow. Contribute to tensorflow/models development by creating an account on GitHub.
st206458
In this topic, you will find some impressive tutorials that will help you learn TFX.
Resources:
- TensorFlow Page
- TensorFlow Extended (TFX)
- ML Pipelines on Google Cloud
- Manage a production ML pipeline with TFX
- How to build an ML pipeline with TFX
- MLOps Specialization
I hope it will help you, and feel free to add more tutorials.
st206459
The MLOps Specialization on Coursera is also great: Coursera Machine Learning Engineering for Production (MLOps) 9 Offered by DeepLearning.AI. Become a Machine Learning expert. Productionize your machine learning knowledge and expand your production ... Enroll for free.
st206460
In the TFX GitHub repo, there are lots of examples in tfx/tfx/examples at master · tensorflow/tfx · GitHub. I guess this is advanced content; it is better to try it after learning the basics. There are image, NLP and tabular-data examples.
st206461
Hi there. I plan to build a machine learning project with TensorFlow Lite and Arduino for The TensorFlow Microcontroller Challenge, and I have a question: is it possible to deploy a tflite model on an Arduino UNO or not? If you can give me an example, I will be thankful.
st206462
Hey Kareem! Check this session during #GoogleIO . Building with TensorFlow Lite for microcontrollers 45 This may help you. Thanks.
st206463
Also, I don't think the Arduino Uno is supported, as the Arduino documentation mentions using the Arduino BLE board. Check here.
st206464
Hi @saswatsamal, yes, I think it is impossible; all the videos, including the Google I/O video, use the TinyML machine learning kit.
st206465
It actually depends on the hardware, and since the Uno lacks those capabilities, you need to use the BLE board.
st206466
If you want to run ML Models on Arduino Uno, then you can use TensorFlowJS. Read their docs, and search for Tiny Sorter, you’ll understand.
st206467
You could use tf.image.resize: TensorFlow tf.image.resize  |  TensorFlow Core v2.5.0 3 Resize images to size using the specified method.
st206468
If you need this for preprocessing see also TensorFlow tf.keras.layers.experimental.preprocessing.Resizing Image resizing layer.
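A minimal sketch of both options (the shapes here are placeholders):

```python
import tensorflow as tf

images = tf.random.uniform([8, 300, 200, 3])  # placeholder batch of images

# Option 1: resize tensors on the fly
resized = tf.image.resize(images, size=(224, 224), method="bilinear")

# Option 2: make resizing part of the model as a preprocessing layer
model = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.Resizing(224, 224),
    # ... the rest of the model goes here
])
print(resized.shape, model(images).shape)
```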
st206469
I was looking for a model to process French sentences, but I can’t find any for TF.js. So, using tensorflowjs_converter, I tried to convert the universal-sentence-encoder-multilingual model (TensorFlow Hub 2), but it’s not working. I get an error “Op type not registered ‘SentencepieceOp’ in binary running” Is there an existing multilingual model available for TF.js or a way to make it work? Thanks!
st206470
Have you checked TF2.0 hub Universal Sentence Encoder Multilingual Sentenepieceop not registered problem · Issue #463 · tensorflow/hub · GitHub 7?
st206471
I'm trying to convert that model to TF.js with the tensorflowjs_converter tool like this:

```
tensorflowjs_converter \
    --input_format=tf_hub \
    'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3' \
    web_model
```

I did make it work with Python (with import tensorflow_text), but I'm looking for a way to make it work with JavaScript.
st206472
It was similar to Converting to tensorflow.js · Issue #668 · google-research/text-to-text-transfer-transformer · GitHub 4
st206473
Thanks! Let me loop in the TFJS team for this one. Someone should reply shortly.
st206474
This is also an interesting topic for TF Hub: how to handle ecosystem dependencies in TF Hub models, like in this case, when we need to use the model with the converter.
st206475
I haven't found docs on how to use tfjs.converters directly, but I was able to get past the tensorflow_text issue with the following code (based on the logic of the CLI converter):

```python
import tensorflow as tf
import tensorflowjs as tfjs
import tensorflow_hub as hub
import tensorflow_text
from tensorflowjs.converters import tf_saved_model_conversion_v2

tf_saved_model_conversion_v2.convert_tf_hub_module(
    "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3",
    "web_model",
    signature="serving_default"
)
```

However, I now get the following error:

```
ValueError: Unsupported Ops in the model before optimization
SentencepieceOp, SegmentSum, RaggedTensorToSparse, ParallelDynamicStitch, SentencepieceTokenizeOp, DynamicPartition
```

It seems that this multilingual model uses different operators than the universal sentence encoder provided on TF.js's models page.
st206476
XeL: universal-sentence-encoder-multilingual

Xel, another issue you might find later, given you manage to convert, is that this model is a little bit large for a webpage (> 200MB). You might have to take that into account too.
st206477
From the TFHub side, we try to address the ecosystem dependencies by adding this to the documentation. In this case specifically, the code snippet on the model page uses tf_text. Do you think adding some specific section to the documentation would help?
st206478
I don't know if we could add machine-readable metadata related to dependencies somewhere, as this would be better for the other ecosystem tools and for any automation. This could also be consumed to create a specific dependencies section on the TFHub model webpage.
st206479
Yes, and we don't have a lite version of the multilingual model as we do for universal-sentence-encoder-lite.
st206480
XeL: However, I now get the following error: ValueError: Unsupported Ops in the model before optimization SentencepieceOp, SegmentSum, RaggedTensorToSparse, ParallelDynamicStitch, SentencepieceTokenizeOp, DynamicPartition XeL, TensorFlow.js unfortunately doesn’t have support for those ops yet, but you might be able to convert the model to a TFLite saved model 6 and then use tfjs-tflite 3, which runs TFLite models in the browser using webassembly.
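The conversion path suggested above would look roughly like this (a sketch, untested on this exact model; the SavedModel path is a placeholder for a local download of the hub module, and SentencePiece still needs the TF Text ops available at load and run time):

```python
import tensorflow as tf
import tensorflow_text  # registers the SentencePiece ops so the SavedModel can load

converter = tf.lite.TFLiteConverter.from_saved_model(
    "universal-sentence-encoder-multilingual_3"  # placeholder: local copy of the hub model
)
# Fall back to full TensorFlow ops for anything the TFLite builtins can't express.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
converter.allow_custom_ops = True  # SentencePiece may need the custom-op path
tflite_model = converter.convert()

with open("use_multilingual.tflite", "wb") as f:
    f.write(tflite_model)
```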
st206481
The TF lite Text ops list is available at Supported Select TensorFlow operators  |  TensorFlow Lite 2
st206482
Good news! I was able to compile it using TensorFlow Lite. I'll test it out, but as @lgusm pointed out, it weighs 278 MB, so I guess I'll have trouble using it on the web. It's really hard to find pre-trained models for languages other than English. Thanks for your help!
st206483
XeL: Good news! I was able to compile it using TensorFlow Lite. I'll test it out, but as @lgusm pointed out, it weighs 278 MB, so I guess I'll have trouble using it on the web.

Off the top of my head - have you tried any of the quantization techniques for model size reduction mentioned in the TensorFlow Lite docs? I hope some of the following stuff helps:
- Model optimization  |  TensorFlow Lite
- Post-training quantization  |  TensorFlow Lite
- Post-training dynamic range quantization  |  TensorFlow Lite
- Post-training integer quantization  |  TensorFlow Lite
- Post-training float16 quantization  |  TensorFlow Lite
- Post-training integer quantization with int16 activations
- TensorFlow Lite 8-bit quantization specification
Also, in case you haven't checked this out - there are TensorFlow Lite Model Maker guides and tutorials specifically for NLP (QA and classification) (cc @billy):
- TensorFlow Lite Model Maker
- BERT Question Answer with TensorFlow Lite Model Maker
- Text classification with TensorFlow Lite Model Maker
And, if you are into ML research:
- Google AI Blog: Advancing NLP with Efficient Projection-Based Model Architectures - pQRNN is quantized, further reducing the model size by a factor of 4x.
- [1712.05877] Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference (Jacob et al., 2017 - Google)
- Quantization Aware Training with TensorFlow Model Optimization Toolkit - Performance with Accuracy (2020, TensorFlow blog)
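As a starting point, dynamic range quantization is only a couple of lines on top of a normal conversion (a sketch, assuming you already have the SavedModel directory on disk):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic range quantization
tflite_quant_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant_model)
# Weights are stored as 8-bit integers, which typically shrinks the file by roughly 4x.
```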
st206484
Very good tip @8bitmp3, these techniques might be able to help with the model's size! You might lose a little bit of accuracy, but it's well worth a try!
st206485
I don’t think we have specific info. See github.com/tensorflow/tensorflow Export Control Classification Number (ECCN) for Tensorflow 11 opened Jan 22, 2019 closed Jan 23, 2019 zygfrydw **System information** - TensorFlow version: 1.8.0 - Doc Link: I would like… to use Tensorflow in commercial software which will be sold in the U.S. For this reason, the legal department asks me about Export Control Classification Number (ECCN) for Tensorflow library. From my understanding, the open sources software is not subject to [Encryption and Export Administration Regulations (EAR)](https://www.bis.doc.gov/index.php/policy-guidance/encryption/1-encryption-items-not-subject-to-the-ear). Can anyone confirm that Tensorflow is not a subject to EAR or point a ECCN class for Tensorflow? Does Tensorflow use any encryption functionality, which should be mention when applying for ECCN for software which uses Tensorflow?
st206486
Grappler offers an AutoParallel optimizer, as stated in "TensorFlow graph optimization with Grappler  |  TensorFlow Core". But there is no explicit tool for setting the flag; tf.config.optimizer.set_experimental_options() does not offer this option (auto_parallel) either. How could I enable this? Thanks in advance.
st206487
It is available but not documented. I think you could open a Documentation-type issue on GitHub.

github.com tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/rewriter_config.proto#L14-L17

```
message AutoParallelOptions {
  bool enable = 1;
  int32 num_replicas = 2;
}
```

github.com tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/rewriter_config.proto#L177-L179

```
// Configures AutoParallel optimization passes either through the
// meta-optimizer or when manually specified through the optimizers field.
AutoParallelOptions auto_parallel = 5;
```
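Until it is documented, one way that appears to work is to set the field directly on the RewriterConfig through the compat.v1 session config (a sketch for graph-mode code; the num_replicas value is just an example):

```python
import tensorflow as tf

config = tf.compat.v1.ConfigProto()
rewrite_options = config.graph_options.rewrite_options
rewrite_options.auto_parallel.enable = True      # the AutoParallelOptions shown above
rewrite_options.auto_parallel.num_replicas = 2   # example value

with tf.compat.v1.Session(config=config) as sess:
    # build and run the graph you want Grappler to auto-parallelize here
    pass
```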
st206488
Hi guys, could you please shed light on the following? Requirement: create a sequence predictor based on geometric shapes. Shapes can be circles, rectangles and triangles; they can vary in size and in their angles, if they have any. The sequencer should try to predict the most likely next shape and its size. What has been tried: Basics: the data has been normalized; training data, validation data and test data are separate and different sets. Implementation: (1) Passed the shape's numeric data as part of the sequence, e.g. the points that represent the triangle; it seems the LSTM is trying to do some math with it and never picks up a pattern, I guess, given the different shapes. (2) Turned the shape's points into strings and used a categorical LSTM: the strings are so varied that no pattern is picked up. (3) Rounded the dimension numbers and made them strings: too much precision gets lost, I get a crazy big dictionary, and no pattern is picked up. In all cases the test-data evaluation is no more than 40% accurate. Any ideas to tackle this type of requirement? Thanks!
st206489
Hello George, I’m not sure how much this will help, but here are a few things I might think about too. And these might be things you’re already thinking about too! First is to be careful with how the data is normalized, think about the domain and range of your inputs and outputs. If all of your circles have a max radius of 10, it might be hard for your model to predict a number larger than that. Second is maybe you can describe the shapes differently than points. I’m not sure exactly what your data looks like, but coordinates I think might not be important. Without putting the shapes on a coordinate system, you could define any rectangle for example with just a height and width, any circle could be defined with just a radius, and I think a triangle could be defined with two side lengths and the angle between them. You could use these simpler representations as inputs and outputs, and maybe they would be easier for the model to pick up. Third is maybe try breaking up the problem a bit, like first predicting what type of shape comes next, then predicting the dimensions and size of that shape. I hope this helps a bit!
st206490
Thanks asweet. I am trying now with lengths and angles, and it doesn't work either. It seems to me that the LSTM tries to make sense of any number as a function of something, but in this case the sequences of shapes and sizes are arbitrary, so there is no function to find. On the other hand, when treating it as categorical, the sizes create too much variation for categories. I think I don't understand how to use the LSTM for this case, where there is a mix of categorical and numerical sequences. In my case there is no logic to the sequence, just common sequences that need to be memorized and identified, once the pattern is forming, as one of the 'known' sequences.
st206491
Hi, I am very interested in the TFX project and have started learning and contributing to it on GitHub, and I hope one day I will become a member of the TFX team. In this topic, I want to discuss the readiness of TFX for completing an MLOps pipeline. Does just learning TFX get you ready to complete projects, or will you need more tools to complete your project? Excited to hear the TensorFlow team's answers.
st206492
In my opinion, TFX provides enough features to complete an MLOps project, but you will need additional engineering to do it. TFX covers a large part of MLOps.
st206493
Totally a newbie in TFX. But with my existing ML and SWE knowledge, I totally agree.
st206494
I'm interested in learning TensorFlow. I have a real-world problem where I have two tables of data; the data is made up of columns of type string, number and date. Each row in table 1 has an equivalent entry in table 2. The data in table 2 will be similar but not exactly equal to its equivalent in table 1. Is it possible to use TensorFlow to identify which records are related to each other?
st206495
That’s a great question! I think it depends how you define that “relation”. You may be familiar with some of these methods, but please check them out if you’re not! You could do something like an encoder decoder network, and then compare the encodings with a distance similarity (like cosine distance). But you could also try that with a method like PCA. Encoding and PCA, in a simplified sense, take your records and convert them to a few numbers, so you can then just compare the numbers with some distance formula. There are some great tutorials on using Keras for encoders, so please check them out! Scikit-learn has some good documentation on PCA (principal component analysis), and the tensorflow and keras site have some good code examples, including some on encoders (usually with images, but it’s the same concept). I hope this helps!
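To illustrate that last step: once each record has been reduced to a small vector (by an encoder, PCA, or anything else), matching rows is just a nearest-neighbour search. A rough sketch with cosine similarity, using random placeholders for the encodings:

```python
import numpy as np

# Placeholder encodings: one row per record, produced by an encoder or PCA.
table1_enc = np.random.rand(5, 8)
table2_enc = np.random.rand(5, 8)

def cosine_similarity(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T  # shape: [rows in table 1, rows in table 2]

sim = cosine_similarity(table1_enc, table2_enc)
best_match = sim.argmax(axis=1)  # for each table-1 row, the most similar table-2 row
print(best_match)
```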
st206496
Great thanks, I’m not familiar with the terms, so you’ve definitely given me lots to be reading up on!
st206497
I am a high school student and ML is very fascinating to me. That's why I spent most of last year studying ML and deep learning algorithms (using Python). A few months ago, I came to know about TensorFlow, so I tried its keras.layers to make some custom models. It's a great tool, but it's like a sea of features, which makes it very complicated too (at least for me). So I am looking for some suggestions about where I can start learning it, or a learning path or guide. Thank you.
st206498
Hello! Check out these posts in a related thread: Introducing TensorFlow to high school students General Discussion If your students know and use Python, I created an “ML Foundations” course on YouTube. You can find it on the YT channel, and it might work well for them. Introducing TensorFlow to high school students General Discussion Hi @dan To add to what @Laurence_Moroney said, you can also check out the MOOCs mentioned on this page: Basics of machine learning  |  TensorFlow featuring @Laurence_Moroney and @Magnus Also, if you haven’t already, check out the notebooks you can run in Colab: Machine Learning Basics with Keras (under TensorFlow.org Tutorials: Basic classification: Classify images of clothing  |  TensorFlow Core) and TensorFlow Basics (under TensorFlow.org Guides: Eager execution  |  TensorFlow Cor… Introducing TensorFlow to high school students General Discussion Hi @dan, here are two book recommendations you can use to learn more (both are excellent). Manning | Deep Learning with Python, Second Edition Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition [Book] Introducing TensorFlow to high school students General Discussion Hi @dan , I am also a high school student working with TensorFlow. I personally started off learning Machine Learning from Andrew Ng’s courses and strongly think they could build the basics for students. To learn specifically about TensorFlow I used and would recommend: TF in Practice (now called TensorFlow Developer) TF: Data and Deployment by @Laurence_Moroney . Apart from this, I think the official TensorFlow Guide is a quite systematic approach to learn. Being in High school myself, anot…
st206499
Good question @Harsh_Banka. I wanted to ask the same thing. I was checking out ML under Google Developers on YouTube; it seemed pretty solid and easy to understand. I am sure it's a great start.