instruction | input | output | system
---|---|---|---|
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 | <h2>What is this warning about?</h2>
<p>Modern CPUs provide a lot of low-level instructions, besides the usual arithmetic and logic, known as extensions, e.g. SSE2, SSE4, AVX, etc. From <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions" rel="noreferrer">Wikipedia</a>:</p>
<blockquote>
<p><strong>Advanced Vector Extensions</strong> (<strong>AVX</strong>) are extensions to the x86 instruction
set architecture for microprocessors from Intel and AMD proposed by
Intel in March 2008 and first supported by Intel with the Sandy
Bridge processor shipping in Q1 2011 and later on by AMD with the
Bulldozer processor shipping in Q3 2011. AVX provides new features,
new instructions and a new coding scheme.</p>
</blockquote>
<p>In particular, AVX introduces <a href="https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation#Fused_multiply.E2.80.93add" rel="noreferrer">fused multiply-accumulate</a> (FMA) operations, which speed up linear algebra computations, namely dot products, matrix multiplication, convolution, etc. Almost all machine-learning training involves a great deal of these operations, so it will be faster on a CPU that supports AVX and FMA (up to 300%). The warning states that your CPU does support AVX (hooray!).</p>
<p>I'd like to stress here: it's all about <strong>CPU only</strong>.</p>
<h2>Why isn't it used then?</h2>
<p>Because the default TensorFlow distribution is built <a href="https://github.com/tensorflow/tensorflow/issues/7778" rel="noreferrer">without CPU extensions</a>, such as SSE4.1, SSE4.2, AVX, AVX2, FMA, etc. The default builds (the ones from <code>pip install tensorflow</code>) are intended to be compatible with as many CPUs as possible. Another argument is that even with these extensions a CPU is a lot slower than a GPU, and medium- and large-scale machine-learning training is expected to be performed on a GPU.</p>
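<p>If you want to check which of these extensions your own CPU actually advertises, here is a minimal sketch (Linux only, assuming <code>/proc/cpuinfo</code> is available):</p>
<pre><code># List the SIMD-related flags advertised by the CPU (Linux only)
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

print(sorted(flags & {"sse4_1", "sse4_2", "avx", "avx2", "fma"}))
</code></pre>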
<h2>What should you do?</h2>
<p><strong>If you have a GPU</strong>, you shouldn't care about AVX support, because most expensive ops will be dispatched on a GPU device (unless explicitly set not to). In this case, you can simply ignore this warning by</p>
<pre><code># Just disables the warning, doesn't take advantage of AVX/FMA to run faster
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
</code></pre>
<p>... or by setting <code>export TF_CPP_MIN_LOG_LEVEL=2</code> if you're on Unix. TensorFlow still works fine either way; you just won't see these annoying warnings.</p>
<hr />
<p><strong>If you don't have a GPU</strong> and want to utilize the CPU as much as possible, <strong>you should build TensorFlow from source, optimized for <em>your</em> CPU</strong>, with AVX, AVX2, and FMA enabled if your CPU supports them. This has been discussed in <a href="https://stackoverflow.com/q/41293077/712995">this question</a> and also <a href="https://github.com/tensorflow/tensorflow/issues/8037" rel="noreferrer">this GitHub issue</a>. TensorFlow uses a build system called <a href="https://bazel.build/" rel="noreferrer">Bazel</a>, and building it is not trivial, but it is certainly doable. After this, not only will the warning disappear, but TensorFlow's performance should also improve.</p>
| you are the stackoverflow assistant |
|
TensorFlow not found using pip | <p>I found this to finally work.</p>
<pre><code>python3 -m pip install --upgrade https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.12.0-py3-none-any.whl
</code></pre>
<p>Edit 1: This was tested on Windows (8, 8.1, 10), Mac and Linux. Change <code>python3</code> to <code>python</code> according to your configuration. Change <code>py3</code> to <code>py2</code> in the url if you are using Python 2.x.</p>
<p>Edit 2: A list of different versions, if someone needs it: <a href="https://storage.googleapis.com/tensorflow" rel="noreferrer">https://storage.googleapis.com/tensorflow</a></p>
<p>Edit 3: A list of URLs for the available wheel packages is available here:
<a href="https://www.tensorflow.org/install/pip#package-location" rel="noreferrer">https://www.tensorflow.org/install/pip#package-location</a></p>
| you are the stackoverflow assistant |
|
How to save/restore a model after training? | <p>In (and after) <strong>Tensorflow version 0.11</strong>:</p>
<p><strong>Save the model:</strong></p>
<pre><code>import tensorflow as tf
#Prepare to feed input, i.e. feed_dict and placeholders
w1 = tf.placeholder("float", name="w1")
w2 = tf.placeholder("float", name="w2")
b1= tf.Variable(2.0,name="bias")
feed_dict ={w1:4,w2:8}
#Define a test operation that we will restore
w3 = tf.add(w1,w2)
w4 = tf.multiply(w3,b1,name="op_to_restore")
sess = tf.Session()
sess.run(tf.global_variables_initializer())
#Create a saver object which will save all the variables
saver = tf.train.Saver()
#Run the operation by feeding input
print(sess.run(w4, feed_dict))
#Prints 24, which is (w1+w2)*b1 = (4+8)*2
#Now, save the graph
saver.save(sess, 'my_test_model',global_step=1000)
</code></pre>
<p><strong>Restore the model:</strong></p>
<pre><code>import tensorflow as tf
sess=tf.Session()
#First let's load meta graph and restore weights
saver = tf.train.import_meta_graph('my_test_model-1000.meta')
saver.restore(sess,tf.train.latest_checkpoint('./'))
# Access saved Variables directly
print(sess.run('bias:0'))
# This will print 2, which is the value of bias that we saved
# Now, let's access and create placeholders variables and
# create feed-dict to feed new data
graph = tf.get_default_graph()
w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
feed_dict ={w1:13.0,w2:17.0}
#Now, access the op that you want to run.
op_to_restore = graph.get_tensor_by_name("op_to_restore:0")
print(sess.run(op_to_restore, feed_dict))
#This will print 60, which is (13+17)*2
</code></pre>
<p>This and some more advanced use-cases have been explained very well here.</p>
<p><a href="http://cv-tricks.com/tensorflow-tutorial/save-restore-tensorflow-models-quick-complete-tutorial/" rel="noreferrer">A quick complete tutorial to save and restore Tensorflow models</a></p>
| you are the stackoverflow assistant |
|
What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits? | <p>The softmax+logits simply means that the function operates on the unscaled output of earlier layers and that the relative scale of the units is linear. In particular, it means the sum of the inputs may not equal 1 and the values are <em>not</em> probabilities (you might have an input of 5). Internally, it first applies softmax to the unscaled output, and then computes the cross entropy of those values vs. what they "should" be as defined by the labels.</p>
<p><code>tf.nn.softmax</code> produces the result of applying the <a href="https://en.wikipedia.org/wiki/Softmax_function" rel="noreferrer">softmax function</a> to an input tensor. The softmax "squishes" the inputs so that <code>sum(input) = 1</code>, and it does the mapping by interpreting the inputs as log-probabilities (logits) and then converting them back into raw probabilities between 0 and 1. The output of softmax has the same shape as the input:</p>
<pre><code>a = tf.constant(np.array([[.1, .3, .5, .9]]))
print(s.run(tf.nn.softmax(a)))  # s is an open tf.Session()
[[ 0.16838508 0.205666 0.25120102 0.37474789]]
</code></pre>
<p>See <a href="https://stackoverflow.com/questions/17187507/why-use-softmax-as-opposed-to-standard-normalization">this answer</a> for more about why softmax is used extensively in DNNs.</p>
<p><code>tf.nn.softmax_cross_entropy_with_logits</code> combines the softmax step with the calculation of the cross-entropy loss, but it does it all together in a more numerically careful way. It's similar to the result of:</p>
<pre><code>sm = tf.nn.softmax(x)
ce = cross_entropy(sm)
</code></pre>
<p>The cross entropy is a summary metric: it sums across the elements. The output of <code>tf.nn.softmax_cross_entropy_with_logits</code> on a shape <code>[2,5]</code> tensor is of shape <code>[2]</code> (the first dimension is treated as the batch).</p>
<p>If you want to do optimization to minimize the cross entropy <strong>AND</strong> you're softmaxing after your last layer, you should use <code>tf.nn.softmax_cross_entropy_with_logits</code> instead of doing it yourself, because it covers numerically unstable corner cases in the mathematically right way. Otherwise, you'll end up hacking it by adding little epsilons here and there.</p>
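<p>As a rough illustration (a TF 1.x-style sketch with made-up constants), the fused op gives the same numbers as the manual softmax-then-cross-entropy computation on well-behaved inputs:</p>
<pre><code>import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])

# "Manual" version: softmax followed by cross entropy (numerically fragile)
manual = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), axis=1)

# Fused version: handles the unstable corner cases internally
fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    print(sess.run([manual, fused]))  # both are roughly [0.417]
</code></pre>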
<p><strong>Edited 2016-02-07:</strong>
If you have single-class labels, where an object can only belong to one class, you might now consider using <code>tf.nn.sparse_softmax_cross_entropy_with_logits</code> so that you don't have to convert your labels to a dense one-hot array. This function was added after release 0.6.0.</p>
| you are the stackoverflow assistant |
|
What is the meaning of the word logits in TensorFlow? | <p>Logits is an overloaded term which can mean many different things:</p>
<hr>
<p><strong>In Math</strong>, <a href="https://en.wikipedia.org/wiki/Logit" rel="noreferrer">Logit</a> is a function that maps probabilities (<code>[0, 1]</code>) to R (<code>(-inf, inf)</code>)</p>
<p><a href="https://i.sstatic.net/zto5q.png" rel="noreferrer"><img src="https://i.sstatic.net/zto5q.png" alt="enter image description here"></a></p>
<p>A probability of 0.5 corresponds to a logit of 0. Negative logits correspond to probabilities less than 0.5, positive logits to probabilities greater than 0.5.</p>
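<p>A small numerical illustration (a NumPy sketch; the logit function is <code>log(p / (1 - p))</code>, the inverse of the sigmoid):</p>
<pre><code>import numpy as np

def logit(p):
    return np.log(p / (1 - p))   # inverse of the sigmoid function

print(logit(0.5))    # 0.0
print(logit(0.25))   # about -1.10 (a probability below 0.5 gives a negative logit)
print(logit(0.75))   # about +1.10 (a probability above 0.5 gives a positive logit)
</code></pre>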
<p><strong>In ML</strong>, it <a href="https://developers.google.com/machine-learning/glossary/#logits" rel="noreferrer">can be</a> </p>
<blockquote>
<p>the vector of raw (non-normalized) predictions that a classification
model generates, which is ordinarily then passed to a normalization
function. If the model is solving a multi-class classification
problem, logits typically become an input to the softmax function. The
softmax function then generates a vector of (normalized) probabilities
with one value for each possible class.</p>
</blockquote>
<p><strong>Logits also</strong> <a href="https://developers.google.com/machine-learning/glossary/#logits" rel="noreferrer">sometimes</a> refer to the element-wise inverse of the sigmoid function.</p>
| you are the stackoverflow assistant |
|
How to tell if tensorflow is using gpu acceleration from inside python shell? | <p>No, I don't think "open CUDA library" is enough to tell, because different nodes of the graph may be on different devices.</p>
<p>When using tensorflow2:</p>
<pre><code>print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
</code></pre>
<p>For tensorflow1, to find out which device is used, you can enable log device placement like this:</p>
<pre><code>sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
</code></pre>
<p>Check your console for this type of output.</p>
<p><a href="https://i.sstatic.net/RtRiB.png" rel="noreferrer"><img src="https://i.sstatic.net/RtRiB.png" alt="" /></a></p>
| you are the stackoverflow assistant |
|
What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow? | <p>If you like ascii art:</p>
<ul>
<li><p><code>"VALID"</code> = without padding:</p>
<pre><code> inputs: 1 2 3 4 5 6 7 8 9 10 11 (12 13)
|________________| dropped
|_________________|
</code></pre></li>
<li><p><code>"SAME"</code> = with zero padding:</p>
<pre><code> pad| |pad
inputs: 0 |1 2 3 4 5 6 7 8 9 10 11 12 13|0 0
|________________|
|_________________|
|________________|
</code></pre></li>
</ul>
<p>In this example:</p>
<ul>
<li>Input width = 13</li>
<li>Filter width = 6</li>
<li>Stride = 5</li>
</ul>
<p>Notes:</p>
<ul>
<li><code>"VALID"</code> only ever drops the right-most columns (or bottom-most rows).</li>
<li><code>"SAME"</code> tries to pad evenly left and right, but if the amount of columns to be added is odd, it will add the extra column to the right, as is the case in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom).</li>
</ul>
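<p>To sanity-check these numbers in code, here is a minimal sketch (a TF 1.x-style <code>tf.nn.max_pool</code> call with the same width 13, window 6 and stride 5; the static shapes can be inspected without running a session):</p>
<pre><code>import tensorflow as tf

# batch=1, height=1, width=13, channels=1
x = tf.reshape(tf.range(13, dtype=tf.float32), [1, 1, 13, 1])

valid = tf.nn.max_pool(x, ksize=[1, 1, 6, 1], strides=[1, 1, 5, 1], padding='VALID')
same  = tf.nn.max_pool(x, ksize=[1, 1, 6, 1], strides=[1, 1, 5, 1], padding='SAME')

print(valid.shape)  # (1, 1, 2, 1): 2 output columns, the right-most inputs are dropped
print(same.shape)   # (1, 1, 3, 1): 3 output columns, thanks to zero padding
</code></pre>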
<p><strong>Edit</strong>:</p>
<p>About the name:</p>
<ul>
<li>With <code>"SAME"</code> padding, if you use a stride of 1, the layer's outputs will have the <strong>same</strong> spatial dimensions as its inputs.</li>
<li>With <code>"VALID"</code> padding, there's no "made-up" padding inputs. The layer only uses <strong>valid</strong> input data.</li>
</ul>
| you are the stackoverflow assistant |
|
How to find which version of TensorFlow is installed in my system? | <p>This depends on how you installed TensorFlow. I am going to use the same headings used by <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup.html#download-and-setup" rel="noreferrer">TensorFlow's installation instructions</a> to structure this answer.</p>
<hr>
<h2>Pip installation</h2>
<p>Run:</p>
<pre><code>python -c 'import tensorflow as tf; print(tf.__version__)' # for Python 2
python3 -c 'import tensorflow as tf; print(tf.__version__)' # for Python 3
</code></pre>
<p>Note that <code>python</code> is symlinked to <code>/usr/bin/python3</code> in some Linux distributions, so use <code>python</code> instead of <code>python3</code> in these cases.</p>
<p><code>pip list | grep tensorflow</code> for Python 2 or <code>pip3 list | grep tensorflow</code> for Python 3 will also show the version of Tensorflow installed. </p>
<hr>
<h2>Virtualenv installation</h2>
<p>Run:</p>
<pre><code>python -c 'import tensorflow as tf; print(tf.__version__)' # for both Python 2 and Python 3
</code></pre>
<p><code>pip list | grep tensorflow</code> will also show the version of Tensorflow installed. </p>
<p>For example, I have installed TensorFlow 0.9.0 in a <code>virtualenv</code> for Python 3. So, I get:</p>
<pre><code>$ python -c 'import tensorflow as tf; print(tf.__version__)'
0.9.0
$ pip list | grep tensorflow
tensorflow (0.9.0)
</code></pre>
| you are the stackoverflow assistant |
|
Could not find a version that satisfies the requirement tensorflow | <p>The latest requirements for running TensorFlow are documented in the <a href="https://www.tensorflow.org/install/pip" rel="noreferrer">installation documentation</a>.</p>
<ul>
<li><p>TensorFlow only supports 64-bit Python</p>
</li>
<li><p>TensorFlow only supports certain versions of Python (for example, Python 3.6 is not supported)</p>
</li>
</ul>
<p>So, if you're using an out-of-range version of Python (older or newer) or a 32-bit build, then you'll need to switch to a supported 64-bit Python version.</p>
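<p>A quick way to check which interpreter you are actually installing into (a small sketch; <code>struct.calcsize('P')</code> reports the pointer size of the running Python build):</p>
<pre><code>import struct
import sys

print(sys.version)                 # the exact Python version
print(struct.calcsize("P") * 8)    # 64 on a 64-bit Python build, 32 on a 32-bit build
</code></pre>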
| you are the stackoverflow assistant |
|
How to prevent tensorflow from allocating the totality of a GPU memory? | <p>You can set the fraction of GPU memory to be allocated when you construct a <a href="https://www.tensorflow.org/api_docs/python/tf/Session" rel="noreferrer"><code>tf.Session</code></a> by passing a <a href="https://github.com/tensorflow/tensorflow/blob/08ed32dbb9e8f67eec9efce3807b5bdb3933eb2f/tensorflow/core/protobuf/config.proto" rel="noreferrer"><code>tf.GPUOptions</code></a> as part of the optional <code>config</code> argument:</p>
<pre><code># Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
</code></pre>
<p>The <code>per_process_gpu_memory_fraction</code> acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.</p>
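<p>If you would rather not commit to a fixed fraction, a commonly used alternative is to let the allocator grow GPU memory usage on demand (a sketch using the same TF 1.x API):</p>
<pre><code># Start small and grow the GPU memory allocation as needed,
# instead of reserving a fixed fraction up front
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
</code></pre>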
| you are the stackoverflow assistant |
|
Disable Tensorflow debugging information | <p>You can disable all debugging logs using <code>os.environ</code> :</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
</code></pre>
<p>Tested on TF 0.12 and 1.0.</p>
<p>In detail:</p>
<pre><code>0 = all messages are logged (default behavior)
1 = INFO messages are not printed
2 = INFO and WARNING messages are not printed
3 = INFO, WARNING, and ERROR messages are not printed
</code></pre>
| you are the stackoverflow assistant |
|
Convert a tensor to numpy array in Tensorflow? | <h1><strong>TensorFlow 2.x</strong></h1>
<p><a href="https://www.tensorflow.org/guide/eager" rel="noreferrer">Eager Execution</a> is enabled by default, so just call <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/ops.py#L1042-L1067" rel="noreferrer"><strong><code>.numpy()</code></strong></a> on the Tensor object.</p>
<pre><code>import tensorflow as tf
a = tf.constant([[1, 2], [3, 4]])
b = tf.add(a, 1)
a.<b>numpy()</b>
# array([[1, 2],
# [3, 4]], dtype=int32)
b.<b>numpy()</b>
# array([[2, 3],
# [4, 5]], dtype=int32)
tf.multiply(a, b).<b>numpy()</b>
# array([[ 2, 6],
# [12, 20]], dtype=int32)
</code></pre>
<p>See <a href="https://www.tensorflow.org/tutorials/customization/basics" rel="noreferrer">NumPy Compatibility</a> for more. It is worth noting (from the docs),</p>
<blockquote>
<p>Numpy array may share a memory with the Tensor object. <strong>Any changes to one may be reflected in the other.</strong></p>
</blockquote>
<p>Bold emphasis mine. A copy may or may not be returned; this is an implementation detail based on whether the data is on the CPU or the GPU (in the latter case, a copy has to be made from GPU to host memory).</p>
<p><strong>But why am I getting the <code>AttributeError: 'Tensor' object has no attribute 'numpy'</code>?</strong><br />
A lot of folks have commented about this issue; there are a couple of possible reasons:</p>
<ul>
<li>TF 2.0 is not correctly installed (in which case, try re-installing), or</li>
<li>TF 2.0 is installed, but eager execution is disabled for some reason. In such cases, call <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/enable_eager_execution" rel="noreferrer"><code>tf.compat.v1.enable_eager_execution()</code></a> to enable it, or see below.</li>
</ul>
<hr />
<p>If Eager Execution is disabled, you can build a graph and then run it through <code>tf.compat.v1.Session</code>:</p>
<pre><code>a = tf.constant([[1, 2], [3, 4]])
b = tf.add(a, 1)
out = tf.multiply(a, b)
out.eval(session=<b>tf.compat.v1.Session()</b>)
# array([[ 2, 6],
# [12, 20]], dtype=int32)</code></pre>
<p>See also <a href="https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0" rel="noreferrer">TF 2.0 Symbols Map</a> for a mapping of the old API to the new one.</p>
| you are the stackoverflow assistant |
|
Which TensorFlow and CUDA version combinations are compatible? | <p><strong>TL;DR</strong>) See this table: <a href="https://www.tensorflow.org/install/source#gpu" rel="noreferrer">https://www.tensorflow.org/install/source#gpu</a></p>
<h2>Generally:</h2>
<p>Check the CUDA version:</p>
<pre><code>cat /usr/local/cuda/version.txt
</code></pre>
<p>and cuDNN version:</p>
<pre><code>grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn.h
</code></pre>
<p>and install a combination as given below in the images or <a href="https://www.tensorflow.org/install/source#tested_build_configurations" rel="noreferrer"><strong>here</strong></a>.</p>
<p>The following images and the link provide an overview of the officially supported/tested combinations of CUDA and TensorFlow on Linux, macOS and Windows:</p>
<h2>Minor configurations:</h2>
<p>Since the given specifications below in some cases might be too broad, here is one specific configuration that works:</p>
<ul>
<li><code>tensorflow-gpu==1.12.0</code></li>
<li><code>cuda==9.0</code></li>
<li><code>cuDNN==7.1.4</code></li>
</ul>
<p>The corresponding cudnn can be downloaded <a href="https://developer.nvidia.com/rdp/cudnn-a" rel="noreferrer">here</a>.</p>
<h2>Tested build configurations</h2>
<p>Please refer to <a href="https://www.tensorflow.org/install/source#gpu" rel="noreferrer">https://www.tensorflow.org/install/source#gpu</a> for an up-to-date compatibility chart (for official TF wheels).</p>
<p><em>(figures updated May 20, 2020)</em></p>
<h2>Linux GPU</h2>
<p><a href="https://i.sstatic.net/J3OlK.png" rel="noreferrer"><img src="https://i.sstatic.net/J3OlK.png" alt="enter image description here" /></a></p>
<h2>Linux CPU</h2>
<p><a href="https://i.sstatic.net/zfyCn.png" rel="noreferrer"><img src="https://i.sstatic.net/zfyCn.png" alt="enter image description here" /></a></p>
<h2>macOS GPU</h2>
<p><a href="https://i.sstatic.net/mWwmN.png" rel="noreferrer"><img src="https://i.sstatic.net/mWwmN.png" alt="enter image description here" /></a></p>
<h2>macOS CPU</h2>
<p><a href="https://i.sstatic.net/qEsgJ.png" rel="noreferrer"><img src="https://i.sstatic.net/qEsgJ.png" alt="enter image description here" /></a></p>
<h2>Windows GPU</h2>
<p><a href="https://i.sstatic.net/gZuJC.png" rel="noreferrer"><img src="https://i.sstatic.net/gZuJC.png" alt="enter image description here" /></a></p>
<h2>Windows CPU</h2>
<p><a href="https://i.sstatic.net/EoFVu.png" rel="noreferrer"><img src="https://i.sstatic.net/EoFVu.png" alt="enter image description here" /></a></p>
<p><strong>Updated as of Dec 5 2020:</strong> For the updated information please refer <a href="https://www.tensorflow.org/install/source#tested_build_configurations" rel="noreferrer">Link for Linux</a> and <a href="https://www.tensorflow.org/install/source_windows#tested_build_configurations" rel="noreferrer">Link for Windows</a>.</p>
| you are the stackoverflow assistant |
|
How to compile Tensorflow with SSE4.2 and AVX instructions? | <p>I just ran into this same problem. It seems that Yaroslav Bulatov's suggestion doesn't cover SSE4.2 support; adding <code>--copt=-msse4.2</code> was enough to fix that. In the end, I successfully built with
<pre><code>bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>without getting any warning or errors.</p>
<p>Probably the best choice for any system is:</p>
<pre><code>bazel build -c opt --copt=-march=native --copt=-mfpmath=both --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>(<strong>Update: <a href="https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions#comment81255824_47146118">the build scripts may be eating <code>-march=native</code></a>, possibly because it contains an <code>=</code></strong>.)</p>
<p><code>-mfpmath=both</code> only works with gcc, not clang. <code>-mfpmath=sse</code> is probably just as good, if not better, and is the default for x86-64. 32-bit builds default to <code>-mfpmath=387</code>, so changing that will help for 32-bit. (But if you want high-performance for number crunching, you should build 64-bit binaries.)</p>
<p>I'm not sure whether TensorFlow's default optimization level is <code>-O2</code> or <code>-O3</code>. <code>gcc -O3</code> enables full optimization, including auto-vectorization, but that can sometimes make code slower.</p>
<hr>
<p>What this does: <a href="https://docs.bazel.build/versions/master/bazel-user-manual.html" rel="noreferrer"><code>--copt</code> for <code>bazel build</code></a> passes an option directly to gcc for compiling C and C++ files (but not linking, so you need a different option for cross-file link-time-optimization)</p>
<p>x86-64 gcc defaults to using only SSE2 or older SIMD instructions, so you can run the binaries on <em>any</em> x86-64 system. (See <a href="https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html" rel="noreferrer">https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html</a>). That's not what you want. You want to make a binary that takes advantage of all the instructions your CPU can run, because you're only running this binary on the system where you built it.</p>
<p><code>-march=native</code> enables all the options your CPU supports, so it makes <code>-mavx512f -mavx2 -mavx -mfma -msse4.2</code> redundant. (Also, <code>-mavx2</code> already enables <code>-mavx</code> and <code>-msse4.2</code>, so Yaroslav's command should have been fine). Also if you're using a CPU that doesn't support one of these options (like FMA), using <code>-mfma</code> would make a binary that faults with illegal instructions.</p>
<p><a href="https://www.tensorflow.org/versions/r1.2/install/install_sources" rel="noreferrer">TensorFlow's <code>./configure</code> defaults to enabling <code>-march=native</code></a>, so using that should avoid needing to specify compiler options manually.</p>
<p><code>-march=native</code> enables <code>-mtune=native</code>, so <a href="https://stackoverflow.com/questions/44490331/how-does-mtune-actually-work">it optimizes for your CPU</a> for things like which sequence of AVX instructions is best for unaligned loads.</p>
<p>This all applies to gcc, clang, or ICC. (For ICC, you can use <code>-xHOST</code> instead of <code>-march=native</code>.)</p>
| you are the stackoverflow assistant |
|
What's the difference between tf.placeholder and tf.Variable? | <p>In short, you use <code>tf.Variable</code> for trainable variables such as weights (W) and biases (B) for your model.</p>
<pre><code>weights = tf.Variable(
tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))), name='weights')
biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')
</code></pre>
<p><code>tf.placeholder</code> is used to feed actual training examples.</p>
<pre><code>images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
</code></pre>
<p>This is how you feed the training examples during the training:</p>
<pre><code>for step in xrange(FLAGS.max_steps):
feed_dict = {
images_placeholder: images_feed,
labels_placeholder: labels_feed,
}
_, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
</code></pre>
<p>Your <code>tf.Variable</code>s will be trained (modified) as a result of this training.</p>
<p>See more at <a href="https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html">https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html</a>. (Examples are taken from the web page.)</p>
| you are the stackoverflow assistant |
|
How to print the value of a Tensor object in TensorFlow? | <p>The easiest<sup>[A]</sup> way to evaluate the actual value of a <code>Tensor</code> object is to pass it to the <code>Session.run()</code> method, or call <code>Tensor.eval()</code> when you have a default session (i.e. in a <code>with tf.Session():</code> block, or see below). In general<sup>[B]</sup>, you cannot print the value of a tensor without running some code in a session.</p>
<p>If you are experimenting with the programming model, and want an easy way to evaluate tensors, the <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/InteractiveSession" rel="noreferrer"><code>tf.InteractiveSession</code></a> lets you open a session at the start of your program, and then use that session for all <code>Tensor.eval()</code> (and <code>Operation.run()</code>) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a <code>Session</code> object everywhere. For example, the following works in a Jupyter notebook:</p>
<pre><code>with tf.Session() as sess: print(product.eval())
</code></pre>
<p>This might seem silly for such a small expression, but one of the key ideas in Tensorflow 1.x is <em>deferred execution</em>: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a <code>Session</code>) is able to schedule its execution more efficiently (e.g. executing independent parts in parallel and using GPUs).</p>
<hr>
<p>[A]: To print the value of a tensor without returning it to your Python program, you can use the <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/print" rel="noreferrer"><code>tf.print()</code></a> operator, as <a href="https://stackoverflow.com/a/36296783/3574081">Andrzej suggests in another answer</a>. According to the official documentation: </p>
<blockquote>
<p>To make sure the operator runs, users need to pass the produced op to <code>tf.compat.v1.Session</code>'s run method, or to use the op as a control dependency for executed ops by specifying with <code>tf.compat.v1.control_dependencies([print_op]</code>), which is printed to standard output. </p>
</blockquote>
<p>Also note that:</p>
<blockquote>
<p>In Jupyter notebooks and colabs, <code>tf.print</code> prints to the notebook cell outputs. It will not write to the notebook kernel's console logs.</p>
</blockquote>
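<p>Here is a minimal sketch of that pattern (assuming TF 1.14 or newer, where <code>tf.print</code> is available):</p>
<pre><code>t = tf.constant([1.0, 2.0, 3.0])
print_op = tf.print("t =", t)

with tf.control_dependencies([print_op]):
    doubled = t * 2   # evaluating `doubled` forces the print op to run

with tf.compat.v1.Session() as sess:
    sess.run(doubled)  # "t = [1 2 3]" is printed as a side effect
</code></pre>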
<p>[B]: You <em>might</em> be able to use the <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/get_static_value" rel="noreferrer"><code>tf.get_static_value()</code></a> function to get the constant value of the given tensor if its value is efficiently calculable.</p>
| you are the stackoverflow assistant |
|
Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: | <p>Just type the command you want execute with the user permission, if you don't want to change the permission:</p>
<pre><code>pip3 install --upgrade tensorflow-gpu --user
</code></pre>
| you are the stackoverflow assistant |
|
What's the difference of name scope and a variable scope in tensorflow? | <p>Let's begin by a short introduction to variable sharing. It is a mechanism in <code>TensorFlow</code> that allows for sharing variables accessed in different parts of the code without passing references to the variable around. </p>
<p>The method <a href="https://www.tensorflow.org/api_docs/python/tf/get_variable" rel="noreferrer"><code>tf.get_variable</code></a> can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the <a href="https://www.tensorflow.org/api_docs/python/tf/Variable" rel="noreferrer"><code>tf.Variable</code></a> constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists). </p>
<p>It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.</p>
<p>As a result, we end up having two different types of scopes:</p>
<ul>
<li><em>name scope</em>, created using <a href="https://www.tensorflow.org/api_docs/python/tf/name_scope" rel="noreferrer"><code>tf.name_scope</code></a></li>
<li><em>variable scope</em>, created using <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope" rel="noreferrer"><code>tf.variable_scope</code></a></li>
</ul>
<p>Both scopes have the same effect on all operations as well as variables created using <code>tf.Variable</code>, i.e., the scope will be added as a prefix to the operation or variable name. </p>
<p>However, name scope is ignored by <code>tf.get_variable</code>. We can see that in the following example:</p>
<pre class="lang-python prettyprint-override"><code>with tf.name_scope("my_scope"):
v1 = tf.get_variable("var1", [1], dtype=tf.float32)
v2 = tf.Variable(1, name="var2", dtype=tf.float32)
a = tf.add(v1, v2)
print(v1.name) # var1:0
print(v2.name) # my_scope/var2:0
print(a.name) # my_scope/Add:0
</code></pre>
<p>The only way to place a variable accessed using <code>tf.get_variable</code> in a scope is to use a variable scope, as in the following example:</p>
<pre class="lang-python prettyprint-override"><code>with tf.variable_scope("my_scope"):
v1 = tf.get_variable("var1", [1], dtype=tf.float32)
v2 = tf.Variable(1, name="var2", dtype=tf.float32)
a = tf.add(v1, v2)
print(v1.name) # my_scope/var1:0
print(v2.name) # my_scope/var2:0
print(a.name) # my_scope/Add:0
</code></pre>
<p>This allows us to easily share variables across different parts of the program, even within different name scopes:</p>
<pre class="lang-python prettyprint-override"><code>with tf.name_scope("foo"):
with tf.variable_scope("var_scope"):
v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
with tf.variable_scope("var_scope", reuse=True):
v1 = tf.get_variable("var", [1])
assert v1 == v
print(v.name) # var_scope/var:0
print(v1.name) # var_scope/var:0
</code></pre>
<hr>
<h2>UPDATE</h2>
<p><strong>As of version r0.11, <code>op_scope</code> and <code>variable_op_scope</code> are both <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/state_ops/sharing_variables#variable_op_scope" rel="noreferrer">deprecated</a> and replaced by <code>name_scope</code> and <code>variable_scope</code>.</strong> </p>
| you are the stackoverflow assistant |
|
Ordering of batch normalization and dropout? | <p>In <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="noreferrer">Ioffe and Szegedy (2015)</a>, the authors state that "we would like to ensure that for any parameter values, the network always produces activations with the desired distribution". So the batch normalization layer is actually inserted right after a convolutional/fully connected layer, but before feeding into the ReLU (or any other kind of) activation. See <a href="https://www.youtube.com/watch?v=jhUZ800C650&index=5&list=PLLvH2FwAQhnpj1WEB-jHmPuUeQ8mX-XXG" rel="noreferrer">this video</a> at around the 53-minute mark for more details.</p>
<p>As far as dropout goes, I believe dropout is applied after the activation layer. In figure 3b of the <a href="https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf" rel="noreferrer">dropout paper</a>, the dropout factor/probability matrix r(l) for hidden layer l is applied on y(l), where y(l) is the result after applying the activation function f.</p>
<p>So in summary, the order of using batch normalization and dropout is:</p>
<p>-> CONV/FC -> BatchNorm -> ReLU (or other activation) -> Dropout -> CONV/FC -></p>
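<p>For example, a minimal Keras sketch of one such block (the layer sizes and dropout rate are arbitrary, just to show the ordering):</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers

# One block following the order above: Conv, then BatchNorm, then ReLU, then Dropout
model = keras.Sequential([
    layers.Conv2D(32, 3, padding='same', input_shape=(28, 28, 1)),
    layers.BatchNormalization(),
    layers.Activation('relu'),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.summary()
</code></pre>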
| you are the stackoverflow assistant |
|
How to get current available GPUs in tensorflow? | <p>There is an undocumented method called <a href="https://github.com/tensorflow/tensorflow/blob/d42facc3cc9611f0c9722c81551a7404a0bd3f6b/tensorflow/python/client/device_lib.py#L27" rel="noreferrer"><code>device_lib.list_local_devices()</code></a> that enables you to list the devices available in the local process. (<strong>N.B.</strong> As an undocumented method, this is subject to backwards incompatible changes.) The function returns a list of <a href="https://github.com/tensorflow/tensorflow/blob/8a4f6abb395b3f1bca732797068021c786c1ec76/tensorflow/core/framework/device_attributes.proto" rel="noreferrer"><code>DeviceAttributes</code> protocol buffer</a> objects. You can extract a list of string device names for the GPU devices as follows:</p>
<pre><code>from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
</code></pre>
<p>Note that (at least up to TensorFlow 1.4), calling <code>device_lib.list_local_devices()</code> will run some initialization code that, by default, will allocate all of the GPU memory on all of the devices (<a href="https://github.com/tensorflow/tensorflow/issues/9374" rel="noreferrer">GitHub issue</a>). To avoid this, first create a session with an explicitly small <code>per_process_gpu_memory_fraction</code>, or <code>allow_growth=True</code>, to prevent all of the memory being allocated. See <a href="https://stackoverflow.com/q/34199233/3574081">this question</a> for more details.</p>
| you are the stackoverflow assistant |
|
Tensorflow 2.0 - AttributeError: module 'tensorflow' has no attribute 'Session' | <p>According to <code>TF 1:1 Symbols Map</code>, in TF 2.0 you should use <code>tf.compat.v1.Session()</code> instead of <code>tf.Session()</code></p>
<p><a href="https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0" rel="noreferrer">https://docs.google.com/spreadsheets/d/1FLFJLzg7WNP6JHODX5q8BDgptKafq_slHpnHVbJIteQ/edit#gid=0</a></p>
<p>To get TF 1.x like behaviour in TF 2.0 one can run </p>
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
</code></pre>
<p>but then one cannot benefit from many of the improvements made in TF 2.0. For more details, please refer to the migration guide:
<a href="https://www.tensorflow.org/guide/migrate" rel="noreferrer">https://www.tensorflow.org/guide/migrate</a></p>
| you are the stackoverflow assistant |
|
Keras, How to get the output of each layer? | <p>You can easily get the outputs of any layer by using: <code>model.layers[index].output</code></p>
<p>For all layers use this:</p>
<pre><code>from keras import backend as K
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs] # evaluation functions
# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test, 1.]) for func in functors]
print(layer_outs)
</code></pre>
<p>Note: To simulate the training-time behavior of layers like Dropout, pass <code>1.</code> as the <code>learning_phase</code> value when computing <code>layer_outs</code>; otherwise use <code>0.</code></p>
<p><strong>Edit:</strong> (based on comments)</p>
<p><code>K.function</code> creates theano/tensorflow tensor functions which are later used to get the output from the symbolic graph given the input.</p>
<p>Now <code>K.learning_phase()</code> is required as an input because many Keras layers like Dropout/BatchNormalization depend on it to change behavior between training and test time.</p>
<p>So if you remove the dropout layer in your code you can simply use:</p>
<pre><code>from keras import backend as K
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functors = [K.function([inp], [out]) for out in outputs] # evaluation functions
# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = [func([test]) for func in functors]
print(layer_outs)
</code></pre>
<p><strong>Edit 2: More optimized</strong></p>
<p>I just realized that the previous approach is not optimal, since for each function evaluation the data is transferred from CPU to GPU memory, and the tensor calculations for the lower layers are repeated over and over.</p>
<p>Instead, it is much better to use a single function that gives you the list of all outputs, rather than multiple functions:</p>
<pre><code>from keras import backend as K
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs ) # evaluation function
# Testing
test = np.random.random(input_shape)[np.newaxis,...]
layer_outs = functor([test, 1.])
print(layer_outs)
</code></pre>
| you are the stackoverflow assistant |
|
In TensorFlow, what is the difference between Session.run() and Tensor.eval()? | <p>If you have a <code>Tensor</code> t, calling <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor#eval" rel="noreferrer"><code>t.eval()</code></a> is equivalent to calling <code>tf.get_default_session().run(t)</code>.</p>
<p>You can make a session the default as follows:</p>
<pre><code>t = tf.constant(42.0)
sess = tf.Session()
with sess.as_default(): # or `with sess:` to close on exit
assert sess is tf.get_default_session()
assert t.eval() == sess.run(t)
</code></pre>
<p>The most important difference is that you can use <code>sess.run()</code> to fetch the values of many tensors in the same step:</p>
<pre><code>t = tf.constant(42.0)
u = tf.constant(37.0)
tu = tf.multiply(t, u)
ut = tf.multiply(u, t)
with sess.as_default():
tu.eval() # runs one step
ut.eval() # runs one step
sess.run([tu, ut]) # evaluates both tensors in a single step
</code></pre>
<p>Note that each call to <code>eval</code> and <code>run</code> will execute the whole graph from scratch. To cache the result of a computation, assign it to a <a href="https://www.tensorflow.org/how_tos/variables/" rel="noreferrer"><code>tf.Variable</code></a>.</p>
| you are the stackoverflow assistant |
|
How can I run Tensorboard on a remote server? | <p>Here is what I do to avoid the issues of making the remote server accept your local external IP:</p>
<ul>
<li>when I SSH into the machine, I use the <code>-L</code> option to forward port <code>6006</code> of the remote server to port <code>16006</code> on my machine (for instance):
<code>
ssh -L 16006:127.0.0.1:6006 olivier@my_server_ip
</code></li>
</ul>
<p>This forwards everything on port <code>6006</code> of the server (at <code>127.0.0.1:6006</code>) to port <code>16006</code> on my machine.</p>
<hr>
<ul>
<li>You can then launch TensorBoard on the remote machine using a standard <code>tensorboard --logdir log</code> with the default <code>6006</code> port</li>
<li>On your local machine, go to <a href="http://127.0.0.1:16006">http://127.0.0.1:16006</a> and enjoy your remote TensorBoard.</li>
</ul>
| you are the stackoverflow assistant |
|
How to build and use Google TensorFlow C++ api | <p>To get started, you should download the source code from Github, by <a href="http://tensorflow.org/get_started/os_setup.md#installing_from_sources">following the instructions here</a> (you'll need <a href="http://bazel.io">Bazel</a> and a recent version of GCC).</p>
<p>The C++ API (and the backend of the system) is in <code>tensorflow/core</code>. Right now, only the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/public/session.h">C++ Session interface</a>, and the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/public/tensor_c_api.h">C API</a> are being supported. You can use either of these to execute TensorFlow graphs that have been built using the Python API and serialized to a <code>GraphDef</code> protocol buffer. There is also an experimental feature for building graphs in C++, but this is currently not quite as full-featured as the Python API (e.g. no support for auto-differentiation at present). You can see an example program that <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/tutorials/example_trainer.cc">builds a small graph in C++ here</a>.</p>
<p>The second part of the C++ API is the API for adding a new <code>OpKernel</code>, which is the class containing implementations of numerical kernels for CPU and GPU. There are numerous examples of how to build these in <code>tensorflow/core/kernels</code>, as well as a <a href="http://tensorflow.org/how_tos/adding_an_op/index.md">tutorial for adding a new op in C++</a>.</p>
| you are the stackoverflow assistant |
|
TypeError: Descriptors cannot not be created directly | <p>Sometimes the <a href="https://pypi.org/project/protobuf/" rel="noreferrer">protobuf</a> package gets installed or upgraded without your involvement, as a dependency of another package. There are two solutions you can apply; try one of them and it should work.</p>
<h2>Solution 1:</h2>
<p>You can downgrade the <a href="https://pypi.org/project/protobuf/" rel="noreferrer">protobuf</a> plugin,</p>
<pre class="lang-none prettyprint-override"><code>pip install protobuf==3.20.*
</code></pre>
<p>Or you can add it to the <code>requirements.txt</code> file as the last package, because this will override the previously installed <a href="https://pypi.org/project/protobuf/" rel="noreferrer">protobuf</a> package.</p>
<pre class="lang-none prettyprint-override"><code>...
protobuf==3.20.*
</code></pre>
<h2>Solution 2:</h2>
<p>You can set the following environment variable.</p>
<pre class="lang-none prettyprint-override"><code>export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
</code></pre>
<p>According to the error description, though, this might impact your program's performance:</p>
<blockquote>
<p>but this will use pure-Python parsing and will be much slower</p>
</blockquote>
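<p>If you prefer setting the variable from inside the script rather than the shell, the same effect can be achieved with <code>os.environ</code>, as long as it runs before the affected libraries are imported (a sketch):</p>
<pre><code>import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# Import protobuf-based packages only after the variable is set
import tensorflow as tf
</code></pre>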
<hr />
<p>References:</p>
<ul>
<li><a href="https://discuss.streamlit.io/t/typeerror-descriptors-cannot-not-be-created-directly/25639" rel="noreferrer">TypeError: Descriptors cannot not be created directly</a></li>
<li><a href="https://github.com/protocolbuffers/protobuf/issues/10051" rel="noreferrer">[Python] Release 4.21.0 broke multiple Google Cloud client libraries ("TypeError: Descriptors cannot not be created directly.")</a></li>
</ul>
| you are the stackoverflow assistant |
|
What is the difference between steps and epochs in TensorFlow? | <p>A training step is one gradient update. In one step <code>batch_size</code> examples are processed.</p>
<p>An epoch consists of one full cycle through the training data. This is usually many steps. As an example, if you have 2,000 images and use a batch size of 10 an epoch consists of:</p>
<pre><code>2,000 images / (10 images / step) = 200 steps.
</code></pre>
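<p>The same bookkeeping in code (a trivial sketch; the numbers are taken from the example above):</p>
<pre><code>import math

num_examples = 2000
batch_size = 10

steps_per_epoch = math.ceil(num_examples / batch_size)
print(steps_per_epoch)   # 200 gradient updates make up one epoch
</code></pre>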
<p>If you choose your training images randomly (and independently) in each step, you normally do not call it an epoch. [This is where my answer differs from the previous one. Also see my comment.]</p>
| you are the stackoverflow assistant |
|
How to run Tensorflow on CPU | <p>You can also set the environment variable to </p>
<pre><code>CUDA_VISIBLE_DEVICES=""
</code></pre>
<p>without having to modify the source code.</p>
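<p>If you prefer doing this from inside the script, a small sketch (the variable must be set before TensorFlow is imported):</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs from TensorFlow

import tensorflow as tf                   # imported only after the variable is set
</code></pre>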
| you are the stackoverflow assistant |
|
Why is TensorFlow 2 much slower than TensorFlow 1? | <p><strong>UPDATE 8/<s>17</s>30/2020</strong>: TF 2.3 has finally done it: all cases run as fast, or notably faster, than any previous version.</p>
<p>Further, my previous update was unfair to TF; my GPU was to blame, as it has been overheating lately. If you see a rising stem plot of iteration times, it's a reliable symptom. Lastly, see a dev's note on <a href="https://github.com/tensorflow/tensorflow/issues/42429#issuecomment-676539007" rel="noreferrer">Eager vs Graph</a>.</p>
<p>This might be my last update on this answer. The true stats on <em>your</em> model's speed can only be found by you, on your device.</p>
<hr>
<p><strong>UPDATE 5/19/2020</strong>: TF 2.2, using same tests: only a minor improvement in Eager speed. Plots for Large-Large Numpy <code>train_on_batch</code> case below, x-axis is successive fit iterations; my GPU isn't near its full capacity, so doubt it's throttling, but iterations do get slower over time.</p>
<p><a href="https://i.sstatic.net/u74bU.png" rel="noreferrer"><img src="https://i.sstatic.net/u74bU.png" alt="enter image description here" /></a></p>
<p>Per above, Graph and Eager are <strong>1.56x</strong> and <strong>1.97x</strong> slower than their TF1 counterparts, respectively. Unsure I'll debug this further, as I'm considering switching to Pytorch per TensorFlow's poor support for custom / low-level functionality. I did, however, open an <a href="https://github.com/tensorflow/tensorflow/issues/39665" rel="noreferrer">Issue</a> to get devs' feedback.</p>
<hr>
<p><strong>UPDATE 2/18/2020</strong>: I've benched 2.1 and 2.1-nightly; the results are mixed. All but one configs (model & data size) are as fast as or much faster than the best of TF2 & TF1. The one that's slower, and slower dramatically, is Large-Large - esp. in Graph execution (<strong>1.6x to 2.5x slower</strong>).</p>
<p>Furthermore, there are <em>extreme</em> reproducibility differences between Graph and Eager for a large model I tested - one not explainable via randomness/compute-parallelism. I can't currently present reproducible code for these claims per time constraints, so instead I strongly recommend testing this for your own models.</p>
<p>Haven't opened a Git issue on these yet, but I did comment on the <a href="https://github.com/tensorflow/tensorflow/issues/33487" rel="noreferrer">original</a> - no response yet. I'll update the answer(s) once progress is made.</p>
<hr>
<p><strong>VERDICT</strong>: it <em>isn't</em>, IF you know what you're doing. But if you <em>don't</em>, it could cost you, lots - by a few GPU upgrades on average, and by multiple GPUs worst-case.</p>
<hr>
<p><strong>THIS ANSWER</strong>: aims to provide a high-level description of the issue, as well as guidelines for how to decide on the training configuration specific to your needs. For a detailed, low-level description, which includes all benchmarking results + code used, see my other answer.</p>
<p>I'll be updating my answer(s) w/ more info if I learn any - can bookmark / "star" this question for reference.</p>
<hr>
<p><strong>ISSUE SUMMARY</strong>: as <a href="https://github.com/tensorflow/tensorflow/issues/33487#issuecomment-548071133" rel="noreferrer">confirmed</a> by a TensorFlow developer, Q. Scott Zhu, TF2 focused development on Eager execution & tight integration w/ Keras, which involved sweeping changes in TF source - including at graph-level. Benefits: greatly expanded processing, distribution, debug, and deployment capabilities. The cost of some of these, however, is speed.</p>
<p>The matter, however, is fairly more complex. It isn't just TF1 vs. TF2 - factors yielding significant differences in train speed include:</p>
<ol>
<li>TF2 vs. TF1</li>
<li>Eager vs. Graph mode</li>
<li><code>keras</code> vs. <code>tf.keras</code></li>
<li><code>numpy</code> vs. <code>tf.data.Dataset</code> vs. ...</li>
<li><code>train_on_batch()</code> vs. <code>fit()</code></li>
<li>GPU vs. CPU</li>
<li><code>model(x)</code> vs. <code>model.predict(x)</code> vs. ...</li>
</ol>
<p>Unfortunately, almost none of the above are independent of the other, and each can at least double execution time relative to another. Fortunately, you can determine what'll work best systematically, and with a few shortcuts - as I'll be showing.</p>
<hr>
<p><strong>WHAT SHOULD I DO?</strong> Currently, the only way is - experiment for your specific model, data, and hardware. No single configuration will always work best - but there <em>are</em> do's and don't's to simplify your search:</p>
<p><strong>>> DO:</strong></p>
<ul>
<li><code>train_on_batch()</code> + <code>numpy</code> + <code>tf.keras</code> + TF1 + Eager/Graph</li>
<li><code>train_on_batch()</code> + <code>numpy</code> + <code>tf.keras</code> + TF2 + Graph</li>
<li><code>fit()</code> + <code>numpy</code> + <code>tf.keras</code> + TF1/TF2 + Graph + large model & data</li>
</ul>
<p><strong>>> DON'T:</strong></p>
<ul>
<li><p><code>fit()</code> + <code>numpy</code> + <code>keras</code> for small & medium models and data</p>
</li>
<li><p><code>fit()</code> + <code>numpy</code> + <code>tf.keras</code> + TF1/TF2 + Eager</p>
</li>
<li><p><code>train_on_batch()</code> + <code>numpy</code> + <code>keras</code> + TF1 + Eager</p>
</li>
<li><p><strong>[Major]</strong> <code>tf.python.keras</code>; it can run 10-100x slower, and w/ plenty of bugs; <a href="https://stackoverflow.com/questions/58279628/what-is-the-difference-between-tf-keras-and-tf-python-keras/58279629#58279629">more info</a></p>
<ul>
<li>This includes <code>layers</code>, <code>models</code>, <code>optimizers</code>, & related "out-of-box" usage imports; ops, utils, & related 'private' imports are fine - but to be sure, check for alts, & whether they're used in <code>tf.keras</code></li>
</ul>
</li>
</ul>
<p>Refer to code at bottom of my other answer for an example benchmarking setup. The list above is based mainly on the "BENCHMARKS" tables in the other answer.</p>
<hr>
<p><strong>LIMITATIONS</strong> of the above DO's & DON'T's:</p>
<ul>
<li>This question's titled "Why is TF2 much slower than TF1?", and while its body concerns training explicitly, the matter isn't limited to it; <em>inference</em>, too, is subject to major speed differences, <em>even</em> within the same TF version, import, data format, etc. - see <a href="https://stackoverflow.com/questions/58378374/why-does-keras-model-predict-slower-after-compile/58385156#58385156">this answer</a>.</li>
<li>RNNs are likely to notably change the data grid in the other answer, as they've been improved in TF2</li>
<li>Models primarily used <code>Conv1D</code> and <code>Dense</code> - no RNNs, sparse data/targets, 4/5D inputs, & other configs</li>
<li>Input data limited to <code>numpy</code> and <code>tf.data.Dataset</code>, while many other formats exist; see other answer</li>
<li>GPU was used; results <em>will</em> differ on a CPU. In fact, when I asked the question, my CUDA wasn't properly configured, and some of the results were CPU-based.</li>
</ul>
<hr>
<p><strong>Why did TF2 sacrifice the most practical quality, speed, for eager execution?</strong> It hasn't, clearly - graph is still available. But if the question is "why eager at all":</p>
<ul>
<li><strong>Superior debugging</strong>: you've likely come across multitudes of questions asking "how do I get intermediate layer outputs" or "how do I inspect weights"; with eager, it's (almost) as simple as <code>.__dict__</code>. Graph, in contrast, requires familiarity with special backend functions - greatly complicating the entire process of debugging & introspection.</li>
<li><strong>Faster prototyping</strong>: per ideas similar to above; faster understanding = more time left for actual DL.</li>
</ul>
<hr>
<p><strong>HOW TO ENABLE/DISABLE EAGER?</strong></p>
<pre class="lang-py prettyprint-override"><code>tf.enable_eager_execution() # TF1; must be done before any model/tensor creation
tf.compat.v1.disable_eager_execution() # TF2; above holds
</code></pre>
<p><em>Misleading</em> in TF2; see <a href="https://github.com/tensorflow/tensorflow/issues/42429#issuecomment-676539007" rel="noreferrer">here</a>.</p>
<hr>
<p><strong>ADDITIONAL INFO</strong>:</p>
<ul>
<li>Careful with <code>_on_batch()</code> methods in TF2; according to the TF dev, they still use a slower implementation, but <em>not intentionally</em> - i.e. it's to be fixed. See other answer for details.</li>
</ul>
<hr>
<p><strong>REQUESTS TO TENSORFLOW DEVS</strong>:</p>
<ol>
<li><s> Please fix <code>train_on_batch()</code>, and the performance aspect of calling <code>fit()</code> iteratively; custom train loops are important to many, especially to me. </s></li>
<li><s> Add documentation / docstring mention of these performance differences for users' knowledge.</s></li>
<li><s> Improve general execution speed to keep peeps from hopping to Pytorch.</s></li>
</ol>
<hr>
<p><strong>ACKNOWLEDGEMENTS</strong>: Thanks to</p>
<ul>
<li>Q. Scott Zhu, TensorFlow developer, for his <a href="https://github.com/tensorflow/tensorflow/issues/33487#issuecomment-548071133" rel="noreferrer">detailed clarification</a> on the matter.</li>
<li>P. Andrey for sharing <a href="https://github.com/tensorflow/tensorflow/issues/33487#issuecomment-543762187" rel="noreferrer">useful testing</a>, and discussion.</li>
</ul>
<hr>
<p><strong>UPDATES</strong>:</p>
<ul>
<li><p><em>11/14/19</em> - found a model (in my real application) that runs slower on TF2 <em>for all* configurations</em> w/ Numpy input data. Differences ranged 13-19%, averaging 17%. Differences between <code>keras</code> and <code>tf.keras</code>, however, were more dramatic: <strong>18-40%</strong>, avg. 32% (both TF1 & 2). (* - except Eager, for which TF2 OOM'd)</p>
</li>
<li><p><em>11/17/19</em> - devs updated <code>on_batch()</code> methods in a <a href="https://github.com/tensorflow/tensorflow/issues/33487#issuecomment-554123856" rel="noreferrer">recent commit</a>, stating to have improved speed - to be released in TF 2.1, or available now as <code>tf-nightly</code>. As I'm unable to get latter running, will delay benching until 2.1.</p>
</li>
<li><p><em>2/20/20</em> - prediction performance is also worth benching; in TF2, for example, CPU prediction times can involve <a href="https://stackoverflow.com/questions/60267911/keras-inconsistent-prediction-time">periodic spikes</a></p>
</li>
</ul>
| you are the stackoverflow assistant |
|
What is the role of "Flatten" in Keras? | <p>If you read the Keras documentation entry for <a href="https://keras.io/layers/core/#dense" rel="noreferrer"><code>Dense</code></a>, you will see that this call:</p>
<pre><code>Dense(16, input_shape=(5,3))
</code></pre>
<p>would result in a <code>Dense</code> network with 3 inputs and 16 outputs which would be applied independently for each of 5 steps. So, if <code>D(x)</code> transforms 3 dimensional vector to 16-d vector, what you'll get as output from your layer would be a sequence of vectors: <code>[D(x[0,:]), D(x[1,:]),..., D(x[4,:])]</code> with shape <code>(5, 16)</code>. In order to have the behavior you specify you may first <code>Flatten</code> your input to a 15-d vector and then apply <code>Dense</code>:</p>
<pre><code>model = Sequential()
model.add(Flatten(input_shape=(3, 2)))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(4))
model.compile(loss='mean_squared_error', optimizer='SGD')
</code></pre>
<p><strong>EDIT:</strong>
As some people struggled to understand - here you have an explaining image:</p>
<p><a href="https://i.sstatic.net/Wk8eV.png" rel="noreferrer"><img src="https://i.sstatic.net/Wk8eV.png" alt="enter image description here" /></a></p>
| you are the stackoverflow assistant |
|
TensorFlow, why was python the chosen language? | <p>The most important thing to realize about TensorFlow is that, for the most part, <em>the core is not written in Python</em>: It's written in a combination of highly-optimized C++ and CUDA (Nvidia's language for programming GPUs). Much of that happens, in turn, by using <a href="http://eigen.tuxfamily.org/index.php?title=Main_Page" rel="noreferrer">Eigen</a> (a high-performance C++ and CUDA numerical library) and <a href="https://developer.nvidia.com/cudnn" rel="noreferrer">NVidia's cuDNN</a> (a very optimized DNN library for <a href="https://developer.nvidia.com/cuda-gpus" rel="noreferrer">NVidia GPUs</a>, for functions such as <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="noreferrer">convolutions</a>).</p>
<p>The model for TensorFlow is that the programmer uses "some language" (most likely Python!) to express the model. This model, written in the TensorFlow constructs such as:</p>
<pre><code>h1 = tf.nn.relu(tf.matmul(l1, W1) + b1)
h2 = ...
</code></pre>
<p>is not actually executed when the Python is run. Instead, what's actually created is a <a href="https://www.tensorflow.org/get_started/graph_viz" rel="noreferrer">dataflow graph</a> that says to take particular inputs, apply particular operations, supply the results as the inputs to other operations, and so on. <em>This model is executed by fast C++ code, and for the most part, the data going between operations is never copied back to the Python code</em>.</p>
<p>Then the programmer "drives" the execution of this model by pulling on nodes -- for training, usually in Python, and for serving, sometimes in Python and sometimes in raw C++:</p>
<pre><code>sess.run(eval_results)
</code></pre>
<p>This one Python (or C++ function call) uses either an in-process call to C++ or an <a href="https://en.wikipedia.org/wiki/Remote_procedure_call" rel="noreferrer">RPC</a> for the distributed version to call into the C++ TensorFlow server to tell it to execute, and then copies back the results.</p>
<p><strong>So, with that said, let's re-phrase the question: Why did TensorFlow choose Python as the first well-supported language for expressing and controlling the training of models?</strong></p>
<p>The answer to that is simple: Python is probably <em>the</em> most comfortable language for a large range of data scientists and machine learning experts, and one that is also easy to integrate with and use to control a C++ backend, while also being general, widely-used both inside and outside of Google, and open source. Given that with the basic model of TensorFlow, the performance of Python isn't that important, it was a natural fit. It's also a huge plus that <a href="http://www.numpy.org/" rel="noreferrer">NumPy</a> makes it easy to do pre-processing in Python -- also with high performance -- before feeding it in to TensorFlow for the truly CPU-heavy things.</p>
<p>There's also a bunch of complexity in expressing the model that isn't used when executing it -- shape inference (e.g., if you do matmul(A, B), what is the shape of the resulting data?) and automatic <a href="https://en.wikipedia.org/wiki/Gradient" rel="noreferrer">gradient</a> computation. It turns out to have been nice to be able to express those in Python, though I think in the long term they'll probably move to the C++ backend to make adding other languages easier.</p>
<p>(The hope, of course, is to support other languages in the future for creating and expressing models. It's already quite straightforward to run inference using several other languages -- C++ works now, someone from Facebook contributed <a href="https://golang.org/" rel="noreferrer">Go</a> bindings that we're reviewing now, etc.)</p>
| you are the stackoverflow assistant |
|
What does tf.nn.embedding_lookup function do? | <p>Yes, this function is hard to understand, until you get the point.</p>
<p>In its simplest form, it is similar to <code>tf.gather</code>. It returns the elements of <code>params</code> according to the indexes specified by <code>ids</code>.</p>
<p>For example (assuming you are inside <code>tf.InteractiveSession()</code>)</p>
<pre><code>params = tf.constant([10,20,30,40])
ids = tf.constant([0,1,2,3])
print tf.nn.embedding_lookup(params,ids).eval()
</code></pre>
<p>would return <code>[10 20 30 40]</code>, because the first element (index 0) of params is <code>10</code>, the second element of params (index 1) is <code>20</code>, etc.</p>
<p>Similarly, </p>
<pre><code>params = tf.constant([10,20,30,40])
ids = tf.constant([1,1,3])
print tf.nn.embedding_lookup(params,ids).eval()
</code></pre>
<p>would return <code>[20 20 40]</code>.</p>
<p>But <code>embedding_lookup</code> is more than that. The <code>params</code> argument can be a <strong>list</strong> of tensors, rather than a single tensor.</p>
<pre><code>params1 = tf.constant([1,2])
params2 = tf.constant([10,20])
ids = tf.constant([2,0,2,1,2,3])
result = tf.nn.embedding_lookup([params1, params2], ids)
</code></pre>
<p>In such a case, the indexes, specified in <code>ids</code>, correspond to elements of tensors according to a <strong>partition strategy</strong>, where the default partition strategy is 'mod'.</p>
<p>In the 'mod' strategy, index 0 corresponds to the first element of the first tensor in the list. Index 1 corresponds to the <strong>first</strong> element of the <strong>second</strong> tensor. Index 2 corresponds to the <strong>first</strong> element of the <strong>third</strong> tensor, and so on. Simply put, index <code>i</code> corresponds to the first element of the (i+1)th tensor, for all the indexes <code>0..(n-1)</code>, assuming <code>params</code> is a list of <code>n</code> tensors.</p>
<p>Now, index <code>n</code> cannot correspond to tensor n+1, because the list <code>params</code> contains only <code>n</code> tensors. So index <code>n</code> corresponds to the <strong>second</strong> element of the first tensor. Similarly, index <code>n+1</code> corresponds to the second element of the second tensor, etc.</p>
<p>So, in the code</p>
<pre><code>params1 = tf.constant([1,2])
params2 = tf.constant([10,20])
ids = tf.constant([2,0,2,1,2,3])
result = tf.nn.embedding_lookup([params1, params2], ids)
</code></pre>
<p>index 0 corresponds to the first element of the first tensor: 1</p>
<p>index 1 corresponds to the first element of the second tensor: 10</p>
<p>index 2 corresponds to the second element of the first tensor: 2</p>
<p>index 3 corresponds to the second element of the second tensor: 20</p>
<p>Thus, the result would be:</p>
<pre><code>[ 2 1 2 10 2 20]
</code></pre>
| you are the stackoverflow assistant |
|
Understanding TensorBoard (weight) histograms | <p>It appears that the network hasn't learned anything in layers one to three. The last layer does change, which means that either something may be wrong with the gradients (if you're tampering with them manually), you're constraining learning to the last layer by optimizing only its weights, or the last layer really 'eats up' all the error. It could also be that only biases are learned. The network appears to learn something though, but it might not be using its full potential. More context would be needed here, but playing around with the learning rate (e.g. using a smaller one) might be worth a shot.</p>
<p>In general, histograms display the number of occurrences of a value relative to each other values. Simply speaking, if the possible values are in a range of <code>0..9</code> and you see a spike of amount <code>10</code> on the value <code>0</code>, this means that 10 inputs assume the value <code>0</code>; in contrast, if the histogram shows a plateau of <code>1</code> for all values of <code>0..9</code>, it means that for 10 inputs, each possible value <code>0..9</code> occurs <em>exactly</em> once.
You can also use histograms to visualize probability distributions when you normalize all histogram values by their total sum; if you do that, you'll intuitively obtain the likelihood with which a certain value (on the x axis) will appear (compared to other inputs).</p>
<p>Now for <code>layer1/weights</code>, the plateau means that:</p>
<ul>
<li>most of the weights are in the range of -0.15 to 0.15</li>
<li>it is (mostly) equally likely for a weight to have any of these values, i.e. they are (almost) uniformly distributed</li>
</ul>
<p>Said differently, almost the same number of weights have the values <code>-0.15</code>, <code>0.0</code>, <code>0.15</code> and everything in between. There are some weights having slightly smaller or higher values.
So in short, this simply looks like the weights have been initialized using a uniform distribution with zero mean and value range <code>-0.15..0.15</code> ... give or take. If you do indeed use uniform initialization, then this is typical when the network has not been trained yet.</p>
<p>In comparison, <code>layer1/activations</code> forms a bell curve (gaussian)-like shape: The values are centered around a specific value, in this case <code>0</code>, but they may also be greater or smaller than that (equally likely so, since it's symmetric). Most values appear close around the mean of <code>0</code>, but values do range from <code>-0.8</code> to <code>0.8</code>.
I assume that the <code>layer1/activations</code> is taken as the distribution over all layer outputs in a batch. You can see that the values do change over time.</p>
<p>The layer 4 histogram doesn't tell me anything specific. From the shape, it's just showing that some weight values around <code>-0.1</code>, <code>0.05</code> and <code>0.25</code> tend to occur with a higher probability; a reason <em>could</em> be that different parts of each neuron there actually pick up the same information and are basically redundant. This can mean that you could actually use a smaller network or that your network has the potential to learn more distinguishing features in order to prevent overfitting. These are just assumptions though.</p>
<p>Also, as already stated in the comments below, do add bias units. By leaving them out, you are forcefully constraining your network to a possibly invalid solution.</p>
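<p>For reference, a minimal TF 1.x sketch of how such histograms are typically recorded (the variable and tag names here are placeholders, not taken from your code):</p>
<pre><code>w1 = tf.Variable(tf.random_uniform([784, 50], -0.15, 0.15), name='w1')
tf.summary.histogram('layer1/weights', w1)

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('/tmp/logs', tf.get_default_graph())
# inside the training loop:
#   summ = sess.run(merged, feed_dict=...)
#   writer.add_summary(summ, global_step=step)
</code></pre>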
| you are the stackoverflow assistant |
|
Can I run Keras model on gpu? | <p>Yes you can run keras models on GPU. Few things you will have to check first.</p>
<ol>
<li>your system has a GPU (NVIDIA; AMD doesn't work yet)</li>
<li>You have installed the GPU version of tensorflow</li>
<li>You have installed CUDA <a href="https://www.tensorflow.org/install/install_linux" rel="noreferrer">installation instructions</a></li>
<li>Verify that tensorflow is running with GPU <a href="https://stackoverflow.com/questions/38009682/how-to-tell-if-tensorflow-is-using-gpu-acceleration-from-inside-python-shell">check if GPU is working</a></li>
</ol>
<p><code>sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))</code></p>
<p>for TF > v2.0</p>
<p><code>sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))</code></p>
<p>(Thanks @nbro and @Ferro for pointing this out in the comments)</p>
<p>OR</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code></pre>
<p>output will be something like this:</p>
<pre><code>[
name: "/cpu:0"device_type: "CPU",
name: "/gpu:0"device_type: "GPU"
]
</code></pre>
<p>Once all this is done your model will run on GPU:</p>
<p>To check if keras (>=2.1.1) is using GPU:</p>
<pre><code>from keras import backend as K
K.tensorflow_backend._get_available_gpus()
</code></pre>
<p>All the best.</p>
| you are the stackoverflow assistant |
|
How does tf.app.run() work? | <pre><code>if __name__ == "__main__":
</code></pre>
<p>means the current file is being run directly (e.g. from a shell) instead of being imported as a module.</p>
<pre><code>tf.app.run()
</code></pre>
<p>As you can see through the file <code>app.py</code></p>
<pre><code>def run(main=None, argv=None):
"""Runs the program with an optional 'main' function and 'argv' list."""
f = flags.FLAGS
# Extract the args from the optional `argv` list.
args = argv[1:] if argv else None
# Parse the known flags from that list, or from the command
# line otherwise.
# pylint: disable=protected-access
flags_passthrough = f._parse_flags(args=args)
# pylint: enable=protected-access
main = main or sys.modules['__main__'].main
# Call the main function, passing through any arguments
# to the final program.
sys.exit(main(sys.argv[:1] + flags_passthrough))
</code></pre>
<p>Let's break line by line:</p>
<pre><code>flags_passthrough = f._parse_flags(args=args)
</code></pre>
<p>This ensures that the arguments you pass on the command line are valid, e.g.
<code>python my_model.py --data_dir='...' --max_iteration=10000</code>. Actually, this feature is implemented based on the standard Python <code>argparse</code> module.</p>
<pre><code>main = main or sys.modules['__main__'].main
</code></pre>
<p>The first <code>main</code> on the right side of <code>=</code> is the first argument of the current function <code>run(main=None, argv=None)</code>,
while <code>sys.modules['__main__']</code> refers to the currently running file (e.g. <code>my_model.py</code>).</p>
<p>So there are two cases:</p>
<ol>
<li><p>You don't have a <code>main</code> function in <code>my_model.py</code>. Then you have to
call <code>tf.app.run(my_main_running_function)</code>.</p></li>
<li><p>You have a <code>main</code> function in <code>my_model.py</code>. (This is mostly the case.)</p></li>
</ol>
<p>Last line:</p>
<pre><code>sys.exit(main(sys.argv[:1] + flags_passthrough))
</code></pre>
<p>ensures your <code>main(argv)</code> or <code>my_main_running_function(argv)</code> function is called with parsed arguments properly.</p>
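<p>Putting it together, a minimal sketch of the usual pattern (the flag names here are just examples):</p>
<pre><code>import tensorflow as tf

tf.app.flags.DEFINE_string('data_dir', '/tmp/data', 'Where the data lives')
tf.app.flags.DEFINE_integer('max_iteration', 10000, 'Number of training steps')
FLAGS = tf.app.flags.FLAGS

def main(argv):
    # argv holds anything that was not parsed as a flag
    print(FLAGS.data_dir, FLAGS.max_iteration)

if __name__ == '__main__':
    tf.app.run()   # parses the flags, then calls main()
</code></pre>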
| you are the stackoverflow assistant |
|
Tensorflow - ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float) | <p><strong>TL;DR</strong> Several possible errors, most fixed with <code>x = np.asarray(x).astype('float32')</code>.</p>
<p>Other causes may be faulty data preprocessing; ensure everything is <em>properly formatted</em> (categoricals, NaNs, strings, etc.). The snippet below shows what the model expects:</p>
<pre class="lang-py prettyprint-override"><code>[print(i.shape, i.dtype) for i in model.inputs]
[print(o.shape, o.dtype) for o in model.outputs]
[print(l.name, l.input_shape, l.dtype) for l in model.layers]
</code></pre>
<hr>
<p>The problem's rooted in using <em>lists</em> as inputs, as opposed to Numpy arrays; Keras/TF doesn't support the former. A simple conversion is: <code>x_array = np.asarray(x_list)</code>.</p>
<p>The next step's to ensure data is fed in the expected format; for LSTM, that'd be a 3D tensor with dimensions <code>(batch_size, timesteps, features)</code> - or equivalently, <code>(num_samples, timesteps, channels)</code>. Lastly, as a debug pro-tip, <em>print ALL the shapes</em> for your data. Code accomplishing all of the above, below:</p>
<pre class="lang-py prettyprint-override"><code>Sequences = np.asarray(Sequences)
Targets = np.asarray(Targets)
show_shapes()
Sequences = np.expand_dims(Sequences, -1)
Targets = np.expand_dims(Targets, -1)
show_shapes()
</code></pre>
<pre class="lang-py prettyprint-override"><code># OUTPUTS
Expected: (num_samples, timesteps, channels)
Sequences: (200, 1000)
Targets: (200,)
Expected: (num_samples, timesteps, channels)
Sequences: (200, 1000, 1)
Targets: (200, 1)
</code></pre>
<hr>
<p>As a bonus tip, I notice you're running via <code>main()</code>, so your IDE probably lacks a Jupyter-like cell-based execution; I strongly recommend the <a href="https://www.spyder-ide.org/" rel="noreferrer">Spyder IDE</a>. It's as simple as adding <code># In[]</code>, and pressing <code>Ctrl + Enter</code> below:</p>
<img src="https://i.sstatic.net/AltOS.png" height="69">
<hr>
<p><strong>Function used</strong>:</p>
<pre class="lang-py prettyprint-override"><code>def show_shapes(): # can make yours to take inputs; this'll use local variable values
print("Expected: (num_samples, timesteps, channels)")
print("Sequences: {}".format(Sequences.shape))
print("Targets: {}".format(Targets.shape))
</code></pre>
| you are the stackoverflow assistant |
|
Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation | <h1>Tensorflow 2.1+</h1>
<h2>What's going on?</h2>
<p>With the <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.1.0" rel="noreferrer">new Tensorflow 2.1 release</a>, the default <code>tensorflow</code> pip package contains both CPU and GPU versions of TF. In previous TF versions, not finding the CUDA libraries would emit an error and raise an exception, while now the library dynamically searches for the correct CUDA version and, if it doesn't find it, emits the warning (The <strong>W</strong> in the beginning stands for warnings, errors have an <strong>E</strong> (or <strong>F</strong> for fatal errors) and falls back to CPU-only mode. In fact, this is also written in the log as an info message right after the warning (do note that if you have a higher minimum log level that the default, you might not see info messages). The full log is (emphasis mine):</p>
<blockquote>
<p>2020-01-20 12:27:44.554767: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found</p>
<p>2020-01-20 12:27:44.554964: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] <strong>Ignore above cudart dlerror if you do not have a GPU set up on your machine.</strong></p>
</blockquote>
<h2>Should I worry? How do I fix it?</h2>
<p>If you don't have a CUDA-enabled GPU on your machine, or if you don't care about not having GPU acceleration, no need to worry. If, on the other hand, you installed tensorflow and wanted GPU acceleration, check your CUDA installation (TF 2.1 requires CUDA <strong>10.1</strong>, <em>not</em> 10.2 or 10.0).</p>
<p>If you just want to get rid of the warning, you can <a href="https://stackoverflow.com/questions/35911252/disable-tensorflow-debugging-information">adapt TF's logging level</a> to suppress warnings, but that might be overkill, as it will silence <em>all</em> warnings.</p>
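<p>A quick way to check whether TF actually sees a GPU, and (optionally) to raise the C++ log level; note that the environment variable must be set before importing tensorflow and that it hides all INFO and WARNING messages, not just this one:</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # optional: hide INFO and WARNING messages

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # an empty list means CPU-only mode
</code></pre>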
<hr />
<h1>Tensorflow 1.X or 2.0:</h1>
<p>Your CUDA setup is broken, ensure you have the correct version installed.</p>
| you are the stackoverflow assistant |
|
Deep-Learning Nan loss reasons | <p>There are lots of things I have seen make a model diverge.</p>
<ol>
<li><p>Too high of a learning rate. You can often tell if this is the case if the loss begins to increase and then diverges to infinity.</p>
</li>
<li><p>I am not too familiar with the DNNClassifier but I am guessing it uses the categorical cross entropy cost function. This involves taking the log of the prediction which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the tensorflow op for it. Probably not the issue.</p>
</li>
<li><p>Other numerical stability issues can exist, such as division by zero, where adding the epsilon can help. Another less obvious one is the square root, whose derivative can diverge if not properly simplified when dealing with finite-precision numbers. Yet again I doubt this is the issue in the case of the DNNClassifier.</p>
</li>
<li><p>You may have an issue with the input data. Try calling <code>assert not np.any(np.isnan(x))</code> on the input data to make sure you are not introducing NaNs. Also make sure all of the target values are valid. Finally, make sure the data is properly normalized. You probably want to have the pixels in the range [-1, 1] and not [0, 255]. A minimal version of this check is sketched right after this list.</p>
</li>
<li><p>The labels must be in the domain of the loss function, so if using a logarithmic-based loss function all labels must be non-negative (as noted by evan pu and the comments below).</p>
</li>
</ol>
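<p>A minimal version of the input-sanity check from point 4, where <code>x</code> and <code>y</code> stand for your own features and labels:</p>
<pre><code>import numpy as np

assert not np.any(np.isnan(x)), 'NaN in features'
assert not np.any(np.isinf(x)), 'Inf in features'
assert not np.any(np.isnan(y)), 'NaN in labels'

x = (x / 127.5) - 1.0   # scale [0, 255] pixels into [-1, 1]
</code></pre>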
| you are the stackoverflow assistant |
|
What does this tensorflow message mean? Any side effect? Was the installation successful? | <p>An important part of Tensorflow is that it is supposed to be fast. With a suitable installation, it works with CPUs, GPUs, or TPUs. Part of going fast means that it uses different code depending on your hardware. Some CPUs support operations that other CPUs do not, such as vectorized addition (adding multiple variables at once). Tensorflow is simply telling you that the version you have installed can use the AVX and AVX2 operations and is set to do so by default in certain situations (say inside a forward or back-prop matrix multiply), which can speed things up. This is not an error, it is just telling you that it can and will take advantage of your CPU to get that extra speed out.</p>
<p>Note: AVX stands for Advanced Vector Extensions.</p>
| you are the stackoverflow assistant |
|
What does tf.nn.conv2d do in tensorflow? | <p>Ok I think this is about the simplest way to explain it all.</p>
<hr>
<p>Your example is 1 image, size 2x2, with 1 channel. You have 1 filter, with size 1x1, and 1 channel (size is height x width x channels x number of filters). </p>
<p>For this simple case the resulting 2x2, 1 channel image (size 1x2x2x1, number of images x height x width x x channels) is the result of multiplying the filter value by each pixel of the image.</p>
<hr>
<p>Now let's try more channels:</p>
<pre class="lang-py prettyprint-override"><code>input = tf.Variable(tf.random_normal([1,3,3,5]))
filter = tf.Variable(tf.random_normal([1,1,5,1]))
op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')
</code></pre>
<p>Here the 3x3 image and the 1x1 filter each have 5 channels. The resulting image will be 3x3 with 1 channel (size 1x3x3x1), where the value of each pixel is the dot product across channels of the filter with the corresponding pixel in the input image.</p>
<hr>
<p>Now with a 3x3 filter</p>
<pre class="lang-py prettyprint-override"><code>input = tf.Variable(tf.random_normal([1,3,3,5]))
filter = tf.Variable(tf.random_normal([3,3,5,1]))
op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')
</code></pre>
<p>Here we get a 1x1 image, with 1 channel (size 1x1x1x1). The value is the sum of the 9, 5-element dot products. But you could just call this a 45-element dot product.</p>
<hr>
<p>Now with a bigger image</p>
<pre class="lang-py prettyprint-override"><code>input = tf.Variable(tf.random_normal([1,5,5,5]))
filter = tf.Variable(tf.random_normal([3,3,5,1]))
op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='VALID')
</code></pre>
<p>The output is a 3x3 1-channel image (size 1x3x3x1).
Each of these values is a sum of 9, 5-element dot products. </p>
<p>Each output is made by centering the filter on one of the 9 center pixels of the input image, so that none of the filter sticks out. The <code>x</code>s below represent the filter centers for each output pixel.</p>
<pre><code>.....
.xxx.
.xxx.
.xxx.
.....
</code></pre>
<hr>
<p>Now with "SAME" padding:</p>
<pre class="lang-py prettyprint-override"><code>input = tf.Variable(tf.random_normal([1,5,5,5]))
filter = tf.Variable(tf.random_normal([3,3,5,1]))
op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')
</code></pre>
<p>This gives a 5x5 output image (size 1x5x5x1). This is done by centering the filter at each position on the image. </p>
<p>Any of the 5-element dot products where the filter sticks out past the edge of the image get a value of zero. </p>
<p>So the corners are only sums of 4, 5-element dot products.</p>
<hr>
<p>Now with multiple filters.</p>
<pre class="lang-py prettyprint-override"><code>input = tf.Variable(tf.random_normal([1,5,5,5]))
filter = tf.Variable(tf.random_normal([3,3,5,7]))
op = tf.nn.conv2d(input, filter, strides=[1, 1, 1, 1], padding='SAME')
</code></pre>
<p>This still gives a 5x5 output image, but with 7 channels (size 1x5x5x7). Where each channel is produced by one of the filters in the set.</p>
<hr>
<p>Now with strides 2,2:</p>
<pre class="lang-py prettyprint-override"><code>input = tf.Variable(tf.random_normal([1,5,5,5]))
filter = tf.Variable(tf.random_normal([3,3,5,7]))
op = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME')
</code></pre>
<p>Now the result still has 7 channels, but is only 3x3 (size 1x3x3x7).</p>
<p>This is because instead of centering the filters at every point on the image, the filters are centered at every other point on the image, taking steps (strides) of width 2. The <code>x</code>'s below represent the filter center for each output pixel, on the input image.</p>
<pre><code>x.x.x
.....
x.x.x
.....
x.x.x
</code></pre>
<hr>
<p>And of course the first dimension of the input is the number of images so you can apply it over a batch of 10 images, for example:</p>
<pre class="lang-py prettyprint-override"><code>input = tf.Variable(tf.random_normal([10,5,5,5]))
filter = tf.Variable(tf.random_normal([3,3,5,7]))
op = tf.nn.conv2d(input, filter, strides=[1, 2, 2, 1], padding='SAME')
</code></pre>
<p>This performs the same operation, for each image independently, giving a stack of 10 images as the result (size 10x3x3x7) </p>
| you are the stackoverflow assistant |
|
Loading a trained Keras model and continue training | <p>Actually - <code>model.save</code> saves all the information needed for restarting training in your case. The only thing which could be spoiled by reloading the model is your optimizer state. To check that, try to <code>save</code> and reload the model and train it on the training data.</p>
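<p>A minimal sketch of that check, where <code>model</code>, <code>x_train</code> and <code>y_train</code> are your own and the file name is arbitrary:</p>
<pre><code>from keras.models import load_model

model.save('my_model.h5')                 # saves architecture, weights and optimizer state
restored = load_model('my_model.h5')      # comes back already compiled
restored.fit(x_train, y_train, epochs=5)  # continue training on the same data
</code></pre>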
| you are the stackoverflow assistant |
|
How are the new tf.contrib.summary summaries in TensorFlow evaluated? | <p><em>answer moved from edit to self-answer as requested</em></p>
<hr />
<p>I just played around with this a little bit, and it seems that if one combines <code>tf.control_dependencies</code> with <code>tf.contrib.summary.record_summaries_every_n_global_steps</code> it behaves as expected and the summary only gets recorded every nth step. But if they are run together within a session, such as <code>session.run([train, summs])</code>, the summaries are stored every once in a while, but not exactly every nth step. I tested this with n=2 and with the second approach the summary was often written at odd steps, while with the control dependency approach it was always on an even step.</p>
| you are the stackoverflow assistant |
|
In Tensorflow, get the names of all the Tensors in a graph | <p>You can do</p>
<pre><code>[n.name for n in tf.get_default_graph().as_graph_def().node]
</code></pre>
<p>Also, if you are prototyping in an IPython notebook, you can show the graph directly in notebook, see <code>show_graph</code> function in Alexander's Deep Dream <a href="http://nbviewer.jupyter.org/github/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/deepdream/deepdream.ipynb" rel="noreferrer">notebook</a></p>
| you are the stackoverflow assistant |
|
Should we do learning rate decay for adam optimizer | <p>It depends. ADAM updates any parameter with an individual learning rate. This means that every parameter in the network has a specific learning rate associated.</p>
<p><em>But</em> the single learning rate for each parameter is computed using lambda (the initial learning rate) as an upper limit. This means that every single learning rate can vary from 0 (no update) to lambda (maximum update).</p>
<p>It's true that the learning rates adapt themselves during training steps, but if you want to be sure that every update step doesn't exceed lambda, you can then lower lambda using exponential decay or whatever.
It can help to reduce loss during the last stages of training, when the loss computed with the previously associated lambda parameter has stopped decreasing.</p>
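<p>If you do want to lower lambda over time, a minimal TF 1.x sketch (the decay numbers are arbitrary, and <code>loss</code> stands for whatever you are minimizing):</p>
<pre><code>import tensorflow as tf

global_step = tf.Variable(0, trainable=False)
lr = tf.train.exponential_decay(learning_rate=1e-3, global_step=global_step,
                                decay_steps=10000, decay_rate=0.96, staircase=True)
train_op = tf.train.AdamOptimizer(learning_rate=lr).minimize(loss, global_step=global_step)
</code></pre>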
| you are the stackoverflow assistant |
|
TensorFlow, why there are 3 files after saving the model? | <p>Try this:</p>
<pre><code>with tf.Session() as sess:
saver = tf.train.import_meta_graph('/tmp/model.ckpt.meta')
saver.restore(sess, "/tmp/model.ckpt")
</code></pre>
<p>The TensorFlow save method saves three kinds of files because it stores the <b>graph structure</b> separately from the <b>variable values</b>. The <code>.meta</code> file describes the saved graph structure, so you need to import it before restoring the checkpoint (otherwise it doesn't know what variables the saved checkpoint values correspond to).</p>
<p>Alternatively, you could do this:</p>
<pre><code># Recreate the EXACT SAME variables
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Now load the checkpoint variable values
with tf.Session() as sess:
saver = tf.train.Saver()
saver.restore(sess, "/tmp/model.ckpt")
</code></pre>
<p>Even though there is no file named <code>model.ckpt</code>, you still refer to the saved checkpoint by that name when restoring it. From the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/saver.py" rel="noreferrer"><code>saver.py</code> source code</a>: </p>
<blockquote>
<p>Users only need to interact with the user-specified prefix... instead
of any physical pathname.</p>
</blockquote>
| you are the stackoverflow assistant |
|
Difference between Variable and get_variable in TensorFlow | <p>I'd recommend to always use <code>tf.get_variable(...)</code> -- it will make it way easier to refactor your code if you need to share variables at any time, e.g. in a multi-gpu setting (see the multi-gpu CIFAR example). There is no downside to it. </p>
<p>Pure <code>tf.Variable</code> is lower-level; at some point <code>tf.get_variable()</code> did not exist so some code still uses the low-level way.</p>
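<p>A minimal sketch of the sharing that <code>tf.get_variable(...)</code> enables (which plain <code>tf.Variable</code> cannot do):</p>
<pre><code>import tensorflow as tf

with tf.variable_scope('model'):
    w = tf.get_variable('w', shape=[10, 10])

with tf.variable_scope('model', reuse=True):
    w_again = tf.get_variable('w')   # returns the existing variable instead of creating a new one

print(w is w_again)   # True
</code></pre>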
| you are the stackoverflow assistant |
|
TensorFlow, "'module' object has no attribute 'placeholder'" | <p>If you have this error after an upgrade to TensorFlow 2.0, you can still use 1.X API by replacing:</p>
<pre><code>import tensorflow as tf
</code></pre>
<p>by</p>
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
</code></pre>
| you are the stackoverflow assistant |
|
Meaning of buffer_size in Dataset.map, Dataset.prefetch and Dataset.shuffle | <p><strong>TL;DR</strong> Despite their similar names, these arguments have quite different meanings. The <code>buffer_size</code> in <code>Dataset.shuffle()</code> can affect the randomness of your dataset, and hence the order in which elements are produced. The <code>buffer_size</code> in <code>Dataset.prefetch()</code> only affects the time it takes to produce the next element.</p>
<hr>
<p>The <code>buffer_size</code> argument in <a href="https://www.tensorflow.org/versions/r1.4/api_docs/python/tf/data/Dataset#prefetch" rel="noreferrer"><code>tf.data.Dataset.prefetch()</code></a> and the <code>output_buffer_size</code> argument in <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/data/Dataset#map" rel="noreferrer"><code>tf.contrib.data.Dataset.map()</code></a> provide a way to tune the <strong>performance</strong> of your input pipeline: both arguments tell TensorFlow to create a buffer of at most <code>buffer_size</code> elements, and a background thread to fill that buffer in the background.
(Note that we removed the <code>output_buffer_size</code> argument from <code>Dataset.map()</code> when it moved from <code>tf.contrib.data</code> to <code>tf.data</code>. New code should use <code>Dataset.prefetch()</code> after <code>map()</code> to get the same behavior.)</p>
<p>Adding a prefetch buffer can improve performance by overlapping the preprocessing of data with downstream computation. Typically it is most useful to add a small prefetch buffer (with perhaps just a single element) at the very end of the pipeline, but more complex pipelines can benefit from additional prefetching, especially when the time to produce a single element can vary.</p>
<p>By contrast, the <code>buffer_size</code> argument to <a href="https://www.tensorflow.org/versions/r1.4/api_docs/python/tf/data/Dataset#shuffle" rel="noreferrer"><code>tf.data.Dataset.shuffle()</code></a> affects the <strong>randomness</strong> of the transformation. We designed the <code>Dataset.shuffle()</code> transformation (like the <a href="https://www.tensorflow.org/api_docs/python/tf/train/shuffle_batch" rel="noreferrer"><code>tf.train.shuffle_batch()</code></a> function that it replaces) to handle datasets that are too large to fit in memory. Instead of shuffling the entire dataset, it maintains a buffer of <code>buffer_size</code> elements, and randomly selects the next element from that buffer (replacing it with the next input element, if one is available). Changing the value of <code>buffer_size</code> affects how uniform the shuffling is: if <code>buffer_size</code> is greater than the number of elements in the dataset, you get a uniform shuffle; if it is <code>1</code> then you get no shuffling at all. For very large datasets, a typical "good enough" approach is to randomly shard the data into multiple files once before training, then shuffle the filenames uniformly, and then use a smaller shuffle buffer. However, the appropriate choice will depend on the exact nature of your training job.</p>
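<p>A minimal pipeline showing where each argument goes (the numbers are arbitrary):</p>
<pre><code>dataset = tf.data.Dataset.range(100)
dataset = dataset.shuffle(buffer_size=100)   # buffer as large as the dataset: uniform shuffle
dataset = dataset.map(lambda x: x * 2)
dataset = dataset.batch(16)
dataset = dataset.prefetch(buffer_size=1)    # overlap preprocessing with consumption
</code></pre>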
<hr>
| you are the stackoverflow assistant |
|
Keras split train test set when using ImageDataGenerator | <p>Keras has now added Train / validation split from a single directory using ImageDataGenerator:</p>
<pre><code>train_datagen = ImageDataGenerator(rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
validation_split=0.2) # set validation split
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary',
subset='training') # set as training data
validation_generator = train_datagen.flow_from_directory(
train_data_dir, # same directory as training data
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='binary',
subset='validation') # set as validation data
model.fit_generator(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = nb_epochs)
</code></pre>
<p><a href="https://keras.io/preprocessing/image/" rel="noreferrer">https://keras.io/preprocessing/image/</a></p>
| you are the stackoverflow assistant |
|
What's the purpose of tf.app.flags in TensorFlow? | <p>The <code>tf.app.flags</code> module is presently a thin wrapper around <strike>python-gflags, so the <a href="https://github.com/gflags/python-gflags">documentation for that project</a> is the best resource for how to use it</strike> <a href="https://docs.python.org/2.7/library/argparse.html"><code>argparse</code></a>, which implements a subset of the functionality in <a href="https://github.com/gflags/python-gflags"><code>python-gflags</code></a>.</p>
<p>Note that this module is currently packaged as a convenience for writing demo apps, and is not technically part of the public API, so it may change in future.</p>
<p>We recommend that you implement your own flag parsing using <code>argparse</code> or whatever library you prefer.</p>
<p><strong>EDIT:</strong> The <code>tf.app.flags</code> module is not in fact implemented using <code>python-gflags</code>, but it uses a similar API. </p>
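<p>A minimal <code>argparse</code>-based replacement, along the lines recommended above (the flag names are made up):</p>
<pre><code>import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--data_dir', type=str, default='/tmp/data')
parser.add_argument('--max_steps', type=int, default=1000)
FLAGS, unparsed = parser.parse_known_args()

print(FLAGS.data_dir, FLAGS.max_steps)
</code></pre>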
| you are the stackoverflow assistant |
|
Tensorflow Strides Argument | <p>The pooling and convolutional ops slide a "window" across the input tensor. Using <a href="https://www.tensorflow.org/versions/master/api_docs/python/nn.html#conv2d"><code>tf.nn.conv2d</code></a> as an example: If the input tensor has 4 dimensions: <code>[batch, height, width, channels]</code>, then the convolution operates on a 2D window on the <code>height, width</code> dimensions.</p>
<p><code>strides</code> determines how much the window shifts by in each of the dimensions. The typical use sets the first (the batch) and last (the depth) stride to 1.</p>
<p>Let's use a very concrete example: Running a 2-d convolution over a 32x32 greyscale input image. I say greyscale because then the input image has depth=1, which helps keep it simple. Let that image look like this:</p>
<pre><code>00 01 02 03 04 ...
10 11 12 13 14 ...
20 21 22 23 24 ...
30 31 32 33 34 ...
...
</code></pre>
<p>Let's run a 2x2 convolution window over a single example (batch size = 1). We'll give the convolution an output channel depth of 8.</p>
<p>The input to the convolution has <code>shape=[1, 32, 32, 1]</code>.</p>
<p>If you specify <code>strides=[1,1,1,1]</code> with <code>padding=SAME</code>, then the output of the filter will be [1, 32, 32, 8].</p>
<p>The filter will first create an output for:</p>
<pre><code>F(00 01
10 11)
</code></pre>
<p>And then for:</p>
<pre><code>F(01 02
11 12)
</code></pre>
<p>and so on. Then it will move to the second row, calculating:</p>
<pre><code>F(10, 11
20, 21)
</code></pre>
<p>then</p>
<pre><code>F(11, 12
21, 22)
</code></pre>
<p>If you specify a stride of [1, 2, 2, 1] it won't do overlapping windows. It will compute:</p>
<pre><code>F(00, 01
10, 11)
</code></pre>
<p>and then</p>
<pre><code>F(02, 03
12, 13)
</code></pre>
<p>The stride operates similarly for the pooling operators.</p>
<p><strong>Question 2: Why strides [1, x, y, 1] for convnets</strong></p>
<p>The first 1 is the batch: You don't usually want to skip over examples in your batch, or you shouldn't have included them in the first place. :)</p>
<p>The last 1 is the depth of the convolution: You don't usually want to skip inputs, for the same reason.</p>
<p>The conv2d operator is more general, so you <em>could</em> create convolutions that slide the window along other dimensions, but that's not a typical use in convnets. The typical use is to use them spatially.</p>
<p><strong>Why reshape to -1?</strong> -1 is a placeholder that says "adjust as necessary to match the size needed for the full tensor." It's a way of making the code independent of the input batch size, so that you can change your pipeline and not have to adjust the batch size everywhere in the code.</p>
| you are the stackoverflow assistant |
|
What's the difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits? | <p>Having two different functions is a <strong>convenience</strong>, as they produce the same result. </p>
<p>The difference is simple:</p>
<ul>
<li>For <code>sparse_softmax_cross_entropy_with_logits</code>, labels must have the shape [batch_size] and the dtype int32 or int64. Each label is an int in range <code>[0, num_classes-1]</code>.</li>
<li>For <code>softmax_cross_entropy_with_logits</code>, labels must have the shape [batch_size, num_classes] and dtype float32 or float64.</li>
</ul>
<p>Labels used in <code>softmax_cross_entropy_with_logits</code> are the <strong>one hot version</strong> of labels used in <code>sparse_softmax_cross_entropy_with_logits</code>.</p>
<p>Another tiny difference is that with <code>sparse_softmax_cross_entropy_with_logits</code>, you can give -1 as a label to have loss <code>0</code> on this label.</p>
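<p>A quick numeric check of the equivalence (TF 1.x, 3 classes, true class 0):</p>
<pre><code>import tensorflow as tf

logits = tf.constant([[2.0, 0.5, 1.0]])
dense = tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.constant([[1.0, 0.0, 0.0]]), logits=logits)
sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=tf.constant([0]), logits=logits)

with tf.Session() as sess:
    print(sess.run([dense, sparse]))   # both print the same loss value
</code></pre>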
| you are the stackoverflow assistant |
|
Will scikit-learn utilize GPU? | <p>Tensorflow only uses GPU if it is built against Cuda and CuDNN. By default it does not use GPU, especially if it is running inside Docker, unless you use <a href="https://github.com/NVIDIA/nvidia-docker" rel="noreferrer">nvidia-docker</a> and an image with a built-in support.</p>
<p>Scikit-learn is not intended to be used as a deep-learning framework and it does not provide any GPU support.</p>
<blockquote>
<p><strong>Why is there no support for deep or reinforcement learning / Will there be support for deep or reinforcement learning in scikit-learn?</strong></p>
<p>Deep learning and reinforcement learning both require a rich
vocabulary to define an architecture, with deep learning additionally
requiring GPUs for efficient computing. However, neither of these fit
within the design constraints of scikit-learn; as a result, deep
learning and reinforcement learning are currently out of scope for
what scikit-learn seeks to achieve.</p>
</blockquote>
<p>Extracted from <a href="http://scikit-learn.org/stable/faq.html#why-is-there-no-support-for-deep-or-reinforcement-learning-will-there-be-support-for-deep-or-reinforcement-learning-in-scikit-learn" rel="noreferrer">http://scikit-learn.org/stable/faq.html#why-is-there-no-support-for-deep-or-reinforcement-learning-will-there-be-support-for-deep-or-reinforcement-learning-in-scikit-learn</a></p>
<blockquote>
<p><strong>Will you add GPU support in scikit-learn?</strong></p>
<p>No, or at least not in the near future. The main reason is that GPU
support will introduce many software dependencies and introduce
platform specific issues. scikit-learn is designed to be easy to
install on a wide variety of platforms. Outside of neural networks,
GPUs don’t play a large role in machine learning today, and much
larger gains in speed can often be achieved by a careful choice of
algorithms.</p>
</blockquote>
<p>Extracted from <a href="http://scikit-learn.org/stable/faq.html#will-you-add-gpu-support" rel="noreferrer">http://scikit-learn.org/stable/faq.html#will-you-add-gpu-support</a></p>
| you are the stackoverflow assistant |
|
Can Keras with Tensorflow backend be forced to use CPU or GPU at will? | <p>If you want to force Keras to use CPU</p>
<h2>Way 1</h2>
<pre><code>import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
</code></pre>
<p>before Keras / Tensorflow is imported.</p>
<h2>Way 2</h2>
<p>Run your script as</p>
<pre><code>$ CUDA_VISIBLE_DEVICES="" ./your_keras_code.py
</code></pre>
<p>See also </p>
<ol>
<li><a href="https://github.com/keras-team/keras/issues/152" rel="noreferrer">https://github.com/keras-team/keras/issues/152</a></li>
<li><a href="https://github.com/fchollet/keras/issues/4613" rel="noreferrer">https://github.com/fchollet/keras/issues/4613</a></li>
</ol>
| you are the stackoverflow assistant |
|
How to apply gradient clipping in TensorFlow? | <p>Gradient clipping needs to happen after computing the gradients, but before applying them to update the model's parameters. In your example, both of those things are handled by the <code>AdamOptimizer.minimize()</code> method.</p>
<p>In order to clip your gradients you'll need to explicitly compute, clip, and apply them as described in <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/Optimizer#processing_gradients_before_applying_them" rel="nofollow noreferrer">this section in TensorFlow's API documentation</a>. Specifically you'll need to substitute the call to the <code>minimize()</code> method with something like the following:</p>
<pre><code>optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
gvs = optimizer.compute_gradients(cost)
capped_gvs = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gvs]
train_op = optimizer.apply_gradients(capped_gvs)
</code></pre>
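<p>A commonly used alternative (reusing <code>cost</code> and <code>learning_rate</code> from above) is to clip by the <em>global norm</em> of all gradients instead of clipping each value independently:</p>
<pre><code>optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
grads, variables = zip(*optimizer.compute_gradients(cost))
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = optimizer.apply_gradients(zip(clipped, variables))
</code></pre>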
| you are the stackoverflow assistant |
|
ImportError: No module named tensorflow | <p>Try installing tensorflow again with whatever version you want and with the option <code>--ignore-installed</code>, like:</p>
<pre><code>pip install tensorflow==1.2.0 --ignore-installed
</code></pre>
<p>I solved same issue using this command.</p>
| you are the stackoverflow assistant |
|
Does model.compile() initialize all the weights and biases in Keras (tensorflow backend)? | <p><strong>When to use?</strong></p>
<p><strong>If</strong> you're using <code>compile</code>, surely it must be after <code>load_model()</code>. After all, you need a model to compile. (PS: <code>load_model</code> automatically compiles the model with the optimizer that was saved along with the model)</p>
<p><strong>What does <code>compile</code> do?</strong></p>
<p>Compile defines the <strong>loss function</strong>, the <strong>optimizer</strong> and the <strong>metrics</strong>. That's all. </p>
<p>It has nothing to do with the weights and you can compile a model as many times as you want without causing any problem to pretrained weights. </p>
<p>You need a compiled model to <strong>train</strong> (because training uses the loss function and the optimizer). But it's not necessary to compile a model for predicting. </p>
<p><strong>Do you need to use compile more than once?</strong></p>
<p>Only if:</p>
<ul>
<li>You want to change one of these:
<ul>
<li>Loss function </li>
<li>Optimizer / Learning rate </li>
<li>Metrics </li>
<li>The <code>trainable</code> property of some layer </li>
</ul></li>
<li>You loaded (or created) a model that is not compiled yet. Or your load/save method didn't consider the previous compilation.</li>
</ul>
<p><strong>Consequences of compiling again:</strong></p>
<p>If you compile a model again, you will lose the <strong>optimizer states</strong>. </p>
<p>This means that your training will suffer a little at the beginning until it adjusts the learning rate, the momentums, etc. But there is absolutely no damage to the weights (unless, of course, your initial learning rate is so big that the first training step wildly changes the fine tuned weights). </p>
| you are the stackoverflow assistant |
|
What is the difference between sparse_categorical_crossentropy and categorical_crossentropy? | <p>Simply:</p>
<ul>
<li><code>categorical_crossentropy</code> (<code>cce</code>) produces a one-hot array containing the probable match for each category,</li>
<li><code>sparse_categorical_crossentropy</code> (<code>scce</code>) produces a category index of the <em>most likely</em> matching category.</li>
</ul>
<p>Consider a classification problem with 5 categories (or classes).</p>
<ul>
<li><p>In the case of <code>cce</code>, the one-hot target may be <code>[0, 1, 0, 0, 0]</code> and the model may predict <code>[.2, .5, .1, .1, .1]</code> (probably right)</p>
</li>
<li><p>In the case of <code>scce</code>, the target index may be [1] and the model may predict: [.5].</p>
</li>
</ul>
<p>Consider now a classification problem with 3 classes.</p>
<ul>
<li>In the case of <code>cce</code>, the one-hot target might be <code>[0, 0, 1]</code> and the model may predict <code>[.5, .1, .4]</code> (probably inaccurate, given that it gives more probability to the first class)</li>
<li>In the case of <code>scce</code>, the target index might be <code>[0]</code>, and the model may predict <code>[.5]</code></li>
</ul>
<p>Many categorical models produce <code>scce</code> output because you save space, but lose A LOT of information (for example, in the 2nd example, index 2 was also very close.) I generally prefer <code>cce</code> output for model reliability.</p>
<p>There are a number of situations to use <code>scce</code>, including:</p>
<ul>
<li>when your classes are mutually exclusive, i.e. you don't care at all about other close-enough predictions,</li>
<li>the number of categories is so large that the prediction output becomes overwhelming.</li>
</ul>
<p><strong>220405</strong>: response to "one-hot encoding" comments:</p>
<p>one-hot encoding is used for a category feature INPUT to select a specific category (e.g. male versus female). This encoding allows the model to train more efficiently: training weight is a product of category, which is 0 for all categories except for the given one.</p>
<p><code>cce</code> and <code>scce</code> are a model OUTPUT. <code>cce</code> is a probability array of each category, totally 1.0. <code>scce</code> shows the MOST LIKELY category, totally 1.0.</p>
<p><code>scce</code> is technically a one-hot array, just like a hammer used as a door stop is still a hammer, but its purpose is different. <code>cce</code> is NOT one-hot.</p>
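<p>A quick check of the 5-class example above with the two Keras losses (TF 2.x, where eager execution lets us print the values directly), feeding the same prediction with a one-hot target vs. an integer target:</p>
<pre><code>import numpy as np
import tensorflow as tf

y_pred = np.array([[.2, .5, .1, .1, .1]], dtype='float32')

cce = tf.keras.losses.CategoricalCrossentropy()
scce = tf.keras.losses.SparseCategoricalCrossentropy()

print(cce(np.array([[0., 1., 0., 0., 0.]]), y_pred).numpy())   # one-hot target
print(scce(np.array([1]), y_pred).numpy())                     # index target, same loss
</code></pre>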
| you are the stackoverflow assistant |
|
Can I use TensorBoard with Google Colab? | <p><strong>EDIT:</strong> You probably want to give the official <a href="https://github.com/tensorflow/tensorboard/blob/a49abcbb91467a693d068b42f45b3f7b1880deca/docs/tensorboard_in_notebooks.ipynb" rel="nofollow noreferrer"><code>%tensorboard</code> magic</a> a go, available from TensorFlow 1.13 onward.</p>
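<p>That usually amounts to two notebook cells (assuming the logs go to <code>/tmp/log</code>, as in the workaround below):</p>
<pre class="lang-py prettyprint-override"><code>%load_ext tensorboard
%tensorboard --logdir /tmp/log
</code></pre>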
<hr />
<p>Prior to the existence of the <code>%tensorboard</code> magic, the standard way to
achieve this was to proxy network traffic to the Colab VM using
<a href="http://ngrok.com/" rel="nofollow noreferrer" title="ngrok">ngrok</a>. A Colab example can be found <a href="https://drive.google.com/file/d/16YpqISWqruH6wQuyWFp5ZUi5RaIF0_nB/view?usp=sharing" rel="nofollow noreferrer" title="colab example for running TensorBoard">here</a>.</p>
<p>These are the steps (the code snippets represent cells of type "code" in colab):</p>
<ol>
<li><p>Get TensorBoard running in the background. <br/>
<em>Inspired by <a href="https://stackoverflow.com/a/33838211/5222402">this answer</a>.</em></p>
<pre class="lang-py prettyprint-override"><code>LOG_DIR = '/tmp/log'
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
</code></pre>
</li>
<li><p>Download and unzip <a href="https://ngrok.com/download" rel="nofollow noreferrer" title="ngrok download links">ngrok</a>. <br/>
<em>Replace the link passed to <code>wget</code> with the correct download link for your OS.</em></p>
<pre class="lang-py prettyprint-override"><code>! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
! unzip ngrok-stable-linux-amd64.zip
</code></pre>
</li>
<li><p>Launch ngrok background process...</p>
<pre class="lang-py prettyprint-override"><code>get_ipython().system_raw('./ngrok http 6006 &')
</code></pre>
</li>
</ol>
<p>...and retrieve public url.
<em><a href="https://stackoverflow.com/a/1955555/5222402">Source</a></em></p>
<pre class="lang-py prettyprint-override"><code> ! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
</code></pre>
| you are the stackoverflow assistant |
|
How to set adaptive learning rate for GradientDescentOptimizer? | <p>First of all, <code>tf.train.GradientDescentOptimizer</code> is designed to use a constant learning rate for all variables in all steps. TensorFlow also provides out-of-the-box adaptive optimizers including the <a href="http://www.tensorflow.org/api_docs/python/train.html#AdagradOptimizer"><code>tf.train.AdagradOptimizer</code></a> and the <a href="http://www.tensorflow.org/api_docs/python/train.html#AdamOptimizer"><code>tf.train.AdamOptimizer</code></a>, and these can be used as drop-in replacements.</p>
<p>However, if you want to control the learning rate with otherwise-vanilla gradient descent, you can take advantage of the fact that the <code>learning_rate</code> argument to the <a href="http://www.tensorflow.org/api_docs/python/train.html#GradientDescentOptimizer.__init__"><code>tf.train.GradientDescentOptimizer</code> constructor</a> can be a <code>Tensor</code> object. This allows you to compute a different value for the learning rate in each step, for example:</p>
<pre><code>learning_rate = tf.placeholder(tf.float32, shape=[])
# ...
train_step = tf.train.GradientDescentOptimizer(
learning_rate=learning_rate).minimize(mse)
sess = tf.Session()
# Feed different values for learning rate to each training step.
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.1})
sess.run(train_step, feed_dict={learning_rate: 0.01})
sess.run(train_step, feed_dict={learning_rate: 0.01})
</code></pre>
<p>Alternatively, you could create a scalar <code>tf.Variable</code> that holds the learning rate, and assign it each time you want to change the learning rate.</p>
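<p>A minimal sketch of that alternative (reusing <code>mse</code> from the example above):</p>
<pre><code>lr = tf.Variable(0.1, trainable=False, dtype=tf.float32)
train_step = tf.train.GradientDescentOptimizer(learning_rate=lr).minimize(mse)
lower_lr = tf.assign(lr, 0.01)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(train_step)   # runs with learning rate 0.1
sess.run(lower_lr)     # change the learning rate in place
sess.run(train_step)   # runs with learning rate 0.01
</code></pre>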
| you are the stackoverflow assistant |
|
TensorFlow saving into/loading a graph from a file | <p>There are many ways to approach the problem of saving a model in TensorFlow, which can make it a bit confusing. Taking each of your sub-questions in turn:</p>
<ol>
<li><p>The checkpoint files (produced e.g. by calling <a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver#save" rel="noreferrer"><code>saver.save()</code></a> on a <a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver" rel="noreferrer"><code>tf.train.Saver</code></a> object) contain only the weights, and any other variables defined in the same program. To use them in another program, you must re-create the associated graph structure (e.g. by running code to build it again, or calling <a href="https://www.tensorflow.org/api_docs/python/tf/import_graph_def" rel="noreferrer"><code>tf.import_graph_def()</code></a>), which tells TensorFlow what to do with those weights. Note that calling <code>saver.save()</code> also produces a file containing a <a href="https://www.tensorflow.org/api_guides/python/meta_graph" rel="noreferrer"><code>MetaGraphDef</code></a>, which contains a graph and details of how to associate the weights from a checkpoint with that graph. See <a href="https://www.tensorflow.org/api_guides/python/meta_graph" rel="noreferrer">the tutorial</a> for more details.</p></li>
<li><p><a href="https://www.tensorflow.org/api_docs/python/tf/train/write_graph" rel="noreferrer"><code>tf.train.write_graph()</code></a> only writes the graph structure; not the weights.</p></li>
<li><p>Bazel is unrelated to reading or writing TensorFlow graphs. (Perhaps I misunderstand your question: feel free to clarify it in a comment.)</p></li>
<li><p>A frozen graph can be loaded using <a href="https://www.tensorflow.org/api_docs/python/tf/import_graph_def" rel="noreferrer"><code>tf.import_graph_def()</code></a>. In this case, the weights are (typically) embedded in the graph, so you don't need to load a separate checkpoint (a minimal loading sketch follows this list).</p></li>
<li><p>The main change would be to update the names of the tensor(s) that are fed into the model, and the names of the tensor(s) that are fetched from the model. In the TensorFlow Android demo, this would correspond to the <code>inputName</code> and <code>outputName</code> strings that are passed to <a href="https://github.com/tensorflow/tensorflow/blob/d67ce6c449fabb3bebccd85815d9d291f114e6e4/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowClassifier.java#L34" rel="noreferrer"><code>TensorFlowClassifier.initializeTensorFlow()</code></a>.</p></li>
<li><p>The <code>GraphDef</code> is the program structure, which typically does not change through the training process. The checkpoint is a snapshot of the state of a training process, which typically changes at every step of the training process. As a result, TensorFlow uses different storage formats for these types of data, and the low-level API provides different ways to save and load them. Higher-level libraries, such as the <a href="https://www.tensorflow.org/api_guides/python/meta_graph" rel="noreferrer"><code>MetaGraphDef</code></a> libraries, <a href="https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model" rel="noreferrer">Keras</a>, and <a href="https://github.com/tensorflow/skflow#saving--restoring-models" rel="noreferrer">skflow</a> build on these mechanisms to provide more convenient ways to save and restore an entire model.</p></li>
</ol>
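<p>As a sketch of point 4, loading a frozen graph typically looks like this (TF 1.x; the file name, the tensor names and <code>my_input</code> are placeholders for your own):</p>
<pre><code>import tensorflow as tf

with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    input_t = graph.get_tensor_by_name('input:0')
    output_t = graph.get_tensor_by_name('output:0')

with tf.Session(graph=graph) as sess:
    result = sess.run(output_t, feed_dict={input_t: my_input})
</code></pre>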
| you are the stackoverflow assistant |
|
What is the difference between np.mean and tf.reduce_mean? | <p>The functionality of <code>numpy.mean</code> and <code>tensorflow.reduce_mean</code> are the same. They do the same thing. From the documentation, for <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html" rel="noreferrer">numpy</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/reduce_mean" rel="noreferrer">tensorflow</a>, you can see that. Lets look at an example,</p>
<pre><code>c = np.array([[3.,4], [5.,6], [6.,7]])
print(np.mean(c,1))
Mean = tf.reduce_mean(c,1)
with tf.Session() as sess:
result = sess.run(Mean)
print(result)
</code></pre>
<p>Output</p>
<pre><code>[ 3.5 5.5 6.5]
[ 3.5 5.5 6.5]
</code></pre>
<p>Here you can see that when <code>axis</code>(numpy) or <code>reduction_indices</code>(tensorflow) is 1, it computes mean across (3,4) and (5,6) and (6,7), so <code>1</code> defines across which axis the mean is computed. When it is 0, the mean is computed across(3,5,6) and (4,6,7), and so on. I hope you get the idea.</p>
<p>Now what are the differences between them?</p>
<p>You can compute the numpy operation anywhere in Python. But in order to do a tensorflow operation, it must be done inside a tensorflow <code>Session</code>. You can read more about it <a href="https://www.tensorflow.org/get_started/get_started" rel="noreferrer">here</a>. So when you need to perform any computation for your tensorflow graph (or structure, if you will), it must be done inside a tensorflow <code>Session</code>.</p>
<p>Lets look at another example.</p>
<pre><code>npMean = np.mean(c)
print(npMean+1)
tfMean = tf.reduce_mean(c)
Add = tfMean + 1
with tf.Session() as sess:
result = sess.run(Add)
print(result)
</code></pre>
<p>We can increase the mean by <code>1</code> in <code>numpy</code> directly, but to do it in tensorflow you need to perform the computation in a <code>Session</code>; without a <code>Session</code> you can't do it. In other words, when you write <code>tfMean = tf.reduce_mean(c)</code>, tensorflow doesn't compute it at that point. It only computes it inside a <code>Session</code>. But numpy computes the result instantly, as soon as you write <code>np.mean()</code>.</p>
<p>I hope it makes sense.</p>
| you are the stackoverflow assistant |
|
What does global_step mean in Tensorflow? | <p><code>global_step</code> refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. <code>global_step</code> just keeps track of the number of batches seen so far. When it is passed in the <code>minimize()</code> argument list, the variable is increased by one. Have a look at <a href="https://www.tensorflow.org/api_docs/python/tf/train/Optimizer#minimize" rel="noreferrer"><code>optimizer.minimize()</code></a>.</p>
<p>You can get the <code>global_step</code> value using <a href="https://www.tensorflow.org/api_docs/python/tf/train/global_step" rel="noreferrer"><code>tf.train.global_step()</code></a>.
Also handy are the utility methods <a href="https://www.tensorflow.org/api_docs/python/tf/train/get_global_step" rel="noreferrer"><code>tf.train.get_global_step</code></a> or <a href="https://www.tensorflow.org/api_docs/python/tf/train/get_or_create_global_step" rel="noreferrer"><code>tf.train.get_or_create_global_step</code></a>.</p>
<p><code>0</code> is the initial value of the global step in this context.</p>
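<p>For illustration, here is a minimal TF 1.x sketch (the toy loss and variable names are my own); each run of the training op increments the counter by one:</p>
<pre><code>import tensorflow as tf

x = tf.Variable(3.0)
loss = tf.square(x)
global_step = tf.train.get_or_create_global_step()
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(5):
        sess.run(train_op)      # each step increments global_step by one
    print(sess.run(global_step))  # ==> 5
</code></pre>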
| you are the stackoverflow assistant |
|
AttributeError: 'Tensor' object has no attribute 'numpy' | <p>Since the accepted answer did not solve the problem for me, I thought it might be helpful for people who face this problem and already have tensorflow version >= 2.2.0 with eager execution enabled.</p>
<p>The issue seems to be that, for certain functions called during fitting with <code>model.fit()</code>,
the <code>@tf.function</code> decorator prohibits the execution of functions like <code>tensor.numpy()</code> for performance reasons.</p>
<p>The solution for me was to pass the flag <a href="https://www.tensorflow.org/guide/eager?hl=en" rel="noreferrer"><code>run_eagerly=True</code></a> to the <code>model.compile()</code> like this:</p>
<pre><code>model.compile(..., run_eagerly=True)
</code></pre>
| you are the stackoverflow assistant |
|
How to get stable results with TensorFlow, setting random seed | <p>Setting the current TensorFlow random seed affects the current default graph only. Since you are creating a new graph for your training and setting it as default (<code>with g.as_default():</code>), you must set the random seed within the scope of that <code>with</code> block.</p>
<p>For example, your loop should look like the following:</p>
<pre><code>for i in range(3):
g = tf.Graph()
with g.as_default():
tf.set_random_seed(1)
accuracy_result, average_error = network.train_network(
parameters, inputHeight, inputWidth, inputChannels, outputClasses)
</code></pre>
<p>Note that this will use the same random seed for each iteration of the outer <code>for</code> loop. If you want to use a different—but still deterministic—seed in each iteration, you can use <code>tf.set_random_seed(i + 1)</code>.</p>
| you are the stackoverflow assistant |
|
Tensorflow set CUDA_VISIBLE_DEVICES within jupyter | <p>You can set environment variables in the notebook using <code>os.environ</code>. Do the following before initializing TensorFlow to limit TensorFlow to first GPU.</p>
<pre><code>import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"
</code></pre>
<p>You can double check that you have the correct devices visible to TF</p>
<pre><code>from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
</code></pre>
<p>I tend to use it from a utility module like <a href="https://github.com/yaroslavvb/stuff/blob/master/notebook_util.py" rel="noreferrer">notebook_util</a></p>
<pre><code>import notebook_util
notebook_util.pick_gpu_lowest_memory()
import tensorflow as tf
</code></pre>
| you are the stackoverflow assistant |
|
How to get Tensorflow tensor dimensions (shape) as int values? | <p>To get the shape as a list of ints, do <code>tensor.get_shape().as_list()</code>.</p>
<p>To complete your <code>tf.shape()</code> call, try <code>tensor2 = tf.reshape(tensor, tf.TensorShape([num_rows*num_cols, 1]))</code>. Or you can directly do <code>tensor2 = tf.reshape(tensor, tf.TensorShape([-1, 1]))</code> where its first dimension can be inferred.</p>
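<p>A small self-contained sketch (shapes made up) showing the static route next to the inferred one:</p>
<pre><code>import tensorflow as tf

tensor = tf.zeros([4, 5])
num_rows, num_cols = tensor.get_shape().as_list()      # plain Python ints: 4, 5 (None for unknown dims)
tensor2 = tf.reshape(tensor, [num_rows * num_cols, 1])  # static shape known up front
tensor3 = tf.reshape(tensor, [-1, 1])                   # let TensorFlow infer the first dimension
</code></pre>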
| you are the stackoverflow assistant |
|
How to solve "AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key"? | <p>The protoc version I got through <code>pip show protobuf</code> and <code>protoc --version</code> were different. The version in pip was a bit outdated.</p>
<p>After I upgraded the pip version with</p>
<pre class="lang-sh prettyprint-override"><code>pip install --upgrade protobuf
</code></pre>
<p>the problem was solved.</p>
| you are the stackoverflow assistant |
|
What does batch, repeat, and shuffle do with TensorFlow Dataset? | <p>Update: <a href="https://colab.research.google.com/drive/1VS6-dYk3YAzoRmALhgTK7bb2_tBPrB4c?usp=sharing" rel="noreferrer">Here</a> is a small Colab notebook demonstrating this answer.</p>
<hr />
<p>Imagine, you have a dataset: <code>[1, 2, 3, 4, 5, 6]</code>, then:</p>
<p><strong>How ds.shuffle() works</strong></p>
<p><code>dataset.shuffle(buffer_size=3)</code> will allocate a buffer of size 3 for picking random entries. This buffer will be connected to the source dataset.
We can imagine it like this:</p>
<pre><code>Random buffer
|
| Source dataset where all other elements live
| |
↓ ↓
[1,2,3] <= [4,5,6]
</code></pre>
<p>Let's assume that entry <code>2</code> was taken from the random buffer. Free space is filled by the next element from the source buffer, that is <code>4</code>:</p>
<pre><code>2 <= [1,3,4] <= [5,6]
</code></pre>
<p>We continue reading till nothing is left:</p>
<pre><code>1 <= [3,4,5] <= [6]
5 <= [3,4,6] <= []
3 <= [4,6] <= []
6 <= [4] <= []
4 <= [] <= []
</code></pre>
<p><strong>How ds.repeat() works</strong></p>
<p>As soon as all the entries are read from the dataset and you try to read the next element, the dataset will throw an error.
That's where <code>ds.repeat()</code> comes into play. It will re-initialize the dataset, making it again like this:</p>
<pre><code>[1,2,3] <= [4,5,6]
</code></pre>
<p><strong>What will ds.batch() produce</strong></p>
<p>The <code>ds.batch()</code> will take the first <code>batch_size</code> entries and make a batch out of them. So, a batch size of 3 for our example dataset will produce two batch records:</p>
<pre><code>[2,1,5]
[3,6,4]
</code></pre>
<p>As we have a <code>ds.repeat()</code> before the batch, the generation of the data will continue. But the order of the elements will be different, due to the <code>ds.shuffle()</code>. What should be taken into account is that <code>6</code> will never be present in the first batch, due to the size of the random buffer.</p>
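<p>To see it end to end, here is a small TF 2.x sketch of the pipeline described above; the exact order will differ on every run because of the shuffling:</p>
<pre><code>import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
ds = ds.shuffle(buffer_size=3).repeat(2).batch(3)

for batch in ds:          # eager mode (TF 2.x)
    print(batch.numpy())  # e.g. [2 1 4] [3 6 5] [2 3 1] [5 4 6] -- order varies per run
</code></pre>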
| you are the stackoverflow assistant |
|
When importing tensorflow, I get the following error: No module named 'numpy.core._multiarray_umath' | <p>I also had the same issue.
It got resolved once I upgraded numpy from 1.15.4 to 1.16.1.</p>
<p>If you're using pip:
<code>pip install numpy --upgrade</code></p>
<p>The numpy that came with Anaconda3 was version 1.15.4, so I upgraded and it worked.</p>
<hr />
<p>Side note: if you're also using <strong>scikit-image</strong> in your script, be aware that numpy 1.16.3 has a conflict with old versions of scikit-image (e.g. you may get <code>ImportError: cannot import name '_validate_lengths'</code>). In that case, <code>pip install --upgrade scikit-image</code> from terminal solved the issue for me.</p>
| you are the stackoverflow assistant |
|
Using a pre-trained word embedding (word2vec or Glove) in TensorFlow | <p>There are a few ways that you can use a pre-trained embedding in TensorFlow. Let's say that you have the embedding in a NumPy array called <code>embedding</code>, with <code>vocab_size</code> rows and <code>embedding_dim</code> columns and you want to create a tensor <code>W</code> that can be used in a call to <a href="https://www.tensorflow.org/api_guides/python/nn#Embeddings" rel="noreferrer"><code>tf.nn.embedding_lookup()</code></a>.</p>
<ol>
<li><p>Simply create <code>W</code> as a <a href="https://www.tensorflow.org/api_guides/python/constant_op#constant" rel="noreferrer"><code>tf.constant()</code></a> that takes <code>embedding</code> as its value:</p>
<pre><code>W = tf.constant(embedding, name="W")
</code></pre>
<p>This is the easiest approach, but it is not memory efficient because the value of a <code>tf.constant()</code> is stored multiple times in memory. Since <code>embedding</code> can be very large, you should only use this approach for toy examples.</p></li>
<li><p>Create <code>W</code> as a <code>tf.Variable</code> and initialize it from the NumPy array via a <a href="https://www.tensorflow.org/api_guides/python/io_ops#placeholder" rel="noreferrer"><code>tf.placeholder()</code></a>:</p>
<pre><code>W = tf.Variable(tf.constant(0.0, shape=[vocab_size, embedding_dim]),
trainable=False, name="W")
embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, embedding_dim])
embedding_init = W.assign(embedding_placeholder)
# ...
sess = tf.Session()
sess.run(embedding_init, feed_dict={embedding_placeholder: embedding})
</code></pre>
<p>This avoids storing a copy of <code>embedding</code> in the graph, but it does require enough memory to keep two copies of the matrix in memory at once (one for the NumPy array, and one for the <code>tf.Variable</code>). Note that I've assumed that you want to hold the embedding matrix constant during training, so <code>W</code> is created with <code>trainable=False</code>.</p></li>
<li><p>If the embedding was trained as part of another TensorFlow model, you can use a <a href="https://www.tensorflow.org/api_guides/python/state_ops#Saver" rel="noreferrer"><code>tf.train.Saver</code></a> to load the value from the other model's checkpoint file. This means that the embedding matrix can bypass Python altogether. Create <code>W</code> as in option 2, then do the following:</p>
<pre><code>W = tf.Variable(...)
embedding_saver = tf.train.Saver({"name_of_variable_in_other_model": W})
# ...
sess = tf.Session()
embedding_saver.restore(sess, "checkpoint_filename.ckpt")
</code></pre></li>
</ol>
| you are the stackoverflow assistant |
|
How to stack multiple lstm in keras? | <p>You need to add <code>return_sequences=True</code> to the first layer so that its output tensor has <code>ndim=3</code> (i.e. batch size, timesteps, hidden state).</p>
<p>Please see the following example:</p>
<pre><code># expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
input_shape=(timesteps, data_dim))) # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True)) # returns a sequence of vectors of dimension 32
model.add(LSTM(32)) # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
</code></pre>
<p>From: <a href="https://keras.io/getting-started/sequential-model-guide/" rel="noreferrer">https://keras.io/getting-started/sequential-model-guide/</a> (search for "stacked lstm")</p>
| you are the stackoverflow assistant |
|
How to choose cross-entropy loss in TensorFlow? | <h2>Preliminary facts</h2>
<ul>
<li><p>In functional sense, the <a href="https://stats.stackexchange.com/q/233658/130598">sigmoid is a partial case of the softmax function</a>, when the number of classes equals 2. Both of them do the same operation: transform the logits (see below) to probabilities.</p>
<p>In simple binary classification, there's no big difference between the two,
however, in the case of multinomial classification, sigmoid allows you to deal
with non-exclusive labels (a.k.a. <em>multi-labels</em>), while softmax deals
with exclusive classes (see below).</p></li>
<li><p>A <em>logit</em> (also called a score) is a <a href="https://stats.stackexchange.com/q/52825/130598">raw unscaled value associated with a class</a>, before computing the probability. In terms of neural network architecture, this means that a logit is an output of a dense (fully-connected) layer.</p>
<p>Tensorflow naming is a bit strange: <strong>all of the functions below accept logits, not probabilities</strong>, and apply the transformation themselves (which is simply more efficient).</p></li>
</ul>
<h2>Sigmoid functions family</h2>
<ul>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits" rel="noreferrer"><code>tf.nn.sigmoid_cross_entropy_with_logits</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits" rel="noreferrer"><code>tf.nn.weighted_cross_entropy_with_logits</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/losses/sigmoid_cross_entropy" rel="noreferrer"><code>tf.losses.sigmoid_cross_entropy</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/contrib/losses/sigmoid_cross_entropy" rel="noreferrer"><code>tf.contrib.losses.sigmoid_cross_entropy</code></a> (DEPRECATED)</li>
</ul>
<p>As stated earlier, the <code>sigmoid</code> loss function is for binary classification.
But the tensorflow functions are more general and allow you to do
multi-label classification when the classes are independent.
In other words, <code>tf.nn.sigmoid_cross_entropy_with_logits</code> solves <code>N</code>
binary classifications at once.</p>
<p>The labels must be one-hot encoded or can contain soft class probabilities.</p>
<p><code>tf.losses.sigmoid_cross_entropy</code> additionally allows you to set the <em>in-batch weights</em>,
i.e. make some examples more important than others.
<code>tf.nn.weighted_cross_entropy_with_logits</code> allows you to set <em>class weights</em>
(remember, the classification is binary), i.e. make positive errors larger than
negative errors. This is useful when the training data is unbalanced.</p>
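<p>For concreteness, a tiny sketch of the multi-label case (the numbers are made up):</p>
<pre><code>import tensorflow as tf

# one example with 3 independent (non-exclusive) labels
logits = tf.constant([[1.5, -0.3, 0.2]])
labels = tf.constant([[1.0,  0.0, 1.0]])   # multi-label target
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
# loss has shape (1, 3): one binary cross-entropy term per label
</code></pre>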
<h2>Softmax functions family</h2>
<ul>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits" rel="noreferrer"><code>tf.nn.softmax_cross_entropy_with_logits</code></a> (DEPRECATED IN 1.5)</li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits_v2" rel="noreferrer"><code>tf.nn.softmax_cross_entropy_with_logits_v2</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/losses/softmax_cross_entropy" rel="noreferrer"><code>tf.losses.softmax_cross_entropy</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/contrib/losses/softmax_cross_entropy" rel="noreferrer"><code>tf.contrib.losses.softmax_cross_entropy</code></a> (DEPRECATED)</li>
</ul>
<p>These loss functions should be used for multinomial mutually exclusive classification,
i.e. pick one out of <code>N</code> classes. Also applicable when <code>N = 2</code>.</p>
<p>The labels must be one-hot encoded or can contain soft class probabilities:
a particular example can belong to class A with 50% probability and class B
with 50% probability. Note that strictly speaking it doesn't mean that
it belongs to both classes, but one can interpret the probabilities this way.</p>
<p>Just like in the <code>sigmoid</code> family, <code>tf.losses.softmax_cross_entropy</code> allows
you to set the <em>in-batch weights</em>, i.e. make some examples more important than others.
As far as I know, as of tensorflow 1.3, there's no built-in way to set <em>class weights</em>.</p>
<p><strong>[UPD]</strong> In tensorflow 1.5, <code>v2</code> version <a href="https://github.com/tensorflow/tensorflow/commit/d60f6513232bd49b658c188c0597dd119e9a52d8" rel="noreferrer">was introduced</a> and the original <code>softmax_cross_entropy_with_logits</code> loss got deprecated. The only difference between them is that in a newer version, backpropagation happens into both logits and labels (<a href="https://stats.stackexchange.com/q/327348/130598">here's a discussion</a> why this may be useful).</p>
<h2>Sparse functions family</h2>
<ul>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits" rel="noreferrer"><code>tf.nn.sparse_softmax_cross_entropy_with_logits</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/losses/sparse_softmax_cross_entropy" rel="noreferrer"><code>tf.losses.sparse_softmax_cross_entropy</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/contrib/losses/sparse_softmax_cross_entropy" rel="noreferrer"><code>tf.contrib.losses.sparse_softmax_cross_entropy</code></a> (DEPRECATED)</li>
</ul>
<p>Like ordinary <code>softmax</code> above, these loss functions should be used for
multinomial mutually exclusive classification, i.e. pick one out of <code>N</code> classes.
The difference is in labels encoding: the classes are specified as integers (class index),
not one-hot vectors. Obviously, this doesn't allow soft classes, but it
can save some memory when there are thousands or millions of classes.
However, note that the <code>logits</code> argument must still contain logits for each class,
thus it consumes at least <code>[batch_size, classes]</code> memory.</p>
<p>Like above, the <code>tf.losses</code> version has a <code>weights</code> argument which allows
you to set the in-batch weights.</p>
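<p>A short sketch contrasting the dense and the sparse label encodings (made-up numbers, TF 1.x names); both calls produce the same per-example losses here:</p>
<pre><code>import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])

# one-hot (or soft) labels -> softmax family
onehot = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
dense_loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=onehot, logits=logits)

# integer class indices -> sparse family (same loss, cheaper label encoding)
indices = tf.constant([0, 1])
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=indices, logits=logits)
</code></pre>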
<h2>Sampled softmax functions family</h2>
<ul>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss" rel="noreferrer"><code>tf.nn.sampled_softmax_loss</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/contrib/nn/rank_sampled_softmax_loss" rel="noreferrer"><code>tf.contrib.nn.rank_sampled_softmax_loss</code></a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss" rel="noreferrer"><code>tf.nn.nce_loss</code></a></li>
</ul>
<p>These functions provide another alternative for dealing with a huge number of classes.
Instead of computing and comparing an exact probability distribution, they compute
a loss estimate from a random sample.</p>
<p>The arguments <code>weights</code> and <code>biases</code> specify a separate fully-connected layer that
is used to compute the logits for a chosen sample.</p>
<p>Like above, <code>labels</code> are not one-hot encoded, but have the shape <code>[batch_size, num_true]</code>.</p>
<p>Sampled functions are only suitable for training. At test time, it's recommended to
use a standard <code>softmax</code> loss (either sparse or one-hot) to get an actual distribution.</p>
<p>Another alternative loss is <code>tf.nn.nce_loss</code>, which performs <em>noise-contrastive estimation</em> (if you're interested, see this <a href="https://datascience.stackexchange.com/q/13216/18375">very detailed discussion</a>). I've included this function to the softmax family, because NCE guarantees approximation to softmax in the limit.</p>
| you are the stackoverflow assistant |
|
How to add regularizations in TensorFlow? | <p>As you say in the second point, using the <code>regularizer</code> argument is the recommended way. You can use it in <code>get_variable</code>, or set it once in your <code>variable_scope</code> and have all your variables regularized.</p>
<p>The losses are collected in the graph, and you need to manually add them to your cost function like this.</p>
<pre><code> reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_constant = 0.01 # Choose an appropriate one.
loss = my_normal_loss + reg_constant * sum(reg_losses)
</code></pre>
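<p>For completeness, a hypothetical sketch of attaching a regularizer through the variable scope (TF 1.x, <code>tf.contrib</code>). Note that the <code>scale</code> is already baked into the collected losses, so in that case the <code>reg_constant</code> above can simply be <code>1</code>:</p>
<pre><code>import tensorflow as tf

regularizer = tf.contrib.layers.l2_regularizer(scale=0.01)
with tf.variable_scope('my_layer', regularizer=regularizer):
    W = tf.get_variable('W', shape=[10, 5])   # its L2 penalty is added to REGULARIZATION_LOSSES

reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
</code></pre>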
| you are the stackoverflow assistant |
|
Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads | <p>The <code>inter_op_parallelism_threads</code> and <code>intra_op_parallelism_threads</code> options are documented in the <a href="https://github.com/tensorflow/tensorflow/blob/26b4dfa65d360f2793ad75083c797d57f8661b93/tensorflow/core/protobuf/config.proto#L165" rel="noreferrer">source of the <code>tf.ConfigProto</code> protocol buffer</a>. These options configure two thread pools used by TensorFlow to parallelize execution, as the comments describe:</p>
<pre><code>// The execution of an individual op (for some op types) can be
// parallelized on a pool of intra_op_parallelism_threads.
// 0 means the system picks an appropriate number.
int32 intra_op_parallelism_threads = 2;
// Nodes that perform blocking operations are enqueued on a pool of
// inter_op_parallelism_threads available in each process.
//
// 0 means the system picks an appropriate number.
//
// Note that the first Session created in the process sets the
// number of threads for all future sessions unless use_per_session_threads is
// true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
</code></pre>
<p>There are several possible forms of parallelism when running a TensorFlow graph, and these options provide some control over multi-core CPU parallelism:</p>
<ul>
<li><p>If you have an operation that can be parallelized internally, such as matrix multiplication (<code>tf.matmul()</code>) or a reduction (e.g. <code>tf.reduce_sum()</code>), TensorFlow will execute it by scheduling tasks in a thread pool with <code>intra_op_parallelism_threads</code> threads. This configuration option, therefore, controls the maximum parallel speedup for a single operation. Note that if you run multiple operations in parallel, these operations will share this thread pool.</p>
</li>
<li><p>If you have many operations that are independent in your TensorFlow graph— because there is no directed path between them in the dataflow graph— TensorFlow will attempt to run them concurrently, using a thread pool with <code>inter_op_parallelism_threads</code> threads. If those operations have a multithreaded implementation, they will (in most cases) share the same thread pool for intra-op parallelism.</p>
</li>
</ul>
<p>Finally, both configuration options take a default value of <code>0</code>, which means "the system picks an appropriate number." Currently, this means that each thread pool will have one thread per CPU core in your machine.</p>
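<p>For example, a minimal sketch of setting both options explicitly (TF 1.x; in TF 2.x the equivalents are <code>tf.config.threading.set_intra_op_parallelism_threads</code> and <code>tf.config.threading.set_inter_op_parallelism_threads</code>):</p>
<pre><code>import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=4,   # threads used inside a single op (e.g. one matmul)
                        inter_op_parallelism_threads=2)   # independent ops run concurrently
sess = tf.Session(config=config)
</code></pre>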
| you are the stackoverflow assistant |
|
How do display different runs in TensorBoard? | <p>In addition to TensorBoard scanning subdirectories (so you can pass a directory containing the directories with your runs), you can also pass multiple directories to TensorBoard explicitly and give custom names (example taken from the --help output):</p>
<pre><code>tensorboard --logdir=name1:/path/to/logs/1,name2:/path/to/logs/2
</code></pre>
<p>More information can be found at the <a href="https://github.com/tensorflow/tensorboard/blob/master/README.md#runs-comparing-different-executions-of-your-model" rel="noreferrer">TensorBoard documentation</a>.</p>
<p>In recent versions of TensorBoard, aliasing this way requires a different argument; however, its use is discouraged (quote from the current documentation on GitHub, linked above):</p>
<blockquote>
<p><strong>Logdir & Logdir_spec (Legacy Mode)</strong></p>
<p>You may also pass a comma separated list of log directories, and
TensorBoard will watch each directory. You can also assign names to
individual log directories by putting a colon between the name and the
path, as in</p>
<p><code>tensorboard --logdir_spec name1:/path/to/logs/1,name2:/path/to/logs/2</code></p>
<p><strong>This flag (--logdir_spec) is discouraged and can usually be avoided.</strong>
TensorBoard walks log directories recursively; for finer-grained
control, prefer using a symlink tree. Some features may not work when
using --logdir_spec instead of --logdir.</p>
</blockquote>
| you are the stackoverflow assistant |
|
How to remove cuda completely from ubuntu? | <p>From cuda 11.4 onwards, an uninstaller script has been provided. Use it for the uninstallation:</p>
<pre><code># To uninstall cuda
sudo /usr/local/cuda-11.4/bin/cuda-uninstaller
# To uninstall nvidia
sudo /usr/bin/nvidia-uninstall
</code></pre>
<p>If you are using cuda 11.3 or earlier refer to the section below for how to remove it.</p>
<p>If you get the problem of <strong>broken packages</strong>, it happened because you added the repo to apt's sources.list. Run the following to remove it:</p>
<pre><code>sudo vim /etc/apt/sources.list
</code></pre>
<p>Go to the line containing the reference to the Nvidia repo and comment it out by prepending <code>#</code> to the line, e.g.:</p>
<pre><code>#deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /
</code></pre>
<p>Then run</p>
<pre><code>sudo apt-get update
</code></pre>
<p>This will fix the problem.</p>
<p><strong>Removing Cuda 11.3 and earlier</strong></p>
<p>There are two things you may want to remove: the Nvidia drivers and the CUDA toolkit.
If you have installed using <strong>apt-get</strong>, use the following to remove the packages completely from the system:</p>
<p>To remove cuda toolkit:</p>
<pre><code>sudo apt-get --purge remove "*cublas*" "cuda*" "nsight*"
</code></pre>
<p>To remove Nvidia drivers:</p>
<pre><code>sudo apt-get --purge remove "*nvidia*"
</code></pre>
<p>If you have installed via <strong>source files</strong> (assuming the default location to be <code>/usr/local</code>) then remove it using:</p>
<pre><code>sudo rm -rf /usr/local/cuda*
</code></pre>
<p>References: <a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#removing-cuda-toolkit-and-driver" rel="noreferrer">Nvidia uninstallation</a></p>
| you are the stackoverflow assistant |
|
What is the difference between Dataset.from_tensors and Dataset.from_tensor_slices? | <p><code>from_tensors</code> combines the input and returns a dataset with a single element:</p>
<pre><code>>>> t = tf.constant([[1, 2], [3, 4]])
>>> ds = tf.data.Dataset.from_tensors(t)
>>> [x for x in ds]
[<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[1, 2],
[3, 4]], dtype=int32)>]
</code></pre>
<p><code>from_tensor_slices</code> creates a dataset with a separate element for each row of the input tensor:</p>
<pre><code>>>> t = tf.constant([[1, 2], [3, 4]])
>>> ds = tf.data.Dataset.from_tensor_slices(t)
>>> [x for x in ds]
[<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>,
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>]
</code></pre>
| you are the stackoverflow assistant |
|
Higher validation accuracy than training accuracy using Tensorflow and Keras | <p>This happens when you use <code>Dropout</code>, since the behaviour during training and testing is different.</p>
<p>When training, a percentage of the features are set to zero (50% in your case since you are using <code>Dropout(0.5)</code>). When testing, all features are used (and are scaled appropriately). So the model at test time is more robust - and can lead to higher testing accuracies.</p>
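<p>You can see the asymmetry directly with a quick TF 2.x / Keras sketch (values chosen arbitrarily):</p>
<pre><code>import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Dropout(0.5)
x = np.ones((1, 4), dtype='float32')
print(layer(x, training=True).numpy())    # roughly half the features zeroed out
print(layer(x, training=False).numpy())   # [[1. 1. 1. 1.]] -- all features pass through
</code></pre>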
| you are the stackoverflow assistant |
|
Using Keras & Tensorflow with AMD GPU | <p>I'm writing an OpenCL 1.2 backend for Tensorflow at <a href="https://github.com/hughperkins/tensorflow-cl" rel="noreferrer">https://github.com/hughperkins/tensorflow-cl</a></p>
<p>This fork of tensorflow for OpenCL has the following characteristics:</p>
<ul>
<li>it targets any/all OpenCL 1.2 devices. It doesn't need OpenCL 2.0, doesn't need SPIR-V or SPIR, doesn't need Shared Virtual Memory, and so on ...</li>
<li>it's based on an underlying library called 'cuda-on-cl', <a href="https://github.com/hughperkins/cuda-on-cl" rel="noreferrer">https://github.com/hughperkins/cuda-on-cl</a>
<ul>
<li>cuda-on-cl aims to take <em>any</em> NVIDIA® CUDA™ source code and compile it for OpenCL 1.2 devices. It's a very general goal, and a very general compiler</li>
</ul></li>
<li>for now, the following functionalities are implemented:
<ul>
<li>per-element operations, using Eigen over OpenCL, (more info at <a href="https://bitbucket.org/hughperkins/eigen/src/eigen-cl/unsupported/test/cuda-on-cl/?at=eigen-cl" rel="noreferrer">https://bitbucket.org/hughperkins/eigen/src/eigen-cl/unsupported/test/cuda-on-cl/?at=eigen-cl</a> )</li>
<li>blas / matrix-multiplication, using Cedric Nugteren's CLBlast <a href="https://github.com/cnugteren/CLBlast" rel="noreferrer">https://github.com/cnugteren/CLBlast</a></li>
<li>reductions, argmin, argmax, again using Eigen, as per earlier info and links</li>
<li>learning, trainers, gradients. At least, the StochasticGradientDescent trainer is working, and the others are committed, but not yet tested</li>
</ul></li>
<li>it is developed on Ubuntu 16.04 (using Intel HD5500, and NVIDIA GPUs) and Mac Sierra (using Intel HD 530, and Radeon Pro 450)</li>
</ul>
<p>This is not the only OpenCL fork of Tensorflow available. There is also a fork being developed by Codeplay <a href="https://www.codeplay.com" rel="noreferrer">https://www.codeplay.com</a>, using Computecpp, <a href="https://www.codeplay.com/products/computesuite/computecpp" rel="noreferrer">https://www.codeplay.com/products/computesuite/computecpp</a>. Their fork has stronger requirements than my own, as far as I know, in terms of which specific GPU devices it works on. You would need to check the Platform Support Notes (at the bottom of the computecpp page) to determine whether your device is supported. The codeplay fork is actually an official Google fork, which is here: <a href="https://github.com/benoitsteiner/tensorflow-opencl" rel="noreferrer">https://github.com/benoitsteiner/tensorflow-opencl</a></p>
| you are the stackoverflow assistant |
|
How can I use a pre-trained neural network with grayscale images? | <p>The model's architecture <strong>cannot</strong> be changed because the weights have been trained for a specific input configuration. Replacing the first layer with your own would pretty much render the rest of the weights useless. </p>
<p>-- Edit: elaboration suggested by Prune--<br>
CNNs are built so that as they go deeper, they can extract high-level features derived from the lower-level features that the previous layers extracted. By removing the initial layers of a CNN, you are destroying that hierarchy of features because the subsequent layers won't receive the features that they are supposed to as their input. In your case the second layer has been trained to <strong>expect</strong> the features of the first layer. By replacing your first layer with random weights, you are essentially throwing away any training that has been done on the subsequent layers, as they would need to be retrained. I doubt that they could retain any of the knowledge learned during the initial training.<br>
--- end edit ---</p>
<p>There is an easy way, though, which you can make your model work with grayscale images. You just need to make the image to <strong>appear</strong> to be RGB. The easiest way to do so is to <em>repeat</em> the image array 3 times on a new dimension. Because you will have the <em>same image</em> over all 3 channels, the performance of the model should be the same as it was on RGB images.</p>
<p>In <em>numpy</em> this can be easily done like this:</p>
<pre><code>print(grayscale_batch.shape) # (64, 224, 224)
rgb_batch = np.repeat(grayscale_batch[..., np.newaxis], 3, -1)
print(rgb_batch.shape) # (64, 224, 224, 3)
</code></pre>
<p>The way this works is that it first creates a new dimension (to place the channels) and then it repeats the existing array 3 times on this new dimension.</p>
<p>I'm also pretty sure that keras' <a href="https://keras.io/preprocessing/image/" rel="noreferrer">ImageDataGenerator</a> can load grayscale images as RGB.</p>
| you are the stackoverflow assistant |
|
What is the purpose of the Tensorflow Gradient Tape? | <p>With eager execution enabled, Tensorflow will calculate the values of tensors as they occur in your code. This means that it won't precompute a static graph for which inputs are fed in through placeholders. This means to back propagate errors, you have to keep track of the gradients of your computation and then apply these gradients to an optimiser.</p>
<p>This is very different from running without eager execution, where you would build a graph and then simply use <code>sess.run</code> to evaluate your loss and then pass this into an optimiser directly. </p>
<p>Fundamentally, because tensors are evaluated immediately, you don't have a graph to calculate gradients and so you need a gradient tape. It is not so much that it is just used for visualisation, but more that you cannot implement a gradient descent in eager mode without it.</p>
<p>Obviously, Tensorflow could just keep track of every gradient for every computation on every <code>tf.Variable</code>. However, that could be a huge performance bottleneck. They expose a gradient tape so that you can control what areas of your code need the gradient information. Note that in non-eager mode, this will be statically determined based on the computational branches that are descendants of your loss but in eager mode there is no static graph and so no way of knowing. </p>
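<p>A minimal sketch of the eager training step described above, with toy values of my own:</p>
<pre><code>import tensorflow as tf

w = tf.Variable(2.0)
x = tf.constant(3.0)

with tf.GradientTape() as tape:
    loss = tf.square(w * x - 1.0)     # operations on watched variables are recorded on the tape

grad = tape.gradient(loss, w)         # d(loss)/dw = 2 * (w*x - 1) * x = 30.0
tf.keras.optimizers.SGD(learning_rate=0.01).apply_gradients([(grad, w)])
</code></pre>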
| you are the stackoverflow assistant |
|
On Windows, running "import tensorflow" generates No module named "_pywrap_tensorflow" error | <p>The problem was the cuDNN Library for me - for whatever reason cudnn-8.0-windows10-x64-v6.0 was NOT working - I used cudnn-8.0-windows10-x64-v5.1 - ALL GOOD!</p>
<p>My setup working with Win10 64 and the Nvidia GTX780M:</p>
<ul>
<li>Be sure you have the lib MSVCP140.DLL by checking your system/path - if not get it <a href="https://www.microsoft.com/en-us/download/details.aspx?id=48145" rel="noreferrer">here</a></li>
<li>Run the windows installer for python 3.5.3-amd64 from <a href="https://www.python.org/downloads/release/python-352/" rel="noreferrer">here</a> - DO NOT try newer versions as they probably won't work </li>
<li>Get the cuDNN v5.1 for CUDA 8.0 from <a href="https://developer.nvidia.com/rdp/cudnn-download" rel="noreferrer">here</a> - put it under your users folder or in another known location (you will need this in your path)</li>
<li>Get CUDA 8.0 x86_64 from <a href="https://developer.nvidia.com/cuda-downloads" rel="noreferrer">here</a></li>
<li>Set PATH vars as expected to point at the cuDNN libs and python (the python path should be added during the python install)</li>
<li>Make sure that ".DLL" is included in your PATHEXT variable</li>
<li>If you are using tensorflow 1.3 then you want to use cudnn64_6.dll <a href="https://github.com/tensorflow/tensorflow/issues/7705" rel="noreferrer">github.com/tensorflow/tensorflow/issues/7705</a></li>
</ul>
<p>If you run Windows 32 be sure to get the 32 bit versions of the files mentioned above.</p>
| you are the stackoverflow assistant |
|
How to do Xavier initialization on TensorFlow | <p>Since version 0.8 there is a Xavier initializer, <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/contrib.layers.html#xavier_initializer">see here for the docs</a>.</p>
<p>You can use something like this:</p>
<pre><code>W = tf.get_variable("W", shape=[784, 256],
initializer=tf.contrib.layers.xavier_initializer())
</code></pre>
| you are the stackoverflow assistant |
|
How to assign a value to a TensorFlow variable? | <p>In TF1, the statement <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/Variable#assign" rel="noreferrer"><code>x.assign(1)</code></a> does not actually assign the value <code>1</code> to <code>x</code>, but rather creates a <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Operation" rel="noreferrer"><code>tf.Operation</code></a> that you have to explicitly <em>run</em> to update the variable.* A call to <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Operation#run" rel="noreferrer"><code>Operation.run()</code></a> or <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/Session#run" rel="noreferrer"><code>Session.run()</code></a> can be used to run the operation:</p>
<pre><code>assign_op = x.assign(1)
sess.run(assign_op) # or `assign_op.op.run()`
print(x.eval())
# ==> 1
</code></pre>
<p>(* In fact, it returns a <code>tf.Tensor</code>, corresponding to the updated value of the variable, to make it easier to chain assignments.)</p>
<p>However, in TF2 <a href="https://www.tensorflow.org/api_docs/python/tf/Variable#assign" rel="noreferrer"><code>x.assign(1)</code></a> will now assign the value eagerly:</p>
<pre><code>x.assign(1)
print(x.numpy())
# ==> 1
</code></pre>
| you are the stackoverflow assistant |
|
How to *actually* read CSV data in TensorFlow? | <p>I think you are mixing up imperative and graph-construction parts here. The operation <code>tf.train.shuffle_batch</code> creates a new queue node, and a single node can be used to process the entire dataset. So I think you are hanging because you created a bunch of <code>shuffle_batch</code> queues in your for loop and didn't start queue runners for them. </p>
<p>Normal input pipeline usage looks like this:</p>
<ol>
<li>Add nodes like <code>shuffle_batch</code> to input pipeline</li>
<li>(optional, to prevent unintentional graph modification) finalize graph</li>
</ol>
<p>--- end of graph construction, beginning of imperative programming --</p>
<ol start="3">
<li><code>tf.train.start_queue_runners</code></li>
<li><code>while(True): session.run()</code></li>
</ol>
<p>To be more scalable (to avoid Python GIL), you could generate all of your data using TensorFlow pipeline. However, if performance is not critical, you can hook up a numpy array to an input pipeline by using <code>slice_input_producer.</code> Here's an example with some <code>Print</code> nodes to see what's going on (messages in <code>Print</code> go to stdout when node is run)</p>
<pre><code>import numpy as np
import tensorflow as tf

tf.reset_default_graph()
num_examples = 5
num_features = 2
data = np.reshape(np.arange(num_examples*num_features), (num_examples, num_features))
print(data)
(data_node,) = tf.train.slice_input_producer([tf.constant(data)], num_epochs=1, shuffle=False)
data_node_debug = tf.Print(data_node, [data_node], "Dequeueing from data_node ")
data_batch = tf.train.batch([data_node_debug], batch_size=2)
data_batch_debug = tf.Print(data_batch, [data_batch], "Dequeueing from data_batch ")
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
sess.run(tf.local_variables_initializer())  # num_epochs creates a local epoch counter
tf.get_default_graph().finalize()
tf.train.start_queue_runners()
try:
  while True:
    print(sess.run(data_batch_debug))
except tf.errors.OutOfRangeError as e:
  print("No more inputs.")
</code></pre>
<p>You should see something like this</p>
<pre><code>[[0 1]
[2 3]
[4 5]
[6 7]
[8 9]]
[[0 1]
[2 3]]
[[4 5]
[6 7]]
No more inputs.
</code></pre>
<p>The "8, 9" numbers didn't fill up the full batch, so they didn't get produced. Also <code>tf.Print</code> are printed to sys.stdout, so they show up in separately in Terminal for me.</p>
<p>PS: a minimal example of connecting <code>batch</code> to a manually initialized queue is in <a href="https://github.com/tensorflow/tensorflow/issues/2193">github issue 2193</a></p>
<p>Also, for debugging purposes you might want to set <code>timeout</code> on your session so that your IPython notebook doesn't hang on empty queue dequeues. I use this helper function for my sessions</p>
<pre><code>def create_session():
config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.per_process_gpu_memory_fraction=0.3 # don't hog all vRAM
config.operation_timeout_in_ms=60000 # terminate on long hangs
# create interactive session to register a default session
sess = tf.InteractiveSession("", config=config)
return sess
</code></pre>
<p>Scalability Notes:</p>
<ol>
<li><code>tf.constant</code> inlines copy of your data into the Graph. There's a fundamental limit of 2GB on size of Graph definition so that's an upper limit on size of data</li>
<li>You could get around that limit by using <code>v=tf.Variable</code> and saving the data into there by running <code>v.assign_op</code> with a <code>tf.placeholder</code> on right-hand side and feeding numpy array to the placeholder (<code>feed_dict</code>)</li>
<li>That still creates two copies of data, so to save memory you could make your own version of <code>slice_input_producer</code> which operates on numpy arrays, and uploads rows one at a time using <code>feed_dict</code></li>
</ol>
| you are the stackoverflow assistant |
|
In Keras, what exactly am I configuring when I create a stateful `LSTM` layer with N `units`? | <p>You can check <a href="https://stackoverflow.com/questions/38714959/understanding-keras-lstms/38737941#38737941">this question</a> for further information, although it is based on Keras-1.x API.</p>
<p>Basically, the <code>unit</code> means the dimension of the inner cells in LSTM. Because in LSTM, the dimension of inner cell (C_t and C_{t-1} in the graph), output mask (o_t in the graph) and hidden/output state (h_t in the graph) should have the <strong>SAME</strong> dimension, therefore you output's dimension should be <code>unit</code>-length as well.</p>
<p>And <code>LSTM</code> in Keras only define exactly one LSTM block, whose cells is of <code>unit</code>-length. If you set <code>return_sequence=True</code>, it will return something with shape: <code>(batch_size, timespan, unit)</code>. If <code>false</code>, then it just return the last output in shape <code>(batch_size, unit)</code>. </p>
<p>As for the input, you should provide input for every timestamp. Basically, the shape is like <code>(batch_size, timespan, input_dim)</code>, where <code>input_dim</code> can be different from the <code>unit</code>. If you just want to provide input at the first step, you can simply pad your data with zeros at other time steps. </p>
| you are the stackoverflow assistant |
|
RuntimeError: tf.placeholder() is not compatible with eager execution | <p>I found an easy solution here: <a href="https://stackoverflow.com/questions/53429896/disable-tensorflow-eager-execution">disable Tensorflow eager execution</a></p>
<p>Basicaly it is:</p>
<p><code>tf.compat.v1.disable_eager_execution()</code></p>
<p>With this, you disable the default activate eager execution and you don't need to touch the code much more.</p>
| you are the stackoverflow assistant |
|
Making predictions with a TensorFlow model | <p>In the "<a href="https://www.tensorflow.org/get_started/mnist/pros" rel="noreferrer">Deep MNIST for Experts</a>" example, see this line:</p>
<blockquote>
<p>We can now implement our regression model. It only takes one line! We
multiply the vectorized input images x by the weight matrix W, add the
bias b, and compute the softmax probabilities that are assigned to
each class.</p>
<pre><code>y = tf.nn.softmax(tf.matmul(x,W) + b)
</code></pre>
</blockquote>
<p>Just pull on node y and you'll have what you want.</p>
<pre><code># `sess` is the Session (or InteractiveSession) created earlier in the tutorial
feed_dict = {x: [your_image]}
classification = sess.run(y, feed_dict)
print(classification)
</code></pre>
<p>This applies to just about any model you create - you'll have computed the prediction probabilities as one of the last steps before computing the loss.</p>
| you are the stackoverflow assistant |
|
How to inspect a Tensorflow .tfrecord file? | <p>Found it!</p>
<pre><code>import tensorflow as tf
for example in tf.python_io.tf_record_iterator("data/foobar.tfrecord"):
print(tf.train.Example.FromString(example))
</code></pre>
<p>You can also add:</p>
<pre><code>from google.protobuf.json_format import MessageToJson
...
jsonMessage = MessageToJson(tf.train.Example.FromString(example))
</code></pre>
| you are the stackoverflow assistant |
|
Clearing Tensorflow GPU memory after model execution | <p>You can use numba library to release all the gpu memory</p>
<pre class="lang-sh prettyprint-override"><code>pip install numba
</code></pre>
<pre class="lang-py prettyprint-override"><code>from numba import cuda
device = cuda.get_current_device()
device.reset()
</code></pre>
<p>This will release all the memory</p>
| you are the stackoverflow assistant |
|
How do I disable TensorFlow's eager execution? | <p>Assuming you are using the Tensorflow 2.0 preview release, which has eager execution enabled by default: there is a <code>disable_eager_execution()</code> in the v1 API, which you can put at the front of your code like this:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
tf.compat.v1.disable_eager_execution()
</code></pre>
<p>On the other hand, if you are not using <strong>2.0 preview</strong>, please check if you accidentally enabled eager execution somewhere.</p>
| you are the stackoverflow assistant |
|
How to import keras from tf.keras in Tensorflow? | <p>Use the keras module from tensorflow like this:</p>
<p><code>import tensorflow as tf</code></p>
<p>Import classes</p>
<p><code>from tensorflow.python.keras.layers import Input, Dense</code></p>
<p>or use directly</p>
<p><code>dense = tf.keras.layers.Dense(...)</code></p>
<p><strong>EDIT Tensorflow 2</strong></p>
<p><code>from tensorflow.keras.layers import Input, Dense</code></p>
<p>and the rest stays the same.</p>
| you are the stackoverflow assistant |
|
Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, | <p>I've seen this error message for three different reasons, with different solutions:</p>
<h2>1. You have cache issues</h2>
<p>I regularly work around this error by shutting down my python process, removing the <code>~/.nv</code> directory (on linux, <code>rm -rf ~/.nv</code>), and restarting the Python process. I don't exactly know why this works. It's probably at least partly related to the second option:</p>
<h2>2. You're out of memory</h2>
<p>The error can also show up if you run out of graphics card RAM. With an nvidia GPU you can check graphics card memory usage with <code>nvidia-smi</code>. This will give you a readout of how much GPU RAM you have in use (something like <code>6025MiB / 6086MiB</code> if you're almost at the limit) as well as a list of what processes are using GPU RAM.</p>
<p>If you've run out of RAM, you'll need to restart the process (which should free up the RAM) and then take a less memory-intensive approach. A few options are:</p>
<ul>
<li>reducing your batch size</li>
<li>using a simpler model</li>
<li>using less data</li>
<li>limit TensorFlow GPU memory fraction: For example, the following will make sure TensorFlow uses <= 90% of your RAM:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import keras
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9 # 0.6 sometimes works better for folks
keras.backend.tensorflow_backend.set_session(tf.Session(config=config))
</code></pre>
<p>This can slow down your model evaluation if not used together with the items above, presumably since the large data set will have to be swapped in and out to fit into the small amount of memory you've allocated.</p>
<p>A second option is to have TensorFlow start out using only a minimum amount of memory and then allocate more as needed (documented <a href="https://www.tensorflow.org/guide/gpu" rel="noreferrer">here</a>):</p>
<pre class="lang-py prettyprint-override"><code>os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'
</code></pre>
<h2>3. You have incompatible versions of CUDA, TensorFlow, NVIDIA drivers, etc.</h2>
<p>If you've never had similar models working, you're not running out of VRAM <strong>and</strong> your cache is clean, I'd go back and set up CUDA + TensorFlow using the best available installation guide - I have had the most success with following the instructions at <a href="https://www.tensorflow.org/install/gpu" rel="noreferrer">https://www.tensorflow.org/install/gpu</a> rather than those on the NVIDIA / CUDA site. <a href="https://lambdalabs.com/lambda-stack-deep-learning-software" rel="noreferrer">Lambda Stack</a> is also a good way to go.</p>
| you are the stackoverflow assistant |
|
NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array | <p>For me, the issue occurred when upgrading from <code>numpy 1.19</code> to <code>1.20</code> and using <code>ray</code>'s RLlib, which uses <code>tensorflow 2.2</code> internally.
Simply downgrading with</p>
<pre><code>pip install numpy==1.19.5
</code></pre>
<p>solved the problem; the error did not occur anymore.</p>
<p>Update (comment by @codeananda): You can also update to a newer TensorFlow (2.6+) version now that resolves the problem (<code>pip install -U tensorflow</code>).</p>
| you are the stackoverflow assistant |
|
Get the value of some weights in a model trained by TensorFlow | <p>In TensorFlow, trained weights are represented by <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/state_ops.html#Variable"><code>tf.Variable</code></a> objects. If you created a <code>tf.Variable</code>—e.g. called <code>v</code>—yourself, you can get its value as a NumPy array by calling <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/client.html#Session.run"><code>sess.run(v)</code></a> (where <code>sess</code> is a <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/client.html#Session"><code>tf.Session</code></a>).</p>
<p>If you do not currently have a pointer to the <code>tf.Variable</code>, you can get a list of the trainable variables in the current graph by calling <a href="https://www.tensorflow.org/versions/r0.7/api_docs/python/state_ops.html#trainable_variables"><code>tf.trainable_variables()</code></a>. This function returns a list of all trainable <code>tf.Variable</code> objects in the current graph, and you can select the one that you want by matching the <code>v.name</code> property. For example:</p>
<pre><code># Desired variable is called "tower_2/filter:0".
var = [v for v in tf.trainable_variables() if v.name == "tower_2/filter:0"][0]
</code></pre>
| you are the stackoverflow assistant |
|
How could I use batch normalization in TensorFlow? | <p><strong>Update July 2016</strong> The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in either <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/layers.py" rel="noreferrer">contrib/layers</a>, <a href="http://tflearn.org/layers/normalization/" rel="noreferrer">tflearn</a>, or <a href="https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py" rel="noreferrer">slim</a>.</p>
<p><strong>Previous answer if you want to DIY</strong>:
The documentation string for this has improved since the release - see the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc#L65" rel="noreferrer">docs comment in the master branch</a> instead of the one you found. It clarifies, in particular, that it's the output from <code>tf.nn.moments</code>.</p>
<p>You can see a very simple example of its use in the <a href="https://github.com/tensorflow/tensorflow/blob/3972c791b9f4d9a61b9ad6399b481df396f359ff/tensorflow/python/ops/nn_test.py#L518" rel="noreferrer">batch_norm test code</a>. For a more real-world use example, I've included below the helper class and use notes that I scribbled up for my own use (no warranty provided!):</p>
<pre class="lang-py prettyprint-override"><code>"""A helper class for managing batch normalization state.
This class is designed to simplify adding batch normalization
(http://arxiv.org/pdf/1502.03167v3.pdf) to your model by
managing the state variables associated with it.
Important use note: The function get_assigner() returns
an op that must be executed to save the updated state.
A suggested way to do this is to make execution of the
model optimizer force it, e.g., by:
update_assignments = tf.group(bn1.get_assigner(),
bn2.get_assigner())
with tf.control_dependencies([optimizer]):
optimizer = tf.group(update_assignments)
"""
import tensorflow as tf
class ConvolutionalBatchNormalizer(object):
"""Helper class that groups the normalization logic and variables.
Use:
ewma = tf.train.ExponentialMovingAverage(decay=0.99)
bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True)
update_assignments = bn.get_assigner()
x = bn.normalize(y, train=training?)
(the output x will be batch-normalized).
"""
def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm):
self.mean = tf.Variable(tf.constant(0.0, shape=[depth]),
trainable=False)
self.variance = tf.Variable(tf.constant(1.0, shape=[depth]),
trainable=False)
self.beta = tf.Variable(tf.constant(0.0, shape=[depth]))
self.gamma = tf.Variable(tf.constant(1.0, shape=[depth]))
self.ewma_trainer = ewma_trainer
self.epsilon = epsilon
self.scale_after_norm = scale_after_norm
def get_assigner(self):
"""Returns an EWMA apply op that must be invoked after optimization."""
return self.ewma_trainer.apply([self.mean, self.variance])
def normalize(self, x, train=True):
"""Returns a batch-normalized version of x."""
if train:
mean, variance = tf.nn.moments(x, [0, 1, 2])
assign_mean = self.mean.assign(mean)
assign_variance = self.variance.assign(variance)
with tf.control_dependencies([assign_mean, assign_variance]):
return tf.nn.batch_norm_with_global_normalization(
x, mean, variance, self.beta, self.gamma,
self.epsilon, self.scale_after_norm)
else:
mean = self.ewma_trainer.average(self.mean)
variance = self.ewma_trainer.average(self.variance)
local_beta = tf.identity(self.beta)
local_gamma = tf.identity(self.gamma)
return tf.nn.batch_norm_with_global_normalization(
x, mean, variance, local_beta, local_gamma,
self.epsilon, self.scale_after_norm)
</code></pre>
<p>Note that I called it a <code>ConvolutionalBatchNormalizer</code> because it pins the use of <code>tf.nn.moments</code> to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0.</p>
<p>Feedback appreciated if you use it.</p>
| you are the stackoverflow assistant |
|
tensorflow:AttributeError: 'module' object has no attribute 'mul' | <p>According to the <a href="https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md" rel="noreferrer">tensorflow 1.0.0 release notes</a>, </p>
<blockquote>
<p><code>tf.mul</code>, <code>tf.sub</code> and <code>tf.neg</code> are deprecated in favor of <code>tf.multiply</code>, <code>tf.subtract</code> and <code>tf.negative</code>.</p>
</blockquote>
<p>You'll need to replace <code>tf.mul</code> with <code>tf.multiply</code>.</p>
| you are the stackoverflow assistant |
|
What is the default kernel initializer in tf.layers.conv2d and tf.layers.dense? |
<p>Great question! It is quite a trick to find out!</p>
<ul>
<li>As you can see, it is not documented in <a href="https://www.tensorflow.org/api_docs/python/tf/layers/conv2d" rel="noreferrer"><code>tf.layers.conv2d</code></a></li>
<li>If you look at the definition of <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/layers/convolutional.py" rel="noreferrer">the function</a> you see that the function calls <code>variable_scope.get_variable</code>: </li>
</ul>
<p>In code: </p>
<pre class="lang-py prettyprint-override"><code>self.kernel = vs.get_variable('kernel',
shape=kernel_shape,
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
trainable=True,
dtype=self.dtype)
</code></pre>
<p>Next step: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/variable_scope.py" rel="noreferrer">what does the variable scope do when the initializer is None?</a></p>
<p>Here it says: </p>
<blockquote>
<p>If initializer is <code>None</code> (the default), the default initializer passed in
the constructor is used. If that one is <code>None</code> too, we use a new
<code>glorot_uniform_initializer</code>.</p>
</blockquote>
<p>So the answer is: it uses the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/init_ops.py" rel="noreferrer"><code>glorot_uniform_initializer</code></a></p>
<p>For completeness the definition of this initializer: </p>
<blockquote>
<p>The Glorot uniform initializer, also called Xavier uniform initializer.
It draws samples from a uniform distribution within [-limit, limit]
where <code>limit</code> is <code>sqrt(6 / (fan_in + fan_out))</code>
where <code>fan_in</code> is the number of input units in the weight tensor
and <code>fan_out</code> is the number of output units in the weight tensor.
Reference: <a href="http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf" rel="noreferrer">http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf</a></p>
</blockquote>
<p>Edit: this is what I found in the code and documentation. Perhaps you could verify that the initialization looks like this by running eval on the weights!</p>
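<p>One way to check this empirically, as a TF 1.x sketch (the tensor name assumes this is the first <code>dense</code> layer in the graph):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
y = tf.layers.dense(x, 3)                     # no kernel_initializer given

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    kernel = tf.get_default_graph().get_tensor_by_name('dense/kernel:0')
    print(sess.run(kernel))   # values lie within +/- sqrt(6 / (4 + 3)) ~= 0.93
</code></pre>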
| you are the stackoverflow assistant |
|
What is the proper way to install TensorFlow on Apple M1 in 2022 | <h1>Conda Environment YAMLs</h1>
<h2>TensorFlow 2.13+</h2>
<p>Distilling <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">the official directions from Apple</a> (as of 24 November 2024), one would create an environment using the following YAML:</p>
<p><strong>tf-metal-arm64.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>name: tf-metal
channels:
- conda-forge
- nodefaults
dependencies:
- python=3.11 ## specify desired version
- pip
## uncomment for use with Jupyter
## - ipykernel
## PyPI packages
- pip:
- tensorflow
- tensorflow-metal
</code></pre>
<h2>TensorFlow <= 2.12 (original directions)</h2>
<p>Distilling <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">the official directions from Apple</a> (as of 13 July 2022), one would create an environment using the following YAML:</p>
<p><strong>tf-metal-arm64.yaml</strong></p>
<pre class="lang-yaml prettyprint-override"><code>name: tf-metal
channels:
- apple
- conda-forge
dependencies:
- python=3.9 ## specify desired version
- pip
- tensorflow-deps
## uncomment for use with Jupyter
## - ipykernel
## PyPI packages
- pip:
- tensorflow-macos
- tensorflow-metal ## optional, but recommended
</code></pre>
<p>Edit the YAML to include any additional packages you need.</p>
<h1>Creating environment</h1>
<p>Before creating the environment we need to know what the base architecture is. Check this with <code>conda config --show subdir</code>.</p>
<h2>Native (<strong>osx-arm64</strong>) base</h2>
<p>If you have installed a native <strong>osx-arm64</strong> <a href="https://github.com/conda-forge/miniforge" rel="nofollow noreferrer">Miniforge variant</a>, then you can create with:</p>
<pre class="lang-bash prettyprint-override"><code>conda env create -n my_tf_env -f tf-metal-arm64.yaml
</code></pre>
<h2>Emulated (<strong>osx-64</strong>) base</h2>
<p>If you do not have a native <strong>base</strong>, then you will need to override the <code>subdir</code> setting:</p>
<pre class="lang-bash prettyprint-override"><code>## create env
CONDA_SUBDIR=osx-arm64 conda env create -n my_tf_env -f tf-metal-arm64.yaml
## activate
conda activate my_tf_env
## permanently set the subdir
conda config --env --set subdir osx-arm64
</code></pre>
<p>Be sure to always activate the environment before installing or updating packages.</p>
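<p>Once the environment is set up, a quick sanity check (the exact output formatting varies by TensorFlow version) is to confirm that the Metal plugin exposes a GPU device:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

print(tf.__version__)
# With tensorflow-metal installed correctly, the list should include a GPU entry,
# e.g. PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU').
print(tf.config.list_physical_devices())
</code></pre>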
| you are the stackoverflow assistant |
TensorFlow, PyTorch, and Keras Framework Dataset: StackOverflow & GitHub
Dataset Overview
This dataset contains a collection of questions, answers, and code snippets related to the TensorFlow, PyTorch, and Keras frameworks, sourced from StackOverflow and GitHub repositories. It provides a comprehensive resource for researchers, practitioners, and developers interested in analyzing and enhancing their understanding of these popular machine learning frameworks.
Data Sources
- StackOverflow posts: User-generated questions and answers focused on common issues, optimizations, and best practices in TensorFlow, PyTorch, and Keras.
- GitHub repositories: Relevant code snippets, documentation, and discussions from open-source repositories that leverage these frameworks.
Use Cases
The dataset is designed to support various tasks such as:
- Natural Language Processing (NLP)
- Sentiment analysis
- Code summarization
- Question-answering models
It is a valuable resource for AI research and development related to deep learning and machine learning frameworks.
Data Structure
- StackOverflow Data: Contains questions, answers, and comments. Each entry includes metadata such as post ID, user ID, score, and relevant tags.
- GitHub Data: Includes code snippets, repository metadata, issues, and pull requests.
License
This dataset is made available for research and educational purposes. Please refer to the respective licenses of StackOverflow and GitHub for usage terms.
Acknowledgements
We would like to acknowledge StackOverflow and GitHub for providing the rich source of data that this dataset is based on.
Citation
If you use this dataset in your research, please cite it as follows: