st205700 | The parameter ‘photo’ contains the feature (characteristics) vector of the photo’s detections; I then walk through each description for the image:
for desc in desc_list:
    # encode the sequence
    seq = tokenizer.texts_to_sequences([desc])[0]
    # split one sequence into multiple X,y pairs
    for i in range(1, len(seq)):
        # split into input and output pair
        in_seq, out_seq = seq[:i], seq[i]
        # pad input sequence
        in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
        # encode output sequence
        out_seq = to_categorical(…)
This is what I am doing … is there anything wrong with the types here, or do you need print(type) for each one? |
st205701 | Excuse me, I didn’t get what you mean by numpy.array().
I tried to handle photo with
X1.append(np.array(photo)) but got:
ValueError: failed to convert numpy array to tensor (unsupported object type numpy.ndarray) |
st205702 | I checked the data now: X1 looks like
array([9.88089450e-05, 2.25199750e-04, 7.83673313e-05, 1.24953804e-04,
4.95471577e-05, … ,
dtype=float32)]
X2 looks like
[array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1], dtype=int32)
and y looks like
[array([0., 0., 0., …, 0., 0., 0.], dtype=float32) |
st205703 | How about changing
return array(X1), array(X2), array(y)
to
return numpy.asarray(X1), numpy.asarray(X2), numpy.asarray(y)
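(A minimal side-note sketch of why the conversion can fail, with illustrative shapes: numpy only stacks equal-shaped arrays into one numeric array, and TF cannot convert an object array.)
import numpy as np

a = [np.zeros(49, dtype="float32"), np.zeros(49, dtype="float32")]
print(np.asarray(a).shape)  # (2, 49): equal shapes stack into a numeric array

b = [np.zeros(49), np.zeros(50)]
ragged = np.asarray(b, dtype=object)
print(ragged.shape)  # (2,): ragged shapes stay an object array, which TF cannot convert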
Does it still cause the same error? |
st205704 | Running a Keras model in eager execution causes most of the ops to be placed on the host instead of the device, which obviously makes eager execution much slower. Is this an issue, or is that just how eager execution works?
TensorFlow Profiler output:
Code : keras-io/mnist_convnet.py at master · keras-team/keras-io · GitHub
Same code with run_eagerly=True in the model.compile().
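(i.e. the only change relative to that example is the extra flag in compile; quoted from memory, hedged:)
model.compile(loss="categorical_crossentropy", optimizer="adam",
              metrics=["accuracy"], run_eagerly=True)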
system: 5.10.42-1-MANJARO
version: tensorflow 2.5 (Manjaro repository) |
st205705 | I used both sklearn’s random forests and tfdf on the same dataset. The results were very different between the two. Below is my configuration for the sklearn one.
RandomForestClassifier(n_estimators=1000, max_depth=16, oob_score=True, min_samples_leaf=1, random_state=42, n_jobs=-1)
I tried to use the same configuration with tfdf’s, but no luck. Please correct me on the configuration if I am wrong. |
st205706 | Hi,
While both SkLearn and TF-DF implement the classical Random Forest algorithm, there are a few differences between the implementations. For this reason, it is expected that the results (both the model structure and the model quality) will not be exactly the same (but still very close).
Following are some parameter values that should make sklearn’s RandomForestClassifier as close as possible to TF-DF’s Random Forest.
PS: Random Forest and Gradient Boosted Trees are different algorithms.
n_estimators = 300
max_depth = 16
criterion = "entropy"
min_samples_split = 5
In addition, if the problem is a regression, make sure to have:
max_features = 1./3
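Put together, a hedged sketch of the equivalent sklearn setup (X_train / y_train are placeholders; for a regression, use RandomForestRegressor with the max_features value above):
from sklearn.ensemble import RandomForestClassifier

# assembles the parameter values listed above
clf = RandomForestClassifier(
    n_estimators=300,
    max_depth=16,
    criterion="entropy",
    min_samples_split=5,
)
clf.fit(X_train, y_train)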
If your dataset contains categorical or categorical-set features, there are no equivalent sklearn parameters, as it does not support those types of features.
If the differences are large, it would be very interesting for us to look at them. |
st205707 | Hello Mathieu,
Thanks for your answer!
The results are hugely different I’d say. I have 3 classes that I’d like to classify, please see the results below!
The result from Sklearn
precision recall f1-score support
-1 0.67 0.04 0.07 338
0 0.71 0.99 0.83 1002
1 0.00 0.00 0.00 86
The result from TFDF
precision recall f1-score support
-1 1.00 0.94 0.97 338
0 0.90 1.00 0.95 1002
1 0.00 0.00 0.00 86
I set everything just like the given code snippet. It’s intriguing, isn’t it?
The datasets I used for the 2 models were basically the same - all categorical data (text) was removed - and the targets (ground truth) were mapped to positive integer indices [0, 1, 2]. Basically, the ingredients for sklearn and TFDF were the same.
Notice that the dataset is very imbalanced, but TFDF did a very impressive job. This is very cool, but I don’t want to be fooled by the metrics. I just want to make sure the models work correctly. ^^ |
st205708 | Hello guys,
Just to clarify, the performance of sklearn’s and tensorflow’s random forests is the same. It was actually my fault in processing the data: I removed the most important feature from the training data. In my case, sklearn’s side works a little better. Have a nice day! |
st205709 | hi Keile, I’m happy to hear that. If it were significantly different (in any direction) we would be concerned/curious for more details.
cheers! |
st205710 | Hi, I made a “simple” python script with an embedding layer, skip connections… for tabular data, using the usual functional API for a multi-classification target, and I tried to connect tfdf.keras.GradientBoostedTreesModel to one of the last layers (before the softmax) to see if it improves the result. It works, and when you compare the Neural Network alone and the NN + GradientBoostedTrees, you always get a better logloss result. But when you don’t import tensorflow_decision_forests as tfdf, the NN gets better results than in the first case, even compared to NN + Gradient. I made several tests and the difference is quite significant and reproducible. Weird? It seems that tf 2.5 gets worse results compared to 2.4.1 (script without tfdf)?
st205711 | Thank you very much.
@Mathieu without importing tfdf but only tf (version 2.5.0), I get the same issue for the NN model; tf 2.4.1 gives a much better log_loss result (based on many tries) with the same script. So it does not come from tfdf but from tf version 2.5.0.
I guess there is no workaround to use tfdf with tf 2.4.1?
For your information, the script imports:
import tensorflow as tf
import tensorflow_addons as tfa
import tensorflow.keras.backend as K
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import activations,callbacks
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras import initializers
from keras.models import Model |
st205712 | @Mathieu, after further investigation, it appears while the pip installation is running that tfdf uses a nightly version of Keras (keras-nightly 2.5.0.dev2021032900). I didn’t succeed in using keras==2.4.3, which (like 2.4.1) also gives good results for the NN alone… which is not the case with tf & Keras 2.5. Again, I’m afraid we cannot use tfdf with Keras 2.4.3? |
st205713 | Hi Laurent,
About your observations, this sounds like one of the following:
1. A regression in the TF2.4.1->TF2.5.0 code.
2. A change of implementation (e.g. changing a random seed) that will affect individual results, but should not impact global distributions.
3. A change in API, e.g. the default value / logic of a method has been changed.
Case 2 can be validated or eliminated by replicating the experiment multiple times: If the results are 50% of the time better, case 2 is likely true. If the results are always worse, case 2 is likely false.
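A minimal sketch of such a replication experiment (build_and_train is a hypothetical helper wrapping your model construction, fit and log_loss evaluation; the dummy body only keeps the sketch runnable):
import numpy as np

def build_and_train(seed):
    # placeholder: substitute your model construction + fit + log_loss here
    rng = np.random.default_rng(seed)
    return rng.normal(0.5, 0.01)  # dummy value standing in for the real metric

losses = np.array([build_and_train(seed) for seed in range(10)])
print(losses.mean(), losses.std())
# Run the same script under TF 2.4.1 and under TF 2.5.0 and compare the two
# distributions: heavy overlap suggests case 2; a consistent shift suggests
# case 1 or 3.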
For case 1 and 3, it might be interesting to create and share a minimal colab using public data that illustrates the problem. If the complexity of the model is small, it might be possible to detect case 3. Otherwise, this is the kind of colab that should be reported as an issue to TF. However, there is no guarantee of a quick resolution.
A few things have changed internally between TF2.4.1 and TF2.5.0, and we do not provide TF-DF builds for both versions. It is possible to recompile TF-DF for TF2.4.1 with a few tweaks, but it is probably not interesting in your case. Sorry I cannot help you more.
Cheers,
M. |
st205714 | Hello,
I am trying to get TensorFlow to apply a variable noise floor to some X & Y data. Essentially this:
[image: sketch of the desired noise-floor cutoff]
The problem I am having is that the boolean masking function does not return a tensor of the same shape that I fed it. Moreover, I cannot assign a NULL, ZERO or some default value to the cells that are below the cutoff. Here is the code I have:
def NoiseFloor(x):
    NFValue = tf.Variable(1., dtype=tf.float64, constraint=lambda t: tf.clip_by_value(t, 10, 20))
    y = tf.fill(tf.shape(x), NFValue)
    y = tf.cast(y, np.float32)
    return tf.math.greater(x, y)

InputLayer = layers.Dense(5, activation=NoiseFloor, name="Input_layer")(TensorRFPower)
The issue is that this returns a True/False array with a different overall shape. |
st205715 | So I have been trying to solve this issue with no luck so far. I switched to trying to gain access to the elements in the tensor, so I could use an IF statement to make the element assignments. I tried a couple different functions:
tf.tensor_scatter_nd_add(
tf.assign_sub(
tf.assign(
Accessing the elements in the tensor itself (in v2) seems to be limited. Like I said I can:
Get a Bool Array of values above and below the cutoff
Apply a Bool Mask to that array and then get the values. But this array is not the same shape.
I also looked at recursive code to step through the elements in this array
def my_elementwise_func(tensorelement):
    return tensorelement + 1

def NoiseFloor(x):
    print(x)
    if tf.shape(x).ndims > 0:
        return tf.map_fn(NoiseFloor, x)
    else:
        return my_elementwise_func(x)

result = NoiseFloor(x)
print(result)
I just can’t believe there isn’t an existing TensorFlow function that already does this. I am going to look at SciPy if nobody gets back to me. |
st205716 | Something like this?
def NoiseFloor(x):
    mask = tf.greater(x, 7)
    zeros = tf.zeros(x.shape, tf.int32)
    return tf.where(mask, x, zeros)

inp = tf.constant([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
out = NoiseFloor(inp)
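If the threshold should stay trainable and clipped, as in the original snippet, the same shape-preserving pattern can be wrapped in a layer. A hedged sketch (NoiseFloorLayer is a hypothetical name; note the hard threshold passes no gradient to the threshold variable itself):
class NoiseFloorLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.nf_value = tf.Variable(
            10., constraint=lambda t: tf.clip_by_value(t, 10., 20.))

    def call(self, x):
        # same shape as the input: values below the floor become 0
        return tf.where(x > self.nf_value, x, tf.zeros_like(x)) |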
st205717 | Hi all,
like many of you, I was very excited to learn about TensorFlow Decision Forests, and especially that its API lies within Keras. However, I am unable to install it. My pip install returns the following message:
“ERROR: Could not find a version that satisfies the requirement tensorflow_decision_forests (from versions: none)
ERROR: No matching distribution found for tensorflow_decision_forests”
Would you have any pointers here that could help me? Thanks very much in advance. |
st205718 | Hi Doug,
Sorry about the trouble.
Can you share details about your operating system and Python version?
Note that TF-DF is currently only available for Linux x64 with Python>=3.6. |
st205719 | @Doug, since TFDF is a wrapper around Yggdrasil Decision Forests, you can see the troubleshooting guide there. |
st205720 | Hi Mathieu,
Thanks for the response. Indeed I hadn’t noticed that it works only on Linux (I’m using macOS). Is there any expectation that it will be ported to other systems as well?
Thanks! |
st205721 | Many thanks for your kind sharing of this troubleshooting guide. As Mathieu said, it only works in Linux (and I am using MacOS). But when I use it in Colab, if I find any problem I will definitely refer to the Yggdrasil troubleshooting guide as you indicated.
Thanks! |
st205722 | Thanks for the idea, @ashutosh1919. Actually I have started using it in Colab. For the benefit of others who might be running in cloud environments where the machine may or may not be Linux, I am posting below a code snippet that checks if the machine is Linux, and if it is, installs tfdf.
from sys import platform

if platform != "linux" and platform != "linux2":
    print("'tensorflow_decision_forests' is currently only available for Linux.")

try:
    import tensorflow_decision_forests
except ModuleNotFoundError:
    !pip install tensorflow_decision_forests
    import tensorflow_decision_forests as tfdf |
st205723 | I too have just spent considerable time building a new TF 2.5 cuda* environment for Windows (10, 64-bit) only to discover that the TFDF installation fails because there is “no matching distribution” (because, as I discover here, it is currently Linux only).
I reviewed the original announcement TensorFlow Decision Forests 0.1.3 open sourced and was working through the example (for installation info) at Introducing TensorFlow Decision Forests — The TensorFlow Blog, and I did not find any mention of OS limitations.
Please could you update documentation, blog posts, etc. to make the Linux Only limitation clear?
@Doug’s question about when support for other OSes is expected also got lost in the mix - could you shed light on that as well?
Thx!
i.e. docker, cloud etc. support doesn’t mitigate the issue for me |
st205724 | I am going to write a custom grappler pass, so I first want to see what graph grappler is operating on.
I tried to dump a graph using the following:
TF_CPP_MAX_VLOG_LEVEL=2 \
TF_DUMP_GRAPH_PREFIX="dumped_graph" \
python model.py
This command outputs too many graphs; I get a really long list (below).
Are there any docs for finding out which graph is doing what? Or which graph should I refer to if I want to rewrite the graph in a grappler pass?
I tried to search for a solution, but got few ideas.
Thanks in advance.
after_group_0_phase_0_MlirV1CompatGraphOptimizationPass_94635183069520_1.pbtxt
after_group_0_phase_0_MlirV1CompatGraphOptimizationPass_94635183069520_2.pbtxt
after_group_0_phase_0_MlirV1CompatGraphOptimizationPass_94635183069520.pbtxt
after_group_0_phase_0_MlirV1CompatGraphOptimizationPass_94635188981296.pbtxt
after_group_0_phase_10_AccumulateNV2RemovePass_94635183069520_1.pbtxt
after_group_0_phase_10_AccumulateNV2RemovePass_94635183069520_2.pbtxt
after_group_0_phase_10_AccumulateNV2RemovePass_94635183069520.pbtxt
after_group_0_phase_10_AccumulateNV2RemovePass_94635188981296.pbtxt
after_group_0_phase_10_LowerFunctionalOpsPass_94635183069520_1.pbtxt
after_group_0_phase_10_LowerFunctionalOpsPass_94635183069520_2.pbtxt
after_group_0_phase_10_LowerFunctionalOpsPass_94635183069520.pbtxt
after_group_0_phase_10_LowerFunctionalOpsPass_94635188981296.pbtxt
after_group_0_phase_10_ParallelConcatRemovePass_94635183069520_1.pbtxt
after_group_0_phase_10_ParallelConcatRemovePass_94635183069520_2.pbtxt
after_group_0_phase_10_ParallelConcatRemovePass_94635183069520.pbtxt
after_group_0_phase_10_ParallelConcatRemovePass_94635188981296.pbtxt
after_group_0_phase_35_IntroduceFloatingPointJitterPass_94635183069520_1.pbtxt
after_group_0_phase_35_IntroduceFloatingPointJitterPass_94635183069520_2.pbtxt
after_group_0_phase_35_IntroduceFloatingPointJitterPass_94635183069520.pbtxt
after_group_0_phase_35_IntroduceFloatingPointJitterPass_94635188981296.pbtxt
after_group_0_phase_35_IsolatePlacerInspectionRequiredOpsPass_94635183069520_1.pbtxt
after_group_0_phase_35_IsolatePlacerInspectionRequiredOpsPass_94635183069520_2.pbtxt
after_group_0_phase_35_IsolatePlacerInspectionRequiredOpsPass_94635183069520.pbtxt
after_group_0_phase_35_IsolatePlacerInspectionRequiredOpsPass_94635188981296.pbtxt
after_group_0_phase_36_EncapsulateXlaComputationsPass_94635185505904.pbtxt
after_group_0_phase_36_EncapsulateXlaComputationsPass_94635186548256.pbtxt
after_group_0_phase_36_EncapsulateXlaComputationsPass_94635186649296.pbtxt
after_group_0_phase_36_EncapsulateXlaComputationsPass_94635191039344.pbtxt
after_group_0_phase_37_FunctionalizeControlFlowForXlaPass_94635185505904.pbtxt
after_group_0_phase_37_FunctionalizeControlFlowForXlaPass_94635186548256.pbtxt
after_group_0_phase_37_FunctionalizeControlFlowForXlaPass_94635186649296.pbtxt
after_group_0_phase_37_FunctionalizeControlFlowForXlaPass_94635191039344.pbtxt
after_group_1_phase_0_NcclReplacePass_94635185505904.pbtxt
after_group_1_phase_0_NcclReplacePass_94635186548256.pbtxt
after_group_1_phase_0_NcclReplacePass_94635186649296.pbtxt
after_group_1_phase_0_NcclReplacePass_94635191039344.pbtxt
after_group_2_phase_10_MarkForCompilationPass_94635183069520_1.pbtxt
after_group_2_phase_10_MarkForCompilationPass_94635183069520_2.pbtxt
after_group_2_phase_10_MarkForCompilationPass_94635183069520.pbtxt
after_group_2_phase_10_MarkForCompilationPass_94635188981296.pbtxt
after_group_2_phase_12_ForceXlaConstantsOnHostPass_94635183069520_1.pbtxt
after_group_2_phase_12_ForceXlaConstantsOnHostPass_94635183069520_2.pbtxt
after_group_2_phase_12_ForceXlaConstantsOnHostPass_94635183069520.pbtxt
after_group_2_phase_12_ForceXlaConstantsOnHostPass_94635188981296.pbtxt
after_group_2_phase_20_IncreaseDynamismForAutoJitPass_94635183069520_1.pbtxt
after_group_2_phase_20_IncreaseDynamismForAutoJitPass_94635183069520_2.pbtxt
after_group_2_phase_20_IncreaseDynamismForAutoJitPass_94635183069520.pbtxt
after_group_2_phase_20_IncreaseDynamismForAutoJitPass_94635188981296.pbtxt
after_group_2_phase_30_PartiallyDeclusterPass_94635183069520_1.pbtxt
after_group_2_phase_30_PartiallyDeclusterPass_94635183069520_2.pbtxt
after_group_2_phase_30_PartiallyDeclusterPass_94635183069520.pbtxt
after_group_2_phase_30_PartiallyDeclusterPass_94635188981296.pbtxt
after_group_2_phase_40_ReportClusteringInfoPass_94635183069520_1.pbtxt
after_group_2_phase_40_ReportClusteringInfoPass_94635183069520_2.pbtxt
after_group_2_phase_40_ReportClusteringInfoPass_94635183069520.pbtxt
after_group_2_phase_40_ReportClusteringInfoPass_94635188981296.pbtxt
after_group_2_phase_50_EncapsulateSubgraphsPass_94635185505904.pbtxt
after_group_2_phase_50_EncapsulateSubgraphsPass_94635186548256.pbtxt
after_group_2_phase_50_EncapsulateSubgraphsPass_94635186649296.pbtxt
after_group_2_phase_50_EncapsulateSubgraphsPass_94635191039344.pbtxt
after_group_2_phase_5_CloneConstantsForBetterClusteringPass_94635183069520_1.pbtxt
after_group_2_phase_5_CloneConstantsForBetterClusteringPass_94635183069520_2.pbtxt
after_group_2_phase_5_CloneConstantsForBetterClusteringPass_94635183069520.pbtxt
after_group_2_phase_5_CloneConstantsForBetterClusteringPass_94635188981296.pbtxt
after_group_2_phase_60_BuildXlaOpsPass_94635185505904.pbtxt
after_group_2_phase_60_BuildXlaOpsPass_94635186548256.pbtxt
after_group_2_phase_60_BuildXlaOpsPass_94635186649296.pbtxt
after_group_2_phase_60_BuildXlaOpsPass_94635191039344.pbtxt
after_group_2_phase_9_ClusterScopingPass_94635183069520_1.pbtxt
after_group_2_phase_9_ClusterScopingPass_94635183069520_2.pbtxt
after_group_2_phase_9_ClusterScopingPass_94635183069520.pbtxt
after_group_2_phase_9_ClusterScopingPass_94635188981296.pbtxt
after_group_3_phase_1_MklLayoutRewritePass_partition__job:localhost_replica:0_task:0_device:CPU:0_94635183069520_1.pbtxt
after_group_3_phase_1_MklLayoutRewritePass_partition__job:localhost_replica:0_task:0_device:CPU:0_94635183069520_2.pbtxt
after_group_3_phase_1_MklLayoutRewritePass_partition__job:localhost_replica:0_task:0_device:CPU:0_94635183069520.pbtxt
after_group_3_phase_1_MklLayoutRewritePass_partition__job:localhost_replica:0_task:0_device:CPU:0_94635188981296.pbtxt
after_group_3_phase_1_MklLayoutRewritePass_partition__job:localhost_replica:0_task:0_device:GPU:0_94635188001136.pbtxt
after_MetaOptimizer_140736774835712.pbtxt
after_MetaOptimizer_140736774837200.pbtxt
after_MetaOptimizer_140736774838416.pbtxt
after_MetaOptimizer_140736774841968.pbtxt
build_xla_ops_1.pbtxt
build_xla_ops_2.pbtxt
build_xla_ops_3.pbtxt
build_xla_ops.pbtxt
encapsulate_subgraphs_after_1.pbtxt
encapsulate_subgraphs_after_2.pbtxt
encapsulate_subgraphs_after_3.pbtxt
encapsulate_subgraphs_after.pbtxt
encapsulate_subgraphs_before_1.pbtxt
encapsulate_subgraphs_before_2.pbtxt
encapsulate_subgraphs_before_3.pbtxt
encapsulate_subgraphs_before.pbtxt
encapsulate_xla_computations_after_1.pbtxt
encapsulate_xla_computations_after_2.pbtxt
encapsulate_xla_computations_after_3.pbtxt
encapsulate_xla_computations_after.pbtxt
encapsulate_xla_computations_before_1.pbtxt
encapsulate_xla_computations_before_2.pbtxt
encapsulate_xla_computations_before_3.pbtxt
encapsulate_xla_computations_before.pbtxt
encapsulate_xla_computations_halfway_1.pbtxt
encapsulate_xla_computations_halfway_2.pbtxt
encapsulate_xla_computations_halfway_3.pbtxt
encapsulate_xla_computations_halfway.pbtxt
EnsureMemoryTypes_1.pbtxt
EnsureMemoryTypes_2.pbtxt
EnsureMemoryTypes_3.pbtxt
EnsureMemoryTypes_4.pbtxt
EnsureMemoryTypes.pbtxt
partition__job:localhost_replica:0_task:0_device:CPU:0_94635183130328_1.pbtxt
partition__job:localhost_replica:0_task:0_device:CPU:0_94635183130328.pbtxt
partition__job:localhost_replica:0_task:0_device:CPU:0_94635183181336.pbtxt
partition__job:localhost_replica:0_task:0_device:CPU:0_94635191982792.pbtxt
partition__job:localhost_replica:0_task:0_device:GPU:0_94635190961352.pbtxt
pflr_after_all_optimization_passes_94635183069520__job:localhost_replica:0_task:0_device:CPU:0_1.pbtxt
pflr_after_all_optimization_passes_94635183069520__job:localhost_replica:0_task:0_device:CPU:0_2.pbtxt
pflr_after_all_optimization_passes_94635183069520__job:localhost_replica:0_task:0_device:CPU:0.pbtxt
pflr_after_all_optimization_passes_94635188001136__job:localhost_replica:0_task:0_device:GPU:0.pbtxt
pflr_after_all_optimization_passes_94635188981296__job:localhost_replica:0_task:0_device:CPU:0.pbtxt |
st205725 | I looked at the difference between autoregressive vs non-autoregressive transformer architectures, but I am wondering whether the attention layer in TensorFlow is actually autoregressive, or do I need to implement the autoregressive mechanism myself?
I don’t see any option for causal attention (e.g. causal=true/false), or any indication of whether “tfa.layers.MultiHeadAttention” is autoregressive or not.
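(Neither layer is causal by default. A hedged sketch of making tf.keras.layers.MultiHeadAttention autoregressive with an explicit lower-triangular mask; later TF releases added a built-in use_causal_mask argument:)
import tensorflow as tf

def causal_mask(seq_len):
    # lower-triangular mask: position i may only attend to positions <= i
    return tf.cast(
        tf.linalg.band_part(tf.ones((1, seq_len, seq_len)), -1, 0), tf.bool)

mha = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=32)
x = tf.random.normal((2, 10, 32))  # (batch, time, features)
out = mha(query=x, value=x, attention_mask=causal_mask(10))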
Any thoughts on that would be appreciated. |
st205726 | Hi, I have a weird problem with training a PPO agent.
I have taken the PPO example from git and gave it my own environment. In that environment the agent learns to act based on a time series. Training works as intended, but when I decided to use RNN/LSTM I encountered a problem with using a single environment for trajectory collection:
The policy network is not learning at all, while the value network changes but doesn’t really improve either. This happens ONLY with num_parallel_environments=1.
Here is a comparison of 2 vs 1 parallel environments (identical implementation):
[image: training curves, 2 vs 1 parallel environments]
This is how I define the env(s):
tf_env = TFPyEnvironment(
    ParallelPyEnvironment(
        [lambda: env_load_fn(train_df)] * num_parallel_environments))

actor_net = actor_distribution_rnn_network.ActorDistributionRnnNetwork(
    tf_env.observation_spec(),
    tf_env.action_spec(),
    input_fc_layer_params=actor_fc_layers,
    lstm_size=lstm_size,
    output_fc_layer_params=None)

value_net = value_rnn_network.ValueRnnNetwork(
    tf_env.observation_spec(),
    input_fc_layer_params=value_fc_layers,
    output_fc_layer_params=None)
Does anyone have an idea? |
st205727 | Recently, when we tried to use MultiWorkerMirroredStrategy with Keras, we found:
When Keras wraps our passed-in dataset with an experimental distributed dataset, we cannot scale past a certain number of nodes, because it requires us to pass in a global batch size, and the global batch size needs to take the number of workers into consideration (global batch size = batch size * num of workers * num of replicas). Therefore, when we have a lot of workers, compared with MirroredStrategy, we start seeing job failures due to OOM.
We tried to get around this issue by passing in distribute_datasets_from_function, so that we have full control over the per-replica batch and sharding logic (and get around the OOM issue). Then our job failed at
tensorflow/python/keras/engine/data_adapter.py, line 733 (commit 1923123):
if _is_distributed_dataset(self._dataset):
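(For reference, the distribute_datasets_from_function pattern is roughly the following; a hedged sketch where global_batch_size, file_names and strategy are placeholders:)
def dataset_fn(input_context):
    # per-replica batch size derived from the global batch size
    batch_size = input_context.get_per_replica_batch_size(global_batch_size)
    ds = tf.data.Dataset.from_tensor_slices(file_names)  # placeholder source
    ds = ds.shard(input_context.num_input_pipelines,
                  input_context.input_pipeline_id)
    return ds.batch(batch_size)

dist_ds = strategy.distribute_datasets_from_function(dataset_fn)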
When we passed in a normal dataset, it has UNKNOWN cardinality and leverages
tensorflow/python/keras/engine/data_adapter.py, line 710 (commit 1923123):
def should_recreate_iterator(self):
to recreate the iterator for every epoch. Our use case is to have the validation step exhaust our dataset instead of hard-coding steps. I wonder if we can relax the check at L733 together with a change to L714; then we could support users not passing in steps. If you agree, I can submit a PR to make the change.
Please let me know if there is any downside to doing so.
Thanks |
st205728 | What is the difference between tf.data.experimental.TFRecordWriter and tf.io.TFRecordWriter? |
st205729 | E.g. this official tutorial mixes both APIs. /cc @markdaoust
TensorFlow
TFRecord and tf.train.Example | TensorFlow Core |
st205730 | That’s the right link, thanks @Bhack.
The main differences are:
tf.io.TFRecordWriter is a python interface where you call writer.write(example).
tf.data.experimental.TFRecordWriter is experimental, and writes out a whole tf.data.Dataset. If you already have your data in a tf.data.Dataset, that may be simpler.
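A hedged sketch of both, assuming `examples` is a list of tf.train.Example protos:
import tensorflow as tf

# python-loop style: write serialized examples one by one
with tf.io.TFRecordWriter("records.tfrecord") as writer:
    for example in examples:
        writer.write(example.SerializeToString())

# dataset style: write a whole dataset of serialized strings at once
serialized = tf.data.Dataset.from_tensor_slices(
    [e.SerializeToString() for e in examples])
tf.data.experimental.TFRecordWriter("records2.tfrecord").write(serialized) |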
st205731 | Hi,
I would like to distribute my TensorFlow prediction using a Spark RDD containing one tfrecord file per partition. Here is a snippet of my code:
df_score_files = df_score_files.withColumn("part_idx", f.monotonically_increasing_id()).repartition("part_idx")
n_part_idx, part_idx_map = create_id_map(df_score_files, id_name="part_idx")
part_idx_map = spark.sparkContext.broadcast(part_idx_map)
predictions_rdd = df_score_files.rdd.map(lambda row: (part_idx_map.value[row.part_idx], row)) \
    .partitionBy(n_part_idx, lambda x: x) \
    .mapPartitions(distribute_score)
The function distribute_score is applied to all partitions:
def distribute_score(iterator):
    score_files = [row[1].asDict()["score_files"] for row in iterator]
    dfpred, _ = predict_evaluate1(
        files_list="gs://b_meta_algo_tmp/nrpc_deep_wide_tfrecords_v1/selector=test_standard_scaler/account=trivago/split=pred/yyyy_mm_dd=2021-05-30/part-00019-e6d924ff-6e7b-40ab-ae9f-9d2c833f92c2-c000.tfrecord.gz",
        estimator=estimator,
        checkpoint_path=None,
        features_dict=features_dict,
        input_tfrecords_compression="GZIP")
    return dfpred.values.tolist()
The scoring itself is done by the function predict_evaluate1, which works perfectly in non-distributed mode. However, when I try the distributed version I get this error:
return parse_example_v2(serialized, features, example_names, name)
File "/opt/conda/miniconda3/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/opt/conda/miniconda3/lib/python3.8/site-packages/tensorflow/python/ops/parsing_ops.py", line 309, in parse_example_v2
params = _ParseOpParams.from_features(features, [
File "/opt/conda/miniconda3/lib/python3.8/site-packages/tensorflow/python/ops/parsing_config.py", line 451, in from_features
raise ValueError("Unsupported %s %s." %
ValueError: Unsupported FixedLenFeature FixedLenFeature(shape=(), dtype=tf.string, default_value=None).
I suspect this is connected with my input_fn, and in particular with the parser defined inside it:
def input_fn1(
    file_names: List[str],
    batch_size: int,
    shuffle_buffer_size: int,
    features_dict: Dict[str, tf.io.FixedLenFeature],
    num_epochs: int = 1,
    n_examples: int = None,
    compression: str = "",
    parallel_files_reads: int = 10,
    deterministic_interleave: bool = False,
) -> tf.data.TFRecordDataset:
    """
    Input function to provide data for tf.Estimator

    Args:
        file_names (List[str]): List of files containing features
        batch_size (int): Batch size
        shuffle_buffer_size (int): Size of the buffer to be shuffled
        features_dict (Dict[str, tf.io.FixedLenFeature]): Dictionary with features
        num_epochs (int): Number of epochs to run
        n_examples (int): Number of random examples to use for train or prediction
        compression (str): Compression codec of tfrecords
        parallel_files_reads (int): Number of files to be read in parallel
        deterministic_interleave (bool): Use deterministic=True in dataset.interleave
    """
    # def parser(record, features_dict):
    #     parsed_features = tf.io.parse_single_example(record, features_dict)
    #     return parsed_features, parsed_features['target']
    files = tf.data.Dataset.list_files(file_names)
    dataset = files.interleave(
        lambda x: tf.data.TFRecordDataset(
            x, compression_type=compression, num_parallel_reads=parallel_files_reads
        ).prefetch(buffer_size=tf.data.experimental.AUTOTUNE),
        cycle_length=tf.data.experimental.AUTOTUNE,
        num_parallel_calls=tf.data.experimental.AUTOTUNE,
        deterministic=deterministic_interleave,
    )
    if n_examples is not None:
        dataset = dataset.take(n_examples)
    dataset = dataset.map(lambda x: tf.io.parse_single_example(x, features_dict),
                          num_parallel_calls=tf.data.experimental.AUTOTUNE)
    if shuffle_buffer_size > 0:
        dataset = dataset.shuffle(shuffle_buffer_size)
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    if num_epochs > 1:
        dataset = dataset.cache()
    return dataset |
st205732 | I’m new to TF. I tried the classic flowers recognition sample:
colab.research.google.com
Google Colaboratory
I replicated this in PyCharm and it works, but I don’t understand how to test this script by passing it some real flower pictures, to try it in the real world.
Thanks in advance |
st205733 | You can follow this tutorial to get an idea. Go through the particular section I linked and if there’s any doubt, let me know. |
st205734 | If you want to play with this on your Android device camera you could try to follow this tutorial:
Google Codelabs
Recognize Flowers with TensorFlow Lite on Android | Google Codelabs
In this codelab you will take an image classifier, and run it on an Android phone using TensorFlow Lite. |
st205735 | Many thanks for info.
I’m trying to create a custom classification using my set of images but I’m getting lost in hundreds of examples all different each other.
Regards |
st205736 | Hi @Paolo_Pini You can try tf.keras.Model.predict as in name_of_your_model.predict(...) (be mindful of tensor shapes).
Examples:
Image classification | TensorFlow Core
Predict on new data
Finally, let’s use our model to classify an image that wasn’t included in the training or validation sets.
…
sunflower_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/592px-Red_sunflower.jpg"
sunflower_path = tf.keras.utils.get_file('Red_sunflower', origin=sunflower_url)

img = keras.preprocessing.image.load_img(
    sunflower_path, target_size=(img_height, img_width)
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # Create a batch

predictions = model.predict(img_array)
...
Writing your own callbacks | TensorFlow Core
Now, define a simple custom callback that logs:
When fit/evaluate/predict starts & ends
When each epoch starts & ends
When each training batch starts & ends
When each evaluation (test) batch starts & ends
When each inference (prediction) batch starts & ends
…
class CustomCallback(keras.callbacks.Callback):
...
...
res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()]) |
st205737 | The Gaussian process regression example (Gaussian Process Regression in TensorFlow Probability) is giving the following error:
ValueError: No gradients provided for any variable: ['amplitude:0', 'length_scale:0', 'observation_noise_variance_var:0'].
This error goes away when I comment out the tf.function decorator.
I am using tensorflow 2.5.0.
What is the cause of the error? |
st205738 | Thanks for pointing that out @tonygrey
I filed a bug on that and I’ll keep you posted! |
st205739 | There was already a ticket:
github.com/tensorflow/probability
Gaussian Process Regression in TensorFlow Probability Notebook not working (opened Jun 14, 2021 by jung-benjamin)
When running the tutorial notebook *Gaussian Process Regression in TensorFlow Probability* in Google Colab I run into the following issue:
I execute the code cells in order, without changing any code. The cell containing the following code produces an error.
```
# Now we optimize the model parameters.
num_iters = 1000
optimizer = tf.optimizers.Adam(learning_rate=.01)
# Store the likelihood values during training, so we can plot the progress
lls_ = np.zeros(num_iters, np.float64)
for i in range(num_iters):
  with tf.GradientTape() as tape:
    loss = -target_log_prob(amplitude_var, length_scale_var,
                            observation_noise_variance_var)
  grads = tape.gradient(loss, trainable_variables)
  optimizer.apply_gradients(zip(grads, trainable_variables))
  lls_[i] = loss
print('Trained parameters:')
print('amplitude: {}'.format(amplitude_var._value().numpy()))
print('length_scale: {}'.format(length_scale_var._value().numpy()))
print('observation_noise_variance: {}'.format(observation_noise_variance_var._value().numpy()))
```
The error message is:
```
ValueError Traceback (most recent call last)
<ipython-input-81-0b42fdbef836> in <module>()
10 observation_noise_variance_var)
11 grads = tape.gradient(loss, trainable_variables)
---> 12 optimizer.apply_gradients(zip(grads, trainable_variables))
13 lls_[i] = loss
14
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/utils.py in filter_empty_gradients(grads_and_vars)
74 if not filtered:
75 raise ValueError("No gradients provided for any variable: %s." %
---> 76 ([v.name for _, v in grads_and_vars],))
77 if vars_with_empty_grads:
78 logging.warning(
ValueError: No gradients provided for any variable: ['amplitude:0', 'length_scale:0', 'observation_noise_variance_var:0'].
```
After consulting Stack Overflow I tried adding `tape.watch(trainable_variables)`, but this did not change anything.
Might someone be able to help figure out why this is happening? |
st205740 | I suppose that the main problem is that this and other notebooks are not under CI testing in the TFP repo. |
st205741 | I could not get a plot of the model graph for TensorFlow object detection models. It doesn’t even show up on TensorBoard.
Is there any script to handle this? Where should I look? |
st205742 | Are you training/fine-tuning the model with model.fit?
Did you set the TensorBoard callback? |
st205743 | There is no fit; it is trained as an SSDMetaArch custom model generated from a config file.
It would be quicker if I could share links, but I can’t. |
st205744 | I answered a similar question on StackOverflow: python 3.x - How to graph tf.keras model in Tensorflow-2.0? - Stack Overflow
In short, you can try to load your model in a function decorated with tf.function and execute the forward pass; now that the function’s execution has been traced, you can plot the graph on TensorBoard using tf.summary.trace_export.
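A minimal sketch of that recipe (hedged; logdir, detection_model and the input shape are placeholders for your setup):
import tensorflow as tf

writer = tf.summary.create_file_writer(logdir)  # logdir is a placeholder

@tf.function
def forward(x):
    return detection_model(x)  # placeholder for your model's forward pass

tf.summary.trace_on(graph=True)
forward(tf.zeros([1, 640, 640, 3]))  # dummy input matching your model's input shape
with writer.as_default():
    tf.summary.trace_export(name="model_graph", step=0) |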
st205745 | What is the proper way to reinitialize a tf.Dataset with an initializable iterator? I tried many ways and it always results in a memory leak. Should we use gc.collect or tf.reset_default_graph? How do I use tf.reset_default_graph if it always results in the error “AssertionError: Do not use tf.reset_default_graph() to clear nested graphs. If you need a cleared graph, exit the nesting and create a new graph”? I just want to change the train dataset and continue training with the new data. |
st205746 | I’m in a project that requires me to update the training data regularly; that’s why I need to reset the iterator each time I update the dataset. I tried to use tf.placeholder as a feed_dict to update the training set, but memory usage increases as training progresses (not so serious as long as it’s still affordable), and sometimes at some training step the code stops when it reinitializes the iterator (weirdly, the training step before that still works fine despite the increased memory usage); not only that, memory usage then increases much faster than before. On my local machine I don’t see this problem, but it arises once I send my code to a remote GPU cluster to train. I tried to use tcmalloc, but I am not sure if the system loads it correctly, as there is no change: it still gets OOM after a while |
st205747 | I can, but somehow the dataset seems to cache data from the previous run, even though I don’t use .cache() at all |
st205748 | Ah sorry, it doesn’t look like caching now. I changed around 1000 samples; immediately after changing the data I ran the model with the new data, and around 127 samples had different labels. I guess it may be due to prefetch or something similar, where the pipeline got some of the old samples into its buffer, but it remains the same after I remove prefetch. Is there any way to clear the old dataset’s buffer in the pipeline?
For a runnable example, it’s a bit hard since it looks like I need to upload the whole model to see the problem I’m talking about. |
st205749 | Check the dataset and dataset optimization options:
TensorFlow
tf.data.Options | TensorFlow Core v2.5.0
Represents options for tf.data.Dataset.
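For instance, to rule the built-in dataset optimizations out while debugging stale samples, you can disable them and re-test (a hedged sketch):
options = tf.data.Options()
options.experimental_optimization.apply_default_optimizations = False
dataset = dataset.with_options(options) |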
st205750 | Hi, I was going through the tutorials at tf.keras.preprocessing.image.ImageDataGenerator
and Data augmentation | TensorFlow Core when I came across this doubt.
Suppose I have a training directory with some images, and I use ImageDataGenerator to augment the data with validation_split=0.2, as shown below.
train_datagen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255, width_shift_range=0.2,
    shear_range=0.2, height_shift_range=0.2,
    zoom_range=0.2, validation_split=0.2,
    horizontal_flip=True)
test_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255)

train_ds = train_datagen.flow_from_directory(
    train_dir, seed=42,
    target_size=img_size, subset='training',
    batch_size=32)
valid_ds = train_datagen.flow_from_directory(
    train_dir, seed=42,
    target_size=img_size, subset='validation',
    batch_size=32)
test_ds = test_datagen.flow_from_directory(
    test_dir, seed=42,
    target_size=img_size,
    batch_size=32)
My questions are these:
1. Does the image augmentation apply to valid_ds by default? If so, wouldn’t it create more bias towards the original training data? (As mentioned in Data augmentation | TensorFlow Core, we should not augment the validation data.)
2. What if the validation_split argument were provided in the model.fit() method instead? Does that mean the validation split would be applied to the augmented training data? |
st205751 | It was just closed 4 days ago.
Check Split train data into training and validation when using ImageDataGenerator and model.fit_generator · Issue #5862 · keras-team/keras · GitHub |
st205752 | Thanks a lot!
So, one approach is to create different ImageDataGenerators for the validation and training subsets while keeping a constant seed value. It works!
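Concretely, applied to the snippet above, that looks like this (augmentation only in the training generator; the same validation_split and seed in both so the subsets line up):
train_datagen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255, width_shift_range=0.2, shear_range=0.2,
    height_shift_range=0.2, zoom_range=0.2, horizontal_flip=True,
    validation_split=0.2)                        # augmented, training subset only
valid_datagen = keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255, validation_split=0.2)        # no augmentation

train_ds = train_datagen.flow_from_directory(
    train_dir, seed=42, target_size=img_size, subset='training', batch_size=32)
valid_ds = valid_datagen.flow_from_directory(
    train_dir, seed=42, target_size=img_size, subset='validation', batch_size=32) |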
st205753 | import numpy as np
import tensorflow as tf
from PIL import Image
from tensorflow.keras import Model,Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.applications.vgg16 import VGG16
from sklearn.model_selection import train_test_split
from keras_preprocessing.image import ImageDataGenerator
from PIL import Image
import os.path
import glob
import random
image_path=glob.glob('gar/*/*.jpg')
label_type=[image_p.split('\\')[1] for image_p in image_path]
labels=np.unique(label_type)
index_to_label=dict((l,n)for l,n in enumerate(labels))
label_to_index=dict((n,l)for l,n in index_to_label.items())
all_labels=[label_to_index.get(name)for name in label_type]
random_index=np.random.permutation(len(image_path))
image_path=np.array(image_path)[random_index]
all_labels=np.array(all_labels)[random_index]
sep=int(len(image_path)*0.7)
train_image_path=image_path[:sep]
train_y=all_labels[:sep]
test_image_path=image_path[sep:]
test_y=all_labels[sep:]
train=tf.data.Dataset.from_tensor_slices((train_image_path,train_y))
test=tf.data.Dataset.from_tensor_slices((test_image_path,test_y))
def load_pic(path, label):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [256, 256])
    image = tf.cast(image, tf.float32)
    image = image / 255
    return image, label
train=train.map(load_pic)
train=train.shuffle(1000).batch(64)
test=test.map(load_pic)
test=test.shuffle(1000).batch(64)
def get_model():
    gmodel = Sequential([
        Conv2D(64, (3, 3), activation='relu', input_shape=(256, 256, 3)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((1, 1), strides=2, padding='same'),
        Conv2D(128, (3, 3), activation='relu'),
        Conv2D(128, (3, 3), activation='relu'),
        MaxPooling2D((1, 1), strides=2, padding='same'),
        Conv2D(256, (3, 3), activation='relu'),
        Conv2D(256, (3, 3), activation='relu'),
        MaxPooling2D((1, 1), strides=2, padding='same'),
        Conv2D(512, (3, 3), activation='relu'),
        Conv2D(512, (3, 3), activation='relu'),
        MaxPooling2D((1, 1), strides=2, padding='same'),
        Conv2D(512, (3, 3), activation='relu'),
        Conv2D(512, (3, 3), activation='relu'),
        MaxPooling2D((1, 1), strides=2, padding='same'),
        Flatten(),
        Dense(256),
        Dense(3, activation='softmax')
    ])
    return gmodel
model=get_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.0001),
    loss="categorical_crossentropy",
    metrics=['accuracy']
)
history=model.fit(train,epochs=5,validation_data=test)
I don’t know why, when I try to train this model to classify pictures into four categories, the accuracy is only about 25%. I have tried changing the model, the loss function and the optimizer, but none of these help. Can someone tell me how to solve this problem? Thanks very much |
st205754 | Hi, I am trying to run a Probabilistic Face Embeddings model. It works well when I run my function with tensorflow 2.1 or tensorflow 1.x, but when I try to run it with tensorflow 2.2 or later I get this error: ValueError: Node ‘gradients/UncertaintyModule/fc_log_sigma_sq/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad’ has an _output_shapes attribute inconsistent with the GraphDef for output [#3]: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512]
It happens while the pretrained model is loading, when it does saver = tf.compat.v1.train.import_meta_graph(meta_file, clear_devices=True, import_scope=scope) to import the meta file of the PFE_sphere64_msarcface_am model.
To reproduce the error : download the folder Probabilistic-Face-Embeddings_new : Probabilistic-Face-Embeddings_new - Google Drive 2 and run eval_lfw with parameters --model_dir pretrained/PFE_sphere64_msarcface_am --dataset_path data/Dataset --protocol_path ./proto/pairs_dataset.txt
Thank you for your help! |
st205755 | Have you checked:
github.com/davidsandberg/facenet
Workaround for tensorflow 2.0 (opened Nov 25, 2019 by rizkyputranto): Is there any workaround for tensorflow 2.0?
TensorFlow version compatibility | TensorFlow Core |
st205756 | Yes, I used import tensorflow.compat.v1 as tf tf.disable_v2_behavior() but it didn’t change anything |
st205757 | Hi, I would like to know if it is possible to use the code available from tensorflow federated research on the stackoverflow dataset for next-word prediction on a Raspberry Pi.
Also, various online resources suggest that tensorflow federated implementations are just simulations and cannot be embedded in a real-world scenario using IoT devices like the Raspberry Pi. Is this information correct?
I am trying to implement an LSTM for next-word prediction using the stackoverflow dataset, but on a Pi, to check whether it is possible to get results on a resource-constrained device like a Pi.
Please advise. |
st205758 | Probably the status is still this:
github.com/tensorflow/federated
Installation Federated Learning on raspberry pi (opened Mar 18, 2019, closed May 15, 2019 by codarm): Would be needing help with installation of tensorflow_federated on raspberry pi.
github.com/tensorflow/federated
Is it possible to have Federated Learning on Cloud-Edge? (opened Oct 15, 2020 by Martiniann)
Hi everyone,
Currently I am working on a school project about federated learning and came across your framework during exploratory analysis. My project should utilize federated learning in this manner - I have an aggregation server (let's say in a cloud). I want this server to provide the model to my 2 Raspberry PIs. These two RPIs would then train the model on local data for x epochs and provide the trained models/gradients back to the global server. On this server, the results would be federated averaged and a new model would be sent to the PIs. Is such a workflow possible with your framework? If so, could you provide me a hint?
Thank you,
Best regards |
st205759 | Also, more generally, you could follow or comment on this RFC:
github.com/tensorflow/community
RFC: On-device training with TensorFlow Lite (tensorflow:master ← miaout17:tflite-training-rfc, opened Jun 7, 2021 by miaout17)
We're sharing this RFC to reflect our newest thoughts on implementing on-device training in TensorFlow Lite.
We didn't set up a timeline to close the comments. We want to surface the RFC early for transparency and get feedback. |
st205760 | I’m wondering what the TensorFlow way is of storing and manipulating (i.e. tensor multiplication, apply_gradients) trainable weights in a custom layer that could potentially be sparse based on user specification. I’ve looked into
TensorFlow
tf.boolean_mask | TensorFlow Core v2.5.0
Apply boolean mask to tensor.
and
TensorFlow
tf.RaggedTensor | TensorFlow Core v2.5.0
Represents a ragged tensor.
but I’m not sure if it’s possible to apply these as ways to store and update trainable weights. |
st205761 | Have you already taken a look at:
github.com/tensorflow/tensorflow
Back-propagating gradients through a sparse tensor? (opened Jan 22, 2017, closed Jun 16, 2017 by zergylord)
I have a normal feed-forward network that produces a vector v. The elements of v are then used as the non-zero entries of a sparse matrix M (assume the coordinates are predefined). The sparse matrix is then multiplied by a dense vector and a loss is defined on the resulting scalar. I want to back-propagate the loss w.r.t. the weights of the network, which entails going through the sparse matrix.
This seems like a perfectly reasonable use-case for a sparse matrix, but it appears that such functionality is not supported. Indeed, even calling tf.gradients(M,[v]) produces an error:
> AttributeError: 'SparseTensor' object has no attribute 'value_index'
Am I doing something wrong or am I correct in presuming that this functionality doesn't (yet?) exist? If the latter, then is there a work-around for this particular use-case short of rewriting all of the sparse tensor operations with gradients defined?
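As a common workaround for user-specified sparsity, you can keep a dense trainable kernel plus a fixed binary mask; the masked entries then contribute nothing and receive zero gradient. A hedged sketch (MaskedDense is a hypothetical layer name):
import tensorflow as tf

class MaskedDense(tf.keras.layers.Layer):
    def __init__(self, units, mask, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.mask = tf.constant(mask, dtype=tf.float32)  # (in_dim, units), 0/1 entries

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, x):
        # masked-out weights contribute nothing and get zero gradient
        return tf.matmul(x, self.w * self.mask) |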
st205762 | I am modifying identity_3d which is initialized as an n by n by n numpy array per the following operations:
identity_3d = np.zeros((n, n, n))
idx = np.arange(n)
identity_3d[:, idx, idx] = 1
I,J = np.nonzero(wR==0)
identity_3d[I,:,J]=0
identity_3d[I,J,:]=0
If identity_3d were a Tensor instead, is there a way to perform the equivalent operations? |
st205763 | import numpy as np
n = 5
wR = np.random.choice(a=[0, 1, 2], size=(n, n), p=[0.5, 0.25,0.25])
identity_3d = np.zeros((n, n, n))
idx = np.arange(n)
identity_3d[:, idx, idx] = 1
I,J = np.nonzero(wR==0)
identity_3d[I,:,J]=0
identity_3d[I,J,:]=0
identity_3d |
st205764 | There is no direct slice assignment for Tensor that maps the numpy syntax.
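That said, this particular update can be expressed with broadcast masks and tf.where instead of slice assignment; a hedged sketch (same semantics as the numpy snippet above, not a general replacement):
import numpy as np
import tensorflow as tf

n = 5
wR = np.random.choice(a=[0, 1, 2], size=(n, n), p=[0.5, 0.25, 0.25])

identity_3d = tf.eye(n, batch_shape=[n], dtype=tf.float64)  # [:, idx, idx] = 1
Z = tf.constant(wR == 0)                                    # (n, n) bool
# kill[a, b, c] is True wherever [I, :, J] or [I, J, :] would be zeroed
kill = tf.logical_or(Z[:, None, :], Z[:, :, None])
identity_3d = tf.where(kill, tf.zeros_like(identity_3d), identity_3d)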
As you can see, it is currently also not available in the TF experimental numpy API:
TensorFlow
Module: tf.experimental.numpy | TensorFlow Core v2.5.0
tf.experimental.numpy: NumPy API on TensorFlow.
But it is a very frequent topic; take a look at:
github.com/tensorflow/tensorflow
how to assign value to an EagerTensor slice? ---- 'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment (opened Oct 8, 2019, closed Oct 25, 2019 by aohan237)
As in numpy or pytorch, we can do something like this, but how to do it with tf 2.0?
The following code will raise an exception:
`'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment`
prediction[:,:,0]=tf.math.sigmoid(prediction[:,:,0])
github.com/tensorflow/tensorflow
How to efficiently update a tensor slice? (opened Feb 8, 2020, closed Feb 13, 2020 by OverLordGoldDragon)
Suppose we have `x = K.zeros((4, 6))`, and we wish to add 1 to row 0: `x[0] += 1`. The variable is created via `Layer`'s [`add_weight()`](https://github.com/keras-team/keras/blob/master/keras/engine/base_layer.py#L250) w/ `training=False`, so it isn't updated via backprop. What is the most _speed-efficient_ way to do so?
<hr>
**Context**: I'm implementing recurrent batch normalization, with `moving_mean` and `moving_variance` variables distinct for each timestep in an RNN - each thus having a shape of `(units, timesteps)`. The goal is to update one `timesteps` slice per step via `K.moving_average_update()`. One approach is as follows:
```python
import tensorflow.keras.backend as K
units, timesteps = 4, 6
x = K.zeros((units, timesteps), dtype='float32', name='x')
x_new = x[:units, 0].assign(K.ones((units,), dtype='float32')) # dummy example
K.set_value(x, K.get_value(x_new))
print(K.get_value(x))
```
```python
[[1. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0.]]
```
Looks good - except, a _new copy_ of `x` was created. In practice, we can have `timesteps > 100` (e.g. 120), so we are creating an array 120x larger than it needs to be, 120 times (1 / step), making it an `O(timesteps**2)` operation - as opposed to usual slicing, `O(timesteps)`.
Is there anything more efficient? Doesn't have to be `keras`, just at least `tf.keras`-friendly.
github.com/tensorflow/tensorflow
Dynamical Tensor (and EagerTensor) slice assignment (opened Jun 19, 2020 by zaccharieramzi)
**System information**
- TensorFlow version (you are using): 2.2
- Are you willing to contribute it (Yes/No): Yes
**Describe the feature and the current behavior/state.**
I would like to have slice assignment for Tensor objects in TensorFlow.
The code I would like to write is:
```python
import tensorflow as tf
a = tf.constant([1, 2, 4, 5, 7, 3, 2, 6,])
indices = tf.constant([3, 4], dtype=tf.int32)
a[indices] += 1
```
Of course it's a simplistic example and doesn't cover everything I want to do (I would use it in more complex functions not with constants), and I am happy to make it more complex if necessary.
Currently this code gives the error:
```
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>
```
**Will this change the current api? How?**
I guess this is a change of API since it introduces a new functionality.
**Who will benefit with this feature?**
A lot of people have been asking for this feature for example in this GitHub issues:
- https://github.com/tensorflow/tensorflow/issues/14132#issuecomment-483002522
- https://github.com/tensorflow/tensorflow/issues/33131
These issues have unfortunately been closed because some workarounds for specific use-cases have been found (ones where the slicing is fixed and you can use [masking](https://github.com/tensorflow/tensorflow/issues/14132#issuecomment-483002522) or [TensorArrays](https://github.com/tensorflow/tensorflow/issues/14132#issuecomment-487643287)).
Some other issues deal with `Variable`s which is not what I am talking about here. [Some workarounds do exist](https://stackoverflow.com/a/62202181/4332585) involving `Variable` but they seem hacky.
I will personally benefit from it, in the multiple places where I now use `tensor_scatter_nd_add` or `tensor_scatter_nd_update`, which is solution that always works but is very difficult to write and very slow:
- [for a wavelet-based neural network, called MWCNN](https://github.com/zaccharieramzi/tf-mwcnn/blob/master/mwcnn.py#L106-L110);
- [for non-uniform fast fourier transform](https://github.com/zaccharieramzi/tfkbnufft/blob/master/tfkbnufft/nufft/interp_functions.py#L151);
- [for sensitivity map extraction when doing MRI reconstruction with TensorFlow neural networks](https://github.com/zaccharieramzi/fastmri-reproducible-benchmark/blob/master/fastmri_recon/data/utils/multicoil/smap_extract.py#L27-L35).
**Any Other info.**
The `tensor_scatter_nd_*` alternative might seem like a viable solution, but it suffers from 2 drawbacks that I consider huge:
- It is very difficult to write. It is actually so difficult, I decided to make a package that would alleviate this difficulty by having the different slicing possibilities unit tested: [tf-slice-assign](https://github.com/zaccharieramzi/tf-slice-assign).
- It is very slow. I made a [benchmark notebook](https://colab.research.google.com/drive/1gEjha7h1mhQkFwULS9MAU0bWQfzfEALY?usp=sharing) vs `pytorch` for slice assignment add. You can see that on GPU, using `tensor_scatter_nd_add` is 10 times slower than slice assignment in `pytorch` and 20 times slower on CPU. For a practical example, it means that my `tfkbnufft` (for non-uniform fast fourier transform) package is 30 times slower than its [torch counterpart](https://github.com/mmuckley/torchkbnufft#computation-speed) which I translated. This currently removes the possibility of training neural networks using the non-uniform fourier transform in TensorFlow. |
st205765 | If I want to do a multi-label text classification task, not multi-class classification, and my data is in this format:
1 this is a test. 0,0,1,0
2 this is another test 0,1,1,1
3 one more test 1,0,0,1
How should I prepare my data so that the Keras preprocessing API can easily create a tf.data.Dataset from it? For single-label classification, I can use the format below (one directory per class) from the Keras/TF tutorial. But if my task is multi-label classification, how should I go about this and make tf.keras.preprocessing.text_dataset_from_directory still work with my data?
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    'aclImdb/train',
    batch_size=batch_size,
    validation_split=0.2,
    subset='training',
    seed=seed)
class_names = raw_train_ds.class_names
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE) |
st205766 | You cannot use this directly for this kind of multi-label data.
See this example; although it is for images, it is quite the same:
stackoverflow.com
How to manually specify class labels in keras flow_from_directory? (asked by Malte on 29 Mar 17)
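In practice, you can also skip the directory layout entirely and build the dataset from parallel lists of texts and multi-hot label vectors; a hedged sketch using the toy rows from the question:
import tensorflow as tf

texts = ["this is a test.", "this is another test", "one more test"]
labels = [[0, 0, 1, 0], [0, 1, 1, 1], [1, 0, 0, 1]]  # multi-hot targets

ds = tf.data.Dataset.from_tensor_slices(
    (tf.constant(texts), tf.constant(labels, dtype=tf.float32)))
ds = ds.shuffle(len(texts)).batch(2)
# pair this with a sigmoid output layer + binary_crossentropy for multi-label |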
st205767 | I need to implement LayerDrop in a TensorFlow Transformer. Can someone guide me on how to do that?
Reference paper: https://arxiv.org/pdf/1909.11556.pdf
Thanks!! |
st205768 | You can take a look at the HuggingFace implementation in different TF models.
E.g. TFBart:
github.com
huggingface/transformers/blob/master/src/transformers/models/bart/modeling_tf_bart.py#L755:L762
# The tf.debugging asserts are not compliant with XLA then they
# have to be disabled in other modes than eager.
if inputs["head_mask"] is not None and tf.executing_eagerly():
    tf.debugging.assert_equal(
        shape_list(inputs["head_mask"])[0],
        len(self.layers),
        message=f"The head_mask should be specified for {len(self.layers)} layers, but it is for {shape_list(inputs['head_mask'])[0]}.",
    )
# encoder layers
for idx, encoder_layer in enumerate(self.layers):
    if inputs["output_hidden_states"]:
        encoder_states = encoder_states + (hidden_states,)
    # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
    dropout_probability = random.uniform(0, 1)
    if inputs["training"] and (dropout_probability < self.layerdrop):  # skip the layer
        continue
    hidden_states, attn = encoder_layer(
        hidden_states, |
st205769 | Hello @Bhack,
Thanks for your reply. I have tried to implement it this way, but this implementation does not work when decorating the train step with tf.function(...), since new variables for a few layers won’t get created during the first step, and tf.function(...) doesn’t allow us to create variables in later steps.
Just a side note: this solution works perfectly in eager mode though.
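One workaround sketch (hedged): force the optimizer to create its slot variables for all trainable variables once, eagerly, before the tf.function traces; with plain Adam, a zero-gradient update leaves the weights unchanged:
# run once, eagerly, after the model's variables are built
zero_grads = [tf.zeros_like(v) for v in model.trainable_variables]
optimizer.apply_gradients(zip(zero_grads, model.trainable_variables)) |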
st205770 | here is the error message:
Traceback (most recent call last):
File "main.py", line 93, in <module>
main(args)
File "main.py", line 82, in main
verbose="auto",
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 1183, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 950, in _call
return self._stateless_fn(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3022, in __call__
filtered_flat_args) = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3444, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 3289, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 999, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 672, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py", line 986, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:855 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:845 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1285 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2833 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3608 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:838 run_step **
outputs = model.train_step(data)
/content/gsoc-wav2vec2/src/wav2vec2/modeling.py:236 train_step
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:636 apply_gradients
self._create_all_weights(var_list)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:823 _create_all_weights
self._create_slots(var_list)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/adam.py:124 _create_slots
self.add_slot(var, 'm')
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:913 add_slot
initial_value=initial_value)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:262 __call__
return cls._variable_v2_call(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:256 _variable_v2_call
shape=shape)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:67 getter
return captured_getter(captured_previous, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3523 creator
return next_creator(**kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:67 getter
return captured_getter(captured_previous, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3523 creator
return next_creator(**kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:67 getter
return captured_getter(captured_previous, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3523 creator
return next_creator(**kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:67 getter
return captured_getter(captured_previous, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:769 invalid_creator_scope
"tf.function-decorated function tried to create "
ValueError: tf.function-decorated function tried to create variables on non-first call. |
st205771 | Probably you could have some impact with:
github.com/tensorflow/tensorflow
Lifting variable on retrace (tensorflow:master ← bhack:patch-18, opened May 19, 2021 by bhack, +68 −3)
Explore the effect on tests to fix: https://github.com/tensorflow/tensorflow/issues/27120
github.com/tensorflow/tensorflow
tf.function-decorated function tried to create variables on non-first call (issue, opened Mar 26, 2019 by ericpts; TF 2.0, comp:ops, type:bug)
A function that works in eager mode fails when annotated with tf.function, complaining about variable creation on a non-first call, even though the function is always called with different parameters.
But I’ve not tested your specific case with this new flag.
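For reference, one pattern that sidesteps this error (a sketch under assumptions, not the exact fix for this model): create every sub-layer in __init__ so all variables exist at the first trace, and make the skip decision a TF op via tf.cond instead of a Python continue:
import tensorflow as tf

class LayerDropStack(tf.keras.layers.Layer):
    """Sketch of a LayerDrop-style stack that stays tf.function-friendly."""

    def __init__(self, num_layers=6, units=256, layerdrop=0.1, **kwargs):
        super().__init__(**kwargs)
        self.layerdrop = layerdrop
        # All sub-layers (and thus all variables) are created up front,
        # never lazily inside call().
        self.input_proj = tf.keras.layers.Dense(units)
        self.blocks = [tf.keras.layers.Dense(units, activation="relu")
                       for _ in range(num_layers)]

    def call(self, x, training=False):
        x = self.input_proj(x)  # fixed width so kept/skipped shapes match
        for block in self.blocks:
            if training:
                # TF op instead of Python random: re-evaluated every step
                # inside the traced graph, unlike random.uniform(0, 1).
                keep = tf.random.uniform([]) >= self.layerdrop
                x = tf.cond(keep, lambda b=block, h=x: b(h), lambda h=x: h)
            else:
                x = block(x)
        return x

Because every variable exists from the first step, the optimizer can create its slot variables during the initial trace and never needs to create new ones on a later call. |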
st205772 | I have some tf.summary.scalar() calls in my Keras model, and I can see them logged with the TensorBoard callback. But if I save the model and load it back, I can no longer see the tf summaries. Is this expected behavior? Thx!
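For context, a minimal sketch of the kind of setup described, assuming the scalar is logged from a custom layer and the TensorBoard callback supplies the default summary writer and step:
import tensorflow as tf

class LoggingLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        # Written only while a default writer/step is active,
        # e.g. during fit() with the TensorBoard callback.
        tf.summary.scalar("activation_mean", tf.reduce_mean(inputs))
        return inputs |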
st205773 | Hi!
I am a complete beginner in TensorFlow, so please excuse my noob question.
I am trying to extract snippets of text from a larger text file, e.g. extract “Stanford University” from “Jim is a smart guy. He studied at Stanford University in his 20s.” The solution must also work with languages other than English (at least Finnish).
I searched online, but wasn’t able to find any examples that fit my requirements. Could somebody give me an example or help me get started with this? I already have a dataset with which I managed to train a text classification model that worked well. Now I just need to implement it in a way that allows me to extract snippets similar to those in the dataset.
Thanks in advance! |
st205774 | As I understand, you are trying to implement Named Entity Recognition (NER), also known as Entity Extraction.
I don’t think there’s a tutorial on tensorflow.org, but I found some available from the community.
Any insights, @markdaoust? |
st205775 | It could also be thought of as a text generation (summarization) task.
Or, if you know you want a snippet that exists in the input, you could run some sort of attention over the input to choose the start/end tokens.
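To make that concrete, a hedged sketch of a start/end span-extraction head; the BiLSTM encoder and all sizes are illustrative assumptions, not part of the original suggestion:
import tensorflow as tf

vocab_size, seq_len, hidden = 10000, 128, 64  # illustrative sizes
inputs = tf.keras.Input(shape=(seq_len,), dtype=tf.int32)
x = tf.keras.layers.Embedding(vocab_size, hidden)(inputs)
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(hidden, return_sequences=True))(x)
# One logit per token for "the span starts here" and one for "the span ends here".
start = tf.keras.layers.Softmax()(tf.keras.layers.Flatten()(
    tf.keras.layers.Dense(1)(x)))
end = tf.keras.layers.Softmax()(tf.keras.layers.Flatten()(
    tf.keras.layers.Dense(1)(x)))
model = tf.keras.Model(inputs, [start, end])
# Train with the integer start/end positions as targets.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy") |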
st205776 | We were talking about taggers at:
Is there an Android equivalent to the Apple Word Tagger Model? General Discussion
On the iOS version of our app we are using the Apple Word Tagger Model so that it can assist us with parsing some complex sentences and that way we can extract the parts that we actually want. It is working fairly well.
I need something similar on Android.
Basically I would feed it a large sample of sentences and a classification for every word in the sentence and once it learns those I can pass it some new sentences and it’ll tell me the classification of each of the words.
A similar example… |
st205777 | Yeah, this is similar to what I want. I need to be able to assign categories to different words or sets of words, then feed it a large piece of text and have it figure out which category each word in that text belongs to. It works fairly well with the Apple Word Tagger on iOS, even with a small set of training data. |
st205778 | Follow-up on this: a tutorial that does what you want was published on keras.io today:
Named Entity Recognition using Transformers
Very good timing! |
st205779 | I have downloaded SSD ResNet50 V1 and converted it to a TFLite model since I am going to use it on a Jetson Nano. The loading time is about 226 seconds.
If I do the same operation with MobileNet V2, it takes about 195 seconds to load the model.
Of course, over 3 minutes for an application to get ready is a little high, in my opinion. If, for example, the application crashes and has to be restarted, it can already be too late to make predictions, as the object might be gone.
Is there any way I can improve the loading time? |
st205780 | Hi @TensorOverflow,
I got a response from a TensorFlow Advocate:
“Jetson Nano uses TF for inference, not TFLite nor TFLite Micro.
TF has a benchmark tool: GitHub - tensorflow/benchmarks: A benchmark framework for Tensorflow
As I’m not familiar with Jetson Nano, I’m not sure if this tool can run on the device, but it’s worth a try.
If one wants to do object detection on an IoT device, I’d advise them to use Coral. AFAIK, Coral doesn’t have this slow initialization issue.” |
st205781 | Have you tried to use TF with TensorRT on the Jetson Nano?
NVIDIA Developer – 5 Apr 16
NVIDIA TensorRT
NVIDIA TensorRT NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. Get Started...
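For reference, a hedged sketch of a TF-TRT conversion; the paths are assumptions, and it requires a TensorRT-enabled TensorFlow build such as NVIDIA's JetPack wheel:
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(precision_mode="FP16")
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="ssd_resnet50_saved_model",  # assumption: your model dir
    conversion_params=params)
converter.convert()
converter.save("ssd_resnet50_trt")  # later: tf.saved_model.load("ssd_resnet50_trt") |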
st205782 | I have an input pipeline whose input I need to update regularly, so I use TFRecordDataset and thought I would just need to update the file to update the pipeline. However, it looks like the pipeline automatically caches the dataset, even though I didn’t use the cache() method. Can anyone help me figure out what is making my pipeline cache the dataset automatically?
Below is my pipeline:
ds = tf.data.TFRecordDataset(os.path.join(self.data_path, file_name))
ds = ds.map(self.decode_fn(is_train),
            num_parallel_calls=tf.data.experimental.AUTOTUNE)
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
train_dataflow = ds.with_options(options)
train_ds = train_dataflow.repeat().batch(
    self.batch_size, drop_remainder=True
).map(
    autoaug_batch_process_map_fn,
    num_parallel_calls=tf.data.experimental.AUTOTUNE
).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
train_input_iterator = (
    self.strategy.experimental_distribute_dataset(
        train_ds).make_initializable_iterator())
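For reference, a hedged sketch of one way to guarantee fresh reads, assuming a custom eager training loop: rebuild the dataset each epoch (instead of holding one repeat()-ed iterator) so the TFRecord file is re-opened:
import tensorflow as tf

def make_dataset(path, batch_size, decode_fn):
    # Building the dataset object freshly re-opens the TFRecord file,
    # so edits on disk are visible the next time it is iterated.
    ds = tf.data.TFRecordDataset(path)
    ds = ds.map(decode_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return ds.batch(batch_size, drop_remainder=True).prefetch(
        tf.data.experimental.AUTOTUNE)

for epoch in range(num_epochs):                      # num_epochs: your own value
    for batch in make_dataset(path, 32, decode_fn):  # path/decode_fn assumed
        train_step(batch)                            # train_step: your own function |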
st205783 | I am running a script on a GPU and the output has “^H” and “^M” characters interspersed between the Epoch and the ETA, loss, and error values.
Is this an issue? Why is it happening, and can I suppress or stop it?
Sorry in advance, I am new to the forum and to the program. |
st205784 | Hi @sportsfanspaceman
If you could post the code snippet and the part of the output showing the issue, it would be easier to understand the problem and help you.
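For reference, ^H (backspace) and ^M (carriage return) are typically what the Keras progress bar emits when stdout is redirected to a file; a hedged workaround, assuming a standard fit call, is one line per epoch:
model.fit(x_train, y_train, epochs=10, verbose=2)  # model/x_train/y_train assumed |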
st205785 | I need to create a virtual chatbot. The bot should be multilingual and be able to interact with its user via text as well as voice.
It needs to run on an Android device.
Can this be implemented completely using TensorFlow?
Any ideas/suggestions would be very helpful!
Thanks. |
st205786 | Yes, but you need to build many things around TF to create a chatbot service.
I suggest you take a look at this overview:
Deconstructing Chatbots - An Overview
so you can figure out how a real E2E chatbot system works.
There are also other videos at:
Google Developers
Build chatbots with Dialogflow | Google Developers
Learn to build chatbots with Dialogflow, and create a great conversational experience for users with BigQuery, Cloud Functions, and Stackdriver.
If you want to experiment a little bit with a specialized library, check:
github.com
deepmipt/DeepPavlov
An open source library for deep learning end-to-end dialog systems and chatbots. |
st205787 | Hi everyone,
At SIG JVM, we have just decided to stop supporting and building native TensorFlow MKL-enabled artifacts for the following reasons:
At pretty much every new release of TensorFlow, the MKL build breaks on various platforms, and it requires some gymnastics on our side to get it working again (if we are even able to).
We have not investigated the reasons much, but performance with MKL was often many times worse than without it.
That being said, if anyone here has insights to share about the actual status of MKL in TensorFlow, and/or ideas on how we can continue to support it without trouble, that would be greatly appreciated.
Thanks!
Karl |
st205788 | Hi @karllessard
Back in TensorFlow 1.12.x I did a benchmark via a custom build of TensorFlow with mkl and opt flags. To build the Java part I just followed these instructions: tensorflow/README.md at master · tensorflow/tensorflow · GitHub
In my benchmark (training NER), the Intel Cascade Lake w/MKL was close to, and sometimes better than, GPU (using the system’s memory, it could have a larger batch size).
That being said, I’ve never tried testing inference. But training was much faster than a native CPU build on newer CPU architectures. |
st205789 | Thanks @MaziyarPanahi, can you tell me on which platform (OS) you observed this performance at that time? |
st205790 | Absolutely! These are the platforms where I observed improvements:
Dell PowerEdge C4130 - Ubuntu 16.04 LTS - Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
AWS p3.8xlarge - Ubuntu 18.04 LTS - Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
AWS c5.12xlarge - Ubuntu 18.04 LTS - 2nd generation Intel Xeon Scalable Processors (Cascade Lake) |
st205791 | It looks like the default build of TF Core does have something of “MKL” in it, just not enabled by default:
Medium – 14 May 21
Leverage Intel Deep Learning Optimizations in TensorFlow
Set a Single Environment Variable to Get up to 3x Performance Boost
Setting TF_ENABLE_ONEDNN_OPTS=1 with default builds of TF Java might just do the same as Python.
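In Python, setting that flag would look like this minimal sketch (presumably the same environment variable applies to a JVM process launched with it set):
import os
# Must be set before TensorFlow is imported for the flag to take effect.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf
print(tf.__version__) |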
st205792 | Really interesting… I’ll give it a try; if anyone does before me, please share your benchmarks! |
st205793 | I studied the example given by the TensorFlow documentation for the MovieLens dataset, but it never explained how to handle boolean and array data types and how to create embeddings for them.
So I have written some code, but I am not able to understand where I am wrong:
import numpy as np
from math import ceil
import tensorflow as tf
from tensorflow_datasets.core import download
import tensorflow_recommenders as tfrs
import tensorflow_datasets as tfds

tf.autograph.set_verbosity(10)
tf.get_logger().setLevel('CRITICAL')


class DCN(tfrs.Model):
    def __init__(self, use_cross_layer, deep_layer_sizes, datainfo, projection_dim=None):
        super().__init__()
        self.embedding_dimension = 32
        self._all_features = datainfo['all_features']
        self._embeddings = {}
        # Compute embeddings for string features.
        for feature_name in datainfo['str_features']:
            vocabulary = datainfo['vocabularies'][feature_name]
            self._embeddings[feature_name] = tf.keras.Sequential([
                tf.keras.layers.experimental.preprocessing.StringLookup(
                    vocabulary=vocabulary, mask_token=None),
                tf.keras.layers.Embedding(
                    len(vocabulary) + 1, self.embedding_dimension)
            ], name=feature_name)
        for feature_name in datainfo['int_lookup_feature'] + datainfo['list_features']:
            vocabulary = datainfo['vocabularies'][feature_name]
            self._embeddings[feature_name] = tf.keras.Sequential([
                tf.keras.layers.experimental.preprocessing.IntegerLookup(
                    vocabulary=vocabulary, mask_token=None),
                tf.keras.layers.Embedding(
                    len(vocabulary) + 1, self.embedding_dimension)
            ], name=feature_name)
        for feature_name in datainfo['bool_features']:
            self._embeddings[feature_name] = tf.keras.Sequential([
                tf.keras.layers.Activation('relu'),
                tf.keras.layers.Dense(units=1)
            ], name=feature_name)
        if use_cross_layer:
            self._cross_layer = tfrs.layers.dcn.Cross(
                projection_dim=projection_dim,
                kernel_initializer="glorot_uniform")
        else:
            self._cross_layer = None
        self._deep_layers = [tf.keras.layers.Dense(layer_size, activation="relu")
                             for layer_size in deep_layer_sizes]
        self._logit_layer = tf.keras.layers.Dense(1)
        self.task = tfrs.tasks.Ranking(
            loss=tf.keras.losses.MeanSquaredError(),
            metrics=[tf.keras.metrics.RootMeanSquaredError("RMSE")]
        )

    def call(self, features):
        # Concatenate embeddings
        embeddings = []
        for feature_name in self._all_features:
            embedding_fn = self._embeddings.get(feature_name, None)
            if embedding_fn is not None:
                embeddings.append(embedding_fn(features[feature_name]))
        x = tf.concat(embeddings, axis=1)
        # Build Cross Network
        if self._cross_layer is not None:
            x = self._cross_layer(x)
        # Build Deep Network
        for deep_layer in self._deep_layers:
            x = deep_layer(x)
        return self._logit_layer(x)

    def compute_loss(self, features, training=False):
        labels = features.pop("user_rating")
        scores = self(features)
        return self.task(
            labels=labels,
            predictions=scores,
        )


def main():
    tf.random.set_seed(42)
    ds = tfds.load("movie_lens/100k-ratings", split="train")
    ds = ds.map(lambda x: {
        "movie_id": x["movie_id"],
        "user_id": x["user_id"],
        "user_rating": x["user_rating"],
        "user_gender": int(x["user_gender"]),
        "user_zip_code": x["user_zip_code"],
        "user_occupation_text": x["user_occupation_text"],
        "bucketized_user_age": int(x["bucketized_user_age"]),
        "movie_genres": x["movie_genres"],
    })
    dataLen = len(ds)
    trainLen = ceil(dataLen * 0.8)
    testLen = dataLen - trainLen
    shuffled = ds.shuffle(100, reshuffle_each_iteration=False)
    str_features = ["movie_id", "user_id",
                    "user_zip_code", "user_occupation_text"]
    int_lookup_feature = ["bucketized_user_age"]
    list_features = ["movie_genres"]
    bool_features = ["user_gender"]
    all_features = str_features + \
        bool_features + list_features + int_lookup_feature
    vocabularies = {}
    dataValues = {}
    for feature_name in str_features + int_lookup_feature:
        vocab = shuffled.map(lambda x: x[feature_name])
        vocabularies[feature_name] = np.unique(
            [i.numpy() for i in list(vocab)]).tolist()
    for feature_name in list_features:
        vocab = shuffled.map(lambda x: x[feature_name])
        vocabularies[feature_name] = np.unique(
            np.concatenate(list(vocab))).tolist()
    datainfo = {
        'all_features': all_features,
        'str_features': str_features,
        'list_features': list_features,
        'int_lookup_feature': int_lookup_feature,
        'bool_features': bool_features,
        'vocabularies': vocabularies,
        'dataValues': dataValues
    }
    train = shuffled.take(trainLen)
    test = shuffled.skip(trainLen).take(testLen)
    cached_train = train.shuffle(100_000).batch(8192).cache()
    cached_test = test.batch(4096).cache()
    epochs = 8
    learning_rate = 0.01
    use_cross_layer = True
    deep_layer_sizes = [192, 192]
    projection_dim = None
    model = DCN(use_cross_layer=use_cross_layer,
                deep_layer_sizes=deep_layer_sizes,
                projection_dim=projection_dim,
                datainfo=datainfo)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate))
    model.fit(cached_train, epochs=epochs, verbose=True)
    metrics = model.evaluate(cached_test, return_dict=True)
    print(metrics)


main()
It gives me a wrong-dimensions error in one of my layers, but I could not figure out which layer.
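A hedged guess at the mismatch (not verified against this exact script): movie_genres is a variable-length list, so its embedding output is 3-D while the other embeddings are 2-D, and the user_gender branch feeds a scalar into Dense(1). Shape-safe variants might look like this:
# List feature: pool the per-genre embeddings down to one 2-D vector.
self._embeddings["movie_genres"] = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.IntegerLookup(
        vocabulary=vocabulary, mask_token=None),
    tf.keras.layers.Embedding(len(vocabulary) + 1, self.embedding_dimension),
    tf.keras.layers.GlobalAveragePooling1D(),  # (batch, seq, dim) -> (batch, dim)
], name="movie_genres")

# Boolean feature: an explicit (batch, 1) float column that can be concatenated.
self._embeddings["user_gender"] = tf.keras.Sequential([
    tf.keras.layers.Lambda(
        lambda x: tf.cast(tf.reshape(x, (-1, 1)), tf.float32)),
], name="user_gender")

Note also that batching a variable-length list feature with a plain .batch() can fail by itself; padded_batch (or ragged batching) is usually needed for such features. |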
st205794 | Hi everybody,
I want to average the weights of three identical models trained with different training sets, and then create a new model with the averaged weights coming from these three models.
My models have the same structure.
I’ve tried with this code, but it seems the weights of the new model I want to create remain equal to those of the last model I fit.
What should I do to make it work?
Thanks
import numpy as np
from tensorflow.keras.models import load_model, clone_model

def load_all_models():
    all_models = list()
    for i in range(3):
        # the original used str(n) with an undefined n; the loop index
        # is needed here so that three distinct files get loaded
        filename = 'Conv_autoencoder_' + str(i) + '_layer_.h5'
        model = load_model(filename)
        all_models.append(model)
        print('Loaded model %s' % filename)
    return all_models

# create a model from the weights of multiple models
def model_weight_ensemble(members, weights):
    n_layers = len(members[0].get_weights())
    avg_model_weights = list()
    for layer in range(n_layers):
        # gather this layer's weights from every member model
        layer_weights = np.array([model.get_weights()[layer] for model in members])
        # weighted average of weights for this layer
        avg_layer_weights = np.average(layer_weights, axis=0, weights=weights)
        # store average layer weights
        avg_model_weights.append(avg_layer_weights)
    # create a new model with the same structure
    model = clone_model(members[1])
    # set the averaged weights in the new model
    model.set_weights(avg_model_weights)
    model.compile(optimizer='SGD', loss='mean_squared_error')
    return model

members = load_all_models()
print('Loaded %d models' % len(members))
n_models = len(members)
weights = [1 for i in range(1, n_models + 1)]  # equal weighting
autoencoder_global = model_weight_ensemble(members, weights)
print(autoencoder_global.get_weights()[12])
print('------------------------------------------------------------------')
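A quick hedged sanity check that the averaging actually took effect (comparing the result against the last member):
# Every layer should differ from the last member unless all members are equal.
for avg_w, last_w in zip(autoencoder_global.get_weights(),
                         members[-1].get_weights()):
    print(np.allclose(avg_w, last_w)) |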
st205795 | I’m working on a neural network implementation in C++.
In Python, we built the model sequentially via Keras.
I want to train in C++; is there any good example or explanation?
The TensorFlow C++ manual is too hard to follow, so it’s quite difficult for me as a beginner to understand.
From what I understand, first, in Python, the structure of the neural network is saved as a .pb file.
Second, the training happens in C++. An example of this second part is desperately needed.
Looking forward to advice from anyone who knows or has experience. Thank you in advance. |
st205796 | It is really not supported/documented for end users.
You could check if this tutorial is still valid:
Medium – 10 Jun 19
Creating a TensorFlow CNN in C++ (Part 2)
In this post I’ll show how to create, train and test a Convolutional Neural Network using the TensorFlow C++ API
More generally, what is your use case? Why do you need to train the model in C++? |
st205797 | @Bhack 's question is very important: what is the use case?
because depending on the answer, there might be easier tooling to use to achieve what you want. |
st205798 | Would you please give me a simple example of working with tf.raw_ops.DenseToCSRSparseMatrix? Here is my latest try, but it didn’t work:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

ss = tf.constant([[1, 2, 3, 4, 5], [0, 0, 0, 2, 1]], dtype=tf.float32)
indices = tf.where(tf.not_equal(ss, 0))
b = tf.raw_ops.DenseToCSRSparseMatrix(dense_input=ss, indices=indices)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(b)) |
st205799 | I think this is deprecated.
Have you already checked:
TensorFlow
tf.sparse.from_dense | TensorFlow Core v2.5.0
Converts a dense tensor into a sparse tensor.
And, more generally, the sparse tensor guide:
TensorFlow
Working with sparse tensors | TensorFlow Core
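For reference, a minimal sketch of the suggested tf.sparse.from_dense in TF 2.x eager mode (note it produces a COO SparseTensor, not a CSR matrix):
import tensorflow as tf  # TF 2.x, eager execution

ss = tf.constant([[1, 2, 3, 4, 5], [0, 0, 0, 2, 1]], dtype=tf.float32)
sp = tf.sparse.from_dense(ss)
print(sp.indices.numpy())               # positions of the non-zero entries
print(sp.values.numpy())                # the non-zero values
print(tf.sparse.to_dense(sp).numpy())   # round-trip back to dense |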