st207300 | Hi folks,
I hope you are doing well. I wanted to tell y’all about the new ConvNeXt models [1] I have been converting for the past few days. Finally, they are available on TF-Hub [2]. The collection contains a total of 30 models that are categorised into two groups: classifier and feature extractor.
These models are NOT black-box SavedModels, i.e., they can be fully expanded into tf.keras.Model objects, and one can call all the utility functions on them (example: .summary()). TF-Hub links to the models,
conversion code, off-the-shelf classification, and fine-tuning code are available in the GitHub repository [3]. There are some points in the repository that call for contributions, so I'm happy to welcome them if
anyone's interested.
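For example, loading one of the classifier SavedModels back as a plain Keras model might look like this (a minimal sketch; the handle below is illustrative, so check the TF-Hub collection [2] for the real ones):
import tensorflow as tf
import tensorflow_hub as hub

# Illustrative handle (an assumption): look up the real ones in the collection [2].
handle = "https://tfhub.dev/sayakpaul/convnext_tiny_1k_224/1"

# hub.resolve() downloads the SavedModel and returns its local path; since
# these are full Keras SavedModels, they load back as tf.keras.Model objects.
model = tf.keras.models.load_model(hub.resolve(handle))
model.summary()  # utility functions work as on any Keras model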
A huge shoutout to the ML-GDE team for providing GCP credits that made the validation of these models [4] possible. Also, thanks to @vasudevgupta, @lgusm, and Willi Gierke for helping out. Happy to address any feedback and answer any questions.
References:
[1] https://arxiv.org/abs/2201.03545
[2] TensorFlow Hub
[3] GitHub - sayakpaul/ConvNeXt-TF: Includes PyTorch -> Keras model porting code for ConvNeXt family of models with fine-tuning and inference notebooks.
[4] ConvNeXt-TF/i1k_eval at main · sayakpaul/ConvNeXt-TF · GitHub |
st207301 | This is awesome @Sayak_Paul Thanks! And thanks @vasudevgupta and @lgusm and Willi |
st207302 | @Sayak_Paul @lgusm How can I reproduce ConvNeXt in Keras? Does your GitHub repo hold an off-the-shelf classifier to do that? What should the steps be? I want to train the model on ImageNet and run inference on a validation set. |
st207303 | The ImageNet-1k evaluation scripts are here: ConvNeXt-TF/i1k_eval at main · sayakpaul/ConvNeXt-TF · GitHub.
For training on ImageNet-1k, you’d need to follow the paper and implement the necessary utilities. |
st207304 | I am trying to run experiments on multiple datasets. Some are more imbalanced than others. In order to ensure fair reporting, we compute the F1-score on test data. In most machine learning models, we train and validate the model via an accuracy metric. However, this time I decided to train and validate the model on an F1-score metric. Technically, there should be no problems, in my opinion. However, I am wondering if this is the correct approach.
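For reference, this is roughly how F1 can be tracked during training/validation (a minimal sketch, assuming TensorFlow Addons is available and `model` is the compiled binary classifier):
import tensorflow as tf
import tensorflow_addons as tfa  # assumption: TFA is installed; it ships an F1 metric

# F1 is not differentiable, so it stays a monitored metric while the loss
# remains cross-entropy.
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tfa.metrics.F1Score(num_classes=1, threshold=0.5)],
)

# Model selection can then monitor F1 instead of accuracy:
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_f1_score", mode="max")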
Second, when I use this method (training, validation on F1-score), I receive a higher loss error and a lower F1-score on training data than on validation data. I’m not sure why. |
st207305 | Nafees:
Second, when I use this method (training, validation on F1-score), I receive a higher loss error and a lower F1-score
What do you mean by this? |
st207306 | Hi Folks.
I have some queries specific to the TF Object Detection API.
Basically, I am trying to fine-tune an SSD MobileNetV2 from the TF OD API model zoo to detect traffic signs, so first I fine-tuned on GTSDB (around 500 samples) and the resulting model wasn't very good.
Then, I tried to train on an augmented version (using rotation, translation, shearing, etc.) of GTSDB (4500 samples). This time, after about 9k iterations, the validation loss starts increasing and never comes back down.
I am assuming that implies overfitting, which according to me could be due to:
train and eval data being very different – I checked and this isn't the case
learning rate too high – I reduced it by a factor of 10 and still overfitting occurs
model might be too complex – the same model was getting properly trained on un-augmented GTSDB with only 500 train samples, so this model shouldn't be too complex for the augmented GTSDB, which has around 4500 samples
Augmented dataset might not have been properly created – I converted all the annotated images into a video and checked it manually; the dataset seems fine
I am trying to think of other reasons and would appreciate any help in that regard.
Note: I used imgaug library for data augmentation.
For reference, I have attached my loss curves and config file
[Attached figures: classification, localization, regularization, and total loss curves]
Config file:
model {
  ssd {
    inplace_batchnorm_update: true
    freeze_batchnorm: false
    num_classes: 9
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    encode_background_as_zeros: true
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        class_prediction_bias_init: -4.6
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            random_normal_initializer {
              stddev: 0.01
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.97,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v2_keras'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.97,
          epsilon: 0.001,
        }
      }
      override_base_feature_extractor_hyperparams: true
    }
    loss {
      classification_loss {
        weighted_sigmoid_focal {
          alpha: 0.75,
          gamma: 2.0
        }
      }
      localization_loss {
        weighted_smooth_l1 {
          delta: 1.0
        }
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    normalize_loc_loss_by_codesize: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}
train_config: {
  fine_tune_checkpoint_version: V2
  fine_tune_checkpoint: "./sprint1_ssd_mobilenetv2_try2/pretrained_model/mobilnetv2/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
  batch_size: 24
  sync_replicas: true
  startup_delay_steps: 0
  replicas_to_aggregate: 8
  num_steps: 50000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: .0008
          total_steps: 50000
          warmup_learning_rate: 0.00013333
          warmup_steps: 1000
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
}
train_input_reader: {
  label_map_path: "./sprint1_ssd_mobilenetv2_try2/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "./sprint1_ssd_mobilenetv2_try2/gtsdb_stop_train_9.record"
  }
}
eval_config: {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
  batch_size: 1
}
eval_input_reader: {
  label_map_path: "./sprint1_ssd_mobilenetv2_try2/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "./sprint1_ssd_mobilenetv2_try2/gtsdb_stop_val_9.record"
  }
} |
st207307 | Have you tried adding another, non-augmented validation dataset when you train on the augmented version? |
st207308 | I augmented the GTSDB in one go, and then split it 90/10 train/test with random shuffling. Is this what you were asking? |
st207309 | I tried with 80/20 instead of 90/10 but didn't see much difference in the loss curves.
As for shuffling across test and train sets, I checked the number of samples per class for the 9 classes:
For test set:
{'0': 96,
'1': 81,
'2': 89,
'3': 96,
'4': 97,
'5': 104,
'6': 102,
'7': 125,
'8': 148}
For train set:
{'0': 406,
'1': 319,
'2': 316,
'3': 352,
'4': 443,
'5': 408,
'6': 516,
'7': 454,
'8': 512}
Seems appropriately shuffled to me, I am not sure if there are other things I should analyse.
P.S.: The annotation counts per class for the original un-augmented dataset:
4, 79, 81, 30, 68, 53, 41, 57, 32 |
st207310 | I tested on around 50 samples of the original un-augmented data, and the model is able to detect most of the signs properly (usually with a confidence of 90% or above, compared to my earlier models trained on the original dataset, which almost never had a confidence above 50%), at least the ones clearly visible to the naked eye. |
st207311 | Have you tried reducing your augmentation variance?
E.g., you could start by creating a first train/val dataset with reduced augmentation hyperparameter ranges.
If it works well, you could try to extend the range a bit, and so on. |
st207312 | That's what I have been trying all day today (removed certain augmentations like cropping and reduced others like rotation angle). During the current training, the model accuracy on the training set seems to be increasing fast, but not as fast as when I first trained it, so I guess doing the aforementioned has slowed down or delayed overfitting, but I fear it's still going to occur.
I’ll still wait for the current training to complete before testing the model.
Thanks. |
st207313 | You really need to check that you have sampled uniformly between train and eval to cover the same augmentation hyperparameter range.
I also think that a 500-sample dataset could be a little small, even with augmentation. |
st207314 | arXiv: LaMDA: Language Models for Dialog Applications
We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it shows less improvements on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding. The first challenge, safety, involves ensuring that the model’s responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. We quantify safety using a metric based on an illustrative set of human values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze their helpfulness and role consistency.
[Figure: Table 3, "LaMDA Music", from the research paper]
Blog post:
Google AI Blog
LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything
Posted by Heng-Tze Cheng, Senior Staff Software Engineer, and Romal Thoppilan, Senior Software Engineer, Google Research, Brain Team |
st207315 | We covered this new model at the Machine Learning Singapore MeetUp, and I've uploaded the video "DeepMind's RETRO vs Google's LaMDA" - let me know if you like it |
st207316 | Can someone give a general overview of the model requests we are collecting on Keras-CV?
What kind of relationship will we have between this, Model Garden, the tf.keras.applications namespace, and (marginally) TF-Hub?
It would be really nice to disambiguate this topic a little, to avoid duplication, fragmentation, and confusion about the contribution path in the TF ecosystem, and to optimize external contributors' resources.
We already have some historically pinned tickets about Model Garden community requests and help-wanted requests at:
github.com/tensorflow/models
📄 Community requests: New paper implementations - opened Jun 6, 2020 by jaeyounkim (type:support, models:official)
This issue contains **all open requests for paper implementations requested by the community**.
We cannot guarantee that we can fulfill community requests for specific paper implementations.
If you'd like to contribute, **please add a comment to the relevant GitHub issue to express your interest in providing your paper implementation**.
Awesome external contributors will be nominated for [Google Open Source Peer Bonus](https://opensource.google/docs/growing/peer-bonus/).
Please also see our [contribution guidelines](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution) and [paper selection criteria](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution#model-selection).
## Computer Vision
| Paper | Conference | GitHub issue | Note |
--------|------------|--------------|------|
| ResNeXt: [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431) | CVPR 2017 | #6752 | |
| DenseNet: [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993) | CVPR 2017 | #8278 | |
| [Density estimation using Real NVP](https://arxiv.org/abs/1605.08803) | ICLR 2017 | #7848 | Need to migrate [TF 1 code](https://github.com/tensorflow/models/tree/master/research/real_nvp) to TF 2 |
| [Spatiotemporal Contrastive Video Representation Learning](https://arxiv.org/abs/2008.03800) | CVPR 2021 | #9993 | In progress (Internally) |
github.com/tensorflow/models
[Help wanted] Research paper implementations (Project Tracker) - opened Jun 21, 2020 by jaeyounkim (help wanted:paper implementation)
# Help wanted: Research paper code and models
This issue contains a list of the research papers we want to implement in TensorFlow 2 with help from the community.
If you'd like to contribute, please **add a comment to the relevant GitHub issue** or **create a new issue** to express your interest in providing your paper implementation.
Awesome external contributors will be nominated for [Google Open Source Peer Bonus](https://opensource.google/docs/growing/peer-bonus/).
Please also see our [contribution guidelines](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution) and [paper selection criteria](https://github.com/tensorflow/models/wiki/Research-paper-code-contribution#model-selection).
## Computer Vision
| Paper | GitHub issue | Status |
|-------|--------------|--------|
| FCOS: Fully Convolutional One-Stage Object Detection | #10275 | In progress |
| DarkPose: [Distribution Aware Coordinate Representation for Human Pose Estimation](https://arxiv.org/abs/1910.06278) | #8713 | In progress |
| MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/abs/1911.05722) | #8708 | Need contribution |
| YOLOv4 [Optimal Speed and Accuracy of Object Detection](https://arxiv.org/abs/2004.10934) | N/A | [In progress](https://github.com/tensorflow/models/tree/master/official/vision/beta/projects/yolo) |
## Natural Language Processing
| Paper | GitHub issue | Status |
|-------|--------------|--------|
| RoBERTa: [A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) | #8704 | Need contribution |
| RoFormer: Enhanced Transformer with Rotary Position Embedding | N/A | In progress |
| Longformer: The Long-Document Transformer | N/A | In progress |
| BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | N/A | In progress |
### Benchmark datasets
| Dataset | GitHub issue(s) | Status |
|----------|------------------|--------|
## Speech Recognition
| Paper | Conference | GitHub issue | Status |
|-------|------------|--------------|--------|
| Deep Speech 2: [End-to-End Speech Recognition in English and Mandarin](https://arxiv.org/abs/1512.02595) | ICML 2016 | #8702 | In progress |
Useful for context.
See also other community members' comments, like @sebastian-sz's:
github.com/keras-team/keras-cv
ResNet-RS block/layer - opened Jan 12, 2022 by LukeWood
Or our thread at:
github.com/keras-team/keras
Updating the ResNet-* weights - opened Dec 12, 2021 by sayakpaul (type:feature, Contributions welcome)
If you open a GitHub issue, here is our policy:
It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
The form below must be filled out.
**Here's why we have that policy:**.
Keras developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.
**System information**.
TensorFlow version (you are using): 2.7.0
Are you willing to contribute it (Yes/No) : Currently no
**Describe the feature and the current behavior/state**.
ResNets are arguably one of the most influential architectures in deep learning. Today, they are used in different capacities. For example, sometimes they act as strong baselines, sometimes they are used as backbones. Since their inception, their performance on ImageNet-1k, in particular, has improved quite a lot. I think it's time the ResNets under `tf.keras.applications` were updated to facilitate these changes.
**Will this change the current api? How?**
ResNet-RS (https://arxiv.org/abs/2103.07579) introduces slight architectural changes to the vanilla ResNet architecture (https://arxiv.org/abs/1512.03385). So, yes, there will be changes to the current implementation of ResNets (among other things) we have under `tf.keras.applications`. We could call it `tf.keras.applications.ResNet50RS`, for example. Following summarizes the performance benefits that ResNet-RS introduces to the final ImageNet-1k performance (measured on the `val` set):

<sub><a href=https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs#imagenet-checkpoints>Source</a></sub>
**Who will benefit from this feature?**
Keras users that use ResNets from `tf.keras.applications` for building downstream applications.
**[Contributing](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md)**
- Do you want to contribute a PR? (yes/no): Currently no
- If yes, please read [this page](https://github.com/keras-team/keras/blob/master/CONTRIBUTING.md) for instructions
- Briefly describe your candidate solution(if contributing):
/cc @thea @yarri-oss @lgusm @Luke_Wood @Jaehong_Kim @Scott_Zhu |
st207317 | Thanks Bhack for the question.
We will have a README/contribution guide available on the keras-cv GitHub project to provide more details about the differences between keras-cv, Model Garden, keras.applications, and TF-Hub. |
st207318 | P.S. I suppose we could extend this topic to keras-nlp as well, or more generally to any other multi-modal/omni network that is hard to constrain to a specific CV/NLP/etc. domain. |
st207319 | See also:
github.com/keras-team/keras-cv
Standardized / preferred way to implement blocks and models - opened Jan 27, 2022 by sebastian-sz
Given the models requirements are being gathered in the [discussion](https://github.com/keras-team/keras-cv/discussions/52), is there a preferred way to implement them?
There are multiple ways, to implement blocks and models:
1. `keras.applications` way -> Models and blocks are functional [Example](https://github.com/keras-team/keras/blob/2c48a3b38b6b6139be2da501982fd2f61d7d48fe/keras/applications/resnet.py#L440-L459).
2. Model Garden way -> Model is [functional](https://github.com/tensorflow/models/blob/c8a402fc7fc0cc391a9e3ac56fb7b3ea6f9d202e/official/vision/beta/modeling/backbones/resnet.py#L99), but blocks are [layer subclasses](https://github.com/tensorflow/models/blob/c8a402fc7fc0cc391a9e3ac56fb7b3ea6f9d202e/official/vision/beta/modeling/layers/nn_blocks.py#L56). (even though the model is a direct subclass, it does not override call method).
3. Blocks are layer subclasses, and models are model subclasses of `keras.layers.Layer` and `keras.Model` respectively. Both implement `call` method.
Each way has it's own benefits and drawbacks. Is one of the above preferred? Or maybe something entirely different? |
st207320 | Scott_Zhu:
We will have a README/contribution guide available on the keras-cv GitHub project to provide more details about the differences between keras-cv, Model Garden, keras.applications, and TF-Hub.
For Keras-CV, now we have:
github.com/keras-team/keras-cv
Initial commit for the keras-cv roadmap/community guideline - keras-team:master ← qlzh727:master, opened Jan 28, 2022 by qlzh727 (+125, -4)
Mostly copy/pasted from internal roadmap doc (with some modifications).
I will add some inline comments |
st207321 | Can TensorFlow/Keras be used for synchronous/holistic modeling of NN architecture/structure, types of neurons, weights, etc.? Are there examples to show that? |
st207322 | Do you have any non-Keras example/reference? Just to understand the topic a little. |
st207323 | We did some holistic NN modeling for some automotive companies, but when the know-how owners experienced the power and value of such models, they were not open to any publishing of this work…
Now we are looking for a common platform for our holistic modeling, and we wonder if TensorFlow/Keras could do the job… |
st207324 | Hmm… we just try to optimize/train the architecture, the size, and the weights of a model in an integrated "holistic" process, and I wonder if TensorFlow/Keras can provide special support for such a strategy.
The resulting effects (e.g., "adversarial robustness, parameter sparsity, and output stability") may be partially the same, but there is more… |
st207325 | P.S. If you are instead just looking for AutoML, you could check:
https://autokeras.com/
GitHub
GitHub - google/automl: Google Brain AutoML
For cloud solutions:
Google Cloud
AutoML Vision documentation | Google Cloud
Enables you to train machine learning models to classify your images according to your own defined labels.
Google Cloud
Vertex AI documentation | Google Cloud
Documentation for Vertex AI, a suite of machine learning tools that enables developers to train high-quality models specific to their business needs. |
st207326 | arXiv: The Efficiency Misnomer (Google Research)
Model efficiency is a critical aspect of developing and deploying machine learning models. Inference time and latency directly affect the user experience, and some applications have hard requirements. In addition to inference costs, model training also have direct financial and environmental impacts. Although there are numerous well-established metrics (cost indicators) for measuring model efficiency, researchers and practitioners often assume that these metrics are correlated with each other and report only few of them. In this paper, we thoroughly discuss common cost indicators, their advantages and disadvantages, and how they can contradict each other. We demonstrate how incomplete reporting of cost indicators can lead to partial conclusions and a blurred or incomplete picture of the practical considerations of different models. We further present suggestions to improve reporting of efficiency metrics.
A primer on cost indicators
One of the main considerations in designing neural network architectures is quality-cost tradeoff… In almost all cases, the more computational budget is given to a method, the better the quality of its outcome will be. To account for such a trade-off, several cost indicators are used in the literature of machine learning and its applications to showcase the efficiency of different models. These indicators take different points of view to the computational costs.
FLOPs: A widely used metric as the proxy for the computational cost of a model is the number of floating-point multiplication-and-addition operations… Alternative to FLOPs, the number of multiply-accumulate (MAC) operations as a single unit of operation is also used in the literature (Johnson, 2018). Reported FLOPs are usually calculated using theoretical values. Note that theoretical FLOPs ignores practical factors, like which parts of the model can be parallelized.
Number of Parameters: Number of trainable parameters is also used as an indirect indicator of
computational complexity as well as memory usage (during inference)… Many research works that study the scaling law…, especially in the NLP domain, use the number of parameters as the primary cost indicator…
Speed: Speed is one of the most informative indicators for comparing the efficiency of different models… In some setups, when measuring speed, the cost of the "pipeline" is also taken into account, which better reflects the efficiency in a real-world scenario. Note that speed strongly depends on hardware and implementation, so keeping the hardware fixed, or normalizing based on the amount of resources used, is the key for a fair comparison. Speed is often reported in various forms:
• Throughput refers to the number of examples (or tokens) that are processed within a specific
period of time, e.g., “examples (or tokens) per second”.
• Latency usually refers to the inference time (forward pass) of the model given an example or
batch of examples, and is usually presented as “seconds per forward pass”. The main point
about latency is that compared to throughput, it ignores parallelism introduced by batching
examples. As an example, when processing a batch of 100 examples in 1 second, throughput
is 100 examples per second, while latency is 1 second. Thus, latency is an important factor
for real-time systems that require user input.
• Wall-clock time/runtime measures the time spent to process a fixed set of examples by the
model. Usually, this is used to measure the training cost, for instance the total training time
up to convergence.
• Pipeline bubble is the time that computing devices are idle at the start and end of every
batch… This indirectly measures the speed of the non-pipeline parts
of the process.
• Memory Access Cost (MAC) corresponds to the number of memory accesses. It typically
makes up a large portion of runtime and is the actual bottleneck when running on modern
platforms with strong computational power such as GPUs and TPUs…
…
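As a concrete illustration of the throughput vs. latency distinction above, here is a minimal timing sketch (the model choice is arbitrary, and the numbers depend entirely on hardware, which is exactly the paper's point):
import time
import tensorflow as tf

model = tf.keras.applications.MobileNetV2()   # arbitrary example model
batch = tf.random.uniform([100, 224, 224, 3])

_ = model(batch, training=False)              # warm-up, excludes tracing cost
start = time.perf_counter()
_ = model(batch, training=False)
elapsed = time.perf_counter() - start

print(f"latency:    {elapsed:.3f} s for one batch of {batch.shape[0]}")
print(f"throughput: {batch.shape[0] / elapsed:.1f} examples/s")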
[Figure: Comparison of standard Transformers, Universal Transformers, and Switch Transformers in terms of the number of parameters, FLOPs, and throughput] |
st207327 | MLP-Mixer: An all-MLP Architecture for Vision (Tolstikhin et al., 2021) (Google)
Convolutional Neural Networks (CNNs) are the go-to model for computer vision.
Recently, attention-based networks, such as the Vision Transformer, have also
become popular. In this paper we show that while convolutions and attention are
both sufficient for good performance, neither of them are necessary. We present
MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs).
MLP-Mixer contains two types of layers: one with MLPs applied independently to
image patches (i.e. “mixing” the per-location features), and one with MLPs applied
across patches (i.e. “mixing” spatial information). When trained on large datasets,
or with modern regularization schemes, MLP-Mixer attains competitive scores on
image classification benchmarks, with pre-training and inference cost comparable
to state-of-the-art models. We hope that these results spark further research beyond the realms of well established CNNs and Transformers.
… our architecture can be seen as a very special CNN, which uses 1×1 convolutions
for channel mixing, and single-channel depth-wise convolutions of a full receptive field and parameter sharing for token mixing. However, the converse is not true as typical CNNs are not special cases of Mixer. Furthermore, a convolution is more complex than the plain matrix multiplication in MLPs as it requires an additional costly reduction to matrix multiplication and/or specialized implementation.
Despite its simplicity, Mixer attains competitive results. When pre-trained on large datasets (i.e., ∼100M images), it reaches near state-of-the-art performance, previously claimed by CNNs and Transformers, in terms of the accuracy/cost trade-off. This includes 87.94% top-1 validation accuracy on ILSVRC2012 “ImageNet” [13]. When pre-trained on data of more modest scale (i.e., ∼1– 10M images), coupled with modern regularization techniques [48, 53], Mixer also achieves strong performance. However, similar to ViT, it falls slightly short of specialized CNN architectures.
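To make the two layer types concrete, here is a minimal sketch of a single Mixer block in Keras (dimensions are illustrative; see the paper or the repo below for the full model):
import tensorflow as tf
from tensorflow.keras import layers

def mlp(x, hidden_dim, out_dim):
    x = layers.Dense(hidden_dim, activation="gelu")(x)
    return layers.Dense(out_dim)(x)

def mixer_block(x, tokens_mlp_dim, channels_mlp_dim):
    # x has shape (batch, num_patches, channels)
    num_patches, channels = x.shape[1], x.shape[2]

    # Token mixing: MLP applied across patches, separately per channel.
    y = layers.LayerNormalization()(x)
    y = layers.Permute((2, 1))(y)             # -> (batch, channels, num_patches)
    y = mlp(y, tokens_mlp_dim, num_patches)
    y = layers.Permute((2, 1))(y)             # -> (batch, num_patches, channels)
    x = x + y                                 # skip connection

    # Channel mixing: MLP applied independently to each patch.
    y = layers.LayerNormalization()(x)
    y = mlp(y, channels_mlp_dim, channels)
    return x + y                              # skip connection

inputs = tf.keras.Input(shape=(64, 128))      # e.g., 64 patches, 128 channels
outputs = mixer_block(inputs, tokens_mlp_dim=256, channels_mlp_dim=512)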
We describe a very simple architecture for vision. Our experiments demonstrate that it is as good as existing state-of-the-art methods in terms of the trade-off between accuracy and computational resources required for training and inference. We believe these results open many questions. On the practical side, it may be useful to study the features learned by the model and identify the main differences (if any) from those learned by CNNs and Transformers. On the theoretical side, we would like to understand the inductive biases hidden in these various features and eventually their role in generalization. Most of all, we hope that our results spark further research, beyond the realms of established models based on convolutions and self-attention. It would be particularly interesting to see whether such a design works in NLP or other domains.
@Sayak_Paul’s implementation:
MLP-Mixer with CIFAR-10 Show and Tell
Here’s my implementation of MLP-Mixer, the all MLP architecture for computer vision without any use of convs and self-attention:
Here’s what is included:
Distributed training with mixed-precision.
Visualization of the token-mixing MLP weights.
A TensorBoard callback to keep track of the learned linear projections of the image patches.
Results are quite competitive with room for improvements for interpretability.
github.com
sayakpaul/MLP-Mixer-CIFAR10
Implements MLP-Mixer (https://arxiv.org/abs/2105.01601) with the CIFAR-10 dataset. |
st207328 | I think that we need to better promote our TF models in paperswithcode /cc @Joana.
In this specific case you can see the official Google reference implementation in JAX and all the others alternative implementations:
paperswithcode.com
Papers with Code - MLP-Mixer: An all-MLP Architecture for Vision
#11 best model for Image Classification on ImageNet ReaL (Accuracy metric) |
st207329 | On the same thread of MLP-like models, CycleMLP:
arXiv.org
CycleMLP: A MLP-like Architecture for Dense Prediction
This paper presents a simple MLP-like architecture, CycleMLP, which is a
versatile backbone for visual recognition and dense predictions, unlike modern
MLP architectures, e.g., MLP-Mixer, ResMLP, and gMLP, whose architectures are
correlated to image...
Let’s see how the official and community implementations will populate in:
paperswithcode.com
Papers with Code - CycleMLP: A MLP-like Architecture for Dense Prediction
#139 best model for Image Classification on ImageNet (Top 1 Accuracy metric) |
st207330 | Continuing with MLP-Mixers, there are Vision Permutators as well.
https://arxiv.org/abs/2106.12368
Official code implementation:
https://github.com/Andrew-Qibin/VisionPermutator
st207331 | arXiv.org
S$^2$-MLPv2: Improved Spatial-Shift MLP Architecture for Vision
Recently, MLP-based vision backbones emerge. MLP-based vision architectures
with less inductive bias achieve competitive performance in image recognition
compared with CNNs and vision Transformers. Among them, spatial-shift MLP
(S$^2$-MLP), adopting... |
st207332 | A new interesting ConvMixer approach is available (ICLR 2022 submission):
OpenReview
Patches Are All You Need?
Although convolutional networks have been the dominant architecture for vision tasks for many years, recent experiments have shown that Transformer-based models, most notably the Vision Transformer...
Medium – 2 Nov 21
ConvMixer: Patches Are All You Need? Overview and thoughts 🤷
CNNs don’t always have to progressively decrease resolution. A revolutionary idea that might shape the next-gen architectures for Computer…
Reading time: 8 min read |
st207333 | Bhack:
Patches Are All You Need? | OpenReview
The reviewer discussion is interesting: Patches Are All You Need? | OpenReview |
st207334 | Dear community,
In general, ANN models are not identifiable. But we can attempt to address the identifiability problem by imposing some constraints. How can I impose these constraints using tf.keras.constraints?
Has someone already done this?
Regards. |
st207335 | If you are looking for how to handle constrained optimization, you could take a look at:
Google AI Blog: Setting Fairness Goals with the TensorFlow Constrained Optimization Library
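For simple per-weight restrictions, tf.keras.constraints can also be attached directly to layers. A minimal sketch (the clipping constraint below is an illustrative placeholder, not a specific identifiability fix):
import tensorflow as tf

# Built-in constraints are applied to the weights after each update step:
layer = tf.keras.layers.Dense(
    16,
    kernel_constraint=tf.keras.constraints.UnitNorm(axis=0),  # unit-norm columns
    bias_constraint=tf.keras.constraints.NonNeg(),
)

# Custom constraints subclass Constraint and implement the projection:
class ClipWeights(tf.keras.constraints.Constraint):
    def __init__(self, lo=-1.0, hi=1.0):
        self.lo, self.hi = lo, hi

    def __call__(self, w):
        return tf.clip_by_value(w, self.lo, self.hi)

    def get_config(self):
        return {"lo": self.lo, "hi": self.hi} |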
st207336 | Hi Everyone,
I am new to TensorFlow (beginner). I am researching solutions for a project I'm working on and was wondering if someone could point me in the right direction. I checked models at Model Zoo and it was a bit overwhelming, and so far I did not find what I was looking for. It seems like a rather common use case, so I thought there might already be something built with TensorFlow for this, but maybe another tool is better suited?
Basically, I want to match incoming photos taken by a client with a database of photos and info.
Kind regards,
-M |
st207337 | You can start to play with something like:
Near-duplicate image search Show and Tell
A new example on building a near-duplicate image search utility. It comprises an image classifier, Bit LSH (locality-sensitive hashing), random projection, and TensorRT-optimized inference to drastically reduce the query time. LSH and random projection are shown with from-scratch implementations so readers can better understand them.
Check also this thread:
Tutorials/materials for fast image retrieval tasks General Discussion
Hi all.
Looking for resources/materials that build on TensorFlow/Keras (preferably) for doing scalable image similarity searches.
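To get a feel for the core idea, here is a minimal random-projection LSH sketch in plain NumPy (the embeddings are random stand-ins for what an image classifier would produce):
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(embeddings, planes):
    # The sign of the projection onto each random hyperplane gives one hash bit.
    return (embeddings @ planes.T > 0).astype(np.uint8)

dim, n_bits = 128, 16
planes = rng.normal(size=(n_bits, dim))       # random projection matrix

db = rng.normal(size=(1000, dim))             # stand-in for image embeddings
query = db[42] + 0.01 * rng.normal(size=dim)  # near-duplicate of item 42

db_codes = lsh_hash(db, planes)
q_code = lsh_hash(query[None, :], planes)

# Candidates are items whose hash codes agree with the query on the most bits.
matches = (db_codes == q_code).sum(axis=1)
print(matches.argmax())  # -> almost certainly 42 |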
st207338 | Can someone explain which one is the best algorithm? I have some idea, but I want to learn from a person with more experience in the above. Thanks |
st207339 | CIFAR-10 or the other one. But DenseNet is a modified version of ResNet. Can you suggest what the benefits of DenseNet are? |
st207340 | Check out the following recent papers:
ResNet strikes back: An improved training procedure in timm
The influential Residual Networks designed by He et al. remain the gold-standard architecture in numerous scientific publications. They typically serve as the default architecture in studies, or as baselines when new architectures are proposed. Yet there has been significant progress on best practices for training neural networks since the inception of the ResNet architecture in 2015. Novel optimization & data-augmentation have increased the effectiveness of the training recipes. In this paper, we re-evaluate the performance of the vanilla ResNet-50 when trained with a procedure that integrates such advances. We share competitive training settings and pre-trained models in the timm open-source library, with the hope that they will serve as better baselines for future work. For instance, with our more demanding training setting, a vanilla ResNet-50 reaches 80.4% top-1 accuracy at resolution 224x224 on ImageNet-val without extra data or distillation. We also report the performance achieved with popular models with our training procedure.
A ConvNet for the 2020s.
The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually “modernize” a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets. |
st207341 | For the first work, I suppose the new training regime/protocol could also be applied to a DenseNet, but it seems we don't have a DenseNet baseline in the tables applying the same protocol. Similar tricks are instead included in the second paper.
The second one is a recent SOTA model from Facebook Research (reference PyTorch official implementation + pretrained weights available on GitHub), but I suppose we could consider it outside the ResNet/DenseNet perimeter, though still good for anyone interested in CNN-family SOTA. I suppose the real scope was to integrate this into new CNN-Transformer hybrid models. |
st207342 | Hey Community, I hope you’re doing great.
I'm working on binary classification using structured data, and my model gave a great validation recall result (around 80%) but low validation accuracy (around 40%)!
I'd like to improve the validation accuracy, even if that slightly decreases the validation recall. Any suggestions, please?
Thank you so much! |
st207343 | Plotting the “precision vs. recall” curve will show you the performance you can reach just by changing the threshold level. Have a look at this tutorial:
TensorFlow
Classification on imbalanced data | TensorFlow Core
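For instance, a minimal sketch of plotting the curve (assuming `model`, `x_val`, and `y_val` come from your own setup):
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

y_prob = model.predict(x_val).ravel()  # predicted probabilities

precision, recall, thresholds = precision_recall_curve(y_val, y_prob)

plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.show() |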
st207344 | How did you train the classifier? 40% accuracy on binary classification is worse than random (it would be 50% if the classes are balanced, or higher if they are not).
Anyway, combined with changing the threshold (which is done after you have already trained the classifier): if you have unbalanced data (though that usually gives high accuracy and low recall on the minority class), consider oversampling the smaller class or undersampling the other. Alternatively, you can weight the loss function (or the gradient); this can also be done in case predicting one class well is more important than the other.
But I'd check the classifier first. |
st207345 | Hey @Samuele_Bortolato, the data that I have is unbalanced (70%/30%) and I'm trying to maximize the precision.
I managed to get good accuracy, but I'd like to get good precision and recall results. |
st207346 | I don't get what you mean by "I'm trying to maximize precision"; when you train, you simply minimize the loss, unless you wrote a custom loss to only maximize precision (not advised). If you use cross-entropy, the measure most similar to what you maximize is the weighted accuracy (and even that is technically not correct: you minimize the loss, you don't maximize accuracy or recall).
In your case, since you have unbalanced data, it's probable that it will learn to classify the bigger class more than the smaller one. For example, classifying all instances as the bigger class, without understanding anything about the problem, would get it 70% accuracy.
If you just want to account for the imbalance in the data, I would give the bigger class a weight of 0.3 and the other a weight of 0.7 in the loss function. That way, if it tries to cheat and simply always predict one class, it gets a low score. If you are more interested in one class than the other, set different weights. (I wouldn't use validation-precision early stopping as done in the linked tutorial, though, for the same reason as before.)
Or follow exactly the example tutorial.
Anyway, you can simply try: train several classifiers, plot the behaviour of the different classifiers for different thresholds, and choose the most suitable one.
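For instance, passing class weights through model.fit might look like this (a minimal sketch; `model` and the arrays are assumed from your own setup):
# Assuming a 70/30 imbalance with label 0 as the majority class:
class_weight = {0: 0.3, 1: 0.7}

model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    class_weight=class_weight,  # each sample's loss is scaled by its class weight
    epochs=10,
) |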
st207347 | Sorry for not being clear: what I'm trying to do, besides minimizing the loss, is to get precision and recall as high as possible.
I got good recall results, but the precision is still under 40%.
And thank you for your time, @Samuele_Bortolato |
st207348 | Check out a new post on the Google AI Blog:
Google AI Blog
Google Research: Themes from 2021 and Beyond
Posted by Jeff Dean, Senior Fellow and SVP of Google Research, on behalf of the entire Google Research community Over the last several de...
A few themes:
· Trend 1: More Capable, General-Purpose ML Models
· Trend 2: Continued Efficiency Improvements for ML
· Trend 3: ML Is Becoming More Personally and Communally Beneficial
· Trend 4: Growing Benefits of ML in Science, Health and Sustainability
· Trend 5: Deeper and Broader Understanding of ML
ML models are increasingly prevalent in many different products and features at Google because their power and ease of expression streamline experimentation and productionization of ML models in performance-critical environments. Research into model architectures to create Seq2Seq, Inception, EfficientNet, and Transformer, or algorithmic research like batch normalization and distillation, is driving progress in the fields of language understanding, vision, speech, and others.
Other work:
… research publications by area below or by year (and if you're interested in quantum computing, our Quantum team recently wrote a retrospective of their work in 2021):
Algorithms and Theory
Machine Perception
Data Management
Machine Translation
Data Mining
Mobile Systems
Distributed Systems & Parallel Computing
Natural Language Processing
Economics & Electronic Commerce
Networking
Education Innovation
Quantum Computing
General Science
Responsible AI
Health and Bioscience
Robotics
Hardware and Architecture
Security, Privacy and Abuse Prevention
Human-Computer Interaction and Visualization
Software Engineering
Information Retrieval and the Web
Software Systems
Machine Intelligence
Speech Processing |
st207349 | I hope you can find this detailed report useful:
An Experience Report on Machine Learning Reproducibility: Guidance for Practitioners and TensorFlow Model Garden Contributors |
st207350 | I am aware of the preprocessing proto that is used in the models repo:
github.com
tensorflow/models/blob/master/research/object_detection/protos/preprocessor.proto
syntax = "proto2";
package object_detection.protos;
// Message for defining a preprocessing operation on input data.
// See: //third_party/tensorflow_models/object_detection/core/preprocessor.py
// Next ID: 41
message PreprocessingStep {
  oneof preprocessing_step {
    NormalizeImage normalize_image = 1;
    RandomHorizontalFlip random_horizontal_flip = 2;
    RandomPixelValueScale random_pixel_value_scale = 3;
    RandomImageScale random_image_scale = 4;
    RandomRGBtoGray random_rgb_to_gray = 5;
    RandomAdjustBrightness random_adjust_brightness = 6;
    RandomAdjustContrast random_adjust_contrast = 7;
    RandomAdjustHue random_adjust_hue = 8;
    RandomAdjustSaturation random_adjust_saturation = 9;
    RandomDistortColor random_distort_color = 10;
    RandomJitterBoxes random_jitter_boxes = 11;
(This file has been truncated; see the original for the full list.)
My question is how one configures their augmentation pipeline when using the TFOD API. Consider this configuration file. It has a field for augmentation:
train_config: {
  ...
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
If I wanted to expand the set of augmentation transformations here, what should I do?
@Laurence_Moroney @khanhlvg any pointers? |
st207351 | Solved by markdaoust in post #3
Interesting.
This looks like they’ve re-encoded something like keras’s model.get_config, as a proto.
To change the data-augmentation, you edit that data_augmentation_options list.
The .proto files define what’s allowed. The definition of TrainConfig is here:
data_augmentation_options is a repe… |
st207352 | Interesting.
This looks like they’ve re-encoded something like keras’s model.get_config, as a proto.
To change the data-augmentation, you edit that data_augmentation_options list.
The .proto files define what’s allowed. The definition of TrainConfig is here:
github.com
tensorflow/models/blob/aa3e639f80c2967504310b0f578f0f00063a8aff/research/object_detection/protos/train.proto#L25
// Message for configuring DetectionModel training jobs (train.py).
// Next id: 31
message TrainConfig {
  // Effective batch size to use for training.
  // For TPU (or sync SGD jobs), the batch size per core (or GPU) is going to be
  // `batch_size` / number of cores (or `batch_size` / number of GPUs).
  optional uint32 batch_size = 1 [default=32];
  // Data augmentation options.
  repeated PreprocessingStep data_augmentation_options = 2;
  // Whether to synchronize replicas during training.
  optional bool sync_replicas = 3 [default=false];
  // How frequently to keep checkpoints.
  optional float keep_checkpoint_every_n_hours = 4 [default=10000.0];
  // Optimizer used to train the DetectionModel.
  optional Optimizer optimizer = 5;
data_augmentation_options is a repeated PreprocessingStep.
A PreprocessingStep is one of the items from that list. The parameters of each and their default values are defined in preprocessor.proto
If you want to add a RandomScale step:
train_config: {
  ...
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    random_image_scale {
      min_scale_ratio: 0.9
      max_scale_ratio: 1.1
    }
  }
}
(Note: each augmentation step goes in its own data_augmentation_options block, since PreprocessingStep is a oneof.)
That format is "proto-text" (.pbtxt); you can check your syntax with:
from google.protobuf import text_format
from object_detection.protos import train_pb2  # defines TrainConfig

train_config = train_pb2.TrainConfig()
# Note: the string holds TrainConfig fields directly, without an outer
# "train_config: { ... }" wrapper, since we parse into a TrainConfig message.
train_config = text_format.Parse(
    r"""
    data_augmentation_options {
      random_horizontal_flip {
      }
    }
    data_augmentation_options {
      random_image_scale {
        min_scale_ratio: 0.9
        max_scale_ratio: 1.1
      }
    }
    """, train_config)
print(train_config) |
st207353 | I think we can agree that the importance of ResNets in the Deep Learning community is paramount. Since their inception, they have gone through several improvements most of them leading to increased performance on the ImageNet-1k dataset.
This thread seeks contributions from the community for adding ResNet-RS models to tf.keras.applications. |
st207354 | System information
Have I written custom code:
OS Platform and Distribution (Windows 10):
TensorFlow installed from (binary):
TensorFlow version (1.15):
Python version(3.7.6):
CUDA/cuDNN version(10.0):
GPU model and memory(GTX 1660ti and 6GB):
Description of problem
I was running file.py but it is still not running. I made some changes in the source file, but the placeholder error is the same. The code is also included with this question. Kindly review and help.
Source code / logs
minimal code
with tf.io.gfile.GFile(args.model, "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    generated_image_1, generated_image_2, generated_image_3, = tf.import_graph_def(
        graph_def,
        input_map={'input_image': input_tensor, 'short_edge_1': short_edge_1, 'short_edge_2': short_edge_2, 'short_edge_3': short_edge_3},
        return_elements=['style_subnet/conv-block/resize_conv_1/output:0', 'enhance_subnet/resize_conv_1/output:0', 'refine_subnet/resize_conv_1/output:0'],
        name=None,
        op_dict=None,
        producer_op_list=None  # line number 55
    )

short_edges = [int(e) for e in args.hierarchical_short_edges.split(',')]
error
(myenv) D:\subbu>python file.py --model=models/model.pb --input_image=test_images/image.jpg
2021-12-25 00:54:11.765889: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
input image - test_images/image.jpg (1200, 630, 3)
WARNING:tensorflow:From file.py:55: calling import_graph_def (from tensorflow.python.framework.importer) with op_dict is deprecated and will be removed in a future version.
Instructions for updating:
Please file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.
Traceback (most recent call last):
File "C:\Users\subbu\myenv\lib\site-packages\tensorflow_core\python\framework\importer.py", line 501, in _import_graph_def_internal
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: node 'Placeholder' in input_map does not exist in graph (input_map entry: input_image:0->Placeholder:0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "file.py", line 87, in <module>
main()
File "file.py", line 55, in main
producer_op_list=None
File "C:\Users\subbu\myenv\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\subbu\myenv\lib\site-packages\tensorflow_core\python\framework\importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "C:\Users\subbu\myenv\lib\site-packages\tensorflow_core\python\framework\importer.py", line 505, in _import_graph_def_internal
raise ValueError(str(e))
ValueError: node 'Placeholder' in input_map does not exist in graph (input_map entry: input_image:0->Placeholder:0)
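For debugging, it helps to list the placeholder names that actually exist in the frozen graph, since the input_map keys must match them exactly. A minimal sketch:
import tensorflow as tf

with tf.io.gfile.GFile("models/model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Print every placeholder so the input_map keys can be matched to real names.
for node in graph_def.node:
    if node.op == "Placeholder":
        print(node.name) |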
st207355 | I am looking for a lane detection model in TensorFlow. I found LaneNet at GitHub - MaybeShewill-CV/lanenet-lane-detection: Unofficial implemention of lanenet model for real time lane detection using deep neural network model https://maybeshewill-cv.github.io/lanenet-lane-detection/ but I think the owner no longer maintains this repo. Would anyone recommend some models? Much appreciated. |
st207356 | A very interesting survey by Google Brain Tokyo:
https://arxiv.org/abs/2111.14377 |
st207357 | In case you are looking for a gentle introduction to "emergence", I recommend this 2014 blog post by David Pines:
Medium – 13 Nov 14
Emergence: A unifying theme for 21st century science 4
By David Pines, Co-Founder in Residence, Santa Fe Institute
Reading time: 13 min read |
st207358 | Hello,
I'm working on an image super-resolution model, ESRGAN.tflite. Currently, it takes a 50×50 image as input and produces a 200×200 image as output. But I want it to accept 240×240 as input and generate 960×960 as output.
Please help me achieve this.
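One thing worth trying first is resizing the interpreter's input tensor (a minimal sketch; this only works if the converted model has dynamic spatial dimensions, otherwise the model has to be re-trained/re-exported at the new size):
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="ESRGAN.tflite")
# Resize the first input to 240x240; this fails if the model was frozen at 50x50.
interpreter.resize_tensor_input(
    interpreter.get_input_details()[0]["index"], [1, 240, 240, 3]
)
interpreter.allocate_tensors() |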
st207359 | Hi developers,
How do I prepare .wav and .amr files for the yamnet.tflite model in Kotlin or Java? I have checked the example project on GitHub, but it only has real-time classification using the mic; I need to know how to prepare .wav and .amr files for this model. Thanks |
st207360 | Solved by Rufan_Khokhar in post #11
Hello sir, I hope you're well. I found the solution for the YAMNet model and wrote an article on Medium.
Please check it out and give me suggestions to improve it. |
st207361 | Hi @Rufan_Khokhar
Take a look at this article, where there is an explanation of Android usage of the YAMNet model. Also, at the end there is a GitHub link. I hope you find it useful.
Medium – 9 Dec 20
Classification of sounds using android mobile phone and the YAMNet ML model
Written by George Soloupis ML GDE
Reading time: 3 min read
Best |
st207362 | Sir, thanks for your answer. I also tried the source you provided, but it also uses only the mic and is very hard to understand. I'm using this project as an example:
github.com
6-4/Tensorflow_lite/lite/examples/sound_classification/android
This project, which uses the TFLite Task Library, is very easy.
I'm using this code to prepare the .wav file; please check it out:
object AudioConverter {
    fun readAudioSimple(path: File): FloatArray {
        val input = BufferedInputStream(FileInputStream(path))
        val buff = ByteArray(path.length().toInt())
        val dis = DataInputStream(input)
        dis.readFully(buff)
        // Remove the wav header (first 44 bytes) before converting.
        return floatMe(shortMe(buff.sliceArray(44 until buff.size)) ?: ShortArray(0)) ?: FloatArray(0)
    }

    fun FloatArray.sliceTo(step: Int): List<FloatArray> {
        val slicedAudio = arrayListOf<FloatArray>()
        var startAt = 0
        var endAt = 15600
        val stepSize = if (step != 0) (15600 * (1f / (2 * step))).toInt() else 0
        while ((startAt + 15600) < this.size) {
            if (startAt != 0) {
                startAt = endAt - stepSize
                endAt = startAt + 15600
            }
            slicedAudio.add(this.copyOfRange(startAt, endAt))
            startAt = endAt
        }
        return slicedAudio
    }

    private fun shortMe(bytes: ByteArray): ShortArray {
        val out = ShortArray(bytes.size / 2)
        ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(out)
        return out
    }

    private fun floatMe(pcms: ShortArray): FloatArray {
        val floats = FloatArray(pcms.size)
        pcms.forEachIndexed { index, sh ->
            // The input must be normalized to floats between -1 and +1,
            // so divide all values by 2**15 (MAX_ABS_INT16 = 32768).
            floats[index] = sh.toFloat() / 32768.0f
        }
        return floats
    }
}
I'm a student, please help me. I really need this solution. |
st207363 | Hi,
Put your file inside the assets folder.
Create an input stream:
java - InputStream from Assets folder on Android returning empty - Stack Overflow
Create a list of shorts like the accepted answer here, where it uses the input stream and Guava:
java - Mix two files audio wav on android use short array - Stack Overflow
If you do not have Guava, add the dependency as here:
https://github.com/google/guava
Having the short array list, create a float array and continue from this line inside my project to see what I have done next:
https://github.com/farmaker47/Yamnet_classification_project/blob/master/app/src/main/java/com/soloupis/yamnet_classification_project/viewmodel/ListeningFragmentViewmodel.kt#L88
So basically the idea is to convert the .wav file to a list of shorts, then a FloatArray, and then feed the interpreter.
I hope my post helps you.
Best |
st207364 | Hello sir, thanks for your solution.
The above solution only works with specific .wav files (those that match the model's input specifications, like byte rate and channels). My question is how to process .wav files that do not match the required input specification: how can I feed such a file? I have tried many code samples and libraries but am lost.
Please help me with this.
Thanks. |
st207365 | Check the specifications of the YAMNet model to see if there is an alternative for inputs:
github.com
models/research/audioset/yamnet - Input: Audio Features
If there is no alternative you have to convert your wav files to the correct format.
Best |
st207366 | Hello sir, I hope you're well. I found the solution for the YAMNet model and wrote an article on Medium:
Medium – 19 Oct 21
Prepare .wav or .amr files for yamnet.tflite model Android
After a lot of research, I realized there is no easy way to prepare a local .wav or .amr file for the yamnet.tflite model, as we know…
Reading time: 1 min read
Please check it out and give me suggestions to improve it. |
st207367 | Nice work @Rufan_Khokhar!
I read your article. I think you can explain a little more about the FFmpegKit library and provide some links so users can decide whether to use it or not. The issue with custom libraries is that someday the authors stop supporting them and they no longer work with future Android APIs.
I see that you are using the TensorFlow AudioClassifier… have you tried directly the conversion the library provides?
Best |
st207368 | And I'm also facing the same problem with the ESRGAN (image super-resolution) model: the model accepts a 50×50 image and outputs 200×200.
My question is how to train the model for custom input sizes, like 150×150 or 240×240.
Here is the link:
TensorFlow
Super resolution with TensorFlow Lite |
st207369 | Partially local federated learning with Federated Reconstruction - by Google Research
Blog post: Google AI Blog: A Scalable Approach for Partially Local Federated Learning
TensorFlow Federated tutorial: Federated Reconstruction for Matrix Factorization | TensorFlow Federated
Colab: Google Colab
General purpose TensorFlow Federated libraries: Module: tff.learning.reconstruction | TensorFlow Federated
GitHub: federated/reconstruction at master · google-research/federated · GitHub (“This library uses TensorFlow Federated. For a more general look at using TensorFlow Federated for research, see Using TFF for Federated Learning Research.”)
Paper: Federated Reconstruction: Partially Local Federated Learning 1 (NeurIPS 2021)
From the blog post 1:
In “Federated Reconstruction: Partially Local Federated Learning 1”, presented at NeurIPS 2021, we introduce an approach that enables scalable partially local federated learning, where some model parameters are never aggregated on the server. For matrix factorization, this approach trains a recommender model while keeping user embeddings local to each user device. For other models, this approach trains a portion of the model to be completely personal for each user while avoiding communication of these parameters. We successfully deployed partially local federated learning to Gboard, resulting in better recommendations for hundreds of millions of keyboard users. We’re also releasing a TensorFlow Federated tutorial demonstrating how to use Federated Reconstruction.
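To make the idea concrete, here is a toy numpy sketch (my own illustration, not the TFF API) of one client round for matrix factorization: the user embedding is reconstructed locally against frozen item embeddings and never leaves the device, and only the item-embedding update is sent back for aggregation:
import numpy as np

def client_round(item_emb, ratings, lr=0.1, recon_steps=5):
    # item_emb: global (num_items, k) matrix received from the server.
    # ratings: this user's local data as {item_id: rating}.
    ids = np.array(list(ratings.keys()))
    y = np.array(list(ratings.values()), dtype=np.float32)
    user_emb = np.zeros(item_emb.shape[1], dtype=np.float32)
    # Reconstruction phase: fit the local user embedding, items frozen.
    for _ in range(recon_steps):
        err = item_emb[ids] @ user_emb - y
        user_emb -= lr * (err @ item_emb[ids]) / len(ids)
    # Update phase: item-embedding gradient with the user embedding frozen.
    err = item_emb[ids] @ user_emb - y
    grad = np.zeros_like(item_emb)
    grad[ids] = np.outer(err, user_emb) / len(ids)
    return grad  # the server averages these across clients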
[Figure: Federated Reconstruction in Gboard's expressions]
Real-World Deployment in Gboard
To validate the practicality of Federated Reconstruction in large-scale settings, we deployed the algorithm to Gboard, a mobile keyboard application with hundreds of millions of users. Gboard users use expressions (e.g., GIFs, stickers) to communicate with others. Users have highly heterogeneous preferences for these expressions, making the setting a good fit for using matrix factorization to predict new expressions a user might want to share.
From the paper:
Abstract
Personalization methods in federated learning aim to balance the benefits of federated and local training for data availability, communication cost, and robustness to client heterogeneity. Approaches that require clients to communicate all model parameters can be undesirable due to privacy and communication constraints. Other approaches require always-available or stateful clients, impractical in large-scale cross-device settings. We introduce Federated Reconstruction, the first model-agnostic framework for partially local federated learning suitable for training and inference at scale. We motivate the framework via a connection to model-agnostic meta learning, empirically demonstrate its performance over existing approaches for collaborative filtering and next word prediction, and release an open-source library for evaluating approaches in this setting. We also describe the successful deployment of this approach at scale for federated collaborative filtering in a mobile keyboard application. |
st207370 | New research from Google Research/Columbia University published in NeurIPS 2021.
Convolution-free Video-Audio-Text Transformer (VATT) “takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks.”
Tasks include: image classification, video action recognition, audio event classification, and zero-shot text-to-video retrieval.
Self-supervised multimodal learning strategy (pre-training requires minimal labeling).
arXiv: VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text 5
TensorFlow code: google-research/vatt at master · google-research/google-research · GitHub 10
[Figure: VATT architecture overview]
We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance by the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic, single-backbone Transformer by sharing weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures in the downstream tasks. Especially, VATT’s vision Transformer achieves the top-1 accuracy of 82.1% on Kinetics-400, 83.6% on Kinetics-600, 72.7% on Kinetics-700, and 41.1% on Moments in Time, new records while avoiding supervised pre-training. Transferring to image classification leads to 78.7% top-1 accuracy on ImageNet compared to 64.7% by training the same Transformer from scratch, showing the generalizability of our model despite the domain gap between videos and images. VATT’s audio Transformer also sets a new record on waveform-based audio event recognition by achieving the mAP of 39.4% on AudioSet without any supervised pre-training…
… we study self-supervised, multimodal pre-training of three Transformers [88], which take as input the raw RGB frames of internet videos, audio waveforms, and text transcripts of the speech audio, respectively. We call the video, audio, text Transformers VATT… VATT borrows the exact architecture from BERT [23] and ViT [25] except the layer of tokenization and linear projection reserved for each modality separately. This design shares the same spirit as ViT that we make the minimal changes to the architecture so that the learned model can transfer its weights to various frameworks and tasks. Furthermore, the self-supervised, multimodal learning strategy resonates the spirit of BERT and GPT that the pre-training requires minimal human curated labels. We evaluate the pre-trained Transformers on a variety of downstream tasks: image classification, video action recognition, audio event classification, and zero-shot text-to-video retrieval…
In this paper, we present a self-supervised multimodal representation learning framework based on Transformers. Our study suggests that Transformers are effective for learning semantic video/audio/text representations — even if one model is shared across modalities — and multimodal self-supervised pre-training is promising for reducing their dependency on large-scale labeled data. We show that DropToken can significantly reduce the pre-training complexity with video and audio modalities and have minor impact on the models’ generalization. We report new records of results on video action recognition and audio event classification and competitive performance on image classification and video retrieval…
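As an aside, the DropToken idea mentioned above is easy to sketch (my own reading of the paper, not the released code): during pre-training, randomly keep only a fraction of the input tokens to cut the Transformer's quadratic cost:
import tensorflow as tf

def drop_token(tokens, drop_rate=0.5):
    # tokens: (batch, seq_len, dim) patch/waveform/word tokens.
    batch = tf.shape(tokens)[0]
    seq_len = tf.shape(tokens)[1]
    keep = tf.cast(
        tf.math.ceil((1.0 - drop_rate) * tf.cast(seq_len, tf.float32)),
        tf.int32)
    # Draw a random permutation per example; keep the first `keep` positions.
    scores = tf.random.uniform((batch, seq_len))
    idx = tf.argsort(scores, axis=1)[:, :keep]
    return tf.gather(tokens, idx, batch_dims=1)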
arXiv: VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text 5
TensorFlow code: google-research/vatt at master · google-research/google-research · GitHub 10 |
st207371 | Hi everyone,
I'm doing some research on Unified Memory management on multi-GPU systems and trying to compare its performance with explicit copies on some real ML workloads.
The benefits from Unified Memory are
Allow memory oversubscription
Improve programmability, programmers don’t need to worry about data placement and movement
I found there's a switch, per_process_gpu_memory_fraction, to turn on Unified Memory in TensorFlow. For distributed training on multiple GPUs, I used the tf.distribute.MirroredStrategy API. But from the profiling results, it seems that TensorFlow just leverages Unified Memory to handle memory oversubscription; there are still explicit memory copies between GPU and CPU, or GPU and GPU.
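For reference, this is roughly how I turn it on (a sketch using the TF1-compat config; a fraction above 1.0 enables CUDA Unified Memory):
import tensorflow as tf

config = tf.compat.v1.ConfigProto()
# Values > 1.0 switch the allocator to CUDA Unified Memory,
# allowing allocations beyond physical GPU memory.
config.gpu_options.per_process_gpu_memory_fraction = 2.0
sess = tf.compat.v1.Session(config=config)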
I'm wondering if there's a way to train on multiple GPUs in TensorFlow and fully exploit the power of Unified Memory, i.e., letting the memory system manage data placement and movement.
System information
TensorFlow version (you are using): 2.4
CUDA version: 11.0
cudnn version: 8.0
Thanks |
st207372 | Have you tried with these envs on TF 2.7:
github.com/tensorflow/tensorflow
[PJRT] Allow GPU memory oversubscription when unified memory is enabled. 7
committed Jul 9, 2021 by tensorflower-gardener (+5 -2)
With this CL, we can enable GPU memory oversubscription via env flags.
For example, `TF_FORCE_UNIFIED_MEMORY=1 XLA_PYTHON_CLIENT_MEM_FRACTION=8.0` provides 8x the GPU memory to the program. The 'extra' memory is physically located on the other GPU devices and the host's RAM, with swapping done transparently by CUDA.
PiperOrigin-RevId: 383819164
Change-Id: Id139d3184d3a62983c1e86bf95ca4078a08db4f4 |
st207373 | New self-supervised methods—Compressed SimCLR and Compressed BYOL with Conditional Entropy Bottleneck—for learning effective and robust visual representations, which enable learning visual classifiers with limited data.
arXiv: Compressive Visual Representations 5 (Lee et al., 2021) (Google Research)
Learning effective visual representations that generalize well without human supervision is a fundamental problem in order to apply Machine Learning to a wide variety of tasks. Recently, two families of self-supervised methods, contrastive learning and latent bootstrapping, exemplified by SimCLR and BYOL respectively, have made significant progress. In this work, we hypothesize that adding explicit information compression to these algorithms yields better and more robust representations. We verify this by developing SimCLR and BYOL formulations compatible with the Conditional Entropy Bottleneck (CEB) objective, allowing us to both measure and control the amount of compression in the learned representation, and observe their impact on downstream tasks. Furthermore, we explore the relationship between Lipschitz continuity and compression, showing a tractable lower bound on the Lipschitz constant of the encoders we learn. As Lipschitz continuity is closely related to robustness, this provides a new explanation for why compressed models are more robust. Our experiments confirm that adding compression to SimCLR and BYOL significantly improves linear evaluation accuracies and model robustness across a wide range of domain shifts. In particular, the compressed version of BYOL achieves 76.0% Top-1 linear evaluation accuracy on ImageNet with ResNet-50, and 78.8% with ResNet-50 2x.1
Recent contrastive approaches to self-supervised visual representation learning aim to learn representations that maximally capture the mutual information between two transformed views of an image… The primary idea of these approaches is that this mutual information corresponds to a general shared context that is invariant to various transformations of the input, and it is assumed that such invariant features will be effective for various downstream higher-level tasks. However, although existing contrastive approaches maximize mutual information between augmented views of the same input, they do not necessarily compress away the irrelevant information from these views… retaining irrelevant information often leads to less stable representations and to failures in robustness and generalization, hampering the efficacy of the learned representations. An alternative state-of-the-art self-supervised learning approach is BYOL [30], which uses a slow-moving average network to learn consistent, view-invariant representations of the inputs. However, it also does not explicitly capture relevant compression in its objective.
In this work, we modify SimCLR [12], a state-of-the-art contrastive representation method, by adding information compression using the Conditional Entropy Bottleneck (CEB) [27]. Similarly, we show how BYOL [30] representations can also be compressed using CEB. By using CEB we are able to measure and control the amount of information compression in the learned representation [26], and observe its impact on downstream tasks. We empirically demonstrate that our compressive variants of SimCLR and BYOL, which we name C-SimCLR and C-BYOL, significantly improve accuracy and robustness to domain shifts across a number of scenarios.
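For orientation, the uncompressed SimCLR objective that C-SimCLR builds on is the NT-Xent contrastive loss; a minimal TF sketch (the CEB compression term itself is omitted here):
import tensorflow as tf

def nt_xent_loss(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) projections of two augmented views.
    z1 = tf.math.l2_normalize(z1, axis=1)
    z2 = tf.math.l2_normalize(z2, axis=1)
    logits = tf.matmul(z1, z2, transpose_b=True) / temperature
    # Positives sit on the diagonal: view i of z1 matches view i of z2.
    labels = tf.range(tf.shape(z1)[0])
    loss_a = tf.keras.losses.sparse_categorical_crossentropy(
        labels, logits, from_logits=True)
    loss_b = tf.keras.losses.sparse_categorical_crossentropy(
        labels, tf.transpose(logits), from_logits=True)
    return tf.reduce_mean(loss_a + loss_b) / 2.0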
Code: GitHub - google-research/compressive-visual-representations: Tensorflow 2 implementations of the C-SimCLR and C-BYOL self-supervised visual representation methods from "Compressive Visual Representations" (NeurIPS 2021) 2
[Figure: C-SimCLR architecture]
[Figure: C-BYOL architecture]
Related work:
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations (Chen et al., 2020) (Google Research, Brain Team)
SimCLRv2: Big Self-Supervised Models are Strong Semi-Supervised Learners (Chen et al., 2020) (Google Research, Brain Team)
GitHub with TF 2 implementation
BYOL: Bootstrap your own latent: A new approach to self-supervised Learning (Grill et al., 2020) (DeepMind/Imperial College) |
st207374 | Interested in learning about self-supervised methods? Here are some resources:
Google AI Blog: Advancing Self-Supervised and Semi-Supervised Learning with SimCLR
Google AI Blog: Extending Contrastive Learning to the Supervised Setting
Some code examples and other posts made by the ML community members:
Keras: Self-supervised contrastive learning with SimSiam (by @Sayak_Paul)
GitHub - sayakpaul/SimCLR-in-TensorFlow-2: (Minimally) implements SimCLR (https://arxiv.org/abs/2002.05709) in TensorFlow 2. (by @Sayak_Paul)
GitHub - ayulockin/SwAV-TF: TensorFlow implementation of "Unsupervised Learning of Visual Features by Contrasting Cluster Assignments". (by ayulockin and @Sayak_Paul)
GitHub - sayakpaul/SimSiam-TF: Minimal implementation of SimSiam (https://arxiv.org/abs/2011.10566) in TensorFlow 2. (by @Sayak_Paul)
Lilian Weng’s blog post: Self-Supervised Representation Learning (2019)
Lilian Weng’s blog post: Contrastive Representation Learning (2021) |
st207375 | Thanks for sharing. If the sole purpose is to compress a bigger self-supervised model and have it perform well under limited supervised data, I think SimCLRV2 is by far the simplest approach. |
st207376 | Thanks for sharing the links. Adding some of my own favorites and others:
NeurIPS tutorial on self-supervision by Lilian Weng and Jong Wook Kim: Self-Supervised Learning: Self-Prediction and Contrastive Learning (slides 1)
A better minimal implementation of SimCLR with lots of cool stuff: Semi-supervised image classification using contrastive pretraining with SimCLR 1
SimSiam blog: Self-supervised contrastive learning with SimSiam (originally from FAIR)
Masked Image Modeling with Autoencoders (arguably the simplest one): Masked image modeling with Autoencoders (with @ariG23498, originally from FAIR)
Interview with Ishan Misra: #55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR). - YouTube.
Using self-supervision in a supervised setting:
https://arxiv.org/abs/2006.10803
Efficient Training of Visual Transformers with Small Datasets | OpenReview |
st207377 | It could be really nice if we could launch an experiment/initiative with the Model Garden, Federated teams and our community on training our first model garden model with our federated tools.
We have seen some interesting recent experiments with other frameworks.
learning@home
Train vast neural networks together
A library to train large neural networks across the internet. Imagine training one huge transformer on thousands of computers from universities, companies, and volunteers.
https://arxiv.org/abs/2106.10207
huggingface.co
neuropark/sahajBERT · Hugging Face |
st207378 | Hello, TensorFlow already supports this, and you can use Kubeflow, the ML layer on top of Kubernetes…
st207379 | I think that we are talking about something really different.
Take a look to the mentioned links.
More in general:
arXiv.org
Advances and Open Problems in Federated Learning 8
Federated learning (FL) is a machine learning setting where many clients
(e.g. mobile devices or whole organizations) collaboratively train a model
under the orchestration of a central server (e.g. service provider), while
keeping the training data... |
st207380 | I am also looking forward to federated learning with mobile devices and an orchestration server. From my recent conversation with an ML compiler programmer, the tflite models that run on mobile devices are compiled, and it would be difficult to decompile a tflite model to get the TF code back. Thus, it would be really difficult to retrain the tflite model or modify it via transfer learning. Currently, I am using Java/Kotlin to add an additional layer on top of the tflite model, so that the additional layer can be retrained on the mobile device. But I still have an issue getting the weights from the mobile devices aggregated, since training doesn't really converge to an equilibrium. |
st207381 | Check:
github.com/tensorflow/community
RFC: On-device training with TensorFlow Lite 5
tensorflow:master ← miaout17:tflite-training-rfc
opened Jun 7, 2021 by miaout17 (+309 -0)
We're sharing this RFC to reflect our newest thoughts on implementing on-device training in TensorFlow Lite.
We didn't set up a timeline to close the comments. We want to surface the RFC early for transparency and get feedback. |
st207382 | Take a look at this NeurIPS 2021 experiment:
Training Transformers Together
Train vast neural networks together 1
A NeurIPS'21 demonstration that explains how to train large models together with multiple collaborators. |
st207383 | @Jason What do you think about this crowdtraining experiment from a TF.js point of view? |
st207384 | Also, though different, we had something for TF.js at:
GitHub
GitHub - epfml/DeAI: Decentralized privacy-preserving ML training software...
Decentralized privacy-preserving ML training software framework, using p2p networking. |
st207385 | I would like to find some symmetric 8-bit quantized models to deploy on hardware. However, every model I found in the model zoo is asymmetric. Hosted models | TensorFlow Lite
TensorFlow
TensorFlow Lite 8-bit quantization specification
It is mentioned there that “In the past our quantization tooling used per-tensor, asymmetric, uint8 quantization. New tooling, reference kernels, and optimized kernels for 8-bit quantization will use this spec.”
Do you mean that symmetric quantized tflite models are still under development? Or can I find symmetric quantized tflite models elsewhere? |
st207386 | We have a tracking ticket at:
github.com/tensorflow/tensorflow
Allow symmetric TFLite quantization (no zero point/scale only) 2
opened Sep 10, 2020 by bwang1991 (stat:awaiting tensorflower, type:feature, comp:lite)
As far as I know, TFLite's quantization forces activations to have both scales and zero points. However, for some networks, symmetric quantization (no zero point) does not cause a significant loss in accuracy. It is therefore sufficient to use scale only. Please add support for symmetric quantization.
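Until that lands, the current full-integer path (per-axis symmetric int8 weights, per-tensor asymmetric int8 activations, per the spec linked above) is produced roughly like this (a sketch assuming you have a SavedModel and supply your own representative dataset):
import tensorflow as tf

def representative_dataset():
    for _ in range(100):
        # Yield samples shaped like the real model input
        # (224x224 RGB here is just an assumption).
        yield [tf.random.normal([1, 224, 224, 3])]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
Note this still quantizes activations asymmetrically; fully symmetric activations are exactly what the ticket above requests. |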
st207387 | I am trying to implement UNet with the TensorFlow subclassing API, and something does not seem to work properly; I get the following error:
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
Furthermore, I am uncertain if I have correctly implemented the logic inside the call() function. Any help to correct my mistakes would be much appreciated.
Here I am attaching the full implementation and the error trace:
Code Implementation:
from functools import partial
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
conv2d = partial(keras.layers.Conv2D, kernel_size = 3,
padding = 'SAME',
kernel_initializer = 'he_normal',
use_bias = False)
conv2dtranspose = partial(keras.layers.Conv2DTranspose,
kernel_size = 2, strides = 2,
padding = 'SAME')
class encoder(keras.layers.Layer):
def __init__(self, filters, **kwargs):
super(encoder, self).__init__(**kwargs)
self.convs = [
conv2d(filters),
keras.layers.BatchNormalization(),
keras.layers.Activation('relu'),
conv2d(filters),
keras.layers.BatchNormalization(),
keras.layers.Activation('relu')
]
def call(self, inputs):
Z = inputs
for layer in self.convs:
Z = layer(Z)
return Z
class UNet(keras.models.Model):
def __init__(self, filters, inputs_shape = [128, 128, 1], **kwargs):
super(UNet, self).__init__(**kwargs)
self.filters = filters
self.inputs = keras.layers.Input(shape = inputs_shape)
self.maxpool2d = keras.layers.MaxPool2D(pool_size = (2, 2), strides = 2)
self.conv2dtranspose = conv2dtranspose
self.concat = keras.layers.Concatenate()
def call(self, inputs):
skips = {}
Z, inpt = inputs
#implementing encoder path
for fId in range(len(self.filters)):
Z = encoder(filters = self.filters[fId])(Z)
if fId < len(self.filters) - 1:
skips[fId] = Z
Z = self.maxpool2d(Z)
#implementing decoder path
for fId in reversed(range(len(self.filters) - 1)):
Z = self.conv2dtranspose(self.filters[fId])(Z)
Z = self.concat([Z, skips[fId]])
Z = encoder(self.filters[::-1][fId])(Z)
output = keras.layers.Conv2D(1, kernel_size = 1, activation = 'sigmoid')(Z)
return keras.Model(inputs = [inpt], outputs = [output])
filters = [64, 128, 256, 512]
inpt = keras.layers.Input(shape = [128, 128, 1])
model = UNet(filters = filters)(inpt)
#Generating some test data
x = tf.random.normal(shape = (10, 128, 128, 1))
y = tf.random.normal(shape = (10, 128, 128, 1))
model.compile(loss = 'binary_crossentropy', optimizer = keras.optimizers.SGD(), metrics = ['accuracy'])
model.fit(x, y, epochs = 3)
Error Trace:
WARNING:tensorflow:AutoGraph could not transform <bound method UNet.call of <__main__.UNet object at 0x2930b3d30>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method UNet.call of <__main__.UNet object at 0x2930b3d30>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
446 program_ctx = converter.ProgramContext(options=options)
--> 447 converted_f = _convert_actual(target_entity, program_ctx)
448 if logging.has_verbosity(2):
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in _convert_actual(entity, program_ctx)
283
--> 284 transformed, module, source_map = _TRANSPILER.transform(entity, program_ctx)
285
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/pyct/transpiler.py in transform(self, obj, user_context)
285 if inspect.isfunction(obj) or inspect.ismethod(obj):
--> 286 return self.transform_function(obj, user_context)
287
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/pyct/transpiler.py in transform_function(self, fn, user_context)
469 # TODO(mdan): Confusing overloading pattern. Fix.
--> 470 nodes, ctx = super(PyToPy, self).transform_function(fn, user_context)
471
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/pyct/transpiler.py in transform_function(self, fn, user_context)
362 node = self._erase_arg_defaults(node)
--> 363 result = self.transform_ast(node, context)
364
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in transform_ast(self, node, ctx)
251 unsupported_features_checker.verify(node)
--> 252 node = self.initial_analysis(node, ctx)
253
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in initial_analysis(self, node, ctx)
238 graphs = cfg.build(node)
--> 239 node = qual_names.resolve(node)
240 node = activity.resolve(node, ctx, None)
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/pyct/qual_names.py in resolve(node)
251 def resolve(node):
--> 252 return QnResolver().visit(node)
253
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in visit(self, node)
370 visitor = getattr(self, method, self.generic_visit)
--> 371 return visitor(node)
372
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in generic_visit(self, node)
446 if isinstance(value, AST):
--> 447 value = self.visit(value)
448 if value is None:
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in visit(self, node)
370 visitor = getattr(self, method, self.generic_visit)
--> 371 return visitor(node)
372
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in generic_visit(self, node)
446 if isinstance(value, AST):
--> 447 value = self.visit(value)
448 if value is None:
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in visit(self, node)
370 visitor = getattr(self, method, self.generic_visit)
--> 371 return visitor(node)
372
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in generic_visit(self, node)
455 elif isinstance(old_value, AST):
--> 456 new_node = self.visit(old_value)
457 if new_node is None:
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in visit(self, node)
370 visitor = getattr(self, method, self.generic_visit)
--> 371 return visitor(node)
372
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in generic_visit(self, node)
455 elif isinstance(old_value, AST):
--> 456 new_node = self.visit(old_value)
457 if new_node is None:
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in visit(self, node)
370 visitor = getattr(self, method, self.generic_visit)
--> 371 return visitor(node)
372
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in generic_visit(self, node)
446 if isinstance(value, AST):
--> 447 value = self.visit(value)
448 if value is None:
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in visit(self, node)
370 visitor = getattr(self, method, self.generic_visit)
--> 371 return visitor(node)
372
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in generic_visit(self, node)
455 elif isinstance(old_value, AST):
--> 456 new_node = self.visit(old_value)
457 if new_node is None:
~/miniforge3/envs/mlm1-engine/lib/python3.8/ast.py in visit(self, node)
370 visitor = getattr(self, method, self.generic_visit)
--> 371 return visitor(node)
372
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/pyct/qual_names.py in visit_Subscript(self, node)
231 s = node.slice
--> 232 if not isinstance(s, gast.Index):
233 # TODO(mdan): Support range and multi-dimensional indices.
AttributeError: module 'gast' has no attribute 'Index'
During handling of the above exception, another exception occurred:
OperatorNotAllowedInGraphError Traceback (most recent call last)
<ipython-input-449-e6f92329b0db> in <module>
2
3 inpt = keras.layers.Input(shape = [128, 128, 1])
----> 4 model = UNet(filters = filters)(inpt)
5
6 #Generating some test data
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
944 # >> model = tf.keras.Model(inputs, outputs)
945 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
--> 946 return self._functional_construction_call(inputs, args, kwargs,
947 input_list)
948
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
1083 layer=self, inputs=inputs, build_graph=True, training=training_value):
1084 # Check input assumptions set after layer building, e.g. input shape.
-> 1085 outputs = self._keras_tensor_symbolic_call(
1086 inputs, input_masks, args, kwargs)
1087
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
815 return nest.map_structure(keras_tensor.KerasTensor, output_signature)
816 else:
--> 817 return self._infer_output_signature(inputs, args, kwargs, input_masks)
818
819 def _infer_output_signature(self, inputs, args, kwargs, input_masks):
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
856 # TODO(kaftan): do we maybe_build here, or have we already done it?
857 self._maybe_build(inputs)
--> 858 outputs = call_fn(inputs, *args, **kwargs)
859
860 self._handle_activity_regularization(inputs, outputs)
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in wrapper(*args, **kwargs)
665 try:
666 with conversion_ctx:
--> 667 return converted_call(f, args, kwargs, options=options)
668 except Exception as e: # pylint:disable=broad-except
669 if hasattr(e, 'ag_error_metadata'):
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in converted_call(f, args, kwargs, caller_fn_scope, options)
452 if is_autograph_strict_conversion_mode():
453 raise
--> 454 return _fall_back_unconverted(f, args, kwargs, options, e)
455
456 with StackTraceMapper(converted_f), tf_stack.CurrentModuleFilter():
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in _fall_back_unconverted(f, args, kwargs, options, exc)
499 logging.warn(warning_template, f, file_bug_message, exc)
500
--> 501 return _call_unconverted(f, args, kwargs, options)
502
503
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py in _call_unconverted(f, args, kwargs, options, update_cache)
476
477 if kwargs is not None:
--> 478 return f(*args, **kwargs)
479 return f(*args)
480
<ipython-input-448-ce9f55fd84b1> in call(self, inputs)
49 skips = {}
50
---> 51 Z, inpt = inputs
52
53 #implementing encoder path
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in __iter__(self)
503 def __iter__(self):
504 if not context.executing_eagerly():
--> 505 self._disallow_iteration()
506
507 shape = self._shape_tuple()
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in _disallow_iteration(self)
499 else:
500 # Default: V1-style Graph execution.
--> 501 self._disallow_in_graph_mode("iterating over `tf.Tensor`")
502
503 def __iter__(self):
~/miniforge3/envs/mlm1-engine/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in _disallow_in_graph_mode(self, task)
477
478 def _disallow_in_graph_mode(self, task):
--> 479 raise errors.OperatorNotAllowedInGraphError(
480 "{} is not allowed in Graph execution. Use Eager execution or decorate"
481 " this function with @tf.function.".format(task))
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
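For anyone hitting the same error: `Z, inpt = inputs` tries to iterate over a single symbolic tensor, and new layers are created inside call() on every invocation. A sketch of a fix (reusing the encoder block and the conv2dtranspose partial from the question): build all layers in __init__, treat inputs as one tensor, and return a tensor instead of a keras.Model:
class UNet(keras.Model):
    def __init__(self, filters, **kwargs):
        super().__init__(**kwargs)
        self.downs = [encoder(f) for f in filters]
        self.pools = [keras.layers.MaxPool2D(2) for _ in filters[:-1]]
        self.ups = [conv2dtranspose(f) for f in reversed(filters[:-1])]
        self.decs = [encoder(f) for f in reversed(filters[:-1])]
        self.out_conv = keras.layers.Conv2D(1, 1, activation='sigmoid')

    def call(self, x):
        skips = []
        # Encoder path: keep each pre-pool activation for a skip connection.
        for down, pool in zip(self.downs[:-1], self.pools):
            x = down(x)
            skips.append(x)
            x = pool(x)
        x = self.downs[-1](x)  # bottleneck
        # Decoder path: upsample, concatenate the matching skip, convolve.
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = tf.concat([up(x), skip], axis=-1)
            x = dec(x)
        return self.out_conv(x)

model = UNet(filters=[64, 128, 256, 512])
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.SGD(),
              metrics=['accuracy'])
model.fit(x, y, epochs=3)
With the layers created once in __init__, AutoGraph can trace call() cleanly. |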
st207388 | What are the possible ways to install topology for Keras? The following import command gives an error.
Command:
from keras.engine.topology import network
Error:
ModuleNotFoundError: No module named ‘keras.engine.topology’ |
st207389 | This might depend on the Keras version you are using. Check whether keras.engine.topology has been deprecated.
You can force-install an earlier version with:
pip install 'keras==2.1.6' --force-reinstall
where 2.1.6 is just an example version. You may try
import tensorflow.python.keras.engine
but you will not be able to import topology from tensorflow.python.keras.engine.
Please refer to the answers in similar issue1 15, issue2 13.
Thanks! |
st207390 | Thanks Sushree for the response!
I changed the Keras version and found the topology module under dist-packages. I need to import Network from topology, but it shows an error. The provided code uses Network() in a later section.
ImportError: cannot import name ‘Network’ from ‘keras.engine.topology’ (/usr/local/lib/python3.7/dist-packages/keras/engine/topology.py) |
st207391 | history = model.fit(X_tr, np.array(y_tr), batch_size=22, epochs=200, validation_split=0.1, verbose=1)
Epoch 1/200
ValueError Traceback (most recent call last)
in
----> 1 history = model.fit(X_tr, np.array(y_tr), batch_size=22, epochs=200, validation_split=0.1, verbose=1)
~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in _method_wrapper(self, *args, **kwargs)
106 def _method_wrapper(self, *args, **kwargs):
107 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
→ 108 return method(self, *args, **kwargs)
109
110 # Running inside run_distribute_coordinator already.
~\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1096 batch_size=batch_size):
1097 callbacks.on_train_batch_begin(step)
→ 1098 tmp_logs = train_function(iterator)
1099 if data_handler.should_sync:
1100 context.async_wait()
~\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in call(self, *args, **kwds)
778 else:
779 compiler = “nonXla”
→ 780 result = self._call(*args, **kwds)
781
782 new_tracing_count = self._get_tracing_count()
~\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in _call(self, *args, **kwds)
812 # In this case we have not created variables on the first call. So we can
813 # run the first trace but we should fail if variables are created.
→ 814 results = self._stateful_fn(*args, **kwds)
815 if self._created_variables:
816 raise ValueError(“Creating variables on a non-first call to a function”
~\anaconda3\lib\site-packages\tensorflow\python\eager\function.py in call(self, *args, **kwargs)
2826 “”“Calls a graph function specialized to the inputs.”""
2827 with self._lock:
→ 2828 graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
2829 return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
2830
~\anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _maybe_define_function(self, args, kwargs)
3208 and self.input_signature is None
3209 and call_context_key in self._function_cache.missed):
→ 3210 return self._define_function_with_shape_relaxation(args, kwargs)
3211
3212 self._function_cache.missed.add(call_context_key)
~\anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _define_function_with_shape_relaxation(self, args, kwargs)
3139 expand_composites=True)
3140
→ 3141 graph_function = self._create_graph_function(
3142 args, kwargs, override_flat_arg_shapes=relaxed_arg_shapes)
3143 self._function_cache.arg_relaxed[rank_only_cache_key] = graph_function
~\anaconda3\lib\site-packages\tensorflow\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3063 arg_names = base_arg_names + missing_arg_names
3064 graph_function = ConcreteFunction(
→ 3065 func_graph_module.func_graph_from_py_func(
3066 self._name,
3067 self._python_function,
~\anaconda3\lib\site-packages\tensorflow\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
984 _, original_func = tf_decorator.unwrap(python_func)
985
→ 986 func_outputs = python_func(*func_args, **func_kwargs)
987
988 # invariant: func_outputs contains only Tensors, CompositeTensors,
~\anaconda3\lib\site-packages\tensorflow\python\eager\def_function.py in wrapped_fn(*args, **kwds)
598 # wrapped allows AutoGraph to swap in a converted function. We give
599 # the function a weak reference to itself to avoid a reference cycle.
→ 600 return weak_wrapped_fn().wrapped(*args, **kwds)
601 weak_wrapped_fn = weakref.ref(wrapped_fn)
602
~\anaconda3\lib\site-packages\tensorflow\python\framework\func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, “ag_error_metadata”):
→ 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:806 train_function *
return step_function(self, iterator)
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:789 run_step **
outputs = model.train_step(data)
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:756 train_step
_minimize(self.distribute_strategy, tape, self.optimizer, loss,
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py:2736 _minimize
gradients = optimizer._aggregate_gradients(zip(gradients, # pylint: disable=protected-access
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py:562 _aggregate_gradients
filtered_grads_and_vars = _filter_grads(grads_and_vars)
C:\Users\BlackPearl\anaconda3\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py:1270 _filter_grads
raise ValueError("No gradients provided for any variable: %s." %
ValueError: No gradients provided for any variable: ['embedding/embeddings:0', 'bidirectional/forward_lstm/lstm_cell_1/kernel:0', 'bidirectional/forward_lstm/lstm_cell_1/recurrent_kernel:0', 'bidirectional/forward_lstm/lstm_cell_1/bias:0', 'bidirectional/backward_lstm/lstm_cell_2/kernel:0', 'bidirectional/backward_lstm/lstm_cell_2/recurrent_kernel:0', 'bidirectional/backward_lstm/lstm_cell_2/bias:0', 'dense/kernel:0', 'dense/bias:0'].
from seqeval.metrics import precision_score, recall_score, f1_score, classification_report
test_pred = model.predict(X_te, verbose=1) |
st207392 | I need to measure the time and memory complexity of a Keras model (a captioning model built with Keras)… how can I start?
Thanks |
st207393 | Check:
How to find out keras model memory size? General Discussion
Hello!
I am doing a school work and I need to find out keras model memory size so I could compare different models. It is supposed to be composed of weights/parameters and model itself. It was given that model.summary() should contain all the information. From there I see that layer info and output shapes with the number of parameters.
I understand parameters as they are just a numbers and so, number of parameters * 4B most likely will give how much room parameters take. But I know that more i… |
st207394 | Thanks a lot for replying … excuse me, let me confirm whether I got the point …
I read the links; to measure the time and memory consumption, I need to add these lines to my model (TF1-style):
run_metadata = tf.RunMetadata()
with tf.Session() as sess:
    _ = sess.run(train_op,
                 options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
                 run_metadata=run_metadata)
After the model finishes, should I run tf.profiler.profile?
And what if I have already finished training and have the final .h5 file from epoch 20 … is there another way to measure time and memory instead of starting training again? |
st207395 | You can approximate memory with a calculation over the input size, number of parameters, dtype, etc., but to really profile your model you need to run it.
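If you already have the trained model, you can profile a plain inference run with the TF2 profiler (a sketch; no retraining needed, and the input shape here is just an assumption):
import tensorflow as tf

model = tf.keras.models.load_model('model.h5')  # the epoch-20 checkpoint
x = tf.random.normal([1, 224, 224, 3])          # match your model's input

tf.profiler.experimental.start('logdir')
model.predict(x)  # one forward pass is enough for inference profiling
tf.profiler.experimental.stop()
Then inspect timing and memory in TensorBoard's Profile tab with: tensorboard --logdir logdir |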
st207396 | Thanks a lot … is there any tutorial for these calculations? What terms should I search for, please? |
st207397 | You can check Ability to calculate projected memory usage for a given model · Issue #36327 · tensorflow/tensorflow · GitHub 3
But I suggest you estimate it at runtime with the previous solution.
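For the rough static estimate, a back-of-the-envelope sketch:
# Rough weight-memory estimate: parameter count times bytes per element.
params = model.count_params()
print(f"~{params * 4 / 1e6:.1f} MB of float32 weights")
# Activations add roughly (per-layer output sizes summed) * batch * 4 bytes.
This ignores optimizer state and framework overhead, which is why runtime profiling is more reliable. |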
st207398 | thanks a lot but excuse me what will be the difference between the runtime and the other way ? |
st207399 | Cause at runtime it is what it is going to really consume the library on the machine e.g. kernel memory requirements and any other allocation, data transfer etc… |