---
language:
- "List of ISO 639-1 code for your language"
- lang1
- lang2
thumbnail: "url to a thumbnail used in social sharing"
tags:
- tag1
- tag2
license: "any valid license identifier"
datasets:
- dataset1
- dataset2
metrics:
- metric1
- metric2
---
# Multi-Label Classification of PubMed Articles
Traditional machine learning models struggle when we do not have enough labeled data for the specific task or domain we care about to train a reliable model. Transfer learning lets us handle these scenarios by leveraging existing labeled data from a related task or domain: the knowledge gained while solving the source task in the source domain is reused for the problem of interest. In this work, I apply transfer learning with the BertForSequenceClassification model, fine-tuning it on the PubMed multi-label classification dataset.

I also tried the RobertaForSequenceClassification and XLNetForSequenceClassification models for fine-tuning on the same PubMed multi-label dataset.
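As a rough sketch of the setup (the checkpoint name and the number of labels below are assumptions, not values taken from this repository), the multi-label fine-tuning configuration with Hugging Face Transformers looks roughly like this:

```python
# Minimal sketch: load a pretrained encoder for multi-label classification.
# "bert-base-uncased" and NUM_LABELS are assumptions; adjust them to the
# checkpoint and the number of label columns actually used in the notebook.
from transformers import BertTokenizerFast, BertForSequenceClassification

NUM_LABELS = 14  # assumed count of MeSH label columns in the PubMed dataset

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # model then uses BCEWithLogitsLoss
)

# RobertaForSequenceClassification and XLNetForSequenceClassification can be
# swapped in the same way, changing only the checkpoint and tokenizer class
# (e.g. "roberta-base", "xlnet-base-cased").
```

With `problem_type="multi_label_classification"`, the loss applied to the multi-hot label vectors is `BCEWithLogitsLoss`, which is why it appears in the references below.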
I have integrated Weights & Biases (W&B) for visualizations, artifact logging, and comparisons across the different models!
[Multi Label Classification of PubMed Articles (Paper Night Presentation)](https://wandb.ai/owaiskhan9515/Multi%20Label%20Classification%20of%20PubMed%20Articles%20(Paper%20Night%20Presentation))
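As an illustrative sketch only (the project name, run name, metric keys, and output directory are placeholders, not the exact values used in the linked W&B project), the logging pattern is along these lines:

```python
# Illustrative Weights & Biases logging sketch; all names below are placeholders.
import wandb

run = wandb.init(
    project="Multi Label Classification of PubMed Articles",  # assumed project name
    name="bert-base-uncased-finetune",
)

# Inside the training loop: log metrics that make cross-model comparison easy.
loss_value, f1_micro = 0.0, 0.0  # placeholders; computed during training/validation
wandb.log({"train/loss": loss_value, "val/f1_micro": f1_micro, "epoch": 1})

# Log the fine-tuned checkpoint as a versioned artifact.
artifact = wandb.Artifact("pubmed-multilabel-bert", type="model")
artifact.add_dir("./model_output")  # assumed directory containing the saved model
run.log_artifact(artifact)
run.finish()
```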
- To get the API key, create an account on the Weights & Biases website.
- Use Kaggle Secrets to store and access the API key more securely inside Kaggle (a minimal login sketch follows this list).
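A minimal sketch of the Kaggle Secrets approach (the secret label `wandb-api-key` is an assumption; use whatever label you saved the key under):

```python
# Log in to Weights & Biases from a Kaggle notebook without hard-coding the key.
from kaggle_secrets import UserSecretsClient
import wandb

user_secrets = UserSecretsClient()
api_key = user_secrets.get_secret("wandb-api-key")  # assumed secret label (Add-ons > Secrets)
wandb.login(key=api_key)
```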
For more information on the attributes, see the Kaggle Dataset Description.
To get a full grasp of the steps needed to make use of this dataset, take a look at the dataset and the accompanying information via the Kaggle Notebook link and the Kaggle Dataset link.
## References
- Attention Is All You Need
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- https://github.com/google-research/bert
- https://github.com/huggingface/transformers
- BCEWithLogitsLoss (PyTorch documentation)
- Transformers for Multi-Label Classification made simple by Ronak Patel