url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k, ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k, ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9229 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9229/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9229/comments | https://api.github.com/repos/huggingface/transformers/issues/9229/events | https://github.com/huggingface/transformers/issues/9229 | 771,927,000 | MDU6SXNzdWU3NzE5MjcwMDA= | 9,229 | Generate function does not work with GPU | {
"login": "contribcode",
"id": 24355946,
"node_id": "MDQ6VXNlcjI0MzU1OTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/24355946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/contribcode",
"html_url": "https://github.com/contribcode",
"followers_url": "https://api.github.com/users/contribcode/followers",
"following_url": "https://api.github.com/users/contribcode/following{/other_user}",
"gists_url": "https://api.github.com/users/contribcode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/contribcode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/contribcode/subscriptions",
"organizations_url": "https://api.github.com/users/contribcode/orgs",
"repos_url": "https://api.github.com/users/contribcode/repos",
"events_url": "https://api.github.com/users/contribcode/events{/privacy}",
"received_events_url": "https://api.github.com/users/contribcode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hey @contribcode,\r\n\r\ncould you please provide a complete code snippet that would allow us to reproduce the error? Thanks!",
"@patrickvonplaten thank you for your response. \r\n\r\n```python\r\nimport torch\r\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\r\nfrom transformers import GPT2LMHeadModel\r\nfrom transformers import GPT2TokenizerFast\r\ntokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")\r\n\r\nspecial_tokens_dict = {'additional_special_tokens': ['[spanstarts]','[spanends]']}\r\nnum_added_toks = tokenizer.add_special_tokens(special_tokens_dict)\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\", pad_token_id=tokenizer.eos_token_id)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.to(device)\r\n```\r\nI have posted the snippets about model definition, since I think the rest of the code does not affect the `generate` method. After this I just train the model. \r\n\r\nRegards.",
"Hey @contribcode, \r\n\r\nI can execute the above code snippet without error -> could you include the part of the code where the error arises?",
"Hey @patrickvonplaten, \r\n\r\nthe part of the code where the error occurs is the code in the original post\r\n\r\n```\r\ninput_ids = tokenizer.encode(train_texts['text'].iloc[0], return_tensors='pt')\r\nmodel.to('cuda')\r\ninput_ids.to('cuda')\r\nsample_output = model.generate(\r\n input_ids, \r\n do_sample=True, \r\n max_length=150,\r\n top_k=50,\r\n top_p=0.92\r\n)\r\n```\r\n\r\nand I get the error \r\n\r\n`RuntimeError: Input, output and indices must be on the current device`\r\n\r\nAs I mentioned in the original post, if I move the model and input_ids `to('cpu')`, it works.\r\n\r\nI also wanted to ask about parameters (`top_k`, `top_p`), are these values common for the generate function?",
"Could you please post a single code snippet that I can copy/paste into a terminal and run to reproduce the error? The above snippet is not executable because `train_texts['text'].iloc[0]` is not defined. It's impossible to help you if we are not able to reproduce the error, I'm afraid.",
"`train_texts['text'].iloc[0]` is just some text, for example: This man is a joke.\r\n\r\nThe whole code snippet is \r\n\r\n```python\r\nimport torch\r\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\r\nfrom transformers import GPT2LMHeadModel\r\nfrom transformers import GPT2TokenizerFast\r\ntokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")\r\n\r\nspecial_tokens_dict = {'additional_special_tokens': ['[spanstarts]','[spanends]']}\r\nnum_added_toks = tokenizer.add_special_tokens(special_tokens_dict)\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\", pad_token_id=tokenizer.eos_token_id)\r\nmodel.resize_token_embeddings(len(tokenizer))\r\nmodel.to(device)\r\n\r\ninput_ids = tokenizer.encode('This man is a joke.', return_tensors='pt')\r\ninput_ids.to(device)\r\nsample_output = model.generate(\r\n input_ids, \r\n do_sample=True, \r\n max_length=150,\r\n top_k=50,\r\n top_p=0.92\r\n)\r\n```\r\n\r\nand the error is what I mentioned earlier.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I have the same issue with Transformers 4.4.2 and PyTorch 1.8.0\r\n\r\nUPD: also tried Transformers 4.5.1, the issue is still here",
"Very nice @cloveranon !\r\n\r\nNow, do you know how/if GPUs can accelerate generation? Can this work in batches?\r\n\r\nMaybe, @patrickvonplaten can shed some light."
] | 1,608 | 1,622 | 1,614 | NONE | null | Hello,
I want to use the generate function with a single GPU. Specifically, I fine-tuned a GPT-2 model (on a GPU), and subsequently I want to generate text with it.
When I run this
```
input_ids.to(device)
sample_output = model.generate(
input_ids,
do_sample=True,
max_length=150,
top_k=50,
top_p=0.92
)
```
I get this error
`RuntimeError: Input, output and indices must be on the current device`
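For reference, `Tensor.to()` is not in-place (unlike `nn.Module.to()`), so the moved tensor has to be reassigned. A minimal sketch of the likely fix, reusing the snippet above:

```python
# Tensor.to() returns a new tensor; capture the result instead of discarding it.
# nn.Module.to(), by contrast, moves the module's parameters in place.
input_ids = input_ids.to('cuda')
model.to('cuda')

sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=150,
    top_k=50,
    top_p=0.92
)
```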
When I move the model and the input `.to('cpu')`, it works. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9229/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9228 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9228/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9228/comments | https://api.github.com/repos/huggingface/transformers/issues/9228/events | https://github.com/huggingface/transformers/issues/9228 | 771,882,102 | MDU6SXNzdWU3NzE4ODIxMDI= | 9,228 | Differences between original implementation and HuggingFace implementation | {
"login": "osabnis",
"id": 38884346,
"node_id": "MDQ6VXNlcjM4ODg0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/38884346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osabnis",
"html_url": "https://github.com/osabnis",
"followers_url": "https://api.github.com/users/osabnis/followers",
"following_url": "https://api.github.com/users/osabnis/following{/other_user}",
"gists_url": "https://api.github.com/users/osabnis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osabnis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osabnis/subscriptions",
"organizations_url": "https://api.github.com/users/osabnis/orgs",
"repos_url": "https://api.github.com/users/osabnis/repos",
"events_url": "https://api.github.com/users/osabnis/events{/privacy}",
"received_events_url": "https://api.github.com/users/osabnis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi there,\r\n\r\nI made some [integration tests](https://github.com/NielsRogge/transformers/blob/e5431da34ab2d03d6114303f18fd70192c880913/tests/test_modeling_layoutlm.py#L318) for both the base model (`LayoutLM`) as well as the model with a token classification head on top (`LayoutLMForTokenClassification`). These integration tests do not reveal any differences in terms of output on the same input data between the original implementation and the one in HuggingFace Transformers. So the implementation seems to be OK. Btw, the `segment_ids` you are referring to are called `token_type_ids` in the Transformers library.\r\n\r\nI also made a demo [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) that showcases how to fine-tune LayoutLMForTokenClassification on the FUNSD dataset, I'm getting quite good performance even though I'm not using Mask-RCNN features. Let me know if this helps you. ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | ## Environment info
- `transformers` version: **4.0.0**
- Platform: **Windows**
- Python version: **3.6.5**
- PyTorch version (GPU?): **1.6.0+cu101**
- Tensorflow version (GPU?): **-**
- Using GPU in script?: **Yes**
- Using distributed or parallel set-up in script?: **No**
### Who can help
**@stefan-it**
## Information
The model I am using (Bert, XLNet ...): LayoutLMforTokenClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
**My own modified script**
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
**My own dataset**
## To reproduce
Steps to reproduce the behavior:
## Expected behavior
This is more of a question rather than an issue.
When I trained the LayoutLM model on my data using the token classification model from huggingface, I got a small drop in performance. I wanted to ask whether there are any differences between the two models; I have kept the hyperparameters exactly the same in both cases.
The two key points where I found differences are:
(1) When reading in the dataset, the Microsoft version has a concept called "segment_ids", which is not a parameter in the huggingface LayoutLM documentation.
(2) When I loaded both models and printed their layers, I saw one extra entry called layoutlm.embeddings.position_ids in the huggingface implementation.
I am trying to find the reason for the drop in performance, so I wanted to find out whether there is any difference between the model implementations themselves. It would be a great help if you could explain the two differences I found!
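For reference, a minimal sketch of how these two points map onto the HuggingFace port (the checkpoint name and label count here are illustrative assumptions, not from this issue):

```python
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained(
    "microsoft/layoutlm-base-uncased", num_labels=13
)

# (1) The original repo's `segment_ids` are passed as `token_type_ids` here.
# (2) `layoutlm.embeddings.position_ids` is a registered non-trainable buffer,
#     not an extra weight layer, so it carries no learned parameters.
```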
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9228/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9228/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9227 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9227/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9227/comments | https://api.github.com/repos/huggingface/transformers/issues/9227/events | https://github.com/huggingface/transformers/issues/9227 | 771,880,443 | MDU6SXNzdWU3NzE4ODA0NDM= | 9,227 | Can't lazy initialize BART model on GPU | {
"login": "jethrokuan",
"id": 1667473,
"node_id": "MDQ6VXNlcjE2Njc0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1667473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jethrokuan",
"html_url": "https://github.com/jethrokuan",
"followers_url": "https://api.github.com/users/jethrokuan/followers",
"following_url": "https://api.github.com/users/jethrokuan/following{/other_user}",
"gists_url": "https://api.github.com/users/jethrokuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jethrokuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jethrokuan/subscriptions",
"organizations_url": "https://api.github.com/users/jethrokuan/orgs",
"repos_url": "https://api.github.com/users/jethrokuan/repos",
"events_url": "https://api.github.com/users/jethrokuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jethrokuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jethrokuan - Please try this on the transformers 4.1.0; issue looks to be fixed in this version",
"Actually it looks like the problem is exasperated in newer HF. In a devbox with no CUDA:\r\n\r\n```\r\nPython 3.7.9 (default, Nov 18 2020, 11:18:33)\r\n[GCC 6.3.0 20170516] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n.../lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\n```\r\n \r\nThis suggests that the simple action of importing anything from hf causes cuda initialization.\r\n\r\nI worked around it by deferring the import altogether:\r\n\r\n```\r\nclass Model:\r\n def __init__(self, path):\r\n self.path = path\r\n self.initialized = False\r\n\r\n def init(self):\r\n if not self.initialized:\r\n from transformers import ...\r\n self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n self.model = AutoModelForSeq2SeqLM.from_pretrained(self.path)\r\n self.model.to(self.device)\r\n self.model.eval()\r\n self.initialized = True\r\n```\r\n "
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 3.5.1
- Platform: Linux-4.14.81.bm.15-amd64-x86_64-with-debian-9.11
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
Bart: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Bart
We have fine-tuned a BART model for Seq2SeqLM and want to serve it using a Python web microframework. This microframework uses `multiprocessing` underneath, so we wish to lazily initialize the model on the GPU on the first call. However, in `modeling_bart.py`, some of the layers already call CUDA in the main process:
https://github.com/huggingface/transformers/blob/f38c4ad302dd34faef1137b13e9636f4408b0462/src/transformers/models/bart/modeling_bart.py#L125
So trying to do so results in the following error:
```
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
## To reproduce
```python
import torch
from transformers import AutoModelForSeq2SeqLM

class Model:
    def __init__(self, path):
        self.path = path
        self.initialized = False

    def init(self):
        if not self.initialized:
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            self.model = AutoModelForSeq2SeqLM.from_pretrained(self.path)
            self.model.to(self.device)
            self.model.eval()
            self.initialized = True
```
```
model = Model("bart-base")
# in multiprocessing subprocess...
model.init()
# RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
Sorry about the pseudocode; the microframework used is proprietary.
Commenting out the linked lines removes the error.
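As the error message itself suggests, another workaround (a sketch, assuming the serving framework lets you choose the start method) is to use 'spawn' instead of fork, so each worker initializes CUDA in a fresh interpreter:

```python
import torch.multiprocessing as mp

if __name__ == "__main__":
    # 'spawn' starts each worker in a new interpreter, so CUDA state is
    # created inside the child instead of being inherited via fork.
    mp.set_start_method("spawn", force=True)
```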
## Expected behavior
Able to call `model.cuda()` from subprocess. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9227/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9226 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9226/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9226/comments | https://api.github.com/repos/huggingface/transformers/issues/9226/events | https://github.com/huggingface/transformers/issues/9226 | 771,755,483 | MDU6SXNzdWU3NzE3NTU0ODM= | 9,226 | n_gpus is set to 1 in case of distributed training on multiple gpus, how to access to the correct n_gpus | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please do not spam the issues: this is a duplicate of #9225. You can edit the title and the description, there is no need to send a new notification to everyone watching the repo.\r\nAlso, please use the [forum](https://discuss.huggingface.co/) for questions like this, we keep the issues for bugs and features requests only. In your case, this is a PyTorch command: `toch.distributed.get_world_size()` will return you the number of GPUs used.",
"Hi Slvain, thank you, sorry I did not really realized the duplicate, sure, thanks "
] | 1,608 | 1,608 | 1,608 | NONE | null | Hi
I am using transformers 4.1.1 on multiple GPUs with the distributed launch script below. In this case, n_gpu is set to 1 instead of the actual number of GPUs. Could you tell me how I can access the total number of ranks the launcher is called with? Is there any variable in the huggingface library that shows the number of GPUs the script is called with? Here is the command I run:
`export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9915 finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation --val_max_target_length 128 --warmup_steps 500 --n_train 500
`
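For reference, a minimal sketch of reading the world size inside the script, once the launcher has initialized the process group:

```python
import torch.distributed as dist

# One process per GPU under torch.distributed.launch, so this equals
# the number of GPUs the script was launched with.
world_size = dist.get_world_size() if dist.is_initialized() else 1
```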
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9226/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9225 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9225/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9225/comments | https://api.github.com/repos/huggingface/transformers/issues/9225/events | https://github.com/huggingface/transformers/issues/9225 | 771,755,274 | MDU6SXNzdWU3NzE3NTUyNzQ= | 9,225 | n_gpu=1 in case of using distributed pytorch launcher | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #9226"
] | 1,608 | 1,608 | 1,608 | NONE | null | Hi
I am using transformers 4.1.1 on multiple GPUs with the distributed launch script below. In this case, n_gpu is set to 1 instead of the actual number of GPUs. Could you tell me how I can access the total number of ranks the launcher is called with? Is there any variable in the huggingface library that shows the number of GPUs the script is called with? Here is the command I run:
`export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9915 finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation --val_max_target_length 128 --warmup_steps 500 --n_train 500
`
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9225/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9224 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9224/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9224/comments | https://api.github.com/repos/huggingface/transformers/issues/9224/events | https://github.com/huggingface/transformers/issues/9224 | 771,698,366 | MDU6SXNzdWU3NzE2OTgzNjY= | 9,224 | allowing the user to set booleans with HfArgumentParser | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @rabeehkarimimahabadi, \r\n\r\nOne should not call the `HfArgumentParser` with \r\n\r\n`--overwrite_output_dir True`, but just with `--overwrite_output_dir` to set the bool from `False` to `True`. If one wants to leave it as `False`, the arg should just not be passed. If the default field of this variable is already `True` and one wants to set it to `False` the argument should be passed as follows :`--no_remove_unused_columns`, see: https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L83\r\n."
] | 1,608 | 1,608 | 1,608 | NONE | null | Hi
this is a more general issue: if the user defines some booleans as arguments like `--flag false/true`, then calling finetune_trainer.py produces the error below from HfArgumentParser. It would be great to allow users to set explicit values for booleans too; currently this is only possible with config files. thanks
```
Traceback (most recent call last):
Traceback (most recent call last):
File "finetune_t5_trainer.py", line 329, in <module>
File "finetune_t5_trainer.py", line 329, in <module>
main()main()
File "finetune_t5_trainer.py", line 48, in main
File "finetune_t5_trainer.py", line 48, in main
model_args, data_args, training_args, adapter_args = parser.parse_args_into_dataclasses()model_args, data_args, training_args, adapter_args = parser.parse_args_into_dataclasses()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/hf_argparser.py", line 144, in parse_args_into_dataclasses
File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/hf_argparser.py", line 144, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueErrorValueError: : Some specified arguments are not used by the HfArgumentParser: ['True', 'true', 'False', 'False', 'True', 'True', 'True', 'True', 'True', 'True', 'True', '--add_adapters_in_decoder', 'False']Some specified arguments are not used by the HfArgumentParser: ['True', 'true', 'False', 'False', 'True', 'True', 'True', 'True', 'True', 'True', 'True', '--add_adapters_in_decoder', 'False']
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 260, in <module>
main()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/internship/bin/python', '-u', 'finetune_t5_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-small', '--tokenizer_name', 't5-small', '--learning_rate', '1e-2', '--output_dir', 'outputs/test', '--max_source_length', '128', '--max_target_length', '128', '--val_max_target_length', '128', '--test_max_target_length', '128', '--num_train_epochs', '10', '--warmup_steps', '500', '--eval_steps', '200', '--overwrite_output_dir', 'True', '--tasks', '[scitail,', 'boolq]', '--eval_tasks', '[rte,', 'boolq]', '--sampling', 'true', '--label_smoothing', '0.1', '--freeze_encoder', 'False', '--freeze_embeds', 'False', '--per_device_train_batch_size', '64', '--per_device_eval_batch_size', '64', '--save_steps', '20', '--logging_first_step', 'True', '--logging_steps', '200', '--save_total_limit', '1', '--train_adapters', 'True', '--adapter_config_name', 'parametric-meta-adapter', '--temperature', '10', '--do_eval', 'True', '--predict_with_generate', 'True', '--n_train', '10', '--task_embedding_dir', 'test_data/task_embeddings/n-train-all', '--task_embedding_dim', '512', '--n_val', '10', '--n_train', '10', '--do_finetune', 'True', '--do_train', 'True', '--n_finetune', '100', '--eval_output_dir', 'outputs/eval_test', '--reduction_factor', '16', '--non_linearity', 'relu', '--train_task_embeddings', 'True', '--projected_task_embedding_dim', '512', '--add_adapters_in_decoder', 'False', '--unfreeze_lm_head', '--unfreeze_layer_norms']' returned non-zero exit status 1.
```
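For illustration, a minimal sketch of the flag-style behavior described in the comment above (the dataclass and field names here are hypothetical):

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class Args:
    overwrite_output_dir: bool = field(default=False)
    remove_unused_columns: bool = field(default=True)

parser = HfArgumentParser(Args)
# Works: --overwrite_output_dir         (flips the default-False field to True)
# Works: --no_remove_unused_columns     (flips the default-True field to False)
# Fails: --overwrite_output_dir True    ('True' is left over as an unused argument)
(args,) = parser.parse_args_into_dataclasses()
```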
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9224/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9223 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9223/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9223/comments | https://api.github.com/repos/huggingface/transformers/issues/9223/events | https://github.com/huggingface/transformers/issues/9223 | 771,689,498 | MDU6SXNzdWU3NzE2ODk0OTg= | 9,223 | passing config file to train with on multiple gpus | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"this does not work since finetuner_trainer.py gets config as only one argument and local_rank is also passed with this command, is there an easy fix for this without parsing arguments from config? thanks ",
"I solved it with updating finetune_trainer accepting --local_rank + config file"
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- transformers 3.5.1
- Python version: 3.7
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
Trainer: @sgugger
T5: @patrickvonplaten
FSMT: @stas00
examples/seq2seq: @patil-suraj
documentation: @sgugger
## Information
Hi, I want to run finetune_trainer on multiple GPUs with a config file. I am using the following command:
`export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 finetune_t5_trainer.py configs/experiments/test.json
`
I get the error below; it looks like the config file is not passed properly in this case (the command works on a single GPU). Could you please tell me how I can call the code with a config file? It would be great to have this in the README. thanks
```
inetune_t5_trainer.py: error: the following arguments are required: --model_name_or_path, --output_dir
usage: finetune_t5_trainer.py [-h] --model_name_or_path MODEL_NAME_OR_PATH
[--not_load_t5_checkpoint]
[--config_name CONFIG_NAME]
[--tokenizer_name TOKENIZER_NAME]
[--cache_dir CACHE_DIR] [--freeze_encoder]
[--freeze_embeds] [--freeze_model_but_lm_head]
[--unfreeze_lm_head]
[--freeze_model_but_task_embeddings]
[--unfreeze_layer_norms] [--sampling]
[--tasks TASKS [TASKS ...]]
[--eval_tasks EVAL_TASKS [EVAL_TASKS ...]]
[--adapters ADAPTERS [ADAPTERS ...]]
[--max_source_length MAX_SOURCE_LENGTH]
[--max_target_length MAX_TARGET_LENGTH]
[--val_max_target_length VAL_MAX_TARGET_LENGTH]
[--test_max_target_length TEST_MAX_TARGET_LENGTH]
[--n_train N_TRAIN] [--n_val N_VAL]
[--n_test N_TEST] [--eval_beams EVAL_BEAMS]
[--no_ignore_pad_token_for_loss]
[--n_finetune N_FINETUNE] --output_dir
OUTPUT_DIR [--overwrite_output_dir] [--do_train]
[--do_eval] [--do_predict]
[--evaluate_during_training]
[--evaluation_strategy {EvaluationStrategy.NO,EvaluationStrategy.STEPS,EvaluationStrategy.EPOCH}]
[--prediction_loss_only]
[--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE]
[--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE]
[--per_gpu_train_batch_size PER_GPU_TRAIN_BATCH_SIZE]
[--per_gpu_eval_batch_size PER_GPU_EVAL_BATCH_SIZE]
[--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
[--eval_accumulation_steps EVAL_ACCUMULATION_STEPS]
[--learning_rate LEARNING_RATE]
[--weight_decay WEIGHT_DECAY]
[--adam_beta1 ADAM_BETA1]
[--adam_beta2 ADAM_BETA2]
[--adam_epsilon ADAM_EPSILON]
[--max_grad_norm MAX_GRAD_NORM]
[--num_train_epochs NUM_TRAIN_EPOCHS]
[--max_steps MAX_STEPS]
[--warmup_steps WARMUP_STEPS]
[--logging_dir LOGGING_DIR]
[--logging_first_step]
[--logging_steps LOGGING_STEPS]
[--save_steps SAVE_STEPS]
[--save_total_limit SAVE_TOTAL_LIMIT]
[--no_cuda] [--seed SEED] [--fp16]
[--fp16_opt_level FP16_OPT_LEVEL]
[--local_rank LOCAL_RANK]
[--tpu_num_cores TPU_NUM_CORES]
[--tpu_metrics_debug] [--debug]
[--dataloader_drop_last]
[--eval_steps EVAL_STEPS]
[--dataloader_num_workers DATALOADER_NUM_WORKERS]
[--past_index PAST_INDEX] [--run_name RUN_NAME]
[--disable_tqdm DISABLE_TQDM]
[--no_remove_unused_columns]
[--label_names LABEL_NAMES [LABEL_NAMES ...]]
[--load_best_model_at_end]
[--metric_for_best_model METRIC_FOR_BEST_MODEL]
[--greater_is_better GREATER_IS_BETTER]
[--label_smoothing LABEL_SMOOTHING]
[--sortish_sampler] [--predict_with_generate]
[--adafactor]
[--encoder_layerdrop ENCODER_LAYERDROP]
[--decoder_layerdrop DECODER_LAYERDROP]
[--dropout DROPOUT]
[--attention_dropout ATTENTION_DROPOUT]
[--lr_scheduler LR_SCHEDULER]
[--fixed_length_emb]
[--encoder_projection ENCODER_PROJECTION]
[--encoder_pooling ENCODER_POOLING]
[--projection_length PROJECTION_LENGTH]
[--only_projection_bottleneck]
[--concat_projection_token]
[--gcs_bucket GCS_BUCKET]
[--temperature TEMPERATURE] [--train_adapters]
[--do_finetune] [--parametric_task_embedding]
[--eval_output_dir EVAL_OUTPUT_DIR]
[--generate_classifier_weights]
[--adapter_config_name ADAPTER_CONFIG_NAME]
[--task_embedding_dir TASK_EMBEDDING_DIR]
[--task_embedding_dim TASK_EMBEDDING_DIM]
[--add_layer_norm_before_adapter]
[--no_add_layer_norm_after_adapter]
[--hidden_dim HIDDEN_DIM]
[--reduction_factor REDUCTION_FACTOR]
[--train_task_embeddings]
[--non_linearity NON_LINEARITY]
[--projected_task_embedding_dim PROJECTED_TASK_EMBEDDING_DIM]
[--no_add_adapters_in_decoder]
finetune_t5_trainer.py: error: the following arguments are required: --model_name_or_path, --output_dir
Traceback (most recent call last):
File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 260, in <module>
main()
File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/internship/bin/python', '-u', 'finetune_t5_trainer.py', '--local_rank=1', 'configs/experiments/test.json']' returned non-zero exit status 2.
```
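A minimal sketch of the workaround described in the comments above: strip the launcher's `--local_rank` flag by hand so the parser only sees the JSON config path (the exact argument handling here is an assumption, not the author's actual fix):

```python
import sys
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)

# torch.distributed.launch injects --local_rank=<n> alongside our arguments;
# peel it off so only the .json config path remains.
local_rank, remaining = -1, []
for arg in sys.argv[1:]:
    if arg.startswith("--local_rank"):
        local_rank = int(arg.split("=")[1])
    else:
        remaining.append(arg)

(training_args,) = parser.parse_json_file(json_file=remaining[0])
training_args.local_rank = local_rank
```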
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9223/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9222 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9222/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9222/comments | https://api.github.com/repos/huggingface/transformers/issues/9222/events | https://github.com/huggingface/transformers/issues/9222 | 771,512,991 | MDU6SXNzdWU3NzE1MTI5OTE= | 9,222 | Does provide any scripts about how to convert mpnet pretrain model to transformers pretrain one? | {
"login": "RyanHuangNLP",
"id": 49582480,
"node_id": "MDQ6VXNlcjQ5NTgyNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49582480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanHuangNLP",
"html_url": "https://github.com/RyanHuangNLP",
"followers_url": "https://api.github.com/users/RyanHuangNLP/followers",
"following_url": "https://api.github.com/users/RyanHuangNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanHuangNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanHuangNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanHuangNLP/subscriptions",
"organizations_url": "https://api.github.com/users/RyanHuangNLP/orgs",
"repos_url": "https://api.github.com/users/RyanHuangNLP/repos",
"events_url": "https://api.github.com/users/RyanHuangNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanHuangNLP/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe the author of MPNet can help here, @StillKeepTry ",
"@patrickvonplaten problem is fix"
] | 1,608 | 1,608 | 1,608 | NONE | null | # 🚀 Feature request
## Motivation
I want to convert my pretrained Chinese MPNet model from the fairseq format to the transformers format.
## Your contribution
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9222/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9221/comments | https://api.github.com/repos/huggingface/transformers/issues/9221/events | https://github.com/huggingface/transformers/issues/9221 | 771,511,217 | MDU6SXNzdWU3NzE1MTEyMTc= | 9,221 | TAPAS: IndexError: index out of range in self | {
"login": "xiaopi-ouo",
"id": 58041436,
"node_id": "MDQ6VXNlcjU4MDQxNDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/58041436?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiaopi-ouo",
"html_url": "https://github.com/xiaopi-ouo",
"followers_url": "https://api.github.com/users/xiaopi-ouo/followers",
"following_url": "https://api.github.com/users/xiaopi-ouo/following{/other_user}",
"gists_url": "https://api.github.com/users/xiaopi-ouo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiaopi-ouo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiaopi-ouo/subscriptions",
"organizations_url": "https://api.github.com/users/xiaopi-ouo/orgs",
"repos_url": "https://api.github.com/users/xiaopi-ouo/repos",
"events_url": "https://api.github.com/users/xiaopi-ouo/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiaopi-ouo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @NielsRogge ",
"It doesn't work because `type_vocab_sizes=[3, 256, 256, 2, 256, 256, 10]` which means the default max size of a table is 256*256. However, the tokenizer doesn't check it. \r\nStrictly speaking, it's not a bug but I think in that kind of tasks, many tables' size can exceed the default value.\r\nMaybe it's good to add codes to handle this problem.",
"Ok I've investigated this a bit more. The problem here is that there's a column called \"Area (acres, 1872)\" in the table whose values cause the `column_ranks` token types to exceed the vocab size of 256. Deleting this column resolves the issue. Also, the table that you provide is actually (way) too large for TAPAS. Note that it only has a max sequence length of 512 tokens, and that in the original paper, only tables having at most 500 cells were considered. \r\n\r\nTo correctly truncate the table, you have to initialize TapasTokenizer as follows:\r\n\r\n```\r\nfrom transformers import TapasTokenizer\r\n\r\ntokenizer = TapasTokenizer.from_pretrained(\"google/tapas-base-finetuned-tabfact\", drop_rows_to_fit=True)\r\n```\r\n\r\nand then encode the table + query as follows:\r\n\r\n```\r\ninputs = tokenizer(table=table, queries=queries,\r\n padding=\"max_length\", \r\n truncation=True, return_tensors=\"pt\")\r\n```\r\n\r\ncc @LysandreJik \r\n\r\nIt's maybe a bit weird that people have to initialize `TapasTokenizer` with `drop_rows_to_fit` to True and set `truncation` to True when calling it.\r\n",
"Thank you for the reply. It's really helpful!",
"You're right @NielsRogge; should we switch the `drop_rows_to_fit` to `True` by default given that no models can handle it set to `False` when overflowing?",
"I guess it should only be set to `True` when calling the tokenizer with `truncation` set to `True` or to `drop_rows_to_fit`. If the user does not specify truncation, and the table is too large, then an error will be thrown as shown [here](https://github.com/huggingface/transformers/blob/8eb7f26d5d9ce42eb88be6f0150b22a41d76a93d/src/transformers/models/tapas/tokenization_tapas.py#L1426).",
"Fixed by #9507 ",
"https://stackoverflow.com/questions/76849532/hugging-face-google-tapas-base-finetuned-wtq-indexerror-index-out-of-range"
] | 1,608 | 1,701 | 1,610 | NONE | null | ## Environment info
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.15.0-122-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, 1080ti
- Using distributed or parallel set-up in script?: No
### Who can help
May @LysandreJik help?
## Information
Model I am using (Bert, XLNet ...): TAPAS
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. data in json string
```
data = '{"table": [[{"value": "County", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Name", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Irish name", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Date", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Area (acres, 1872)", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Notes", "is_header": true, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Antrim Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Aontroim \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "80,826", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Antrim town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Antrim Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Aontroim Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,489", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Antrim town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Belfast Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9al Feirste \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,142", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Belfast town (now city)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Belfast Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9al Feirste Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "32,942", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Belfast town (now city)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carrickfergus", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carraig Fhearghais", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1325", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "16,702", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate: the County of the Town of Carrickfergus", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cary or Carey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "75,035", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Cothrugu (Cotraigib, Crotraigib), an ancient tribe.", "is_header": false, "column_span": 
1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunluce Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Libhse \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,575", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "See also Dunluce Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunluce Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Libhse Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "52,788", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "See also Dunluce Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glenarm Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann Arma \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "64,945", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Glenarm village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glenarm Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann Arma Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "24,032", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Glenarm village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilconway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill Chonmha\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "68,640", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"forest of the Conmha\\u00edcne\\".", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Massereene Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00e1sa R\\u00edona \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,228", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Viscount Massereene. 
The name means \\"Queen\'s hill\\" and originally belonged to a monastery.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Massereene Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00e1sa R\\u00edona Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,675", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Viscount Massereene. The name means \\"Queen\'s hill\\" and originally belonged to a monastery.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Toome Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tuaim \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,135", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Toome village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Toome Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tuaim Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,571", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Toome village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ard Mhacha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,645", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Armagh town (now city)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fews Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na Fe\\u00e1 \\u00cdochtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1745; Fews by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,757", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From Irish Na Feadha, \\"The lengths\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fews Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na Fe\\u00e1 Uachtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1745; Fews by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,433", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From Irish Na Feadha, \\"The lengths\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Oneilland East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Niall\\u00e1in Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 
1792\\u20131807; Oneilland by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "20,890", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Oneilland West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Niall\\u00e1in Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131807; Oneilland by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,584", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Orior Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na hOirthir \\u00cdochtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131807; Orior by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,927", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Orior Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na hOirthir Uachtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131807; Orior by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,086", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tiranny or Turaney", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tuath Threana", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,397", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Threna tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceatharlach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,353", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Carlow town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Forth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fotharta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,510", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named from the Irish Fothairt Mag Fe\\u00e1, \\"fothairt of the beech plain.\\" A fothairt was a kingdom not ruled by a branch 
of the provincial ruling family.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Idrone East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Dhr\\u00f3na Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1799", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "52,857", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Idrone West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Dhr\\u00f3na Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1799", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "23,066", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathvilly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th Bhile", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,806", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Rathvilly village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "St. Mullin\'s Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tigh Moling \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,914", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after St Mullin\'s village. Does not border St. Mullin\'s Upper.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "St. Mullin\'s Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tigh Moling Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "7,784", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after St. 
Mullin\'s village; the land was a detached fragment of the original St.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Castlerahan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Caisle\\u00e1n Raithin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,279", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Castlerahan parish.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clankee", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Chaoich", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "64,377", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Caoch\'s clan\\"; Caoch (meaning \\"blind\\" or \\"squint\\") was the nickname of Niall mac Cathal na Beith\\u00ed mac Annadh \\u00d3 Raghallaigh (died 1296).", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanmahon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Mhath\\u00fana", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "51,170", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Math\\u00fain\'s clan.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Loughtee Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lucht T\\u00ed \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821; Loughtee by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,240", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Loughtee Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lucht T\\u00ed Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821; Loughtee by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,842", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tullygarvey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Teallach Ghairbh\\u00edth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,871", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"tribe of Gairbh\\u00e9ith\\".", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tullyhaw", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Teallach 
Eathach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "89,852", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Eochaid\'s tribe\\", referring to a king of c. AD 700.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tullyhunco or Tulloghonoho", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Teallach Dh\\u00fanchadha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,624", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"D\\u00fanchadh\'s tribe.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bunratty Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bun Raite \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,314", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bunratty Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bun Raite Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "53,595", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Burren", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Boirinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "74,360", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The barony is called \\"Burren\\"; the region is now usually \\"The Burren\\", a name meaning \\"great rock.\\" Formerly aka Gragans.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clonderalaw", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cluain idir Dh\\u00e1 L\\u00e1", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "75,878", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Clonderalaw Castle. 
Formerly aka East Corkewasken.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corcomroe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corca Mrua", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "61,385", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Corco Modhruadh, formerly the ruling dynasty in the area. Formerly aka Dowaghy connoghor/Tuoghmore y Conour.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ibrickan or Ibrickane", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Bhreac\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,696", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Bhreac\\u00e1in, formerly the ruling dynasty in the area", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inchiquin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inse U\\u00ed Chuinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "88,387", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name is Irish for \\"Quinn\'s water meadow.\\" Namesake of Baron Inchiquin", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Islands", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na hOile\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,592", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name refers to the islands of the Fergus estuary. Formerly aka Cloynerawde/Clonraude", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyarta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Fhearta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "68,679", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name from Irish Mag Fearta, \\"plain of graves\\". Formerly aka West Corkewasken.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tulla Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Tulach \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "73,454", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Tulla town. 
Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tulla Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Tulach Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "94,919", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Tulla town. Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bantry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Beanntra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,216", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Bantry town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Barretts", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bar\\u00f3idigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,761", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Barrett family.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Barrymore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Barraigh Mh\\u00f3ra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "148,143", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Earl of Barrymore. Name means \\"Great Barrys.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bear", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9arra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "89,986", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Beara Peninsula. 
It is said to be named after a princess named B\\u00e9irre, or possibly settlers from Iberia.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery East, East Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thoir, an Roinn Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "67,235", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery East, West Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thoir, an Roinn Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "105,141", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery West, East Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thiar, an Roinn Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "79,263", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery West, West Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thiar, an Roinn Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "109,178", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Condons and Clangibbon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cond\\u00fanaigh agus Clann Ghiob\\u00fain", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "78,481", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The territories of two families: the Condons or Cauntons, and the FitzGibbons or White Knight", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cork City", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathair Chorca\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1608", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "2,265", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate, originally including the Liberties which later formed the separate Barony of Cork. 
It contains 7 civil parishes.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corcaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "43,813", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formed from the \\"Liberties of Cork\\", the portion previously within the County of the city of Cork which was not within the borough of Cork.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Courceys", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "C\\u00farsaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "8,812", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the de Courcy barons.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Duhallow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00faiche Ealla", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "232,328", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"land of the Munster Blackwater\\".", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fermoy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Mainistir Fhear Ma\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "121,188", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Fermoy town, which is actually in Condons and Clangibbon", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ibane and Barryroe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Bhamhna agus Barraigh Rua", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1711", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,291", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ibane and Barryroe are peninsulas on opposite sides of Clonakilty Bay. The names mean, respectively, \\"Descendants of Bamna\\" and \\"Red-haired Barrys.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Imokilly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Mhic Coille", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "93,617", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Meic Caille, a sept of the U\\u00ed Liath\\u00e1in.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kerrycurrihy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ciarra\\u00ed Cuirche", "is_header":
false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "23,957", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kerrycurrihy and Kinalea united in Down Survey. A tribal name: the Ciarraige Cuirchi.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinalea", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cine\\u00e1l Aodha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "50,692", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kerrycurrihy and Kinalea united in Down Survey. The \\"tribe of A\\u00e9d.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinalmeaky", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cine\\u00e1l mB\\u00e9ice", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,068", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Cen\\u00e9l mBeice \\"Beice\'s people\\", a sept of the O\'Mahonys.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinnatalloon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill na Tal\\u00fan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,718", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Tolamhnach\'s forest,\\" referring to a 7th-century chief of the U\\u00ed Liath\\u00e1in.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinsale", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cionn tS\\u00e1ile", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "12,430", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kinsale town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muskerry East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00fascra\\u00ed Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "122,874", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Baron Muskerry. The only barony split between the East and West Ridings of County Cork.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muskerry West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00fascra\\u00ed Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "188,487", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Baron Muskerry. 
Named after the ancient tribe of the M\\u00fascraige.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Orrery and Kilmore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Orbhra\\u00ed agus An Choill Mh\\u00f3r", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,346", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Earl of Orrery. Named after the Orbhraighe tribe, while Kilmore means \\"great forest.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Banagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e1inigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1791", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "177,288", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Territory of the Cinel Boghaine, descended from Niall of the Nine Hostages. Combined with Boylagh till 1791", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Boylagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baollaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1791", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "156,245", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Territory of the O\'Boyles. Combined with Banagh till 1791", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inishowen (or Innishowen) East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inis Eoghain Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "123,356", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Eoghan\'s peninsula.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inishowen (or Innishowen) West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inis Eoghain Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "76,828", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Eoghan\'s peninsula.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilmacrenan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Mhic R\\u00e9an\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "310,325", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilmacrenan village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Raphoe North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th Bhoth Thuaidh", "is_header": false, 
"column_span": 1, "row_span": 1}, {"value": "Divided 1807\\u20131821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "80,610", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Raphoe town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Raphoe South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th Bhoth Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1807\\u20131821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "140,841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Raphoe town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Donegal", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tirhugh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Aodha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "125,828", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Aodh\'s country.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ards (or Ardes) Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Aird \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,462", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Ards Peninsula. Aird is Irish for \\"promontory.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ards (or Ardes) Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Aird Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,697", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Ards Peninsula. Aird is Irish for \\"promontory.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Castlereagh Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Caisle\\u00e1n Riabhach \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "51,452", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Castlereagh townland. Gives its name to the borough of Castlereagh.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Castlereagh Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Caisle\\u00e1n Riabhach Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "53,856", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Castlereagh townland. 
Gives its name to the borough of Castlereagh.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dufferin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Duifrian", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "17,208", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name from the Irish duibhthrian (black third).", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iveagh Lower, Lower Half", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eachach \\u00cdochtarach, An Leath \\u00cdochtair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "46,057", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iveagh Lower, Upper Half", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eachach \\u00cdochtarach, An Leath Uachtair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,538", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iveagh Upper, Lower Half", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eachach Uachtarach, An Leath \\u00cdochtair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "96,317", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iveagh Upper, Upper Half", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eachach Uachtarach, An Leath Uachtair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,249", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinelarty", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cine\\u00e1l Fh\\u00e1rtaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,322", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Faghartach\'s kindred.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, 
{"value": "Lecale Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Leath Cathail \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,920", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Lecale peninsula. The name means \\"Cathal\'s half.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lecale Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Leath Cathail Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,521", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Lecale peninsula. The name means \\"Cathal\'s half.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lordship of Newry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An tI\\u00far", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "15,813", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The historic Lordship encompassed lands on both sides of the Down-Armagh border. Later, the jurisdiction of the \\"Lordship of Newry\\" for baronial presentment sessions extended only to County Down.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Down", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Mourne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00farna", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,822", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Mourne Mountains. A half-barony in the Down Survey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Balrothery East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Ridire Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1842", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,005", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Balrothery village. Balrothery existed by 1593.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Balrothery West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Ridire Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1842", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,195", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Balrothery village. 
Balrothery existed by 1593.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Castleknock", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Caisle\\u00e1n Cnucha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,371", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Castleknock village (now suburban); from 1861, reduced in size by the expanded borders of Dublin city", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coolock", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Ch\\u00fal\\u00f3g", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "26,614", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the historical village of Coolock, now suburban; from 1861, reduced in size by the expanded borders of Dublin city", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Cliath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1840", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1,693", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Created by the 1840 Acts from land previously liberties in the county of the City. Its name and area were confirmed by the Dublin Baronies Act 1842.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dublin City", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathair Bhaile \\u00c1tha Cliath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1548", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "2,114", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Nethercross", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Chrois \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,818", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after a cross erected by Saint Cainnech in Finglas. Compare Uppercross.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Newcastle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Caisle\\u00e1n Nua", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "22,876", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the village of Newcastle, County Dublin. 
Not related to the Wicklow barony of Newcastle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathdown", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th an D\\u00fain", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,974", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A half-barony from 1606, with the Wicklow half-barony of Rathdown separated out. From 1861, reduced in size by the expanded borders of Dublin city.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Dublin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Uppercross", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Chrois Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1792\\u20131821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,307", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Compare Nethercross. In the Down Survey, Uppercross and Newcastle were not distinguished.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanawley or Glenawley", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Amhlaoibh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "72,894", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Awley\\" is from Mac Amhlaoibh and Mac Amhalghaidh (Irish septs)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clankelly or Clonkelly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Cheallaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,067", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clan of the Kellys", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coole", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Ch\\u00fail", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "17,320", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A half-barony in the Down Survey. 
Name means \\"corner.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Knockninny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cnoc Ninnidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,732", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the hill of Saint Ninnidh", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lurg", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lorg", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "66,163", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Tuath Luirg (Fir Luirg; \\"tribe/men of the path\\").", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Magheraboy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Machaire Bu\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "79,038", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"yellow plain\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Magherastephana", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Machaire Steaf\\u00e1nach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "58,979", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name origin unclear; \\"plain of the FitzStephens?\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Fermanagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tirkennedy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Cheannada", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,267", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Fergus son of Cremthann, nicknamed Cennfhota (\\"long head\\"). 
No relation to the surname Kennedy.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Aran or Arran", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\u00c1rainn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "11,287", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Conterminous with the Aran Islands; Inishmore (\\u00c1rainn Mh\\u00f3r) is named for its shape (ara = kidney)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Athenry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha an R\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,782", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Athenry town; called \\"Halfe Barony and liberties of Athenrey\\" in the Down Survey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballymoe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9al \\u00c1tha M\\u00f3", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "89,270", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballymoe village; Half with Ballymoe, County Roscommon. Full barony existed in Galway by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballynahinch", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile na hInse", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "189,813", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballynahinch town; \\"Ballenanen\\" in Down Survey (or Hibernia Delinateo)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Chl\\u00e1ir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "127,486", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the River Clare and village of Claregalway. The name means \\"[river of the] plain.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clonmacnowen or Clonmacnoon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cluain Mhac nEoghain", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,467", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Clanemtoneen\\" in Down Survey (or Hibernia Delinateo). 
Name means \\"Valley of the sons of Eoghan.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunkellin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Coill\\u00edn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "83,371", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Coill\\u00edn\'s hillfort\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunmore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan M\\u00f3r", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "71,011", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dunmore village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gaillimh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1610", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "22,492", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate: the county of the Town (now city) of Galway", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilconnell or Kilconnnel", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Chonaill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "64,819", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilconnell village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Killian", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Liath\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "52,388", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Liath\\u00e1in\'s church\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kiltartan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Tartan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "65,664", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Killcartar\\" in Down Survey (or Hibernia Delinateo). Was originally named after Saint Attracta\'s church. 
Kiltaraght in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Leitrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Liatroim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "109,567", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Clare. Name means \\"grey ridge.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Longfort", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "96,506", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"ship landing-ground\\", referring to a longphort on a tributary of the River Shannon.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Loughrea", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Locha Riach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "64,406", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Loughrea town; called \\"Half Barony of Lougheagh\\" in the Down Survey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moycullen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Cuilinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "202,386", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Moycullen village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ross", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Ros", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "77,351", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "In County Mayo in 1574; transferred to Galway within decades; since 1898 partly in Mayo. 
The name means \\"The promontory.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Galway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tiaquin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tigh Dachoinne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "110,135", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"House of double coign.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanmaurice", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Mhuiris", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "120,520", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Maurice\'s clan\\", referring to Maurice FitzGerald, 1st Earl of Desmond.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corkaguiny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corca Dhuibhne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "138,605", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ancient ruling tribe, the Corcu Duibne.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunkerron North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Ciar\\u00e1in Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "72,414", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Dunkerron Castle. Name means \\"Ciar\\u00e1n\'s hillfort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunkerron South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Ciar\\u00e1in Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "96,289", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Dunkerron Castle. 
Name means \\"Ciar\\u00e1n\'s hillfort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glanarought or Glanerought", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann na Ruachta\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "121,865", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Valley of the O\'Roughty.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iraghticonnor", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Oireacht U\\u00ed Chonch\\u00fair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "88,105", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Inheritance of the O\'Connors.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iveragh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh R\\u00e1thach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "159,980", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Descendants of R\\u00e1thach.\\" On the Kilcoolaght East ogham stone (CIIC 211), this name appears in the Primitive Irish form Rittaveccas.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Magunihy or Magonhy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh gCoinchinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "166,427", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Coinchinn\'s plain\\"; a personal name meaning wolf-warrior.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kerry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Trughanacmy or Trughenackmy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tri\\u00facha an Aicme", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "194,593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"cantred of the tribe.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbury or Carbery", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbre", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,286", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Carbury", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clane", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Claonadh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, 
"column_span": 1, "row_span": 1}, {"value": "32,023", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Clane village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Connell or Great Connell", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Connail", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "34,785", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after [Old] Connell, a holy site and ford near Newbridge.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ikeathy and Oughterany", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Ch\\u00e9ithigh agus Uachtar Fhine", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1608", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,753", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The baronies of Ikeathy and Oughterany were united some time between 1558 and 1608. \\"Okeathy Ocerny\\" in 1593.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilcullen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Chuillinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "8,492", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilcullen town. A half-barony in the Down Survey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilkea and Moone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Ch\\u00e1 agus Maoin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "46,286", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the villages of Kilkea and Moone.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Naas North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An N\\u00e1s Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,579", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Naas town. \\"Naas Upper\\" in 1593.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Naas South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An N\\u00e1s Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,478", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Naas town. 
\\"Naas Nether\\" in 1593.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Narragh and Reban East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Fhorrach agus an R\\u00e9ab\\u00e1n Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,374", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Narragh and Rheban Castle. Namesake of the hereditary Barony of Norragh.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Narragh and Reban West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Fhorrach agus an R\\u00e9ab\\u00e1n Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "22,136", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "(See Narragh and Reban East)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Offaly East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Fhail\\u00ed Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,029", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after U\\u00ed Failghe; also the name of County Offaly to the west. Barony of Offaly existed in 1593.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Offaly West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Fhail\\u00ed Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,603", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "(see Offaly West)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "North Salt", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An L\\u00e9im Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,930", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Salt\\" derived from Saltus Salmonis, the Latin name for Leixlip. 
Barony of Salt existed by 1593.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kildare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "South Salt", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An L\\u00e9im Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "16,655", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "(See North Salt)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Callan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Callainn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "5,653", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Callan town; \\"Callen Liberties\\" in Down Survey. The 1836 Act \\"for removing doubts\\" explicitly states the town and liberties \\"shall be deemed and taken to be a barony\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Crannagh or Crannach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Crannach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "58,675", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"abounding in trees.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fassadinin or Fassadining", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "F\\u00e1sach an Deighn\\u00edn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "68,174", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"wilderness by the River Dinan.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Galmoy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gabhalmhaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,236", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"plain of the River Goul.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gowran", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gabhr\\u00e1n", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "111,706", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Gowran village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ida, or \\"Ida, Igrinn and Iberchon\\"", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Dhe\\u00e1", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": 
"60,132", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Wexford. A tribal name: the U\\u00ed Dheaghaidh, descendants of Deagaid.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iverk", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eirc", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,528", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"descendents of Erc.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kells", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceanannas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,376", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kells, County Kilkenny.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilculliheen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Choilch\\u00edn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1848", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "2,139", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Originally a civil parish in the county of the city of Waterford, transferred to the county in 1840. Its status as a barony separate from Gaultier was not recognised by the census until 1871.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Chainnigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1610", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "921", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate: the County of the city of Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Knocktopher", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cnoc an T\\u00f3chair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "46,765", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Knocktopher village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Kilkenny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shillelogher", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol Fhaolchair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,684", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A tribal name, meaning \\"descendants of Faolchar\\", a name meaning \\"wolf-love.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballyadams", "is_header": false, "column_span": 1, "row_span": 1}, 
{"value": "Baile \\u00c1daim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "24,081", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballyadams Castle", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clandonagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Donnchadha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1846", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "43,733", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clarmallagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cl\\u00e1r Ma\\u00ed Locha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1846", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "43,533", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cullenagh or Cullinagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cuileannach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,094", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Cullenagh Mountains.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maryborough East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Port Laoise Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,160", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Portlaoise, formerly named Maryborough", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maryborough West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Port Laoise Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "41,914", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Portlaoise, formerly named Maryborough", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Portnahinch or Portnehinch", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Port na hInse", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,835", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Portnahinch, a landing-ground on the River Barrow.", "is_header": false, "column_span": 1, "row_span": 1}], 
[{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Slievemargy, Slewmergie, Slieuemargue, Slieuemargy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Sliabh Mairge", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,490", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Slievemargy hills. Now also partly in Carlow", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Stradbally", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Sr\\u00e1idbhaile", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,895", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Stradbally village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tinnahinch or Tinnehinch", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tigh na hInse", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "54,187", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Tinnahinch village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Laois", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Upper Woods or Upperwoods", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Choill Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1846", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,926", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Leitrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carrigallen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carraig \\u00c1lainn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "62,395", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Carrigallen", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Leitrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Drumahaire", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Droim Dh\\u00e1 Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "110,146", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Drumahaire. Considered part of Sligo in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Leitrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Leitrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Liatroim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,164", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Leitrim village. 
Considered part of Sligo in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Leitrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Mohill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maothail", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "62,904", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Mohill", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Leitrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rosclougher or Rossclogher", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ros Clochair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "81,601", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Rosclogher Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanwilliam", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Liam", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "55,627", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"clan of William de Burgh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Connello (or Conello) Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Conallaigh \\u00cdochtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,850", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Territory of the O\'Connells.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Connello (or Conello) Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Conallaigh Uachtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "61,256", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Territory of the O\'Connells.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coonagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Chuanach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,323", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"descendants of Cuana.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coshlea", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cois Sl\\u00e9ibhe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "95,232", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name literally means \\"foot of the mountain.\\"", "is_header": false, 
"column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coshma", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cois M\\u00e1ighe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,018", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"edge of the plain.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glenquin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann an Choim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "96,402", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Prior to 1841, part of Connello Upper.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kenry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Caonra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "26,222", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From the C\\u00e1enraige, an ancient tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilmallock or Kilmallock Liberties", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Mocheall\\u00f3g", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "4,074", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilmallock. 
Not enumerated in the 1821 census.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Limerick City", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathair Luimnigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "2,074", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate; includes the \\"[South] Liberties\\" of Down Survey", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "North Liberties of Limerick city", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na L\\u00edbearta\\u00ed Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1872", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "3,050", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly Liberties; the \\"North Liberties\\" were recorded separately from the \\"South Liberties\\" in the Down Survey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Owneybeg", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Uaithne Beag", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,211", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The territory of Uaithni encompassed Owneybeg and part of Owney and Arra", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Pubblebrien", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Pobal Bhriain", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,138", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Brian\'s people\\", referring to Brian Boru.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shanid", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Seanaid", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "84,075", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Prior to 1841, part of Connello Lower.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Limerick", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Smallcounty", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An D\\u00e9is Bheag", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,424", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The Irish name means \\"the little vassal tribe\\"; see Deisi.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Londonderry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coleraine", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "C\\u00fail Raithin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1591", 
"is_header": false, "column_span": 1, "row_span": 1}, {"value": "85,836", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Coleraine town, although the town itself is in the North East Liberties of Coleraine. A half-barony in 1807, including the south-west liberties of Coleraine.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Londonderry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Keenaght or Kenaught", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cianachta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1591 (as Limavady)", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "130,329", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Ciannachta tribe, descended from Tadc mac C\\u00e9in.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Londonderry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Loughinsholin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Loch Inse U\\u00ed Fhloinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "171,662", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"lough of O\'Lynn\'s island\\", referring to a lake containing a crann\\u00f3g.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Londonderry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "North East Liberties of Coleraine", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "L\\u00edbearta\\u00ed Thoir Thuaidh Ch\\u00fail Raithin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "18,005", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "formerly Liberties of Coleraine town.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Londonderry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "North-West Liberties of Londonderry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "L\\u00edbearta\\u00ed Thiar Thuaidh Dhoire", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "11,506", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "formerly Liberties of Londonderry city.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Londonderry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tirkeeran or Tyrkeeran", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Mhic Caoirthinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1591 (as Anagh)", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "94,014", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A half-barony in 1807, including the south-east liberties of Londonderry. 
Name means \\"land of the sons of Cartin.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ardagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ardach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1629", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,223", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ardagh village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Granard", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gr\\u00e1nard", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1629", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,857", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Granard village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Longfort", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1629", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,243", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Longford town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moydow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Dumha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1629", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "34,470", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Moydow village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathcline", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th Claon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1629", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,421", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Rathcline Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Longford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shrule or Abbeyshrule", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Sruthail", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1629", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,006", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Abbeyshrule", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ardee", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Fhirdhia", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "53,832", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ardee town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Drogheda", "is_header": 
false, "column_span": 1, "row_span": 1}, {"value": "Droichead \\u00c1tha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1412", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "4,497", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate. A barony separate from the county was formed in 1840 from the portion previously within the County of the town of Drogheda which was not within the town of Drogheda.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dundalk Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Dealgan \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,803", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dundalk town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dundalk Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Dealgan Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,750", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dundalk town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ferrard", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fir Arda", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,806", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From Fera Arda Ciannachta, \\"men of high Ciannachta.\\" Namesake of Viscount Massereene and Ferrard", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "L\\u00fa", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,704", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Louth village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Burrishoole", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Buir\\u00edos Umhaill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "145,172", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Burrishoole Castle; a few sources list Burrishoole split into \\"Burrishoole North\\" and \\"Burrishoole South\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceara", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "134,206", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Carra village. 
Called Burriscarra/Burisker in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanmorris", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Mhuiris", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,252", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Baron Clanmorris. Name means \\"Muiris\' family.\\" Called Croslwyhin/Crossboyne in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Costello or Clancostello", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coistealaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "143,874", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Roscommon. Named after the Hiberno-Norman MacOisdealbhaigh (Costello) family. Called Beallahaunes/Ballyhaunis in 1574", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Erris", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iorras", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "230,452", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Erris village. A half-barony in the Gilbert Manuscript of the Down Survey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gallen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gaileanga", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "119,153", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Gailenga tribe. 
Beallalahane in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilmaine", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Mhe\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "95,284", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilmaine village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Murrisk", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muraisc", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "137,061", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Murrisk village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tirawley or Tyrawley", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Amhlaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "246,822", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Amlaid\'s land\\", referring to Amalgaid mac Fiachrae. \\"Many\\"/Moyne in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00e9ise \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "20,013", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece barony present by 1542. Named after the D\\u00e9isi Becc.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00e9ise Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,763", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece barony present by 1542. Named after the D\\u00e9isi Becc.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Duleek Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Damhliag \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,772", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Duleek village. Now also partly in Louth. 
Duleek barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Duleek Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Damhliag Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,463", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Duleek village. Duleek barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunboyne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan B\\u00fainne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "16,781", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dunboyne town.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fore or Demifore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Fhobhair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "42,388", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Fore, County Westmeath since 1542. Named after Fore Abbey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kells Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceanannas \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,171", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kells town. Kells barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kells Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceanannas Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,552", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kells town. 
Kells barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lune", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lu\\u00edne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,326", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Luighne tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Morgallion", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Machaire Gaileang", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,492", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"plain of the Gailenga\\", a medieval tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath (or Moyfenragh) Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Fionnr\\u00e1ithe \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,313", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath (or Moyfenragh) Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Fionnr\\u00e1ithe Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,696", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Navan Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Uaimh \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,835", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Navan town. Navan barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Navan Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Uaimh Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "17,651", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Navan town. 
Navan barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ratoath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th T\\u00f3", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,697", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ratoath village.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Skreen or Skryne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Scr\\u00edn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,891", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Skryne village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Slane Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Shl\\u00e1ine \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1791", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "26,224", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Slane village. Slane barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Slane Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Shl\\u00e1ine Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1791", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,211", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Slane village. 
Slane barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cremorne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cr\\u00edoch Mh\\u00farn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "84,508", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From Irish meaning \\"border of the Mugdorna.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dartree or Dartry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dartra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,610", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name from the ancient kingdom of Dartraighe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Farney", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fearnaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "67,333", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named from the ancient kingdom of Fernmag, \\"plain of alders.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muineach\\u00e1n", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,735", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Monaghan town.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Trough", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Tri\\u00facha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,376", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From the Irish tr\\u00edcha c\\u00e9t, a unit of territory in Medieval Ireland.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballyboy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Bu\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "32,398", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballyboy village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballybritt", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Bhriotaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "52,378", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballybritt Castle.", "is_header": false, 
"column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballycowen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Mhic Comhainn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,610", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballycowan Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clonlisk", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cluain Leisc", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,052", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Clonlisk Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coolestown", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Ch\\u00fala\\u00edgh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,866", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Coolestown, the former name of Edenderry.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Eglish or Fercale", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Eaglais", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,697", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"church,\\" while Fercale means \\"men of the churches.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Garrycastle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Garra\\u00ed an Chaisle\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "102,841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Garrycastle", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Geashill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "G\\u00e9isill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,864", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Geashill village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilcoursey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Chuairs\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "19,274", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilcoursey Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, 
"column_span": 1, "row_span": 1}, {"value": "Philipstown Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Daingean \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,669", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Philipstown, now renamed Daingean", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Philipstown Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Daingean Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,087", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Philipstown, now renamed Daingean", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Warrenstown", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Bhair\\u00ednigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,456", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballybrittain (Warrenstown) Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Athlone North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Luain Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,863", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Athlone town. North and South not separated in 1871 census. The original Athlone barony existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Athlone South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Luain Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "79,659", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Athlone town. North and South not separated in 1871 census.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballintober North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Tobair Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,853", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballintober town (now in Castlereagh barony.) 
The original Ballintober barony existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballintober South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Tobair Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,113", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballintober town (now in Castlereagh barony.) The original Ballintober barony existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballymoe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9al \\u00c1tha M\\u00f3", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "23,287", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Ballymoe, County Galway. Named after Ballymoe village, on the County Galway side of the River Suck.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Boyle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Mainistir na B\\u00faille", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "81,163", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Boyle town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Castlereagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Caisle\\u00e1n Riabhach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "82,081", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Castlerea town. Previously one of three sections of Ballintober barony.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Frenchpark", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Gar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "71,203", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Frenchpark village; previously part of the barony of Boyle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moycarn or Moycarnon or Moycarne or Moycarnan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Charn\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,595", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Galway. 
A half-barony in 1807.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ros Com\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "81,584", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Roscommon town, which is in Ballintober South", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbury", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbre", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "73,685", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided into Upper and Lower baronies before 1841. Named after the ancient t\\u00faath of the Cairbre Drom Cliabh.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coolavin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "C\\u00fail \\u00d3 bhFinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,473", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"corner of the descendants of Finn.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corran", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Corann", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "45,376", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Corann village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Leyny or Leney", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lu\\u00edne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "121,233", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Luighne Connacht tribe", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tireragh or Tyreragh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Fhiachrach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "106,598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Mayo. 
Name means \\"land of the U\\u00ed Fiachrach.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tirerril or Tyraghrill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Oirill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "75,812", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Olliol\'s land\\", referring to Ailill mac Echach Mugmed\\u00f3in.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanwilliam", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Liam", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "115,755", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"clan of William de Burgh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Eliogarty", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\u00c9ile U\\u00ed Fh\\u00f3garta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "90,257", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A half-barony (with Ikerrin) in the Down Survey. Name means \\"\\u00c9ile of the U\\u00ed Fhogartaigh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iffa and Offa East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,819", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iffa and Offa West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "117,175", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ikerrin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Chair\\u00edn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,805", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A half-barony (with Eliogarty) in the Down Survey. 
Name means \\"descendents of Cair\\u00edn.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilnamanagh Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill na Manach \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1838", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "42,041", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilnamanagh town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilnamanagh Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill na Manach Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1838", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,990", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilnamanagh town.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Middle Third", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Trian Me\\u00e1nach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "113,544", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From trian meaning \\"third\\" or \\"portion.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ormond Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Urumhain \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "127,222", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Compare Ormond (\\"east Munster\\")", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ormond Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Urumhain Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "79,471", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Compare Ormond (\\"east Munster\\")", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Owney and Arra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Uaithne agus Ara", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United 1672\\u20131792", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "85,494", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Owney Mulrian\\" and Arra were separate baronies in the Down Survey, named respectively after the ancient kingdom of Uaithni and the River Ara.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Slievardagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Sliabh Ardach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", 
"is_header": false, "column_span": 1, "row_span": 1}, {"value": "90,772", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Slevardagh & Compsy\\" in the Down Survey. The name means \\"high mountain of the Eoganachta.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clogher", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clochar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "97,569", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Clogher town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dungannon Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Geanainn \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Dungannon by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "42,794", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dungannon town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dungannon Middle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Geanainn L\\u00e1ir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Dungannon by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "87,541", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dungannon town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dungannon Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Geanainn Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Dungannon by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "85,995", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dungannon town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Omagh East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An \\u00d3maigh Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1807\\u201321; Omagh by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "132,149", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Omagh town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Omagh West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An \\u00d3maigh Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1807\\u201321; Omagh by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "93,321", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Omagh town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Strabane Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Srath 
B\\u00e1n \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Strabane by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "117,419", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Strabane town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Strabane Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Srath B\\u00e1n Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Strabane by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "121,282", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Strabane town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coshmore and Coshbride", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cois Abha M\\u00f3ire agus Cois Bhr\\u00edde", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1831", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "88,253", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baronies of Coshmore and Coshbride were separate in the 1821 census. The names mean, respectively, \\"Bank of the Munster Blackwater\\" and \\"Bank of the River Bride.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies-within-Drum", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na D\\u00e9ise laistigh den Drom", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies divided by 1746", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,325", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies south of the Drum Hills.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies-without-Drum", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na D\\u00e9ise lasmuigh den Drom", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies divided by 1746", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "129,894", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies north of the Drum Hills. \\"Without\\" is used with the meaning of \\"beyond\\" or \\"outside.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gaultier or Gaultiere", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Ghaillt\\u00edr", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,447", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilculliheen was formerly a parish of this barony. 
Name means \\"land of foreigners,\\" referring to Vikings.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glenahiry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann na hUidhre", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,940", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"valley of the Nier\\", referring to the Nier River.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Middle Third or Middlethird", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Trian Me\\u00e1nach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From trian meaning \\"third\\" or \\"portion.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Upperthird or Upper Third", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Uachtar T\\u00edre", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,846", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name originally meant \\"Upper country\\"; probably acquired \\"third\\" in name by analogy with Middle Third.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Waterford City", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathair Phort L\\u00e1irge", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "532", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Brawny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bre\\u00e1mhaine", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "10,070", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The ancient territory of Bregmaine.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clonlonan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cluain Lon\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "32,095", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Lon\\u00e1n\'s meadow.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corkaree", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corca Raoi", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": 
"23,787", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A tribal name, \\"descendants of Raoi.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Delvin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dealbhna", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,062", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Delvin village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Farbill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fir Bhile", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,453", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A tribal name: \\"men of the sacred tree.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fartullagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fir Thulach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,512", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Previously Tyrrells country. Name means \\"men of the hillock\\", a tribal name.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fore or Demifore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Fhobhair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,056", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Fore, County Meath. Named after Fore Abbey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilkenny West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Chainnigh Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,169", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Previously Maherquirke, Dillons country", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyashel and Magheradernon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Asail agus Machaire \\u00d3 dTiarn\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,565", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyashel and Magheradernon listed separately in 1542. 
They formed the ancient territories of Mag nAssail (Assail\'s plain) and the plain of the O\'Tiernans.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moycashel", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Chaisil", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,097", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Originally the Barony of Rossaughe; before that, Delamares country. Name means \\"plain of the stone ringfort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moygoish", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Mhac gCuais", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,483", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A tribal name: \\"Descendants of the Son of Cuas.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathconrath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th Conarta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,415", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Rathconrath village; previously Daltons country", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Bealach Caoin Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen created 1606; Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "45,413", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen means \\"way of sorrow.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Bealach Caoin Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen created 1606; Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,986", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen means \\"way of sorrow.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bantry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Beanntra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "101,598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Bendtraigi Laigen, the former ruling people.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bargy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Bhairrche", "is_header": false, 
"column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,002", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ruling U\\u00ed Bairrche family, who claimed descent from D\\u00e1ire Barrach.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Forth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fotharta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,384", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A Fortuatha was a kingdom not ruled directly by members of the dominant dynasty of a province. This area was ruled by Fothairt in Chairn.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gorey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Guaire", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "81,913", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Gorey town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Scarawalsh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Scairbh Bhailis", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "106,650", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"rocky ford of light.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shelburne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol Bhroin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "51,103", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the tribe, S\\u00edl Broin, \\"offspring of Broin.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shelmaliere East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol Maolu\\u00edr Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "16,363", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shelmaliere West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol Maolu\\u00edr Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "50,299", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\"", "is_header": false, "column_span": 1, "row_span": 1}], 
[{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Arklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An tInbhear M\\u00f3r", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "66,980", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Arklow town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballinacor North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile na Corra Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1832\\u20135", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "74,109", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United barony of Talbotstown created in 1606, and divided into half-baronies for civil law purposes in 1798. Named after Ballinacor Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballinacor South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile na Corra Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1832\\u20135", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "78,316", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "(See Ballinacor North)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Newcastle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Caisle\\u00e1n Nua", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "51,938", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the village of Newcastle, County Wicklow. Not related to County Dublin barony of the same name.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathdown", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th an D\\u00fain", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "33,462", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Rathdown, County Dublin. Named after Rathdown Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shillelagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol \\u00c9alaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,348", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Shillelagh village. 
A half-barony in 1807.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Talbotstown Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Talb\\u00f3idigh \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1801", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "86,857", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Talbotstown village. United barony of Talbotstown created in 1606.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Talbotstown Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Talb\\u00f3idigh Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1801", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "62,510", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "(See Talbotstown Lower)", "is_header": false, "column_span": 1, "row_span": 1}]], "table_webpage_url": "http://en.wikipedia.org/wiki/List_of_baronies_of_Ireland", "table_page_title": "List of baronies of Ireland", "table_section_title": "Final list", "table_section_text": "The final catalogue of baronies numbered 331, with an average area of 255 km\\u00b2 (98 sq mi; 63,000 acres); therefore, each county was divided, on average, into 10 or 11 baronies.", "highlighted_cells": [[315, 0], [315, 1]], "example_id": -7506134436007289724, "sentence_annotations": [{"original_sentence": "Rathconrath is also one of the baronies in Co. Westmeath, see list of baronies of Ireland.", "sentence_after_deletion": "Rathconrath is one of the baronies in Co. Westmeath of baronies of Ireland.", "sentence_after_ambiguity": "Rathconrath is one of the baronies in Co. 
Westmeath of baronies of Ireland.", "final_sentence": "Rathconrath is one of the baronies of the county of Westmeath in Ireland."}], "table_list": [["County", "Name", "Irish name", "Date", "Area (acres, 1872)", "Notes"], ["Antrim", "Antrim Lower", "Aontroim \\u00cdochtarach", "Divided 1792\\u20131798", "80,826", "Named after Antrim town"], ["Antrim", "Antrim Upper", "Aontroim Uachtarach", "Divided 1792\\u20131798", "36,489", "Named after Antrim town"], ["Antrim", "Belfast Lower", "B\\u00e9al Feirste \\u00cdochtarach", "Divided 1792\\u20131798", "56,142", "Named after Belfast town (now city)"], ["Antrim", "Belfast Upper", "B\\u00e9al Feirste Uachtarach", "Divided 1792\\u20131798", "32,942", "Named after Belfast town (now city)"], ["Antrim", "Carrickfergus", "Carraig Fhearghais", "By 1325", "16,702", "Formerly a county corporate: the County of the Town of Carrickfergus"], ["Antrim", "Cary or Carey", "Cathra\\u00ed", "By 1672", "75,035", "Named after the Cothrugu (Cotraigib, Crotraigib), an ancient tribe."], ["Antrim", "Dunluce Lower", "D\\u00fan Libhse \\u00cdochtarach", "Divided 1792\\u20131798", "30,575", "See also Dunluce Castle."], ["Antrim", "Dunluce Upper", "D\\u00fan Libhse Uachtarach", "Divided 1792\\u20131798", "52,788", "See also Dunluce Castle."], ["Antrim", "Glenarm Lower", "Gleann Arma \\u00cdochtarach", "Divided 1792\\u20131798", "64,945", "Named after Glenarm village"], ["Antrim", "Glenarm Upper", "Gleann Arma Uachtarach", "Divided 1792\\u20131798", "24,032", "Named after Glenarm village"], ["Antrim", "Kilconway", "Coill Chonmha\\u00ed", "By 1672", "68,640", "Name means \\"forest of the Conmha\\u00edcne\\"."], ["Antrim", "Massereene Lower", "M\\u00e1sa R\\u00edona \\u00cdochtarach", "Divided 1792\\u20131798", "27,228", "Namesake of Viscount Massereene. The name means \\"Queen\'s hill\\" and originally belonged to a monastery."], ["Antrim", "Massereene Upper", "M\\u00e1sa R\\u00edona Uachtarach", "Divided 1792\\u20131798", "56,675", "Namesake of Viscount Massereene. 
The name means \\"Queen\'s hill\\" and originally belonged to a monastery."], ["Antrim", "Toome Lower", "Tuaim \\u00cdochtarach", "Divided 1792\\u20131798", "36,135", "Named after Toome village"], ["Antrim", "Toome Upper", "Tuaim Uachtarach", "Divided 1792\\u20131798", "47,571", "Named after Toome village"], ["Armagh", "Armagh", "Ard Mhacha", "By 1609", "47,645", "Named after Armagh town (now city)"], ["Armagh", "Fews Lower", "Na Fe\\u00e1 \\u00cdochtaracha", "Divided by 1745; Fews by 1609", "29,757", "From Irish Na Feadha, \\"The lengths\\""], ["Armagh", "Fews Upper", "Na Fe\\u00e1 Uachtaracha", "Divided by 1745; Fews by 1609", "47,433", "From Irish Na Feadha, \\"The lengths\\""], ["Armagh", "Oneilland East", "U\\u00ed Niall\\u00e1in Thoir", "Divided 1792\\u20131807; Oneilland by 1609", "20,890", "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills."], ["Armagh", "Oneilland West", "U\\u00ed Niall\\u00e1in Thiar", "Divided 1792\\u20131807; Oneilland by 1609", "57,584", "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills."], ["Armagh", "Orior Lower", "Na hOirthir \\u00cdochtaracha", "Divided 1792\\u20131807; Orior by 1609", "31,927", "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla."], ["Armagh", "Orior Upper", "Na hOirthir Uachtaracha", "Divided 1792\\u20131807; Orior by 1609", "49,086", "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla."], ["Armagh", "Tiranny or Turaney", "Tuath Threana", "By 1609", "27,397", "Named after the U\\u00ed Threna tribe."], ["Carlow", "Carlow", "Ceatharlach", "By 1672", "31,353", "Named after Carlow town"], ["Carlow", "Forth", "Fotharta", "By 1672", "39,510", "Named from the Irish Fothairt Mag Fe\\u00e1, \\"fothairt of the beech plain.\\" A fothairt was a kingdom not ruled by a branch of the provincial ruling family."], ["Carlow", "Idrone East", "U\\u00ed Dhr\\u00f3na Thoir", "Divided in 1799", "52,857", "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na."], ["Carlow", "Idrone West", "U\\u00ed Dhr\\u00f3na Thiar", "Divided in 1799", "23,066", "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na."], ["Carlow", "Rathvilly", "R\\u00e1th Bhile", "By 1672", "44,806", "Named after Rathvilly village"], ["Carlow", "St. Mullin\'s Lower", "Tigh Moling \\u00cdochtarach", "Divided by 1841", "21,914", "Named after St Mullin\'s village. Does not border St. Mullin\'s Upper."], ["Carlow", "St. Mullin\'s Upper", "Tigh Moling Uachtarach", "Divided by 1841", "7,784", "Named after St. 
Mullin\'s village; the land was a detached fragment of the original St."], ["Cavan", "Castlerahan", "Caisle\\u00e1n Raithin", "By 1609", "69,279", "Named after Castlerahan parish."], ["Cavan", "Clankee", "Clann Chaoich", "By 1609", "64,377", "The name means \\"Caoch\'s clan\\"; Caoch (meaning \\"blind\\" or \\"squint\\") was the nickname of Niall mac Cathal na Beith\\u00ed mac Annadh \\u00d3 Raghallaigh (died 1296)."], ["Cavan", "Clanmahon", "Clann Mhath\\u00fana", "By 1609", "51,170", "The name means \\"Math\\u00fain\'s clan.\\""], ["Cavan", "Loughtee Lower", "Lucht T\\u00ed \\u00cdochtarach", "Divided by 1821; Loughtee by 1609", "28,240", "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\""], ["Cavan", "Loughtee Upper", "Lucht T\\u00ed Uachtarach", "Divided by 1821; Loughtee by 1609", "63,842", "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\""], ["Cavan", "Tullygarvey", "Teallach Ghairbh\\u00edth", "By 1609", "59,871", "The name means \\"tribe of Gairbh\\u00e9ith\\"."], ["Cavan", "Tullyhaw", "Teallach Eathach", "By 1609", "89,852", "The name means \\"Eochaid\'s tribe\\", referring to a king of c. AD 700."], ["Cavan", "Tullyhunco or Tulloghonoho", "Teallach Dh\\u00fanchadha", "By 1609", "39,624", "The name means \\"D\\u00fanchadh\'s tribe.\\""], ["Clare", "Bunratty Lower", "Bun Raite \\u00cdochtarach", "Divided by 1841", "57,314", "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574."], ["Clare", "Bunratty Upper", "Bun Raite Uachtarach", "Divided by 1841", "53,595", "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574."], ["Clare", "Burren", "Boirinn", "By 1574", "74,360", "The barony is called \\"Burren\\"; the region is now usually \\"The Burren\\", a name meaning \\"great rock.\\" Formerly aka Gragans."], ["Clare", "Clonderalaw", "Cluain idir Dh\\u00e1 L\\u00e1", "By 1574", "75,878", "Named after Clonderalaw Castle. Formerly aka East Corkewasken."], ["Clare", "Corcomroe", "Corca Mrua", "By 1574", "61,385", "Named after the Corco Modhruadh, formerly the ruling dynasty in the area. Formerly aka Dowaghy connoghor/Tuoghmore y Conour."], ["Clare", "Ibrickan or Ibrickane", "U\\u00ed Bhreac\\u00e1in", "By 1672", "56,696", "Named after the U\\u00ed Bhreac\\u00e1in, formerly the ruling dynasty in the area"], ["Clare", "Inchiquin", "Inse U\\u00ed Chuinn", "By 1672", "88,387", "Name is Irish for \\"Quinn\'s water meadow.\\" Namesake of Baron Inchiquin"], ["Clare", "Islands", "Na hOile\\u00e1in", "By 1574", "63,592", "Name refers to the islands of the Fergus estuary. Formerly aka Cloynerawde/Clonraude"], ["Clare", "Moyarta", "Maigh Fhearta", "By 1574", "68,679", "Name from Irish Mag Fearta, \\"plain of graves\\". Formerly aka West Corkewasken."], ["Clare", "Tulla Lower", "An Tulach \\u00cdochtarach", "Divided by 1841", "73,454", "Named after Tulla town. Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574"], ["Clare", "Tulla Upper", "An Tulach Uachtarach", "Divided by 1841", "94,919", "Named after Tulla town. Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574"], ["Cork", "Bantry", "Beanntra\\u00ed", "By 1672", "59,216", "Named after Bantry town"], ["Cork", "Barretts", "Bar\\u00f3idigh", "By 1672", "31,761", "Named after the Barrett family."], ["Cork", "Barrymore", "Barraigh Mh\\u00f3ra", "By 1672", "148,143", "Namesake of the Earl of Barrymore. Name means \\"Great Barrys.\\""], ["Cork", "Bear", "B\\u00e9arra", "By 1672", "89,986", "Namesake of the Beara Peninsula. 
It is said to be named after a princess named B\\u00e9irre, or possibly settlers from Iberia."], ["Cork", "Carbery East, East Division", "Cairbrigh Thoir, an Roinn Thoir", "Divided by 1821", "67,235", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Carbery East, West Division", "Cairbrigh Thoir, an Roinn Thiar", "Divided by 1821", "105,141", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Carbery West, East Division", "Cairbrigh Thiar, an Roinn Thoir", "Divided by 1821", "79,263", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Carbery West, West Division", "Cairbrigh Thiar, an Roinn Thiar", "Divided by 1821", "109,178", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Condons and Clangibbon", "Cond\\u00fanaigh agus Clann Ghiob\\u00fain", "By 1672", "78,481", "The territories of two families: the Condons or Cauntons, and the FitzGibbons or White Knight"], ["Cork", "Cork City", "Cathair Chorca\\u00ed", "1608", "2,265", "Formerly a county corporate, originally including the Liberties which later formed the separate Barony of Cork. It contains 7 civil parishes."], ["Cork", "Cork", "Corcaigh", "By 1841", "43,813", "Formed from the \\"Liberties of Cork\\", the portion previously within the County of the city of Cork which was not within the borough of Cork."], ["Cork", "Courceys", "C\\u00farsaigh", "By 1672", "8,812", "Named after the de Courcy barons."], ["Cork", "Duhallow", "D\\u00faiche Ealla", "By 1672", "232,328", "Name means \\"land of the Munster Blackwater\\"."], ["Cork", "Fermoy", "Mainistir Fhear Ma\\u00ed", "By 1672", "121,188", "Namesake of Fermoy town, which is actually in Condons and Clangibbon"], ["Cork", "Ibane and Barryroe", "U\\u00ed Bhamhna agus Barraigh Rua", "United by 1711", "35,291", "Ibane and Barryroe are peninsulas on opposite sides of Clonakilty Bay The names mean, respectively, \\"Descendants of Bamna\\" and \\"Red-haired Barrys.\\""], ["Cork", "Imokilly", "U\\u00ed Mhic Coille", "By 1672", "93,617", "Named after the U\\u00ed Meic Caille, a sept of the U\\u00ed Liath\\u00e1in."], ["Cork", "Kerrycurrihy", "Ciarra\\u00ed Cuirche", "Divided by 1821", "23,957", "Kerrycurrihy and Kinalea united in Down Survey. A tribal name: the Ciarraige Cuirchi."], ["Cork", "Kinalea", "Cine\\u00e1l Aodha", "Divided by 1821", "50,692", "Kerrycurrihy and Kinalea united in Down Survey. The \\"tribe of A\\u00e9d.\\""], ["Cork", "Kinalmeaky", "Cine\\u00e1l mB\\u00e9ice", "By 1672", "36,068", "Named after the Cen\\u00e9l mBeice \\"Beice\'s people\\", a sept of the O\'Mahonys."], ["Cork", "Kinnatalloon", "Coill na Tal\\u00fan", "By 1672", "27,718", "The name means \\"Tolamhnach\'s forest,\\" referring to a 7th-century chief of the U\\u00ed Liath\\u00e1in."], ["Cork", "Kinsale", "Cionn tS\\u00e1ile", "By 1672", "12,430", "Named after Kinsale town"], ["Cork", "Muskerry East", "M\\u00fascra\\u00ed Thoir", "Divided by 1821", "122,874", "Namesake of Baron Muskerry. The only barony split between the East and West Ridings of County Cork."], ["Cork", "Muskerry West", "M\\u00fascra\\u00ed Thiar", "Divided by 1821", "188,487", "Namesake of Baron Muskerry. Named after the ancient tribe of the M\\u00fascraige."], ["Cork", "Orrery and Kilmore", "Orbhra\\u00ed agus An Choill Mh\\u00f3r", "United by 1821", "69,346", "Namesake of Earl of Orrery. 
Named after the Orbhraighe tribe, while Kilmore means \\"great forest.\\""], ["Donegal", "Banagh", "B\\u00e1inigh", "Divided in 1791", "177,288", "Territory of the Cinel Boghaine, descended from Niall of the Nine Hostages. Combined with Boylagh till 1791"], ["Donegal", "Boylagh", "Baollaigh", "Divided in 1791", "156,245", "Territory of the O\'Boyles. Combined with Banagh till 1791"], ["Donegal", "Inishowen (or Innishowen) East", "Inis Eoghain Thoir", "Divided by 1851", "123,356", "Name means \\"Eoghan\'s peninsula.\\""], ["Donegal", "Inishowen (or Innishowen) West", "Inis Eoghain Thiar", "Divided by 1851", "76,828", "Name means \\"Eoghan\'s peninsula.\\""], ["Donegal", "Kilmacrenan", "Cill Mhic R\\u00e9an\\u00e1in", "By 1672", "310,325", "Named after Kilmacrenan village"], ["Donegal", "Raphoe North", "R\\u00e1th Bhoth Thuaidh", "Divided 1807\\u20131821", "80,610", "Named after Raphoe town"], ["Donegal", "Raphoe South", "R\\u00e1th Bhoth Theas", "Divided 1807\\u20131821", "140,841", "Named after Raphoe town"], ["Donegal", "Tirhugh", "T\\u00edr Aodha", "By 1672", "125,828", "Name means \\"Aodh\'s country.\\""], ["Down", "Ards (or Ardes) Lower", "An Aird \\u00cdochtarach", "Divided by 1851", "38,462", "Namesake of the Ards Peninsula. Aird is Irish for \\"promontory.\\""], ["Down", "Ards (or Ardes) Upper", "An Aird Uachtarach", "Divided by 1851", "29,697", "Namesake of the Ards Peninsula. Aird is Irish for \\"promontory.\\""], ["Down", "Castlereagh Lower", "An Caisle\\u00e1n Riabhach \\u00cdochtarach", "Divided by 1841", "51,452", "Named after Castlereagh townland. Gives its name to the borough of Castlereagh."], ["Down", "Castlereagh Upper", "An Caisle\\u00e1n Riabhach Uachtarach", "Divided by 1841", "53,856", "Named after Castlereagh townland. Gives its name to the borough of Castlereagh."], ["Down", "Dufferin", "An Duifrian", "By 1672", "17,208", "Name from the Irish duibhthrian (black third)."], ["Down", "Iveagh Lower, Lower Half", "U\\u00edbh Eachach \\u00cdochtarach, An Leath \\u00cdochtair", "Divided by 1851", "46,057", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Iveagh Lower, Upper Half", "U\\u00edbh Eachach \\u00cdochtarach, An Leath Uachtair", "Divided by 1851", "47,538", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Iveagh Upper, Lower Half", "U\\u00edbh Eachach Uachtarach, An Leath \\u00cdochtair", "Divided by 1851", "96,317", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Iveagh Upper, Upper Half", "U\\u00edbh Eachach Uachtarach, An Leath Uachtair", "Divided by 1851", "63,249", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Kinelarty", "Cine\\u00e1l Fh\\u00e1rtaigh", "By 1672", "40,322", "Name means \\"Faghartach\'s kindred.\\""], ["Down", "Lecale Lower", "Leath Cathail \\u00cdochtarach", "Divided by 1851", "30,920", "Namesake of the Lecale peninsula. The name means \\"Cathal\'s half.\\""], ["Down", "Lecale Upper", "Leath Cathail Uachtarach", "Divided by 1851", "30,521", "Namesake of the Lecale peninsula. The name means \\"Cathal\'s half.\\""], ["Down", "Lordship of Newry", "An tI\\u00far", "By 1672", "15,813", "The historic Lordship encompassed lands on both sides of the Down-Armagh border. 
Later, the jurisdiction of the \\"Lordship of Newry\\" for baronial presentment sessions extended only to County Down."], ["Down", "Mourne", "M\\u00farna", "By 1672", "47,822", "Named after the Mourne Mountains. A half-barony in the Down Survey."], ["Dublin", "Balrothery East", "Baile an Ridire Thoir", "Divided 1842", "30,005", "Named after Balrothery village. Balrothery existed by 1593."], ["Dublin", "Balrothery West", "Baile an Ridire Thiar", "Divided 1842", "25,195", "Named after Balrothery village. Balrothery existed by 1593."], ["Dublin", "Castleknock", "Caisle\\u00e1n Cnucha", "By 1593", "21,371", "Named after Castleknock village (now suburban); from 1861, reduced in size by the expanded borders of Dublin city"], ["Dublin", "Coolock", "An Ch\\u00fal\\u00f3g", "By 1593", "26,614", "Named after the historical village of Coolock, now suburban; from 1861, reduced in size by the expanded borders of Dublin city"], ["Dublin", "Dublin", "Baile \\u00c1tha Cliath", "1840", "1,693", "Created by the 1840 Acts from land previously liberties in the county of the City. Its name and area were confirmed by the Dublin Baronies Act 1842."], ["Dublin", "Dublin City", "Cathair Bhaile \\u00c1tha Cliath", "1548", "2,114", "Formerly a county corporate"], ["Dublin", "Nethercross", "An Chrois \\u00cdochtarach", "By 1672", "21,818", "Named after a cross erected by Saint Cainnech in Finglas. Compare Uppercross."], ["Dublin", "Newcastle", "An Caisle\\u00e1n Nua", "By 1593", "22,876", "Named after the village of Newcastle, County Dublin. Not related to the Wicklow barony of Newcastle."], ["Dublin", "Rathdown", "R\\u00e1th an D\\u00fain", "By 1593", "29,974", "A half-barony from 1606, with the Wicklow half-barony of Rathdown separated out. From 1861, reduced in size by the expanded borders of Dublin city."], ["Dublin", "Uppercross", "An Chrois Uachtarach", "1792\\u20131821", "37,307", "Compare Nethercross. In the Down Survey, Uppercross and Newcastle were not distinguished."], ["Fermanagh", "Clanawley or Glenawley", "Clann Amhlaoibh", "By 1603", "72,894", "\\"Awley\\" is from Mac Amhlaoibh and Mac Amhalghaidh (Irish septs)"], ["Fermanagh", "Clankelly or Clonkelly", "Clann Cheallaigh", "By 1603", "39,067", "Clan of the Kellys"], ["Fermanagh", "Coole", "An Ch\\u00fail", "By 1603", "17,320", "A half-barony in the Down Survey. Name means \\"corner.\\""], ["Fermanagh", "Knockninny", "Cnoc Ninnidh", "By 1603", "27,732", "Named after the hill of Saint Ninnidh"], ["Fermanagh", "Lurg", "Lorg", "By 1603", "66,163", "Named after the Tuath Luirg (Fir Luirg; \\"tribe/men of the path\\")."], ["Fermanagh", "Magheraboy", "An Machaire Bu\\u00ed", "By 1603", "79,038", "Name means \\"yellow plain\\""], ["Fermanagh", "Magherastephana", "An Machaire Steaf\\u00e1nach", "By 1603", "58,979", "Name origin unclear; \\"plain of the FitzStephens?\\""], ["Fermanagh", "Tirkennedy", "T\\u00edr Cheannada", "By 1603", "56,267", "Named after Fergus son of Cremthann, nicknamed Cennfhota (\\"long head\\"). No relation to the surname Kennedy."], ["Galway", "Aran or Arran", "\\u00c1rainn", "By 1574", "11,287", "Conterminous with the Aran Islands; Inishmore (\\u00c1rainn Mh\\u00f3r) is named for its shape (ara = kidney)"], ["Galway", "Athenry", "Baile \\u00c1tha an R\\u00ed", "By 1672", "25,782", "Named after Athenry town; called \\"Halfe Barony and liberties of Athenrey\\" in the Down Survey."], ["Galway", "Ballymoe", "B\\u00e9al \\u00c1tha M\\u00f3", "By 1672", "89,270", "Named after Ballymoe village; Half with Ballymoe, County Roscommon. 
Full barony existed in Galway by 1574."], ["Galway", "Ballynahinch", "Baile na hInse", "By 1574", "189,813", "Named after Ballynahinch town; \\"Ballenanen\\" in Down Survey (or Hibernia Delinateo)"], ["Galway", "Clare", "Baile Chl\\u00e1ir", "By 1574", "127,486", "Namesake of the River Clare and village of Claregalway. The name means \\"[river of the] plain.\\""], ["Galway", "Clonmacnowen or Clonmacnoon", "Cluain Mhac nEoghain", "By 1672", "35,467", "\\"Clanemtoneen\\" in Down Survey (or Hibernia Delinateo). Name means \\"Valley of the sons of Eoghan.\\""], ["Galway", "Dunkellin", "D\\u00fan Coill\\u00edn", "By 1574", "83,371", "Name means \\"Coill\\u00edn\'s hillfort\\""], ["Galway", "Dunmore", "D\\u00fan M\\u00f3r", "By 1574", "71,011", "Named after Dunmore village"], ["Galway", "Galway", "Gaillimh", "1610", "22,492", "Formerly a county corporate: the county of the Town (now city) of Galway"], ["Galway", "Kilconnell or Kilconnnel", "Cill Chonaill", "By 1574", "64,819", "Named after Kilconnell village"], ["Galway", "Killian", "Cill Liath\\u00e1in", "By 1574", "52,388", "Name means \\"Liath\\u00e1in\'s church\\""], ["Galway", "Kiltartan", "Cill Tartan", "By 1574", "65,664", "\\"Killcartar\\" in Down Survey (or Hibernia Delinateo). Was originally named after Saint Attracta\'s church. Kiltaraght in 1574."], ["Galway", "Leitrim", "Liatroim", "By 1574", "109,567", "Now also partly in Clare. Name means \\"grey ridge.\\""], ["Galway", "Longford", "An Longfort", "By 1574", "96,506", "Name means \\"ship landing-ground\\", referring to a longphort on a tributary of the River Shannon."], ["Galway", "Loughrea", "Baile Locha Riach", "By 1574", "64,406", "Named after Loughrea town; called \\"Half Barony of Lougheagh\\" in the Down Survey."], ["Galway", "Moycullen", "Maigh Cuilinn", "By 1574", "202,386", "Named after Moycullen village"], ["Galway", "Ross", "An Ros", "By 1574", "77,351", "In County Mayo in 1574; transferred to Galway within decades; since 1898 partly in Mayo. The name means \\"The promontory.\\""], ["Galway", "Tiaquin", "Tigh Dachoinne", "By 1574", "110,135", "Name means \\"House of double coign.\\""], ["Kerry", "Clanmaurice", "Clann Mhuiris", "By 1598", "120,520", "Name means \\"Maurice\'s clan\\", referring to Maurice FitzGerald, 1st Earl of Desmond."], ["Kerry", "Corkaguiny", "Corca Dhuibhne", "By 1598", "138,605", "Named after the ancient ruling tribe, the Corcu Duibne."], ["Kerry", "Dunkerron North", "D\\u00fan Ciar\\u00e1in Thuaidh", "Divided by 1851", "72,414", "Namesake of Dunkerron Castle. Name means \\"Ciar\\u00e1n\'s hillfort.\\""], ["Kerry", "Dunkerron South", "D\\u00fan Ciar\\u00e1in Theas", "Divided by 1851", "96,289", "Namesake of Dunkerron Castle. 
Name means \\"Ciar\\u00e1n\'s hillfort.\\""], ["Kerry", "Glanarought or Glanerought", "Gleann na Ruachta\\u00ed", "By 1598", "121,865", "Name means \\"Valley of the O\'Roughty.\\""], ["Kerry", "Iraghticonnor", "Oireacht U\\u00ed Chonch\\u00fair", "By 1598", "88,105", "Name means \\"Inheritance of the O\'Connors.\\""], ["Kerry", "Iveragh", "U\\u00edbh R\\u00e1thach", "By 1598", "159,980", "Name means \\"Descendants of R\\u00e1thach.\\" On the Kilcoolaght East ogham stone (CIIC 211), this name appears in the Primitive Irish form Rittaveccas."], ["Kerry", "Magunihy or Magonhy", "Maigh gCoinchinn", "By 1598", "166,427", "Name means \\"Coinchinn\'s plain\\"; a personal name meaning wolf-warrior.\\""], ["Kerry", "Trughanacmy or Trughenackmy", "Tri\\u00facha an Aicme", "By 1598", "194,593", "Name means \\"cantred of the tribe.\\""], ["Kildare", "Carbury or Carbery", "Cairbre", "By 1672", "48,286", "Named after Carbury"], ["Kildare", "Clane", "Claonadh", "By 1593", "32,023", "Named after Clane village"], ["Kildare", "Connell or Great Connell", "Connail", "By 1593", "34,785", "Named after [Old] Connell, a holy site and ford near Newbridge."], ["Kildare", "Ikeathy and Oughterany", "U\\u00ed Ch\\u00e9ithigh agus Uachtar Fhine", "United by 1608", "25,753", "The baronies of Ikeathy and Oughterany were united some time between 1558 and 1608. \\"Okeathy Ocerny\\" in 1593."], ["Kildare", "Kilcullen", "Cill Chuillinn", "By 1593", "8,492", "Named after Kilcullen town. A half-barony in the Down Survey."], ["Kildare", "Kilkea and Moone", "Cill Ch\\u00e1 agus Maoin", "By 1593", "46,286", "Named after the villages of Kilkea and Moone."], ["Kildare", "Naas North", "An N\\u00e1s Thuaidh", "By 1593", "25,579", "Named after Naas town. \\"Naas Upper\\" in 1593."], ["Kildare", "Naas South", "An N\\u00e1s Theas", "By 1593", "27,478", "Named after Naas town. \\"Naas Nether\\" in 1593."], ["Kildare", "Narragh and Reban East", "An Fhorrach agus an R\\u00e9ab\\u00e1n Thoir", "Divided by 1807", "21,374", "Named after Narragh and Rheban Castle. Namesake of the hereditary Barony of Norragh."], ["Kildare", "Narragh and Reban West", "An Fhorrach agus an R\\u00e9ab\\u00e1n Thiar", "Divided by 1807", "22,136", "(See Narragh and Reban East)"], ["Kildare", "Offaly East", "U\\u00edbh Fhail\\u00ed Thoir", "Divided by 1807", "47,029", "Named after U\\u00ed Failghe; also the name of County Offaly to the west. Barony of Offaly existed in 1593."], ["Kildare", "Offaly West", "U\\u00edbh Fhail\\u00ed Thiar", "Divided by 1807", "40,603", "(see Offaly West)"], ["Kildare", "North Salt", "An L\\u00e9im Thuaidh", "Divided by 1807", "21,930", "\\"Salt\\" derived from Saltus Salmonis, the Latin name for Leixlip. Barony of Salt existed by 1593."], ["Kildare", "South Salt", "An L\\u00e9im Theas", "Divided by 1807", "16,655", "(See North Salt)"], ["Kilkenny", "Callan", "Callainn", "By 1672", "5,653", "Named after Callan town; \\"Callen Liberties\\" in Down Survey. 
The 1836 Act \\"for removing doubts\\" explicitly states the town and liberties \\"shall be deemed and taken to be a barony\\""], ["Kilkenny", "Crannagh or Crannach", "Crannach", "By 1672", "58,675", "Name means \\"abounding in trees.\\""], ["Kilkenny", "Fassadinin or Fassadining", "F\\u00e1sach an Deighn\\u00edn", "By 1672", "68,174", "Name means \\"wilderness by the River Dinan.\\""], ["Kilkenny", "Galmoy", "Gabhalmhaigh", "By 1672", "40,236", "Name means \\"plain of the River Goul.3"], ["Kilkenny", "Gowran", "Gabhr\\u00e1n", "By 1672", "111,706", "Named after Gowran village"], ["Kilkenny", "Ida, or \\"Ida, Igrinn and Iberchon\\"", "U\\u00ed Dhe\\u00e1", "By 1672", "60,132", "Now also partly in Wexford. A tribal name: the U\\u00ed Dheaghaidh, descendants of Deagaid."], ["Kilkenny", "Iverk", "U\\u00edbh Eirc", "By 1672", "40,528", "Name means \\"descendents of Erc.\\""], ["Kilkenny", "Kells", "Ceanannas", "By 1672", "38,376", "Named after Kells, County Kilkenny."], ["Kilkenny", "Kilculliheen", "Cill Choilch\\u00edn", "By 1848", "2,139", "Originally a civil parish in the county of the city of Waterford, transferred to the county in 1840. Its status as a barony separate from Gaultier was not recognised by the census until 1871."], ["Kilkenny", "Kilkenny", "Cill Chainnigh", "1610", "921", "Formerly a county corporate: the County of the city of Kilkenny"], ["Kilkenny", "Knocktopher", "Cnoc an T\\u00f3chair", "By 1672", "46,765", "Named after Knocktopher village"], ["Kilkenny", "Shillelogher", "S\\u00edol Fhaolchair", "By 1672", "36,684", "A tribal name, meaning \\"descendants of Faolchar\\", a name meaning \\"wolf-love.\\""], ["Laois", "Ballyadams", "Baile \\u00c1daim", "By 1672", "24,081", "Named after Ballyadams Castle"], ["Laois", "Clandonagh", "Clann Donnchadha", "1846", "43,733", "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846."], ["Laois", "Clarmallagh", "Cl\\u00e1r Ma\\u00ed Locha", "1846", "43,533", "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846."], ["Laois", "Cullenagh or Cullinagh", "Cuileannach", "By 1672", "44,094", "Named after the Cullenagh Mountains."], ["Laois", "Maryborough East", "Port Laoise Thoir", "Divided by 1807", "25,160", "Named after Portlaoise, formerly named Maryborough"], ["Laois", "Maryborough West", "Port Laoise Thiar", "Divided by 1807", "41,914", "Named after Portlaoise, formerly named Maryborough"], ["Laois", "Portnahinch or Portnehinch", "Port na hInse", "By 1672", "35,835", "Named after Portnahinch, a landing-ground on the River Barrow."], ["Laois", "Slievemargy, Slewmergie, Slieuemargue, Slieuemargy", "Sliabh Mairge", "By 1672", "35,490", "Named after the Slievemargy hills. Now also partly in Carlow"], ["Laois", "Stradbally", "An Sr\\u00e1idbhaile", "By 1672", "27,895", "Named after Stradbally village"], ["Laois", "Tinnahinch or Tinnehinch", "Tigh na hInse", "By 1672", "54,187", "Named after Tinnahinch village"], ["Laois", "Upper Woods or Upperwoods", "An Choill Uachtarach", "1846", "48,926", "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846."], ["Leitrim", "Carrigallen", "Carraig \\u00c1lainn", "By 1672", "62,395", "Named after Carrigallen"], ["Leitrim", "Drumahaire", "Droim Dh\\u00e1 Thiar", "By 1574", "110,146", "Named after Drumahaire. 
Considered part of Sligo in 1574."], ["Leitrim", "Leitrim", "Liatroim", "By 1574", "59,164", "Named after Leitrim village. Considered part of Sligo in 1574."], ["Leitrim", "Mohill", "Maothail", "By 1672", "62,904", "Named after Mohill"], ["Leitrim", "Rosclougher or Rossclogher", "Ros Clochair", "By 1672", "81,601", "Named after Rosclogher Castle."], ["Limerick", "Clanwilliam", "Clann Liam", "By 1672", "55,627", "Name means \\"clan of William de Burgh.\\""], ["Limerick", "Connello (or Conello) Lower", "Conallaigh \\u00cdochtaracha", "Divided by 1821", "47,850", "Territory of the O\'Connells."], ["Limerick", "Connello (or Conello) Upper", "Conallaigh Uachtaracha", "Divided by 1821", "61,256", "Territory of the O\'Connells."], ["Limerick", "Coonagh", "U\\u00ed Chuanach", "By 1672", "36,323", "Name means \\"descendants of Cuana.\\""], ["Limerick", "Coshlea", "Cois Sl\\u00e9ibhe", "By 1672", "95,232", "Name literally means \\"foot of the mountain.\\""], ["Limerick", "Coshma", "Cois M\\u00e1ighe", "By 1672", "49,018", "Name means \\"edge of the plain.\\""], ["Limerick", "Glenquin", "Gleann an Choim", "By 1841", "96,402", "Prior to 1841, part of Connello Upper."], ["Limerick", "Kenry", "Caonra\\u00ed", "By 1672", "26,222", "From the C\\u00e1enraige, an ancient tribe."], ["Limerick", "Kilmallock or Kilmallock Liberties", "Cill Mocheall\\u00f3g", "By 1672", "4,074", "Named after Kilmallock. Not enumerated in the 1821 census."], ["Limerick", "Limerick City", "Cathair Luimnigh", "1609", "2,074", "Formerly a county corporate; includes the \\"[South] Liberties\\" of Down Survey"], ["Limerick", "North Liberties of Limerick city", "Na L\\u00edbearta\\u00ed Thuaidh", "By 1872", "3,050", "formerly Liberties; the \\"North Liberties\\" were record separately from the \\"South Liberties\\" in the Down Survey."], ["Limerick", "Owneybeg", "Uaithne Beag", "By 1672", "27,211", "The territory of Uaithni encompassed Owneybeg and part of Owney and Arra"], ["Limerick", "Pubblebrien", "Pobal Bhriain", "By 1672", "30,138", "Name means \\"Brian\'s people\\", referring to Brian Boru."], ["Limerick", "Shanid", "Seanaid", "By 1841", "84,075", "Prior to 1841, part of Connello Lower."], ["Limerick", "Smallcounty", "An D\\u00e9is Bheag", "By 1672", "44,424", "The Irish name means \\"the little vassal tribe\\"; see Deisi."], ["Londonderry", "Coleraine", "C\\u00fail Raithin", "By 1591", "85,836", "Named after Coleraine town, although the town itself is in the North East Liberties of Coleraine. A half-barony in 1807, including the south-west liberties of Coleraine."], ["Londonderry", "Keenaght or Kenaught", "Cianachta", "By 1591 (as Limavady)", "130,329", "Named after the Ciannachta tribe, descended from Tadc mac C\\u00e9in."], ["Londonderry", "Loughinsholin", "Loch Inse U\\u00ed Fhloinn", "By 1591", "171,662", "Name means \\"lough of O\'Lynn\'s island\\", referring to a lake containing a crann\\u00f3g."], ["Londonderry", "North East Liberties of Coleraine", "L\\u00edbearta\\u00ed Thoir Thuaidh Ch\\u00fail Raithin", "By 1672", "18,005", "formerly Liberties of Coleraine town."], ["Londonderry", "North-West Liberties of Londonderry", "L\\u00edbearta\\u00ed Thiar Thuaidh Dhoire", "By 1672", "11,506", "formerly Liberties of Londonderry city."], ["Londonderry", "Tirkeeran or Tyrkeeran", "T\\u00edr Mhic Caoirthinn", "By 1591 (as Anagh)", "94,014", "A half-barony in 1807, including the south-east liberties of Londonderry. 
Name means \\"land of the sons of Cartin.\\""], ["Longford", "Ardagh", "Ardach", "By 1629", "40,223", "Named after Ardagh village"], ["Longford", "Granard", "Gr\\u00e1nard", "By 1629", "63,857", "Named after Granard village"], ["Longford", "Longford", "An Longfort", "By 1629", "57,243", "Named after Longford town"], ["Longford", "Moydow", "Maigh Dumha", "By 1629", "34,470", "Named after Moydow village"], ["Longford", "Rathcline", "R\\u00e1th Claon", "By 1629", "40,421", "Named after Rathcline Castle."], ["Longford", "Shrule or Abbeyshrule", "Sruthail", "By 1629", "21,006", "Named after Abbeyshrule"], ["Louth", "Ardee", "Baile \\u00c1tha Fhirdhia", "By 1593", "53,832", "Named after Ardee town"], ["Louth", "Drogheda", "Droichead \\u00c1tha", "1412", "4,497", "Formerly a county corporate. A barony separate from the county was formed in 1840 from the portion previously within the County of the town of Drogheda which was not within the town of Drogheda."], ["Louth", "Dundalk Lower", "D\\u00fan Dealgan \\u00cdochtarach", "Divided by 1821", "37,803", "Named after Dundalk town"], ["Louth", "Dundalk Upper", "D\\u00fan Dealgan Uachtarach", "Divided by 1821", "30,750", "Named after Dundalk town"], ["Louth", "Ferrard", "Fir Arda", "By 1593", "48,806", "From Fera Arda Ciannachta, \\"men of high Ciannachta.\\" Namesake of Viscount Massereene and Ferrard"], ["Louth", "Louth", "L\\u00fa", "By 1672", "25,704", "Named after Louth village"], ["Mayo", "Burrishoole", "Buir\\u00edos Umhaill", "By 1574", "145,172", "Named after Burrishoole Castle; a few sources list Burrishoole split into \\"Burrishoole North\\" and \\"Burrishoole South\\""], ["Mayo", "Carra", "Ceara", "By 1574", "134,206", "Named after Carra village. Called Burriscarra/Burisker in 1574."], ["Mayo", "Clanmorris", "Clann Mhuiris", "By 1574", "69,252", "Namesake of Baron Clanmorris. Name means \\"Muiris\' family.\\" Called Croslwyhin/Crossboyne in 1574."], ["Mayo", "Costello or Clancostello", "Coistealaigh", "By 1574", "143,874", "Now also partly in Roscommon. Named after the Hiberno-Norman MacOisdealbhaigh (Costello) family. Called Beallahaunes/Ballyhaunis in 1574"], ["Mayo", "Erris", "Iorras", "By 1672", "230,452", "Named after Erris village. A half-barony in the Gilbert Manuscript of the Down Survey."], ["Mayo", "Gallen", "Gaileanga", "By 1574", "119,153", "Named after the Gailenga tribe. Beallalahane in 1574."], ["Mayo", "Kilmaine", "Cill Mhe\\u00e1in", "By 1574", "95,284", "Named after Kilmaine village"], ["Mayo", "Murrisk", "Muraisc", "By 1574", "137,061", "Named after Murrisk village"], ["Mayo", "Tirawley or Tyrawley", "T\\u00edr Amhlaidh", "By 1574", "246,822", "Name means \\"Amlaid\'s land\\", referring to Amalgaid mac Fiachrae. \\"Many\\"/Moyne in 1574."], ["Meath", "Deece Lower", "D\\u00e9ise \\u00cdochtarach", "Divided by 1807", "20,013", "Deece barony present by 1542. Named after the D\\u00e9isi Becc."], ["Meath", "Deece Upper", "D\\u00e9ise Uachtarach", "Divided by 1807", "28,763", "Deece barony present by 1542. Named after the D\\u00e9isi Becc."], ["Meath", "Duleek Lower", "Damhliag \\u00cdochtarach", "Divided by 1807", "37,772", "Named after Duleek village. Now also partly in Louth. Duleek barony present by 1542"], ["Meath", "Duleek Upper", "Damhliag Uachtarach", "Divided by 1807", "28,463", "Named after Duleek village. 
Duleek barony present by 1542"], ["Meath", "Dunboyne", "D\\u00fan B\\u00fainne", "By 1542", "16,781", "Named after Dunboyne town."], ["Meath", "Fore or Demifore", "Baile Fhobhair", "By 1542", "42,388", "Half with Fore, County Westmeath since 1542. Named after Fore Abbey."], ["Meath", "Kells Lower", "Ceanannas \\u00cdochtarach", "Divided by 1807", "36,171", "Named after Kells town. Kells barony present by 1542"], ["Meath", "Kells Upper", "Ceanannas Uachtarach", "Divided by 1807", "49,552", "Named after Kells town. Kells barony present by 1542"], ["Meath", "Lune", "Lu\\u00edne", "By 1542", "39,326", "Named after the Luighne tribe."], ["Meath", "Morgallion", "Machaire Gaileang", "By 1542", "31,492", "Name means \\"plain of the Gailenga\\", a medieval tribe."], ["Meath", "Moyfenrath (or Moyfenragh) Lower", "Maigh Fionnr\\u00e1ithe \\u00cdochtarach", "Divided by 1807", "40,313", "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\""], ["Meath", "Moyfenrath (or Moyfenragh) Upper", "Maigh Fionnr\\u00e1ithe Uachtarach", "Divided by 1807", "31,696", "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\""], ["Meath", "Navan Lower", "An Uaimh \\u00cdochtarach", "Divided by 1807", "25,835", "Named after Navan town. Navan barony present by 1542"], ["Meath", "Navan Upper", "An Uaimh Uachtarach", "Divided by 1807", "17,651", "Named after Navan town. Navan barony present by 1542"], ["Meath", "Ratoath", "R\\u00e1th T\\u00f3", "By 1542", "35,697", "Named after Ratoath village."], ["Meath", "Skreen or Skryne", "An Scr\\u00edn", "By 1542", "40,891", "Named after Skryne village"], ["Meath", "Slane Lower", "Baile Shl\\u00e1ine \\u00cdochtarach", "Divided in 1791", "26,224", "Named after Slane village. Slane barony present by 1542"], ["Meath", "Slane Upper", "Baile Shl\\u00e1ine Uachtarach", "Divided in 1791", "29,211", "Named after Slane village. 
Slane barony present by 1542"], ["Monaghan", "Cremorne", "Cr\\u00edoch Mh\\u00farn", "1585", "84,508", "From Irish meaning \\"border of the Mugdorna.\\""], ["Monaghan", "Dartree or Dartry", "Dartra\\u00ed", "1585", "59,610", "Name from the ancient kingdom of Dartraighe."], ["Monaghan", "Farney", "Fearnaigh", "1585", "67,333", "Named from the ancient kingdom of Fernmag, \\"plain of alders.\\""], ["Monaghan", "Monaghan", "Muineach\\u00e1n", "1585", "69,735", "Named after Monaghan town."], ["Monaghan", "Trough", "An Tri\\u00facha", "1585", "37,376", "From the Irish tr\\u00edcha c\\u00e9t, a unit of territory in Medieval Ireland."], ["Offaly", "Ballyboy", "Baile \\u00c1tha Bu\\u00ed", "By 1672", "32,398", "Named after Ballyboy village"], ["Offaly", "Ballybritt", "Baile an Bhriotaigh", "By 1672", "52,378", "Named after Ballybritt Castle."], ["Offaly", "Ballycowen", "Baile Mhic Comhainn", "By 1672", "38,610", "Named after Ballycowan Castle."], ["Offaly", "Clonlisk", "Cluain Leisc", "By 1672", "49,052", "Named after Clonlisk Castle."], ["Offaly", "Coolestown", "Baile an Ch\\u00fala\\u00edgh", "By 1672", "47,866", "Named after Coolestown, the former name of Edenderry."], ["Offaly", "Eglish or Fercale", "An Eaglais", "By 1672", "28,697", "The name means \\"church,\\" while Fercale means \\"men of the churches.\\""], ["Offaly", "Garrycastle", "Garra\\u00ed an Chaisle\\u00e1in", "By 1672", "102,841", "Named after Garrycastle"], ["Offaly", "Geashill", "G\\u00e9isill", "By 1672", "30,864", "Named after Geashill village"], ["Offaly", "Kilcoursey", "Cill Chuairs\\u00ed", "By 1672", "19,274", "Named after Kilcoursey Castle."], ["Offaly", "Philipstown Lower", "An Daingean \\u00cdochtarach", "Divided by 1807", "30,669", "Named after Philipstown, now renamed Daingean"], ["Offaly", "Philipstown Upper", "An Daingean Uachtarach", "Divided by 1807", "37,087", "Named after Philipstown, now renamed Daingean"], ["Offaly", "Warrenstown", "Baile an Bhair\\u00ednigh", "By 1672", "21,456", "Named after Ballybrittain (Warrenstown) Castle."], ["Roscommon", "Athlone North", "Baile \\u00c1tha Luain Thuaidh", "Divided by 1868", "57,863", "Named after Athlone town. North and South not separated in 1871 census. The original Athlone barony existed by 1574."], ["Roscommon", "Athlone South", "Baile \\u00c1tha Luain Theas", "Divided by 1868", "79,659", "Named after Athlone town. North and South not separated in 1871 census."], ["Roscommon", "Ballintober North", "Baile an Tobair Thuaidh", "Divided by 1841", "30,853", "Named after Ballintober town (now in Castlereagh barony.) The original Ballintober barony existed by 1574."], ["Roscommon", "Ballintober South", "Baile an Tobair Theas", "Divided by 1841", "48,113", "Named after Ballintober town (now in Castlereagh barony.) The original Ballintober barony existed by 1574."], ["Roscommon", "Ballymoe", "B\\u00e9al \\u00c1tha M\\u00f3", "By 1672", "23,287", "Half with Ballymoe, County Galway. Named after Ballymoe village, on the County Galway side of the River Suck."], ["Roscommon", "Boyle", "Mainistir na B\\u00faille", "By 1574", "81,163", "Named after Boyle town"], ["Roscommon", "Castlereagh", "An Caisle\\u00e1n Riabhach", "By 1841", "82,081", "Named after Castlerea town. 
Previously one of three sections of Ballintober barony."], ["Roscommon", "Frenchpark", "D\\u00fan Gar", "By 1841", "71,203", "Named after Frenchpark village; previously part of the barony of Boyle."], ["Roscommon", "Moycarn or Moycarnon or Moycarne or Moycarnan", "Maigh Charn\\u00e1in", "By 1574", "29,595", "Now also partly in Galway. A half-barony in 1807."], ["Roscommon", "Roscommon", "Ros Com\\u00e1in", "By 1574", "81,584", "Named after Roscommon town, which is in Ballintober South"], ["Sligo", "Carbury", "Cairbre", "United by 1841", "73,685", "Divided into Upper and Lower baronies before 1841. Named after the ancient t\\u00faath of the Cairbre Drom Cliabh."], ["Sligo", "Coolavin", "C\\u00fail \\u00d3 bhFinn", "By 1672", "25,473", "Name means \\"corner of the descendants of Finn.\\""], ["Sligo", "Corran", "An Corann", "By 1672", "45,376", "Named after Corann village"], ["Sligo", "Leyny or Leney", "Lu\\u00edne", "By 1672", "121,233", "Named after the Luighne Connacht tribe"], ["Sligo", "Tireragh or Tyreragh", "T\\u00edr Fhiachrach", "By 1672", "106,598", "Now also partly in Mayo. Name means \\"land of the U\\u00ed Fiachrach.\\""], ["Sligo", "Tirerril or Tyraghrill", "T\\u00edr Oirill", "By 1672", "75,812", "Name means \\"Olliol\'s land\\", referring to Ailill mac Echach Mugmed\\u00f3in."], ["Tipperary", "Clanwilliam", "Clann Liam", "By 1672", "115,755", "Name means \\"clan of William de Burgh.\\""], ["Tipperary", "Eliogarty", "\\u00c9ile U\\u00ed Fh\\u00f3garta", "By 1672", "90,257", "A half-barony (with Ikerrin) in the Down Survey. Name means \\"\\u00c9ile of the U\\u00ed Fhogartaigh.\\""], ["Tipperary", "Iffa and Offa East", "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thoir", "Divided by 1807", "56,819", "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\""], ["Tipperary", "Iffa and Offa West", "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thiar", "Divided by 1807", "117,175", "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\""], ["Tipperary", "Ikerrin", "U\\u00ed Chair\\u00edn", "By 1672", "69,805", "A half-barony (with Eliogarty) in the Down Survey. Name means \\"descendents of Cair\\u00edn.\\""], ["Tipperary", "Kilnamanagh Lower", "Coill na Manach \\u00cdochtarach", "Divided in 1838", "42,041", "Named after Kilnamanagh town"], ["Tipperary", "Kilnamanagh Upper", "Coill na Manach Uachtarach", "Divided in 1838", "59,990", "Named after Kilnamanagh town."], ["Tipperary", "Middle Third", "An Trian Me\\u00e1nach", "By 1672", "113,544", "From trian meaning \\"third\\" or \\"portion.\\""], ["Tipperary", "Ormond Lower", "Urumhain \\u00cdochtarach", "Divided by 1672", "127,222", "Compare Ormond (\\"east Munster\\")"], ["Tipperary", "Ormond Upper", "Urumhain Uachtarach", "Divided by 1672", "79,471", "Compare Ormond (\\"east Munster\\")"], ["Tipperary", "Owney and Arra", "Uaithne agus Ara", "United 1672\\u20131792", "85,494", "\\"Owney Mulrian\\" and Arra were separate baronies in the Down Survey, named respectively after the ancient kingdom of Uaithni and the River Ara."], ["Tipperary", "Slievardagh", "Sliabh Ardach", "By 1672", "90,772", "\\"Slevardagh & Compsy\\" in the Down Survey. 
The name means \\"high mountain of the Eoganachta.\\""], ["Tyrone", "Clogher", "Clochar", "By 1591", "97,569", "Named after Clogher town"], ["Tyrone", "Dungannon Lower", "D\\u00fan Geanainn \\u00cdochtarach", "Divided by 1851; Dungannon by 1591", "42,794", "Named after Dungannon town"], ["Tyrone", "Dungannon Middle", "D\\u00fan Geanainn L\\u00e1ir", "Divided by 1851; Dungannon by 1591", "87,541", "Named after Dungannon town"], ["Tyrone", "Dungannon Upper", "D\\u00fan Geanainn Uachtarach", "Divided by 1851; Dungannon by 1591", "85,995", "Named after Dungannon town"], ["Tyrone", "Omagh East", "An \\u00d3maigh Thoir", "Divided 1807\\u201321; Omagh by 1591", "132,149", "Named after Omagh town"], ["Tyrone", "Omagh West", "An \\u00d3maigh Thiar", "Divided 1807\\u201321; Omagh by 1591", "93,321", "Named after Omagh town"], ["Tyrone", "Strabane Lower", "An Srath B\\u00e1n \\u00cdochtarach", "Divided by 1851; Strabane by 1591", "117,419", "Named after Strabane town"], ["Tyrone", "Strabane Upper", "An Srath B\\u00e1n Uachtarach", "Divided by 1851; Strabane by 1591", "121,282", "Named after Strabane town"], ["Waterford", "Coshmore and Coshbride", "Cois Abha M\\u00f3ire agus Cois Bhr\\u00edde", "United by 1831", "88,253", "Baronies of Coshmore and Coshbride were separate in the 1821 census. The names mean, respectively, \\"Bank of the Munster Blackwater\\" and \\"Bank of the River Bride.\\""], ["Waterford", "Decies-within-Drum", "Na D\\u00e9ise laistigh den Drom", "Decies divided by 1746", "57,325", "Decies south of the Drum Hills."], ["Waterford", "Decies-without-Drum", "Na D\\u00e9ise lasmuigh den Drom", "Decies divided by 1746", "129,894", "Decies north of the Drum Hills. \\"Without\\" is used with the meaning of \\"beyond\\" or \\"outside.\\""], ["Waterford", "Gaultier or Gaultiere", "An Ghaillt\\u00edr", "By 1672", "29,447", "Kilculliheen was formerly a parish of this barony. Name means \\"land of foreigners,\\" referring to Vikings."], ["Waterford", "Glenahiry", "Gleann na hUidhre", "By 1672", "38,940", "Name means \\"valley of the Nier\\", referring to the Nier River."], ["Waterford", "Middle Third or Middlethird", "An Trian Me\\u00e1nach", "By 1672", "44,609", "From trian meaning \\"third\\" or \\"portion.\\""], ["Waterford", "Upperthird or Upper Third", "Uachtar T\\u00edre", "By 1672", "63,846", "Name originally meant \\"Upper country\\"; probably acquired \\"third\\" in name by analogy with Middle Third."], ["Waterford", "Waterford City", "Cathair Phort L\\u00e1irge", "1574", "532", "Formerly a county corporate."], ["Westmeath", "Brawny", "Bre\\u00e1mhaine", "By 1672", "10,070", "The ancient territory of Bregmaine."], ["Westmeath", "Clonlonan", "Cluain Lon\\u00e1in", "By 1672", "32,095", "Name means \\"Lon\\u00e1n\'s meadow.\\""], ["Westmeath", "Corkaree", "Corca Raoi", "By 1542", "23,787", "A tribal name, \\"descendants of Raoi.\\""], ["Westmeath", "Delvin", "Dealbhna", "By 1542", "39,062", "Named after Delvin village"], ["Westmeath", "Farbill", "Fir Bhile", "By 1542", "35,453", "A tribal name: \\"men of the sacred tree.\\""], ["Westmeath", "Fartullagh", "Fir Thulach", "1542", "37,512", "Previously Tyrrells country. Name means \\"men of the hillock\\", a tribal name."], ["Westmeath", "Fore or Demifore", "Baile Fhobhair", "1542", "49,056", "Half with Fore, County Meath. 
Named after Fore Abbey."], ["Westmeath", "Kilkenny West", "Cill Chainnigh Thiar", "1542", "31,169", "Previously Maherquirke, Dillons country"], ["Westmeath", "Moyashel and Magheradernon", "Maigh Asail agus Machaire \\u00d3 dTiarn\\u00e1in", "By 1672", "40,565", "Moyashel and Magheradernon listed separately in 1542. They formed the ancient territories of Mag nAssail (Assail\'s plain) and the plain of the O\'Tiernans."], ["Westmeath", "Moycashel", "Maigh Chaisil", "1542", "47,097", "Originally the Barony of Rossaughe; before that, Delamares country. Name means \\"plain of the stone ringfort.\\""], ["Westmeath", "Moygoish", "U\\u00ed Mhac gCuais", "By 1542", "39,483", "A tribal name: \\"Descendants of the Son of Cuas.\\""], ["Westmeath", "Rathconrath", "R\\u00e1th Conarta", "1542", "48,415", "Named after Rathconrath village; previously Daltons country"], ["Wexford", "Ballaghkeen North", "An Bealach Caoin Thuaidh", "Ballaghkeen created 1606; Divided by 1868", "45,413", "Ballaghkeen means \\"way of sorrow.\\""], ["Wexford", "Ballaghkeen South", "An Bealach Caoin Theas", "Ballaghkeen created 1606; Divided by 1868", "40,986", "Ballaghkeen means \\"way of sorrow.\\""], ["Wexford", "Bantry", "Beanntra\\u00ed", "By 1672", "101,598", "Named after the Bendtraigi Laigen, the former ruling people."], ["Wexford", "Bargy", "U\\u00ed Bhairrche", "By 1672", "40,002", "Named after the ruling U\\u00ed Bairrche family, who claimed descent from D\\u00e1ire Barrach."], ["Wexford", "Forth", "Fotharta", "By 1672", "38,384", "A Fortuatha was a kingdom not ruled directly by members of the dominant dynasty of a province. This area was ruled by Fothairt in Chairn."], ["Wexford", "Gorey", "Guaire", "1606", "81,913", "Named after Gorey town"], ["Wexford", "Scarawalsh", "Scairbh Bhailis", "1606", "106,650", "Name means \\"rocky ford of light.\\""], ["Wexford", "Shelburne", "S\\u00edol Bhroin", "By 1672", "51,103", "Named after the tribe, S\\u00edl Broin, \\"offspring of Broin.\\""], ["Wexford", "Shelmaliere East", "S\\u00edol Maolu\\u00edr Thoir", "Divided by 1841", "16,363", "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\""], ["Wexford", "Shelmaliere West", "S\\u00edol Maolu\\u00edr Thiar", "Divided by 1841", "50,299", "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\""], ["Wicklow", "Arklow", "An tInbhear M\\u00f3r", "1606", "66,980", "Named after Arklow town"], ["Wicklow", "Ballinacor North", "Baile na Corra Thuaidh", "Divided 1832\\u20135", "74,109", "United barony of Talbotstown created in 1606, and divided into half-baronies for civil law purposes in 1798. Named after Ballinacor Castle."], ["Wicklow", "Ballinacor South", "Baile na Corra Theas", "Divided 1832\\u20135", "78,316", "(See Ballinacor North)"], ["Wicklow", "Newcastle", "An Caisle\\u00e1n Nua", "1606", "51,938", "Named after the village of Newcastle, County Wicklow. Not related to County Dublin barony of the same name."], ["Wicklow", "Rathdown", "R\\u00e1th an D\\u00fain", "1606", "33,462", "Half with Rathdown, County Dublin. Named after Rathdown Castle."], ["Wicklow", "Shillelagh", "S\\u00edol \\u00c9alaigh", "1606", "44,348", "Named after Shillelagh village. A half-barony in 1807."], ["Wicklow", "Talbotstown Lower", "Baile an Talb\\u00f3idigh \\u00cdochtarach", "Divided by 1801", "86,857", "Named after Talbotstown village. 
United barony of Talbotstown created in 1606."], ["Wicklow", "Talbotstown Upper", "Baile an Talb\\u00f3idigh Uachtarach", "Divided by 1801", "62,510", "(See Talbotstown Lower)"]], "label": [[315, 0], [315, 1]]}'
```
2.
```
import json
import pandas as pd
from transformers import TapasModel, TapasTokenizer
# `data` is the JSON string pasted in step 1 above
data = json.loads(data)

model = TapasModel.from_pretrained("google/tapas-base-finetuned-tabfact")
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-tabfact")

# The first row of `table_list` holds the column names; TAPAS expects string cells
table = pd.DataFrame(data['table_list'][1:], columns=data['table_list'][0]).astype(str)
queries = [data['sentence_annotations'][0]['final_sentence']]

inputs = tokenizer(table=table, queries=queries,
                   padding="max_length", return_tensors="pt",
                   max_length=512, truncation=True)
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']
token_type_ids = inputs['token_type_ids']

# This forward pass raises the IndexError shown in step 3
x = model(input_ids=input_ids,
          attention_mask=attention_mask,
          token_type_ids=token_type_ids)
```
3. Error message
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-15-fc5524a5d018> in <module>()
1 x = model(input_ids=input_ids,
2 attention_mask=attention_mask,
----> 3 token_type_ids=token_type_ids)
/shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/transformers/models/tapas/modeling_tapas.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict)
853
854 embedding_output = self.embeddings(
--> 855 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
856 )
857 encoder_outputs = self.encoder(
/shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/transformers/models/tapas/modeling_tapas.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
323 for i in range(self.number_of_token_type_embeddings):
324 name = f"token_type_embeddings_{i}"
--> 325 embeddings += getattr(self, name)(token_type_ids[:, :, i])
326
327 embeddings = self.LayerNorm(embeddings)
/shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
/shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
```
## Expected behavior
The forward pass should run without raising an error. I have inspected the contents of `token_type_ids` and the shapes of all tensors involved in this snippet, but found nothing unusual.
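For reference, a diagnostic sketch (hypothetical helper code; it assumes `model`, `input_ids` and `token_type_ids` from step 2 are still in scope). TAPAS keeps one embedding table per token-type channel, sized by `config.type_vocab_sizes`, and the table in step 1 has well over 256 rows while several of those tables (row index, column ranks) only hold 256 entries each, so an overflow in one of them is a plausible culprit:
```python
# Check every id tensor against the size of the embedding table it indexes into.
# nn.Embedding raises "IndexError: index out of range in self" as soon as one
# id is >= its table size.
print("input_ids max:", input_ids.max().item(), "/ vocab size:", model.config.vocab_size)
for i, size in enumerate(model.config.type_vocab_sizes):
    channel_max = token_type_ids[:, :, i].max().item()
    flag = "OK" if channel_max < size else "OVERFLOW"
    print(f"token_type channel {i}: max id {channel_max} / table size {size} -> {flag}")
```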
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9221/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9220/comments | https://api.github.com/repos/huggingface/transformers/issues/9220/events | https://github.com/huggingface/transformers/pull/9220 | 771,509,767 | MDExOlB1bGxSZXF1ZXN0NTQzMDE0Nzk1 | 9,220 | Proposed Fix : [RagSequenceForGeneration] generate "without" input_ids | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ratthachat - thanks a lot for your PR! I agree that option 2) is the better option and think the general approach is exactly right. I left a couple of statements making that improve readability a bit IMO. Also could we add one test here: \r\nhttps://github.com/huggingface/transformers/blob/f38c4ad302dd34faef1137b13e9636f4408b0462/tests/test_modeling_rag.py#L502\r\ncalled `test_model_generate_from_context_input_ids` that verifies that using `input_ids` and `context_input_ids` yields the same result?",
"@ratthachat Don't worry about `make style` I can fix that later!",
"Thanks for the review, Patrick! I pretty much agree on every point, esp. the test case. I will do it. :)",
"Hi Patrick, I think I addressed every suggestion :)\r\n[ Still fails on code-quality even already applied `make style` again :D ] @patrickvonplaten ",
"Great job @ratthachat - I'm currently running all RAG slow tests to make sure nothing was unintentionally broken. Will merge once they all pass ",
"Slow tests all passing",
"Nice @patrickvonplaten !"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This PR implements option (2), as proposed in the issue "`RagSequenceForGeneration.generate()` without `input_ids`":
https://github.com/huggingface/transformers/issues/9184
### Re-pasted from the issue:
In `RagSequenceForGeneration`'s `generate()` method, the docs say that both `input_ids` and `context_input_ids` are optional (one of them must be specified).
However, the code specifically requires `input_ids` in all cases:
https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L907
I was not sure which option is best:
(1) simply document that `input_ids` is always needed, OR
(2) add code to calculate the `nll` when only `context_input_ids` is provided; in this case `doc_scores` and `context_attention_mask` have to be provided as well (similar to the `RagModel` requirement): https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L588
I think option (2) is reasonable, since `RagTokenForGeneration`'s `generate()` method has the same requirement.
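A minimal usage sketch of what option (2) enables. This is only an illustration: the checkpoint and question are placeholders, and the manual retrieval steps follow the pattern from the `RagRetriever` docs:
```python
import torch
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

input_ids = tokenizer("who holds the record in 100m freestyle", return_tensors="pt").input_ids

# Retrieve documents by hand instead of letting generate() do it from input_ids
question_hidden_states = model.question_encoder(input_ids)[0]
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
doc_scores = torch.bmm(
    question_hidden_states.unsqueeze(1),
    docs_dict["retrieved_doc_embeds"].float().transpose(1, 2),
).squeeze(1)

# With this PR, generate() works from the pre-computed context alone
generated = model.generate(
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```
This mirrors the existing requirement in `RagTokenForGeneration.generate()`: when `input_ids` is absent, `context_input_ids`, `context_attention_mask` and `doc_scores` must all be supplied.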
### Who can review?
@lhoestq or @patrickvonplaten
### `make style` applied but quality check failed
I already applied `make style` and it passed locally, but the CI "code quality" check somehow still fails, sorry :( | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9220/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9220/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9220",
"html_url": "https://github.com/huggingface/transformers/pull/9220",
"diff_url": "https://github.com/huggingface/transformers/pull/9220.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9220.patch",
"merged_at": 1608813481000
} |
https://api.github.com/repos/huggingface/transformers/issues/9219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9219/comments | https://api.github.com/repos/huggingface/transformers/issues/9219/events | https://github.com/huggingface/transformers/pull/9219 | 771,476,408 | MDExOlB1bGxSZXF1ZXN0NTQyOTkyNjM0 | 9,219 | Fix beam search generation for GPT2 and T5 on model parallelism | {
"login": "TobiasNorlund",
"id": 2678217,
"node_id": "MDQ6VXNlcjI2NzgyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2678217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TobiasNorlund",
"html_url": "https://github.com/TobiasNorlund",
"followers_url": "https://api.github.com/users/TobiasNorlund/followers",
"following_url": "https://api.github.com/users/TobiasNorlund/following{/other_user}",
"gists_url": "https://api.github.com/users/TobiasNorlund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TobiasNorlund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TobiasNorlund/subscriptions",
"organizations_url": "https://api.github.com/users/TobiasNorlund/orgs",
"repos_url": "https://api.github.com/users/TobiasNorlund/repos",
"events_url": "https://api.github.com/users/TobiasNorlund/events{/privacy}",
"received_events_url": "https://api.github.com/users/TobiasNorlund/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger can you maybe review as well and merge if ok?"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a crash in beam search generation when the model's layers are distributed across several devices (model parallelism). I have also added a test which showcases the bug on master and passes with the provided fix. Fixes issue #9200.
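A minimal sketch that reproduces the crash (it needs a machine with at least two GPUs, since `parallelize()` with no arguments spreads the transformer blocks over all visible devices):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.parallelize()  # distribute the layers across the available GPUs

input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids.to("cuda:0")

# Greedy search works, but on master beam search crashes with a device
# mismatch while the cached key/value states are reordered between steps.
outputs = model.generate(input_ids, num_beams=4, max_length=20)
print(tokenizer.decode(outputs[0]))
```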
This is my first contribution to Transformers, so please let me know if something seems wrong or can be improved! The added test doesn't assert anything specific at the moment; it simply raises an error on master. Looking forward to feedback on how you'd like to see it structured.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Issue #9200
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@alexorona
@patrickvonplaten
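For reviewers: the heart of the change is in each model's `_reorder_cache`, which now moves the beam indices to the device of each cached state before indexing. A simplified sketch, not the literal diff (the exact `index_select` dimension differs between GPT2 and T5):
```python
# Before (sketch): beam_idx lives on one device, while under model
# parallelism the caches of later layers live on other devices.
reordered_past = tuple(layer_past.index_select(1, beam_idx) for layer_past in past)

# After (sketch): move the index tensor to each cached state's own device.
reordered_past = tuple(
    layer_past.index_select(1, beam_idx.to(layer_past.device)) for layer_past in past
)
```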
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9219/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9219",
"html_url": "https://github.com/huggingface/transformers/pull/9219",
"diff_url": "https://github.com/huggingface/transformers/pull/9219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9219.patch",
"merged_at": 1608555924000
} |
https://api.github.com/repos/huggingface/transformers/issues/9218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9218/comments | https://api.github.com/repos/huggingface/transformers/issues/9218/events | https://github.com/huggingface/transformers/issues/9218 | 771,462,949 | MDU6SXNzdWU3NzE0NjI5NDk= | 9,218 | run_clm.py AttributeError: 'NoneType' object has no attribute 'keys' | {
"login": "mukhtar-algezoli",
"id": 38084259,
"node_id": "MDQ6VXNlcjM4MDg0MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/38084259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mukhtar-algezoli",
"html_url": "https://github.com/mukhtar-algezoli",
"followers_url": "https://api.github.com/users/mukhtar-algezoli/followers",
"following_url": "https://api.github.com/users/mukhtar-algezoli/following{/other_user}",
"gists_url": "https://api.github.com/users/mukhtar-algezoli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mukhtar-algezoli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mukhtar-algezoli/subscriptions",
"organizations_url": "https://api.github.com/users/mukhtar-algezoli/orgs",
"repos_url": "https://api.github.com/users/mukhtar-algezoli/repos",
"events_url": "https://api.github.com/users/mukhtar-algezoli/events{/privacy}",
"received_events_url": "https://api.github.com/users/mukhtar-algezoli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @mukhtar-algezoli,\r\n\r\nIt seems like you are using tensorflow, but the script `run_clm.py` only works for PyTorch -> I'm afraid you have to use PyTorch in order to use `run_clm.py`.",
"@patrickvonplaten thank you.",
"hi @patrickvonplaten ,\r\nif i want use tensorflow to pretrain bert model with transformers , are there some examples like run_mlm.py ?",
"https://github.com/huggingface/transformers/tree/master/examples/tensorflow/language-modeling\r\n\r\nuse tensorflow examples get the same error, How can i fix it?",
"@Rocketknight1 for TF examples :-)",
"@hrdxwandg This is an odd issue! Can you please install the most recent version of transformers (e.g. with `pip install --upgrade transformers`) and then try running the following in a console or notebook and tell me what you see?\r\n\r\n```\r\nfrom transformers import MODEL_FOR_CAUSAL_LM_MAPPING\r\n\r\nprint(MODEL_FOR_CAUSAL_LM_MAPPING)\r\n```",
"> print(MODEL_FOR_CAUSAL_LM_MAPPING)\r\n\r\nOrderedDict([(<class 'transformers.models.roformer.configuration_roformer.RoFormerConfig'>, <class 'transformers.models.roformer.modeling_roformer.RoFormerForCausalLM'>), (<class 'transformers.models.bigbird_pegasus.configuration_bigbird_pegasus.BigBirdPegasusConfig'>, <class 'transformers.models.bigbird_pegasus.modeling_bigbird_pegasus.BigBirdPegasusForCausalLM'>), (<class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'>, <class 'transformers.models.gpt_neo.modeling_gpt_neo.GPTNeoForCausalLM'>), (<class 'transformers.models.big_bird.configuration_big_bird.BigBirdConfig'>, <class 'transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM'>), (<class 'transformers.models.camembert.configuration_camembert.CamembertConfig'>, <class 'transformers.models.camembert.modeling_camembert.CamembertForCausalLM'>), (<class 'transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig'>, <class 'transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaForCausalLM'>), (<class 'transformers.models.roberta.configuration_roberta.RobertaConfig'>, <class 'transformers.models.roberta.modeling_roberta.RobertaForCausalLM'>), (<class 'transformers.models.bert.configuration_bert.BertConfig'>, <class 'transformers.models.bert.modeling_bert.BertLMHeadModel'>), (<class 'transformers.models.openai.configuration_openai.OpenAIGPTConfig'>, <class 'transformers.models.openai.modeling_openai.OpenAIGPTLMHeadModel'>), (<class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'>, <class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>), (<class 'transformers.models.transfo_xl.configuration_transfo_xl.TransfoXLConfig'>, <class 'transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModel'>), (<class 'transformers.models.xlnet.configuration_xlnet.XLNetConfig'>, <class 'transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModel'>), (<class 'transformers.models.xlm.configuration_xlm.XLMConfig'>, <class 'transformers.models.xlm.modeling_xlm.XLMWithLMHeadModel'>), (<class 'transformers.models.ctrl.configuration_ctrl.CTRLConfig'>, <class 'transformers.models.ctrl.modeling_ctrl.CTRLLMHeadModel'>), (<class 'transformers.models.reformer.configuration_reformer.ReformerConfig'>, <class 'transformers.models.reformer.modeling_reformer.ReformerModelWithLMHead'>), (<class 'transformers.models.bert_generation.configuration_bert_generation.BertGenerationConfig'>, <class 'transformers.models.bert_generation.modeling_bert_generation.BertGenerationDecoder'>), (<class 'transformers.models.xlm_prophetnet.configuration_xlm_prophetnet.XLMProphetNetConfig'>, <class 'transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetForCausalLM'>), (<class 'transformers.models.prophetnet.configuration_prophetnet.ProphetNetConfig'>, <class 'transformers.models.prophetnet.modeling_prophetnet.ProphetNetForCausalLM'>), (<class 'transformers.models.bart.configuration_bart.BartConfig'>, <class 'transformers.models.bart.modeling_bart.BartForCausalLM'>), (<class 'transformers.models.mbart.configuration_mbart.MBartConfig'>, <class 'transformers.models.mbart.modeling_mbart.MBartForCausalLM'>), (<class 'transformers.models.pegasus.configuration_pegasus.PegasusConfig'>, <class 'transformers.models.pegasus.modeling_pegasus.PegasusForCausalLM'>), (<class 'transformers.models.marian.configuration_marian.MarianConfig'>, <class 'transformers.models.marian.modeling_marian.MarianForCausalLM'>), (<class 
'transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig'>, <class 'transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM'>), (<class 'transformers.models.blenderbot_small.configuration_blenderbot_small.BlenderbotSmallConfig'>, <class 'transformers.models.blenderbot_small.modeling_blenderbot_small.BlenderbotSmallForCausalLM'>), (<class 'transformers.models.megatron_bert.configuration_megatron_bert.MegatronBertConfig'>, <class 'transformers.models.megatron_bert.modeling_megatron_bert.MegatronBertForCausalLM'>)])",
"That's extremely strange - the error above said that `MODEL_FOR_CAUSAL_LM_MAPPING` was `None`, but your code there clearly shows that it's importing the dict fine.\r\n\r\nAre you sure you're getting the same error as the commenter above?",
"> That's extremely strange - the error above said that `MODEL_FOR_CAUSAL_LM_MAPPING` was `None`, but your code there clearly shows that it's importing the dict fine.\r\n> \r\n> Are you sure you're getting the same error as the commenter above?\r\n\r\nsorry, I don't check clearly. My error msg is below:\r\nTraceback (most recent call last):\r\n File \"run_mlm_tf.py\", line 63, in <module>\r\n MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_LM_MAPPING.keys())\r\nAttributeError: 'NoneType' object has no attribute 'keys'\r\n\r\nI use your method and recheck:\r\nfrom transformers import MODEL_FOR_MASKED_LM_MAPPING\r\nprint(MODEL_FOR_MASKED_LM_MAPPING)\r\n\r\nthe output is None",
"Hi, sorry! I lost track of this issue because we marked it as closed, but I see we didn't resolve @hrdxwandg's problem. Are you still encountering the same error?",
"@Rocketknight1 ; sorry to bump this, but I am getting the same error with the latest transformers version"
] | 1,608 | 1,632 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.8
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No (using TPU)
- Using distributed or parallel set-up in script?: don't know
## Information
Model I am using (Bert, XLNet ...): gpt2
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: pre-training on the wikitext dataset
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a TPU
2. Install transformers and datasets
3. Run the following command
```bash
python3 run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
```
error message:
```
Traceback (most recent call last):
File "run_clm.py", line 51, in <module>
MODEL_CONFIG_CLASSES = list(MODEL_FOR_CAUSAL_LM_MAPPING.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
## Expected behavior
To start pretraining on the TPU
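A note on a likely cause, given the environment info above (PyTorch is reported as not installed): `run_clm.py` is a PyTorch example script, and when `torch` is unavailable, `transformers` substitutes dummy placeholder objects for the PyTorch APIs, in which case mappings such as `MODEL_FOR_CAUSAL_LM_MAPPING` end up as `None`. A quick way to check this hypothesis in the failing environment:
```python
# Hedged diagnostic sketch: if torch is missing, the PyTorch model mappings
# are expected to resolve to None (they come from the dummy placeholders).
import importlib.util

print("torch available:", importlib.util.find_spec("torch") is not None)

from transformers import MODEL_FOR_CAUSAL_LM_MAPPING
print(MODEL_FOR_CAUSAL_LM_MAPPING)
```
If `torch available` prints `False`, installing PyTorch (the XLA build, for TPU use) should make the mapping resolve to the real dict.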
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9218/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9217/comments | https://api.github.com/repos/huggingface/transformers/issues/9217/events | https://github.com/huggingface/transformers/pull/9217 | 771,433,755 | MDExOlB1bGxSZXF1ZXN0NTQyOTYzOTU2 | 9,217 | Fix documentation links always pointing to master. | {
"login": "sugeeth14",
"id": 22686557,
"node_id": "MDQ6VXNlcjIyNjg2NTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/22686557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sugeeth14",
"html_url": "https://github.com/sugeeth14",
"followers_url": "https://api.github.com/users/sugeeth14/followers",
"following_url": "https://api.github.com/users/sugeeth14/following{/other_user}",
"gists_url": "https://api.github.com/users/sugeeth14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sugeeth14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sugeeth14/subscriptions",
"organizations_url": "https://api.github.com/users/sugeeth14/orgs",
"repos_url": "https://api.github.com/users/sugeeth14/repos",
"events_url": "https://api.github.com/users/sugeeth14/events{/privacy}",
"received_events_url": "https://api.github.com/users/sugeeth14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for merging @LysandreJik but I want to know why the errors occurred when I ran `make style ` as I see in https://github.com/huggingface/transformers/pull/9217#discussion_r548898170"
] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a documentation issue: hyperlinks pointing to code files in the docs always point to the `master` branch, regardless of the documentation version being viewed.
Reference issue: https://github.com/huggingface/transformers/issues/7988
I found [extlinks](https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html) more appropriate than [rst_epilog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_epilog), which was suggested in the original issue, because hyperlink replacement is not possible with `rst_epilog`.
Stack Overflow reference: https://stackoverflow.com/questions/1227037/substitutions-inside-links-in-rest-sphinx
Although this fixes the issue as discussed, I am unable to fix it in the markdown-based pages, namely https://huggingface.co/transformers/master/contributing.html and https://huggingface.co/transformers/master/notebooks.html
Please suggest if it is possible to do that as well.
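For illustration, the `extlinks` approach boils down to something like the sketch below. This is a hand-written approximation, not the exact code from this PR; in particular, the role name `prefix_link` and the way the version tag is derived are assumptions:
```python
# docs/source/conf.py (sketch)
extensions = [
    # ... existing extensions ...
    "sphinx.ext.extlinks",
]

release = "4.2.0"  # normally taken from the package version
version_tag = "master" if "dev" in release else "v" + release

# Each use of :prefix_link:`path/to/file` expands to a version-aware URL
extlinks = {
    "prefix_link": (
        "https://github.com/huggingface/transformers/blob/" + version_tag + "/%s",
        None,
    )
}
```
Usage in an `.rst` file would then look like `` :prefix_link:`examples/seq2seq/finetune_trainer.py` ``.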
Fixes https://github.com/huggingface/transformers/issues/7988
@sgugger @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9217/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9217",
"html_url": "https://github.com/huggingface/transformers/pull/9217",
"diff_url": "https://github.com/huggingface/transformers/pull/9217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9217.patch",
"merged_at": 1609845528000
} |
https://api.github.com/repos/huggingface/transformers/issues/9216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9216/comments | https://api.github.com/repos/huggingface/transformers/issues/9216/events | https://github.com/huggingface/transformers/issues/9216 | 771,426,566 | MDU6SXNzdWU3NzE0MjY1NjY= | 9,216 | checkpoint callbacks | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In particular I need to define a callback with on_save function, for this I need to be able to acces \"is_world_process_zero\" and I am not sure how to do it. If I want to pass trainer to callback, how can I make sure this is passed? thanks",
"solved with understanding is_world_process_zero is accessible via state.is_world_process_zero "
] | 1,608 | 1,608 | 1,608 | NONE | null | Hi
I need to save more things during checkpointing, and I am using finetune_trainer.py, not the PyTorch Lightning one. Could you provide an example of how I can modify the checkpoint-saving part? Thanks.
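Based on the resolution in the comments above, a minimal sketch of such a callback (the extra file name and payload below are hypothetical placeholders, not code from finetune_trainer.py):
```python
import os
import torch
from transformers import TrainerCallback

class ExtraSaveCallback(TrainerCallback):
    """Writes additional state next to each checkpoint."""

    def on_save(self, args, state, control, **kwargs):
        # state.is_world_process_zero guards against duplicate writes
        # in distributed training, as noted in the comments above
        if state.is_world_process_zero:
            ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
            torch.save({"my_extra_state": 123}, os.path.join(ckpt_dir, "extra.pt"))

# Registered on the trainer with: trainer.add_callback(ExtraSaveCallback())
```
| {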
"url": "https://api.github.com/repos/huggingface/transformers/issues/9216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9216/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9215/comments | https://api.github.com/repos/huggingface/transformers/issues/9215/events | https://github.com/huggingface/transformers/issues/9215 | 771,415,258 | MDU6SXNzdWU3NzE0MTUyNTg= | 9,215 | File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 280, in _setup_backward_hooks assert p_tmp.grad_fn is not None | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As I tried to explain earlier it's not ready yet for general consumption.\r\n\r\nIf you want it sooner see: https://github.com/huggingface/transformers/issues/9156#issuecomment-748501582"
] | 1,608 | 1,608 | 1,608 | NONE | null | Hi
@stas00 @sgugger
I am testing the latest version of finetune_trainer.py on multiple GPUs, with this command:
`export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp
`
These two packages need to be added to requirements.txt:
pytorch-lightning==1.1.1
fairscale
but I am still getting the following error:
```
12/19/2020 16:56:16 - INFO - utils - using task specific params for translation: {}
Traceback (most recent call last):
File "finetune_trainer.py", line 353, in <module>
main()
File "finetune_trainer.py", line 291, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 672, in train
model = ShardedDDP(model, self.optimizer)
File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 99, in __init__
self._setup_backward_hooks()
File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 280, in _setup_backward_hooks
assert p_tmp.grad_fn is not None
AssertionError
12/19/2020 16:56:16 - INFO - __main__ - *** Train ***
Traceback (most recent call last):
File "finetune_trainer.py", line 353, in <module>
main()
File "finetune_trainer.py", line 291, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 672, in train
model = ShardedDDP(model, self.optimizer)
File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 99, in __init__
self._setup_backward_hooks()
File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 280, in _setup_backward_hooks
assert p_tmp.grad_fn is not None
AssertionError
Traceback (most recent call last):
File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/updated/bin/python', '-u', 'finetune_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-small', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--data_dir', 'wmt_en_ro', '--do_train', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_train_batch_size', '4', '--sortish_sampler', '--task', 'translation', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '500', '--sharded_ddp']' returned non-zero exit status 1.
```
thanks
The requirements error (raised when fairscale is not installed):
```
Traceback (most recent call last):
File "finetune_trainer.py", line 353, in <module>
main()
File "finetune_trainer.py", line 282, in main
data_args=data_args,
File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/seq2seq_trainer.py", line 58, in __init__
super().__init__(*args, **kwargs)
File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 299, in __init__
raise ImportError("Sharded DDP training requires fairscale: `pip install fairscale`.")
ImportError: Sharded DDP training requires fairscale: `pip install fairscale`.
Traceback (most recent call last):
File "finetune_trainer.py", line 353, in <module>
main()
File "finetune_trainer.py", line 282, in main
data_args=data_args,
File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/seq2seq_trainer.py", line 58, in __init__
super().__init__(*args, **kwargs)
File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 299, in __init__
raise ImportError("Sharded DDP training requires fairscale: `pip install fairscale`.")
ImportError: Sharded DDP training requires fairscale: `pip install fairscale`.
Traceback (most recent call last):
File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
main()
File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/updated/bin/python', '-u', 'finetune_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-small', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--data_dir', 'wmt_en_ro', '--do_train', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_train_batch_size', '4', '--sortish_sampler', '--task', 'translation', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '500', '--sharded_ddp']' returned non-zero exit status 1.
```
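One hypothesis worth checking, offered here with no certainty: `--freeze_embeds` leaves some parameters with `requires_grad=False`, and the failing `assert p_tmp.grad_fn is not None` is exactly what you would expect for a frozen parameter. A single-process sketch to test that guess (names and values are illustrative):
```python
import os
import torch
import torch.distributed as dist
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP

# Minimal single-process group so ShardedDDP can initialize
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29511")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 4)
model.weight.requires_grad_(False)  # mimic a parameter frozen by --freeze_embeds

optimizer = OSS(model.parameters(), optim=torch.optim.SGD, lr=1e-3)
ddp = ShardedDDP(model, optimizer)  # if the guess is right, the same assert fires here
```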
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9215/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9214/comments | https://api.github.com/repos/huggingface/transformers/issues/9214/events | https://github.com/huggingface/transformers/issues/9214 | 771,390,621 | MDU6SXNzdWU3NzEzOTA2MjE= | 9,214 | Save underlying BertModel only | {
"login": "pugantsov",
"id": 16597333,
"node_id": "MDQ6VXNlcjE2NTk3MzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/16597333?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pugantsov",
"html_url": "https://github.com/pugantsov",
"followers_url": "https://api.github.com/users/pugantsov/followers",
"following_url": "https://api.github.com/users/pugantsov/following{/other_user}",
"gists_url": "https://api.github.com/users/pugantsov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pugantsov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pugantsov/subscriptions",
"organizations_url": "https://api.github.com/users/pugantsov/orgs",
"repos_url": "https://api.github.com/users/pugantsov/repos",
"events_url": "https://api.github.com/users/pugantsov/events{/privacy}",
"received_events_url": "https://api.github.com/users/pugantsov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If your bert model is an instance of `BertModel` then you should be able to save it using `self.your_model.bert.save_pretrained(path)` and load using `.from_pretrained`",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | I currently have a custom model on top of a pretrained BERT model, which is effectively just dropout and a linear layer on top for classification. I want to be able to train this classifier, updating the underlying weights, but then save the `BertModel` underneath (without the linear classification layer) so that I can load it from file and use it as the input to the same custom model. Is there a way I can access and save the underlying transformer so it can be used again like this?
Code for clarification:
```python
class BERTBinaryClassifier(torch.nn.Module):
    def __init__(self, model_name_or_path: str):
        super(BERTBinaryClassifier, self).__init__()
        # ModelSelection.get_model is my own helper that just wraps
        # BertModel.from_pretrained(model_name_or_path)
        self.bert = ModelSelection.get_model(
            model_name=model_name_or_path)
        self.drop = torch.nn.Dropout(p=0.3)
        self.out = torch.nn.Linear(self.bert.config.hidden_size, 2).cuda()

    def forward(self, inputs):
        # BertModel returns (sequence_output, pooled_output, ...); the first
        # element is used here as the features for the classification head
        logits, embs, *_ = self.bert(**inputs)
        output = self.drop(logits)
        return self.out(output), embs
```
Right now I'm just outputting the logits and the hidden state for future usage, but I'd like to be able to use this same function to effectively load a `BertModel` like I do at the start here with `self.bert` (which just loads `BertModel.from_pretrained`), and to save my PyTorch classifier as a whole, separately.
Would it be as simple as, say, accessing `self.bert.bert` and then saving it that way?
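Following the suggestion in the comment above, a minimal sketch of the save/reload round trip (paths are placeholders; this assumes `self.bert` is a `BertModel`, i.e. a `PreTrainedModel`, and that `ModelSelection.get_model` forwards a path to `from_pretrained`):
```python
import torch

model = BERTBinaryClassifier("bert-base-uncased")
# ... train the classifier, which also updates the encoder weights ...

# Save only the underlying encoder (weights + config) in the usual HF format
model.bert.save_pretrained("path/to/bert-only")

# Optionally save the whole classifier separately as a plain state dict
torch.save(model.state_dict(), "path/to/classifier.pt")

# Later: rebuild the classifier on top of the fine-tuned encoder
reloaded = BERTBinaryClassifier("path/to/bert-only")
```
| {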
"url": "https://api.github.com/repos/huggingface/transformers/issues/9214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9214/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9213/comments | https://api.github.com/repos/huggingface/transformers/issues/9213/events | https://github.com/huggingface/transformers/issues/9213 | 771,387,662 | MDU6SXNzdWU3NzEzODc2NjI= | 9,213 | bert-mlm-converting from tensorflow to pytorch | {
"login": "ahmedkotb98",
"id": 42472093,
"node_id": "MDQ6VXNlcjQyNDcyMDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/42472093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahmedkotb98",
"html_url": "https://github.com/ahmedkotb98",
"followers_url": "https://api.github.com/users/ahmedkotb98/followers",
"following_url": "https://api.github.com/users/ahmedkotb98/following{/other_user}",
"gists_url": "https://api.github.com/users/ahmedkotb98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahmedkotb98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahmedkotb98/subscriptions",
"organizations_url": "https://api.github.com/users/ahmedkotb98/orgs",
"repos_url": "https://api.github.com/users/ahmedkotb98/repos",
"events_url": "https://api.github.com/users/ahmedkotb98/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahmedkotb98/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"hello, i convert bert model from tensorflow to pytorch to fine tune it and in train function specially in loss , pred_masks = model(inputs, labels) i got this error not enough values to unpack (expected 2, got 1) why model return one only it should return two one for loss and the second for pred_masks",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | ## Environment info
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9213/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9212/comments | https://api.github.com/repos/huggingface/transformers/issues/9212/events | https://github.com/huggingface/transformers/issues/9212 | 771,351,348 | MDU6SXNzdWU3NzEzNTEzNDg= | 9,212 | LXMERT cross_modality_matching logits order in code vs. documentation | {
"login": "LetiP",
"id": 16118202,
"node_id": "MDQ6VXNlcjE2MTE4MjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/16118202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LetiP",
"html_url": "https://github.com/LetiP",
"followers_url": "https://api.github.com/users/LetiP/followers",
"following_url": "https://api.github.com/users/LetiP/following{/other_user}",
"gists_url": "https://api.github.com/users/LetiP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LetiP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LetiP/subscriptions",
"organizations_url": "https://api.github.com/users/LetiP/orgs",
"repos_url": "https://api.github.com/users/LetiP/repos",
"events_url": "https://api.github.com/users/LetiP/events{/privacy}",
"received_events_url": "https://api.github.com/users/LetiP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | Issue #7266 evolved into an entirely different issue described below.
# Possible documentation error
Independently of considering #8333 or not, it is clear that the results of the cross-modality matching in LXMERT (for example in [this demo](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing)) are better (above random chance rather than below) when the first logit of `output_lxmert['cross_relationship_score']` (a torch.FloatTensor of shape (batch_size, 2)) is taken to represent *a mismatch* rather than *a match*.
This **contradicts the [documentation](https://huggingface.co/transformers/model_doc/lxmert.html)** which in my understanding assigns the first logit to `is_match` (True):
> cross_relationship_score – (torch.FloatTensor of shape (batch_size, 2)): Prediction scores of the textual matching objective (classification) head (scores of True/False continuation before SoftMax).
@LysandreJik, @eltoto1219: Is the documentation right or wrong? How was the model trained and which logit was intended to predict the match and which one the mismatch?
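For concreteness, here is a minimal probe of the two logits in question. The visual inputs below are zero placeholders, so the scores themselves are meaningless; this only shows where the two logits live, not which index means match:
```python
import torch
from transformers import LxmertTokenizer, LxmertForPreTraining

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForPreTraining.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("a photo of a cat", return_tensors="pt")
visual_feats = torch.zeros(1, 36, 2048)  # placeholder region features
visual_pos = torch.zeros(1, 36, 4)       # placeholder bounding boxes

out = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
print(out.cross_relationship_score)  # shape (1, 2): which index is "match"?
```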
I would be very happy to have a clear answer about the order of the logits in training and code and its correspondence to the documentation. I know how easy it is to flip the logits around and choose the one delivering better results on the data at hand. But in my understanding of scientific conduct, I cannot just choose this by wishful thinking and ignore the documentation: there might be other things going on that exhibit this behavior.
Thank you in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9212/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9211/comments | https://api.github.com/repos/huggingface/transformers/issues/9211/events | https://github.com/huggingface/transformers/pull/9211 | 771,325,824 | MDExOlB1bGxSZXF1ZXN0NTQyODg5OTc4 | 9,211 | [trainer] deepspeed integration | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thanks @stas00 for putting this together! I think there might be a few things we can do on the deepspeed side to smooth a few pain points out. Most of us our out of office until the new year but will definitely be taking a close look at this soon and help where we can.",
"Thank you, @jeffra! \r\n\r\nNew year sounds perfect as a time for you to make suggestions if any, but meanwhile I think it's coming along nicely. ",
"OK, so this PR also introduces a concept of `self.wrapped_model` so that we have less confusion about which is which in the trainer and user code. \r\n\r\n* `self.model` is always transformers model - this is the internal model\r\n* `self.wrapped_model` is a wrapped model - always the most external model which could be DDP(Transformers Model), DDP(Deepspeed(Transformers Model)), etc.\r\n\r\nIt's not documented yet, but @sgugger when you get a chance could you please check that it looks correct what I did here https://github.com/huggingface/transformers/pull/9211/commits/1510444399a4a088e6b14076bd857a47c3f2b1eb\r\n\r\nQuestions: \r\n\r\n1. is it correct that I set it to `None` if there is no wrapped model?\r\n2. would it be better to call it `model_wrapped` - so the two better align side by side during debug or IDE prediction completion engines?\r\n3. I'm not sure where to document this? And we should probably add a public API accessor?\r\n4. We can now probably remove and refactor the following code, as we now have a simpler way to get the internal model -\r\nhttps://github.com/huggingface/transformers/blob/cbe63949d76efd153a1f389f38fe9ce1287e06b0/src/transformers/trainer.py#L1635-L1651\r\n or is it used in some deep code where there is no trainer object? In which case this code won't work as it needs double unwrapping under deepspeed - `model.module.module` (since we have DDP too).\r\n \r\n I see it's used only in `floating_point_ops` which does have access to `self` so I'm not sure why it was needed in first place. Also in 2 tests, but that could be moved into the tests if need be.\r\n \r\nIf we want a general unwrap function it needs to do it recursively until there is no more `.module`.",
"Let me think.\r\nFor 1, I think the `wrapped_model` should be the model, in this case, just to avoid the inconvenience of testing if `None`.\r\nFor 2, I have no strong opinion, so you can pick the version you prefer.\r\nFor 3, none of the attributes of the `Trainer` are properly documented yet. This could be added in the main docstring\r\nFor 4, yes, absolutely. This was a quick fix that was merged when I didn't get much time to do a nice solution, I though I had removed all uses of that function.\r\n\r\nCould you add the `wrapped_model` or `model_wrapped` in a separate PR? This would be easier to follow and not highjack the discussion on the deepspeed integration. We can rebase this when that PR is merged.",
"> For 1, I think the `wraped_model` should be the model, in this case, just to avoid the inconvenience of testing if `None`.\r\n\r\nThen we somewhat lose information - `None` is telling us that notihng is wrapping the model. But I suppose we could achieve the same by `self.model == self.wrapped_mode` - OK, that works!\r\n\r\nThank you for the rest of the answers, @sgugger. Will integrate those and make a separate PR with wrapped_model.",
"@stas00 Should it be mentioned in some README/documentation that folks can only use DeepSpeed with the PyTorch `Trainer` and not the TF `Trainer`? There's a hard dependency of using `torch.distributed` with the NCCL backend to use DeepSpeed.\r\n\r\nSecondly, what is the plan in terms of introducing DeepSpeed in the `transformers` [`setup.py`](https://github.com/huggingface/transformers/blob/master/setup.py) and the PyTorch GPU [`Dockerfile`](https://github.com/huggingface/transformers/blob/master/docker/transformers-pytorch-gpu/Dockerfile)?\r\n\r\nWhile DeepSpeed has a pip installable PyPI package, IIRC it is highly recommended that it be installed from source. Also, in order to use certain features in DeepSpeed such as 1-bit Adam, there are certain special installations to be done that do not come with the PyPI package. Will this PR support every underlying DeepSpeed feature? If not, can the scope of the initial DeepSpeed integration be defined clearly in some README/documentation, while allowing for further iterations in future to enable the utilization of more DeepSpeed features with the `transformers` `Trainer`?",
"> @stas00 Should it be mentioned in some README/documentation that folks can only use DeepSpeed with the PyTorch Trainer and not the TF Trainer? There's a hard dependency of using torch.distributed with the NCCL backend to use DeepSpeed.\r\n\r\nYes, we should definitely be clear about that. thank you!\r\n\r\nAt the moment the idea is to put all the ZeRO related docs here: https://github.com/huggingface/transformers/pull/9208 (that PR covers fairscale at the moment)\r\n\r\n> Secondly, what is the plan in terms of introducing DeepSpeed in the transformers setup.py \r\n\r\nIt'll be up to users to install `deepspeed`, just like it's the case with `fairscale` or any other libraries the `transformers` core doesn't require. Currently if you use `--deepspeed` and you don't have it installed the trainer will assert with a suggestion to install that library.\r\n\r\n> and the PyTorch GPU Dockerfile?\r\n\r\nI have no idea. I don't see any reason why it can't be included. \r\n\r\nLet's do it in baby steps. First, make the support available, test it out, solve initial issues if any. Then worry about everything else?\r\n\r\n> While DeepSpeed has a pip installable PyPI package, IIRC it is highly recommended that it be installed from source. Also, in order to use certain features in DeepSpeed such as 1-bit Adam, there are certain special installations to be done that do not come with the PyPI package. Will this PR support every underlying DeepSpeed feature? If not, can the scope of the initial DeepSpeed integration be defined clearly in some README/documentation, while allowing for further iterations in future to enable the utilization of more DeepSpeed features with the transformers Trainer?\r\n\r\nAs I have shown in the example of the upcoming fairscale-support doc PR (we are waiting for fairscale to make a new pypi release before we merge it), we will document the same for DeepSpeed and address your questions. Your comments would be super-helpful for that document, so please save them for when we get to write that document. With your permission I can tag you on that future PR. Thank you.\r\n\r\nWrt to specifics let's see what ends up working out of box and what needs to be polished. I think the main issues will be bugs on the DS side. Otherwise there is a ton of features and I have only been testing a few.\r\n\r\nIf you feel inspired and are already experienced with DS it'd be awesome if you made a checklist of features and then between you and I, and anybody else who wants to contribute, test those features and check what's supported and report back to DS what is not. Since DS does most of the things on its own, I don't think there will be much to change in `transformers` Trainer once this PR is polished. I can be wrong of course.\r\n\r\n**edit**: actually there is no point waiting - I started adding notes into docs/source/training.rst in this PR. So already added a few of your comments - will need to expand those later.",
"> how is the optimizer/scheduler creation handled? E.g. how is the fact we don't apply weight decay to some parameters or the proper schedule with the number of training steps handled? I don't see it in the current version of the code since deepspeed is responsible for creating the optimizer and scheduler\r\n\r\nIt wasn't, but it is now. Please have a look at what I just committed https://github.com/huggingface/transformers/pull/9211/commits/869173f62373a068c2cb932680093e7fd1d4ab78\r\n\r\n- schedulers - I was able to remap 2 `constant_with_warmup` + `linear`\r\n- optimizers - I think only Adam is possible at the moment with our cl args - but users can do whatever they want in the config file\r\n- the main logic is that if either `optimizer` or `scheduler` is configured in the config file we keep those and don't override the corresponding parts - if they are missing - we build our own config.\r\n- one thing I could use your help with, @sgugger. To use `linear` I needed `num_training_steps` early in `__init__` before distributed is set up. I tried postponing ds init for the time we have `num_training_steps` available in `train`, but it was too late - must set up dist early in `__init__` - so I added a hacky `get_num_training_steps` - please have a look if you can think of a better way to handle this chicken-n-egg problem. Thank you!",
"> Left some comments. Not super happy about the `get_num_training_steps` method, but I don't see a way around it either.\r\n\r\nI agree. I tried to move `_init_deepspeed` into a later stage when we already have that figured out cleanly, but there is a chicken and egg problem there.\r\n\r\nI have made another commit to only run `get_num_training_steps` if absolutely necessary, and not by default for `--deepspeed`\r\n\r\n> Last thing I wonder is if it works okay for checkpointing (saving model/optimizer/scheduler after `save_steps` steps) and then resuming training? I.e., does DeepSpeed gets in the way of saving/reloading a model/optimzier state/scheduler state.\r\n\r\nIt's on the todo list in the OP, where I'm tracking what needs to be done. It's just a matter of doing it.\r\n",
"@sgugger, as `_init_deepspeed` was born during this PR's evolution and it continues evolving and growing - are you happy with it in the main file or should it be moved into some other file, say `trainer_integrations.py`?",
"@sgugger, I'd love some guidance from you to that last point\r\n \r\nWe have `deepspeed.DeepSpeedEngine.save_checkpoint` and `deepspeed.DeepSpeedEngine.load_checkpoint` ([doc](https://deepspeed.readthedocs.io/en/latest/model-checkpointing.html))\r\n\r\nSo in `_save_checkpoint` I will delegate to `deepspeed.DeepSpeedEngine.save_checkpoint` the 2 parts:\r\n```\r\n # Save model checkpoint\r\n # Save optimizer and scheduler\r\n```\r\nand leave the rest untouched, right?\r\n\r\nThen I see `_tune_save_checkpoint` - looks very similar to `_save_checkpoint` but it's much simpler - I guess doing the same here? \r\n\r\n`deepspeed.DeepSpeedEngine.save_checkpoint` saves/loads everything in one call - doesn't separate them into model/sched/optim -.\r\n\r\nAnd then on loading - there are quite a few places where this is happening. Any suggestions at how to approach this? Just match any place where there is `torch.load`?\r\n\r\nAlternatively, I can just give it a try and then you can comment on what I have missed / did wrong - if that sounds like a better use of your time.\r\n\r\nThank you!",
"For the checkpointing, `_tune_save_checkpoint` is only there for Ray tune saving of checkpoints, so it's a bit different.\r\n\r\nI think it's fine to delegate to DS the saving of model/optimizer/scheduler. For the reloading, I'd go for using `torch.load` when reloading the model at the end of training but use deepspeed for loading optimzier/scheduler (it will also reload the model, but it should match the previous one).",
"@sgugger, what should we do about tests - I only have a basic test that runs a full train/eval with `finetune_trainer.py` - like we have for `sharded_ddp`.\r\n1. do we want to test all the myriad of different config combinations - probably not since we can't control them anyway and perhaps it's best to do it on demand if and when we find bugs in our integration code.\r\n2. I could move it from `examples` to `tests` but then it'd be much more difficult to do a functional test I think - please correct me if I'm wrong. If there is an easy way then by all means - and `sharded_ddp` tests need to go there too then. we can also deal with this separately after concluding this PR as it's a project of its own.\r\n3. In either case - should we give it a dedicated test file/have integrations dedicated file/leave it for now as is? I think the latter.\r\n\r\nThe thing is we only pass on the configuration to DS and aren't really doing anything other than really getting out of its way, so there isn't much to test. Perhaps checkpointing should be tested as they will be part of the trainer code, which I think is the last big thing I need to integrate.\r\n\r\nThanks.\r\n",
"> Could we however move it to `integrations` since the trainer code is already quite long? It only seems to use `self` to access the `args` so we could have its signature be `(args, model)`\r\n\r\nIt needs `self` for the num of training steps calculations, and I tried to only run it if it's needed (linear scheduler), so I can't pre-calculate and pass it to the `_init_deepspeed` method as it'd be inefficient for code that doesn't need it. I'm waiting for the DS team to follow up on my question about the chicken and the egg - perhaps there will be a way to avoid that hack.\r\n\r\nOtherwise I'm in total agreement with you wrt to moving it away and having only the args it needs.",
"> It needs `self` for the num of training steps calculations, and I tried to only run it if it's needed (linear scheduler), so I can't pre-calculate and pass it to the `_init_deepspeed` method as it'd be inefficient for code that doesn't need it.\r\n\r\nThis is just a few math operations, I'd prefer we pass it either way to avoid the added complexity in the Trainer for everyone. \r\n\r\n> @sgugger, what should we do about tests\r\n\r\nFor now, a simple integration test seems enough (I imagine in the multi-GPU tests). It's going to be challenging to properly test all those frameworks that require multi GPU but this is a discussion for another PR. ",
"> , I'd prefer we pass it either way to avoid the added complexity in the Trainer for everyone.\r\n\r\nSorry I don't know what you mean when you say that. Would you please rephrase that sentence with explicit terms - pass what/where - \"it\" being what? Thank you.",
"> Sorry I don't know what you mean when you say that. Would you please rephrase that sentence with explicit terms - pass what/where - \"it\" being what? Thank you.\r\n\r\nI was speaking of the number of training steps, always pass it even if we end up not needing it because the scheduler does without.",
"> I was speaking of the number of training steps, always pass it even if we end up not needing it because the scheduler does without.\r\n\r\nThank you for clarifying. \r\n\r\nI disagree. I imagine most of the time users will run their own ds_config.json with custom config, which would not use that expensively calculated number - so why introduce a repeated wasted overhead at calculating something that will never be used? (or am I exaggerating and it's not that much of an overhead?)\r\n\r\nWhy is it so bad if we pass the `trainer` object if it helps to optimize speed? I understand that since we are talking about changing `_init_deepspeed` not to be a trainer method it'd be good to make it so - but there is nothing wrong with it taking a `trainer` object as the first arg, no?\r\n\r\nI hope this whole thing might be resolved in a different, much cleaner way if I can only get help on it here https://github.com/microsoft/DeepSpeed/issues/633 - this is a hack we are doing that shouldn't be needed in first place...\r\n",
"Oh, if you prefer sending it `self`, I have no objections. We do this too for some of the other integrations method.",
"So our training steps calculation hack is no longer needed as @jeffra just happened to add `deepspeed.init_distributed` recently which is what we were missing, and adding which lead to a much cleaner code. yay!",
"OK, this is pretty much done, just waiting for some feedback from the DeepSpeed team on the best way to handle checkpoints, but the code is in place.\r\n\r\nSo please kindly send me any final comments/suggestions/requests and let's merge it. Please re-invite old reviewers and add new ones if needed.\r\n\r\nThank you!",
"deepspeed-0.3.10 has been just released by @jeffra on pypi - I verified that it works - so we are ready to merge this whenever you're happy with it.\r\n\r\nit'd be great if you tried running it too, since I think it has been only me running it, so my work is only as good as my environment is and I may not know of other culprits - e.g. I can't test with pytorch < pt-nightly since my card doesn't work with those pytorch versions.\r\n\r\nYou will need 2+ gpus to use it\r\n\r\nFirst install it:\r\n```\r\npip install deepspeed\r\n```\r\nAt the very least do the test:\r\n```\r\ncd examples/seq2seq\r\npytest -sv test_finetune_trainer.py -k deepspeed\r\n```\r\n\r\nOr if you want to fiddle with the normal run, here is what I have been using.\r\n```\r\ncd examples/seq2seq\r\nwget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz\r\ntar -xzvf wmt_en_ro.tar.gz\r\nexport BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 --n_test 100 --deepspeed ds_config.json --fp16 --save_steps 1\r\n```\r\n",
"@sgugger, \r\n\r\n1. While working on docs I discovered that DS does its own gradient clipping (that doc was buried and I didn't see it) so I had to undo the code in the trainer that did that on behalf of DS - just skipping it\r\n2. I did a major rewrite/expansion of the docs (including the fairscale section) - so please kindly have a look. It's mainly mirroring the config logic in the integration code.\r\n3. In the docs I used consistently Trainer (upcase) to refer to HF trainer. I know you didn't like it when I did that for Issue in a different PR, let me know if you prefer it to be a lowercase trainer.\r\n\r\nWhile this PR is perfectly ready for a final review, I need to wait for https://github.com/microsoft/DeepSpeed/pull/656 to be answered before we can merge this as I'm unsure about their defaults for gradient clipping.\r\n\r\nThank you.\r\n",
"> Went through the documentation and left comments. \r\n\r\nAwesome - thank you - all integrated.\r\n\r\n> On the optimizer side, it doesn't seem like DeepSpeed supports AdamW from what you're saying, so we should document the default optimizer is changed at the very beginning of the DeepSpeed section. It does change drastically the value of `weight_decay` to use.\r\n\r\nI found a way to use AdamW, thank you for catching that, @sgugger. I documented the nuances.\r\n",
"I think the DeepSpeed team is on vacation, as there is no response since several days. And since I have no way of talking to anyone there, I have no way of knowing when they will be back. So I will go ahead and merge this so that others can start experimenting and then we can fix whatever needs to be fixed when I get the gradient clipping Issue answered.",
"Amazing work @stas00 !"
] | 1,608 | 1,610 | 1,610 | CONTRIBUTOR | null | This PR adds experimental support for Deepspeed <https://github.com/microsoft/deepspeed>, whose main feature is ZeRO covered by the paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He](https://arxiv.org/abs/1910.02054).
The recently added sharded DDP support ([fairscale](https://github.com/facebookresearch/fairscale)) also implements parts of ZeRO; Deepspeed implements all of ZeRO.
I haven't experimented enough yet, but it indeed delivers incredible results.
For example, I can fit about a 5-8 times bigger batch onto the same hardware compared to the same code running without deepspeed, and the speedup is huge too. In the following example I was able to get a 4.5x speedup on training, and ~2x on validation/testing:
```
# baseline
export BS=3; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python \
-m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path \
sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro \
--do_eval --do_predict --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 \
--learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --src_lang en_XX --task translation \
--test_max_target_length 128 --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 2000 \
--n_val 2000 --n_test 2000
2020-12-18 22:31:40 | INFO | __main__ | train_runtime = 144.9132
2020-12-18 22:37:10 | INFO | __main__ | val_runtime = 329.8146
2020-12-18 22:42:37 | INFO | __main__ | test_runtime = 326.6212
# deepspeed
export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed \
./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 \
--data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 \
--learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --src_lang en_XX --task translation \
--test_max_target_length 128 --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 2000 \
--n_test 2000 --deepspeed ds_config.json
2020-12-18 22:51:46 | INFO | __main__ | train_runtime = 32.6825
2020-12-18 22:54:47 | INFO | __main__ | val_runtime = 180.5917
2020-12-18 22:57:51 | INFO | __main__ | test_runtime = 183.7731
```
The bleu eval scores were slightly better than the baseline (~0.5 point higher), but it's not enough to make any conclusions based on a single run.
The cool thing is that deepspeed does everything by itself, even the `--fp16` handling, so really it was all about getting out of its way; thus, the main part of the integration was disabling a lot of the things the trainer normally does when `--deepspeed` is enabled.
Note the different invocation pattern. If normally we run distributed as:
```
python -m torch.distributed.launch --nproc_per_node=2 ./program.py args
```
deepspeed performs its own DDP internally, and requires the program to be started with:
```
deepspeed ./program.py args
```
The only thing I'm not sure about with this PR is that deepspeed enables all of its features via a JSON config file, so I'm not sure where to stash a sample one. I guess I will just add it to the documentation. Currently I put one under `examples/seq2seq/ds_config.json` since that's where the test that needs it lives.
But once this is merged, all interested parties can start experimenting with various features, and it won't impact the `transformers` code; they will just need to tweak `ds_config.json`. And we convert many trainer cl args into the DS config on the fly.
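For reference, a minimal config along these lines might look like the sketch below. This is illustrative only, not the exact file shipped under `examples/seq2seq`; the batch-size and optimizer entries are omitted because, as noted above, many trainer cl args are converted into the DS config on the fly:
```json
{
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "gradient_clipping": 1.0
}
```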
There surely will be competition between the fairscale and deepspeed integrations. So far, from the few experiments I did, `deepspeed` allows for a bigger batch size than fairscale.
To install deepspeed you can just do `pip install deepspeed` - I'm not sure if all bug fixes are in the release. We can make a request to release a new version when this is merged.
If the build fails I recommend pre-compiling its CUDA extensions (otherwise they get built at run time via PTX) via master:
```
git clone https://github.com/microsoft/deepspeed
cd deepspeed
DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e . 2>&1 | tee build.log
```
If you want a faster build add an env var `TORCH_CUDA_ARCH_LIST` with the cuda compute capabilities you need, e.g. I do:
```
TORCH_CUDA_ARCH_LIST="6.1;8.6" DS_BUILD_OPS=1 pip install --no-clean --no-cache -v --disable-pip-version-check -e . 2>&1 | tee build.log
```
It was awesome that @sgugger had just added `fairscale` support, so it was much easier to do the same for deepspeed after seeing how `fairscale` was integrated. I'm appreciating the work you have done, Sylvain.
----------------------
Do try it so we get better testing!
You will need 2+ GPUs to use it.
First install it:
```
pip install deepspeed
```
At the very least do the test:
```
cd examples/seq2seq
pytest -sv test_finetune_trainer.py -k deepspeed
```
Or if you want to fiddle with the normal run, here is what I have been using.
```
cd examples/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 \
./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro \
--do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 \
--learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \
--predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro \
--test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 --n_test 100 \
--deepspeed ds_config.json --fp16 --save_steps 1
```
----------------------
Questions that need to be addressed so that all Trainer features continue to work under deepspeed.
* [ ] a notebook with benchmarks was requested
Probably at a later time; my setup of unevenly sized GPUs doesn't lend itself to impressive benchmarks. Maybe someone will send me another RTX 3090 card ;)
@sgugger, @LysandreJik, @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9211/reactions",
"total_count": 10,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 6,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9211/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9211",
"html_url": "https://github.com/huggingface/transformers/pull/9211",
"diff_url": "https://github.com/huggingface/transformers/pull/9211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9211.patch",
"merged_at": 1610507119000
} |
https://api.github.com/repos/huggingface/transformers/issues/9210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9210/comments | https://api.github.com/repos/huggingface/transformers/issues/9210/events | https://github.com/huggingface/transformers/issues/9210 | 771,318,697 | MDU6SXNzdWU3NzEzMTg2OTc= | 9,210 | MLFlow logger breaks training | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | I'm running into two issues when I try to train with mlflow installed.
The main one is that transformers logs every single parameter, and mlflow has restrictions on parameter size. So when I tried to train an `AutoModelForSequenceClassification` using the code below
```
from transformers import AutoConfig, AutoModelForSequenceClassification

num_labels = 25
config = AutoConfig.from_pretrained(
    './models/roberta',  # local checkpoint directory
    num_labels=num_labels)
model = AutoModelForSequenceClassification.from_pretrained(
    './models/roberta',
    config=config)
```
An exception is thrown by `mlflow/utils/validation.py`: specifically, the value of the parameter with key "id2label" is longer than 250 characters (mlflow enforces a character limit, and the whole dict is sent as a string).
The second issue is that if I resume training after this error, I get a further mlflow exception saying that the run is still in progress; I have to manually call
`mlflow.end_run()`
I'm not sure how to work around this (for now I've uninstalled mlflow). Can I tell the trainer not to log certain parameters?
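One possible workaround (a sketch; it assumes the callback API available in transformers v4 and is untested here) is to drop the MLflow integration callback from the trainer:
```python
from transformers import Trainer
from transformers.integrations import MLflowCallback

# `model`, `training_args` and `train_dataset` are the objects from your existing setup
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.remove_callback(MLflowCallback)  # disables mlflow logging entirely
trainer.train()
```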
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9210/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9209/comments | https://api.github.com/repos/huggingface/transformers/issues/9209/events | https://github.com/huggingface/transformers/issues/9209 | 771,314,862 | MDU6SXNzdWU3NzEzMTQ4NjI= | 9,209 | Error while loading model file - .ckpt file :: Missing key(s) in state_dict | {
"login": "adithyaan-creator",
"id": 54103522,
"node_id": "MDQ6VXNlcjU0MTAzNTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/54103522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adithyaan-creator",
"html_url": "https://github.com/adithyaan-creator",
"followers_url": "https://api.github.com/users/adithyaan-creator/followers",
"following_url": "https://api.github.com/users/adithyaan-creator/following{/other_user}",
"gists_url": "https://api.github.com/users/adithyaan-creator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adithyaan-creator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adithyaan-creator/subscriptions",
"organizations_url": "https://api.github.com/users/adithyaan-creator/orgs",
"repos_url": "https://api.github.com/users/adithyaan-creator/repos",
"events_url": "https://api.github.com/users/adithyaan-creator/events{/privacy}",
"received_events_url": "https://api.github.com/users/adithyaan-creator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @adithyaan-creator \r\n\r\nSeems like you used `pytorch-lightning` for training, `pl` prefixes every `state_dict` key with `model.` so to load the `pl` checkpoint to hf format, remove the `model.` prefix from `state_dict` keys.\r\n\r\n```python3\r\ndef remove_prefix(text: str, prefix: str):\r\n if text.startswith(prefix):\r\n return text[len(prefix) :]\r\n return text # or whatever\r\n\r\nstate_dict = {remove_prefix(k, \"model.\"): v for k, v in state_dict.items()}\r\nhf_model.load_state_dict(state_dict)\r\nhf_model.save_pretrained(save_path)\r\n```\r\nAlso, would be nice if you could post code snippet instead of screenshots :)\r\n\r\n",
"Hi @patil-suraj,\r\nI tried it with loading the 'state_dict' item to the model, and getting the following error \"Unexpected key(s) in state_dict: \"decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight\". \".\r\n\r\nThe code is :: \r\ndef remove_prefix(text: str, prefix: str):\r\n if text.startswith(prefix):\r\n return text[len(prefix) :]\r\n return text # or whatever\r\n\r\nckpt = {remove_prefix(k, \"model.\"): v for k, v in ckpt['state_dict'].items()}\r\n\r\nmodel.load_state_dict(ckpt)\r\nmodel.save_pretrained(\"/content/model\")`\r\n ",
"Hey @adithyaan-creator,\r\n\r\nYou can ignore this warning, see: https://github.com/huggingface/transformers/pull/9231",
"@patrickvonplaten the code that was merged wasnt working out, but since the weight was unnecessary I deleted the weight layer and it worked out.\r\n'del ckpt[\"decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight\"]'\r\nThanks.",
"@adithyaan-creator Hi, I am having the same issue now. Could you elaborate on how and where did you add 'del ckpt[\"decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight\"]' ? Thank you very much!"
] | 1,608 | 1,616 | 1,608 | NONE | null | @julien-c @patrickvonplaten @thomwolf
transformers version == 4.0.0
I trained a T5 model for classification and I am trying to load the checkpoint model saved after 3 epochs.
While trying to load the model, I get the error below.

The model file is a ckpt file.
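For reference, here is a cleaned-up sketch of the prefix-stripping fix suggested in the comments (the checkpoint path and model class are assumptions; adjust to your setup):
```python
import torch
from transformers import T5ForConditionalGeneration

ckpt = torch.load("last.ckpt", map_location="cpu")  # hypothetical pytorch-lightning checkpoint path
state_dict = {
    (k[len("model."):] if k.startswith("model.") else k): v
    for k, v in ckpt["state_dict"].items()
}
model = T5ForConditionalGeneration.from_pretrained("t5-base")  # same base model used for training
model.load_state_dict(state_dict, strict=False)  # strict=False tolerates extra/missing keys
model.save_pretrained("./hf_model")
```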
Any help is appreciated.
Thanks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9209/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9208/comments | https://api.github.com/repos/huggingface/transformers/issues/9208/events | https://github.com/huggingface/transformers/pull/9208 | 771,312,144 | MDExOlB1bGxSZXF1ZXN0NTQyODgwODE4 | 9,208 | [docs] outline sharded ddp doc | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,609 | 1,609 | CONTRIBUTOR | null | This PR provides an initial outline of HF Trainer integration starting with ZeRO. We have fairscale's Sharded optimizer/gradients supported already and deepspeed is coming
We won't merge this until fairscale merged all the required fixes and released a new version, but I thought it'd be good to start the doc going so it's ready when fairscale is ready.
I hope to submit a deepspeed integration shortly as well, so we will extend it then with deepspeed info. edit (https://github.com/huggingface/transformers/pull/9211)
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9208/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9208",
"html_url": "https://github.com/huggingface/transformers/pull/9208",
"diff_url": "https://github.com/huggingface/transformers/pull/9208.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9208.patch",
"merged_at": 1609896856000
} |
https://api.github.com/repos/huggingface/transformers/issues/9207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9207/comments | https://api.github.com/repos/huggingface/transformers/issues/9207/events | https://github.com/huggingface/transformers/issues/9207 | 771,304,235 | MDU6SXNzdWU3NzEzMDQyMzU= | 9,207 | Saving Pretrained Tokenizer | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I realised that\r\n\r\n`tokenizer.model.save(\"./tokenizer\")`\r\n\r\nIs unnecessary. I've started saving only the `tokenizer.json` since this contains not only the merges and vocab but also the pipeline.\r\n\r\nAnd I noticed that `tokenizer.save_pretrained()` has a parameter `legacy_format` which defaults to True. When I set it to false it properly round trips (i.e. saves out the unified tokenizer.json rather than the model files (merges.txt and vocab.json)\r\n\r\nThe only issue now is if I create a trainer\r\n\r\n```\r\ntrainer = Trainer(\r\n model=model,\r\n tokenizer=tokenizer\r\n args=training_args,\r\n train_dataset=tokenized_datasets[\"train\"],\r\n eval_dataset=tokenized_datasets[\"validation\"] ,\r\n data_collator=data_collator,\r\n)\r\n\r\nlogger.info(\"*** Train ***\")\r\ntrainer.train(model_path=model_path)\r\ntrainer.save_model()\r\n```\r\n\r\nThe last line doesn't accept kwargs so it saves the tokenizer in the legacy format.\r\n\r\nA workaround is to do it manually ie.\r\n\r\n```\r\ntrainer = Trainer(\r\n model=model,\r\n #tokenizer=tokenizer\r\n args=training_args,\r\n train_dataset=tokenized_datasets[\"train\"],\r\n eval_dataset=tokenized_datasets[\"validation\"] ,\r\n data_collator=data_collator,\r\n)\r\n\r\nlogger.info(\"*** Train ***\")\r\ntrainer.train(model_path=model_path)\r\ntrainer.save_model()\r\ntokenizer.save_pretrained(output_dir, legacy_format=False)\r\n```\r\n\r\nOnly issue is I don't see a workaround to ensure the checkpoints are created using `legacy_format=False`\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | I created a custom `tokenizers.Tokenizer` and saved it as follows
```
tokenizer.model.save("./tokenizer")
tokenizer.save("./tokenizer.json")
```
This produces 3 files: merges.txt, vocab.json and tokenizer.json.
Then I created a `transformers.RobertaTokenizerFast` and saved it to the same folder
```
tokenizer = RobertaTokenizerFast.from_pretrained("./tokenizer")
tokenizer.save_pretrained("./tokenizer")
```
This adds `special_tokens_map.json` and `tokenizer_config.json`
I then saved it to another folder to simulate what happens when I train my model
```
tokenizer.save_pretrained("./model")
tokenizer = RobertaTokenizerFast.from_pretrained("./model")
```
What I noticed is that `tokenizer_config.json` contains a key `name_or_path` which still points to `./tokenizer`, so `RobertaTokenizerFast.from_pretrained("./model")` seems to be loading files from two places (`./model` and `./tokenizer`).
I'm not sure if this is expected; it seems that `tokenizer_config.json` should be updated in `save_pretrained`, and `tokenizer.json` should be saved with it?
Or perhaps this is just an issue because I'm training my tokenizer in a subdirectory of the model folder?
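A quick round-trip check (a sketch continuing from the snippets above; `legacy_format=False` is the workaround discussed in the comments) to see which files actually get written:
```python
import json
import os

tokenizer.save_pretrained("./model", legacy_format=False)  # writes the unified tokenizer.json
print(sorted(os.listdir("./model")))
with open("./model/tokenizer_config.json") as f:
    print(json.load(f).get("name_or_path"))  # check whether it still points to "./tokenizer"
```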
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9207/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9206/comments | https://api.github.com/repos/huggingface/transformers/issues/9206/events | https://github.com/huggingface/transformers/issues/9206 | 771,290,740 | MDU6SXNzdWU3NzEyOTA3NDA= | 9,206 | DataTrainingArguments: __init__() got an unexpected keyword argument 'evaluate_during_training' | {
"login": "githubrandomuser2017",
"id": 25097908,
"node_id": "MDQ6VXNlcjI1MDk3OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/25097908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubrandomuser2017",
"html_url": "https://github.com/githubrandomuser2017",
"followers_url": "https://api.github.com/users/githubrandomuser2017/followers",
"following_url": "https://api.github.com/users/githubrandomuser2017/following{/other_user}",
"gists_url": "https://api.github.com/users/githubrandomuser2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubrandomuser2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubrandomuser2017/subscriptions",
"organizations_url": "https://api.github.com/users/githubrandomuser2017/orgs",
"repos_url": "https://api.github.com/users/githubrandomuser2017/repos",
"events_url": "https://api.github.com/users/githubrandomuser2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubrandomuser2017/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, will fix that notebook on Monday. Thanks for reporting!",
"Note that this notebook is quite old, so you might prefer the most recent one [here](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb). I fixed the issue by pinning the transformers version inside it, we will keep it for legacy reasons but not update it.",
"Same problem on this notebook: [https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing).\r\nThis notebook is referenced here: [https://huggingface.co/transformers/training.html#additional-resources]().\r\nAlso, if I understand correctly, `evaluate_during_training=True` should be replaced with `evaluation_strategy='epoch'`, and then it will work fine. ",
"I think this notebook is community-contributed (it's not in our repos) so it's up to the person who wrote it to port it to the most recent version of transformers."
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger
## Information
Model I am using (Bert, XLNet ...): Distilbert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the HF example Notebook here: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb
That Notebook is linked from here: https://huggingface.co/transformers/examples.html
2. At this line:
```python
training_args = TrainingArguments(
output_dir="./models/model_name",
overwrite_output_dir=True,
do_train=True,
do_eval=True,
per_gpu_train_batch_size=32,
per_gpu_eval_batch_size=128,
num_train_epochs=1,
logging_steps=500,
logging_first_step=True,
save_steps=1000,
evaluate_during_training=True,
)
```
an error is raised:
```
TypeError Traceback (most recent call last)
<ipython-input-6-e83ba093226a> in <module>()
14 logging_first_step=True,
15 save_steps=1000,
---> 16 evaluate_during_training=True,
17 )
TypeError: __init__() got an unexpected keyword argument 'evaluate_during_training'
```
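For what it's worth, here is a sketch of the corresponding arguments for transformers v4 (assuming the rest of the notebook is unchanged; as the comments note, `evaluate_during_training` was replaced by `evaluation_strategy`):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./models/model_name",
    overwrite_output_dir=True,
    do_train=True,
    do_eval=True,
    per_device_train_batch_size=32,   # the per_gpu_* variants are deprecated in v4
    per_device_eval_batch_size=128,
    num_train_epochs=1,
    logging_steps=500,
    logging_first_step=True,
    save_steps=1000,
    evaluation_strategy="steps",      # replaces evaluate_during_training=True
)
```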
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The HF example Notebook should complete successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9206/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9206/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9205/comments | https://api.github.com/repos/huggingface/transformers/issues/9205/events | https://github.com/huggingface/transformers/issues/9205 | 771,286,176 | MDU6SXNzdWU3NzEyODYxNzY= | 9,205 | [model_utils] very slow model instantiation | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2690307185,
"node_id": "MDU6TGFiZWwyNjkwMzA3MTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Performance",
"name": "Performance",
"color": "207F32",
"default": false,
"description": ""
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Doesn't that script also loads and preprocess the data? From what you're reporting, I don't interpret this as \"transformers takes a long time to load the model\" (since the line that does that takes the same time as a torch load) but as \"stuff that happens in that script before the model loading takes a lot of time\" (which is probably data preprocessing + the 3s to import transformers if TF is in your env). Or am I missing something?\r\n",
"Perhaps my first post is confusing, what I did is bracketing the `torch.load` call in modeling_utils.py:\r\n```\r\n start_time = time.time()\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n end_time = time.time() - start_time\r\n```\r\nSo all the other stuff isn't being measured, just the `torch.load` call. ",
"Ah, I understand better. I don't think your comparison is fair: `AutoModel.from_pretrained` does two things: creating a model and filling it with the weights. From a small experiment in timing on my side, I believe all the time is spent in the model creation. So you should compare the timing of creating the model and loading the weights inside to have something that's apple to apple.",
"I removed the 2nd part that was showing the same issue from a different angle, as it appears to just confuse and isn't contributing to understanding the issue at hand.\r\n\r\nThere is just `state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")` call - and nothing else. On its own:\r\n`python -c \"import torch; torch.load('/hf/transformers-master/data/distill-mbart-en-ro-12-4/pytorch_model.bin')\"`\r\n it takes ~1s, the exact same call inside `modeling_utils` takes 22+ secs.",
"OK, somehow I made a mistake and was taking the snapshot of startime before `model = cls(config, *model_args, **model_kwargs)` and not `torch.load()` - my apologies :( and thank you for double checking my invalid report.\r\n\r\n```\r\n import time\r\n t0 = time.time()\r\n model = cls(config, *model_args, **model_kwargs)\r\n t1 = time.time()\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n t2 = time.time()\r\n print(f\"cls init { round(t1-t0, 4)}\")\r\n print(f\"load { round(t2-t1, 4)}\")\r\n import sys\r\n sys.exit(0)\r\n\r\n```\r\n\r\n```\r\ncls init 21.2055\r\nload 0.5074\r\n```\r\n\r\nSo it's setting up the model that takes so long, just as you said.\r\n\r\nCan this somehow be sped up? I was integrating deepspeed and re-running the same command repeatedly and 23 extra secs of waiting to just discover that something is off was very painful for debugging. All the failures happened at much later stages. I worked around it it by switching to a tiny model, but even that takes some secs.\r\n\r\nCan we think of a way to make an image and load it rather than rebuilding the model from scratch? So we torch.load the weights, but also cache the model image itself and load it too, rather then create it anew. It seems to be so wasteful and slow if I'm not debugging the model creation but say tuning up something in the trainer and I want the other parts to load blazingly fast and get me to the point of interest quickly. What would be the best way to approach such need?\r\n",
"So doing profiling on model instantiation code it can be seen that `_init_weights` is where some 75% of that slowdown happens\r\n\r\n```\r\n ncalls tottime percall cumtime percall filename:lineno(function)\r\n 354 18.942 0.054 18.942 0.054 {method 'normal_' of 'torch._C._TensorBase' objects}\r\n 225 2.286 0.010 2.286 0.010 {method 'uniform_' of 'torch._C._TensorBase' objects}\r\n````\r\n\r\nSo we are completely wasting time doing init weights, since we are immediately replacing them. (with the exception to `SinusoidalPositionalEmbedding` which do not get loaded from the pretrained model). \r\n\r\nIf you prefer the visual version:\r\n\r\n\r\n\r\nChances are that model init needs to be made context aware and not init weights which will be immediately replaced. Thoughts?\r\n\r\nThat would make `transformers` so much faster to start! (e.g. think the model pages website which takes forever to load a model).\r\n\r\nThe profiling was done with:\r\n```\r\n# prep\r\npip install graphviz gprof2dot\r\ncat <<EOT > prog\r\nfrom transformers import AutoModelForSeq2SeqLM\r\nAutoModelForSeq2SeqLM.from_pretrained(\"sshleifer/distill-mbart-en-ro-12-4\")\r\nEOT\r\n\r\n# text profile\r\nUSE_TF=0 PYTHONPATH=src python -m cProfile -s tottime prog > profile.txt\r\nhead -10 profile.txt\r\n\r\n# visual profile\r\nUSE_TF=0 PYTHONPATH=src python -m cProfile -o profile.pstats prog\r\ngprof2dot -f pstats profile.pstats | dot -Tsvg -o callgraph.svg\r\ndisplay callgraph.svg\r\n```\r\n\r\n",
"If we see a significant gain in loading time, maybe it's worth to explore a way to only apply `init_weights` on missing layers. Not sure how easy it would be to implement it though...\r\n\r\nMaybe a `init_weights` function arg in `__init__` might make sense:\r\n```python\r\nmodel = cls(config, init_weights=False, *model_args, **model_kwargs) # don't call init_weights, but initialize all weights to zero because it's much faster\r\n# load weights into model and get missing layers\r\n# init missing layers\r\n```",
"Yeah Patrick's suggestion is probably the best, though I'm not sure it can easily be achieved in the current API. Note that this is only one slowdown at the beginning of training, so I don't think this should be high priority.",
"I totally get it that it's not high priority, since most people don't care for a slow start when they run it non-stop for hours - it only affects people who need a quick start - which is the case when debugging something or as I suggested the demo function on the model pages which takes a really long time to load.\r\n\r\nIn the case of BART, its deterministic segments do the init internally, so it's enough to just monkeypatch as a proof of concept:\r\n\r\n```\r\n # modeling_utils.py::from_pretrained\r\n init_weights_orig = PreTrainedModel.init_weights\r\n def init_weights_pretrained(self):\r\n # self.apply(self._init_weights)\r\n if self.config.pruned_heads: self.prune_heads(self.config.pruned_heads)\r\n self.tie_weights()\r\n \r\n PreTrainedModel.init_weights = init_weights_pretrained\r\n model = cls(config, *model_args, **model_kwargs)\r\n PreTrainedModel.init_weights = init_weights_orig\r\n\r\n```\r\nand this command:\r\n```\r\nPYTHONPATH=../../src USE_TF=0 time python -c 'from transformers import AutoModelForSeq2SeqLM; AutoModelForSeq2SeqLM.from_pretrained(\"sshleifer/distill-mbart-en-ro-12-4\")'\r\n```\r\ngoes from 25sec to 8secs. The instantiation goes from 22 secs to 5 secs.\r\n\r\nThere are few `uniform_` calls left which account for 2.3 extra secs, which if shaves off we should be down to 2-3 secs (from 22!).\r\n\r\nI quickly checked that the core functions normally - same scores - well, I did just one finetune_trainer run.\r\n\r\nOne way is to solve this as @patrickvonplaten suggested, and I'm also thinking of changing the design a bit. So that each model has a normal `init_weights` and `init_weights_pretrained` - then it's very clear to the developer what goes where and then simply invoke one or the other depending on the context. And then it's just a matter of choosing how to signal the context.\r\n",
"I don't see how you could have an `init_weights_pretrained`: it depends on the checkpoint you pass: if you pass the checkpoint of a `BertModel` to `BertForMaskedLM`, you just have one bias to initialize (if weights are tied). But if you pass a checkpoint of a `BertForMaskedLM` checkpoint then you have nothing to initialize. And the same holds for every variant (which would have different specific weights to initialize in case of a pretrained model) so I don't really see how you can do this API-wise.\r\n\r\nThe only way I see through it is to allow the `init_weights` to get the list of model parameters to randomly initialize, but since we use the `apply` method afterward (and rely on it to get modules inside each model specific `_init_weights` method) I don't see how to use it properly. It would probably require some clever recursive method.\r\n\r\nAgain, lots of headaches and possibilities for errors for an end result that doesn't strike me as high priority.\r\n\r\n> it only affects people who need a quick start - which is the case when debugging something or as I suggested the demo function on the model pages which takes a really long time to load.\r\n\r\nIt doesn't take 25 seconds on a tiny model, only a big one. So I'd suggest debugging on a tiny model :-)",
"Thank you both for entertaining possible approaches and suggesting that you are not quite seeing a smooth solution. I just don't know enough about all of it, so I'm surely missing on cases I haven't thought of, but somehow in my mind it looks simple. The devil is in the details.\r\n\r\n> It doesn't take 25 seconds on a tiny model, only a big one. So I'd suggest debugging on a tiny model :-)\r\n\r\nUnfortunately the tiny model approach doesn't work with debugging OOM in deepspeed, as its configuration correlates to the model size. I guess it's not special to deepspeed at all. So the tiny model trick works for checking mechanics (i.e. that the code compiles), but isn't helpful for OOM debug.",
"@patrickvonplaten, @sgugger, @LysandreJik - could we please revisit this - working on making t5-11b train was painful - it was taking really really really long time to init the model, just to drop it and replace with pre-trained weights. Transformers is mainly about pre-trained models, so perhaps this can be made somehow configurable? \r\n\r\nWe know when a pretrained model is loaded, so why not propagate that information and let the model know it's being loaded in pre-trained mode, so that it could skip any weight inits that are going to be replaced anyway?\r\n\r\nAnd while we are at it, I don't suppose there is a way to involve more than one CPU core in loading the model? I guess that would be a question for pytorch.\r\n\r\nThank you!",
"I'm happy to add such a featurue. It should be feasible to only initialize those layers that are not in the saved `.pt` file.",
"Indeed, this would be a welcome feature, big models aren't going away.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"@patrickvonplaten, I should probably work on it - since it doesn't seem like you will have time any time soon.",
"It's on my To-Do List, but still don't think, I'll be able to take a look within the next 2,3 weeks - sorry :-/ If you find some time for this, it would be great",
"Ihave finetuned a longformer encoder decoder model, and trying to convert it into an api but model takes too long to load that api throws a not responding error.\r\n\r\nKindly if anyone can guide me on how can I reduce the time for the model to load.\r\nThank You in advance.",
"Hello @AyeshaSarwar,\r\n\r\ncould you please use the forum: https://discuss.huggingface.co/ instead for such questions? We don't support Flask compatibility in `transformers`. Please keep in mind that the issues are mainly used for issues related to just `transformers`.\r\n\r\nThanks",
"Im on the same boat as @stas00 . I understand that the code need to maintain a wider compatibility across the oceans of models, but people needs a working workaround before an elegant solution born into reality. I believe as huggingface slowly graduating from pure research field, more and more people are being hurt by the tremendous model initialization time. \r\nHoping for a change",
"@DeXtmL, this thread is 2 years old - the particular problem I raised in this Issue has been solved a long time ago. The model is no longer being init'ed twice.\r\n\r\nIf you feel something is still slow please start a new Issue.\r\n\r\nthank you."
] | 1,608 | 1,664 | 1,620 | CONTRIBUTOR | null | For some reason I'm noticing a very slow model instantiation time.
For example to load `shleifer/distill-mbart-en-ro-12-4` it takes
* 21 secs to instantiate the model
* 0.5sec to `torch.load` its weights.
If I'm not changing how the model is created and want to quickly fast forward to the area of debug how could these slow parts be cached and not rebuilt anew again and again?
But also it looks like we are doing a completely wasteful operation of init_weights, which immediately get overwritten with pretrained model weights (https://github.com/huggingface/transformers/issues/9205#issuecomment-748741195) (for the use case of pre-trained model).
(I initially made a mistake and thought that it was `torch.load` that had an issue, but it's `cls(config, *model_args, **model_kwargs)`) - thank you, @sgugger - so this post has been edited to reflect reality. So if you're joining later you can skip the comments up to https://github.com/huggingface/transformers/issues/9205#issuecomment-748722644 and continue from there)
@patrickvonplaten, @sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9205/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9205/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9204/comments | https://api.github.com/repos/huggingface/transformers/issues/9204/events | https://github.com/huggingface/transformers/issues/9204 | 771,264,897 | MDU6SXNzdWU3NzEyNjQ4OTc= | 9,204 | Load saved Pytorch model into Tensorflow or convert from Pytorch model to TF | {
"login": "Jess0-0",
"id": 29790071,
"node_id": "MDQ6VXNlcjI5NzkwMDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/29790071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jess0-0",
"html_url": "https://github.com/Jess0-0",
"followers_url": "https://api.github.com/users/Jess0-0/followers",
"following_url": "https://api.github.com/users/Jess0-0/following{/other_user}",
"gists_url": "https://api.github.com/users/Jess0-0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jess0-0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jess0-0/subscriptions",
"organizations_url": "https://api.github.com/users/Jess0-0/orgs",
"repos_url": "https://api.github.com/users/Jess0-0/repos",
"events_url": "https://api.github.com/users/Jess0-0/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jess0-0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @Jess0-0,\r\n\r\nYou can load TF checkpoints into PT and vice-versa via:\r\n\r\n`XLMRoberta.from_pretrained(...., from_tf=True)`\r\n\r\nor \r\n\r\n`TFXLMRoberta.from_pretrained(...., from_pt=True)`.\r\n\r\nSince `XLMRobertaForSequenceClassification` is just an alias of `RobertaForSequenceClassification` there should be no problem in doing \r\n\r\n```python\r\nTFRobertaForSequenceClassification.from_pretrained(\"<your/path/to/saved/xlm/roberta/pytorch/dir>\", from_pt=True)\r\n```"
] | 1,608 | 1,610 | 1,610 | NONE | null | Hi,
Thanks for this awesome framework!
I have trained and saved an XLMRoberta model in PyTorch and I'm wondering if there is any way I can load the model into TFRobertaForSequenceClassification class. Or if there are ways to convert the checkpoint to TensorFlow checkpoints so that it can be load by TF2.
I came across this file https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py
and I'm wondering if there are any docs or instructions to use the script.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9204/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9204/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9203/comments | https://api.github.com/repos/huggingface/transformers/issues/9203/events | https://github.com/huggingface/transformers/pull/9203 | 771,260,315 | MDExOlB1bGxSZXF1ZXN0NTQyODQ3MjUw | 9,203 | [finetune trainer] better logging and help | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | As a follow up to this [thread](https://discuss.huggingface.co/t/summarization-is-finetune-trainer-py-accepting-length-arguments-correctly/2879/) this PR:
* documents that `--val_max_target_length` is also used during `generate`
* disambiguates `use_task_specific_params` logger so that it's clear that it dumps just the initial params and that those could be overridden by user's cl args
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9203/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9203",
"html_url": "https://github.com/huggingface/transformers/pull/9203",
"diff_url": "https://github.com/huggingface/transformers/pull/9203.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9203.patch",
"merged_at": 1608488908000
} |
https://api.github.com/repos/huggingface/transformers/issues/9202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9202/comments | https://api.github.com/repos/huggingface/transformers/issues/9202/events | https://github.com/huggingface/transformers/issues/9202 | 771,239,511 | MDU6SXNzdWU3NzEyMzk1MTE= | 9,202 | [wanted] explicit docs for inherited methods | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [
"I don't agree on an always-document-everything basis, as the documentation page of each model is already quite long (which may be the reason not a lot of people seem to be reading them...) In the case of generate, since there is no link to the part where the generate method is documented, we could add the documentation of the generate method there, but in the case of tokenizers for instance, I prefer our approach where we document just the main methods in the subclasses and point to the superclass for less important ones.\r\nWhich again also goes in the direction of documenting generate in the models that implement it, so why not go with it. I don't think it should be harder than adding `generate` to the members field, though sphinx might not like `GenerationMixin`. If anyone wants to give it a try, I'll happily review a PR.",
"`generate` was just an example, so let's not solve just a sub-case if possible.\r\n\r\nAs mentioned in OP if you feel that including full docs is too much - a short entry that links to the super class or mixin's method is a satisfactory solution, but the user shouldn't hunt for where that entry might be elsewhere. \r\n\r\nAlso for methods such as `T5ForConditionalGeneration.generate` mentioned in free prose it'd be awesome to have it linked to the right doc entry. I understand it should happen automatically with sphinx if there is an actual entry to link to. \r\n\r\n> the documentation page of each model is already quite long (which may be the reason not a lot of people seem to be reading them...)\r\n\r\nI don't know what you mean by \"not a lot of people seem to be reading them\" - do you imply that users ask a lot of questions that are already answered by the existing documentation or do you have some other way to measure the \"not a lot\" part?\r\n\r\n",
"> I don't know what you mean by \"not a lot of people seem to be reading them\" - do you imply that users ask a lot of questions that are already answered by the existing documentation\r\n\r\nYes, I was implying exactly that. For the general rule, I'd keep it to: main parts of the API of a class should be documented in which class (within reasons) and more minor parts should be documented once in the superclass/mixin with a link from all the subclasses (like what is done for `Tokenizer`)\r\n",
"I agree the process is long an feels like a lot of unnecessary steps.",
">> I don't know what you mean by \"not a lot of people seem to be reading them\" - do you imply that users ask a lot of questions that are already answered by the existing documentation\r\n> Yes, I was implying exactly that. \r\n\r\nDo we have a way to reach out and ask why users don't per-use docs before asking questions? \r\n\r\nIs this:\r\n\r\n* an issue of the quality/readability/navigationability of the docs\r\n* users are just lazy (as a virtue)\r\n* users don't know that there are docs to search through\r\n* the search engine doesn't have the smarts to show most relevant info and shows too many irrelevant hits? many docs search engines are pretty crappy in my experience. It's usually the best to use google with: \r\n\r\n`\"my search query\" site:https://huggingface.co/transformers/`\r\n\r\nto find the best information. (swap in whatever other docs site you need, I wasn't singling out transformers, it was just an example)\r\n\r\n> For the general rule, I'd keep it to: main parts of the API of a class should be documented in which class (within reasons) and more minor parts should be documented once in the superclass/mixin with a link from all the subclasses (like what is done for Tokenizer)\r\n\r\nThat works 100% for me. \r\n\r\nWhat needs to be done to make this happen? Definitely no rush.",
"I don't know whether this helps, but when I managed a huge open source project many years back I trained our users to per-use docs by almost never replying with the repeat information they requested but with a direct link to where it was in the documentation. Over time it was very clear to the community that the information is available in the docs and less and less questions were asked and more and more one line answers with the link to the right info were posted by various users of that community.\r\n\r\nWhen one answers in clear text it signals users that the information is in your brain and not outside of it.\r\n\r\nThat is just how my experience had been, YMMV.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"@sgugger, how do we resolve this - please let me know if I can be of help.",
"I think we should agree to disagree on this point. I didn't see anything in the last survey that showed a strong desire from the user to have the documentation of each model page be even longer, quite the opposite, so I stand by what I said with the generic methods like `generate` be documented in the base classes unless they have some model-specific behavior that needs to override that documentation. It's the same for the tokenizer encode and call methods.",
"I think there are two different possibilities here:\r\n1. duplicating content - you disagree with - OK\r\n2. having a placeholder with a link to the main location of this entry\r\n\r\nFor example, when documentation refers to `T5ForConditionalGeneration.generate` it could:\r\n1. link directly to the general `generate` doc entry\r\n2. link to an entry `generate` in the t5 doc which will link to the main location of this entry\r\n\r\nWhy make the user work extra hard searching for something, when it can be automated and get the user what they need in 1 or 2 quick clicks.\r\n\r\nI'm not asking for a hypothetical nice-to-have feature, I often find myself frustrated when I can't quickly link to a method when I address someone's Issue and have to search for it. So it's a very selfish request. And I'm willing to work for it, since it'll save my time and minimize frustration where it doesn't have to happen.",
"> For example, when documentation refers to T5ForConditionalGeneration.generate it could:\r\n> 1. link directly to the general generate doc entry\r\n\r\nI am not sure I know how to do that in sphinx, but if it's possible, I'm all for it!\r\n\r\n> 2. link to an entry generate in the t5 doc which will link to the main location of this entry\r\n\r\nThis one will require writing a custom docstring for the generate method of T5. Probably more possible and can be automated with a decorator to use on the classes with a generate method I think.",
"Great. I will research it then and report back if I find a way. \r\n\r\nThank you for your feedback, @sgugger "
] | 1,608 | 1,622 | 1,622 | CONTRIBUTOR | null | # 🚀 Feature request
The HF approach is to unroll most of the code for the ease of understanding, but somehow this is not the case with docs.
If possible, could we explicitly add documentation for inherited methods in the specific model documentation page?
e.g. why https://huggingface.co/transformers/model_doc/t5.html doesn't have `T5ForConditionalGeneration.generate` documented - sure one eventually figures out it's https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate but why make user's life so difficult when the docs are autogenerated anyway.
In the worst case there could be an entry for `T5ForConditionalGeneration.generate` with the link to https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate, but that is not ideal since one has to leave the main doc of the model, which makes it much harder to jump around its different parts.
IMHO, it'd greatly improve user's experience to have all the functionality documentation supported by a model in one page of that model.
And xrefs are super-useful too, e.g. [t5 pre-amble](https://huggingface.co/transformers/model_doc/t5.html#overview) mentions `T5ForConditionalGeneration.generate `but it's not linked to anywhere.
Thank you!
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9202/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9201/comments | https://api.github.com/repos/huggingface/transformers/issues/9201/events | https://github.com/huggingface/transformers/issues/9201 | 771,232,178 | MDU6SXNzdWU3NzEyMzIxNzg= | 9,201 | when to use sortish sampler | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Is there any update on this? \r\n\r\nSeems like it allows dynamic batching. ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | Hi,
I would appreciate an addition to the documentation on when to use the sortish sampler, which platforms it works on, and how much it impacts speed. This is related to the seq2seq folder of Hugging Face, for `Seq2seqDataset`.
thanks
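For readers landing here, a minimal illustrative sketch of the "sortish" idea (this is *not* the library's implementation, just the concept the docs should cover): shuffle the examples, then sort by length inside large chunks, so each batch contains similar-length sequences (less padding) while the epoch order stays mostly random.
```python
import random

# Illustrative sketch only (not the transformers implementation): sort by
# length within shuffled mega-chunks so batches are padding-efficient while
# the overall ordering remains mostly random.
def sortish_indices(lengths, batch_size, chunks_per_megachunk=50):
    idxs = list(range(len(lengths)))
    random.shuffle(idxs)
    mega = batch_size * chunks_per_megachunk
    pieces = [idxs[i : i + mega] for i in range(0, len(idxs), mega)]
    # Sort each mega-chunk by descending length, then concatenate.
    return [i for piece in pieces for i in sorted(piece, key=lambda j: -lengths[j])]
```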
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9201/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9201/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9200/comments | https://api.github.com/repos/huggingface/transformers/issues/9200/events | https://github.com/huggingface/transformers/issues/9200 | 771,227,462 | MDU6SXNzdWU3NzEyMjc0NjI= | 9,200 | Beam search fails when using model parallelism | {
"login": "TobiasNorlund",
"id": 2678217,
"node_id": "MDQ6VXNlcjI2NzgyMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2678217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TobiasNorlund",
"html_url": "https://github.com/TobiasNorlund",
"followers_url": "https://api.github.com/users/TobiasNorlund/followers",
"following_url": "https://api.github.com/users/TobiasNorlund/following{/other_user}",
"gists_url": "https://api.github.com/users/TobiasNorlund/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TobiasNorlund/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TobiasNorlund/subscriptions",
"organizations_url": "https://api.github.com/users/TobiasNorlund/orgs",
"repos_url": "https://api.github.com/users/TobiasNorlund/repos",
"events_url": "https://api.github.com/users/TobiasNorlund/events{/privacy}",
"received_events_url": "https://api.github.com/users/TobiasNorlund/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As the trace suggests, the error seem to come from the `_reorder_cache` method in `generation_utils.py`. Since the model is parallelized among multiple devices, it fails since the device of `beam_idx` and `layer_past` don't match for all layers.\r\n\r\nI just tried to modify line 229 in `generation_utils.py` to:\r\n```\r\nreturn tuple(layer_past.index_select(1, beam_idx.to(layer_past.device)) for layer_past in past)\r\n```\r\nwhich seems to work. \r\nI'm happy to file a PR with this change if you approve. Please let me know if there is anything I should be aware of, or pay extra attention to.",
"FWIW, this fix doesn't currently work for T5, as the fix to `_reorder_cache` is not reflected in the `modeling_t5.py` file. Following the above, changing [this line](https://github.com/huggingface/transformers/blob/fa84540e98a6af309c3007f64def5011db775a70/src/transformers/models/t5/modeling_t5.py#L1679) to `layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)),` appears to fix it.\r\n\r\n@patrickvonplaten ",
"@OyvindTafjord - would you mind opening a new PR for it? :-)"
] | 1,608 | 1,620 | 1,608 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.4.0-194-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, two GTX 1080, on a single node
- Using distributed or parallel set-up in script?: Using model parallelism through `model.parallelize()`
### Who can help
@LysandreJik
@alexorona
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [ ] my own task or dataset:
## To reproduce
The recent (and awesome!) model `parallelize()` doesn't seem to work with beam search decoding at the moment. The behavior can be reproduced on the official `huggingface/transformers-pytorch-gpu:4.1.1` docker image by running the following (on a machine with multiple GPUs):
```python
import transformers
tokenizer = transformers.GPT2Tokenizer.from_pretrained("gpt2")
model = transformers.GPT2LMHeadModel.from_pretrained("gpt2")
model.parallelize()
input_ids = tokenizer.encode("This is a test", return_tensors="pt").to("cuda:0")
model.generate(input_ids, num_beams=2)
```
This raises the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 612, in generate
**model_kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 1088, in beam_search
model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 229, in _reorder_cache
return tuple(layer_past.index_select(1, beam_idx) for layer_past in past)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 229, in <genexpr>
return tuple(layer_past.index_select(1, beam_idx) for layer_past in past)
RuntimeError: Input, output and indices must be on the current device
```
## Expected behavior
The expected behavior is to not raise an error, but instead correctly return the beam search decoding.
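Based on the fix described in the comments, a minimal sketch of the change to `_reorder_cache` in `generation_utils.py` (moving `beam_idx` to each past tensor's device before the `index_select`):
```python
# Sketch of the fix from the discussion: under model parallelism the cached
# layer_past tensors live on different devices, so beam_idx must be moved to
# each tensor's device before indexing.
def _reorder_cache(past, beam_idx):
    return tuple(
        layer_past.index_select(1, beam_idx.to(layer_past.device))
        for layer_past in past
    )
```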
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9200/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9199/comments | https://api.github.com/repos/huggingface/transformers/issues/9199/events | https://github.com/huggingface/transformers/pull/9199 | 771,215,551 | MDExOlB1bGxSZXF1ZXN0NTQyODEyMjg4 | 9,199 | [t5 doc] typos | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | This PR fixes a few run away backticks
Sylvain, why do we not have documentation for `T5ForConditionalGeneration.generate`? This doc is trying to link to it, but there is no such entry in https://huggingface.co/transformers/model_doc/t5.html
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9199",
"html_url": "https://github.com/huggingface/transformers/pull/9199",
"diff_url": "https://github.com/huggingface/transformers/pull/9199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9199.patch",
"merged_at": 1608336207000
} |
https://api.github.com/repos/huggingface/transformers/issues/9198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9198/comments | https://api.github.com/repos/huggingface/transformers/issues/9198/events | https://github.com/huggingface/transformers/pull/9198 | 771,189,349 | MDExOlB1bGxSZXF1ZXN0NTQyNzg5OTE2 | 9,198 | [run_glue] add speed metrics | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | This PR starts to sync with recent changes in trainer+finetune_trainer.py
* train: (sync with finetune_trainer):
   - prints and saves train speed metrics - needed for benchmarking (illustrative sample below)
- saves the state,
* eval: sorts metrics logging info
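An illustrative (not actual) sample of the saved speed metrics mentioned above: the names follow the `*_runtime` / `*_samples_per_second` convention used by `finetune_trainer`, and the values here are made up.
```python
# Illustrative only: the shape of the speed metrics this PR saves for
# benchmarking; keys follow the finetune_trainer convention, values invented.
metrics = {
    "train_runtime": 1183.2,           # seconds spent in training
    "train_samples_per_second": 27.4,
    "epoch": 3.0,
}
```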
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9198/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9198",
"html_url": "https://github.com/huggingface/transformers/pull/9198",
"diff_url": "https://github.com/huggingface/transformers/pull/9198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9198.patch",
"merged_at": 1608340170000
} |
https://api.github.com/repos/huggingface/transformers/issues/9197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9197/comments | https://api.github.com/repos/huggingface/transformers/issues/9197/events | https://github.com/huggingface/transformers/pull/9197 | 771,123,018 | MDExOlB1bGxSZXF1ZXN0NTQyNzM1ODkx | 9,197 | [RAG] Add Ray implementation for distributed retrieval | {
"login": "amogkam",
"id": 8068268,
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amogkam",
"html_url": "https://github.com/amogkam",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"repos_url": "https://api.github.com/users/amogkam/repos",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sgugger @patrickvonplaten @LysandreJik @lhoestq ",
"Nice, good to merge then!",
"Awesome, thank you so much for the reviews @lhoestq @patrickvonplaten -- happy holidays!",
"Thanks guys!",
"@amogkam @patrickvonplaten I need some help to implement an end-to-end retrieval training feature for the rag with Ray.\r\n\r\nHow can I run document encoding and indexing with an updated doc-encoder (context encoder network that kept frozen in the original RAG) using a Ray actor separated from the main training process? \r\n\r\nHow can I access the document index inside Ray actors during the training incase I want to update the index, say in every 5000 steps. \r\n\r\n",
"@shamanez could you open a new issue to track this?",
"@richardliaw \r\n\r\nI have already opened one a few weeks ago. Please refer to this [issue](https://github.com/huggingface/transformers/issues/9646)\r\n\r\nI added a new issue explaining the exact problem in [this](https://github.com/huggingface/transformers/issues/10135)"
] | 1,608 | 1,613 | 1,608 | COLLABORATOR | null | # What does this PR do?
This PR adds a new distributed retriever implementation for RAG built on Ray, as an alternative to the current retriever implementation that uses torch.distributed. With Ray it's possible to load the index on multiple processes instead of just the rank 0 training worker, allowing fine-tuning to scale out better to multiple GPUs and also allowing the index to potentially fit in GPU memory. This also removes a core dependency on PyTorch, allowing a TensorFlow implementation of `finetune.py`.
This PR also makes changes to support finetune.py with PyTorch Lightning >v1.0.
A benchmark of PyTorch distributed retrieval vs. Ray distributed retrieval:

## Implementation Details
In the current PyTorch retrieval implementation, the index is loaded once, on just the rank 0 training worker. Training worker 0 gathers the inputs from all other workers, performs the index lookup, and scatters the results back to the other workers.

With the Ray implementation, the index is loaded on *separate* processes, which are referred to as Ray actors. Each training worker randomly selects a retrieval actor to query for documents and Ray handles all the communication between the processes. Because the index can be loaded in *multiple* processes, training can scale up since no synchronization needs to happen for the index lookup.

Note that PyTorch Lightning still handles distributed *training*, while Ray manages distributed *retrieval*. Because PTL calls the entire training script multiple times under the hood, we have to use Ray's named actors feature (https://docs.ray.io/en/master/actors.html?highlight=named%20actors#named-actors), allowing the retrieval actors to be referenced by all training processes. The use of named actors is necessitated by how PTL handles distributed training, and a simpler approach could probably be used for a TensorFlow implementation.
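A minimal sketch of the named-actor mechanism, with illustrative (not the PR's actual) class and actor names: an actor created under a fixed name can be looked up by that name from any other process on the Ray cluster, which is what lets every PTL-spawned training process reach the same retrieval workers.
```python
import ray

ray.init()

@ray.remote
class RetrievalWorker:
    """Stand-in for a process that holds the document index in memory."""

    def __init__(self):
        self.index = {}  # placeholder for the loaded index

    def retrieve(self, query):
        return self.index.get(query)

# The driver creates the actor once under a well-known name...
worker = RetrievalWorker.options(name="retrieval_worker_0").remote()

# ...and any other process (e.g. a PTL training worker) fetches it by name.
same_worker = ray.get_actor("retrieval_worker_0")
result = ray.get(same_worker.retrieve.remote("some query"))
```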
## Testing Strategy
Unit tests were added to `test_distributed_retriever.py`. Note that the local Ray cluster for the tests had to be started with `local_mode=True` because the test file modifies `sys.path` and these changes are not propagated to remote processes. See https://stackoverflow.com/questions/54338013/parallel-import-a-python-file-from-sibling-folder for more info.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9197/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9197",
"html_url": "https://github.com/huggingface/transformers/pull/9197",
"diff_url": "https://github.com/huggingface/transformers/pull/9197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9197.patch",
"merged_at": 1608543571000
} |
https://api.github.com/repos/huggingface/transformers/issues/9196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9196/comments | https://api.github.com/repos/huggingface/transformers/issues/9196/events | https://github.com/huggingface/transformers/pull/9196 | 771,098,115 | MDExOlB1bGxSZXF1ZXN0NTQyNzE2MDc4 | 9,196 | Add timing inside Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"may I ask for one more bit, while you're at it - sorting the metrics before printing them out?\r\n\r\nI added this already into the final json file writing, but it'd make it easier to read the info logs.\r\n\r\nNow we have:\r\n\r\n```\r\n2020-12-18 11:51:55 | INFO | __main__ | val_loss = 368.4116\r\n2020-12-18 11:51:55 | INFO | __main__ | val_bleu = 26.3465\r\n2020-12-18 11:51:55 | INFO | __main__ | val_gen_len = 31.2\r\n2020-12-18 11:51:55 | INFO | __main__ | val_runtime = 22.1214\r\n2020-12-18 11:51:55 | INFO | __main__ | val_samples_per_second = 9.041\r\n2020-12-18 11:51:55 | INFO | __main__ | epoch = 1.0\r\n2020-12-18 11:51:55 | INFO | __main__ | val_n_objs = 200\r\n```\r\n\r\nas compared to sorted:\r\n```\r\n2020-12-18 11:51:55 | INFO | __main__ | epoch = 1.0\r\n2020-12-18 11:51:55 | INFO | __main__ | val_bleu = 26.3465\r\n2020-12-18 11:51:55 | INFO | __main__ | val_gen_len = 31.2\r\n2020-12-18 11:51:55 | INFO | __main__ | val_loss = 368.4116\r\n2020-12-18 11:51:55 | INFO | __main__ | val_n_objs = 200\r\n2020-12-18 11:51:55 | INFO | __main__ | val_runtime = 22.1214\r\n2020-12-18 11:51:55 | INFO | __main__ | val_samples_per_second = 9.041\r\n```\r\n\r\nor I could make a PR later if it's too unrelated... I probably should do that and not waste your time.\r\n\r\nThanks."
] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
Add timing reports for training/evaluation and test inside the Trainer. Also, change the default repr of `TrainingArguments` to avoid printing the deprecated arguments.
There is a breaking change in this PR: the output of `Trainer.train` gains a new field `metrics`, so the length of the namedtuple changes. I don't think it's too bad since all scripts and examples I've seen never store the result of this `train` method. After discussion with @LysandreJik we proposed to merge this breaking change and revert it before the next release if users complain. | {
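A hedged usage sketch of the new field (`trainer` is assumed to be an already-constructed `Trainer`):
```python
# Sketch only: after this PR, the namedtuple returned by Trainer.train()
# carries a new `metrics` field, so code that unpacks the tuple by length
# may need updating.
train_result = trainer.train()
print(train_result.metrics)  # timing metrics added by this change
```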
"url": "https://api.github.com/repos/huggingface/transformers/issues/9196/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9196/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9196",
"html_url": "https://github.com/huggingface/transformers/pull/9196",
"diff_url": "https://github.com/huggingface/transformers/pull/9196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9196.patch",
"merged_at": 1608322239000
} |
https://api.github.com/repos/huggingface/transformers/issues/9195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9195/comments | https://api.github.com/repos/huggingface/transformers/issues/9195/events | https://github.com/huggingface/transformers/issues/9195 | 771,066,100 | MDU6SXNzdWU3NzEwNjYxMDA= | 9,195 | Error "if input.dim() == 2 and bias is not None" | {
"login": "renjiege",
"id": 24770461,
"node_id": "MDQ6VXNlcjI0NzcwNDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/24770461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renjiege",
"html_url": "https://github.com/renjiege",
"followers_url": "https://api.github.com/users/renjiege/followers",
"following_url": "https://api.github.com/users/renjiege/following{/other_user}",
"gists_url": "https://api.github.com/users/renjiege/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renjiege/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renjiege/subscriptions",
"organizations_url": "https://api.github.com/users/renjiege/orgs",
"repos_url": "https://api.github.com/users/renjiege/repos",
"events_url": "https://api.github.com/users/renjiege/events{/privacy}",
"received_events_url": "https://api.github.com/users/renjiege/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please try\r\n`_, cls_hs = self.bert(sent_id, attention_mask=mask)`\r\nto\r\n`_, cls_hs = self.bert(sent_id, attention_mask=mask)[:2]`\r\nor\r\n`_, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)`",
"It fixes the issue. Thanks!",
"> Please try\r\n> `_, cls_hs = self.bert(sent_id, attention_mask=mask)`\r\n> to\r\n> `_, cls_hs = self.bert(sent_id, attention_mask=mask)[:2]`\r\n> or\r\n> `_, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)`\r\n\r\nYou the man! I also got stuck in this issue for an hour, and your solution just fixes it perfectly! \r\nThanks man!"
] | 1,608 | 1,612 | 1,608 | NONE | null | Pytorch version: pytorch-1.7.1-py3.8_cuda11.0.221_cudnn8.0.5_0
Transformer version: 4.0.0
Code:
```
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import AutoModel, BertTokenizerFast
class BERT_Arch(nn.Module):
def __init__(self, bert):
super(BERT_Arch, self).__init__()
self.bert = bert
# dropout layer
self.dropout = nn.Dropout(0.1)
# relu activation function
self.relu = nn.ReLU()
# dense layer 1
self.fc1 = nn.Linear(768,512)
# dense layer 2 (Output layer)
self.fc2 = nn.Linear(512,2)
#softmax activation function
self.softmax = nn.LogSoftmax(dim=1)
#define the forward pass
def forward(self, sent_id, mask):
#pass the inputs to the model
_, cls_hs = self.bert(sent_id, attention_mask=mask)
x = self.fc1(cls_hs)
x = self.relu(x)
x = self.dropout(x)
# output layer
x = self.fc2(x)
# apply softmax activation
x = self.softmax(x)
return x
# import BERT-base pretrained model
bert = AutoModel.from_pretrained('bert-base-uncased')
# pass the pre-trained BERT to our defined architecture
model = BERT_Arch(bert)
# push the model to GPU (the `device` below is an assumed definition; it was
# not included in the snippet as posted)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
# dataLoader for train set (train_data, train_sampler and batch_size come from
# earlier, unshown parts of the original script)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
for step,batch in enumerate(train_dataloader):
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
preds = model(sent_id, mask)
```
Error:
> ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-46-3656917b982b> in <module>
2 batch = [r.to(device) for r in batch]
3 sent_id, mask, labels = batch
----> 4 preds = model(sent_id, mask)
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-43-05830d6f294e> in forward(self, sent_id, mask)
21 #pass the inputs to the model
22 _, cls_hs = self.bert(sent_id, attention_mask=mask)
---> 23 x = self.fc1(cls_hs)
24 x = self.relu(x)
25 x = self.dropout(x)
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
~/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1686 if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
1687 return handle_torch_function(linear, tens_ops, input, weight, bias=bias)
-> 1688 if input.dim() == 2 and bias is not None:
1689 # fused op is marginally faster
1690 ret = torch.addmm(bias, input, weight.t())
AttributeError: 'str' object has no attribute 'dim'
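Per the fix suggested in the comments, the root cause is that in `transformers` v4 the model returns a `ModelOutput` by default, and unpacking it yields string keys rather than tensors. A minimal corrected call:

```python
# Fix from the discussion: request tuple outputs explicitly (alternatively,
# slice the output object with [:2]) so cls_hs is a tensor, not a string key.
_, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)
```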
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9195/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9194/comments | https://api.github.com/repos/huggingface/transformers/issues/9194/events | https://github.com/huggingface/transformers/issues/9194 | 771,042,301 | MDU6SXNzdWU3NzEwNDIzMDE= | 9,194 | Loading MPNet from disc: ValueError: An instance of tokenizer class MPNetTokenizer cannot be converted in a Fast tokenizer instance. | {
"login": "nreimers",
"id": 10706961,
"node_id": "MDQ6VXNlcjEwNzA2OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/10706961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nreimers",
"html_url": "https://github.com/nreimers",
"followers_url": "https://api.github.com/users/nreimers/followers",
"following_url": "https://api.github.com/users/nreimers/following{/other_user}",
"gists_url": "https://api.github.com/users/nreimers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nreimers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nreimers/subscriptions",
"organizations_url": "https://api.github.com/users/nreimers/orgs",
"repos_url": "https://api.github.com/users/nreimers/repos",
"events_url": "https://api.github.com/users/nreimers/events{/privacy}",
"received_events_url": "https://api.github.com/users/nreimers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looking into it!\r\n\r\nAn easy fix for now would be the following:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\nlocal_dir = 'mpnet-model/'\r\n\r\nmodel = AutoModel.from_pretrained('microsoft/mpnet-base')\r\ntokenizer = AutoTokenizer.from_pretrained('microsoft/mpnet-base')\r\nmodel.save_pretrained(local_dir)\r\ntokenizer.save_pretrained(local_dir)\r\n\r\n#Load tokenizer and model from dir\r\nmodel = AutoModel.from_pretrained(local_dir)\r\n\r\n# The following command will throw an exception\r\ntokenizer = AutoTokenizer.from_pretrained(local_dir, use_fast=False)\r\n```\r\n\r\nbut that's more of a hack than really solving the underlying problem as it just loads the slow tokenizer. Will check what's going on!\r\n",
"The PR linked to the issue adds the required converter so that the above code:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\nlocal_dir = 'mpnet-model/'\r\n\r\nmodel = AutoModel.from_pretrained('microsoft/mpnet-base')\r\ntokenizer = AutoTokenizer.from_pretrained('microsoft/mpnet-base')\r\nmodel.save_pretrained(local_dir)\r\ntokenizer.save_pretrained(local_dir)\r\n\r\n#Load tokenizer and model from dir\r\nmodel = AutoModel.from_pretrained(local_dir)\r\n\r\n# The following command will throw an exception\r\ntokenizer = AutoTokenizer.from_pretrained(local_dir)\r\n```\r\n\r\nshould work after merging. It's a bit weird that one can use a FastTokenizer if the model id is `microsoft/mpnet-base` but not if the model is serialized and loaded again...we should maybe think of a better way to prevent such issues in the future. Maybe just not allow one to add a \"FastTokenizer\" class without adding the corresponding converter? @sgugger @LysandreJik ",
"Great, thanks for the quick fix."
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.1.1 (pip version)
- Platform: Ubuntu 20.04
- Python version: 3.7
- PyTorch version (GPU?): Pytorch 1.7 GPU
## Information
Hi,
thanks for adding MPNet. I got quite promising results when using it for generating sentence embeddings.
However, there is an issue when saving and loading the MPNet model (when using version 4.1.1 of transformers, installed via pip):
```python
from transformers import AutoTokenizer, AutoModel
local_dir = 'mpnet-model/'
model = AutoModel.from_pretrained('microsoft/mpnet-base')
tokenizer = AutoTokenizer.from_pretrained('microsoft/mpnet-base')
model.save_pretrained(local_dir)
tokenizer.save_pretrained(local_dir)
#Load tokenizer and model from dir
model = AutoModel.from_pretrained(local_dir)
# The following command will throw an exception
tokenizer = AutoTokenizer.from_pretrained(local_dir)
```
This leads to the following error:
```
File "/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/convert_slow_tokenizer.py", line 636, in convert_slow_tokenizer
f"An instance of tokenizer class {tokenizer_class_name} cannot be converted in a Fast tokenizer instance. "
ValueError: An instance of tokenizer class MPNetTokenizer cannot be converted in a Fast tokenizer instance. No converter was found. Currently available slow->fast convertors: ['AlbertTokenizer', 'BartTokenizer', 'BarthezTokenizer', 'BertTokenizer', 'CamembertTokenizer', 'DistilBertTokenizer', 'DPRReaderTokenizer', 'DPRQuestionEncoderTokenizer', 'DPRContextEncoderTokenizer', 'ElectraTokenizer', 'FunnelTokenizer', 'GPT2Tokenizer', 'HerbertTokenizer', 'LayoutLMTokenizer', 'LongformerTokenizer', 'LxmertTokenizer', 'MBartTokenizer', 'MobileBertTokenizer', 'OpenAIGPTTokenizer', 'PegasusTokenizer', 'ReformerTokenizer', 'RetriBertTokenizer', 'RobertaTokenizer', 'SqueezeBertTokenizer', 'T5Tokenizer', 'XLMRobertaTokenizer', 'XLNetTokenizer']
```
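Until a slow→fast converter exists for `MPNetTokenizer`, a workaround suggested in the discussion is to load the slow tokenizer explicitly:
```python
# Workaround from the discussion: skip the fast-tokenizer conversion entirely.
tokenizer = AutoTokenizer.from_pretrained(local_dir, use_fast=False)
```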
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9194/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9193/comments | https://api.github.com/repos/huggingface/transformers/issues/9193/events | https://github.com/huggingface/transformers/pull/9193 | 771,014,844 | MDExOlB1bGxSZXF1ZXN0NTQyNjQ4OTMx | 9,193 | Full rework of the TF input/output embeddings and bias resizing | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I haven't reviewed in detail yet, but just looking at the API with the number of things to change for ALBERT (and in terms of line of code) is a hard pass for me. Overriding the resize method as was done before was way easier, this adds too much complexity.",
"I understand that it is a big update. Nevertheless, the way it was done before didn't worked and was quite buggy (the tests basically was testing almost nothing) and to make the resizing properly working, these changes are necessary.",
"In all cases I'm open to any suggestion that will reduce the number of changes :)",
"@sgugger @LysandreJik I tried a new approach for the resizing that reduce a lot the changes in each model implementation, it is even much shorter than what we currently have in master. I have done my test only on ALBERT for now, can you recheck that file and let me know what you think about it.",
"Ok I will clarify this a bit more:\r\n\r\n1. The fist most important issue in the current implementation is that the resizing is not graph compilation+ execution compliant because of the usage of `numpy` and `tensor.numpy()` calls and then not usable in such cases.\r\n2. The naming was depending of where the build was coming from, that's why we needed the `get_prefix_bias_name` for the bias and the manual build of the embeddings names. Which was a temporary fix, because it is very error prone, because the naming depends of several other things that are not taken into account into this manual build.\r\n3. Resizing was not working for some models, such as BART which doesn't work properly and raises an error. (Proof that the tests was not testing everything)\r\n4. The current resizing has two issues when resizing when we instantiate a model from scratch: either it raises an attribute error because the model is not fully built (weights not instantiated) and get a wrong naming, and then if we save the model with this wrong naming we cannot save/load it properly because the naming doesn't correspond to the current architecture.\r\n5. All the weights names across the models don't share the same names sometimes `embeddings.word_embeddings`, sometimes `shared.weight`, sometimes `lm_head.bias`, sometimes `lm_loss.bias`, sometimes `mlm.bias`, sometimes `mlm.predictions.bias` and many other ways...\r\n\r\nAs stated in #8657 this was just a temporary fix to go to the quickest way to wait for the real rework.\r\n\r\nThis PR aims to solve all these issues and bring something more generic and less error prone.",
"I personally never understood that #8657 was a quick fix that was needing another PR afterwards. We cannot operate by adding new methods in one release then breaking them or deleting them in the next so the works that was done in #8657 needs to be built upon not destroyed (and please, say in bold next time you are just making a quick fix as I would never have approved #8657 to be merged had I known...)\r\n\r\nSo before we review this, the following need to be addressed:\r\n- `get_input_embeddings` needs to keep the same return type\r\n- `get_output_embeddings` needs to keep the same return type\r\n- `get_output_layer_with_bias` can't disappear\r\n- `get_prefix_bias_name` can't disappear\r\n\r\nThis is annoying but this is why we usually don't merge a half-baked fix introducing new APIs, we can't break that after.",
"We can keep this for the next major release.\r\n\r\nWhat you ask is doable but will make the codebase more complicated. I will rework this.",
"I have just done the following restore:\r\n\r\n- `get_input_embeddings` still returns a layer\r\n- `get_output_embeddings` still returns a layer\r\n- `get_output_layer_with_bias` is back\r\n- `get_prefix_bias_name` is back\r\n\r\nThe old and new approach was much more compliant than I thought so it was easier to restore what @sgugger asked, and now there should be zero breaking change. Really sorry for the misunderstanding, I will clearer next time.",
"@sgugger I should have addressed all your comments :)\r\n\r\n> which makes me think there is a better way to code the default in modeling_tf_utils.\r\n\r\nShare your thoughts 😉",
"Thanks @LysandreJik! For the tying test look in the `_get_resized_lm_head_decoder()` method. Unless you mean adding a test in `test_modeling_tf_common` ?",
"I mean I'm not seeing a test that checks `get_input_embeddings() == get_output_embeddings()` when weights are tied, but I may be missing something here. \r\n\r\nI know these two generally point to the same tensors, but no always, do they?",
"> I mean I'm not seeing a test that checks get_input_embeddings() == get_output_embeddings() when weights are tied, but I may be missing something here.\r\n\r\nYes, there is a test for this, line 884 in `modeling_tf_utils`.\r\n\r\n> I know these two generally point to the same tensors, but no always, do they?\r\n\r\nYes they always point to the same tensor when they equals, 100% sure.",
"I should have addressed all the comments.",
"Ah yeah, we probably need a rebase here since TF-serving just got merged :-/",
"Arf good point!",
"@patrickvonplaten the test `TFLEDModelTest::test_pt_tf_model_equivalence` seems very flaky, it looks like that it randomly pass/fail. ",
"Good to merge for me now :)",
"> @patrickvonplaten the test `TFLEDModelTest::test_pt_tf_model_equivalence` seems very flaky, it looks like that it randomly pass/fail.\r\n\r\nJust fixed it: https://github.com/huggingface/transformers/pull/9459",
"@sgugger any objection to merge this PR?"
] | 1,608 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
This PR completely reworks the process of input/output embedding and bias resizing. Exceptions are now better handled, and the weight names are now always consistent. The corresponding tests have also been entirely reworked and now give better coverage of this feature.
This PR adds a small breaking change: the `get_input_embeddings` method now returns the weights instead of the embedding layer.
"url": "https://api.github.com/repos/huggingface/transformers/issues/9193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9193/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9193",
"html_url": "https://github.com/huggingface/transformers/pull/9193",
"diff_url": "https://github.com/huggingface/transformers/pull/9193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9193.patch",
"merged_at": 1610364449000
} |
https://api.github.com/repos/huggingface/transformers/issues/9192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9192/comments | https://api.github.com/repos/huggingface/transformers/issues/9192/events | https://github.com/huggingface/transformers/issues/9192 | 771,008,138 | MDU6SXNzdWU3NzEwMDgxMzg= | 9,192 | example code for fine-tuning CLM does not work for GPT | {
"login": "TalitaAnthonio",
"id": 25078987,
"node_id": "MDQ6VXNlcjI1MDc4OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/25078987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TalitaAnthonio",
"html_url": "https://github.com/TalitaAnthonio",
"followers_url": "https://api.github.com/users/TalitaAnthonio/followers",
"following_url": "https://api.github.com/users/TalitaAnthonio/following{/other_user}",
"gists_url": "https://api.github.com/users/TalitaAnthonio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TalitaAnthonio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TalitaAnthonio/subscriptions",
"organizations_url": "https://api.github.com/users/TalitaAnthonio/orgs",
"repos_url": "https://api.github.com/users/TalitaAnthonio/repos",
"events_url": "https://api.github.com/users/TalitaAnthonio/events{/privacy}",
"received_events_url": "https://api.github.com/users/TalitaAnthonio/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you try putting `--block_size=512` in your command to see if it changes something?",
"@LysandreJik Thank you it solved the issue! "
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 3.5.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NO
### Who can help
@patrickvonplaten @TevenLeScao
## Information
Model I am using: OpenAI GPT.
The problem arises when using:
* [x] the official example scripts: (give details below)
```
python run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm
```
It's the example script at https://github.com/huggingface/transformers/tree/master/examples/language-modeling, which is used to fine-tune a causal language model.
## To reproduce
Steps to reproduce the behavior:
1. go to transformers/examples/language-modeling
2. run the following command
`python run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm`
3. The following error occurs:
```
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 1
```
In my case, the error does not occur when using `--model_name_or_path gpt2`.
## Expected behavior
The progress bar should be filled and the language model should be finetuned.
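Per the fix suggested in the comments, `openai-gpt` has a 512-token context window, so passing `--block_size 512` (as suggested there) avoids the 1024-vs-512 size mismatch in the error above:
```
python run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --block_size 512 --do_train --do_eval --output_dir /tmp/test-clm
```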
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9192/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9191/comments | https://api.github.com/repos/huggingface/transformers/issues/9191/events | https://github.com/huggingface/transformers/issues/9191 | 770,965,175 | MDU6SXNzdWU3NzA5NjUxNzU= | 9,191 | Segfault on python 3.9 exit | {
"login": "LoicGrobol",
"id": 14248012,
"node_id": "MDQ6VXNlcjE0MjQ4MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/14248012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LoicGrobol",
"html_url": "https://github.com/LoicGrobol",
"followers_url": "https://api.github.com/users/LoicGrobol/followers",
"following_url": "https://api.github.com/users/LoicGrobol/following{/other_user}",
"gists_url": "https://api.github.com/users/LoicGrobol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LoicGrobol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LoicGrobol/subscriptions",
"organizations_url": "https://api.github.com/users/LoicGrobol/orgs",
"repos_url": "https://api.github.com/users/LoicGrobol/repos",
"events_url": "https://api.github.com/users/LoicGrobol/events{/privacy}",
"received_events_url": "https://api.github.com/users/LoicGrobol/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I encountered a similar issue, in a context where `transformers` was not imported. I've [reported](https://github.com/pytorch/pytorch/issues/50858) the issue to the PyTorch project.",
"I too have similar problem when running unittests for a Python package on travis, using Python 3.9. Specifically, all unit tests run fine, but the final outcome is segfault. See: https://travis-ci.org/github/LoryPack/abcpy/builds/755321073\r\n\r\n",
"This is probably fixed by https://github.com/pytorch/pytorch/pull/50998"
] | 1,608 | 1,639 | 1,611 | NONE | null | Ok, that's a weird one
## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-5.8.0-34-generic-x86_64-with-glibc2.32
- Python version: 3.9.0+
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Steps to reproduce the behavior:
1. Run the following script
```python
import torch
import transformers
loss = torch.tensor([1.0], requires_grad=True)
loss.backward()
```
The script runs correctly but exits with
```text
[1] 46823 segmentation fault (core dumped) python testcase.py
```
Which doesn't happen if `import transformers` is commented out.
This only happens on Python 3.9; it works as expected on 3.8.
## Full env
```text
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
filelock==3.0.12
idna==2.10
joblib==1.0.0
numpy==1.19.4
packaging==20.8
pyparsing==2.4.7
regex==2020.11.13
requests==2.25.1
sacremoses==0.0.43
six==1.15.0
tokenizers==0.9.4
torch==1.7.1
tqdm==4.54.1
transformers==4.1.1
typing-extensions==3.7.4.3
urllib3==1.26.2
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9191/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/9191/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9190/comments | https://api.github.com/repos/huggingface/transformers/issues/9190/events | https://github.com/huggingface/transformers/pull/9190 | 770,955,064 | MDExOlB1bGxSZXF1ZXN0NTQyNTk5Njgz | 9,190 | Addition of MuRIL - BERT based model for 17 Indian Languages to the library | {
"login": "ravi03071991",
"id": 12198101,
"node_id": "MDQ6VXNlcjEyMTk4MTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/12198101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ravi03071991",
"html_url": "https://github.com/ravi03071991",
"followers_url": "https://api.github.com/users/ravi03071991/followers",
"following_url": "https://api.github.com/users/ravi03071991/following{/other_user}",
"gists_url": "https://api.github.com/users/ravi03071991/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ravi03071991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ravi03071991/subscriptions",
"organizations_url": "https://api.github.com/users/ravi03071991/orgs",
"repos_url": "https://api.github.com/users/ravi03071991/repos",
"events_url": "https://api.github.com/users/ravi03071991/events{/privacy}",
"received_events_url": "https://api.github.com/users/ravi03071991/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Yes, feel free to. However, you seem to have based yourself off of a `tf` branch that is very old?",
"@LysandreJik Yeah sorry. Should I raise a new issue so that it can be changed accordingly?",
"If you want to contribute this model, I invite you to base yourself off of the `master` branch and create a new branch from there. \r\n\r\nOnce you your branch, you can leverage the [template scripts](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model) which should help you by creating a new model and adding it everywhere it should be added; you'll only have to update the created files.",
"Reading the [contributing guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) will also be very helpful.",
"great effort and contribution @ravi03071991 ",
"@LysandreJik Sure. Will follow the guidelines. Thank you."
] | 1,608 | 1,651 | 1,608 | CONTRIBUTOR | null | Hi,
This PR is regarding the addition of MuRIL, a BERT-based model trained specifically for 17 Indian languages, to the Hugging Face library.
MuRIL is released on TF Hub. Link to the repo: [https://tfhub.dev/google/MuRIL/1](https://tfhub.dev/google/MuRIL/1)
I am interested in working on this contribution to the library. Please let me know if I can work on it.
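For reference, a hedged sketch of loading the released checkpoint straight from TF Hub while it is not yet in `transformers` (the handle is the one linked above; usage follows standard TF Hub patterns and is not this repository's API):
```python
import tensorflow_hub as hub

# Assumption: standard TF Hub loading; the handle comes from the PR description.
muril_layer = hub.KerasLayer("https://tfhub.dev/google/MuRIL/1", trainable=True)
```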
"url": "https://api.github.com/repos/huggingface/transformers/issues/9190/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9190/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9190",
"html_url": "https://github.com/huggingface/transformers/pull/9190",
"diff_url": "https://github.com/huggingface/transformers/pull/9190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9190.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9189/comments | https://api.github.com/repos/huggingface/transformers/issues/9189/events | https://github.com/huggingface/transformers/pull/9189 | 770,953,440 | MDExOlB1bGxSZXF1ZXN0NTQyNTk4MzY5 | 9,189 | GPT-model attention heads pruning example | {
"login": "altsoph",
"id": 2072749,
"node_id": "MDQ6VXNlcjIwNzI3NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2072749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/altsoph",
"html_url": "https://github.com/altsoph",
"followers_url": "https://api.github.com/users/altsoph/followers",
"following_url": "https://api.github.com/users/altsoph/following{/other_user}",
"gists_url": "https://api.github.com/users/altsoph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/altsoph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/altsoph/subscriptions",
"organizations_url": "https://api.github.com/users/altsoph/orgs",
"repos_url": "https://api.github.com/users/altsoph/repos",
"events_url": "https://api.github.com/users/altsoph/events{/privacy}",
"received_events_url": "https://api.github.com/users/altsoph/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One last detail: could you run `make style` on your branch? There seems to be some bad formatting.",
"> One last detail: could you run `make style` on your branch? There seems to be some bad formatting.\r\n\r\nI did, but let me recheck it"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
This script is adapted from the [BERT attention heads pruning code](https://github.com/huggingface/transformers/blob/master/examples/research_projects/bertology/run_bertology.py), a.k.a. Bertology, to make it possible to prune GPT-model heads as well.
It basically works the same way as run_bertology.py, but can deal with GPT models.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9189/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9189",
"html_url": "https://github.com/huggingface/transformers/pull/9189",
"diff_url": "https://github.com/huggingface/transformers/pull/9189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9189.patch",
"merged_at": 1608327130000
} |
https://api.github.com/repos/huggingface/transformers/issues/9188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9188/comments | https://api.github.com/repos/huggingface/transformers/issues/9188/events | https://github.com/huggingface/transformers/issues/9188 | 770,859,577 | MDU6SXNzdWU3NzA4NTk1Nzc= | 9,188 | MRPC Reproducibility with transformers-4.1.0 | {
"login": "Linwenye",
"id": 19305330,
"node_id": "MDQ6VXNlcjE5MzA1MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/19305330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Linwenye",
"html_url": "https://github.com/Linwenye",
"followers_url": "https://api.github.com/users/Linwenye/followers",
"following_url": "https://api.github.com/users/Linwenye/following{/other_user}",
"gists_url": "https://api.github.com/users/Linwenye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Linwenye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Linwenye/subscriptions",
"organizations_url": "https://api.github.com/users/Linwenye/orgs",
"repos_url": "https://api.github.com/users/Linwenye/repos",
"events_url": "https://api.github.com/users/Linwenye/events{/privacy}",
"received_events_url": "https://api.github.com/users/Linwenye/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,608 | 1,608 | 1,608 | NONE | null | I always get lower scores than reported when following the MRPC example; what could be the reason?
```
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
and get
```
12/18/2020 17:16:38 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 17:16:38 - INFO - __main__ - eval_loss = 0.5318707227706909
12/18/2020 17:16:38 - INFO - __main__ - eval_accuracy = 0.7622549019607843
12/18/2020 17:16:38 - INFO - __main__ - eval_f1 = 0.8417618270799347
12/18/2020 17:16:38 - INFO - __main__ - eval_combined_score = 0.8020083645203595
12/18/2020 17:16:38 - INFO - __main__ - epoch = 3.0
12/18/2020 16:45:29 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 16:45:29 - INFO - __main__ - eval_loss = 0.47723284363746643
12/18/2020 16:45:29 - INFO - __main__ - eval_accuracy = 0.8063725490196079
12/18/2020 16:45:29 - INFO - __main__ - eval_f1 = 0.868988391376451
12/18/2020 16:45:29 - INFO - __main__ - eval_combined_score = 0.8376804701980294
12/18/2020 16:45:29 - INFO - __main__ - epoch = 3.0
12/18/2020 16:34:37 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 16:34:37 - INFO - __main__ - eval_loss = 0.571368932723999
12/18/2020 16:34:37 - INFO - __main__ - eval_accuracy = 0.6838235294117647
12/18/2020 16:34:37 - INFO - __main__ - eval_f1 = 0.8122270742358079
12/18/2020 16:34:37 - INFO - __main__ - eval_combined_score = 0.7480253018237863
12/18/2020 16:34:37 - INFO - __main__ - epoch = 3.0
```
GPU: GTX 1080
transformers: 4.1.0
Torch: 1.6.0
python: 3.8
Server: Ubuntu 18.04
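A hedged sketch of a seed sweep to characterize the variance (`--seed` comes from `TrainingArguments`, default 42; paths are illustrative):
```bash
for SEED in 1 2 3; do
  python run_glue.py \
    --model_name_or_path bert-base-cased \
    --task_name MRPC \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 32 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --seed $SEED \
    --output_dir /tmp/MRPC-seed-$SEED/
done
```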
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9188/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9187/comments | https://api.github.com/repos/huggingface/transformers/issues/9187/events | https://github.com/huggingface/transformers/issues/9187 | 770,806,519 | MDU6SXNzdWU3NzA4MDY1MTk= | 9,187 | Problem with pretraining GPT-2 on TPU with Pytorch/XLA | {
"login": "redrussianarmy",
"id": 24498747,
"node_id": "MDQ6VXNlcjI0NDk4NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/24498747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/redrussianarmy",
"html_url": "https://github.com/redrussianarmy",
"followers_url": "https://api.github.com/users/redrussianarmy/followers",
"following_url": "https://api.github.com/users/redrussianarmy/following{/other_user}",
"gists_url": "https://api.github.com/users/redrussianarmy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/redrussianarmy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/redrussianarmy/subscriptions",
"organizations_url": "https://api.github.com/users/redrussianarmy/orgs",
"repos_url": "https://api.github.com/users/redrussianarmy/repos",
"events_url": "https://api.github.com/users/redrussianarmy/events{/privacy}",
"received_events_url": "https://api.github.com/users/redrussianarmy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I realized that I didn't follow [instructions](https://github.com/pytorch/xla/blob/master/README.md) properly.",
"@redrussianarmy What`s the problem? I have the same issue but can not find a solution."
] | 1,608 | 1,610 | 1,608 | NONE | null | ### **Environment info**
transformers version: 4.0.1
Platform: Ubuntu 18.04.4 LTS
Python version: 3.6.9
PyTorch version: 1.7.1
Torch-XLA version: 1.7
Tensorflow version: 2.3.1
### **Information**
Model I intend to pretrain (Bert, XLNet ...): GPT2
The problem arises when using:
- my own modified scripts: (give details below)
The tasks I am working on is:
- pretraining GPT-2 on Google TPU
### **To reproduce**
Steps to reproduce the behavior:
1. According to instructions [here](https://github.com/pytorch/xla/blob/master/README.md#-consume-prebuilt-compute-vm-images), creating instance of compute engine on Google Cloud with following spec:
- OS: Deep Learning on Linux
- Version: Debian GNU/Linux 9 Stretch + Pytorch/XLA
2. Running the code of:
`export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470"`
3. Running the following script.
```bash
#!/usr/bin/env bash
pipenv run python3 xla_spawn.py --num_cores 8 \
run_clm.py \
--num_train_epochs 5 \
--output_dir saved_model/ \
--overwrite_output_dir \
--logging_dir logs \
--logging_steps 50 \
--save_total_limit 2 \
--save_steps 2000 \
--model_type gpt2 \
--config_name tokenized_data/ \
--tokenizer_name tokenized_data/ \
--block_size 1024 \
--train_file dataset/dataset.txt \
--per_device_train_batch_size=64 \
--do_train
```
Here is the exception that occurred:
```
Exception in device=TPU:3: tensorflow/compiler/xla/xla_client/mesh_service.cc:316 : Check failed: impl_->channel->WaitForConnected( std::chrono::system_clock::now() + std::chrono::seconds(connect_wait_seconds))
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::service::MeshClient::MeshClient(std::string const&)
xla::service::MeshClient::Get()
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_GetAttr
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyRun_StringFlags
PyRun_SimpleStringFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Failed to connect to client mesh master: workstation:41295
Traceback (most recent call last):
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
_setup_replication()
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 315, in _setup_replication
device = xm.xla_device()
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 231, in xla_device
devkind=devkind if devkind is not None else None)
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 136, in get_xla_supported_devices
xla_devices = _DEVICES.value
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/utils/utils.py", line 32, in value
self._value = self._gen_fn()
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 18, in <lambda>
_DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices())
RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:316 : Check failed: impl_->channel->WaitForConnected( std::chrono::system_clock::now() + std::chrono::seconds(connect_wait_seconds))
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::service::MeshClient::MeshClient(std::string const&)
xla::service::MeshClient::Get()
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_GetAttr
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyRun_StringFlags
PyRun_SimpleStringFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Failed to connect to client mesh master: workstation:41295
Exception in device=TPU:2: tensorflow/compiler/xla/xla_client/mesh_service.cc:316 : Check failed: impl_->channel->WaitForConnected( std::chrono::system_clock::now() + std::chrono::seconds(connect_wait_seconds))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9187/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9186/comments | https://api.github.com/repos/huggingface/transformers/issues/9186/events | https://github.com/huggingface/transformers/pull/9186 | 770,739,076 | MDExOlB1bGxSZXF1ZXN0NTQyNDI4MjM2 | 9,186 | fixed not JSON serializable error in run_qa.py with fp16 | {
"login": "WissamAntoun",
"id": 44616226,
"node_id": "MDQ6VXNlcjQ0NjE2MjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/44616226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WissamAntoun",
"html_url": "https://github.com/WissamAntoun",
"followers_url": "https://api.github.com/users/WissamAntoun/followers",
"following_url": "https://api.github.com/users/WissamAntoun/following{/other_user}",
"gists_url": "https://api.github.com/users/WissamAntoun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WissamAntoun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WissamAntoun/subscriptions",
"organizations_url": "https://api.github.com/users/WissamAntoun/orgs",
"repos_url": "https://api.github.com/users/WissamAntoun/repos",
"events_url": "https://api.github.com/users/WissamAntoun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WissamAntoun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fixed an issue where running the run_qa.py script on a SQuAD-like dataset with fp16 enabled would lead to a JSON serialization error:
```
TypeError: Object of type 'float16' is not JSON serializable
```
The bug is caused by not converting `np.float16` to `float` on lines 209 and 397 in the utils_qa.py file.
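For illustration, a minimal sketch of the failure and the cast that fixes it (variable names here are hypothetical; the real lines live in utils_qa.py):
```python
import json

import numpy as np

score = np.float16(0.5)  # what the predictions dict ended up holding
# json.dumps({"score": score})  # raises: Object of type 'float16' is not JSON serializable
json.dumps({"score": float(score)})  # the fix: cast to a plain Python float first
```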
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? --> yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. --> no
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). --> No need for it
- [x] Did you write any new necessary tests? No
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9186/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9186",
"html_url": "https://github.com/huggingface/transformers/pull/9186",
"diff_url": "https://github.com/huggingface/transformers/pull/9186.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9186.patch",
"merged_at": 1608296004000
} |
https://api.github.com/repos/huggingface/transformers/issues/9185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9185/comments | https://api.github.com/repos/huggingface/transformers/issues/9185/events | https://github.com/huggingface/transformers/issues/9185 | 770,717,055 | MDU6SXNzdWU3NzA3MTcwNTU= | 9,185 | can we use ckpt model file generated after finetuning the pre-trained models on custom dataset | {
"login": "SagarPalyal",
"id": 38795924,
"node_id": "MDQ6VXNlcjM4Nzk1OTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/38795924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SagarPalyal",
"html_url": "https://github.com/SagarPalyal",
"followers_url": "https://api.github.com/users/SagarPalyal/followers",
"following_url": "https://api.github.com/users/SagarPalyal/following{/other_user}",
"gists_url": "https://api.github.com/users/SagarPalyal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SagarPalyal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SagarPalyal/subscriptions",
"organizations_url": "https://api.github.com/users/SagarPalyal/orgs",
"repos_url": "https://api.github.com/users/SagarPalyal/repos",
"events_url": "https://api.github.com/users/SagarPalyal/events{/privacy}",
"received_events_url": "https://api.github.com/users/SagarPalyal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"hi @SagarPalyal not sure what you mean here. by ckpt file do you mean a saved checkpoint ?",
"Yes you are right. This is saved checkpoint file only.",
"One thing to note: `pytorch_model.bin` is the name of the weights file, and every torch HF model will have that file.\r\n\r\nTo load the checkpoint simply pass the path for the checkpoint instead of `model_path`",
"Hey, @SagarPalyal did you solve the problem? I met the same problem. 'from_pretrained' function needs pytorch model file, not tensorflow model file. I don't know how to convert a costumed pegasus tensorflow model to pytorch model. Anyone knows??? ",
"I am not able to figure out how to convert .ckpt file to pytorch model file but in your case if you have tensorflow model file then you can use parameter from_tf=True",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,685 | 1,614 | NONE | null | Hi Team,
I have fine-tuned the Hugging Face Pegasus WikiHow model on my own custom dataset and ended up with the files below:
model.ckpt-1000.data-00000-of-00001
model.ckpt-1000.index
model.ckpt-1000.meta
events.out.tfevents.1608192436.ip-xxx-xx-x-xx
events.out.tfevents.1608192510.ip-xxx-xx-x-xx.v2
When I run the code below, it automatically uses the pytorch_model.bin file for generating the summary, whereas I want to use the ckpt model file that I got after training.
**Code I am using:**
```python
import torch
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_path = 'local-pegasus-wikihow'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_path)
model = PegasusForConditionalGeneration.from_pretrained(model_path).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors="pt").to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text[0])
```
Do I need to change the code here so that it uses the ckpt model file for generating the summary instead of looking for a pretrained pytorch_model.bin or .h5 file? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9185/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9184/comments | https://api.github.com/repos/huggingface/transformers/issues/9184/events | https://github.com/huggingface/transformers/issues/9184 | 770,707,837 | MDU6SXNzdWU3NzA3MDc4Mzc= | 9,184 | [RagSequenceForGeneration] generate "without" input_ids | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | Hi guys,
In `RagSequenceForGeneration`'s `generate()` method, the docs say that both `input_ids` and `context_input_ids` are optional (one of them must be specified).
However, in the code https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L907
it specifically needs `input_ids` in all cases.
Not sure which option is best:
(1) simply state that `input_ids` is always needed, OR
(2) add code to calculate the `nll` when only `context_input_ids` is provided; in this case `doc_scores` and `context_attention_mask` have to be provided as well (similar to the RagModel requirement): https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L588
I think option (2) is the reasonable one, since `RagTokenForGeneration`'s `generate()` also requires the same.
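A minimal sketch of what option (2) could look like inside `generate()` (hypothetical, not the exact code of the proposed fix):
```python
if input_ids is None:
    assert (
        context_input_ids is not None
        and context_attention_mask is not None
        and doc_scores is not None
    ), (
        "If input_ids is not given, then context_input_ids, "
        "context_attention_mask and doc_scores must be provided."
    )
```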
Proposed fix in https://github.com/huggingface/transformers/pull/9220
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9184/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9183/comments | https://api.github.com/repos/huggingface/transformers/issues/9183/events | https://github.com/huggingface/transformers/pull/9183 | 770,699,787 | MDExOlB1bGxSZXF1ZXN0NTQyMzk3NTQ4 | 9,183 | Add caching mechanism to BERT, RoBERTa | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`python utils/check_copies.py --fix_and_overwrite` should be run to make the `check_code_quality` test pass",
"> `python utils/check_copies.py --fix_and_overwrite` should be run to make the `check_code_quality` test pass\r\n\r\nYeah, did that, for some reason, it's not working for `RobertaEmbeddings`, the code is the same as that of `BertEmbeddings`",
"> > `python utils/check_copies.py --fix_and_overwrite` should be run to make the `check_code_quality` test pass\r\n> \r\n> Yeah, did that, for some reason, it's not working for `RobertaEmbeddings`, the code is the same as that of `BertEmbeddings`\r\n\r\nIf Roberta has to be different feel free to remove the copy statement",
"I just merged a PR that made `cache` related tests a bit more aggressive: https://github.com/huggingface/transformers/pull/9256. It would be awesome if you could run your PR on the `EncoderDecoderModel` slow tests to make sure the cache doesn't change the results.",
"Merging !"
] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
- This PR adds a past key/values caching mechanism to `BertLMHeadModel`, `BertGenerationDecoder`, and `RobertaForCausalLM` to speed up the generation of `EncoderDecoder` models
- deletes the `CausalLMOutputWithPastAndCrossAttentions` class and adds `past_key_values` to `CausalLMOutputWithCrossAttentions` and `BaseModelOutputWithPoolingAndCrossAttentions`. All `ModelOutputs` that have a cross-attention should also have a `past_key_values`
- by default, caching is enabled for `BertLMHeadModel`, `BertGenerationDecoder`, and `RobertaForCausalLM` during inference (not just during generation, but also for the plain forward pass), and they now also output `past_key_values` by default during inference, which is a small breaking change:
specifically, when `config.output_attentions=True` in an `EncoderDecoderModel`, the 2nd index of the output will be `past_key_values` instead of `attentions`
```python3
model = EncoderDecoderModel.from_pretrained(...)
outputs = model(input_ids)
attentions = outputs[1] # 2nd index will be past_key_values, instead of attentions
```
- this will only affect the output during generation for `EncoderDecoder` models. Caching will be disabled when the models are used as standalone encoders, so the default output, in that case, is unchanged.
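For context, a hedged sketch of how the new cache threads through two successive decoder forward passes (the model choice and greedy step are illustrative only, not taken from the PR's tests):
```python
from transformers import BertLMHeadModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertLMHeadModel.from_pretrained("bert-base-uncased", is_decoder=True)

input_ids = tokenizer("Hello", return_tensors="pt").input_ids
out = model(input_ids, use_cache=True)
past = out.past_key_values  # cached key/value states, one tuple per layer

# on the next step, only the new token is fed; the cache covers the prefix
next_token = out.logits[:, -1:].argmax(-1)
out2 = model(next_token, past_key_values=past, use_cache=True)
```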
Fixes #9052
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9183/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9183/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9183",
"html_url": "https://github.com/huggingface/transformers/pull/9183",
"diff_url": "https://github.com/huggingface/transformers/pull/9183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9183.patch",
"merged_at": 1608744693000
} |
https://api.github.com/repos/huggingface/transformers/issues/9182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9182/comments | https://api.github.com/repos/huggingface/transformers/issues/9182/events | https://github.com/huggingface/transformers/pull/9182 | 770,466,926 | MDExOlB1bGxSZXF1ZXN0NTQyMjA5MTA3 | 9,182 | Fix link to old NER fine-tuning script | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9182/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9182",
"html_url": "https://github.com/huggingface/transformers/pull/9182",
"diff_url": "https://github.com/huggingface/transformers/pull/9182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9182.patch",
"merged_at": 1608252602000
} |
https://api.github.com/repos/huggingface/transformers/issues/9181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9181/comments | https://api.github.com/repos/huggingface/transformers/issues/9181/events | https://github.com/huggingface/transformers/pull/9181 | 770,461,400 | MDExOlB1bGxSZXF1ZXN0NTQyMjA0NjU4 | 9,181 | Fix link to old SQUAD fine-tuning script | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9181/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9181",
"html_url": "https://github.com/huggingface/transformers/pull/9181",
"diff_url": "https://github.com/huggingface/transformers/pull/9181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9181.patch",
"merged_at": 1608300730000
} |
https://api.github.com/repos/huggingface/transformers/issues/9180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9180/comments | https://api.github.com/repos/huggingface/transformers/issues/9180/events | https://github.com/huggingface/transformers/pull/9180 | 770,460,348 | MDExOlB1bGxSZXF1ZXN0NTQyMjAzODE0 | 9,180 | [trainer] apex fixes and tests | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | This PR:
* [x] fixes a bug with the apex `fp16_backend` (`is_apex_available` + `amp` weren't getting imported with pytorch>=1.6); a minimal sketch of the import guard appears below
* [x] adds a test
* [x] adds a logger info line on which fp16 backend will be used
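A minimal sketch of the guard in question (the real helper lives in transformers' utilities; this standalone version is for illustration only):
```python
import importlib.util

def is_apex_available():
    # apex provides amp for mixed precision regardless of the torch version
    return importlib.util.find_spec("apex") is not None

if is_apex_available():
    from apex import amp  # must be imported whenever the apex fp16 backend is selected
```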
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9180/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9180",
"html_url": "https://github.com/huggingface/transformers/pull/9180",
"diff_url": "https://github.com/huggingface/transformers/pull/9180.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9180.patch",
"merged_at": 1608252551000
} |
https://api.github.com/repos/huggingface/transformers/issues/9179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9179/comments | https://api.github.com/repos/huggingface/transformers/issues/9179/events | https://github.com/huggingface/transformers/issues/9179 | 770,429,173 | MDU6SXNzdWU3NzA0MjkxNzM= | 9,179 | [trainer] speed issues: --fp16 doesn't improve speed, DP runs really slow | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Could you run the other examples script to check? On my end they result in a roughly x2 speedup but I'm not on pytorch-nightly.",
"Any recommendations and specific command lines that you use? \r\n\r\nI have been using the finetune and other scripts in seq2seq as a go-to scripts for testing, so I'm not quite experienced with the others. \r\n\r\nThank you!",
"So on a finetune_trainer setup `--fp16` is slower than w/o it - on either amp or apex, so that elimination the caching concern. (w/ pytorch nightly)\r\n\r\nCaveat: The following tests aren't ideal since I have 1 fast and 1 slow card, but they should be consistent since the overall speed is always at the slowest card (with the exception of single gpu tests), so it's like having 2 slow cards.\r\n\r\n### DDP\r\n\r\n```\r\n# baseline w/ --fp16\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500\r\n\r\n2020-12-17 16:04:18 | INFO | __main__ | train_runtime = 27.9693\r\n\r\n# --fp16 --fp16_backend apex\r\n\r\n2020-12-17 16:01:14 | INFO | __main__ | train_runtime = 30.0469\r\n\r\n# --fp16 --fp16_backend amp\r\n\r\n2020-12-17 16:06:41 | INFO | __main__ | train_runtime = 29.5368\r\n\r\n```\r\n\r\n### DP\r\n\r\nDP setup is about the same correlation - but runs twice as slow! (that looks wrong too!)\r\n\r\n```\r\n\r\n# baseline (no --fp16)\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500\r\n\r\n2020-12-17 16:16:09 | INFO | __main__ | train_runtime = 56.7522\r\n\r\n# --fp16 --fp16_backend apex\r\n\r\n2020-12-17 16:14:26 | INFO | __main__ | train_runtime = 59.4309\r\n\r\n# --fp16 --fp16_backend amp\r\n\r\n2020-12-17 16:12:18 | INFO | __main__ | train_runtime = 58.4406\r\n```\r\n\r\n\r\n\r\n### Single GPU (gtx-1070 slowest)\r\n\r\nno improvement either with fp16\r\n\r\n```\r\n# baseline (no --fp16)\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=1 PYTHONPATH=../../src USE_TF=0 python ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500\r\n\r\n2020-12-17 16:26:10 | INFO | __main__ | train_runtime = 24.6995\r\n\r\n# --fp16 --fp16_backend apex\r\n\r\n2020-12-17 16:27:26 | INFO | __main__ | train_runtime = 28.6601\r\n\r\n# --fp16 --fp16_backend amp\r\n\r\n2020-12-17 16:28:37 | INFO | __main__ | train_runtime = 27.9687\r\n\r\n```\r\n\r\n\r\n### Single GPU (rtx-3090 fastest)\r\n\r\nno improvement 
either with fp16\r\n\r\n```\r\n# baseline (no --fp16)\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0 PYTHONPATH=../../src USE_TF=0 python ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500\r\n\r\n2020-12-17 16:22:33 | INFO | __main__ | train_runtime = 11.2534\r\n\r\n# --fp16 --fp16_backend apex\r\n\r\n2020-12-17 16:19:41 | INFO | __main__ | train_runtime = 14.4828\r\n\r\n# --fp16 --fp16_backend amp\r\n\r\n2020-12-17 16:21:15 | INFO | __main__ | train_runtime = 11.7265\r\n```\r\n\r\nrtx-3040 is so much faster than gtx-1070.\r\n\r\nAlso wrt single gpu test: slow card 100%, fast one 75% - so the trainer is not fast enough feeding data in for the latter.\r\n",
"> Any recommendations and specific command lines that you use?\r\n\r\nNormally the first command indicated in the README of each example folder should work well :-)",
"> Normally the first command indicated in the README of each example folder should work well :-)\r\n\r\nThe one I tried doesn't report speed. `text-classification/run_glue.py`. I'm not sure what method you were using to detect speed improvements.",
"Thanks for adding the speed metrics into the core trainer, with https://github.com/huggingface/transformers/pull/9198 I run benchmarks for run_glue in text-classification, the results are terrible speed-wise just like with the finetune_trainer - terrible as in all those things that are supposed to speed things up, slow things down instead:\r\n\r\n```\r\n# baseline\r\nrm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 run_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output_dir\r\n\r\n12/18/2020 13:29:01 - INFO - __main__ - train_runtime = 97.4012\r\n\r\n# --fp16\r\n\r\n12/18/2020 13:46:09 - INFO - __main__ - train_runtime = 109.1225\r\n\r\n# --sharded_ddp\r\n\r\n12/18/2020 13:53:18 - INFO - __main__ - train_runtime = 103.1887\r\n\r\n# --fp16 --sharded_ddp\r\n\r\n12/18/2020 13:50:57 - INFO - __main__ - train_runtime = 113.9132\r\n```\r\n\r\nthis is all w/ pt-nightly, since I have to use it to run rtx-3090 card\r\n\r\n",
"In that case, it looks linked to your setup/env somehow. Maybe support for the RTX-3090 is not fully fledged? On my setup and env FP16 speeds up by a factor of x2 roughly for this script. Will test the sharded DDP variants on Monday.",
"if you could do the above 4 runs (https://github.com/huggingface/transformers/issues/9179#issuecomment-748338332) for comparison that would be great! I need to know if something is off with my setup since I'm trying to eval deepspeed. \r\n\r\ncuda-11.2 is out so hoping to get a normal support for rtx-3090 from pytorch really soon now.",
"Hi \r\nthe command @stas00 mentioend above does not work for me, I am using last version of huggingface, could you tell me which version you used to test? please see my bug here https://github.com/huggingface/transformers/issues/9215 ",
"@rabeehk: https://github.com/huggingface/transformers/issues/9156#issuecomment-748501582",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | CONTRIBUTOR | null | Splitting from https://github.com/huggingface/transformers/issues/9156#issuecomment-747636108 where while running benchmarks for fairscale's sharded ddp support I noticed that there was almost no difference between having or not having `--fp16` to the training runtime.
Not sure whether this impacts trainer in general or just seq2seq finetune_trainer.py, but there is almost no speed improvements with adding `--fp16`.
This is with pytorch-nightly.
Not sure whether this has to do with the recent autocast cache blowup fix https://github.com/pytorch/pytorch/issues/48049 (our corresponding issue https://github.com/huggingface/transformers/issues/8403) - part of the fix was to remove some of the cash that perhaps was essential and thus leads to this issue.
I will test with apex and compare.
Also as can be seen from: https://github.com/huggingface/transformers/issues/9179#issuecomment-747783879
DP is running really slow!
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9179/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9178/comments | https://api.github.com/repos/huggingface/transformers/issues/9178/events | https://github.com/huggingface/transformers/issues/9178 | 770,421,607 | MDU6SXNzdWU3NzA0MjE2MDc= | 9,178 | [ci] install fairscale on self-runner CIs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Probably should change CI to do:\r\n```\r\npip install fairscale --no-build-isolation\r\n```\r\nwhich is much faster then the new pip system, as it doesn't need to fetch dependent packages\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | CONTRIBUTOR | null | The tests for the new sharded ddp fairscale integration are in place, https://github.com/huggingface/transformers/pull/9177
but the CI machines don't have `fairscale` installed, so these tests won't run on CI.
This is an issue to track this need so we will eventually run these tests.
Blocking event: we need a more reliable/quick way of installing fairscale; track the following issue for when this happens:
https://github.com/facebookresearch/fairscale/issues/264
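Once that is resolved, the install step could look like this hedged sketch (the `--no-build-isolation` tip comes from the comment above; the exact flags may change):
```bash
# hypothetical step for the multi-GPU self-hosted runners
pip install fairscale --no-build-isolation
python -c "import fairscale"  # sanity-check that the build succeeded
```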
Then we can add such a `pip install fairscale` step to the self-hosted CI runners with multiple GPUs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9178/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9177/comments | https://api.github.com/repos/huggingface/transformers/issues/9177/events | https://github.com/huggingface/transformers/pull/9177 | 770,412,546 | MDExOlB1bGxSZXF1ZXN0NTQyMTY0NzQw | 9,177 | add tests for the new sharded ddp fairscale integration | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I made a request for a quicker/simpler binary: https://github.com/facebookresearch/fairscale/issues/264\r\n\r\nAnd added an issue to track this so we won't forget: https://github.com/huggingface/transformers/issues/9178\r\n"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | This PR adds tests for the just added sharded ddp fairscale integration https://github.com/huggingface/transformers/pull/9139
Obviously these won't run on CI without fairscale installed... hope we will sort this out down the road. The problem is building fairscale (there is no binary wheel); I will ask them if they could make a JIT version.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9177/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9177",
"html_url": "https://github.com/huggingface/transformers/pull/9177",
"diff_url": "https://github.com/huggingface/transformers/pull/9177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9177.patch",
"merged_at": 1608243843000
} |
https://api.github.com/repos/huggingface/transformers/issues/9176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9176/comments | https://api.github.com/repos/huggingface/transformers/issues/9176/events | https://github.com/huggingface/transformers/pull/9176 | 770,409,555 | MDExOlB1bGxSZXF1ZXN0NTQyMTYyMTMw | 9,176 | [setup] correct transformers version format | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | setuptools has a pretty fixed expectation of version numbers.
```
x.y.z
x.y.z.dev0
x.y.zrc1
```
This PR fixes the dev version number and adds a comment documenting the correct formats for future editors.
The fix removes the following warning on `make fixup|style|etc`, or any other time `setup.py` is run:
```
setuptools/dist.py:452: UserWarning: Normalizing '4.2.0dev0' to '4.2.0.dev0'
warnings.warn(tmpl.format(**locals()))
```
and the alternative:
```
/setuptools/dist.py:452: UserWarning: Normalizing '4.0.0-rc-1' to '4.0.0rc1'
```
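For illustration (a sketch, not the PR's actual diff), a version declaration that setuptools accepts without rewriting looks like this:
```python
from setuptools import setup

setup(
    name="transformers",
    # PEP 440 forms that need no normalization: "4.2.0", "4.2.0.dev0", "4.2.0rc1".
    # "4.2.0dev0" and "4.0.0-rc-1" trigger the UserWarnings shown above.
    version="4.2.0.dev0",
)
```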
Fixes: #8749
@LysandreJik, @sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9176/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9176",
"html_url": "https://github.com/huggingface/transformers/pull/9176",
"diff_url": "https://github.com/huggingface/transformers/pull/9176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9176.patch",
"merged_at": 1608299756000
} |
https://api.github.com/repos/huggingface/transformers/issues/9175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9175/comments | https://api.github.com/repos/huggingface/transformers/issues/9175/events | https://github.com/huggingface/transformers/pull/9175 | 770,360,515 | MDExOlB1bGxSZXF1ZXN0NTQyMTE5ODU1 | 9,175 | Add new run_swag example | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
This PR adds a new example for multiple-choice using Trainer and Datasets, and moves the older one to the legacy folder. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9175/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9175",
"html_url": "https://github.com/huggingface/transformers/pull/9175",
"diff_url": "https://github.com/huggingface/transformers/pull/9175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9175.patch",
"merged_at": 1608319165000
} |
https://api.github.com/repos/huggingface/transformers/issues/9174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9174/comments | https://api.github.com/repos/huggingface/transformers/issues/9174/events | https://github.com/huggingface/transformers/pull/9174 | 770,323,063 | MDExOlB1bGxSZXF1ZXN0NTQyMDg4OTY4 | 9,174 | [WIP] Adapt Cookie Cutter For EncoderDecoder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Cleaner version here: https://github.com/huggingface/transformers/pull/9251"
] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9174/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9174/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9174",
"html_url": "https://github.com/huggingface/transformers/pull/9174",
"diff_url": "https://github.com/huggingface/transformers/pull/9174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9174.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9173/comments | https://api.github.com/repos/huggingface/transformers/issues/9173/events | https://github.com/huggingface/transformers/issues/9173 | 770,265,980 | MDU6SXNzdWU3NzAyNjU5ODA= | 9,173 | GPT2 eval with attention_mask not returning expected result | {
"login": "dan-i",
"id": 17403280,
"node_id": "MDQ6VXNlcjE3NDAzMjgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17403280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dan-i",
"html_url": "https://github.com/dan-i",
"followers_url": "https://api.github.com/users/dan-i/followers",
"following_url": "https://api.github.com/users/dan-i/following{/other_user}",
"gists_url": "https://api.github.com/users/dan-i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dan-i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dan-i/subscriptions",
"organizations_url": "https://api.github.com/users/dan-i/orgs",
"repos_url": "https://api.github.com/users/dan-i/repos",
"events_url": "https://api.github.com/users/dan-i/events{/privacy}",
"received_events_url": "https://api.github.com/users/dan-i/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @dan-i, \r\n\r\n1) You should mask the labels of the first token it's not enough to set the attention_mask to 0 -> `labels=torch.tensor([-100, ...])`\r\n2) You have to change the `position_ids` to ensure the expected behavior in GPT2, *i.e.* make sure that in the second case the position_ids are [0, 0, 1, 2] instead of [0, 1, 2, 3]",
"thank patrick. seems to work if you just do labels to -100 where the pads are without mask and position. at least it returns same result as individual sentence scores. for anybody else that needs gpt2 batch sentence perplexity scores.\r\n\r\n```\r\ndef calc_loss(sentences,inputs,logits):\r\n # calc loss from logits returned from model\r\n lines_len = torch.sum(inputs['attention_mask'], dim=1) # quick way to get sent len. mask isn't used for anything else\r\n for line_ind in range(len(sentences)): \r\n log_prob = 0.0\r\n for token_ind in range(lines_len[line_ind] - 1): \r\n token_prob = F.softmax(logits[line_ind, token_ind], dim=0)\r\n token_id = inputs['input_ids'][line_ind, token_ind + 1]\r\n log_prob += torch.log(token_prob[token_id])\r\n sentloss=abs(log_prob)\r\n toks=lines_len[line_ind]\r\n loss=sentloss/(toks-1)\r\n print(sentences[line_ind],' loss=',loss.item(),'toks=',toks.item(),'sentloss=',sentloss.item())\r\n\r\nsentences=[\"Hello, my\",\"Hello, my dog\",\"Hello, my dog is a dog\"]\r\ntokenizer.pad_token = tokenizer.eos_token # set pad to eos\r\ninputs = tokenizer(sentences, return_tensors=\"pt\", padding=True) # right side padding as usual\r\n# mask of label_id's\r\nlabel_ids=inputs['input_ids'].clone()\r\nlabel_ids[label_ids==tokenizer.encode(tokenizer.pad_token)[0]] = -100\r\n# one shot score the batch\r\nwith torch.no_grad():\r\n model.eval()\r\n loss, logits = model(input_ids=inputs['input_ids'], labels=label_ids)[:2]\r\n# calc loss per sentence\r\ncalc_loss(sentences,inputs,logits)\r\n```\r\noutputs:\r\nHello, my loss= 3.069786548614502 toks= 3 sentloss= 6.139573097229004\r\nHello, my dog loss= 4.247798442840576 toks= 4 sentloss= 12.74339485168457\r\nHello, my dog is a dog loss= 3.671175003051758 toks= 7 sentloss= 22.027050018310547\r\n"
] | 1,608 | 1,608 | 1,608 | NONE | null | Is this not a correct use of attention_mask with padding (trying to batch sentence eval), or is it a bug? Shouldn't the returned result be the same in each case because of the mask?
```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained('gpt2', return_dict=True)

with torch.no_grad():
    model.eval()
    loss,logits = model(input_ids=torch.tensor([15496,11,616]), attention_mask=torch.tensor([1,1,1]), labels=torch.tensor([15496,11,616]))[:2]
    print(loss) # 3.0698 correct answer
    loss,logits = model(input_ids=torch.tensor([50256,15496,11,616]), attention_mask=torch.tensor([0,1,1,1]), labels=torch.tensor([50256,15496,11,616]))[:2]
    print(loss) # 8.96 - shouldn't it be 3.0698 as above?
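    # Fix sketched in the comments above (not part of the original report),
    # with the label mask extended: labels are shifted right inside the model,
    # so the label predicted from the pad position must be -100 as well, and
    # position_ids must restart at 0 for the first real token.
    loss,logits = model(input_ids=torch.tensor([50256,15496,11,616]), attention_mask=torch.tensor([0,1,1,1]), position_ids=torch.tensor([0,0,1,2]), labels=torch.tensor([-100,-100,11,616]))[:2]
    print(loss) # should now match the unpadded loss (~3.0698)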
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9173/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9172/comments | https://api.github.com/repos/huggingface/transformers/issues/9172/events | https://github.com/huggingface/transformers/pull/9172 | 770,261,089 | MDExOlB1bGxSZXF1ZXN0NTQyMDM4MDE3 | 9,172 | [Flax] Implement FlaxElectraModel, FlaxElectraForMaskedLM, FlaxElectraForPreTraining | {
"login": "chris-tng",
"id": 2324137,
"node_id": "MDQ6VXNlcjIzMjQxMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2324137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chris-tng",
"html_url": "https://github.com/chris-tng",
"followers_url": "https://api.github.com/users/chris-tng/followers",
"following_url": "https://api.github.com/users/chris-tng/following{/other_user}",
"gists_url": "https://api.github.com/users/chris-tng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chris-tng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chris-tng/subscriptions",
"organizations_url": "https://api.github.com/users/chris-tng/orgs",
"repos_url": "https://api.github.com/users/chris-tng/repos",
"events_url": "https://api.github.com/users/chris-tng/events{/privacy}",
"received_events_url": "https://api.github.com/users/chris-tng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"Hi @chris-tng -- thanks for trying out Flax in HF Transformers!\r\n\r\nA quick comment on `nn.compact` and `setup` (I work on Flax) -- indeed if you want access to submodules such as for transfer learning then using `setup` is the way to go. I skimmed the PR and see that you use `setup` in some places and `nn.compact` in others? I'm curious whether you found `nn.compact` more useful in particular settings.\r\n\r\nIndeed `setup` is more similar to the PyTorch style (though you still get shape inference if you use `nn.compact` in modules that don't have submodules). `nn.compact` is nice if you want to use loops or conditionals to define submodules based on hyperparameters, and some people also prefer how it \"co-locates\" their submodule definitions and usage. But ultimate it's somewhat a matter of preference.\r\n\r\n(Please do let us know whatever other thoughts or questions on Flax on our discussion board: https://github.com/google/flax/discussions)\r\n\r\nHappy holidays and new year!",
"> shape\r\n\r\nHey @avital,\r\n\r\nThanks a lot for your input here! That's very useful. Most of the main contributors to Transformers are on holiday at the moment and this is a rather big design decision to make going forward with Flax, so I think we'll have to wait here until early January until everybody is back (@sgugger, @LysandreJik, @mfuntowicz) \r\n\r\nHappy holiday to you as well :-) ",
"Hi @avital ,\r\n\r\nApology for my delayed response. I appreciate your great work on Flax. Regarding the use of `setup()` and `nn.compact`, personally I find `setup` works better for testing submodules. This is useful for converting and debugging the module (and submodules). For instance, I can create a model/module with many submodules:\r\n\r\n```python\r\nclass Dummy(nn.Module):\r\n\r\n def setup(self):\r\n self.submodule1 = nn.Dense(10)\r\n self.submodule2 = MyLayerNorm()\r\n\r\n def __call__(self):\r\n # do something here\r\n```\r\n\r\nAfter loading model weights from a dict, I can access/debug submodule by simply accessing the attribute: `dummy.submodule1`, `dummy.submodule2`. From this, I can debug forward pass, check model weights of invididual submodule.\r\n\r\nShameless plug, I wrote a blog post about porting huggingface pytorch model to flax, [here](https://chris-tng.github.io/blog/transformer/nlp/flax/pytorch/jax/2020/12/16/tips-flax.html). I'm a new Flax user so please correct me if I'm missing anything.\r\n\r\nHappy holiday and happy new year to everyone 🎄 🍾 ",
"Hey @chris-tng, \r\n\r\nsorry to had you wait for this long. I'll solve the merge conflicts in your PR and then use your PR to change the `@nn.compact` to `setup` in all other flax models as well so that we have a common standard now. Since most of our users are used to the \"PyTorch\" style and I only see advantages for our library philosophy:\r\n\r\n- We base most design decisions on the PyTorch style\r\n- We prefer slightly less compact readable code over the slightly more \"magic\" functionalities that might reduce code\r\n- To me the are no real downsides to using `setup`",
"Intermediate state is saved here: #9484 will push to this PR on Monday the latest",
"Hey @chris-tng,\r\n\r\nI noticed that we will probably have to wait a bit to get this merged: https://github.com/google/flax/pull/683 to be able to continue the PR. Will keep you up-to-date :-)",
"Hi folks, sorry for the delay with the new-year shuffle and school shutdown.\r\n\r\ngoogle/flax#683 required a bit more conversation and updating some other codebases but now it's merged! If you have a moment, please take a look and see if it helps unblock progress. We'll release Flax 0.4.0 soon, but installing from GitHub now is the way to go.",
"Hey, sorry for barging in\r\nI was needing a small BERT-like model in Jax and so I've recently updated this in a local branch to work with the Flax refactoring that makes checkpoints directly compatible with PyTorch (plus fixing some other issues that had gone through the cracks)\r\nShould I push directly to this branch or make a new PR from my fork? Also should I wait #11364 and update my code accordingly?",
"> Hey, sorry for barging in\r\n> I was needing a small BERT-like model in Jax and so I've recently updated this in a local branch to work with the Flax refactoring that makes checkpoints directly compatible with PyTorch (plus fixing some other issues that had gone through the cracks)\r\n> Should I push directly to this branch or make a new PR from my fork? Also should I wait #11364 and update my code accordingly?\r\n\r\nHey @CoderPat,\r\n\r\nIt would be great if you could wait until #11364 is merged (should be done in the next 2 days). The PR fixes a couple of bugs :-)",
"No problem @patrickvonplaten! Also regarding git logistics, is it better to ask @chris-tng for permission to push directly to his branch?",
"> No problem @patrickvonplaten! Also regarding git logistics, is it better to ask @chris-tng for permission to push directly to his branch?\r\n\r\nI think it's alright to copy past the code that is still useful and open a new branch, if you'd like to add Electra :-). On the branch we should then give credit to @chris-tng , but since the PR is quite old now I think he would be fine if we close this one and open a new one (Please let me know if this is not the case @chris-tng :-)) . #11364 should be the last refactor before the \"fundamental\" Flax design is finished.",
"Just to confirm @patrickvonplaten , the flax refactor is merged and the structure should be stable enough that I can work on implementing Electra right?",
"Exactly @CoderPat - very much looking forward to your PR :-)"
] | 1,608 | 1,648 | null | CONTRIBUTOR | null | # What does this PR do?
1. Implement the Flax version of the Electra model: `FlaxElectraModel`, `FlaxElectraForMaskedLM`, `FlaxElectraForPreTraining`. Most of the code is taken from the FlaxBert version, with changes in parameters and the forward pass.
2. Adjust `convert_to_pytorch` to load weights for Electra
3. Implement `FlaxElectraGeneratorPredictions` and `FlaxElectraDiscriminatorPredictions` for the generator and discriminator prediction heads.
4. Implement test in `tests/test_modeling_flax_electra.py`
The forward pass can be verified by running:
```shell
pytest tests/test_modeling_flax_electra.py
```
Hi @patrickvonplaten, @mfuntowicz, I've seen your work on FlaxBert, so I'm tagging you in case you want to review. Please note that I use Flax's `setup` instead of the `@nn.compact` decorator, since the former:
- allows testing and inspecting submodules
- separates submodule declaration from the forward pass (the forward method can get very long with `@nn.compact`)
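For illustration, here is what the two styles look like with public Flax APIs (a sketch, not code from this PR):
```python
import flax.linen as nn

class WithSetup(nn.Module):
    def setup(self):
        # Submodules are attributes, so they can be inspected and tested directly.
        self.dense = nn.Dense(features=10)

    def __call__(self, x):
        return self.dense(x)

class WithCompact(nn.Module):
    @nn.compact
    def __call__(self, x):
        # Submodule is defined inline at first use.
        return nn.Dense(features=10)(x)
```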
I'm happy to revert this change to make code style consistent.
Let me know if you have any questions or feedbacks.
Thanks.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? Yes, test added `tests/test_modeling_flax_electra.py`
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9172/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9172/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9172",
"html_url": "https://github.com/huggingface/transformers/pull/9172",
"diff_url": "https://github.com/huggingface/transformers/pull/9172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9172.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9171/comments | https://api.github.com/repos/huggingface/transformers/issues/9171/events | https://github.com/huggingface/transformers/issues/9171 | 770,220,529 | MDU6SXNzdWU3NzAyMjA1Mjk= | 9,171 | Trainer returns logits of only one sequence instead of entire evaluation dataset | {
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"**Update:** I slightly changed my script based on the [text_classification example notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) and now it works. Don't know what made the difference, but in case someone else has a similar problem, I recommend following the steps in this script. "
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.0.1 (also reproduced the same issue with 3.5.1)
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): no
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: (don't know. Probably not)
### Who can help
@sgugger
(The use-case is basically the same as reported in this issue: https://github.com/huggingface/transformers/issues/9160
But I'm opening a new issue, because this is a separate problem)
## Information
The problem arises when using:
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
**Description:**
I'm trying to fine-tune a pre-trained NLI model (`ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli`) on a dataset of around 276,000 hypothesis-premise pairs. I'm following the instructions from the docs [here](https://huggingface.co/transformers/custom_datasets.html) and [here](https://huggingface.co/transformers/training.html).
- When I run the training, it seems like the fine-tuning works (it does the training and saves the checkpoints).
- But during evaluation, the trainer/model seems to compute the logits for only one sequence instead of the entire evaluation dataset, so the resulting metrics don't make sense. This also makes me doubt whether the training is actually executed on the entire training dataset, or only on the first sequence (the loss stays between 7.2 and 7.0 for most of the training steps).
- I've tried to change the input and dataset format in many different ways. One possible issue is that I am somehow only passing the same sequence into the trainer. But I've made sure that this is not the case (see also the printed input_ids in the code snippet below).
```
### load model and tokenizer
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
max_length = 256
hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name) # num_labels=3
model.config
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
# mixed-precision reference: https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html
model.to(device)
model.train();
# ... some data preprocessing
### use hf datasets object + tokenize
from datasets import Dataset
# create hf dataset objects
dataset_train = Dataset.from_pandas(df_train)
dataset_val = Dataset.from_pandas(df_val)
dataset_test = Dataset.from_pandas(df_test)
## tokenize on dataset object
def tokenize(batch):
    return tokenizer(batch['premise'], batch['hypothesis'], max_length=max_length, return_token_type_ids=True, truncation=False, padding=True)
dataset_train = dataset_train.map(tokenize, batched=True, batch_size=len(df_train))
dataset_val = dataset_val.map(tokenize, batched=True, batch_size=len(df_val))
dataset_test = dataset_test.map(tokenize, batched=True, batch_size=len(df_test))
# to tensors
dataset_train.set_format('torch', columns=['input_ids', 'attention_mask', 'label', 'token_type_ids']) # format_kwargs=torch.LongTensor()
dataset_val.set_format('torch', columns=['input_ids', 'attention_mask', 'label', 'token_type_ids'])
dataset_test.set_format('torch', columns=['input_ids', 'attention_mask', 'label', 'token_type_ids'])
print(dataset_val["input_ids"][:2])
# ! to show that the tokenized input sequences are not the same
# output:
tensor([[ 0, 45784, 5, 3302, 9, 5222, 274, 1729, 7, 1719,
5, 1403, 12, 28326, 7, 1157, 1232, 6, 11, 1989,
10801, 1041, 6, 2388, 8243, 11, 1818, 173, 4, 2,
2, 133, 2788, 16, 59, 592, 1134, 4, 286, 1246,
35, 6300, 8, 3111, 6, 786, 12, 12063, 15343, 1134,
6, 6610, 1134, 6, 1692, 1380, 8, 2038, 1134, 6,
223, 25943, 33421, 5688, 1134, 2, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1],
[ 0, 1121, 570, 51, 1381, 7, 989, 106, 19, 1085,
13, 10, 353, 4, 2, 2, 133, 2788, 16, 59,
6642, 8, 1318, 9, 301, 4, 286, 1246, 35, 3039,
2591, 6, 1265, 22830, 6, 2040, 6, 6642, 194, 22830,
6, 6642, 194, 2919, 6, 9057, 6, 1265, 2919, 2,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1]])
# compute metrics with trainer https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing#scrollTo=N8J-TLhBuaOf
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    print(pred.predictions, pred.predictions.argmax(-1))
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary', pos_label=0)
    acc = accuracy_score(labels, preds)
    return {
        'accuracy': acc,
        'f1': f1,
        'precision': precision,
        'recall': recall
    }
## training
from transformers import Trainer, TrainingArguments
# https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=16,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=100,
    fp16=True
)
trainer = Trainer(
    model=model,                 # the instantiated 🤗 Transformers model to be trained
    args=training_args,          # training arguments, defined above
    train_dataset=dataset_train, # training dataset
    eval_dataset=dataset_val,    # evaluation dataset
    compute_metrics=compute_metrics,
    tokenizer=tokenizer
)
trainer.train()
### evaluate
# for some reason the logits are the same for each prediction
trainer.evaluate()
[[ 2.055 -7.992 2.053]
[ 2.055 -7.992 2.053]
[ 2.053 -7.984 2.05 ]
...
[ 2.055 -7.992 2.053]
[ 2.055 -7.992 2.053]
[ 2.055 -7.992 2.053]] [0 0 0 ... 0 0 0]
{'eval_accuracy': 0.499,
'eval_f1': 0.6657771847898598,
'eval_loss': 0.6932016611099243,
'eval_precision': 0.499,
'eval_recall': 1.0}
```
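A quick sanity check that is sometimes useful here (a sketch, not from the original thread; it reuses `model`, `dataset_val` and `device` defined above) is to bypass `Trainer` and compare raw model outputs for two different validation rows:
```python
import torch

model.eval()
with torch.no_grad():
    for i in range(2):
        row = dataset_val[i]
        batch = {k: row[k].unsqueeze(0).to(device) for k in ("input_ids", "attention_mask")}
        print(i, model(**batch).logits)
# If these two rows already produce identical logits, the problem is in the
# model/inputs rather than in Trainer's evaluation loop.
```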
## Expected behavior
Get different logits for each sequence/model prediction, and therefore meaningful metrics output on the evaluation dataset.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9171/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9170/comments | https://api.github.com/repos/huggingface/transformers/issues/9170/events | https://github.com/huggingface/transformers/pull/9170 | 770,171,673 | MDExOlB1bGxSZXF1ZXN0NTQxOTY1MDU3 | 9,170 | Put all models in the constants | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"An alternative is to remove all content, but the previous state where one variable as some keys but the others not leads to failures.",
"Yes, we really need to remove all of this @LysandreJik @thomwolf ",
"We do, but right now we still use them in the tests",
"It was a bit confusing to me which to add in the constants and which not.. 😅 ",
"@sgugger one more small fix you can add is in tapas.rst under \"Usage: inference\", there should be one more space:\r\n\r\n\r\n\r\n(Sorry, if I see more things I make a new PR myself :p)",
"Pushing your fix on master directly Niels"
] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
It was impossible to use all pretrained checkpoints in the TAPAS tokenizer file because they were not listed in the file's constants. This PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9170/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9170",
"html_url": "https://github.com/huggingface/transformers/pull/9170",
"diff_url": "https://github.com/huggingface/transformers/pull/9170.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9170.patch",
"merged_at": 1608222202000
} |
https://api.github.com/repos/huggingface/transformers/issues/9169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9169/comments | https://api.github.com/repos/huggingface/transformers/issues/9169/events | https://github.com/huggingface/transformers/pull/9169 | 770,132,619 | MDExOlB1bGxSZXF1ZXN0NTQxOTMzNDg4 | 9,169 | Added TF TransfoXL Sequence Classification | {
"login": "spatil6",
"id": 6419011,
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spatil6",
"html_url": "https://github.com/spatil6",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"repos_url": "https://api.github.com/users/spatil6/repos",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | This PR implements sequence classification for the TF TransfoXL model.
TFTransfoXLForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1, GPT-2) do.
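For context, "uses the last token" here means pooling the score at the last non-padding position, the way GPT-2's sequence-classification head does. A minimal sketch of that pooling (illustrative only, not the PR's exact code):
```python
import tensorflow as tf

def pool_last_token(logits, input_ids, pad_token_id):
    # logits: (batch, seq_len, num_labels); returns (batch, num_labels)
    if pad_token_id is None:
        last_idx = tf.fill([tf.shape(input_ids)[0]], tf.shape(input_ids)[1] - 1)
    else:
        not_pad = tf.cast(tf.not_equal(input_ids, pad_token_id), tf.int32)
        last_idx = tf.reduce_sum(not_pad, axis=-1) - 1  # index of last real token
    return tf.gather(logits, last_idx, batch_dims=1)
```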
Fixes #7623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik @jplu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9169/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9169",
"html_url": "https://github.com/huggingface/transformers/pull/9169",
"diff_url": "https://github.com/huggingface/transformers/pull/9169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9169.patch",
"merged_at": 1608385445000
} |
https://api.github.com/repos/huggingface/transformers/issues/9168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9168/comments | https://api.github.com/repos/huggingface/transformers/issues/9168/events | https://github.com/huggingface/transformers/pull/9168 | 770,091,454 | MDExOlB1bGxSZXF1ZXN0NTQxOTAwODY4 | 9,168 | Fix gradient clipping for Sharded DDP | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
As mentioned in the discussion of #9156, `Trainer` does not do gradient clipping correctly when using a sharded optimizer: with sharding, each process only holds the gradients/optimizer state for its own shard, so the global gradient norm cannot be computed locally with `torch.nn.utils.clip_grad_norm_`. This PR fixes that, and also allows `Trainer` to not perform any gradient clipping (by passing `None` or `0` to the corresponding argument).
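For context, a sketch of the dispatch (illustrative only; `optimizer`, `model` and `max_grad_norm` are assumed from the surrounding training loop, and the PR's real code lives in `Trainer`):
```python
import torch
from fairscale.optim import OSS

if max_grad_norm is not None and max_grad_norm > 0:
    if isinstance(optimizer, OSS):
        # OSS computes the norm across all shards/ranks internally.
        optimizer.clip_grad_norm(max_grad_norm)
    else:
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
```
| {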
"url": "https://api.github.com/repos/huggingface/transformers/issues/9168/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9168/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9168",
"html_url": "https://github.com/huggingface/transformers/pull/9168",
"diff_url": "https://github.com/huggingface/transformers/pull/9168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9168.patch",
"merged_at": 1608216264000
} |
https://api.github.com/repos/huggingface/transformers/issues/9167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9167/comments | https://api.github.com/repos/huggingface/transformers/issues/9167/events | https://github.com/huggingface/transformers/pull/9167 | 770,089,471 | MDExOlB1bGxSZXF1ZXN0NTQxODk5MzYz | 9,167 | Add disclaimer to TAPAS rst file | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9167/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9167",
"html_url": "https://github.com/huggingface/transformers/pull/9167",
"diff_url": "https://github.com/huggingface/transformers/pull/9167.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9167.patch",
"merged_at": 1608215647000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9166/comments | https://api.github.com/repos/huggingface/transformers/issues/9166/events | https://github.com/huggingface/transformers/issues/9166 | 770,061,905 | MDU6SXNzdWU3NzAwNjE5MDU= | 9,166 | Language modeling logging | {
"login": "Ganeshgit96",
"id": 76167840,
"node_id": "MDQ6VXNlcjc2MTY3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/76167840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ganeshgit96",
"html_url": "https://github.com/Ganeshgit96",
"followers_url": "https://api.github.com/users/Ganeshgit96/followers",
"following_url": "https://api.github.com/users/Ganeshgit96/following{/other_user}",
"gists_url": "https://api.github.com/users/Ganeshgit96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ganeshgit96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ganeshgit96/subscriptions",
"organizations_url": "https://api.github.com/users/Ganeshgit96/orgs",
"repos_url": "https://api.github.com/users/Ganeshgit96/repos",
"events_url": "https://api.github.com/users/Ganeshgit96/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ganeshgit96/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think @LysandreJik will know more what to do for the logging. Also, we usually ask questions like this on the [forum](https://discuss.huggingface.co/) and keep issues for bugs/feature requests.",
"If you want to output everything to a file, you can redirect the standard output/standard error to a file. We use the official `logging` package for logs, and they have a full chapter en redirecting to a file: https://docs.python.org/3/howto/logging.html#logging-to-a-file",
"It is also possible to redirect the output from the script (without modifying anything). I use the following code to both have the output on my screen (console) and a copy in my log files (with stderr having the python logging logs):\r\n\r\n```bash\r\npython run_your_script.py \\\r\n > >(tee -a stdout.log) \\\r\n 2> >(tee -a stderr.log >&2)\r\n```\r\n\r\n(Found this gem on StackOverflow somewhere.)",
"@Querela and @LysandreJik, thanks a lot for the help. closing the issues."
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 4.0.0-rc-1
- Platform: Linux
- Python version: 3.7.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): 1.14.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
I am new to the logging package and I would like to know whether everything printed to the screen can be redirected to a text file when I am running the language modeling script. I am not sure if this is the correct forum to ask, but could you please help me with this?
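One minimal approach, building on the answers above (a sketch using only the standard `logging` module; the file name is just an example, and this captures Python logging output rather than plain `print`):
```python
import logging

# Route all Python log records (including transformers' loggers) to a file.
# For a full copy of everything shown on screen, use the shell `tee`
# redirection shown in the comments above instead.
logging.basicConfig(
    filename="train.log",  # hypothetical file name
    filemode="a",
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    level=logging.INFO,
)
```
| {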
"url": "https://api.github.com/repos/huggingface/transformers/issues/9166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9166/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9165/comments | https://api.github.com/repos/huggingface/transformers/issues/9165/events | https://github.com/huggingface/transformers/issues/9165 | 770,051,701 | MDU6SXNzdWU3NzAwNTE3MDE= | 9,165 | Roberta python Tokenizer encodes differently across transformers==2.11 and transformers==4.0.1 | {
"login": "ypapanik",
"id": 22024955,
"node_id": "MDQ6VXNlcjIyMDI0OTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/22024955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ypapanik",
"html_url": "https://github.com/ypapanik",
"followers_url": "https://api.github.com/users/ypapanik/followers",
"following_url": "https://api.github.com/users/ypapanik/following{/other_user}",
"gists_url": "https://api.github.com/users/ypapanik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ypapanik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ypapanik/subscriptions",
"organizations_url": "https://api.github.com/users/ypapanik/orgs",
"repos_url": "https://api.github.com/users/ypapanik/repos",
"events_url": "https://api.github.com/users/ypapanik/events{/privacy}",
"received_events_url": "https://api.github.com/users/ypapanik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, note that 'suicide' and ' suicide' are indeed different tokens with respect to SentencePieces, which is used by Roberta.\r\n\r\nSentencePieces keeps white space intact. So we expect different tokens from 'suicide' and ' suicide'. \r\n\r\nI assume in the old transformers 2.x, strip was applied in the tokenizer before it was passed to SentencePiece, which is incorrect. ",
"Thank you for letting me know. Regardless, the above creates some pretty big inconsistency issues and for example I end up with a model trained with 2.x, when used to predict in an env with transformers >= 3.x to have major discrepancy in predicting accuracy.\r\nThis should be at least flagged somewhere and there should be some \"backward compatibility\" option here.",
"You could call `text.strip()` before passing it to the tokenizer. ",
"Sorry, just to clarify. This doesn't have to do on how I can personally solve my issue with a hack. Of course I can do that or simply use the same transformers version when training/predicting. \r\nThis has to do on how a production system might get affected by this, in my view, major discrepancy. Tokenization behaviour shouldn't change \"silently\" across versions.\r\nThink of the following case: a product trained to do sentence similarity. The model is trained to understand similarity of sentences, thus tokens. Now if you happily change tokenization across versions, you end up with a model that won't work properly when you upgrade transformers library. That should not happen, or at least get flagged somewhere.",
"The old tokenization was buggy and incorrect (based on the info here). I don't see a point to add an option to enable back buggy / incorrect behavior of a component. That would be quite bad to have flags that re-enable all kind of buggy behavior you had in old versions. \r\n\r\nIf you need stable results in production then there is no other way than to stay at one version or otherwise have sufficient tests to ensure the functionality of your system when you upgrade your framework. \r\n\r\nIt is the same with every framework, that between major versions that there can be significant changes that can impact your software. \r\n\r\n(Just my personal opinion, I don't work for huggingface. I don't know if the bug in the old tokenizer was known and fixed (then it is likely mentioned in the release notes) or if the fix was just a by product by some other commit.)\r\n\r\n\r\nIt appears to be fixed here\r\nhttps://github.com/huggingface/transformers/pull/5287\r\n\r\nAnd was mentioned in the subsequent release ",
"Thank you this slipped my attention, it was nevertheless a pretty significant issue. Apparently it's connected to that: https://github.com/huggingface/transformers/issues/5256\r\nI guess this should be expected with a framewrok developing so quickly!",
"@ypapanik I am a bit surprised that it makes such a big difference for your use case. You wouldn't expect this as the only difference is a potential white space at the beginning of the text.\r\n\r\nMy roberta models changed performance in downstream tasks only slightly between the old tokenizer and the newer tokenizer (performance improved like 0.1 percentage points actually)."
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
- `transformers` version: 2.11 vs 4.0.1
- Platform: Any
- Python version: 3.8.3
- PyTorch version (GPU?): Any
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@mfuntowicz
## Information
I am attaching two samples below. In short, the word "suicide" gets encoded with different tokens across two different transformers versions. This leads to the same fine-tuned model behaving differently depending on the transformers version used, which is alarming.


Model I am using (Bert, XLNet ...): Roberta-base
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run the following code with transformers==2.11 and 4.0.1
`from transformers import AutoTokenizer`
`tokenizer = AutoTokenizer.from_pretrained('roberta-base', use_fast=False)`
`tokenizer.encode(' suicide ')`
`tokenizer.encode('suicide')`
`tokenizer.encode(' suicide')`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
A tokenizer for the same pretrained model should tokenize words identically.
<!-- A clear and concise description of what you would expect to happen. -->
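As a minimal sketch of the workaround mentioned in the discussion above (not an official fix; it simply sidesteps the version-dependent leading-whitespace handling), one can normalize the input before encoding:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('roberta-base', use_fast=False)
text = ' suicide '
# stripping leading/trailing whitespace removes the part of the input whose
# handling changed between versions, so both versions should agree here
ids = tokenizer.encode(text.strip())
```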
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9165/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9164/comments | https://api.github.com/repos/huggingface/transformers/issues/9164/events | https://github.com/huggingface/transformers/pull/9164 | 770,032,503 | MDExOlB1bGxSZXF1ZXN0NTQxODY2NzEx | 9,164 | Add clipping to relative positional embedding | {
"login": "hadaev8",
"id": 20247085,
"node_id": "MDQ6VXNlcjIwMjQ3MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/20247085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadaev8",
"html_url": "https://github.com/hadaev8",
"followers_url": "https://api.github.com/users/hadaev8/followers",
"following_url": "https://api.github.com/users/hadaev8/following{/other_user}",
"gists_url": "https://api.github.com/users/hadaev8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadaev8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadaev8/subscriptions",
"organizations_url": "https://api.github.com/users/hadaev8/orgs",
"repos_url": "https://api.github.com/users/hadaev8/repos",
"events_url": "https://api.github.com/users/hadaev8/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadaev8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten @LysandreJik @julien-c @zhiheng-huang",
"Hey @hadaev8,\r\n\r\nThanks for your PR. It would be awesome if you could provide a code snippet to this PR that shows a case where the current implementation of BERT's relative positional embeddings would fail. Hope it's fine to tag the original author here: @zhiheng-huang for some discussion. ",
"@patrickvonplaten \r\nColab notebook\r\nhttps://colab.research.google.com/drive/1bAwwNMbh27JW0H6uWG9rZgyjz_b0eB1M?usp=sharing\r\n",
"@patrickvonplaten \r\nAny update?",
"Hey @hadaev8,\r\n\r\nI'm having a hard time deciding here without the opinion of the official author @zhiheng-huang . @zhiheng-huang it would be great if you could leave a comment here :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@patrickvonplaten \r\nWell, in current state model with rel pos just would not work as expected eg process longer input.",
"@hadaev8, \r\n\r\nAlright, let's try to fix it together! It would be awesome if you could post a reproducible code snippet here so that we can see when an error arises",
"@patrickvonplaten \r\nHere it doesnt work\r\nhttps://colab.research.google.com/drive/1bAwwNMbh27JW0H6uWG9rZgyjz_b0eB1M?usp=sharing\r\nHere it work\r\nhttps://colab.research.google.com/drive/1OIHjR2kVndDzwro7n_SgkpJG4YwjgGQY?usp=sharing",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
This PR adds clipping to the distance embedding as described in the paper [Improve Transformer Models with Better Relative Position Embeddings](https://arxiv.org/abs/2009.13658).
Without this addition, a model with relative positional embeddings will return an error if the input length is above the `max_position_embeddings` parameter.
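A minimal sketch of the clipping idea (variable names are illustrative, not the exact ones used in the PR; `max_position_embeddings` is assumed to bound the relative-distance table):

```python
import torch

seq_len, max_position_embeddings = 1024, 512
position_ids = torch.arange(seq_len)
# pairwise relative distances, in [-(seq_len - 1), seq_len - 1]
distance = position_ids.view(-1, 1) - position_ids.view(1, -1)
# clamp so that lookup indices stay inside the embedding table even
# when the input is longer than max_position_embeddings
distance = torch.clamp(distance, -max_position_embeddings + 1, max_position_embeddings - 1)
# shift into [0, 2 * max_position_embeddings - 2] for the embedding lookup
indices = distance + max_position_embeddings - 1
```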
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9164/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9164",
"html_url": "https://github.com/huggingface/transformers/pull/9164",
"diff_url": "https://github.com/huggingface/transformers/pull/9164.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9164.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9163/comments | https://api.github.com/repos/huggingface/transformers/issues/9163/events | https://github.com/huggingface/transformers/pull/9163 | 769,990,931 | MDExOlB1bGxSZXF1ZXN0NTQxODQ1Mjk2 | 9,163 | Fix mixed precision in TF models | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR aims to fix the mixed precision issues that appear when `tf.keras.mixed_precision.experimental.set_policy()` is set to something other than `tf.float32`. Along the same lines, this PR also aims to fix some TFLite quantization issues.
Before this PR can move forward, PR #9418 has to be merged.
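For reference, a sketch of the setting this PR targets (the model choice and input ids are illustrative):

```python
import tensorflow as tf
from transformers import TFBertModel

# enable a non-float32 global policy: the configuration that currently breaks
tf.keras.mixed_precision.experimental.set_policy("mixed_float16")

model = TFBertModel.from_pretrained("bert-base-uncased")
# should run without dtype errors once the fix is in
outputs = model(tf.constant([[101, 2023, 2003, 1037, 3231, 102]]))
```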
Fixes # (issue)
#7052
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9163/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9163",
"html_url": "https://github.com/huggingface/transformers/pull/9163",
"diff_url": "https://github.com/huggingface/transformers/pull/9163.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9163.patch",
"merged_at": 1611230412000
} |
https://api.github.com/repos/huggingface/transformers/issues/9162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9162/comments | https://api.github.com/repos/huggingface/transformers/issues/9162/events | https://github.com/huggingface/transformers/issues/9162 | 769,695,682 | MDU6SXNzdWU3Njk2OTU2ODI= | 9,162 | IndexError: index out of range in self while using longformers when i try to pass token_type_ids | {
"login": "yuvarajvc",
"id": 7568817,
"node_id": "MDQ6VXNlcjc1Njg4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7568817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvarajvc",
"html_url": "https://github.com/yuvarajvc",
"followers_url": "https://api.github.com/users/yuvarajvc/followers",
"following_url": "https://api.github.com/users/yuvarajvc/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvarajvc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvarajvc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvarajvc/subscriptions",
"organizations_url": "https://api.github.com/users/yuvarajvc/orgs",
"repos_url": "https://api.github.com/users/yuvarajvc/repos",
"events_url": "https://api.github.com/users/yuvarajvc/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvarajvc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can we assume that \"token_type_ids\" will not support in longformer?",
"Yes, actually yesterday a PR (#9152) was merged to update the docs stating the LongFormer does not support token type ids. ",
"@NielsRogge Thank you",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null |
- `transformers` version: 3.0.0
- Platform: windows
- Python version: 3.6.10 :: Anaconda, Inc.
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## To reproduce
import torch
from transformers import LongformerModel, LongformerTokenizer
model = LongformerModel.from_pretrained('allenai/longformer-base-4096')
tokenizer = LongformerTokenizer.from_pretrained('roberta-base')
SAMPLE_TEXT = ' '.join(['Hello world! '] * 100) # long input document
input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1
attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device)
segment_ids = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
outputs = model(input_ids=input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask,token_type_ids=segment_ids)
## Error info
IndexError Traceback (most recent call last)
<ipython-input-357-c7bf5dc7cbc9> in <module>
----> 1 outputs = model(input_ids=input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask,token_type_ids=segment_ids)
~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\.conda\envs\env\lib\site-packages\transformers\modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states)
675 encoder_attention_mask=None,
676 output_attentions=output_attentions,
--> 677 output_hidden_states=output_hidden_states,
678 )
679
~\.conda\envs\env\lib\site-packages\transformers\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states)
751
752 embedding_output = self.embeddings(
--> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
754 )
755 encoder_outputs = self.encoder(
~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\.conda\envs\env\lib\site-packages\transformers\modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
66
67 return super().forward(
---> 68 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
69 )
70
~\.conda\envs\env\lib\site-packages\transformers\modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
178 inputs_embeds = self.word_embeddings(input_ids)
179 position_embeddings = self.position_embeddings(position_ids)
--> 180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
182 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\.conda\envs\env\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
~\.conda\envs\env\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
## Expected behavior
If I am not passing segment ids (`token_type_ids`), the model executes fine, and it also executes when I pass segment ids of all zeros. However, when I pass segment ids of 1s (or a mix of 0s and 1s), I get the error "index out of range in self".
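Continuing the reproduction snippet above, the only safe call is with all-zero segment ids (or omitting the argument entirely), since Longformer reuses RoBERTa's embeddings, whose token-type vocabulary has a single entry (hence the index error). A sketch:

```python
# token_type_ids must be all zeros for Longformer (or left out altogether)
segment_ids = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device)
outputs = model(input_ids=input_ids, attention_mask=attention_mask,
                global_attention_mask=global_attention_mask,
                token_type_ids=segment_ids)
```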
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9162/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9161/comments | https://api.github.com/repos/huggingface/transformers/issues/9161/events | https://github.com/huggingface/transformers/issues/9161 | 769,556,446 | MDU6SXNzdWU3Njk1NTY0NDY= | 9,161 | Metric calculation across batches in seq2seq examples | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Definitely not correct. Additionally, the final batch might be much smaller (and still weighted evenly). To get \"perfect\" bleu scores use `run_distributed_eval.py`\r\n",
"Right, that's what I thought as well. If we think saving all the intermediate output and computing the bleu score once at the end of each validation epoch is too memory-intensive, one solution would be to accumulate intermediate BLEU-internal metrics, and compute the BLEU score at the end. This way the asymptotic memory complexity wouldn't depend on the validation set size. C.f. the AllenNLP implementation. At the very least, I think we should give a warning or something so that people realize this is not a precise number and wouldn't use it, say, in a paper.",
"Hey @ZhaofengWu \r\n\r\nThis is now fixed in the new `run_seq2seq.py` script."
] | 1,608 | 1,614 | 1,614 | CONTRIBUTOR | null | ### Who can help
@sshleifer, @patil-suraj
## Information
Currently, the seq2seq finetune script calculates the final metrics (BLEU or ROUGE) per batch and then simply averages these numbers across batches. Is this correct? For example, I just took a brief look at sacrebleu (which the script uses), and it seems that `corpus_bleu(preds, targets)` isn't necessarily equal to `(corpus_bleu(preds[:n], targets[:n]) + corpus_bleu(preds[n:], targets[n:])) / 2`.
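A small self-contained check with toy sentences (assuming the `sacrebleu` package's `corpus_bleu` API) illustrating that the per-batch average differs from the corpus-level score:

```python
import sacrebleu

preds = ["the cat sat on the mat", "hello world"]
refs = ["the cat is on the mat", "hello there big world"]

# corpus-level BLEU over all sentences at once
full = sacrebleu.corpus_bleu(preds, [refs]).score
# BLEU computed per "batch" of one sentence, then averaged
averaged = (sacrebleu.corpus_bleu(preds[:1], [refs[:1]]).score
            + sacrebleu.corpus_bleu(preds[1:], [refs[1:]]).score) / 2
print(full, averaged)  # the two values generally differ
```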
"url": "https://api.github.com/repos/huggingface/transformers/issues/9161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9161/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9160/comments | https://api.github.com/repos/huggingface/transformers/issues/9160/events | https://github.com/huggingface/transformers/issues/9160 | 769,358,412 | MDU6SXNzdWU3NjkzNTg0MTI= | 9,160 | Trainer bug? Loss and logits are “nan” when fine-tuning NLI model (both RoBERTa/BART) | {
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"**Update:** \r\nI reran the training in native PyTorch with the following code and I did not get the same issue. This means that there is some issue with the trainer?\r\n\r\n```\r\nimport torch\r\n\r\nclass XDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings, labels):\r\n self.encodings = encodings\r\n self.labels = labels\r\n def __getitem__(self, idx):\r\n #item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}\r\n item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}\r\n #item = {key: torch.as_tensor(val[idx]).to(device) for key, val in self.encodings.items()}\r\n item['labels'] = torch.as_tensor(self.labels[idx])\r\n #item['labels'] = torch.LongTensor(self.labels[idx])\r\n #item['labels'] = self.labels[idx]\r\n return item\r\n def __len__(self):\r\n return len(self.labels)\r\n\r\ndataset_train = XDataset(encodings_train, label_train)\r\ndataset_val = XDataset(encodings_val, label_val)\r\ndataset_test = XDataset(encodings_test, label_test)\r\n\r\n\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers import AdamW\r\n\r\ndevice = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\r\nmodel.to(device)\r\nmodel.train()\r\n\r\ntrain_loader = DataLoader(dataset_train, batch_size=16, shuffle=True)\r\n\r\noptim = AdamW(model.parameters(), lr=5e-5)\r\n\r\nfor epoch in range(1):\r\n for batch in train_loader:\r\n optim.zero_grad()\r\n input_ids = batch['input_ids'].to(device)\r\n attention_mask = batch['attention_mask'].to(device)\r\n token_type_ids = batch['token_type_ids'].to(device)\r\n labels = batch['labels'].to(device)\r\n print(labels)\r\n outputs = model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels)\r\n loss = outputs[0] # outputs.loss\r\n print(loss)\r\n loss.backward()\r\n optim.step()\r\n# Output: it prints the labels and the loss correctly!\r\n#tensor([2, 0, 2, 2, 2, 2, 0, 0, 0, 2, 2, 2, 0, 0, 0, 2], device='cuda:0')\r\n#tensor(0.6895, device='cuda:0', dtype=torch.float16, grad_fn=<NllLossBackward>) ....\r\n```\r\n\r\nWhen I rerun the model for a test inference after this native pytorch training step, it also returns logits and loss as expected (no \"nan\"). ",
"In the first snippet of code you convert your whole model to FP16 with `model.half()` (this is not in your second snippet of code). This is not how mixed-precision training works and you should pass the flag `fp16=True` to your `TrainingArguments`.",
"thanks, I don't know much about mixed-precision training (the only reason why I added model.half() is because I understood that it reduces memory usage). Now, when I add `fp16=True`, i get the error: \r\n`ValueError: Attempting to unscale FP16 gradients.` when running `trainer.train()`\r\n\r\n```\r\ntraining_args = TrainingArguments(\r\n output_dir='./results', # output directory\r\n num_train_epochs=1, # total number of training epochs\r\n per_device_train_batch_size=8, # batch size per device during training\r\n per_device_eval_batch_size=8, # batch size for evaluation\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=30,\r\n fp16=True\r\n)\r\n```",
"Cool, but when I remove the model.half(), it does return the loss, that's great!",
"Yes you have to remove that line, that's what I was saying :-)",
"Great, so I understand that I can use mixed precision training by simply passing the flag `fp16=True` without manual modifications to the model. Is there actually any good reason not to pass \"fp16=True\"? The articles on mixed precision training I've found seem to be very positive about it.\r\n\r\nIn any case, thanks for solving my issue! :) ",
"There are no reason not to use, no. Sometimes for debugging purposes or there may be one of the exotic models that don't support FP16, but in general, it's a good way to speed up training and saving GPU memory.\r\n\r\nClosing the issues since it's solved!",
"> there may be one of the exotic models that don't support FP16\r\n\r\nThat was my case with `ltgoslo/norbert` producing the nan loss with FP16. Setting `fp16` to `False` solved the issue, thanks!"
] | 1,608 | 1,630 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1 (also reproduced the same issue with 3.5.1)
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): no
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: (don't know. Probably not)
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
The problem arises when using:
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] my own task or dataset: (give details below)
**Description:**
I’m trying to fine-tune a pre-trained NLI model (`ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli`) on a dataset of around 276,000 hypothesis-premise pairs. I’m following the instructions from the docs [here](https://huggingface.co/transformers/custom_datasets.html) and [here](https://huggingface.co/transformers/training.html). When I run the training, it seems like the fine-tuning works (it does the training and saves the checkpoints), but `trainer.train()` and `trainer.evaluate()` return "nan" as the loss value.
**What I've tried:**
- I tried using both `ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli` and `facebook/bart-large-mnli` to make sure that it's not linked to specific model, but I get the issue for both models
- I tried following the advice in this [related github issue](https://github.com/huggingface/transformers/issues/1727), but adding `num_labels=3` to the config file does not solve the issue. (I think my issue is different because the models are already fine-tuned on NLI in my case)
- I tried many different ways of changing my input data because I suspected that there could be an issue with my input data, but I also couldn't solve it that way.
- **The probable source of the issue:** I inspected the prediction output from the model during training and the weird thing is that the prediction value always seems to be "0" (entailment) in 100% of cases (see printed output at the bottom of the code below). This cannot be right.
Even weirder: when I first run the model to predict a test sequence before running the trainer, I get normal logits as output. When I run the exact same code block again at the end, after having run the trainer, I get `tensor([[nan, nan, nan]])` as output (see code below).
- I suspect that the source of the 'only 0 prediction output' is that the logits the model returns during training are possibly always `torch.tensor([[np.nan, np.nan, np.nan]])`. `torch.tensor([[np.nan, np.nan, np.nan]]).argmax(-1)` returns `torch.tensor(0)` without triggering an error. The big mystery for me is why the logits would become "nan": the model does not do that when I use the same input data outside of the trainer, but something changes once I've run the trainer.
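A quick check of the claim in the last bullet above (the values come straight from the issue; no transformers code needed):

```python
import torch

logits = torch.tensor([[float("nan"), float("nan"), float("nan")]])
# argmax over an all-NaN row silently yields index 0, i.e. the "entailment" class
print(logits.argmax(-1))  # tensor([0]), no error raised
```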
=> I would be very thankful for any help on this! (I've been trying to solve this for two days now)
Thanks a lot in advance.
## To reproduce
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
### Here is my code:
### load model & tokenize
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
max_length = 256
hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
# also tried: hg_model_hub_name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name)
model.config
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
if device == "cuda":
model = model.half()
model.to(device)
model.train();
**Running a test inference with the model at this point works fine:**
```
test_enc = tokenizer(nli_train[0]["premise"], nli_train[0]["hypothesis"], return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=True, padding=True)
model.eval();
test_output_loss = model(test_enc["input_ids"].to(device), attention_mask=test_enc["attention_mask"].to(device), token_type_ids=test_enc["token_type_ids"].to(device), labels=torch.tensor(2).to(device))
print(test_output_loss)
#output: SequenceClassifierOutput(loss=tensor(2.2168, device='cuda:0', dtype=torch.float16, grad_fn=<NllLossBackward>), logits=tensor([[ 0.4075, 0.8511, -0.7549]], device='cuda:0', dtype=torch.float16,
grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**Then I continue with preprocessing and training:**
#... some data preprocessing
encodings_train = tokenizer(premise_train, hypothesis_train, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_val = tokenizer(premise_val, hypothesis_val, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_test = tokenizer(premise_test, hypothesis_test, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
### create pytorch dataset object
class XDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}
#item = {key: torch.as_tensor(val[idx]).to(device) for key, val in self.encodings.items()}
item['labels'] = torch.as_tensor(self.labels[idx])
#item['labels'] = self.labels[idx]
return item
def __len__(self):
return len(self.labels)
dataset_train = XDataset(encodings_train, label_train)
dataset_val = XDataset(encodings_val, label_val)
dataset_test = XDataset(encodings_test, label_test)
# compute metrics with trainer
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
print(labels)
preds = pred.predictions.argmax(-1)
print(preds)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary', pos_label=0)
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
## training
from transformers import Trainer, TrainingArguments
# https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset_train, # training dataset
eval_dataset=dataset_val # evaluation dataset
)
trainer.train()
# output: TrainOutput(global_step=181, training_loss=nan)
trainer.evaluate()
# output:
[2 2 2 0 0 2 2 2 0 2 0 0 2 2 2 2 0 2 0 2 2 2 2 0 2 0 2 0 0 2 0 0 2 0 0 0 2
0 2 0 0 0 0 0 2 0 0 2 2 2 0 2 2 2 2 2 0 0 0 0 2 0 0 0 2 2 0 0 0 2 0 0 0 2
2 0 2 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 2 0 0 0 0 2 0 2 2 0 2 0 0 2 2 2 2 2 2
2 0 0 0 0 2 0 0 2 0 0 0 0 2 2 2 0 0 0 0 0 2 0 0 2 0 2 0 2 0 2 0 0 2 2 0 0
2 2 2 2 2 2 0 0 2 2 2 2 0 2 0 0 2 2 2 0 0 2 0 2 0 2 0 0 0 0 0 0 2 0 0 2 2
0 2 2 2 0 2 2 0 2 2 2 2 2 2 0 0 2 0 0 2 2 0 0 0 2 0 2 2 2 0 0 0 0 0 0 0 0
2 0 2 2 2 0 2 0 0 2 0 2 2 0 0 0 0 2 2 2 0 0 0 2 2 2 2 0 2 0 2 2 2]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{'epoch': 1.0,
'eval_accuracy': 0.5137254901960784,
'eval_f1': 0.6787564766839378,
'eval_loss': nan,
'eval_precision': 0.5137254901960784,
'eval_recall': 1.0}
**Test running the model again after training, returns `tensor([[nan, nan, nan]]` for some reason:**
```
test_enc = tokenizer(nli_train[0]["premise"], nli_train[0]["hypothesis"], return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=True, padding=True)
model.eval();
test_output_loss = model(test_enc["input_ids"].to(device), attention_mask=test_enc["attention_mask"].to(device), token_type_ids=test_enc["token_type_ids"].to(device), labels=torch.tensor(2).to(device))
print(test_output_loss)
#output: SequenceClassifierOutput(loss=tensor(nan, device='cuda:0', dtype=torch.float16, grad_fn=<NllLossBackward>), logits=tensor([[nan, nan, nan]], device='cuda:0', dtype=torch.float16,
grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
[1]: https://huggingface.co/
## Expected behavior
Model should not return "nan" for logits and return a loss value.
<!-- A clear and concise description of what you would expect to happen. -->
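Based on the resolution in the comments above (drop `model.half()` and pass `fp16=True`), a minimal sketch of the corrected setup, reusing `hg_model_hub_name` and `device` from the snippets above:

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name)
model.to(device)  # keep the model in fp32; no model.half()

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    fp16=True,  # let the Trainer handle mixed precision
)
```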
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9160/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9159/comments | https://api.github.com/repos/huggingface/transformers/issues/9159/events | https://github.com/huggingface/transformers/issues/9159 | 769,335,819 | MDU6SXNzdWU3NjkzMzU4MTk= | 9,159 | Unified transformer interface | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @ZhaofengWu, thank you for proposing this feature. \r\n\r\nYou're noting a decision that we've made very early on in the design of the `transformers` library, which is to have standalone model files, **without any abstraction**, including the `Transformer` abstraction you're noting here. We believe it's much easier to read and understand a model file if there is no abstraction. Understanding a model file is done by reading the model file. There are no other files to go to to understand it, simply a single model file. This is our objective, as is described in the [philosophy](https://huggingface.co/transformers/philosophy.html).\r\n\r\nYou mention having a `Transformer` class which is not associated with any specific architecture, but how would we do that? Would it be a Transformer that has absolute positional embeddings, or relative? Would it accept token type IDs? Would it have a pooler layer? How would it perform its attention? All of these questions have answers that mean we're defining a specific `Transformer` architecture, which is not something we want to do.\r\n\r\nFinally, I believe we already have what you're looking for; if what you want to do is to have a transformer that you can tweak, so as to implement your own architecture, I recommend you take a look at the [model templates](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model), which add a \"bland\" model which you can customize to your needs. It also adds the tests (which pass), according to the naming you defined, and places the import where they should be, alongside the auto classes.",
"Thank you for your response! I was thinking about the `Transformer` abstraction in pytorch, fairseq, etc. Re. not associating with any specific architecture, one solution could be to implement all options in `Transformer` and allow clients to control which one to use with flags. Of course, there's no such thing as \"all options\" as researchers develop new architectural improvements, and we would need to be constantly adding to this class, which would not be ideal. Nevertheless, for some common boolean stuff (e.g. token type IDs), perhaps it's not the worst idea to implement it there and have a flag to control if it is included. For most other stuff, the modularity I mentioned could help. If we can swap components of `Transformer` easily, we can only write small pieces of code while reusing the rest of the architecture. We can keep some \"default\" choices in there which other models can override (e.g. transformer-xl has a relative positional embedding class (this implementation can also be shared across models) that overrides the absolute one in `Transformer`), but if we are committed to making the class absolutely not associated with _any_ architecture, it can be made an abstract class.\r\n\r\nRe. model templates, they are nice, but in my understanding, it's similar to copying the entire architecture from some other model file, e.g. BERT, and modify it. This would still result in a lot of boilerplate code, even if the researcher/developer doesn't have to spend time writing it. The software engineer in me often screams when I have to copy massive amounts of architectural code which causes huge redundancy. And when I am reading the implementation of a new model, I also often don't like this boilerplate as it makes the real differences hard to notice. Of course, reading the paper is one thing, but there are often small engineering details not mentioned in the paper. This redundancy was the main motivation for this issue. Sometimes I have to do python hacks (thankfully we're not doing this in C) to accomplish code re-use.\r\n\r\nBut, of course, I definitely respect your philosophy. I'm sure there must be challenges that I'm not seeing, and this change would definitely be huge. If there are no ideal ways to do this at the moment, please feel free to close this issue.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | CONTRIBUTOR | null | # 🚀 Feature request
As we're called `transformers`, it would be nice if there were a `Transformer` class that is not associated with any pretrained model and that we could instantiate directly, like
```python
from transformers import Transformer, Seq2SeqTransformer
transformer = Transformer(n_layers=8, dim=512)
seq2seq_transformer = Seq2SeqTransformer(n_enc_layers=6, n_dec_layers=8, dim=512)
```
They could follow a modular design that makes swapping components very easy, e.g.
```python
class SelfAttention:
...
class Transformer:
def __init__(self, self_attention_cls=SelfAttention, n_layers=8, ...):
...
```
## Motivation
1. Makes it easier for researchers to modify the transformer architecture and train it from scratch.
2. Allows a great refactoring of the current `modeling_xxx.py` files. Often these files are nearly identical apart from a few minor differences. For example, the embeddings, self-attention layers, etc. are often identical across multiple models, but the current design does not allow much code re-use. With this change, I imagine a lot of the current `modeling_xxx.py` files could be reduced significantly.
3. A side-effect of this refactoring would be increased readability. Without much repetitive code, it would be much clearer to see the differences between architectures, which would be helpful for many users. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9159/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9158/comments | https://api.github.com/repos/huggingface/transformers/issues/9158/events | https://github.com/huggingface/transformers/issues/9158 | 769,247,862 | MDU6SXNzdWU3NjkyNDc4NjI= | 9,158 | Getting a 404 error when loading 'model=facebook/bart-large-mnli' from pipeline('zero-shot-classification') | {
"login": "Sneg42",
"id": 25573820,
"node_id": "MDQ6VXNlcjI1NTczODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/25573820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sneg42",
"html_url": "https://github.com/Sneg42",
"followers_url": "https://api.github.com/users/Sneg42/followers",
"following_url": "https://api.github.com/users/Sneg42/following{/other_user}",
"gists_url": "https://api.github.com/users/Sneg42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sneg42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sneg42/subscriptions",
"organizations_url": "https://api.github.com/users/Sneg42/orgs",
"repos_url": "https://api.github.com/users/Sneg42/repos",
"events_url": "https://api.github.com/users/Sneg42/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sneg42/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After the full restart PC, works fine. "
] | 1,608 | 1,608 | 1,608 | NONE | null | Getting a 404 when trying to load the model.
model='joeddav/bart-large-mnli-yahoo-answers' is also not working.
`404 Client Error: Not Found for url: https://huggingface.co/facebook/bart-large-mnli/resolve/main/tf_model.h5
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
695 # Load from URL or cache if already cached
--> 696 resolved_archive_file = cached_path(
697 archive_file,
~/.local/lib/python3.8/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)
999 # URL, so get it from the cache (downloading if necessary)
-> 1000 output_path = get_from_cache(
1001 url_or_filename,
~/.local/lib/python3.8/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)
1127 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1128 r.raise_for_status()
1129 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
/usr/lib/python3/dist-packages/requests/models.py in raise_for_status(self)
939 if http_error_msg:
--> 940 raise HTTPError(http_error_msg, response=self)
941
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/facebook/bart-large-mnli/resolve/main/tf_model.h5
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-48-a42b4974064c> in <module>
----> 1 classifier = pipeline('zero-shot-classification',
2 model='facebook/bart-large-mnli',
3 # model='joeddav/bart-large-mnli-yahoo-answers',
4 # model = 'bert-base-uncased'
5 # model ='phiyodr/bart-large-finetuned-squad2',
~/.local/lib/python3.8/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)
2934 model = get_default_model(targeted_task, framework, task_options)
2935
-> 2936 framework = framework or get_framework(model)
2937
2938 task_class, model_class = targeted_task["impl"], targeted_task[framework]
~/.local/lib/python3.8/site-packages/transformers/pipelines.py in get_framework(model, revision)
106 model = AutoModel.from_pretrained(model, revision=revision)
107 elif is_tf_available() and not is_torch_available():
--> 108 model = TFAutoModel.from_pretrained(model, revision=revision)
109 else:
110 try:
~/.local/lib/python3.8/site-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
558
559 if type(config) in TF_MODEL_MAPPING.keys():
--> 560 return TF_MODEL_MAPPING[type(config)].from_pretrained(
561 pretrained_model_name_or_path, *model_args, config=config, **kwargs
562 )
~/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
709 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {TF2_WEIGHTS_NAME}, {WEIGHTS_NAME}.\n\n"
710 )
--> 711 raise EnvironmentError(msg)
712 if resolved_archive_file == archive_file:
713 logger.info("loading weights file {}".format(archive_file))
OSError: Can't load weights for 'facebook/bart-large-mnli'. Make sure that:
- 'facebook/bart-large-mnli' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'facebook/bart-large-mnli' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9158/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9158/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9157/comments | https://api.github.com/repos/huggingface/transformers/issues/9157/events | https://github.com/huggingface/transformers/issues/9157 | 769,215,920 | MDU6SXNzdWU3NjkyMTU5MjA= | 9,157 | T5 checkpoint contains weights missing on current model. | {
"login": "dscarmo",
"id": 10614968,
"node_id": "MDQ6VXNlcjEwNjE0OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/10614968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dscarmo",
"html_url": "https://github.com/dscarmo",
"followers_url": "https://api.github.com/users/dscarmo/followers",
"following_url": "https://api.github.com/users/dscarmo/following{/other_user}",
"gists_url": "https://api.github.com/users/dscarmo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dscarmo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dscarmo/subscriptions",
"organizations_url": "https://api.github.com/users/dscarmo/orgs",
"repos_url": "https://api.github.com/users/dscarmo/repos",
"events_url": "https://api.github.com/users/dscarmo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dscarmo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes we should fix the warning. It's no problem that those weights are missing i.e.:\r\n#8933",
"Thank you! "
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
Colab (16 December 2020), transformers 4.0.1
### Who can help
T5: @patrickvonplaten
## Information
When loading the pre-trained T5 weights directly with `.from_pretrained` in the newest version, it returns the following warning:
> Some weights of the model checkpoint at t5-small were not used when initializing T5ForConditionalGeneration: ['decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight']
> - This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
> - This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
This also caused old checkpoints of mine to not load due to the missing weight.
## To reproduce
Reproduced in Colab:
https://colab.research.google.com/drive/158OiSKHz80b0PQYaWB6c-AI9xKCzE1ns?usp=sharing
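For a local reproduction without Colab, a minimal sketch:

```python
from transformers import T5ForConditionalGeneration

# emits the warning with transformers 4.0.1; per the discussion above (and #8933)
# the unused cross-attention relative bias weight is harmless
model = T5ForConditionalGeneration.from_pretrained("t5-small")
```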
## Expected behavior
Loading T5 should not return warnings if I am loading the pre-trained weights from the library.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9157/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9156/comments | https://api.github.com/repos/huggingface/transformers/issues/9156/events | https://github.com/huggingface/transformers/issues/9156 | 769,186,878 | MDU6SXNzdWU3NjkxODY4Nzg= | 9,156 | Sharded DDP training fails with seq2seq models | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is just a brief log of the 2 distinct errors mentioned in OP:\r\n\r\nw/ `--fp16` the failure is:\r\n```\r\n File \"./finetune_trainer.py\", line 379, in <module>\r\n main()\r\n File \"./finetune_trainer.py\", line 315, in main\r\n trainer.train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 818, in train\r\n self.scaler.step(self.optimizer)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py\", line 330, in step\r\n assert len(optimizer_state[\"found_inf_per_device\"]) > 0, \"No inf checks were recorded for this optimizer.\"\r\nAssertionError: No inf checks were recorded for this optimizer.\r\n```\r\n\r\nw/o `--fp16` the failure is:\r\n```\r\n File \"./finetune_trainer.py\", line 379, in <module>\r\n main()\r\n File \"./finetune_trainer.py\", line 315, in main\r\n trainer.train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 821, in train\r\n self.optimizer.step()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/optim/lr_scheduler.py\", line 65, in wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/optim/optimizer.py\", line 89, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/fairscale/optim/oss.py\", line 210, in step\r\n self._broadcast_params()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/fairscale/optim/oss.py\", line 522, in _broadcast_params\r\n if self.should_bucket_param[param]:\r\nKeyError: Parameter containing:\r\ntensor([[ ...]], device='cuda:1')\r\n```\r\nIt's the very first parameter `model.shared.weight` in the case of mbart for example.\r\n\r\nTo test with t5 (same errors), run:\r\n\r\n```\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path patrickvonplaten/t5-tiny-random --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation_en_XX_to_ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp \r\n```\r\n",
"The first problem (fp16) is easily fixed, it means that the doc is not good enough. Torch's grad scaler is not shard aware (the ranks do not have all the gradients with this technique), but you can use [this](https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/grad_scaler.py#L24) and that should work.\r\n\r\nedit for future readers: that prove wrong, ShardedGradScaler already in use\r\n\r\nThe second issue is new to me, would you mind sharing a bit more of the reproduction steps ?",
"We initialize the should_bucket_param dictionary when the OSS optimizer is created. The assumption is that parameters should be frozen at this point. Any chance parameters are modified after the optimizer was created?",
"Thank you so much @blefaudeux and @msbaines for your follow up.\r\n\r\nTo reproduce:\r\n\r\n```\r\n# setup \r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\ncd examples/seq2seq\r\nwget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz \r\ntar -xzvf wmt_en_ro.tar.gz\r\n```\r\n\r\nto reproduce the 2nd failure w/o `--fp16`:\r\n\r\n```\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp\r\n```\r\n\r\nand then the first one is to just add `--fp16`\r\n\r\nThis is a tiny model that is good enough for testing the mechanics, so no good results to be expected. It's also very quick to download and load. To see real results swap `sshleifer/tiny-mbart` for `sshleifer/distill-mbart-en-ro-12-4`.",
"> The first problem (fp16) is easily fixed, it means that the doc is not good enough. Torch's grad scaler is not shard aware (the ranks do not have all the gradients with this technique), but you can use [this](https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/grad_scaler.py#L24) and that should work.\r\n\r\nWe are using it already:\r\nhttps://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L315\r\n\r\nif I print the object just before it fails in `self.scaler.step(self.optimizer)`, I get:\r\n<fairscale.optim.grad_scaler.ShardedGradScaler object at 0x7ff27034bac0>\r\n\r\nhttps://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L818\r\n\r\nFWIW, I experience the exact same issue with deepspeed if I leave trainer's `--fp16` code - if I remove it and get deepspeed to handle that the failure goes away. So the common denominator is our code.",
"> Thank you so much @blefaudeux and @msbaines for your follow up.\r\n> \r\n> To reproduce:\r\n> \r\n> ```\r\n> # setup \r\n> git clone https://github.com/huggingface/transformers\r\n> cd transformers\r\n> cd examples/seq2seq\r\n> wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz \r\n> tar -xzvf wmt_en_ro.tar.gz\r\n> ```\r\n> \r\n> to reproduce the 2nd failure w/o `--fp16`:\r\n> \r\n> ```\r\n> export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp\r\n> ```\r\n> \r\n> and then the first one is to just add `--fp16`\r\n> \r\n> This is a tiny model that is good enough for testing the mechanics, so no good results to be expected. It's also very quick to download and load. To see real results swap `sshleifer/tiny-mbart` for `sshleifer/distill-mbart-en-ro-12-4`.\r\n\r\nperfect, having a look right now, thanks for the repro help !",
"> We initialize the should_bucket_param dictionary when the OSS optimizer is created. The assumption is that parameters should be frozen at this point. Any chance parameters are modified after the optimizer was created?\r\n\r\ngood point, it's something that I was planning to address someday actually but not sure how urgent that was\r\n\r\n> Thank you so much @blefaudeux and @msbaines for your follow up.\r\n> \r\n> To reproduce:\r\n> \r\n> ```\r\n> # setup \r\n> git clone https://github.com/huggingface/transformers\r\n> cd transformers\r\n> cd examples/seq2seq\r\n> wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz \r\n> tar -xzvf wmt_en_ro.tar.gz\r\n> ```\r\n> \r\n> to reproduce the 2nd failure w/o `--fp16`:\r\n> \r\n> ```\r\n> export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp\r\n> ```\r\n> \r\n> and then the first one is to just add `--fp16`\r\n> \r\n> This is a tiny model that is good enough for testing the mechanics, so no good results to be expected. It's also very quick to download and load. To see real results swap `sshleifer/tiny-mbart` for `sshleifer/distill-mbart-en-ro-12-4`.\r\n\r\nre: fp16, could it be that the FW has not been run within an AMP context (a) and the scaler is not invoked in the backward (b) ? I cannot find it [in the trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L799). The scaler needs to be invoked when computing the grads, it will check for infs there which could explain why you're seeing this assert. Note that the hugginface codebase is new to me, so could be that this is wrapped somewhere else and I'm completely missing it",
"I think the questions you're asking about are all in this `training_step` code:\r\n\r\nhttps://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L1126-L1146\r\n\r\nI didn't write it, but from a quick read it appears that it's a yes to all of your suggestions. \r\n\r\n`self.use_amp` = native amp, `use_apex` is apex - so we are talking native amp here - that is the branches with `use_amp = True`\r\n\r\nI'll step through with debugger to see that it is actually so.\r\n\r\n",
"> I think the questions you're asking about are all in this `training_step` code:\r\n> \r\n> https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L1126-L1146\r\n> \r\n> I didn't write it, but from a quick read it appears that it's a yes to all of your suggestions.\r\n> \r\n> `self.use_amp` = native amp, `use_apex` is apex - so we are talking native amp here - that is the branches with `use_amp = True`\r\n> \r\n> I'll step through with debugger to see that it is actually so.\r\n\r\nlooks good indeed ! basically I just know that the \"found_inf_per_device\" are populated in the \"unscale_\" step, so this key being absent points to this step being skipped somehow.",
"@stas00 https://github.com/facebookresearch/fairscale/pull/256 Fixes your repro on a single node, it's a side effect though (bucketing is effectively disabled on a single node), multinode + huggingface is still probably broken. It does look like the model somehow changes after construction, still finding my way around your codebase.\r\n",
"re: --fp16: If the machine is clean it does break, my assumption is that the dist.reduce() in the ShardedGradScaler fails somehow in between the processes. Once it dies, one process stays up actually (visible with nvidia-smi), and all the subsequent runs will work fine. So if that helps it looks like it could be an issue with the dist init, or passing the settings around that.",
"> looks good indeed ! basically I just know that the \"found_inf_per_device\" are populated in the \"unscale_\" step, so this key being absent points to this step being skipped somehow.\r\n\r\nSo, OK, I have it setup in the debugger \r\n\r\nIt runs `unscale_` just fine, but doesn't find any `inf`:\r\n\r\n```\r\n inv_scale = self._scale.double().reciprocal().float()\r\n found_inf = torch.full((1,), 0.0, dtype=torch.float32, device=self._scale.device)\r\n\r\n optimizer_state[\"found_inf_per_device\"] = self._unscale_grads_(optimizer, inv_scale, found_inf, False)\r\n```\r\n- `inv_scale = tensor([1.5259e-05], device='cuda:0')`\r\n- `found_inf = tensor([0.], device='cuda:0')`\r\n- `optimizer_state[\"found_inf_per_device\"] = {}`\r\n\r\n\r\nrerunning - inside `self._unscale_grads_` we get:\r\n\r\n```\r\nper_device_and_dtype_grads = {defaultdict: 1} defaultdict(<function GradScaler._unscale_grads_.<locals>.<lambda> at 0x7f895c2174c0>, {device(type='cuda', index=1): defaultdict(<class 'list'>, {torch.float32: [tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\\n [nan, \r\n cuda:1 = {defaultdict: 1} defaultdict(<class 'list'>, {torch.float32: [tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\\n [nan, \r\n default_factory = {type} <class 'list'>\r\n torch.float32 = {list: 92} [tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\\n [nan, nan]], device='cuda:1'), tensor([nan, nan], \r\n 00 = {Tensor: 2} tensor([[nan, nan],\\n [nan, nan]], device='cuda:1')\r\n 01 = {Tensor: 2} tensor([[nan, nan],\\n [nan, nan]], device='cuda:1')\r\n 02 = {Tensor: 2} tensor([[nan, nan],\\n [nan, nan]], device='cuda:1')\r\n 03 = {Tensor: 2} tensor([[nan, nan],\\n [nan, nan]], device='cuda:1')\r\n[...]\r\n 89 = {Tensor: 2} tensor([nan, nan], device='cuda:1')\r\n 90 = {Tensor: 2} tensor([nan, nan], device='cuda:1')\r\n 91 = {Tensor: 2} tensor([-12339.3516, -13527.4590], device='cuda:1')\r\n```\r\n\r\nand then this gets called:\r\n\r\n```\r\n for device, per_dtype_grads in per_device_and_dtype_grads.items():\r\n for grads in per_dtype_grads.values():\r\n torch._amp_foreach_non_finite_check_and_unscale_(grads,\r\n per_device_found_inf.get(device),\r\n per_device_inv_scale.get(device))\r\n```\r\n\r\nand it doesn't find anything. \r\n\r\nI think the debugger also has a race condition and sometimes we get the first process (cuda:0) and other times the 2nd one (cuda:1). \r\n\r\nok I figured out how to switch threads in pycharm debugger, so I can now go back and force between the 2 processes.\r\n\r\nI haven't figured out how to do `.to()` calls in pycharm debugger yet - when those happen as a step it just hangs so I have carefully to skip over those.\r\n\r\nNow rerunning it again and hitting `cuda:0` `per_device_and_dtype_grads` is empty.\r\n\r\nAnd yes, I have noticed that with `--fp16` only one process fails with this error - the other keeps on running. \r\n\r\nSo basically we are having this happen on one process but not the other. Somewhere must be a bug not running the same code for both processes.\r\n\r\n(this has just changed in pytorch master - now all subprocesses will die nicely w/o leaving zombies - by yours truly ;)",
"thanks for the backtrace ! so, for one the fact that the grads are not all the same is expected with this method, the grads are sharded across the ranks (ie: partitioned), depending on which parameters each rank will optimize. The ShardedGradScaler should be aware of that, and syncs in between the ranks to make sure that they all get the same knowledge, looks like this fails somehow then. Having a quick look right now\r\n\r\n(well done for the zombie process destruction ! now somehow if the zombie process is still here the next run \"works\")",
"So it appears that here:\r\n```\r\nself.scaler.scale(loss).backward()\r\n```\r\ndoesn't set `param.grad`s in `cuda:0` but sets it in `cuda:1`\r\n\r\nTo quickly see that add:\r\n\r\n```\r\n for group in self.optimizer.param_groups:\r\n for param in group[\"params\"]:\r\n print(f\"{self.args.local_rank}: {param.grad}\")\r\n```\r\nafter:\r\n\r\nhttps://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L1139\r\n\r\nSo on `cuda:0` it goes into `backward` and almost immediately returns. some flag must be off.\r\n\r\nI checked that `self.scaler.scale(loss).requires_grad == True` on both, so it's not that.\r\n\r\nAnd `unscale_` fails to find any data, because all grads are None for `self.optimizer` on `cuda:0` and that's where it crashes.\r\n\r\n(debugging parallel processes proved to be far from easy - not sure why - half the time pycharm debugger either gets stuck or suddenly can't see the other process - so it's very slow comparing what the difference is between the two sides. ideally I need to be able to run both processes side by side - each in its own debugger - then it'd be much easier to find the divergence)",
"hmm, looks like a device and rank mismatch then, that could match with the process group observation above. Looking into the code, are you using a specific process group ?",
"https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/training_args.py#L472-L474",
"I'm trying to find a better solution to the other issue you were seeing with the bucketing (now fixed on fairscale master for a single node), and it just appeared to me that they could be tied: could it be that the models change devices after the sharded optimizer is built ?\r\n\r\nedit: just checked that, not the case, rank/device match at construction time and during the first step",
"Ah ok, got to the bottom of the first bug. There are params which don't require grad, I was skipping them, my bad. Fixing that properly so that multi-node also works for you.\r\n\r\nedit: now this is properly done (proverbial second fix, which covers multinode)",
"ok, now on the fp16 issue, there's at least one thing which cannot work well in that case: the gradient clipping. \r\nhttps://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L809\r\n\r\nTo explain a little, when ShardedDDP is used torch.utils.clip_grad_norm is blind to the sharding (same issue with the scaler), in that it will only consider the norm of the gradients present on this rank, it's very much not aware of the distributed nature of the problem. I think that we should improve the API so that we don't need to change all these little pieces, this could be done with torch RPC for instance. We've a solution right now, in that we provide a shard-aware gradient clipping in https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/oss.py#L219\r\nwhose interface is similar.\r\n\r\nI'm not sure that it's the only issue with fp16, but that's one of them for sure.\r\n\r\n ",
"Ok, second and hopefully last issue with fp16 caught, works on my machine with the following \"patch\" around the clipping (to select fairscale's clipping when that makes sense), and an incoming fix in Fairscale related to a broken partitioning in a pathological case.\r\n\r\nReplace https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L809 \r\nwith \r\n```\r\n if self.use_amp:\r\n # AMP: gradients need unscaling\r\n self.scaler.unscale_(self.optimizer)\r\n \r\n if hasattr(self.optimizer, \"clip_grad_norm\"):\r\n # Sharded optimizer, specific gradient clipping \r\n self.optimizer.clip_grad_norm(self.args.max_grad_norm)\r\n else:\r\n # Vanilla -monolithic- clipping, handling Apex or full precision\r\n torch.nn.utils.clip_grad_norm_(\r\n amp.master_params(self.optimizer) if self.use_apex else model.parameters(),\r\n self.args.max_grad_norm,\r\n )\r\n```\r\n",
"With all the linked PRs it works for me with --fp16 and ShardedDDP, and the speed bumps up nicely, AMP basically doubles the throughput.",
"Wow, thanks a lot for all this debugging @blefaudeux ! I'll draft a quick patch for the `Trainer` gradient clipping and tag you on the PR.",
"Patch is in the PR linked above!",
"Amazing! Thank you so much, @blefaudeux! \r\n\r\nI merged all the suggested code and everything works. Yay!\r\n\r\n**Except it's 30% slower w/ sharded_ddp**\r\n\r\n> @blefaudeux wrote:\r\n> With all the linked PRs it works for me with --fp16 and ShardedDDP, and the speed bumps up nicely, AMP basically doubles the throughput.\r\n\r\nSo I'm not seeing what you're seeing, @blefaudeux\r\n\r\nWhat I did.\r\n\r\n1. hf:\r\n\r\nrebased to include https://github.com/huggingface/transformers/pull/9168\r\n\r\n2. fairscale:\r\n\r\n\r\n```\r\ngit checkout -b hf\r\n\r\n# https://github.com/facebookresearch/fairscale/pull/262\r\ngit cherry-pick e305f10e81db95d14c9edd3f9e1e18b0fb2847fd\r\n# (had to resolve a simple conflict)\r\n\r\n# https://github.com/facebookresearch/fairscale/pull/259\r\ngit cherry-pick ad28820\r\ngit cherry-pick 705c188\r\ngit cherry-pick d6aa285\r\ngit cherry-pick a315ee3\r\n\r\n# only this works for me at the moment for pytorch-nightly\r\npython setup.py bdist_wheel\r\npip uninstall -y fairscale; pip install dist/fairscale-0.1.1-cp38-cp38-linux_x86_64.whl\r\n```\r\n\r\n# speed benchmarks\r\n\r\nnote I switched to the real model `sshleifer/distill-mbart-en-ro-12-4` (`sshleifer/tiny-mbart` was perfect for testing mechanics)\r\n\r\n```\r\n# base-line (no --sharded_ddp / no --fp16)\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500\r\n\r\n2020-12-17 10:44:55 | INFO | __main__ | train_runtime = 28.0357\r\n\r\n# --sharded_ddp \r\n\r\n2020-12-17 10:41:02 | INFO | __main__ | train_runtime = 40.4482\r\n\r\n\r\n### fp16\r\n\r\n# --fp16\r\n\r\n2020-12-17 10:43:30 | INFO | __main__ | train_runtime = 29.4658\r\n\r\n# --sharded_ddp --fp16\r\n\r\n2020-12-17 10:38:53 | INFO | __main__ | train_runtime = 39.4722\r\n\r\n```\r\n",
"Also, clearly it can be seen from the benchmark that we don't gain from --fp16 at the baseline (before adding fairscale) - so something is not right there either.",
"In the good news, I can squeeze 3x batch size with ` --sharded_ddp` as compared to the baseline before my card OOMs. Amazing!\r\n\r\nSo with BS=12 (baseline I could do only 4)\r\n\r\n```\r\n# --sharded_ddp\r\n2020-12-17 11:10:17 | INFO | __main__ | train_runtime = 17.2038\r\n# --sharded_ddp --fp16\r\n2020-12-17 11:07:32 | INFO | __main__ | train_runtime = 15.2403\r\n```\r\n\r\nSo the total training time is about ~50% as compared to w/o `--sharded_ddp` but must increase BS 3 times.\r\n\r\n",
"quality is about the same - about 0.1% worse with ` --sharded_ddp` and same BS on a quick test.\r\n\r\n```\r\n# baseline\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --do_eval --eval_steps 25000 --evaluation_strategy=steps --n_val 200 --per_device_eval_batch_size $BS --do_predict --predict_with_generate --n_test 200 --fp16\r\n\r\n2020-12-17 11:20:28 | INFO | __main__ | train_runtime = 29.6081\r\n\r\n2020-12-17 11:20:50 | INFO | __main__ | val_bleu = 26.3473\r\n2020-12-17 11:21:20 | INFO | __main__ | test_bleu = 25.7341\r\n\r\n# run 2\r\n2020-12-17 11:52:40 | INFO | __main__ | val_bleu = 26.3473\r\n2020-12-17 11:53:10 | INFO | __main__ | test_bleu = 25.7341\r\n\r\n# --sharded_ddp\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --do_eval --eval_steps 25000 --evaluation_strategy=steps --n_val 200 --per_device_eval_batch_size $BS --do_predict --predict_with_generate --n_test 200 --fp16 --sharded_ddp\r\n\r\n2020-12-17 11:25:23 | INFO | __main__ | train_runtime = 39.4316\r\n\r\n2020-12-17 11:25:46 | INFO | __main__ | val_bleu = 26.2563\r\n2020-12-17 11:26:16 | INFO | __main__ | test_bleu = 25.5779\r\n\r\n# run 2\r\n2020-12-17 11:28:32 | INFO | __main__ | val_bleu = 26.1359\r\n2020-12-17 11:29:02 | INFO | __main__ | test_bleu = 25.6613\r\n\r\n# run 3\r\n2020-12-17 11:30:51 | INFO | __main__ | val_bleu = 26.1359\r\n2020-12-17 11:31:21 | INFO | __main__ | test_bleu = 25.6067\r\n\r\n# run 4\r\n2020-12-17 11:50:05 | INFO | __main__ | val_bleu = 26.1359\r\n2020-12-17 11:50:35 | INFO | __main__ | test_bleu = 25.5889\r\n\r\n```\r\n\r\nlarger BS is much faster, while the eval is on par\r\n```\r\n\r\n# --sharded_ddp\r\nexport BS=12; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 
--do_eval --eval_steps 25000 --evaluation_strategy=steps --n_val 200 --per_device_eval_batch_size $BS --do_predict --predict_with_generate --n_test 200 --fp16 --sharded_ddp\r\n\r\n2020-12-17 11:22:26 | INFO | __main__ | train_runtime = 15.2409\r\n\r\n2020-12-17 11:22:40 | INFO | __main__ | val_bleu = 26.466\r\n2020-12-17 11:23:01 | INFO | __main__ | test_bleu = 25.6237\r\n```\r\n",
"Hi,\r\nthanks a lot for this fixing this issue, this is great to have this option working, do you mind including this command for faster training of seq2seq models and some documentations on what is sharded_ddp option and how much it helps on README of seq2seq folder. thanks. ",
"We will do so soon. Since it was just added we don't have enough solid stats to advertise % improvements specific to transformers, so please give us some time.\r\n\r\nUntil then you can just add `--sharded_ddp` and see for yourself. One thing I noticed is that I can use a 3 times bigger batch size w/ it.\r\n**edit**: oh but wait, the fairscale master doesn't have the PRs with important fixes merged yet, so it's basically not ready yet unless you want to manually merge the changes. I guess you could use my branch where I did the merges already if it's urgent...\r\n\r\nWrt/ Optimizer state sharding, you can read the documentation in the paper on DeepSpeed/ZeRO https://arxiv.org/abs/1910.02054\r\n\r\nIf I'm not mistaken the exact entry is 5.1 Pos : Optimizer State Partitioning\r\n\r\nBut I don't know whether it's the same or not, since this is fairscale implementation. \r\n\r\n@msbaines, what's the best place for us to link to to explain what fairscale's sharded optimizer does? I didn't find any docs on your repo/website. Thank you!",
"hey there, trying to cover a couple of questions:\r\n- fp16 : should almost always give a boost, I was seeing close to 2x a couple of days back (on P100s with the above test case), could depend on your hardware ? (and workload of course, if IO is the bottleneck that wouldn't be the case)\r\n\r\n- speed impact of shardedDDP at iso-batch size: there's some speed lost indeed, it's probably not fundamental though, could be improved on our end without any change on your end. The two ongoing PRs for hugginface should help a bit for instance. Some time lost is not trivial due to all the code being python and some impact on GIL contention for instance (the reduce involves a lot of hooks being called to redirect the gradients to the right ranks). Longer term, ShardedDDP should just be a \"mode of operation\" of Pytorch's DDP, which is mostly cpp, so speed will probably improve, consider the current state as a baseline. Note that the state saving support is not very elegant right now, and is very slow, so if this counts against the throughput (if there are a lot of checkpoints) you'll see a difference indeed. We plan to improve on that asap\r\n\r\n- what does shardedDDP do ? Pos + Pg from the zero paper + mixed precision if you use AMP of course, in plain english optimizer state sharding + gradient sharding + automatic mixed precision (from PyTorch of course)."
] | 1,608 | 1,633 | 1,609 | COLLABORATOR | null | ## Information
Model I am using (Bert, XLNet ...): T5/BART/mBART/Marian
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: seq2seq
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run
```
python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/finetune_trainer.py \
--model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir \
~/Downloads/wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 \
--logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 4 --sortish_sampler \
--src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 \
--n_train 500 --sharded_ddp
```
will fail with
```
Traceback (most recent call last):
File "examples/seq2seq/finetune_trainer.py", line 379, in <module>
main()
File "examples/seq2seq/finetune_trainer.py", line 316, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/sgugger/git/transformers/src/transformers/trainer.py", line 821, in train
self.optimizer.step()
File "/home/sgugger/.pyenv/versions/base/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/home/sgugger/git/fairscale/fairscale/optim/oss.py", line 210, in step
self._broadcast_params()
File "/home/sgugger/git/fairscale/fairscale/optim/oss.py", line 522, in _broadcast_params
if self.should_bucket_param[param]:
KeyError: Parameter containing:
tensor([[-0.0296, 0.0038],
[ 0.0000, 0.0000],
[ 0.0298, 0.0385],
...,
[-0.0161, -0.0024],
[ 0.0022, -0.0576],
[ 0.0053, 0.0256]], device='cuda:1')
0%|
```
Using FP16 also fails.
## Expected behavior
The script should run to completion.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9156/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9155/comments | https://api.github.com/repos/huggingface/transformers/issues/9155/events | https://github.com/huggingface/transformers/issues/9155 | 769,087,617 | MDU6SXNzdWU3NjkwODc2MTc= | 9,155 | evaluate_during_training is not acceptable in newer version of the Transformer | {
"login": "meysamgh",
"id": 34138742,
"node_id": "MDQ6VXNlcjM0MTM4NzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/34138742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meysamgh",
"html_url": "https://github.com/meysamgh",
"followers_url": "https://api.github.com/users/meysamgh/followers",
"following_url": "https://api.github.com/users/meysamgh/following{/other_user}",
"gists_url": "https://api.github.com/users/meysamgh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meysamgh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meysamgh/subscriptions",
"organizations_url": "https://api.github.com/users/meysamgh/orgs",
"repos_url": "https://api.github.com/users/meysamgh/repos",
"events_url": "https://api.github.com/users/meysamgh/events{/privacy}",
"received_events_url": "https://api.github.com/users/meysamgh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This argument was deprecated in transformers version 3.5 and removed in version 4.0, as indicated in the [release notes](https://github.com/huggingface/transformers/releases/). It needs to be replaced by `evaluation_strategy=\"steps\"` or `evaluation_strategy=\"epoch\"`"
] | 1,608 | 1,608 | 1,608 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: <True>
- Using distributed or parallel set-up in script?: <False>
### Who can help
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```python
config = RobertaConfig(vocab_size=30_000, max_position_embeddings=512, num_attention_heads=12,
                       num_hidden_layers=12, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1,
                       initializer_range=0.2, intermediate_size=3072, type_vocab_size=1)
tokenizer = RobertaTokenizerFast.from_pretrained("XXX", max_len=512)
model = RobertaForMaskedLM(config=config)
dataset = LineByLineTextDataset(tokenizer, file_path="XXX")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args = TrainingArguments(output_dir="xxxx", overwrite_output_dir=True,
                                  num_train_epochs=2, do_train=True, do_eval=True,
                                  evaluate_during_training=True,  # <-- rejected as an invalid argument
                                  per_gpu_train_batch_size=128, learning_rate=0.0004,
                                  gradient_accumulation_steps=32,
                                  logging_steps=2048,
                                  warmup_steps=10000,
                                  weight_decay=0.01,
                                  eval_steps=2048,
                                  save_steps=2048,
                                  save_total_limit=2, prediction_loss_only=True)
trainer = Trainer(model=model, args=training_args, data_collator=data_collator,
                  train_dataset=dataset, eval_dataset=dataseteval,
                  prediction_loss_only=True)  # <-- also rejected as an invalid argument
trainer.train()
```
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
It is a dataset of several million lines of text.
## To reproduce
Steps to reproduce the behavior:
1. Have transformers installed
2. Add the lines of code above, pointing at a training and an evaluation dataset, and run them
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The following code was working fine with transformers 3.5, but since I updated the transformers version it no longer accepts the following arguments:
`evaluate_during_training=True`
`prediction_loss_only=True`
The error is an invalid-argument error for both of them.
## Expected behavior
I need them to be able to check the validation loss during training. After removing these arguments, the model reports only the training loss and not the evaluation loss during training.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9155/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9154/comments | https://api.github.com/repos/huggingface/transformers/issues/9154/events | https://github.com/huggingface/transformers/pull/9154 | 769,021,330 | MDExOlB1bGxSZXF1ZXN0NTQxMjYxMTMz | 9,154 | AutoModelForTableQuestionAnswering | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | Adds the `AutoModelForTableQuestionAnswering` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9154/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9154",
"html_url": "https://github.com/huggingface/transformers/pull/9154",
"diff_url": "https://github.com/huggingface/transformers/pull/9154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9154.patch",
"merged_at": 1608138873000
} |
https://api.github.com/repos/huggingface/transformers/issues/9153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9153/comments | https://api.github.com/repos/huggingface/transformers/issues/9153/events | https://github.com/huggingface/transformers/issues/9153 | 769,007,538 | MDU6SXNzdWU3NjkwMDc1Mzg= | 9,153 | BertForSequenceClassification and DistilBertForSequenceClassification use pooler output in different ways | {
"login": "AndreaSottana",
"id": 48888970,
"node_id": "MDQ6VXNlcjQ4ODg4OTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/48888970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaSottana",
"html_url": "https://github.com/AndreaSottana",
"followers_url": "https://api.github.com/users/AndreaSottana/followers",
"following_url": "https://api.github.com/users/AndreaSottana/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaSottana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaSottana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaSottana/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaSottana/orgs",
"repos_url": "https://api.github.com/users/AndreaSottana/repos",
"events_url": "https://api.github.com/users/AndreaSottana/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaSottana/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"BERT and DistilBERT are different models. DistilBERT isn't simply a BERT model with fewer layers, but a BERT model without the pooling layer as you have seen, and with no token type embeddings.\r\n\r\nWe try to stay as close to the original implementations as possible, hence why BERT is done this way, and why DistilBERT was done differently. I invite you to read the paper or study the original BERT codebase to see how it was done, it should be very similar (or the same) as it is done here.",
"Thanks a lot for clarifying @LysandreJik I'll close the issue then!"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | Hi,
The `BertForSequenceClassification` includes a forward pass of the `BertModel`, and it takes the second element (index 1) from its output before moving forward, as shown [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1378)
This is the return of `BertModel`
```
return BaseModelOutputWithPoolingAndCrossAttentions(
last_hidden_state=sequence_output,
pooler_output=pooled_output,
hidden_states=encoder_outputs.hidden_states,
attentions=encoder_outputs.attentions,
cross_attentions=encoder_outputs.cross_attentions,
)
```
hence `outputs[1]` picks out the `pooler_output`.
However, in `DistilBertForSequenceClassification`, it takes the first element (index 0) of the `DistilBertModel`'s forward pass, i.e. `distilbert_output[0]`, as shown [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/distilbert/modeling_distilbert.py#L625)
This is the last hidden state for all tokens.
Why is there this discrepancy between the two models? The behaviour of `DistilBertForSequenceClassification` makes more intuitive sense to me.
Why is `BertForSequenceClassification` using only the pooler_output from `BertModel` (i.e. the hidden state of the first token)? Why are all other hidden states of other tokens not needed here, but needed in the distilled version?
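For concreteness, here is a minimal sketch of the two output conventions (the base checkpoints are used purely for illustration):

```python
from transformers import AutoTokenizer, BertModel, DistilBertModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tok("a short example", return_tensors="pt")

bert = BertModel.from_pretrained("bert-base-uncased")
bert_out = bert(**inputs)
print(bert_out.pooler_output.shape)  # (1, 768): tanh(dense(hidden state of [CLS]))

distil = DistilBertModel.from_pretrained("distilbert-base-uncased")
distil_out = distil(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
print(distil_out.last_hidden_state[:, 0].shape)  # (1, 768): no pooler, the head slices [CLS] itself
```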
Thanks for the help!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9153/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9152/comments | https://api.github.com/repos/huggingface/transformers/issues/9152/events | https://github.com/huggingface/transformers/pull/9152 | 768,989,315 | MDExOlB1bGxSZXF1ZXN0NTQxMjM3ODAx | 9,152 | Add message to documentation that longformer doesn't support token_type_ids | {
"login": "HHousen",
"id": 11785397,
"node_id": "MDQ6VXNlcjExNzg1Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/11785397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HHousen",
"html_url": "https://github.com/HHousen",
"followers_url": "https://api.github.com/users/HHousen/followers",
"following_url": "https://api.github.com/users/HHousen/following{/other_user}",
"gists_url": "https://api.github.com/users/HHousen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HHousen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HHousen/subscriptions",
"organizations_url": "https://api.github.com/users/HHousen/orgs",
"repos_url": "https://api.github.com/users/HHousen/repos",
"events_url": "https://api.github.com/users/HHousen/events{/privacy}",
"received_events_url": "https://api.github.com/users/HHousen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for your PR! It looks like you did not run the `make style` command to format properly your changes.",
"@sgugger Sorry about that. It should be good now.",
"Thanks a lot!"
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fixes #9111. This pull request adds a notice to the Longformer model documentation that it does not have `token_type_ids`, similar to the message in the [RoBERTa documentation](https://huggingface.co/transformers/model_doc/roberta.html).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger (documentation)
@patrickvonplaten (longformer)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9152/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9152",
"html_url": "https://github.com/huggingface/transformers/pull/9152",
"diff_url": "https://github.com/huggingface/transformers/pull/9152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9152.patch",
"merged_at": 1608134775000
} |
https://api.github.com/repos/huggingface/transformers/issues/9151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9151/comments | https://api.github.com/repos/huggingface/transformers/issues/9151/events | https://github.com/huggingface/transformers/pull/9151 | 768,853,586 | MDExOlB1bGxSZXF1ZXN0NTQxMTU3MzM4 | 9,151 | Added TF CTRL Sequence Classification | {
"login": "spatil6",
"id": 6419011,
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spatil6",
"html_url": "https://github.com/spatil6",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"repos_url": "https://api.github.com/users/spatil6/repos",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | This PR implements Sequence classification for TF CTRL model.
TFCTRLForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. Transformer XL ,GPT-2) do.
Fixes #7623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you write any new necessary tests?
@jplu @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9151/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9151",
"html_url": "https://github.com/huggingface/transformers/pull/9151",
"diff_url": "https://github.com/huggingface/transformers/pull/9151.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9151.patch",
"merged_at": 1608246658000
} |
https://api.github.com/repos/huggingface/transformers/issues/9150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9150/comments | https://api.github.com/repos/huggingface/transformers/issues/9150/events | https://github.com/huggingface/transformers/pull/9150 | 768,836,641 | MDExOlB1bGxSZXF1ZXN0NTQxMTQ2MzI2 | 9,150 | Add flags to return scores, hidden states and / or attention weights in GenerationMixin | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
}
] | closed | false | null | [] | [
"@patrickvonplaten ready for a second review\r\nI guess the next steps are to implement GreedySearchOutput for TF and to update the pipelines using generation?",
"Think it should now be relatively straight-forward to implement the outputs for the other generate methods. Seems like circle ci is in holiday...let's check back on Monday again",
"When will this PR be merged? Now I have a model which need the score of generated sequences.",
"Hi @sgugger and @LysandreJik, thansk for the review ! I made the suggested changes to the documentation.",
"Thanks for your work on this @SBrandeis, great job!",
"Nice work, am I correct that this only works for the PyTorch models?",
"When would this be added to a release version? \r\n\r\n@SBrandeis How would one go about converting these scores into probabilities? Specifically, the probability of the word when it was generated? Seems like we would have had to softmax during generation and store that value rather than the raw score? Does that mean I am still left to role something custom to do that or am I missing something?",
"Hey @mshuffett, we will do a release later today, so this will be included in the release.\r\n\r\nRegarding how to turn the scores into probabilities, please see this discussion on the forum: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175\r\n\r\nWe don't want to add this inside `generate()` because there are all kinds of probs one could calculate and we want to keep it as \"barebone\" as possible for better maintenance. "
] | 1,608 | 1,610 | 1,609 | CONTRIBUTOR | null | # What does this PR do?
Add flags and logic to return attention scores, hidden states, and/or logits when using a model in generation mode.
Fixes #9121
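A minimal sketch of the intended usage, with the flag and attribute names as introduced in this PR:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello there", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=3,
    num_return_sequences=3,
    return_dict_in_generate=True,  # return a ModelOutput instead of a bare tensor
    output_scores=True,            # also populate the score fields
)
print(outputs.sequences_scores)    # final log-prob score of each returned beam
```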
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Issue: #9121
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
=> Check out forum post for more detail: https://discuss.huggingface.co/t/announcement-generationoutputs-scores-attentions-and-hidden-states-now-available-as-outputs-to-generate/3094
## UPDATE:
This PR is almost ready for merge. Here are the last things to take care of:
- [x] @patrickvonplaten, @SBrandeis, @LysandreJik, @sgugger - check if naming of return arguments is ok.
- [x] @patrickvonplaten add a correct `_check_outputs()` function for special models (Reformer, XLNet, XLM, TransfoXL, ...)
- [x] @SBrandeis We could think about a simple code snippet to advertise the new feature. Maybe use "Beam Search" for translation and show the different "sequence_scores" (probs) for 3 different translations.
## Future PR:
- [ ] Change the output in all notebooks, examples, tests to `return_dict_in_generate=True`.
- [ ] Even further in the future PR: Think about a way to deprecate the default usage of `return_dict_in_generate=False`.
- [ ] @patrickvonplaten Discuss with @sgugger if model documentation is ok. @sgugger - I think `generate()` might deserve its own "main" doc page maybe in "main classes" under "Models". I would add a small description at the top, then add all the generate functions and the new "Generation Outputs" there. Do you think this makes sense or would you rather keep "generate" as a subsection under "Models"? IMO, "generate" could be more visible in the docs.
- [ ] @patrickvonplaten, @LysandreJik, @sgugger - this PR adds a lot more generate testing to many models. Since `generate` is quite an expensive method, it might be worth checking that this PR does not yield a significant slowdown of the tests. Simon and I tried to make the generate tests as short and lightweight as possible, but the cost might still be significant.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9150/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9150/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9150",
"html_url": "https://github.com/huggingface/transformers/pull/9150",
"diff_url": "https://github.com/huggingface/transformers/pull/9150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9150.patch",
"merged_at": 1609949503000
} |
https://api.github.com/repos/huggingface/transformers/issues/9149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9149/comments | https://api.github.com/repos/huggingface/transformers/issues/9149/events | https://github.com/huggingface/transformers/issues/9149 | 768,797,124 | MDU6SXNzdWU3Njg3OTcxMjQ= | 9,149 | Log metrics along with hparams in TensorBoardCallback | {
"login": "howardlau1999",
"id": 5250490,
"node_id": "MDQ6VXNlcjUyNTA0OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5250490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howardlau1999",
"html_url": "https://github.com/howardlau1999",
"followers_url": "https://api.github.com/users/howardlau1999/followers",
"following_url": "https://api.github.com/users/howardlau1999/following{/other_user}",
"gists_url": "https://api.github.com/users/howardlau1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howardlau1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howardlau1999/subscriptions",
"organizations_url": "https://api.github.com/users/howardlau1999/orgs",
"repos_url": "https://api.github.com/users/howardlau1999/repos",
"events_url": "https://api.github.com/users/howardlau1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/howardlau1999/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Log metrics along with hparams in TensorBoardCallback.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
It seems pointless to log hparams with an empty metric dict, because the training arguments are already logged in the text section. Currently, users see only a blank screen when they click on the HPARAMS tab of TensorBoard. It would be better to call `add_hparams` with the evaluation metrics when they are available, and otherwise not call the function at all, as sketched below.
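A minimal sketch of what this could look like as a custom callback (this override is my own illustration built on the public `TensorBoardCallback`, not an official API):
```python
from transformers.integrations import TensorBoardCallback

class HParamsWithMetricsCallback(TensorBoardCallback):
    """Log hparams together with the latest evaluation metrics instead of an empty dict."""

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        if self.tb_writer is not None and metrics:
            # add_hparams only accepts numeric values in metric_dict
            numeric = {k: v for k, v in metrics.items() if isinstance(v, (int, float))}
            self.tb_writer.add_hparams(args.to_sanitized_dict(), metric_dict=numeric)
```
One could register it with `Trainer(..., callbacks=[HParamsWithMetricsCallback])` and, if desired, drop the default writer with `trainer.remove_callback(TensorBoardCallback)`.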
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9149/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9149/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9148/comments | https://api.github.com/repos/huggingface/transformers/issues/9148/events | https://github.com/huggingface/transformers/pull/9148 | 768,687,237 | MDExOlB1bGxSZXF1ZXN0NTQxMDU3NjIy | 9,148 | DistilBertForSequenceClassification | {
"login": "AndreaSottana",
"id": 48888970,
"node_id": "MDQ6VXNlcjQ4ODg4OTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/48888970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreaSottana",
"html_url": "https://github.com/AndreaSottana",
"followers_url": "https://api.github.com/users/AndreaSottana/followers",
"following_url": "https://api.github.com/users/AndreaSottana/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreaSottana/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreaSottana/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreaSottana/subscriptions",
"organizations_url": "https://api.github.com/users/AndreaSottana/orgs",
"repos_url": "https://api.github.com/users/AndreaSottana/repos",
"events_url": "https://api.github.com/users/AndreaSottana/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreaSottana/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | DistilBertForSequenceClassification
Fix a small shape error in the comments. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9148/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9148",
"html_url": "https://github.com/huggingface/transformers/pull/9148",
"diff_url": "https://github.com/huggingface/transformers/pull/9148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9148.patch",
"merged_at": 1608128503000
} |
https://api.github.com/repos/huggingface/transformers/issues/9147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9147/comments | https://api.github.com/repos/huggingface/transformers/issues/9147/events | https://github.com/huggingface/transformers/issues/9147 | 768,579,182 | MDU6SXNzdWU3Njg1NzkxODI= | 9,147 | BertTokenizer.from_pretrained fails for local_files_only=True when added_tokens.json is missing | {
"login": "julianmichael",
"id": 5375447,
"node_id": "MDQ6VXNlcjUzNzU0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5375447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julianmichael",
"html_url": "https://github.com/julianmichael",
"followers_url": "https://api.github.com/users/julianmichael/followers",
"following_url": "https://api.github.com/users/julianmichael/following{/other_user}",
"gists_url": "https://api.github.com/users/julianmichael/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julianmichael/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julianmichael/subscriptions",
"organizations_url": "https://api.github.com/users/julianmichael/orgs",
"repos_url": "https://api.github.com/users/julianmichael/repos",
"events_url": "https://api.github.com/users/julianmichael/events{/privacy}",
"received_events_url": "https://api.github.com/users/julianmichael/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually, all of the files 404 here except `vocab.txt`. I have `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, and `tokenizer.json` all missing for this model.",
"> Actually, all of the files 404 here except `vocab.txt`. I have `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, and `tokenizer.json` all missing for this model.\r\n\r\nIf these files are missing even BertTokenizer.from_pretrained('google/bert_uncased_L-2_H-128_A-2'); should give an error; however it passed due to the below code; any particular reason this logic was added in the below mentioned:\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1232",
"@hlahkar Are you sure? The code you linked seems to just check for `requests.exceptions.ConnectionError` and `requests.exceptions.Timeout`. I think a 404 will raise a `requests.exceptions.HTTPError`, which bubble up to be thrown by `get_from_cache`, through `cached_path`, and then [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1774) where it is then caught and ignored.\r\n\r\nIn fact, my hacky workaround was to replace [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1257) with `raise requests.exceptions.HTTPError(\"404 Client Error\")`, so the same thing happens when `local_files_only=True`; now I can load the tokenizer in that case.",
"> @hlahkar Are you sure? The code you linked seems to just check for `requests.exceptions.ConnectionError` and `requests.exceptions.Timeout`. I think a 404 will raise a `requests.exceptions.HTTPError`, which bubble up to be thrown by `get_from_cache`, through `cached_path`, and then [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1774) where it is then caught and ignored.\r\n> \r\n> In fact, my hacky workaround was to replace [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1257) with `raise requests.exceptions.HTTPError(\"404 Client Error\")`, so the same thing happens when `local_files_only=True`; now I can load the tokenizer in that case.\r\n\r\nMy concern is should we also not be going into the error flow whenever we are getting a 404 error also; otherwise it might give a false sense of working to the user",
"In my previous comment, I mentioned the wrong line number. My Question is; why is the 404 error ignored in the below code segment:\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1784",
"So, is this problem solved in any way? \r\nIt seems it is now impossible to use most Bert-like models without the Internet connection, even though all the model files are cached.\r\nTransformers tries to get the `added_tokens.json` file, can't find it, and fails with \"ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.\"\r\nThis is really bothersome on HPC systems, where compute nodes are often offline by design.",
"@akutuzov on which version of transformers are you?\r\n\r\nI agree that this is a bug that we should solve, cc @LysandreJik @sgugger ",
"Taking a look.",
"@julien-c I use Transformers 4.1.1",
"Aimed to fix that in #9807, feedback appreciated @julianmichael ",
"The PR looks good as a stopgap — I guess the subsequent check [at L1766](https://github.com/huggingface/transformers/pull/9807/files#diff-85b29486a884f445b1014a26fecfb189141f2e6b09f4ae701ee758a754fddcc1R1766) will catch the case where the tokenizer hasn't been downloaded yet since no files should be present. But is this problem necessarily only for tokenizers? It seems like a general issue which is going to hold for any cached resources that have optional files. It might be cleaner to handle it in the file cache itself. But that's a much bigger issue I guess.",
"I believe this is only the case for tokenizers. The two other that could be possibly affected by this are:\r\n- Configuration downloads -> downloads a single file\r\n- Model downloads -> downloads the configuration file and the model state dict, both of which are necessary and need to raise an error if missing.\r\n\r\nLet me know if you think I'm missing something and I'll see what we can do. ",
"Ok, sounds good. No need for unnecessary/premature refactoring then :)"
] | 1,608 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): `google/bert_uncased_L-2_H-128_A-2`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Run the following:
```
from transformers import BertTokenizer
BertTokenizer.from_pretrained('google/bert_uncased_L-2_H-128_A-2')
BertTokenizer.from_pretrained('google/bert_uncased_L-2_H-128_A-2', local_files_only=True)
```
In the Python interpreter, this produces the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gscratch/cse/julianjm/anaconda3/lib/python3.7/site-packages/transformers-4.0.1-py3.8.egg/transformers/tokenization_utils_base.py", line 1747, in from_pretrained
File "/gscratch/cse/julianjm/anaconda3/lib/python3.7/site-packages/transformers-4.0.1-py3.8.egg/transformers/file_utils.py", line 1007, in cached_path
File "/gscratch/cse/julianjm/anaconda3/lib/python3.7/site-packages/transformers-4.0.1-py3.8.egg/transformers/file_utils.py", line 1171, in get_from_cache
ValueError: Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
```
Looking more closely, I have isolated the issue to the logic [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1774). In this case, the error is because the cached path for the url `https://huggingface.co/google/bert_uncased_L-2_H-128_A-2/resolve/main/added_tokens.json` cannot be found in the cache when `local_files_only=True`. This is because the URL 404s; i.e., the file does not exist.
When `local_files_only=False`, the GET returns a 404 and the tokenizer init code just ignores the missing file. However, when `local_files_only=True` and the file is not found, it throws a `ValueError` instead which is not caught.
What makes this non-trivial is that without making HTTP requests, there is no way of telling the difference between a file that doesn't exist and a file which exists but hasn't been downloaded. It seems to me that there are several potential ways of fixing the issue.
1. Ensure that all files exist. Don't let people upload incomplete sets of files (and fix the ones which are currently incomplete).
2. Recover from 404s by caching an "empty" file here. But this only works where there is a meaningful notion of "empty" file, like lists of tokens. I think this would not work for json files or serialized models.
3. Put a special kind of file in the cache which says "hey, this file isn't supposed to exist", and handle it appropriately everywhere files are loaded. This could throw a special error saying the file isn't supposed to exist; HTTP 404s could then be caught and re-thrown as this special error, so the case could be handled uniformly.
4. Just log a warning for files that aren't in the cache, and treat them like 404s. Wild west, but at least if the code unexpectedly fails later the user will be able to guess the problem. Easy to implement, but will worsen the UX every time someone tries to use `local_files_only` without downloading the model first.
Option 3 seems the cleanest to me, while option 4 (sketched below) is what I'm shunting into my transformers egg for now so I can keep working.
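For concreteness, the option-4-style stopgap boils down to raising the same error a 404 would have produced (a sketch of the idea, not a proper fix):
```python
import requests

def missing_cache_entry_as_404():
    # When local_files_only=True and a file is absent from the cache, pretend the
    # remote returned a 404, so callers that already ignore missing optional files
    # (like the tokenizer loader) keep working offline.
    raise requests.exceptions.HTTPError("404 Client Error")
```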
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
After downloading, I would expect any artifact to be loadable from cache and equivalent to the downloaded one.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9147/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9146/comments | https://api.github.com/repos/huggingface/transformers/issues/9146/events | https://github.com/huggingface/transformers/issues/9146 | 768,454,735 | MDU6SXNzdWU3Njg0NTQ3MzU= | 9,146 | Ray tune hyperparameters search error | {
"login": "howardlau1999",
"id": 5250490,
"node_id": "MDQ6VXNlcjUyNTA0OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5250490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howardlau1999",
"html_url": "https://github.com/howardlau1999",
"followers_url": "https://api.github.com/users/howardlau1999/followers",
"following_url": "https://api.github.com/users/howardlau1999/following{/other_user}",
"gists_url": "https://api.github.com/users/howardlau1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howardlau1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howardlau1999/subscriptions",
"organizations_url": "https://api.github.com/users/howardlau1999/orgs",
"repos_url": "https://api.github.com/users/howardlau1999/repos",
"events_url": "https://api.github.com/users/howardlau1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/howardlau1999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I googled for the error and it may be related to sending a large object to redis. Was it because the datasets are too large?",
"Hi! Did you try to open an issue at ray directly? It seems to be linked to their library rather than `transformers`",
"> Hi! Did you try to open an issue at ray directly? It seems to be linked to their library rather than `transformers`\r\n\r\nI googled and found some related issues: https://github.com/ray-project/ray/issues/2931 and according to the replies the solution is https://ray.readthedocs.io/en/latest/tune-usage.html#handling-large-datasets\r\n\r\nBut I don't know how to pass that `tune.with_parameters`. Maybe the `Trainer` should take care of this?",
"It looks like something way too complex to implement so I'd suggest using optuna and see if you have the same problem, or re-implementing your own loop to use `ray.tune` on this. I don't think it can be supported easily by `Trainer`, and the documentation on the ray side is a bit too sparse on this subject to help us do it ourselves.",
"I have the same issue, and Optuna seems to be working fine. I think the biggest difference is that Optuna uses SQLite / in-memory, where Ray wants to send a (very large) object to Redis.",
"I don't have a solution for this problem, but just for others that might encounter the same problem, I tried the proposed solution (passing the arguments to `tune.run` via `ray.tune.with_parameters` in `run_hp_search_ray`) but the results were exactly the same. By what I have been able to gather, I would say that the problem arises from models bigger than 512M, not from the datasets.\r\n",
"hey folks, this should be working on the latest version of ray -- could you try installing the newest version via `pip install -U ray` and trying again?",
"> \r\n> \r\n> hey folks, this should be working on the latest version of ray -- could you try installing the newest version via `pip install -U ray` and trying again?\r\n\r\nHi @richardliaw! After updating ray to the latest version (1.1.0), it still isn't working for me, although the exception stack trace has changed a little (prior to this, I got the same exception as @howardlau1999 in their first comment):\r\n\r\n```Traceback (most recent call last):\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/connection.py\", line 706, in send_packed_command\r\n sendall(self._sock, item)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/_compat.py\", line 9, in sendall\r\n return sock.sendall(*args, **kwargs)\r\nBrokenPipeError: [Errno 32] Broken pipe\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/DATA/nperez/PROJECTS/DNG/src/system/train_span_in_context.py\", line 266, in <module>\r\n main()\r\n File \"/DATA/nperez/PROJECTS/DNG/src/system/train_span_in_context.py\", line 142, in main\r\n local_dir='/DATA/nperez/PROJECTS/DNG/hsearch/ray-search/'\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/transformers/trainer.py\", line 979, in hyperparameter_search\r\n best_run = run_hp_search(self, n_trials, direction, **kwargs)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/transformers/integrations.py\", line 187, in run_hp_search_ray\r\n analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/tune.py\", line 325, in run\r\n restore=restore)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/experiment.py\", line 149, in __init__\r\n self._run_identifier = Experiment.register_if_needed(run)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/experiment.py\", line 287, in register_if_needed\r\n register_trainable(name, run_object)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/registry.py\", line 71, in register_trainable\r\n _global_registry.register(TRAINABLE_CLASS, name, trainable)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/registry.py\", line 124, in register\r\n self.flush_values()\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/registry.py\", line 146, in flush_values\r\n _internal_kv_put(_make_key(category, key), value, overwrite=True)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/experimental/internal_kv.py\", line 27, in _internal_kv_put\r\n updated = worker.redis_client.hset(key, \"value\", value)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/client.py\", line 3050, in hset\r\n return self.execute_command('HSET', name, *items)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/client.py\", line 900, in execute_command\r\n conn.send_command(*args)\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/connection.py\", line 726, in send_command\r\n check_health=kwargs.get('check_health', True))\r\n File \"/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/connection.py\", line 718, in send_packed_command\r\n (errno, errmsg))\r\nredis.exceptions.ConnectionError: 
Error 32 while writing to socket. Broken pipe.\r\n```\r\n\r\nTo be specific, in case it helps, I've _been_ able to make hyperparameter search work for the following pre-trained models—before and after updating ray—:\r\n* dccuchile/bert-base-spanish-wwm-cased\r\n* allenai/scibert_scivocab_cased\r\n* skimai/spanberta-base-cased\r\n* distilbert-base-uncased\r\n\r\nBut not these:\r\n* bert-base-multilingual-cased\r\n* xlm-roberta-base\r\n\r\n\r\n",
"I couldn't get ray tune working either for roberta-large after upgrading ray to version 1.1.0 @richardliaw",
"Got it! I'll take a closer look this week. Thanks!",
"Thanks for raising this issue. I could reproduce it (with `roberta-large`) on an AWS p2.xlarge instance. I created a PR that should fix this issue via `tune.with_parameters`: https://github.com/huggingface/transformers/pull/9749\r\n\r\n@naiarapm it would be interesting to see what you did differently in your try to use `tune.with_parameters` - do you still have that piece of code available? We designed this utility exactly for handling large datasets and it worked for me in my experiments.\r\n\r\nIf you have the chance @howardlau1999 it would be great if you could check if this fixes your issue.",
"@krfricke Big thanks for your fix! I checked out your branch and the hyperparameters search with `ray` now works for me with `roberta-large`!",
"Hi @krfricke! \r\n\r\nSorry for the delay. In response to your question, I simply changed the following line in `transformers.integrations.py` (function `run_hp_search_ray`):\r\n\r\n```\r\nanalysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)\r\n````\r\n\r\nto this:\r\n\r\n````\r\nanalysis = ray.tune.run(\r\n ray.tune.with_parameters(_objective),\r\n config=trainer.hp_space(None), num_samples=n_trials, **kwargs\r\n)\r\n````\r\nI see now in your PR that that alone was not enough though :-) But I did not know what else to change, I just followed the suggested instructions to the best of my ability.\r\n\r\nI can confirm as well that the error has been fixed for me. Thanks a lot!!"
] | 1,608 | 1,611 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.4.0-139-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten @TevenLeScao
Blenderbot: @patrickvonplaten
Bart: @patrickvonplaten
Marian: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
RAG: @patrickvonplaten, @lhoestq
FSMT: @stas00
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@sgugger
## Information
Model I am using (Bert, XLNet ...): Roberta-large
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: GLUE SST-2
## To reproduce
Steps to reproduce the behavior:
1. I wanted to do a hyperparameter search, so I referred to https://huggingface.co/blog/ray-tune and modified `examples/text-classification/run_glue.py`, replacing the training part with
```
def model_init():
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
return model
trainer = Trainer(
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
# Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.
data_collator=default_data_collator if data_args.pad_to_max_length else None,
model_init=model_init,
)
```
```
# Training
if training_args.do_train:
from ray import tune
import ray
ray.init()
best_trial = trainer.hyperparameter_search(
hp_space=lambda _ : {"seed": tune.grid_search([31, 42, 53])},
direction="maximize",
backend="ray",
)
logger.info(" Best run %s" % str(best_trial))
```
2. Run `python run_glue.py --model_name_or_path roberta-large --do_train --do_eval --per_gpu_train_batch_size 8 --output_dir hypersearch-0 --task_name sst2 --evaluation_strategy steps --eval_steps 20 --logging_steps 10`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Then the script exited with exception:
```
Traceback (most recent call last):
File "run_glue.py", line 428, in <module>
main()
File "run_glue.py", line 359, in main
best_trial = trainer.hyperparameter_search(
File "/data1/howard/transformers/src/transformers/trainer.py", line 1039, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/data1/howard/transformers/src/transformers/integrations.py", line 241, in run_hp_search_ray
analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/tune.py", line 299, in run
experiments[i] = Experiment(
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/experiment.py", line 138, in __init__
self._run_identifier = Experiment.register_if_needed(run)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/experiment.py", line 276, in register_if_needed
register_trainable(name, run_object)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 71, in register_trainable
_global_registry.register(TRAINABLE_CLASS, name, trainable)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 124, in register
self.flush_values()
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 146, in flush_values
_internal_kv_put(_make_key(category, key), value, overwrite=True)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/experimental/internal_kv.py", line 27, in _internal_kv_put
updated = worker.redis_client.hset(key, "value", value)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/client.py", line 3004, in hset
return self.execute_command('HSET', name, key, value)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/client.py", line 877, in execute_command
conn.send_command(*args)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/connection.py", line 720, in send_command
self.send_packed_command(self.pack_command(*args),
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/connection.py", line 712, in send_packed_command
raise ConnectionError("Error %s while writing to socket. %s." %
redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The script should run without errors.
## Related Issues
https://github.com/ray-project/ray/issues/2931
https://ray.readthedocs.io/en/latest/tune-usage.html#handling-large-datasets
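For anyone else hitting this, here is a minimal, self-contained sketch of the `tune.with_parameters` pattern from the Ray docs linked above (the objective and data are toy placeholders, not the GLUE script):
```python
from ray import tune

def objective(config, data=None):
    # `data` travels through the Ray object store instead of being
    # pickled into Redis along with the trainable itself
    tune.report(score=len(data) * config["scale"])

big_data = list(range(1_000_000))  # stand-in for a large dataset
analysis = tune.run(
    tune.with_parameters(objective, data=big_data),
    config={"scale": tune.grid_search([1, 2, 3])},
)
```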
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9146/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9145/comments | https://api.github.com/repos/huggingface/transformers/issues/9145/events | https://github.com/huggingface/transformers/pull/9145 | 768,389,676 | MDExOlB1bGxSZXF1ZXN0NTQwODQzMTY2 | 9,145 | TableQuestionAnsweringPipeline | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@NielsRogge I can't ping you for review but I would love your input on this!",
"Thanks for your comments @patrickvonplaten @sgugger, applied your changes. This PR includes the `AutoModelForTableQuestionAnswering` defined [here](https://github.com/huggingface/transformers/pull/9154), adds a check on `pandas` which will raise an error with how to install it if necessary, and will raise an error if the `tf` framework was specified.\r\n\r\nInstantiating a pipeline with the following:\r\n```py\r\ntqa_pipeline = pipeline(\"table-question-answering\", framework=\"tf\")\r\n```\r\nyields:\r\n```\r\nValueError: Pipeline using tf framework, but this framework is not supported by this pipeline.\r\n```\r\n\r\nInstantiating the pipeline without having pandas installed:\r\n```py\r\ntqa_pipeline = pipeline(\"table-question-answering\")\r\n```\r\nyields:\r\n```\r\nImportError: Pandas is required for the TAPAS tokenizer.\r\n```"
] | 1,608 | 1,608 | 1,608 | MEMBER | null | ## TableQuestionAnsweringPipeline
Introduces the `TableQuestionAnsweringPipeline` which will be used for the `TableQuestionAnswering` widget:
<p align="center">
<img src="https://user-images.githubusercontent.com/30755778/102266591-a9606880-3ee6-11eb-9f16-7173a9a85b58.gif" width="500">
</p>
There are examples of usage within the documentation, but here are some others if you want to give it a spin:
### WTQ and aggregators
```py
tqa_pipeline = pipeline("table-question-answering")
data = {
"Repository": ["Transformers", "Datasets", "Tokenizers"],
"Stars": ["36542", "4512", "3934"],
"Contributors": ["651", "77", "34"],
"Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
}
table = pd.DataFrame.from_dict(data)
queries = [
"What repository has the largest number of stars?",
"Given that the numbers of stars defines if a repository is active, what repository is the most active?",
"What is the number of repositories?",
"What is the average number of stars?",
"What is the total amount of stars?"
]
outputs = tqa_pipeline(table, queries)
print(outputs)
```
This outputs the following (given that the aggregator setup is respected in the model configuration, which won't be the case until the configuration changes as proposed here are accepted):
```
[
{'answer': 'Transformers', 'coordinates': [(0, 0)], 'cells': ['Transformers'], 'aggregator': 'NONE'},
{'answer': 'Transformers', 'coordinates': [(0, 0)], 'cells': ['Transformers'], 'aggregator': 'NONE'},
{'answer': 'COUNT > Transformers, Datasets, Tokenizers', 'coordinates': [(0, 0), (1, 0), (2, 0)], 'cells': ['Transformers', 'Datasets', 'Tokenizers'], 'aggregator': 'COUNT'},
{'answer': 'AVERAGE > 36542, 4512, 3934', 'coordinates': [(0, 1), (1, 1), (2, 1)], 'cells': ['36542', '4512', '3934'], 'aggregator': 'AVERAGE'},
{'answer': 'SUM > 36542, 4512, 3934', 'coordinates': [(0, 1), (1, 1), (2, 1)], 'cells': ['36542', '4512', '3934'], 'aggregator': 'SUM'}
]
```
Please note the aggregators, their presence in the answer when they exist, and their absence when they do not.
### SQA and sequential inference
```py
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"],
'Date of birth': ["7 february 1967", "10 june 1996", "28 november 1967"]}
queries = ["How many movies has George Clooney played in?", "How old is he?", "What's his date of birth?"]
table = pd.DataFrame.from_dict(data)
tqa_pipeline = pipeline("table-question-answering", model="nielsr/tapas-base-finetuned-sqa", tokenizer="nielsr/tapas-base-finetuned-sqa")
outputs = tqa_pipeline(table, queries, sequential=True)
print(outputs)
```
This outputs the following:
```
[
{'answer': '69', 'coordinates': [(2, 2)], 'cells': ['69']},
{'answer': '59', 'coordinates': [(2, 1)], 'cells': ['59']},
{'answer': '28 november 1967', 'coordinates': [(2, 3)], 'cells': ['28 november 1967']}
]
```
Please note the relationship between questions ("how old is he", who is "he"?) and the correct answers given by the model. One can try passing `sequential=False` and obtain vastly different results.
Here is the [documentation page](https://138478-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/pipelines.html#transformers.TableQuestionAnsweringPipeline). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9145/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9145",
"html_url": "https://github.com/huggingface/transformers/pull/9145",
"diff_url": "https://github.com/huggingface/transformers/pull/9145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9145.patch",
"merged_at": 1608139910000
} |
https://api.github.com/repos/huggingface/transformers/issues/9144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9144/comments | https://api.github.com/repos/huggingface/transformers/issues/9144/events | https://github.com/huggingface/transformers/issues/9144 | 768,344,560 | MDU6SXNzdWU3NjgzNDQ1NjA= | 9,144 | Saving model errors | {
"login": "cyfugr",
"id": 13256398,
"node_id": "MDQ6VXNlcjEzMjU2Mzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13256398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyfugr",
"html_url": "https://github.com/cyfugr",
"followers_url": "https://api.github.com/users/cyfugr/followers",
"following_url": "https://api.github.com/users/cyfugr/following{/other_user}",
"gists_url": "https://api.github.com/users/cyfugr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyfugr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyfugr/subscriptions",
"organizations_url": "https://api.github.com/users/cyfugr/orgs",
"repos_url": "https://api.github.com/users/cyfugr/repos",
"events_url": "https://api.github.com/users/cyfugr/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyfugr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, I'm sorry but I don't know spell.ml or how it works; if it works in a colab notebook and saves correctly, it seems the issue comes from spell.ml rather than transformers.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,608 | 1,614 | 1,614 | NONE | null | I am trying to fine-tune distillbert for a multilabel task using a V100 gpu and latest transformers from pip. When i try to save the model i get this error:
```
Traceback (most recent call last):
File "script/fine_tune_distillbert.py", line 51, in <module>
model.save_pretrained(ROOT_DIR)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 534, in save_pretrained
self.save_weights(output_model_file)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 2085, in save_weights
hdf5_format.save_weights_to_hdf5_group(f, self.layers)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 640, in save_weights_to_hdf5_group
param_dset = g.create_dataset(name, val.shape, dtype=val.dtype)
File "/usr/local/lib/python3.7/dist-packages/h5py/_hl/group.py", line 143, in create_dataset
if '/' in name:
TypeError: a bytes-like object is required, not 'str'
```
The model init:
```
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels= max_lab)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy']) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(256), epochs=3, batch_size=256,
validation_data=val_dataset.shuffle(1000).batch(256))
```
This is how I save the model after successfully training it:
```
import os
ROOT_DIR = os.path.abspath(os.curdir)
ROOT_DIR = ROOT_DIR + "/model"
tokenizer.save_pretrained(ROOT_DIR)
model.save_pretrained(ROOT_DIR)
```
The tokenizer is saved perfectly. I have tried to run the same code in a Colab notebook (with much less data) and it saves perfectly, but when I use a service like `spell.ml` I get this error.
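Since the same code saves fine on Colab, an environment difference is a plausible culprit; h5py 3.x is known to interact badly with the HDF5 saving/loading paths of TF 2.3-era Keras, so comparing versions on both machines is a cheap first check (a guess, not a confirmed diagnosis):
```python
import h5py
import tensorflow as tf

print("h5py:", h5py.__version__)        # pinning h5py<3.0 has helped with TF 2.3
print("tensorflow:", tf.__version__)
```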
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9144/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9143/comments | https://api.github.com/repos/huggingface/transformers/issues/9143/events | https://github.com/huggingface/transformers/pull/9143 | 768,271,532 | MDExOlB1bGxSZXF1ZXN0NTQwNzY2NzAx | 9,143 | Pass kwargs to Pipeline's tokenizer call | {
"login": "guyrosin",
"id": 1250162,
"node_id": "MDQ6VXNlcjEyNTAxNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1250162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guyrosin",
"html_url": "https://github.com/guyrosin",
"followers_url": "https://api.github.com/users/guyrosin/followers",
"following_url": "https://api.github.com/users/guyrosin/following{/other_user}",
"gists_url": "https://api.github.com/users/guyrosin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guyrosin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guyrosin/subscriptions",
"organizations_url": "https://api.github.com/users/guyrosin/orgs",
"repos_url": "https://api.github.com/users/guyrosin/repos",
"events_url": "https://api.github.com/users/guyrosin/events{/privacy}",
"received_events_url": "https://api.github.com/users/guyrosin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Narsil who probably knows better.",
"Hi @guyrosin , we actually don't want to enable that.\r\nThe problem is that kwargs are used both by `_parse_and_tokenize` and by `generate`/`forward`.\r\nSee discussion here: https://github.com/huggingface/transformers/pull/9432#discussion_r552550844\r\n\r\nI'm guessing you want to override a tokenizer argument at runtime in the pipeline. The best way to do that is to whitelist all arguments of the `tokenizer` (like we did with truncation). and *only* pass `**kwargs` to generate. That's the best way to isolate arguments of both functionalities without creating a mess. **Hopefully** there won't be any arguments with the same name in both function calls..\r\n\r\n\r\nThe `**kwargs` in the function signature, is legacy for now as it simply captures previous arguments that used to be sent, and prevents triggering an error for previously written code.\r\n\r\n",
"Ohh, got it. Thanks for the explanation @Narsil!"
] | 1,608 | 1,610 | 1,610 | CONTRIBUTOR | null | # What does this PR do?
When calling a Pipeline, the `kwargs` argument is not passed to the tokenizer (it is actually not used at all).
I think the intended behavior is to pass it (as the base tokenizer's `__call__()` method already supports `kwargs`), and that's what this PR does.
[Related to #8180]
The call order is:
```python
SpecificPipeline.__call__(..., **kwargs)
# Which calls
Pipeline.__call__(..., **kwargs)
# Which calls
SpecificPipeline._parse_and_tokenize(..., **kwargs)
# Which in turn calls
self.tokenizer(...) # No kwargs in this call
```
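For illustration, this is the kind of call the change would enable (the task, input, and `truncation` kwarg are placeholder choices on my part):
```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
# with kwargs forwarded, tokenizer options can be overridden per call:
result = classifier("a very long review " * 200, truncation=True)
print(result)
```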
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9143/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9143",
"html_url": "https://github.com/huggingface/transformers/pull/9143",
"diff_url": "https://github.com/huggingface/transformers/pull/9143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9143.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9142/comments | https://api.github.com/repos/huggingface/transformers/issues/9142/events | https://github.com/huggingface/transformers/issues/9142 | 768,249,379 | MDU6SXNzdWU3NjgyNDkzNzk= | 9,142 | RAGRetriever loads dataset in the default cache dir even if a different one is specified | {
"login": "JamesDeAntonis",
"id": 33379057,
"node_id": "MDQ6VXNlcjMzMzc5MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/33379057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamesDeAntonis",
"html_url": "https://github.com/JamesDeAntonis",
"followers_url": "https://api.github.com/users/JamesDeAntonis/followers",
"following_url": "https://api.github.com/users/JamesDeAntonis/following{/other_user}",
"gists_url": "https://api.github.com/users/JamesDeAntonis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamesDeAntonis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamesDeAntonis/subscriptions",
"organizations_url": "https://api.github.com/users/JamesDeAntonis/orgs",
"repos_url": "https://api.github.com/users/JamesDeAntonis/repos",
"events_url": "https://api.github.com/users/JamesDeAntonis/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamesDeAntonis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed! Would you like to open a PR with your fix?",
"I am dealing with this issue too, would love to find a way around this. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,608 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: latest
- Platform: any
- Python version: 3.8
- PyTorch version (GPU?): any
- Tensorflow version (GPU?): any
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
RAG: @patrickvonplaten, @lhoestq
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a `RagRetriever` model with `cache_dir='/mnt/.cache/huggingface/'`
2. notice that the dataset is still downloaded to `'~/.cache'`
```python
from transformers import RagRetriever
rag_retriever = RagRetriever.from_pretrained('facebook/rag-token-base', cache_dir='/mnt/.cache/huggingface')
```
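A minimal sketch of the kind of fix, i.e. threading the configured cache dir through to `datasets` (the helper name and arguments are illustrative, not the actual `retrieval_rag.py` signature):
```python
from datasets import load_dataset

def load_index_dataset(dataset_name, split, cache_dir=None):
    # forwarding cache_dir is the essential change; omitting it falls back to ~/.cache
    return load_dataset(dataset_name, split=split, cache_dir=cache_dir)
```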
## Expected behavior
The dataset is still downloaded to `'~/.cache'`, even though we want it to go to the cache in `/mnt`.
This happens because, in `retrieval_rag.py`, `config.cache_dir` isn't passed through to `load_dataset` (on line 273, for example). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9142/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9141/comments | https://api.github.com/repos/huggingface/transformers/issues/9141/events | https://github.com/huggingface/transformers/pull/9141 | 768,242,109 | MDExOlB1bGxSZXF1ZXN0NTQwNzQ3MTM0 | 9,141 | Support for private models from huggingface.co | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten I don't follow here. In this PR we want to have a way to either pass a token directly, or to opt in to use the one that's store in `~/`. Don't see how I can do that with just an optional string?",
"> @patrickvonplaten I don't follow here. In this PR we want to have a way to either pass a token directly, or to opt in to use the one that's store in `~/`. Don't see how I can do that with just an optional string?\r\n\r\nI might have misunderstood a bit what constrains there are on the functionality. I thought, the following logic is possible and makes sense here:\r\n\r\n- If user passes a string `use_auth_token`, then use this as the token\r\n- Else look for token in `~/.huggingface`:\r\n - if there is no token and model is private -> throw error\r\n - if there is no token and model is **not** private -> load the model as usual\r\n - if there is a token -> use this one\r\n\r\nNot sure if there is something I am completely overlooking here in the logic though, *e.g.* if we cannot know before hand whether the model is private or not\r\n",
"> > @patrickvonplaten I don't follow here. In this PR we want to have a way to either pass a token directly, or to opt in to use the one that's store in `~/`. Don't see how I can do that with just an optional string?\r\n> \r\n> I might have misunderstood a bit what constrains there are on the functionality. I thought, the following logic is possible and makes sense here:\r\n> \r\n> * If user passes a string `use_auth_token`, then use this as the token\r\n> * Else look for token in `~/.huggingface`:\r\n> - if there is no token and model is private -> throw error\r\n> - if there is no token and model is **not** private -> load the model as usual\r\n> - if there is a token -> use this one\r\n> \r\n> Not sure if there is something I am completely overlooking here in the logic though, _e.g._ if we cannot know before hand whether the model is private or not\r\n\r\nOk never mind - as discussed offline this would require more features to add which is out-of-scope for this PR -> so LGTM!",
"Also cc'ing @borisdayma as this PR adds a `exist_ok` param to `HfApi.create_repo()`"
] | 1,608 | 1,608 | 1,608 | MEMBER | null | Add a `use_auth_token` flag (or string) to all `from_pretrained` entry points, to specify the token to use as Bearer authorization for remote files.
- If it's a string, use it as the token
- If it's `True`, the token will be read from `~/.huggingface/token` (will throw if no token is there)
You can test this with:
```python
model = AutoModelForMaskedLM.from_pretrained("pierric/hf-private", use_auth_token=True)
```
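For the string form, here is a minimal sketch (the token value is a made-up placeholder, not a real credential):
```python
from transformers import AutoModelForMaskedLM

# "api_org_XXXX" is a hypothetical placeholder -- substitute your own token
# from huggingface.co.
model = AutoModelForMaskedLM.from_pretrained(
    "pierric/hf-private", use_auth_token="api_org_XXXX"
)
```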
We'll add unit tests down the line but need to think about which environment those tests are going to hit.
⚠️ For now, I decided against adding the token by default to all calls if the user is logged in. Let's discuss though! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9141",
"html_url": "https://github.com/huggingface/transformers/pull/9141",
"diff_url": "https://github.com/huggingface/transformers/pull/9141.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9141.patch",
"merged_at": 1608131397000
} |
https://api.github.com/repos/huggingface/transformers/issues/9140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9140/comments | https://api.github.com/repos/huggingface/transformers/issues/9140/events | https://github.com/huggingface/transformers/pull/9140 | 768,202,340 | MDExOlB1bGxSZXF1ZXN0NTQwNzE4NDg0 | 9,140 | Fix T5 Encoder model parallel tests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9140/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9140",
"html_url": "https://github.com/huggingface/transformers/pull/9140",
"diff_url": "https://github.com/huggingface/transformers/pull/9140.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9140.patch",
"merged_at": 1608066240000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9139/comments | https://api.github.com/repos/huggingface/transformers/issues/9139/events | https://github.com/huggingface/transformers/pull/9139 | 768,183,956 | MDExOlB1bGxSZXF1ZXN0NTQwNzAzOTc3 | 9,139 | Experimental support for fairscale ShardedDDP | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"wrt your notes on GPU memory consumption improvements - from what I have seen checking GPU allocation often doesn't show the real difference, as pytorch tends to use more than it absolutely needs if there is spare memory - or rather it can go with less when the memory is tight - so to get the best improvements stats it's the best to try to push instead the BS until it OOMs, and then you get a more precise difference - which usually leads to more precise improvement numbers than just comparing memory allocation. This is just in my experience.\r\n\r\nAll I'm saying is that probably the improvements are even better than what they seem.",
"finetune_trainer crashes with this option:\r\n\r\n```\r\nexport BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp\r\n```\r\n```\r\nTraceback (most recent call last):\r\n File \"./finetune_trainer.py\", line 379, in <module>\r\n main()\r\n File \"./finetune_trainer.py\", line 315, in main\r\n trainer.train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 677, in train\r\n model = ShardedDDP(model, self.optimizer)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/fairscale/nn/data_parallel/sharded_ddp.py\", line 96, in __init__\r\n self._param_iterator = chain(*[optim.should_bucket_param.keys() for optim in self.sharded_optimizers])\r\nTypeError: 'AdamW' object is not iterable\r\n```\r\n\r\ncould probably extend `test_finetune_trainer.py` to deploy this option if `fairscale` is available? but CIs won't have it - and it's quite slow to build\r\n",
"Oh it's just because it overrides the `create_optimizer_and_scheduler` method. Will fix that method.",
"OK, next we have this:\r\n```\r\nTraceback (most recent call last):\r\n File \"./finetune_trainer.py\", line 379, in <module>\r\n main()\r\n File \"./finetune_trainer.py\", line 315, in main\r\n trainer.train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 818, in train\r\n self.scaler.step(self.optimizer)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py\", line 330, in step\r\n assert len(optimizer_state[\"found_inf_per_device\"]) > 0, \"No inf checks were recorded for this optimizer.\"\r\nAssertionError: No inf checks were recorded for this optimizer.\r\n```\r\n\r\nCoincidentally I have just had the same issue with deepspeed integration when I enable its internal fp16 handling. Didn't get to the root of it yet, but removing `--fp16` arg and thus disabling all the fp16 handling trainer does removed this error.\r\n\r\nnote: I'm switching to deepspeed fp16 handling there...\r\n",
"Is it FP16 with AMP or with apex? I don't believe fairscale is compatible with apex.",
"native amp\r\n\r\nSee the command line I'm testing with at:\r\nhttps://github.com/huggingface/transformers/pull/9139#issuecomment-745581491",
"If you're joining in and discovered you can't build `fairscale`, please see [this](https://github.com/facebookresearch/fairscale/pull/249) and perhaps [that](https://github.com/facebookresearch/fairscale/issues/250).",
"> OK, next we have this:\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"./finetune_trainer.py\", line 379, in <module>\r\n> main()\r\n> File \"./finetune_trainer.py\", line 315, in main\r\n> trainer.train(\r\n> File \"/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py\", line 818, in train\r\n> self.scaler.step(self.optimizer)\r\n> File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py\", line 330, in step\r\n> assert len(optimizer_state[\"found_inf_per_device\"]) > 0, \"No inf checks were recorded for this optimizer.\"\r\n> AssertionError: No inf checks were recorded for this optimizer.\r\n> ```\r\n> \r\n> Coincidentally I have just had the same issue with deepspeed integration when I enable its internal fp16 handling. Didn't get to the root of it yet, but removing `--fp16` arg and thus disabling all the fp16 handling trainer does removed this error.\r\n> \r\n> note: I'm switching to deepspeed fp16 handling there...\r\n\r\nhey there, a bit late, but one of the fairscale/shardedDDP author. The issue with Apex (and vanilla Torch) grad scaler is that it does not know about the gradient sharding, so not all the ranks will have the same behaviour. Torch AMP is supported though, you just have to pass in the ShardedGradScaler as defined here https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/grad_scaler.py#L24",
"Yes, we're passing that scaler :-) The issue was with AMP not Apex. It looks like there is a problem with or without FP16 with one of models.\r\nAh reading more, I see there is a lot on the issue I posted so will look there. Thanks for coming helping us!"
] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
This PR adds support for [FairScale](https://github.com/facebookresearch/fairscale)'s sharded DDP training to save GPU memory when training distributed models. Initial tests indeed show a nice reduction in GPU memory usage!
This follows the steps of the [main example](https://github.com/facebookresearch/fairscale/blob/master/benchmarks/oss.py) provided on the FairScale repo, integrating them into our Trainer API. To activate training with sharded DDP, one must pass along the flag `--sharded_ddp` in a distributed launch command.
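As a rough sketch of what the flag wires up internally (following the fairscale benchmark linked above, not the exact Trainer code -- `MyModel` and `device` are placeholders):
```python
import torch
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP

# Assumes torch.distributed is already initialized, e.g. by torch.distributed.launch.
model = MyModel().to(device)  # any nn.Module, placed on this rank's device
# OSS shards the optimizer state across ranks instead of replicating it.
optimizer = OSS(params=model.parameters(), optim=torch.optim.AdamW, lr=3e-5)
# ShardedDDP reduces each gradient only to the rank owning its optimizer shard.
model = ShardedDDP(model, optimizer)
```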
Benchmarks tried:
- a fine-tuning on MRPC with `bert-base-uncased` -> goes from 5GB per GPU to 4GB per GPU with no hit to accuracy
- a fine-tuning on SQuAD v2 with `xlnet-large-cased` -> goes from 11.5GB per GPU to 8GB per GPU (didn't run until the end, so didn't check whether the accuracy was the same; training loss seemed equivalent.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9139/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9139/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9139",
"html_url": "https://github.com/huggingface/transformers/pull/9139",
"diff_url": "https://github.com/huggingface/transformers/pull/9139.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9139.patch",
"merged_at": 1608144468000
} |
https://api.github.com/repos/huggingface/transformers/issues/9138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9138/comments | https://api.github.com/repos/huggingface/transformers/issues/9138/events | https://github.com/huggingface/transformers/issues/9138 | 768,085,875 | MDU6SXNzdWU3NjgwODU4NzU= | 9,138 | adapting trainer.py for multiple optimizers | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,608 | 1,608 | 1,608 | NONE | null | Hi
I was wondering if there can be an easy way to adapt trainer.py for multiple optimizers, where each optimizer is responsible for updating a part of the model. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9138/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9138/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9137/comments | https://api.github.com/repos/huggingface/transformers/issues/9137/events | https://github.com/huggingface/transformers/pull/9137 | 768,083,743 | MDExOlB1bGxSZXF1ZXN0NTQwNjE3NDM4 | 9,137 | Add possibility to switch between APEX and AMP in Trainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This PR also removes `_use_ddp_no_sync` since presumably `transformers` no longer supports pytorch < 1.2",
"There was one nit that you agreed with but didn't integrate - but I'm fine if it remains as merged - just a potential for divergence down the road...",
"Oh, which one did I miss?",
"https://github.com/huggingface/transformers/pull/9137#discussion_r543638157\r\n",
"Argh, one sec - I see what happened - that wasn't what I meant - sorry for not being clear. `choices` is crucial here - since you don't validate the user-provided values - this is error-prone. \r\n\r\nI tried to suggest not repeating the options in the help comment - please let's have `choices` back, and duplicate them if you prefer the help to have the explicit repetition - thanks.",
"Fixed directly on master in [this commit](https://github.com/huggingface/transformers/commit/51adb97cd644a5840d971868d18c1d436fd6ff5d).",
"That's perfect. Thank you, @sgugger!"
] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
Right now, when PyTorch >= 1.6 is installed, Trainer always uses native AMP. This PR adds the option to switch between AMP and APEX (see the usage sketch below), which can be useful:
- because of the memory leak in AMP (fixed in 1.7.1 but present in 1.6)
- to benchmark APEX vs. AMP
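A usage sketch from `TrainingArguments` (the argument name `fp16_backend` and its values are assumptions based on the `choices` discussion in the comments, not confirmed in this thread):
```python
from transformers import TrainingArguments

# Hypothetical backend selector; "amp" would force native AMP, "apex" APEX,
# and "auto" would let Trainer pick based on what is installed.
args = TrainingArguments(
    output_dir="out",
    fp16=True,
    fp16_backend="apex",
)
```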
It also simplifies the internals of Trainer a little bit. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9137/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9137",
"html_url": "https://github.com/huggingface/transformers/pull/9137",
"diff_url": "https://github.com/huggingface/transformers/pull/9137.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9137.patch",
"merged_at": 1608068290000
} |
https://api.github.com/repos/huggingface/transformers/issues/9136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9136/comments | https://api.github.com/repos/huggingface/transformers/issues/9136/events | https://github.com/huggingface/transformers/pull/9136 | 768,007,494 | MDExOlB1bGxSZXF1ZXN0NTQwNTUyNTkx | 9,136 | Update notebook table and transformers intro notebook | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Discussion on pinning/testing notebooks needs to be global on all notebooks (not just one) so merging this for now. We can think of a strategy and implement it in a follow-up PR."
] | 1,608 | 1,608 | 1,608 | COLLABORATOR | null | # What does this PR do?
Update the examples table and the notebooks table to include all recent examples. Also fix the intro notebook to the `transformers` library, in particular the image that was missing.
Fixes #9083
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9136",
"html_url": "https://github.com/huggingface/transformers/pull/9136",
"diff_url": "https://github.com/huggingface/transformers/pull/9136.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9136.patch",
"merged_at": 1608132271000
} |
https://api.github.com/repos/huggingface/transformers/issues/9135 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9135/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9135/comments | https://api.github.com/repos/huggingface/transformers/issues/9135/events | https://github.com/huggingface/transformers/pull/9135 | 767,900,569 | MDExOlB1bGxSZXF1ZXN0NTQwNDcwNjY4 | 9,135 | Fix Bart Shift | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
Previous PR #9134 was still WIP and accidentally merged too quickly -> sorry for the many commits.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9135",
"html_url": "https://github.com/huggingface/transformers/pull/9135",
"diff_url": "https://github.com/huggingface/transformers/pull/9135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9135.patch",
"merged_at": 1608055471000
} |
https://api.github.com/repos/huggingface/transformers/issues/9134 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9134/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9134/comments | https://api.github.com/repos/huggingface/transformers/issues/9134/events | https://github.com/huggingface/transformers/pull/9134 | 767,874,020 | MDExOlB1bGxSZXF1ZXN0NTQwNDUwNjg0 | 9,134 | [Bart] Correct wrong order in shift token to right in Bart | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
The previous PR #9131 implemented the replacement of -100 with the pad token after retrieving the eos_token_idx. However, it should be done before, to make sure the correct eos_token_id is found.
Thanks a lot @patil-suraj for spotting this.
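A sketch of the helper with the corrected ordering (the exact signature is an assumption based on Bart's `shift_tokens_right` at the time; the point is only where the -100 replacement happens):
```python
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Wrap the last non-pad token (usually <eos>) around to index 0 and shift the rest right."""
    # Replace the -100 label padding *before* locating the eos token; otherwise
    # -100 positions count as non-pad and the wrong index is gathered.
    input_ids = input_ids.masked_fill(input_ids == -100, pad_token_id)
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    shifted = input_ids.clone()
    shifted[:, 0] = input_ids.gather(1, index_of_eos).squeeze(-1)
    shifted[:, 1:] = input_ids[:, :-1]
    return shifted
```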
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9134/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9134",
"html_url": "https://github.com/huggingface/transformers/pull/9134",
"diff_url": "https://github.com/huggingface/transformers/pull/9134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9134.patch",
"merged_at": 1608053912000
} |
https://api.github.com/repos/huggingface/transformers/issues/9133 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9133/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9133/comments | https://api.github.com/repos/huggingface/transformers/issues/9133/events | https://github.com/huggingface/transformers/pull/9133 | 767,857,178 | MDExOlB1bGxSZXF1ZXN0NTQwNDM3Nzc3 | 9,133 | [Examples] Add automatic dataset splitting in language-modeling examples | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah, the commit from #9127 seems to have snuck its way in there. Should I remove it?",
"If you can do it easily, that would be best!",
"> If you can do it easily, that would be best!\r\n\r\nI've tried for a bit but I think I just made things worse ! If that's OK I'll leave it there and I'll fix things at merge time."
] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Currently, language-modeling examples support passing a HF-datasets dataset as training data. However, this dataset needs to have a `train` and a `validation` split, which is not the case for many language-modeling datasets, which are just unstructured text. The updated scripts automatically partition the `train` split to create a `validation` split if one doesn't already exist, and add a `validation_split_percentage` argument (5% by default) to control the split ratio. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9133/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9133/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9133",
"html_url": "https://github.com/huggingface/transformers/pull/9133",
"diff_url": "https://github.com/huggingface/transformers/pull/9133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9133.patch",
"merged_at": 1608066164000
} |
https://api.github.com/repos/huggingface/transformers/issues/9132 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9132/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9132/comments | https://api.github.com/repos/huggingface/transformers/issues/9132/events | https://github.com/huggingface/transformers/pull/9132 | 767,834,686 | MDExOlB1bGxSZXF1ZXN0NTQwNDIwNDEx | 9,132 | Fix typo in trainer_tf.py | {
"login": "luckynozomi",
"id": 14944822,
"node_id": "MDQ6VXNlcjE0OTQ0ODIy",
"avatar_url": "https://avatars.githubusercontent.com/u/14944822?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckynozomi",
"html_url": "https://github.com/luckynozomi",
"followers_url": "https://api.github.com/users/luckynozomi/followers",
"following_url": "https://api.github.com/users/luckynozomi/following{/other_user}",
"gists_url": "https://api.github.com/users/luckynozomi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckynozomi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckynozomi/subscriptions",
"organizations_url": "https://api.github.com/users/luckynozomi/orgs",
"repos_url": "https://api.github.com/users/luckynozomi/repos",
"events_url": "https://api.github.com/users/luckynozomi/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckynozomi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,608 | 1,608 | 1,608 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in trainer_tf.py
Fixes #9053
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9132/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9132",
"html_url": "https://github.com/huggingface/transformers/pull/9132",
"diff_url": "https://github.com/huggingface/transformers/pull/9132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9132.patch",
"merged_at": 1608052349000
} |
https://api.github.com/repos/huggingface/transformers/issues/9131 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9131/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9131/comments | https://api.github.com/repos/huggingface/transformers/issues/9131/events | https://github.com/huggingface/transformers/pull/9131 | 767,813,276 | MDExOlB1bGxSZXF1ZXN0NTQwNDAzNzgw | 9,131 | [Bart] fix bart loss masking | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @patil-suraj "
] | 1,608 | 1,608 | 1,608 | MEMBER | null | # What does this PR do?
Fixes #9123
Bart should replace `-100` tokens (the ignored label index) with the pad token when preparing `decoder_input_ids` from the labels.
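To illustrate the convention (all values here are made up for the example): positions set to -100 in `labels` are ignored by the loss, so they have to be mapped back to the pad token before being shifted into `decoder_input_ids`:
```python
import torch

labels = torch.tensor([[42, 17, 2, -100, -100]])  # -100 = ignored by the loss
pad_token_id = 1  # made-up id, purely for illustration
decoder_input = labels.masked_fill(labels == -100, pad_token_id)
print(decoder_input)  # tensor([[42, 17,  2,  1,  1]]) -- safe to shift right
```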
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9131/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9131",
"html_url": "https://github.com/huggingface/transformers/pull/9131",
"diff_url": "https://github.com/huggingface/transformers/pull/9131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9131.patch",
"merged_at": 1608052637000
} |
https://api.github.com/repos/huggingface/transformers/issues/9130 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9130/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9130/comments | https://api.github.com/repos/huggingface/transformers/issues/9130/events | https://github.com/huggingface/transformers/issues/9130 | 767,791,245 | MDU6SXNzdWU3Njc3OTEyNDU= | 9,130 | Trainer: support iterable datasets for evaluation | {
"login": "marcelgwerder",
"id": 4008557,
"node_id": "MDQ6VXNlcjQwMDg1NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4008557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcelgwerder",
"html_url": "https://github.com/marcelgwerder",
"followers_url": "https://api.github.com/users/marcelgwerder/followers",
"following_url": "https://api.github.com/users/marcelgwerder/following{/other_user}",
"gists_url": "https://api.github.com/users/marcelgwerder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcelgwerder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcelgwerder/subscriptions",
"organizations_url": "https://api.github.com/users/marcelgwerder/orgs",
"repos_url": "https://api.github.com/users/marcelgwerder/repos",
"events_url": "https://api.github.com/users/marcelgwerder/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcelgwerder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is more complex than this as the `Trainer` needs to knows in advance the number of elements in your evaluation dataset, so you need to implement a `__len__` method in your evaluation dataset to have `Trainer` work on it.\r\n\r\nAn iterable dataset might make sense for training since you want to yield \"infinite\" examples and stop at a certain step, it doesn't really makes sense for evaluation where, by definition, you have a finite number of samples.",
"I see, but that also means that you'd have to implement two different datasets for the same data, which is a bit annoying. Why does the trainer need to know the length of the eval dataset?",
"This is for the distributed evaluation to work: we need to initialize the containers for the logits and predictions to the right size and fill them with the data returned by each node.\r\n\r\nYou iterable dataset must be finite, so you can just wrap it like this before sending it to `Trainer`:\r\n```\r\nclass FromIterableDataset:\r\n def __init__(self, iterable_dataset):\r\n self.dataset = list(iterable_dataset)\r\n\r\n def __getitem__(self, i):\r\n return self.dataset[i]\r\n\r\n def __len__(self):\r\n return len(self.dataset)\r\n```",
"Yeah I guess that works for most cases. Although It might make sense to catch the case where one implements `__len__` in an iterable dataset, which might be reasonable depending on how the data is stored. Currently you're first told to implement `__len__` and then the code just fails with above exception. It would probably be better if there was a more meaningful exception regarding the use of iterable datasets for evaluation.",
"Anyways, this seems to be an edge case, thanks for your help!"
] | 1,608 | 1,608 | 1,608 | NONE | null | The trainer seems to support passing iterable datasets as the `train_dataset` (see #5829) but does not support the same for the `eval_dataset`. I have implemented an iterable dataset for training and now I cannot use the same implementation for evaluation. This doesn't make much sense, as evaluation could easily be done using the iterable dataset.
Currently the evaluation fails with the following exception:
```
ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.SequentialSampler object at 0x7f53900c50d0>
```
I suspect the below method should return None in case of an iterable dataset:
https://github.com/huggingface/transformers/blob/ef2d4cd4457a344b633173c14ca7789f18f75b59/src/transformers/trainer.py#L402-L408
Just like it is handled for the `train_dataset`:
https://github.com/huggingface/transformers/blob/ef2d4cd4457a344b633173c14ca7789f18f75b59/src/transformers/trainer.py#L380-L400
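A sketch of the suggested change (the method name `_get_eval_sampler` is assumed from the linked Trainer code; treat the exact body as an assumption mirroring the train-side check):
```python
from typing import Optional
import torch

def _get_eval_sampler(self, eval_dataset) -> Optional[torch.utils.data.Sampler]:
    # DataLoader rejects explicit samplers for IterableDataset, so return None
    # here just like the train-side check does for the training dataset.
    if isinstance(eval_dataset, torch.utils.data.IterableDataset):
        return None
    return torch.utils.data.SequentialSampler(eval_dataset)
```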
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9130/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9130/timeline | completed | null | null |