Dataset columns: repo (string, 1 class); number (int64, 1–25.3k); state (string, 2 values); title (string, 1–487 chars); body (string, 0–234k chars); created_at (string, 19 chars); closed_at (string, 19 chars); comments (string, 0–293k chars)
transformers
9,310
closed
ModuleNotFoundError: No module named 'tokenizations.tokenizations'
```
nlp = en_trf_bertbaseuncased_lg.load()
  File "/usr/lib/python3.9/site-packages/en_trf_bertbaseuncased_lg/__init__.py", line 12, in load
    return load_model_from_init_py(__file__, **overrides)
  File "/usr/lib/python3.9/site-packages/spacy/util.py", line 239, in load_model_from_init_py
    return load_model_from_path(data_path, meta, **overrides)
  File "/usr/lib/python3.9/site-packages/spacy/util.py", line 202, in load_model_from_path
    cls = get_lang_class(lang)
  File "/usr/lib/python3.9/site-packages/spacy/util.py", line 74, in get_lang_class
    if lang in registry.languages:
  File "/usr/lib/python3.9/site-packages/catalogue.py", line 56, in __contains__
    has_entry_point = self.entry_points and self.get_entry_point(name)
  File "/usr/lib/python3.9/site-packages/catalogue.py", line 140, in get_entry_point
    return entry_point.load()
  File "/usr/lib/python3.9/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/usr/lib/python3.9/site-packages/spacy_transformers/__init__.py", line 2, in <module>
    from .pipeline.tok2vec import TransformersTok2Vec  # noqa
  File "/usr/lib/python3.9/site-packages/spacy_transformers/pipeline/__init__.py", line 3, in <module>
    from .wordpiecer import TransformersWordPiecer  # noqa
  File "/usr/lib/python3.9/site-packages/spacy_transformers/pipeline/wordpiecer.py", line 3, in <module>
    from tokenizations import get_alignments
  File "/usr/lib/python3.9/site-packages/tokenizations/__init__.py", line 2, in <module>
    from .tokenizations import (
ModuleNotFoundError: No module named 'tokenizations.tokenizations'
```
12-26-2020 09:13:44
12-26-2020 09:13:44
Sorry, wrong repo
transformers
9,309
closed
Entry-level demo of visual question answering
## Environment info

Is there any entry-level demo of visual question answering? (I am also interested in adding a title for each image later on.) Better with `Trainer` added @sgugger. I followed the example from [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html):

```python
from transformers import LxmertTokenizer, LxmertModel
import torch

tokenizer = LxmertTokenizer.from_pretrained('unc-nlp/lxmert-base-uncased')
model = LxmertModel.from_pretrained('unc-nlp/lxmert-base-uncased')

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
```

which comes up with:

```
File "/home/yezli/miniconda3/lib/python3.8/site-packages/transformers/models/lxmert/modeling_lxmert.py", line 933, in forward
    assert visual_feats is not None, "`visual_feats` cannot be `None`"
AssertionError: `visual_feats` cannot be `None`
```

- `transformers` version:
- Platform: `Ubuntu 16.04.7 LTS`
- Python version: `Python 3.7.0`
- PyTorch version (GPU?): `No. But I am using PyTorch`
- Tensorflow version (GPU?): `No`
- Using GPU in script?: `No`
- Using distributed or parallel set-up in script?: `No`

### Who can help

@airsplay @bryant1410 Trainer @sgugger

## Information

Model I am using (Bert, XLNet ...): [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html)

The problem arises when using:
* [x] the official example scripts: the example from [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html) shown above
* [ ] my own modified scripts

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task
* [x] my own task or dataset: visual question answering, following the example from [Lxmert](https://huggingface.co/transformers/model_doc/lxmert.html)

The code snippet, error message, and stack trace are given above.
12-26-2020 04:49:40
12-26-2020 04:49:40
Hi @yezhengli-Mr9, not sure what you are asking here. By `Trainer` demo, do you mean an example showing how to fine-tune `LXMERT`? If you are looking for how to use `LXMERT`, then this [demo notebook](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing) shows how to use LXMERT for visual QA.

Hi @patil-suraj @patrickvonplaten, thanks a lot, but [`examples/lxmert/`](https://github.com/huggingface/transformers/blob/master/examples/lxmert/) no longer exists, although I am reconstructing some functionality since the [demo notebook](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing) seems quite instructive.
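For reference, a minimal sketch of the visual inputs `LxmertModel` expects. The random tensors below stand in for the Faster R-CNN region features that the demo notebook computes, and the 36 boxes are an illustrative choice (LXMERT's default config uses 2048-dim visual features and 4-dim normalized box positions):

```python
import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("How many dogs are in the picture?", return_tensors="pt")
num_boxes, feat_dim = 36, 2048            # placeholder region features
visual_feats = torch.randn(1, num_boxes, feat_dim)
visual_pos = torch.rand(1, num_boxes, 4)  # normalized box coordinates

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
print(outputs.language_output.shape)      # (1, seq_len, hidden_size)
```

With random features the outputs are meaningless, of course; the point is only the shapes that avoid the `visual_feats cannot be None` assertion.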
transformers
9,308
closed
[GPT2] Correct gradient checkpointing
# What does this PR do?

Previously, it was not possible to train GPT2 with gradient checkpointing and `use_cache=False`. However, `use_cache` should not be set to `True` when training. This PR corrects the behavior so that gradient checkpointing does not require `use_cache=True`. In addition, this PR changes lists to tuples in GPT2 for consistency with other models.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-25-2020 21:09:37
12-25-2020 21:09:37
transformers
9,307
closed
from_pretrained does not load the modified part of model
## Environment info

- `transformers` version:
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help

Model Cards: @julien-c

## Information

Hi,
1) I am observing that if one modifies a model, let's say `T5ForConditionalGeneration`, and then uses `T5ForConditionalGeneration.from_pretrained(...)`, not all components of the model are loaded, meaning that the parts of the model the user has modified are initialized randomly!
2) I observe this from accuracy. Could you tell me how I can check which weights `from_pretrained` is loading? I am a bit lost in the repository. Thanks.

## Expected behavior

If the model has been changed to have more layers, all the trained weights need to be loaded.
12-25-2020 16:05:47
12-25-2020 16:05:47
The documentation describes how `from_pretrained` works for unmodified models, but I cannot see how it works when one modifies the model.

Hey @juliahane, could you please provide a code snippet showcasing the unintended behavior? Thanks!

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
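For readers with the same question: one way to see which weights `from_pretrained` actually loaded is the `output_loading_info=True` flag; a minimal sketch (the checkpoint and model class here are just examples):

```python
from transformers import T5ForConditionalGeneration

# output_loading_info=True additionally returns a dict describing how the
# checkpoint weights matched the model's parameters.
model, loading_info = T5ForConditionalGeneration.from_pretrained(
    "t5-small", output_loading_info=True
)
print(loading_info["missing_keys"])     # parameters that were newly initialized
print(loading_info["unexpected_keys"])  # checkpoint weights with no matching parameter
```

Any layer you added to a modified architecture should show up under `missing_keys`, confirming it was randomly initialized rather than loaded.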
transformers
9,306
closed
comment correction in test_retrieval_rag.py?
Hi, in https://github.com/huggingface/transformers/blob/master/tests/test_retrieval_rag.py#L223 the comments on L223 and L224 are the same; maybe one of them should be "min inner product is reached with ...". But I am not sure which one. Pardon me if it is already correct.
12-25-2020 11:05:46
12-25-2020 11:05:46
Both comments are right.
```python
self.assertEqual(doc_dicts[0]["id"][0], "1")  # max inner product is reached with second doc
```
makes sure the first retrieved document of the first query is the second document of the corpus (it's the one that maximizes the inner product), while
```python
self.assertEqual(doc_dicts[1]["id"][0], "0")  # max inner product is reached with first doc
```
makes sure that the first retrieved document of the second query is the first document of the corpus (it's the one that maximizes the inner product).

To be clearer, the indices in those statements could be written as
```python
query_idx = 0
retrieved_document_idx = 0
expected_id = "0"
self.assertEqual(doc_dicts[query_idx]["id"][retrieved_document_idx], expected_id)
```

Thanks for your reply!
transformers
9,305
closed
[Don't merge] New design proposition for MAPPINGS in "auto" files
This PR would solve the issue https://github.com/huggingface/transformers/issues/9250, but should not be used as a solution. The PR should rather just show how the current design of all the `OrderedDict`s called `MAPPINGS_...` is suboptimal. It's impossible to add two values if both values have the same key. We need to be able to add a tokenizer class to `AutoTokenizers` even if the tokenizer does not have its own unique configuration class. We had a similar problem for Rag, since there are `RagSequenceForGeneration` and `RagTokenForGeneration`, which both should be in the same mapping.

IMO, the only way to prevent "key" conflicts 100% is to use "multi-key" to "value" mappings as follows:

Tokenizer: (PretrainedConfig (the corresponding config class we're using now), str (the tokenizer class as a string, sometimes saved under `config.tokenizer_class`)) -> TokenizerClass
Model: (PretrainedConfig (the corresponding config class we're using now), str (the model type as a string, sometimes saved under `config.model_type`)) -> ModelClass

Some other, less important shortcomings of this design:
- Because we often check with `isinstance` whether a config class is in an OrderedDict, we need to be very careful about the position of the key in the ordered dict, and we even wrote a test for this: https://github.com/huggingface/transformers/blob/21fc676645b1cae7cb9b5835435d57d90f9bc714/tests/test_modeling_auto.py#L221. This added complexity for such a simple feature is quite unnecessary IMO.
- These functions: https://github.com/huggingface/transformers/blob/21fc676645b1cae7cb9b5835435d57d90f9bc714/src/transformers/models/auto/tokenization_auto.py#L249 are more of a hack than a permanent solution IMO.
- We currently don't document those classes. I guess we could, but it's just a mapping.

=> I would propose that we change every `MAPPING_FOR_...` `OrderedDict` to a class, making sure that 100% backward compatibility is kept (except that it's no longer an `OrderedDict`, but a class). We can implement a `__getitem__` that takes inputs of different types (a config for backward compatibility, but maybe also a `str` corresponding to the `tokenizer_class` or `model_type`). In general, it would give us more flexibility and prevent errors such as the one linked to this PR.

A possible design could look like this:

```python
class MappingGenerator:
    def __init__(self, keys_to_values: List[Tuple[PretrainedConfig, str, Any]]):
        # (config class, class name) -> class
        self.tuple_to_class = OrderedDict(
            [((config, name), cls) for config, name, cls in keys_to_values]
        )
        all_configs = [config for config, _, _ in keys_to_values]
        self.duplicated_configs = set(x for x in all_configs if all_configs.count(x) > 1)
        self.config_to_class = OrderedDict([(config, cls) for config, _, cls in keys_to_values])
        # not possible to have key conflicts here
        self.str_to_class = OrderedDict([(name, cls) for _, name, cls in keys_to_values])

    def __getitem__(self, key: Union[PretrainedConfig, str, Tuple[PretrainedConfig, str]]):
        if isinstance(key, str):
            return self.str_to_class[key]
        elif isinstance(key, PretrainedConfig):
            if key in self.duplicated_configs:
                raise ...
            return self.config_to_class[key]
        elif isinstance(key, tuple):
            return self.tuple_to_class[key]
        raise ...


TOKENIZER_MAPPING = MappingGenerator([
    (BertConfig, "BertTokenizer", BertTokenizer),
    (GPT2Config, "GPT2Tokenizer", GPT2Tokenizer),
    ...,
])
```

Keen to hear your thoughts on this @LysandreJik, @sgugger, @julien-c before opening a PR.
12-25-2020 10:12:21
12-25-2020 10:12:21
You're right that the current design is sub-optimal, especially for the tokenizers, since we have introduced tokenizers decoupled from models.

- Having this approach would imply modifying most configuration files on the hub, given that you use the approach
  ```
  (config.class, config.tokenizer_class) -> ...
  ```
  as most model configurations have no tokenizer class defined.
- The `isinstance` should be replaced by `type` imo, which would prevent having such a test.

Overall I'm definitely not against refactoring this part to ensure better compatibility, but let's try to find a way of making sure we don't have to update thousands of configurations on the hub. Maybe adding a `tokenizer_class = XXXTokenizer` field in the configurations would prevent this.

Sorry, I think my explanation wasn't very clear above, so I modified the description. I didn't mean to force configs to have a `tokenizer_class` attribute. The idea was just that `TOKENIZER_MAPPING` should expose a function that allows one to get the correct tokenizer not only by the config but also by the tokenizer class as a string. So the idea is that we could replace the current `TOKENIZER_MAPPING` with a class like the one (now) shown above, and then this class can be used in whatever way is best by `AutoTokenizer`, *e.g.* `AutoTokenizer.from_pretrained(...)` could then call the `TOKENIZER_MAPPING` class above as follows:

```python
if hasattr(config, "tokenizer_class"):
    tokenizer = TOKENIZER_MAPPING[(config, config.tokenizer_class)]
else:
    tokenizer = TOKENIZER_MAPPING[config]
```

I'd need to see a PoC to be sure, but this looks like an interesting idea to me. There are certainly big limitations in the way those AUTO variables are currently structured.

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,304
closed
[run_mlm.py] attention_mask will be set to [1,1,...,1] with DataCollatorForLanguageModeling
`tokenized_datasets` has already been padded when `padding="max_length"`. When we build a dataloader, we use `DataCollatorForLanguageModeling`, which calls `tokenizer.pad` first. `tokenizer.pad` will set `attention_mask` to all 1s because the `input_ids` have already been padded, so I want to know whether the attention mask meets expectations.
12-25-2020 06:45:57
12-25-2020 06:45:57
Sorry, I don't really understand the question here. Could you clarify a bit? In case this is a question about the behavior of `DataCollatorForLanguageModeling`, it would be awesome if you could use the forum: https://discuss.huggingface.co/. Otherwise it would be great if you could provide a code snippet showcasing the unexpected behavior. Thanks!

In run_mlm.py we first use:
```python
def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return tokenizer(
        examples["text"],
        padding=padding,
        truncation=True,
        max_length=data_args.max_seq_length,
        # We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
        # receives the `special_tokens_mask`.
        return_special_tokens_mask=True,
    )
```
If I set `padding="max_length"`, the inputs will be padded. That means the inputs will already be padded, e.g. to [1, 4, 5, 6, 2, 0, 0, ...], and the attention_mask will be set to [1, 1, 1, 1, 1, 0, 0, ...]. So when `DataCollatorForLanguageModeling` is used, the inputs will be padded again (`tokenizer.pad`). The inputs will not change, but the attention_mask will become [1, 1, 1, 1, 1, 1, 1, ...]. If I set `padding=False`, `tokenizer.pad` in `DataCollatorForLanguageModeling` will not pad the inputs to max_length.

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.

Hey @xieyuchen13 - I don't think the `attention_mask` will change from [1,1,1,...,0,0,0] to [1,1,1,...,1,1,1] => could you show me an example that proves otherwise?

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
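A minimal way to check the claim directly (an illustrative sketch; any fast tokenizer behaves the same way here):

```python
# Pad once at tokenization time, then pad again via tokenizer.pad the way
# DataCollatorForLanguageModeling does, and compare the two attention masks.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

enc = tokenizer("a short line", padding="max_length", truncation=True, max_length=16)
print(enc["attention_mask"])  # zeros over the padding positions

batch = tokenizer.pad([enc], return_tensors="pt")  # the second padding pass
print(batch["attention_mask"])  # compare: are the zeros preserved?
```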
transformers
9,303
closed
add translation example
# What does this PR do?

This PR will add a translation example to the repo, as per discussion with @thomwolf.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. @patil-suraj, @sgugger
12-25-2020 06:34:00
12-25-2020 06:34:00
Hi @vasudevgupta7, thanks for adding this! Could you link this notebook in the community notebooks table [here](https://github.com/huggingface/transformers/tree/master/notebooks#community-notebooks) instead of adding it to `/notebooks`?

Done.

Thanks! I re-worded the description a bit, hope you don't mind ;)
transformers
9,302
closed
Fix TF TransfoXL
# What does this PR do?

This PR fixes TransfoXL for graph compliance.
12-24-2020 19:49:54
12-24-2020 19:49:54
transformers
9,301
closed
Fix TF T5
# What does this PR do?

This PR fixes a couple of bugs in T5: one for graph compliance and another one for the `past` output.
12-24-2020 19:04:29
12-24-2020 19:04:29
Already run of course 😉 and I can tell you that they all pass!
transformers
9,300
closed
Fix TF Funnel
# What does this PR do?

This PR fixes Funnel to make it fully graph-compliant. Even though all the slow/quick tests are passing and a few experiments gave similar results, @sgugger I would appreciate it if you thoroughly looked at the changes, to be sure no bugs have been introduced.
12-24-2020 15:53:53
12-24-2020 15:53:53
@LysandreJik feel free to merge if it looks ok for you and if @sgugger approves the last fix on `pooled_hidden`.
transformers
9,299
closed
[Bart doc] Fix outdated statement
# What does this PR do?

Fixes #9298. The Bart docs should be slightly updated. Thanks @forest1988!

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-24-2020 13:28:42
12-24-2020 13:28:42
transformers
9,298
closed
`transformers.models.bart.modeling_bart._prepare_bart_decoder_inputs` seems to be renamed but remains in the document
## Environment info

- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

Bart: @patrickvonplaten

## Information

Model I am using (Bert, XLNet ...): Bart

The problem arises in [model_doc/bart.rst](https://github.com/huggingface/transformers/blob/v4.1.1/docs/source/model_doc/bart.rst#implementation-notes).

## To reproduce

In our code using transformers v3.4.0, we have used:

```python
from transformers.modeling_bart import _prepare_bart_decoder_inputs
```

I tried to rewrite it as:

```python
try:
    # transformers >= v4
    from transformers.models.bart.modeling_bart import _prepare_bart_decoder_inputs
except ModuleNotFoundError:
    # transformers == v3.4.0
    from transformers.modeling_bart import _prepare_bart_decoder_inputs
```

but it seems Bart (modeling_bart) in v4.1.1 doesn't have `_prepare_bart_decoder_inputs` in its implementation. However, [model_doc/bart.rst](https://github.com/huggingface/transformers/blob/v4.1.1/docs/source/model_doc/bart.rst#implementation-notes) says:

> The forward pass of :class:`~transformers.BartModel` will create decoder inputs (using the helper function :func:`transformers.models.bart.modeling_bart._prepare_bart_decoder_inputs`) if they are not passed. This is different than some other modeling APIs.

I think maybe the function was renamed in the refactoring of Bart in #8900. I welcome this refactoring, as I would love to take advantage of Bart (and other Seq2SeqLMs), but I am wondering how I can fix the old code to get the best performance out of the refactored code. Do you have any documentation about how to fix old code to work well with the new version of Bart?

## Expected behavior

Maybe model_doc/bart.rst needs to be updated. I'm sorry if there is already appropriate documentation.
12-24-2020 12:48:00
12-24-2020 12:48:00
Hey @forest1988,

Thanks a lot for your issue - you're 100% correct, the docs need to be updated here - I'll open a PR for this and tag you.

To give some context on why this function doesn't exist anymore:
- We are trying to align the API of all models, which makes it easier for users to switch from one model to the other. No other model had such a function.
- The function was only called internally, so you don't really need to care about the change as long as you only call the public API of Bart, being at the moment the forward pass of `BartModel` and `BartForConditionalGeneration`. All possible behaviors of public API functions should have stayed 1-to-1 the same for Bart.
- In the bullet point in question, it says that the model will create the `decoder_input_ids` if they are not passed. This is a very Bart-specific feature and is really only used in two cases:
  1) You want to do mask-filling for Bart, as shown in the example in this section: https://huggingface.co/transformers/model_doc/bart.html#transformers.BartForConditionalGeneration. In this case you only have to pass the `input_ids` and Bart will correctly output something. All non-Bart seq2seq models yield an error here because they expect the `decoder_input_ids` to be passed as well (which should be the default case IMO). Bart is able to do this task thanks to its rather specific pre-training objective.
  2) (This is the same for all Seq2Seq models.) You pass `labels` and `input_ids` => this is used for training, and in this case Bart (and all other seq2seq models except `EncoderDecoderModel`) shifts the labels to the right to create the `decoder_input_ids` (see the sketch at the end of this thread).

So in short, the function doesn't exist anymore in the new code. If you adapted "old" Bart code to your specific needs and are now stuck trying to "port" it to the new Bart code, I'd suggest writing an integration test using your "old" Bart code and then looking closely at what was done in #8900 to adapt your code analogously. If you're completely stuck, feel free to post an issue here and tag me - I'll help you then :-)

@patrickvonplaten Thank you for your quick comments and for solving the issue! I've checked PR #9299 and would like to say thank you for tagging me there. Some time ago, we tried to use various Seq2SeqLMs but had trouble using them in a unified way. The update aligning the APIs is very helpful! I have read your comments and the linked documents carefully, will write an integration test, and will look closely at #8900 to adapt my code analogously. If I'm still completely stuck, I would like to take you up on your word and ask for your help. Thank you again!
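For illustration, the label shifting mentioned in point 2) works roughly like this (a minimal sketch of the idea, not the library's exact helper):

```python
import torch

def shift_right(labels: torch.Tensor, decoder_start_token_id: int) -> torch.Tensor:
    # Prepend the decoder start token and drop the last position, so that
    # decoder_input_ids[:, t] is what the decoder sees when predicting labels[:, t].
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    return decoder_input_ids

labels = torch.tensor([[42, 7, 5, 2]])
print(shift_right(labels, decoder_start_token_id=0))  # tensor([[ 0, 42,  7,  5]])
```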
transformers
9,297
closed
fix typo in modeling_encoder_decoder.py
Fixed typo.

# What does this PR do?

Fixes a typo.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-24-2020 12:19:56
12-24-2020 12:19:56
transformers
9,296
closed
[bert_generation] enable cache by default
# What does this PR do?

- add `use_cache` to `BertGenerationConfig`, defaulting to `True`
- in `BertGenerationEncoder`, if `use_cache` is `None` (the default behaviour), set it from the `config`

This will enable caching by default at inference time for `BertGenerationEncoder`.
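For context, the tri-state `use_cache` handling described above usually follows this pattern (a minimal sketch of the idiom, not a verbatim excerpt from the model code):

```python
def resolve_use_cache(use_cache, config_use_cache: bool) -> bool:
    # An explicit caller choice wins; otherwise fall back to the config default.
    return use_cache if use_cache is not None else config_use_cache

assert resolve_use_cache(None, True) is True    # config default kicks in
assert resolve_use_cache(False, True) is False  # explicit value is respected
```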
12-24-2020 12:05:27
12-24-2020 12:05:27
transformers
9,295
open
Good Second Issue: T5 FP16 in Pytorch
# 🚀 Feature request

This "Good second issue" should revisit some of the problems we were having with FP16 for `T5ForConditionalGeneration` (https://github.com/huggingface/transformers/issues/4586) and help make T5 compatible with fp16.

**_Requirements:_**
- use transformers master
- use the newest pytorch version
- have access to a GPU

**_Context:_**

To better explain the context, let's define the three different pre-trained T5 model types we have:

- **T5v1** (original T5): corresponds to these checkpoints: `t5-small`, `t5-base`, `t5-large`, `t5-3b`, `t5-11b`
- **T5v1_1** (improved T5): corresponds to these checkpoints: `google/t5-v1_1-small`, `google/t5-v1_1-base`, `google/t5-v1_1-large`, `google/t5-v1_1-xl`, `google/t5-v1_1-xxl`. **T5v1_1** has a slightly different architecture than **T5v1**. More info on the differences can be found here: https://github.com/huggingface/transformers/issues/6285
- **MT5** (multi-lingual T5): identical in architecture to **T5v1_1**, but with different pre-trained weights and a much larger word embedding matrix.

As shown in issue https://github.com/huggingface/transformers/issues/4586, training **T5v1** in fp16 mode led in the past to numerical overflow in the `T5LayerFF` forward pass: https://github.com/huggingface/transformers/blob/6189ae99603bd5dc14c5631f1b4562f78e24d575/src/transformers/models/t5/modeling_t5.py#L279. At the time of that issue, **T5v1** had been added with a small bug that led to slightly wrong outputs, which was only fixed by this PR: https://github.com/huggingface/transformers/pull/8518. Also, there are now new T5 checkpoints, notably the **T5v1_1** and **MT5** checkpoints, where it would be very interesting to see whether fp16 can work.

**_Feature Request_**

For this feature request, we should test two scenarios:

1) Inference: For each T5 model type we should test when the models break during inference. This can be as easy as testing the following script for a bunch of different checkpoints on different `input_str`:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch

checkpoint = "t5-small"  # "google/mt5-small", "google/t5-v1_1-small"

model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint).to('cuda')
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

input_str = "Hello there. This is the input."  # here it would be better to test much larger inputs
input_ids = tokenizer(input_str, return_tensors="pt").input_ids.to('cuda')

# FP32
output_fp32 = model.generate(input_ids)

# FP16
model.half()
output_fp16 = model.generate(input_ids)

if output_fp32.tolist() == output_fp16.tolist():
    print("SUCCESS: Output is equal!")
else:
    print("Output is different!")
    print("FP32", output_fp32)
    print("FP16", output_fp16)
```

2) Training (the more interesting part): This is probably more important and will require more time/skill. In order to check how T5 does in FP16 training, I'd recommend using the newly added `Seq2SeqTrainer`: https://github.com/huggingface/transformers/blob/6189ae99603bd5dc14c5631f1b4562f78e24d575/src/transformers/trainer_seq2seq.py#L38. I would recommend training on a summarization task, such as CNN/DailyMail. One could closely follow this notebook: https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing, but replacing Bert2Bert with the different T5 models.

Ideally, different "fp16 backends" should be tested (https://github.com/huggingface/transformers/blob/6189ae99603bd5dc14c5631f1b4562f78e24d575/src/transformers/training_args.py#L216), and one should try to see whether hacks as proposed in https://github.com/huggingface/transformers/issues/4586#issuecomment-748336815 can solve the problem. It would be very interesting to see whether the error happens only for **T5v1** or also for **T5v1_1** and **MT5**, and at what point. For each type it would be great to test "small", "base" and, if possible, even "large". Ideally, one should first create a short summarization fine-tuning script (happy to help here) and then run a bunch of different experiments with different fp16 backends and different models.

**_Possible Outcome_**

The results of those experiments should be documented here or, even better, on https://discuss.huggingface.co/. Ideally, a solution to the problem is found and one could publish a nice blog post explaining how to effectively train T5.

## Motivation

T5 is one of the most widely used models in Transformers at the moment, so more results on this issue would be extremely useful for the community. In addition, this issue can be a great opportunity to learn more about the limits of fp16 and why some models still require full fp32 support (or at least until bfloat16 is better supported in torch). This is not an easy issue to tackle, but an extremely important one.

## Your contribution

I'm happy to help along the way, starting with making a nice T5 summarization training pipeline that lets one easily test different models and fp16 backends.
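One way to locate where activations first overflow in such experiments is to register forward hooks on every submodule; a minimal sketch (an illustrative helper, assuming any `nn.Module`-based model such as `T5ForConditionalGeneration`):

```python
import torch
from torch import nn

def attach_overflow_hooks(model: nn.Module):
    # Print the name of every module whose output contains inf/nan
    # during a forward pass, starting with the earliest one.
    def make_hook(name):
        def hook(module, inputs, output):
            outs = output if isinstance(output, tuple) else (output,)
            for t in outs:
                if torch.is_tensor(t) and t.is_floating_point() and not torch.isfinite(t).all():
                    print(f"non-finite values in the output of: {name}")
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))
```

After calling `attach_overflow_hooks(model)`, a single fp16 forward pass reports which layers produce `inf`/`nan` first.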
12-24-2020 10:25:43
12-24-2020 10:25:43
here's what I found `t5-small` is the only T5 model that works in fp16 at the moment. The rest of the models produce `nan` loss/logits. for all the models and versions (v1, v1.1, mT5) at some point we get `inf` values in `hidden_states` after applying the final linear layer (`wo`) in `T5DenseReluDense` and `T5DenseGatedGeluDense`. https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L248-L278 which results in `nan` values in `T5LayerNorm`. Also for `t5-large`, `t5-v1_1-base`, `t5-v1_1-large`, there are `inf` values in the output of `T5LayerSelfAttention` and `T5LayerCrossAttention`, specifically where we add the attn output with the `hidden_states` https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L548 https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/t5/modeling_t5.py#L584 This happens during both training and inference, to reproduce ```python model = T5ForConditionalGeneration.from_pretrained("t5-base").to("cuda:0").eval() model.half() tokenizer = T5Tokenizer.from_pretrained("t5-base") ARTICLE = """summarize: Marseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that "so far no videos were used in the crash investigation." He added, "A person who has such a video needs to immediately give it to the investigators." Robin's comments follow claims by two magazines, German daily Bild and French Paris Match, of a cell phone video showing the harrowing final seconds from on board Germanwings Flight 9525 as it crashed into the French Alps. All 150 on board were killed. Paris Match and Bild reported that the video was recovered from a phone at the wreckage site. The two publications described the supposed video, but did not post it on their websites. The publications said that they watched the video, which was found by a source close to the investigation. "One can hear cries of 'My God' in several languages," Paris Match reported. "Metallic banging can also be heard more than three times, perhaps of the pilot trying to open the cockpit door with a heavy object. Towards the end, after a heavy shake, stronger than the others, the screaming intensifies. Then nothing." "It is a very disturbing scene," said Julian Reichelt, editor-in-chief of Bild online. An official with France's accident investigation agency, the BEA, said the agency is not aware of any such video. Lt. Col. Jean-Marc Menichini, a French Gendarmerie spokesman in charge of communications on rescue efforts around the Germanwings crash site, told CNN that the reports were "completely wrong" and "unwarranted." Cell phones have been collected at the site, he said, but that they "hadn't been exploited yet." Menichini said he believed the cell phones would need to be sent to the Criminal Research Institute in Rosny sous-Bois, near Paris, in order to be analyzed by specialized technicians working hand-in-hand with investigators. But none of the cell phones found so far have been sent to the institute, Menichini said. Asked whether staff involved in the search could have leaked a memory card to the media, Menichini answered with a categorical "no." 
Reichelt told "Erin Burnett: Outfront" that he had watched the video and stood by the report, saying Bild and Paris Match are "very confident" that the clip is real. He noted that investigators only revealed they'd recovered cell phones from the crash site after Bild and Paris Match published their reports. "That is something we did not know before. ... Overall we can say many things of the investigation weren't revealed by the investigation at the beginning," he said. What was mental state of Germanwings co-pilot? German airline Lufthansa confirmed Tuesday that co-pilot Andreas Lubitz had battled depression years before he took the controls of Germanwings Flight 9525, which he's accused of deliberately crashing last week in the French Alps. Lubitz told his Lufthansa flight training school in 2009 that he had a "previous episode of severe depression," the airline said Tuesday. Email correspondence between Lubitz and the school discovered in an internal investigation, Lufthansa said, included medical documents he submitted in connection with resuming his flight training. The announcement indicates that Lufthansa, the parent company of Germanwings, knew of Lubitz's battle with depression, allowed him to continue training and ultimately put him in the cockpit. Lufthansa, whose CEO Carsten Spohr previously said Lubitz was 100% fit to fly, described its statement Tuesday as a "swift and seamless clarification" and said it was sharing the information and documents -- including training and medical records -- with public prosecutors. Spohr traveled to the crash site Wednesday, where recovery teams have been working for the past week to recover human remains and plane debris scattered across a steep mountainside. He saw the crisis center set up in Seyne-les-Alpes, laid a wreath in the village of Le Vernet, closer to the crash site, where grieving families have left flowers at a simple stone memorial. Menichini told CNN late Tuesday that no visible human remains were left at the site but recovery teams would keep searching. French President Francois Hollande, speaking Tuesday, said that it should be possible to identify all the victims using DNA analysis by the end of the week, sooner than authorities had previously suggested. In the meantime, the recovery of the victims' personal belongings will start Wednesday, Menichini said. Among those personal belongings could be more cell phones belonging to the 144 passengers and six crew on board. Check out the latest from our correspondents . The details about Lubitz's correspondence with the flight school during his training were among several developments as investigators continued to delve into what caused the crash and Lubitz's possible motive for downing the jet. A Lufthansa spokesperson told CNN on Tuesday that Lubitz had a valid medical certificate, had passed all his examinations and "held all the licenses required." Earlier, a spokesman for the prosecutor's office in Dusseldorf, Christoph Kumpa, said medical records reveal Lubitz suffered from suicidal tendencies at some point before his aviation career and underwent psychotherapy before he got his pilot's license. Kumpa emphasized there's no evidence suggesting Lubitz was suicidal or acting aggressively before the crash. Investigators are looking into whether Lubitz feared his medical condition would cause him to lose his pilot's license, a European government official briefed on the investigation told CNN on Tuesday. 
While flying was "a big part of his life," the source said, it's only one theory being considered. Another source, a law enforcement official briefed on the investigation, also told CNN that authorities believe the primary motive for Lubitz to bring down the plane was that he feared he would not be allowed to fly because of his medical problems. Lubitz's girlfriend told investigators he had seen an eye doctor and a neuropsychologist, both of whom deemed him unfit to work recently and concluded he had psychological issues, the European government official said. But no matter what details emerge about his previous mental health struggles, there's more to the story, said Brian Russell, a forensic psychologist. "Psychology can explain why somebody would turn rage inward on themselves about the fact that maybe they weren't going to keep doing their job and they're upset about that and so they're suicidal," he said. "But there is no mental illness that explains why somebody then feels entitled to also take that rage and turn it outward on 149 other people who had nothing to do with the person's problems." Germanwings crash compensation: What we know. Who was the captain of Germanwings Flight 9525? CNN's Margot Haddad reported from Marseille and Pamela Brown from Dusseldorf, while Laura Smith-Spark wrote from London. CNN's Frederik Pleitgen, Pamela Boykoff, Antonia Mortensen, Sandrine Amiel, and Anna-Maja Rappard contributed to this report.""" inputs = tokenizer(ARTICLE, max_length=512, truncation=True, return_tensors="pt").to("cuda:0") out = model(**inputs, decoder_input_ids=torch.tensor([[tokenizer.pad_token_id]]).to("cuda:0")) torch.isnan(out.logits).any() # => True ``` ## Proposed fix To avoid `inf` values we could clamp the `hidden_states` to the max values for the current data type if there are `inf` in it. i.e ```python if torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) ``` we need to add this after self attn, cross-attn, and the feed-forward layer which is where the `inf` values occur. This works for both `apex` and `amp` To verify this fix, I trained `t5-base`, `t5-v1_1-base` and `t5-v1_1-small` on `cnn/dm` for 10k steps (1.11 epochs) Here's the training command, to run this clone [this fork](https://github.com/patil-suraj/transformers) and check out the `fix-t5-fp16` branch. navigate to `examples/seq2seq` dir, follow the instructions in the readme to download `cnn_dm` and dataset, and then run the following command ```bash export M=google/t5-v1_1-base export OUT_DIR=t5-v1_1-base-cnn-fp16 export DATA_DIR=cnn_dm python finetune_trainer.py \ --model_name_or_path $M \ --data_dir $DATA_DIR \ --output_dir $OUT_DIR --overwrite_output_dir \ --max_steps=10000 \ --gradient_accumulation_steps=8 \ --learning_rate=1e-4 \ --per_device_train_batch_size=4 \ --n_val 500 \ --max_target_length=56 --val_max_target_length=128 \ --fp16 --fp16_backend apex \ --do_train --do_eval --evaluation_strategy steps \ --logging_steps=100 --logging_first_step --eval_steps=2500 --save_steps=2500 --save_total_limit=2 \ --sortish_sampler \ ``` for evaluation ```bash python run_eval.py \ t5-v1_1-base-cnn-fp16 cnn_dm/test.source hypothesis.txt \ --reference_path cnn_dm/test.target \ --score_path metrics.json \ --device cuda:0 \ --prefix summarize: \ --bs 16 \ --fp16 \ ``` and got the following metrics (ROUGE2) 1. for `t5-base`: 19.2804 2. 
for `t5-v1.1-base`: 18.4316

(Note that the score for `t5-base` is higher because it's already pre-trained on `cnn/dm`.)

To compare this, I evaluated the pre-trained `t5-base` in both `fp32` and `fp16`, which gave the following results:
1. `fp16`: 18.3681
2. `fp32`: 18.394

So the results are close enough.

To verify the fix for `t5-large`, I evaluated the pre-trained `t5-large` in `fp32` and `fp16` (use the same command above to evaluate `t5-large`) and got the following results:
1. `fp16`: 19.2734
2. `fp32`: 19.2342

Surprisingly, rouge2 is slightly better in `fp16`.

So with the above fix, the following model types now work in `fp16` (opt level `O1`), and give a decent speed-up :)
- **T5v1**: `t5-small`, `t5-base`, `t5-large`
- **T5v1_1**: `google/t5-v1_1-small`, `google/t5-v1_1-base`
- **MT5**: `google/mt5-small`, `google/mt5-base`

`google/t5-v1_1-large` and `google/mt5-large` should also work; I will confirm after running a few experiments.

One interesting observation: for inference, a `t5-base` fine-tuned with `fp16` and evaluated in `fp32` is faster than the pre-trained `t5-base` evaluated in `fp16`. See this [colab](https://colab.research.google.com/drive/1UaMBsWp3e1Qf-fYKxXmtulsvPXViKa72?usp=sharing).

**Update**: `google/t5-v1_1-large` still gives `nan` loss after about 200 steps.

Great work! We should also share those results on the forum: https://discuss.huggingface.co/ :-)

Hi @exelents. To answer your question: as mentioned above, these changes will enable fp16 for all small and base versions with `apex O1` and native `amp`. For the large models, I only tested inference, and it works. Right now I'm training large models and will report the results here. DeepSpeed handles its own fp16 and I don't know all the details about it, so I won't be able to help there at the moment. @stas00 might have some ideas, as he's working with deepspeed. To sum up, this fix works with `apex O1` and `native amp` with `Seq2SeqTrainer` for training, and with `.half()` for inference.

> DeepSpeed handles its own fp16 and I don't know all the details about it, so I won't be able to help there at the moment. @stas00 might have some ideas, as he's working with deepspeed.

I would like the DeepSpeed integration to be merged, and then anybody can start experimenting and seeing what else might need to be tweaked. To start with, I've been primarily focusing on getting training/eval just working. The next stage would be using it and tuning it up.

Hi @patil-suraj, it seems like huggingface still hasn't fixed the FP16 problem in MT5-large or MT5-xl. Do you or anyone else have any plans on it?

Hey @mxa4646, T5 was never made to be fully compatible with FP16; it was trained using bfloat16, which has a different range than PyTorch's fp16. There is a good chance, though, that training T5 with deepspeed and fp16 will work!

Hi, I am training mt5-small with deepspeed, with fp16, and I always get nan; so far I could not manage to make it work. Do you mind sharing how you set the parameters to make it work? I am having a hard time with this and kindly appreciate your help @patrickvonplaten

> T5 was never made to be fully compatible with FP16, it was trained using bfloat16,

Thank you for this insight, @patrickvonplaten - I didn't know that! I was reading up on bfloat16 for a related issue (https://github.com/huggingface/transformers/issues/10816), and it looks like the main issue is that whenever one does an aggregate operation on big numbers in bfloat16 or fp16, the accumulation needs to be in fp32.
So, for example, the fix applied here: https://github.com/huggingface/transformers/pull/10815 - perhaps it is possible to identify such operations and change them to `some_torch_operator(..., dtype=torch.float32)`, so that most of the math will still be fp16 but there will be no overflow. It won't impact the normal fp32 logic, as the input would already be of the same type. And this operation doesn't take much extra memory (other than doubling the size of the resulting variable).

But here it sounds like the problem is different: a bfloat16 value may not convert to the same value in fp16. I wonder if someone has tried to convert the weights and compare the difference. Perhaps it's enough to take the models and finetune them on the same data but in mixed precision, and perhaps that would rectify their level of precision.

I tried T5-large in fp16 and it is slower, which is really strange. With everything else the same, for the same test data I get 5.62 sec with fp32 and 6.95 sec with fp16. However, fp16 uses almost 50% less memory.

Has this model been implemented for PyTorch yet?
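A toy illustration of the fp32-accumulation point above (the values are illustrative; `torch.sum` with an explicit `dtype` is standard PyTorch):

```python
import torch

# 4096 * 256 = 1,048,576, far above fp16's maximum of ~65,504.
x = torch.full((4096,), 256.0, dtype=torch.float16)

print(torch.sum(x))                       # fp16 result overflows to inf
print(torch.sum(x, dtype=torch.float32))  # accumulate in fp32: tensor(1048576.)
```

The tensor itself stays fp16 in memory; only the reduction is carried out (and returned) in fp32.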
transformers
9,294
closed
Fix TF input for np.ndarray
# What does this PR do? This PR allows the `np.ndarray` datatype as an input to the models. # Fixes #9248
12-24-2020 10:17:43
12-24-2020 10:17:43
I commented on the corresponding issue, I don't fully understand what's going on there in the error<|||||>This is on purpose because all the methods of a Keras Model accept `np.ndarray` as input. You can check for example `fit`, `predict` or `evaluate` here https://www.tensorflow.org/api_docs/python/tf/keras/Model. They all take a numpy array as a possible input<|||||>Okay, I see. I still don't think we should provide this feature just because keras has some automatic conversion internally. Is there a use case where one cannot forward a TF tensor and has to forward an `np.ndarray`? The general philosophy of the lib is to "not add too many magic functions", and allowing `np.ndarray`s as inputs for TF seems like opening the door for lots of future issues to me. Let's see what @sgugger @LysandreJik think about it<|||||>On my side I would prefer to keep as much compliance as possible with TF. But if people are not confident because of this Keras magic, I'm ok not to provide it :)<|||||>The TF models do not accept numpy array inputs, so this would allow bad inputs to be passed to TF models. I think we should stick with inputs accepted by TF models only.<|||||>The TF models do accept numpy arrays as inputs (if we add the `np.ndarray` type among the allowed ones). As an example: ```python
from transformers import TFBertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer("Hello", return_tensors="np")

model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
model(inputs)
``` Gives: ``` TFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[0.08065815, 0.58226204]], dtype=float32)>, hidden_states=None, attentions=None) ``` Keras layers/models are by default compliant with numpy arrays.<|||||>Ah my bad, I tried but forgot to check out the PR before :facepalm: If TF models do accept those inputs, then I have no strong objection.<|||||>@LysandreJik you mean adding a test in `test_modeling_tf_common` with numpy arrays as input for each model?<|||||>Yes, for example!<|||||>There is now a new test to make sure that the models can be properly executed with numpy inputs.<|||||>Does it look ok to be merged?
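For reference, a rough sketch of what such an equivalence test could look like; the model choice and tolerance here are illustrative, not the actual test added in the PR:

```python
import numpy as np
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = TFBertModel.from_pretrained("bert-base-cased")

np_inputs = tokenizer("Hello", return_tensors="np")  # dict of np.ndarray
tf_inputs = {name: tf.constant(array) for name, array in np_inputs.items()}

np_output = model(np_inputs).last_hidden_state
tf_output = model(tf_inputs).last_hidden_state
# numpy inputs should give the same result as tf.Tensor inputs
assert np.allclose(np_output.numpy(), tf_output.numpy(), atol=1e-5)
```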
transformers
9,293
closed
Update tokenization_utils_base.py
Fixes a missing "s" typo in the error message, which pointed to an invalid argument name.
12-24-2020 09:32:16
12-24-2020 09:32:16
Thanks!
transformers
9,292
closed
Fix TF Flaubert
# What does this PR do? This PR fixes Flaubert so that it can be executed in graph mode.
12-24-2020 09:30:03
12-24-2020 09:30:03
transformers
9,291
closed
Fix TF CTRL
# What does this PR do? This PR fixes the model inputs in TF CTRL.
12-24-2020 09:24:44
12-24-2020 09:24:44
transformers
9,290
closed
Problem converting slow tokenizer to fast: token out of vocabulary
When I try to use a [Dutch RoBERTa model](https://huggingface.co/pdelobelle/robbert-v2-dutch-base/tree/main#how-to-use) as suggested, the library tries to convert the old (slow) tokenizer to the fast one. However, this leads to issues. (I can just keep the slow one, but I need to use the offset and word_ids functionality which is only available in the fast tokenizers.) ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base") ``` Error trace: ``` File "C:/dev/python/jasper-tok2vec/main.py", line 10, in main tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base") File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 378, in from_pretrained return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\tokenization_utils_base.py", line 1804, in from_pretrained return cls._from_pretrained( File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\tokenization_utils_base.py", line 1877, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\models\roberta\tokenization_roberta_fast.py", line 160, in __init__ super().__init__( File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\models\gpt2\tokenization_gpt2_fast.py", line 133, in __init__ super().__init__( File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\tokenization_utils_fast.py", line 89, in __init__ fast_tokenizer = convert_slow_tokenizer(slow_tokenizer) File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\convert_slow_tokenizer.py", line 642, in convert_slow_tokenizer return converter_class(transformer_tokenizer).converted() File "C:\Users\bramv\.virtualenvs\jasper-tok2vec-4kc9ajCV\lib\site-packages\transformers\convert_slow_tokenizer.py", line 262, in converted BPE( Exception: Error while initializing BPE: Token `Ċ` out of vocabulary ``` ### Environment - `transformers` version: 4.1.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.2 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): not installed (NA) ### Who can help @mfuntowicz
12-24-2020 09:00:53
12-24-2020 09:00:53
That looks like a tough one... we might need help from @n1t0 here<|||||>The files for this model can be found [here](https://huggingface.co/pdelobelle/robbert-v2-dutch-base/tree/main). In `merges.txt` there is the merge rule `ĉ Ċ`. That means that the symbol `Ċ` should be present in `vocab.json`. It is not, so the error does kind of make sense. @iPieter are these the correct files? In the paper you mention: > We limited the vocabulary to 40k words, which is 10k words less than RobBERT v1, due to additional tokens including a non-negligible number of Unicode tokens that are not used in Dutch. There are 39982 words in the vocabulary. Is it possible that some of the symbols/tokens are missing? <|||||>@schelv Thanks for looking into the issue. Those files should be correct; there are indeed 39982 tokens. I am using the same files internally without any issues on the old tokenizer (i.e. correct behaviour and sensible predictions), for which the files were specifically created. I also looked into this error before, but cannot find an easy fix. There are a few other merge rules that are also conflicting. My suspicion is that this stems from the translation of the tokenizer's files from Fairseq to HF. This was done a year ago, so the details might be a bit fuzzy. The problem was that the Fairseq library, where we trained RobBERT, used the vocab and merges files, but also generated an additional file (`dict.txt`) that was used to count the number of occurrences of each token (which is ok, not the issue) and also orders the tokens. These new, ordered positions were then used in Fairseq, while HF uses the id's from the vocab.json file. So this means there is an additional dictionary lookup. To fix this behaviour in HF transformers, I created a script to merge the vocab.json with the dict.txt. Otherwise, the token id's from unrelated tokens would be used for the embedding layer and the MLM task, giving garbage output. I will investigate this translation step again, but I'm confused by the fact that the behaviour is correct with the slow tokenizer. <|||||>Thanks for the quick answer! Just checking if I understand you correctly: Fairseq uses token id's that are based on the token occurrence count. This information is stored in dict.txt. The token id's of the original vocab.json were updated with the information from dict.txt? Did this create the current vocab.json that is loaded by the transformers library? If you upload the original files somewhere I can also take a look at them.🙂<|||||>Yes, that's exactly right. The conversion script is [here](https://github.com/iPieter/RobBERT/blob/340bd9d87ef362462fccf8f44e7740c7dfd1d865/src/convert_roberta_dict.py) and [this is the unit test](https://github.com/iPieter/RobBERT/blob/340bd9d87ef362462fccf8f44e7740c7dfd1d865/tests/test_convert_roberta_dict.py) that missed this case. The original files are also downloadable from our github release [here](https://github.com/iPieter/RobBERT/releases/tag/v2.0), but you have to download the entire model. However, when writing the previous response, I had an insight into what the issue might be. So I'll try to debug and report the results here. _Long report ahead, TLDR: it now works_ 😃 There might be tokens in the `vocab.json` that was generated by the HuggingFace tokenizer library that are not found by the Fairseq tokenizer, and thus they don't occur in the `dict.txt`.
After some debugging, I found these tokens (the special tokens are handled later): ```python
00:'Á'
01:'÷'
02:'À'
03:'þ'
04:'ø'
05:'ÿ'
06:'ú'
07:'ö'
08:'ĉĊ'
09:'</s>'
10:'č'
11:'ĠTheCompany'
12:'û'
13:'ü'
14:'<unk>'
15:'ý'
16:'õ'
17:'<pad>'
18:'Ċ'
19:'<s>'
20:'<mask>'
21:'ù'
``` So these tokens are what is causing the fast tokenizer to complain, since they appear in the `vocab.json` set and not in the `dict.txt` set. Ignoring the special tokens (`<unk>`, `<s>`, `</s>` and `<pad>`), this brings the latest vocab id to 39996, not yet 40k. So there is a second bug in my conversion script. The second bug has to do with the fact that Fairseq adds 2 custom tokens by default that I didn't remove. That's not a big deal, but they do affect the vocab length, so let's be totally correct and add those two tokens and the mask token (since `robbert-v2-dutch-base` is an MLM model) as well: <img width="1019" alt="image" src="https://user-images.githubusercontent.com/6965756/109134694-9a9ff880-7756-11eb-9734-f678d6dc8845.png"> Time for a sanity check: ```python
[{'sequence': 'Er staat een boom in mijn tuin.', 'score': 0.16917602717876434, 'token': 2600, 'token_str': ' boom'},
{'sequence': 'Er staat een bankje in mijn tuin.', 'score': 0.08176644891500473, 'token': 21620, 'token_str': ' bankje'},
{'sequence': 'Er staat een schutting in mijn tuin.', 'score': 0.0384209081530571, 'token': 15000, 'token_str': ' schutting'},
{'sequence': 'Er staat een vijver in mijn tuin.', 'score': 0.038086555898189545, 'token': 8217, 'token_str': ' vijver'},
{'sequence': 'Er staat een plant in mijn tuin.', 'score': 0.03249552100896835, 'token': 2721, 'token_str': ' plant'}]
``` The vocab.json in `robbert-v2-dutch-base` is updated, so this issue can be closed. <|||||>Great, thanks a lot for the investigation @iPieter! Now I can use the fast tokenizers in all their glory.<|||||>Thanks from me as well! Looking forward to using it!
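For anyone debugging a similar mismatch, a rough sketch of the set-difference check used above, assuming `vocab.json` maps token to id and each `dict.txt` line starts with a token followed by its count:

```python
import json

with open("vocab.json", encoding="utf-8") as f:
    vocab_tokens = set(json.load(f))  # keys of the token -> id mapping

with open("dict.txt", encoding="utf-8") as f:
    dict_tokens = {line.split(" ", 1)[0] for line in f if line.strip()}

missing = vocab_tokens - dict_tokens  # present in vocab.json, unseen by fairseq
print(len(missing), "tokens in vocab.json but not in dict.txt:")
print(sorted(missing))
```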
transformers
9,289
closed
Fix typo in file_utils.py
# What does this PR do? Fix typo of `add_code_sample_docstrings` in `file_utils.py` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-24-2020 06:38:53
12-24-2020 06:38:53
transformers
9,288
closed
[doc] How To Request Support document stab
As discussed, it'd be great to have a clear document with guidelines on how to create great Issues that are easy to understand, reproduce and resolve. I wrote this stab of a document to get things started. Please feel free to edit it further to your satisfaction. I'm not attached to any part of it, just brain-dumping what was coming based on my experience. So feel free to add, remove, reformat, etc. Ideally, commit your edits directly into this branch, or second best via suggestions. I don't need to be the middleman. Please tag others that you think may want to contribute to this document. If this is successful, perhaps a similar document will be needed for PRs - or perhaps down the road it will be a single document, as there is a lot of overlap between writing a good Issue and a good PR. But let's start simple. @sgugger, @patrickvonplaten, @LysandreJik
12-24-2020 04:16:06
12-24-2020 04:16:06
I really like the idea of adding such a document. Two general things: 1) I think we should make this document a bit more positive and not a "mandatory read" before posting an issue. IMO, in general we are (or should be) happy about all issues. Even if the issue is very badly formulated, it gives us a good signal on how the users work with the library and which features are more and which are less used. 2) I'd make a clearer distinction between what should be in the forum and what should be an issue. I've probably already answered ~30 times on issues that the user should please redirect the question to the forum. I'd like to make a clearer distinction here: the issues should ideally only be used for bug reports. **_Forum:_** All "please explain" questions or objectively very user-specific feature requests should land in the forum. IMO those should never land in the issues. What I mean by that are *e.g.* the following kinds of issues: i. "I would like to use a BertModel within a RL-Agent for a customer support service. How can I use a `BertForMaskedLM` in my `ChatBotModel`?" ii. "Could you please explain why T5 has no positional embedding matrix under `T5Model`?" iii. "How should I set my generation parameters for translation?" iv. "How to train T5 on De->En translation?" => all these kinds of questions do not belong in the issues IMO. None of those issues hint at a bug in Transformers and they definitely have a better place in the forum IMO. But, again, we **do** want people to ask exactly these questions and I'm more than happy to answer all of them (maybe i. a bit less). Not sure what @thomwolf @LysandreJik @sgugger think here. **_Issues:_** Everything which hints at a bug should be opened as an issue in Transformers. Here again, I'd like to encourage people to open issues, as I'm much happier with users posting an objectively badly written issue than with users discovering an issue but being afraid to post it in Transformers. Having said this, I really like the points you've written down so far @stas00! One thing I'd like to add (as one of the first points) is that users should google (or whatever SE) their issue before opening the exact same one (yes, using a SE with "your issue here" + "transformers" + "huggingface" often gives better results than searching on github itself). I often just link a new issue to an already answered one (which is also not that bad since it shows us again which parts of Transformers are heavily used). And I think in some cases it is fine if the user posts a link to a colab, if its bug absolutely requires a big dataset. In general, I think such a document can save us a lot of time because we can just link to it on issues that are badly written and, as a consequence, are usually just ignored by us. I think the author of a "badly" written issue can then better understand why we sometimes stop answering. <|||||>Great suggestions, @patrickvonplaten - Thank you - I hope I integrated them well. I think at this point we are in the gathering stage - so bring on the ideas and points you feel are important. When this stage is done we will do a final edit so that the document feels most welcoming to the users.<|||||>All comments/suggestions you have kindly offered so far have been addressed. Over to you!<|||||>> After applying Patrick's suggestion, I think this document is in excellent shape. One nit I would have is that it's written in perfect English. We have a lot of users that are not native speakers, and possibly won't understand some parts of it.
> Then again, I don't think we should reduce the quality of this document, but let's think about what we can do in general to be friendly to non-English speaking users/contributors. Oh, thank you for the compliment, as I'm a very non-native English speaker ;) But I totally hear what you mean. Perhaps the simplest thing to do is to add a note that if someone struggles with understanding anything in the document, they can ask us to make it easier to understand? That way it doesn't have to be imperfectly perfect from the get-go.<|||||>Merging, thanks a lot @stas00!<|||||>Copied this into [this forum post](https://discuss.huggingface.co/t/how-to-request-support/3128) and added a link to it in the document in [this commit](https://github.com/huggingface/transformers/commit/6009668c631aa5773c66aa30c6bfd9c191e2a6be).
transformers
9,287
closed
SummarizationModule, Trainer and BertPreTrainedModel
Hi, I wonder what's the relationship between SummarizationModule (SummarizationDistiller), Trainer and BertPreTrainedModel? I want to reimplement the distillation.py of the seq2seq example and run it on the GLUE dataset, but I'm confused by the relationship between these three classes. The model parameter of Trainer is a BertPreTrainedModel and the Trainer trains the model, but in SummarizationDistiller there isn't a training function. I didn't find the training process in distillation.py or finetune.py. Should I pass a SummarizationDistiller object to Trainer to train the model? Or how should I train my custom SummarizationDistiller?
12-24-2020 03:37:59
12-24-2020 03:37:59
Hi @ziqi-zhang, the distillation code is written using the `pytorch-lightning` framework and `SummarizationModule` is a lightning module. You should go through the lightning docs to see how training works for lightning modules and how to customize it. `Trainer` is the transformers training helper, which is different from lightning; there is no connection between the two.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
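For illustration, the lightning pattern looks roughly like the sketch below; the `SummarizationDistiller` construction is a placeholder, check the actual script for its real arguments:

```python
import pytorch_lightning as pl

# a LightningModule defines training_step/configure_optimizers itself;
# lightning's Trainer then drives the loop - no transformers Trainer involved
distiller = SummarizationDistiller(hparams)  # placeholder construction
pl_trainer = pl.Trainer(gpus=1, max_epochs=3)
pl_trainer.fit(distiller)
```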
transformers
9,286
closed
Why Bert-chinese use do_lower_case=False?
Some Chinese text contains English words, for example: "Apples是苹果的复数形式。". I have questions about how to tokenize such text: 1. Why is Chinese BERT case sensitive, when I can't even find 'A' in vocab.txt? 2. Because English words in the Chinese vocab.txt are few, should I use the wordpiece tokenizer as the default, like "['apple', '##s', '是', '苹', ...]", or split into characters to tokenize, like "['a', 'p', 'p', 'l', 'e', 's', '是', '苹', ...]"?
12-24-2020 01:37:10
12-24-2020 01:37:10
Hmm, I don't really know how to best answer your question here... maybe @JetRunner ?<|||||>Using lowercase=False preserves the information about casing, and this information may be helpful for some tasks. For sentiment analysis I think casing is not important, but for a task like NER it may be useful. Also, for many tasks where we deal with multiple languages it is recommended to use a cased model, because every language has its own grammar and syntax and casing may help in one way or another. Could this be the reason why Bert-chinese uses lowercase=False by default?<|||||>I think your explanation makes sense. But there are no capital letters in the Chinese vocab.txt, so all words containing capitals will be regarded as [UNK].<|||||>Well, I would say the design of Chinese BERT is not necessarily the best. It makes sense to use only lower cases to resolve the data sparsity, since there are not many English sentences in Chinese Wikipedia.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
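A quick way to observe the behaviour discussed above; the printed output is an assumption based on the missing uppercase letters, not a verified result:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
print(tokenizer.tokenize("Apples是苹果的复数形式。"))
# presumably something like: ['[UNK]', '是', '苹', '果', '的', '复', '数', '形', '式', '。']
# because 'A' is not in vocab.txt and do_lower_case is False for this checkpoint
```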
transformers
9,285
closed
TFRobertaModel warning - `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated
I'm using: google colab transformers 4.1.1 tensorflow 2.4.0 (with gpu) model TFRobertaModel I keep getting a warning when calling TFRobertaModel. For example: ``` tokenizer = RobertaTokenizer.from_pretrained('roberta-base') config = RobertaConfig.from_pretrained('roberta-base') roberta_layer = TFRobertaModel(config) max_seq_len = 64 ids = tf.keras.layers.Input((max_seq_len,), dtype=tf.int32) att = tf.keras.layers.Input((max_seq_len,), dtype=tf.int32) tok = tf.keras.layers.Input((max_seq_len,), dtype=tf.int32) roberta_inputs = [ids, att, tok] sequence_output = roberta_layer(ids,attention_mask=att,token_type_ids=tok) # this produces the message ``` Produces the following message: The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model. They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. I tried to set the variables in the config object but there is no change in the message: ``` tokenizer = RobertaTokenizer.from_pretrained('roberta-base') config = RobertaConfig.from_pretrained('roberta-base',output_attentions=False,output_hidden_states=False,return_dict =True) roberta_layer = TFRobertaModel(config) ``` @jplu, @LysandreJik
12-24-2020 00:39:08
12-24-2020 00:39:08
Hello! These two messages are just warnings; you can ignore them if you are not concerned. Basically, these messages will always be displayed every time the graph node is executed, and only in graph mode.<|||||>Is there a way to suppress these warnings? They overwhelm the logs with useless messages...<|||||>@jplu the issue still persists, what is the purpose of these warnings? Why are they displayed to the user of the library?<|||||>@jklaise @Gilthans The logging happens through a TF Logger (see [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L50)). You can suppress them by using something like `tf.get_logger().setLevel('ERROR')`
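Putting that suggestion into a complete snippet (a minimal sketch; note this silences all TF-side INFO/WARNING messages, not just this one):

```python
import tensorflow as tf
from transformers import RobertaConfig, TFRobertaModel

tf.get_logger().setLevel("ERROR")  # raise the TF logger threshold before building the graph

config = RobertaConfig.from_pretrained("roberta-base")
roberta_layer = TFRobertaModel(config)  # the warnings no longer flood the logs
```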
transformers
9,284
closed
[Templates] Adapt Bert
# What does this PR do? Adapt Bert-like templates following https://github.com/huggingface/transformers/pull/9183. This should fix the templates test on master. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-24-2020 00:08:18
12-24-2020 00:08:18
transformers
9,283
closed
Fix TF DPR
# What does this PR do? This PR reworks the DPR architecture for its TF version. The rework allows DPR models to be saved as proper saved models.
12-23-2020 16:53:59
12-23-2020 16:53:59
@patrickvonplaten, I reworked the approach a bit. Now `TFDPREncoder` and `TFDPRSpanPredictor` are still models and keep their features from `TFPreTrainedModel`, while all the DPR models benefit from the serving. All the slow/quick tests are still passing.<|||||>All @sgugger comments have been addressed.
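As an example, exporting one of the reworked DPR models as a SavedModel could look like the sketch below; `from_pt=True` is an assumption in case only PyTorch weights are hosted for this checkpoint:

```python
import tensorflow as tf
from transformers import TFDPRQuestionEncoder

model = TFDPRQuestionEncoder.from_pretrained(
    "facebook/dpr-question_encoder-single-nq-base", from_pt=True
)
tf.saved_model.save(model, "dpr_question_encoder_saved_model")  # proper SavedModel export
```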
transformers
9,282
closed
Adapt to new name of `label_smoothing_factor` training arg
# What does this PR do? This PR changes `label_smoothing` to its new name `label_smoothing_factor` in the tests and scripts that use it. Pinging @stas00 for information but will merge when CI is passing.
12-23-2020 15:58:21
12-23-2020 15:58:21
transformers
9,281
closed
tapas utils
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-23-2020 12:58:36
12-23-2020 12:58:36
transformers
9,280
closed
issue with evaluation of seq2seq_trainer.py on multiple gpus
## Environment info - `transformers` version: 3.5.1 - Platform: Linux - Python version: 3.7 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): - - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help Trainer: @sgugger ## Information and commands to reproduce I am running the seq2seq trainer model on multiple GPUs. The problem arises during evaluation; here is my modified seq2seq_trainer.py code and how to reproduce the error: ``` git clone [email protected]:rabeehk/seq2seq.git python setup.py develop cd seq2seq python -m torch.distributed.launch --nproc_per_node=4 --master_port=9918 finetune_t5_trainer.py temp_configs/mp-lr-3e-02-r-8-l-true.json ``` here is the error I get during the evaluation: `file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found ` ## Bug description I figured out that in finetune_trainer.py, if a user loads the config and trained model again during evaluation, some processes are not able to find the config file, showing this is not multi-GPU safe. I realized that if I guard the reload with `trainer.is_world_process_zero()` beforehand and reload the model and config in the evaluation part, this resolves the issue (see the sketch at the end of this thread). Could you please comment on this and assist with this issue? I think this is a bug if the user cannot properly reload the model during evaluation on multiple GPUs. Thanks. ## Full error stack. ``` I 1222 15:38:32.129721 2784 main shadow.py:122] > Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 388, in get_config_dict file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found I 1222 15:38:32.129757 2784 main shadow.py:122] > Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 388, in get_config_dict Traceback (most recent call last): I 1222 15:38:32.129800 2784 main shadow.py:122] > I 1222 15:38:32.130028 2784 main shadow.py:122] > local_files_only=local_files_only, I 1222 15:38:32.130069 2784 main shadow.py:122] > File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 962, in cached_path I 1222 15:38:32.130314 2784 main shadow.py:122] > raise EnvironmentError("file {} not found".format(url_or_filename)) I 1222 15:38:32.130367 2784 main shadow.py:122] > OSError: file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found I 1222 15:38:32.130448 2784 main shadow.py:122] > During handling of the above exception, another exception occurred: I 1222 15:38:32.130505 2784 main shadow.py:122] > I 1222 15:38:32.130557 2784 main shadow.py:122] > OSError: file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found I 1222 15:38:32.130720 2784 main shadow.py:122] > Traceback (most recent call last): OSError: file outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found I 1222 15:38:32.130823 2784 main shadow.py:122] > File "./finetune_t5_trainer.py", line 328, in <module> I 1222 15:38:32.130858 2784 main shadow.py:122] > I 1222 15:38:32.130903 2784 main shadow.py:122] > During handling of the above exception, another exception occurred: I 1222 15:38:32.130938 2784 main shadow.py:122] > I 1222 15:38:32.130976 2784 main shadow.py:122] > Traceback (most recent call last): File
"./finetune_t5_trainer.py", line 207, in main main() File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 347, in from_pretrained cache_dir=model_args.cache_dir) File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 347, in from_pretrained 100%|??????????| 2100/2100 [36:51<00:00, 1.05s/it] I 1222 15:38:32.131012 2784 main shadow.py:122] > cache_dir=model_args.cache_dir) I 1222 15:38:32.131047 2784 main shadow.py:122] > File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 347, in from_pretrained I 1222 15:38:32.131081 2784 main shadow.py:122] > config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs) I 1222 15:38:32.131301 2784 main shadow.py:122] > File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 400, in get_config_dict I 1222 15:38:32.131337 2784 main shadow.py:122] > raise EnvironmentError(msg) I 1222 15:38:32.131372 2784 main shadow.py:122] > OSError: Can't load config for 'outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true'. Make sure that: I 1222 15:38:32.131432 2784 main shadow.py:122] > - 'outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true' is a correct model identifier listed on 'https://huggingface.co/models' I 1222 15:38:32.131615 2784 main shadow.py:122] > raise EnvironmentError(msg) I 1222 15:38:32.131670 2784 main shadow.py:122] > OSError: Can't load config for 'outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true'. Make sure that: ```
12-23-2020 11:16:13
12-23-2020 11:16:13
Hello @rabeehk, We sadly cannot fix issues in different repos, such as `rabeehk/seq2seq.git` - this is too time-consuming and not really our responsibility. We're happy to assist if you could provide a **short, precise, and complete** code snippet that is based on Transformers `Seq2SeqTrainer` only.<|||||>@rabeehk, I think you may not have considered that open source projects are not a help desk. If you are going to continue in the same fashion you will not get any answers at all. Many people ask for help, but you need to think about how to ask for help so that it's easy for the developers to quickly understand what is going on, reproduce the problem and solve what needs to be solved. But if you dump 1000-line logs and say "help me to fix this" without investigating it first yourself, you will not get anywhere here. For example, in your 1000-line log dump in the OP, if you look closely you will see that the error is on your side since it tells you: ``` outputs/mixture1/meta-adapters-task-projector-new_sampler-num-gpus-4/mp-lr-3e-02-r-8-l-true/config.json not found ``` i.e. your setup is broken. So you didn't really study the problem and yet want us to do this for you. That said, I personally will not do it again, so please don't tag me unless it's related to what I'm working on and you found a bug in the code I wrote or maintain. Also, tagging multiple developers out of context is frowned upon - you tagged me on this issue: > Who can help > FSMT: @stas00 What does it have to do with FSMT? The tagging info is there to help users direct their questions to the right developers, who are maintainers of certain domains. They can then decide at their own discretion to tag other developers if they feel it'd help move the issue forward. If you tag multiple people out of context you will gain no support. If you are not willing to invest energy and time into investigating the problems you encounter and forming quality questions, please consider hiring someone who will be willing to answer the multitude of your questions and sort things out for you. Perhaps ask at the forums if someone is willing to work with you professionally, where you pay them for the services provided. I hope this comment has been useful and trust you will find a way to receive the support you need. <|||||>Yes @stas00 is completely right. @rabeehk your comments are borderline spammy. We try to help as much as possible but you also need to put in the work so the community can actually help you efficiently.<|||||>Hi Stephan, hi Julien, please find my responses below: - Sorry if this looks like spam to you, but I really still think this is a bug: if you try to load the model twice inside finetune_trainer.py in the evaluation part, which is something a user might well need when applying more changes to the trained model before evaluation, you will see this is not multi-process safe, resulting in the bug I reported. - Sorry, I thought providing full logs would help; had I known otherwise I would not have provided them. I still included the one-line error message above the logs, and I did investigate the issue myself: I realized this is not multi-process safe, as I mentioned. - No, the setup is not broken; the files are there, as I said. Please read my bug report carefully - the missing config.json is the result of the bug I described. - Not really; in fact I investigated it for hours and hours. - Sure, I was thinking you were working on seq2seq based on the various updates there; sorry for the mistake. - Sorry for the mistake; I explained this above. I really thought you were working on seq2seq and that the tag was relevant. - I was really mistaken in thinking this was relevant. - No, this is not correct: I did spend hours and hours on this. To me this is still a bug.<|||||>I also should say that putting comments like your last paragraph, @stas00, is inappropriate. No matter whether I believe this is a bug and you think it is spam, no matter what position you are in, and no matter that I mistakenly thought providing full logs helps, please treat people with respect. I still believe this is a bug.
<|||||>The point we are trying to communicate is that you need to review how you communicate, @rabeehk. Your communications come across as too much and too indiscriminate. I totally accept that you might be unaware of what is expected in good communications, and perhaps HuggingFace needs to have a guidelines document on how users can ask for help in the most efficient way for all involved parties. As of this moment I'd be happy to invest a bit of my free time to support you to find a way for you to become an asset to this community and not an annoyance. If you are willing to listen and take action: 1. Anybody looking at your first post will have an urge to flee - it's scary in its length and most people will not even try to understand what could be a very valid issue. So you need to edit the first post to remove any information that's not pertaining to the issue at hand. e.g. all those Download xx% logs are totally useless. You're saying you are attaching the traceback, but you're attaching the full log. I accept that you might have not known that. Attaching a full log can be helpful if it's done as an attachment, a link to a paste.bin, or at the very least if you enclosed it in: ``` <details> <summary>Full log</summary> <pre> many lines go here </pre> </details> ``` Here is an example of the outcome: <details> <summary>Full log</summary> <pre> many lines go here </pre> </details> 2. As @patrickvonplaten replied to you, you can't ask someone to go into your repository and figure out what you may have done. The code is already very complex and unless there is an easy way to do a diff and it's a small diff, nobody has the time to investigate. So you need to spend time to find a way to reproduce the problem in a minimal example, which should introduce no more than a few lines of code change-wise (of course, there are exceptions, but this is more the norm). Usually the best way is to just show the relevant backtrace (in DDP just one of them, as each process will dump a copy), the command line, and then ask if there is anything else that you could supply to help the developer reproduce the problem. 3. Try to use the latest official version. We have no resources to go and debug older revisions, which could easily have bugs that have been fixed in the latest released version. I understand that this is not always possible. But this is the best way if it fits. 4. Most of the time you can't ask to test with your data, since we don't have your data. So either you should use some existing dataset supported by HF datasets or you need to have the code that generates a small sample on the fly. 5. Do not tag multiple people on the issue unless you know this is expected, either because you asked them and they gave you explicit permission or the Issue template instructs you to do so. Having someone help you like I'm doing now is not an invitation to tag that person in the future on all your issues. I can see why you chose to tag me by looking at seq2seq commits, and while I made a few small changes in seq2seq recently, it just happened to be so because I was working on something totally unrelated and there were some changes that were required for me to proceed. But I'm not in charge of that domain. Remember that every time you tag someone, they get a notification and you're taking their time w/o their permission. Please be sensitive to that. 6. Use the edit button. Delete and merge multiple comments into one if nobody followed up yet. As you merge them, edit them to be coherent.
Use bullets and items if it makes sense. I know my first comment version almost always comes out with typos and can be incoherent or too verbose. If you look at my comments' history I often make a ton of re-edits, since I want to make sure my communication is as clear as possible (and I know I myself can be too verbose, mea culpa). --------------- The key message of this comment is that when you ask for support you are given a tiny sliver of developer's time and you need to quickly communicate the essentials of the issue at hand. You're not expected to be born with that knowledge. You're not expected to be perfect at it. If I may recommend - learn from issues posted by other people - see which issues get responses and which are ignored - learn what the posters who did get responses did right. It's a simple pattern matching with some patience. There is no harm in asking: "look, I have all these questions and I don't know how to ask them in the best possible way. Can someone help?" Perhaps, you need to find a mentor in the community at the forums, by asking if someone can support you to help you find a way to file good issues. When you tune up your communications then the developers of this fabulous project will be more than happy to address and resolve the issues you raise. This skill, of course, will help you at any other open source project. Please let me know if you found this helpful. And perhaps if you'd like to continue this discussion because you need further clarifications let's go to the forums https://discuss.huggingface.co/ and leave the Issues section alone for now. You have my permission to tag me (just @stas) in the forums for this particular discussion if you think it'd be helpful to you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
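Coming back to the technical content of the original report, here is a hedged sketch of the workaround described in the bug description above: only the main process reloads the checkpoint while the others wait at a barrier. `trainer` and `training_args` are assumed to already exist in the script, and the model class is illustrative:

```python
import torch.distributed as dist
from transformers import T5ForConditionalGeneration

# guard the reload so only rank 0 re-reads config.json and the weights
if trainer.is_world_process_zero():
    model = T5ForConditionalGeneration.from_pretrained(training_args.output_dir)
if dist.is_available() and dist.is_initialized():
    dist.barrier()  # keep the other ranks in sync before evaluation continues
```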
transformers
9,279
closed
[Refactor] Splitting pipelines.py into its own module.
# What does this PR do? Moves various pipelines into their own files. `pipelines.py` was 3k+ lines of code, which feels a bit too much. To go along with the `models` split into various files, splitting into a cleaner module with subfiles was proposed by @thomwolf (can't find the discussion). There are at least 3 parts that need to be made explicit. - The *glue code* that makes `pipeline` so powerful (loading up the right task for the right model, basically filling in all the holes based on the call signature). That's `__init__.py` - The main class `Pipeline` that mutualises a lot of the boilerplate. That's `base.py`. - All the specialized classes `NerPipeline`, `FeatureExtractionPipeline`, ... that's the other files. All the tests remain strictly the same to ensure there's no breaking change in there. The main issue with this PR is that it is now a bit harder to check the various code flows. Some of it is in the base, some in a specialized file. `TranslationPipeline`, `SummarizationPipeline` and `Text2TextPipeline` are in the same file `text2text_generation.py` as they seem to share quite a bit of code; maybe some cleanup and better code sharing is imaginable in a follow-up PR (https://github.com/Narsil/transformers/pull/1). At the very least, modifications in one should probably be mirrored in the others, as they use the same underlying models (Seq2Seq). Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik @thomwolf
12-23-2020 11:11:47
12-23-2020 11:11:47
I think I fixed all of them. When moving everything around I felt more comfortable switching temporarily to absolute imports and forgot to switch back.
transformers
9,278
closed
LED
# What does this PR do? Adds LongformerEncoderDecoder (LED) from @ibeltagy - see: https://github.com/allenai/longformer#longformer Todo: - [x] **Important**: position embeddings have to be cut to correctly convert original Bart-like checkpoints to LED. The reason is that Bart uses a position embedding hack because of which the embedding idx 0 and 1 are never used, resulting in an embedding matrix that has a length of 1026 instead of 1024, see: https://github.com/huggingface/transformers/blob/88ef8893cd649cc2b4adb9885aba88c750118cff/src/transformers/models/bart/modeling_bart.py#L131. All LED checkpoints are hence cut to remove this hack in LED: ```python model = LEDForConditionalGeneration.from_pretrained("./led-base-16384") model.model.encoder.embed_positions.weight = torch.nn.Parameter(model.model.encoder.embed_positions.weight[2:, :]) model.model.decoder.embed_positions.weight = torch.nn.Parameter(model.model.decoder.embed_positions.weight[2:, :]) model.save_pretrained("./led-base-16384") ``` - [x] Make PyTorch integration tests pass. See `LEDIntegrationTests` in `tests/test_modeling_led.py`. - [x] Add gradient_checkpointing - [x] Make common tests work - [x] Add convenient padding function so that input can be of whatever size and add global_attn logic to mask - [x] Automatically create attention_mask in encoder if not provided - [x] Finish PT version - [x] Make TF version work - [x] Add tips in docs for LED - [x] Eval notebook: https://colab.research.google.com/drive/12INTTR6n64TzS4RrXZxMSXfrOd9Xzamo?usp=sharing - [x] Nice to have: Fine-tune notebook: https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing - [ ] Make nice model cards ## TODO after PR is merged: - [ ] Correctly add `# Copied from ....` statements from Bart and Longformer (this probably requires the Bart refactor to be merged before) - [ ] Open issue regarding problems with TF save_model test - [ ] Correct templates: delete unnecessary test for tf bart; add gradient checkpointing by default in PT ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
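For reference, a hedged usage sketch of the padding/global-attention behavior listed in the todos above; the checkpoint name follows the notebooks linked in the PR:

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("A very long document ...", return_tensors="pt")
# Put global attention on the first (<s>) token, as recommended for LED.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```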
12-23-2020 10:50:03
12-23-2020 10:50:03
@patrickvonplaten when you have time, can you fix the conflicts and apply the same updates merged in Longformer to LED. Thanks!
transformers
9,277
closed
[Seq2Seq Templates] Fix check_repo.py templates file
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-23-2020 10:26:15
12-23-2020 10:26:15
transformers
9,276
closed
Vision Transformer
# 🌟 New model addition ## Model description This issue is adding the Vision Transformer model described in the [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) paper. If you have any feedback and/or further ideas for the implementation please don't hesitate to mention them. ## Open source status * [x] the model implementation is available: [The official github repo](https://github.com/google-research/vision_transformer) provides the implementation in Jax/Flax. * [x] the model weights are available: See the github repo above. * [x] who are the authors: (mention them, if possible by @gh-username) Google Research, Brain Team
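As background, here is a sketch of the paper's core idea (not a `transformers` API): split the image into 16x16 patches and linearly embed each patch as a token that a standard Transformer encoder can consume.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        # A stride-16 conv is equivalent to a per-patch linear projection.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, dim, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, dim) patch-token sequence

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```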
12-23-2020 10:21:13
12-23-2020 10:21:13
This was implemented in https://github.com/huggingface/transformers/pull/10950
transformers
9,275
closed
Disable progress bar for Trainer
I am referencing code similar to [run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py). I am running a preprocessing function that does the tokenization of text, as well as `trainer.predict`, on a pandas dataframe. How do I disable the progress bar from showing the progress made on each row of the dataframe? I thought it would work by disabling logging as well as `tqdm`, but that is not the case here. #3050
12-23-2020 10:19:38
12-23-2020 10:19:38
If you set `disable_tqdm=False` in your `TrainingArguments`, you shouldn't have any progress bar from the library.<|||||>Well, I think you meant `disable_tqdm=True`. By the way, the following worked: `args = TrainingArguments(disable_tqdm=True, output_dir="tmp_trainer")` I am still getting progress bars for the `dataset.map()` though. Is there something like `verbose=False`?<|||||>`dataset.map` comes from the Datasets library, not Transformers. So you should open an issue there for this part :-)
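A hedged sketch combining the two fixes discussed in this thread; note the progress-bar switch for `map()` lives in the separate Datasets library, and its exact name can vary across versions:

```python
import datasets
from transformers import TrainingArguments

# Turns off the Trainer's own tqdm bars.
args = TrainingArguments(output_dir="tmp_trainer", disable_tqdm=True)

# Turns off dataset.map() bars in recent Datasets releases.
datasets.disable_progress_bar()
```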
transformers
9,274
closed
Loss printed by tensorflow fit() differs from loss using custom loop for RoBERTa
## Environment info - `transformers` version: 3.3.1 - Platform: Windows 10 - Python version: 3.6 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @jplu, @LysandreJik ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the following script 2. The printed out losses are different ``` import tensorflow as tf from transformers import RobertaConfig, TFRobertaMainLayer # 1. Create a class to be able to use fit() class Transformer(tf.keras.Model): def __init__(self): super(Transformer, self).__init__() config = RobertaConfig( vocab_size=100, hidden_size=128, intermediate_size=128, max_position_embeddings=514, num_attention_heads=8, num_hidden_layers=6, type_vocab_size=1, ) self.encoder = TFRobertaMainLayer(config) def call(self, inp, training=False): return self.encoder(inp)[0] model = Transformer() # 2. Calculating loss manually for dummy input loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) x = tf.constant([[1, 0]]) y_true = tf.constant([[1, 0]]) y_pred = model((x, x)) loss = loss_fn(y_true, y_pred) print(loss) # printing 4.8093767 # 3. Run fit() model.compile(loss=loss_fn) model.fit((x, x), y_true) # printing 4.7854 ``` ## Expected behavior The losses should be equal.
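One plausible source of the mismatch (an assumption, not confirmed in this thread) is that `fit()` runs the forward pass with `training=True`, so dropout is active, while the manual call used the default `training=False`. A quick diagnostic, reusing the objects from the script above after the `model.compile(...)` call:

```python
# Compare like-for-like: dropout off vs. dropout on vs. Keras's own eval loop.
manual_eval = loss_fn(y_true, model((x, x), training=False))  # dropout off
manual_train = loss_fn(y_true, model((x, x), training=True))  # dropout on, like fit()
eval_loss = model.evaluate((x, x), y_true, verbose=0)         # also uses training=False
print(float(manual_eval), float(manual_train), eval_loss)
```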
12-23-2020 10:17:20
12-23-2020 10:17:20
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,273
closed
Fix param error
# What does this PR do? Fixes error ``` TypeError: forward() got an unexpected keyword argument 'token_type_ids' ``` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
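For context, a hedged sketch of the user-side shape of this error and the usual workaround (the checkpoint name is just an example): some models, e.g. DistilBERT, accept no `token_type_ids` in `forward()`, so the key must be dropped from the encoded inputs.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = dict(tokenizer("some text", return_tensors="pt"))
inputs.pop("token_type_ids", None)  # drop the key if the model's forward() doesn't take it
outputs = model(**inputs)
```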
12-23-2020 09:56:57
12-23-2020 09:56:57
transformers
9,272
closed
Fix gpt2 document
# What does this PR do? Fixes gpt2 document error. ``` AttributeError: 'GPT2DoubleHeadsModelOutput' object has no attribute 'lm_logits' ``` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
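A sketch of the corrected attribute access, assuming the stock `gpt2` checkpoint: the output object exposes `logits` and `mc_logits`, not `lm_logits`.

```python
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

outputs = model(**tokenizer("Hello, my dog is cute", return_tensors="pt"))
lm_logits = outputs.logits      # previously (and wrongly) documented as outputs.lm_logits
mc_logits = outputs.mc_logits   # multiple-choice head logits
```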
12-23-2020 07:58:19
12-23-2020 07:58:19
transformers
9,271
closed
allow integer device for BatchEncoding
# What does this PR do? Fixes #9244 I'm not fully aware of the details behind the Apex guard in the method, so maybe this is not the solution. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? tokenizers: @mfuntowicz
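A sketch of the behavior this PR enables: `BatchEncoding.to()` accepting an integer GPU index in addition to a string or `torch.device`.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoding = tokenizer("hello world", return_tensors="pt")
if torch.cuda.is_available():
    encoding = encoding.to(0)  # equivalent to encoding.to("cuda:0") after this fix
```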
12-23-2020 05:54:21
12-23-2020 05:54:21
transformers
9,270
closed
how can I change the AlbertModel's vocab
# 🚀 Feature request How can I change the AlbertModel's vocab? Thanks. ## Motivation I noticed that I can change BERT's vocab by changing the vocab.txt. But when I use ALBERT's API, the documentation suggests: tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2'). So how can I change that? Thanks ## Your contribution
12-23-2020 02:39:41
12-23-2020 02:39:41
This should help: https://github.com/huggingface/transformers/issues/1413#issuecomment-538083512<|||||>Thanks a lot.
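A hedged sketch expanding on the linked comment: ALBERT ships a SentencePiece model rather than a plain vocab.txt, so instead of editing a text file you either train a new SentencePiece model or add tokens and resize the embeddings; the tokens below are illustrative.

```python
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")

tokenizer.add_tokens(["newword1", "newword2"])  # extend the vocabulary
model.resize_token_embeddings(len(tokenizer))   # keep the embedding matrix in sync
```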
transformers
9,269
closed
Output probability from model.generate
# 🚀 Feature request Do we have the option to output the probability of the generated sequence from the `model.generate` function? It would be super useful for evaluating the confidence score of the generated sequence. Thanks so much!
12-23-2020 00:04:24
12-23-2020 00:04:24
You'll have it soon 😉 , once #9150 is merged <|||||>> You'll have it soon 😉 , once #9150 is merged That's awesome. Thanks!<|||||>@patil-suraj great, it got merged, but how does that translate now to your question_generation repo? How do I get the output probability/confidence score for the predicted answers?<|||||>I only found this comment from you https://discuss.huggingface.co/t/text-generation-pipeline-output-scores-parameter/3294/2: *the text-generation pipeline doesn’t return scores, however you could call the generate method directly to get the scores, this should help* Would be great if you could elaborate on that. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patil-suraj Any chance that `generate_tf_utils` will get the same functionality? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
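A hedged sketch of the API added by #9150 (available once that PR is in your installed version): pass `return_dict_in_generate` and `output_scores` to `generate`.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello world", return_tensors="pt")
out = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
# out.scores holds one (batch, vocab) score tensor per generated step; a sequence
# log-probability can be accumulated via log_softmax over each step's scores.
step_logprobs = [torch.log_softmax(s, dim=-1) for s in out.scores]
```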
transformers
9,268
closed
Unable to load LayoutLM from pretrained
## Environment info - `transformers` version: 3.3.0 - Platform: Linux-4.15.0-76-generic-x86_64-with-glibc2.10 - Python version: 3.8.2 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA ### Who can help @sgugger @LysandreJik ## Information When I try to load a LayoutLM model with the following script I hit an error. ``` from transformers import LayoutLMForTokenClassification model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased', from_tf=True) ``` ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) ~/miniconda3/envs/ML38/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 905 if resolved_archive_file is None: --> 906 raise EnvironmentError 907 except EnvironmentError: OSError: During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-11-9f82aa1a161f> in <module> ----> 1 model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased', from_tf=True) ~/miniconda3/envs/ML38/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 911 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME}.\n\n" 912 ) --> 913 raise EnvironmentError(msg) 914 915 if resolved_archive_file == archive_file: OSError: Can't load weights for 'microsoft/layoutlm-base-uncased'. Make sure that: - 'microsoft/layoutlm-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'microsoft/layoutlm-base-uncased' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ```
12-22-2020 22:12:22
12-22-2020 22:12:22
LayoutLM only has a PyTorch implementation available. If you remove the `from_tf=True` statement, it will work. <|||||>I am trying to load the model weights from [here](https://huggingface.co/microsoft/layoutlm-base-uncased/tree/main#) but `from_tf=False` doesn't work either. Traceback is below. ``` file_share_pre_train_model_path = "layoutlm-base-uncased" ... config = LayoutLMConfig.from_pretrained( ... os.path.join(file_share_pre_train_model_path, "config.json"), num_labels=len(tag_labels), cache_dir=None ... ) ... model = LayoutLMForTokenClassification.from_pretrained( ... file_share_pre_train_model_path, ... from_tf=False, ... config=config, ... cache_dir=None, ... ) Traceback (most recent call last): File "/Users/hsk/Company/environments/ner_layoutlm/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1035, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/Users/hsk/Company/environments/ner_layoutlm/lib/python3.8/site-packages/torch/serialization.py", line 527, in load with _open_zipfile_reader(f) as opened_zipfile: File "/Users/hsk/Company/environments/ner_layoutlm/lib/python3.8/site-packages/torch/serialization.py", line 224, in __init__ super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at ../caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at ../caffe2/serialize/inline_container.cc:132) [native stack frames #0-#63 elided] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<input>", line 5, in <module> File "/Users/hsk/environments/ner_layoutlm/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1037, in from_pretrained raise OSError( OSError: Unable to load weights from pytorch checkpoint file for 'layoutlm-base-uncased' at 'layoutlm-base-uncased/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ```<|||||>Maybe try updating to Transformers 4.1.1 (I just ran the following in a notebook and it works): ``` !pip install transformers from transformers import LayoutLMForTokenClassification model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased') ```<|||||>Thanks @NielsRogge It was a pytorch version issue. I solved it after seeing [this](https://github.com/huggingface/transformers/issues/7739#issuecomment-707214148)<|||||>Closing then, thanks for your help @NielsRogge !
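A hedged sketch of the fix referenced above: the checkpoint was written in a newer torch zipfile format than an old local torch can read. Either upgrade torch, or re-save the weights from a newer environment in the legacy format; the output filename is just an example.

```python
import torch

# Run this with a recent torch version that can read the checkpoint.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
torch.save(state_dict, "pytorch_model_legacy.bin", _use_new_zipfile_serialization=False)
```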
transformers
9,267
closed
[hf args] shouldn't match partial arg names
For `--label_smoothing_factor` I can pass `--label_smoothing` and it still works - which is a bug, as it should do a full match and not a substring. This is with master. context: finetune_trainer just switched from `--label_smoothing` to `--label_smoothing_factor` (different functionality) and we were puzzling over why `--label_smoothing` still worked. this is definitely not urgent @sgugger
12-22-2020 21:58:23
12-22-2020 21:58:23
This looks like it's actually an intended behavior of `ArgumentParser`: see [here](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.parse_known_args) and in general of `argparse` (see [here](https://docs.python.org/3/library/argparse.html#prefix-matching)). So I don't think it categorizes as a bug, even if it should be documented.<|||||>Oh, fantastic - thank you for finding that out, @sgugger This feature surely bit us yesterday.
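A small demo of the documented prefix matching, plus the `allow_abbrev=False` switch that disables it:

```python
import argparse

p = argparse.ArgumentParser()
p.add_argument("--label_smoothing_factor", type=float, default=0.0)
print(p.parse_args(["--label_smoothing", "0.1"]))  # matches via unambiguous prefix

p_strict = argparse.ArgumentParser(allow_abbrev=False)
p_strict.add_argument("--label_smoothing_factor", type=float, default=0.0)
# p_strict.parse_args(["--label_smoothing", "0.1"])  # would now fail: unrecognized argument
```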
transformers
9,266
closed
Minor documentation revisions from copyediting
# What does this PR do? Minor changes to the documentation to correct typos and improve readability. I noticed these typos while reading through the docs to familiarize myself with the library for a project, and thought it would be nice to make a PR for them 😊 I've already tested building the docs from these changes, and all changes seem to have taken effect properly 👍 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? @sgugger would you mind reviewing this PR? I'm happy to make any changes (or remove any changes) you want 🙂
12-22-2020 21:37:57
12-22-2020 21:37:57
Looks like you need to run `make style` on your branch to fix the formatting of the doc files. Let me know if you run into any trouble doing that.<|||||>Thanks @sgugger 😄 I was able to run `make style` successfully (and update `preprocessing.rst` with the changes), but it looks like the `check_code_quality` check ran out of memory this time 😅 ![Screen Shot 2020-12-23 at 9 33 15 AM](https://user-images.githubusercontent.com/1848731/103007806-0e791700-4502-11eb-9f2c-b1946dc64dbf.png) I tried rerunning it, but it seems like I don't have permission. Could you try rerunning it? I'm also happy to bump the `resource_class` for the check from `medium` to `medium+` if that would be helpful 🙂 <|||||>Yes it was just a spurious failure. Thanks!
transformers
9,265
closed
[finetune_trainer] max length cl args redesign
Splitting off from https://github.com/huggingface/transformers/pull/9241, it's been proposed to refactor the following 4 cl args of `finetune_trainer.py`: 1. `--max_source_length` 2. `--max_target_length` 3. `--val_max_target_length` 4. `--test_max_target_length` https://github.com/huggingface/transformers/blob/f38c4ad302dd34faef1137b13e9636f4408b0462/examples/seq2seq/finetune_trainer.py#L82-L110 There are multiple comments wrt this in https://github.com/huggingface/transformers/pull/9241, especially towards the end of it. Let's redesign it here and then do a single breaking change to this, probably in the new year. To summarize, the main suggestions so far were: 1. to perhaps remove `--max_source_length` - but we need use cases to see whether this is safe to do 2. collapse cl args 2-4 into a single `max_length` arg to match `generate`'s API. @sgugger, @patrickvonplaten, @patil-suraj
12-22-2020 20:41:15
12-22-2020 20:41:15
After giving this some thought, given the fact there are two different lengths here (input and targets) I would propose keeping `--max_source_length` and `--max_target_length` to avoid any confusion for the user. In the same vein, `run_qa.py` contains `max_seq_length` and `max_answer_length` to clearly differentiate the two. As for val/test I don't have any strong opinion, apart from the fact they are not used properly in the prediction at the end (only for the preprocessing), so they should be collapsed IMO<|||||>This works for me! I especially would like to see all examples use the same cl args for the same functionality.<|||||>Works for me too, And for val/test targets lengths, IMO we can collapse it into one single `max_generate_length` or `eval_max_length` since most of the users (and the example scripts as well) use the same value for both args<|||||>Having only `max_source_length` and `max_target_length` works for me!<|||||>regarding the `val_max_target_length` and `test_max_target_length` args The reason we (I and Sam) decided to add that - in general, it’s okay to have a bit smaller max target length for training/validation because some documents could be much longer than the average length, it’s okay during training if these get truncated - for the test, we should set the max target length to be as long as the longest text in the test set so it won’t get truncated. The reason is if the text in the test set is truncated then the calculated metrics won’t be accurate. Also, we should mention in the readme that it's best to use `run_eval.py` for calculating metrics. As there is an issue when calculating BLEU score this way as outlined in #9161<|||||>> * for the test, we should set the max target length to be as long as the longest text in the test set so it won’t get truncated. The reason is if the text in the test set is truncated then the calculated metrics won’t be accurate. Why not compute this in the script then? That would avoid having an argument that is half-used.<|||||>What I'm hearing is that perhaps there was a concrete situation where the training needed a shorter max length than eval/test. So @sgugger's suggestion will solve your concern that the scoring is done on the full length, but not if for some reason the training stage should use shorter sequences. So we have 2 related situations and I'm not sure @sgugger's solution covers the 2nd one. I just don't know whether it's a real use case or a maybe. Please let me know if I haven't explained myself clearly. <|||||>Thinking more about it, @patil-suraj, won't fixing up `generate`'s `max_length` to match the longest max length of the test dataset lead to better scores than what they will be otherwise? And thus provide misleading results? For example, let's take translation, the best model would do the most correct translation regardless of whether it is allowed to generate much longer sequences. So if we calculate any such max length dynamically to be fair I think there needs to be added some extra length beyond the longest test sequence. Say `max(len(tokenize(test_inputs)))*1.1`? Does it make sense? So perhaps this is how it should work. - `max_target_length` is for training - for eval/test we derive max length from the input data plus some margin?<|||||>What I meant here is that the `test_max_target_length` is also passed to the dataset, and the dataset then truncates the reference targets (translations, summaries) longer than that. 
So later the generated targets are compared with (possibly) truncated references which will result in incorrect metrics <|||||>You're absolutely correct - that doesn't sound right. So before we can discuss the flags we then need to first discuss the algorithm, otherwise we won't get anywhere. If in order to get the correct metrics we must not truncate the val/test datasets then why are we doing that in the current code? Perhaps what I suggested at the end of https://github.com/huggingface/transformers/issues/9265#issuecomment-750423347 is a better way to approach it? Also please don't forget that max_length is used to deal with OOM limitations<|||||>This has been resolved.
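A hedged sketch of the dynamic-length idea floated above (`tokenizer` and `test_targets` are assumed to exist in the calling script): size the eval/test generation length from the longest tokenized reference plus a margin, so references are never truncated before metric computation.

```python
# Longest tokenized reference in the test set, plus 10% headroom.
max_ref_len = max(len(tokenizer(target).input_ids) for target in test_targets)
generation_max_length = int(max_ref_len * 1.1)
```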
transformers
9,264
closed
compute_metrics in the trainer does not seem to be extensible
Hi, this is more of a feature request. Looking into the compute_metrics function defined below: https://github.com/huggingface/transformers/blob/c89bdfbe720bc8f41c7dc6db5473a2cb0955f224/src/transformers/trainer.py#L204 it looks to me like the design does not allow easy user modification for different applications, or I am missing something; please find the explanation below. Let's assume the user has multiple tasks, as in T5, and each task needs several different evaluation metrics which need to be generated on the fly. Since this function does not accept any arguments other than `EvalPrediction`, the user cannot pass further parameters to generate the final evaluation metric on the fly. I would appreciate modifying the design to allow easy modifications. thanks
12-22-2020 19:35:52
12-22-2020 19:35:52
If I understand correctly you raise 2 unrelated issues: 1. there is only one place where `compute_metrics` is set and perhaps it needs to be changed through the life of the trainer object? Since you can always override it with: ``` trainer.compute_metrics = new_compute_metrics ``` when you need to switch it to another version, so in a pinch you can do that. But clearly this is not a public API at the moment and can change at any time. Perhaps all is needed is a setable accessor for the `compute_metrics` attribute, so that a user can use it to swap in a new function at will, rather than adding new arguments? 2. You're saying that users may need to pass more args to `compute_metrics`, but it's not possible. You can do that via a closure mechanism, e.g. how it's done here: https://github.com/huggingface/transformers/blob/cbe63949d76efd153a1f389f38fe9ce1287e06b0/examples/seq2seq/utils.py#L80 so you build your `compute_metrics` on the fly, getting whatever data you need into the closure function and then it'll have access to whatever other data you may need at run time. Here is a silly example: ``` def make_compute_metrics(): extra_input = 1 def compute_metrics(pred): print(f"Look ma, I can pass my own args: {extra_input}") return compute_metrics trainer.compute_metrics = make_compute_metrics() # and then some time later in `prediction_loop`: self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids)) # calls your created on the fly function with whatever other data you want to be seen from it. ``` So your custom `compute_metrics_fn` now can access whatever other data you want besides the `EvalPrediction` object. <|||||>Hi there thank you for the response, yes, I agree this is possible, I solved this with `functools.partial`, but still I think the better design would be to allow the user add extra parameters. so this was more feature request. Please feel free to ignore if this does not make sense. thanks Best Rabeeh On Tue, Dec 22, 2020 at 11:12 PM Stas Bekman <[email protected]> wrote: > If I understand correctly you raise 2 unrelated issues: > > 1. there is only one place where compute_metrics is set and perhaps it > needs to be changed through the life of the trainer object? > > Since you can always override it with: > > trainer.compute_metrics = new_compute_metrics > > when you need to switch it to another version, so in a pinch you can do > that. > > Perhaps all is needed is an setable accessor for the compute_metrics > attribute, so that a user can use to swap in a new function at will, rather > than adding new arguments? > > 1. You're saying that users may need to pass more args to > compute_metrics, you can do that via closure, e.g. how it's done here: > > https://github.com/huggingface/transformers/blob/cbe63949d76efd153a1f389f38fe9ce1287e06b0/examples/seq2seq/utils.py#L80 > so you build your compute_metrics on the fly, getting whatever data > you need into the closure function and then it'll have access to whatever > other data you may need at run time. > > Here is a silly example: > > def make_compute_metrics(): > extra_input = 1 > def compute_metrics(pred): > print(f"Look ma, I can pass my own args: {extra_input}") > return compute_metrics > > compute_metrics_fn = make_compute_metrics() > compute_metrics_fn() > > So your custom compute_metrics_fn now can access whatever other data you > want besides the EvalPrediction object. > > As I have shown in (1) you can now assign this to trainer.compute_metrics. > > — > You are receiving this because you authored the thread. 
> Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/9264#issuecomment-749827774>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ARPXHH2C7MLFOY6V3MWST5LSWERUDANCNFSM4VGBUKSQ> > . > <|||||>Yes, `partial` would do the trick. I've just shared my take on it. and that there might be a need for a public API to override `trainer.compute_metrics` post-`__init__`, In my limited experience `partial` or a manual closure is how some projects implement such functions. I will let others comment though on this feature request.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
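A sketch of the `functools.partial` variant mentioned above, assuming an existing `trainer`; `extra_input` is a stand-in for whatever per-task data the metric needs.

```python
from functools import partial

def compute_metrics_with_extras(eval_pred, extra_input):
    # eval_pred is the EvalPrediction the Trainer passes in at evaluation time.
    return {"dummy_metric": float(extra_input)}

trainer.compute_metrics = partial(compute_metrics_with_extras, extra_input=1)
```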
transformers
9,263
closed
Adds MuRIL - BERT based model for 17 Indian Languages to the library
# What does this PR do? This PR adds a TensorFlow-based MuRIL model to the library. More details about the MuRIL model can be found [here](https://tfhub.dev/google/MuRIL/1). Fixes: #9190 @LysandreJik @patrickvonplaten
12-22-2020 19:27:19
12-22-2020 19:27:19
Hey @ravi03071991, thanks a lot for the new model! There are quite some empty files in the PR - can we maybe delete those?<|||||>From https://tfhub.dev/google/MuRIL/1 it seems that MuRIL is the same as BERT - do we need a new model class? It would be awesome if you could specify the differences between MuRIL and BERT in this PR :-) <|||||>> Hey @ravi03071991, > > thanks a lot for the new model! There are quite some empty files in the PR - can we maybe delete those? Sure. We can delete them.<|||||>- I've posted an adapted MuRIL BERT model here https://huggingface.co/monsoon-nlp/muril-adapted-local - Simran Khanuja has posted here https://huggingface.co/simran-kh/muril-cased-temp - there is also https://huggingface.co/google/muril-cased/tree/main but it has no model files Does this do the job?<|||||>Hey @ravi03071991 and @mapmeld, So what I understand is that the model can be used with **no** code addition using `BertModel` and `BertTokenizer` - is this correct? I think in this case it does make more sense to just add a model to the model hub as it's done with this checkpoint: https://huggingface.co/monsoon-nlp/muril-adapted-local/tree/main Did you guys check whether the model works as expected? We could do some quick fine-tuning evaluation on the XTREME benchmark to make sure the model behaves correctly in transformers. We should get more or less the same results as shown on the tf-hub: https://tfhub.dev/google/MuRIL/1 . I think it'll be pretty easy to do some fine-tuning / evaluation by slightly adapting this notebooks: https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb to use the XTREME dataset from `datasets`: https://huggingface.co/datasets/xtreme . Would someone be interested in giving it a shot at making such a notebook? I think with such a notebook, we can upload the pre-trained checkpoint to an "official" org name in the hub - probably `google/muril-bert-base` or something (google trained the model no?). Then we're happy to do some promotion on the model as well :-) <|||||>> Hey @ravi03071991 and @mapmeld, > > So what I understand is that the model can be used with **no** code addition using `BertModel` and `BertTokenizer` - is this correct? I think in this case it does make more sense to just add a model to the model hub as it's done with this checkpoint: https://huggingface.co/monsoon-nlp/muril-adapted-local/tree/main > > Did you guys check whether the model works as expected? We could do some quick fine-tuning evaluation on the XTREME benchmark to make sure the model behaves correctly in transformers. We should get more or less the same results as shown on the tf-hub: https://tfhub.dev/google/MuRIL/1 . I think it'll be pretty easy to do some fine-tuning / evaluation by slightly adapting this notebooks: https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb to use the XTREME dataset from `datasets`: https://huggingface.co/datasets/xtreme . Would someone be interested in giving it a shot at making such a notebook? > > I think with such a notebook, we can upload the pre-trained checkpoint to an "official" org name in the hub - probably `google/muril-bert-base` or something (google trained the model no?). Then we're happy to do some promotion on the model as well :-) Sure. I am can take up the task of making the notebook.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
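A hedged sketch of using MuRIL with no code changes, via one of the community checkpoints linked in this thread:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/muril-adapted-local")
model = AutoModel.from_pretrained("monsoon-nlp/muril-adapted-local")

# Hindi example sentence ("This is a test sentence").
outputs = model(**tokenizer("यह एक परीक्षण वाक्य है", return_tensors="pt"))
print(outputs.last_hidden_state.shape)
```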
transformers
9,262
closed
Revert renaming in finetune_trainer
# What does this PR do? As per the discussion in #9241, reverting all renaming in the `finetune_trainer.py` script for now.
12-22-2020 19:13:54
12-22-2020 19:13:54
transformers
9,261
closed
[seq2seq] memory regression
#9241 introduced a memory regression - found out via git bisect. I was able to do: BS=12 before this PR got merged and now only BS=8 with: ``` export BS=12; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp --fp16 ``` We really need to go back to that issue of memory benchmarks in CI and figure out how to make it happen. The problem is that I started working on it some months back but gave up since each gpu gave different numbers... For details please see: https://github.com/huggingface/transformers/issues/6045 edit: should also make sure that `--label_smoothing 0.1 --fp16 --fp16_backend apex` works https://github.com/huggingface/transformers/issues/9261#issuecomment-749800880 @patrickvonplaten, should we figure this out in the new year?
12-22-2020 18:33:49
12-22-2020 18:33:49
Yes, we really should take a stab at better speed and memory regression testing. Big new years resolution!<|||||>This specific commit introduced the regression: https://github.com/huggingface/transformers/pull/9241/commits/fe7960bcbe0183d198661e1c05d82ed7ff118e18 <|||||>There is a second problem: Same as above but with apex: ``` --label_smoothing 0.1 --fp16 --fp16_backend apex ``` hangs 5% into training - spinning CPU (not OOMing) - had to kill. checked pre this PR - no hanging. Full command: ``` export BS=12; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp --label_smoothing 0.1 --fp16 --fp16_backend apex ``` (It OOMs some time later into training) but no hanging.<|||||>So both problem seem to be related to label-smoothing, @sgugger has been testing hypotheses and this one worked: ``` # trainer.py (top) def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=-100): """From fairseq""" if target.dim() == lprobs.dim() - 1: target = target.unsqueeze(-1) nll_loss = -lprobs.gather(dim=-1, index=target) smooth_loss = -lprobs.sum(dim=-1, keepdim=True) if ignore_index is not None: pad_mask = target.eq(ignore_index) nll_loss.masked_fill_(pad_mask, 0.0) smooth_loss.masked_fill_(pad_mask, 0.0) else: nll_loss = nll_loss.squeeze(-1) smooth_loss = smooth_loss.squeeze(-1) nll_loss = nll_loss.sum() # mean()? Scared to break other math. smooth_loss = smooth_loss.sum() eps_i = epsilon / lprobs.size(-1) loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss return loss, nll_loss ``` ``` # trainer.py (in Trainer class) def compute_loss(self, model, inputs): labels = inputs.pop("labels") logits = model(**inputs)[0] return label_smoothed_nll_loss(logits.view(-1, logits.shape[-1]), labels.view(-1), self.args.label_smoothing_factor)[0] ``` **edit** @sgugger says that this code wasn't right, so we currently don't have a solution yet. will keep on experimenting.<|||||>Hi. related to this bug, is my bug report here https://github.com/huggingface/transformers/issues/9311 Is there an alternative allowing me to move forward resolving memory issue for now? thanks<|||||>Well, I don't think it's related other than both using up more RAM ;) This regression happened in a very recent change, but you're using a much older transformers version. I will follow up in your Issue you linked to. <|||||>So `--fp16` seems to be related, if I remove it the regression goes away.
transformers
9,260
closed
Add speed metrics to all example scripts + template
# What does this PR do? This does the same as #9198 but on all examples scripts and the example template.
12-22-2020 16:55:00
12-22-2020 16:55:00
The eval metrics are already reported with the other metrics, so need to add anything for them. Not sure about the refactor since this shouldn't really be a function in transformers (nothing to do with transformers models) so we would have to define it in every one of those scripts, which kind of takes the same length.<|||||>re: eval/train - except n_objs is missing from metrics - remember we had to add it separately in `finetune_trainer` when you refactored it? re: refactor: I haven't suggested anything for the core - we have utils.py for that.<|||||>Yes, but this is something I just did in `finetune_trainer` to have it output the same things as before, for other scripts I don't want that reported several times (it's already logged at the beginning of training/evaluation).
transformers
9,259
closed
Fix script that check objects are documented
# What does this PR do? Currently, the script that checks objects in the main init are not documented is not really running cause I'm stupid and forgot a pair of `()`... This PR fixes that and adds the objects introduces without documentation in their proper place.
12-22-2020 15:47:36
12-22-2020 15:47:36
transformers
9,258
closed
torch.hub colab doesn't work
ERROR: type should be string, got "https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/huggingface_pytorch-transformers.ipynb#scrollTo=T_3y0655Bqbj\r\n```\r\n%%bash\r\npip install tqdm boto3 requests regex sentencepiece sacremoses\r\n```\r\n\r\nThen all cells left don't work!"
12-22-2020 15:27:15
12-22-2020 15:27:15
transformers
9,257
closed
Pegasus Documentation May Conflict With Seq2Seq ReadMe
Here, under `tips and tricks`..... https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#tips-and-tricks `Both finetuning and eval are 30% faster with --fp16. For that you need to install apex.` But in the documentation... https://huggingface.co/transformers/master/model_doc/pegasus.html#examples `FP16 is not supported (help/ideas on this appreciated!).` Also in the documentation https://huggingface.co/transformers/master/model_doc/pegasus.html#examples `Script to fine-tune pegasus on the XSUM dataset.` leads to a 404: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh
12-22-2020 15:17:44
12-22-2020 15:17:44
Hi @kingpalethe, In general, for BART and Marian models, training and eval is faster with fp16, except Pegasus and T5 which currently don't work well with fp16 Yes, the fine-tuning script is now moved under `examples/research_projects/seq2seq-distillation` dir, https://github.com/huggingface/transformers/tree/master/examples/research_projects/seq2seq-distillation Thanks for reporting, Also please note that this script is not maintained anymore and is provided as-is. We only maintain the `finetune_trainer.py` script now.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,256
closed
[EncoderDecoder] Make tests more aggressive
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Before merging #9183, we should make sure that the EncoderDecoder and caching tests are aggressive enough to be sure everything works as expected. In addition, this PR refactors the `_expand_mask` function in Bart making it cleaner and move the responsibility correctly to the `attention_mask` creation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
12-22-2020 14:48:39
12-22-2020 14:48:39
transformers
9,255
closed
Fix link to bertabs/README.md
# What does this PR do?

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-22-2020 14:31:47
12-22-2020 14:31:47
transformers
9,254
closed
Fix link to old language modeling script
# What does this PR do?

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
12-22-2020 14:25:52
12-22-2020 14:25:52
transformers
9,253
closed
Prediction problem with GLUE task
12-22-2020 10:54:34
12-22-2020 10:54:34
I have trained the GLUE task for MRPC, and I want to load the pretrained model and predict on new sentence pairs.
```py
eval_dataset = load_dataset(
    "json", data_files={"test": "/home/aa/paraphrase/data/qqp/tt.json"})
eval_dataset = eval_dataset.map(preprocess_function,
                                batched=False,
                                load_from_cache_file=True)
print(eval_dataset['test']['idx'])
eval_dataset.remove_columns_("label")
trainer = Trainer(model=model, tokenizer=tokenizer)
predictions = trainer.predict(test_dataset=eval_dataset).predictions
print(predictions)
predictions = np.array([softmax(element) for element in predictions])[:, 1]
```
And I got this:
```
load model finish
Using custom data configuration default
Reusing dataset json (/home/aa/.cache/huggingface/datasets/json/default-8988cd19f10ded6e/0.0.0/70d89ed4db1394f028c651589fcab6d6b28dddcabbe39d3b21b4d41f9a708514)
100%|████████| 9/9 [00:00<00:00, 1413.70ex/s]
[0, 1, 2, 3, 4, 5, 6, 7, 8]
Traceback (most recent call last):
  File "predict.py", line 89, in <module>
    predictions = trainer.predict(test_dataset=eval_dataset).predictions
  File "/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/transformers/trainer.py", line 1381, in predict
    test_dataloader, description="Prediction", ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix
  File "/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/transformers/trainer.py", line 1441, in prediction_loop
    for step, inputs in enumerate(dataloader):
  File "/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/aa/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/aaanbo/anaconda3/envs/transformer/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
KeyError: 0
```
Why did I get a KeyError? Can anyone help or show me how to use the pretrained models for sentence-pair prediction? <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
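For reference, a minimal sketch of the likely fix: `load_dataset` with a `data_files` dict returns a `DatasetDict` keyed by split name, not a `Dataset`, so the DataLoader's integer indexing inside `trainer.predict` raises `KeyError: 0`. Selecting the `"test"` split first should resolve it. `model`, `tokenizer` and `preprocess_function` are assumed to be defined as in the snippet above.

```python
from datasets import load_dataset
from transformers import Trainer

# Select the Dataset split before handing it to Trainer; passing the
# DatasetDict itself is what triggers KeyError: 0.
raw = load_dataset("json", data_files={"test": "/home/aa/paraphrase/data/qqp/tt.json"})
test_ds = raw["test"].map(preprocess_function, batched=False)
test_ds = test_ds.remove_columns(["label"])

trainer = Trainer(model=model, tokenizer=tokenizer)
predictions = trainer.predict(test_dataset=test_ds).predictions
```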
transformers
9,252
closed
Fix TF BART for saved model creation
# What does this PR do? This PR fixes the graph execution issue in order to make BART able to create a proper saved model.
12-22-2020 10:45:21
12-22-2020 10:45:21
Quick question on the context before going deeper into the PR: At the moment all "fast" and "slow" `TFBart` tests are passing. I thought model creation was already tested. What is the exact use case for which `TFBart` currently fails? Should we maybe add a new `modeling_tf_common.py` test that would prevent other TF models from having the same error?<|||||>Currently the `test_saved_model_with_hidden_states_output` and `test_saved_model_with_attentions_output` tests are only partially testing the creation of a saved model. When we extend the experiments (such as using `use_cache` and forcing the output to be a dict) it fails, because some parts of the graph were not taken into account when running the slow tests. I'm currently working on having proper saved models and testing most of the possible cases for creating them, and currently TF BART fails for some of them when using a "real" serving approach. You can test it yourself by adding
```
@tf.function(input_signature=[{
    "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
    "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
    "decoder_input_ids": tf.TensorSpec((None, None), tf.int32, name="decoder_input_ids"),
    "decoder_attention_mask": tf.TensorSpec((None, None), tf.int32, name="decoder_attention_mask"),
}])
def serving(self, inputs):
    output = self.call(inputs)
    return self.serving_output(output)

def serving_output(self, output):
    return TFSeq2SeqLMOutput(
        loss=None,
        logits=output.logits,
        past_key_values=output.past_key_values,
        decoder_hidden_states=tf.convert_to_tensor(output.decoder_hidden_states) if self.config.output_hidden_states else None,
        decoder_attentions=tf.convert_to_tensor(output.decoder_attentions) if self.config.output_attentions else None,
        encoder_last_hidden_state=output.encoder_last_hidden_state,
        encoder_hidden_states=tf.convert_to_tensor(output.encoder_hidden_states) if self.config.output_hidden_states else None,
        encoder_attentions=tf.convert_to_tensor(output.decoder_attentions) if self.config.output_attentions else None,
    )
```
to `TFBartForConditionalGeneration` and running:
```
from transformers import TFBartForConditionalGeneration

model = TFBartForConditionalGeneration.from_pretrained("sshleifer/bart-tiny-random")
model.save("here", include_optimizer=False, signatures=model.serving)
```
You can see the following error:
```
ValueError: 'combined_attention_mask' is None at the end of the else branch.
```
This is because, as you can see, the given input is different from the one we test with `dummy_inputs` or in the tests. Here we are compiling a part of the graph that is not used in those cases, and when this part comes to be compiled, it fails. Lessons learned: to properly test the creation of a saved model (graph compilation + execution) we have to test as many inputs as possible, in order to be sure that all the parts of the graph can be compiled and executed. EDIT: I'm also 100% sure that BART is not the only model affected by this.<|||||>Slow tests are passing as well - just verified on brutasse. PR looks good to me now
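As a usage note, here is a hedged sketch of how the saved model produced by the snippet above could be exercised. It assumes the `serving` signature was attached exactly as shown and that the model was exported to `"here"`; the output key `"logits"` is inferred from `serving_output` and may differ in the flattened signature outputs.

```python
import tensorflow as tf

# Load the saved model exported above and grab its serving signature.
loaded = tf.saved_model.load("here")
serving_fn = loaded.signatures["serving_default"]

# Signature inputs are flat keyword tensors named after the TensorSpecs.
outputs = serving_fn(
    input_ids=tf.constant([[0, 14, 25, 2]], dtype=tf.int32),
    attention_mask=tf.constant([[1, 1, 1, 1]], dtype=tf.int32),
    decoder_input_ids=tf.constant([[0, 14]], dtype=tf.int32),
    decoder_attention_mask=tf.constant([[1, 1]], dtype=tf.int32),
)
print(outputs["logits"].shape)  # (1, 2, vocab_size) for the tiny checkpoint
```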
transformers
9,251
closed
Model Templates for Seq2Seq
# What does this PR do?

This PR adds the possibility to generate Encoder-Decoder models via the cookie-cutter tool. Model is correctly generated for PT and TF with all tests passing. Two test files are added. These templates should very much facilitate the addition of Pegasus, Blenderbot, Marian as separate model files as well as adding BigBird, etc...

Please note that for now the safety checks: `# Copied from transformers.models.bart.modeling_bart...` are only added for very few layers because:

- Bart has some hacks that we should not copy for new models, but that we need to keep for backwards compatibility. E.g. positional embeddings have an offset hack leading to slightly too large positional embeddings, which we should not repeat (same as in RoBERTa), automatic creation of `decoder_input_ids` is a special feature and not the default case, Sinusoidal position embeddings are IMO also not general enough to be in the templates
- `modeling_bart.py` still has the `add_layer_norm` hacks which are not copied to the model templates. When Bart is separated into Pegasus, etc... those if-else hacks can be deleted from `modeling_bart.py` at which point some more `# Copied from transformers.models.bart.modeling_bart...` should be added to the Seq2Seq model templates
12-22-2020 08:30:48
12-22-2020 08:30:48
Improvements to TFBart: https://github.com/huggingface/transformers/pull/9252 are now included in this PR as well.
transformers
9,250
closed
ValueError: Tokenizer class T5Tokenizer does not exist or is not currently imported.
@mfuntowicz

## Environment info
- `transformers` version: Latest transformers==4.2.0.dev0
- Platform: Colab
- Python version: Python 3.6.9
- PyTorch version (GPU?): torch==1.7.0+cu101
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@mfuntowicz

## Information
The following code, indicated in the latest HF newsletter, seems to have issues: when I tried it, I got a tokenizer error under both fast and slow (`use_fast` True/False) settings.

The problem arises when using:
* [x] the official example scripts: (give details below)

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa", use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")

context = "HuggingFace won the best Demo paper at EMNLP2020."
question = "What won HuggingFace?"
input_text = 'question: %s  context: %s' % (question, context)
features = tokenizer([input_text], return_tensors='pt')
output = model.generate(**features)
tokenizer.decode(output[0])
```

## To reproduce
Steps to reproduce the behavior:
1. Run the above code on Google Colab

**ERROR reported**
```
ValueError                                Traceback (most recent call last)
<ipython-input-3-87256159791c> in <module>()
     10 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
     11
---> 12 tokenizer = AutoTokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa", use_fast=False)
     13
     14 model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")

/usr/local/lib/python3.6/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
    358         if tokenizer_class is None:
    359             raise ValueError(
--> 360                 "Tokenizer class {} does not exist or is not currently imported.".format(tokenizer_class_candidate)
    361             )
    362         return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)

ValueError: Tokenizer class T5Tokenizer does not exist or is not currently imported.
```
12-22-2020 07:14:51
12-22-2020 07:14:51
Hey @nsankar, I cannot reproduce the above error concerning the tokenizer. The tokenizer is loaded correctly in my command line. However it seems like the model weights are not 100% correct. @mrm8488 when I load the model via: ```python model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa") ``` I get the following warning: ``` 2020-12-22 11:59:05.111580: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory 2020-12-22 11:59:05.111618: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Some weights of the model checkpoint at mrm8488/mT5-small-finetuned-tydiqa-for-xqa were not used when initializing T5ForConditionalGeneration: ['encoder.block.0.layer.1.DenseReluDense.wi.weight', 'encoder.block.1.layer.1.DenseReluDense.wi.weight', 'encoder.block.2.layer.1.DenseReluDense.wi.weight', 'encoder.block.3.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.1.DenseReluDense.wi.weight', 'encoder.block.5.layer.1.DenseReluDense.wi.weight', 'encoder.block.6.layer.1.DenseReluDense.wi.weight', 'encoder.block.7.layer.1.DenseReluDense.wi.weight', 'decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight', 'decoder.block.0.layer.2.DenseReluDense.wi.weight', 'decoder.block.1.layer.2.DenseReluDense.wi.weight', 'decoder.block.2.layer.2.DenseReluDense.wi.weight', 'decoder.block.3.layer.2.DenseReluDense.wi.weight', 'decoder.block.4.layer.2.DenseReluDense.wi.weight', 'decoder.block.5.layer.2.DenseReluDense.wi.weight', 'decoder.block.6.layer.2.DenseReluDense.wi.weight', 'decoder.block.7.layer.2.DenseReluDense.wi.weight'] - This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at mrm8488/mT5-small-finetuned-tydiqa-for-xqa and are newly initialized: ['encoder.block.0.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.0.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.1.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.2.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.3.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.4.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.5.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.6.layer.1.DenseReluDense.wi_1.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_0.weight', 'encoder.block.7.layer.1.DenseReluDense.wi_1.weight', 'decoder.block.0.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.0.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.1.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.1.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.2.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.2.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.3.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.3.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.4.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.4.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.5.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.5.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.6.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.6.layer.2.DenseReluDense.wi_1.weight', 'decoder.block.7.layer.2.DenseReluDense.wi_0.weight', 'decoder.block.7.layer.2.DenseReluDense.wi_1.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` -> I think the weights uploaded here correspond to the "old" T5 version. It would be awesome if you could check the weights :-) Also in the config: https://huggingface.co/mrm8488/mT5-small-finetuned-tydiqa-for-xqa/blob/main/config.json, the architecture `"T5ForConditionalGeneration"` is used as well as `"t5"` for the model type, but it should be `"MT5ForConditionalGeneration"` and `"mt5"` I think :-) <|||||>Thanks @patrickvonplaten. I will check it out, ASAP.<|||||>It seems to happen with other models: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("moussaKam/mbarthez") Traceback (most recent call last): File "/home/user/.local/share/virtualenvs/project/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3418, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-4-028660e65504>", line 3, in <module> tokenizer = AutoTokenizer.from_pretrained("moussaKam/mbarthez") File "/home/user/.local/share/virtualenvs/project/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 359, in from_pretrained raise ValueError( ValueError: Tokenizer class BarthezTokenizer does not exist or is not currently imported. ``` And: ``` (project) user@ubuntu:/mnt/workspace/project$ pip list | grep transformers transformers 4.1.1 ```<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. 
If you think this still needs to be addressed please comment on this thread.<|||||>I had a similar problem, `ValueError: Tokenizer class M2M100Tokenizer does not exist or is not currently imported.`, and solved it by running `pip install sentencepiece`. It seems that when the `sentencepiece` package is missing, `AutoTokenizer.from_pretrained` will silently fail to load the tokenizer and then crash later.<|||||>> I had a similar problem, `ValueError: Tokenizer class M2M100Tokenizer does not exist or is not currently imported.`, and solved it by running `pip install sentencepiece`.
>
> It seems that when the `sentencepiece` package is missing, `AutoTokenizer.from_pretrained` will silently fail to load the tokenizer and then crash later.

This works fabulously with DeBERTa models as well; it seems the error isn't very descriptive.<|||||>I think on current master a better error message is given when `from_pretrained(...)` is called from a dummy object. cc @sgugger :-)<|||||>> I had a similar problem, `ValueError: Tokenizer class M2M100Tokenizer does not exist or is not currently imported.`, and solved it by running `pip install sentencepiece`.

But it doesn't work for me. :-(
```
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
ValueError: Tokenizer class BloomTokenizerFast does not exist or is not currently imported.
```<|||||>> > I had a similar problem and solved it by running `pip install sentencepiece`.

Well, the newest version of transformers works for me.<|||||>I'm getting the same error with transformers==4.26 when trying to load [ernie-m-base](https://huggingface.co/PaddlePaddle/ernie-m-base) with
```
MODEL_NAME = "PaddlePaddle/ernie-m-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True, model_max_length=max_length)  # model_max_length=512
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, label2id=label2id, id2label=id2label).to(device)
```
```
Traceback (most recent call last):
  File "/gpfs/home5/laurerm/nli-scratch/nli_training.py", line 41, in <module>
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True, model_max_length=max_length)  # model_max_length=512
  File "/home/laurerm/.local/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 655, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class ErnieMTokenizer does not exist or is not currently imported.
```
The exact same code worked two days ago with XLM-V. I've made sure that sentencepiece is installed. Edit: Ah, I think the error currently comes up because ernie-m is on the hub but not yet merged into master for transformers: https://github.com/huggingface/transformers/pull/21349 (?)
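For reference, a minimal sketch combining the two workarounds discussed in this thread: verify the `sentencepiece` backend is importable (its absence makes `AutoTokenizer` silently fail to resolve the class), and load mT5 checkpoints with the explicit classes, which bypasses the Auto-class lookup entirely. The checkpoint name is the one from the report above.

```python
import importlib.util

# Several reports above boil down to a missing sentencepiece backend.
assert importlib.util.find_spec("sentencepiece") is not None, \
    "pip install sentencepiece, or AutoTokenizer may fail to resolve the tokenizer class"

from transformers import MT5ForConditionalGeneration, T5Tokenizer

# Explicit classes avoid the AutoTokenizer class-name resolution step.
tokenizer = T5Tokenizer.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
model = MT5ForConditionalGeneration.from_pretrained("mrm8488/mT5-small-finetuned-tydiqa-for-xqa")
```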
transformers
9,249
closed
GPT2 distributed TPU pre-training using run_clm.py
## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-4.9.0-14-amd64-x86_64-with-debian-9.13
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.0a0+5c3788d (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: yes, using v3-8 TPUs

### Who can help
albert, bert, GPT2, XLM: @LysandreJik
Trainer: @sgugger

## Information
Model I am using (Bert, XLNet ...): GPT2

The problem arises when using:
* [x] the official example scripts: (give details below)

The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)

## To reproduce
Steps to reproduce the behavior:
1. Create a GCP VM with pytorch/xla support
2. Create a v3-8 TPU
3. Install the transformers and datasets libs, then run:
```ruby
python3 transformers/examples/xla_spawn.py --num_cores=8 \
	transformers/examples/language-modeling/run_clm.py \
	--model_name_or_path gpt2 \
	--dataset_name wikitext \
	--dataset_config_name wikitext-2-raw-v1 \
	--do_train \
	--do_eval \
	--output_dir /tmp/test-clm
```
Then this error comes up:
```ruby
Traceback (most recent call last):
  File "transformers/examples/xla_spawn.py", line 85, in <module>
    main()
  File "transformers/examples/xla_spawn.py", line 81, in main
    xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
  File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 394, in spawn
    start_method=start_method)
  File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 205, in start_processes
    while not context.join():
  File "/anaconda3/envs/torch-xla-nightly/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 160, in join
    exit_code=exitcode
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with exit code 17
```
And those exceptions come up before the error:
```ruby
[[{{node XRTCompile}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: Ran out of memory in memory space hbm. Used 16.61G of 15.98G hbm. Exceeded hbm capacity by 645.48M.
Total hbm usage >= 16.63G:
    reserved        18.00M
    program         13.93G
    arguments        2.68G (100.0% utilization)
Output size 192.01M (100.0% utilization); shares 0B with arguments.
Program hbm requirement 13.93G:
    global            4.0K
    HLO temp        13.93G (91.5% utilization: Unpadded (12.75G) Padded (13.93G), 0.0% fragmentation (2.60M))
Largest program allocations in hbm:
1. Size: 1.53G
   Shape: pred[8,1023,50257]{1,2,0:T(8,128)E(32)}
   Unpadded size: 392.25M
   Extra memory due to padding: 1.15G (4.0x expansion)
   XLA label: %broadcast.4850.remat3 = pred[8,1023,50257]{1,2,0:T(8,128)E(32)} broadcast(pred[]{:T(256)E(32)} %constant.4065), dimensions={}
   Allocation type: HLO temp
   ==========================
2. Size: 785.38M
   Shape: bf16[8,1023,50257]{1,2,0:T(8,128
```

## Expected behavior
I don't think this is an OOM problem since I am using an 8-core TPU, so it must be an XLA multiprocessing problem.
12-22-2020 06:30:55
12-22-2020 06:30:55
It clearly states you're out of hbm memory, which is the TPU memory from what Google tells me. I think you have to specify a lower batch size or a lower `block_size` (GPT-2 uses a very big one by default).<|||||>@sgugger yup, this was exactly what was wrong: my batch size (per device) is 8, so I didn't consider that I was overloading the TPU, but I totally forgot about `block_size` (which defaults to 1024). Thank you very much.
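For context, a simplified illustration of why `block_size` matters here, modeled on the `group_texts` pattern used by `run_clm.py` (this is the editor's sketch, not the exact script code): texts are concatenated and chopped into `block_size`-long chunks, so halving `block_size` roughly halves per-sample sequence length and therefore activation memory. In the CLI, this corresponds to passing `--block_size 512` to `run_clm.py`.

```python
block_size = 512  # instead of GPT-2's 1024 default

def group_texts(examples):
    # Concatenate all tokenized texts, then split into fixed-size blocks.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
```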
transformers
9,248
closed
numpy ndarray type is not allowed on process pytorch model
## Environment info
- `transformers` version: 4.1.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help
tensorflow: @jplu

## Information
Model I am using (Bert, XLNet ...): Bert

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

Changed 1 line so that `from_pt`'s default value goes from `False` to `True` (https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/modeling_tf_utils.py#L947)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

HuggingFace: monologg/kobert

I loaded my pretrained PyTorch model, and an error occurred in the input_processing function.

## To reproduce
Steps to reproduce the behavior:
1. Run input_processing
2. numpy ndarray is not an allowed type (https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/modeling_tf_utils.py#L331)
3. So the ndarray cannot be processed (https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/modeling_tf_utils.py#L354)
4. Of course, there is no proper conversion for ndarray, unlike the other types (dict, Tensor, etc.)
5. An error occurred like below:
```
  File "/Users/ys/dev/rasa/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 357, in input_processing
    raise ValueError(f"Data of type {type(v)} is not allowed only {allowed_types} is accepted for {k}.")
ValueError: Data of type <class 'numpy.ndarray'> is not allowed only (<class 'tensorflow.python.framework.ops.Tensor'>, <class 'bool'>, <class 'int'>, <class 'transformers.file_utils.ModelOutput'>, <class 'tuple'>, <class 'list'>, <class 'dict'>) is accepted for attention_mask.
```

## Expected behavior
Training succeeds!
12-22-2020 06:16:19
12-22-2020 06:16:19
Hello! Thanks for reporting this, we will apply a fix for the next release!<|||||>@LoveMeWithoutAll thanks for the issue! Could you specify your use case here a bit? Do you want to convert a PyTorch model to a tensorflow model and consequently train the tensorflow model? Why do we need to forward `ndarray` types?<|||||>@patrickvonplaten Hello, I'm using the [Rasa](https://rasa.com) framework, which embeds HuggingFace. Rasa does not yet support PyTorch models, so I must convert from PyTorch to a TF model to use the pre-trained model. The `ndarray` is needed to convert the model.<|||||>@LoveMeWithoutAll Hi, I am also using the Rasa framework with HFTransformers and Language Models in the pipeline config. I faced the same issue with the latest transformers, but it works with lower versions, transformers-2.9.0. I haven't tested other versions (probably some 3.x might work too!), but 2.9.0 shall work fine if your project setup allows a lower-version transformers.<|||||>> @LoveMeWithoutAll Hi, I am also using the Rasa framework with HFTransformers and Language Models in the pipeline config. I faced the same issue with the latest transformers, but it works with lower versions, transformers-2.9.0. I haven't tested other versions (probably some 3.x might work too!), but 2.9.0 shall work fine if your project setup allows a lower-version transformers.

Thank you for your advice! I'll try it as you did.<|||||>This was fixed on `master`: https://github.com/huggingface/transformers/pull/9294 We'll release a new version tomorrow, which will benefit from this change. Thanks for reporting it!<|||||>Thank you for your effort!
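For readers on the affected version, a hedged workaround sketch: convert numpy arrays to `tf.Tensor` before calling the TF model, since `input_processing` rejected `ndarray` inputs prior to the fix in #9294. The checkpoint below is an illustrative stand-in (the reporter used monologg/kobert via Rasa).

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertModel.from_pretrained("bert-base-uncased")

# return_tensors="np" yields numpy arrays, the type that used to be rejected.
features = tokenizer("Hello world", return_tensors="np")

# Convert every array to a tf.Tensor so input_processing accepts it.
inputs = {k: tf.convert_to_tensor(v) for k, v in features.items()}
outputs = model(inputs)
```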
transformers
9,247
closed
T5 tokenizer.vocab_size and config.vocab_size mismatch?
## Environment info
- `transformers` version: 4.1.1
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1
- tokenizers: 0.9.4
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

## Information
Hi @patrickvonplaten, I am trying to train a "t5-base" model and I directly use the from_pretrained tokenizer, config and model. However, I found that the vocabulary size given by the tokenizer and the config is different (see "To reproduce"). Is this expected? If I use the model `T5ForConditionalGeneration.from_pretrained('t5-base', config=config)` to do predictions, this will result in the last dimension of lm_logits being different from `tokenizer.vocab_size`.

## To reproduce
Steps to reproduce the behavior:
```
>>> from transformers import T5Tokenizer, T5Config
>>> tokenizer = T5Tokenizer.from_pretrained("t5-base")
>>> config = T5Config.from_pretrained("t5-base")
>>> print(tokenizer.vocab_size)
32100
>>> print(config.vocab_size)
32128
```

## Expected behavior
```
>>> print(tokenizer.vocab_size)
32128
>>> print(config.vocab_size)
32128
```
12-22-2020 05:02:11
12-22-2020 05:02:11
Duplicate of https://github.com/huggingface/transformers/issues/4875.<|||||>I see. I simply ignored this mismatch and nothing seems wrong with prediction. Thank you!
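For reference, a short sketch of the two hedged ways to handle the 32100 vs. 32128 gap:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Option 1: do nothing -- the 28 surplus logit columns correspond to ids the
# tokenizer never emits, so argmax/decoding are unaffected in practice.

# Option 2: make the shapes match explicitly by shrinking the embeddings
# (and the tied lm_head) to the tokenizer's length.
model.resize_token_embeddings(len(tokenizer))
print(model.config.vocab_size)  # 32100 after resizing
```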
transformers
9,246
closed
AssertionError: Non-consecutive added token '<pad>' found. Should have index 40002 but has index 40000 in saved vocabulary
torch: 1.6.0
transformers: 3.5.1
OS: centos 7
GPU: A100

I trained a sentencepiece bpe model. There is no problem if I load it with `XLMRobertaTokenizer`. But when I load it with `XLMRobertaTokenizerFast`, it takes a long time in `transformers/convert_slow_tokenizer.py`. After saving the tokenizer and loading it with `from_pretrained`, this error occurs:
```
/ProjectRoot/tp_origin/pyenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
   1758             for token, index in added_tok_encoder_sorted:
   1759                 assert index == len(tokenizer), (
-> 1760                     f"Non-consecutive added token '{token}' found. "
   1761                     f"Should have index {len(tokenizer)} but has index {index} in saved vocabulary."
   1762                 )

AssertionError: Non-consecutive added token '<pad>' found. Should have index 40002 but has index 40000 in saved vocabulary.
```
I find that a file `added_tokens.json` is created with the content `{"<pad>": 40000, "<mask>": 40001}`.

My tokenizer:
```
PreTrainedTokenizerFast(name_or_path='/ProjectRoot/tp_origin/distillation/tmp/student_init_model3/0_Transformer', vocab_size=40000, model_max_len=514, is_fast=True, padding_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': '<mask>'})
```
12-22-2020 04:35:38
12-22-2020 04:35:38
I think the problem is that the saved tokenizer saves `len(tokenizer)` = 40002. So when I load it, the added token ids start from 40000 and the error occurs.<|||||>Hey @thesby, Did you add any special tokens to `XLMRobertaTokenizer` that weren't there previously? Could you copy/paste the code you used to train the tokenizer here as well? Thanks!<|||||>All special tokens were added by the sentencepiece trainer. I never added any myself.
```
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib64/ spm_train --input=main.txt --model_prefix=sentencepice3.bpe --vocab_size=40000 --character_coverage=0.9995 --model_type=bpe --max_sentencepiece_length=4 --num_threads=64 --split_digits=true --input_sentence_size=2000000 -shuffle_input_sentence=true
```<|||||>Maybe I should train with `unigram`, not `bpe`.<|||||>@thesby have you solved this problem?<|||||>Hi, I'm having a similar problem.
```
from transformers import GPT2Tokenizer

class VisualCometTokenizer(GPT2Tokenizer):
    def __init__(self,
                 vocab_file,
                 merges_file,
                 errors='replace',
                 unk_token="<|endoftext|>",
                 bos_token="<|endoftext|>",
                 eos_token="<|endoftext|>",
                 begin_img="<|b_img|>",
                 end_img="<|e_img|>",
                 begin_event="<|b_ev|>",
                 end_event="<|e_ev|>",
                 begin_place="<|b_pl|>",
                 end_place="<|e_pl|>",
                 begin_inferences={'before': "<|before|>", 'intent': "<|intent|>", 'after': "<|after|>"},
                 end_inference="<|e_in|>",
                 **kwargs):
        super(VisualCometTokenizer, self).__init__(
            vocab_file, merges_file, errors=errors, bos_token=bos_token, eos_token=eos_token, unk_token=unk_token, **kwargs
        )
        self.begin_img = begin_img
        self.end_img = end_img
        self.begin_event = begin_event
        self.end_event = end_event
        self.begin_place = begin_place
        self.end_place = end_place
        self.begin_inferences = begin_inferences
        self.end_inference = end_inference
        self.det_tokens = ['<|det%d|>' % i for i in range(50)]
        self.add_special_tokens({
            "additional_special_tokens": [self.begin_img, self.end_img, self.begin_event, self.end_event, self.begin_place, self.end_place, self.end_inference]
                                         + list(self.begin_inferences.values()) + self.det_tokens
        })

tokenizer = VisualCometTokenizer.from_pretrained("gpt2")
tokenizer.save_pretrained("/content/test")
tokenizer = VisualCometTokenizer.from_pretrained("/content/test")
```
Will cause this:
```
AssertionError                            Traceback (most recent call last)
<ipython-input-34-e955f380827e> in <module>()
     46 tokenizer.save_pretrained("/content/test")
     47
---> 48 tokenizer = VisualCometTokenizer.from_pretrained("/content/test")

/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
   1809             for token, index in added_tok_encoder_sorted:
   1810                 assert index == len(tokenizer), (
-> 1811                     f"Non-consecutive added token '{token}' found. "
   1812                     f"Should have index {len(tokenizer)} but has index {index} in saved vocabulary."
   1813                 )

AssertionError: Non-consecutive added token '<|b_img|>' found. Should have index 50317 but has index 50257 in saved vocabulary.
```
++ I moved the `add_special_tokens` call outside of `__init__` and it loads fine. You have to add tokens outside of `__init__` and save; when you load the tokenizer again, it won't try to add the tokens twice. A better fix would be to permit adding the same token to the tokenizer again, or to throw a warning in huggingface's tokenizer_utils.py.<|||||>This issue has been automatically marked as stale because it has not had recent activity.
If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
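For reference, a minimal sketch of the pattern that avoids the assertion: add the special tokens once, after construction, rather than inside `__init__`, so reloading a saved tokenizer does not re-register them at stale indices. The token names below mirror the snippet above and are illustrative.

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
extra = ["<|b_img|>", "<|e_img|>"] + ["<|det%d|>" % i for i in range(50)]

# add_special_tokens returns how many tokens were actually new; tokens the
# tokenizer already knows are skipped, so this is safe to call once.
num_added = tokenizer.add_special_tokens({"additional_special_tokens": extra})
print(num_added)  # 52 on a fresh gpt2 tokenizer

tokenizer.save_pretrained("test-tok")
reloaded = GPT2Tokenizer.from_pretrained("test-tok")  # loads cleanly
```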
transformers
9,245
closed
[s2s] test_finetune_trainer_slow fails when run in group
On dual GPU, when running `test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow` alone - all is good. When running it with all the other tests in that file, it fails:
```
RUN_SLOW=1 pytest -sv test_finetune_trainer.py
[...]
self = <seq2seq.test_finetune_trainer.TestFinetuneTrainer testMethod=test_finetune_trainer_slow>

    @slow
    def test_finetune_trainer_slow(self):
        # There is a missing call to __init__process_group somewhere
        output_dir = self.run_trainer(
            eval_steps=2, max_len="128", model_name=MARIAN_MODEL, num_train_epochs=10, distributed=False
        )

        # Check metrics
        logs = TrainerState.load_from_json(os.path.join(output_dir, "trainer_state.json")).log_history
        eval_metrics = [log for log in logs if "eval_loss" in log.keys()]
        first_step_stats = eval_metrics[0]
        last_step_stats = eval_metrics[-1]

>       assert first_step_stats["eval_bleu"] < last_step_stats["eval_bleu"]  # model learned nothing
E       AssertionError: assert 0.0 < 0.0

test_finetune_trainer.py:130: AssertionError
----------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------
WARNING  seq2seq.finetune_trainer:finetune_trainer.py:160 Process rank: -1, device: cuda:0, n_gpu: 2, distributed training: False, 16-bits training: False
==================================================================== short test summary info ====================================================================
FAILED test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow - AssertionError: assert 0.0 < 0.0
================================================ 1 failed, 7 passed, 1 skipped, 17 warnings in 102.82s (0:01:42) ================================================
```
For some reason it fails to learn anything when some other tests run before it. Tested with pytorch-nightly + py38.
12-22-2020 04:29:31
12-22-2020 04:29:31
I also have the same failure on one GPU on my side FYI (but no failure when run on its own).<|||||>Thank you, @sgugger - your input helped a lot to reduce the sequence quickly! So this sequence fails: ``` CUDA_VISIBLE_DEVICES=0 RUN_SLOW=1 pytest \ test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_apex \ test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow ``` Something about apex. This sequence with another similar test before it but no apex doesn't fail: ``` CUDA_VISIBLE_DEVICES=0 RUN_SLOW=1 pytest \ test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_no_dist \ test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow ```<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,244
closed
BatchEncoding.to accepted types too restrictive
## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.14.81.bm.15-amd64-x86_64-with-debian-9.11
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes

### Who can help

## Information
In `BatchEncoding.to`, the only accepted class types are `str` and `torch.device`. I think some libraries like pytorch-lightning call `.to` with the integer value for the GPU number, and HF complains about this even though it is perfectly valid:
```
>>> x = torch.zeros(1)
>>> x.to(0)
tensor([0.], device='cuda:0')
```

## Expected behavior
Also allow int values in `BatchEncoding.to`.
12-22-2020 03:47:37
12-22-2020 03:47:37
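For readers hitting this on 4.1.1, a hedged workaround sketch: wrap the bare GPU index in `torch.device` (or a `"cuda:N"` string) before calling `BatchEncoding.to`. This requires a CUDA-enabled PyTorch build to actually move the tensors.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer("Hello", return_tensors="pt")

gpu_index = 0  # the bare int that Lightning-style code might pass
batch = batch.to(torch.device(gpu_index))  # torch.device(0) == device('cuda:0')
```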
transformers
9,243
closed
AssertionError with model_parallel in run_clm.py
## Environment info
- `transformers` version: 4.1.1
- Platform: AWS Sagemaker
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: YES

### Who can help
@LysandreJik @sgugger @alexorona

## Information
Model I am using (Bert, XLNet ...): GPT2

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

I am fine-tuning GPT2 using my own dataset, using the examples/language-modeling/run_clm.py script, and I want to use the new model_parallel feature in v4.1.1. I am using a multi-gpu instance (AWS p3.8xlarge - with 4 gpus). But I get this error:

AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [2], output_device 2, and module parameters {device(type='cpu')}.

## To reproduce
Steps to reproduce the behavior:
1. Run run_clm.py with the following params:
```
python -m torch.distributed.launch \
    --nproc_per_node 4 run_clm.py \
    --do_train \
    --do_eval \
    --fp16 \
    --logging_first_step \
    --model_parallel \
    --evaluation_strategy epoch \
    --logging_steps 50 \
    --model_name_or_path gpt2 \
    --model_type gpt2 \
    --num_train_epochs 1 \
    --output_dir /opt/ml/model/ \
    --per_device_eval_batch_size 2 \
    --per_device_train_batch_size 2 \
    --save_steps 50 \
    --save_total_limit 1 \
    --train_file /opt/ml/input/data/data/train.txt \
    --validation_file /opt/ml/input/data/data/val.txt
```
2.
I get this error when training starts:
```
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [1], output_device 1, and module parameters {device(type='cpu')}.
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [2], output_device 2, and module parameters {device(type='cpu')}.
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [3], output_device 3, and module parameters {device(type='cpu')}.
[INFO|trainer.py:388] 2020-12-22 00:54:34,892 >> The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
[INFO|trainer.py:388] 2020-12-22 00:54:34,892 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [0], output_device 0, and module parameters {device(type='cpu')}.
Traceback (most recent call last): File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 261, in <module> main() File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 257, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_clm.py', '--local_rank=3', '--do_train', '--do_eval', '--fp16', '--logging_first_step', '--model_parallel', '--evaluation_strategy', 'epoch', '--logging_steps', '50', '--model_name_or_path', 'gpt2', '--model_type', 'gpt2', '--num_train_epochs', '1', '--output_dir', '/opt/ml/model/', '--per_device_eval_batch_size', '2', '--per_device_train_batch_size', '2', '--save_steps', '50', '--save_total_limit', '1', '--train_file', '/opt/ml/input/data/data/train.txt', '--validation_file', '/opt/ml/input/data/data/val.txt']' returned non-zero exit status 1. ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expect the fine-tuning script to run successfully. If I remove the --model_parallel in the args, then it does run successfully in distributed mode. But I want to use this new feature to reduce memory usage, and increase batch_size
12-22-2020 01:27:58
12-22-2020 01:27:58
I'm not the expert on the model parallel feature, but I think it's not supposed to be launched with `torch.distributed` as it will only use one process, then split the layers of your model on several GPUs.<|||||>Thanks for the fast response @sgugger I tried your suggestion and ran the following (removing the torch.distributed.launch): ``` python run_clm.py \ --do_train \ --do_eval \ --fp16 \ --logging_first_step \ --model_parallel \ --evaluation_strategy epoch \ --logging_steps 50 \ --model_name_or_path gpt2 \ --model_type gpt2 \ --num_train_epochs 1 \ --output_dir /opt/ml/model/ \ --per_device_eval_batch_size 2 \ --per_device_train_batch_size 2 \ --save_steps 50 \ --save_total_limit 1 \ --train_file /opt/ml/input/data/data/train.txt \ --validation_file /opt/ml/input/data/data/val.txt ``` I then see this error: ``` File "run_clm.py", line 374, in <module> main() File "run_clm.py", line 344, in main trainer.train(model_path=model_path) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 799, in train tr_loss += self.training_step(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1137, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1163, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 732, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 895, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 732, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 681, in forward inputs_embeds = self.wte(input_ids) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 732, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 126, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1814, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select ```<|||||>Like I said, not the parallel expert, you should try tagging the person who added the functionality :-)<|||||>@laphang Somewhere **data parallelism** is being triggered in run_clm.py. I highly suspect it's because you set `per_device_train_batch_size` and `per_device_eval_batch_size` to a value greater than 1, so the script is probably confused. You indicated you wanted to use **model parallelism** -- i.e. _split a single model into pieces and distribute those pieces across several devices_ so that, for example, the embedding layers and the first several attention blocks are only on the first GPU. A sample starts on the first GPU and is automatically handed off to another GPU as it goes through the mode. Then you indicated that you want to assign different batches to different GPUs when they all have to start on the first GPU. This might not be a problem though because it actually sounds like you want data parallelism, which _duplicates the model to each device to train a larger batch_. You don't need model parallelism for that. 
Data parallelism is the default behavior of Trainer. So to summarize:
- **Model parallelism**: lets you train bigger models (e.g. gpt2-xl). Set `per_device_eval_batch_size` and `per_device_train_batch_size` to 1.
- **Data parallelism**: lets you train bigger batch sizes by duplicating the model to several GPUs and training on more samples at the same time. Set `model_parallel` to false and the trainer will automatically default to data parallelism when you have more than one GPU.<|||||>@sgugger Short term, we need to add this to `TrainingArguments`:
```
if self.model_parallel:
    assert self.per_device_train_batch_size == 1, "Model is parallelized, but per_device_train_batch_size is not 1. Model parallelism only supports a batch size of one at this time."
    assert self.per_device_eval_batch_size == 1, "Model is parallelized, but per_device_eval_batch_size is not 1. Model parallelism only supports a batch size of one at this time."
```
In the long term, we need to figure out how to enable batches for model parallelism. Batches aren't assigned to devices, so the current `per_device`... arguments only make sense for data parallelism. <|||||>Hi @alexorona, thanks for the quick response. I've just gotten back from the Christmas / New Year break and am getting back into things. I tried setting the batch sizes to 1, but I still seem to get basically the same errors. (I also switched to gpt2-large)

A) when running this:
```
python -m torch.distributed.launch \
    --nproc_per_node 4 run_clm.py \
    --do_train \
    --do_eval \
    --fp16 \
    --logging_first_step \
    --model_parallel \
    --evaluation_strategy epoch \
    --logging_steps 50 \
    --model_name_or_path gpt2-large \
    --model_type gpt2 \
    --num_train_epochs 1 \
    --output_dir /opt/ml/model/ \
    --per_device_eval_batch_size 1 \
    --per_device_train_batch_size 1 \
    --save_steps 50 \
    --save_total_limit 1 \
    --train_file /opt/ml/input/data/data/train.txt \
    --validation_file /opt/ml/input/data/data/val.txt
```
I get this:
```
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [1], output_device 1, and module parameters {device(type='cpu')}.
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [2], output_device 2, and module parameters {device(type='cpu')}.
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [3], output_device 3, and module parameters {device(type='cpu')}.
[INFO|trainer.py:388] 2021-01-04 23:04:51,956 >> The following columns in the training set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
[INFO|trainer.py:388] 2021-01-04 23:04:51,957 >> The following columns in the evaluation set don't have a corresponding argument in `GPT2LMHeadModel.forward` and have been ignored: .
Traceback (most recent call last):
  File "run_clm.py", line 374, in <module>
    main()
  File "run_clm.py", line 344, in main
    trainer.train(model_path=model_path)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 681, in train
    else True
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 282, in __init__
    ).format(device_ids, output_device, {p.device for p in module.parameters()})
AssertionError: DistributedDataParallel device_ids and output_device arguments only work with single-device CUDA modules, but got device_ids [0], output_device 0, and module parameters {device(type='cpu')}.
Traceback (most recent call last): File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 261, in <module> main() File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 257, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_clm.py', '--local_rank=3', '--do_train', '--do_eval', '--fp16', '--logging_first_step', '--model_parallel', '--evaluation_strategy', 'epoch', '--logging_steps', '50', '--model_name_or_path', 'gpt2-large', '--model_type', 'gpt2', '--num_train_epochs', '1', '--output_dir', '/opt/ml/model/', '--per_device_eval_batch_size', '1', '--per_device_train_batch_size', '1', '--save_steps', '50', '--save_total_limit', '1', '--train_file', '/opt/ml/input/data/data/train.txt', '--validation_file', '/opt/ml/input/data/data/val.txt']' returned non-zero exit status 1. ``` B) when running this: ``` python run_clm.py \ --do_train \ --do_eval \ --fp16 \ --logging_first_step \ --model_parallel \ --evaluation_strategy epoch \ --logging_steps 50 \ --model_name_or_path gpt2-large \ --model_type gpt2 \ --num_train_epochs 1 \ --output_dir /opt/ml/model/ \ --per_device_eval_batch_size 1 \ --per_device_train_batch_size 1 \ --save_steps 50 \ --save_total_limit 1 \ --train_file /opt/ml/input/data/data/train.txt \ --validation_file /opt/ml/input/data/data/val.txt ``` I get this: ``` Traceback (most recent call last): File "run_clm.py", line 374, in <module> main() File "run_clm.py", line 344, in main trainer.train(model_path=model_path) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 799, in train tr_loss += self.training_step(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1137, in training_step loss = self.compute_loss(model, inputs) File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1163, in compute_loss outputs = model(**inputs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 732, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 895, in forward return_dict=return_dict, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 732, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 681, in forward inputs_embeds = self.wte(input_ids) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 732, in _call_impl result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 126, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py", line 1814, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select #015 0%| | 0/813 [00:00<?, ?it/s] ``` I had a few questions / comments: 1) For model_parallel, am I supposed to use torch.distributed.launch or not? I wasn't 100% clear on that. 
2) Seems like I'm still getting similar errors with batch_sizes 1 as I was getting previously, any other thoughts on what the issue is? 3) For some use cases, I would be interested in using model parallelism and data parallelism together (e.g. for models that currently just fit on the gpu memory with batch size 1 or 2 - I am presuming that splitting the model with model parallelism would allow space in the gpu memory for larger batch sizes and increase speed?). So would definitely be interested in any future changes that allow for that. (per your last comment)<|||||>FYI I noticed that this was included in v4.2.0, removing model_parallel arg from trainer. It hadn't been made to work yet. https://github.com/huggingface/transformers/pull/9451 I'll wait for it to be included in the trainer. <|||||>It is included, there is just no need for the flag that wasn't doing anything special (parallelizing the model was the user's responsibility and still is). We just automatically detect if the model is parallelized now, without needing the flag.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
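For readers who hit the same `module parameters {device(type='cpu')}` errors: since parallelizing the model is the user's responsibility, a minimal sketch of manual GPT-2 model parallelism could look like the following. This assumes a transformers version where `GPT2LMHeadModel.parallelize()` is available, and the `device_map` split is illustrative, not tuned:

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

# gpt2-large has 36 transformer blocks; put half on each GPU.
device_map = {
    0: list(range(0, 18)),   # blocks 0-17 on GPU 0 (plus the embeddings)
    1: list(range(18, 36)),  # blocks 18-35 on GPU 1
}
model.parallelize(device_map)

# Inputs must start on the first device; activations are handed off
# between GPUs automatically as they flow through the model.
inputs = tokenizer("Hello world", return_tensors="pt").to("cuda:0")
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```

Note this runs as a single process (no `torch.distributed.launch`), consistent with the advice at the top of this thread.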
transformers
9,242
closed
Load from a TF 1.0 checkpoint in modeling_tf_utils.py
In the file modeling_utils.py, we can load a TF 1.0 checkpoint, as indicated in this [line](https://github.com/huggingface/transformers/blob/fb650df8590f796663226132482d09da5b0fb613/src/transformers/modeling_utils.py#L930). However, in the file modeling_tf_utils.py, which is the TF counterpart of that file, we cannot load models from TF 1.0, even though the documentation specifically says that you can:
` >>> model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config)`
But there is no _if_ branch for `os.path.isfile(os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME + ".index"))`
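For illustration, the missing branch could mirror the PyTorch side roughly like this. Both `from_tf1` and `load_tf1_weights_in_tf2_model` are hypothetical names here: no such code exists in `modeling_tf_utils.py` today, which is exactly the point of this issue:

```python
import os

# Hypothetical sketch inside TFPreTrainedModel.from_pretrained:
if from_tf1 and os.path.isfile(
    os.path.join(pretrained_model_name_or_path, TF_WEIGHTS_NAME + ".index")
):
    # Placeholder helper: it would read the TF 1.x checkpoint variables
    # and copy them into the Keras weights of the TF 2.x model.
    model = load_tf1_weights_in_tf2_model(model, resolved_archive_file)
```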
12-22-2020 00:43:48
12-22-2020 00:43:48
Hey @vsuarezpaniagua,

Great point! I noticed the same thing actually a couple of days ago as well with @jplu. I think we should add this functionality to `modeling_tf_utils.py`. It should be very similar to how it's done in the corresponding code in `modeling_utils.py`, and would require a new `load_tf1_weights` for TF2 models.

Pinging @jplu, @LysandreJik, @sgugger here as well for some brainstorming on the importance of this feature request and how to best design it if needed.<|||||>Thank you for taking it into consideration. Also, I saw that the _**EvaluationStrategy**_ for _epoch_ is not working when used in _training_args_tf.py_ for building a [TFTrainer](https://github.com/huggingface/transformers/blob/ec07da65e25562040581febaf9b400a462962961/src/transformers/trainer_tf.py#L49) in _trainer_tf.py_. I think this is because there is no _self.control.should_evaluate_ or _self.control.should_save_ as there are in the Torch implementations _trainer.py_ and _training_args.py_. Having similar code for both implementations could solve all these problems and be easier to follow.

> Hey @vsuarezpaniagua,
>
> Great point! I noticed the same thing actually a couple of days ago as well with @jplu. I think we should add this functionality to `modeling_tf_utils.py`. It should be very similar to how it's done in the corresponding code in `modeling_utils.py`, and would require a new `load_tf1_weights` for TF2 models.
>
> Pinging @jplu, @LysandreJik, @sgugger here as well for some brainstorming on the importance of this feature request and how to best design it if needed.
<|||||>The TF Trainer has been off maintenance for a while, to be rethought when we can dedicate a bit of time to it. It is not the current TF priority, unfortunately, but at some point the plan is for the TF Trainer to catch up with the PT one.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,241
closed
Seq2seq trainer
# What does this PR do?
This PR graduates `Seq2SeqTrainer` and moves it inside the transformers library. By doing so, it moves some of the features of the `Seq2SeqTrainer` inside the main `Trainer` and leaves some in the subclass. More precisely, the following features will be available in the general Trainer:
- label smoothing is passed to the main `Trainer` (so it can be used in all classification problems), with an easier API and bug fixes (the current implementation did not work with -100 as an ignore index for the loss, now it does; also, the current implementation returns a loss that is not averaged and is thus too big, see below)
- the ability to pick any scheduler
- the ability to use Adafactor instead of AdamW

The sortish sampling and predict-with-generate features are left in the subclass. There are also a few breaking changes in the `Seq2SeqTrainer` API to make its init match the one of `Trainer`; mainly, the init does not take a `config` and a `DataArguments` anymore. Instead:
- the token IDs are taken from the tokenizer
- the arguments for generation are passed to `evaluate` or `predict`
- the `ignore_pad_token_for_loss` is passed in the init but is deprecated, since it should be removed once BART and subclasses use -100.

About label smoothing and the mean vs. sum: the current implementation takes the sum over the batch size and sequence length (only counting tokens that have a label != -100). This gives something that does not have the same scale as the usual cross entropy loss (which is the mean over the same dimensions) and thus would require a special learning rate to be useful. With the fix, the label-smoothed loss has the same scale as the non-smoothed loss, which means the same command with the same learning rate should produce comparable results.
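For illustration, a minimal sketch of a label-smoothed loss with the two properties described above (ignoring `-100` positions and averaging over the remaining tokens). This is a simplified stand-in, not the exact code merged in the PR:

```python
import torch
import torch.nn.functional as F

def label_smoothed_loss(logits, labels, epsilon=0.1, ignore_index=-100):
    log_probs = F.log_softmax(logits, dim=-1)
    padding_mask = labels.eq(ignore_index)
    # Clamp so gather never indexes with -100; those positions are masked below.
    nll_loss = -log_probs.gather(-1, labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    smoothed_loss = -log_probs.mean(dim=-1)  # uniform prior over the vocabulary
    nll_loss = nll_loss.masked_fill(padding_mask, 0.0)
    smoothed_loss = smoothed_loss.masked_fill(padding_mask, 0.0)
    # Averaging over active tokens keeps the loss on the same scale
    # as the usual (non-smoothed) cross entropy.
    num_active = (~padding_mask).sum()
    return ((1 - epsilon) * nll_loss.sum() + epsilon * smoothed_loss.sum()) / num_active
```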
12-21-2020 22:36:01
12-21-2020 22:36:01
Oh, and this PR does not delete the old `Seq2SeqTrainer` just yet, since I'm planning to merge it just before my vacation. That way, if something goes horribly wrong, people can still use the old `Seq2SeqTrainer`. The plan would be to delete it in January after the new `Seq2SeqTrainer` has been tested.<|||||>> change naming from `max_target_length` -> `max_length`. Think it's clearer this way that the args of `predict` and `eval` correspond 1-to-1 to the `max_length` of `generate()`

@patrickvonplaten - did you mean to suggest changing the new arguments to evaluate/predict that this PR adds, or renaming the `--max_target_length`, `--val_max_target_length` cl args?<|||||>> suggest

I was more referring to the args of the functions, but more generally I think it would actually be better if there were only one `max_length` in the data_args - so leave the `max_length` that we have now and completely remove the `source_max_length` argument. IMO, `source_max_length` should be fully defined by the tokenizer of the model. I don't really see a need to let the user define the maximum input length, but this is probably better done in a separate PR. On the other hand, we also have a `max_seq_length` argument in `run_mlm.py`, so I'm not 100% sure what's best here... What is your opinion here @stas00 @sgugger @patil-suraj ?<|||||>I think it's better to keep `max_source_length` and `max_target_length`, since in some cases the input length can be way shorter than the tokenizer or model's max length, and these two can be used to truncate the text. We can get rid of `val_max_target_length` and `test_max_target_length` in `DataTrainingArguments`, since in almost all scripts we are using the same length for all three arguments (`max_target_length`, `val_max_target_length`, `test_max_target_length`). Then we could pass the same `max_target_length` to both the `evaluate` and `predict` methods. Sorry about the miscommunication.<|||||>I don't know whether this is a good time, but should we add `min_length` as well here while this part of the API is being redesigned? Surely if generate has `min_length`, someone might need to redefine it too? But I'm totally fine with deferring this until (and if) someone asks for it.<|||||>The best way to decide about the naming is to show a few use cases - then it's clear whether these are needed or not. Please don't forget that Sam, or whoever added those in the first place, had a good reason for it, so it'd be hard to make a decision in a void. Perhaps such a decision shouldn't be rushed, but made into an RFC, inviting input from users who have a lot more use cases?
transformers
9,240
closed
Help: How to deploy a fine tuned t5 model in production
Hi All,

I am trying to deploy a fine-tuned T5 model in production. Deploying a PyTorch model in production is something new to me. I went through the presentation from Hugging Face on YouTube about how they deploy models, and some of the other blog posts. HF mentions that they deploy models in a Cython environment, as it gives a ~100x boost to inference.

So, is it always advisable to run a model in production on Cython? Does converting a model from PyTorch to TF help, and is it advisable or not? What is the preferred container approach for running multiple models on a set of GPUs?

I know some of these questions are basic, and I apologize for that, but I want to make sure that I follow the correct guidelines to deploy a model in production.

Thank you
Amit
12-21-2020 19:13:27
12-21-2020 19:13:27
Hey @as-stevens, Could you maybe post this question on the forum: https://discuss.huggingface.co/? We try to move more user-specific questions to the forum and limit Github mostly to bug reports. Thank you!<|||||>@patrickvonplaten thank you much! I will close this issue.
transformers
9,239
closed
Adding performer fine-tuning research example
# What does this PR do? This PR adds a performer fine-tuning research example based on `run_mlm_flax.py`. The user can fine-tune a Performer/FAVOR+ Bert starting from the Bert checkpoint or blank model of their choice, and compare it to a vanilla Bert model, also from a checkpoint or blank. @patrickvonplaten
12-21-2020 18:50:34
12-21-2020 18:50:34
@sgugger - we want to do some fine-tuning experiments with the new performer model: https://arxiv.org/abs/2009.14794 before adding it to the `src/transformers/`. I think this is a good first place for it where we don't have to be super careful about the API choices yet. Is that fine for you?<|||||>Yes, this completely works for me!
transformers
9,238
closed
Bug SqueezeBERT stops with no error
## Environment info
- `transformers` version:
- Platform: Ubuntu
- Python version: anaconda python 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in the script?:

The GPUs available were:
```
Geforce GTX 980 4gb
Geforce GTX Titan 12gb
```
```
transformers == 4.1.1
torch==1.7.0
torchvision == 0.8.1
```

## Information
Model I am using (Bert, XLNet ...): SqueezeBERT

The problem arises when using:
* [ ] the official example scripts: (give details below) using the [Sequence Classification with IMDb Reviews](https://huggingface.co/transformers/custom_datasets.html#seq-imdb) example, I have made my own script
* [x] my own modified scripts: (give details below)

The only changes I have made in this script are: 1. using the Yelp dataset, 2. using SqueezeBERT instead of DistilBERT, 3. doing a 5-label sentiment classification...

```python
import torch.nn as nn
from transformers import Trainer, TrainingArguments, SqueezeBertForSequenceClassification

training_args = TrainingArguments(
    output_dir='./SqueezeBERT_10ep_result',  # output directory
    per_device_train_batch_size=3,           # batch size per device during training
    per_device_eval_batch_size=3,            # batch size for evaluation
    warmup_steps=500,                        # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                       # strength of weight decay
    logging_dir='./SqueezeBERT_10ep_log',    # directory for storing logs
    logging_steps=500,
    num_train_epochs=10,                     # total number of training epochs
    evaluation_strategy="epoch",
    do_train=True,
    do_eval=True,
)

model = SqueezeBertForSequenceClassification.from_pretrained('squeezebert/squeezebert-mnli-headless', return_dict=True)
model.num_labels = 5
model.classifier = nn.Linear(768, 5)

from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='macro')
    acc = accuracy_score(labels, preds)
    return {
        'accuracy': acc,
        'f1': f1,
        'precision': precision,
        'recall': recall
    }

print("Displaying model architecture...!\n")
print(model)
print("Training model starting...!\n")

trainer = Trainer(
    model=model,                     # the instantiated 🤗 Transformers model to be trained
    args=training_args,              # training arguments, defined above
    train_dataset=train_dataset,     # training dataset
    eval_dataset=val_dataset,        # evaluation dataset
    compute_metrics=compute_metrics,
)
trainer.train()
```

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

Using the Yelp full dataset

## To reproduce
Steps to reproduce the behavior:
1. Run the script as described above
2. Reaching epoch 3, it suddenly stops using the GPU and, although no error shows up, nothing changes...
3. The last checkpoint saved is `checkpoint-310000`

## Expected behavior
It should have just gone on and finished the training process.
12-21-2020 17:48:19
12-21-2020 17:48:19
Hi! Is there a way for you to reproduce this error in a colab notebook? <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,237
closed
Update the README of the text classification example
# What does this PR do? This PR updates the main README of the examples folder and the one in the text classification example to take into account the recent changes in the scripts. In particular, I re-ran the command shown for all tasks with/without FP16 to make a clean table of results. I moved all stuff about distributed training/TPUs in the general README of the examples as all example scripts now use Trainer, so have this working out of the box.
12-21-2020 16:34:50
12-21-2020 16:34:50
transformers
9,236
closed
mBART finetuned on XSUM
## Environment info - `transformers` version: 4.1.0.dev0 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.0.dev20201216+cu110 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: distributed ### Who can help mBART: @patrickvonplaten examples/seq2seq: @patil-suraj ## Information Model I am using: mBART The problem arises when using: * [ X ] the official example scripts: I used the official seq2seq training example here: https://github.com/huggingface/transformers/tree/master/examples/seq2seq * [ X ] my own modified scripts: (give details below) my training script is as follows (no changes to finetune_trainer.py): ```shell python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py \ --data_dir "./xsum" \ --output_dir "./my_models" \ --overwrite_output_dir \ --model_name_or_path "facebook/mbart-large-cc25" \ --fp16 \ --freeze_encoder \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --learning_rate=3e-5 \ --do_train --do_eval --do_predict \ --evaluation_strategy steps \ --predict_with_generate \ --n_val 1000 \ --max_target_length=60 \ --val_max_target_length=60 \ --test_max_target_length=100 \ "$@" ``` The tasks I am working on is: * [ X ] XSUM ## To reproduce Steps to reproduce the behavior: 1. follow https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md 2. launch training script with my modifications (batch_size, freeze_encoder, max_target_length ..) 3. inference on two texts (french and english) using the following code: ```python def sum_mbart_xsum(text): print("---------------------------------------------------------------------------------") print(" MBART large xsum ") print("---------------------------------------------------------------------------------") tokenizer = MBartTokenizer.from_pretrained("/home/mohamed/Desktop/Summarization/mbart-xsum") model = MBartForConditionalGeneration.from_pretrained("/home/mohamed/Desktop/Summarization/mbart-xsum") article_input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt', max_length=1024, truncation=True)[ 'input_ids'] summary_ids = model.generate(article_input_ids, num_beams=6, length_penalty=1.0, max_length=142, no_repeat_ngram_size=3) summary_txt = tokenizer.decode(summary_ids.squeeze(), skip_special_tokens=True) return summary_txt ``` ## Results 1- eval/test results: ```jsonc { "epoch": 3.0, "test_gen_len": 28.1, "test_loss": 1.7692, "test_n_ojbs": -1, "test_rouge1": 32.7618, "test_rouge2": 12.022, "test_rougeL": 25.6512, "test_rougeLsum": 25.6499, "test_runtime": 2778.8939, "test_samples_per_second": -0.0, "train_n_ojbs": -1, "train_runtime": 94633.1507, "train_samples_per_second": -0.0, "val_gen_len": 28.0, "val_loss": 1.7993, "val_n_ojbs": 1000, "val_rouge1": 32.9862, "val_rouge2": 11.528, "val_rougeL": 25.6517, "val_rougeLsum": 25.7055, "val_runtime": 267.0092, "val_samples_per_second": 3.745 } ``` 2- Inference: * Out of context summarizations (gives sth related to training data) --> (sth wrong with my finetuning configuration or inference function?) * For French texts, results are in English and poor summary --> (why the language changes?) ## Expected behavior My main objective of finetuning mBART on Xsum is to evaluate the multilingual level of mBART. Basically answering the following question: Should I finetune on a dataset with multiple languages to be able to summarize in multiple languages? 
Or is the multilingual characteristic preserved with mBART, so that fine-tuning on English (XSum) only is enough?

Current problems:
1- (inference results and questions above)
2- Why does `facebook/bart-large-xsum` understand French (even though BART was trained on English)?
12-21-2020 16:29:17
12-21-2020 16:29:17
> For French texts, results are in English and poor summary --> (why the language changes?)

Since you are fine-tuning on English data, I don't think it will be good at generating French summaries. Probably a good idea to fine-tune with multiple languages.

> Why facebook/bart-large-xsum understands french (even if bart is trained on English)?

Since the training data is scraped from the web, there is a chance that there could be some French text in it. I think the authors would answer this question better.<|||||> @patil-suraj

> > For French texts, results are in English and poor summary --> (why the language changes?)
> 
> Since you are fine-tuning on English data, I don't think it will be good at generating French summaries. Probably a good idea to fine-tune with multiple languages.

So I imagined mBART would be even better at this, since even English BART fine-tuned on XSum only can summarize pretty well in French. However, I don't understand why my model (mBART fine-tuned on XSum) gives out-of-context results (in any language) like I mentioned above? I followed the steps at https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md and made very few changes. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
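One detail worth checking for the "wrong output language" symptom: mBART conditions the output language on the language code token the decoder starts with, so it can be forced at generation time. A sketch (whether a model fine-tuned only on English summaries then produces fluent French is exactly the open question above):

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

inputs = tokenizer("Texte français à résumer.", return_tensors="pt")
# Force the decoder to start with the French language code so that
# generation happens in French rather than defaulting to English.
summary_ids = model.generate(
    inputs["input_ids"],
    decoder_start_token_id=tokenizer.lang_code_to_id["fr_XX"],
    num_beams=6,
    max_length=60,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```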
transformers
9,235
closed
run_mlm.py crashes when saving model checkpoint
## Environment info
- `transformers` version: 4.0.1
- Platform: Google Cloud
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?):
- Using GPU in script?: NO. Using TPU
- Using distributed or parallel set-up in script?: YES

### Who can help
@LysandreJik @mfuntowicz @sgugger

## Information
I'm trying to train an ALBERT model from scratch, with a custom tokenizer, on Google Cloud TPUs. The problem arises when saving the model checkpoints, more specifically when trying to save the tokenizer. I'm using your example script run_mlm.py.

The problem arises when using:
* [ ] the official example scripts: run_mlm.py

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Masked Language Modelling.

## To reproduce
Steps to reproduce the behavior:
1. Run run_mlm.py with the following params:

python transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_mlm.py \
--model_type albert \
--train_file texts_train.txt \
--validation_file good_texts_valid.txt \
--output_dir modelo_prueba \
--tokenizer_name ./tokenizadores/definitivo \
--overwrite_output_dir \
--line_by_line \
--pad_to_max_len \
--do_train \
--do_eval \
--evaluation_strategy steps \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--learning_rate 1e-3 \
--max_steps 500 \
--save_steps 100 \
--save_total_limit 15 \
--overwrite_cache \
--max_seq_length 512 \
--eval_accumulation_steps 10 \
--logging_steps 100 \
--config_name ./config/albert-base-v2.json \

At step 100, the following error arises:
```
[INFO|trainer.py:1141] 2020-12-21 15:46:34,157 >> Saving model checkpoint to modelo_prueba/checkpoint-100
[INFO|configuration_utils.py:281] 2020-12-21 15:46:34,158 >> Configuration saved in modelo_prueba/checkpoint-100/config.json
[INFO|modeling_utils.py:741] 2020-12-21 15:46:34,556 >> Model weights saved in modelo_prueba/checkpoint-100/pytorch_model.bin
Exception in device=TPU:0: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
    _start_fn(index, pf_cfg, fn, args)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
    fn(gindex, *args)
  File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm.py", line 405, in _mp_fn
    main()
  File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm.py", line 379, in main
    trainer.train(model_path=model_path)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 777, in train
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 848, in _maybe_log_save_evaluate
    self._save_checkpoint(model, trial, metrics=metrics)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 869, in _save_checkpoint
    self.save_model(output_dir)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 1135, in save_model
    self._save_tpu(output_dir)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 1157, in _save_tpu
    self.tokenizer.save_pretrained(output_dir)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1972, in save_pretrained
filename_prefix=filename_prefix, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 524, in _save_pretrained vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert_fast.py", line 252, in save_vocabulary if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/posixpath.py", line 378, in abspath path = os.fspath(path) TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Expected behavior The expected behavior is that the script doesn't crash. Moreover, it's completely unnecessary to save the tokenizer in trainer.py, as the tokenizer is already trained and doesn't need to be saved again.
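The last frames show `os.path.abspath(self.vocab_file)` failing because `self.vocab_file` is `None` on the tokenizer being saved. A minimal defensive sketch of the kind of guard that would avoid the crash in `save_vocabulary`, illustrative only and not the actual fix shipped in the library:

```python
import os
import shutil

def save_vocabulary(self, save_directory, filename_prefix=None):
    out_vocab_file = os.path.join(
        save_directory,
        (filename_prefix + "-" if filename_prefix else "") + "spiece.model",
    )
    # Guard: os.path.abspath(None) is exactly what raises the TypeError above,
    # so bail out when the tokenizer was not loaded from a vocab file.
    if self.vocab_file is None or not os.path.isfile(self.vocab_file):
        return ()
    if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file):
        shutil.copyfile(self.vocab_file, out_vocab_file)
    return (out_vocab_file,)
```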
12-21-2020 15:54:45
12-21-2020 15:54:45
We don't have your tokenizer, so the reproducer you give us does not work. I tried on my side to run the same command with a saved tokenizer and a saved config file and it works without any trouble. > Moreover, it's completely unnecessary to save the tokenizer in trainer.py, as the tokenizer is already trained and doesn't need to be saved again. It is necessary to allow users to resume training from the latest checkpoint.<|||||>Yeah, it's necessary to allow users to resume training, but that concerns the model only, not the tokenizer. The tokenizer is trained prior to training the model and doesn't change during training. I'll upload the tokenizer so that you can reproduce the issue. <|||||>I'm trying to post my tokenizer but Github doesn't let me, it has too many characters.... Can I send it to you via email ? @sgugger <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,234
closed
Fix TF template
# What does this PR do? Fix the TF template for the new einsum dense layer.
12-21-2020 12:41:15
12-21-2020 12:41:15
transformers
9,233
closed
[MPNet] Add slow to fast tokenizer converter
# What does this PR do?

Fixes #9194

This PR adds a converter from slow to fast MPNetTokenizers. This way fast tokenizers can be correctly serialized and loaded again.

To prevent future issues like #9194, we should maybe think about not allowing to add a "FastTokenizer" without a corresponding converter...what do you think @sgugger, @LysandreJik, @thomwolf ?
12-21-2020 11:19:22
12-21-2020 11:19:22
transformers
9,232
closed
command line_by_line missing in https://github.com/huggingface/transformers/tree/master/examples/language-modeling
## Environment info
- `transformers` version: 3.5.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help
@LysandreJik @patrickvonplaten @TevenLeScao

## Information
Model I am using (Bert, XLNet ...):

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

https://github.com/huggingface/transformers/tree/master/examples/language-modeling

## To reproduce
In the old version of the script, ``run_language_modeling.py`` (the predecessor of ``run_clm.py``), there was an argument ``line_by_line`` which allowed reading the data with each sequence on its own line. This argument seems to be missing in the newer ``run_clm.py``.

## Expected behavior
Perhaps there is an argument that has replaced ``line_by_line``, but I don't see one. Sorry if I missed something.
12-21-2020 11:13:14
12-21-2020 11:13:14
Pinging @sgugger here - think he knows `run_clm.py` best<|||||>`run_clm` does not have the `line_by_line` option, as it doesn't make sense for causal language modeling: pretraining for causal language modeling is done by concatenating all available texts separated by a special token, then building sequences of a certain `block_size` with them. Using `line_by_line` and keeping the sentences separate results in the model having to predict the padding token quite often (which pretrained causal models usually don't have) and without knowing when to stop predicting that padding token. Only `run_mlm` keeps that option, as it makes sense to have sentences of different lengths for masked language modeling. You can always copy the relevant bit of code from `run_mlm` to use it in `run_clm`, but I would strongly advise against it.<|||||>Thank you for your reply, it makes sense. So that means I need to give the script the concatenated data (and separate sequences by a special token)? Or does the script ``run_clm`` take care of that? <|||||>The script takes care of that for you :-)<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>@sgugger sorry to revive that thread. I think a CLM with two special tokens BOS / EOS would make sense to be trained in line-by-line mode, what do you think? (btw, are you saying that all pre-trained gpt2 models are trained in fixed blocks? If so, do you confirm that the original papers do the same when benchmarking with standards like 1BW?) Thanks for your insight.
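For reference, a condensed sketch of the concatenation step `run_clm.py` performs, adapted from the example script's `group_texts` logic with details simplified:

```python
def group_texts(examples, block_size=1024):
    # Concatenate all tokenized texts in the batch end to end.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the small remainder at the end instead of padding it.
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # For causal LM, labels are the inputs themselves (shifting happens in the model).
    result["labels"] = result["input_ids"].copy()
    return result
```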
transformers
9,231
closed
[T5] Fix warning for changed EncDec Attention Bias weight
In this PR: https://github.com/huggingface/transformers/pull/8518 a bug was fixed that removed an unnecessary weight from the T5 cross-attention layer. Afterwards, this weight's regex was added to the wrong "ignore_weight" list. The weight can never be *missing*, since it doesn't exist in the model anymore; it can only be *not used*, because it is still present in saved checkpoints. This PR fixes the incorrect warning by placing the regex in the correct list.
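Schematically, the pattern at stake looks like this. The attribute names below follow the library's current naming convention for these ignore lists and are shown for illustration; the exact names used in this PR may differ:

```python
from transformers import PreTrainedModel

class T5PreTrainedModel(PreTrainedModel):
    # Silences warnings about weights *missing* from a checkpoint:
    # the wrong list here, since the weight was removed from the model.
    _keys_to_ignore_on_load_missing = [
        r"encoder\.embed_tokens\.weight",
        r"decoder\.embed_tokens\.weight",
    ]
    # Silences warnings about checkpoint weights the model does *not use*:
    # where the removed EncDec relative_attention_bias regex belongs.
    _keys_to_ignore_on_load_unexpected = [
        r"decoder\.block\.0\.layer\.1\.EncDecAttention\.relative_attention_bias\.weight",
    ]
```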
12-21-2020 09:33:23
12-21-2020 09:33:23
transformers
9,230
closed
add base model classes to bart subclassed models
# What does this PR do?
This PR adds base model classes for `MBart`, `Pegasus` and `Blenderbot`, and adds them to the `MODEL_MAPPING` `dict`. This will enable loading these models using the `AutoModel` class and `pipelines`. Right now these models can't be loaded using `pipeline`, since the pipeline relies on the `AutoModel` class.
https://github.com/huggingface/transformers/blob/a4b21cdd20328f71448123ce7c962a78a5d75612/src/transformers/pipelines.py#L105-L110
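Once the base classes are registered in `MODEL_MAPPING`, the auto class resolves them directly; a small illustrative check:

```python
from transformers import AutoModel

# With the mapping in place this resolves to MBartModel instead of raising.
model = AutoModel.from_pretrained("facebook/mbart-large-cc25")
print(type(model).__name__)  # MBartModel
```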
12-21-2020 07:58:03
12-21-2020 07:58:03
transformers
9,229
closed
Generate function does not work with GPU
Hello, I want to use the generate function with a single GPU. Specifically, I fine-tuned a GPT-2 model (on GPU) and subsequently I want to generate text with it. When I run this
```
input_ids.to(device)

sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=150,
    top_k=50,
    top_p=0.92
)
```
I get this error
`RuntimeError: Input, output and indices must be on the current device`
When I move the model and the input `.to('cpu')`, it works.
12-21-2020 07:29:03
12-21-2020 07:29:03
Hey @contribcode, could you please provide a complete code snippet that would allow us to reproduce the error? Thanks!<|||||>@patrickvonplaten thank you for your response. ```python import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') from transformers import GPT2LMHeadModel from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained("gpt2") special_tokens_dict = {'additional_special_tokens': ['[spanstarts]','[spanends]']} num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id) model.resize_token_embeddings(len(tokenizer)) model.to(device) ``` I have posted the snippets about model definition, since I think the rest of the code does not affect the `generate` method. After this I just train the model. Regards.<|||||>Hey @contribcode, I can execute the above code snippet without error -> could you include the part of the code where the error arises?<|||||>Hey @patrickvonplaten, the part of the code where the error occurs is the code in the original post ``` input_ids = tokenizer.encode(train_texts['text'].iloc[0], return_tensors='pt') model.to('cuda') input_ids.to('cuda') sample_output = model.generate( input_ids, do_sample=True, max_length=150, top_k=50, top_p=0.92 ) ``` and I get the error `RuntimeError: Input, output and indices must be on the current device` As I mentioned in the original post, if I move the model and input_ids `to('cpu')`, it works. I also wanted to ask about parameters (`top_k`, `top_p`), are these values common for the generate function?<|||||>Could you please post a single code snippet that I can copy/paste into a terminal and run to reproduce the error? The above snippet is not executable because `train_texts['text'].iloc[0]` is not defined. It's impossible to help you if we are not able to reproduce the error, I'm afraid.<|||||>`train_texts['text'].iloc[0]` is just some text, for example: This man is a joke. The whole code snippet is ```python import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') from transformers import GPT2LMHeadModel from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained("gpt2") special_tokens_dict = {'additional_special_tokens': ['[spanstarts]','[spanends]']} num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id) model.resize_token_embeddings(len(tokenizer)) model.to(device) input_ids = tokenizer.encode('This man is a joke.', return_tensors='pt') input_ids.to(device) sample_output = model.generate( input_ids, do_sample=True, max_length=150, top_k=50, top_p=0.92 ) ``` and the error is what I mentioned earlier.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.<|||||>I have the same issue with Transformers 4.4.2 and PyTorch 1.8.0 UPD: also tried Transformers 4.5.1, the issue is still here<|||||>Very nice @cloveranon ! Now, do you know how/if GPUs can accelerate generation? Can this work in batches? Maybe, @patrickvonplaten can shed some light.
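For anyone landing here later: `Tensor.to(device)` is not in-place (it returns a new tensor), so `input_ids.to(device)` on its own line leaves `input_ids` on the CPU while the model sits on the GPU, which matches the error above. A sketch of the corrected call under the same setup:

```python
input_ids = tokenizer.encode("This man is a joke.", return_tensors="pt")
input_ids = input_ids.to(device)  # reassign: .to() returns a copy for tensors

sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=150,
    top_k=50,
    top_p=0.92,
)
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
```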
transformers
9,228
closed
Differences between original implementation and HuggingFace implementation
## Environment info
- `transformers` version: **4.0.0**
- Platform: **Windows**
- Python version: **3.6.5**
- PyTorch version (GPU?): **1.6.0+cu101**
- Tensorflow version (GPU?): **-**
- Using GPU in script?: **Yes**
- Using distributed or parallel set-up in script?: **No**

### Who can help
**@stefan-it**

## Information
The model I am using (Bert, XLNet ...): LayoutLMForTokenClassification

The problem arises when using:
* [x] my own modified scripts

The tasks I am working on is:
* [x] my own task or dataset

## Expected behavior
This is more of a question rather than an issue. When I trained the LayoutLM model using my data with the token classification model from huggingface, I got a small drop in performance. I wanted to ask if there are any differences between the two models? I have kept the hyper-parameters exactly the same in both cases. The two key points where I found differences were:
(1) When taking in the dataset: in the Microsoft version, there is a concept called "segment_ids" which is not a parameter in the huggingface LayoutLM documentation.
(2) I loaded both models and printed the number of layers in each; I saw that there is 1 extra layer called layoutlm.embeddings.position_ids in the huggingface implementation.
I am trying to find out the reason for the drop in performance, and hence wanted to find out if there is any difference between the model implementations themselves. It would be a great help if you could explain the two differences I found! Thanks!
12-21-2020 06:19:42
12-21-2020 06:19:42
Hi there, I made some [integration tests](https://github.com/NielsRogge/transformers/blob/e5431da34ab2d03d6114303f18fd70192c880913/tests/test_modeling_layoutlm.py#L318) for both the base model (`LayoutLM`) as well as the model with a token classification head on top (`LayoutLMForTokenClassification`). These integration tests do not reveal any differences in terms of output on the same input data between the original implementation and the one in HuggingFace Transformers. So the implementation seems to be OK. Btw, the `segment_ids` you are referring to are called `token_type_ids` in the Transformers library. I also made a demo [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) that showcases how to fine-tune LayoutLMForTokenClassification on the FUNSD dataset, I'm getting quite good performance even though I'm not using Mask-RCNN features. Let me know if this helps you. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,227
closed
Can't lazy initialize BART model on GPU
## Environment info - `transformers` version: 3.5.1 - Platform: Linux-4.14.81.bm.15-amd64-x86_64-with-debian-9.11 - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help Bart: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Bart We have finetuned a BART model for Seq2SeqLM, and want to serve it using a Python web microframework. This microframework uses `multiprocessing` underneath, so we wish to lazily initialize the model on the GPU on the first call. However, in `modeling_bart.py`, some of the layers already call CUDA in the main process: https://github.com/huggingface/transformers/blob/f38c4ad302dd34faef1137b13e9636f4408b0462/src/transformers/models/bart/modeling_bart.py#L125 So trying to do so results in the following error: ``` RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` ## To reproduce ``` class Model: def __init__(self, path): self.path = path self.initialized = False def init(self): if not self.initialized: self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") self.model = AutoModelForSeq2SeqLM.from_pretrained(self.path) self.model.to(self.device) self.model.eval() self.initialized = True ``` ``` model = Model("bart-base") # in multiprocessing subprocess... model.init() # RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method ``` Sorry about the pseudocode, the microframework used is proprietary. Commenting out the linked lines removes the error. ## Expected behavior Able to call `model.cuda()` from subprocess.
12-21-2020 06:17:12
12-21-2020 06:17:12
@jethrokuan - Please try this on transformers 4.1.0; the issue looks to be fixed in this version<|||||>Actually it looks like the problem is exacerbated in newer HF. In a devbox with no CUDA:
```
Python 3.7.9 (default, Nov 18 2020, 11:18:33)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
.../lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
  return torch._C._cuda_getDeviceCount() > 0
```
This suggests that the simple act of importing anything from HF causes CUDA initialization. I worked around it by deferring the import altogether:
```
class Model:
    def __init__(self, path):
        self.path = path
        self.initialized = False

    def init(self):
        if not self.initialized:
            from transformers import ...
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            self.model = AutoModelForSeq2SeqLM.from_pretrained(self.path)
            self.model.to(self.device)
            self.model.eval()
            self.initialized = True
```
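Besides deferring the import, the original error message points at the other standard workaround: use the `spawn` start method so each worker initializes CUDA itself instead of inheriting forked state. A minimal sketch with the standard PyTorch API, reusing the `Model` class from the issue description:

```python
import torch.multiprocessing as mp

def worker(path):
    # CUDA is initialized fresh inside the spawned process,
    # so model.to("cuda") is safe here.
    m = Model(path)
    m.init()

if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    p = mp.Process(target=worker, args=("bart-base",))
    p.start()
    p.join()
```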
transformers
9,226
closed
n_gpus is set to 1 in case of distributed training on multiple gpus, how to access to the correct n_gpus
Hi, I am using transformers 4.1.1 on multiple GPUs with the distributed launch command below. `n_gpu` is set to 1 in this case instead of the actual number of GPUs. Could you tell me how I can access the total number of ranks the launcher was called with? Is there any variable in the huggingface library which shows the number of GPUs the script was called with? Here is the command I run:
`export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9915 finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation --val_max_target_length 128 --warmup_steps 500 --n_train 500 `
Thanks
12-21-2020 00:34:17
12-21-2020 00:34:17
Please do not spam the issues: this is a duplicate of #9225. You can edit the title and the description; there is no need to send a new notification to everyone watching the repo. Also, please use the [forum](https://discuss.huggingface.co/) for questions like this, we keep the issues for bugs and feature requests only. In your case, this is a PyTorch call: `torch.distributed.get_world_size()` will return the number of GPUs used.<|||||>Hi Sylvain, thank you. Sorry, I did not realize it was a duplicate. Sure, thanks
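In a script launched with `torch.distributed.launch`, that looks like this (standard PyTorch API):

```python
import torch.distributed as dist

if dist.is_available() and dist.is_initialized():
    world_size = dist.get_world_size()  # total number of participating processes/GPUs
    rank = dist.get_rank()              # index of the current process
```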
transformers
9,225
closed
n_gpu=1 when using the distributed PyTorch launcher
Hi, I am using transformers 4.1.1 on multiple GPUs with the distributed launch command below. `n_gpu` is set to 1 in this case instead of the actual number of GPUs. Could you tell me how I can access the total number of ranks the launcher is called with? Is there any variable in the Hugging Face library which shows the number of GPUs the script is launched with? Here is the command I run:

`export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9915 finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation --val_max_target_length 128 --warmup_steps 500 --n_train 500`

Thanks
12-21-2020 00:33:34
12-21-2020 00:33:34
Duplicate of #9226
transformers
9,224
closed
Allow the user to set booleans explicitly with HfArgumentParser
Hi, this is a more general issue: if the user defines some booleans as arguments like `--flag false/true`, then calling finetune_trainer.py produces the following error from HfArgumentParser. It would be great to allow users to set explicit values for booleans too; currently this is only possible with config files. Thanks

```
Traceback (most recent call last):
Traceback (most recent call last):
  File "finetune_t5_trainer.py", line 329, in <module>
  File "finetune_t5_trainer.py", line 329, in <module>
    main()main()
  File "finetune_t5_trainer.py", line 48, in main
  File "finetune_t5_trainer.py", line 48, in main
    model_args, data_args, training_args, adapter_args = parser.parse_args_into_dataclasses()model_args, data_args, training_args, adapter_args = parser.parse_args_into_dataclasses()
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/hf_argparser.py", line 144, in parse_args_into_dataclasses
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/transformers-3.5.1-py3.7.egg/transformers/hf_argparser.py", line 144, in parse_args_into_dataclasses
    raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueErrorValueError: : Some specified arguments are not used by the HfArgumentParser: ['True', 'true', 'False', 'False', 'True', 'True', 'True', 'True', 'True', 'True', 'True', '--add_adapters_in_decoder', 'False']Some specified arguments are not used by the HfArgumentParser: ['True', 'true', 'False', 'False', 'True', 'True', 'True', 'True', 'True', 'True', 'True', '--add_adapters_in_decoder', 'False']
Traceback (most recent call last):
  File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 260, in <module>
    main()
  File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 256, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/envs/internship/bin/python', '-u', 'finetune_t5_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-small', '--tokenizer_name', 't5-small', '--learning_rate', '1e-2', '--output_dir', 'outputs/test', '--max_source_length', '128', '--max_target_length', '128', '--val_max_target_length', '128', '--test_max_target_length', '128', '--num_train_epochs', '10', '--warmup_steps', '500', '--eval_steps', '200', '--overwrite_output_dir', 'True', '--tasks', '[scitail,', 'boolq]', '--eval_tasks', '[rte,', 'boolq]', '--sampling', 'true', '--label_smoothing', '0.1', '--freeze_encoder', 'False', '--freeze_embeds', 'False', '--per_device_train_batch_size', '64', '--per_device_eval_batch_size', '64', '--save_steps', '20', '--logging_first_step', 'True', '--logging_steps', '200', '--save_total_limit', '1', '--train_adapters', 'True', '--adapter_config_name', 'parametric-meta-adapter', '--temperature', '10', '--do_eval', 'True', '--predict_with_generate', 'True', '--n_train', '10', '--task_embedding_dir', 'test_data/task_embeddings/n-train-all', '--task_embedding_dim', '512', '--n_val', '10', '--n_train', '10', '--do_finetune', 'True', '--do_train', 'True', '--n_finetune', '100', '--eval_output_dir', 
'outputs/eval_test', '--reduction_factor', '16', '--non_linearity', 'relu', '--train_task_embeddings', 'True', '--projected_task_embedding_dim', '512', '--add_adapters_in_decoder', 'False', '--unfreeze_lm_head', '--unfreeze_layer_norms']' returned non-zero exit status 1. ```
12-20-2020 20:43:36
12-20-2020 20:43:36
Hey @rabeehkarimimahabadi,

One should not call the `HfArgumentParser` with `--overwrite_output_dir True`, but just with `--overwrite_output_dir` to set the bool from `False` to `True`. If one wants to leave it as `False`, the arg should just not be passed. If the default of this variable is already `True` and one wants to set it to `False`, the argument should be passed as follows: `--no_remove_unused_columns`, see: https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py#L83.
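A minimal sketch of that behavior (the field names below are hypothetical, chosen just to show both directions):

```python
from dataclasses import dataclass, field

from transformers import HfArgumentParser

@dataclass
class ExampleArguments:
    freeze_encoder: bool = field(default=False)        # enable with: --freeze_encoder
    remove_unused_columns: bool = field(default=True)  # disable with: --no_remove_unused_columns

parser = HfArgumentParser(ExampleArguments)
# e.g.: python train.py --freeze_encoder --no_remove_unused_columns
(args,) = parser.parse_args_into_dataclasses()
print(args.freeze_encoder, args.remove_unused_columns)
```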
transformers
9,223
closed
Passing a config file when training on multiple GPUs
## Environment info

- transformers 3.5.1
- Python version: 3.7
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes

### Who can help

Trainer: @sgugger T5: @patrickvonplaten FSMT: @stas00 examples/seq2seq: @patil-suraj documentation: @sgugger

## Information

Hi, I want to run finetune_trainer on multiple GPUs with a config file. I am using the following command:

`export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 finetune_t5_trainer.py configs/experiments/test.json`

I get the error below; it looks like the config file is not passed properly in this case (the command works on one GPU). Could you please tell me how I can call the script with a config file? It would be great to have this in the README. Thanks

```
finetune_t5_trainer.py: error: the following arguments are required: --model_name_or_path, --output_dir
usage: finetune_t5_trainer.py [-h] --model_name_or_path MODEL_NAME_OR_PATH [--not_load_t5_checkpoint] [--config_name CONFIG_NAME] [--tokenizer_name TOKENIZER_NAME] [--cache_dir CACHE_DIR] [--freeze_encoder] [--freeze_embeds] [--freeze_model_but_lm_head] [--unfreeze_lm_head] [--freeze_model_but_task_embeddings] [--unfreeze_layer_norms] [--sampling] [--tasks TASKS [TASKS ...]] [--eval_tasks EVAL_TASKS [EVAL_TASKS ...]] [--adapters ADAPTERS [ADAPTERS ...]] [--max_source_length MAX_SOURCE_LENGTH] [--max_target_length MAX_TARGET_LENGTH] [--val_max_target_length VAL_MAX_TARGET_LENGTH] [--test_max_target_length TEST_MAX_TARGET_LENGTH] [--n_train N_TRAIN] [--n_val N_VAL] [--n_test N_TEST] [--eval_beams EVAL_BEAMS] [--no_ignore_pad_token_for_loss] [--n_finetune N_FINETUNE] --output_dir OUTPUT_DIR [--overwrite_output_dir] [--do_train] [--do_eval] [--do_predict] [--evaluate_during_training] [--evaluation_strategy {EvaluationStrategy.NO,EvaluationStrategy.STEPS,EvaluationStrategy.EPOCH}] [--prediction_loss_only] [--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE] [--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE] [--per_gpu_train_batch_size PER_GPU_TRAIN_BATCH_SIZE] [--per_gpu_eval_batch_size PER_GPU_EVAL_BATCH_SIZE] [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS] [--eval_accumulation_steps EVAL_ACCUMULATION_STEPS] [--learning_rate LEARNING_RATE] [--weight_decay WEIGHT_DECAY] [--adam_beta1 ADAM_BETA1] [--adam_beta2 ADAM_BETA2] [--adam_epsilon ADAM_EPSILON] [--max_grad_norm MAX_GRAD_NORM] [--num_train_epochs NUM_TRAIN_EPOCHS] [--max_steps MAX_STEPS] [--warmup_steps WARMUP_STEPS] [--logging_dir LOGGING_DIR] [--logging_first_step] [--logging_steps LOGGING_STEPS] [--save_steps SAVE_STEPS] [--save_total_limit SAVE_TOTAL_LIMIT] [--no_cuda] [--seed SEED] [--fp16] [--fp16_opt_level FP16_OPT_LEVEL] [--local_rank LOCAL_RANK] [--tpu_num_cores TPU_NUM_CORES] [--tpu_metrics_debug] [--debug] [--dataloader_drop_last] [--eval_steps EVAL_STEPS] [--dataloader_num_workers DATALOADER_NUM_WORKERS] [--past_index PAST_INDEX] [--run_name RUN_NAME] [--disable_tqdm DISABLE_TQDM] [--no_remove_unused_columns] [--label_names LABEL_NAMES [LABEL_NAMES ...]] [--load_best_model_at_end] [--metric_for_best_model METRIC_FOR_BEST_MODEL] [--greater_is_better GREATER_IS_BETTER] [--label_smoothing LABEL_SMOOTHING] [--sortish_sampler] [--predict_with_generate] [--adafactor] [--encoder_layerdrop ENCODER_LAYERDROP] [--decoder_layerdrop DECODER_LAYERDROP] [--dropout DROPOUT] [--attention_dropout ATTENTION_DROPOUT] [--lr_scheduler LR_SCHEDULER] [--fixed_length_emb] 
[--encoder_projection ENCODER_PROJECTION] [--encoder_pooling ENCODER_POOLING] [--projection_length PROJECTION_LENGTH] [--only_projection_bottleneck] [--concat_projection_token] [--gcs_bucket GCS_BUCKET] [--temperature TEMPERATURE] [--train_adapters] [--do_finetune] [--parametric_task_embedding] [--eval_output_dir EVAL_OUTPUT_DIR] [--generate_classifier_weights] [--adapter_config_name ADAPTER_CONFIG_NAME] [--task_embedding_dir TASK_EMBEDDING_DIR] [--task_embedding_dim TASK_EMBEDDING_DIM] [--add_layer_norm_before_adapter] [--no_add_layer_norm_after_adapter] [--hidden_dim HIDDEN_DIM] [--reduction_factor REDUCTION_FACTOR] [--train_task_embeddings] [--non_linearity NON_LINEARITY] [--projected_task_embedding_dim PROJECTED_TASK_EMBEDDING_DIM] [--no_add_adapters_in_decoder] finetune_t5_trainer.py: error: the following arguments are required: --model_name_or_path, --output_dir Traceback (most recent call last): File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/envs/internship/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 260, in <module> main() File "/opt/conda/envs/internship/lib/python3.7/site-packages/torch-1.7.0-py3.7-linux-x86_64.egg/torch/distributed/launch.py", line 256, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/envs/internship/bin/python', '-u', 'finetune_t5_trainer.py', '--local_rank=1', 'configs/experiments/test.json']' returned non-zero exit status 2. ```
12-20-2020 20:11:34
12-20-2020 20:11:34
This does not work since finetune_t5_trainer.py expects the config as its only argument, but `--local_rank` is also passed with this command. Is there an easy fix for this without parsing arguments from the config? Thanks<|||||>I solved it by updating finetune_trainer to accept `--local_rank` in addition to the config file.
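A minimal sketch of that workaround, shown here with just `TrainingArguments` (the real script uses its own private dataclasses, so treat the details as illustrative):

```python
import sys

from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser((TrainingArguments,))

# torch.distributed.launch prepends --local_rank=N to the argument list,
# so peel it off before handing the single JSON path to the parser
local_rank = -1
remaining = []
for arg in sys.argv[1:]:
    if arg.startswith("--local_rank"):
        local_rank = int(arg.split("=")[1])
    else:
        remaining.append(arg)

(training_args,) = parser.parse_json_file(json_file=remaining[0])
training_args.local_rank = local_rank  # forward the rank into the parsed args
```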
transformers
9,222
closed
Are there any scripts to convert an MPNet pretrained model to a Transformers pretrained one?
# 🚀 Feature request

## Motivation

I want to convert my pretrained Chinese MPNet model in the fairseq format to the Transformers format.
12-20-2020 04:52:18
12-20-2020 04:52:18
Maybe the author of MPNet can help here, @StillKeepTry<|||||>@patrickvonplaten the problem is fixed
transformers
9,221
closed
TAPAS: IndexError: index out of range in self
## Environment info

- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.15.0-122-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, 1080ti
- Using distributed or parallel set-up in script?: No

### Who can help

May @LysandreJik help?

## Information

Model I am using (Bert, XLNet ...): TAPAS

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:

1. data in json string

```
data = '{"table": [[{"value": "County", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Name", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Irish name", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Date", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Area (acres, 1872)", "is_header": true, "column_span": 1, "row_span": 1}, {"value": "Notes", "is_header": true, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Antrim Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Aontroim \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "80,826", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Antrim town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Antrim Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Aontroim Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,489", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Antrim town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Belfast Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9al Feirste \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,142", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Belfast town (now city)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Belfast Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9al Feirste Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "32,942", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Belfast town (now city)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carrickfergus", 
"is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carraig Fhearghais", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1325", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "16,702", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate: the County of the Town of Carrickfergus", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cary or Carey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "75,035", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Cothrugu (Cotraigib, Crotraigib), an ancient tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunluce Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Libhse \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,575", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "See also Dunluce Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunluce Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Libhse Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "52,788", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "See also Dunluce Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glenarm Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann Arma \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "64,945", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Glenarm village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glenarm Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann Arma Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "24,032", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Glenarm village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilconway", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill Chonmha\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "68,640", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"forest of the Conmha\\u00edcne\\".", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 
1, "row_span": 1}, {"value": "Massereene Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00e1sa R\\u00edona \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,228", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Viscount Massereene. The name means \\"Queen\'s hill\\" and originally belonged to a monastery.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Massereene Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00e1sa R\\u00edona Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,675", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Viscount Massereene. The name means \\"Queen\'s hill\\" and originally belonged to a monastery.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Toome Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tuaim \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,135", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Toome village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Antrim", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Toome Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tuaim Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131798", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,571", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Toome village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ard Mhacha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,645", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Armagh town (now city)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fews Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na Fe\\u00e1 \\u00cdochtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1745; Fews by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,757", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From Irish Na Feadha, \\"The lengths\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fews Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na Fe\\u00e1 Uachtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1745; Fews by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,433", "is_header": false, 
"column_span": 1, "row_span": 1}, {"value": "From Irish Na Feadha, \\"The lengths\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Oneilland East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Niall\\u00e1in Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131807; Oneilland by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "20,890", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Oneilland West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Niall\\u00e1in Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131807; Oneilland by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,584", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Orior Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na hOirthir \\u00cdochtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131807; Orior by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,927", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Orior Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na hOirthir Uachtaracha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1792\\u20131807; Orior by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,086", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Armagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tiranny or Turaney", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tuath Threana", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,397", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Threna tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceatharlach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,353", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Carlow town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Forth", 
"is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fotharta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,510", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named from the Irish Fothairt Mag Fe\\u00e1, \\"fothairt of the beech plain.\\" A fothairt was a kingdom not ruled by a branch of the provincial ruling family.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Idrone East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Dhr\\u00f3na Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1799", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "52,857", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Idrone West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Dhr\\u00f3na Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1799", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "23,066", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathvilly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th Bhile", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,806", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Rathvilly village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "St. Mullin\'s Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tigh Moling \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,914", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after St Mullin\'s village. Does not border St. Mullin\'s Upper.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Carlow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "St. Mullin\'s Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tigh Moling Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "7,784", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after St. 
Mullin\'s village; the land was a detached fragment of the original St.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Castlerahan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Caisle\\u00e1n Raithin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,279", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Castlerahan parish.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clankee", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Chaoich", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "64,377", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Caoch\'s clan\\"; Caoch (meaning \\"blind\\" or \\"squint\\") was the nickname of Niall mac Cathal na Beith\\u00ed mac Annadh \\u00d3 Raghallaigh (died 1296).", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanmahon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Mhath\\u00fana", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "51,170", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Math\\u00fain\'s clan.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Loughtee Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lucht T\\u00ed \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821; Loughtee by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,240", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Loughtee Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lucht T\\u00ed Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821; Loughtee by 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,842", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tullygarvey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Teallach Ghairbh\\u00edth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,871", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"tribe of Gairbh\\u00e9ith\\".", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tullyhaw", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Teallach 
Eathach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "89,852", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Eochaid\'s tribe\\", referring to a king of c. AD 700.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cavan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tullyhunco or Tulloghonoho", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Teallach Dh\\u00fanchadha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,624", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"D\\u00fanchadh\'s tribe.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bunratty Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bun Raite \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,314", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bunratty Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bun Raite Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "53,595", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Burren", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Boirinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "74,360", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The barony is called \\"Burren\\"; the region is now usually \\"The Burren\\", a name meaning \\"great rock.\\" Formerly aka Gragans.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clonderalaw", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cluain idir Dh\\u00e1 L\\u00e1", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "75,878", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Clonderalaw Castle. 
Formerly aka East Corkewasken.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corcomroe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corca Mrua", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "61,385", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Corco Modhruadh, formerly the ruling dynasty in the area. Formerly aka Dowaghy connoghor/Tuoghmore y Conour.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ibrickan or Ibrickane", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Bhreac\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,696", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Bhreac\\u00e1in, formerly the ruling dynasty in the area", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inchiquin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Inse U\\u00ed Chuinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "88,387", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name is Irish for \\"Quinn\'s water meadow.\\" Namesake of Baron Inchiquin", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Islands", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na hOile\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,592", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name refers to the islands of the Fergus estuary. Formerly aka Cloynerawde/Clonraude", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyarta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Fhearta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "68,679", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name from Irish Mag Fearta, \\"plain of graves\\". Formerly aka West Corkewasken.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tulla Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Tulach \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "73,454", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Tulla town. 
Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Clare", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tulla Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Tulach Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "94,919", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Tulla town. Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bantry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Beanntra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,216", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Bantry town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Barretts", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bar\\u00f3idigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,761", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Barrett family.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Barrymore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Barraigh Mh\\u00f3ra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "148,143", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Earl of Barrymore. Name means \\"Great Barrys.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bear", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9arra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "89,986", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of the Beara Peninsula. 
It is said to be named after a princess named B\\u00e9irre, or possibly settlers from Iberia.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery East, East Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thoir, an Roinn Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "67,235", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery East, West Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thoir, an Roinn Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "105,141", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery West, East Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thiar, an Roinn Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "79,263", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbery West, West Division", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbrigh Thiar, an Roinn Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "109,178", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Condons and Clangibbon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cond\\u00fanaigh agus Clann Ghiob\\u00fain", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "78,481", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The territories of two families: the Condons or Cauntons, and the FitzGibbons or White Knight", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cork City", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathair Chorca\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1608", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "2,265", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate, originally including the Liberties which later formed the separate Barony of Cork. 
It contains 7 civil parishes.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corcaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "43,813", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formed from the \\"Liberties of Cork\\", the portion previously within the County of the city of Cork which was not within the borough of Cork.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Courceys", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "C\\u00farsaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "8,812", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the de Courcy barons.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Duhallow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00faiche Ealla", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "232,328", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"land of the Munster Blackwater\\".", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fermoy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Mainistir Fhear Ma\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "121,188", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Fermoy town, which is actually in Condons and Clangibbon", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ibane and Barryroe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Bhamhna agus Barraigh Rua", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1711", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,291", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ibane and Barryroe are peninsulas on opposite sides of Clonakilty Bay The names mean, respectively, \\"Descendants of Bamna\\" and \\"Red-haired Barrys.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Imokilly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Mhic Coille", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "93,617", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the U\\u00ed Meic Caille, a sept of the U\\u00ed Liath\\u00e1in.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kerrycurrihy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ciarra\\u00ed Cuirche", "is_header": 
false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "23,957", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kerrycurrihy and Kinalea united in Down Survey. A tribal name: the Ciarraige Cuirchi.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinalea", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cine\\u00e1l Aodha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "50,692", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kerrycurrihy and Kinalea united in Down Survey. The \\"tribe of A\\u00e9d.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinalmeaky", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cine\\u00e1l mB\\u00e9ice", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,068", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Cen\\u00e9l mBeice \\"Beice\'s people\\", a sept of the O\'Mahonys.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinnatalloon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill na Tal\\u00fan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "27,718", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"Tolamhnach\'s forest,\\" referring to a 7th-century chief of the U\\u00ed Liath\\u00e1in.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kinsale", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cionn tS\\u00e1ile", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "12,430", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kinsale town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muskerry East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00fascra\\u00ed Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "122,874", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Baron Muskerry. The only barony split between the East and West Ridings of County Cork.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Cork", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muskerry West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "M\\u00fascra\\u00ed Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "188,487", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Baron Muskerry. 
| County | Barony | Irish name | Created | Area (acres) | Notes |
|---|---|---|---|---|---|
| … | … | … | … | … | …Named after the ancient tribe of the Múscraige. |
| Cork | Orrery and Kilmore | Orbhraí agus An Choill Mhór | United by 1821 | 69,346 | Namesake of the Earl of Orrery. Named after the Orbhraighe tribe, while Kilmore means "great forest." |
| Donegal | Banagh | Báinigh | Divided in 1791 | 177,288 | Territory of the Cinel Boghaine, descended from Niall of the Nine Hostages. Combined with Boylagh till 1791 |
| Donegal | Boylagh | Baollaigh | Divided in 1791 | 156,245 | Territory of the O'Boyles. Combined with Banagh till 1791 |
| Donegal | Inishowen (or Innishowen) East | Inis Eoghain Thoir | Divided by 1851 | 123,356 | Name means "Eoghan's peninsula." |
| Donegal | Inishowen (or Innishowen) West | Inis Eoghain Thiar | Divided by 1851 | 76,828 | Name means "Eoghan's peninsula." |
| Donegal | Kilmacrenan | Cill Mhic Réanáin | By 1672 | 310,325 | Named after Kilmacrenan village |
| Donegal | Raphoe North | Ráth Bhoth Thuaidh | Divided 1807–1821 | 80,610 | Named after Raphoe town |
| Donegal | Raphoe South | Ráth Bhoth Theas | Divided 1807–1821 | 140,841 | Named after Raphoe town |
| Donegal | Tirhugh | Tír Aodha | By 1672 | 125,828 | Name means "Aodh's country." |
| Down | Ards (or Ardes) Lower | An Aird Íochtarach | Divided by 1851 | 38,462 | Namesake of the Ards Peninsula. Aird is Irish for "promontory." |
| Down | Ards (or Ardes) Upper | An Aird Uachtarach | Divided by 1851 | 29,697 | Namesake of the Ards Peninsula. Aird is Irish for "promontory." |
| Down | Castlereagh Lower | An Caisleán Riabhach Íochtarach | Divided by 1841 | 51,452 | Named after Castlereagh townland. Gives its name to the borough of Castlereagh. |
| Down | Castlereagh Upper | An Caisleán Riabhach Uachtarach | Divided by 1841 | 53,856 | Named after Castlereagh townland. Gives its name to the borough of Castlereagh. |
| Down | Dufferin | An Duifrian | By 1672 | 17,208 | Name from the Irish duibhthrian (black third). |
| Down | Iveagh Lower, Lower Half | Uíbh Eachach Íochtarach, An Leath Íochtair | Divided by 1851 | 46,057 | Named after the Uí Echach Cobo, a Gaelic people and territory in the region. |
| Down | Iveagh Lower, Upper Half | Uíbh Eachach Íochtarach, An Leath Uachtair | Divided by 1851 | 47,538 | Named after the Uí Echach Cobo, a Gaelic people and territory in the region. |
| Down | Iveagh Upper, Lower Half | Uíbh Eachach Uachtarach, An Leath Íochtair | Divided by 1851 | 96,317 | Named after the Uí Echach Cobo, a Gaelic people and territory in the region. |
| Down | Iveagh Upper, Upper Half | Uíbh Eachach Uachtarach, An Leath Uachtair | Divided by 1851 | 63,249 | Named after the Uí Echach Cobo, a Gaelic people and territory in the region. |
| Down | Kinelarty | Cineál Fhártaigh | By 1672 | 40,322 | Name means "Faghartach's kindred." |
| Down | Lecale Lower | Leath Cathail Íochtarach | Divided by 1851 | 30,920 | Namesake of the Lecale peninsula. The name means "Cathal's half." |
| Down | Lecale Upper | Leath Cathail Uachtarach | Divided by 1851 | 30,521 | Namesake of the Lecale peninsula. The name means "Cathal's half." |
| Down | Lordship of Newry | An tIúr | By 1672 | 15,813 | The historic Lordship encompassed lands on both sides of the Down-Armagh border. Later, the jurisdiction of the "Lordship of Newry" for baronial presentment sessions extended only to County Down. |
| Down | Mourne | Múrna | By 1672 | 47,822 | Named after the Mourne Mountains. A half-barony in the Down Survey. |
| Dublin | Balrothery East | Baile an Ridire Thoir | Divided 1842 | 30,005 | Named after Balrothery village. Balrothery existed by 1593. |
| Dublin | Balrothery West | Baile an Ridire Thiar | Divided 1842 | 25,195 | Named after Balrothery village. Balrothery existed by 1593. |
| Dublin | Castleknock | Caisleán Cnucha | By 1593 | 21,371 | Named after Castleknock village (now suburban); from 1861, reduced in size by the expanded borders of Dublin city |
| Dublin | Coolock | An Chúlóg | By 1593 | 26,614 | Named after the historical village of Coolock, now suburban; from 1861, reduced in size by the expanded borders of Dublin city |
| Dublin | Dublin | Baile Átha Cliath | 1840 | 1,693 | Created by the 1840 Acts from land previously liberties in the county of the City. Its name and area were confirmed by the Dublin Baronies Act 1842. |
| Dublin | Dublin City | Cathair Bhaile Átha Cliath | 1548 | 2,114 | Formerly a county corporate |
| Dublin | Nethercross | An Chrois Íochtarach | By 1672 | 21,818 | Named after a cross erected by Saint Cainnech in Finglas. Compare Uppercross. |
| Dublin | Newcastle | An Caisleán Nua | By 1593 | 22,876 | Named after the village of Newcastle, County Dublin. Not related to the Wicklow barony of Newcastle. |
| Dublin | Rathdown | Ráth an Dúin | By 1593 | 29,974 | A half-barony from 1606, with the Wicklow half-barony of Rathdown separated out. From 1861, reduced in size by the expanded borders of Dublin city. |
| Dublin | Uppercross | An Chrois Uachtarach | 1792–1821 | 37,307 | Compare Nethercross. In the Down Survey, Uppercross and Newcastle were not distinguished. |
| Fermanagh | Clanawley or Glenawley | Clann Amhlaoibh | By 1603 | 72,894 | "Awley" is from Mac Amhlaoibh and Mac Amhalghaidh (Irish septs) |
| Fermanagh | Clankelly or Clonkelly | Clann Cheallaigh | By 1603 | 39,067 | Clan of the Kellys |
| Fermanagh | Coole | An Chúil | By 1603 | 17,320 | A half-barony in the Down Survey. Name means "corner." |
| Fermanagh | Knockninny | Cnoc Ninnidh | By 1603 | 27,732 | Named after the hill of Saint Ninnidh |
| Fermanagh | Lurg | Lorg | By 1603 | 66,163 | Named after the Tuath Luirg (Fir Luirg; "tribe/men of the path"). |
| Fermanagh | Magheraboy | An Machaire Buí | By 1603 | 79,038 | Name means "yellow plain" |
| Fermanagh | Magherastephana | An Machaire Steafánach | By 1603 | 58,979 | Name origin unclear; "plain of the FitzStephens?" |
| Fermanagh | Tirkennedy | Tír Cheannada | By 1603 | 56,267 | Named after Fergus son of Cremthann, nicknamed Cennfhota ("long head"). No relation to the surname Kennedy. |
| Galway | Aran or Arran | Árainn | By 1574 | 11,287 | Conterminous with the Aran Islands; Inishmore (Árainn Mhór) is named for its shape (ara = kidney) |
| Galway | Athenry | Baile Átha an Rí | By 1672 | 25,782 | Named after Athenry town; called "Halfe Barony and liberties of Athenrey" in the Down Survey. |
| Galway | Ballymoe | Béal Átha Mó | By 1672 | 89,270 | Named after Ballymoe village; half with Ballymoe, County Roscommon. Full barony existed in Galway by 1574. |
| Galway | Ballynahinch | Baile na hInse | By 1574 | 189,813 | Named after Ballynahinch town; "Ballenanen" in Down Survey (or Hibernia Delinateo) |
| Galway | Clare | Baile Chláir | By 1574 | 127,486 | Namesake of the River Clare and village of Claregalway. The name means "[river of the] plain." |
| Galway | Clonmacnowen or Clonmacnoon | Cluain Mhac nEoghain | By 1672 | 35,467 | "Clanemtoneen" in Down Survey (or Hibernia Delinateo). Name means "Valley of the sons of Eoghan." |
| Galway | Dunkellin | Dún Coillín | By 1574 | 83,371 | Name means "Coillín's hillfort" |
| Galway | Dunmore | Dún Mór | By 1574 | 71,011 | Named after Dunmore village |
| Galway | Galway | Gaillimh | 1610 | 22,492 | Formerly a county corporate: the county of the Town (now city) of Galway |
| Galway | Kilconnell or Kilconnel | Cill Chonaill | By 1574 | 64,819 | Named after Kilconnell village |
| Galway | Killian | Cill Liatháin | By 1574 | 52,388 | Name means "Liatháin's church" |
| Galway | Kiltartan | Cill Tartan | By 1574 | 65,664 | "Killcartar" in Down Survey (or Hibernia Delinateo). Was originally named after Saint Attracta's church. Kiltaraght in 1574. |
| Galway | Leitrim | Liatroim | By 1574 | 109,567 | Now also partly in Clare. Name means "grey ridge." |
| Galway | Longford | An Longfort | By 1574 | 96,506 | Name means "ship landing-ground", referring to a longphort on a tributary of the River Shannon. |
| Galway | Loughrea | Baile Locha Riach | By 1574 | 64,406 | Named after Loughrea town; called "Half Barony of Lougheagh" in the Down Survey. |
| Galway | Moycullen | Maigh Cuilinn | By 1574 | 202,386 | Named after Moycullen village |
| Galway | Ross | An Ros | By 1574 | 77,351 | In County Mayo in 1574; transferred to Galway within decades; since 1898 partly in Mayo. The name means "The promontory." |
| Galway | Tiaquin | Tigh Dachoinne | By 1574 | 110,135 | Name means "House of double coign." |
| Kerry | Clanmaurice | Clann Mhuiris | By 1598 | 120,520 | Name means "Maurice's clan", referring to Maurice FitzGerald, 1st Earl of Desmond. |
| Kerry | Corkaguiny | Corca Dhuibhne | By 1598 | 138,605 | Named after the ancient ruling tribe, the Corcu Duibne. |
| Kerry | Dunkerron North | Dún Ciaráin Thuaidh | Divided by 1851 | 72,414 | Namesake of Dunkerron Castle. Name means "Ciarán's hillfort." |
| Kerry | Dunkerron South | Dún Ciaráin Theas | Divided by 1851 | 96,289 | Namesake of Dunkerron Castle. Name means "Ciarán's hillfort." |
| Kerry | Glanarought or Glanerought | Gleann na Ruachtaí | By 1598 | 121,865 | Name means "Valley of the O'Roughty." |
| Kerry | Iraghticonnor | Oireacht Uí Chonchúir | By 1598 | 88,105 | Name means "Inheritance of the O'Connors." |
| Kerry | Iveragh | Uíbh Ráthach | By 1598 | 159,980 | Name means "Descendants of Ráthach." On the Kilcoolaght East ogham stone (CIIC 211), this name appears in the Primitive Irish form Rittaveccas. |
| Kerry | Magunihy or Magonhy | Maigh gCoinchinn | By 1598 | 166,427 | Name means "Coinchinn's plain"; a personal name meaning "wolf-warrior." |
| Kerry | Trughanacmy or Trughenackmy | Triúcha an Aicme | By 1598 | 194,593 | Name means "cantred of the tribe." |
| Kildare | Carbury or Carbery | Cairbre | By 1672 | 48,286 | Named after Carbury |
| Kildare | Clane | Claonadh | By 1593 | 32,023 | Named after Clane village |
| Kildare | Connell or Great Connell | Connail | By 1593 | 34,785 | Named after [Old] Connell, a holy site and ford near Newbridge. |
| Kildare | Ikeathy and Oughterany | Uí Chéithigh agus Uachtar Fhine | United by 1608 | 25,753 | The baronies of Ikeathy and Oughterany were united some time between 1558 and 1608. "Okeathy Ocerny" in 1593. |
| Kildare | Kilcullen | Cill Chuillinn | By 1593 | 8,492 | Named after Kilcullen town. A half-barony in the Down Survey. |
| Kildare | Kilkea and Moone | Cill Chá agus Maoin | By 1593 | 46,286 | Named after the villages of Kilkea and Moone. |
| Kildare | Naas North | An Nás Thuaidh | By 1593 | 25,579 | Named after Naas town. "Naas Upper" in 1593. |
| Kildare | Naas South | An Nás Theas | By 1593 | 27,478 | Named after Naas town. "Naas Nether" in 1593. |
| Kildare | Narragh and Reban East | An Fhorrach agus an Réabán Thoir | Divided by 1807 | 21,374 | Named after Narragh and Rheban Castle. Namesake of the hereditary Barony of Norragh. |
| Kildare | Narragh and Reban West | An Fhorrach agus an Réabán Thiar | Divided by 1807 | 22,136 | (See Narragh and Reban East) |
| Kildare | Offaly East | Uíbh Fhailí Thoir | Divided by 1807 | 47,029 | Named after Uí Failghe; also the name of County Offaly to the west. Barony of Offaly existed in 1593. |
| Kildare | Offaly West | Uíbh Fhailí Thiar | Divided by 1807 | 40,603 | (See Offaly East) |
| Kildare | North Salt | An Léim Thuaidh | Divided by 1807 | 21,930 | "Salt" derived from Saltus Salmonis, the Latin name for Leixlip. Barony of Salt existed by 1593. |
| Kildare | South Salt | An Léim Theas | Divided by 1807 | 16,655 | (See North Salt) |
| Kilkenny | Callan | Callainn | By 1672 | 5,653 | Named after Callan town; "Callen Liberties" in Down Survey. The 1836 Act "for removing doubts" explicitly states the town and liberties "shall be deemed and taken to be a barony" |
| Kilkenny | Crannagh or Crannach | Crannach | By 1672 | 58,675 | Name means "abounding in trees." |
| Kilkenny | Fassadinin or Fassadining | Fásach an Deighnín | By 1672 | 68,174 | Name means "wilderness by the River Dinan." |
| Kilkenny | Galmoy | Gabhalmhaigh | By 1672 | 40,236 | Name means "plain of the River Goul." |
| Kilkenny | Gowran | Gabhrán | By 1672 | 111,706 | Named after Gowran village |
| Kilkenny | Ida, or "Ida, Igrinn and Iberchon" | Uí Dheá | By 1672 | 60,132 | Now also partly in Wexford. A tribal name: the Uí Dheaghaidh, descendants of Deagaid. |
| Kilkenny | Iverk | Uíbh Eirc | By 1672 | 40,528 | Name means "descendants of Erc." |
| Kilkenny | Kells | Ceanannas | By 1672 | 38,376 | Named after Kells, County Kilkenny. |
| Kilkenny | Kilculliheen | Cill Choilchín | By 1848 | 2,139 | Originally a civil parish in the county of the city of Waterford, transferred to the county in 1840. Its status as a barony separate from Gaultier was not recognised by the census until 1871. |
| Kilkenny | Kilkenny | Cill Chainnigh | 1610 | 921 | Formerly a county corporate: the County of the city of Kilkenny |
| Kilkenny | Knocktopher | Cnoc an Tóchair | By 1672 | 46,765 | Named after Knocktopher village |
| Kilkenny | Shillelogher | Síol Fhaolchair | By 1672 | 36,684 | A tribal name, meaning "descendants of Faolchar", a name meaning "wolf-love." |
| Laois | Ballyadams | Baile Ádaim | By 1672 | 24,081 | Named after Ballyadams Castle |
| Laois | Clandonagh | Clann Donnchadha | 1846 | 43,733 | One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846. |
| Laois | Clarmallagh | Clár Maí Locha | 1846 | 43,533 | One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846. |
| Laois | Cullenagh or Cullinagh | Cuileannach | By 1672 | 44,094 | Named after the Cullenagh Mountains. |
| Laois | Maryborough East | Port Laoise Thoir | Divided by 1807 | 25,160 | Named after Portlaoise, formerly named Maryborough |
| Laois | Maryborough West | Port Laoise Thiar | Divided by 1807 | 41,914 | Named after Portlaoise, formerly named Maryborough |
| Laois | Portnahinch or Portnehinch | Port na hInse | By 1672 | 35,835 | Named after Portnahinch, a landing-ground on the River Barrow. |
| Laois | Slievemargy, Slewmergie, Slieuemargue, Slieuemargy | Sliabh Mairge | By 1672 | 35,490 | Named after the Slievemargy hills. Now also partly in Carlow |
| Laois | Stradbally | An Sráidbhaile | By 1672 | 27,895 | Named after Stradbally village |
| Laois | Tinnahinch or Tinnehinch | Tigh na hInse | By 1672 | 54,187 | Named after Tinnahinch village |
| Laois | Upper Woods or Upperwoods | An Choill Uachtarach | 1846 | 48,926 | One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846. |
| Leitrim | Carrigallen | Carraig Álainn | By 1672 | 62,395 | Named after Carrigallen |
| Leitrim | Drumahaire | Droim Dhá Thiar | By 1574 | 110,146 | Named after Drumahaire. Considered part of Sligo in 1574. |
| Leitrim | Leitrim | Liatroim | By 1574 | 59,164 | Named after Leitrim village. Considered part of Sligo in 1574. |
| Leitrim | Mohill | Maothail | By 1672 | 62,904 | Named after Mohill |
| Leitrim | Rosclougher or Rossclogher | Ros Clochair | By 1672 | 81,601 | Named after Rosclogher Castle. |
| Limerick | Clanwilliam | Clann Liam | By 1672 | 55,627 | Name means "clan of William de Burgh." |
| Limerick | Connello (or Conello) Lower | Conallaigh Íochtaracha | Divided by 1821 | 47,850 | Territory of the O'Connells. |
| Limerick | Connello (or Conello) Upper | Conallaigh Uachtaracha | Divided by 1821 | 61,256 | Territory of the O'Connells. |
| Limerick | Coonagh | Uí Chuanach | By 1672 | 36,323 | Name means "descendants of Cuana." |
| Limerick | Coshlea | Cois Sléibhe | By 1672 | 95,232 | Name literally means "foot of the mountain." |
| Limerick | Coshma | Cois Máighe | By 1672 | 49,018 | Name means "edge of the plain." |
| Limerick | Glenquin | Gleann an Choim | By 1841 | 96,402 | Prior to 1841, part of Connello Upper. |
| Limerick | Kenry | Caonraí | By 1672 | 26,222 | From the Cáenraige, an ancient tribe. |
| Limerick | Kilmallock or Kilmallock Liberties | Cill Mocheallóg | By 1672 | 4,074 | Named after Kilmallock. Not enumerated in the 1821 census. |
| Limerick | Limerick City | Cathair Luimnigh | 1609 | 2,074 | Formerly a county corporate; includes the "[South] Liberties" of the Down Survey |
| Limerick | North Liberties of Limerick city | Na Líbeartaí Thuaidh | By 1872 | 3,050 | Formerly Liberties; the "North Liberties" were recorded separately from the "South Liberties" in the Down Survey. |
| Limerick | Owneybeg | Uaithne Beag | By 1672 | 27,211 | The territory of Uaithni encompassed Owneybeg and part of Owney and Arra |
| Limerick | Pubblebrien | Pobal Bhriain | By 1672 | 30,138 | Name means "Brian's people", referring to Brian Boru. |
| Limerick | Shanid | Seanaid | By 1841 | 84,075 | Prior to 1841, part of Connello Lower. |
| Limerick | Smallcounty | An Déis Bheag | By 1672 | 44,424 | The Irish name means "the little vassal tribe"; see Deisi. |
| Londonderry | Coleraine | Cúil Raithin | By 1591 | 85,836 | Named after Coleraine town, although the town itself is in the North East Liberties of Coleraine. A half-barony in 1807, including the south-west liberties of Coleraine. |
| Londonderry | Keenaght or Kenaught | Cianachta | By 1591 (as Limavady) | 130,329 | Named after the Ciannachta tribe, descended from Tadc mac Céin. |
| Londonderry | Loughinsholin | Loch Inse Uí Fhloinn | By 1591 | 171,662 | Name means "lough of O'Lynn's island", referring to a lake containing a crannóg. |
| Londonderry | North East Liberties of Coleraine | Líbeartaí Thoir Thuaidh Chúil Raithin | By 1672 | 18,005 | Formerly the Liberties of Coleraine town. |
| Londonderry | North-West Liberties of Londonderry | Líbeartaí Thiar Thuaidh Dhoire | By 1672 | 11,506 | Formerly the Liberties of Londonderry city. |
| Londonderry | Tirkeeran or Tyrkeeran | Tír Mhic Caoirthinn | By 1591 (as Anagh) | 94,014 | A half-barony in 1807, including the south-east liberties of Londonderry. Name means "land of the sons of Cartin." |
| Longford | Ardagh | Ardach | By 1629 | 40,223 | Named after Ardagh village |
| Longford | Granard | Gránard | By 1629 | 63,857 | Named after Granard village |
| Longford | Longford | An Longfort | By 1629 | 57,243 | Named after Longford town |
| Longford | Moydow | Maigh Dumha | By 1629 | 34,470 | Named after Moydow village |
| Longford | Rathcline | Ráth Claon | By 1629 | 40,421 | Named after Rathcline Castle. |
| Longford | Shrule or Abbeyshrule | Sruthail | By 1629 | 21,006 | Named after Abbeyshrule |
| Louth | Ardee | Baile Átha Fhirdhia | By 1593 | 53,832 | Named after Ardee town |
| Louth | Drogheda | … | … | … | … |
false, "column_span": 1, "row_span": 1}, {"value": "Droichead \\u00c1tha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1412", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "4,497", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate. A barony separate from the county was formed in 1840 from the portion previously within the County of the town of Drogheda which was not within the town of Drogheda.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dundalk Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Dealgan \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,803", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dundalk town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dundalk Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Dealgan Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1821", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,750", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dundalk town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ferrard", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fir Arda", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1593", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,806", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From Fera Arda Ciannachta, \\"men of high Ciannachta.\\" Namesake of Viscount Massereene and Ferrard", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Louth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "L\\u00fa", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,704", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Louth village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Burrishoole", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Buir\\u00edos Umhaill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "145,172", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Burrishoole Castle; a few sources list Burrishoole split into \\"Burrishoole North\\" and \\"Burrishoole South\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceara", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "134,206", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Carra village. 
Called Burriscarra/Burisker in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanmorris", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Mhuiris", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,252", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Namesake of Baron Clanmorris. Name means \\"Muiris\' family.\\" Called Croslwyhin/Crossboyne in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Costello or Clancostello", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coistealaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "143,874", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Roscommon. Named after the Hiberno-Norman MacOisdealbhaigh (Costello) family. Called Beallahaunes/Ballyhaunis in 1574", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Erris", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iorras", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "230,452", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Erris village. A half-barony in the Gilbert Manuscript of the Down Survey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gallen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gaileanga", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "119,153", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Gailenga tribe. 
Beallalahane in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilmaine", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Mhe\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "95,284", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilmaine village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Murrisk", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muraisc", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "137,061", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Murrisk village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Mayo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tirawley or Tyrawley", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Amhlaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "246,822", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Amlaid\'s land\\", referring to Amalgaid mac Fiachrae. \\"Many\\"/Moyne in 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00e9ise \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "20,013", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece barony present by 1542. Named after the D\\u00e9isi Becc.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00e9ise Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,763", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Deece barony present by 1542. Named after the D\\u00e9isi Becc.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Duleek Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Damhliag \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,772", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Duleek village. Now also partly in Louth. 
Duleek barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Duleek Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Damhliag Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,463", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Duleek village. Duleek barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dunboyne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan B\\u00fainne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "16,781", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dunboyne town.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fore or Demifore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Fhobhair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "42,388", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Fore, County Westmeath since 1542. Named after Fore Abbey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kells Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceanannas \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "36,171", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kells town. Kells barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kells Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ceanannas Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,552", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kells town. 
Kells barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lune", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lu\\u00edne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,326", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Luighne tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Morgallion", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Machaire Gaileang", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,492", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"plain of the Gailenga\\", a medieval tribe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath (or Moyfenragh) Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Fionnr\\u00e1ithe \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,313", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath (or Moyfenragh) Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Fionnr\\u00e1ithe Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,696", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Navan Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Uaimh \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,835", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Navan town. Navan barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Navan Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Uaimh Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "17,651", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Navan town. 
Navan barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ratoath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th T\\u00f3", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,697", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ratoath village.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Skreen or Skryne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Scr\\u00edn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,891", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Skryne village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Slane Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Shl\\u00e1ine \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1791", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "26,224", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Slane village. Slane barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Meath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Slane Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Shl\\u00e1ine Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1791", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,211", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Slane village. 
Slane barony present by 1542", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cremorne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cr\\u00edoch Mh\\u00farn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "84,508", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From Irish meaning \\"border of the Mugdorna.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dartree or Dartry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dartra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,610", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name from the ancient kingdom of Dartraighe.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Farney", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fearnaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "67,333", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named from the ancient kingdom of Fernmag, \\"plain of alders.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Muineach\\u00e1n", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,735", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Monaghan town.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Monaghan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Trough", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Tri\\u00facha", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1585", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,376", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From the Irish tr\\u00edcha c\\u00e9t, a unit of territory in Medieval Ireland.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballyboy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Bu\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "32,398", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballyboy village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballybritt", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Bhriotaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "52,378", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballybritt Castle.", "is_header": false, 
"column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballycowen", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Mhic Comhainn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,610", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballycowan Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clonlisk", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cluain Leisc", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,052", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Clonlisk Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coolestown", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Ch\\u00fala\\u00edgh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,866", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Coolestown, the former name of Edenderry.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Eglish or Fercale", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Eaglais", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "28,697", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The name means \\"church,\\" while Fercale means \\"men of the churches.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Garrycastle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Garra\\u00ed an Chaisle\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "102,841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Garrycastle", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Geashill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "G\\u00e9isill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,864", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Geashill village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilcoursey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Chuairs\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "19,274", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilcoursey Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, 
"column_span": 1, "row_span": 1}, {"value": "Philipstown Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Daingean \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,669", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Philipstown, now renamed Daingean", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Philipstown Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Daingean Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,087", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Philipstown, now renamed Daingean", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Offaly", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Warrenstown", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Bhair\\u00ednigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "21,456", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballybrittain (Warrenstown) Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Athlone North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Luain Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,863", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Athlone town. North and South not separated in 1871 census. The original Athlone barony existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Athlone South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile \\u00c1tha Luain Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "79,659", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Athlone town. North and South not separated in 1871 census.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballintober North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Tobair Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "30,853", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballintober town (now in Castlereagh barony.) 
The original Ballintober barony existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballintober South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Tobair Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,113", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Ballintober town (now in Castlereagh barony.) The original Ballintober barony existed by 1574.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballymoe", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "B\\u00e9al \\u00c1tha M\\u00f3", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "23,287", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Ballymoe, County Galway. Named after Ballymoe village, on the County Galway side of the River Suck.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Boyle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Mainistir na B\\u00faille", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "81,163", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Boyle town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Castlereagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Caisle\\u00e1n Riabhach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "82,081", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Castlerea town. Previously one of three sections of Ballintober barony.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Frenchpark", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Gar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "71,203", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Frenchpark village; previously part of the barony of Boyle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moycarn or Moycarnon or Moycarne or Moycarnan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Charn\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,595", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Galway. 
A half-barony in 1807.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Roscommon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ros Com\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "81,584", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Roscommon town, which is in Ballintober South", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Carbury", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cairbre", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "73,685", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided into Upper and Lower baronies before 1841. Named after the ancient t\\u00faath of the Cairbre Drom Cliabh.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coolavin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "C\\u00fail \\u00d3 bhFinn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "25,473", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"corner of the descendants of Finn.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corran", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Corann", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "45,376", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Corann village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Leyny or Leney", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Lu\\u00edne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "121,233", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Luighne Connacht tribe", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tireragh or Tyreragh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Fhiachrach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "106,598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Now also partly in Mayo. 
Name means \\"land of the U\\u00ed Fiachrach.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Sligo", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Tirerril or Tyraghrill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "T\\u00edr Oirill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "75,812", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Olliol\'s land\\", referring to Ailill mac Echach Mugmed\\u00f3in.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clanwilliam", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clann Liam", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "115,755", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"clan of William de Burgh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Eliogarty", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\u00c9ile U\\u00ed Fh\\u00f3garta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "90,257", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A half-barony (with Ikerrin) in the Down Survey. Name means \\"\\u00c9ile of the U\\u00ed Fhogartaigh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iffa and Offa East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "56,819", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Iffa and Offa West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1807", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "117,175", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ikerrin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Chair\\u00edn", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "69,805", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A half-barony (with Eliogarty) in the Down Survey. 
Name means \\"descendants of Cair\\u00edn.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilnamanagh Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill na Manach \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1838", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "42,041", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilnamanagh town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilnamanagh Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coill na Manach Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided in 1838", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "59,990", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Kilnamanagh town.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Middle Third", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Trian Me\\u00e1nach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "113,544", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From trian meaning \\"third\\" or \\"portion.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ormond Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Urumhain \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "127,222", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Compare Ormond (\\"east Munster\\")", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ormond Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Urumhain Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "79,471", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Compare Ormond (\\"east Munster\\")", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Owney and Arra", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Uaithne agus Ara", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United 1672\\u20131792", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "85,494", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Owney Mulrian\\" and Arra were separate baronies in the Down Survey, named respectively after the ancient kingdom of Uaithni and the River Ara.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tipperary", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Slievardagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Sliabh Ardach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", 
"is_header": false, "column_span": 1, "row_span": 1}, {"value": "90,772", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "\\"Slevardagh & Compsy\\" in the Down Survey. The name means \\"high mountain of the Eoganachta.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clogher", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clochar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "97,569", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Clogher town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dungannon Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Geanainn \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Dungannon by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "42,794", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dungannon town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dungannon Middle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Geanainn L\\u00e1ir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Dungannon by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "87,541", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dungannon town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dungannon Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "D\\u00fan Geanainn Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Dungannon by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "85,995", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Dungannon town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Omagh East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An \\u00d3maigh Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1807\\u201321; Omagh by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "132,149", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Omagh town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Omagh West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An \\u00d3maigh Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1807\\u201321; Omagh by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "93,321", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Omagh town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Strabane Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Srath 
B\\u00e1n \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Strabane by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "117,419", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Strabane town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Tyrone", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Strabane Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Srath B\\u00e1n Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1851; Strabane by 1591", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "121,282", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Strabane town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Coshmore and Coshbride", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cois Abha M\\u00f3ire agus Cois Bhr\\u00edde", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United by 1831", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "88,253", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baronies of Coshmore and Coshbride were separate in the 1821 census. The names mean, respectively, \\"Bank of the Munster Blackwater\\" and \\"Bank of the River Bride.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies-within-Drum", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na D\\u00e9ise laistigh den Drom", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies divided by 1746", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "57,325", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies south of the Drum Hills.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies-without-Drum", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Na D\\u00e9ise lasmuigh den Drom", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies divided by 1746", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "129,894", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Decies north of the Drum Hills. \\"Without\\" is used with the meaning of \\"beyond\\" or \\"outside.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gaultier or Gaultiere", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Ghaillt\\u00edr", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "29,447", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilculliheen was formerly a parish of this barony. 
Name means \\"land of foreigners,\\" referring to Vikings.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Glenahiry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gleann na hUidhre", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,940", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"valley of the Nier\\", referring to the Nier River.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Middle Third or Middlethird", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Trian Me\\u00e1nach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,609", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "From trian meaning \\"third\\" or \\"portion.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Upperthird or Upper Third", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Uachtar T\\u00edre", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "63,846", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name originally meant \\"Upper country\\"; probably acquired \\"third\\" in name by analogy with Middle Third.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Waterford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Waterford City", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cathair Phort L\\u00e1irge", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1574", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "532", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Formerly a county corporate.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Brawny", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bre\\u00e1mhaine", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "10,070", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "The ancient territory of Bregmaine.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Clonlonan", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cluain Lon\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "32,095", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"Lon\\u00e1n\'s meadow.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corkaree", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Corca Raoi", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": 
"23,787", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A tribal name, \\"descendants of Raoi.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Delvin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Dealbhna", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,062", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Delvin village", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Farbill", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fir Bhile", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "35,453", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A tribal name: \\"men of the sacred tree.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fartullagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fir Thulach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "37,512", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Previously Tyrrells country. Name means \\"men of the hillock\\", a tribal name.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fore or Demifore", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile Fhobhair", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "49,056", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Fore, County Meath. Named after Fore Abbey.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Kilkenny West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Cill Chainnigh Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "31,169", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Previously Maherquirke, Dillons country", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyashel and Magheradernon", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Asail agus Machaire \\u00d3 dTiarn\\u00e1in", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,565", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moyashel and Magheradernon listed separately in 1542. 
They formed the ancient territories of Mag nAssail (Assail\'s plain) and the plain of the O\'Tiernans.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moycashel", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Maigh Chaisil", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "47,097", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Originally the Barony of Rossaughe; before that, Delamares country. Name means \\"plain of the stone ringfort.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Moygoish", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Mhac gCuais", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "39,483", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A tribal name: \\"Descendants of the Son of Cuas.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Westmeath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathconrath", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th Conarta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1542", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "48,415", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Rathconrath village; previously Daltons country", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Bealach Caoin Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen created 1606; Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "45,413", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen means \\"way of sorrow.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Bealach Caoin Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen created 1606; Divided by 1868", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,986", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballaghkeen means \\"way of sorrow.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bantry", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Beanntra\\u00ed", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "101,598", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the Bendtraigi Laigen, the former ruling people.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Bargy", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "U\\u00ed Bhairrche", "is_header": false, 
"column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "40,002", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ruling U\\u00ed Bairrche family, who claimed descent from D\\u00e1ire Barrach.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Forth", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Fotharta", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "38,384", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "A Fortuatha was a kingdom not ruled directly by members of the dominant dynasty of a province. This area was ruled by Fothairt in Chairn.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Gorey", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Guaire", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "81,913", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Gorey town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Scarawalsh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Scairbh Bhailis", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "106,650", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Name means \\"rocky ford of light.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shelburne", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol Bhroin", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "By 1672", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "51,103", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the tribe, S\\u00edl Broin, \\"offspring of Broin.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shelmaliere East", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol Maolu\\u00edr Thoir", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "16,363", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\"", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wexford", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shelmaliere West", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol Maolu\\u00edr Thiar", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1841", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "50,299", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\"", "is_header": false, "column_span": 1, "row_span": 1}], 
[{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Arklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An tInbhear M\\u00f3r", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "66,980", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Arklow town", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballinacor North", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile na Corra Thuaidh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1832\\u20135", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "74,109", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "United barony of Talbotstown created in 1606, and divided into half-baronies for civil law purposes in 1798. Named after Ballinacor Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Ballinacor South", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile na Corra Theas", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided 1832\\u20135", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "78,316", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "(See Ballinacor North)", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Newcastle", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "An Caisle\\u00e1n Nua", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "51,938", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after the village of Newcastle, County Wicklow. Not related to County Dublin barony of the same name.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Rathdown", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "R\\u00e1th an D\\u00fain", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "33,462", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Half with Rathdown, County Dublin. Named after Rathdown Castle.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Shillelagh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "S\\u00edol \\u00c9alaigh", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "1606", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "44,348", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Shillelagh village. 
A half-barony in 1807.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Talbotstown Lower", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Talb\\u00f3idigh \\u00cdochtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1801", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "86,857", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Named after Talbotstown village. United barony of Talbotstown created in 1606.", "is_header": false, "column_span": 1, "row_span": 1}], [{"value": "Wicklow", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Talbotstown Upper", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Baile an Talb\\u00f3idigh Uachtarach", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "Divided by 1801", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "62,510", "is_header": false, "column_span": 1, "row_span": 1}, {"value": "(See Talbotstown Lower)", "is_header": false, "column_span": 1, "row_span": 1}]], "table_webpage_url": "http://en.wikipedia.org/wiki/List_of_baronies_of_Ireland", "table_page_title": "List of baronies of Ireland", "table_section_title": "Final list", "table_section_text": "The final catalogue of baronies numbered 331, with an average area of 255 km\\u00b2 (98 sq mi; 63,000 acres); therefore, each county was divided, on average, into 10 or 11 baronies.", "highlighted_cells": [[315, 0], [315, 1]], "example_id": -7506134436007289724, "sentence_annotations": [{"original_sentence": "Rathconrath is also one of the baronies in Co. Westmeath, see list of baronies of Ireland.", "sentence_after_deletion": "Rathconrath is one of the baronies in Co. Westmeath of baronies of Ireland.", "sentence_after_ambiguity": "Rathconrath is one of the baronies in Co. 
Westmeath of baronies of Ireland.", "final_sentence": "Rathconrath is one of the baronies of the county of Westmeath in Ireland."}], "table_list": [["County", "Name", "Irish name", "Date", "Area (acres, 1872)", "Notes"], ["Antrim", "Antrim Lower", "Aontroim \\u00cdochtarach", "Divided 1792\\u20131798", "80,826", "Named after Antrim town"], ["Antrim", "Antrim Upper", "Aontroim Uachtarach", "Divided 1792\\u20131798", "36,489", "Named after Antrim town"], ["Antrim", "Belfast Lower", "B\\u00e9al Feirste \\u00cdochtarach", "Divided 1792\\u20131798", "56,142", "Named after Belfast town (now city)"], ["Antrim", "Belfast Upper", "B\\u00e9al Feirste Uachtarach", "Divided 1792\\u20131798", "32,942", "Named after Belfast town (now city)"], ["Antrim", "Carrickfergus", "Carraig Fhearghais", "By 1325", "16,702", "Formerly a county corporate: the County of the Town of Carrickfergus"], ["Antrim", "Cary or Carey", "Cathra\\u00ed", "By 1672", "75,035", "Named after the Cothrugu (Cotraigib, Crotraigib), an ancient tribe."], ["Antrim", "Dunluce Lower", "D\\u00fan Libhse \\u00cdochtarach", "Divided 1792\\u20131798", "30,575", "See also Dunluce Castle."], ["Antrim", "Dunluce Upper", "D\\u00fan Libhse Uachtarach", "Divided 1792\\u20131798", "52,788", "See also Dunluce Castle."], ["Antrim", "Glenarm Lower", "Gleann Arma \\u00cdochtarach", "Divided 1792\\u20131798", "64,945", "Named after Glenarm village"], ["Antrim", "Glenarm Upper", "Gleann Arma Uachtarach", "Divided 1792\\u20131798", "24,032", "Named after Glenarm village"], ["Antrim", "Kilconway", "Coill Chonmha\\u00ed", "By 1672", "68,640", "Name means \\"forest of the Conmha\\u00edcne\\"."], ["Antrim", "Massereene Lower", "M\\u00e1sa R\\u00edona \\u00cdochtarach", "Divided 1792\\u20131798", "27,228", "Namesake of Viscount Massereene. The name means \\"Queen\'s hill\\" and originally belonged to a monastery."], ["Antrim", "Massereene Upper", "M\\u00e1sa R\\u00edona Uachtarach", "Divided 1792\\u20131798", "56,675", "Namesake of Viscount Massereene. 
The name means \\"Queen\'s hill\\" and originally belonged to a monastery."], ["Antrim", "Toome Lower", "Tuaim \\u00cdochtarach", "Divided 1792\\u20131798", "36,135", "Named after Toome village"], ["Antrim", "Toome Upper", "Tuaim Uachtarach", "Divided 1792\\u20131798", "47,571", "Named after Toome village"], ["Armagh", "Armagh", "Ard Mhacha", "By 1609", "47,645", "Named after Armagh town (now city)"], ["Armagh", "Fews Lower", "Na Fe\\u00e1 \\u00cdochtaracha", "Divided by 1745; Fews by 1609", "29,757", "From Irish Na Feadha, \\"The lengths\\""], ["Armagh", "Fews Upper", "Na Fe\\u00e1 Uachtaracha", "Divided by 1745; Fews by 1609", "47,433", "From Irish Na Feadha, \\"The lengths\\""], ["Armagh", "Oneilland East", "U\\u00ed Niall\\u00e1in Thoir", "Divided 1792\\u20131807; Oneilland by 1609", "20,890", "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills."], ["Armagh", "Oneilland West", "U\\u00ed Niall\\u00e1in Thiar", "Divided 1792\\u20131807; Oneilland by 1609", "57,584", "Named after the U\\u00ed Niall\\u00e1in tribe \\u2014 not to be confused with the O\'Neills."], ["Armagh", "Orior Lower", "Na hOirthir \\u00cdochtaracha", "Divided 1792\\u20131807; Orior by 1609", "31,927", "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla."], ["Armagh", "Orior Upper", "Na hOirthir Uachtaracha", "Divided 1792\\u20131807; Orior by 1609", "49,086", "From the tribe of the Airthir (\\"easterners\\"), part of the Airg\\u00edalla."], ["Armagh", "Tiranny or Turaney", "Tuath Threana", "By 1609", "27,397", "Named after the U\\u00ed Threna tribe."], ["Carlow", "Carlow", "Ceatharlach", "By 1672", "31,353", "Named after Carlow town"], ["Carlow", "Forth", "Fotharta", "By 1672", "39,510", "Named from the Irish Fothairt Mag Fe\\u00e1, \\"fothairt of the beech plain.\\" A fothairt was a kingdom not ruled by a branch of the provincial ruling family."], ["Carlow", "Idrone East", "U\\u00ed Dhr\\u00f3na Thoir", "Divided in 1799", "52,857", "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na."], ["Carlow", "Idrone West", "U\\u00ed Dhr\\u00f3na Thiar", "Divided in 1799", "23,066", "Named after the ancient ruling family, the U\\u00ed Dr\\u00f3na."], ["Carlow", "Rathvilly", "R\\u00e1th Bhile", "By 1672", "44,806", "Named after Rathvilly village"], ["Carlow", "St. Mullin\'s Lower", "Tigh Moling \\u00cdochtarach", "Divided by 1841", "21,914", "Named after St Mullin\'s village. Does not border St. Mullin\'s Upper."], ["Carlow", "St. Mullin\'s Upper", "Tigh Moling Uachtarach", "Divided by 1841", "7,784", "Named after St. 
Mullin\'s village; the land was a detached fragment of the original St."], ["Cavan", "Castlerahan", "Caisle\\u00e1n Raithin", "By 1609", "69,279", "Named after Castlerahan parish."], ["Cavan", "Clankee", "Clann Chaoich", "By 1609", "64,377", "The name means \\"Caoch\'s clan\\"; Caoch (meaning \\"blind\\" or \\"squint\\") was the nickname of Niall mac Cathal na Beith\\u00ed mac Annadh \\u00d3 Raghallaigh (died 1296)."], ["Cavan", "Clanmahon", "Clann Mhath\\u00fana", "By 1609", "51,170", "The name means \\"Math\\u00fain\'s clan.\\""], ["Cavan", "Loughtee Lower", "Lucht T\\u00ed \\u00cdochtarach", "Divided by 1821; Loughtee by 1609", "28,240", "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\""], ["Cavan", "Loughtee Upper", "Lucht T\\u00ed Uachtarach", "Divided by 1821; Loughtee by 1609", "63,842", "Name derives from Loch an To\\u00edghe, \\"lake of the house.\\""], ["Cavan", "Tullygarvey", "Teallach Ghairbh\\u00edth", "By 1609", "59,871", "The name means \\"tribe of Gairbh\\u00e9ith\\"."], ["Cavan", "Tullyhaw", "Teallach Eathach", "By 1609", "89,852", "The name means \\"Eochaid\'s tribe\\", referring to a king of c. AD 700."], ["Cavan", "Tullyhunco or Tulloghonoho", "Teallach Dh\\u00fanchadha", "By 1609", "39,624", "The name means \\"D\\u00fanchadh\'s tribe.\\""], ["Clare", "Bunratty Lower", "Bun Raite \\u00cdochtarach", "Divided by 1841", "57,314", "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574."], ["Clare", "Bunratty Upper", "Bun Raite Uachtarach", "Divided by 1841", "53,595", "Named after Bunratty village. Bunratty aka Dangan-i-viggan or Dangan existed by 1574."], ["Clare", "Burren", "Boirinn", "By 1574", "74,360", "The barony is called \\"Burren\\"; the region is now usually \\"The Burren\\", a name meaning \\"great rock.\\" Formerly aka Gragans."], ["Clare", "Clonderalaw", "Cluain idir Dh\\u00e1 L\\u00e1", "By 1574", "75,878", "Named after Clonderalaw Castle. Formerly aka East Corkewasken."], ["Clare", "Corcomroe", "Corca Mrua", "By 1574", "61,385", "Named after the Corco Modhruadh, formerly the ruling dynasty in the area. Formerly aka Dowaghy connoghor/Tuoghmore y Conour."], ["Clare", "Ibrickan or Ibrickane", "U\\u00ed Bhreac\\u00e1in", "By 1672", "56,696", "Named after the U\\u00ed Bhreac\\u00e1in, formerly the ruling dynasty in the area"], ["Clare", "Inchiquin", "Inse U\\u00ed Chuinn", "By 1672", "88,387", "Name is Irish for \\"Quinn\'s water meadow.\\" Namesake of Baron Inchiquin"], ["Clare", "Islands", "Na hOile\\u00e1in", "By 1574", "63,592", "Name refers to the islands of the Fergus estuary. Formerly aka Cloynerawde/Clonraude"], ["Clare", "Moyarta", "Maigh Fhearta", "By 1574", "68,679", "Name from Irish Mag Fearta, \\"plain of graves\\". Formerly aka West Corkewasken."], ["Clare", "Tulla Lower", "An Tulach \\u00cdochtarach", "Divided by 1841", "73,454", "Named after Tulla town. Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574"], ["Clare", "Tulla Upper", "An Tulach Uachtarach", "Divided by 1841", "94,919", "Named after Tulla town. Tully (formerly aka Tullaghnenaspule/Tullaghenaspy) existed by 1574"], ["Cork", "Bantry", "Beanntra\\u00ed", "By 1672", "59,216", "Named after Bantry town"], ["Cork", "Barretts", "Bar\\u00f3idigh", "By 1672", "31,761", "Named after the Barrett family."], ["Cork", "Barrymore", "Barraigh Mh\\u00f3ra", "By 1672", "148,143", "Namesake of the Earl of Barrymore. Name means \\"Great Barrys.\\""], ["Cork", "Bear", "B\\u00e9arra", "By 1672", "89,986", "Namesake of the Beara Peninsula. 
It is said to be named after a princess named B\\u00e9irre, or possibly settlers from Iberia."], ["Cork", "Carbery East, East Division", "Cairbrigh Thoir, an Roinn Thoir", "Divided by 1821", "67,235", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Carbery East, West Division", "Cairbrigh Thoir, an Roinn Thiar", "Divided by 1821", "105,141", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Carbery West, East Division", "Cairbrigh Thiar, an Roinn Thoir", "Divided by 1821", "79,263", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Carbery West, West Division", "Cairbrigh Thiar, an Roinn Thiar", "Divided by 1821", "109,178", "Formerly one large barony of Carbery, named after the U\\u00ed Chairpre."], ["Cork", "Condons and Clangibbon", "Cond\\u00fanaigh agus Clann Ghiob\\u00fain", "By 1672", "78,481", "The territories of two families: the Condons or Cauntons, and the FitzGibbons or White Knight"], ["Cork", "Cork City", "Cathair Chorca\\u00ed", "1608", "2,265", "Formerly a county corporate, originally including the Liberties which later formed the separate Barony of Cork. It contains 7 civil parishes."], ["Cork", "Cork", "Corcaigh", "By 1841", "43,813", "Formed from the \\"Liberties of Cork\\", the portion previously within the County of the city of Cork which was not within the borough of Cork."], ["Cork", "Courceys", "C\\u00farsaigh", "By 1672", "8,812", "Named after the de Courcy barons."], ["Cork", "Duhallow", "D\\u00faiche Ealla", "By 1672", "232,328", "Name means \\"land of the Munster Blackwater\\"."], ["Cork", "Fermoy", "Mainistir Fhear Ma\\u00ed", "By 1672", "121,188", "Namesake of Fermoy town, which is actually in Condons and Clangibbon"], ["Cork", "Ibane and Barryroe", "U\\u00ed Bhamhna agus Barraigh Rua", "United by 1711", "35,291", "Ibane and Barryroe are peninsulas on opposite sides of Clonakilty Bay. The names mean, respectively, \\"Descendants of Bamna\\" and \\"Red-haired Barrys.\\""], ["Cork", "Imokilly", "U\\u00ed Mhic Coille", "By 1672", "93,617", "Named after the U\\u00ed Meic Caille, a sept of the U\\u00ed Liath\\u00e1in."], ["Cork", "Kerrycurrihy", "Ciarra\\u00ed Cuirche", "Divided by 1821", "23,957", "Kerrycurrihy and Kinalea united in Down Survey. A tribal name: the Ciarraige Cuirchi."], ["Cork", "Kinalea", "Cine\\u00e1l Aodha", "Divided by 1821", "50,692", "Kerrycurrihy and Kinalea united in Down Survey. The \\"tribe of A\\u00e9d.\\""], ["Cork", "Kinalmeaky", "Cine\\u00e1l mB\\u00e9ice", "By 1672", "36,068", "Named after the Cen\\u00e9l mBeice \\"Beice\'s people\\", a sept of the O\'Mahonys."], ["Cork", "Kinnatalloon", "Coill na Tal\\u00fan", "By 1672", "27,718", "The name means \\"Tolamhnach\'s forest,\\" referring to a 7th-century chief of the U\\u00ed Liath\\u00e1in."], ["Cork", "Kinsale", "Cionn tS\\u00e1ile", "By 1672", "12,430", "Named after Kinsale town"], ["Cork", "Muskerry East", "M\\u00fascra\\u00ed Thoir", "Divided by 1821", "122,874", "Namesake of Baron Muskerry. The only barony split between the East and West Ridings of County Cork."], ["Cork", "Muskerry West", "M\\u00fascra\\u00ed Thiar", "Divided by 1821", "188,487", "Namesake of Baron Muskerry. Named after the ancient tribe of the M\\u00fascraige."], ["Cork", "Orrery and Kilmore", "Orbhra\\u00ed agus An Choill Mh\\u00f3r", "United by 1821", "69,346", "Namesake of Earl of Orrery. 
Named after the Orbhraighe tribe, while Kilmore means \\"great forest.\\""], ["Donegal", "Banagh", "B\\u00e1inigh", "Divided in 1791", "177,288", "Territory of the Cinel Boghaine, descended from Niall of the Nine Hostages. Combined with Boylagh till 1791"], ["Donegal", "Boylagh", "Baollaigh", "Divided in 1791", "156,245", "Territory of the O\'Boyles. Combined with Banagh till 1791"], ["Donegal", "Inishowen (or Innishowen) East", "Inis Eoghain Thoir", "Divided by 1851", "123,356", "Name means \\"Eoghan\'s peninsula.\\""], ["Donegal", "Inishowen (or Innishowen) West", "Inis Eoghain Thiar", "Divided by 1851", "76,828", "Name means \\"Eoghan\'s peninsula.\\""], ["Donegal", "Kilmacrenan", "Cill Mhic R\\u00e9an\\u00e1in", "By 1672", "310,325", "Named after Kilmacrenan village"], ["Donegal", "Raphoe North", "R\\u00e1th Bhoth Thuaidh", "Divided 1807\\u20131821", "80,610", "Named after Raphoe town"], ["Donegal", "Raphoe South", "R\\u00e1th Bhoth Theas", "Divided 1807\\u20131821", "140,841", "Named after Raphoe town"], ["Donegal", "Tirhugh", "T\\u00edr Aodha", "By 1672", "125,828", "Name means \\"Aodh\'s country.\\""], ["Down", "Ards (or Ardes) Lower", "An Aird \\u00cdochtarach", "Divided by 1851", "38,462", "Namesake of the Ards Peninsula. Aird is Irish for \\"promontory.\\""], ["Down", "Ards (or Ardes) Upper", "An Aird Uachtarach", "Divided by 1851", "29,697", "Namesake of the Ards Peninsula. Aird is Irish for \\"promontory.\\""], ["Down", "Castlereagh Lower", "An Caisle\\u00e1n Riabhach \\u00cdochtarach", "Divided by 1841", "51,452", "Named after Castlereagh townland. Gives its name to the borough of Castlereagh."], ["Down", "Castlereagh Upper", "An Caisle\\u00e1n Riabhach Uachtarach", "Divided by 1841", "53,856", "Named after Castlereagh townland. Gives its name to the borough of Castlereagh."], ["Down", "Dufferin", "An Duifrian", "By 1672", "17,208", "Name from the Irish duibhthrian (black third)."], ["Down", "Iveagh Lower, Lower Half", "U\\u00edbh Eachach \\u00cdochtarach, An Leath \\u00cdochtair", "Divided by 1851", "46,057", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Iveagh Lower, Upper Half", "U\\u00edbh Eachach \\u00cdochtarach, An Leath Uachtair", "Divided by 1851", "47,538", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Iveagh Upper, Lower Half", "U\\u00edbh Eachach Uachtarach, An Leath \\u00cdochtair", "Divided by 1851", "96,317", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Iveagh Upper, Upper Half", "U\\u00edbh Eachach Uachtarach, An Leath Uachtair", "Divided by 1851", "63,249", "Named after the U\\u00ed Echach Cobo, a Gaelic people and territory in the region."], ["Down", "Kinelarty", "Cine\\u00e1l Fh\\u00e1rtaigh", "By 1672", "40,322", "Name means \\"Faghartach\'s kindred.\\""], ["Down", "Lecale Lower", "Leath Cathail \\u00cdochtarach", "Divided by 1851", "30,920", "Namesake of the Lecale peninsula. The name means \\"Cathal\'s half.\\""], ["Down", "Lecale Upper", "Leath Cathail Uachtarach", "Divided by 1851", "30,521", "Namesake of the Lecale peninsula. The name means \\"Cathal\'s half.\\""], ["Down", "Lordship of Newry", "An tI\\u00far", "By 1672", "15,813", "The historic Lordship encompassed lands on both sides of the Down-Armagh border. 
Later, the jurisdiction of the \\"Lordship of Newry\\" for baronial presentment sessions extended only to County Down."], ["Down", "Mourne", "M\\u00farna", "By 1672", "47,822", "Named after the Mourne Mountains. A half-barony in the Down Survey."], ["Dublin", "Balrothery East", "Baile an Ridire Thoir", "Divided 1842", "30,005", "Named after Balrothery village. Balrothery existed by 1593."], ["Dublin", "Balrothery West", "Baile an Ridire Thiar", "Divided 1842", "25,195", "Named after Balrothery village. Balrothery existed by 1593."], ["Dublin", "Castleknock", "Caisle\\u00e1n Cnucha", "By 1593", "21,371", "Named after Castleknock village (now suburban); from 1861, reduced in size by the expanded borders of Dublin city"], ["Dublin", "Coolock", "An Ch\\u00fal\\u00f3g", "By 1593", "26,614", "Named after the historical village of Coolock, now suburban; from 1861, reduced in size by the expanded borders of Dublin city"], ["Dublin", "Dublin", "Baile \\u00c1tha Cliath", "1840", "1,693", "Created by the 1840 Acts from land previously liberties in the county of the City. Its name and area were confirmed by the Dublin Baronies Act 1842."], ["Dublin", "Dublin City", "Cathair Bhaile \\u00c1tha Cliath", "1548", "2,114", "Formerly a county corporate"], ["Dublin", "Nethercross", "An Chrois \\u00cdochtarach", "By 1672", "21,818", "Named after a cross erected by Saint Cainnech in Finglas. Compare Uppercross."], ["Dublin", "Newcastle", "An Caisle\\u00e1n Nua", "By 1593", "22,876", "Named after the village of Newcastle, County Dublin. Not related to the Wicklow barony of Newcastle."], ["Dublin", "Rathdown", "R\\u00e1th an D\\u00fain", "By 1593", "29,974", "A half-barony from 1606, with the Wicklow half-barony of Rathdown separated out. From 1861, reduced in size by the expanded borders of Dublin city."], ["Dublin", "Uppercross", "An Chrois Uachtarach", "1792\\u20131821", "37,307", "Compare Nethercross. In the Down Survey, Uppercross and Newcastle were not distinguished."], ["Fermanagh", "Clanawley or Glenawley", "Clann Amhlaoibh", "By 1603", "72,894", "\\"Awley\\" is from Mac Amhlaoibh and Mac Amhalghaidh (Irish septs)"], ["Fermanagh", "Clankelly or Clonkelly", "Clann Cheallaigh", "By 1603", "39,067", "Clan of the Kellys"], ["Fermanagh", "Coole", "An Ch\\u00fail", "By 1603", "17,320", "A half-barony in the Down Survey. Name means \\"corner.\\""], ["Fermanagh", "Knockninny", "Cnoc Ninnidh", "By 1603", "27,732", "Named after the hill of Saint Ninnidh"], ["Fermanagh", "Lurg", "Lorg", "By 1603", "66,163", "Named after the Tuath Luirg (Fir Luirg; \\"tribe/men of the path\\")."], ["Fermanagh", "Magheraboy", "An Machaire Bu\\u00ed", "By 1603", "79,038", "Name means \\"yellow plain\\""], ["Fermanagh", "Magherastephana", "An Machaire Steaf\\u00e1nach", "By 1603", "58,979", "Name origin unclear; \\"plain of the FitzStephens?\\""], ["Fermanagh", "Tirkennedy", "T\\u00edr Cheannada", "By 1603", "56,267", "Named after Fergus son of Cremthann, nicknamed Cennfhota (\\"long head\\"). No relation to the surname Kennedy."], ["Galway", "Aran or Arran", "\\u00c1rainn", "By 1574", "11,287", "Conterminous with the Aran Islands; Inishmore (\\u00c1rainn Mh\\u00f3r) is named for its shape (ara = kidney)"], ["Galway", "Athenry", "Baile \\u00c1tha an R\\u00ed", "By 1672", "25,782", "Named after Athenry town; called \\"Halfe Barony and liberties of Athenrey\\" in the Down Survey."], ["Galway", "Ballymoe", "B\\u00e9al \\u00c1tha M\\u00f3", "By 1672", "89,270", "Named after Ballymoe village; Half with Ballymoe, County Roscommon. 
Full barony existed in Galway by 1574."], ["Galway", "Ballynahinch", "Baile na hInse", "By 1574", "189,813", "Named after Ballynahinch town; \\"Ballenanen\\" in Down Survey (or Hibernia Delinateo)"], ["Galway", "Clare", "Baile Chl\\u00e1ir", "By 1574", "127,486", "Namesake of the River Clare and village of Claregalway. The name means \\"[river of the] plain.\\""], ["Galway", "Clonmacnowen or Clonmacnoon", "Cluain Mhac nEoghain", "By 1672", "35,467", "\\"Clanemtoneen\\" in Down Survey (or Hibernia Delinateo). Name means \\"Valley of the sons of Eoghan.\\""], ["Galway", "Dunkellin", "D\\u00fan Coill\\u00edn", "By 1574", "83,371", "Name means \\"Coill\\u00edn\'s hillfort\\""], ["Galway", "Dunmore", "D\\u00fan M\\u00f3r", "By 1574", "71,011", "Named after Dunmore village"], ["Galway", "Galway", "Gaillimh", "1610", "22,492", "Formerly a county corporate: the county of the Town (now city) of Galway"], ["Galway", "Kilconnell or Kilconnnel", "Cill Chonaill", "By 1574", "64,819", "Named after Kilconnell village"], ["Galway", "Killian", "Cill Liath\\u00e1in", "By 1574", "52,388", "Name means \\"Liath\\u00e1in\'s church\\""], ["Galway", "Kiltartan", "Cill Tartan", "By 1574", "65,664", "\\"Killcartar\\" in Down Survey (or Hibernia Delinateo). Was originally named after Saint Attracta\'s church. Kiltaraght in 1574."], ["Galway", "Leitrim", "Liatroim", "By 1574", "109,567", "Now also partly in Clare. Name means \\"grey ridge.\\""], ["Galway", "Longford", "An Longfort", "By 1574", "96,506", "Name means \\"ship landing-ground\\", referring to a longphort on a tributary of the River Shannon."], ["Galway", "Loughrea", "Baile Locha Riach", "By 1574", "64,406", "Named after Loughrea town; called \\"Half Barony of Lougheagh\\" in the Down Survey."], ["Galway", "Moycullen", "Maigh Cuilinn", "By 1574", "202,386", "Named after Moycullen village"], ["Galway", "Ross", "An Ros", "By 1574", "77,351", "In County Mayo in 1574; transferred to Galway within decades; since 1898 partly in Mayo. The name means \\"The promontory.\\""], ["Galway", "Tiaquin", "Tigh Dachoinne", "By 1574", "110,135", "Name means \\"House of double coign.\\""], ["Kerry", "Clanmaurice", "Clann Mhuiris", "By 1598", "120,520", "Name means \\"Maurice\'s clan\\", referring to Maurice FitzGerald, 1st Earl of Desmond."], ["Kerry", "Corkaguiny", "Corca Dhuibhne", "By 1598", "138,605", "Named after the ancient ruling tribe, the Corcu Duibne."], ["Kerry", "Dunkerron North", "D\\u00fan Ciar\\u00e1in Thuaidh", "Divided by 1851", "72,414", "Namesake of Dunkerron Castle. Name means \\"Ciar\\u00e1n\'s hillfort.\\""], ["Kerry", "Dunkerron South", "D\\u00fan Ciar\\u00e1in Theas", "Divided by 1851", "96,289", "Namesake of Dunkerron Castle. 
Name means \\"Ciar\\u00e1n\'s hillfort.\\""], ["Kerry", "Glanarought or Glanerought", "Gleann na Ruachta\\u00ed", "By 1598", "121,865", "Name means \\"Valley of the O\'Roughty.\\""], ["Kerry", "Iraghticonnor", "Oireacht U\\u00ed Chonch\\u00fair", "By 1598", "88,105", "Name means \\"Inheritance of the O\'Connors.\\""], ["Kerry", "Iveragh", "U\\u00edbh R\\u00e1thach", "By 1598", "159,980", "Name means \\"Descendants of R\\u00e1thach.\\" On the Kilcoolaght East ogham stone (CIIC 211), this name appears in the Primitive Irish form Rittaveccas."], ["Kerry", "Magunihy or Magonhy", "Maigh gCoinchinn", "By 1598", "166,427", "Name means \\"Coinchinn\'s plain\\"; a personal name meaning \\"wolf-warrior.\\""], ["Kerry", "Trughanacmy or Trughenackmy", "Tri\\u00facha an Aicme", "By 1598", "194,593", "Name means \\"cantred of the tribe.\\""], ["Kildare", "Carbury or Carbery", "Cairbre", "By 1672", "48,286", "Named after Carbury"], ["Kildare", "Clane", "Claonadh", "By 1593", "32,023", "Named after Clane village"], ["Kildare", "Connell or Great Connell", "Connail", "By 1593", "34,785", "Named after [Old] Connell, a holy site and ford near Newbridge."], ["Kildare", "Ikeathy and Oughterany", "U\\u00ed Ch\\u00e9ithigh agus Uachtar Fhine", "United by 1608", "25,753", "The baronies of Ikeathy and Oughterany were united some time between 1558 and 1608. \\"Okeathy Ocerny\\" in 1593."], ["Kildare", "Kilcullen", "Cill Chuillinn", "By 1593", "8,492", "Named after Kilcullen town. A half-barony in the Down Survey."], ["Kildare", "Kilkea and Moone", "Cill Ch\\u00e1 agus Maoin", "By 1593", "46,286", "Named after the villages of Kilkea and Moone."], ["Kildare", "Naas North", "An N\\u00e1s Thuaidh", "By 1593", "25,579", "Named after Naas town. \\"Naas Upper\\" in 1593."], ["Kildare", "Naas South", "An N\\u00e1s Theas", "By 1593", "27,478", "Named after Naas town. \\"Naas Nether\\" in 1593."], ["Kildare", "Narragh and Reban East", "An Fhorrach agus an R\\u00e9ab\\u00e1n Thoir", "Divided by 1807", "21,374", "Named after Narragh and Rheban Castle. Namesake of the hereditary Barony of Norragh."], ["Kildare", "Narragh and Reban West", "An Fhorrach agus an R\\u00e9ab\\u00e1n Thiar", "Divided by 1807", "22,136", "(See Narragh and Reban East)"], ["Kildare", "Offaly East", "U\\u00edbh Fhail\\u00ed Thoir", "Divided by 1807", "47,029", "Named after U\\u00ed Failghe; also the name of County Offaly to the west. Barony of Offaly existed in 1593."], ["Kildare", "Offaly West", "U\\u00edbh Fhail\\u00ed Thiar", "Divided by 1807", "40,603", "(see Offaly West)"], ["Kildare", "North Salt", "An L\\u00e9im Thuaidh", "Divided by 1807", "21,930", "\\"Salt\\" derived from Saltus Salmonis, the Latin name for Leixlip. Barony of Salt existed by 1593."], ["Kildare", "South Salt", "An L\\u00e9im Theas", "Divided by 1807", "16,655", "(See North Salt)"], ["Kilkenny", "Callan", "Callainn", "By 1672", "5,653", "Named after Callan town; \\"Callen Liberties\\" in Down Survey. 
The 1836 Act \\"for removing doubts\\" explicitly states the town and liberties \\"shall be deemed and taken to be a barony\\""], ["Kilkenny", "Crannagh or Crannach", "Crannach", "By 1672", "58,675", "Name means \\"abounding in trees.\\""], ["Kilkenny", "Fassadinin or Fassadining", "F\\u00e1sach an Deighn\\u00edn", "By 1672", "68,174", "Name means \\"wilderness by the River Dinan.\\""], ["Kilkenny", "Galmoy", "Gabhalmhaigh", "By 1672", "40,236", "Name means \\"plain of the River Goul.\\""], ["Kilkenny", "Gowran", "Gabhr\\u00e1n", "By 1672", "111,706", "Named after Gowran village"], ["Kilkenny", "Ida, or \\"Ida, Igrinn and Iberchon\\"", "U\\u00ed Dhe\\u00e1", "By 1672", "60,132", "Now also partly in Wexford. A tribal name: the U\\u00ed Dheaghaidh, descendants of Deagaid."], ["Kilkenny", "Iverk", "U\\u00edbh Eirc", "By 1672", "40,528", "Name means \\"descendants of Erc.\\""], ["Kilkenny", "Kells", "Ceanannas", "By 1672", "38,376", "Named after Kells, County Kilkenny."], ["Kilkenny", "Kilculliheen", "Cill Choilch\\u00edn", "By 1848", "2,139", "Originally a civil parish in the county of the city of Waterford, transferred to the county in 1840. Its status as a barony separate from Gaultier was not recognised by the census until 1871."], ["Kilkenny", "Kilkenny", "Cill Chainnigh", "1610", "921", "Formerly a county corporate: the County of the city of Kilkenny"], ["Kilkenny", "Knocktopher", "Cnoc an T\\u00f3chair", "By 1672", "46,765", "Named after Knocktopher village"], ["Kilkenny", "Shillelogher", "S\\u00edol Fhaolchair", "By 1672", "36,684", "A tribal name, meaning \\"descendants of Faolchar\\", a name meaning \\"wolf-love.\\""], ["Laois", "Ballyadams", "Baile \\u00c1daim", "By 1672", "24,081", "Named after Ballyadams Castle"], ["Laois", "Clandonagh", "Clann Donnchadha", "1846", "43,733", "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846."], ["Laois", "Clarmallagh", "Cl\\u00e1r Ma\\u00ed Locha", "1846", "43,533", "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846."], ["Laois", "Cullenagh or Cullinagh", "Cuileannach", "By 1672", "44,094", "Named after the Cullenagh Mountains."], ["Laois", "Maryborough East", "Port Laoise Thoir", "Divided by 1807", "25,160", "Named after Portlaoise, formerly named Maryborough"], ["Laois", "Maryborough West", "Port Laoise Thiar", "Divided by 1807", "41,914", "Named after Portlaoise, formerly named Maryborough"], ["Laois", "Portnahinch or Portnehinch", "Port na hInse", "By 1672", "35,835", "Named after Portnahinch, a landing-ground on the River Barrow."], ["Laois", "Slievemargy, Slewmergie, Slieuemargue, Slieuemargy", "Sliabh Mairge", "By 1672", "35,490", "Named after the Slievemargy hills. Now also partly in Carlow"], ["Laois", "Stradbally", "An Sr\\u00e1idbhaile", "By 1672", "27,895", "Named after Stradbally village"], ["Laois", "Tinnahinch or Tinnehinch", "Tigh na hInse", "By 1672", "54,187", "Named after Tinnahinch village"], ["Laois", "Upper Woods or Upperwoods", "An Choill Uachtarach", "1846", "48,926", "One of three traditional subunits of Upper Ossory, which was extant as a barony by 1657 and formally abolished in 1846."], ["Leitrim", "Carrigallen", "Carraig \\u00c1lainn", "By 1672", "62,395", "Named after Carrigallen"], ["Leitrim", "Drumahaire", "Droim Dh\\u00e1 Thiar", "By 1574", "110,146", "Named after Drumahaire. 
Considered part of Sligo in 1574."], ["Leitrim", "Leitrim", "Liatroim", "By 1574", "59,164", "Named after Leitrim village. Considered part of Sligo in 1574."], ["Leitrim", "Mohill", "Maothail", "By 1672", "62,904", "Named after Mohill"], ["Leitrim", "Rosclougher or Rossclogher", "Ros Clochair", "By 1672", "81,601", "Named after Rosclogher Castle."], ["Limerick", "Clanwilliam", "Clann Liam", "By 1672", "55,627", "Name means \\"clan of William de Burgh.\\""], ["Limerick", "Connello (or Conello) Lower", "Conallaigh \\u00cdochtaracha", "Divided by 1821", "47,850", "Territory of the O\'Connells."], ["Limerick", "Connello (or Conello) Upper", "Conallaigh Uachtaracha", "Divided by 1821", "61,256", "Territory of the O\'Connells."], ["Limerick", "Coonagh", "U\\u00ed Chuanach", "By 1672", "36,323", "Name means \\"descendants of Cuana.\\""], ["Limerick", "Coshlea", "Cois Sl\\u00e9ibhe", "By 1672", "95,232", "Name literally means \\"foot of the mountain.\\""], ["Limerick", "Coshma", "Cois M\\u00e1ighe", "By 1672", "49,018", "Name means \\"edge of the plain.\\""], ["Limerick", "Glenquin", "Gleann an Choim", "By 1841", "96,402", "Prior to 1841, part of Connello Upper."], ["Limerick", "Kenry", "Caonra\\u00ed", "By 1672", "26,222", "From the C\\u00e1enraige, an ancient tribe."], ["Limerick", "Kilmallock or Kilmallock Liberties", "Cill Mocheall\\u00f3g", "By 1672", "4,074", "Named after Kilmallock. Not enumerated in the 1821 census."], ["Limerick", "Limerick City", "Cathair Luimnigh", "1609", "2,074", "Formerly a county corporate; includes the \\"[South] Liberties\\" of Down Survey"], ["Limerick", "North Liberties of Limerick city", "Na L\\u00edbearta\\u00ed Thuaidh", "By 1872", "3,050", "formerly Liberties; the \\"North Liberties\\" were recorded separately from the \\"South Liberties\\" in the Down Survey."], ["Limerick", "Owneybeg", "Uaithne Beag", "By 1672", "27,211", "The territory of Uaithni encompassed Owneybeg and part of Owney and Arra"], ["Limerick", "Pubblebrien", "Pobal Bhriain", "By 1672", "30,138", "Name means \\"Brian\'s people\\", referring to Brian Boru."], ["Limerick", "Shanid", "Seanaid", "By 1841", "84,075", "Prior to 1841, part of Connello Lower."], ["Limerick", "Smallcounty", "An D\\u00e9is Bheag", "By 1672", "44,424", "The Irish name means \\"the little vassal tribe\\"; see Deisi."], ["Londonderry", "Coleraine", "C\\u00fail Raithin", "By 1591", "85,836", "Named after Coleraine town, although the town itself is in the North East Liberties of Coleraine. A half-barony in 1807, including the south-west liberties of Coleraine."], ["Londonderry", "Keenaght or Kenaught", "Cianachta", "By 1591 (as Limavady)", "130,329", "Named after the Ciannachta tribe, descended from Tadc mac C\\u00e9in."], ["Londonderry", "Loughinsholin", "Loch Inse U\\u00ed Fhloinn", "By 1591", "171,662", "Name means \\"lough of O\'Lynn\'s island\\", referring to a lake containing a crann\\u00f3g."], ["Londonderry", "North East Liberties of Coleraine", "L\\u00edbearta\\u00ed Thoir Thuaidh Ch\\u00fail Raithin", "By 1672", "18,005", "formerly Liberties of Coleraine town."], ["Londonderry", "North-West Liberties of Londonderry", "L\\u00edbearta\\u00ed Thiar Thuaidh Dhoire", "By 1672", "11,506", "formerly Liberties of Londonderry city."], ["Londonderry", "Tirkeeran or Tyrkeeran", "T\\u00edr Mhic Caoirthinn", "By 1591 (as Anagh)", "94,014", "A half-barony in 1807, including the south-east liberties of Londonderry. 
Name means \\"land of the sons of Cartin.\\""], ["Longford", "Ardagh", "Ardach", "By 1629", "40,223", "Named after Ardagh village"], ["Longford", "Granard", "Gr\\u00e1nard", "By 1629", "63,857", "Named after Granard village"], ["Longford", "Longford", "An Longfort", "By 1629", "57,243", "Named after Longford town"], ["Longford", "Moydow", "Maigh Dumha", "By 1629", "34,470", "Named after Moydow village"], ["Longford", "Rathcline", "R\\u00e1th Claon", "By 1629", "40,421", "Named after Rathcline Castle."], ["Longford", "Shrule or Abbeyshrule", "Sruthail", "By 1629", "21,006", "Named after Abbeyshrule"], ["Louth", "Ardee", "Baile \\u00c1tha Fhirdhia", "By 1593", "53,832", "Named after Ardee town"], ["Louth", "Drogheda", "Droichead \\u00c1tha", "1412", "4,497", "Formerly a county corporate. A barony separate from the county was formed in 1840 from the portion previously within the County of the town of Drogheda which was not within the town of Drogheda."], ["Louth", "Dundalk Lower", "D\\u00fan Dealgan \\u00cdochtarach", "Divided by 1821", "37,803", "Named after Dundalk town"], ["Louth", "Dundalk Upper", "D\\u00fan Dealgan Uachtarach", "Divided by 1821", "30,750", "Named after Dundalk town"], ["Louth", "Ferrard", "Fir Arda", "By 1593", "48,806", "From Fera Arda Ciannachta, \\"men of high Ciannachta.\\" Namesake of Viscount Massereene and Ferrard"], ["Louth", "Louth", "L\\u00fa", "By 1672", "25,704", "Named after Louth village"], ["Mayo", "Burrishoole", "Buir\\u00edos Umhaill", "By 1574", "145,172", "Named after Burrishoole Castle; a few sources list Burrishoole split into \\"Burrishoole North\\" and \\"Burrishoole South\\""], ["Mayo", "Carra", "Ceara", "By 1574", "134,206", "Named after Carra village. Called Burriscarra/Burisker in 1574."], ["Mayo", "Clanmorris", "Clann Mhuiris", "By 1574", "69,252", "Namesake of Baron Clanmorris. Name means \\"Muiris\' family.\\" Called Croslwyhin/Crossboyne in 1574."], ["Mayo", "Costello or Clancostello", "Coistealaigh", "By 1574", "143,874", "Now also partly in Roscommon. Named after the Hiberno-Norman MacOisdealbhaigh (Costello) family. Called Beallahaunes/Ballyhaunis in 1574"], ["Mayo", "Erris", "Iorras", "By 1672", "230,452", "Named after Erris village. A half-barony in the Gilbert Manuscript of the Down Survey."], ["Mayo", "Gallen", "Gaileanga", "By 1574", "119,153", "Named after the Gailenga tribe. Beallalahane in 1574."], ["Mayo", "Kilmaine", "Cill Mhe\\u00e1in", "By 1574", "95,284", "Named after Kilmaine village"], ["Mayo", "Murrisk", "Muraisc", "By 1574", "137,061", "Named after Murrisk village"], ["Mayo", "Tirawley or Tyrawley", "T\\u00edr Amhlaidh", "By 1574", "246,822", "Name means \\"Amlaid\'s land\\", referring to Amalgaid mac Fiachrae. \\"Many\\"/Moyne in 1574."], ["Meath", "Deece Lower", "D\\u00e9ise \\u00cdochtarach", "Divided by 1807", "20,013", "Deece barony present by 1542. Named after the D\\u00e9isi Becc."], ["Meath", "Deece Upper", "D\\u00e9ise Uachtarach", "Divided by 1807", "28,763", "Deece barony present by 1542. Named after the D\\u00e9isi Becc."], ["Meath", "Duleek Lower", "Damhliag \\u00cdochtarach", "Divided by 1807", "37,772", "Named after Duleek village. Now also partly in Louth. Duleek barony present by 1542"], ["Meath", "Duleek Upper", "Damhliag Uachtarach", "Divided by 1807", "28,463", "Named after Duleek village. 
Duleek barony present by 1542"], ["Meath", "Dunboyne", "D\\u00fan B\\u00fainne", "By 1542", "16,781", "Named after Dunboyne town."], ["Meath", "Fore or Demifore", "Baile Fhobhair", "By 1542", "42,388", "Half with Fore, County Westmeath since 1542. Named after Fore Abbey."], ["Meath", "Kells Lower", "Ceanannas \\u00cdochtarach", "Divided by 1807", "36,171", "Named after Kells town. Kells barony present by 1542"], ["Meath", "Kells Upper", "Ceanannas Uachtarach", "Divided by 1807", "49,552", "Named after Kells town. Kells barony present by 1542"], ["Meath", "Lune", "Lu\\u00edne", "By 1542", "39,326", "Named after the Luighne tribe."], ["Meath", "Morgallion", "Machaire Gaileang", "By 1542", "31,492", "Name means \\"plain of the Gailenga\\", a medieval tribe."], ["Meath", "Moyfenrath (or Moyfenragh) Lower", "Maigh Fionnr\\u00e1ithe \\u00cdochtarach", "Divided by 1807", "40,313", "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\""], ["Meath", "Moyfenrath (or Moyfenragh) Upper", "Maigh Fionnr\\u00e1ithe Uachtarach", "Divided by 1807", "31,696", "Moyfenrath barony present by 1542. The name means \\"plain of the fair fort.\\""], ["Meath", "Navan Lower", "An Uaimh \\u00cdochtarach", "Divided by 1807", "25,835", "Named after Navan town. Navan barony present by 1542"], ["Meath", "Navan Upper", "An Uaimh Uachtarach", "Divided by 1807", "17,651", "Named after Navan town. Navan barony present by 1542"], ["Meath", "Ratoath", "R\\u00e1th T\\u00f3", "By 1542", "35,697", "Named after Ratoath village."], ["Meath", "Skreen or Skryne", "An Scr\\u00edn", "By 1542", "40,891", "Named after Skryne village"], ["Meath", "Slane Lower", "Baile Shl\\u00e1ine \\u00cdochtarach", "Divided in 1791", "26,224", "Named after Slane village. Slane barony present by 1542"], ["Meath", "Slane Upper", "Baile Shl\\u00e1ine Uachtarach", "Divided in 1791", "29,211", "Named after Slane village. 
Slane barony present by 1542"], ["Monaghan", "Cremorne", "Cr\\u00edoch Mh\\u00farn", "1585", "84,508", "From Irish meaning \\"border of the Mugdorna.\\""], ["Monaghan", "Dartree or Dartry", "Dartra\\u00ed", "1585", "59,610", "Name from the ancient kingdom of Dartraighe."], ["Monaghan", "Farney", "Fearnaigh", "1585", "67,333", "Named from the ancient kingdom of Fernmag, \\"plain of alders.\\""], ["Monaghan", "Monaghan", "Muineach\\u00e1n", "1585", "69,735", "Named after Monaghan town."], ["Monaghan", "Trough", "An Tri\\u00facha", "1585", "37,376", "From the Irish tr\\u00edcha c\\u00e9t, a unit of territory in Medieval Ireland."], ["Offaly", "Ballyboy", "Baile \\u00c1tha Bu\\u00ed", "By 1672", "32,398", "Named after Ballyboy village"], ["Offaly", "Ballybritt", "Baile an Bhriotaigh", "By 1672", "52,378", "Named after Ballybritt Castle."], ["Offaly", "Ballycowen", "Baile Mhic Comhainn", "By 1672", "38,610", "Named after Ballycowan Castle."], ["Offaly", "Clonlisk", "Cluain Leisc", "By 1672", "49,052", "Named after Clonlisk Castle."], ["Offaly", "Coolestown", "Baile an Ch\\u00fala\\u00edgh", "By 1672", "47,866", "Named after Coolestown, the former name of Edenderry."], ["Offaly", "Eglish or Fercale", "An Eaglais", "By 1672", "28,697", "The name means \\"church,\\" while Fercale means \\"men of the churches.\\""], ["Offaly", "Garrycastle", "Garra\\u00ed an Chaisle\\u00e1in", "By 1672", "102,841", "Named after Garrycastle"], ["Offaly", "Geashill", "G\\u00e9isill", "By 1672", "30,864", "Named after Geashill village"], ["Offaly", "Kilcoursey", "Cill Chuairs\\u00ed", "By 1672", "19,274", "Named after Kilcoursey Castle."], ["Offaly", "Philipstown Lower", "An Daingean \\u00cdochtarach", "Divided by 1807", "30,669", "Named after Philipstown, now renamed Daingean"], ["Offaly", "Philipstown Upper", "An Daingean Uachtarach", "Divided by 1807", "37,087", "Named after Philipstown, now renamed Daingean"], ["Offaly", "Warrenstown", "Baile an Bhair\\u00ednigh", "By 1672", "21,456", "Named after Ballybrittain (Warrenstown) Castle."], ["Roscommon", "Athlone North", "Baile \\u00c1tha Luain Thuaidh", "Divided by 1868", "57,863", "Named after Athlone town. North and South not separated in 1871 census. The original Athlone barony existed by 1574."], ["Roscommon", "Athlone South", "Baile \\u00c1tha Luain Theas", "Divided by 1868", "79,659", "Named after Athlone town. North and South not separated in 1871 census."], ["Roscommon", "Ballintober North", "Baile an Tobair Thuaidh", "Divided by 1841", "30,853", "Named after Ballintober town (now in Castlereagh barony.) The original Ballintober barony existed by 1574."], ["Roscommon", "Ballintober South", "Baile an Tobair Theas", "Divided by 1841", "48,113", "Named after Ballintober town (now in Castlereagh barony.) The original Ballintober barony existed by 1574."], ["Roscommon", "Ballymoe", "B\\u00e9al \\u00c1tha M\\u00f3", "By 1672", "23,287", "Half with Ballymoe, County Galway. Named after Ballymoe village, on the County Galway side of the River Suck."], ["Roscommon", "Boyle", "Mainistir na B\\u00faille", "By 1574", "81,163", "Named after Boyle town"], ["Roscommon", "Castlereagh", "An Caisle\\u00e1n Riabhach", "By 1841", "82,081", "Named after Castlerea town. 
Previously one of three sections of Ballintober barony."], ["Roscommon", "Frenchpark", "D\\u00fan Gar", "By 1841", "71,203", "Named after Frenchpark village; previously part of the barony of Boyle."], ["Roscommon", "Moycarn or Moycarnon or Moycarne or Moycarnan", "Maigh Charn\\u00e1in", "By 1574", "29,595", "Now also partly in Galway. A half-barony in 1807."], ["Roscommon", "Roscommon", "Ros Com\\u00e1in", "By 1574", "81,584", "Named after Roscommon town, which is in Ballintober South"], ["Sligo", "Carbury", "Cairbre", "United by 1841", "73,685", "Divided into Upper and Lower baronies before 1841. Named after the ancient t\\u00faath of the Cairbre Drom Cliabh."], ["Sligo", "Coolavin", "C\\u00fail \\u00d3 bhFinn", "By 1672", "25,473", "Name means \\"corner of the descendants of Finn.\\""], ["Sligo", "Corran", "An Corann", "By 1672", "45,376", "Named after Corann village"], ["Sligo", "Leyny or Leney", "Lu\\u00edne", "By 1672", "121,233", "Named after the Luighne Connacht tribe"], ["Sligo", "Tireragh or Tyreragh", "T\\u00edr Fhiachrach", "By 1672", "106,598", "Now also partly in Mayo. Name means \\"land of the U\\u00ed Fiachrach.\\""], ["Sligo", "Tirerril or Tyraghrill", "T\\u00edr Oirill", "By 1672", "75,812", "Name means \\"Olliol\'s land\\", referring to Ailill mac Echach Mugmed\\u00f3in."], ["Tipperary", "Clanwilliam", "Clann Liam", "By 1672", "115,755", "Name means \\"clan of William de Burgh.\\""], ["Tipperary", "Eliogarty", "\\u00c9ile U\\u00ed Fh\\u00f3garta", "By 1672", "90,257", "A half-barony (with Ikerrin) in the Down Survey. Name means \\"\\u00c9ile of the U\\u00ed Fhogartaigh.\\""], ["Tipperary", "Iffa and Offa East", "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thoir", "Divided by 1807", "56,819", "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\""], ["Tipperary", "Iffa and Offa West", "U\\u00edbh Eoghain agus U\\u00edbh Fhathaidh Thiar", "Divided by 1807", "117,175", "Name means \\"descendants of Eoghan and descendants of Fathaidh.\\""], ["Tipperary", "Ikerrin", "U\\u00ed Chair\\u00edn", "By 1672", "69,805", "A half-barony (with Eliogarty) in the Down Survey. Name means \\"descendants of Cair\\u00edn.\\""], ["Tipperary", "Kilnamanagh Lower", "Coill na Manach \\u00cdochtarach", "Divided in 1838", "42,041", "Named after Kilnamanagh town"], ["Tipperary", "Kilnamanagh Upper", "Coill na Manach Uachtarach", "Divided in 1838", "59,990", "Named after Kilnamanagh town."], ["Tipperary", "Middle Third", "An Trian Me\\u00e1nach", "By 1672", "113,544", "From trian meaning \\"third\\" or \\"portion.\\""], ["Tipperary", "Ormond Lower", "Urumhain \\u00cdochtarach", "Divided by 1672", "127,222", "Compare Ormond (\\"east Munster\\")"], ["Tipperary", "Ormond Upper", "Urumhain Uachtarach", "Divided by 1672", "79,471", "Compare Ormond (\\"east Munster\\")"], ["Tipperary", "Owney and Arra", "Uaithne agus Ara", "United 1672\\u20131792", "85,494", "\\"Owney Mulrian\\" and Arra were separate baronies in the Down Survey, named respectively after the ancient kingdom of Uaithni and the River Ara."], ["Tipperary", "Slievardagh", "Sliabh Ardach", "By 1672", "90,772", "\\"Slevardagh & Compsy\\" in the Down Survey. 
The name means \\"high mountain of the Eoganachta.\\""], ["Tyrone", "Clogher", "Clochar", "By 1591", "97,569", "Named after Clogher town"], ["Tyrone", "Dungannon Lower", "D\\u00fan Geanainn \\u00cdochtarach", "Divided by 1851; Dungannon by 1591", "42,794", "Named after Dungannon town"], ["Tyrone", "Dungannon Middle", "D\\u00fan Geanainn L\\u00e1ir", "Divided by 1851; Dungannon by 1591", "87,541", "Named after Dungannon town"], ["Tyrone", "Dungannon Upper", "D\\u00fan Geanainn Uachtarach", "Divided by 1851; Dungannon by 1591", "85,995", "Named after Dungannon town"], ["Tyrone", "Omagh East", "An \\u00d3maigh Thoir", "Divided 1807\\u201321; Omagh by 1591", "132,149", "Named after Omagh town"], ["Tyrone", "Omagh West", "An \\u00d3maigh Thiar", "Divided 1807\\u201321; Omagh by 1591", "93,321", "Named after Omagh town"], ["Tyrone", "Strabane Lower", "An Srath B\\u00e1n \\u00cdochtarach", "Divided by 1851; Strabane by 1591", "117,419", "Named after Strabane town"], ["Tyrone", "Strabane Upper", "An Srath B\\u00e1n Uachtarach", "Divided by 1851; Strabane by 1591", "121,282", "Named after Strabane town"], ["Waterford", "Coshmore and Coshbride", "Cois Abha M\\u00f3ire agus Cois Bhr\\u00edde", "United by 1831", "88,253", "Baronies of Coshmore and Coshbride were separate in the 1821 census. The names mean, respectively, \\"Bank of the Munster Blackwater\\" and \\"Bank of the River Bride.\\""], ["Waterford", "Decies-within-Drum", "Na D\\u00e9ise laistigh den Drom", "Decies divided by 1746", "57,325", "Decies south of the Drum Hills."], ["Waterford", "Decies-without-Drum", "Na D\\u00e9ise lasmuigh den Drom", "Decies divided by 1746", "129,894", "Decies north of the Drum Hills. \\"Without\\" is used with the meaning of \\"beyond\\" or \\"outside.\\""], ["Waterford", "Gaultier or Gaultiere", "An Ghaillt\\u00edr", "By 1672", "29,447", "Kilculliheen was formerly a parish of this barony. Name means \\"land of foreigners,\\" referring to Vikings."], ["Waterford", "Glenahiry", "Gleann na hUidhre", "By 1672", "38,940", "Name means \\"valley of the Nier\\", referring to the Nier River."], ["Waterford", "Middle Third or Middlethird", "An Trian Me\\u00e1nach", "By 1672", "44,609", "From trian meaning \\"third\\" or \\"portion.\\""], ["Waterford", "Upperthird or Upper Third", "Uachtar T\\u00edre", "By 1672", "63,846", "Name originally meant \\"Upper country\\"; probably acquired \\"third\\" in name by analogy with Middle Third."], ["Waterford", "Waterford City", "Cathair Phort L\\u00e1irge", "1574", "532", "Formerly a county corporate."], ["Westmeath", "Brawny", "Bre\\u00e1mhaine", "By 1672", "10,070", "The ancient territory of Bregmaine."], ["Westmeath", "Clonlonan", "Cluain Lon\\u00e1in", "By 1672", "32,095", "Name means \\"Lon\\u00e1n\'s meadow.\\""], ["Westmeath", "Corkaree", "Corca Raoi", "By 1542", "23,787", "A tribal name, \\"descendants of Raoi.\\""], ["Westmeath", "Delvin", "Dealbhna", "By 1542", "39,062", "Named after Delvin village"], ["Westmeath", "Farbill", "Fir Bhile", "By 1542", "35,453", "A tribal name: \\"men of the sacred tree.\\""], ["Westmeath", "Fartullagh", "Fir Thulach", "1542", "37,512", "Previously Tyrrells country. Name means \\"men of the hillock\\", a tribal name."], ["Westmeath", "Fore or Demifore", "Baile Fhobhair", "1542", "49,056", "Half with Fore, County Meath. 
Named after Fore Abbey."], ["Westmeath", "Kilkenny West", "Cill Chainnigh Thiar", "1542", "31,169", "Previously Maherquirke, Dillons country"], ["Westmeath", "Moyashel and Magheradernon", "Maigh Asail agus Machaire \\u00d3 dTiarn\\u00e1in", "By 1672", "40,565", "Moyashel and Magheradernon listed separately in 1542. They formed the ancient territories of Mag nAssail (Assail\'s plain) and the plain of the O\'Tiernans."], ["Westmeath", "Moycashel", "Maigh Chaisil", "1542", "47,097", "Originally the Barony of Rossaughe; before that, Delamares country. Name means \\"plain of the stone ringfort.\\""], ["Westmeath", "Moygoish", "U\\u00ed Mhac gCuais", "By 1542", "39,483", "A tribal name: \\"Descendants of the Son of Cuas.\\""], ["Westmeath", "Rathconrath", "R\\u00e1th Conarta", "1542", "48,415", "Named after Rathconrath village; previously Daltons country"], ["Wexford", "Ballaghkeen North", "An Bealach Caoin Thuaidh", "Ballaghkeen created 1606; Divided by 1868", "45,413", "Ballaghkeen means \\"way of sorrow.\\""], ["Wexford", "Ballaghkeen South", "An Bealach Caoin Theas", "Ballaghkeen created 1606; Divided by 1868", "40,986", "Ballaghkeen means \\"way of sorrow.\\""], ["Wexford", "Bantry", "Beanntra\\u00ed", "By 1672", "101,598", "Named after the Bendtraigi Laigen, the former ruling people."], ["Wexford", "Bargy", "U\\u00ed Bhairrche", "By 1672", "40,002", "Named after the ruling U\\u00ed Bairrche family, who claimed descent from D\\u00e1ire Barrach."], ["Wexford", "Forth", "Fotharta", "By 1672", "38,384", "A Fortuatha was a kingdom not ruled directly by members of the dominant dynasty of a province. This area was ruled by Fothairt in Chairn."], ["Wexford", "Gorey", "Guaire", "1606", "81,913", "Named after Gorey town"], ["Wexford", "Scarawalsh", "Scairbh Bhailis", "1606", "106,650", "Name means \\"rocky ford of light.\\""], ["Wexford", "Shelburne", "S\\u00edol Bhroin", "By 1672", "51,103", "Named after the tribe, S\\u00edl Broin, \\"offspring of Broin.\\""], ["Wexford", "Shelmaliere East", "S\\u00edol Maolu\\u00edr Thoir", "Divided by 1841", "16,363", "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\""], ["Wexford", "Shelmaliere West", "S\\u00edol Maolu\\u00edr Thiar", "Divided by 1841", "50,299", "Named after the ruling people, the S\\u00edl M\\u00e1el Uidir, \\"Offspring of Bald Uidir.\\""], ["Wicklow", "Arklow", "An tInbhear M\\u00f3r", "1606", "66,980", "Named after Arklow town"], ["Wicklow", "Ballinacor North", "Baile na Corra Thuaidh", "Divided 1832\\u20135", "74,109", "United barony of Talbotstown created in 1606, and divided into half-baronies for civil law purposes in 1798. Named after Ballinacor Castle."], ["Wicklow", "Ballinacor South", "Baile na Corra Theas", "Divided 1832\\u20135", "78,316", "(See Ballinacor North)"], ["Wicklow", "Newcastle", "An Caisle\\u00e1n Nua", "1606", "51,938", "Named after the village of Newcastle, County Wicklow. Not related to County Dublin barony of the same name."], ["Wicklow", "Rathdown", "R\\u00e1th an D\\u00fain", "1606", "33,462", "Half with Rathdown, County Dublin. Named after Rathdown Castle."], ["Wicklow", "Shillelagh", "S\\u00edol \\u00c9alaigh", "1606", "44,348", "Named after Shillelagh village. A half-barony in 1807."], ["Wicklow", "Talbotstown Lower", "Baile an Talb\\u00f3idigh \\u00cdochtarach", "Divided by 1801", "86,857", "Named after Talbotstown village. 
United barony of Talbotstown created in 1606."], ["Wicklow", "Talbotstown Upper", "Baile an Talb\\u00f3idigh Uachtarach", "Divided by 1801", "62,510", "(See Talbotstown Lower)"]], "label": [[315, 0], [315, 1]]}' ``` 2. ``` import json import pandas as pd from transformers import TapasModel, TapasTokenizer data = json.loads(data) model = TapasModel.from_pretrained("google/tapas-base-finetuned-tabfact") tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-tabfact") table = pd.DataFrame(data['table_list'][1:], columns=data['table_list'][0]).astype(str) queries = [data['sentence_annotations'][0]['final_sentence']] inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt", max_length=512, truncation=True) input_ids = inputs['input_ids'] attention_mask = inputs['attention_mask'] token_type_ids = inputs['token_type_ids'] x = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids) ``` 3. error message ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-15-fc5524a5d018> in <module>() 1 x = model(input_ids=input_ids, 2 attention_mask=attention_mask, ----> 3 token_type_ids=token_type_ids) /shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/transformers/models/tapas/modeling_tapas.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict) 853 854 embedding_output = self.embeddings( --> 855 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds 856 ) 857 encoder_outputs = self.encoder( /shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/transformers/models/tapas/modeling_tapas.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 323 for i in range(self.number_of_token_type_embeddings): 324 name = f"token_type_embeddings_{i}" --> 325 embeddings += getattr(self, name)(token_type_ids[:, :, i]) 326 327 embeddings = self.LayerNorm(embeddings) /shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input) 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, --> 126 self.norm_type, self.scale_grad_by_freq, self.sparse) 127 128 def extra_repr(self) -> str: /shared_home/r08922129/anaconda3/envs/tabfact/lib/python3.7/site-packages/torch/nn/functional.py in 
embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1850 # remove once script supports set_grad_enabled 1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1853 1854 IndexError: index out of range in self ``` ## Expected behavior I have inspected the content of `token_type_ids` and all the shapes of tensors involved in this snippet but found nothing strange.
12-20-2020 04:35:34
12-20-2020 04:35:34
cc @NielsRogge <|||||>It doesn't work because `type_vocab_sizes=[3, 256, 256, 2, 256, 256, 10]`, which means the default max size of a table is 256*256. However, the tokenizer doesn't check this. Strictly speaking, it's not a bug, but for this kind of task many tables can exceed the default size. Maybe it would be good to add code to handle this problem.<|||||>Ok I've investigated this a bit more. The problem here is that there's a column called "Area (acres, 1872)" in the table whose values cause the `column_ranks` token types to exceed the vocab size of 256. Deleting this column resolves the issue. Also, the table that you provide is actually (way) too large for TAPAS. Note that it only has a max sequence length of 512 tokens, and that in the original paper, only tables having at most 500 cells were considered. To correctly truncate the table, you have to initialize TapasTokenizer as follows:
```
from transformers import TapasTokenizer
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-tabfact", drop_rows_to_fit=True)
```
and then encode the table + query as follows:
```
inputs = tokenizer(table=table, queries=queries, padding="max_length", truncation=True, return_tensors="pt")
```
cc @LysandreJik It's maybe a bit weird that people have to initialize `TapasTokenizer` with `drop_rows_to_fit` set to True and also set `truncation` to True when calling it. <|||||>Thank you for the reply. It's really helpful!<|||||>You're right @NielsRogge; should we switch `drop_rows_to_fit` to `True` by default, given that no models can handle it set to `False` when overflowing?<|||||>I guess it should only be set to `True` when calling the tokenizer with `truncation` set to `True` or to `drop_rows_to_fit`. If the user does not specify truncation, and the table is too large, then an error will be thrown as shown [here](https://github.com/huggingface/transformers/blob/8eb7f26d5d9ce42eb88be6f0150b22a41d76a93d/src/transformers/models/tapas/tokenization_tapas.py#L1426).<|||||>Fixed by #9507
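For a self-contained illustration, here is a minimal sketch combining the two snippets above (the tiny table is made up; the model name and tokenizer arguments are the ones discussed in this thread):

```python
import pandas as pd
import torch
from transformers import TapasModel, TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained(
    "google/tapas-base-finetuned-tabfact", drop_rows_to_fit=True
)
model = TapasModel.from_pretrained("google/tapas-base-finetuned-tabfact")

# toy table; drop_rows_to_fit + truncation=True drop rows until the encoding
# fits into the 512-token budget instead of raising an IndexError downstream
table = pd.DataFrame(
    {"County": ["Monaghan", "Offaly"], "Barony": ["Farney", "Geashill"]}
).astype(str)
queries = ["Farney is a barony of Monaghan."]

inputs = tokenizer(table=table, queries=queries, padding="max_length",
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```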
transformers
9,220
closed
Proposed Fix: [RagSequenceForGeneration] generate "without" input_ids
# What does this PR do?
Proposal of option (2) mentioned in the issue "`RagSequenceForGeneration.generate()` without `input_ids`": https://github.com/huggingface/transformers/issues/9184

### Re-paste here:
In the `RagSequenceForGeneration` method `generate()`, the docstring says that both `input_ids` and `context_input_ids` are optional (one of them must be specified). However, the code at https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L907 specifically needs `input_ids` in all cases.

Not sure which option is best: (1) simply state that `input_ids` is always needed, OR (2) add code to calculate `nll` when only `context_input_ids` is provided; in that case `doc_scores` and `context_attention_mask` have to be provided as well (similar to the `RagModel` requirement): https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L588

I think option (2) is reasonable, since the `RagTokenForGeneration` method `generate()` has the same requirement.

### Who can review?
@lhoestq or @patrickvonplaten

### Make style applied but failed quality check
I already applied `make style` and it passed locally, but it somehow still fails the "code quality check", sorry :(
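For context, a minimal sketch of the usage that option (2) enables: calling `generate()` from retriever outputs instead of `input_ids`. This follows the manual-retrieval pattern from the RAG docs and is illustrative, not tested as part of this PR:

```python
import torch
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

input_ids = tokenizer("who holds the record in 100m freestyle", return_tensors="pt").input_ids
question_hidden_states = model.question_encoder(input_ids)[0]

# run retrieval manually to obtain context_input_ids / context_attention_mask
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
doc_scores = torch.bmm(
    question_hidden_states.unsqueeze(1),
    docs_dict["retrieved_doc_embeds"].float().transpose(1, 2),
).squeeze(1)

# with option (2) merged, generate() can work from the retrieved contexts alone
generated = model.generate(
    context_input_ids=docs_dict["context_input_ids"],
    context_attention_mask=docs_dict["context_attention_mask"],
    doc_scores=doc_scores,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```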
12-20-2020 04:22:34
12-20-2020 04:22:34
@ratthachat - thanks a lot for your PR! I agree that option 2) is the better option and think the general approach is exactly right. I left a couple of suggestions that should improve readability a bit IMO. Also could we add one test here: https://github.com/huggingface/transformers/blob/f38c4ad302dd34faef1137b13e9636f4408b0462/tests/test_modeling_rag.py#L502 called `test_model_generate_from_context_input_ids` that verifies that using `input_ids` and `context_input_ids` yields the same result?<|||||>@ratthachat Don't worry about `make style`, I can fix that later!<|||||>Thanks for the review, Patrick! I pretty much agree on every point, esp. the test case. I will do it. :)<|||||>Hi Patrick, I think I addressed every suggestion :) [Still fails on code quality even though I applied `make style` again :D] @patrickvonplaten <|||||>Great job @ratthachat - I'm currently running all RAG slow tests to make sure nothing was unintentionally broken. Will merge once they all pass <|||||>Slow tests all passing<|||||>Nice @patrickvonplaten !
transformers
9,219
closed
Fix beam search generation for GPT2 and T5 on model parallelism
# What does this PR do?
This PR fixes a beam search generation crash when the model layers are distributed on several devices (model parallelism). I have also added a test which showcases the bug on master and passes with the provided fix.

Fixes issue #9200

This is my first contribution to Transformers, so please let me know if something seems wrong or can be improved! The added test doesn't actually test anything at the moment, just raises an error on master. Looking forward to some feedback on how you'd like to see that.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Issue #9200
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
@LysandreJik @alexorona @patrickvonplaten
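For reference, a minimal sketch of the setup this PR fixes (GPT-2's `parallelize()` spreads layers across the visible GPUs; treat this as illustrative since the exact device placement depends on your hardware):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.parallelize()  # distribute transformer blocks over all visible GPUs

input_ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids
input_ids = input_ids.to(model.transformer.first_device)

# before this fix, beam search crashed when the cached key/values lived on a
# different device than the beam indices used to reorder them
output = model.generate(input_ids, num_beams=4, max_length=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```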
12-19-2020 23:34:39
12-19-2020 23:34:39
@sgugger can you maybe review as well and merge if ok?
transformers
9,218
closed
run_clm.py AttributeError: 'NoneType' object has no attribute 'keys'
## Environment info
- `transformers` version: 4.2.0dev0
- Platform: Linux-4.19.0-13-cloud-amd64-x86_64-with-debian-10.7
- Python version: 3.7.8
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No (using TPU)
- Using distributed or parallel set-up in script?: don't know

## Information
Model I am using (Bert, XLNet ...): gpt2

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The task I am working on is:
* [x] an official GLUE/SQUaD task: pre-training on the wikitext dataset
* [ ] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:
1. Create a TPU
2. Install transformers and datasets
3. Run the code
```bash
python3 run_clm.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm
```
Error message:
```
Traceback (most recent call last):
  File "run_clm.py", line 51, in <module>
    MODEL_CONFIG_CLASSES = list(MODEL_FOR_CAUSAL_LM_MAPPING.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```

## Expected behavior
To start pretraining on the TPU
12-19-2020 21:56:21
12-19-2020 21:56:21
Hey @mukhtar-algezoli, It seems like you are using tensorflow, but the script `run_clm.py` only works for PyTorch -> I'm afraid you have to use PyTorch in order to use `run_clm.py`.<|||||>@patrickvonplaten thank you.<|||||>hi @patrickvonplaten , if i want use tensorflow to pretrain bert model with transformers , are there some examples like run_mlm.py ?<|||||>https://github.com/huggingface/transformers/tree/master/examples/tensorflow/language-modeling use tensorflow examples get the same error, How can i fix it?<|||||>@Rocketknight1 for TF examples :-)<|||||>@hrdxwandg This is an odd issue! Can you please install the most recent version of transformers (e.g. with `pip install --upgrade transformers`) and then try running the following in a console or notebook and tell me what you see? ``` from transformers import MODEL_FOR_CAUSAL_LM_MAPPING print(MODEL_FOR_CAUSAL_LM_MAPPING) ```<|||||>> print(MODEL_FOR_CAUSAL_LM_MAPPING) OrderedDict([(<class 'transformers.models.roformer.configuration_roformer.RoFormerConfig'>, <class 'transformers.models.roformer.modeling_roformer.RoFormerForCausalLM'>), (<class 'transformers.models.bigbird_pegasus.configuration_bigbird_pegasus.BigBirdPegasusConfig'>, <class 'transformers.models.bigbird_pegasus.modeling_bigbird_pegasus.BigBirdPegasusForCausalLM'>), (<class 'transformers.models.gpt_neo.configuration_gpt_neo.GPTNeoConfig'>, <class 'transformers.models.gpt_neo.modeling_gpt_neo.GPTNeoForCausalLM'>), (<class 'transformers.models.big_bird.configuration_big_bird.BigBirdConfig'>, <class 'transformers.models.big_bird.modeling_big_bird.BigBirdForCausalLM'>), (<class 'transformers.models.camembert.configuration_camembert.CamembertConfig'>, <class 'transformers.models.camembert.modeling_camembert.CamembertForCausalLM'>), (<class 'transformers.models.xlm_roberta.configuration_xlm_roberta.XLMRobertaConfig'>, <class 'transformers.models.xlm_roberta.modeling_xlm_roberta.XLMRobertaForCausalLM'>), (<class 'transformers.models.roberta.configuration_roberta.RobertaConfig'>, <class 'transformers.models.roberta.modeling_roberta.RobertaForCausalLM'>), (<class 'transformers.models.bert.configuration_bert.BertConfig'>, <class 'transformers.models.bert.modeling_bert.BertLMHeadModel'>), (<class 'transformers.models.openai.configuration_openai.OpenAIGPTConfig'>, <class 'transformers.models.openai.modeling_openai.OpenAIGPTLMHeadModel'>), (<class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'>, <class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>), (<class 'transformers.models.transfo_xl.configuration_transfo_xl.TransfoXLConfig'>, <class 'transformers.models.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModel'>), (<class 'transformers.models.xlnet.configuration_xlnet.XLNetConfig'>, <class 'transformers.models.xlnet.modeling_xlnet.XLNetLMHeadModel'>), (<class 'transformers.models.xlm.configuration_xlm.XLMConfig'>, <class 'transformers.models.xlm.modeling_xlm.XLMWithLMHeadModel'>), (<class 'transformers.models.ctrl.configuration_ctrl.CTRLConfig'>, <class 'transformers.models.ctrl.modeling_ctrl.CTRLLMHeadModel'>), (<class 'transformers.models.reformer.configuration_reformer.ReformerConfig'>, <class 'transformers.models.reformer.modeling_reformer.ReformerModelWithLMHead'>), (<class 'transformers.models.bert_generation.configuration_bert_generation.BertGenerationConfig'>, <class 'transformers.models.bert_generation.modeling_bert_generation.BertGenerationDecoder'>), (<class 'transformers.models.xlm_prophetnet.configuration_xlm_prophetnet.XLMProphetNetConfig'>, <class 
'transformers.models.xlm_prophetnet.modeling_xlm_prophetnet.XLMProphetNetForCausalLM'>), (<class 'transformers.models.prophetnet.configuration_prophetnet.ProphetNetConfig'>, <class 'transformers.models.prophetnet.modeling_prophetnet.ProphetNetForCausalLM'>), (<class 'transformers.models.bart.configuration_bart.BartConfig'>, <class 'transformers.models.bart.modeling_bart.BartForCausalLM'>), (<class 'transformers.models.mbart.configuration_mbart.MBartConfig'>, <class 'transformers.models.mbart.modeling_mbart.MBartForCausalLM'>), (<class 'transformers.models.pegasus.configuration_pegasus.PegasusConfig'>, <class 'transformers.models.pegasus.modeling_pegasus.PegasusForCausalLM'>), (<class 'transformers.models.marian.configuration_marian.MarianConfig'>, <class 'transformers.models.marian.modeling_marian.MarianForCausalLM'>), (<class 'transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig'>, <class 'transformers.models.blenderbot.modeling_blenderbot.BlenderbotForCausalLM'>), (<class 'transformers.models.blenderbot_small.configuration_blenderbot_small.BlenderbotSmallConfig'>, <class 'transformers.models.blenderbot_small.modeling_blenderbot_small.BlenderbotSmallForCausalLM'>), (<class 'transformers.models.megatron_bert.configuration_megatron_bert.MegatronBertConfig'>, <class 'transformers.models.megatron_bert.modeling_megatron_bert.MegatronBertForCausalLM'>)])<|||||>That's extremely strange - the error above said that `MODEL_FOR_CAUSAL_LM_MAPPING` was `None`, but your code there clearly shows that it's importing the dict fine. Are you sure you're getting the same error as the commenter above?<|||||>> That's extremely strange - the error above said that `MODEL_FOR_CAUSAL_LM_MAPPING` was `None`, but your code there clearly shows that it's importing the dict fine. > > Are you sure you're getting the same error as the commenter above? sorry, I don't check clearly. My error msg is below: Traceback (most recent call last): File "run_mlm_tf.py", line 63, in <module> MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_LM_MAPPING.keys()) AttributeError: 'NoneType' object has no attribute 'keys' I use your method and recheck: from transformers import MODEL_FOR_MASKED_LM_MAPPING print(MODEL_FOR_MASKED_LM_MAPPING) the output is None<|||||>Hi, sorry! I lost track of this issue because we marked it as closed, but I see we didn't resolve @hrdxwandg's problem. Are you still encountering the same error?<|||||>@Rocketknight1 ; sorry to bump this, but I am getting the same error with the latest transformers version
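A quick way to tell whether a given environment is in the "PyTorch not installed" case described above: when torch is missing, `transformers` exposes dummy placeholder objects, and mappings like `MODEL_FOR_CAUSAL_LM_MAPPING` come back as `None`. A small diagnostic sketch:

```python
import transformers
from transformers import MODEL_FOR_CAUSAL_LM_MAPPING

# the PyTorch examples (run_clm.py / run_mlm.py) need torch; with only
# TensorFlow installed, the mapping is a dummy placeholder set to None
print("torch available:", transformers.is_torch_available())
print("tf available:", transformers.is_tf_available())
print("mapping:", MODEL_FOR_CAUSAL_LM_MAPPING)
```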
transformers
9,217
closed
Fix documentation links always pointing to master.
# What does this PR do?
This PR fixes a documentation issue: hyperlinks pointing to code files in the docs always point to master. Reference issue: https://github.com/huggingface/transformers/issues/7988

I found [extlinks](https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html) more appropriate than [rst_epilog](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-rst_epilog), which was mentioned in the original issue, because hyperlink replacement is not possible with `rst_epilog`. Stack Overflow reference: https://stackoverflow.com/questions/1227037/substitutions-inside-links-in-rest-sphinx

Although this fixes the issue as discussed, I am unable to fix it in the markup files, which are https://huggingface.co/transformers/master/contributing.html and https://huggingface.co/transformers/master/notebooks.html. Please suggest if it is possible to do that as well.

Fixes https://github.com/huggingface/transformers/issues/7988

@sgugger @LysandreJik
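For illustration, a minimal sketch of the `extlinks` approach in Sphinx's `conf.py`. The role name `gh-file` and the way the release string is obtained are assumptions for the example, not necessarily what this PR does:

```python
# conf.py (sketch)
extensions = ["sphinx.ext.extlinks"]

release = "4.1.1"  # hypothetical; normally derived from the package version

# defines a :gh-file:`path/to/file.py` role whose %s is replaced by the target,
# so links point at the tag matching the built docs instead of master
extlinks = {
    "gh-file": (f"https://github.com/huggingface/transformers/blob/v{release}/%s", None),
}
```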
12-19-2020 18:49:49
12-19-2020 18:49:49
Thanks for merging @LysandreJik, but I want to know why the errors occurred when I ran `make style`, as I see in https://github.com/huggingface/transformers/pull/9217#discussion_r548898170
transformers
9,216
closed
checkpoint callbacks
Hi, I need to save more things during checkpointing, and I am using finetune_trainer.py, not the PyTorch Lightning one. Could you provide me with an example of how I can modify the save part of the checkpoints? Thanks
12-19-2020 18:03:34
12-19-2020 18:03:34
In particular, I need to define a callback with an `on_save` function; for this I need to be able to access "is_world_process_zero" and I am not sure how to do it. If I want to pass the trainer to the callback, how can I make sure it is passed? Thanks<|||||>Solved: `is_world_process_zero` is accessible via `state.is_world_process_zero`
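A minimal sketch of such a callback (the extra file being saved here is made up; the `on_save` signature follows the `TrainerCallback` API):

```python
import os
import torch
from transformers import TrainerCallback

class ExtraStateCallback(TrainerCallback):
    """Saves additional objects alongside each Trainer checkpoint."""

    def __init__(self, extra_state):
        self.extra_state = extra_state  # hypothetical: anything picklable you want stored

    def on_save(self, args, state, control, **kwargs):
        # only the main process writes, mirroring what Trainer does for its own files
        if state.is_world_process_zero:
            ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
            os.makedirs(ckpt_dir, exist_ok=True)
            torch.save(self.extra_state, os.path.join(ckpt_dir, "extra_state.pt"))

# usage (sketch): trainer = Seq2SeqTrainer(..., callbacks=[ExtraStateCallback(my_state)])
```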
transformers
9,215
closed
File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 280, in _setup_backward_hooks assert p_tmp.grad_fn is not None
Hi @stas00 @sgugger I am testing last version of finetune_trainer on multiple gpus, with this command `export BS=4; CUDA_VISIBLE_DEVICES=0,1 USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp ` these two packages neeed to be added to requirements.txt pytorch-lightning=1.1.1 fairscale but still getting this error ``` 12/19/2020 16:56:16 - INFO - utils - using task specific params for translation: {} Traceback (most recent call last): File "finetune_trainer.py", line 353, in <module> main() File "finetune_trainer.py", line 291, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 672, in train model = ShardedDDP(model, self.optimizer) File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 99, in __init__ self._setup_backward_hooks() File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 280, in _setup_backward_hooks assert p_tmp.grad_fn is not None AssertionError 12/19/2020 16:56:16 - INFO - __main__ - *** Train *** Traceback (most recent call last): File "finetune_trainer.py", line 353, in <module> main() File "finetune_trainer.py", line 291, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 672, in train model = ShardedDDP(model, self.optimizer) File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 99, in __init__ self._setup_backward_hooks() File "/opt/conda/envs/updated/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 280, in _setup_backward_hooks assert p_tmp.grad_fn is not None AssertionError Traceback (most recent call last): File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module> main() File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/envs/updated/bin/python', '-u', 'finetune_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-small', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--data_dir', 'wmt_en_ro', '--do_train', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_train_batch_size', '4', '--sortish_sampler', '--task', 'translation', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '500', '--sharded_ddp']' returned non-zero exit status 1. 
``` thanks requirements erorr ``` Traceback (most recent call last): File "finetune_trainer.py", line 353, in <module> main() File "finetune_trainer.py", line 282, in main data_args=data_args, File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/seq2seq_trainer.py", line 58, in __init__ super().__init__(*args, **kwargs) File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 299, in __init__ raise ImportError("Sharded DDP training requires fairscale: `pip install fairscale`.") ImportError: Sharded DDP training requires fairscale: `pip install fairscale`. Traceback (most recent call last): File "finetune_trainer.py", line 353, in <module> main() File "finetune_trainer.py", line 282, in main data_args=data_args, File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/seq2seq_trainer.py", line 58, in __init__ super().__init__(*args, **kwargs) File "/home/rabeeh/ruse/seq2seq/temp/transformer4.1.1/trainer.py", line 299, in __init__ raise ImportError("Sharded DDP training requires fairscale: `pip install fairscale`.") ImportError: Sharded DDP training requires fairscale: `pip install fairscale`. Traceback (most recent call last): File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/envs/updated/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module> main() File "/opt/conda/envs/updated/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/envs/updated/bin/python', '-u', 'finetune_trainer.py', '--local_rank=1', '--model_name_or_path', 't5-small', '--output_dir', 'output_dir', '--adam_eps', '1e-06', '--data_dir', 'wmt_en_ro', '--do_train', '--freeze_embeds', '--label_smoothing', '0.1', '--learning_rate', '3e-5', '--logging_first_step', '--logging_steps', '1000', '--max_source_length', '128', '--max_target_length', '128', '--num_train_epochs', '1', '--overwrite_output_dir', '--per_device_train_batch_size', '4', '--sortish_sampler', '--task', 'translation', '--val_max_target_length', '128', '--warmup_steps', '500', '--n_train', '500', '--sharded_ddp']' returned non-zero exit status 1. ```
12-19-2020 16:57:52
12-19-2020 16:57:52
As I tried to explain earlier it's not ready yet for general consumption. If you want it sooner see: https://github.com/huggingface/transformers/issues/9156#issuecomment-748501582
transformers
9,214
closed
Save underlying BertModel only
I currently have a custom model on top of a pretrained BERT model, which is effectively just dropout and a linear layer on top for classification. I want to be able to train this classifier, updating the underlying weights, but then to save the `BertModel` underneath (without the linear classification layer) so that I can load it from file and use it as the input to the same custom model. Is there a way I can access and save the underlying transformer to then be used again like this? Code for clarification:
```
class BERTBinaryClassifier(torch.nn.Module):
    def __init__(self, model_name_or_path: str):
        super(BERTBinaryClassifier, self).__init__()
        self.bert = ModelSelection.get_model(
            model_name=model_name_or_path)
        self.drop = torch.nn.Dropout(p=0.3)
        self.out = torch.nn.Linear(self.bert.config.hidden_size, 2).cuda()

    def forward(self, inputs):
        logits, embs, *_ = self.bert(**inputs)
        output = self.drop(logits)
        return self.out(output), embs
```
Right now I'm just outputting the logits and hidden_state for future usage, but I'd like to be able to use this same function to effectively load a `BertModel` like I do at the start here with `self.bert` (which just loads `BertModel.from_pretrained`), and save my PyTorch classifier as a whole, separately. Would it be as simple as, say, accessing `self.bert.bert` and then saving it that way?
12-19-2020 14:28:07
12-19-2020 14:28:07
If your bert model is an instance of `BertModel` then you should be able to save it using `self.your_model.bert.save_pretrained(path)` and load using `.from_pretrained`<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
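A minimal sketch of that save/reload round trip, reusing the `BERTBinaryClassifier` from the question and assuming `ModelSelection.get_model` simply forwards to `BertModel.from_pretrained` (so a local path works the same as a hub name):

```python
from transformers import BertModel

# classifier: a trained BERTBinaryClassifier instance from the question above
# after training, persist only the (updated) encoder, without the linear head
classifier.bert.save_pretrained("./my-finetuned-bert")

# sanity check: the directory reloads as a plain BertModel
encoder = BertModel.from_pretrained("./my-finetuned-bert")

# later: rebuild the same classifier head on top of the saved encoder
reloaded = BERTBinaryClassifier("./my-finetuned-bert")
```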
transformers
9,213
closed
bert-mlm-converting from tensorflow to pytorch
12-19-2020 14:09:51
12-19-2020 14:09:51
Hello, I converted a BERT model from TensorFlow to PyTorch to fine-tune it, and in the train function, specifically in `loss, pred_masks = model(inputs, labels)`, I got the error `not enough values to unpack (expected 2, got 1)`. Why does the model return only one value? It should return two: one for the loss and the second for `pred_masks`.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
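The likely cause (a sketch, assuming the converted model is a `BertForMaskedLM`): when labels are passed positionally, they land in the `attention_mask` slot, so no loss is computed and the output holds a single element. Passing `labels` as a keyword returns both values:

```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

enc = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = enc.input_ids.clone()  # toy labels; real MLM training masks tokens first

# model(enc.input_ids, labels) would treat `labels` as attention_mask and
# return logits only; the keyword form computes and returns the loss too
outputs = model(**enc, labels=labels, return_dict=True)
loss, logits = outputs.loss, outputs.logits
print(loss.item(), logits.shape)
```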
transformers
9,212
closed
LXMERT cross_modality_matching logits order in code vs. documentation
Issue #7266 evolved into an entirely different issue described below. # Possible documentation error Independently of considering #8333 or not, it is clear that the results of the cross-modality matching in LXMERT (for example in [this demo](https://colab.research.google.com/drive/18TyuMfZYlgQ_nXo-tr8LCnzUaoX0KS-h?usp=sharing)) are better (below random chance vs. above random chance) when the first logit of the `output_lxmert['cross_relationship_score']` *– (torch.FloatTensor of shape (batch_size, 2))* represents *a mismatch* rather than *a match*. This **contradicts the [documentation](https://huggingface.co/transformers/model_doc/lxmert.html)** which in my understanding assigns the first logit to `is_match` (True): > cross_relationship_score – (torch.FloatTensor of shape (batch_size, 2)): Prediction scores of the textual matching objective (classification) head (scores of True/False continuation before SoftMax). @LysandreJik, @eltoto1219: Is the documentation right or wrong? How was the model trained and which logit was intended to predict the match and which one the mismatch? I would be very happy to have a clear answer about the order of the logits in training and code and its correspondence to the documentation. I know how easy it is to flip the logits around and choose the one delivering better results on the data at hand. But in my understanding of scientific conduct, I can not just choose this by wishful thinking and ignoring the documentation: there might be other things going on exhibiting this behavior. Thank you in advance!
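For concreteness, a small sketch of inspecting both orderings. It uses `LxmertForPreTraining`, whose output exposes `cross_relationship_score`; the visual features below are random placeholders, so a real check needs Faster R-CNN features as in the linked demo:

```python
import torch
from transformers import LxmertForPreTraining, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertForPreTraining.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("a dog playing in the park", return_tensors="pt")
visual_feats = torch.randn(1, 36, 2048)  # placeholder: 36 regions, 2048-d features
visual_pos = torch.rand(1, 36, 4)        # placeholder: normalized box coordinates

out = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
probs = out.cross_relationship_score.softmax(dim=-1)
# the open question above: is probs[:, 0] the match or the mismatch probability?
print(probs)
```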
12-19-2020 10:08:13
12-19-2020 10:08:13
This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions. If you think this still needs to be addressed please comment on this thread.
transformers
9,211
closed
[trainer] deepspeed integration
This PR adds experimental support for Deepspeed <https://github.com/microsoft/deepspeed>, whose main feature is ZeRO covered by the paper [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, by Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He](https://arxiv.org/abs/1910.02054). Recently added support for sharded DDP ([fairscale](https://github.com/facebookresearch/fairscale)) also implements parts of ZeRO. Deepspeed implements all of ZeRO. I haven't experimented enough yet, but it indeed delivers incredible results. For example I can get about a 5-8 times bigger batch onto the same hardware as compared to the same code running w/o deepspeed and the speedup is huge too. In the following example I was able to get 4.5x speedup on training, and ~2x on validation/testing: ``` # baseline export BS=3; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python \ -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path \ sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro \ --do_eval --do_predict --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 \ --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 \ --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \ --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --src_lang en_XX --task translation \ --test_max_target_length 128 --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 2000 \ --n_val 2000 --n_test 2000 2020-12-18 22:31:40 | INFO | __main__ | train_runtime = 144.9132 2020-12-18 22:37:10 | INFO | __main__ | val_runtime = 329.8146 2020-12-18 22:42:37 | INFO | __main__ | test_runtime = 326.6212 # deepspeed export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed \ ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 \ --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 \ --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 \ --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS \ --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --src_lang en_XX --task translation \ --test_max_target_length 128 --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 2000 \ --n_test 2000 --deepspeed ds_config.json 2020-12-18 22:51:46 | INFO | __main__ | train_runtime = 32.6825 2020-12-18 22:54:47 | INFO | __main__ | val_runtime = 180.5917 2020-12-18 22:57:51 | INFO | __main__ | test_runtime = 183.7731 ``` The bleu eval scores were slightly better than the baseline (~0.5 point higher), but it's not enough to make any conclusions based on a single run. The cool thing is that deepspeed does everything by itself, even the `--fp16` handling, so really it was all about getting out of its way, thus the main part of the integration was to disable a lot of things the trainer does when `--deepspeed` is enabled. Note the different invocation pattern. 
If normally we run distributed as:
```
python -m torch.distributed.launch --nproc_per_node=2 ./program.py args
```
deepspeed performs its own DDP internally, and requires the program to be started with:
```
deepspeed ./program.py args
```

The only thing I'm not sure about with this PR is that deepspeed enables all of its features via a json config file, so I'm not sure where to stash a sample one. I guess I will just add it to the documentation. Currently I put one under `examples/seq2seq/ds_config.json` since that's where the test that needs it lives. But once this is merged, all interested parties can start experimenting with various features, and it won't impact `transformers` code. They will just need to tweak `ds_config.json`. And we convert many trainer cl args into DS config on the fly.

There surely will be competition between fairscale and deepspeed integrations. So far, from the few experiments I did, `deepspeed` allows for a bigger batch size than fairscale.

To install deepspeed you can just do `pip install deepspeed` - I'm not sure if all bug fixes are in the release. We can make a request to release a new version when this is merged. If the build fails I recommend pre-compiling its CUDA extensions (otherwise they get built at run time via PTX) from master:
```
git clone https://github.com/microsoft/deepspeed
cd deepspeed
DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e . 2>&1 | tee build.log
```
If you want a faster build, add an env var `TORCH_CUDA_ARCH_LIST` with the cuda compute capabilities you need, e.g. I do:
```
TORCH_CUDA_ARCH_LIST="6.1;8.6" DS_BUILD_OPS=1 pip install --no-clean --no-cache -v --disable-pip-version-check -e . 2>&1 | tee build.log
```

It was awesome that @sgugger has just added `fairscale` support, so it was much easier to do the same for deepspeed seeing how `fairscale` was integrated, so I'm appreciating the work you have done, Sylvain.

----------------------

Do try it so we get better testing! You will need 2+ gpus to use it.

First install it:
```
pip install deepspeed
```
At the very least do the test:
```
cd examples/seq2seq
pytest -sv test_finetune_trainer.py -k deepspeed
```
Or if you want to fiddle with the normal run, here is what I have been using:
```
cd examples/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 --n_test 100 --deepspeed ds_config.json --fp16 --save_steps 1
```

----------------------

Questions that need to be addressed so that all Trainer features continue to work under deepspeed:

* [ ] a notebook with benchmarks was requested

Probably at a later time; my uneven gpu-sized setup doesn't lend itself to impressive benchmarks - maybe someone will send me another rtx-3090 card ;)

@sgugger, @LysandreJik, @patrickvonplaten
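For readers who haven't seen one, here is a minimal sketch of the kind of `ds_config.json` discussed above, written out from Python. The keys are standard DeepSpeed config keys; the values are illustrative and are not the ones shipped in `examples/seq2seq/ds_config.json`:

```python
import json

# illustrative DeepSpeed config: ZeRO stage 2 with fp16, with the optimizer
# and scheduler delegated to DeepSpeed (keys per the DeepSpeed config docs)
ds_config = {
    "train_micro_batch_size_per_gpu": 20,
    "gradient_accumulation_steps": 1,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "optimizer": {"type": "Adam", "params": {"lr": 3e-5, "eps": 1e-6}},
    "scheduler": {"type": "WarmupLR", "params": {"warmup_num_steps": 500}},
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```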
12-19-2020 07:13:32
12-19-2020 07:13:32
Thanks @stas00 for putting this together! I think there might be a few things we can do on the deepspeed side to smooth a few pain points out. Most of us are out of office until the new year but will definitely be taking a close look at this soon and help where we can.<|||||>Thank you, @jeffra! New year sounds perfect as a time for you to make suggestions if any, but meanwhile I think it's coming along nicely. <|||||>OK, so this PR also introduces a concept of `self.wrapped_model` so that we have less confusion about which is which in the trainer and user code.

* `self.model` is always the transformers model - this is the internal model
* `self.wrapped_model` is a wrapped model - always the most external model, which could be DDP(Transformers Model), DDP(Deepspeed(Transformers Model)), etc.

It's not documented yet, but @sgugger when you get a chance could you please check that what I did here looks correct: https://github.com/huggingface/transformers/pull/9211/commits/1510444399a4a088e6b14076bd857a47c3f2b1eb

Questions:
1. is it correct that I set it to `None` if there is no wrapped model?
2. would it be better to call it `model_wrapped` - so the two better align side by side during debug or in IDE completion engines?
3. I'm not sure where to document this? And we should probably add a public API accessor?
4. We can now probably remove and refactor the following code, as we now have a simpler way to get the internal model - https://github.com/huggingface/transformers/blob/cbe63949d76efd153a1f389f38fe9ce1287e06b0/src/transformers/trainer.py#L1635-L1651 or is it used in some deep code where there is no trainer object? In that case this code won't work, as it needs double unwrapping under deepspeed - `model.module.module` (since we have DDP too). I see it's used only in `floating_point_ops`, which does have access to `self`, so I'm not sure why it was needed in the first place. Also in 2 tests, but that could be moved into the tests if need be. If we want a general unwrap function it needs to do it recursively until there is no more `.module` (a sketch follows below).<|||||>Let me think. For 1, I think the `wrapped_model` should be the model, in this case, just to avoid the inconvenience of testing if `None`. For 2, I have no strong opinion, so you can pick the version you prefer. For 3, none of the attributes of the `Trainer` are properly documented yet. This could be added in the main docstring. For 4, yes, absolutely. This was a quick fix that was merged when I didn't get much time to do a nice solution; I thought I had removed all uses of that function. Could you add the `wrapped_model` or `model_wrapped` in a separate PR? This would be easier to follow and not hijack the discussion on the deepspeed integration. We can rebase this when that PR is merged.<|||||>> For 1, I think the `wrapped_model` should be the model, in this case, just to avoid the inconvenience of testing if `None`.

Then we somewhat lose information - `None` is telling us that nothing is wrapping the model. But I suppose we could achieve the same by `self.model == self.wrapped_model` - OK, that works! Thank you for the rest of the answers, @sgugger. Will integrate those and make a separate PR with wrapped_model.<|||||>@stas00 Should it be mentioned in some README/documentation that folks can only use DeepSpeed with the PyTorch `Trainer` and not the TF `Trainer`? There's a hard dependency of using `torch.distributed` with the NCCL backend to use DeepSpeed.
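A minimal sketch of the recursive unwrap mentioned in point 4 above (illustrative, not the exact Trainer code):

```python
import torch.nn as nn

def unwrap_model(model: nn.Module) -> nn.Module:
    """Peel off wrapper layers (DDP, DeepSpeed engine, ...) that expose .module."""
    # under deepspeed + DDP this recurses twice: DDP(Deepspeed(model)) -> model
    if hasattr(model, "module"):
        return unwrap_model(model.module)
    return model
```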
Secondly, what is the plan in terms of introducing DeepSpeed in the `transformers` [`setup.py`](https://github.com/huggingface/transformers/blob/master/setup.py) and the PyTorch GPU [`Dockerfile`](https://github.com/huggingface/transformers/blob/master/docker/transformers-pytorch-gpu/Dockerfile)? While DeepSpeed has a pip installable PyPI package, IIRC it is highly recommended that it be installed from source. Also, in order to use certain features in DeepSpeed such as 1-bit Adam, there are certain special installations to be done that do not come with the PyPI package. Will this PR support every underlying DeepSpeed feature? If not, can the scope of the initial DeepSpeed integration be defined clearly in some README/documentation, while allowing for further iterations in future to enable the utilization of more DeepSpeed features with the `transformers` `Trainer`?<|||||>> @stas00 Should it be mentioned in some README/documentation that folks can only use DeepSpeed with the PyTorch Trainer and not the TF Trainer? There's a hard dependency of using torch.distributed with the NCCL backend to use DeepSpeed. Yes, we should definitely be clear about that. thank you! At the moment the idea is to put all the ZeRO related docs here: https://github.com/huggingface/transformers/pull/9208 (that PR covers fairscale at the moment) > Secondly, what is the plan in terms of introducing DeepSpeed in the transformers setup.py It'll be up to users to install `deepspeed`, just like it's the case with `fairscale` or any other libraries the `transformers` core doesn't require. Currently if you use `--deepspeed` and you don't have it installed the trainer will assert with a suggestion to install that library. > and the PyTorch GPU Dockerfile? I have no idea. I don't see any reason why it can't be included. Let's do it in baby steps. First, make the support available, test it out, solve initial issues if any. Then worry about everything else? > While DeepSpeed has a pip installable PyPI package, IIRC it is highly recommended that it be installed from source. Also, in order to use certain features in DeepSpeed such as 1-bit Adam, there are certain special installations to be done that do not come with the PyPI package. Will this PR support every underlying DeepSpeed feature? If not, can the scope of the initial DeepSpeed integration be defined clearly in some README/documentation, while allowing for further iterations in future to enable the utilization of more DeepSpeed features with the transformers Trainer? As I have shown in the example of the upcoming fairscale-support doc PR (we are waiting for fairscale to make a new pypi release before we merge it), we will document the same for DeepSpeed and address your questions. Your comments would be super-helpful for that document, so please save them for when we get to write that document. With your permission I can tag you on that future PR. Thank you. Wrt to specifics let's see what ends up working out of box and what needs to be polished. I think the main issues will be bugs on the DS side. Otherwise there is a ton of features and I have only been testing a few. If you feel inspired and are already experienced with DS it'd be awesome if you made a checklist of features and then between you and I, and anybody else who wants to contribute, test those features and check what's supported and report back to DS what is not. Since DS does most of the things on its own, I don't think there will be much to change in `transformers` Trainer once this PR is polished. 
I can be wrong of course. **edit**: actually there is no point waiting - I started adding notes into docs/source/training.rst in this PR. So already added a few of your comments - will need to expand those later.<|||||>> how is the optimizer/scheduler creation handled? E.g. how is the fact we don't apply weight decay to some parameters or the proper schedule with the number of training steps handled? I don't see it in the current version of the code since deepspeed is responsible for creating the optimizer and scheduler It wasn't, but it is now. Please have a look at what I just committed https://github.com/huggingface/transformers/pull/9211/commits/869173f62373a068c2cb932680093e7fd1d4ab78 - schedulers - I was able to remap 2 `constant_with_warmup` + `linear` - optimizers - I think only Adam is possible at the moment with our cl args - but users can do whatever they want in the config file - the main logic is that if either `optimizer` or `scheduler` is configured in the config file we keep those and don't override the corresponding parts - if they are missing - we build our own config. - one thing I could use your help with, @sgugger. To use `linear` I needed `num_training_steps` early in `__init__` before distributed is set up. I tried postponing ds init for the time we have `num_training_steps` available in `train`, but it was too late - must set up dist early in `__init__` - so I added a hacky `get_num_training_steps` - please have a look if you can think of a better way to handle this chicken-n-egg problem. Thank you!<|||||>> Left some comments. Not super happy about the `get_num_training_steps` method, but I don't see a way around it either. I agree. I tried to move `_init_deepspeed` into a later stage when we already have that figured out cleanly, but there is a chicken and egg problem there. I have made another commit to only run `get_num_training_steps` if absolutely necessary, and not by default for `--deepspeed` > Last thing I wonder is if it works okay for checkpointing (saving model/optimizer/scheduler after `save_steps` steps) and then resuming training? I.e., does DeepSpeed gets in the way of saving/reloading a model/optimzier state/scheduler state. It's on the todo list in the OP, where I'm tracking what needs to be done. It's just a matter of doing it. <|||||>@sgugger, as `_init_deepspeed` was born during this PR's evolution and it continues evolving and growing - are you happy with it in the main file or should it be moved into some other file, say `trainer_integrations.py`?<|||||>@sgugger, I'd love some guidance from you to that last point We have `deepspeed.DeepSpeedEngine.save_checkpoint` and `deepspeed.DeepSpeedEngine.load_checkpoint` ([doc](https://deepspeed.readthedocs.io/en/latest/model-checkpointing.html)) So in `_save_checkpoint` I will delegate to `deepspeed.DeepSpeedEngine.save_checkpoint` the 2 parts: ``` # Save model checkpoint # Save optimizer and scheduler ``` and leave the rest untouched, right? Then I see `_tune_save_checkpoint` - looks very similar to `_save_checkpoint` but it's much simpler - I guess doing the same here? `deepspeed.DeepSpeedEngine.save_checkpoint` saves/loads everything in one call - doesn't separate them into model/sched/optim -. And then on loading - there are quite a few places where this is happening. Any suggestions at how to approach this? Just match any place where there is `torch.load`? 
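A sketch of the delegation being discussed (the `save_checkpoint` method name is from the DeepSpeed model-checkpointing docs linked above; how the engine is passed around is an assumption of the sketch):

```python
import torch

def save_checkpoint(engine_or_model, optimizer, scheduler, output_dir):
    """Sketch: delegate to DeepSpeed when present, else save the pieces separately."""
    if hasattr(engine_or_model, "save_checkpoint"):
        # DeepSpeedEngine.save_checkpoint stores model, optimizer and scheduler
        # state in a single call
        engine_or_model.save_checkpoint(output_dir)
    else:
        engine_or_model.save_pretrained(output_dir)
        torch.save(optimizer.state_dict(), f"{output_dir}/optimizer.pt")
        torch.save(scheduler.state_dict(), f"{output_dir}/scheduler.pt")
```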
<|||||>> Left some comments. Not super happy about the `get_num_training_steps` method, but I don't see a way around it either.

I agree. I tried to move `_init_deepspeed` into a later stage when we already have that figured out cleanly, but there is a chicken-and-egg problem there. I have made another commit to only run `get_num_training_steps` if absolutely necessary, and not by default for `--deepspeed`.

> Last thing I wonder is if it works okay for checkpointing (saving model/optimizer/scheduler after `save_steps` steps) and then resuming training? I.e., does DeepSpeed get in the way of saving/reloading a model/optimizer state/scheduler state?

It's on the todo list in the OP, where I'm tracking what needs to be done. It's just a matter of doing it.<|||||>@sgugger, as `_init_deepspeed` was born during this PR's evolution and it continues to evolve and grow - are you happy with it in the main file, or should it be moved into some other file, say `trainer_integrations.py`?<|||||>@sgugger, I'd love some guidance from you on that last point.

We have `deepspeed.DeepSpeedEngine.save_checkpoint` and `deepspeed.DeepSpeedEngine.load_checkpoint` ([doc](https://deepspeed.readthedocs.io/en/latest/model-checkpointing.html)).

So in `_save_checkpoint` I will delegate to `deepspeed.DeepSpeedEngine.save_checkpoint` these 2 parts:

```
# Save model checkpoint
# Save optimizer and scheduler
```

and leave the rest untouched, right?

Then I see `_tune_save_checkpoint` - it looks very similar to `_save_checkpoint` but is much simpler - I guess I do the same there?

`deepspeed.DeepSpeedEngine.save_checkpoint` saves/loads everything in one call - it doesn't separate them into model/scheduler/optimizer.

And then on loading - there are quite a few places where this happens. Any suggestions on how to approach this? Just match any place where there is `torch.load`?

Alternatively, I can just give it a try and then you can comment on what I have missed / did wrong - if that sounds like a better use of your time. Thank you!<|||||>For the checkpointing, `_tune_save_checkpoint` is only there for Ray Tune saving of checkpoints, so it's a bit different. I think it's fine to delegate to DS the saving of model/optimizer/scheduler. For the reloading, I'd go for using `torch.load` when reloading the model at the end of training, but use DeepSpeed for loading optimizer/scheduler (it will also reload the model, but it should match the previous one).
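For reference, a minimal sketch of what that delegation could look like, assuming `engine` is the `DeepSpeedEngine` returned by `deepspeed.initialize` - the function names, directory, and tag here are illustrative, not the Trainer's actual code:

```
# Sketch only: `engine` is assumed to be a DeepSpeedEngine instance.
def save_deepspeed_checkpoint(engine, output_dir, tag="checkpoint-500"):
    # One call persists model weights, optimizer state and scheduler state.
    engine.save_checkpoint(output_dir, tag=tag)

def load_deepspeed_checkpoint(engine, output_dir, tag="checkpoint-500"):
    # Also one call: model, optimizer and scheduler are restored in place on
    # the engine; the resolved path and any stored client state are returned.
    load_path, client_state = engine.load_checkpoint(output_dir, tag=tag)
    return load_path, client_state
```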
<|||||>@sgugger, what should we do about tests? I only have a basic test that runs a full train/eval with `finetune_trainer.py` - like we have for `sharded_ddp`.

1. Do we want to test all the myriad different config combinations? Probably not, since we can't control them anyway, and perhaps it's best to do it on demand if and when we find bugs in our integration code.
2. I could move it from `examples` to `tests`, but then it'd be much more difficult to do a functional test, I think - please correct me if I'm wrong. If there is an easy way then by all means - and the `sharded_ddp` tests need to go there too then. We can also deal with this separately after concluding this PR, as it's a project of its own.
3. In either case - should we give it a dedicated test file, add it to a dedicated integrations test file, or leave it for now as is? I think the last.

The thing is, we only pass the configuration on to DS and aren't really doing anything other than getting out of its way, so there isn't much to test. Perhaps checkpointing should be tested, as it will be part of the trainer code - which I think is the last big thing I need to integrate. Thanks.<|||||>> Could we however move it to `integrations` since the trainer code is already quite long? It only seems to use `self` to access the `args` so we could have its signature be `(args, model)`

It needs `self` for the num-of-training-steps calculation, and I tried to only run it if it's needed (linear scheduler), so I can't pre-calculate and pass it to the `_init_deepspeed` method, as it'd be inefficient for code that doesn't need it. I'm waiting for the DS team to follow up on my question about the chicken and the egg - perhaps there will be a way to avoid that hack. Otherwise I'm in total agreement with you wrt moving it away and having it take only the args it needs.<|||||>> It needs `self` for the num-of-training-steps calculation, and I tried to only run it if it's needed (linear scheduler), so I can't pre-calculate and pass it to the `_init_deepspeed` method, as it'd be inefficient for code that doesn't need it.

This is just a few math operations; I'd prefer we pass it either way to avoid the added complexity in the Trainer for everyone.

> @sgugger, what should we do about tests

For now, a simple integration test seems enough (I imagine in the multi-GPU tests). It's going to be challenging to properly test all those frameworks that require multiple GPUs, but this is a discussion for another PR.<|||||>> I'd prefer we pass it either way to avoid the added complexity in the Trainer for everyone.

Sorry, I don't know what you mean when you say that. Would you please rephrase that sentence with explicit terms - pass what/where, "it" being what? Thank you.<|||||>> Sorry, I don't know what you mean when you say that. Would you please rephrase that sentence with explicit terms - pass what/where, "it" being what?

I was speaking of the number of training steps: always pass it, even if we end up not needing it because the scheduler does without it.<|||||>> I was speaking of the number of training steps: always pass it, even if we end up not needing it because the scheduler does without it.

Thank you for clarifying. I disagree. I imagine most of the time users will run their own ds_config.json with a custom config, which would not use that expensively calculated number - so why introduce a repeated, wasted overhead of calculating something that will never be used? (Or am I exaggerating and it's not that much of an overhead?)

Why is it so bad if we pass the `trainer` object, if it helps to optimize speed? I understand that since we are talking about changing `_init_deepspeed` not to be a trainer method it'd be good to make it so - but there is nothing wrong with it taking a `trainer` object as the first arg, no?

I hope this whole thing might be resolved in a different, much cleaner way if I can only get help on it here: https://github.com/microsoft/DeepSpeed/issues/633 - this is a hack we are doing that shouldn't be needed in the first place...<|||||>Oh, if you prefer sending it `self`, I have no objections. We do this too for some of the other integration methods.<|||||>So our training-steps calculation hack is no longer needed, as @jeffra just happened to add `deepspeed.init_distributed` recently, which is what we were missing - adopting it led to much cleaner code. Yay!
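In outline, that enables the two-step flow below - a sketch under the assumption that `model` and `ds_config` are already built; `deepspeed.init_distributed` and `deepspeed.initialize` are the real entry points, but exactly how the Trainer wires the arguments is illustrative:

```
import deepspeed

# 1) Bring up torch.distributed early, in Trainer.__init__, before the
#    dataloader length (and thus num_training_steps) is known.
deepspeed.init_distributed(dist_backend="nccl")

# 2) Later, once num_training_steps is available, hand everything to DS.
#    `model` and `ds_config` are assumed to exist; the kwargs shown follow
#    the public API, but the Trainer's actual call may differ.
model_engine, optimizer, _, lr_scheduler = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config_params=ds_config,
)
```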
<|||||>OK, this is pretty much done - just waiting for some feedback from the DeepSpeed team on the best way to handle checkpoints, but the code is in place. So please kindly send me any final comments/suggestions/requests and let's merge it. Please re-invite old reviewers and add new ones if needed. Thank you!<|||||>deepspeed-0.3.10 has just been released by @jeffra on PyPI - I verified that it works - so we are ready to merge this whenever you're happy with it.

It'd be great if you tried running it too, since I think it has been only me running it, so my work is only as good as my environment, and I may not know of other culprits - e.g. I can't test with pytorch < pt-nightly since my card doesn't work with those pytorch versions. You will need 2+ GPUs to use it.

First install it:
```
pip install deepspeed
```
At the very least run the test:
```
cd examples/seq2seq
pytest -sv test_finetune_trainer.py -k deepspeed
```
Or if you want to fiddle with a normal run, here is what I have been using:
```
cd examples/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
export BS=20; rm -r output_dir
CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 \
./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir \
--adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train \
--evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 \
--learning_rate 3e-5 --logging_first_step --logging_steps 1000 \
--max_source_length 128 --max_target_length 128 --num_train_epochs 1 \
--overwrite_output_dir --per_device_eval_batch_size $BS \
--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 \
--sortish_sampler --task translation_en_to_ro --test_max_target_length 128 \
--val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 \
--n_test 100 --deepspeed ds_config.json --fp16 --save_steps 1
```
<|||||>@sgugger,

1. While working on the docs I discovered that DS does its own gradient clipping (that doc was buried and I didn't see it), so I had to undo the trainer code that did it on DS's behalf - we now just skip that step.
2. I did a major rewrite/expansion of the docs (including the fairscale section) - so please kindly have a look. It mainly mirrors the config logic in the integration code.
3. In the docs I consistently used Trainer (capitalized) to refer to the HF trainer. I know you didn't like it when I did that with Issue in a different PR; let me know if you prefer a lowercase trainer.

While this PR is perfectly ready for a final review, I need to wait for https://github.com/microsoft/DeepSpeed/pull/656 to be answered before we can merge, as I'm unsure about their defaults for gradient clipping. Thank you.<|||||>> Went through the documentation and left comments.

Awesome - thank you - all integrated.

> On the optimizer side, it doesn't seem like DeepSpeed supports AdamW from what you're saying, so we should document the default optimizer is changed at the very beginning of the DeepSpeed section. It does drastically change the value of `weight_decay` to use.

I found a way to use AdamW - thank you for catching that, @sgugger. I documented the nuances.<|||||>I think the DeepSpeed team is on vacation, as there has been no response for several days. And since I have no way of talking to anyone there, I have no way of knowing when they will be back. So I will go ahead and merge this so that others can start experimenting, and then we can fix whatever needs to be fixed once I get the gradient-clipping issue answered.<|||||>Amazing work @stas00 !
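Since DS handles gradient clipping itself, that knob moves from a Trainer argument into the DeepSpeed config file. A minimal sketch of the relevant section, expressed here as a Python dict - the keys are real DeepSpeed config options, but the values are placeholders, not recommended settings:

```
# Sketch of a ds_config fragment; all values are placeholders.
ds_config = {
    "gradient_clipping": 1.0,           # max gradient norm, applied by DS itself
    "fp16": {"enabled": True},          # mixed precision handled by DS
    "zero_optimization": {"stage": 2},  # ZeRO sharding of optimizer state/gradients
}
```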