Dataset schema:

| column | dtype | stats |
| --- | --- | --- |
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
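The schema above describes a Hugging Face `datasets`-style preview of GitHub issue records. As a minimal sketch of how such a dataset is typically loaded and inspected with the `datasets` library (the dataset id below is a hypothetical placeholder, not taken from this page):

```python
from datasets import load_dataset

# hypothetical dataset id -- substitute the repo this preview was generated from
ds = load_dataset("user/transformers-github-issues", split="train")

print(ds.features)         # column names and dtypes, matching the schema table above
print(ds[0]["title"])      # e.g. "How to generate on multiple GPUs?"
print(len(ds[0]["body"]))  # body lengths range from 0 to ~234k characters per the stats
```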
https://api.github.com/repos/huggingface/transformers/issues/6821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6821/comments
https://api.github.com/repos/huggingface/transformers/issues/6821/events
https://github.com/huggingface/transformers/issues/6821
688,604,891
MDU6SXNzdWU2ODg2MDQ4OTE=
6,821
How to generate on multiple GPUs?
{ "login": "moinnadeem", "id": 813367, "node_id": "MDQ6VXNlcjgxMzM2Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/813367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moinnadeem", "html_url": "https://github.com/moinnadeem", "followers_url": "https://api.github.com/users/moinnadeem/followers", "following_url": "https://api.github.com/users/moinnadeem/following{/other_user}", "gists_url": "https://api.github.com/users/moinnadeem/gists{/gist_id}", "starred_url": "https://api.github.com/users/moinnadeem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moinnadeem/subscriptions", "organizations_url": "https://api.github.com/users/moinnadeem/orgs", "repos_url": "https://api.github.com/users/moinnadeem/repos", "events_url": "https://api.github.com/users/moinnadeem/events{/privacy}", "received_events_url": "https://api.github.com/users/moinnadeem/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @moinnadeem,\r\n\r\nSorry to answer so late...yeah this will require some work I think! Will put it in the generate() project, but not sure when we manage to take a deeper look into this. Feel free to open a PR and tag me if you have some good ideas :-) ", "Just added this to `example/seq2seq/`: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#multi-gpu-evalulation", "+1 on getting `generate()` to run when using nn.dataparallel for evaluation\r\n\r\nIs the current finetune.py script (which I think uses DDP) running generate on all GPUs or just on one device?", "it uses all devices.", "Regarding getting generate() to run using nn.dataparallel, it actually probably isn't worth building the functionality. It seems that dataparallel (e.g. with 2 GPUs) is rarely faster (and actually sometimes quite a bit slower) than running with a single GPU, even when the multi-gpu is on the same node.", "Hello,\r\n\r\nI want to use generate function with single GPU. Specifically, I fine tuned a GPT-2 model (on GPU) and subsequently, I want to generate text with it. \r\n\r\nWhen I run this \r\n\r\n```\r\ninput_ids.to(device)\r\nsample_output = model.generate(\r\n input_ids, \r\n do_sample=True, \r\n max_length=150,\r\n top_k=50,\r\n top_p=0.92\r\n)\r\n```\r\nI get this error\r\n\r\n`RuntimeError: Input, output and indices must be on the current device`\r\n\r\nWhen i move the model and the input `.to('cpu')`, it works.", "@contribcode this looks like another issue, do you mind opening a new issue with the issue template filled-in? Thank you.", "You are right @LysandreJik, I will open a new issue.", "This issue has been stale for 1 month.", "Gently pinging @stas00 here - is this possible now thanks to your recent PR in `generate()`? ", "1. The work I did in `generate`'s search functions is to make those work under deepspeed zero-3+ regime, where all gpus must work in sync to complete, even if some of them finished their sequence early - it uses all gpus because the params are sharded across all gpus and thus all gpus contribute their part to make it happen. \r\n\r\n2. But otherwise, the current design is that we already use all gpus. Is it not the case? We just don't do anything with the results from all but rank 0 process.\r\n\r\n3. In current examples we for some reason save the generated tokens only from rank0: e.g. in `translation_run.py`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/50f4539b8201b26b18085260bf801cdeadfa6640/examples/pytorch/translation/run_translation.py#L564\r\n\r\nI guess it's easier than to try to somehow write them out in non-interleaved way - probably could to add `flock` and append them all to the same file if needed.\r\n\r\n4. metrics we calculate on all gpus, but only save metrics from rank0 process\r\n\r\nhttps://github.com/huggingface/transformers/blob/5c00918681d6b4027701eb46cea8f795da0d4064/src/transformers/trainer_pt_utils.py#L924\r\n\r\nIf I missed something please clarify what is not working in `generate` under multiple gpus or what is the desired functionality?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Also curious about this", "We should definitely enable `generate()` on multiple GPUs - did anybody give it a try with DDP? 
", "See: https://github.com/huggingface/transformers/issues/6821#issuecomment-825770020", "> Hi!\r\n> \r\n> How would I run generation on multiple GPUs at the same time? Running model.generate on a DataParallel layer isn't possible, and model.module.generate run on a single GPU.\r\n> \r\n> Any advice would be appreciated!\r\n\r\nHello,\r\nDid you resolve this issue? If you remember can you share me the solution for it?\r\n\r\nThanks in advance", "As I already replied earlier `generate` works on multiple gpus including `deepspeed`", "> \r\n@kkavyashankar0009 @JulesGM \r\n\r\nHi, it seems that model.generatre() does not support DP, but it can be used in DDP. Here is some example code:\r\n\r\n\r\n```\r\nimport torch.multiprocessing as mp\r\nimport torch.distributed as dist\r\nimport argparse\r\nimport torch\r\nfrom transformers import get_linear_schedule_with_warmup, BartConfig, BartTokenizer, BartForConditionalGeneration\r\n\r\nparser = argparse.ArgumentParser()\r\nparser.add_argument('--nodes', type=int, default=1) # how many nodes (machines) you have\r\nparser.add_argument('--gpus', type=int, default=-1, help='num gpus per node')\r\nparser.add_argument('--nr', type=int, default=0, help='ranking within the nodes')\r\nargs = parser.parse_args()\r\ntokenizer = BartTokenizer.from_pretrained(args.tokenizer_name)\r\n\r\ndef test_model_generation(local_gpu_rank, args):\r\n set_seed(args.seed)\r\n args.rank = args.nr * args.gpus + local_gpu_rank # compute the rank of the current GPU\r\n dist.init_process_group(backend=\"nccl\", init_method=\"env://\", world_size=args.world_size, rank=args.rank)\r\n\r\n test_data = \"../data/\" + args.dataset + \"/test.txt\"\r\n print(\"Processing data: \" + test_data, flush=True)\r\n config = BartConfig.from_pretrained(args.config_name)\r\n bart_ctx = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)\r\n bart_rep = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)\r\n bart_ctx.resize_token_embeddings(len(tokenizer))\r\n bart_rep.resize_token_embeddings(len(tokenizer))\r\n\r\n model = MyModel(bart, config)\r\n model_state_dict = torch.load(\"../output/model/gen.ddp.pt\") \r\n model.load_state_dict(model_state_dict, strict=True) # load model\r\n torch.cuda.set_device(local_gpu_rank) \r\n args.device = torch.device(\"cuda\", local_gpu_rank)\r\n model.to(args.device) # move the model to GPU\r\n # model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_gpu_rank], find_unused_parameters=True)\r\n model.eval()\r\n\r\n test_dataset = FileDataset(test_data, tokenizer, max_context_len=args.max_ctx_len, max_response_len=args.max_rep_len, dataset=args.dataset)\r\n test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)\r\n test_dataloader = DataLoader(test_dataset, batch_size=args.test_batch_size, num_workers=2, sampler=test_sampler) # the batch size on each GPU\r\n\r\n if args.rank == 0:\r\n count = 0\r\n fw = open(\"../output/test.responses.txt\", \"w\", encoding=\"utf-8\") # open file for writting result\r\n with torch.no_grad():\r\n test_sampler.set_epoch(0) # keep all data the same on all GPUs, it is usually used in training, I'm not sure if it is necessary in inference\r\n for test_data in test_dataloader:\r\n for key in test_data.keys():\r\n test_data[key] = test_data[key].to(args.device)\r\n outputs = model.bart_model.generate(\r\n input_ids=test_data[\"input_ids\"],\r\n attention_mask=test_data[\"attention_mask\"],\r\n 
max_length=args.max_rep_len,\r\n no_repeat_ngram_size=3,\r\n num_beams=10,\r\n ) # my model contains a BART model in self.bart_model,so I use model.module.bart_model to get it\r\n if outputs.size(1) < args.max_rep_len: # need padding because the lengths from different GPUs may be different\r\n batch_pred_padding = torch.ones((outputs.size(0), args.max_rep_len - outputs.size(1)), dtype=outputs.dtype).cuda() # use the padding token of BART, and its token id is 1. Be careful with the data type.\r\n outputs = torch.cat([outputs, batch_pred_padding], dim=1)\r\n batch_pred = [torch.zeros_like(outputs, dtype=outputs.dtype).cuda() for _ in range(args.world_size)] # initialized a list for collecting tensors from all GPUs. Be careful with the data type.\r\n dist.all_gather(batch_pred, outputs) # collect data\r\n batch_pred = torch.stack(batch_pred, dim=1) # use stack, take care of the dimension\r\n batch_pred = batch_pred.reshape(-1, args.max_rep_len)\r\n if args.rank == 0:\r\n batch_out_sentences = tokenizer.batch_decode(batch_pred, skip_special_tokens=True, clean_up_tokenization_spaces=False) # decode the token id to token\r\n for r in batch_out_sentences:\r\n fw.write(r + \"\\n\")\r\n fw.flush()\r\n count += len(batch_out_sentences)\r\n print(count)\r\n if args.rank == 0:\r\n fw.close()\r\n \r\nif __name__ == \"__main__\":\r\n if args.gpus < 0:\r\n args.gpus = torch.cuda.device_count()\r\n args.world_size = args.nodes * args.gpus\r\n os.environ['MASTER_ADDR']='localhost'\r\n os.environ['MASTER_PORT']='8888'\r\n mp.spawn(test_model_generation, nprocs=args.gpus, args=(args, ))\r\n\r\n```\r\n\r\n\r\n\r\n\r\nBe careful with the data order. It is better to add IDs and write them to the file, which can be used for checking the order.", "@DaoD \r\n\r\nThanks a lot for this example! If I understand correctly, I think you don't need to use `torch.nn.parallel.DistributedDataParallel` wrapper on the model if only running inference though, since we don't care about gradients. I found that only using the distributed data sampler and leaving the model as is (without the DDP wrapper) led to better memory usage. Haven't looked into why, but let's me use larger batch sizes just FYI.", "@bkleiner2 Thanks for your information! I just want to use multiple GPUs for inference, so I don't how it works if DDP is not used. ", "Yeah I'm just saying for inference only I don't think you need this line:\r\n\r\n`model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_gpu_rank], find_unused_parameters=True)`\r\n\r\nbecause we don't need to aggregate gradients. Once you've done this:\r\n\r\n` model.to(args.device) # move the model to GPU` you've already copied the model to each device right? Because there is a separate process running for each device. So you can get the data parallelism across devices by simply using `test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)`, as far as I can tell.", "@bkleiner2 Oh, I got it! Thanks for your explanation!", "I think the real problem we are facing is this error when we try to called `model.generate`\r\n``` bash\r\nAttributeError: 'DistributedDataParallel' object has no attribute 'generate'\r\n```\r\n\r\nIndeed, it can be solved by distributed sampler or using the huggingface trainer class. \r\n\r\nBut it could be quite tricky if we don't use them and write our own trainer. 
Even we have to be careful in using the distributed sampler.", "remove the outside wrapping to get to the `transformers`'s model and it should work, \r\n\r\n`model.module.generate()`\r\n\r\nYou can use the helper that deals with arbitrary number of wrappers.\r\nhttps://github.com/huggingface/transformers/blob/d90a36d192e2981a41122c30a765c63158dd0557/src/transformers/modeling_utils.py#L3027-L3038\r\n\r\n", "> As I already replied earlier `generate` works on multiple gpus including `deepspeed`\r\n\r\n\r\n\r\n> remove the outside wrapping to get to the `transformers`'s model and it should work,\r\n> \r\n> `model.module.generate()`\r\n> \r\n> You can use the helper that deals with arbitrary number of wrappers.\r\n> \r\n> https://github.com/huggingface/transformers/blob/d90a36d192e2981a41122c30a765c63158dd0557/src/transformers/modeling_utils.py#L3027-L3038\r\n\r\nI tried unwraping the model as \"model.module.generate\" but I guess there's an issue in the generators it's all special tokens' index as decoding returns empty strings. \r\n\r\ncould you please clarify what model.module is exactly doing?", "@HebaGamalElDin I think your problem is not related to this issue. Or we might need more context here, in your case.\r\n", "> could you please clarify what model.module is exactly do?\r\n\r\nWrappers like DDP, Deepspeed, and others hide the original model inside their objects - usually under `model.module`, so you can get from the wrapped object to the original model using `unwrap_model` which will handle multiple wrappers. If it's just one you can just access the original PreTrainedModel HF subclass model with `model.module`.\r\n\r\nThe wrappers have no `generate` method, only the PreTrainedModel subclasses have it. That's why if you want to call `generate` you must call it on the HF model and not its wrappers.\r\n\r\nIf you have issues I suggest you first remove DDP and debug your issue on a single GPU, once working it'd most likely work under DDP..\r\n\r\nAs @allanj suggested your issue most likely has nothing to do with unwrapping, so anything is possible if you write your own code. Hence I suggest to sort it out on 1-gpu first, then try 1+.", "@stas00 thank you, what I'm wondering is which model version exactly does model.module access or which model version in which GPU exactly? 
\r\n\r\n**It was working already in 1 GPU, since I switched to ml.p3.16xlarge instance It makes this behavior.**\r\n\r\nHere's the full training process:\r\n```\r\n\r\nimport os\r\nimport torch\r\nimport pandas as pd\r\nimport random\r\nimport math\r\nfrom copy import deepcopy\r\nfrom tqdm import tqdm\r\nimport os\r\nimport re\r\nimport shutil\r\nimport tarfile, zipfile\r\nimport pickle,json\r\nimport numpy as np\r\nimport itertools\r\nfrom PIL import Image\r\nimport PIL.ImageOps\r\nimport cv2\r\nfrom torch.utils.data import DataLoader\r\nfrom transformers import AdamW, TrOCRProcessor, VisionEncoderDecoderModel, get_scheduler\r\nfrom Data_pipeline import Context, HCRDataset, OCRDataLoad\r\nfrom Validation_Metrics import getWordLevelError, getCharacterLevelError\r\n\r\n\r\nfrom datasets import load_metric\r\ncer_metric = load_metric(\"cer\")\r\n\r\n# SageMaker data parallel: Import the library PyTorch API\r\n#round(random.uniform(),2)\r\nimport smdistributed.dataparallel.torch.torch_smddp\r\n\r\n# SageMaker data parallel: Import PyTorch's distributed API\r\nimport torch.distributed as dist\r\nfrom torch.nn.parallel import DistributedDataParallel as DDP\r\n\r\n# SageMaker data parallel: Initialize the process group\r\ndist.init_process_group(backend='smddp')\r\n\r\n# LOAD MODEL\r\ndef load_model() -> VisionEncoderDecoderModel:\r\n model: VisionEncoderDecoderModel = VisionEncoderDecoderModel.from_pretrained('gagan3012/ArOCRv4')\r\n return model.cuda()\r\n\r\n# SETUP MODEL CONFIGUATIONS\r\ndef init_model_for_training(model: VisionEncoderDecoderModel, processor: TrOCRProcessor):\r\n model.config.decoder_start_token_id = processor.tokenizer.cls_token_id\r\n model.config.pad_token_id = processor.tokenizer.pad_token_id\r\n model.config.vocab_size = model.config.decoder.vocab_size\r\n model.config.bos_token_id = processor.tokenizer.bos_token_id\r\n model.config.max_length = 162\r\n model.config.decoder.is_decoder = True\r\n model.config.decoder.add_cross_attention = True\r\n torch.cuda.manual_seed_all(42)\r\n\r\n \r\n\r\ndef compute_cer(processor, pred_ids, label_ids):\r\n pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)\r\n label_ids[label_ids == -100] = processor.tokenizer.pad_token_id\r\n label_str = processor.batch_decode(label_ids, skip_special_tokens=True)\r\n\r\n cer = cer_metric.compute(predictions=pred_str, references=label_str)\r\n return cer\r\n\r\n\r\n# LOAD PRE_PROCESSOR\r\ndef load_processor() -> TrOCRProcessor:\r\n return TrOCRProcessor.from_pretrained('gagan3012/ArOCRv4')\r\n\r\ndef train(context: Context, train_epochs, learning_rate):\r\n model = context.model\r\n optimizer = AdamW(model.parameters(), lr=learning_rate)\r\n \r\n num_training_steps = train_epochs * len(context.train_dataloader)\r\n lr_scheduler = get_scheduler(\"linear\", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)\r\n \r\n train_loss = 0.0\r\n min_cer = 1.0\r\n min_train_loss = 1.0\r\n for epoch in range(train_epochs):\r\n model.train() \r\n for j, batch in enumerate(context.train_dataloader):\r\n inputs: torch.Tensor = batch[\"input\"].cuda(non_blocking=True)\r\n labels: torch.Tensor = batch[\"label\"].cuda(non_blocking=True)\r\n #print(inputs)\r\n #print(labels)\r\n \r\n outputs = model(pixel_values=inputs, labels=labels)\r\n #print(outputs)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n lr_scheduler.step()\r\n optimizer.zero_grad(set_to_none=True)\r\n \r\n train_loss+=loss\r\n if (loss < min_train_loss) or (min_train_loss==1.0):\r\n 
min_train_loss = loss\r\n print(f\"Epoch {epoch}-----Loss---{train_loss/len(context.train_dataloader)}--------- min-cer: {min_train_loss}\")\r\n \r\n # evaluate\r\n #model.eval()\r\n valid_cer = 0.0\r\n with torch.no_grad():\r\n for batch in tqdm(context.val_dataloader):\r\n #print(f\"INPUT: {batch['input']} ------ LABEL: {batch['label']}\")\r\n outputs = model.module.generate(batch[\"input\"].cuda(non_blocking=True))\r\n #print(f\"OUTPUTS: {outputs}\")\r\n #print(f\"OUTPUTS on CPU: {outputs.cpu().numpy()}\")\r\n cer = compute_cer(context.processor, pred_ids=outputs.detach(), label_ids=batch[\"label\"])\r\n valid_cer += cer \r\n\r\n print(\"Validation CER:\", valid_cer / len(context.val_dataloader))\r\n\r\ndef main():\r\n batch_size = 64\r\n train_epochs = 100\r\n learning_rate = 0.0001\r\n checkpoints_path = \"checkpoints\"\r\n \r\n # SageMaker data parallel: Scale batch size by world size\r\n batch_size //= dist.get_world_size()\r\n batch_size = max(batch_size, 1)\r\n\r\n # Prepare dataset\r\n #train_dataset = torchvision.datasets.MNIST(...)\r\n \r\n processor = load_processor()\r\n (x_train,y_train),(x_valid,y_valid),(x_test,y_test) = OCRDataLoad()\r\n train_dataset = HCRDataset(x_train, y_train, processor)\r\n\r\n # SageMaker data parallel: Set num_replicas and rank in DistributedSampler\r\n train_sampler = torch.utils.data.distributed.DistributedSampler(\r\n train_dataset,\r\n num_replicas=dist.get_world_size(),\r\n rank=dist.get_rank())\r\n \r\n train_dataloader = DataLoader(train_dataset, batch_size, shuffle=False, sampler=train_sampler)\r\n \r\n val_dataset = HCRDataset(x_valid, y_valid, processor)\r\n val_sampler = torch.utils.data.distributed.DistributedSampler(\r\n val_dataset,\r\n num_replicas=dist.get_world_size(),\r\n rank=dist.get_rank())\r\n val_dataloader = DataLoader(val_dataset, batch_size, shuffle=False, sampler=val_sampler)\r\n \r\n # SageMaker data parallel: Wrap the PyTorch model with the library's DDP\r\n model = load_model()\r\n init_model_for_training(model, processor)\r\n \r\n model = DDP(model, find_unused_parameters=True)\r\n context = Context(model, processor, train_dataset, train_dataloader, val_dataset, val_dataloader)\r\n\r\n \r\n # SageMaker data parallel: Pin each GPU to a single library process.\r\n local_rank = os.environ[\"LOCAL_RANK\"] \r\n torch.cuda.set_device(int(local_rank))\r\n model.cuda(int(local_rank))\r\n train(context, train_epochs, learning_rate)\r\n\r\n # SageMaker data parallel: Save model on master node.\r\n if dist.get_rank() == 0:\r\n model.module.save_pretrained(checkpoints_path)\r\n\r\nif __name__ == '__main__':\r\n main()\r\n \r\n```\r\n\r\n@allanj @stas00 ", "In the simple case of DDP you basically have multiple GPUs and each gpu just runs its own `generate` on the unwrapped model.\r\n\r\nI see that you use SageMaker here, I don't have experience with this environment, perhaps there is something non-standard about it?\r\n\r\nI think you want to start testing with a very simple base case reducing your test to just creating DDP and doing `generate` with some hardcoded string - you can remove all the other code including training and dataloaders, etc.\r\n\r\n1. create smddp on 2 gpus\r\n2. run `generate` on that model\r\n\r\nnothing else. \r\n\r\nperhaps smddp does something different from other wrappers? Once you have a simple repro code then we can tag someone who knows SMDDP better.", "> > \r\n> \r\n> @kkavyashankar0009 @JulesGM\r\n> \r\n> Hi, it seems that model.generatre() does not support DP, but it can be used in DDP. 
Here is some example code:\r\n> \r\n> ```\r\n> import torch.multiprocessing as mp\r\n> import torch.distributed as dist\r\n> import argparse\r\n> import torch\r\n> from transformers import get_linear_schedule_with_warmup, BartConfig, BartTokenizer, BartForConditionalGeneration\r\n> \r\n> parser = argparse.ArgumentParser()\r\n> parser.add_argument('--nodes', type=int, default=1) # how many nodes (machines) you have\r\n> parser.add_argument('--gpus', type=int, default=-1, help='num gpus per node')\r\n> parser.add_argument('--nr', type=int, default=0, help='ranking within the nodes')\r\n> args = parser.parse_args()\r\n> tokenizer = BartTokenizer.from_pretrained(args.tokenizer_name)\r\n> \r\n> def test_model_generation(local_gpu_rank, args):\r\n> set_seed(args.seed)\r\n> args.rank = args.nr * args.gpus + local_gpu_rank # compute the rank of the current GPU\r\n> dist.init_process_group(backend=\"nccl\", init_method=\"env://\", world_size=args.world_size, rank=args.rank)\r\n> \r\n> test_data = \"../data/\" + args.dataset + \"/test.txt\"\r\n> print(\"Processing data: \" + test_data, flush=True)\r\n> config = BartConfig.from_pretrained(args.config_name)\r\n> bart_ctx = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)\r\n> bart_rep = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)\r\n> bart_ctx.resize_token_embeddings(len(tokenizer))\r\n> bart_rep.resize_token_embeddings(len(tokenizer))\r\n> \r\n> model = MyModel(bart, config)\r\n> model_state_dict = torch.load(\"../output/model/gen.ddp.pt\") \r\n> model.load_state_dict(model_state_dict, strict=True) # load model\r\n> torch.cuda.set_device(local_gpu_rank) \r\n> args.device = torch.device(\"cuda\", local_gpu_rank)\r\n> model.to(args.device) # move the model to GPU\r\n> # model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_gpu_rank], find_unused_parameters=True)\r\n> model.eval()\r\n> \r\n> test_dataset = FileDataset(test_data, tokenizer, max_context_len=args.max_ctx_len, max_response_len=args.max_rep_len, dataset=args.dataset)\r\n> test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)\r\n> test_dataloader = DataLoader(test_dataset, batch_size=args.test_batch_size, num_workers=2, sampler=test_sampler) # the batch size on each GPU\r\n> \r\n> if args.rank == 0:\r\n> count = 0\r\n> fw = open(\"../output/test.responses.txt\", \"w\", encoding=\"utf-8\") # open file for writting result\r\n> with torch.no_grad():\r\n> test_sampler.set_epoch(0) # keep all data the same on all GPUs, it is usually used in training, I'm not sure if it is necessary in inference\r\n> for test_data in test_dataloader:\r\n> for key in test_data.keys():\r\n> test_data[key] = test_data[key].to(args.device)\r\n> outputs = model.bart_model.generate(\r\n> input_ids=test_data[\"input_ids\"],\r\n> attention_mask=test_data[\"attention_mask\"],\r\n> max_length=args.max_rep_len,\r\n> no_repeat_ngram_size=3,\r\n> num_beams=10,\r\n> ) # my model contains a BART model in self.bart_model,so I use model.module.bart_model to get it\r\n> if outputs.size(1) < args.max_rep_len: # need padding because the lengths from different GPUs may be different\r\n> batch_pred_padding = torch.ones((outputs.size(0), args.max_rep_len - outputs.size(1)), dtype=outputs.dtype).cuda() # use the padding token of BART, and its token id is 1. 
Be careful with the data type.\r\n> outputs = torch.cat([outputs, batch_pred_padding], dim=1)\r\n> batch_pred = [torch.zeros_like(outputs, dtype=outputs.dtype).cuda() for _ in range(args.world_size)] # initialized a list for collecting tensors from all GPUs. Be careful with the data type.\r\n> dist.all_gather(batch_pred, outputs) # collect data\r\n> batch_pred = torch.stack(batch_pred, dim=1) # use stack, take care of the dimension\r\n> batch_pred = batch_pred.reshape(-1, args.max_rep_len)\r\n> if args.rank == 0:\r\n> batch_out_sentences = tokenizer.batch_decode(batch_pred, skip_special_tokens=True, clean_up_tokenization_spaces=False) # decode the token id to token\r\n> for r in batch_out_sentences:\r\n> fw.write(r + \"\\n\")\r\n> fw.flush()\r\n> count += len(batch_out_sentences)\r\n> print(count)\r\n> if args.rank == 0:\r\n> fw.close()\r\n> \r\n> if __name__ == \"__main__\":\r\n> if args.gpus < 0:\r\n> args.gpus = torch.cuda.device_count()\r\n> args.world_size = args.nodes * args.gpus\r\n> os.environ['MASTER_ADDR']='localhost'\r\n> os.environ['MASTER_PORT']='8888'\r\n> mp.spawn(test_model_generation, nprocs=args.gpus, args=(args, ))\r\n> ```\r\n> \r\n> Be careful with the data order. It is better to add IDs and write them to the file, which can be used for checking the order.\r\n\r\nMay have two errors:\r\n\r\n1. gather\r\n```python\r\ndist.all_gather(batch_pred, outputs) # collect data\r\nbatch_pred = torch.stack(batch_pred, dim=1) # use stack, take care of the dimension\r\nbatch_pred = batch_pred.reshape(-1, args.max_rep_len)\r\n```\r\nchange to\r\n```python\r\ndist.all_gather(batch_pred, outputs) # collect data\r\nbatch_pred = torch.cat(batch_pred, dim=0)\r\n```\r\n\r\n2. DistributedSampler causes more samples\r\n`batch_out_sentences` need to slice:`batch_out_sentences[:total_examples]`" ]
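Distilling the DDP discussion above into one runnable sketch: each process loads its own copy of the model (left unwrapped, as suggested above, since inference needs no gradient sync), a `DistributedSampler` shards the data, every rank pads its outputs to a common length, and rank 0 gathers and decodes. The checkpoint, toy inputs, and `MAX_LEN` are illustrative assumptions, not taken from the thread:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler
from transformers import BartForConditionalGeneration, BartTokenizer

MAX_LEN = 64  # illustrative; all ranks must pad to the same length before all_gather

def run(rank, world_size):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").cuda().eval()
    # no DDP wrapper: for inference-only runs the wrapper adds nothing (no gradients to sync)

    texts = ["Hello world"] * 8  # toy inputs; real code would read them from a file
    enc = tokenizer(texts, return_tensors="pt", padding=True)
    dataset = TensorDataset(enc["input_ids"], enc["attention_mask"])
    sampler = DistributedSampler(dataset, shuffle=False)  # shards the data across ranks
    loader = DataLoader(dataset, batch_size=2, sampler=sampler)

    with torch.no_grad():
        for input_ids, attention_mask in loader:
            out = model.generate(input_ids.cuda(), attention_mask=attention_mask.cuda(),
                                 max_length=MAX_LEN)
            # pad so every rank contributes a tensor of identical shape
            if out.size(1) < MAX_LEN:
                pad = torch.full((out.size(0), MAX_LEN - out.size(1)),
                                 tokenizer.pad_token_id, dtype=out.dtype, device=out.device)
                out = torch.cat([out, pad], dim=1)
            gathered = [torch.zeros_like(out) for _ in range(world_size)]
            dist.all_gather(gathered, out)  # collect every rank's generations
            if rank == 0:
                preds = tokenizer.batch_decode(torch.cat(gathered, dim=0),
                                               skip_special_tokens=True)
                print(preds)
    dist.destroy_process_group()

if __name__ == "__main__":
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    n = torch.cuda.device_count()
    mp.spawn(run, args=(n,), nprocs=n)
```

Note the sketch concatenates the gathered list with `torch.cat`, matching the correction in the last comment of the thread, rather than `torch.stack`.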
1,598
1,707
1,622
NONE
null
Hi! How would I run generation on multiple GPUs at the same time? Running model.generate on a DataParallel layer isn't possible, and model.module.generate runs on a single GPU. Any advice would be appreciated!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6821/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6821/timeline
completed
null
null
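For the related `AttributeError: 'DistributedDataParallel' object has no attribute 'generate'` raised later in the thread, the fix is to call `generate` on the unwrapped model. A minimal sketch, assuming the process group is already initialized (the model choice is illustrative):

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers.modeling_utils import unwrap_model  # helper linked in the thread

# assumes torch.distributed is initialized and this process owns one GPU
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").cuda()
ddp_model = DDP(model, device_ids=[torch.cuda.current_device()])

input_ids = tokenizer("Hello", return_tensors="pt").input_ids.cuda()

# ddp_model.generate(...) would raise AttributeError: the wrapper has no generate()
out = ddp_model.module.generate(input_ids, max_length=20)         # one wrapper deep
out = unwrap_model(ddp_model).generate(input_ids, max_length=20)  # any number of wrappers
print(tokenizer.decode(out[0], skip_special_tokens=True))
```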
https://api.github.com/repos/huggingface/transformers/issues/6820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6820/comments
https://api.github.com/repos/huggingface/transformers/issues/6820/events
https://github.com/huggingface/transformers/issues/6820
688,585,634
MDU6SXNzdWU2ODg1ODU2MzQ=
6,820
Bert transformer issue
{ "login": "skiran252", "id": 60535327, "node_id": "MDQ6VXNlcjYwNTM1MzI3", "avatar_url": "https://avatars.githubusercontent.com/u/60535327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skiran252", "html_url": "https://github.com/skiran252", "followers_url": "https://api.github.com/users/skiran252/followers", "following_url": "https://api.github.com/users/skiran252/following{/other_user}", "gists_url": "https://api.github.com/users/skiran252/gists{/gist_id}", "starred_url": "https://api.github.com/users/skiran252/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skiran252/subscriptions", "organizations_url": "https://api.github.com/users/skiran252/orgs", "repos_url": "https://api.github.com/users/skiran252/repos", "events_url": "https://api.github.com/users/skiran252/events{/privacy}", "received_events_url": "https://api.github.com/users/skiran252/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Token indices sequence length is longer than the specified maximum sequence length for this model (5 > 512). Running this sequence through the model will result in indexing errors <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6820/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6819/comments
https://api.github.com/repos/huggingface/transformers/issues/6819/events
https://github.com/huggingface/transformers/issues/6819
688,585,408
MDU6SXNzdWU2ODg1ODU0MDg=
6,819
Unable to establish Lock on cached tokenizer output from RobertaTokenizer
{ "login": "aalok-sathe", "id": 10784697, "node_id": "MDQ6VXNlcjEwNzg0Njk3", "avatar_url": "https://avatars.githubusercontent.com/u/10784697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aalok-sathe", "html_url": "https://github.com/aalok-sathe", "followers_url": "https://api.github.com/users/aalok-sathe/followers", "following_url": "https://api.github.com/users/aalok-sathe/following{/other_user}", "gists_url": "https://api.github.com/users/aalok-sathe/gists{/gist_id}", "starred_url": "https://api.github.com/users/aalok-sathe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aalok-sathe/subscriptions", "organizations_url": "https://api.github.com/users/aalok-sathe/orgs", "repos_url": "https://api.github.com/users/aalok-sathe/repos", "events_url": "https://api.github.com/users/aalok-sathe/events{/privacy}", "received_events_url": "https://api.github.com/users/aalok-sathe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The issue was with the file structur: using the downloaded glue data in the filesystem.", "@aalok-sathe - Could you please explain how you resolved it ?. I am having the same problem with XLNET for glue(STS-B)", "I had the data placed in the wrong location, and I was giving the incorrect path." ]
1,598
1,599
1,598
NONE
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-redhat-7.8-Maipo - Python version: 3.7.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help tokenizers: @mfuntowicz ## Information Model I am using (Bert, XLNet ...): Roberta (`roberta-large-mnli`) The problem arises when using: * [X] the official example scripts: `run_glue.py` * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: MNLI * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. download GLUE data using the official download script and place it in the root of `transformers` 2. `python run_glue.py --model_name_or_path roberta-large-mnli --data_dir ../glue_data --output_dir tmp --task_name MNLI --do_eval` ```bash Traceback (most recent call last): File "./run_glue.py", line 247, in <module> main() File "./run_glue.py", line 143, in main if training_args.do_eval File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/transformers/data/datasets/glue.py", line 106, in __init__ with FileLock(lock_path): File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/filelock.py", line 323, in __enter__ self.acquire() File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/filelock.py", line 271, in acquire self._acquire() File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/filelock.py", line 384, in _acquire fd = os.open(self._lock_file, open_mode) FileNotFoundError: [Errno 2] No such file or directory: '../glue_data/cached_dev_RobertaTokenizer_128_mnli.lock' ``` ## Expected behavior The script should be able to create the above-mentioned cached file if one doesn't exist, and acquire the lock and load it if it does.
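The traceback shows `filelock` failing at `os.open` because the directory the lock path points into does not exist, which matches the resolution above (a wrong `--data_dir`). A minimal sketch of the failure mode, with the path taken from the report and the existence check as an illustrative addition:

```python
import os
from filelock import FileLock

data_dir = "../glue_data"  # must be a real directory, or FileLock cannot create its lock file
assert os.path.isdir(data_dir), f"--data_dir points at a missing folder: {data_dir}"

# roughly what transformers' GlueDataset does internally (see glue.py in the traceback)
with FileLock(os.path.join(data_dir, "cached_dev_RobertaTokenizer_128_mnli.lock")):
    pass  # the cached features would be built or loaded here
```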
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6819/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6818/comments
https://api.github.com/repos/huggingface/transformers/issues/6818/events
https://github.com/huggingface/transformers/pull/6818
688,579,819
MDExOlB1bGxSZXF1ZXN0NDc1ODEwNzQ4
6,818
[tests] fix typos in inputs
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=h1) Report\n> Merging [#6818](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ab21b072fa2a122da930386381d23f95de06e28?el=desc) will **increase** coverage by `0.51%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6818/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6818 +/- ##\n==========================================\n+ Coverage 79.58% 80.10% +0.51% \n==========================================\n Files 157 157 \n Lines 28588 28588 \n==========================================\n+ Hits 22752 22900 +148 \n+ Misses 5836 5688 -148 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.31% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.45% <0.00%> (-0.40%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (+11.36%)` | :arrow_up: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=footer). 
Last update [5ab21b0...8b59678](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
fixes a typo in inputs and the corresponding ids
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6818/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6818", "html_url": "https://github.com/huggingface/transformers/pull/6818", "diff_url": "https://github.com/huggingface/transformers/pull/6818.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6818.patch", "merged_at": 1598782798000 }
https://api.github.com/repos/huggingface/transformers/issues/6817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6817/comments
https://api.github.com/repos/huggingface/transformers/issues/6817/events
https://github.com/huggingface/transformers/issues/6817
688,507,312
MDU6SXNzdWU2ODg1MDczMTI=
6,817
Tensorflow 2 Finetuning TF T5 using keras fit
{ "login": "HarrisDePerceptron", "id": 17620536, "node_id": "MDQ6VXNlcjE3NjIwNTM2", "avatar_url": "https://avatars.githubusercontent.com/u/17620536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HarrisDePerceptron", "html_url": "https://github.com/HarrisDePerceptron", "followers_url": "https://api.github.com/users/HarrisDePerceptron/followers", "following_url": "https://api.github.com/users/HarrisDePerceptron/following{/other_user}", "gists_url": "https://api.github.com/users/HarrisDePerceptron/gists{/gist_id}", "starred_url": "https://api.github.com/users/HarrisDePerceptron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HarrisDePerceptron/subscriptions", "organizations_url": "https://api.github.com/users/HarrisDePerceptron/orgs", "repos_url": "https://api.github.com/users/HarrisDePerceptron/repos", "events_url": "https://api.github.com/users/HarrisDePerceptron/events{/privacy}", "received_events_url": "https://api.github.com/users/HarrisDePerceptron/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten , @jplu any insights into what could be the problem?", "You should create your model into a strategy.", "> You should create your model into a strategy.\r\n\r\nAs in tf distributed strategies? but i am using a single gpu at the moment. ", "This one https://www.tensorflow.org/api_docs/python/tf/distribute/OneDeviceStrategy", "Device placement strategy works and the error is no longer there. i should point out this is not the usual way to train a model in TF. We normally do not need to place the model explicitly on a device while creating a model. ", "Is this method correct?\r\n```\r\nwith mirrored_strategy.scope():\r\n ...\r\n model.compile(...)\r\nmodel.fit(...)\r\n```\r\nThis still gives me the same error on GPT2LMHeadModel.", "@ksjae Please open a new issue with more detail of your issue." ]
1,598
1,600
1,600
CONTRIBUTOR
null
## The Problem I have been trying to finetune the T5 model using tensorflow and keras. There is no official or community documentation or **notebook** for finetuning T5 in tensorflow. There are a bunch of lines [here](https://huggingface.co/transformers/model_doc/t5.html#tft5forconditionalgeneration) and some finetuning instructions [here](https://huggingface.co/transformers/model_doc/t5.html#tft5forconditionalgeneration); other than that there is nothing for tensorflow. ## Environment info - `transformers` version: `3.0.2` - Platform: `Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid` - Python version: `Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid` - PyTorch version (GPU?): `1.6.0 (True)` - Tensorflow version (GPU?): `2.2.0 (True)` - Using GPU in script?: `yes` - Using distributed or parallel set-up in script?: `no` ### Who can help @patrickvonplaten @jplu ## Information Model I am using (Bert, XLNet ...): `TFT5ForConditionalGeneration (TFAutoModelWithLMHead) pretrained` The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (SQuad from tfds) * [ ] my own task or dataset: (give details below) ## To reproduce ``` model = TFAutoModelWithLMHead.from_pretrained("t5-base") tokenizer = AutoTokenizer.from_pretrained("t5-base") train_dataset, info = tfds.load('squad', split='train', with_info=True) def encode_tf(inputs): """Encodes the squad inputs and uses the tokenizer to encode inputs returns the appropriate model 'input_ids', 'attention masks`, `decoder_attention_mask`, 'labels' Returns: dict: returns a dictionary with keys: 'input_ids', 'attention masks`, `decoder_attention_mask`, 'labels' with appropriate tensor values """ pass dataset = train_dataset.map(encode_tf) dataset = dataset.shuffle(1000) dataset = dataset.batch(8) ``` ### Sample data output: ``` data = next(iter(dataset)) data ``` ``` {'input_ids': <tf.Tensor: shape=(8, 200), dtype=int32, numpy= array([[ 987, 834, 7771, ..., 0, 0, 0], [ 987, 834, 7771, ..., 2749, 3385, 12187], [ 987, 834, 7771, ..., 0, 0, 0], ..., [ 987, 834, 7771, ..., 0, 0, 0], [ 987, 834, 7771, ..., 0, 0, 0], [ 987, 834, 7771, ..., 6, 30, 8]], dtype=int32)>, 'labels': <tf.Tensor: shape=(8, 200), dtype=int32, numpy= array([[ 363, 19, 80, ..., 0, 0, 0], [4504, 149, 186,
..., 0, 0, 0], [ 571, 54, 3298, ..., 0, 0, 0], ..., [2645, 2832, 4599, ..., 0, 0, 0], [ 571, 103, 7000, ..., 0, 0, 0], [ 366, 410, 8, ..., 0, 0, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(8, 200), dtype=int32, numpy= array([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 1, 1, 1]], dtype=int32)>, 'decoder_attention_mask': <tf.Tensor: shape=(8, 200), dtype=int32, numpy= array([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], dtype=int32)>} ``` ### Training ``` optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=optimizer, loss=loss) model.fit(dataset, epochs=10) ``` model.fit result in the following error about **ValueError: No gradients provided for any variable** ### The Stacktrace ``` Epoch 1/10 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-163-f8c5e0c71664> in <module> ----> 1 model.fit(dataset, epochs=10) ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs) 64 def _method_wrapper(self, *args, **kwargs): 65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access ---> 66 return method(self, *args, **kwargs) 67 68 # Running inside `run_distribute_coordinator` already. ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 846 batch_size=batch_size): 847 callbacks.on_train_batch_begin(step) --> 848 tmp_logs = train_function(iterator) 849 # Catch OutOfRangeError for Datasets of unknown size. 850 # This blocks until the batch has finished executing. ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 578 xla_context.Exit() 579 else: --> 580 result = self._call(*args, **kwds) 581 582 if tracing_count == self._get_tracing_count(): ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 625 # This is the first call of __call__, so we have to initialize. 
626 initializers = [] --> 627 self._initialize(args, kwds, add_initializers_to=initializers) 628 finally: 629 # At this point we know that the initialization is complete (or less ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 504 self._concrete_stateful_fn = ( 505 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 506 *args, **kwds)) 507 508 def invalid_creator_scope(*unused_args, **unused_kwds): ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 2444 args, kwargs = None, None 2445 with self._lock: -> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs) 2447 return graph_function 2448 ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 2775 2776 self._function_cache.missed.add(call_context_key) -> 2777 graph_function = self._create_graph_function(args, kwargs) 2778 self._function_cache.primary[cache_key] = graph_function 2779 return graph_function, args, kwargs ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 2665 arg_names=arg_names, 2666 override_flat_arg_shapes=override_flat_arg_shapes, -> 2667 capture_by_value=self._capture_by_value), 2668 self._function_attributes, 2669 # Tell the ConcreteFunction to clean up its graph once it goes out of ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 979 _, original_func = tf_decorator.unwrap(python_func) 980 --> 981 func_outputs = python_func(*func_args, **func_kwargs) 982 983 # invariant: `func_outputs` contains only Tensors, CompositeTensors, ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 439 # __wrapped__ allows AutoGraph to swap in a converted function. We give 440 # the function a weak reference to itself to avoid a reference cycle. 
--> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds) 442 weak_wrapped_fn = weakref.ref(wrapped_fn) 443 ~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 966 except Exception as e: # pylint:disable=broad-except 967 if hasattr(e, "ag_error_metadata"): --> 968 raise e.ag_error_metadata.to_exception(e) 969 else: 970 raise ValueError: in user code: /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function * outputs = self.distribute_strategy.run( /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run ** return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica return fn(*args, **kwargs) /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:541 train_step ** self.trainable_variables) /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1804 _minimize trainable_variables)) /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients filtered_grads_and_vars = _filter_grads(grads_and_vars) /home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads ([v.name for _, v in grads_and_vars],)) ValueError: No gradients provided for any variable: ['shared/shared/weight:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/relative_attention_bias/embeddings:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/k/kernel:0', 
'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/k/kernel:0', 
'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/final_layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/q/kernel:0', 
'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/relative_attention_bias/embeddings:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/relative_attention_bias/embeddings:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._2/DenseReluDense/wo/kernel:0', 
'tf_t5for_conditional_generation/decoder/block_._2/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._2/layer_norm/weight:0', 
'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/q/kernel:0', 
'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/final_layer_norm/weight:0'].
```

## Expected behavior

Should be able to run the training loop for the specified epochs.
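For context, Keras raises "No gradients provided for any variable" when the computed loss does not depend on any trainable weight — typically because the labels never reached the model, so no usable loss was produced. Below is a minimal, hypothetical sketch of a custom training step that avoids this failure mode with TF T5; it is not from the issue, the label shifting for the decoder inputs is omitted for brevity, and the exact call signatures may differ across `transformers` versions, so treat it as illustrative rather than a drop-in fix:

```python
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)

batch = tokenizer(["translate English to German: Hello"], return_tensors="tf")
labels = tokenizer(["Hallo"], return_tensors="tf").input_ids

with tf.GradientTape() as tape:
    # Feed decoder inputs explicitly so the logits (and hence the loss)
    # actually depend on the model's weights. Here the labels are reused
    # unshifted purely for illustration.
    outputs = model(batch.input_ids, decoder_input_ids=labels, training=True)
    logits = outputs[0]
    loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(labels, logits)

# Because `loss` now depends on the trainable variables, gradients exist.
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```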
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6817/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6817/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6816/comments
https://api.github.com/repos/huggingface/transformers/issues/6816/events
https://github.com/huggingface/transformers/pull/6816
688,459,392
MDExOlB1bGxSZXF1ZXN0NDc1NzIxMDEy
6,816
control framework loglevel in scripts and tests
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=h1) Report\n> Merging [#6816](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/367235ee52537ff7cada5e1c5c41cdd78731f092?el=desc) will **increase** coverage by `2.48%`.\n> The diff coverage is `67.85%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6816/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6816 +/- ##\n==========================================\n+ Coverage 76.27% 78.76% +2.48% \n==========================================\n Files 157 157 \n Lines 28795 28823 +28 \n==========================================\n+ Hits 21963 22701 +738 \n+ Misses 6832 6122 -710 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `66.24% <67.85%> (+0.35%)` | :arrow_up: |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| ... 
and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=footer). Last update [367235e...05ec0d0](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "[moved to the normal comment from the code review comment, as it might be removed]\r\n\r\n> What is the difference between this method and the previous set_verbosity?\r\n> I think we should select just one way to set the verbosity of the library.\r\n\r\n@thomwolf, I agree. Now that I added a test I can see that `set_verbosity` is an equivalent of `set_global_logging_level(prefices=[\"transformers\"])` (the proposed function).\r\n\r\nSo the main question then is this: do we want to provide a util that allows to do the setting not just for `transformers.`? or leave that to the user - sort of contrib library somewhere?\r\n\r\nThe main reason for setting a global log level not just for `transfromers`, but also for `torch`, `wandb`, etc. is to be able to quickly turn off the noise when it's interfering. And currently each of these external libraries `transformers` uses add their noise to the output. When debugging tests it's very helpful to control the noise-levels. So having a quick switch --logger-be-quiet saves a lot of time.\r\n\r\n\r\n", "I also added: \r\n- a logger setting integration test\r\n- a helper `CaptureLogger` ctx manager", "Could someone please explain why CI gets `logging.ERROR` as the default logging level, when it should be `logging.WARNING` https://github.com/stas00/transformers/blob/loglevels/src/transformers/utils/logging.py#L58 (I rebased this branch to catch that very recent change)\r\n\r\nWhen I run it on my machine, I get `logging.WARNING`. \r\n\r\nOn CI the failure is:\r\n\r\n```\r\n[gw4] linux -- Python 3.7.9 /usr/local/bin/python\r\n\r\nself = <tests.test_logging.HfArgumentParserTest testMethod=test_set_level>\r\n\r\n def test_set_level(self):\r\n logger = logging.get_logger()\r\n \r\n level_origin = logging.get_verbosity()\r\n> self.assertEqual(level_origin, logging.WARNING)\r\nE AssertionError: 40 != 30\r\n```\r\n(`logging.ERROR == 40`, `logging.WARNING == 30`)\r\n\r\n**edit**: found the culprit - it was another test not cleaning up after itself. fixed in this PR.", "Thank you all for your excellent feedback. I made changes and updated the first post to reflect the PR's current state of things.", "I'm not sure we really need to control the logging level of all libraries. 
Since the logging level was changed back to its initial level `WARNING`, do you feel like there are too much logs during tests?", "For Bart tests there is a repetitive warning, which I raised here: https://github.com/huggingface/transformers/issues/6652\r\n\r\nIf you run others, you will see a bunch still, e.g.:\r\n\r\n```RUN_SLOW=1 pytest -sv --disable-warnings tests/test_modeling_t5.py ```\r\n\r\n```\r\ntests/test_modeling_t5.py::T5ModelTest::test_generate_with_past_key_value_states You might want to consider setting `use_cache=True` to speed up decoding\r\nYou might want to consider setting `use_cache=True` to speed up decoding\r\nYou might want to consider setting `use_cache=True` to speed up decoding\r\nYou might want to consider setting `use_cache=True` to speed up decoding\r\n[...]\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 242M/242M [00:06<00:00, 40.1MB/s]\r\nSome weights of T5Model were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nPASSED\r\n```\r\n\r\nAnd this is just one test.\r\n\r\nOf course, the other approach is to go and fix all those warnings, so that the tests that are fully under our control can be written according to the requirements the library sets and then warnings won't be there :) But see the next comment with a large dump of loggers that aren't `transformers`.\r\n\r\n----\r\n\r\nYet another alternative solution is instead of flag we add an env var, `LOG_LEVEL_GLOBAL`\r\n", "Here is some more samples of noise coming from outside `transformers` - a lot of it:\r\n\r\n```\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_albert_model 2020-09-02 10:32:56.462871: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1\r\n2020-09-02 10:32:56.467570: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.469326: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: \r\npciBusID: 0000:01:00.0 name: GeForce GTX TITAN X computeCapability: 5.2\r\ncoreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s\r\n2020-09-02 10:32:56.469400: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.470032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 1 with properties: \r\npciBusID: 0000:02:00.0 name: GeForce GTX TITAN X computeCapability: 5.2\r\ncoreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s\r\n2020-09-02 10:32:56.470303: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n2020-09-02 10:32:56.470670: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10\r\n2020-09-02 10:32:56.470719: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10\r\n2020-09-02 10:32:56.470752: I 
tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10\r\n2020-09-02 10:32:56.495979: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10\r\n2020-09-02 10:32:56.496076: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10\r\n2020-09-02 10:32:56.594292: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7\r\n2020-09-02 10:32:56.594768: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.597007: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.599207: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.601306: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.603994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0, 1\r\n2020-09-02 10:32:56.612943: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2020-09-02 10:32:56.672605: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3199980000 Hz\r\n2020-09-02 10:32:56.675701: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x556ed093a910 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices:\r\n2020-09-02 10:32:56.675767: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\r\n2020-09-02 10:32:56.678402: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.680525: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: \r\npciBusID: 0000:01:00.0 name: GeForce GTX TITAN X computeCapability: 5.2\r\ncoreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s\r\n2020-09-02 10:32:56.680889: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.683022: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 1 with properties: \r\npciBusID: 0000:02:00.0 name: GeForce GTX TITAN X computeCapability: 5.2\r\ncoreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s\r\n2020-09-02 10:32:56.683197: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n2020-09-02 10:32:56.683257: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10\r\n2020-09-02 10:32:56.683304: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10\r\n2020-09-02 10:32:56.683346: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10\r\n2020-09-02 10:32:56.683504: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10\r\n2020-09-02 10:32:56.683566: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10\r\n2020-09-02 10:32:56.683693: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7\r\n2020-09-02 10:32:56.684014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.686245: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.688465: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.690589: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.692497: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0, 1\r\n2020-09-02 10:32:56.692670: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:\r\n2020-09-02 10:32:56.692706: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 1 \r\n2020-09-02 10:32:56.693071: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 
0: N Y \r\n2020-09-02 10:32:56.693135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 1: Y N \r\n2020-09-02 10:32:56.694784: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.696986: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.699094: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.701214: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.703406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10865 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0, compute capability: 5.2)\r\n2020-09-02 10:32:56.706854: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.709029: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-09-02 10:32:56.710969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10856 MB memory) -> physical GPU (device: 1, name: GeForce GTX TITAN X, pci bus id: 0000:02:00.0, compute capability: 5.2)\r\n2020-09-02 10:32:56.718970: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x556e0ea3c200 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\r\n2020-09-02 10:32:56.719031: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX TITAN X, Compute Capability 5.2\r\n2020-09-02 10:32:56.719056: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): GeForce GTX TITAN X, Compute Capability 5.2\r\n2020-09-02 10:32:57.410269: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\r\nPASSED\r\n[...]\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_for_sequence_classification PASSED\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_graph_mode WARNING:tensorflow:5 out of the last 5 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f273870ba70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. 
For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:6 out of the last 6 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f2738779ef0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:7 out of the last 7 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f2742cb89e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nPASSED\r\n[...]\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_common_attributes PASSED\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 63.0M/63.0M [00:02<00:00, 28.8MB/s]\r\nPASSED\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_outputs_equivalence PASSED\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_pt_tf_model_equivalence PASSED\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_resize_token_embeddings PASSED\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_save_load PASSED\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_attentions_output WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7f273831d450>, because it is not built.\r\nWARNING:tensorflow:From /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\r\nWARNING:tensorflow:From /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nThis property should not be used in TensorFlow 2.0, as updates are applied automatically.\r\nWARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. 
Compile it manually.\r\nFAILED\r\ntests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_hidden_states_output WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7f26cb69b7d0>, because it is not built.\r\nWARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.\r\nWARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7f26c625ccd0>, because it is not built.\r\nWARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.\r\nFAILED\r\ntests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_identifier_from_model_type PASSED\r\ntests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_pretrained_identifier PASSED\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 536M/536M [00:18<00:00, 28.9MB/s]\r\n2020-09-02 10:34:15.259729: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.\r\n2020-09-02 10:34:15.394172: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.\r\nPASSED\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 498M/498M [00:12<00:00, 40.0MB/s]\r\n2020-09-02 10:34:29.779196: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 154389504 exceeds 10% of free system memory.\r\n2020-09-02 10:34:30.859094: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 154389504 exceeds 10% of free system memory.\r\nPASSED\r\ntests/test_modeling_tf_auto.py::TFAutoModelTest::test_model_for_encoder_decoder_lm 2020-09-02 10:34:32.437951: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 65798144 exceeds 10% of free system memory.\r\nPASSED\r\n[...]\r\ntests/test_modeling_tf_bert.py::TFBertModelTest::test_for_token_classification PASSED\r\ntests/test_modeling_tf_bert.py::TFBertModelTest::test_graph_mode WARNING:tensorflow:8 out of the last 8 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c5ccbef0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:9 out of the last 9 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c5db8950> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. 
For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:10 out of the last 10 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c5ccbcb0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f277c0eb5f0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c8298dd0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f273816d5f0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c8235560> triggered tf.function retracing. 
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26caab45f0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nWARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c9da34d0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\r\nPASSED\r\n```\r\n", "Thinking about it more while working with other tools, it'd be of a great help to have an env var that can set the default logging level for `transformers`. e.g. I wanted to change the logging level for `run_eval.py` and I couldn't do that w/o modifying it. If we had an env var that would have been trivial and much faster to use.\r\n\r\nThis is regardless of the outcome of this discussion of whether we should have a way to turn non-transformers-related loggers off.", "I understand the issue, and while I agree that some frameworks are extremely log intensive (TensorFlow ...), I wonder if it's such a bad thing to have too many logs during testing. If a test fails, the logs may help to understand the issue quicker when the stack trace isn't helping much. Removing these logs would mean needing to restart the CI with a different logging level to see what's happening in the logs around this error.\r\n\r\nRegarding your second point, yes, I think it would be nice to control the default logging level with an environment variable! Would welcome such a PR.", "I would find some more control over logging very useful! A lot of our users are on colab, and warnings waste a ton of screen space there. Same with my debugging workflow -- there are so many logger statements that can't see my full traceback on the screen.", "> I would find some more control over logging very useful! 
A lot of our users are on colab, and warnings waste a ton of screen space there. Same with my debugging workflow -- there are so many logger statements that can't see my full traceback on the screen.\r\n\r\nI wonder whether we should just have an env var `DISABLE_LOGGING=info` that will just do:\r\n```\r\nimport logging\r\nlogging.disable(logging.INFO) # disable INFO and DEBUG logger everywhere\r\n```\r\n`DISABLE_LOGGING=warning` for WARNING, INFO and DEBUG...\r\n\r\nIn addition to the transformers-specific one `TRANSFORMERS_VERBOSITY=info...` which I will add.\r\n", "> I understand the issue, and while I agree that some frameworks are extremely log intensive (TensorFlow ...), I wonder if it's such a bad thing to have too many logs during testing. If a test fails, the logs may help to understand the issue quicker when the stack trace isn't helping much. Removing these logs would mean needing to restart the CI with a different logging level to see what's happening in the logs around this error.\r\n\r\nIn no way I am proposing to impact CI in any way - on the contrary - on CI the more debug info the merrier. I'm only proposing a way for a developer to turn the logging off on their own setup. i.e. we won't be enabling any such features on CI.\r\n\r\nDifferent developers have different needs and for me, for example, noise is very counterproductive for development. When debugging something I only want to see outputs that are relevant to what I'm debugging and nothing else - and seconding @sshleifer's comment - I too want them to fit into the current screen so I don't need to scroll. Especially in complicated situations when I need to look at output numbers. I understand how this can be a total non-issue for others.\r\n\r\n> Regarding your second point, yes, I think it would be nice to control the default logging level with an environment variable! Would welcome such a PR.\r\n\r\nI will do so. Thank you!", "> I would find some more control over logging very useful! A lot of our users are on colab, and warnings waste a ton of screen space there. Same with my debugging workflow -- there are so many logger statements that can't see my full traceback on the screen.\r\n\r\n@sshleifer have you tried the new library-wide control for logging that Lysandre added in #6434?\r\nThe doc is here: https://huggingface.co/transformers/master/main_classes/logging.html", "Added the env var to control the transformers verbosity level: https://github.com/huggingface/transformers/pull/6961", "It feels that this proposal is a no go at the moment, so I'm closing it down.\r\n\r\nThe extended tests and added testing utils which were part of this PR have been merged in https://github.com/huggingface/transformers/pull/6961\r\n\r\nThank you all who contributed to this discussion." ]
1,598
1,599
1,599
CONTRIBUTOR
null
There is too much logging going on at times under `transformers` and friends. One needs to be able to turn the noise off easily. This is a follow-up to https://github.com/huggingface/transformers/issues/3050#issuecomment-682167272

**edit**: The default was changed yesterday to `logging.WARNING` https://github.com/huggingface/transformers/commit/4561f05c5fafc2b636a2fc1d0dded9057d439745 so there is much less noise now.

**edit**: this PR has evolved since it was initially submitted, so this OP has been updated to reflect the current state of things.

This PR introduces 2 things.

# 1. new function: `set_verbosity_all`

Usage:

a. override all module-specific loggers to a desired level (except whatever got logged during module import):
```
import everything, you, need
import transformers
transformers.testing_utils.set_verbosity_all(transformers.logging.ERROR)
```
b. if you want to disable only specific loggers, you can call it with their top-level names:
```
import transformers, torch, ...
transformers.testing_utils.set_verbosity_all(transformers.logging.ERROR, ["transformers", "nlp", "torch", "tensorflow", "tensorboard", "wandb"])
```
add/remove module name prefixes as needed.

I initially placed it under `transformers.utils.logging`, but since it's beyond the core functionality, I moved it to `testing_utils`, which is where we really want it. Please correct me if it would belong better elsewhere.

# 2. new pytest option: `--log-level-all=error`

when debugging tests, framework-wide logging sometimes gets seriously in the way, e.g. try:
```
RUN_SLOW=1 pytest -sv --disable-warnings tests/test_modeling_bart.py::BartModelIntegrationTests::test_inference_no_head
```
it gives a lot of noise. (it was so until the recent `s/info/warn` change mentioned above, but there is noise still)

Now you will be able to turn it off, focusing only on the debug output you want, by adding `--log-level-all=error` to the `pytest` options (or another level of your choice):
```
RUN_SLOW=1 pytest -sv --log-level-all=error --disable-warnings tests/test_modeling_bart.py::BartModelIntegrationTests::test_inference_no_head
```
voila - the noise is gone, while you can still do debug printing, etc.
```
pytest -h
[...]
  --log-level-all={debug,info,warning,error,critical}
                        set global logger level before each test
```

# 3. new test + `CaptureLogger` context manager

While working on this, a few integration tests were added, plus a helper `CaptureLogger` context manager to easily test logger outputs.

# 4. cleaned up one test

removed the verbosity setting in one test, which impacted other tests as it wasn't resetting the level to the original

-----

Quite a few testing features were added recently, I guess it's time to start a `testing.md` or something.

----

Fixes: https://github.com/huggingface/transformers/issues/3050
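As a rough illustration of what `set_verbosity_all` could look like internally — the function name and the optional module-name filter follow the description above, but the body is an assumption, not the PR's actual implementation:

```python
import logging

def set_verbosity_all(level, module_names=None):
    """Force `level` onto the root logger and every already-created logger.

    Loggers instantiated by modules imported later are not affected, which
    matches the "except whatever got logged during module import" caveat above.
    """
    logging.getLogger().setLevel(level)
    for name in logging.root.manager.loggerDict:
        if module_names is None or any(
            name == m or name.startswith(m + ".") for m in module_names
        ):
            logging.getLogger(name).setLevel(level)
```

Called as `set_verbosity_all(logging.ERROR, ["transformers", "torch", "tensorflow"])`, a helper like this would leave loggers outside the listed namespaces untouched.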
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6816/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6816", "html_url": "https://github.com/huggingface/transformers/pull/6816", "diff_url": "https://github.com/huggingface/transformers/pull/6816.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6816.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6815/comments
https://api.github.com/repos/huggingface/transformers/issues/6815/events
https://github.com/huggingface/transformers/pull/6815
688,432,159
MDExOlB1bGxSZXF1ZXN0NDc1NzAwMzEy
6,815
make the tmp dir configurable/persistent in tokenizer tests
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=h1) Report\n> Merging [#6815](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ab21b072fa2a122da930386381d23f95de06e28?el=desc) will **decrease** coverage by `1.46%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6815/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6815 +/- ##\n==========================================\n- Coverage 79.58% 78.11% -1.47% \n==========================================\n Files 157 157 \n Lines 28588 28588 \n==========================================\n- Hits 22752 22332 -420 \n- Misses 5836 6256 +420 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `74.81% <0.00%> (-22.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.55% <0.00%> (-20.48%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: |\n| ... 
and [24 more](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=footer). Last update [5ab21b0...d984fd8](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "After sleeping on this, I'm not quite sure of 2 things.\r\n\r\n1. the main switch from mixin to normal subclassing - if it's done it should be done for all other common testing mixins - the benefit would be - having simpler access to `unittest.TestCase` and the extended `unittest.TestCasePlus` features. As I proposed in the alternative solution, it's not at all required, as a different solution can be used for temp dirs during debug.\r\n\r\n2. a totally unrelated issue of having debugging code in place. Do we want to gradually make the test suite easier to debug, by leaving `if DEBUG: ...` in strategic places (currently, consisting of just one thing - having a fixed tmp dir and not having it removed, but there are probably others). \r\n\r\n For example, I find myself adding a debug message for various asserts, so it's easier to see what's not matching, but those are usually a 2nd/3rd argument to the assert function (or `msg=`), so it's a smooth feature requiring no `if DEBUG`.\r\n\r\ni.e. I'd love to hear what others think - if you think this is a useful discussion - I can open 2 unrelated issues if it helps to make discussing these 2 unrelated issues focused.\r\n\r\nMy inclination right now is to just provide a quick way to make a fixed temp dir w/o it being deleted, i.e. the alt solution in OP, and leave the original PR for maybe some time in the future if we see other benefits to doing so.", "I agree with having a quicker fix for this specific problem and think a bit more about a general way to have a specific debug behavior for our use.", "If you're joining in now, please ignore the proposed code (as it also requires changing from Mixin to a subclass), and what this needs is your feedback on this question: **do we want to have a simple DEBUG flag in tests, that once enabled it would switch to not deleting temp dirs and would use a fixed temp dir path, so that it's easy to monitor?** So instead of needing to manually tweak the tests, we have the debug setup already in place. 
That's the question.\r\n\r\nLet me know if perhaps I should scratch that PR and start a new one discussing just that, so that the initial attempts at solving the issue won't be confusing to you, the readers.\r\n\r\nAnd to quickly give you context, we are talking about:\r\n```\r\n def setUp(self):\r\n self.tmpdirname = tempfile.mkdtemp()\r\n```\r\nand the modified version is:\r\n```\r\nDEBUG=0\r\n[...]\r\n def setUp(self):\r\n super().setUp()\r\n \r\n # if you need to debug the contents of the tmpdirname, set DEBUG to True, which will then use\r\n # a hardcoded path and won't delete it at the end of the test\r\n if not DEBUG:\r\n self.tmpdirname = self.get_auto_remove_tmp_dir()\r\n else:\r\n self.tmpdirname = self.get_auto_remove_tmp_dir(tmp_dir=\"./tmp/token-test\", after=False)\r\n```\r\nhttps://github.com/huggingface/transformers/blob/d984fd82bf940c62700919da5735e60f3f883348/tests/test_tokenization_common.py#L69\r\n\r\nexcept the code itself will be different as we can't make it work with mixins in that way.\r\n\r\nIf it helps, here is the last time a related issue of working with temp dirs has been worked on with a successful PR merge:\r\nhttps://github.com/huggingface/transformers/pull/6494 - i.e. this is a continuation of the same to other parts of the test suite.\r\n", "> do we want to have a simple DEBUG flag in tests, that once enabled it would switch to not deleting temp dirs and would use a fixed temp dir path, so that it's easy to monitor?\r\n\r\nyes, this would be useful if you can do it in a way that doesn't add overhead for people trying to add new tokenizers.\r\n\r\nI didn't look at the code.", "I will close it for now and revisit the next time I deal with this unless someone beats me to it." ]
1,598
1,603
1,602
CONTRIBUTOR
null
Currently, debugging tokenizers is difficult since the temp dir is random and it gets wiped out at the end of the test run. It can be done, but it takes so much repetitive work.

This PR uses the recently added [`TestCasePlus`](https://github.com/huggingface/transformers/pull/6494), which automatically sets up temp dirs and optionally doesn't remove them at the end of the test. It makes it very easy to configure the temp dir to be fixed rather than random, and also not delete itself.

As a side-effect of inheriting from `TestCasePlus`, the mixin approach of `FooTest(TokenizerTesterMixin, unittest.TestCase)` doesn't work - as it now tries to run the super-class tests directly and not from within the subclass. Therefore, the code switches to normal sub-classing and instructs `unittest` not to run the super-class's tests on its own, using the following machinery as explained [here](https://stackoverflow.com/a/50922971/9201239). Specifically here:

```
from transformers.testing_utils import TestCasePlus

class TokenizerCommonTester(TestCasePlus):
    __test__ = False

# and then the sub-class:
from .test_tokenization_common import TokenizerCommonTester

class BartTokenizationTest(TokenizerCommonTester):
    __test__ = True
```

The PR makes the code ready to debug by changing just one flag:

```
DEBUG = False

# if you need to debug the contents of the tmpdirname, set DEBUG to True, which will then use
# a hardcoded path and won't delete it at the end of the test
if not DEBUG:
    self.tmpdirname = self.get_auto_remove_tmp_dir()
else:
    self.tmpdirname = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/token-test", after=False)
```

So just make `DEBUG` `True` and nothing else needs to be tweaked. I hope this is useful for developers. There are a few other test mixins that could be improved the same way, but let's see if this approach is welcomed first.

----

## An alternative solution

If mixin is preferable, then let's leave everything as is and do this instead:

```
# transformers/testing_utils.py
from pathlib import Path

def make_dir(path):
    Path(path).resolve().mkdir(parents=True, exist_ok=True)
    return path

# tests/test_tokenization_common.py
from transformers.testing_utils import make_dir

DEBUG = True

class TokenizerTesterMixin:

    tokenizer_class = None
    test_rust_tokenizer = False

    def setUp(self):
        if not DEBUG:
            self.tmpdirname = tempfile.mkdtemp()
        else:
            self.tmpdirname = make_dir("./tmp/test-tok")

    def tearDown(self):
        if not DEBUG:
            shutil.rmtree(self.tmpdirname)
```
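For readers unfamiliar with `TestCasePlus`, a small usage sketch based on the behaviour described above; the `tmp_dir`/`after` keyword names are taken from the snippet in this PR, and details may differ from the merged API:

```python
import os

from transformers.testing_utils import TestCasePlus

DEBUG = False

class ExampleTest(TestCasePlus):
    def test_writes_a_file(self):
        if not DEBUG:
            # unique temp dir, removed automatically when the test ends
            tmp_dir = self.get_auto_remove_tmp_dir()
        else:
            # fixed path that survives the run, so its contents can be inspected
            tmp_dir = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/example-test", after=False)
        path = os.path.join(tmp_dir, "dummy.txt")
        with open(path, "w") as f:
            f.write("payload")
        self.assertTrue(os.path.exists(path))
```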
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6815/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6815", "html_url": "https://github.com/huggingface/transformers/pull/6815", "diff_url": "https://github.com/huggingface/transformers/pull/6815.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6815.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6814/comments
https://api.github.com/repos/huggingface/transformers/issues/6814/events
https://github.com/huggingface/transformers/issues/6814
688,431,731
MDU6SXNzdWU2ODg0MzE3MzE=
6,814
Create smaller number of heads in attn without pruning using shared parameters
{ "login": "huu4ontocord", "id": 8900094, "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huu4ontocord", "html_url": "https://github.com/huu4ontocord", "followers_url": "https://api.github.com/users/huu4ontocord/followers", "following_url": "https://api.github.com/users/huu4ontocord/following{/other_user}", "gists_url": "https://api.github.com/users/huu4ontocord/gists{/gist_id}", "starred_url": "https://api.github.com/users/huu4ontocord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huu4ontocord/subscriptions", "organizations_url": "https://api.github.com/users/huu4ontocord/orgs", "repos_url": "https://api.github.com/users/huu4ontocord/repos", "events_url": "https://api.github.com/users/huu4ontocord/events{/privacy}", "received_events_url": "https://api.github.com/users/huu4ontocord/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# 🚀 Feature request

Instead of pruning heads or masking heads, create new linear layers with views of the larger linear layers that contain all heads.

## Motivation

For various models (Bert, Distilbert, etc.) I would like to be able to experiment on using separate heads without having to prune the heads. For example, training layers with separate heads. There is a way to mask heads, but the computation is still performed over all of the k, q, v tensors, and then masking is performed. Is there a way to create views of the heads that you would otherwise prune, so that the extra computation per unused head is not performed? My understanding is that the attention computation is one of the more expensive operations, since it is quadratic in the seq_len.

For example, with DistilBert, I was thinking of replacing q_lin, v_lin, and k_lin with a shared linear layer that is a view into the original q_lin, k_lin, v_lin.

## Contribution

I am thinking of adapting this code from `prune_linear_layer` in `modeling_utils.py`. Do you think this will work? Is there a better way to do this?

```
def share_linear_layer_by_index(layer, index, dim=0):
    """ Create a new linear layer (a model parameter) with shared parameters from an index of the old layer
        Return new layer with requires_grad=True.
    """
    index = index.to(layer.weight.device)
    W = layer.weight.index_select(dim, index)
    if layer.bias is not None:
        if dim == 1:
            b = layer.bias
        else:
            b = layer.bias[index]
    new_size = list(layer.weight.size())
    new_size[dim] = len(index)
    new_layer = nn.Linear(new_size[1], new_size[0], bias=layer.bias is not None).to(layer.weight.device)
    new_layer.bias.requires_grad = False
    new_layer.weight = W
    new_layer.weight.requires_grad = True
    if layer.bias is not None:
        new_layer.bias.requires_grad = False
        new_layer.bias = b
        new_layer.bias.requires_grad = True
    return new_layer
```
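One detail worth flagging in the snippet above: `nn.Linear` stores its weight as an `nn.Parameter`, so assigning a plain tensor to `new_layer.weight` would raise a `TypeError`. A minimal alternative sketch that keeps the computation reduced without duplicating parameters is to apply the selected rows functionally; this illustrates the general idea only and is not a reviewed fix:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def forward_selected_rows(layer: nn.Linear, x: torch.Tensor, index: torch.Tensor) -> torch.Tensor:
    # gather only the output rows we want to keep; gradients flow back
    # into the full layer.weight, so no parameters are copied or frozen
    index = index.to(layer.weight.device)
    W = layer.weight.index_select(0, index)  # (len(index), in_features)
    b = layer.bias.index_select(0, index) if layer.bias is not None else None
    return F.linear(x, W, b)

# example: keep the first 2 of 12 heads of a 768-dim projection (head_dim = 64)
q_lin = nn.Linear(768, 768)
keep_rows = torch.arange(2 * 64)
out = forward_selected_rows(q_lin, torch.randn(4, 768), keep_rows)
print(out.shape)  # torch.Size([4, 128])
```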
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6814/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6813/comments
https://api.github.com/repos/huggingface/transformers/issues/6813/events
https://github.com/huggingface/transformers/pull/6813
688,390,016
MDExOlB1bGxSZXF1ZXN0NDc1NjY4NzM1
6,813
RAG
{ "login": "ola13", "id": 1528523, "node_id": "MDQ6VXNlcjE1Mjg1MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/1528523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ola13", "html_url": "https://github.com/ola13", "followers_url": "https://api.github.com/users/ola13/followers", "following_url": "https://api.github.com/users/ola13/following{/other_user}", "gists_url": "https://api.github.com/users/ola13/gists{/gist_id}", "starred_url": "https://api.github.com/users/ola13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ola13/subscriptions", "organizations_url": "https://api.github.com/users/ola13/orgs", "repos_url": "https://api.github.com/users/ola13/repos", "events_url": "https://api.github.com/users/ola13/events{/privacy}", "received_events_url": "https://api.github.com/users/ola13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=h1) Report\n> Merging [#6813](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/244e1b5ba331cb4c1ed96d88d0895c252567f7f3?el=desc) will **decrease** coverage by `0.85%`.\n> The diff coverage is `82.89%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6813/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6813 +/- ##\n==========================================\n- Coverage 78.81% 77.95% -0.86% \n==========================================\n Files 174 178 +4 \n Lines 33670 34125 +455 \n==========================================\n+ Hits 26537 26603 +66 \n- Misses 7133 7522 +389 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmFnLnB5) | `69.76% <69.76%> (ø)` | |\n| [src/transformers/modeling\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `76.98% <76.98%> (ø)` | |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `67.28% <77.77%> (+0.40%)` | :arrow_up: |\n| [src/transformers/retrieval\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `91.27% <91.27%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.37% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.25% <100.00%> (+0.09%)` | :arrow_up: |\n| [src/transformers/configuration\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <100.00%> (+0.53%)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `83.96% <100.00%> (+1.58%)` | :arrow_up: |\n| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=footer). Last update [3ebb1b3...db3e5e0](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@lhoestq thanks for the comments! I did consider moving retrieval outside of the model - the benefit of this that I see would be that we would move all training-related logic (e.g. handling distributed processing in Retriever) from `transformers` to `examples`.\r\n\r\nThat said, I'm still in favor of keeping the call to `contextualize` as part of the forward pass. Here's my thinking:\r\n- retrieval is more than just a data pre-processing step, it is a core part of the model's architecture. E.g. we can't pre-compute retrieved docs for a batch of data beforehand as the question encoder will be updated at every step of training, so the set of retrieved docs would be changing dynamically. If we move retrieval outside of the model people may be tempted to do that.\r\n- we would need to call `contextualize` before every forward pass on the model, so not only in finetuning, but also e.g. in evaluation code. On top of that anyone who would want to run the model for demo purposes would have to instantiate the retriever first and remember to call `contextualize`, instead of doing the two simple steps that other HF models require (encoding the sequence and running the model) - we could potentially consider making contextualization a part of the tokenizer's `encode` method (not sure this would be intuitive for people used to HF's APIs) - however, the retrieval logic would still remain in `transformers` then\r\n- In terms of flexibility - I think with the current approach it'd still be possible for people to build different retrievers and pass them to the model\r\n\r\nWhat do you think? I'd be curious to know what others think about it, cc @patrick-s-h-lewis, @thomwolf", "So we've been brainstorming with @patrickvonplaten and @lhoestq on this yesterday and here is a proposal.\r\n\r\nThe general context is that we expect to include more hybrid models with a retrieval component in the future (like REALM, MARGE, knn-LM) so it's nice if we can set up an API which is general enough to include a diversity of models.\r\n\r\nHere is the general idea: we can add a new base class to the library (in addition to the `Tokenizer` and `Model`) which we can call a `Retriever` for instance. The `Retriever` class:\r\n- won't include trainable components and will thus be framework independent like the tokenizer,\r\n- does both the retrieval of the documents given an encoding and the post-processing (in our case the retokenization),\r\n- it is sub-classed to be model specific (`RAGRetriever`) like the models and tokenizers,\r\n- we can add an `AutoRetriever` version later.\r\n\r\nWe probably want to keep its API fairly generic for now since this field is only beginning to be explored. The `Retriever` can just have a `__call__` method (like the tokenizers) which expects an encoding and does retrieval and post-processing.\r\n\r\nThen (copying @patrickvonplaten's idea) we would have a `RetrievalGenerationModel` that comprises the trainable (and PyTorch/TF specific) elements, i.e. the encoder and generator models. In its forward pass the `RetrievalGenerationModel` would have, besides the usual `input_ids`, one more input argument which is of class `Retriever`. 
The RetrivalGenerationModel would just call Retriever.forward(encoded_input) and expect an encoding that the self.generator could then be conditioned on.\r\n\r\nWhat do you think? We would help you implement this option of course since it impacts more significantly the library.", "I'm just wondering if TF will like a model which accept a class as input. What do you think @jplu @patrickvonplaten?\r\nWe could also have a method in the class to set the retriever instead of dynamically providing it. ", "Never tried, but I doubt it should be possible in compiled mode. I think what you propose would be a better way to go.", "> Never tried, but I doubt it should be possible in compiled mode. I think what you propose would be a better way to go.\r\n\r\n@jplu - I think there was a slight misunderstanding. \r\nWith the proposed approach we actually would pass an instantiation of a class as an argument to the forward pass of the `RetrievalGenerationModel` -> so before following this path we should check if this can nicely be done with TF...", "Oh ok! I thought the question was about to pass one class that contains all the arguments. My bad 😢 \r\n\r\nSo, after reading your explanation I can say, yes it is doable!", "Hey @ola13,\r\n\r\nThanks for your comment, this is indeed a very important aspect that I didn't really think of before. \r\nWith @lhoestq, we have been brainstorming a bit and thought maybe a slighly different design could make sense:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\n\r\nclass RetrievalGenerationModel(PretrainedModel):\r\n \r\n def __init__(self, config: RetrievalGenerationConfig, encoder: PretrainedModel, retrieval_generator: PretrainedModel):\r\n if encoder is not None and retrieval_generator is not None: \r\n self.encoder = encoder\r\n self.retrieval_generator = retrieval_generator\r\n self.config = RetrievalGenerationConfig.from_encoder_generator_config(self.encoder.config, self.retrieval_generator.config)\r\n\r\n assert config is not None\r\n super().__init__(config)\r\n\r\n if encoder is None:\r\n self.encoder = AutoModel.from_config(config.encoder)\r\n if retrieval_generator is None:\r\n self.retrieval_generator.from_config(config.generator)\r\n\r\n @classmethod\r\n def from_pretrained_encoder_generator(cls, encoder_model_id, generator_model_id):\r\n encoder = AutoModel.from_pretrained(...) # load any query encoding model\r\n retrieval_generator = AutoRetrievalGeneratorModel.from_pretrained(...) # this would be a new class that contains any model that can be used as the `retrieval_generator` model.\r\n return cls(encoder=encoder, retrieval_generator=retrieval_generator)\r\n\r\n\r\n def forward(input_ids, retriever: PretrainedRetriever):\r\n # 1. Every retriever model encodes the query -> any `AutoModel` can be used here\r\n input_ids_encodings = self.encoder(input_ids) # model with weights\r\n\r\n # 2. Use costumized retriever (tokenizer-like) class instance, like `RAGRetriever` that \r\n # - query the index\r\n # - reformats the document outputs\r\n # - tokenizes the document outpus\r\n retrieved_docs_input_ids, retrieved_docs_encodings = retriever(input_ids_encodings, input_ids) # tokenizer like postprocessor that returns the tokenized docs input and the docs encodings\r\n\r\n # 3. 
Now the retrieval_generator requires a specific forward pass which accepts at least four kinds of tensors: 1) the input_ids (query), 2) the encoded_input_ids (encoded query), 3) retrieved_docs_input (tokenized context) and 4) retrieved_docs_encodings\r\n output_ids = self.retrieval_generator(input_ids, encoded_query, retrieved_docs_input_ids, retrieved_docs_encodings) # any `AutoRetrievalGeneratorModel` can be used here\r\n\r\nclass RagRetrievalGenerator(PretrainedModel):\r\n\r\n def __init__(self):\r\n self.generator = AutoModelForSeq2Seq.from_pretrained(...) # e.g. Bart\r\n\r\n def forward(input_ids, encodings, docs_input_ids, docs_encodings):\r\n doc_scores = torch.bmm(encodings.unsqueeze(1), docs_encodings.transpose(1, 2)).squeeze(1)\r\n ....\r\n output_ids = self.generator.generate(...)\r\n\r\nclass RAGRetriever(PretrainedRetriever)\r\n \"\"\"\r\n This retriever is framework independant (for both TF and PT) \r\n similar to a tokenizer\r\n \"\"\"\r\n\r\n def __init__(self):\r\n self.docs = nlp.load_dataset(...)\r\n ...\r\n\r\n def __call__(input_ids_encodings, input_ids):\r\n # no tensor operations happen here\r\n ...\r\n\r\nclass DPRRetrivalGenerator(PretrainedModel):\r\n\r\n def __init__(self):\r\n self.genator = AutoModelForQuestionsAnswering.from_pretrained(...) # QA model \r\n\r\n\r\n def forward(input_ids, encodings, docs_input_ids, docs_encodings):\r\n \r\n concated_qa_input = torch.cat([input_ids, docs_input_ids], dim=-1)\r\n output_ids = self.generator(concated_qa_input)\r\n\r\n\r\nclass DPRRetriever(PretrainedRetriever)\r\n \"\"\"\r\n This retriever is framework independant (for both TF and PT) \r\n similar to a tokenizer\r\n \"\"\"\r\n\r\n def __init__(self):\r\n self.docs = nlp.load_dataset(...)\r\n ...\r\n\r\n def __call__(input_ids_encodings, input_ids):\r\n # no tensor operations happen here\r\n ...\r\n```\r\n\r\nHopefully this is somewhat understandable @ola13 @thomwolf ...\r\n\r\n@lhoestq and I think that for each RetrivalAugmentedModel we need 2 specific parts:\r\n\r\n1) A specific Retriever: how are documents retrieved and formated and tokenized -> e.g. `RAGRetriever`\r\n2) A specific Generator: Here we can also have multiple possibilities: DPR uses a `AutoModelForQuestionAnswering` while RAG uses a `AutoModelForSeq2Seq`\r\n\r\nSo with this framework we would have to introduce 1 general class that would be used for all RetrievalAugementedModels, called `RetrievalGenerationModel` (or whatever name fits better) and 2 architecture specific classes `RAGRetriever` and `RagRetrievalGenerator`.\r\n\r\nWould be keen to hear your thoughts :-) ", "Hey @patrickvonplaten, makes sense and in fact it's not very different from how we structured the code already the key differences that I see are:\r\n- we move re-tokenization between query_encoder and generator to the Retriever (so respective tokenizers will be encapsulated by the Retriever not a model class as we currently do it)\r\n- we move retrieval score calculation to the model so that no tensor operations happen in the retriever\r\n\r\nwhich both should be pretty straightforward to implement.\r\n\r\nThe one thing that I'm still on the fence about is passing a `retriever` to each `forward` pass on a `RetrievalGenerationModel`, instead of making it a member of `RetrievalGenerationModel` class. Why do you feel the former is preferable over the latter?", "Yeah, good point! It's a bit weird to pass a class instance just to make a forward pass with it. 
\r\n\r\nMy main reason is the following: \r\n\r\nCurrently, the library makes a very clear distinction between `config`, `tokenizer` and `model` which are all independent of each other. Each of these classes has a separate `.from_pretrained()` and `.save_pretrained()` method, whereas the `PretrainedModel.save_pretrained(...)` and `PretrainedModel.from_pretrained(...)` internally call `PretrainedConfig.save_pretrained(...)` and `PretrainedConfig.from_pretrained(...)`, but **never** the `PretrainedTokenizer.from_pretrained(...)` and `PretrainedTokenizer.save_pretrained(...)` methods. For a `RetrievalGenerationModel` I would like to reuse `PretrainedModel`'s `from_pretrained(...)` and `save_pretrained(...)` methods, which means that a tokenizer instance should not be part of the model because otherwise we would have to tweak this function (which I don't think is a good idea). \r\nAlso, this will make the `RetrievalGenerationModel` a \"clean\" and relatively light `Model` object without any string processing logic in it whatsoever, which is more in line with other `PretrainedModel` classes. ", "@patrickvonplaten, got it, yeah makes sense! We would still want to call `PretrainedTokenizer.from_pretrained(...)` when initializing `RagRetriever` but I guess this should be fine?\r\n\r\nOkay, so I would propose to do the following - I will refactor this PR to follow the design we discussed. It seems though that implementing the generic `Retriever` logic as discussed earlier by @thomwolf would require extra effort and time, and is not necessarily within the scope of this PR. In the interest of time, we could land this PR and then proceed with generalizing the retrieval logic? I'm then happy to work with the RAG implementation to make it compatible.", "Exactly! I was thinking that we either create a generic `PretrainedRetriever` class with a `from_pretrained()` method that calls the tokenizer `from_pretrained()` methods or add a `from_pretrained()` method directly to `RagRetriever`. Maybe @lhoestq and @thomwolf have better insight on the \"tokenizer\" side. \r\n\r\n@ola13 maybe we can quickly check if @lhoestq and @thomwolf are fine with the design as discussed above :-) ", "Sounds awesome to me!", "Hey I just refactored the model following suggestions above. One point is that I had to modify `generation_utils.py` to account for a model which takes a `retriever` as an argument to the encoder. Let me know what you think!", "Hi, a question - to use RAG I need a couple of non-standard dependencies (faiss, psutil, nlp) - can I define a special test environment which would install those for rag tests? any pointers on how to handle this?", "> Hey I just refactored the model following suggestions above. One point is that I had to modify `generation_utils.py` to account for a model which takes a `retriever` as an argument to the encoder. Let me know what you think!\r\n\r\nAwesome ! I'll take a look. Also cc @patrickvonplaten \r\n\r\n> Hi, a question - to use RAG I need a couple of non-standard dependencies (faiss, psutil, nlp) - can I define a special test environment which would install those for rag tests? any pointers on how to handle this?\r\n\r\nMaybe @LysandreJik knows more about how to handle tests with dependencies ?\r\n", "Hey @ola13, \r\n\r\nI think the general code design is exactly what we have imagined to go for, defining a `RagRetriever` and passing the `retriever` to the forward pass, so this is great! 
", "Regarding the test dependencies, you can add the libraries here: https://github.com/huggingface/transformers/blob/d6c08b07a087e83915b4b3156bbf464cebc7b9b5/setup.py#L92 and it should automatically be installed for testing on circle ci :-) `psutil` is already in the test dependency", "@ola13 - it would be awesome if you could add one \"full\" integration test with hardcoded input and output under @slow \r\n\r\nBy that I mean, *e.g.* hardcoding an input question \"Why does it rain\", loading a relevant dataset using the `HfIndex` and the full pretrained encoder and generator model and hardcoding the expected output answer in thet test. I think all operations are deterministic (beam search, etc...), so no random seeds have to be set.\r\n\r\n This way we have one test where we can be sure that the model works as expected and every change to the model in the future can be checked against that.\r\n\r\nThe tests you have in `test_modeling_rag.py` so far look great. We could also add a full `RagModel` test by defining a dummy dataset that will be instantiated from a hardcoded dict at test time and instantiating a very light `RagRetriever` at test time this way. But we can manually add those tests later, they are not super important.\r\n\r\nIn terms of a timeline, it would be be awesome if you manage to make the `test_modeling_rag.py` tests pass and if you could add one \"full\" integration test showing reasonable results. After this is finished, I think the best idea is if we add some changes on top of your PR (this should take another 1,2 days) and then merge the model into the lib :-) \r\n\r\nThanks a mille for your awesome work so far!!!", "Hey @patrickvonplaten, sounds good! yes definitely adding an integration test was on my agenda, right now having merged the `master` I'm also dealing with some issues arising after the refactor from https://github.com/huggingface/transformers/commit/afc4ece462ad83a090af620ff4da099a0272e171#diff-72b038fcff0de4ae5e094e3cde9471f1 as we were relying on the old structure of `past`. I'm hoping to be done with both of these things by tomorrow :) ", "Hi, I just added an integration test for RAG using the dummy variant of `wiki_dpr`. However, I had to locally hack `datasets` to make it run locally, as there seems to be a discrepancy between the dummy index name hardcoded in `wiki_dpr.py` here: https://github.com/huggingface/datasets/blob/37d4840a39eeff5d472beb890c8f850dc7723bb8/datasets/wiki_dpr/wiki_dpr.py#L72 (expecting `dummy.psgs_w100.nq.IndexHNSWFlat-IP-train.faiss`) and what's available on HF's google cloud bucket:\r\n```\r\n~$ gsutil ls -r gs://huggingface-nlp/datasets/wiki_dpr/*\r\ngs://huggingface-nlp/datasets/wiki_dpr/\r\ngs://huggingface-nlp/datasets/wiki_dpr/dummy_psgs_w100_with_nq_embeddings_IndexFlatIP-train.faiss\r\ngs://huggingface-nlp/datasets/wiki_dpr/psgs_w100.nq.IVFPQ4096_HNSW32_PQ64-IP-train.faiss\r\ngs://huggingface-nlp/datasets/wiki_dpr/psgs_w100_with_nq_embeddings_IVFPQ4096_HNSW32,PQ64-IP-train.faiss\r\n```\r\n\r\ncc @lhoestq - this would have to be fixed quickly, alternatively I could use full `wiki_dpr` in tests, but that's 78GB, not sure if it makes sense.\r\n\r\nLet me know what you think!", "> cc @lhoestq - this would have to be fixed quickly, alternatively I could use full `wiki_dpr` in tests, but that's 78GB, not sure if it makes sense.\r\n\r\nI fixed it, dummy.psgs_w100.nq.IndexHNSWFlat-IP-train.faiss is now available on gcs\r\n\r\n", "Previous RAG code is now saved in this PR: #7200", "Last fail is due to time-out. 
All import tests are passing => merging to master." ]
1,598
1,600
1,600
CONTRIBUTOR
null
# Intro

This pull request implements RAG-sequence and RAG-token models, as defined in [the paper](https://arxiv.org/pdf/2005.11401.pdf).

RAG (for Retrieval Augmented Generation) is an architecture combining a retriever with a generator model. During a forward pass, the input sequence is used as a question to the retriever, which surfaces relevant context documents. The documents are then prepended to the input, and such contextualized input is passed to the generator. In the paper, we experiment with DPR-based retrieval and a BART generator.

# Implementation

RAG is a seq2seq model which encapsulates three core components:
- a retriever - a wrapper around a faiss index of the documents,
- a question encoder which encodes the input sequence before passing it to the retriever,
- a generator which learns to generate the output from the contextualized input,

as well as respective tokenizers (we need to be able to decode the input sequence, encode it with the question encoder tokenizer, and then encode the contextualized input with the generator tokenizer again).

---

We implement two variants of the model, both presented in the paper:
- `RagSequence`, which uses `DPRQuestionEncoder` as the question encoder. As for the generator, two compatible architectures have been tested: `BartForConditionalGeneration` and `T5ForConditionalGeneration`.
- `RagToken`, which uses `DPRQuestionEncoder` and `BartForConditionalGeneration` as the generator.

---

Key files in the pull request:
- `modeling_rag.py`, `tokenization_rag.py`, `configuration_rag.py` - the core model implementation
- `retrieval_rag.py` - a distributed retriever built on top of the `torch.distributed` communication package. The retriever is an interface between the model and the faiss index of the encoded documents. During training, all workers initialize their own instance of the retriever; however, only the main worker loads the index into memory, which prevents OOMs on machines with multiple GPUs (we store the index in RAM). The index itself is based on `nlp.Datasets`. We also implement a variant compatible with indices built using the original DPR implementation (https://github.com/facebookresearch/DPR)
- `eval_rag.py` - an evaluation script which allows performing the evaluation end to end (measures the exact match and F1 on the downstream task) as well as the evaluation of the retrieval component alone (measures precision@k).
- `finetune.py` - a training script for finetuning RAG models.

# Testing

We have successfully managed to reproduce the original paper's results on Natural Questions for a couple of scenarios:
- converting original `fairseq` checkpoints to `HuggingFace` and evaluating them on `HuggingFace`
- converting original `fairseq` checkpoints to `HuggingFace` and continuing fine-tuning on `HuggingFace`
- training from scratch on `HuggingFace`

# Pretrained, ready-to-use models (after PR is merged).

- RagToken: https://huggingface.co/facebook/rag-token-nq
- RagSequence: https://huggingface.co/facebook/rag-sequence-nq
- RagTokenBase: https://huggingface.co/facebook/rag-token-base
- RagSequenceBase: https://huggingface.co/facebook/rag-sequence-base

# Future PR

- [ ] Add and test a distributed PyTorch (and possibly Ray) retriever @lhoestq
- [ ] Upload more RAG model combinations.
- [ ] Clean examples
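Based on the checkpoint names listed above, end-to-end usage is meant to look roughly like the sketch below; the `index_name="exact"` / `use_dummy_dataset=True` arguments mirror the dummy `wiki_dpr` setup discussed in the comments (loading the full index instead requires the ~78GB dataset), and the exact class names should be checked against the merged API:

```python
from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
# dummy wiki_dpr index: small download, lower-quality retrieval
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```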
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6813/reactions", "total_count": 24, "+1": 0, "-1": 0, "laugh": 0, "hooray": 7, "confused": 0, "heart": 10, "rocket": 7, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6813/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6813", "html_url": "https://github.com/huggingface/transformers/pull/6813", "diff_url": "https://github.com/huggingface/transformers/pull/6813.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6813.patch", "merged_at": 1600792199000 }
https://api.github.com/repos/huggingface/transformers/issues/6812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6812/comments
https://api.github.com/repos/huggingface/transformers/issues/6812/events
https://github.com/huggingface/transformers/issues/6812
688,388,520
MDU6SXNzdWU2ODgzODg1MjA=
6,812
Potential bug in PLM training
{ "login": "HarshTrivedi", "id": 3285313, "node_id": "MDQ6VXNlcjMyODUzMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/3285313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HarshTrivedi", "html_url": "https://github.com/HarshTrivedi", "followers_url": "https://api.github.com/users/HarshTrivedi/followers", "following_url": "https://api.github.com/users/HarshTrivedi/following{/other_user}", "gists_url": "https://api.github.com/users/HarshTrivedi/gists{/gist_id}", "starred_url": "https://api.github.com/users/HarshTrivedi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HarshTrivedi/subscriptions", "organizations_url": "https://api.github.com/users/HarshTrivedi/orgs", "repos_url": "https://api.github.com/users/HarshTrivedi/repos", "events_url": "https://api.github.com/users/HarshTrivedi/events{/privacy}", "received_events_url": "https://api.github.com/users/HarshTrivedi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I think this is correct. It should be replaced by an `|`. Do you get a better perplexity if you change this line?", "Thanks a lot for opening this issue @HarshTrivedi ! I also agree that the logic should be OR and not AND. @shngt - can you maybe comment here as well?", "Thank you for confirming this!\r\n\r\nIf I remember correctly, changing `&` to `|` didn't fix the high zero-shot perplexity for me. I'll try it again later today or tomorrow and report back the numbers with `&` vs `|`.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I agree - the logic should be OR and not AND. Could you please confirm if the numbers change @HarshTrivedi?\r\n\r\nSorry for the delay - I missed the notification at the time. I'll submit a PR for AND -> OR fix asap, and try to do some more stringent testing to catch the reason for the perplexity difference. How can I proceed with the latter @patrickvonplaten @LysandreJik ?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "(Resolved by https://github.com/huggingface/transformers/pull/8409 I believe)" ]
1,598
1,619
1,610
CONTRIBUTOR
null
There seems to be a bug in the `mask_tokens` method of `DataCollatorForPermutationLanguageModeling`. Based on the comment, [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L294) is supposed to compute the mask for non-functional tokens, i.e. anything but padding and special tokens. So there should be an OR between the `padding_mask` and `special_tokens_mask`, and not an AND. For reference, the [corresponding line](https://github.com/zihangdai/xlnet/blob/master/data_utils.py#L602) in the original XLNet code also has an OR.

I should acknowledge that I haven't understood the permutation masking code properly yet. But I'm raising an issue, because it seems wrong to me.

-----

Besides the above problem, I'm also getting a very bad perplexity (**296.0**) on evaluating (w/o finetuning) the `xlnet-base-cased` PLM model on the plain wikitext2 dataset (`wiki.test.raw`). I've used the XLNet example from [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) (without the `--do-train` flag) to get the perplexity.

The PLM code only works if the sequence lengths are even. To work around this, I append a padding token when the sequence length is odd. Concretely, I replaced the [error here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L255) with:

```
padding = inputs.new_ones((inputs.size(0), 1)) * self.tokenizer.pad_token_id
inputs = torch.cat([inputs, padding], dim=1)
```

For comparison, the perplexity of BERT on this dataset is around 10.

Transformers version: from master.

@patrickvonplaten @TevenLeScao @LysandreJik @shngt
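To make the AND-vs-OR point concrete, a small self-contained sketch with hypothetical tensors (a pad id of 0 is assumed here; the real logic lives in `DataCollatorForPermutationLanguageModeling.mask_tokens`):

```python
import torch

labels = torch.tensor([[101, 7592, 0, 0]])  # one special token, one word, two pads
special_tokens_mask = torch.tensor([[1, 0, 0, 0]], dtype=torch.bool)
padding_mask = labels.eq(0)  # assumes pad_token_id == 0

# current code: AND — only positions that are *both* padding and special get excluded,
# so the special token at position 0 is wrongly treated as maskable
non_func_mask_and = ~(padding_mask & special_tokens_mask)

# proposed fix: OR — any functional token (padding *or* special) is excluded,
# matching the original XLNet data_utils.py
non_func_mask_or = ~(padding_mask | special_tokens_mask)

print(non_func_mask_and)  # tensor([[True, True, True, True]])
print(non_func_mask_or)   # tensor([[False, True, False, False]])
```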
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6812/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6811/comments
https://api.github.com/repos/huggingface/transformers/issues/6811/events
https://github.com/huggingface/transformers/pull/6811
688,361,436
MDExOlB1bGxSZXF1ZXN0NDc1NjQ2MjUw
6,811
Pegasus finetune script: add --adafactor
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=h1) Report\n> Merging [#6811](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ab21b072fa2a122da930386381d23f95de06e28?el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6811/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6811 +/- ##\n==========================================\n- Coverage 79.58% 79.47% -0.11% \n==========================================\n Files 157 157 \n Lines 28588 28586 -2 \n==========================================\n- Hits 22752 22719 -33 \n- Misses 5836 5867 +31 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+0.27%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=footer). Last update [5ab21b0...67322db](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6811/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6811", "html_url": "https://github.com/huggingface/transformers/pull/6811", "diff_url": "https://github.com/huggingface/transformers/pull/6811.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6811.patch", "merged_at": 1598737413000 }
https://api.github.com/repos/huggingface/transformers/issues/6810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6810/comments
https://api.github.com/repos/huggingface/transformers/issues/6810/events
https://github.com/huggingface/transformers/pull/6810
688,356,203
MDExOlB1bGxSZXF1ZXN0NDc1NjQxOTcw
6,810
[s2s] save first batch to json for debugging purposes
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,602
1,602
CONTRIBUTOR
null
helps debugging and understanding at a very low cost, by writing `text_batch.json` and `tok_batch.json` to `output_dir/`
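A minimal sketch of this kind of debug dump, assuming the tokenized batch is a dict of tensors — the two file names match the PR description, but the function name and arguments are illustrative, not the PR's actual code:

```python
import json
from pathlib import Path

def save_first_batch(texts, tok_batch, output_dir):
    """Write the raw text batch and its tokenized form once, for inspection."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    # The raw strings as fed to the tokenizer
    (out / "text_batch.json").write_text(json.dumps(texts, indent=2))
    # Tensors are not JSON-serializable, so store plain lists of token ids
    serializable = {k: v.tolist() for k, v in tok_batch.items()}
    (out / "tok_batch.json").write_text(json.dumps(serializable, indent=2))
```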
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6810/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6810", "html_url": "https://github.com/huggingface/transformers/pull/6810", "diff_url": "https://github.com/huggingface/transformers/pull/6810.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6810.patch", "merged_at": 1602015117000 }
https://api.github.com/repos/huggingface/transformers/issues/6809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6809/comments
https://api.github.com/repos/huggingface/transformers/issues/6809/events
https://github.com/huggingface/transformers/pull/6809
688,354,432
MDExOlB1bGxSZXF1ZXN0NDc1NjQwNTMz
6,809
[s2s] Test hub configs in self-scheduled CI
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" } ]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Regression test for #6806

{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6809", "html_url": "https://github.com/huggingface/transformers/pull/6809", "diff_url": "https://github.com/huggingface/transformers/pull/6809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6809.patch", "merged_at": 1598648753000 }
https://api.github.com/repos/huggingface/transformers/issues/6808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6808/comments
https://api.github.com/repos/huggingface/transformers/issues/6808/events
https://github.com/huggingface/transformers/issues/6808
688,350,571
MDU6SXNzdWU2ODgzNTA1NzE=
6,808
bart-large-cnn ROUGE-L scores
{ "login": "swethmandava", "id": 17828952, "node_id": "MDQ6VXNlcjE3ODI4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/17828952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swethmandava", "html_url": "https://github.com/swethmandava", "followers_url": "https://api.github.com/users/swethmandava/followers", "following_url": "https://api.github.com/users/swethmandava/following{/other_user}", "gists_url": "https://api.github.com/users/swethmandava/gists{/gist_id}", "starred_url": "https://api.github.com/users/swethmandava/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swethmandava/subscriptions", "organizations_url": "https://api.github.com/users/swethmandava/orgs", "repos_url": "https://api.github.com/users/swethmandava/repos", "events_url": "https://api.github.com/users/swethmandava/events{/privacy}", "received_events_url": "https://api.github.com/users/swethmandava/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Also noticed this! I have convinced myself that it's a scoring difference because the summaries generated are the same between this model and the fairseq implementation.\r\n", "This might help:\r\nhttps://github.com/google-research/google-research/issues/168\r\nI used pyrogue and R1, R2, RL = 44.32, 21.15, 37.53\r\n", "@yxyzzz can you tell me how you're using it? I get similar scores with py-rogue\r\n\r\n```\r\ndef calculate_rouge(output_lns: List[str], reference_lns: List[str], use_stemmer=True) -> Dict:\r\n scorer = rouge_scorer.RougeScorer(ROUGE_KEYS, use_stemmer=use_stemmer)\r\n aggregator = scoring.BootstrapAggregator()\r\n\r\n for reference_ln, output_ln in zip(reference_lns, output_lns):\r\n scores = scorer.score(reference_ln, output_ln)\r\n aggregator.add_scores(scores)\r\n\r\n result = aggregator.aggregate()\r\n\r\n\r\n import rouge\r\n import nltk\r\n nltk.download('punkt')\r\n\r\n evaluator = rouge.Rouge(metrics=['rouge-n', 'rouge-l'],\r\n max_n=2,\r\n limit_length=False,\r\n apply_avg=True)\r\n scores = evaluator.get_scores(reference_lns, output_lns)\r\n print(\"py-rogue\", scores)\r\n\r\n print(\"rogue_scorer\", {k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()})\r\n```\r\n\r\nResults in:\r\n\r\n```\r\npy-rogue {'rouge-1': {'f': 0.44335299665102107, 'p': 0.5174289830764615, 'r': 0.40466586165106366}, 'rouge-2': {'f': 0.21133693864752542, 'p': 0.2465209393822732, 'r': 0.19324181648769206}, 'rouge-l': {'f': 0.3073058732169781, 'p': 0.35988134598642835, 'r': 0.2798097075410874}}\r\n\r\nrogue_scorer {'rouge1': 44.0698, 'rouge2': 21.0711, 'rougeLsum': 30.6233}\r\n```", "1. rouge_score split sentences by '\\n'. You can add a '\\n' to separate sentences in the summaries and evaluate. The summary level rougeL (rougeLsum) should be a lot higher and close to the one in the literature. \r\n'{'rouge1': 44.0536, 'rouge2': 21.0711, 'rougeL': 30.6157, 'rougeLsum': 40.9812}'\r\n```\r\noutput_ln2 = []\r\nfor o in `output_ln:\r\n s = sent_tokenize(p)\r\n output_ln2.append('\\n'.join(s))\r\n```\r\n2. Use pyrouge -> https://pypi.org/project/pyrouge/ ", "replacing \r\n```\r\noutput_lns = [x.rstrip() for x in open(args.save_path).readlines()]\r\nreference_lns = [x.rstrip() for x in open(args.reference_path).readlines()][: len(output_lns)]\r\n```\r\nwith works for rouge_score\r\n\r\n```\r\noutput_lns = [\" . \\n\".join(x.rstrip().split('. ')) for x in open(args.save_path).readlines()]\r\nreference_lns = [\" . \\n\".join(x.rstrip().split(' . ')) for x in open(args.reference_path).readlines()][: len(output_lns)]\r\n```\r\n\r\nThanks @yxyzzz !", "should we change run_eval.py ?\n", "Opened a PR at #7356 that fixes this issue @sshleifer " ]
1,598
1,601
1,601
CONTRIBUTOR
null
## Environment info ### Who can help BART + Summarization @sshleifer ## Information The model I am using is BART. The problem arises when verifying accuracy numbers of facebook/bart-large-cnn on CNN+Daily Mail. The paper reports R1, R2, RL of 44.16, 21.28, 40.90, but I can only get 44.05, 21.07, 30.62. I used [this](https://github.com/abisee/cnn-dailymail) to make my dataset. Is this expected? The task I am working on is: * CNN-DM summarization task ## To reproduce Steps to reproduce the behavior: 1. Follow the instructions to download the dataset 2. Run with `python run_summarization.py --reference_path=data/cnn_dm/test.target data/cnn_dm/test.source results/test.log`
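The rougeL gap reported here turns out to be a scoring artifact (see the comments above): `rouge_score` only computes the summary-level `rougeLsum` correctly when sentences are newline-separated. A minimal sketch of that preprocessing — function and variable names are assumptions, not the repo's code:

```python
import nltk
from rouge_score import rouge_scorer, scoring

nltk.download("punkt", quiet=True)

def rouge_lsum(preds, refs):
    # rougeLsum expects one sentence per line, so join with "\n" first
    scorer = rouge_scorer.RougeScorer(["rougeLsum"], use_stemmer=True)
    aggregator = scoring.BootstrapAggregator()
    for pred, ref in zip(preds, refs):
        pred = "\n".join(nltk.sent_tokenize(pred))
        ref = "\n".join(nltk.sent_tokenize(ref))
        aggregator.add_scores(scorer.score(ref, pred))
    return aggregator.aggregate()["rougeLsum"].mid.fmeasure * 100
```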
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6808/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6807/comments
https://api.github.com/repos/huggingface/transformers/issues/6807/events
https://github.com/huggingface/transformers/pull/6807
688,339,992
MDExOlB1bGxSZXF1ZXN0NDc1NjI4NzEx
6,807
[WIP] Added token_type_id support to GPT2Model
{ "login": "StuartMesham", "id": 28049022, "node_id": "MDQ6VXNlcjI4MDQ5MDIy", "avatar_url": "https://avatars.githubusercontent.com/u/28049022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StuartMesham", "html_url": "https://github.com/StuartMesham", "followers_url": "https://api.github.com/users/StuartMesham/followers", "following_url": "https://api.github.com/users/StuartMesham/following{/other_user}", "gists_url": "https://api.github.com/users/StuartMesham/gists{/gist_id}", "starred_url": "https://api.github.com/users/StuartMesham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StuartMesham/subscriptions", "organizations_url": "https://api.github.com/users/StuartMesham/orgs", "repos_url": "https://api.github.com/users/StuartMesham/repos", "events_url": "https://api.github.com/users/StuartMesham/events{/privacy}", "received_events_url": "https://api.github.com/users/StuartMesham/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
CONTRIBUTOR
null
Fixes #6794 #4922 #1339 Added token type embedding support to GPT2Model. To use `token_type_ids` with `GPT2Model`, the user can now supply `type_vocab_size` to the `GPT2Config` constructor and then pass `token_type_ids` as input to the `forward` method, as with other models. Using all the default parameters, the model should continue to work as before, since I have set `type_vocab_size=None` as the default of the `GPT2Config` class, which disables the new token type embeddings. If the user tries to pass `token_type_ids` to the `forward` method when token type embeddings have not been enabled for a model, an informative error message is raised. The same is true when the user does not pass `token_type_ids` to the `forward` method even though token type embeddings have been enabled for the model.
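A sketch of how the proposed API would be used, going only by this description — the PR was ultimately closed unmerged, so the `type_vocab_size` argument on `GPT2Config` is hypothetical here:

```python
from transformers import GPT2Config, GPT2Model, GPT2Tokenizer

# Hypothetical per this PR: type_vocab_size enables token type embeddings
config = GPT2Config(type_vocab_size=2)
model = GPT2Model(config)  # randomly initialized, for illustration only

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
inputs = tokenizer("question text answer text", return_tensors="pt")
# All zeros here; in practice 0/1 would mark the two segments
token_type_ids = inputs["input_ids"].new_zeros(inputs["input_ids"].shape)
outputs = model(input_ids=inputs["input_ids"], token_type_ids=token_type_ids)
```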
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6807/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6807", "html_url": "https://github.com/huggingface/transformers/pull/6807", "diff_url": "https://github.com/huggingface/transformers/pull/6807.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6807.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6806/comments
https://api.github.com/repos/huggingface/transformers/issues/6806/events
https://github.com/huggingface/transformers/issues/6806
688,338,597
MDU6SXNzdWU2ODgzMzg1OTc=
6,806
distilbart-cnn reproduction
{ "login": "dhruvrnaik", "id": 22565320, "node_id": "MDQ6VXNlcjIyNTY1MzIw", "avatar_url": "https://avatars.githubusercontent.com/u/22565320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhruvrnaik", "html_url": "https://github.com/dhruvrnaik", "followers_url": "https://api.github.com/users/dhruvrnaik/followers", "following_url": "https://api.github.com/users/dhruvrnaik/following{/other_user}", "gists_url": "https://api.github.com/users/dhruvrnaik/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhruvrnaik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhruvrnaik/subscriptions", "organizations_url": "https://api.github.com/users/dhruvrnaik/orgs", "repos_url": "https://api.github.com/users/dhruvrnaik/repos", "events_url": "https://api.github.com/users/dhruvrnaik/events{/privacy}", "received_events_url": "https://api.github.com/users/dhruvrnaik/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Fixed, will add a check. \r\nAre you running distillation experiments!? FYI that model is not trained.", "> Are you running distillation experiments!? FYI that model is not trained.\r\n\r\nYes, I know. Reproducing the results, then planning to run a few experiments with it.\r\n\r\nWasn't able to use `--fp16 `, kept getting OOM errors (using 4 2080TIs).", "Cool! \r\nre: fp16:\r\nAre you in torch 1.6?\r\nTry torch 1.5.1 with apex installed.\r\n\r\nI haven't run anything successfully in torch 1.6 and am very suspicious of native amp.", "> Try torch 1.5.1 with apex installed.\r\n> \r\n> I haven't run anything successfully in torch 1.6 and am very suspicious of native amp.\r\n\r\nThanks, I will try that.\r\nAlso, did you use `run_eval.py` for the results [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0)? \r\nI tried using `sshleifer/distilbart-cnn-12-6` as well as one I finetuned from `sshleifer/student_cnn_12_6`, but got comparatively lower results.\r\n", "Yes I did, what were your results?", "Validation - `{'rouge1': 36.902390083382635, 'rouge2': 15.98520126771937, 'rougeL': 25.75566724592724} `\r\nTest -` {'rouge1': 33.980893339399074, 'rouge2': 13.925809496977044, 'rougeL': 23.731267594610095} `", "That's awful! Can I see your command?\r\n", "```\r\npython run_eval.py distilbart-cnn-12-6/best_tfmr $DATA_DIR/val.source dbart_val_generations.txt \\\r\n --reference_path $DATA_DIR/val.target \\\r\n --score_path distilbart-cnn-12-6/cnn_rouge.json \\\r\n --task summarization \\\r\n --n_obs 100 \\\r\n --device cuda \\\r\n --bs 32 \\\r\n```", "On 100 observations that might not be so bad. \r\nThe 21.26 Rouge 2 is from the following command (a few months ago):\r\n\r\n```bash\r\npython run_eval.py sshleifer/distilbart-cnn-12-6 \\\r\ncnn_dm/test.source \\\r\ndbart_cnn_12_6_test_gens.txt \\\r\n--reference_path cnn_dm/test.target \\\r\n--score_path dbart_cnn_12_6_test_rouge.json \\\r\n--task summarization --bs 32 --fp16\r\n```\r\nin torch 1.5.1.\r\n\r\nReran Today (it took an hour)\r\n```\r\n{'rouge1': 44.2503, 'rouge2': 21.2586, 'rougeL': 30.3729, 'n_obs': 11490, 'runtime': 3569, 'seconds_per_sample': 0.3106}\r\n```\r\n\r\n\r\n", "I had tried with 1000 (based on the [comment](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0)), had similar results. I wouldn't have expected the result to change that much, my bad. Thanks for your help!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
NONE
null
@sshleifer Unable to read `config.json` for `sshleifer/student_cnn_12_6` [config.json](https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/student_cnn_12_6/config.json) Line 64 has a format error : ``` "force_bos_token_to_be_generated", true } ``` This should be: ``` "force_bos_token_to_be_generated" : true } ``` This is causing an issue in loading `sshleifer/student_cnn_12_6` using `.from_pretrained()`.
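A quick way to catch this class of error locally before `.from_pretrained()` fails — a hedged sketch using only standard-library JSON validation on the downloaded file:

```python
import json

try:
    with open("config.json") as f:
        json.load(f)
except json.JSONDecodeError as e:
    # For the broken file above this flags line 64 with
    # "Expecting ':' delimiter", since a comma sits where the colon should be
    print(f"invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")
```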
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6806/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6805/comments
https://api.github.com/repos/huggingface/transformers/issues/6805/events
https://github.com/huggingface/transformers/pull/6805
688,327,635
MDExOlB1bGxSZXF1ZXN0NDc1NjE4NDQz
6,805
[WIP] Added token_type_id support to GPT2Model
{ "login": "StuartMesham", "id": 28049022, "node_id": "MDQ6VXNlcjI4MDQ5MDIy", "avatar_url": "https://avatars.githubusercontent.com/u/28049022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StuartMesham", "html_url": "https://github.com/StuartMesham", "followers_url": "https://api.github.com/users/StuartMesham/followers", "following_url": "https://api.github.com/users/StuartMesham/following{/other_user}", "gists_url": "https://api.github.com/users/StuartMesham/gists{/gist_id}", "starred_url": "https://api.github.com/users/StuartMesham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StuartMesham/subscriptions", "organizations_url": "https://api.github.com/users/StuartMesham/orgs", "repos_url": "https://api.github.com/users/StuartMesham/repos", "events_url": "https://api.github.com/users/StuartMesham/events{/privacy}", "received_events_url": "https://api.github.com/users/StuartMesham/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Fixes #6794 #4922 #1339 Added token type embedding support to GPT2Model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6805/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6805", "html_url": "https://github.com/huggingface/transformers/pull/6805", "diff_url": "https://github.com/huggingface/transformers/pull/6805.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6805.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6804/comments
https://api.github.com/repos/huggingface/transformers/issues/6804/events
https://github.com/huggingface/transformers/pull/6804
688,293,580
MDExOlB1bGxSZXF1ZXN0NDc1NTg5ODYx
6,804
BART ce loss ignores pad_token_id instead of -100
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=h1) Report\n> Merging [#6804](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3cac867fac3f8717b25e3026b97b456a4e748039?el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6804/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6804 +/- ##\n==========================================\n+ Coverage 79.21% 79.25% +0.03% \n==========================================\n Files 157 157 \n Lines 28588 28588 \n==========================================\n+ Hits 22646 22656 +10 \n+ Misses 5942 5932 -10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.06% <100.00%> (-0.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=footer). Last update [3cac867...7a6bf5a](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "What do you mean default in config?\r\nThe default `ignore_index` is -100 for CrossEntropyLoss.\r\n`pad_token_id` is overwritten by BartConfig.", "Sorry, mixed things up. This would make `BartForConditionalGeneration` behave differently from all the other models (all ModelForMaskedLM and T5ForConditionalGeneration use -100), so I think this is pretty breaking. Users probably have special code to change padded tokens to -100, plus you may want to mask other things than the padding for loss computation (more relevant for masked LM than seq2seq, but still).\r\n\r\nI think this is some preprocessing work to do on the labels; for instance, the `DataCollatorForLanguageModeling` replaces all non-masked tokens by -100 in the labels." ]
1,598
1,598
1,598
CONTRIBUTOR
null
cc @ibeltagy who noticed this. I think it is better to ignore pad_token_id than -100, but this is semi-breaking because old training code might have replaced pad token id with -100 in labels. Maybe I should check for both?
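A sketch of the trade-off and the "check for both" idea floated here — illustrative only (this PR was closed without merging), not the library's implementation; the helper name is made up:

```python
import torch.nn.functional as F

def lm_loss(lm_logits, labels, pad_token_id):
    # Old convention: callers pre-replace pad ids with -100 themselves.
    # Proposed convention: ignore pad_token_id directly.
    # A backward-compatible middle ground maps pads to -100 first, so both
    # already-masked labels and raw labels are handled the same way:
    labels = labels.masked_fill(labels == pad_token_id, -100)
    return F.cross_entropy(
        lm_logits.view(-1, lm_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,  # the CrossEntropyLoss default
    )
```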
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6804/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6804", "html_url": "https://github.com/huggingface/transformers/pull/6804", "diff_url": "https://github.com/huggingface/transformers/pull/6804.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6804.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6803/comments
https://api.github.com/repos/huggingface/transformers/issues/6803/events
https://github.com/huggingface/transformers/pull/6803
688,290,490
MDExOlB1bGxSZXF1ZXN0NDc1NTg3MTQ3
6,803
Fix style
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6803/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6803", "html_url": "https://github.com/huggingface/transformers/pull/6803", "diff_url": "https://github.com/huggingface/transformers/pull/6803.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6803.patch", "merged_at": 1598641345000 }
https://api.github.com/repos/huggingface/transformers/issues/6802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6802/comments
https://api.github.com/repos/huggingface/transformers/issues/6802/events
https://github.com/huggingface/transformers/pull/6802
688,287,270
MDExOlB1bGxSZXF1ZXN0NDc1NTg0NDYz
6,802
Only access loss tensor every logging_steps
{ "login": "jysohn23", "id": 19496130, "node_id": "MDQ6VXNlcjE5NDk2MTMw", "avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jysohn23", "html_url": "https://github.com/jysohn23", "followers_url": "https://api.github.com/users/jysohn23/followers", "following_url": "https://api.github.com/users/jysohn23/following{/other_user}", "gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}", "starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions", "organizations_url": "https://api.github.com/users/jysohn23/orgs", "repos_url": "https://api.github.com/users/jysohn23/repos", "events_url": "https://api.github.com/users/jysohn23/events{/privacy}", "received_events_url": "https://api.github.com/users/jysohn23/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Style issue will be solved with merge @sgugger ", "> Thanks for fixing this! I'd remove the first change in the logs though.\r\n\r\nThanks for the review! Done.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=h1) Report\n> Merging [#6802](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9336086ab5d232cccd9512333518cf4299528882?el=desc) will **decrease** coverage by `0.42%`.\n> The diff coverage is `89.47%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6802/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6802 +/- ##\n==========================================\n- Coverage 80.32% 79.89% -0.43% \n==========================================\n Files 157 157 \n Lines 28589 28739 +150 \n==========================================\n- Hits 22963 22960 -3 \n- Misses 5626 5779 +153 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <ø> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <ø> (+63.80%)` | :arrow_up: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `82.28% <ø> (ø)` | |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.28% <ø> (-0.05%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.23% <46.66%> (-0.43%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `90.69% <89.18%> (-1.14%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <94.59%> (+2.19%)` | :arrow_up: |\n| [src/transformers/configuration\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/data/datasets/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL19faW5pdF9fLnB5) | `100.00% <100.00%> (ø)` | |\n| ... 
and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6802/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=footer). Last update [9336086...2b981cd](https://codecov.io/gh/huggingface/transformers/pull/6802?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
COLLABORATOR
null
* tensor.item() was being called every step. This must not be done for XLA:TPU tensors, as it forces TPU<>CPU communication at each step and is terrible for performance. On RoBERTa MLM, for example, this change reduces step time by 30%; the gain should be larger for models/tasks with smaller step times. * Train batch size was not correct when a user uses the `per_gpu_train_batch_size` flag * Average-reduce loss across eval shards * Log TPU debug metrics before the last epoch break
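A runnable sketch of the first point (variable names are assumptions, not the Trainer's actual internals): keep the running loss as a device tensor and call `.item()` only at logging boundaries, so XLA is not forced to synchronize every step:

```python
import torch

logging_steps = 10
tr_loss = torch.zeros(())       # running loss stays a device tensor
logging_loss = torch.zeros(())

for step in range(1, 101):
    step_loss = torch.rand(())  # stand-in for the real training step's loss
    tr_loss += step_loss        # tensor += tensor: no host synchronization

    if step % logging_steps == 0:
        # .item() forces a device->host transfer; on XLA it also blocks on
        # graph execution, so only pay that cost once per logging interval
        avg = (tr_loss - logging_loss).item() / logging_steps
        print(f"step {step}: loss {avg:.4f}")
        logging_loss = tr_loss.clone()
```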
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6802/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6802/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6802", "html_url": "https://github.com/huggingface/transformers/pull/6802", "diff_url": "https://github.com/huggingface/transformers/pull/6802.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6802.patch", "merged_at": 1598888152000 }
https://api.github.com/repos/huggingface/transformers/issues/6801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6801/comments
https://api.github.com/repos/huggingface/transformers/issues/6801/events
https://github.com/huggingface/transformers/pull/6801
688,214,603
MDExOlB1bGxSZXF1ZXN0NDc1NTIyNzc3
6,801
Model card for primer/BART-Squad2
{ "login": "tomgrek", "id": 2245347, "node_id": "MDQ6VXNlcjIyNDUzNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/2245347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomgrek", "html_url": "https://github.com/tomgrek", "followers_url": "https://api.github.com/users/tomgrek/followers", "following_url": "https://api.github.com/users/tomgrek/following{/other_user}", "gists_url": "https://api.github.com/users/tomgrek/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomgrek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomgrek/subscriptions", "organizations_url": "https://api.github.com/users/tomgrek/orgs", "repos_url": "https://api.github.com/users/tomgrek/repos", "events_url": "https://api.github.com/users/tomgrek/events{/privacy}", "received_events_url": "https://api.github.com/users/tomgrek/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,598
1,599
1,598
CONTRIBUTOR
null
Adds model card (model is already uploaded).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6801/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6801", "html_url": "https://github.com/huggingface/transformers/pull/6801", "diff_url": "https://github.com/huggingface/transformers/pull/6801.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6801.patch", "merged_at": 1598997152000 }
https://api.github.com/repos/huggingface/transformers/issues/6800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6800/comments
https://api.github.com/repos/huggingface/transformers/issues/6800/events
https://github.com/huggingface/transformers/pull/6800
688,210,253
MDExOlB1bGxSZXF1ZXN0NDc1NTE5MDc4
6,800
t5 model should make decoder_attention_mask
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
This undoes an aggressive change that was merged in #6654. I should discuss with Patrick first when he returns.
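For context, a minimal sketch of what "the model makes the decoder_attention_mask" means in practice — assumed shapes and pad id, not T5's exact internals (the causal triangle is applied separately inside self-attention):

```python
import torch

def default_decoder_attention_mask(decoder_input_ids, pad_token_id):
    # 1 for real tokens, 0 for padding
    return (decoder_input_ids != pad_token_id).long()

ids = torch.tensor([[37, 423, 1, 0, 0]])  # illustrative ids, pad id 0
print(default_decoder_attention_mask(ids, pad_token_id=0))
# tensor([[1, 1, 1, 0, 0]])
```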
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6800/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6800", "html_url": "https://github.com/huggingface/transformers/pull/6800", "diff_url": "https://github.com/huggingface/transformers/pull/6800.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6800.patch", "merged_at": 1598642554000 }
https://api.github.com/repos/huggingface/transformers/issues/6799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6799/comments
https://api.github.com/repos/huggingface/transformers/issues/6799/events
https://github.com/huggingface/transformers/pull/6799
688,208,687
MDExOlB1bGxSZXF1ZXN0NDc1NTE3NzMy
6,799
Marian distill scripts + integration test
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=h1) Report\n> Merging [#6799](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02d09c8fcc6bda2c345c84cec53289abbe7532ac?el=desc) will **increase** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6799/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6799 +/- ##\n==========================================\n+ Coverage 79.01% 79.11% +0.10% \n==========================================\n Files 157 157 \n Lines 28739 28739 \n==========================================\n+ Hits 22707 22736 +29 \n+ Misses 6032 6003 -29 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.00% <0.00%> (-0.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.06% <0.00%> (-0.52%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6799/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=footer). Last update [02d09c8...d6e38c4](https://codecov.io/gh/huggingface/transformers/pull/6799?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
- Fix 2 integration tests in examples - Add a new integration test for the marian distillation script - Add 2 marian distillation scripts (but don't document them; will do that once they are slightly better).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6799", "html_url": "https://github.com/huggingface/transformers/pull/6799", "diff_url": "https://github.com/huggingface/transformers/pull/6799.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6799.patch", "merged_at": 1598896107000 }
https://api.github.com/repos/huggingface/transformers/issues/6798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6798/comments
https://api.github.com/repos/huggingface/transformers/issues/6798/events
https://github.com/huggingface/transformers/pull/6798
688,164,006
MDExOlB1bGxSZXF1ZXN0NDc1NDc5Mjg3
6,798
[s2s] round runtime in run_eval
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6798/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6798", "html_url": "https://github.com/huggingface/transformers/pull/6798", "diff_url": "https://github.com/huggingface/transformers/pull/6798.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6798.patch", "merged_at": 1598736992000 }
https://api.github.com/repos/huggingface/transformers/issues/6797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6797/comments
https://api.github.com/repos/huggingface/transformers/issues/6797/events
https://github.com/huggingface/transformers/issues/6797
688,159,900
MDU6SXNzdWU2ODgxNTk5MDA=
6,797
2 Slow Test Failures That Sam Can Fix
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1834088753, "node_id": "MDU6TGFiZWwxODM0MDg4NzUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Tests", "name": "Tests", "color": "a6fcca", "default": false, "description": "Related to tests" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Both related to fp16

1)
```
if torch.cuda.is_available():
    testargs += ["--fp16", "--gpus=1"]
with patch.object(sys, "argv", testargs):
>   result = run_pl_glue.main()
```

2)
```
examples/seq2seq/test_bash_script.py::test_train_mbart_cc25_enro_script
```
needs rerun / cannot use fp16
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6797/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6796/comments
https://api.github.com/repos/huggingface/transformers/issues/6796/events
https://github.com/huggingface/transformers/issues/6796
688,048,410
MDU6SXNzdWU2ODgwNDg0MTA=
6,796
AttributeError: 'MarianTokenizer' object has no attribute 'prepare_translation_batch'
{ "login": "sdhzlxm", "id": 7666659, "node_id": "MDQ6VXNlcjc2NjY2NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7666659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sdhzlxm", "html_url": "https://github.com/sdhzlxm", "followers_url": "https://api.github.com/users/sdhzlxm/followers", "following_url": "https://api.github.com/users/sdhzlxm/following{/other_user}", "gists_url": "https://api.github.com/users/sdhzlxm/gists{/gist_id}", "starred_url": "https://api.github.com/users/sdhzlxm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sdhzlxm/subscriptions", "organizations_url": "https://api.github.com/users/sdhzlxm/orgs", "repos_url": "https://api.github.com/users/sdhzlxm/repos", "events_url": "https://api.github.com/users/sdhzlxm/events{/privacy}", "received_events_url": "https://api.github.com/users/sdhzlxm/received_events", "type": "User", "site_admin": false }
[ { "id": 2039044877, "node_id": "MDU6TGFiZWwyMDM5MDQ0ODc3", "url": "https://api.github.com/repos/huggingface/transformers/labels/marian", "name": "marian", "color": "30cc95", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Which tutorial? its called `prepare_seq2seq_batch` now.", "\"prepare_seq2seq_batch\" works. Thanks a lot.\r\n\r\nI follow this tutorial, https://huggingface.co/transformers/model_doc/marian.html\r\n\r\nWhere can I find the new user manual for MarianMT model? Thank you.\r\n", "https://huggingface.co/transformers/master/model_doc/marian.html", "Have they change it again and add a maximum length?", "I am getting: `AttributeError: 'MarianTokenizer' object has no attribute 'prepare_seq2seq_batch'`\r\n\r\nI changed it to `prepare_translation_batch` and it works", "It fails again... and changing to `prepare_seq2seq_batch` throws the deprecation warning..." ]
1,598
1,637
1,598
NONE
null
Problem 1: When I load the tokenizer from a local directory and use it according to the MarianMT tutorial, an error occurs: AttributeError: 'MarianTokenizer' object has no attribute 'prepare_translation_batch' Problem 2: When I download models and tokenizers from the web, such as by running 'tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")', some errors occur: RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(818) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
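As the comments on this issue note, the method was renamed. A minimal sketch of the renamed API (the example sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# prepare_translation_batch was renamed to prepare_seq2seq_batch
batch = tokenizer.prepare_seq2seq_batch(["你好,世界"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```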
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6796/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6795/comments
https://api.github.com/repos/huggingface/transformers/issues/6795/events
https://github.com/huggingface/transformers/issues/6795
688,042,528
MDU6SXNzdWU2ODgwNDI1Mjg=
6,795
Maybe global step should be initialized to 0
{ "login": "HuipengXu", "id": 23466003, "node_id": "MDQ6VXNlcjIzNDY2MDAz", "avatar_url": "https://avatars.githubusercontent.com/u/23466003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HuipengXu", "html_url": "https://github.com/HuipengXu", "followers_url": "https://api.github.com/users/HuipengXu/followers", "following_url": "https://api.github.com/users/HuipengXu/following{/other_user}", "gists_url": "https://api.github.com/users/HuipengXu/gists{/gist_id}", "starred_url": "https://api.github.com/users/HuipengXu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuipengXu/subscriptions", "organizations_url": "https://api.github.com/users/HuipengXu/orgs", "repos_url": "https://api.github.com/users/HuipengXu/repos", "events_url": "https://api.github.com/users/HuipengXu/events{/privacy}", "received_events_url": "https://api.github.com/users/HuipengXu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
## Environment info - `transformers` version: 3.0.2 In script: https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py Location: line 143
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6795/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6795/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6794/comments
https://api.github.com/repos/huggingface/transformers/issues/6794/events
https://github.com/huggingface/transformers/issues/6794
688,032,012
MDU6SXNzdWU2ODgwMzIwMTI=
6,794
Unexpected behavior encoding token_type_ids in GPT models
{ "login": "StuartMesham", "id": 28049022, "node_id": "MDQ6VXNlcjI4MDQ5MDIy", "avatar_url": "https://avatars.githubusercontent.com/u/28049022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StuartMesham", "html_url": "https://github.com/StuartMesham", "followers_url": "https://api.github.com/users/StuartMesham/followers", "following_url": "https://api.github.com/users/StuartMesham/following{/other_user}", "gists_url": "https://api.github.com/users/StuartMesham/gists{/gist_id}", "starred_url": "https://api.github.com/users/StuartMesham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StuartMesham/subscriptions", "organizations_url": "https://api.github.com/users/StuartMesham/orgs", "repos_url": "https://api.github.com/users/StuartMesham/repos", "events_url": "https://api.github.com/users/StuartMesham/events{/privacy}", "received_events_url": "https://api.github.com/users/StuartMesham/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
CONTRIBUTOR
null
The issues #4922 and #1339 have been closed, but the bug is still present in the `GPT2Model` code, resulting in unexpected behaviour as described in #4922. I am trying to use token_type_ids to annotate the different languages being input to a multilingual model. I am using my own tokenization code, which outputs token_type_ids indicating the language of the tokens. I have run into unexpected behaviour where the embeddings for the first few tokens in my vocabulary are also being used as my token type embeddings. I am using version 3.0.2. The issue was marked as closed since the GPT2 tokenizer no longer outputs token_type_ids, but the `GPT2Model` class still accepts token_type_ids in its forward method. # Expected Behavior Either: 1. Add a separate `nn.Embedding` matrix for token_type_ids [here](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_gpt2.py#L352) (such as [in BERTModel](https://github.com/huggingface/transformers/blob/3ae2e86baffc1fea8b8b93695fb5a10941fd63dc/src/transformers/modeling_bert.py#L153)), or 2. Discourage passing `token_type_ids` to GPT model instances, or throw a warning when they are passed.
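To illustrate the overlap described above, a minimal sketch (the ids are arbitrary toy values): `GPT2Model` looks up `token_type_ids` in the same `wte` word-embedding matrix it uses for `input_ids`, so token type 1 shares an embedding with vocabulary id 1.

```python
import torch
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
model.eval()

input_ids = torch.tensor([[464, 3290]])   # arbitrary vocabulary ids
token_type_ids = torch.tensor([[0, 1]])   # e.g. language annotations

# Internally GPT2Model computes wte(input_ids) + wpe(positions) + wte(token_type_ids),
# i.e. the type embedding for id 1 is the word embedding of vocabulary token 1.
with torch.no_grad():
    out = model(input_ids=input_ids, token_type_ids=token_type_ids)
print(out[0].shape)  # (1, 2, 768): last hidden states
```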
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6794/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6793/comments
https://api.github.com/repos/huggingface/transformers/issues/6793/events
https://github.com/huggingface/transformers/pull/6793
688,019,151
MDExOlB1bGxSZXF1ZXN0NDc1MzU5OTY1
6,793
Update README of my model
{ "login": "rdenadai", "id": 917516, "node_id": "MDQ6VXNlcjkxNzUxNg==", "avatar_url": "https://avatars.githubusercontent.com/u/917516?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rdenadai", "html_url": "https://github.com/rdenadai", "followers_url": "https://api.github.com/users/rdenadai/followers", "following_url": "https://api.github.com/users/rdenadai/following{/other_user}", "gists_url": "https://api.github.com/users/rdenadai/gists{/gist_id}", "starred_url": "https://api.github.com/users/rdenadai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rdenadai/subscriptions", "organizations_url": "https://api.github.com/users/rdenadai/orgs", "repos_url": "https://api.github.com/users/rdenadai/repos", "events_url": "https://api.github.com/users/rdenadai/events{/privacy}", "received_events_url": "https://api.github.com/users/rdenadai/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6793/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6793", "html_url": "https://github.com/huggingface/transformers/pull/6793", "diff_url": "https://github.com/huggingface/transformers/pull/6793.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6793.patch", "merged_at": 1598785367000 }
https://api.github.com/repos/huggingface/transformers/issues/6792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6792/comments
https://api.github.com/repos/huggingface/transformers/issues/6792/events
https://github.com/huggingface/transformers/issues/6792
687,947,709
MDU6SXNzdWU2ODc5NDc3MDk=
6,792
FP16 support for DistilBert
{ "login": "xuxingya", "id": 13343428, "node_id": "MDQ6VXNlcjEzMzQzNDI4", "avatar_url": "https://avatars.githubusercontent.com/u/13343428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xuxingya", "html_url": "https://github.com/xuxingya", "followers_url": "https://api.github.com/users/xuxingya/followers", "following_url": "https://api.github.com/users/xuxingya/following{/other_user}", "gists_url": "https://api.github.com/users/xuxingya/gists{/gist_id}", "starred_url": "https://api.github.com/users/xuxingya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xuxingya/subscriptions", "organizations_url": "https://api.github.com/users/xuxingya/orgs", "repos_url": "https://api.github.com/users/xuxingya/repos", "events_url": "https://api.github.com/users/xuxingya/events{/privacy}", "received_events_url": "https://api.github.com/users/xuxingya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed by #6858 " ]
1,598
1,599
1,599
NONE
null
# 🚀 Feature request The DistilBert model doesn't support fp16 or mixed-precision training. The hard-coded uses of float32 in the TensorFlow implementations of BERT and ELECTRA were fixed by #3320. The same problem in DistilBert needs a fix.
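For context, a minimal sketch of the usage this request is about — running the TF DistilBert model under a Keras mixed-precision policy. The policy API shown is the TF 2.3-era experimental one (the exact name varies across TF 2.x versions), and the input ids are toy values:

```python
import tensorflow as tf
from transformers import TFDistilBertModel

# compute in float16 while keeping variables in float32
tf.keras.mixed_precision.experimental.set_policy("mixed_float16")

model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
input_ids = tf.constant([[101, 7592, 2088, 102]])  # toy token ids
# before the fix, hard-coded tf.float32 casts inside the model break this call
outputs = model(input_ids)
```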
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6792/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6791/comments
https://api.github.com/repos/huggingface/transformers/issues/6791/events
https://github.com/huggingface/transformers/pull/6791
687,942,356
MDExOlB1bGxSZXF1ZXN0NDc1Mjk1NTkw
6,791
Enable wandb logging for Ray Tune HPO runs
{ "login": "krfricke", "id": 14904111, "node_id": "MDQ6VXNlcjE0OTA0MTEx", "avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krfricke", "html_url": "https://github.com/krfricke", "followers_url": "https://api.github.com/users/krfricke/followers", "following_url": "https://api.github.com/users/krfricke/following{/other_user}", "gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}", "starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krfricke/subscriptions", "organizations_url": "https://api.github.com/users/krfricke/orgs", "repos_url": "https://api.github.com/users/krfricke/repos", "events_url": "https://api.github.com/users/krfricke/events{/privacy}", "received_events_url": "https://api.github.com/users/krfricke/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=h1) Report\n> Merging [#6791](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **increase** coverage by `0.41%`.\n> The diff coverage is `0.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6791/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6791 +/- ##\n==========================================\n+ Coverage 79.36% 79.78% +0.41% \n==========================================\n Files 157 157 \n Lines 28569 28578 +9 \n==========================================\n+ Hits 22675 22800 +125 \n+ Misses 5894 5778 -116 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `52.90% <0.00%> (+39.68%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `35.93% <0.00%> (-59.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-8.71%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6791/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=footer). Last update [930153e...001a17f](https://codecov.io/gh/huggingface/transformers/pull/6791?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "making sure that @borisdayma sees this PR.", "Thanks, I've not played too much with ray tune yet but there seems to be 2 ways to integrate through Ray Tune libraries as per [the docs](https://docs.wandb.com/library/integrations/ray-tune).\r\n\r\nHowever ideally, the setup, logging, etc would be handled directly by `Trainer` existing functions for clarity and concision (and also to support all existing loggers). Handling multiple logging runs should be done within `hyperparameter_search` if possible.\r\n\r\nCould the setup methods be wrapped in a new function and called during the search, in order to avoid duplicating the same logic.\r\nFor wandb, forcing a new run just requires `wandb.init(reinit=True)` so it works both in notebooks and scripts.\r\n\r\nNote: Use this argument **only** while using `hyperparameter_search` as users can currently call manually `wandb.init` before (for example when using pytorch-lightning, sweeps, or keras + huggingface), making the call within the `Trainer` a \"noop\" (because it does not have `reinit=True`).", "Thanks for your comments.\r\n\r\n@borisdayma, simply moving the logger setup to `train()` would do the trick in any case, as it is called from the hyperparameter search methods. This should also work for Optuna, not only for Ray Tune.\r\n\r\nI created a PR for that here: #6850. Is this what you meant?", "Closed in favor of #6850." ]
1,598
1,611
1,599
CONTRIBUTOR
null
Currently wandb logging does not work with Ray Tune, as each process tries to call `wandb.log` without initializing wandb first. This PR fixes this by wrapping the objective in Ray Tune's `wandb_mixin`, which makes sure each trial initializes wandb and logs to it.
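A minimal sketch of the `wandb_mixin` pattern the PR describes, independent of the Trainer integration itself. The project name, API-key file path, and toy metric are placeholders; the mixin reads its settings from the reserved "wandb" key in the trial config:

```python
from ray import tune
from ray.tune.integration.wandb import wandb_mixin
import wandb

@wandb_mixin
def objective(config):
    # the mixin calls wandb.init() once per trial, so wandb.log works in each process
    for step in range(10):
        loss = (config["lr"] - 0.01) ** 2 + 1.0 / (step + 1)  # toy metric
        wandb.log({"loss": loss})
        tune.report(loss=loss)

analysis = tune.run(
    objective,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),
        # settings consumed by wandb_mixin, not by the objective itself
        "wandb": {"project": "hpo-demo", "api_key_file": "~/.wandb_api_key"},
    },
    num_samples=4,
)
```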
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6791/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6791", "html_url": "https://github.com/huggingface/transformers/pull/6791", "diff_url": "https://github.com/huggingface/transformers/pull/6791.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6791.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6790/comments
https://api.github.com/repos/huggingface/transformers/issues/6790/events
https://github.com/huggingface/transformers/issues/6790
687,934,087
MDU6SXNzdWU2ODc5MzQwODc=
6,790
What is the size of the context window in the 'openai-gpt' pre-trained model?
{ "login": "lzl19971215", "id": 63151530, "node_id": "MDQ6VXNlcjYzMTUxNTMw", "avatar_url": "https://avatars.githubusercontent.com/u/63151530?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lzl19971215", "html_url": "https://github.com/lzl19971215", "followers_url": "https://api.github.com/users/lzl19971215/followers", "following_url": "https://api.github.com/users/lzl19971215/following{/other_user}", "gists_url": "https://api.github.com/users/lzl19971215/gists{/gist_id}", "starred_url": "https://api.github.com/users/lzl19971215/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lzl19971215/subscriptions", "organizations_url": "https://api.github.com/users/lzl19971215/orgs", "repos_url": "https://api.github.com/users/lzl19971215/repos", "events_url": "https://api.github.com/users/lzl19971215/events{/privacy}", "received_events_url": "https://api.github.com/users/lzl19971215/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
What is the size of the context window in the 'openai-gpt' pre-trained model?
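The answer can be read off the model config — the original GPT was trained with a 512-token context window:

```python
from transformers import OpenAIGPTConfig

config = OpenAIGPTConfig.from_pretrained("openai-gpt")
print(config.n_positions)  # 512: the maximum sequence length the model supports
```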
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6790/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6789/comments
https://api.github.com/repos/huggingface/transformers/issues/6789/events
https://github.com/huggingface/transformers/issues/6789
687,914,174
MDU6SXNzdWU2ODc5MTQxNzQ=
6,789
How to fine-tune T5 with some additional special tokens ?
{ "login": "xdqkid", "id": 18492840, "node_id": "MDQ6VXNlcjE4NDkyODQw", "avatar_url": "https://avatars.githubusercontent.com/u/18492840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xdqkid", "html_url": "https://github.com/xdqkid", "followers_url": "https://api.github.com/users/xdqkid/followers", "following_url": "https://api.github.com/users/xdqkid/following{/other_user}", "gists_url": "https://api.github.com/users/xdqkid/gists{/gist_id}", "starred_url": "https://api.github.com/users/xdqkid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xdqkid/subscriptions", "organizations_url": "https://api.github.com/users/xdqkid/orgs", "repos_url": "https://api.github.com/users/xdqkid/repos", "events_url": "https://api.github.com/users/xdqkid/events{/privacy}", "received_events_url": "https://api.github.com/users/xdqkid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Let's say the tokens you want to add are <some_token_1> and <some_token_2> (including angle brackets)\r\n```\r\nfrom transformers import T5Tokenizer\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-base\")\r\ntokenizer.add_tokens(['<some_token_1>', '<some_token_2'>])\r\n```", "I want to add this line in addition to modifying the tokenizer for the model to work with the new tokenizer:\r\n\r\n`model.resize_token_embeddings(len(tokenizer))`" ]
1,598
1,621
1,598
NONE
null
I want to fine-tune T5 on a seq2seq task, but the task involves some special tokens, and T5 performs badly without them. How can I add these special tokens when fine-tuning T5 with the script https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_t5.sh?
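Putting the two answers from the comments together, a minimal sketch (the token names are placeholders): register the tokens with the tokenizer, then grow the model's embedding matrix to match.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# register the task-specific tokens so the tokenizer never splits them
tokenizer.add_tokens(["<some_token_1>", "<some_token_2>"])
# grow the embedding matrix to cover the newly added ids
model.resize_token_embeddings(len(tokenizer))
```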
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6789/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6788/comments
https://api.github.com/repos/huggingface/transformers/issues/6788/events
https://github.com/huggingface/transformers/pull/6788
687,839,705
MDExOlB1bGxSZXF1ZXN0NDc1MjA4MzYx
6,788
Update multilingual passage rereanking model card
{ "login": "iglimanaj", "id": 12574741, "node_id": "MDQ6VXNlcjEyNTc0NzQx", "avatar_url": "https://avatars.githubusercontent.com/u/12574741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iglimanaj", "html_url": "https://github.com/iglimanaj", "followers_url": "https://api.github.com/users/iglimanaj/followers", "following_url": "https://api.github.com/users/iglimanaj/following{/other_user}", "gists_url": "https://api.github.com/users/iglimanaj/gists{/gist_id}", "starred_url": "https://api.github.com/users/iglimanaj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iglimanaj/subscriptions", "organizations_url": "https://api.github.com/users/iglimanaj/orgs", "repos_url": "https://api.github.com/users/iglimanaj/repos", "events_url": "https://api.github.com/users/iglimanaj/events{/privacy}", "received_events_url": "https://api.github.com/users/iglimanaj/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=h1) Report\n> Merging [#6788](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **decrease** coverage by `2.98%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6788/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6788 +/- ##\n==========================================\n- Coverage 79.36% 76.38% -2.99% \n==========================================\n Files 157 157 \n Lines 28569 28569 \n==========================================\n- Hits 22675 21822 -853 \n- Misses 5894 6747 +853 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `35.93% <0.00%> (-59.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `81.42% <0.00%> (-12.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| ... 
and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6788/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=footer). Last update [930153e...3482a3e](https://codecov.io/gh/huggingface/transformers/pull/6788?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Fix range of possible scores, add inference.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6788/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6788", "html_url": "https://github.com/huggingface/transformers/pull/6788", "diff_url": "https://github.com/huggingface/transformers/pull/6788.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6788.patch", "merged_at": 1598997379000 }
https://api.github.com/repos/huggingface/transformers/issues/6787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6787/comments
https://api.github.com/repos/huggingface/transformers/issues/6787/events
https://github.com/huggingface/transformers/issues/6787
687,703,139
MDU6SXNzdWU2ODc3MDMxMzk=
6,787
Dear transformers team, how could I use BERT for an NER task?
{ "login": "jasonsu123", "id": 16911126, "node_id": "MDQ6VXNlcjE2OTExMTI2", "avatar_url": "https://avatars.githubusercontent.com/u/16911126?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jasonsu123", "html_url": "https://github.com/jasonsu123", "followers_url": "https://api.github.com/users/jasonsu123/followers", "following_url": "https://api.github.com/users/jasonsu123/following{/other_user}", "gists_url": "https://api.github.com/users/jasonsu123/gists{/gist_id}", "starred_url": "https://api.github.com/users/jasonsu123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jasonsu123/subscriptions", "organizations_url": "https://api.github.com/users/jasonsu123/orgs", "repos_url": "https://api.github.com/users/jasonsu123/repos", "events_url": "https://api.github.com/users/jasonsu123/events{/privacy}", "received_events_url": "https://api.github.com/users/jasonsu123/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi,\r\n\r\nHave a look at the following script from Huggingface:\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py\r\n\r\nIt shows how you can finetune a pretrained BERT-model for NER.\r\nRemember to take a look at the utility code as well, since this is the code preparing and creating your features (and tensors in general).\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/token-classification/utils_ner.py\r\n\r\nRegards", "\r\n> Hi,\r\n> \r\n> Have a look at the following script from Huggingface:\r\n> \r\n> https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py\r\n> \r\n> It shows how you can finetune a pretrained BERT-model for NER.\r\n> Remember to take a look at the utility code as well, since this is the code preparing and creating your features (and tensors in general).\r\n> \r\n> https://github.com/huggingface/transformers/blob/master/examples/token-classification/utils_ner.py\r\n> \r\n> Regards\r\n\r\nThanks for your response very much.\r\n\r\nActually I am not good at python programming.\r\n\r\nrun_ner.py could create a new pretrained model or I could us it to fine tune a existed model to my target NER task?\r\n\r\nWhat are the functions of these two programs? And what parameters should I set?\r\nWhat are the formats of train and test data?\r\n\r\nI also could not find these two programs in the download folder. \r\n![image](https://user-images.githubusercontent.com/16911126/93067378-ba15fe00-f6ad-11ea-99ab-e4fd87742e0b.png)\r\n\r\nBy the way, could you offer the tutorial of colab format of NER task just like below link?\r\nhttps://www.depends-on-the-definition.com/named-entity-recognition-with-bert/\r\n\r\nIt is complicated and consuming to set virtual environment in PC.\r\n\r\nThank you\r\n Best regards; \r\n\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,606
1,606
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: Dear transformers team, I have train and test corpus with BIO tags, like below: The O patient O was O noted O with O first O depressive O episode O at O aged O 36 O . O How could I use bert to train my data to produce models and to predict the BIO tags of test data? The resource have many programs, but I have no ideas that which program is what I need. I would like to use google colab to run the program that could save python environmental problems. Could you offer the tutorial of Bert NER task? Thank you Best regards;
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6787/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6786/comments
https://api.github.com/repos/huggingface/transformers/issues/6786/events
https://github.com/huggingface/transformers/issues/6786
687,690,979
MDU6SXNzdWU2ODc2OTA5Nzk=
6,786
T5 batch inference: same input data gives different outputs?
{ "login": "ArvinZhuang", "id": 46237844, "node_id": "MDQ6VXNlcjQ2MjM3ODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/46237844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArvinZhuang", "html_url": "https://github.com/ArvinZhuang", "followers_url": "https://api.github.com/users/ArvinZhuang/followers", "following_url": "https://api.github.com/users/ArvinZhuang/following{/other_user}", "gists_url": "https://api.github.com/users/ArvinZhuang/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArvinZhuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArvinZhuang/subscriptions", "organizations_url": "https://api.github.com/users/ArvinZhuang/orgs", "repos_url": "https://api.github.com/users/ArvinZhuang/repos", "events_url": "https://api.github.com/users/ArvinZhuang/events{/privacy}", "received_events_url": "https://api.github.com/users/ArvinZhuang/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "When I add torch.manual_seed(0) at the beginning the outputs will be the same.\r\ndoes the model from tf pretrained has some randomization when loading the model weights?", "Hmm, there sholud always be the same. Could you add your script to a google colab that allows to run your checkpoint, so that we can debug?", "Hi here is the colab script with my checkpoint uploaded:\r\nhttps://colab.research.google.com/drive/1Yx7zRkzpaGMMraTkIeIGnJJLnlpUKzQH?usp=sharing\r\n\r\nyou can download my checkpoint here: https://drive.google.com/drive/folders/1521pvzvkqvEBUvRqZn7CRO-soCXOUZCu?usp=sharing", "I think some layers in your T5 model have not been trained and are therefore not saved in your model checkpoint. At initialization this layer is then randomely initialized. \r\n\r\nOne thing you can do to verify is to load the model as follows, save it as a PyTorch model and then load the PyTorch model as follows.\r\n\r\n```python \r\nconfig = T5Config.from_pretrained('t5-base')\r\nmodel = T5ForConditionalGeneration.from_pretrained(\r\n \"sample_data/model/model.ckpt-1004000\", from_tf=True, config=config)\r\nmodel.save_pretrained(\"./model\")\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"/.model\")\r\n```\r\n\r\nif this command throws a warning that some layers are not initialized, then you know what the problem is.\r\nIf not, I will take a look again :-) \r\n", "hi, I tried what you said, no warning shows up.\r\nso I guess it is not the case?\r\n\r\nDo you think it could be that weights from float64 to float32 or the opposite causing this problem? because the difference in outputs is tiny. I don't know, this still cannot explain why outputs are different every time.", "any updates?", "Hey @ArvinZhuang, \r\n\r\nI just downloaded the weights and tried to reproduce the error using your code snippet. In my case the output is deterministic, as expected. Could you make sure that you are using the newest version of transformers and try again?", "Hi, the outputs still different on my machine.... very strange.\r\n\r\nI'm using transformer v3.3.0\r\ntorch v1.6.0\r\nTensorFlow v2.3.1 \r\n\r\n![image](https://user-images.githubusercontent.com/46237844/94499979-607f0900-0241-11eb-8b64-3b91d4ef1d31.png)\r\n\r\nand btw, I cannot directly load t5 config by using T5Config.from_pretrained('t5-base') now, but the 't5-base' is still in the \"https://huggingface.co/models\" list. 
So I copy and past config.json from \"https://huggingface.co/t5-base\" but the results show this time is very different from the post above, which I think should not be the case because the tf checkpoint and input string are exactly the same as before....\r\n\r\n\r\nupdates: can directly load t5 config by T5Config.from_pretrained('t5-base') now, however, the output logits still very different from my first post above.....", "+1 on the issue\r\nI'm using transformer ==3.4.0\r\ntorch==1.6.0\r\n\r\nI run:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoConfig\r\n\r\nconfig = AutoConfig.from_pretrained(\"Vamsi/T5_Paraphrase_Paws\", output_hidden_states=True) \r\ntokenizer = AutoTokenizer.from_pretrained(\"Vamsi/T5_Paraphrase_Paws\") \r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"Vamsi/T5_Paraphrase_Paws\", config=config).to('cuda')\r\n\r\ndef prediction(documents, query):\r\n querys = [query] * len(documents)\r\n encoded_decoder_inputs = tokenizer(documents, padding=True, truncation=True, return_tensors=\"pt\").to('cuda')\r\n encoded_encoder_inputs = tokenizer(querys, padding=True, truncation=True, return_tensors=\"pt\").to('cuda')\r\n with torch.no_grad():\r\n outputs = model(input_ids=encoded_encoder_inputs[\"input_ids\"],\r\n labels=encoded_decoder_inputs[\"input_ids\"],\r\n attention_mask=encoded_encoder_inputs[\"attention_mask\"])\r\n batch_logits = outputs[1]\r\n print(batch_logits)\r\n\r\ndocuments = ['a', 'b']\r\nquery = \"who am I?\"\r\nprediction(documents, query) \r\n```\r\n\r\nand got:\r\n```\r\ntensor([[[-21.6500, -9.8658, -13.6561, ..., -43.1233, -43.0788, -43.0745],\r\n [-30.3906, -12.7200, -1.2460, ..., -41.7208, -41.6774, -41.6465],\r\n [-15.7073, -5.9496, -5.9364, ..., -36.8553, -36.8221, -36.8052]],\r\n\r\ntensor([[[-21.6500, -9.8658, -13.6561, ..., -43.1233, -43.0788, -43.0745],\r\n [-30.3906, -12.7200, -1.2460, ..., -41.7208, -41.6774, -41.6465],\r\n [-20.1459, -5.3198, -4.7644, ..., -37.7978, -37.7850, -37.8202]]],\r\n device='cuda:0')\r\n```\r\nNote: rerunning ` prediction(documents, query)` produces same deterministic results, suggesting that the inputs to `labels` do affect the `logits` outcome.", "Hi ednussi, \r\nyes, rerunning prediction(documents, query) will give deterministic results. \r\nHowever, my issue is rerunning the above chunk of code twice (reload everything, including model.). the outputs are different.\r\nDoes this also happen to you?\r\n\r\nUpdate:\r\nI tried the model provided by ednussi, and the outputs of running the code twice are the same. But my tf model still gives two different results, suggesting that loading model from tf gives different outcomes.", "Hi @ArvinZhuang,\r\nYes, similar to you when I run the model call with same input_ids but different labels I got different outcomes (using pytorch). Realized I missed a part of the print so edited my comment above to match the output.\r\nHoping @patrickvonplaten or someone from the `huggingface` team can take a second look, so we can get to the bottom of this.", "Hey @ednussi, if you look into the code, you can see that T5 uses the `labels` to create the `decoder_input_ids` which necessarily do affect the outputs. 
You can try using deterministic `input_ids` and `decoder_input_ids` and no labels and see if the output stays deterministic (it should).", "Thanks @patrickvonplaten.\r\nWas following your suggestion, and after reading through the documentation and code of how the `decoder_input_ids` is used, it became clear why it affects the `logits` and helped clear my confusion.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@ednussi - wanted to ask for your help if you could provide a brief explanation from your reading? Thanks." ]
1,598
1,663
1,610
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> Hi, I am using a T5 model from a tf check point. I have set model.eval(), but every time I forward the exactly same input data, the outputs were always different. here is my code: ``` import torch from transformers import * tokenizer = T5Tokenizer.from_pretrained('t5-base') config = T5Config.from_pretrained('t5-base') model = T5ForConditionalGeneration.from_pretrained( "../model/t5-base/model.ckpt-1004000", from_tf=True, config=config) model.eval() def prediction(documents, query): querys = [query] * len(documents) encoded_encoder_inputs = tokenizer(documents, padding=True, truncation=True, return_tensors="pt") encoded_decoder_inputs = tokenizer(querys, padding=True, truncation=True, return_tensors="pt") with torch.no_grad(): outputs = model(input_ids=encoded_encoder_inputs["input_ids"], labels=encoded_decoder_inputs["input_ids"], attention_mask=encoded_encoder_inputs["attention_mask"]) batch_logits = outputs[1] print(batch_logits) documents = ['my dog is cute!', "so am I."] query = "who am I?" prediction(documents, query) ``` here is my twice outputs (run the above code twice): ``` tensor([[[-14.6796, -6.2236, -13.3517, ..., -42.7422, -42.7124, -42.8204], [-18.5999, -4.9112, -10.6610, ..., -40.7506, -40.7313, -40.8373], [-17.3894, -5.3482, -11.4917, ..., -41.2223, -41.1643, -41.3228], [-18.4449, -5.9145, -12.0056, ..., -42.3857, -42.3579, -42.4859]], [[-15.7967, -6.9833, -14.5827, ..., -41.3168, -41.0326, -40.9567], [-18.4241, -5.7193, -12.0748, ..., -40.1744, -40.0635, -39.9045], [-19.5852, -5.1691, -12.7764, ..., -42.2655, -42.0788, -41.9885], [-20.3673, -3.6864, -12.5264, ..., -40.1189, -40.0787, -39.8976]]]) ``` ``` tensor([[[-14.6796, -6.2236, -13.3517, ..., -42.7422, -42.7124, -42.8204], [-18.5848, -4.9116, -10.6607, ..., -40.7443, -40.7251, -40.8300], [-17.3248, -5.3050, -11.4988, ..., -41.1802, -41.1236, -41.2818], [-18.3950, -5.8967, -11.9756, ..., -42.3553, -42.3273, -42.4553]], [[-15.7967, -6.9833, -14.5827, ..., -41.3168, -41.0326, -40.9567], [-18.5133, -5.7466, -12.1301, ..., -40.2199, -40.1078, -39.9497], [-19.3636, -5.1246, -12.7134, ..., -42.1881, -41.9993, -41.9106], [-20.3247, -3.6335, -12.4559, ..., -40.0285, -39.9912, -39.8106]]]) ``` Am I did anything wrong?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6786/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6785/comments
https://api.github.com/repos/huggingface/transformers/issues/6785/events
https://github.com/huggingface/transformers/issues/6785
687,688,475
MDU6SXNzdWU2ODc2ODg0NzU=
6,785
[help] Add multigpu evaluation script for seq2seq
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649053, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted", "name": "Help wanted", "color": "008672", "default": false, "description": "Extra attention is needed, help appreciated" }, { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" }, { "id": 2009457320, "node_id": "MDU6TGFiZWwyMDA5NDU3MzIw", "url": "https://api.github.com/repos/huggingface/transformers/labels/translation", "name": "translation", "color": "b2d2f4", "default": false, "description": "machine translation utilities and models" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,598
1,600
1,600
CONTRIBUTOR
null
### Goal In the end, the command ``` cd examples/seq2seq export DATA_DIR=test_data # use wmt_en_ro for more data, download instructions in README.md export save_dir=translations python multigpu_eval.py Helsinki-NLP/opus-mt-en-ro $DATA_DIR/test.source $save_dir/test_translations.txt --reference_path $DATA_DIR/test.target --score_path $save_dir/test_bleu.json --fp16 --task translation --bs 16 ``` On wmt_en_ro, this should get ~27.7 BLEU (same result as run_eval.py on 1 GPU) and should take less time. On test_data I don't know the 1 GPU result. Side note: the motivation here is actually summarization, where the validation sets are larger and running eval for pegasus takes an hour on 1 GPU. ### Implementation This looks similar: https://github.com/facebookresearch/ParlAI/blob/00efcbebb49524918692638ab580cadeebe70cf8/parlai/scripts/multiprocessing_eval.py#L34 — but I'm open to many strategies. Lots of freedom. You will probably need to use one of the datasets in utils + an unshuffled dataloader of some sort. Try to keep the command line as similar to run_eval.py as possible, with the exception of a --gpus flag. Alternatively, use all available GPUs and tell the user to use CUDA_VISIBLE_DEVICES=0,1 to restrict. ### Process 1. You can claim the issue here, but don't wait for me to reply to get started. This is hard enough that it is very unlikely that two people will be working on this at once. If I start working on this (unlikely) I will post here and remove the help-wanted label. 2. When you send a PR, tag me and link this issue. 3. It will be difficult to test your solution in CI, but try to make the code work with 1 GPU so we can test it that way in CI. You can make a new test_.py file or add to test_seq2seq_examples.py 4. Report BLEU scores in the PR description. Happy hacking!
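One possible shape for the core of such a script — a sketch, not the eventual PR: the strided sharding, the helper names, and the idea of writing per-rank output files that get merged afterwards are all assumptions:

```python
import torch
import torch.multiprocessing as mp
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def eval_shard(rank, world_size, model_name, src_lines, out_prefix):
    device = f"cuda:{rank}"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device).eval()
    shard = src_lines[rank::world_size]  # strided shard; merging re-interleaves by rank
    outputs = []
    for i in range(0, len(shard), 16):  # --bs 16
        batch = tokenizer(shard[i : i + 16], return_tensors="pt", padding=True).to(device)
        with torch.no_grad():
            generated = model.generate(**batch)
        outputs += tokenizer.batch_decode(generated, skip_special_tokens=True)
    with open(f"{out_prefix}.rank{rank}.txt", "w") as f:
        f.write("\n".join(outputs))

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # or restrict via CUDA_VISIBLE_DEVICES
    src_lines = [l.rstrip("\n") for l in open("test_data/test.source")]
    mp.spawn(
        eval_shard,
        args=(world_size, "Helsinki-NLP/opus-mt-en-ro", src_lines, "translations/test"),
        nprocs=world_size,
    )
```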
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6785/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6784/comments
https://api.github.com/repos/huggingface/transformers/issues/6784/events
https://github.com/huggingface/transformers/pull/6784
687,683,827
MDExOlB1bGxSZXF1ZXN0NDc1MDY2MjMy
6,784
[style] set the minimal required version for `black`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
`make style` with `black < 20.8b1` is a no-go (in case some other package forced a lower version), so make the minimal required version explicit to avoid confusion.
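A hedged sketch of what such a guard could look like at runtime (the actual change presumably just pins the version in setup.py; the `packaging` dependency is an assumption):

```python
import black
from packaging.version import Version

# fail loudly instead of silently reformatting with an incompatible black
if Version(black.__version__) < Version("20.8b1"):
    raise RuntimeError(f"black >= 20.8b1 required, found {black.__version__}")
```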
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6784/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6784", "html_url": "https://github.com/huggingface/transformers/pull/6784", "diff_url": "https://github.com/huggingface/transformers/pull/6784.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6784.patch", "merged_at": 1598585890000 }
https://api.github.com/repos/huggingface/transformers/issues/6783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6783/comments
https://api.github.com/repos/huggingface/transformers/issues/6783/events
https://github.com/huggingface/transformers/issues/6783
687,667,415
MDU6SXNzdWU2ODc2Njc0MTU=
6,783
Why does examples/seq2seq/finetune.py only use sortish sampler for train?
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1845609017, "node_id": "MDU6TGFiZWwxODQ1NjA5MDE3", "url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq", "name": "seq2seq", "color": "fef2c0", "default": false, "description": "" }, { "id": 1936351150, "node_id": "MDU6TGFiZWwxOTM2MzUxMTUw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Examples", "name": "Examples", "color": "d4c5f9", "default": false, "description": "Which is related to examples in general" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "@sshleifer I can help out here. What exactly are you looking for?", "I'll take care of it actually, sorry! There is a harder one at #6785 if you're interested.", "Fixed, using sortish sampler for val. Much faster!" ]
1,598
1,600
1,600
CONTRIBUTOR
null
Would it make eval faster? Would it make the validation sanity check obnoxiously slow? This should be a ~1 line PR with a longer PR description. A sketch of the sampler idea follows below.
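For context, a rough sketch of what a "sortish" sampler does: group examples of similar source length so eval batches carry little padding. The real implementation lives in the seq2seq example utils; the names and chunking factor here are illustrative.

```python
import random
from torch.utils.data import Sampler

class SortishSampler(Sampler):
    """Yield indices so each batch has similar source lengths (sketch)."""

    def __init__(self, src_lens, batch_size, shuffle=False):
        self.src_lens, self.bs, self.shuffle = list(src_lens), batch_size, shuffle

    def __len__(self):
        return len(self.src_lens)

    def __iter__(self):
        idxs = list(range(len(self.src_lens)))
        if self.shuffle:  # keep False for deterministic validation
            random.shuffle(idxs)
        # sort within coarse chunks so batches are length-homogeneous
        width = self.bs * 50
        chunks = [idxs[i : i + width] for i in range(0, len(idxs), width)]
        ordered = [i for c in chunks for i in sorted(c, key=lambda j: -self.src_lens[j])]
        return iter(ordered)
```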
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6783/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6782/comments
https://api.github.com/repos/huggingface/transformers/issues/6782/events
https://github.com/huggingface/transformers/pull/6782
687,655,916
MDExOlB1bGxSZXF1ZXN0NDc1MDQzNzQ0
6,782
[style] a new `black` version is out there (20.8)
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "weird, I thought I had a newer version, but it was old for some reason. updating to `pip install black==20.8b1` resolved this." ]
1,598
1,598
1,598
CONTRIBUTOR
null
It now seems to respect `--line-length 119` in the `make style`/`make quality` commands, so this is a big reformat with no code changes. These are just the changes after running `make style`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6782/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6782", "html_url": "https://github.com/huggingface/transformers/pull/6782", "diff_url": "https://github.com/huggingface/transformers/pull/6782.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6782.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6781/comments
https://api.github.com/repos/huggingface/transformers/issues/6781/events
https://github.com/huggingface/transformers/issues/6781
687,640,128
MDU6SXNzdWU2ODc2NDAxMjg=
6,781
Typo in modeling_tf_bert
{ "login": "AileenLin", "id": 20249184, "node_id": "MDQ6VXNlcjIwMjQ5MTg0", "avatar_url": "https://avatars.githubusercontent.com/u/20249184?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AileenLin", "html_url": "https://github.com/AileenLin", "followers_url": "https://api.github.com/users/AileenLin/followers", "following_url": "https://api.github.com/users/AileenLin/following{/other_user}", "gists_url": "https://api.github.com/users/AileenLin/gists{/gist_id}", "starred_url": "https://api.github.com/users/AileenLin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AileenLin/subscriptions", "organizations_url": "https://api.github.com/users/AileenLin/orgs", "repos_url": "https://api.github.com/users/AileenLin/repos", "events_url": "https://api.github.com/users/AileenLin/events{/privacy}", "received_events_url": "https://api.github.com/users/AileenLin/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Good catch, thanks a lot! We will fix it ASAP.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
NONE
null
end_pos will be overwritten in some cases: https://github.com/huggingface/transformers/blob/858b7d5873577d7997918439eb7429d424d07dd5/src/transformers/modeling_tf_bert.py#L1422
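An illustration of the reported copy-and-paste pattern (the exact code at the linked line may differ; the values here are made up):

```python
import tensorflow as tf

seq_len = tf.constant(128)
start_pos, end_pos = tf.constant(5), tf.constant(300)
# buggy: reads start_pos, so the clamped start position clobbers end_pos
end_pos = tf.clip_by_value(start_pos, 0, seq_len)
# intended: clamp end_pos itself
# end_pos = tf.clip_by_value(end_pos, 0, seq_len)
```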
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6781/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6780/comments
https://api.github.com/repos/huggingface/transformers/issues/6780/events
https://github.com/huggingface/transformers/issues/6780
687,633,167
MDU6SXNzdWU2ODc2MzMxNjc=
6,780
How can I get rid of this message every time I run GPT2?
{ "login": "BigSalmon2", "id": 61605789, "node_id": "MDQ6VXNlcjYxNjA1Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BigSalmon2", "html_url": "https://github.com/BigSalmon2", "followers_url": "https://api.github.com/users/BigSalmon2/followers", "following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}", "gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions", "organizations_url": "https://api.github.com/users/BigSalmon2/orgs", "repos_url": "https://api.github.com/users/BigSalmon2/repos", "events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}", "received_events_url": "https://api.github.com/users/BigSalmon2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
``` Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```
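A sketch of one way to silence it (assuming a transformers version that ships the central logging utilities; on older versions, setting the stdlib logger for `transformers.modeling_utils` to ERROR works instead):

```python
from transformers import GPT2LMHeadModel
from transformers.utils import logging

logging.set_verbosity_error()  # hide INFO/WARNING messages from the library
model = GPT2LMHeadModel.from_pretrained("gpt2")  # loads without the warning
```

For this checkpoint the warning is generally considered benign: the masked_bias entries are buffers and lm_head.weight is tied to the input embeddings, so no task-specific training is actually required for generation.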
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6780/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6780/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6779/comments
https://api.github.com/repos/huggingface/transformers/issues/6779/events
https://github.com/huggingface/transformers/issues/6779
687,603,968
MDU6SXNzdWU2ODc2MDM5Njg=
6,779
Character-level tokenization?
{ "login": "gwenger4", "id": 66102339, "node_id": "MDQ6VXNlcjY2MTAyMzM5", "avatar_url": "https://avatars.githubusercontent.com/u/66102339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwenger4", "html_url": "https://github.com/gwenger4", "followers_url": "https://api.github.com/users/gwenger4/followers", "following_url": "https://api.github.com/users/gwenger4/following{/other_user}", "gists_url": "https://api.github.com/users/gwenger4/gists{/gist_id}", "starred_url": "https://api.github.com/users/gwenger4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gwenger4/subscriptions", "organizations_url": "https://api.github.com/users/gwenger4/orgs", "repos_url": "https://api.github.com/users/gwenger4/repos", "events_url": "https://api.github.com/users/gwenger4/events{/privacy}", "received_events_url": "https://api.github.com/users/gwenger4/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# ❓ Questions & Help In a project I'm using the Hugging Face library for, I would like to use transformers to perform arithmetic (framed as a text-to-text problem). However, it seems that the tokenization methods adopted by most existing models in the Hugging Face library (BPE, WordPiece, etc.) are not character-level and would therefore not allow the network to properly process the input for arithmetic. Are there any ways to get this sort of processing to work properly with the existing models in Hugging Face (and their accompanying tokenizers)? Otherwise, what recommendations do you have? For reference, I am currently using the BART model from Hugging Face.
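A minimal character-level preprocessing sketch for arithmetic-as-text (the vocabulary and special tokens are made up for illustration; pretrained BART checkpoints expect their own subword vocabulary, so a fully character-level setup usually means training from scratch, or alternatively inserting spaces between digits so the stock tokenizer splits them):

```python
chars = list("0123456789+-*/= ")
vocab = {c: i + 2 for i, c in enumerate(chars)}
vocab["<pad>"], vocab["<eos>"] = 0, 1

def encode(text, max_len=32):
    # one id per character, followed by <eos> and padding
    ids = [vocab[c] for c in text] + [vocab["<eos>"]]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))

print(encode("12+34=46"))
```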
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6779/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6778/comments
https://api.github.com/repos/huggingface/transformers/issues/6778/events
https://github.com/huggingface/transformers/issues/6778
687,602,106
MDU6SXNzdWU2ODc2MDIxMDY=
6,778
Bertology-like Analysis for BART, T5?
{ "login": "gwenger4", "id": 66102339, "node_id": "MDQ6VXNlcjY2MTAyMzM5", "avatar_url": "https://avatars.githubusercontent.com/u/66102339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwenger4", "html_url": "https://github.com/gwenger4", "followers_url": "https://api.github.com/users/gwenger4/followers", "following_url": "https://api.github.com/users/gwenger4/following{/other_user}", "gists_url": "https://api.github.com/users/gwenger4/gists{/gist_id}", "starred_url": "https://api.github.com/users/gwenger4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gwenger4/subscriptions", "organizations_url": "https://api.github.com/users/gwenger4/orgs", "repos_url": "https://api.github.com/users/gwenger4/repos", "events_url": "https://api.github.com/users/gwenger4/events{/privacy}", "received_events_url": "https://api.github.com/users/gwenger4/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# ❓ Questions & Help In my current project, I am working on training encoder-decoder models (BART, T5, etc.) and the Transformers library has been absolutely invaluable! After seeing several Bertology analyses (i.e. looking at the information the model's attention mechanism learns to attend to), I would like to know if a similar analysis is possible with the BART and T5 models in the Hugging Face library. Any recommendations are certainly appreciated!
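A sketch of the usual starting point for such an analysis (the checkpoint name is just an example, and the exact layout of the returned attention tensors may vary by library version):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained(
    "facebook/bart-base", output_attentions=True
)
model.eval()

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# the outputs now include per-layer attention tensors shaped
# (batch, num_heads, query_len, key_len) for the encoder and decoder
```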
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6778/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6777/comments
https://api.github.com/repos/huggingface/transformers/issues/6777/events
https://github.com/huggingface/transformers/pull/6777
687,594,144
MDExOlB1bGxSZXF1ZXN0NDc0OTkzNDY1
6,777
[transformers-cli] fix logger getter
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=h1) Report\n> Merging [#6777](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.90%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6777/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6777 +/- ##\n==========================================\n+ Coverage 78.47% 80.37% +1.90% \n==========================================\n Files 157 157 \n Lines 28569 28569 \n==========================================\n+ Hits 22420 22963 +543 \n+ Misses 6149 5606 -543 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `55.88% <100.00%> (+55.88%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.01% <0.00%> (-2.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| ... 
and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6777/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=footer). Last update [42fddac...6fd85c4](https://codecov.io/gh/huggingface/transformers/pull/6777?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
`transformers-cli` is currently broken: ``` File "src/transformers/commands/transformers_cli.py", line 8, in <module> from transformers.commands.serving import ServeCommand File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/commands/serving.py", line 28, in <module> logger = logging.getLogger("transformers-cli/serving") AttributeError: module 'transformers.utils.logging' has no attribute 'getLogger' ``` This PR fixes it, plus adds a basic test.
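The gist of the fix, for reference: the in-house logging module exposes `get_logger`, not the stdlib-style `getLogger`.

```python
from transformers.utils import logging

logger = logging.get_logger("transformers-cli/serving")  # works
# logging.getLogger(...) raises the AttributeError shown above
```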
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6777/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6777", "html_url": "https://github.com/huggingface/transformers/pull/6777", "diff_url": "https://github.com/huggingface/transformers/pull/6777.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6777.patch", "merged_at": 1598572877000 }
https://api.github.com/repos/huggingface/transformers/issues/6776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6776/comments
https://api.github.com/repos/huggingface/transformers/issues/6776/events
https://github.com/huggingface/transformers/pull/6776
687,586,161
MDExOlB1bGxSZXF1ZXN0NDc0OTg3Mjk0
6,776
PL: --adafactor option
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=h1) Report\n> Merging [#6776](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **decrease** coverage by `0.99%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6776/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6776 +/- ##\n==========================================\n- Coverage 78.47% 77.48% -1.00% \n==========================================\n Files 157 157 \n Lines 28569 28569 \n==========================================\n- Hits 22420 22137 -283 \n- Misses 6149 6432 +283 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: |\n| ... 
and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6776/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=footer). Last update [42fddac...c769370](https://codecov.io/gh/huggingface/transformers/pull/6776?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
CC @moscow25, @patil-suraj I used the "External LR" setup and verified that it saves a significant amount of memory on pegasus finetuning. Happy to add to Trainer.
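For reference, a sketch of the "External LR" Adafactor setup mentioned above (kwargs follow the transformers Adafactor signature; the model and learning rate are stand-ins):

```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(10, 10)  # stand-in for the seq2seq model
# externally managed LR: disable Adafactor's internal schedule
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
```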
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6776/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6776", "html_url": "https://github.com/huggingface/transformers/pull/6776", "diff_url": "https://github.com/huggingface/transformers/pull/6776.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6776.patch", "merged_at": 1598581187000 }
https://api.github.com/repos/huggingface/transformers/issues/6775
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6775/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6775/comments
https://api.github.com/repos/huggingface/transformers/issues/6775/events
https://github.com/huggingface/transformers/pull/6775
687,570,506
MDExOlB1bGxSZXF1ZXN0NDc0OTc0MjI1
6,775
Support fast tokenizer in Question Answering Pipeline
{ "login": "bdalal", "id": 3478378, "node_id": "MDQ6VXNlcjM0NzgzNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/3478378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bdalal", "html_url": "https://github.com/bdalal", "followers_url": "https://api.github.com/users/bdalal/followers", "following_url": "https://api.github.com/users/bdalal/following{/other_user}", "gists_url": "https://api.github.com/users/bdalal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bdalal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bdalal/subscriptions", "organizations_url": "https://api.github.com/users/bdalal/orgs", "repos_url": "https://api.github.com/users/bdalal/repos", "events_url": "https://api.github.com/users/bdalal/events{/privacy}", "received_events_url": "https://api.github.com/users/bdalal/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=h1) Report\n> Merging [#6775](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.22%`.\n> The diff coverage is `13.33%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6775/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6775 +/- ##\n==========================================\n+ Coverage 78.47% 79.70% +1.22% \n==========================================\n Files 157 157 \n Lines 28569 28579 +10 \n==========================================\n+ Hits 22420 22779 +359 \n+ Misses 6149 5800 -349 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `27.59% <13.33%> (-0.54%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.07% <0.00%> (+0.12%)` | :arrow_up: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6775/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=footer). Last update [42fddac...54cbfb1](https://codecov.io/gh/huggingface/transformers/pull/6775?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@LysandreJik @mfuntowicz \r\nJust checking in to see if this PR is good, or does it need some more improvements?\r\n\r\nThanks", "Hi @bdalal,\n\nWill have a look at it ASAP, sorry for the delay \n\nMorgan", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,605
1,605
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #6545 The problem is that the behavior of the python tokenizer and the rust-based fast tokenizer is very different. The python tokenizer handles cases where inputs are in different formats (str tokens and int tokens and vice versa), whereas the fast tokenizer is unable to do so. Some edge cases where the fast tokenizer fails because of limited functionality (compared to the python tokenizer) have been added, and I've included comments in those places.
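For illustration, one capability difference that drives these code-path splits: fast tokenizers can return character offsets directly, which the python tokenizer path has to reconstruct token by token (the checkpoint name is just an example).

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
enc = tokenizer("Who wrote it?", "It was written by Jane.",
                return_offsets_mapping=True)
print(enc["offset_mapping"])  # (start_char, end_char) per token; (0, 0) for special tokens
```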
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6775/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6775", "html_url": "https://github.com/huggingface/transformers/pull/6775", "diff_url": "https://github.com/huggingface/transformers/pull/6775.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6775.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6774
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6774/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6774/comments
https://api.github.com/repos/huggingface/transformers/issues/6774/events
https://github.com/huggingface/transformers/issues/6774
687,510,773
MDU6SXNzdWU2ODc1MTA3NzM=
6,774
Task specific params for pegasus-large to allow finetuning with correct generation_parameters
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Done!\r\n```python\r\n# Config values that vary between checkpoints: for testing and conversion\r\ntask_specific_params = {\r\n # These are task specific params for pegasus-large and normal params for finetuned checkpoints\r\n \"summarization_xsum\": {\"length_penalty\": 0.8, \"max_length\": 64, \"max_position_embeddings\": 512},\r\n \"summarization_cnn_dailymail\": {\"length_penalty\": 0.8, \"max_length\": 128, \"max_position_embeddings\": 1024},\r\n \"summarization_newsroom\": {\"length_penalty\": 0.8, \"max_length\": 128, \"max_position_embeddings\": 512},\r\n \"summarization_wikihow\": {\"length_penalty\": 0.6, \"max_length\": 256, \"max_position_embeddings\": 512},\r\n \"summarization_multi_news\": {\"length_penalty\": 0.8, \"max_length\": 256, \"max_position_embeddings\": 1024},\r\n \"summarization_reddit_tifu\": {\"length_penalty\": 0.6, \"max_length\": 128, \"max_position_embeddings\": 512},\r\n \"summarization_big_patent\": {\"length_penalty\": 0.7, \"max_length\": 256, \"max_position_embeddings\": 1024},\r\n \"summarization_arxiv\": {\"length_penalty\": 0.8, \"max_length\": 256, \"max_position_embeddings\": 1024},\r\n \"summarization_pubmed\": {\"length_penalty\": 0.8, \"max_length\": 256, \"max_position_embeddings\": 1024},\r\n \"summarization_gigaword\": {\"length_penalty\": 0.6, \"max_length\": 32, \"max_position_embeddings\": 128},\r\n \"summarization_aeslc\": {\"length_penalty\": 0.6, \"max_length\": 32, \"max_position_embeddings\": 512},\r\n \"summarization_billsum\": {\"length_penalty\": 0.6, \"max_length\": 256, \"max_position_embeddings\": 1024},\r\n # this last entry is useless -- just for consistency\r\n \"summarization_large\": {\"length_penalty\": 0.8, \"max_length\": 256, \"max_position_embeddings\": 1024},\r\n}\r\n```" ]
1,598
1,598
1,598
CONTRIBUTOR
null
``` 'task_specific_params': {'summ_xsum': {'max_length': 56, 'length_penalty': 0.8}} ```
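For context, a sketch of how such params are typically consumed before generation (the key name is taken from the snippet above, and the checkpoint name assumes a version with Pegasus support; attribute handling is illustrative):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/pegasus-large")
params = (config.task_specific_params or {}).get("summ_xsum", {})
for key, value in params.items():
    setattr(config, key, value)  # e.g. max_length=56, length_penalty=0.8
```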
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6774/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6774/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6773
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6773/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6773/comments
https://api.github.com/repos/huggingface/transformers/issues/6773/events
https://github.com/huggingface/transformers/issues/6773
687,484,153
MDU6SXNzdWU2ODc0ODQxNTM=
6,773
Error in run_ner.py
{ "login": "moniquebm", "id": 60358442, "node_id": "MDQ6VXNlcjYwMzU4NDQy", "avatar_url": "https://avatars.githubusercontent.com/u/60358442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moniquebm", "html_url": "https://github.com/moniquebm", "followers_url": "https://api.github.com/users/moniquebm/followers", "following_url": "https://api.github.com/users/moniquebm/following{/other_user}", "gists_url": "https://api.github.com/users/moniquebm/gists{/gist_id}", "starred_url": "https://api.github.com/users/moniquebm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moniquebm/subscriptions", "organizations_url": "https://api.github.com/users/moniquebm/orgs", "repos_url": "https://api.github.com/users/moniquebm/repos", "events_url": "https://api.github.com/users/moniquebm/events{/privacy}", "received_events_url": "https://api.github.com/users/moniquebm/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.3 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @stefan-it ## Information The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: Rare Entities task: WNUT’17 (English NER) dataset * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: When I try to execute run_ner.py, the following error occurs: 08/27/2020 16:10:10 - INFO - filelock - Lock 2174122787120 acquired on ./data_wnut_17\cached_dev_BertTokenizer_128.lock 08/27/2020 16:10:24 - INFO - utils_ner - Creating features from dataset file at ./data_wnut_17 08/27/2020 16:12:04 - INFO - filelock - Lock 2174122787120 released on ./data_wnut_17\cached_dev_BertTokenizer_128.lock Traceback (most recent call last): File "C:\Users\Monique\Documents\IA\NLP\transformers\examples\token-classification\utils_ner.py", line 247, in __init__ examples = token_classification_task.read_examples_from_file(data_dir, mode) File "C:\Users\Monique\Documents\IA\NLP\transformers\examples\token-classification\tasks.py", line 27, in read_examples_from_file for line in f: File "C:\ProgramData\Anaconda3\lib\codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb4 in position 46: invalid start byte My only local change was the following in preprocess.py: `with open(dataset, "rt", encoding='utf-8') as f_p:` (I added encoding='utf-8' due to a similar error in preprocessing). ## Expected behavior The dataset should have been created without encoding errors.
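Since byte 0xb4 is not valid UTF-8, the dataset file is likely in a different encoding (e.g. latin-1/cp1252). A hedged sketch of a more lenient reader for preprocess.py (the fallback encoding is an assumption about this file):

```python
def read_lines(path):
    # try strict UTF-8 first; latin-1 decodes any byte sequence, so it is a safe fallback
    try:
        with open(path, encoding="utf-8") as f:
            return f.read().splitlines()
    except UnicodeDecodeError:
        with open(path, encoding="latin-1") as f:
            return f.read().splitlines()
```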
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6773/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6772
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6772/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6772/comments
https://api.github.com/repos/huggingface/transformers/issues/6772/events
https://github.com/huggingface/transformers/issues/6772
687,468,837
MDU6SXNzdWU2ODc0Njg4Mzc=
6,772
Unable to install Transformers Master version
{ "login": "AnkitVarshney02", "id": 42990727, "node_id": "MDQ6VXNlcjQyOTkwNzI3", "avatar_url": "https://avatars.githubusercontent.com/u/42990727?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnkitVarshney02", "html_url": "https://github.com/AnkitVarshney02", "followers_url": "https://api.github.com/users/AnkitVarshney02/followers", "following_url": "https://api.github.com/users/AnkitVarshney02/following{/other_user}", "gists_url": "https://api.github.com/users/AnkitVarshney02/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnkitVarshney02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnkitVarshney02/subscriptions", "organizations_url": "https://api.github.com/users/AnkitVarshney02/orgs", "repos_url": "https://api.github.com/users/AnkitVarshney02/repos", "events_url": "https://api.github.com/users/AnkitVarshney02/events{/privacy}", "received_events_url": "https://api.github.com/users/AnkitVarshney02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Answered in #6752" ]
1,598
1,598
1,598
NONE
null
@patil-suraj @joeddav Is there some other way to install the master version of transformers? I tried using the URL to install the master version, but it installed v3.0.2 _Originally posted by @AnkitVarshney02 in https://github.com/huggingface/transformers/issues/6752#issuecomment-681716615_
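For reference, a quick check that an install from source actually took (the pip command in the comment is the standard way to install from master):

```python
# pip install git+https://github.com/huggingface/transformers.git
import transformers

print(transformers.__version__)  # should report something newer than 3.0.2
```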
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6772/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6771
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6771/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6771/comments
https://api.github.com/repos/huggingface/transformers/issues/6771/events
https://github.com/huggingface/transformers/issues/6771
687,438,725
MDU6SXNzdWU2ODc0Mzg3MjU=
6,771
Exported TF Bert model is much slower than that exported from Google's Bert
{ "login": "jlei2", "id": 70337521, "node_id": "MDQ6VXNlcjcwMzM3NTIx", "avatar_url": "https://avatars.githubusercontent.com/u/70337521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlei2", "html_url": "https://github.com/jlei2", "followers_url": "https://api.github.com/users/jlei2/followers", "following_url": "https://api.github.com/users/jlei2/following{/other_user}", "gists_url": "https://api.github.com/users/jlei2/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlei2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlei2/subscriptions", "organizations_url": "https://api.github.com/users/jlei2/orgs", "repos_url": "https://api.github.com/users/jlei2/repos", "events_url": "https://api.github.com/users/jlei2/events{/privacy}", "received_events_url": "https://api.github.com/users/jlei2/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This looks super interesting @jlei2 ! I openen a PR #6877 to see if changing to EinusmDense speeds up the runtime...In the PR I cannot see an improvement - could you take a look and maybe comment on how you changed the TF Bert code in transformers to benchmark your changes? :-) ", "Hi Patrick, thank you for taking time to investigate this issue! Your PR is not completely correct. Here is my draft [PR](https://github.com/jlei2/transformers/pull/1). DenseEinsum actually is the combination of matrix reshape/transpose and dense layer. So we need to be careful when replacing keras.dense with DenseEinsum, otherwise we will not be able to get the same output as we can get before this change.\r\n\r\nI tested on my end that there is no speedup after this change **at runtime**, using the huggingface bechmark command tool given by you. But I did see expected speedup after exporting DenseEinsum models. I used tensorflow profiling tool(https://www.tensorflow.org/tfx/serving/tensorboard) to benchmark the inference time of exported model when serving on TF-serving 2.2.0. So actually I don't have my own benchmark code.\r\n\r\nDevice: 1 GPU-V100\r\nBatch Size: 1\r\nModel: Bert-base FP32\r\n\r\n<img width=\"220\" alt=\"Screen Shot 2020-09-02 at 1 52 08 PM\" src=\"https://user-images.githubusercontent.com/70337521/92035460-981e9200-ed23-11ea-8141-a5f8e600559a.png\"> <img width=\"220\" alt=\"Screen Shot 2020-09-02 at 1 52 28 PM\" src=\"https://user-images.githubusercontent.com/70337521/92035489-a2409080-ed23-11ea-9414-dd2bb6e8ea68.png\">\r\nSo the left picture is from the Huggingface model after applying my PR. The right one is from original Huggingface model using current master. You can see that there is almost 100% speedup. So I suspect this issue only happens on exported model for tf-serving. \r\n\r\nThe code I used to export saved model:\r\n```\r\nimport tensorflow as tf\r\nimport transformers\r\n\r\nfrom transformers import BertModel\r\npt_model_path = 'bert-base-uncased'\r\n\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\nmodel.save_pretrained(pt_model_path)\r\n\r\nMAX_LEN = 128\r\nmodel_path = 'saved_model/tmp_model'\r\ninput_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='input_ids')\r\nattention_mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='attention_mask')\r\ntoken_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='token_type_ids')\r\nbase_model = transformers.TFBertModel.from_pretrained(pt_model_path, output_hidden_states=False, from_pt=True)\r\nbase_output = base_model.bert([input_ids, attention_mask, token_type_ids])\r\nseq_out, pool_out = base_output[0], base_output[1]\r\nbase_model.trainable = False\r\nmodel = tf.keras.models.Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[pool_out])\r\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])\r\nprint(model.summary())\r\nmodel.save(model_path, include_optimizer=False, save_format=\"tf\")\r\n```\r\n\r\nIs this something you will want to solve on your end? though we see that during runtime there is no difference.", "Hey @jlei2, \r\n\r\nThanks a lot for checking and your draft PR. I think we will be able to implement your proposition given that we can keep complete backwards compatibility. Do you have an idea by any chance why we can see the speed up only on TF-serving? \r\n\r\nAlso cc @jplu - this looks very interesting! ", "To be honest I have no idea about the root difference on TFS side. 
I can only observe that there are some CPU processes wasting time before the Matmul op. My feeling is that the MatMul op triggered by tf.keras.layers.Dense() may not be implemented to be very efficient on TF-serving. Though this issue seems to have more to do with tensorflow keras team or TFS team instead of Huggingface code base imo, it would be helpful to all Huggingface users if you are able to resolve this issue on your end.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Re-activated cc @jplu ", "Hey @jlei2!\r\n\r\nSorry for this long silence. We are currently working on this performance issue and to be sure to make the proper changes, can you share with us your benchmark script that you used to compute the values in the table?\r\n\r\nThanks!", "Hi @jplu , thanks for keeping an eye on this! Unfortunately I can not share the benchmark script I used to get the numbers in the table with you because it's confidential. But I can share how you can obtain the same slow-down observation using Tensorflow Profiler tool (to get the two Performance Summary Screenshots I uploaded above 15.7ms vs 28.8ms).\r\n\r\n### Export baseline and modified SavedModel\r\n**Baseline transformers repo**: The commit I used is the same as the master branch in my fork (https://github.com/jlei2/transformers)\r\n\r\n**Modified transformers repo using tf.keras.layers.experimental.EinsumDense**: (https://github.com/jlei2/transformers/tree/dense-einsum-tf2.3)\r\n\r\n**The code to export the model**: \r\n```\r\nimport tensorflow as tf\r\nimport transformers\r\n\r\nfrom transformers import BertModel\r\npt_model_path = 'bert-base-uncased'\r\n\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\nmodel.save_pretrained(pt_model_path)\r\n\r\nMAX_LEN = 128\r\nmodel_path = 'saved_model/tmp_model'\r\ninput_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='input_ids')\r\nattention_mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='attention_mask')\r\ntoken_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='token_type_ids')\r\nbase_model = transformers.TFBertModel.from_pretrained(pt_model_path, output_hidden_states=False, from_pt=True)\r\nbase_output = base_model.bert([input_ids, attention_mask, token_type_ids])\r\nseq_out, pool_out = base_output[0], base_output[1]\r\nbase_model.trainable = False\r\nmodel = tf.keras.models.Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[pool_out])\r\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])\r\nprint(model.summary())\r\nmodel.save(model_path, include_optimizer=False, save_format=\"tf\")\r\n```\r\n\r\nThen on a machine with 1 GPU-V100 and TF-Serving-gpu== 2.3.0 and Tensorflow==2.3.0:\r\n### Spin up SavedModel:\r\n```\r\nexport LD_LIBRARY_PATH=/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH\r\nexport MODEL_DIR=your_model_dir\r\nexport MODEL_NAME=your_model_name\r\nnohup tensorflow_model_server --rest_api_port=8501 --model_name=$MODEL_NAME --model_base_path=$MODEL_DIR >server.log 2>&1\r\n```\r\n\r\n### Spin up Tensorboard\r\n```\r\ntensorboard --logdir ~/logs/inference_demo/ --port your_port_number --bind_all\r\n```\r\n\r\nGo to the profile page on Tensorboard\r\n<img width=\"447\" alt=\"Screen Shot 2020-12-01 at 8 58 18 PM\" src=\"https://user-images.githubusercontent.com/70337521/100830662-03f2c280-3419-11eb-95bc-f0d70caec6db.png\">\r\n\r\nAfter clicking 
capture, I would send request to TFS to profile the whole process\r\n\r\n**Code to send request to served model**:\r\n```\r\nimport json\r\nimport requests\r\n\r\nBATCH_SIZE = 1\r\nMAX_SEQ_LEN = 128\r\nbatch = []\r\nMODEL_NAME = your_model_name\r\n\r\nfor _ in range(BATCH_SIZE):\r\n batch.append({\"input_ids\":[999] * MAX_SEQ_LEN, \"attention_mask\":[1] * MAX_SEQ_LEN, \"token_type_ids\":[0] * MAX_SEQ_LEN})\r\n\r\ninput_data = {\"instances\": batch}\r\nr = requests.post(\"http://localhost:8501/v1/models/%s:predict\"%(MODEL_NAME), data=json.dumps(input_data))\r\nprint(r.text)\r\n```\r\nI would run this scripts for several times first to warm up the model and then start to profile formally. \r\n\r\nAnd finally you will see profiling results on Tensorboard UI page just like what I uploaded.\r\n\r\nHope this could be helpful to you!", "Actually I write a very simple benchmark script that can show the difference:\r\n\r\n```\r\nimport json\r\nimport requests\r\nimport time\r\nBATCH_SIZE = 1\r\nMAX_SEQ_LEN = 128\r\nbatch = []\r\nMODEL_NAME = your_model_name\r\n\r\nfor _ in range(BATCH_SIZE):\r\n\tbatch.append({\"input_ids\":[999] * MAX_SEQ_LEN, \"attention_mask\":[1] * MAX_SEQ_LEN, \"token_type_ids\":[0] * MAX_SEQ_LEN})\r\n\r\ninput_data = {\"instances\": batch}\r\n\r\nstart = time.time()\r\nfor _ in range(100):\r\n\tr = requests.post(\"http://localhost:8501/v1/models/%s:predict\"%(MODEL_NAME), data=json.dumps(input_data))\r\nend = time.time()\r\nprint(end-start)\r\n```\r\nBaseline time: ~2.8s\r\nMy version's time: ~1.5s.\r\n\r\nSo we can easily see ~2x speed up.", "Awesome!! Thanks a lot @jlei2!! This is already a good start to check the differences. I will let you know here once we have done the changes!", "@jlei2 I have open a PR for integrating this change. Unfortunately, as I'm on Windows, the GPU profiling is not yet available in WSL, can you clone my branch and run it on your side with your own benchmark in order to be sure that it looks ok.\r\n\r\nThere is a small update on the code to create the saved model:\r\n```\r\nimport tensorflow as tf\r\nimport transformers\r\n\r\nMAX_LEN = 128\r\nmodel_path = 'saved_model/tmp_model'\r\ninput_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='input_ids')\r\nattention_mask = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='attention_mask')\r\ntoken_type_ids = tf.keras.layers.Input((MAX_LEN,), dtype=tf.int32, name='token_type_ids')\r\nbase_model = transformers.TFBertModel.from_pretrained(\"bert-base-uncased\")\r\ninputs = {\"input_ids\": input_ids, \"attention_mask\": attention_mask, \"token_type_ids\": token_type_ids}\r\nbase_output = base_model.bert(inputs)\r\nseq_out, pool_out = base_output[0], base_output[1]\r\nbase_model.trainable = False\r\nmodel = tf.keras.models.Model(inputs=inputs, outputs=[pool_out])\r\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])\r\nprint(model.summary())\r\nmodel.save(model_path, include_optimizer=False, save_format=\"tf\")\r\n```\r\n\r\nNo need anymore to load from the PT model, thanks to some update we have applied on the loading weights mechanism.", "Hi @jplu ,\r\n\r\nThanks for opening the PR to integrate this change. I have cloned your branch and run benchmark on my side. 
The results are as expected!\r\n\r\n**latency: (ms/batch) on 1 GPU-V100**\r\nBatch Size | current master | tf-einsumdense |\r\n-- | -- | --\r\n1 | 20.9 | 6.26\r\n2 | 24.1 | 8.68\r\n4 | 27.6 | 13.1\r\n8 | 36.3 | 21.5\r\n16 | 58.8 | 42.3\r\n32 | 94.7 | 80.4\r\n64 | 170 | 156\r\n128 | 321 | 309\r\n\r\nAnd on GPU profiling, obtained the same observation like I posted in the very first comment: GPU computation is continuous and compact. It is not cut off by CPU process anymore.\r\n\r\n**current master, batch_size=128**\r\n![image](https://user-images.githubusercontent.com/70337521/102310915-c580fb80-3f20-11eb-8062-b89e8b11dc49.png)\r\n\r\n**tf-einsumdense, batch_size=128**\r\n![image](https://user-images.githubusercontent.com/70337521/102311089-12fd6880-3f21-11eb-9129-100081eef716.png)\r\n\r\n\r\n\r\n" ]
1,598
1,611
1,611
NONE
null
I benchmarked the Bert model exported from the Huggingface TF Bert code and observed that it is much slower than Google's TF2 Bert model (https://github.com/tensorflow/models/tree/master/official/nlp/bert) on TF Serving (GPU device). When batch_size=1, the Huggingface Bert model can be more than 200% slower than Google's. Here are my benchmark results:

**latency: (ms/batch) on 1 GPU-V100**

Batch Size | google official | HF_TF2 | relative change
-- | -- | --  | --
1 | 6.7 | 21.3 | 238.10%
2 | 9.4 | 24.2 | 171.91%
4 | 14.4 | 28.1 | 99.29%
8 | 24.0 | 36.9 | 55.70%
16 | 46.6 | 58.6 | 26.57%
64 | 171.5 | 171.4 | 0.35%
128 | 338.5 | 324.5 | -3.71%

I used the latest TensorFlow profiling tool (https://www.tensorflow.org/tfx/serving/tensorboard) to compare these two:

**Hugging-face Bert-Base FP32 Batch_size = 128** <img width="1518" alt="Screen Shot 2020-06-05 at 8 20 34 PM" src="https://user-images.githubusercontent.com/70337521/91475490-87b47600-e850-11ea-8e19-ac2b5103073d.png">

**Google Bert-Base FP32 Batch_size = 128** <img width="1518" alt="image (1)" src="https://user-images.githubusercontent.com/70337521/91475500-8a16d000-e850-11ea-880c-eab176c0db59.png">

Comparing the event traces of HF's and Google's models, we see that before each *MatMul* operation on the GPU, there are always some CPU processes running (I wasn't able to get more info about these CPU processes because the profiling tool doesn't provide meaningful labels or descriptions for them, but it seems like they are doing some gather/reduce operations) and the GPU is inactive for a while. If you look at the trace of the Google official model, you can find that all GPU ops are compact and there is no idle time for the GPU. In parallel, only one CPU process keeps running and doesn't cause GPU ops to wait. The short idle time hurts performance more when batch_size or model_size is small, because less time is spent on GPU calculations and the idle time is no longer negligible. See the time comparison grouped by type below:

**Hugging-face Bert-Base FP32 Batch_size = 4** <img width="425" alt="image (3)" src="https://user-images.githubusercontent.com/70337521/91475887-175a2480-e851-11ea-8493-a25f0cea4914.png">

**Google Bert-Base FP32 Batch_size = 4** <img width="418" alt="image (4)" src="https://user-images.githubusercontent.com/70337521/91475882-145f3400-e851-11ea-830d-7ed59a04e3bc.png">

Under a small batch_size=4, the Hugging-face model has 60% GPU idle time while that of the Google model is only 25%. This also explains why there is an over-200% slowdown for small batch_size, while for a large batch_size=128 the situation gets better. The *MatMul* op is triggered by tf.keras.layers.Dense(), which is widely used in the Transformer encoder self-attention, intermediate and output layers. In comparison, Google's Bert uses DenseEinsum() ([link](https://github.com/tensorflow/models/blob/1ebff962832ec8a599bd67ac8c43b9ae1b05c3fa/official/nlp/transformer/attention_layer.py#L59-L69)) to replace all usage of the Dense layer. I personally modified Huggingface's TF Bert code base to use DenseEinsum(), 
and the slowdown issue got solved:

**latency: (ms/batch) on 1 GPU-V100**

Batch Size | google official | HF_TF2 | relative change | After Fixing | relative change
-- | -- | -- | -- | -- | --
1 | 6.7 | 21.3 | 238.10% | 6.60 | 4.76%
2 | 9.4 | 24.2 | 171.91% | 8.90 | 0.00%
4 | 14.4 | 28.1 | 99.29% | 13.40 | -4.96%
8 | 24.0 | 36.9 | 55.70% | 22.10 | -6.75%
16 | 46.6 | 58.6 | 26.57% | 43.20 | -6.70%
64 | 171.5 | 171.4 | 0.35% | 158.80 | -7.03%
128 | 338.5 | 324.5 | -3.71% | 313.10 | -7.09%

Profile after fixing: <img width="1517" alt="image (4) copy" src="https://user-images.githubusercontent.com/70337521/91475978-3789e380-e851-11ea-9dae-a5136fac1074.png">

I noticed that multiple issues may be related to this (https://github.com/huggingface/transformers/issues/6264). Do you plan to solve this issue? Thanks. @patrickvonplaten
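For illustration, here is a minimal sketch of the kind of replacement discussed in this issue: fusing the Dense projection and the per-head reshape into a single einsum op. The shapes and layer arguments below are assumptions for a BERT-base query projection, not the exact code from the linked PRs:

```python
import tensorflow as tf

hidden_size, num_heads = 768, 12
head_size = hidden_size // num_heads

# Baseline: Dense projection, then an explicit reshape to split heads.
dense_query = tf.keras.layers.Dense(hidden_size)

# Fused alternative (TF >= 2.3): one einsum that projects and splits heads.
einsum_query = tf.keras.layers.experimental.EinsumDense(
    equation="abc,cde->abde",                  # [batch, seq, hidden] x [hidden, heads, head_size]
    output_shape=(None, num_heads, head_size),  # None stands in for the sequence dim
    bias_axes="de",
)

x = tf.random.normal((2, 128, hidden_size))    # [batch, seq, hidden]
out_dense = tf.reshape(dense_query(x), (2, 128, num_heads, head_size))
out_einsum = einsum_query(x)                   # same output shape, single fused op
print(out_dense.shape, out_einsum.shape)       # (2, 128, 12, 64) for both
```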
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6771/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6770
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6770/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6770/comments
https://api.github.com/repos/huggingface/transformers/issues/6770/events
https://github.com/huggingface/transformers/issues/6770
687,432,872
MDU6SXNzdWU2ODc0MzI4NzI=
6,770
Roberta Large not working with the SNLI dataset: always gives the random baseline of 33 percent accuracy.
{ "login": "bhavdeep98", "id": 6837380, "node_id": "MDQ6VXNlcjY4MzczODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6837380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavdeep98", "html_url": "https://github.com/bhavdeep98", "followers_url": "https://api.github.com/users/bhavdeep98/followers", "following_url": "https://api.github.com/users/bhavdeep98/following{/other_user}", "gists_url": "https://api.github.com/users/bhavdeep98/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavdeep98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavdeep98/subscriptions", "organizations_url": "https://api.github.com/users/bhavdeep98/orgs", "repos_url": "https://api.github.com/users/bhavdeep98/repos", "events_url": "https://api.github.com/users/bhavdeep98/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavdeep98/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
I have been working with parts of the SNLI dataset. Even after hours and hours of pretraining, the model always gives 33.24 percent accuracy, no matter the amount of data used in pretraining. This issue is fixed if I revert my transformers version to 2.11.0.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6770/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6769
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6769/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6769/comments
https://api.github.com/repos/huggingface/transformers/issues/6769/events
https://github.com/huggingface/transformers/pull/6769
687,418,775
MDExOlB1bGxSZXF1ZXN0NDc0ODQ4NDQ1
6,769
Seq2SeqTrainer
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Command for running `finetune_trainer.py`\r\n\r\n```bash\r\npython finetune_trainer.py \\\r\n --model_name_or_path sshleifer/bart-tiny-random \\\r\n --data_dir xsum \\\r\n --output_dir test \\\r\n --overwrite_output_dir \\\r\n --n_train 8 \\\r\n --n_val 8 \\\r\n --max_source_length 512 \\\r\n --max_target_length 56 \\\r\n --val_max_target_length 56 \\\r\n --do_train \\\r\n --do_eval \\\r\n --num_train_epochs 2 \\\r\n --per_device_train_batch_size 4 \\\r\n --per_device_eval_batch_size 4 \\\r\n --evaluate_during_training \\\r\n --predict_from_generate \\\r\n --logging_steps 2 \\\r\n --save_steps 2 \\\r\n --eval_steps 2 \\\r\n --sortish_sampler \\\r\n```", "Note: Eventually we need to refactor seq2seq/README.md to accommodate this", "This looks great. Very excited to try this out with the `EncoderDecoder` model.", "LGTM pending resolution of padding mystery.", "Great work @patil-suraj !" ]
1,598
1,601
1,600
MEMBER
null
This PR adds `Seq2SeqTrainer` to use `Trainer` for summarization, translation and other seq2seq tasks.

`Seq2SeqTrainer` includes:
- label smoothing loss
- sortish sampler
- predict-from-generate, to allow calculating generative metrics

`finetune_trainer.py` includes:
- `Seq2SeqDataCollator` to correctly prepare `labels` and `decoder_input_ids` (a sketch of this preparation follows below)
- `Seq2SeqTrainingArguments` for extra training arguments
- encoder and embedding freezing
- `compute_metrics_fn` for translation (BLEU) and summarization (ROUGE) evaluation

Most of the data and model arguments are kept identical to the `finetune.py` file. Seq2Seq4Eva!🔥
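As a rough illustration of what the collator's label/decoder-input preparation involves, here is a sketch modeled on BART's shift-tokens-right convention; it assumes padded `labels` of shape `[batch, seq]` and is not necessarily the exact collator code:

```python
import torch

def shift_tokens_right(labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Rotate the target ids one position to the right so the decoder
    predicts token t from tokens < t (BART-style)."""
    decoder_input_ids = labels.clone()
    # index of the last non-pad token (assumed to be EOS) in each row
    index_of_eos = (labels.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    decoder_input_ids[:, 0] = labels.gather(1, index_of_eos).squeeze(-1)
    decoder_input_ids[:, 1:] = labels[:, :-1]
    return decoder_input_ids
```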
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6769/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6769/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6769", "html_url": "https://github.com/huggingface/transformers/pull/6769", "diff_url": "https://github.com/huggingface/transformers/pull/6769.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6769.patch", "merged_at": 1600987619000 }
https://api.github.com/repos/huggingface/transformers/issues/6768
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6768/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6768/comments
https://api.github.com/repos/huggingface/transformers/issues/6768/events
https://github.com/huggingface/transformers/pull/6768
687,402,297
MDExOlB1bGxSZXF1ZXN0NDc0ODM0NTY0
6,768
Floating-point operations logging in trainer
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=h1) Report\n> Merging [#6768](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.17%`.\n> The diff coverage is `38.57%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6768/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6768 +/- ##\n==========================================\n+ Coverage 78.47% 79.65% +1.17% \n==========================================\n Files 157 157 \n Lines 28569 28625 +56 \n==========================================\n+ Hits 22420 22800 +380 \n+ Misses 6149 5825 -324 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `51.85% <31.48%> (-1.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.66% <62.50%> (-0.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.56% <0.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <0.00%> (+4.00%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `88.13% <0.00%> (+68.28%)` | :arrow_up: |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6768/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `86.00% <0.00%> (+76.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=footer). 
Last update [42fddac...4becfac](https://codecov.io/gh/huggingface/transformers/pull/6768?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I think even with domain-agnostic models we'd like to keep the configuration, no? I'm not sure the trainer would behave correctly without a configuration, so if we want to remove the dependency on configurations, we might as well do it all at once, right?\r\n\r\nWould the goal be to have the trainer accept all `nn.Module`s?", "As agreed upon internally, we will move to Trainer accepting models instantiating a base abstract class / conforming to some protocol. I think the config will be among the required fields, but I have to work a bit more on this to be sure.\r\n\r\nIn any case, this is work for a subsequent PR :-)" ]
1,598
1,599
1,599
CONTRIBUTOR
null
First of two PRs to implement #4847:
- logging loss vs floating-point operations
- using the results for scaling-laws analysis

This directly logs floating-point operations in wandb and comet, and creates a `log_history.json` file with training metrics. To do so, it adds methods to `PretrainedModel` to count parameters with and without embeddings, and to count the number of floating-point operations. It also has a few Trainer fixes, most importantly averaging the eval loss across processes rather than logging the one in process 0, and a fix for a bug with checkpoint folder creation.
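For intuition on the bookkeeping described here, a sketch of one common way to count non-embedding parameters and estimate training floating-point operations, using the 6·N·D rule of thumb from the scaling-laws literature; the actual Trainer hooks may differ:

```python
import torch.nn as nn

def non_embedding_params(model: nn.Module) -> int:
    # Subtract parameters that live inside nn.Embedding modules from the total.
    embedding = sum(p.numel()
                    for m in model.modules() if isinstance(m, nn.Embedding)
                    for p in m.parameters())
    total = sum(p.numel() for p in model.parameters())
    return total - embedding

def estimate_train_flos(model: nn.Module, tokens_seen: int) -> int:
    # Forward + backward over D tokens with N non-embedding parameters
    # costs roughly 6 * N * D floating-point operations.
    return 6 * non_embedding_params(model) * tokens_seen
```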
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6768/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6768/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6768", "html_url": "https://github.com/huggingface/transformers/pull/6768", "diff_url": "https://github.com/huggingface/transformers/pull/6768.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6768.patch", "merged_at": 1599573657000 }
https://api.github.com/repos/huggingface/transformers/issues/6767
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6767/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6767/comments
https://api.github.com/repos/huggingface/transformers/issues/6767/events
https://github.com/huggingface/transformers/pull/6767
687,320,069
MDExOlB1bGxSZXF1ZXN0NDc0NzY1NTM1
6,767
Add NLP install to self-scheduled CI
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Same change as @sgugger already made for `self-push.yml`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6767/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6767", "html_url": "https://github.com/huggingface/transformers/pull/6767", "diff_url": "https://github.com/huggingface/transformers/pull/6767.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6767.patch", "merged_at": 1598540895000 }
https://api.github.com/repos/huggingface/transformers/issues/6766
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6766/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6766/comments
https://api.github.com/repos/huggingface/transformers/issues/6766/events
https://github.com/huggingface/transformers/issues/6766
687,183,318
MDU6SXNzdWU2ODcxODMzMTg=
6,766
Distributed training doesn’t work
{ "login": "mcloarec001", "id": 49441956, "node_id": "MDQ6VXNlcjQ5NDQxOTU2", "avatar_url": "https://avatars.githubusercontent.com/u/49441956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcloarec001", "html_url": "https://github.com/mcloarec001", "followers_url": "https://api.github.com/users/mcloarec001/followers", "following_url": "https://api.github.com/users/mcloarec001/following{/other_user}", "gists_url": "https://api.github.com/users/mcloarec001/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcloarec001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcloarec001/subscriptions", "organizations_url": "https://api.github.com/users/mcloarec001/orgs", "repos_url": "https://api.github.com/users/mcloarec001/repos", "events_url": "https://api.github.com/users/mcloarec001/events{/privacy}", "received_events_url": "https://api.github.com/users/mcloarec001/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! What are you using to launch your ditributed training? What script are you using? Could you show the command you used, and could you paste your environment information as required by the template? Thank you.", "## Environment info\r\n- `transformers` version: 2.11.0\r\n- Platform: Databricks\r\n- Python version: 3.6.9\r\n- PyTorch version: 1.3.1 \r\n- Tensorflow version : 2.0\r\n- Using GPU in script: yes\r\n- Using distributed or parallel set-up in script: try to\r\n\r\n\r\n## Information\r\n\r\nI am using the model xlm-roberta-base. The tasks I am working on is a further train on my own dataset.\r\n\r\nThe problem arises when using the script examples/language_modeling/run_language_modeling.py with the following command :\r\n\r\n```\r\npython transformers/examples/language-modeling/run_language_modeling.py \r\n --model_type xlm-roberta \r\n --model_name_or_path xlm-roberta-base \r\n --train_data_file data/processed/piaf.txt \r\n --output_dir ./output --learning_rate 0.1 \r\n --per_gpu_train_batch_size 2 \r\n --local_rank 0 \r\n --num_train_epochs 1 \r\n --do_train \r\n --mlm\r\n```", "Okay, could you try using the `torch.distributed.launch` utility used to launch a distributed training?\r\n\r\nYour command, given that you have 8 GPUs, would be:\r\n\r\n```\r\npython -m torch.distributed.launch \\\r\n --nproc_per_node 8 transformers/examples/language-modeling/run_language_modeling.py \r\n --model_type xlm-roberta \r\n --model_name_or_path xlm-roberta-base \r\n --train_data_file data/processed/piaf.txt \r\n --output_dir ./output --learning_rate 0.1 \r\n --per_gpu_train_batch_size 2 \r\n --local_rank 0 \r\n --num_train_epochs 1 \r\n --do_train \r\n --mlm\r\n```\r\n\r\nFeel free to modify this part: `--nproc_per_node 8` according to the number of GPUs.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
Even though CUDA detects all the GPUs of the machine, there is no distributed training: `RuntimeError: CUDA out of memory. Tried to allocate 3.82 GiB (GPU 0; 11.17 GiB total capacity; 7.59 GiB already allocated; 594.31 MiB free; 10.28 GiB reserved in total by PyTorch) (malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:289)` The training is launched only on the first GPU. I’m using the language modeling code. I tried to set local_rank to 0 and to use the Torch environment variables MASTER_ADDR, MASTER_PORT, WORLD_SIZE and RANK, but without success. Is there another way to do distributed training with transformers?
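As a sketch of why setting `--local_rank 0` by hand does not help: the launcher has to spawn one process per GPU and fill in the rendezvous environment variables itself. A minimal version of the per-process wiring, assuming one GPU per process (function and argument names here are illustrative):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

def setup(local_rank: int, model: torch.nn.Module):
    # torch.distributed.launch starts this once per GPU, after setting
    # MASTER_ADDR / MASTER_PORT / WORLD_SIZE / RANK in the environment.
    torch.cuda.set_device(local_rank)            # pin this process to one GPU
    dist.init_process_group(backend="nccl")      # reads rank/world size from env
    model = model.cuda(local_rank)
    return DistributedDataParallel(model, device_ids=[local_rank])
```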
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6766/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6765
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6765/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6765/comments
https://api.github.com/repos/huggingface/transformers/issues/6765/events
https://github.com/huggingface/transformers/pull/6765
687,081,616
MDExOlB1bGxSZXF1ZXN0NDc0NTY1MDIy
6,765
Adds Adafactor to the docs and slightly fixes the formatting
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6765/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6765", "html_url": "https://github.com/huggingface/transformers/pull/6765", "diff_url": "https://github.com/huggingface/transformers/pull/6765.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6765.patch", "merged_at": 1598519811000 }
https://api.github.com/repos/huggingface/transformers/issues/6764
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6764/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6764/comments
https://api.github.com/repos/huggingface/transformers/issues/6764/events
https://github.com/huggingface/transformers/pull/6764
687,035,126
MDExOlB1bGxSZXF1ZXN0NDc0NTI2NjE4
6,764
Add ProtBert model card
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Any idea why there is one failed test case ?\r\nI have checked the code on the notebook and it runs without any issue on Colab.", "This seems completely unrelated, just relaunched the failed test.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=h1) Report\n> Merging [#6764](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bd7be9a4268221d2a0000c7e8033aaeb365c03b?el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6764/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6764 +/- ##\n==========================================\n- Coverage 79.74% 79.70% -0.05% \n==========================================\n Files 157 157 \n Lines 28479 28479 \n==========================================\n- Hits 22712 22698 -14 \n- Misses 5767 5781 +14 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.45% <0.00%> (-0.40%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `99.16% <0.00%> (+32.50%)` | :arrow_up: |\n| ... 
and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6764/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=footer). Last update [4bd7be9...c7eb12c](https://codecov.io/gh/huggingface/transformers/pull/6764?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi @agemagician, your model https://huggingface.co/Rostlab/prot_bert is not recognized by our Hosted inference API, probably because it doesn't have an `architectures` field in its config.json:\r\n\r\n<img width=\"565\" alt=\"Screenshot 2020-10-21 at 21 52 57\" src=\"https://user-images.githubusercontent.com/326577/96775413-849fb700-13b5-11eb-8075-11d7792548f6.png\">\r\n\r\n\r\nDo you mind if we update the config.json on the model hub? Alternatively, do you prefer doing it yourself?\r\n\r\nThanks!", "Hi @julien-c , no problem.\r\nI have already seen that you updated it.\r\n\r\nHowever, it doesn't work properly because we don't lowercase tokens:\r\nhttps://huggingface.co/Rostlab/prot_bert?text=A+T+G+%5BMASK%5D+C\r\n\r\nI have uploaded the \"tokenizer_config.json\" file, which should fix it, but we have to wait until the model is no longer in memory and is reloaded.\r\n\r\nI have also done the same for our better BERT model \"prot_bert_bfd\", and it is working fine:\r\nhttps://huggingface.co/Rostlab/prot_bert_bfd?text=A+T+G+%5BMASK%5D+C\r\n\r\nThanks for your help.", "Now properly loaded, and looks great!\r\n\r\nhttps://huggingface.co/Rostlab/prot_bert?text=D+L+I+P+T+S+S+K+V+V+%5BMASK%5D+D+T+S+L+Q+V+K+K+A+F+F+A+L+V+T\r\n\r\n<img width=\"693\" alt=\"Screenshot 2020-10-21 at 22 56 44\" src=\"https://user-images.githubusercontent.com/326577/96786965-ff210480-13be-11eb-9133-375bb20e60a1.png\">\r\n\r\nThanks a lot @agemagician " ]
1,598
1,603
1,598
CONTRIBUTOR
null
This is a model card for our ProtBert: https://huggingface.co/Rostlab/prot_bert
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6764/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6764", "html_url": "https://github.com/huggingface/transformers/pull/6764", "diff_url": "https://github.com/huggingface/transformers/pull/6764.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6764.patch", "merged_at": 1598587949000 }
https://api.github.com/repos/huggingface/transformers/issues/6763
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6763/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6763/comments
https://api.github.com/repos/huggingface/transformers/issues/6763/events
https://github.com/huggingface/transformers/issues/6763
686,998,952
MDU6SXNzdWU2ODY5OTg5NTI=
6,763
CUDA out of memory when training with fp16
{ "login": "kotinigor", "id": 43569190, "node_id": "MDQ6VXNlcjQzNTY5MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/43569190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kotinigor", "html_url": "https://github.com/kotinigor", "followers_url": "https://api.github.com/users/kotinigor/followers", "following_url": "https://api.github.com/users/kotinigor/following{/other_user}", "gists_url": "https://api.github.com/users/kotinigor/gists{/gist_id}", "starred_url": "https://api.github.com/users/kotinigor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kotinigor/subscriptions", "organizations_url": "https://api.github.com/users/kotinigor/orgs", "repos_url": "https://api.github.com/users/kotinigor/repos", "events_url": "https://api.github.com/users/kotinigor/events{/privacy}", "received_events_url": "https://api.github.com/users/kotinigor/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Same issue when running distilbart", "I face the same issue with PEGASUS", "I've run into this with several seq2seq models (Pegasus, BART, T5). Ironically, running with fp16 causes increased GPU memory usage.", "I solved my issue by downgrading Pytorch to 1.5.1 and installing Nvidia/Apex. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,607
1,607
NONE
null
# ❓ Questions & Help

I get an error when I run training with fp16.

## Details

I want to train an ALBERT model on Russian. I use run_language_modeling.py with some changes to dataset creation. If I run training with fp32 (without the --fp16 flag) and batch_size=2, everything is fine. But if I run training with the --fp16 flag, I get a CUDA out of memory error. I tried to decrease the batch size to 1; however, I obtained the same error.
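For reference, a minimal sketch of a native mixed-precision loop with `torch.cuda.amp` (available from PyTorch 1.6) as an alternative to the apex route mentioned in the comments above; `model`, `optimizer`, and `dataloader` are placeholders:

```python
import torch

scaler = torch.cuda.amp.GradScaler()
for batch in dataloader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():      # run ops in fp16 where it is safe
        loss = model(**batch)[0]
    scaler.scale(loss).backward()        # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)               # unscales gradients, then steps
    scaler.update()
```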
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6763/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6762
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6762/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6762/comments
https://api.github.com/repos/huggingface/transformers/issues/6762/events
https://github.com/huggingface/transformers/issues/6762
686,903,575
MDU6SXNzdWU2ODY5MDM1NzU=
6,762
Checklist for Model Hub
{ "login": "prajjwal1", "id": 24690051, "node_id": "MDQ6VXNlcjI0NjkwMDUx", "avatar_url": "https://avatars.githubusercontent.com/u/24690051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prajjwal1", "html_url": "https://github.com/prajjwal1", "followers_url": "https://api.github.com/users/prajjwal1/followers", "following_url": "https://api.github.com/users/prajjwal1/following{/other_user}", "gists_url": "https://api.github.com/users/prajjwal1/gists{/gist_id}", "starred_url": "https://api.github.com/users/prajjwal1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prajjwal1/subscriptions", "organizations_url": "https://api.github.com/users/prajjwal1/orgs", "repos_url": "https://api.github.com/users/prajjwal1/repos", "events_url": "https://api.github.com/users/prajjwal1/events{/privacy}", "received_events_url": "https://api.github.com/users/prajjwal1/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Yes! This is something we'll build, step by step. I'll post more about our roadmap for this in the coming weeks.", "> Yes! This is something we'll build, step by step. I'll post more about our roadmap for this in the coming weeks.\r\n\r\nMaybe you can create a web-based interface with input fields/drop downs that can generate the model card automatically. Just a thought. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "posting so that the bot doesn't close this issue.", "@prajjwal1 Have you checked out our new model storage/model versioning features on huggingface.co? You can also now edit your model card directly from the website (or from `git`) and we'll make this workflow more prominent (vs. adding model cards to `transformers`) in the coming days/weeks.\r\n\r\nFeedback welcome, here or on the Forum.", "No, I haven't. I am aware of it. I will check out soon. Loving the direction where HF is headed.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,598
1,614
1,614
CONTRIBUTOR
null
# 🚀 Feature request

I want to propose a checklist for the model hub. The model hub is increasingly being used by `transformers` users. It is imperative that users know exactly what model they are downloading and importing, for transparency and research-related reasons. It should be mandatory for users to upload a `README.md` while uploading a model to the model hub. Additionally, it will reduce the need to send a separate PR for model cards. The checklist should contain the following elements (a hypothetical skeleton is sketched below):
- Details of the training and validation sets used. Specify the split if not using the default.
- A table with results on the validation set
- Model name (can be inferred from `config.json`)
- Objective used for training
- Hyperparameter configurations if the default ones are not being used
- Link to the implemented code (if available, and if different from the default examples)
- For how many epochs the model was trained
- Associated paper (if published)

These are just the general ones; happy to hear other suggestions. This would help many users. Thanks for this feature.
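To make the checklist concrete, here is a hypothetical `README.md` skeleton it could map to; all names, numbers, and metadata values below are illustrative placeholders, not real results:

```markdown
---
language: en
datasets:
- snli
metrics:
- accuracy
---

# my-org/my-model (placeholder name)

## Training and validation data
SNLI, default train/validation split.

## Objective and hyperparameters
Masked language modeling; 3 epochs, lr 2e-5, batch size 32 (only non-default values listed).

## Results
| Dataset (validation) | Accuracy |
|----------------------|----------|
| SNLI                 | 90.1 (placeholder) |

## Links
Code: <link to training script> · Paper: <link if published>
```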
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6762/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6762/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6761
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6761/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6761/comments
https://api.github.com/repos/huggingface/transformers/issues/6761/events
https://github.com/huggingface/transformers/pull/6761
686,903,427
MDExOlB1bGxSZXF1ZXN0NDc0NDE3NDQw
6,761
s2s distillation uses AutoModelForSeqToSeqLM
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=h1) Report\n> Merging [#6761](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05e7150a53cc6c1571c0e3acb1b4d692737976d9?el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6761/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6761 +/- ##\n==========================================\n- Coverage 79.70% 79.45% -0.25% \n==========================================\n Files 157 157 \n Lines 28479 28479 \n==========================================\n- Hits 22698 22627 -71 \n- Misses 5781 5852 +71 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-11.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6761/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=footer). Last update [05e7150...88974cc](https://codecov.io/gh/huggingface/transformers/pull/6761?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Hard-coding BART caused the subsequent AutoTokenizer to fail for Marian/mBART distillation.
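A minimal sketch of the pattern the fix moves to, where the checkpoint's config decides the concrete classes (the checkpoint name here is just an example):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Resolves to the Marian model/tokenizer classes from the checkpoint's config,
# instead of hard-coding a BART class that breaks for Marian/mBART teachers.
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
```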
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6761/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6761", "html_url": "https://github.com/huggingface/transformers/pull/6761", "diff_url": "https://github.com/huggingface/transformers/pull/6761.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6761.patch", "merged_at": 1598498711000 }
https://api.github.com/repos/huggingface/transformers/issues/6760
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6760/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6760/comments
https://api.github.com/repos/huggingface/transformers/issues/6760/events
https://github.com/huggingface/transformers/issues/6760
686,876,955
MDU6SXNzdWU2ODY4NzY5NTU=
6,760
Question: Differentiable Search in generate
{ "login": "Palipoor", "id": 16380397, "node_id": "MDQ6VXNlcjE2MzgwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/16380397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Palipoor", "html_url": "https://github.com/Palipoor", "followers_url": "https://api.github.com/users/Palipoor/followers", "following_url": "https://api.github.com/users/Palipoor/following{/other_user}", "gists_url": "https://api.github.com/users/Palipoor/gists{/gist_id}", "starred_url": "https://api.github.com/users/Palipoor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Palipoor/subscriptions", "organizations_url": "https://api.github.com/users/Palipoor/orgs", "repos_url": "https://api.github.com/users/Palipoor/repos", "events_url": "https://api.github.com/users/Palipoor/events{/privacy}", "received_events_url": "https://api.github.com/users/Palipoor/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
Hi, this might be a very dumb question to ask, but is there a way to make the search and token picking in the `generate` method differentiable for models like T5 and GPT-2? If not, is there an example of using the raw real-valued model output in a task, or an article giving some intuition about what the real-valued output means? I'd really appreciate your help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6760/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6760/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6759
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6759/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6759/comments
https://api.github.com/repos/huggingface/transformers/issues/6759/events
https://github.com/huggingface/transformers/pull/6759
686,715,734
MDExOlB1bGxSZXF1ZXN0NDc0MjQ1NDM5
6,759
added model card for flexudys t5 model
{ "login": "zolekode", "id": 25635679, "node_id": "MDQ6VXNlcjI1NjM1Njc5", "avatar_url": "https://avatars.githubusercontent.com/u/25635679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zolekode", "html_url": "https://github.com/zolekode", "followers_url": "https://api.github.com/users/zolekode/followers", "following_url": "https://api.github.com/users/zolekode/following{/other_user}", "gists_url": "https://api.github.com/users/zolekode/gists{/gist_id}", "starred_url": "https://api.github.com/users/zolekode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zolekode/subscriptions", "organizations_url": "https://api.github.com/users/zolekode/orgs", "repos_url": "https://api.github.com/users/zolekode/repos", "events_url": "https://api.github.com/users/zolekode/events{/privacy}", "received_events_url": "https://api.github.com/users/zolekode/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=h1) Report\n> Merging [#6759](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/434936f34a8b3154a79564c87f4cb50f5d57e050?el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6759/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6759 +/- ##\n==========================================\n+ Coverage 79.47% 79.51% +0.03% \n==========================================\n Files 157 157 \n Lines 28479 28479 \n==========================================\n+ Hits 22635 22644 +9 \n+ Misses 5844 5835 -9 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6759/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=footer). Last update [434936f...fd0b38d](https://codecov.io/gh/huggingface/transformers/pull/6759?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "really cool model card, thanks for sharing", "File path was incorrect but I fixed it in d822ab636b6a14ed50f7bca0797c1de42c19de61. Thanks!", "@julien-c thanks" ]
1,598
1,598
1,598
CONTRIBUTOR
null
- Created a model card for a fine-tuned model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6759/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6759", "html_url": "https://github.com/huggingface/transformers/pull/6759", "diff_url": "https://github.com/huggingface/transformers/pull/6759.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6759.patch", "merged_at": 1598996336000 }
https://api.github.com/repos/huggingface/transformers/issues/6758
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6758/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6758/comments
https://api.github.com/repos/huggingface/transformers/issues/6758/events
https://github.com/huggingface/transformers/pull/6758
686,695,846
MDExOlB1bGxSZXF1ZXN0NDc0MjI3NjYz
6,758
Bart can make decoder_input_ids from labels
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=h1) Report\n> Merging [#6758](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6758/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6758 +/- ##\n==========================================\n- Coverage 78.96% 78.96% -0.01% \n==========================================\n Files 157 157 \n Lines 28486 28488 +2 \n==========================================\n+ Hits 22495 22496 +1 \n- Misses 5991 5992 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6758/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.57% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6758/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=footer). Last update [a75c64d...06e7cc5](https://codecov.io/gh/huggingface/transformers/pull/6758?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Mimics a nice T5 feature.
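A minimal sketch of the mechanism this mirrors (the helper below is an illustration; the exact name and behavior in `modeling_bart.py` may differ): when only `labels` are passed, decoder inputs can be derived by shifting the label ids one position to the right, so callers no longer need to build `decoder_input_ids` themselves.

```python
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    """Shift label ids one position right to build teacher-forcing decoder inputs.

    The last non-pad token of each row (usually EOS) is wrapped around to
    position 0; everything else moves one slot to the right.
    """
    prev_output_tokens = input_ids.clone()
    # index of the last non-pad token in each row (padding assumed on the right)
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze(-1)
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens
```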
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6758/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6758", "html_url": "https://github.com/huggingface/transformers/pull/6758", "diff_url": "https://github.com/huggingface/transformers/pull/6758.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6758.patch", "merged_at": 1598905007000 }
https://api.github.com/repos/huggingface/transformers/issues/6757
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6757/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6757/comments
https://api.github.com/repos/huggingface/transformers/issues/6757/events
https://github.com/huggingface/transformers/pull/6757
686,687,859
MDExOlB1bGxSZXF1ZXN0NDc0MjIwMTg5
6,757
Seq2Seq tokenization and training fixes
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
Fixes #{issue number}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6757/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6757", "html_url": "https://github.com/huggingface/transformers/pull/6757", "diff_url": "https://github.com/huggingface/transformers/pull/6757.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6757.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6756
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6756/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6756/comments
https://api.github.com/repos/huggingface/transformers/issues/6756/events
https://github.com/huggingface/transformers/pull/6756
686,684,192
MDExOlB1bGxSZXF1ZXN0NDc0MjE2Nzc1
6,756
Fix run_squad.py to work with BART
{ "login": "tomgrek", "id": 2245347, "node_id": "MDQ6VXNlcjIyNDUzNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/2245347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomgrek", "html_url": "https://github.com/tomgrek", "followers_url": "https://api.github.com/users/tomgrek/followers", "following_url": "https://api.github.com/users/tomgrek/following{/other_user}", "gists_url": "https://api.github.com/users/tomgrek/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomgrek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomgrek/subscriptions", "organizations_url": "https://api.github.com/users/tomgrek/orgs", "repos_url": "https://api.github.com/users/tomgrek/repos", "events_url": "https://api.github.com/users/tomgrek/events{/privacy}", "received_events_url": "https://api.github.com/users/tomgrek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=h1) Report\n> Merging [#6756](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/61b9ed80742f564dc522783a33bf001d6d871a2c?el=desc) will **increase** coverage by `0.16%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6756/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6756 +/- ##\n==========================================\n+ Coverage 79.65% 79.82% +0.16% \n==========================================\n Files 157 157 \n Lines 28479 28479 \n==========================================\n+ Hits 22686 22734 +48 \n+ Misses 5793 5745 -48 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+3.00%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+7.18%)` | :arrow_up: |\n| ... 
and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6756/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=footer). Last update [61b9ed8...7ba95ec](https://codecov.io/gh/huggingface/transformers/pull/6756?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
The BART model should be among the set of models that don't use `token_type_ids`.
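A hedged sketch of the kind of guard this implies (the set and helper below are illustrative, not the exact diff): models without segment embeddings must not receive a `token_type_ids` key at all.

```python
# Model types that take no token_type_ids (illustrative set; with this fix,
# "bart" joins it in run_squad.py).
MODEL_TYPES_WITHOUT_TOKEN_TYPE_IDS = {"xlm", "roberta", "distilbert", "camembert", "bart"}

def build_inputs(batch, model_type):
    inputs = {
        "input_ids": batch[0],
        "attention_mask": batch[1],
        "token_type_ids": batch[2],
    }
    if model_type in MODEL_TYPES_WITHOUT_TOKEN_TYPE_IDS:
        # These models have no segment embeddings, so the key must be dropped
        # rather than passed as zeros.
        del inputs["token_type_ids"]
    return inputs
```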
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6756/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6756", "html_url": "https://github.com/huggingface/transformers/pull/6756", "diff_url": "https://github.com/huggingface/transformers/pull/6756.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6756.patch", "merged_at": 1598533491000 }
https://api.github.com/repos/huggingface/transformers/issues/6755
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6755/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6755/comments
https://api.github.com/repos/huggingface/transformers/issues/6755/events
https://github.com/huggingface/transformers/pull/6755
686,642,320
MDExOlB1bGxSZXF1ZXN0NDc0MTc3MjA3
6,755
Model Card for Multilingual Passage Reranking BERT
{ "login": "iglimanaj", "id": 12574741, "node_id": "MDQ6VXNlcjEyNTc0NzQx", "avatar_url": "https://avatars.githubusercontent.com/u/12574741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iglimanaj", "html_url": "https://github.com/iglimanaj", "followers_url": "https://api.github.com/users/iglimanaj/followers", "following_url": "https://api.github.com/users/iglimanaj/following{/other_user}", "gists_url": "https://api.github.com/users/iglimanaj/gists{/gist_id}", "starred_url": "https://api.github.com/users/iglimanaj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iglimanaj/subscriptions", "organizations_url": "https://api.github.com/users/iglimanaj/orgs", "repos_url": "https://api.github.com/users/iglimanaj/repos", "events_url": "https://api.github.com/users/iglimanaj/events{/privacy}", "received_events_url": "https://api.github.com/users/iglimanaj/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=h1) Report\n> Merging [#6755](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/61b9ed80742f564dc522783a33bf001d6d871a2c?el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6755/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6755 +/- ##\n==========================================\n- Coverage 79.65% 79.41% -0.24% \n==========================================\n Files 157 157 \n Lines 28479 28479 \n==========================================\n- Hits 22686 22618 -68 \n- Misses 5793 5861 +68 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |\n| ... 
and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6755/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=footer). Last update [61b9ed8...9ff6c2f](https://codecov.io/gh/huggingface/transformers/pull/6755?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "That is an awesome model card, thanks for sharing. \r\n\r\n➡️ **[amberoad/bert-multilingual-passage-reranking-msmarco](https://huggingface.co/amberoad/bert-multilingual-passage-reranking-msmarco)**" ]
1,598
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6755/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6755", "html_url": "https://github.com/huggingface/transformers/pull/6755", "diff_url": "https://github.com/huggingface/transformers/pull/6755.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6755.patch", "merged_at": 1598479227000 }
https://api.github.com/repos/huggingface/transformers/issues/6754
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6754/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6754/comments
https://api.github.com/repos/huggingface/transformers/issues/6754/events
https://github.com/huggingface/transformers/pull/6754
686,617,033
MDExOlB1bGxSZXF1ZXN0NDc0MTUzODM5
6,754
add __init__.py to utils
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=h1) Report\n> Merging [#6754](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/99407f9d1ece38d62a257fa8c65c3a2e114164e6?el=desc) will **increase** coverage by `0.42%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6754/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6754 +/- ##\n==========================================\n+ Coverage 79.02% 79.45% +0.42% \n==========================================\n Files 157 157 \n Lines 28479 28479 \n==========================================\n+ Hits 22505 22627 +122 \n+ Misses 5974 5852 -122 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `56.25% <0.00%> (-39.07%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6754/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=footer). Last update [99407f9...c6e6400](https://codecov.io/gh/huggingface/transformers/pull/6754?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This is great! I can confirm this fixes it. It would be good to merge it fast as the current master fails." ]
1,598
1,598
1,598
CONTRIBUTOR
null
Adds `__init__.py` to the `utils` dir introduced in #6434, whose absence caused a namespace issue breaking imports in some cases. Mentioned in #6752. To reproduce, pip install from source without the editable `-e` flag. ``` >>> from transformers import pipeline ... ~/.virtualenvs/transformers/lib/python3.7/site-packages/transformers/file_utils.py in <module> ---> 32 from .utils import logging ModuleNotFoundError: No module named 'transformers.utils' ```
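For background, a small illustration of why the bare directory went missing from non-editable installs (assuming a standard `find_packages`-based `setup.py`; this is a sketch, not the repo's exact configuration): `find_packages` only treats directories containing an `__init__.py` as packages, so a bare `utils/` folder is silently left out of the built wheel.

```python
from setuptools import find_packages

# Run from the repository root. Without src/transformers/utils/__init__.py,
# "transformers.utils" is absent from the result and therefore from the wheel;
# after adding the (empty) __init__.py it shows up again.
print(find_packages("src"))
```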
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6754/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6754/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6754", "html_url": "https://github.com/huggingface/transformers/pull/6754", "diff_url": "https://github.com/huggingface/transformers/pull/6754.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6754.patch", "merged_at": 1598478671000 }
https://api.github.com/repos/huggingface/transformers/issues/6753
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6753/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6753/comments
https://api.github.com/repos/huggingface/transformers/issues/6753/events
https://github.com/huggingface/transformers/issues/6753
686,551,557
MDU6SXNzdWU2ODY1NTE1NTc=
6,753
Removing memory/deleting a model: how to properly do this
{ "login": "yakazimir", "id": 1296330, "node_id": "MDQ6VXNlcjEyOTYzMzA=", "avatar_url": "https://avatars.githubusercontent.com/u/1296330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yakazimir", "html_url": "https://github.com/yakazimir", "followers_url": "https://api.github.com/users/yakazimir/followers", "following_url": "https://api.github.com/users/yakazimir/following{/other_user}", "gists_url": "https://api.github.com/users/yakazimir/gists{/gist_id}", "starred_url": "https://api.github.com/users/yakazimir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yakazimir/subscriptions", "organizations_url": "https://api.github.com/users/yakazimir/orgs", "repos_url": "https://api.github.com/users/yakazimir/repos", "events_url": "https://api.github.com/users/yakazimir/events{/privacy}", "received_events_url": "https://api.github.com/users/yakazimir/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I encountered similar problems of freeing GPU memory while implementing the benchmark tools. A trick that worked for me was to wrap the function into a multi-process. Maybe you can take a look at this implementation and change your code accordingly so that the model is run in a subprocess: \r\nhttps://github.com/huggingface/transformers/blob/3726754a6c646adcf9cb2135ab7f72dffe074473/src/transformers/benchmark/benchmark_utils.py#L64", "Thanks for getting back! \r\n\r\nAfter investigating a bit further, my particular problems seem to be partly related to PyTorch-Lightning (specificially, related to not properly detaching tensors in some of the eval code), but this general bit of advice is good since this seems to be a more general problem that I've seen in other contexts (like you mentioned). I will look more closely at running a multi-process. \r\n\r\nAs a terrible hack (which probably shouldn't be repeated), I found that converting all models/tensors/training params/.. to cpu then deleting them and applying manual garbage collection fixed my issue. ", "> \r\n> \r\n> I encountered similar problems of freeing GPU memory while implementing the benchmark tools. A trick that worked for me was to wrap the function into a multi-process. Maybe you can take a look at this implementation and change your code accordingly so that the model is run in a subprocess:\r\n> \r\n> https://github.com/huggingface/transformers/blob/3726754a6c646adcf9cb2135ab7f72dffe074473/src/transformers/benchmark/benchmark_utils.py#L64\r\n\r\n@patrickvonplaten have you ran into the following error using this method?\r\n\r\n```\r\nCannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\r\n```\r\n\r\nTried setting the context as follows with no success:\r\n\r\n```python\r\nimport multiprocessing as mp\r\nmp.set_start_method('spawn')\r\n```", "met the same problem, anything update ?", "Very useful!! Thank you so much for sharing your solution!" ]
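Condensing the subprocess advice from this thread into a runnable sketch (the checkpoint name and placeholder evaluation are illustrative): start the child with the `spawn` method, since CUDA cannot be re-initialized in a forked subprocess, and all GPU memory is reclaimed for good when the child exits.

```python
import multiprocessing as mp

def _evaluate(queue, checkpoint):
    # Heavy imports happen inside the child so the parent never touches CUDA.
    import torch
    from transformers import AutoModelWithLMHead

    model = AutoModelWithLMHead.from_pretrained(checkpoint).cuda().eval()
    with torch.no_grad():
        score = 0.0  # placeholder: run the real evaluation here
    queue.put(score)
    # All CUDA allocations are released when this process terminates.

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # "fork" would raise the CUDA re-init error
    queue = ctx.Queue()
    proc = ctx.Process(target=_evaluate, args=(queue, "t5-large"))
    proc.start()
    result = queue.get()
    proc.join()
```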
1,598
1,695
1,598
NONE
null
## Environment info
- `transformers` version: 2.11.0
- Platform:
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help
@patrickvonplaten

## Information
Model I am using (Bert, XLNet ...): T5-large, T5-3b, bert-base-uncased

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:
1. Load a model
2. Try to remove it via `del`, then clear GPU memory and cache

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

model = AutoModelWithLMHead.from_pretrained("t5-large")  # same behavior for `bert-base-uncased` and larger T5 models
model = model.cuda()
model = model.train()

# delete the model
del model
torch.cuda.empty_cache()

# alternatively:
# with torch.cuda.device("cuda:0"):
#     torch.cuda.empty_cache()

# list all tensors still hanging around (as per the discussion at
# https://discuss.pytorch.org/t/how-to-debug-causes-of-gpu-memory-leaks/6741/3)
import gc
for obj in gc.get_objects():
    try:
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(type(obj), obj.size())
    except:
        pass
```

## Expected behavior
I would expect this to clear the GPU memory, though the tensors still seem to linger. (Fuller context: in a larger PyTorch Lightning script, I'm simply trying to re-load the best model after training, after exiting the `pl.Trainer`, to run a final evaluation; the behavior seems the same as in this simple example. Ultimately I run out of memory when loading the best model, because the model is the absolutely massive T5-3b.)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6753/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6752
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6752/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6752/comments
https://api.github.com/repos/huggingface/transformers/issues/6752/events
https://github.com/huggingface/transformers/issues/6752
686,551,228
MDU6SXNzdWU2ODY1NTEyMjg=
6,752
Unable to install Transformers Master version
{ "login": "AnkitVarshney02", "id": 42990727, "node_id": "MDQ6VXNlcjQyOTkwNzI3", "avatar_url": "https://avatars.githubusercontent.com/u/42990727?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnkitVarshney02", "html_url": "https://github.com/AnkitVarshney02", "followers_url": "https://api.github.com/users/AnkitVarshney02/followers", "following_url": "https://api.github.com/users/AnkitVarshney02/following{/other_user}", "gists_url": "https://api.github.com/users/AnkitVarshney02/gists{/gist_id}", "starred_url": "https://api.github.com/users/AnkitVarshney02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AnkitVarshney02/subscriptions", "organizations_url": "https://api.github.com/users/AnkitVarshney02/orgs", "repos_url": "https://api.github.com/users/AnkitVarshney02/repos", "events_url": "https://api.github.com/users/AnkitVarshney02/events{/privacy}", "received_events_url": "https://api.github.com/users/AnkitVarshney02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem with the current `master` is that I have the following error\r\n\r\n`ModuleNotFoundError: No module named 'transformers.utils'`\r\n\r\n `version 3.0.2` does not include Pegasus.\r\n\r\nCan anyone suggest to us the latest stable version of master (not release `version 3.0.2`)? So we will be able to run the Pegasus Model.", "`pip install -U git+https://github.com/huggingface/transformers.git`", "I actually think this is a breaking change which @joeddav seems to have fixed in his PR. It is due to a change 6 hours ago which makes the import of the utils [here](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py) fail!", "K, import issue is fixed by #6754 and installs from master as @patil-suraj mentioned should be good now.", "@patil-suraj @joeddav \r\nIs there some other way to install master version of transformers?\r\nI tried using the URL to install the master version but it installed v3.0.2", "@AnkitVarshney02 \r\n\r\n1. Since master is not tagged as a release it will still register as `3.02` in your environment when you've installed from master.\r\n2. If you already have transformers installed in your env make sure you're also passing `--upgrade`\r\n\r\n`pip install --upgrade git+https://github.com/huggingface/transformers.git`\r\n\r\n", "@joeddav \r\nI tried using the URL to install the master version but it again installed v3.0.2.\r\nNot sure what am I missing here. Please see the terminal output below:\r\n\r\n`pip install --upgrade git+https://github.com/huggingface/transformers.git\r\nCollecting git+https://github.com/huggingface/transformers.git\r\n Cloning https://github.com/huggingface/transformers.git to c:\\users\\varshney ankit\\appdata\\local\\temp\\pip-req-build-iddyb2c1\r\n Running command git clone -q https://github.com/huggingface/transformers.git 'C:\\Users\\varshney ankit\\AppData\\Local\\Temp\\pip-req-build-iddyb2c1'\r\nRequirement already satisfied, skipping upgrade: numpy in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (1.18.5)\r\nCollecting tokenizers==0.8.1.rc2\r\n Using cached tokenizers-0.8.1rc2-cp38-cp38-win_amd64.whl (1.9 MB)\r\nRequirement already satisfied, skipping upgrade: packaging in c:\\users\\varshney ankit\\appdata\\roaming\\python\\python38\\site-packages (from transformers==3.0.2) (20.4)\r\nRequirement already satisfied, skipping upgrade: filelock in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (3.0.12)\r\nRequirement already satisfied, skipping upgrade: requests in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (2.24.0)\r\nRequirement already satisfied, skipping upgrade: tqdm>=4.27 in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (4.47.0)\r\nRequirement already satisfied, skipping upgrade: regex!=2019.12.17 in c:\\programdata\\anaconda3\\lib\\site-packages (from transformers==3.0.2) (2020.6.8)\r\nCollecting sentencepiece!=0.1.92\r\n Using cached sentencepiece-0.1.91-cp38-cp38-win_amd64.whl (1.2 MB)\r\nCollecting sacremoses\r\n Using cached sacremoses-0.0.43.tar.gz (883 kB)\r\nRequirement already satisfied, skipping upgrade: pyparsing>=2.0.2 in c:\\users\\varshney ankit\\appdata\\roaming\\python\\python38\\site-packages (from packaging->transformers==3.0.2) (2.4.7)\r\nRequirement already satisfied, skipping upgrade: six in c:\\users\\varshney ankit\\appdata\\roaming\\python\\python38\\site-packages (from packaging->transformers==3.0.2) (1.15.0)\r\nRequirement already satisfied, skipping 
upgrade: certifi>=2017.4.17 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (2020.6.20)\r\nRequirement already satisfied, skipping upgrade: idna<3,>=2.5 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (2.10)\r\nRequirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (3.0.4)\r\nRequirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers==3.0.2) (1.25.9)\r\nRequirement already satisfied, skipping upgrade: click in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers==3.0.2) (7.1.2)\r\nRequirement already satisfied, skipping upgrade: joblib in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers==3.0.2) (0.16.0)\r\nBuilding wheels for collected packages: transformers, sacremoses\r\n Building wheel for transformers (setup.py) ... done\r\n Created wheel for transformers: filename=transformers-3.0.2-py3-none-any.whl size=886632 sha256=fde9ef47b87c3c42f0dc98920877a9cb6a2446395dce5e03eb3a6e3802d73f06\r\n Stored in directory: C:\\Users\\varshney ankit\\AppData\\Local\\Temp\\pip-ephem-wheel-cache-dptow2tc\\wheels\\05\\0a\\97\\64ae47c27ba95fae2cb5838e7b4b7247a34d4a8ba5f7092de2\r\n Building wheel for sacremoses (setup.py) ... done\r\n Created wheel for sacremoses: filename=sacremoses-0.0.43-py3-none-any.whl size=893262 sha256=d9c55c4f55923ebf6ffba1f0a27a9034af0eebfb76a5dc6475c1de1a4e977abd\r\n Stored in directory: c:\\users\\varshney ankit\\appdata\\local\\pip\\cache\\wheels\\7b\\78\\f4\\27d43a65043e1b75dbddaa421b573eddc67e712be4b1c80677\r\nSuccessfully built transformers sacremoses\r\nInstalling collected packages: tokenizers, sentencepiece, sacremoses, transformers\r\nSuccessfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.8.1rc2 transformers-3.0.2`", "Master isn't tagged with its own release, so it will actually still show as `3.02` right now even if you've installed from master correctly. Did you try importing Pegasus after the above?", "> Master isn't tagged with its own release, so it will actually still show as `3.02` right now even if you've installed from master correctly. Did you try importing Pegasus after the above?\r\n\r\nThanks @joeddav ! It is working!!" ]
1,598
1,598
1,598
NONE
null
I want to run the Pegasus model, which requires the master version of transformers, but I am unable to install it. I tried installing with the pip command, but it always installs version 3.0.2.
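The resolution from the comments above, condensed (the install command is quoted verbatim from the thread; master is not tagged as its own release, so the version string still reads `3.0.2` even after a successful install):

```bash
pip install --upgrade git+https://github.com/huggingface/transformers.git
# The reported version is unchanged, but master-only code such as Pegasus is now importable.
python -c "import transformers; print(transformers.__version__)"
```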
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6752/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6751
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6751/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6751/comments
https://api.github.com/repos/huggingface/transformers/issues/6751/events
https://github.com/huggingface/transformers/issues/6751
686,541,258
MDU6SXNzdWU2ODY1NDEyNTg=
6,751
Add Language Agnostic Bert Sentence Embedding
{ "login": "alexcleu", "id": 10251204, "node_id": "MDQ6VXNlcjEwMjUxMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/10251204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexcleu", "html_url": "https://github.com/alexcleu", "followers_url": "https://api.github.com/users/alexcleu/followers", "following_url": "https://api.github.com/users/alexcleu/following{/other_user}", "gists_url": "https://api.github.com/users/alexcleu/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexcleu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexcleu/subscriptions", "organizations_url": "https://api.github.com/users/alexcleu/orgs", "repos_url": "https://api.github.com/users/alexcleu/repos", "events_url": "https://api.github.com/users/alexcleu/events{/privacy}", "received_events_url": "https://api.github.com/users/alexcleu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "Any update on this if this will be taken up by the team?", "I think this is already done right? seen that we have this https://github.com/huggingface/transformers/tree/master/model_cards/pvl/labse_bert", "The model uploaded by @pvl mentioned by @aalloul performs the wrong pooling, i.e., embeddings produced by that model are NOT the same as the embeddings from the TFHub version.\r\n\r\nI uploaded the model with the right pooling here:\r\nhttps://huggingface.co/sentence-transformers/LaBSE\r\n\r\nIt tested it against the TF Hub version and it produces similar embeddings (small epsilon difference due to different padding variations). On down stream tasks, the TFHub and the HF Pytorch version achieve the same performance.", "For LABSE model can we use FAISS to build parallel corpus. If not what is the best best algo to that. LABSE paper was suggesting ANN. Where can I find its implementation for building parallel corpus.", "Have a look here for some examples with ANN, including FAISS\r\nhttps://github.com/UKPLab/sentence-transformers/tree/master/examples/applications\r\n\r\nPersonally I prefer hnswlib over Faiss:\r\nhttps://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic_search_quora_hnswlib.py\r\n\r\n\r\nIt is nicer and easier to use, better documented and offer certain features that are missing in Faiss. FAISS added later hnswlib as index structure, as it is also faster than the other index types FAISS were offering.\r\n\r\nYes, you can use LaBSE with ANN. ", "@nreimers Which ANN LaBSE paper was recommending?", "@aj7tesh I think they are not mentioning which ANN is used.\r\n\r\nAs it is a google paper, I could imagine they use SCANN:\r\nhttps://github.com/google-research/google-research/tree/master/scann\r\n\r\nA good source for comparison is:\r\nhttp://ann-benchmarks.com/index.html\r\n\r\nHere, HNSWLib performs really well:\r\nhttps://github.com/nmslib/hnswlib", "@nreimers thank you. Surely I will explore above comparisons. Will see which one is helping me more to generate parallel corpus.", "32GB should be more than fine. Issues can be:\r\n- too long sequences. Try to use max_length in the tokenizer\r\n- Too large batch sizes. Try to use a smaller batch size.\r\n", "basically\r\n# in my case i have lets say more than 2k sentences in array\r\n# its passing the encoded_input step, however its going OOM in model_output. Usually on my machine its works fine for upto 10k sentences when using LASER, however for LABSE its failing after 150 only\r\nencoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')\r\n\r\nwith torch.no_grad():\r\n model_output = model(**encoded_input, return_dict=True)", "@nreimers \r\nits working fine when I used below method\r\n\r\nfrom sentence_transformers import SentenceTransformer\r\n\r\nnot exaclty sure the reason(hope model weights are similar to tf model)\r\n\r\nI have a questions, does these Labse and Laser kind of multilingual model works on language which is not related to major languages on which these models are trained? I believe for zero shot learning the language should have some similarity to other major languages. ", "sentence transformers performs batching of your data. If you pass 10k sentences, it splits it into batches of e.g. 32 and encodes them. So that you don't run out of memory.\r\n\r\nEmbeddings are nearly identical to those of Tensorflow. 
I tested both models on Tatoeba test set on 100 languages, and Pytorch and TF Version perform equally.\r\n\r\n\r\nDepends on the language. If the language is really similar to another language that was used for training, then it works. If it uses a different vocab or is really different, then it doesn't work.", "> Have a look here for some examples with ANN, including FAISS\r\n> https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications\r\n> \r\n> Personally I prefer hnswlib over Faiss:\r\n> https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic_search_quora_hnswlib.py\r\n> \r\n> It is nicer and easier to use, better documented and offer certain features that are missing in Faiss. FAISS added later hnswlib as index structure, as it is also faster than the other index types FAISS were offering.\r\n> \r\n> Yes, you can use LaBSE with ANN.\r\n\r\nI was going through this HNSW implementation. At some places it was written that ANN is not perfect and then HNSWs results were compared against util.semantic search, which again was executing cosine similarity for bitext mining. What is the reason of performing this step.\r\nI understand this thread is not the right place to ask this question, kindly suggest some other thread or place for such queries", "Approximate Nearest Neighbor only returns approximately the 10 nearest neighbor. It can be that it misses points that are closer. This is expressed as recall. \r\n\r\nIn the examples, it compares ANN against an exact nearest neighbor to see if there might be an issue with the index construction.\r\n\r\nFor large datasets, exact search is too slow, so you have to live with it that ANN does not find perfectly all nearest neighbors.", "so for corpus building where I expect the sentence with highest cosine similarity be the translation pair for corresponding source sentence. I will have to go for exact matches using something like util.semantic_search or scipy spatial distance", "If the corpora are not too large, yes, you can use exact matches. But these methods have quadratic runtime, i.e., when you have 10 Millions of sentences, searching will take a long time.\r\n\r\nIf your corpus is smaller, you can use exact search.", "Got it @nreimers thanks", "> The model uploaded by @pvl mentioned by @aalloul performs the wrong pooling, i.e., embeddings produced by that model are NOT the same as the embeddings from the TFHub version.\r\n> \r\n> I uploaded the model with the right pooling here:\r\n> https://huggingface.co/sentence-transformers/LaBSE\r\n> \r\n> It tested it against the TF Hub version and it produces similar embeddings (small epsilon difference due to different padding variations). On down stream tasks, the TFHub and the HF Pytorch version achieve the same performance.\r\n\r\n@nreimers I'm guessing there should be only 1 implementation of LaBSE and people might get confused with which one to use. How should we go about this?", "Coming late to this thread, we also uploaded a pytorch and TF compatible versions of the LaBSE model here - https://huggingface.co/rasa/LaBSE . This will also be available inside Rasa Open Source very soon.\r\nI do agree with @aalloul about the confusion this can create. Looking for thoughts from folks on this.", "@dakshvar22 did you run any comparisons with the official model?", "> I'm guessing there should be only 1 implementation of LaBSE and people might get confused with which one to use. 
How should we go about this?\r\n\r\nWe could imagine building a curation system built on top of (e.g.) a combination of downloads and an explicit marker like a \"Star\" button, but I don't want to overfit too much to the first few examples – given that this use case is still not super frequent.\r\n\r\nHappy to hear anyone's thoughts on this", "@aalloul I cross-checked the embeddings from the TFhub version and the transformers compatible versions we uploaded and they are almost identical. This was on a corpus of around 50k sentences across 5 different languages. Please feel free to test them out on the Tatoeba dataset on all 100 languages. I might not be able to do that myself right now.", "@aalloul @dakshvar22 \r\nI tested https://huggingface.co/sentence-transformers/LaBSE on the Tatoeba dataset on all 100+ languages and the performances were comparable to the TFHub model (+/- 0.1 accuracy for some languages due to different padding and numerical stability in pytorch vs. tensorflow)", "ah nice, thanks @nreimers for letting us know! I'll have a look at it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Late to this thread (and noticed it existed after publishing the model), but I ported the model from TF Hub and uploaded it here: https://huggingface.co/setu4993/LaBSE\r\n\r\nAdditionally, my code to port it, alongside tests that verify the embeddings generated by the source TF Hub model and the ported PyTorch model (uploaded above) are in my repo: https://github.com/setu4993/convert-labse-tf-pt\r\n\r\nShould be easy to extend it / add other tests and verify the embeddings match, if someone is interested. I haven't run tests on downstream performance, though." ]
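A minimal usage sketch for the `sentence-transformers/LaBSE` checkpoint discussed above (sentences and batch size are illustrative). `encode` batches the input internally, which is why it sidesteps the OOM reported when thousands of sentences are fed to the raw model in a single forward pass:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")
sentences = ["This is a test.", "Ceci est un test.", "Dies ist ein Test."]
# Internally split into batches of 32, so large corpora fit in GPU memory.
embeddings = model.encode(sentences, batch_size=32, show_progress_bar=True)
print(embeddings.shape)  # (3, 768): one 768-dimensional vector per sentence
```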
1,598
1,610
1,610
NONE
null
# 🌟 New model addition

## Model description
Google released a new model, LaBSE, a language-agnostic BERT sentence embedding model supporting 109 languages.
https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html

The model is also hosted on TF Hub: https://tfhub.dev/google/LaBSE/1

## Open source status
* [ ] the model implementation is available: (https://arxiv.org/abs/2007.01852)
* [ ] the model weights are available: https://tfhub.dev/google/LaBSE/1
* [ ] who are the authors: (mention them, if possible by @gh-username)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6751/reactions", "total_count": 24, "+1": 14, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 3, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6751/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6750
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6750/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6750/comments
https://api.github.com/repos/huggingface/transformers/issues/6750/events
https://github.com/huggingface/transformers/issues/6750
686,536,570
MDU6SXNzdWU2ODY1MzY1NzA=
6,750
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "## System Info\r\nDebian 10\r\nPytorch: 1.6.0\r\nTransformers: 3.0.2\r\nPython: 3.7.8\r\nPretrained Model: AlbertPreTrainedModel (albert-base-v2)\r\nPretrained Tokenizer: AlbertTokenizer (albert-base-v2)\r\n\r\n## Question\r\nI'm getting the same error, but when trying to evaluate the model after training.\r\n```python\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-39-7c7016f6f03e> in <module>\r\n----> 1 res = trainer.evaluate(val_dataset)\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in evaluate(self, eval_dataset)\r\n 743 eval_dataloader = self.get_eval_dataloader(eval_dataset)\r\n 744 \r\n--> 745 output = self._prediction_loop(eval_dataloader, description=\"Evaluation\")\r\n 746 \r\n 747 self._log(output.metrics)\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in _prediction_loop(self, dataloader, description, prediction_loss_only)\r\n 834 preds = logits.detach()\r\n 835 else:\r\n--> 836 preds = torch.cat((preds, logits.detach()), dim=0)\r\n 837 if inputs.get(\"labels\") is not None:\r\n 838 if label_ids is None:\r\n\r\nRuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated\r\n```\r\nThe values of `preds` and logits at this point are:\r\n```\r\nipdb> preds\r\ntensor(0.4661, device='cuda:0')\r\nipdb> logits\r\ntensor(0.4578, device='cuda:0')\r\n```\r\nReplacing `torch.cat` with `torch.stack` seemed to do the job, is there a reason for using `torch.cat` here?\r\n```\r\nipdb> torch.stack((preds, logits.detach()), dim=0)\r\ntensor([0.4661, 0.4578], device='cuda:0')\r\n```\r\nThese are my training arguments and trainer:\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir='./results',\r\n num_train_epochs=1,\r\n per_device_train_batch_size=16,\r\n per_device_eval_batch_size=64,\r\n warmup_steps=500,\r\n weight_decay=0.01,\r\n logging_dir='./logs',\r\n logging_steps=10,\r\n)\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=train_dataset,\r\n compute_metrics=compute_metrics,\r\n)\r\n```", "Same issue", "I am getting the same issue. CC: @sgugger", "Same issue!", "It's not that we don't want to fix that issue but no one as given us a reproducer and it was linked to a version of transformers that is now quite old. So please do let us know if the error persists on v3.5.1 (after upgrading transformers) and on which script.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Found the same issue on `transformers==3.4.0`. But after upgrading to `transformers==4.2.2` the problem fixed. FYI.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.", "Facing the same issue with > 4.2, now the issue is File \"***/lib/python3.6/site-packages/transformers/trainer_pt_utils.py\", line 48, in torch_pad_and_concatenate\r\n if len(tensor1.shape) == 1 or tensor1.shape[1] == tensor2.shape[1]:\r\nIndexError: tuple index out of range" ]
1,598
1,637
1,614
NONE
null
## System Info Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Tokenizers: 0.8.1rc1 Python: 3.7.6 Pretrained Model: GPT2 Pretrained Tokenizer: GPT2 ## Question I'm getting the following error when I try to evaluate a model while it's training: ```python wandb: Waiting for W&B process to finish, PID 242876 Traceback (most recent call last): File "run_finetune_gpt2.py", line 163, in <module> main() File "run_finetune_gpt2.py", line 150, in main trainer.train() File "/path/to/venvs/my_venv/lib/python3.6/site-packages/transformers/trainer.py", line 537, in train self.evaluate() File "/path/to/venvs/my_venv/lib/python3.6/site-packages/transformers/trainer.py", line 745, in evaluate output = self._prediction_loop(eval_dataloader, description="Evaluation") File "/path/to/venvs/my_venv/lib/python3.6/site-packages/transformers/trainer.py", line 838, in _prediction_loop preds = torch.cat((preds, logits.detach()), dim=0) RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated ``` I printed out the `preds` and `logits` shapes inside `trainer.py`: ```python preds.shape = torch.Size([]) logits.shape = torch.Size([]) ``` I can't recall exactly how my `TrainingArguments` object was set up but I think it was something like this: ```python training_args = TrainingArguments( output_dir=output_dir, do_train=True, evaluate_during_training=True, eval_steps=5, logging_dir=logging_dir, per_device_train_batch_size=32, num_train_epochs=1, save_steps=100) trainer = Trainer( model=model, args=training_args, train_dataset=sd_dataset_train, eval_dataset=sd_dataset_test, data_collator=sd_data_collator) ``` I was using `GPT2LMHeadModel`. I tried to put together a code sample but couldn't reproduce the error. Has anyone run into this before when trying to evaluate while training?
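The traceback above shows that both `preds` and `logits` are zero-dimensional scalars, and `torch.cat` requires inputs with at least one dimension. A minimal standalone sketch of the failure and the two usual workarounds — this illustrates the PyTorch behavior only; the actual fix inside `Trainer` may differ:

```python
import torch

preds = torch.tensor(0.4661)   # zero-dimensional: shape == torch.Size([])
logits = torch.tensor(0.4578)

# torch.cat needs inputs with at least one dimension, so this reproduces it:
#   torch.cat((preds, logits), dim=0)
#   -> RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated

# Workaround 1: stack the scalars along a new dimension.
stacked = torch.stack((preds, logits), dim=0)  # tensor([0.4661, 0.4578])

# Workaround 2: give each scalar an explicit dimension, then concatenate.
catted = torch.cat((preds.reshape(1), logits.reshape(1)), dim=0)
```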
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6750/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6750/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6749
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6749/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6749/comments
https://api.github.com/repos/huggingface/transformers/issues/6749/events
https://github.com/huggingface/transformers/issues/6749
686,525,378
MDU6SXNzdWU2ODY1MjUzNzg=
6,749
RuntimeError: grad can be implicitly created only for scalar outputs
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "When the model receives inputs that include the labels, it's supposed to produce a tuple of (loss, predictions), where the loss is a scalar. The trainer then uses the loss to calculate the gradients. In this case (or at least in my case when I get a similar error) the trainer appears to be trying to use the predictions not the loss to calculate the gradient. This appears to be because the model is not receiving the 'labels' as input and so is only producing a one tuple of (predictions). You should be able to fix it by passing a value for \"labels\" in your collator. See for example transformers.DataCollatorForLanguageModeling.", "For me, I am getting the same error because the model I choose does not return loss even though I pass labels. It's better to check the model documentation you are using whether model forward() return loss or not. This is the snapshot of BertModel (Model which I choose first) forward() returns. Which does not return any loss value.\r\n![image](https://user-images.githubusercontent.com/47693507/96410535-e40c9400-1208-11eb-95aa-df4f58928932.png)\r\nAnd this is the snapshot of BertModelLMHeadModel (Model which I choose later) forward() returns. Which return loss value.\r\n![image](https://user-images.githubusercontent.com/47693507/96410933-80cf3180-1209-11eb-8ef1-19effe5ea93a.png)\r\n", "@ameasure @MojammelHossain Thank you both for your feedback! Checking the GPT2 documentation showed me an example of what I could set the `labels` value to in my collator." ]
1,598
1,603
1,603
NONE
null
## System Info Pop!_OS 20.04 Pytorch: 1.5.1 Transformers: 3.0.2 Tokenizers: 0.8.1rc1 Python: 3.7.6 Pretrained Model: GPT2 Pretrained Tokenizer: GPT2 ## Question I'm getting the following error and I'm not sure how to resolve it: ```python Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.0.attn.masked_bias', 'h.1.attn.masked_bias', 'h.2.attn.masked_bias', 'h.3.attn.masked_bias', 'h.4.attn.masked_bias', 'h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.7.attn.masked_bias', 'h.8.attn.masked_bias', 'h.9.attn.masked_bias', 'h.10.attn.masked_bias', 'h.11.attn.masked_bias', 'lm_head.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Epoch: 0%| | 0/1 [00:00<?, ?it/s] Iteration: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last): File "/path/to/misc_tests.py", line 78, in <module> trainer.train() File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/path/to/anaconda3/lib/python3.7/site-packages/transformers/trainer.py", line 637, in _training_step loss.backward() File "/path/to/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 94, in backward grad_tensors = _make_grads(tensors, grad_tensors) File "/path/to/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 35, in _make_grads raise RuntimeError("grad can be implicitly created only for scalar outputs") RuntimeError: grad can be implicitly created only for scalar outputs ``` Here's some sample code: ```python from transformers import Trainer, TrainingArguments, GPT2LMHeadModel, GPT2Tokenizer import torch from torch.utils.data import Dataset class SDAbstractsDataset(Dataset): def __init__(self, set_type_str): if set_type_str == 'train': prompt1 = 'We present an update on the results of the Double Chooz experiment. Double Chooz searches for the neutrino mixing angle, θ13, in the three-neutrino mixing matrix via the disappearance of produced by the dual 4.27 GW/th Chooz B Reactors. Here we discuss updated oscillation fit results using both the rate and the shape of the anti-neutrino energy spectrum. In the most recent oscillation analysis we included data with neutron captures on Gadolinium and Hydrogen along with the reactor off data that we collected. This is an important step in our multi-year program to establish the value of θ13.' prompt2 = 'The paper covers detailed discussion on novel control system developed for adaptive fluid-based shock-absorbers serving for mitigation of unknown impact excitations. In order to provide complete independence of the control system from the loading conditions, the Hybrid Prediction Control (HPC) was elaborated. The proposed method is an extension of previously introduced kinematic feedback control which ensures optimal path finding, tracking and path update in case of high disturbance or sudden change of loading conditions. Implementation of the presented control system allows to obtain self-adaptive fluid-based absorbers providing robust impact mitigation. In contrast to previously developed methods of Adaptive Impact Absorption, the proposed control strategy does not require prior knowledge of impact excitation or its preliminary identification. 
The independence of applied control system from parameters of impact loading results in the capability of automatic path correction in the case of disturbance occurrence and re-adaptation to a number of subsequent impacts. The successful operation of the self-adaptive system is investigated with the use of numerical examples involving double-chamber pneumatic shock-absorber equipped with controllable valve. Efficiency of the HPC is proved by comparison with passive absorber as well as device equipped with adaptive and optimal control modules.' prompt3 = 'This study aimed to produce biosurfactant from Pseudozyma tsukubaensis using cassava wastewater and an inoculum (biomass) for galactooligosaccharides synthesis from lactose as an integrated system. First, the use of cassava wastewater as a low cost culture medium by P. tsukubaensis to produce biomass and biosurfactant was evaluated and optimized. Then, the microbial cells (biomass) obtained from the optimized process were used to produce galactooligosaccharides from lactose. The optimum conditions for biosurfactant and biomass synthesis were found to be 80% (v/v) of cassava wastewater at 30°C and 200rpm for 48h. The highest concentration of biosurfactant, that is, minimum surface tension value and maximum biomass concentration predicted were experimentally confirmed as 26.87mN/m and 10.5g/L, respectively. The biosurfactant obtained showed good thermal (121°C/1h), pH (2–11) and ionic strength (0–25% NaCl) stability. Excellent emulsifier activity was also verified, suggesting a potential application in enhanced oil recovery. Galactooligosaccharides synthesized by the Kluyveromyces genus have been extensively investigated, however, few studies have reported transgalactosylation ability by other yeast genera. The transgalactosylation activity of the yeast biomass at optimized conditions from 40% (w/w) lactose resulted in galactooligosaccharides production of 73.12g/L and a yield of 18.28% (w/w) at pH 8.0 and 30°C in 24h. This research showed the technical feasibility of an integrated process: biosurfactant and GOS production from P. tsukubaensis, which takes advantage of the remarkable metabolism of this microorganism. To the best of our knowledge, this is the first study reporting the potential of P. tsukubaensis to produce two economical biotechnological products of increase interest as an integrated process.' prompt4 = 'Advantages of a fuzzy predictive control algorithm are discussed in the paper. The fuzzy predictive algorithm is a combination of a DMC (Dynamic Matrix Control) algorithm and Takagi–Sugeno fuzzy modeling, thus it inherits advantages of both techniques. The algorithm is numerically effective. It is in fact generalization of the standard DMC algorithm widely used in the industry, thus the existing implementations of the DMC algorithm can be extended using the presented fuzzy approach. A simple and easy to apply method of fuzzy predictive control algorithms synthesis is presented in the paper. It can be easy applied also in the case of Multiple Input Multiple Output (MIMO) control plants. Moreover, information about measured disturbance can be included in the algorithms in an easy way. The advantages of the fuzzy predictive control algorithm are demonstrated in the example control systems of two nonlinear chemical reactors: the first one—with inverse response and the second one—a MIMO plant with time delay.' prompt5 = 'BackgroundBack injury is a common place in our society. 
Up to two-thirds of back injuries have been associated with trunk rotation. However, the torque production ability with a rotated spine and electromyographic activity of trunk muscles in such efforts is poorly understood. Therefore, the objectives of this study are to study torque production capacity of variously rotated and flexed trunk and to measure the EMG of selected trunk muscles in these activities.MethodsNineteen normal young subjects (7 males and 12 females) were recruited. Subjects were stabilized on a posture-stabilizing platform and were instructed to assume a flexed and right rotated posture (20°, 40° and 60° of rotation and 20°, 40° and 60° of flexion) in a random order. The subjects were asked to exert their maximal voluntary contraction in the asymmetric plane of rotation–extension for a period of 5s. The surface EMG of the external and internal obliques, rectus abdominis, latissimus dorsi, erector spinae at the 10th thoracic and 3rd lumbar vertebral levels was recorded bilaterally along with the torque generated.FindingsWhereas the torque generated was significantly affected by both rotation and extension in both genders (P<0.001), the EMG was independent of rotation but affected by flexion in females only (P<0.01). The torques produced by both genders in each of the nine postures was significantly different from each other (P<0.001). The EMG demonstrated a trend of increase with increasing rotation and flexion. The response surfaces of normalized peak EMG of the right external oblique and internal oblique was somewhat similar, indicating a rotator torque and a stabilizing effect. The left latissimus dorsi and right external oblique provided the rotational torque and the right erector spinae provided the extensor effort. Since the rotation–extension was performed in the plane of asymmetry, the effort required the recruitment of muscles involved in left rotation, stability of rotated spine and an extensor effort.InterpretationThe torque production capacity of the human trunk is posture dependent and declines with increasing rotation. However, with increasing rotation and flexion, the magnitude of EMG increases. This implies that with increasing asymmetry, it requires more muscle effort (thus tissue stress) to generate less torque. Increasing asymmetry tends to weaken the system and may enhance chances of injury.' prompt6 = 'Orthogonal frequency division multiplexing (OFDM) is a promising candidate for light emitting diode (LED)-based optical wireless communication (OWC); however, precise channel estimation is required for synchronization and equalization. In this work, we study and discover that the channel response of the white-lightLED-based OWC was smooth and stable. Hence we propose and demonstrate using a specific and adaptive arrangement of grid-type pilot scheme to estimate the LED OWC channel response. Experimental results show that our scheme can achieve better transmission performance and with some transmission capacity enhancement when compared with the method using training-symbol scheme (also called block-type pilot scheme).' prompt7 = 'The catalytic activities of three nano-sized nickel catalysts Ni/Y2O3, Ni/La2O3 and Ni/Al2O3, using nickel oxalate as precursor and by impregnation–decomposition–reduced method, have been investigated for the reactions of steam reforming of ethanol at low temperature. Properties of structure and surface of catalysts were tested by XRD, XPS, XES, SEM and BET area. 
The initial reaction kinetics of ethanol over the catalysts was studied by steady-state reaction and a first-order reaction with respect to ethanol was found. It is found that the catalysts Ni/Y2O3 and Ni/La2O3 exhibit relative high activity for ethanol steam reforming at 250∘C with a conversion of ethanol of 81.9% and 80.7%, and a selectivity of hydrogen of 43.1% and 49.5%, respectively. When temperature reached 320∘C, the conversion of ethanol increased to 93.1% and 99.5% and the selectivity of hydrogen was 53.2% and 48.5%, respectively. The catalyst Ni/Al2O3 exhibits relative lower activity for ethanol steam reforming and hydrogen selectivity. However, the three catalysts all have long-term stability for ethanol steam reforming.' prompt8 = 'Exergetic and exergoeconomic analyses are often used to evaluate the performance of energy systems from the thermodynamic and economic points of view. While a conventional exergetic analysis can be used to recognize the sources of inefficiencies, the so-called advanced exergy-based analysis is convenient for identifying the real potential for thermodynamic improvements and the system component interactions by splitting the exergy destruction and the total operating cost within each component into endogenous/exogenous and unavoidable/avoidable parts. In this study for the first time an advanced exergoeconomic analysis is applied to a gas-engine-driven heat pump (GEHP) drying system used in food drying for evaluating its performance along with each component. The advanced exergoeconomic analysis shows that the unavoidable part of the exergy destruction cost rate within the components of the system is lower than the avoidable part. The most important components based on the total avoidable costs are drying ducts, the condenser and the expansion valve. The inefficiencies within the condenser could particularly be improved by structural improvements of the whole system and the remaining system components. Finally, it can be concluded that the internal design changes play a more essential role in determining the cost of each component.' self.data_list = [prompt1, prompt2, prompt3, prompt4, prompt5, prompt6, prompt7, prompt8] else: prompt1 = 'State estimation (SE) is well-established at the transmission system level of the electricity grid, where it has been in use for the last few decades and is a most vital component of energy management systems employed in the monitoring and control centers of electric transmission systems. However, its use for the monitoring and control of power distribution systems (DSs) has not yet been widely implemented because DSs have been majorly passive with uni-directional power flows. This scenario is now changing with the advent of smart grid, which is changing the nature of electric distribution networks by embracing more dispersed generation, demand responsive loads, and measurements devices with different data rates. Thus, the development of distribution system state estimation (DSSE) tool is inevitable for the implementation of protection, optimization, and control techniques, and various other features envisioned by the smart grid concept. Due to the inherent characteristics of DS different from those of transmission systems, transmission system state estimation (TSSE) is not applicable directly to DSs. This paper is an attempt to present the state-of-the-art on DSSE as an enabler function for smart grid features. 
It broadly reviews the development of DSSE, challenges faced by its development, and various DSSE algorithms. Additionally, it identifies some future research lines for DSSE.' prompt2 = 'A solar-assisted absorption heat transformer (SAAHT) is a useful substitute for the conventional equipment to generate low-pressure steam. A 100kW steam generation system with a SAAHT in Langfang (China) is evaluated in this study. Hourly thermodynamic performance, including system efficiency, exergy efficiency, CO2 emission reduction rate and output heat in typical days in four seasons, is discussed. Results show that ambient temperature has a smaller effect on system performance than solar irradiation. In any one of the typical days in spring, summer and autumn, the system presents higher output heat and CO2 emission reduction rate, more stable system efficiency and exergy efficiency than those in winter. Comparative results from two methods show that ratio method has higher system efficiency with solar irradiation below 600W/m2. A hybrid method combining both the degree method and ratio method is adopted to work with the off-design condition, and results show that performance improvement for system is not so obvious as that in solo absorption heat transformer. Among the four typical days, the most obvious improvement occurs in summer with cumulative output heat increasing from 1318kWh to 1343kWh, and the CO2 emission reduction increasing from 296kg to 301kg.' prompt3 = 'The European Commission is encouraging the Cement, Lime and Magnesium Oxide Manufacturing Industries to reutilize collected particulate matter or wastes in the emission control of SO2 with a 100% removal efficiency. Following this directive, three different by-products from the calcination of natural magnesite were selected in order to evaluate their desulfurization capacity. The saturation time, defined as the time for the total neutralization of SO2 was used to determine consumption values at laboratory scale with 100% removal efficiency. The by-product LG-MgO (∼68% MgO) presented the lowest consumption value, with 2.9kg per m3 of SO2, three times the corresponding to the widely used high grade Ca(OH)2. The liquid-to-gas (L/G) ratio was used for comparison to the industry and taking this into account, the final pH range before achieving saturation was 5.1–6.3. The residual solids obtained at the end of the process were mainly composed of unreacted magnesium and calcium compounds and reaction products CaSO4·2H2O and MgSO4·6H2O which can be used as fertilizers. Therefore, the reutilization of these by-products in a wet flue gas desulfurization process is a feasible and sustainable choice that allows extending their life-cycle.' prompt4 = 'The GP Joule Group is starting the biggest ‘green’ hydrogen mobility project in Germany so far, close to the northern border with Denmark. The eFarm project will create a modular hydrogen infrastructure, covering production and processing through to utilisation in a number of hydrogen-powered vehicles.' 
self.data_list = [prompt1, prompt2, prompt3, prompt4] def __len__(self): return len(self.data_list) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() abstract_text = self.data_list[idx] return abstract_text def sd_data_collator(dataset_samples_list): tokenizer = GPT2Tokenizer.from_pretrained('gpt2', padding_side='right') tokenizer.pad_token = tokenizer.eos_token encoded_results = tokenizer(dataset_samples_list, padding=True, truncation=True, return_tensors='pt', return_attention_mask=True) batch = {} batch['input_ids'] = torch.stack([result for result in encoded_results['input_ids']]) batch['past'] = None batch['attention_mask'] = torch.stack([result for result in encoded_results['attention_mask']]) batch['position_ids'] = None batch['head_mask'] = None batch['inputs_embeds'] = None batch['labels'] = None batch['use_cache'] = True return batch output_dir = 'TMP_DIR' logging_dir = 'TMP_DIR' training_args = TrainingArguments( output_dir=output_dir, logging_dir=logging_dir, do_train=True, per_device_train_batch_size=2, num_train_epochs=1, ) model = GPT2LMHeadModel.from_pretrained('gpt2') train_dataset = SDAbstractsDataset('train') trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, data_collator=sd_data_collator ) trainer.train() ```
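As the comments above point out, this collator returns `batch['labels'] = None`, so `GPT2LMHeadModel` only produces (non-scalar) logits and `loss.backward()` fails. A hedged sketch of a collator that also supplies labels so the model computes a scalar language-modeling loss — the collator name and the `-100` padding mask are my own additions, not part of the original report:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.pad_token = tokenizer.eos_token

def lm_data_collator(dataset_samples_list):
    encoded = tokenizer(dataset_samples_list, padding=True, truncation=True,
                        return_tensors='pt', return_attention_mask=True)
    labels = encoded['input_ids'].clone()
    labels[encoded['attention_mask'] == 0] = -100  # ignore pad positions in the loss
    return {
        'input_ids': encoded['input_ids'],
        'attention_mask': encoded['attention_mask'],
        'labels': labels,  # GPT2LMHeadModel shifts labels internally
    }
```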
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6749/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6749/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6748/comments
https://api.github.com/repos/huggingface/transformers/issues/6748/events
https://github.com/huggingface/transformers/pull/6748
686,524,053
MDExOlB1bGxSZXF1ZXN0NDc0MDc2MDc1
6,748
Pin code quality dependencies: black,isort,flake8
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=h1) Report\n> Merging [#6748](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6748/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6748 +/- ##\n==========================================\n- Coverage 78.96% 78.78% -0.19% \n==========================================\n Files 157 157 \n Lines 28486 28486 \n==========================================\n- Hits 22495 22442 -53 \n- Misses 5991 6044 +53 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+3.25%)` | :arrow_up: |\n| ... 
and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6748/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=footer). Last update [a75c64d...9ca2abb](https://codecov.io/gh/huggingface/transformers/pull/6748?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Someone else did it!" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Goal: Prevent dependencies from causing transformers CI to be in style violation without transformers code changing. Tested: all isort 5.4 versions and all flake8 3.8 versions work identically well. Changing black at all causes bad results. I pinned to the tested range.
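For illustration, pins like these would live in the quality/dev extras of `setup.py`. The exact version bounds chosen in the merged PR are not visible in this record, so the versions below are assumptions:

```python
# Hypothetical excerpt from setup.py — illustrative ranges only.
extras["quality"] = [
    "black==20.8b1",       # changing black at all gave bad results, so pin exactly
    "isort>=5.4.2,<5.5",   # all isort 5.4.x releases behaved identically
    "flake8>=3.8.0,<3.9",  # all flake8 3.8.x releases behaved identically
]
```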
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6748/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6748", "html_url": "https://github.com/huggingface/transformers/pull/6748", "diff_url": "https://github.com/huggingface/transformers/pull/6748.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6748.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6747/comments
https://api.github.com/repos/huggingface/transformers/issues/6747/events
https://github.com/huggingface/transformers/pull/6747
686,445,365
MDExOlB1bGxSZXF1ZXN0NDc0MDExNTIw
6,747
Add checkpointing to Ray Tune HPO
{ "login": "krfricke", "id": 14904111, "node_id": "MDQ6VXNlcjE0OTA0MTEx", "avatar_url": "https://avatars.githubusercontent.com/u/14904111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krfricke", "html_url": "https://github.com/krfricke", "followers_url": "https://api.github.com/users/krfricke/followers", "following_url": "https://api.github.com/users/krfricke/following{/other_user}", "gists_url": "https://api.github.com/users/krfricke/gists{/gist_id}", "starred_url": "https://api.github.com/users/krfricke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/krfricke/subscriptions", "organizations_url": "https://api.github.com/users/krfricke/orgs", "repos_url": "https://api.github.com/users/krfricke/repos", "events_url": "https://api.github.com/users/krfricke/events{/privacy}", "received_events_url": "https://api.github.com/users/krfricke/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=h1) Report\n> Merging [#6747](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02d09c8fcc6bda2c345c84cec53289abbe7532ac?el=desc) will **decrease** coverage by `0.82%`.\n> The diff coverage is `8.33%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6747/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6747 +/- ##\n==========================================\n- Coverage 79.01% 78.18% -0.83% \n==========================================\n Files 157 157 \n Lines 28739 28782 +43 \n==========================================\n- Hits 22707 22503 -204 \n- Misses 6032 6279 +247 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `59.57% <0.00%> (-4.87%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.57% <7.14%> (+0.45%)` | :arrow_up: |\n| [src/transformers/integrations.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9pbnRlZ3JhdGlvbnMucHk=) | `31.11% <9.09%> (-34.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |\n| [src/transformers/tokenization\\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.90% <0.00%> (-0.14%)` | :arrow_down: |\n| ... 
and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6747/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=footer). Last update [02d09c8...7488b03](https://codecov.io/gh/huggingface/transformers/pull/6747?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "```diff\r\ndiff --git a/src/transformers/trainer.py b/src/transformers/trainer.py\r\nindex 3470a473..acf1503c 100755\r\n--- a/src/transformers/trainer.py\r\n+++ b/src/transformers/trainer.py\r\n@@ -544,7 +544,8 @@ class Trainer:\r\n if trial.should_prune():\r\n raise optuna.TrialPruned()\r\n elif self.hp_search_backend == HPSearchBackend.RAY:\r\n- self._tune_save_checkpoint()\r\n+ if self.global_step % self.args.save_steps == 0:\r\n+ self._tune_save_checkpoint()\r\n tune.report(objective=self.objective, **metrics)\r\n\r\n def _tune_save_checkpoint(self):\r\n@@ -911,6 +912,8 @@ class Trainer:\r\n # search.\r\n _tb_writer = self.tb_writer\r\n self.tb_writer = None\r\n+ _model = self.model\r\n+ self.model = None\r\n # Setup default `resources_per_trial` and `reporter`.\r\n if \"resources_per_trial\" not in kwargs and self.args.n_gpu > 0:\r\n n_jobs = int(kwargs.pop(\"n_jobs\", 1))\r\n```\r\n\r\nThis allows us to:\r\n\r\n1. Not die when tuning BERT and\r\n2. Not be dominated by saving latency.", "Thanks for your suggestions. I moved the bulk of the hp search code to `integrations`, including the objective, since it depends on the search space. Is this what you had in mind?", "Yes. I think we can split the function in two: one for ray, one for optuna and avoid a lot of tests this way to have some cleaner code (with a small duplication in the _objective function). I can do it in a separate PR if you want.", "That would be great, thanks!", "Merging and will follow up then." ]
1,598
1,611
1,598
CONTRIBUTOR
null
Saves checkpoints in Ray Tune HPO, enabling advanced schedulers like PBT.
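For context, a minimal sketch of the checkpoint-and-report pattern that Ray Tune's function API (the 1.x-era `tune.checkpoint_dir` interface) expects from a trainable, which is roughly what this PR wires into the `Trainer`; `train_one_step` is a placeholder for the real training step:

```python
import os
from ray import tune

def train_one_step(config):
    return 0.0  # placeholder objective

def trainable(config, checkpoint_dir=None):
    step = 0
    if checkpoint_dir:  # PBT can restart a trial from another trial's checkpoint
        with open(os.path.join(checkpoint_dir, "step.txt")) as f:
            step = int(f.read())
    while step < 100:
        step += 1
        objective = train_one_step(config)
        if step % config.get("save_steps", 10) == 0:
            with tune.checkpoint_dir(step=step) as ckpt_dir:
                with open(os.path.join(ckpt_dir, "step.txt"), "w") as f:
                    f.write(str(step))
        tune.report(objective=objective)
```

Gating saves on `save_steps`, as in the diff quoted above, keeps checkpointing from dominating the runtime.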
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6747/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6747", "html_url": "https://github.com/huggingface/transformers/pull/6747", "diff_url": "https://github.com/huggingface/transformers/pull/6747.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6747.patch", "merged_at": 1598899126000 }
https://api.github.com/repos/huggingface/transformers/issues/6746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6746/comments
https://api.github.com/repos/huggingface/transformers/issues/6746/events
https://github.com/huggingface/transformers/pull/6746
686,442,369
MDExOlB1bGxSZXF1ZXN0NDc0MDA5MDE4
6,746
[s2s] run_eval.py QOL improvements and cleanup
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=h1) Report\n> Merging [#6746](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6746/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6746 +/- ##\n==========================================\n- Coverage 78.96% 78.93% -0.04% \n==========================================\n Files 157 157 \n Lines 28486 28486 \n==========================================\n- Hits 22495 22485 -10 \n- Misses 5991 6001 +10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=footer). Last update [a75c64d...0b9f1cd](https://codecov.io/gh/huggingface/transformers/pull/6746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
- default to saving metrics to `metrics.json` - cleanup: delete some dead code. - save seconds_per_sample, runtime, num_samples
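A rough sketch of the bookkeeping this describes — timing generation and writing the metrics dictionary to `metrics.json`. The key names mirror the PR description, but `generate` and `score_fn` are placeholders and the real `run_eval.py` schema may differ:

```python
import json
import time

def run_generate(examples, references):
    start = time.time()
    outputs = [generate(x) for x in examples]   # placeholder generation call
    runtime = time.time() - start
    metrics = {
        "bleu": score_fn(outputs, references),  # placeholder metric function
        "n_obs": len(examples),
        "runtime": round(runtime, 4),
        "seconds_per_sample": round(runtime / len(examples), 4),
    }
    with open("metrics.json", "w") as f:
        json.dump(metrics, f, indent=2)
    return metrics
```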
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6746/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6746", "html_url": "https://github.com/huggingface/transformers/pull/6746", "diff_url": "https://github.com/huggingface/transformers/pull/6746.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6746.patch", "merged_at": 1598482761000 }
https://api.github.com/repos/huggingface/transformers/issues/6745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6745/comments
https://api.github.com/repos/huggingface/transformers/issues/6745/events
https://github.com/huggingface/transformers/issues/6745
686,394,108
MDU6SXNzdWU2ODYzOTQxMDg=
6,745
[s2s] run_eval saves samples/second
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
It's easy to compute and worth noting, since we are already saving a dictionary.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6745/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6744/comments
https://api.github.com/repos/huggingface/transformers/issues/6744/events
https://github.com/huggingface/transformers/pull/6744
686,270,468
MDExOlB1bGxSZXF1ZXN0NDczODY1MDQ3
6,744
[pipelines] Text2TextGenerationPipeline
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=h1) Report\n> Merging [#6744](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/42fddacd1cac3cc57c3326aa51a409f5090b1261?el=desc) will **increase** coverage by `1.13%`.\n> The diff coverage is `96.15%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6744/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6744 +/- ##\n==========================================\n+ Coverage 78.47% 79.60% +1.13% \n==========================================\n Files 157 157 \n Lines 28569 28595 +26 \n==========================================\n+ Hits 22420 22764 +344 \n+ Misses 6149 5831 -318 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `80.46% <96.15%> (+0.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |\n| [src/transformers/modeling\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.01% <0.00%> (-2.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.05% <0.00%> (-0.35%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (-0.28%)` | :arrow_down: |\n| ... 
and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6744/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=footer). Last update [42fddac...201c854](https://codecov.io/gh/huggingface/transformers/pull/6744?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks @LysandreJik ! It was already in the doc at the bottom, now moved it below `TextGenerationPipeline`", "failing test doesn't look related to this PR" ]
1,598
1,599
1,599
MEMBER
null
This PR adds `Text2TextGenerationPipeline`, as discussed in #4411 <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #4411 @patrickvonplaten, @LysandreJik cc @enzoampil :))
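A minimal usage sketch of the pipeline this PR adds; the task string and the `t5-small` checkpoint are illustrative assumptions, not taken from the PR diff:

```python
from transformers import pipeline

# Hypothetical usage of Text2TextGenerationPipeline; task name and model
# are assumptions for illustration.
text2text = pipeline("text2text-generation", model="t5-small")
print(text2text("translate English to German: The house is wonderful."))
```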
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6744/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6744", "html_url": "https://github.com/huggingface/transformers/pull/6744", "diff_url": "https://github.com/huggingface/transformers/pull/6744.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6744.patch", "merged_at": 1599046476000 }
https://api.github.com/repos/huggingface/transformers/issues/6743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6743/comments
https://api.github.com/repos/huggingface/transformers/issues/6743/events
https://github.com/huggingface/transformers/issues/6743
686,266,479
MDU6SXNzdWU2ODYyNjY0Nzk=
6,743
BART for Pre-Training
{ "login": "swashiro", "id": 19884938, "node_id": "MDQ6VXNlcjE5ODg0OTM4", "avatar_url": "https://avatars.githubusercontent.com/u/19884938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swashiro", "html_url": "https://github.com/swashiro", "followers_url": "https://api.github.com/users/swashiro/followers", "following_url": "https://api.github.com/users/swashiro/following{/other_user}", "gists_url": "https://api.github.com/users/swashiro/gists{/gist_id}", "starred_url": "https://api.github.com/users/swashiro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swashiro/subscriptions", "organizations_url": "https://api.github.com/users/swashiro/orgs", "repos_url": "https://api.github.com/users/swashiro/repos", "events_url": "https://api.github.com/users/swashiro/events{/privacy}", "received_events_url": "https://api.github.com/users/swashiro/received_events", "type": "User", "site_admin": false }
[ { "id": 1834053007, "node_id": "MDU6TGFiZWwxODM0MDUzMDA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)", "name": "Ex: LM (Pretraining)", "color": "76FFAF", "default": false, "description": "Related to language modeling pre-training" }, { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
closed
false
null
[]
[ "This should help: https://github.com/huggingface/transformers/issues/5096#issuecomment-645860271", "@sshleifer - think this is the 3rd issue about Bart pre-training -> maybe it would be a good idea to release a small notebook at some point.", "@patil-suraj you took a stab at this at some point? [this](https://github.com/huggingface/transformers/issues/5096#issuecomment-645848176) may have been optimistic :( ", "Yes, I was trying to port fairseq dataset here, same for t5, I'll try to focus more on it when I'm done with current PRs, should strat with a notebook as Patrick said, then try to include it in examples/", "@patrickvonplaten Does that mean I can train with Masked-input, input(label) and Decoder-input?", "yes, this should be possible", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@patil-suraj any news on the pretraining script for Bart?", "If anyone wants to train their MBART model then feel free to use this.\r\nhttps://github.com/prajdabre/yanmtt\r\n\r\nContributions are welcome!", "@patil-suraj excuse me, is there any news on the pretraining script for Bart? Thanks.", "@thomas-li-sjtu you can try my toolkit if you like. It's based on transformers and allows for Bart/mbart pretraining. https://github.com/prajdabre/yanmtt", "> @thomas-li-sjtu you can try my toolkit if you like. It's based on transformers and allows for Bart/mbart pretraining. https://github.com/prajdabre/yanmtt\r\n\r\nHi there, here is my problem. I hope to pretrain a bart model based on my own dataset and fine tune it for another task (not nmt). I noticed that your toolkit designs for nmt so maybe it is not the one I need. Anyway, thanks for your reply!", "@thomas-li-sjtu ok I understand. It's not just designed for NMT (despite its name). I've used it for summarisation and general NLG without problems. Good luck with your search.", "> @thomas-li-sjtu ok I understand. It's not just designed for NMT (despite its name). I've used it for summarisation and general NLG without problems. Good luck with your search.\r\n\r\nWow that is awesome. I will try it for my task!", "@thomas-li-sjtu cool. Feel free to raise issues as it helps me add new functionality that may be of use to people. If you want to know how to use it for summarisation (or generic nlg) then look here: https://github.com/AI4Bharat/indic-bart", "Sorry to only come back to this issue now. If anyone is interested in adding this example script in `Transformers`, I would be more than happy to help :) \r\n\r\nFor BART pre-training we need the text-infilling + sentence-permutation data collator which you could find here https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223\r\n\r\nWith this collator you could then modify and use `run_summarization.py` script here https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization. \r\n\r\nLet me know if anyone is interested. :) cc @patrickvonplaten \r\n\r\n", "> Sorry to only come back to this issue now. 
If anyone is interested in adding this example script in `Transformers`, I would be more than happy to help :)\r\n> \r\n> For BART pre-training we need the text-infilling + sentence-permutation data collator which you could find here https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223\r\n> \r\n> With this collator you could then modify and use `run_summarization.py` script here https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization.\r\n> \r\n> Let me know if anyone is interested. :) cc @patrickvonplaten\r\n\r\nI think the BART pre-training script is very useful for my work and many others. It is generous of you to add this example script in 'Transfromers' !!!", "> Sorry to only come back to this issue now. If anyone is interested in adding this example script in `Transformers`, I would be more than happy to help :)\r\n> \r\n> For BART pre-training we need the text-infilling + sentence-permutation data collator which you could find here https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223\r\n> \r\n> With this collator you could then modify and use `run_summarization.py` script here https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization.\r\n> \r\n> Let me know if anyone is interested. :) cc @patrickvonplaten\r\n\r\nThanks for your reply and I think your method is absolutely feasible. But when I try it , I faced some errors that I can't fix. And could you please give me some help?\r\nHere is my changes to `run_summarization.py`(tag 4.11.0)\r\n\r\n1. Import some necessary packages in [https://github.com/morganmcg1/rotobart/blob/main/data_collator.py#L223](url)\r\n2. Add full codes of `DataCollatorForDenoisingTasks` and also let class `DataCollatorForDenoisingTasks` inherit class `DataCollatorForSeq2Seq` in this way: `class DataCollatorForDenoisingTasks(DataCollatorForSeq2Seq):`\r\n3. 
Use the new collator: `data_collator = DataCollatorForSeq2Seq(......)` -> `data_collator = DataCollatorForDenoisingTasks(.......)`\r\n\r\nRun the changed script and I get errors below.\r\n\r\nTraceback (most recent call last):\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/IPython/core/interactiveshell.py\", line 3457, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-2-991cbc10c55c>\", line 1, in <module>\r\n runfile('/data/whq/tmp/SBartTry/fineBartPretrain.py', args=['--model_name_or_path', 'facebook/bart-base', '--do_train', '--do_eval', '--train_file', '/data/whq/tmp/SBartTry/tryData/clickbait_train.csv', '--validation_file', '/data/whq/tmp/SBartTry/tryData/clickbait_valid.csv', '--source_prefix', '', '--num_train_epochs=3', '--output_dir', '/data/whq/tmp/SBartTry/fineBartPretrain/clickbait', '--overwrite_output_dir', '--per_device_train_batch_size=16', '--per_device_eval_batch_size=16', '--predict_with_generate'], wdir='/data/whq/tmp/SBartTry')\r\n File \"/home/whq/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py\", line 198, in runfile\r\n pydev_imports.execfile(filename, global_vars, local_vars) # execute the script\r\n File \"/home/whq/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/data/whq/tmp/SBartTry/fineBartPretrain.py\", line 823, in <module>\r\n main()\r\n File \"/data/whq/tmp/SBartTry/fineBartPretrain.py\", line 745, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/trainer.py\", line 1325, in train\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/trainer.py\", line 1884, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/trainer.py\", line 1916, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 168, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 178, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 86, in parallel_apply\r\n output.reraise()\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/_utils.py\", line 434, in reraise\r\n raise exception\r\nTypeError: Caught TypeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py\", line 61, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File 
\"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py\", line 1336, in forward\r\n return_dict=return_dict,\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py\", line 1200, in forward\r\n return_dict=return_dict,\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/whq/anaconda3/envs/pytorchenv/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py\", line 769, in forward\r\n input_shape = input_ids.size()\r\nTypeError: 'int' object is not callable\r\n\r\nWaiting for your generous reply! @patil-suraj ", "@Eurus-W make sure you convert the numpy arrays in the batch returned by `data_collator()` into tensors.\r\n`batch[\"input_ids\"] = torch.LongTensor(batch[\"input_ids\"])`, for example." ]
1,598
1,659
1,659
NONE
null
# ❓ Questions & Help How can I run BART pre-training? I have data to pre-training(Masked LM)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6743/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6742/comments
https://api.github.com/repos/huggingface/transformers/issues/6742/events
https://github.com/huggingface/transformers/issues/6742
686,242,226
MDU6SXNzdWU2ODYyNDIyMjY=
6,742
How to generate sentences in batches, instead of generating sentences one by one
{ "login": "SuHe36", "id": 22442305, "node_id": "MDQ6VXNlcjIyNDQyMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/22442305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SuHe36", "html_url": "https://github.com/SuHe36", "followers_url": "https://api.github.com/users/SuHe36/followers", "following_url": "https://api.github.com/users/SuHe36/following{/other_user}", "gists_url": "https://api.github.com/users/SuHe36/gists{/gist_id}", "starred_url": "https://api.github.com/users/SuHe36/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SuHe36/subscriptions", "organizations_url": "https://api.github.com/users/SuHe36/orgs", "repos_url": "https://api.github.com/users/SuHe36/repos", "events_url": "https://api.github.com/users/SuHe36/events{/privacy}", "received_events_url": "https://api.github.com/users/SuHe36/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @SuHe36,\r\n\r\nI'm currently working on adding support for batched generation. At the moment, this is the best answer we can give you: https://github.com/huggingface/transformers/issues/3021#issuecomment-591236688", "Thanks for your reply !", "Hey, @patrickvonplaten is batch generation available for T5conditiongeneration?", "Yes! Please take a look at this test, which does batch=4 generation for summarization using T5: https://github.com/huggingface/transformers/blob/55cb2ee62eb482787cff17585955f7193fe35dfa/tests/test_modeling_t5.py#L559", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,608
1,608
NONE
null
After I fine-tune GPT-2, I want to use it to generate sentences in batches instead of one by one, so I tried to modify the code of `examples/text-generation/run_generation.py`. Line 239 in `run_generation.py` reads: `encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")` Here `prompt_text` is a `str`, but when I change it to `List[str]` data, the call always returns `50256`. Looking at the source code, however, the type of `prompt_text` can be `str`, `List[str]` or `List[int]`. I tested this example separately, and for `token_ids` it always returns 50256. ![image](https://user-images.githubusercontent.com/22442305/91297615-344f1300-e7d1-11ea-99d4-b0e28cd1ab14.png) So must `prompt_text` be a `str`? What modifications should I make to generate sentences in batches using `examples/text-generation/run_generation.py`? Looking forward to your reply!
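A hedged sketch of batched GPT-2 generation outside of `run_generation.py`; left padding and reusing the EOS token as padding are common workarounds for decoder-only batching and are assumptions here, not part of the example script:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
tokenizer.padding_side = "left"            # pad on the left for decoder-only generation
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompts = ["Hello, my dog", "The weather today"]
enc = tokenizer(prompts, return_tensors="pt", padding=True)
out = model.generate(
    enc["input_ids"],
    attention_mask=enc["attention_mask"],
    max_length=30,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```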
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6742/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6741/comments
https://api.github.com/repos/huggingface/transformers/issues/6741/events
https://github.com/huggingface/transformers/pull/6741
686,136,277
MDExOlB1bGxSZXF1ZXN0NDczNzU0NjMw
6,741
Fix tf boolean mask in graph mode
{ "login": "JayYip", "id": 15050572, "node_id": "MDQ6VXNlcjE1MDUwNTcy", "avatar_url": "https://avatars.githubusercontent.com/u/15050572?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JayYip", "html_url": "https://github.com/JayYip", "followers_url": "https://api.github.com/users/JayYip/followers", "following_url": "https://api.github.com/users/JayYip/following{/other_user}", "gists_url": "https://api.github.com/users/JayYip/gists{/gist_id}", "starred_url": "https://api.github.com/users/JayYip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JayYip/subscriptions", "organizations_url": "https://api.github.com/users/JayYip/orgs", "repos_url": "https://api.github.com/users/JayYip/repos", "events_url": "https://api.github.com/users/JayYip/events{/privacy}", "received_events_url": "https://api.github.com/users/JayYip/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=h1) Report\n> Merging [#6741](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64c7c2bc158364ff5c53dce2f19698078b2f9d78?el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6741/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6741 +/- ##\n==========================================\n- Coverage 80.00% 78.96% -1.04% \n==========================================\n Files 156 156 \n Lines 28426 28426 \n==========================================\n- Hits 22741 22446 -295 \n- Misses 5685 5980 +295 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <100.00%> (-2.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.45% <0.00%> (-5.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6741/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=footer). Last update [64c7c2b...fb02d72](https://codecov.io/gh/huggingface/transformers/pull/6741?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6741/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6741", "html_url": "https://github.com/huggingface/transformers/pull/6741", "diff_url": "https://github.com/huggingface/transformers/pull/6741.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6741.patch", "merged_at": 1598433335000 }
https://api.github.com/repos/huggingface/transformers/issues/6740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6740/comments
https://api.github.com/repos/huggingface/transformers/issues/6740/events
https://github.com/huggingface/transformers/pull/6740
686,090,858
MDExOlB1bGxSZXF1ZXN0NDczNzE2OTcw
6,740
[Torchscript] Fix docs
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=h1) Report\n> Merging [#6740](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64c7c2bc158364ff5c53dce2f19698078b2f9d78?el=desc) will **decrease** coverage by `0.29%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6740/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6740 +/- ##\n==========================================\n- Coverage 80.00% 79.70% -0.30% \n==========================================\n Files 156 156 \n Lines 28426 28426 \n==========================================\n- Hits 22741 22656 -85 \n- Misses 5685 5770 +85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/tokenization\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6740/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=footer). Last update [64c7c2b...02d9292](https://codecov.io/gh/huggingface/transformers/pull/6740?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,700
1,598
MEMBER
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #6714
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6740/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6740", "html_url": "https://github.com/huggingface/transformers/pull/6740", "diff_url": "https://github.com/huggingface/transformers/pull/6740.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6740.patch", "merged_at": 1598431917000 }
https://api.github.com/repos/huggingface/transformers/issues/6739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6739/comments
https://api.github.com/repos/huggingface/transformers/issues/6739/events
https://github.com/huggingface/transformers/pull/6739
686,082,983
MDExOlB1bGxSZXF1ZXN0NDczNzEwNDc4
6,739
convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf
{ "login": "kbrajwani", "id": 29722986, "node_id": "MDQ6VXNlcjI5NzIyOTg2", "avatar_url": "https://avatars.githubusercontent.com/u/29722986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kbrajwani", "html_url": "https://github.com/kbrajwani", "followers_url": "https://api.github.com/users/kbrajwani/followers", "following_url": "https://api.github.com/users/kbrajwani/following{/other_user}", "gists_url": "https://api.github.com/users/kbrajwani/gists{/gist_id}", "starred_url": "https://api.github.com/users/kbrajwani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kbrajwani/subscriptions", "organizations_url": "https://api.github.com/users/kbrajwani/orgs", "repos_url": "https://api.github.com/users/kbrajwani/repos", "events_url": "https://api.github.com/users/kbrajwani/events{/privacy}", "received_events_url": "https://api.github.com/users/kbrajwani/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=h1) Report\n> Merging [#6739](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64c7c2bc158364ff5c53dce2f19698078b2f9d78?el=desc) will **decrease** coverage by `1.00%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6739/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6739 +/- ##\n==========================================\n- Coverage 80.00% 78.99% -1.01% \n==========================================\n Files 156 156 \n Lines 28426 28426 \n==========================================\n- Hits 22741 22455 -286 \n- Misses 5685 5971 +286 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø 
= not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=footer). Last update [64c7c2b...b9f471b](https://codecov.io/gh/huggingface/transformers/pull/6739?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@LysandreJik hey i have tried to add small script of convert BertForQuestionAnswering pytorch model to tensorflow like below file\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_pytorch_checkpoint_to_original_tf.py\r\nplease look into it.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,598
1,614
1,614
NONE
null
Added support for converting a BertForQuestionAnswering PyTorch model to TensorFlow, following the existing script https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_pytorch_checkpoint_to_original_tf.py I added a new file, convert_BertForQuestionAnswering_pytorch_checkpoint_to_tf.py
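For reference, a hedged one-step alternative using the library's built-in cross-framework loading (the paths are placeholders):

```python
from transformers import TFBertForQuestionAnswering

# Load PyTorch weights directly into the TF class, then save a TF checkpoint.
tf_model = TFBertForQuestionAnswering.from_pretrained("path/to/pytorch_checkpoint", from_pt=True)
tf_model.save_pretrained("path/to/tf_checkpoint")
```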
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6739/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6739", "html_url": "https://github.com/huggingface/transformers/pull/6739", "diff_url": "https://github.com/huggingface/transformers/pull/6739.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6739.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6738/comments
https://api.github.com/repos/huggingface/transformers/issues/6738/events
https://github.com/huggingface/transformers/issues/6738
686,066,246
MDU6SXNzdWU2ODYwNjYyNDY=
6,738
How to use the reformer for question answering?
{ "login": "Krak91", "id": 45461739, "node_id": "MDQ6VXNlcjQ1NDYxNzM5", "avatar_url": "https://avatars.githubusercontent.com/u/45461739?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Krak91", "html_url": "https://github.com/Krak91", "followers_url": "https://api.github.com/users/Krak91/followers", "following_url": "https://api.github.com/users/Krak91/following{/other_user}", "gists_url": "https://api.github.com/users/Krak91/gists{/gist_id}", "starred_url": "https://api.github.com/users/Krak91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Krak91/subscriptions", "organizations_url": "https://api.github.com/users/Krak91/orgs", "repos_url": "https://api.github.com/users/Krak91/repos", "events_url": "https://api.github.com/users/Krak91/events{/privacy}", "received_events_url": "https://api.github.com/users/Krak91/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @Krak91,\r\n\r\nThanks for this issue. The PR attached to this issue should fix it. \r\nNote that even though the new example code will work for Reformer, it won't yield any good results because there is no pretrained ReformerModel yet." ]
1,598
1,599
1,599
NONE
null
In the code example in the documentation, the tokenizer gets passed just a single string. Doesn't it need a context and a question? Also, how exactly do I get the answer?
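For comparison, the usual extractive-QA pattern with a trained checkpoint looks roughly like this (the checkpoint name is a placeholder; as the comment notes, no pretrained Reformer QA model exists yet):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")

question = "What does the tokenizer need?"
context = "For question answering the tokenizer is passed a question and a context."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs, return_dict=True)

# The answer is the span between the argmax of the start and end logits.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```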
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6738/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6737/comments
https://api.github.com/repos/huggingface/transformers/issues/6737/events
https://github.com/huggingface/transformers/pull/6737
685,937,457
MDExOlB1bGxSZXF1ZXN0NDczNTg5OTI4
6,737
[s2s README] Add more dataset download instructions
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=h1) Report\n> Merging [#6737](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6737/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6737 +/- ##\n==========================================\n- Coverage 80.02% 79.92% -0.11% \n==========================================\n Files 157 157 \n Lines 28586 28586 \n==========================================\n- Hits 22877 22848 -29 \n- Misses 5709 5738 +29 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |\n| [src/transformers/tokenization\\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.96% <0.00%> (-1.76%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |\n| ... 
and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6737/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=footer). Last update [22933e6...b4c1c2f](https://codecov.io/gh/huggingface/transformers/pull/6737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Improves formatting in the seq2seq README and adds download instructions for wmt-en-de and cnn_dm_v2 (a cleaned cnn_dm without empty examples).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6737/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6737", "html_url": "https://github.com/huggingface/transformers/pull/6737", "diff_url": "https://github.com/huggingface/transformers/pull/6737.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6737.patch", "merged_at": 1598819364000 }
https://api.github.com/repos/huggingface/transformers/issues/6736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6736/comments
https://api.github.com/repos/huggingface/transformers/issues/6736/events
https://github.com/huggingface/transformers/pull/6736
685,904,488
MDExOlB1bGxSZXF1ZXN0NDczNTYzMTU1
6,736
Fix run_squad.py to work with BART & add skip_decoder step for BART
{ "login": "tomgrek", "id": 2245347, "node_id": "MDQ6VXNlcjIyNDUzNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/2245347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomgrek", "html_url": "https://github.com/tomgrek", "followers_url": "https://api.github.com/users/tomgrek/followers", "following_url": "https://api.github.com/users/tomgrek/following{/other_user}", "gists_url": "https://api.github.com/users/tomgrek/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomgrek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomgrek/subscriptions", "organizations_url": "https://api.github.com/users/tomgrek/orgs", "repos_url": "https://api.github.com/users/tomgrek/repos", "events_url": "https://api.github.com/users/tomgrek/events{/privacy}", "received_events_url": "https://api.github.com/users/tomgrek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
* BART should be in the list of models that delete `token_type_ids` * Don't run through the decoder when only hidden states are needed, which improves efficiency
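A hedged sketch of the first bullet as it would look in `run_squad.py`; the model-type list is illustrative, with "bart" being the addition this PR proposes:

```python
def clean_inputs(inputs, model_type):
    # Tokenizers emit token_type_ids, but these model types do not accept
    # them, so the key is dropped before the forward pass.
    if model_type in {"xlm", "roberta", "distilbert", "camembert", "bart"}:
        inputs.pop("token_type_ids", None)
    return inputs
```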
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6736/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6736", "html_url": "https://github.com/huggingface/transformers/pull/6736", "diff_url": "https://github.com/huggingface/transformers/pull/6736.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6736.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6735/comments
https://api.github.com/repos/huggingface/transformers/issues/6735/events
https://github.com/huggingface/transformers/pull/6735
685,878,456
MDExOlB1bGxSZXF1ZXN0NDczNTQxMzQ0
6,735
[Generate] Facilitate PyTorch generate using `ModelOutputs`
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=h1) Report\n> Merging [#6735](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a32d85f0d405be53117b96075eef2875d2185892?el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `78.33%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6735/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6735 +/- ##\n==========================================\n- Coverage 80.48% 79.46% -1.03% \n==========================================\n Files 157 157 \n Lines 28794 28822 +28 \n==========================================\n- Hits 23175 22903 -272 \n- Misses 5619 5919 +300 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.24% <50.00%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <57.14%> (-7.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `89.57% <70.27%> (-1.37%)` | :arrow_down: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.00% <93.33%> (-0.40%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.93% <100.00%> (+0.26%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.82% <100.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <100.00%> (-48.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_outputs.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vdXRwdXRzLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.01% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <100.00%> (-72.26%)` | :arrow_down: |\n| ... 
and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6735/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=footer). Last update [a32d85f...190985c](https://codecov.io/gh/huggingface/transformers/pull/6735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> LGTM! Looks like we can now deprecate the `_use_cache` function in the `GenerationMixin`, no?\r\n\r\nyes!", "**IMPORTANT** This PR does a bigger renaming from \"decoder_past_key_values\" to \"past_key_values\" as suggested by @sshleifer. This required changes for `T5`, `TFT5` and `Bart`. For each of the three models it is made sure that `decoder_past_values` can still be used as an input to keep backwards compatibility. \r\n\r\nWould be great if @LysandreJik (and @sgugger, @sshleifer depending on time difference) can review this quickly one last time.", "@sshleifer - all EncoderDecoder Slow tests pass. There was one bart test that failed because of Broken Internet connection. I ran this single test again separately and it was fine. PR looks good to me now -> merging." ]
1,598
1,598
1,598
MEMBER
null
This PR: - forces `return_dict=True` for generation in PyTorch. This should not cause any problems because `.generate()` cannot be used to compute gradients - simplifies the handling of `encoder_outputs` for Bart, T5 and the Encoder. Previously, there was an ugly hack that forced the second position of the encoder/decoder outputs for T5 and Bart to be a tuple containing both `decoder_past_key_values` and `encoder_outputs`, where `encoder_outputs` was a duplicated output. With the new, cleaner API, only the `decoder_past_key_values` are returned, because the `encoder_outputs` can be accessed differently. - Fixes #6319 - Adds better documentation for the Encoder Decoder model + test + better example - Most importantly, this PR lays the groundwork for a better GenerationOutput object (@sgugger). Returning a list of all attentions and all hidden states when using `.generate()` was a feature many people asked for. This is now made possible by forcing `return_dict` in `.generate()` so that `decoder_attentions` and `attentions` can be accessed by keyword. **Important**: The new handling of `encoder_outputs` introduces a small breaking change in that `output[1]` is now not a mixed tuple of `encoder_outputs` and `decoder_past_key_values`, but only `decoder_past_key_values`, whereas the `encoder_outputs` can be accessed as before.
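A sketch of the keyword access this enables; attribute names follow the `ModelOutput` classes and the bart-base checkpoint is an assumption for illustration:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs, return_dict=True, output_attentions=True)

# The cache and the encoder outputs are now separate, named fields
# instead of a mixed tuple at output[1].
cache = outputs.past_key_values
enc_hidden = outputs.encoder_last_hidden_state
attn = outputs.decoder_attentions
```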
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6735/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6735", "html_url": "https://github.com/huggingface/transformers/pull/6735", "diff_url": "https://github.com/huggingface/transformers/pull/6735.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6735.patch", "merged_at": 1598956706000 }
https://api.github.com/repos/huggingface/transformers/issues/6734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6734/comments
https://api.github.com/repos/huggingface/transformers/issues/6734/events
https://github.com/huggingface/transformers/issues/6734
685,832,860
MDU6SXNzdWU2ODU4MzI4NjA=
6,734
id2lang in tokenization_xlm.py should be int, and removing hardcoding
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure if the suggested rewrite to remove all those numbers is desirable - perhaps it's important to see those numbers, so I left it alone and just fixed the keys of `id2lang` to be int.\r\n\r\nhttps://github.com/huggingface/transformers/pull/7034" ]
1,598
1,599
1,599
CONTRIBUTOR
null
In https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py#L78 we have: ``` "id2lang": {"0": "de", "1": "en"}, "lang2id": {"de": 0, "en": 1}, ``` and then: ``` lang2id (:obj:`Dict[str, int]`, `optional`, defaults to :obj:`None`): Dictionary mapping languages string identifiers to their IDs. id2lang (:obj:`Dict[int, str`, `optional`, defaults to :obj:`None`): ``` So it should be: ``` "id2lang": {0: "de", 1: "en"}, "lang2id": {"de": 0, "en": 1}, ``` All other entries need this change too. The problem hasn't been detected until now because, it seems, these dicts were only used to count the number of languages. I need to pass src/tgt languages to the tokenizer I'm porting from fairseq, so I was looking at how to do that, and `id2lang` seems to fit the purpose. But I actually need to look languages up by `int` id, which is how I spotted the problem. I'm also not sure why we need to hardcode the reversal when it can be done in one line of code. That would also remove this assertion code: ``` self.lang2id = lang2id self.id2lang = id2lang if lang2id is not None and id2lang is not None: assert len(lang2id) == len(id2lang) ``` Further, we don't even need to hardcode the ids. Replace: ``` "id2lang": {0: "de", 1: "en"}, ``` with: ``` "id2lang": ["de", "en"] ``` So all we need is one of the two entries, and we can then generate the two lookup dicts on the fly. And since it's no longer `id2lang` semantically, renaming it to just `langs` would probably be more appropriate. I think I will use this approach regardless of the outcome of this issue. Thanks.
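A minimal sketch of the on-the-fly derivation proposed in this issue (the name `langs` is the suggested rename, not existing library code):

```python
# Single source of truth: an ordered list of language codes.
langs = ["de", "en"]

# Derive both lookup tables instead of hardcoding them.
lang2id = {lang: i for i, lang in enumerate(langs)}
id2lang = {i: lang for i, lang in enumerate(langs)}

# The lengths match by construction, so the runtime assertion becomes unnecessary.
assert len(lang2id) == len(id2lang)
print(id2lang[0])  # -> "de", with the int keys the docstring promises
```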
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6734/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6733/comments
https://api.github.com/repos/huggingface/transformers/issues/6733/events
https://github.com/huggingface/transformers/pull/6733
685,816,358
MDExOlB1bGxSZXF1ZXN0NDczNDg1ODM1
6,733
grad checkpoint for T5 model -- and lots of debug not yet removed
{ "login": "moscow25", "id": 1473764, "node_id": "MDQ6VXNlcjE0NzM3NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1473764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moscow25", "html_url": "https://github.com/moscow25", "followers_url": "https://api.github.com/users/moscow25/followers", "following_url": "https://api.github.com/users/moscow25/following{/other_user}", "gists_url": "https://api.github.com/users/moscow25/gists{/gist_id}", "starred_url": "https://api.github.com/users/moscow25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moscow25/subscriptions", "organizations_url": "https://api.github.com/users/moscow25/orgs", "repos_url": "https://api.github.com/users/moscow25/repos", "events_url": "https://api.github.com/users/moscow25/events{/privacy}", "received_events_url": "https://api.github.com/users/moscow25/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,598
1,614
1,614
CONTRIBUTOR
null
@sshleifer here is my grad checkpointing code -- albeit with too much debug info and some FP16 code that also hasn't been deleted yet. - all FP16 needs to be off -- otherwise the model does not converge - with grad checkpointing as hacked now (for every "T5 Stack") -- T5-Large runs with training batch 32 - still unable to train T5-3B even with batch == 1 - still working on grad checkpointing for DDP (which we need for multi-GPU) - need to add grad checkpointing as a config option... and lots of other cleanup, but it does work <!-- This line specifies which issue to close after the pull request is merged. --> Fixes #{issue number}
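For context, gradient checkpointing trades compute for memory by recomputing activations during the backward pass instead of storing them. A minimal, generic PyTorch sketch of that mechanism (illustrative only, not the code in this PR):

```python
import torch
from torch.utils.checkpoint import checkpoint


class CheckpointedBlock(torch.nn.Module):
    """Wraps a sub-module so its activations are recomputed in the backward
    pass instead of being stored, reducing peak memory at the cost of compute."""

    def __init__(self, block: torch.nn.Module):
        super().__init__()
        self.block = block

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # checkpoint() only pays off in training, and gradients only flow
        # through it when the input requires grad.
        if self.training and hidden_states.requires_grad:
            return checkpoint(self.block, hidden_states)
        return self.block(hidden_states)
```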
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6733/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6733", "html_url": "https://github.com/huggingface/transformers/pull/6733", "diff_url": "https://github.com/huggingface/transformers/pull/6733.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6733.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/6732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6732/comments
https://api.github.com/repos/huggingface/transformers/issues/6732/events
https://github.com/huggingface/transformers/issues/6732
685,785,854
MDU6SXNzdWU2ODU3ODU4NTQ=
6,732
Documentation detail in (TF)RobertaForSequenceClassification
{ "login": "DiegoKaram", "id": 26146488, "node_id": "MDQ6VXNlcjI2MTQ2NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/26146488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DiegoKaram", "html_url": "https://github.com/DiegoKaram", "followers_url": "https://api.github.com/users/DiegoKaram/followers", "following_url": "https://api.github.com/users/DiegoKaram/following{/other_user}", "gists_url": "https://api.github.com/users/DiegoKaram/gists{/gist_id}", "starred_url": "https://api.github.com/users/DiegoKaram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DiegoKaram/subscriptions", "organizations_url": "https://api.github.com/users/DiegoKaram/orgs", "repos_url": "https://api.github.com/users/DiegoKaram/repos", "events_url": "https://api.github.com/users/DiegoKaram/events{/privacy}", "received_events_url": "https://api.github.com/users/DiegoKaram/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,598
1,604
1,604
NONE
null
# ❓ Questions & Help The documentation for TFRobertaForSequenceClassification lacks a description of the `labels` parameter. That is not the case for RobertaForSequenceClassification. The parameter actually exists in the TF version, so the documentation causes a little confusion. I'm not sure where to post this in the forum. ## Details <!-- Description of your issue --> I see that the docstring for these classes comes from the variable `ROBERTA_INPUTS_DOCSTRING` in the [source](https://huggingface.co/transformers/_modules/transformers/modeling_roberta.html#RobertaForSequenceClassification), and the doc for `labels` is added after that on the PyTorch model, but not on the TF model.
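For reference, the undocumented parameter can still be used; a hedged sketch of what that looks like (assuming the TF 2.x API of this transformers version, where the loss is returned first when labels are passed):

```python
import tensorflow as tf
from transformers import RobertaTokenizer, TFRobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = TFRobertaForSequenceClassification.from_pretrained("roberta-base")

inputs = tokenizer("A thoroughly enjoyable read.", return_tensors="tf")
labels = tf.constant([1])

# `labels` works here just like in the PyTorch model,
# even though the TF docstring does not mention it.
loss, logits = model(inputs, labels=labels)[:2]
```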
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6732/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6731/comments
https://api.github.com/repos/huggingface/transformers/issues/6731/events
https://github.com/huggingface/transformers/pull/6731
685,709,450
MDExOlB1bGxSZXF1ZXN0NDczMzkxMDYz
6,731
Model card for kuisailab/albert-xlarge-arabic
{ "login": "alisafaya", "id": 22398153, "node_id": "MDQ6VXNlcjIyMzk4MTUz", "avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alisafaya", "html_url": "https://github.com/alisafaya", "followers_url": "https://api.github.com/users/alisafaya/followers", "following_url": "https://api.github.com/users/alisafaya/following{/other_user}", "gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions", "organizations_url": "https://api.github.com/users/alisafaya/orgs", "repos_url": "https://api.github.com/users/alisafaya/repos", "events_url": "https://api.github.com/users/alisafaya/events{/privacy}", "received_events_url": "https://api.github.com/users/alisafaya/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=h1) Report\n> Merging [#6731](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6731/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6731 +/- ##\n==========================================\n+ Coverage 79.42% 79.48% +0.06% \n==========================================\n Files 156 156 \n Lines 28411 28411 \n==========================================\n+ Hits 22565 22583 +18 \n+ Misses 5846 5828 -18 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6731/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=footer). Last update [e11d923...bfd52f8](https://codecov.io/gh/huggingface/transformers/pull/6731?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6731/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6731", "html_url": "https://github.com/huggingface/transformers/pull/6731", "diff_url": "https://github.com/huggingface/transformers/pull/6731.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6731.patch", "merged_at": 1598477263000 }
https://api.github.com/repos/huggingface/transformers/issues/6730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6730/comments
https://api.github.com/repos/huggingface/transformers/issues/6730/events
https://github.com/huggingface/transformers/pull/6730
685,708,642
MDExOlB1bGxSZXF1ZXN0NDczMzkwMzgz
6,730
Model card for kuisailab/albert-large-arabic
{ "login": "alisafaya", "id": 22398153, "node_id": "MDQ6VXNlcjIyMzk4MTUz", "avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alisafaya", "html_url": "https://github.com/alisafaya", "followers_url": "https://api.github.com/users/alisafaya/followers", "following_url": "https://api.github.com/users/alisafaya/following{/other_user}", "gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions", "organizations_url": "https://api.github.com/users/alisafaya/orgs", "repos_url": "https://api.github.com/users/alisafaya/repos", "events_url": "https://api.github.com/users/alisafaya/events{/privacy}", "received_events_url": "https://api.github.com/users/alisafaya/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=h1) Report\n> Merging [#6730](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6730/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6730 +/- ##\n==========================================\n+ Coverage 79.42% 79.63% +0.21% \n==========================================\n Files 156 156 \n Lines 28411 28411 \n==========================================\n+ Hits 22565 22626 +61 \n+ Misses 5846 5785 -61 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.17% <0.00%> (-12.53%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6730/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=footer). Last update [e11d923...1f0307a](https://codecov.io/gh/huggingface/transformers/pull/6730?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6730/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6730", "html_url": "https://github.com/huggingface/transformers/pull/6730", "diff_url": "https://github.com/huggingface/transformers/pull/6730.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6730.patch", "merged_at": 1598477277000 }
https://api.github.com/repos/huggingface/transformers/issues/6729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6729/comments
https://api.github.com/repos/huggingface/transformers/issues/6729/events
https://github.com/huggingface/transformers/pull/6729
685,707,549
MDExOlB1bGxSZXF1ZXN0NDczMzg5NDc1
6,729
Model card for kuisailab/albert-base-arabic
{ "login": "alisafaya", "id": 22398153, "node_id": "MDQ6VXNlcjIyMzk4MTUz", "avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alisafaya", "html_url": "https://github.com/alisafaya", "followers_url": "https://api.github.com/users/alisafaya/followers", "following_url": "https://api.github.com/users/alisafaya/following{/other_user}", "gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions", "organizations_url": "https://api.github.com/users/alisafaya/orgs", "repos_url": "https://api.github.com/users/alisafaya/repos", "events_url": "https://api.github.com/users/alisafaya/events{/privacy}", "received_events_url": "https://api.github.com/users/alisafaya/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=h1) Report\n> Merging [#6729](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **decrease** coverage by `0.44%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6729/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6729 +/- ##\n==========================================\n- Coverage 79.42% 78.97% -0.45% \n==========================================\n Files 156 156 \n Lines 28411 28411 \n==========================================\n- Hits 22565 22438 -127 \n- Misses 5846 5973 +127 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.70% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø 
= not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=footer). Last update [e11d923...242d1d0](https://codecov.io/gh/huggingface/transformers/pull/6729?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6729/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6729", "html_url": "https://github.com/huggingface/transformers/pull/6729", "diff_url": "https://github.com/huggingface/transformers/pull/6729.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6729.patch", "merged_at": 1598477254000 }
https://api.github.com/repos/huggingface/transformers/issues/6728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6728/comments
https://api.github.com/repos/huggingface/transformers/issues/6728/events
https://github.com/huggingface/transformers/pull/6728
685,707,082
MDExOlB1bGxSZXF1ZXN0NDczMzg5MDgx
6,728
Install nlp for github actions test
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=h1) Report\n> Merging [#6728](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e11d923bfc61ed640bc7e696549578361126485e?el=desc) will **decrease** coverage by `0.43%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6728/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6728 +/- ##\n==========================================\n- Coverage 79.42% 78.99% -0.44% \n==========================================\n Files 156 156 \n Lines 28411 28411 \n==========================================\n- Hits 22565 22442 -123 \n- Misses 5846 5969 +123 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-12.22%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.61%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6728/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø 
= not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=footer). Last update [e11d923...e3fc486](https://codecov.io/gh/huggingface/transformers/pull/6728?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6728/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6728", "html_url": "https://github.com/huggingface/transformers/pull/6728", "diff_url": "https://github.com/huggingface/transformers/pull/6728.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6728.patch", "merged_at": 1598381918000 }
https://api.github.com/repos/huggingface/transformers/issues/6727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6727/comments
https://api.github.com/repos/huggingface/transformers/issues/6727/events
https://github.com/huggingface/transformers/pull/6727
685,688,426
MDExOlB1bGxSZXF1ZXN0NDczMzczNTU2
6,727
added model card for codeswitch-spaeng-sentiment-analysis-lince
{ "login": "sagorbrur", "id": 10723655, "node_id": "MDQ6VXNlcjEwNzIzNjU1", "avatar_url": "https://avatars.githubusercontent.com/u/10723655?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sagorbrur", "html_url": "https://github.com/sagorbrur", "followers_url": "https://api.github.com/users/sagorbrur/followers", "following_url": "https://api.github.com/users/sagorbrur/following{/other_user}", "gists_url": "https://api.github.com/users/sagorbrur/gists{/gist_id}", "starred_url": "https://api.github.com/users/sagorbrur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sagorbrur/subscriptions", "organizations_url": "https://api.github.com/users/sagorbrur/orgs", "repos_url": "https://api.github.com/users/sagorbrur/repos", "events_url": "https://api.github.com/users/sagorbrur/events{/privacy}", "received_events_url": "https://api.github.com/users/sagorbrur/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=h1) Report\n> Merging [#6727](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7e6397a7d8e7433aa4c4cafba98e08e5c73f087c?el=desc) will **decrease** coverage by `1.11%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6727/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6727 +/- ##\n==========================================\n- Coverage 80.10% 78.99% -1.12% \n==========================================\n Files 156 156 \n Lines 28411 28411 \n==========================================\n- Hits 22758 22442 -316 \n- Misses 5653 5969 +316 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.84% <0.00%> (-64.10%)` | :arrow_down: |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.61% <0.00%> (-6.02%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-3.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.69% <0.00%> (-2.29%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6727/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.22% <0.00%> (+47.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=footer). Last update [7e6397a...8d8b352](https://codecov.io/gh/huggingface/transformers/pull/6727?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,598
1,598
1,598
CONTRIBUTOR
null
Hi, I added a model card for `codeswitch-spaeng-sentiment-analysis-lince` and also updated the other model cards. Please review and, if possible, merge. Thanks and regards, Sagor
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6727/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6727", "html_url": "https://github.com/huggingface/transformers/pull/6727", "diff_url": "https://github.com/huggingface/transformers/pull/6727.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6727.patch", "merged_at": 1598477192000 }
https://api.github.com/repos/huggingface/transformers/issues/6726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6726/comments
https://api.github.com/repos/huggingface/transformers/issues/6726/events
https://github.com/huggingface/transformers/pull/6726
685,673,644
MDExOlB1bGxSZXF1ZXN0NDczMzYxNTUx
6,726
Fix pegasus-xsum integration test
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,598
1,598
1,598
CONTRIBUTOR
null
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #6705
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6726/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6726", "html_url": "https://github.com/huggingface/transformers/pull/6726", "diff_url": "https://github.com/huggingface/transformers/pull/6726.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6726.patch", "merged_at": 1598378789000 }
https://api.github.com/repos/huggingface/transformers/issues/6725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6725/comments
https://api.github.com/repos/huggingface/transformers/issues/6725/events
https://github.com/huggingface/transformers/issues/6725
685,657,037
MDU6SXNzdWU2ODU2NTcwMzc=
6,725
Dataset Lazyloader for transformers trainer
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As far as I can see there is currently no dataloader functionality to lazy load data into memory. \r\nIt should not be very hard to implement such a feature yourself, I think, see https://discuss.pytorch.org/t/loading-huge-data-functionality/346/2 .\r\nAlso cc @lhoestq @sgugger @thomwolf - Are we planning on providing more support for lazy data loading? ", "There is an [open PR #4009](https://github.com/huggingface/transformers/pull/4009) about this that was mostly close to being merged (cc @BramVanroy) and I must confess I kind of dropped the ball on it.\r\n\r\nShouldn't be a ton of work to complete it and get it merged.\r\n\r\nAs mentioned in that PR, the other option is to use `huggingface/nlp` which can load large text datasets lazily out of the box.", "@patrickvonplaten @julien-c Thanks a lot for your reply.\r\n\r\nShould we close this issue and focus on https://github.com/huggingface/transformers/pull/4009 ?", "The long-term solution is to use nlp for this IMO.", "I have checked NLP and it is slow, maybe I am doing something wrong.\r\nI made a simple python script to check it is speed, which loads 1.1 TB of textual data.\r\nIt has been 8 hours and still, it is on the loading steps.\r\nIt does work when the text dataset size is small about 1 GB, but it doesn't scale.\r\nIt also uses a single thread during the data loading step.\r\n\r\n```\r\ntrain_files = glob.glob(\"xxx/*.txt\",recursive=True)\r\nrandom.shuffle(train_files)\r\n\r\nprint(train_files)\r\n\r\ndataset = nlp.load_dataset('text', \r\n data_files=train_files,\r\n name=\"customDataset\",\r\n version=\"1.0.0\",\r\n cache_dir=\"xxx/nlp\")\r\n```", "You should open this issue on the nlp repo, to make sure it's not forgotten! In particular they are working a lot on performance right now :-)", "done:\r\nhttps://github.com/huggingface/nlp/issues/546\r\n", "I am closing this in favour of:\r\n- https://github.com/huggingface/transformers/pull/4009\r\n- https://github.com/huggingface/nlp/issues/546\r\n" ]
1,598
1,599
1,599
CONTRIBUTOR
null
# 🚀 Feature request Currently, training/fine-tuning models with the transformers trainer loads the whole dataset into memory. This works for small datasets, but with large datasets the training will crash. Code: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L128 ## Motivation We have trained a 0.4 billion-parameter BERT model and are currently training a 1.8 billion-parameter BERT model for protein sequences, and we want to train DistilBERT to reduce the model size. Unfortunately, our dataset is huge (about 0.7 terabytes), and since the trainer loads the whole dataset, it crashes. It would be more efficient to use a lazy loader for the datasets, like fairseq does. @patrickvonplaten I am not sure who is responsible for the data loader for LM.
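A minimal sketch of the kind of lazy loading requested here, using a PyTorch `IterableDataset` that streams examples from disk instead of reading the whole corpus up front (the file path, tokenizer, and collation are illustrative placeholders):

```python
from torch.utils.data import IterableDataset


class LazyTextDataset(IterableDataset):
    """Yields tokenized lines one at a time, so the corpus never sits fully in RAM."""

    def __init__(self, file_path, tokenizer, block_size=128):
        self.file_path = file_path
        self.tokenizer = tokenizer
        self.block_size = block_size

    def __iter__(self):
        with open(self.file_path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:
                    yield self.tokenizer(
                        line, truncation=True, max_length=self.block_size
                    )["input_ids"]

# Usage (hypothetical): wrap an instance in a DataLoader with a padding collate_fn.
```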
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6725/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/6724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6724/comments
https://api.github.com/repos/huggingface/transformers/issues/6724/events
https://github.com/huggingface/transformers/pull/6724
685,640,233
MDExOlB1bGxSZXF1ZXN0NDczMzM0MDUz
6,724
create ProtBert-BFD model card.
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=h1) Report\n> Merging [#6724](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/074340339a6d6aede30c14c94ffe7b59a01786f1?el=desc) will **decrease** coverage by `0.43%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6724/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6724 +/- ##\n==========================================\n- Coverage 79.91% 79.47% -0.44% \n==========================================\n Files 156 156 \n Lines 28406 28406 \n==========================================\n- Hits 22701 22577 -124 \n- Misses 5705 5829 +124 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `21.12% <0.00%> (-71.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.64% <0.00%> (-0.66%)` | :arrow_down: |\n| [src/transformers/tokenization\\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `95.31% <0.00%> (+50.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6724/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `90.90% <0.00%> (+69.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=footer). Last update [0743403...332510c](https://codecov.io/gh/huggingface/transformers/pull/6724?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks for sharing, this looks awesome – did you want to review this @patrickvonplaten?", "Looks great! " ]
1,598
1,598
1,598
CONTRIBUTOR
null
This is a model card for our ProtBert-BFD model: https://huggingface.co/Rostlab/prot_bert_bfd
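A hedged usage sketch for the linked checkpoint (the space-separated amino-acid input format follows the model card's convention; treat the details as assumptions):

```python
from transformers import pipeline

# Protein sequences are written as space-separated amino acids,
# with the residue to predict replaced by the mask token.
unmasker = pipeline("fill-mask", model="Rostlab/prot_bert_bfd")
print(unmasker("D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"))
```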
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6724/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6724", "html_url": "https://github.com/huggingface/transformers/pull/6724", "diff_url": "https://github.com/huggingface/transformers/pull/6724.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6724.patch", "merged_at": 1598487560000 }
https://api.github.com/repos/huggingface/transformers/issues/6723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6723/comments
https://api.github.com/repos/huggingface/transformers/issues/6723/events
https://github.com/huggingface/transformers/pull/6723
685,615,085
MDExOlB1bGxSZXF1ZXN0NDczMzEzMzcy
6,723
add xlm-roberta-large-xnli model card
{ "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "organizations_url": "https://api.github.com/users/joeddav/orgs", "repos_url": "https://api.github.com/users/joeddav/repos", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "received_events_url": "https://api.github.com/users/joeddav/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=h1) Report\n> Merging [#6723](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a25c9fc8e14f3e8914116e6142af2a9589dc8e63?el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6723/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6723 +/- ##\n==========================================\n- Coverage 79.00% 78.96% -0.05% \n==========================================\n Files 156 156 \n Lines 28405 28405 \n==========================================\n- Hits 22442 22429 -13 \n- Misses 5963 5976 +13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.70% <0.00%> (-3.01%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6723/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=footer). Last update [a25c9fc...715d491](https://codecov.io/gh/huggingface/transformers/pull/6723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Merging so I can tweet, but lmk if anything is off and I'll update it.", "looks good! Fixed tiny typos in 3242e4d9", "Hi @joeddav! Could you share your hyperparameters for training the model (I assume you used the `run_glue`)? Would you say that the last phase of training helped significantly?" ]
1,598
1,602
1,598
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6723/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6723/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6723", "html_url": "https://github.com/huggingface/transformers/pull/6723", "diff_url": "https://github.com/huggingface/transformers/pull/6723.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6723.patch", "merged_at": 1598472360000 }
https://api.github.com/repos/huggingface/transformers/issues/6722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6722/comments
https://api.github.com/repos/huggingface/transformers/issues/6722/events
https://github.com/huggingface/transformers/pull/6722
685,601,960
MDExOlB1bGxSZXF1ZXN0NDczMzAyNDY1
6,722
Add AdaFactor optimizer from fairseq
{ "login": "moscow25", "id": 1473764, "node_id": "MDQ6VXNlcjE0NzM3NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1473764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moscow25", "html_url": "https://github.com/moscow25", "followers_url": "https://api.github.com/users/moscow25/followers", "following_url": "https://api.github.com/users/moscow25/following{/other_user}", "gists_url": "https://api.github.com/users/moscow25/gists{/gist_id}", "starred_url": "https://api.github.com/users/moscow25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moscow25/subscriptions", "organizations_url": "https://api.github.com/users/moscow25/orgs", "repos_url": "https://api.github.com/users/moscow25/repos", "events_url": "https://api.github.com/users/moscow25/events{/privacy}", "received_events_url": "https://api.github.com/users/moscow25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @sshleifer -- here is belated PR for AdaFactor. Please let me know how to edit this properly, and what tests or examples we should add. Thanks!", "We will integrate into examples/ in a separate PR I think.", "Thanks @sshleifer -- let me try to make those changes. \r\n\r\nAgree that I should be able to add a single test -- appreciate the link -- and you can add examples in separate PR.\r\n\r\nIf I don't get this figure out soon, yes happy for you to make the changes yourself :-)", "Hey @sshleifer -- think I got a test working finally. We can squash the commits. \r\n\r\nStill not sure what I need to clean up for the code standards/linter. \r\n\r\nPlease advise, thanks!", "For local style checking, you need: `pip install isort --upgrade` \r\nThen `make style` and `make quality` to both suggest you have no errors. \r\nThey should autofix things or at least give error messages. My workflow is to define\r\n```bash\r\nsty () {\r\n\tmake style\r\n\tflake8 examples templates tests src utils\r\n}\r\n```\r\nand then run `sty` a lot.", "Also squashing happens automatically at merge time, don't worry about that.", "> For local style checking, you need: `pip install isort --upgrade`\r\n> Then `make style` and `make quality` to both suggest you have no errors.\r\n> They should autofix things or at least give error messages. My workflow is to define\r\n> \r\n> ```shell\r\n> sty () {\r\n> \tmake style\r\n> \tflake8 examples templates tests src utils\r\n> }\r\n> ```\r\n> \r\n> and then run `sty` a lot.\r\n\r\nHmm. Is there a way for `style` to tell me the location in offending file? Output seems pretty minimal.", "if you also run the flake8 command it should just fix it.", "I think I fixed the formatting, as requested. Took a sec to figure that all out...", "@sshleifer -- any idea what happened with the `black` / code quality changes overnite? I'm very confused. Seems as if the standard changed from yesterday... ", "Yes they did, sorry about that. 
I did some cleanup on this branch.\r\nIf you are curious about the style change: I tried to future proof it here https://github.com/huggingface/transformers/pull/6748", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=h1) Report\n> Merging [#6722](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75c64d80c76c3dc71f735d9197a4a601847e0cd?el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `68.23%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/6722/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #6722 +/- ##\n==========================================\n- Coverage 78.96% 78.94% -0.03% \n==========================================\n Files 157 157 \n Lines 28486 28571 +85 \n==========================================\n+ Hits 22495 22555 +60 \n- Misses 5991 6016 +25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `82.28% <68.23%> (-13.27%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (+0.75%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=footer). Last update [a75c64d...8958b9f](https://codecov.io/gh/huggingface/transformers/pull/6722?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome. Thanks @sshleifer. I'll start working more on the other less mature PRs we discussed. And please ping me if/when you write tests or examples for this. Happy to contribute to that as well if you need.", "I've added Adafactor to the docs and slightly changed the style of the docstrings in https://github.com/huggingface/transformers/pull/6765", "Thanks! I'll add a `--adafactor` option lightning_base and trainer in 2 prs." ]
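In case it is useful alongside the thread above, here is a sketch of the kind of "single test" the reviewers asked for. It is hypothetical, not the PR's actual test file, but it mirrors the toy-regression pattern transformers uses elsewhere for optimizer tests; the function name and tolerance are illustrative assumptions.

```python
import torch

from transformers.optimization import Adafactor


def test_adafactor_toy_regression():
    # Tiny least-squares problem: Adafactor should drive w toward the target.
    w = torch.tensor([0.1, -0.2, -0.1], requires_grad=True)
    target = torch.tensor([0.4, 0.2, -0.5])
    criterion = torch.nn.MSELoss()
    # lr=None is required when relative_step=True; the step size is then
    # derived from the iteration count rather than set externally.
    optimizer = Adafactor(
        [w], lr=None, relative_step=True, scale_parameter=True, warmup_init=True
    )
    for _ in range(1000):
        loss = criterion(w, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    assert criterion(w, target).item() < 1e-2
```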
1,598
1,598
1,598
CONTRIBUTOR
null
Tested for T5 finetuning and MLM -- reduced memory consumption compared to Adam. Fixes #1256
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6722/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/6722", "html_url": "https://github.com/huggingface/transformers/pull/6722", "diff_url": "https://github.com/huggingface/transformers/pull/6722.diff", "patch_url": "https://github.com/huggingface/transformers/pull/6722.patch", "merged_at": 1598518694000 }