Load Dataset issue for custom graph dataset

#1
by seyonec - opened

I am hoping to fine-tune the Graphormer model on odor prediction using a dataset of compounds and their corresponding labels (which can be 0, 1, or NaN). After generating the JSONL files with the proper attributes (edge indices, edge attributes, num_nodes, y labels, etc.), I'm running into an issue when calling load_dataset. I was hoping to use this dataset to replicate the Graphormer tutorial created by @clefourrier (https://huggingface.co/blog/graphml-classification). I would greatly appreciate any advice, thanks!
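For reference, each record in the JSONL files follows the tutorial's graph format; a hypothetical example of a single entry (field names assumed from the description above, values invented):

```python
# Hypothetical JSONL record (one graph), shown as a Python dict. Field names
# follow the Graphormer tutorial's dataset format; values are invented.
record = {
    "edge_index": [[0, 1, 1, 2], [1, 0, 2, 1]],  # [2, num_edges] connectivity
    "edge_attr": [[1], [1], [2], [2]],           # one feature vector per edge
    "node_feat": [[6], [6], [8]],                # one feature vector per node
    "num_nodes": 3,
    "y": [[0, 1, None]],                         # multi-task labels: 0, 1, or null
}
```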
Error details:

Downloading and preparing dataset json/seyonec--goodscents_leffingwell to /home/t-seyonec/.cache/huggingface/datasets/seyonec___json/seyonec--goodscents_leffingwell-07a9fbb3964fb885/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96...
Downloading data: 100%|██████████| 6.38M/6.38M [00:00<00:00, 32.2MB/s]
Downloading data: 100%|██████████| 784k/784k [00:00<00:00, 12.0MB/s]
Downloading data: 100%|██████████| 795k/795k [00:00<00:00, 12.4MB/s]
Downloading data files: 100%|██████████| 3/3 [00:01<00:00,  2.72it/s]
Extracting data files: 100%|██████████| 3/3 [00:00<00:00, 2715.35it/s]
                                                        
---------------------------------------------------------------------------
ArrowIndexError                           Traceback (most recent call last)
File /anaconda/envs/dgllife/lib/python3.8/site-packages/datasets/builder.py:1894, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
   1887     writer = writer_class(
   1888         features=writer._features,
   1889         path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
   (...)
   1892         embed_local_files=embed_local_files,
   1893     )
-> 1894 writer.write_table(table)
   1895 num_examples_progress_update += len(table)

File /anaconda/envs/dgllife/lib/python3.8/site-packages/datasets/arrow_writer.py:569, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
    568     self._build_writer(inferred_schema=pa_table.schema)
--> 569 pa_table = pa_table.combine_chunks()
    570 pa_table = table_cast(pa_table, self._schema)

File /anaconda/envs/dgllife/lib/python3.8/site-packages/pyarrow/table.pxi:3439, in pyarrow.lib.Table.combine_chunks()

File /anaconda/envs/dgllife/lib/python3.8/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()

File /anaconda/envs/dgllife/lib/python3.8/site-packages/pyarrow/error.pxi:127, in pyarrow.lib.check_status()

ArrowIndexError: array slice would exceed array length
...
   1911         e = e.__context__
-> 1912     raise DatasetGenerationError("An error occurred while generating the dataset") from e
   1914 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)

DatasetGenerationError: An error occurred while generating the dataset

cc @albertvillanova @lhoestq @severo @clefourrier

seyonec changed discussion title from "Dataset Viewer issue" to "Load Dataset issue for custom graph dataset"

Hi! Could you provide your code snippet?

Thanks for getting back to me! This error occurs when I just run `load_dataset("seyonec/goodscents_leffingwell")`.

I can't reproduce your bug with datasets 2.5.2, could you try upgrading your version of datasets?

Will try a newer version, thanks!

this now works, thank you so much for the help! :)

No problem :)

I'm now running into a weird issue when calling trainer.train() with the labels (a list of 0s, 1s, or nulls for 152 tasks): some kind of type mismatch? I made sure to cast any non-null labels to int, so I'm a little confused. Thanks again for all your help!

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[10], line 1
----> 1 train_results = trainer.train()

File /anaconda/envs/dgllife/lib/python3.8/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1534     self.model_wrapped = self.model
   1536 inner_training_loop = find_executable_batch_size(
   1537     self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
   1538 )
-> 1539 return inner_training_loop(
   1540     args=args,
   1541     resume_from_checkpoint=resume_from_checkpoint,
   1542     trial=trial,
   1543     ignore_keys_for_eval=ignore_keys_for_eval,
   1544 )

File /anaconda/envs/dgllife/lib/python3.8/site-packages/accelerate/utils/memory.py:136, in find_executable_batch_size.<locals>.decorator(*args, **kwargs)
    134     raise RuntimeError("No executable batch size found, reached zero.")
    135 try:
--> 136     return function(batch_size, *args, **kwargs)
    137 except Exception as e:
    138     if should_reduce_batch_size(e):

File /anaconda/envs/dgllife/lib/python3.8/site-packages/transformers/trainer.py:1787, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
...
    return self.collate_fn(data)
  File "/anaconda/envs/dgllife/lib/python3.8/site-packages/transformers/models/graphormer/collating_graphormer.py", line 132, in __call__
    batch["labels"] = torch.from_numpy(np.stack([i["labels"] for i in features], axis=0))
TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

Which version of transformers do you use?

4.31.0!

Would you recommend using an older version? It seems to complain that I am trying to convert a numpy array of objects to a tensor, but I'm not sure where they are getting cast as objects.
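A quick way to see where the object dtype can come from: a label list containing nulls becomes an object-dtype NumPy array, which torch.from_numpy rejects. A minimal reproduction with invented values:

```python
import numpy as np
import torch

# A list mixing ints and None is stored as an object-dtype array...
labels = np.array([0, 1, None])
print(labels.dtype)  # object

# ...so stacking and converting fails exactly as in the trace above.
stacked = np.stack([labels, labels], axis=0)
torch.from_numpy(stacked)  # TypeError: can't convert np.ndarray of type numpy.object_
```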

Hi @seyonec, no, it should be fine; there was a bug in Graphormer that I fixed prior to this one.
I suspect the problem comes from the null values, which cannot be converted to int and are treated as numpy objects. Could you replace them with -1, for example?
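A minimal sketch of that replacement, assuming the loaded `dataset` has a "labels" column holding the 152 per-task values:

```python
# Replace null labels with -1 so the column stays integer-typed and the
# Graphormer collator can stack it into a tensor. `dataset` is assumed to be
# the Dataset/DatasetDict returned by load_dataset above.
def fill_null_labels(example):
    example["labels"] = [-1 if label is None else int(label) for label in example["labels"]]
    return example

dataset = dataset.map(fill_null_labels)
```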

Gotcha! It's no longer erroring there, but it now fails on the call to BCE with logits. Not sure if this is the result of a tensor not being specified as torch.float32? (https://stackoverflow.com/questions/70216222/pytorch-is-throwing-an-error-runtimeerror-result-type-float-cant-be-cast-to-th)

Stack trace:
```

RuntimeError                              Traceback (most recent call last)
Cell In[10], line 1
----> 1 train_results = trainer.train()

File /anaconda/envs/dgllife/lib/python3.8/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1534     self.model_wrapped = self.model
   1536 inner_training_loop = find_executable_batch_size(
   1537     self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
   1538 )
-> 1539 return inner_training_loop(
   1540     args=args,
   1541     resume_from_checkpoint=resume_from_checkpoint,
   1542     trial=trial,
   1543     ignore_keys_for_eval=ignore_keys_for_eval,
   1544 )

File /anaconda/envs/dgllife/lib/python3.8/site-packages/accelerate/utils/memory.py:136, in find_executable_batch_size.<locals>.decorator(*args, **kwargs)
    134     raise RuntimeError("No executable batch size found, reached zero.")
    135 try:
--> 136     return function(batch_size, *args, **kwargs)
    137 except Exception as e:
    138     if should_reduce_batch_size(e):

File /anaconda/envs/dgllife/lib/python3.8/site-packages/transformers/trainer.py:1809, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
...
   3162 if not (target.size() == input.size()):
   3163     raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
-> 3165 return torch.binary_cross_entropy_with_logits(input, target, weight, pos_weight, reduction_enum)

RuntimeError: result type Float can't be cast to the desired output type Long
```

Hi @seyonec,
Yes, that's a possibility! Can you try changing the type of the target tensors?
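A minimal sketch of that cast (names assumed; `batch` and `logits` are not from the thread): BCE with logits expects float targets, so the collated int64 labels can be cast to float32 before the loss is computed, for example in a custom compute_loss:

```python
import torch.nn.functional as F

# binary_cross_entropy_with_logits requires float targets; the collator
# produced int64 labels, hence the Float/Long mismatch above.
labels = batch["labels"].float()  # int64 -> float32
loss = F.binary_cross_entropy_with_logits(logits, labels)
```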

Modifying `target` in that line of functional.py to `target.float()` seems to resolve that issue! However, I now weirdly run into an issue with the input_edges features in collating_graphormer.

RuntimeError: Caught RuntimeError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "/anaconda/envs/dgllife/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/anaconda/envs/dgllife/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
    return self.collate_fn(data)
  File "/anaconda/envs/dgllife/lib/python3.8/site-packages/transformers/models/graphormer/collating_graphormer.py", line 119, in __call__
    batch["input_edges"][
RuntimeError: The expanded size of the tensor (3) must match the existing size (0) at non-singleton dimension 3.  Target sizes: [1, 1, 0, 3].  Tensor sizes: [0]
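The zero in the target shape ([1, 1, 0, 3]) suggests a graph with no edges in the batch; one hypothesis is that edgeless graphs yield an empty input_edges tensor that the collator cannot expand. A hedged sketch of a workaround, assuming `dataset` stores edge_index in the usual [2, num_edges] layout:

```python
# Hypothetical workaround: drop graphs with zero edges before training, since
# an empty edge list produces a zero-sized input_edges tensor in the collator.
dataset = dataset.filter(lambda example: len(example["edge_index"][0]) > 0)
```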
