The dataset viewer is not available for this split.
Cannot load the dataset split (in streaming mode) to extract the first rows.
Error code:   StreamingRowsError
Exception:    CastError
Message:      Couldn't cast
stake: float
emission: float
trust: float
validator_trust: float
dividends: float
incentive: float
ranks: float
total_stake: float
consensus: float
validator_permit: bool
last_update: int64
block: int64
netuid: int64
uid: int64
hotkey: string
coldkey: string
ip: string
timestamp: timestamp[ns]
__index_level_0__: int64
-- schema metadata --
pandas: '{"index_columns": ["__index_level_0__"], "column_indexes": [{"na' + 2412
to
{'mean_forward_time': Value(dtype='float64', id=None), 'task_distribution': Value(dtype='float64', id=None), 'mean_query_length': Value(dtype='float64', id=None), 'mean_reference_tokens_per_second': Value(dtype='float64', id=None), 'mean_query_tokens_per_second': Value(dtype='float64', id=None), 'unique_challenges': Value(dtype='float64', id=None), 'mean_reward': Value(dtype='float64', id=None), 'zero_rewards': Value(dtype='float64', id=None), 'status_codes': Value(dtype='float64', id=None), 'completion_tokens_per_second': Value(dtype='float64', id=None), '__index_level_0__': Value(dtype='string', id=None)}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 437, in query
                  pa_table = pa.concat_tables(
                File "pyarrow/table.pxi", line 5317, in pyarrow.lib.concat_tables
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 1 was different: 
              mean_forward_time: double
              task_distribution: double
              mean_query_length: double
              mean_reference_tokens_per_second: double
              mean_query_tokens_per_second: double
              unique_challenges: double
              mean_reward: double
              zero_rewards: double
              status_codes: double
              completion_tokens_per_second: double
              __index_level_0__: string
              vs
              __index_level_0__: int64
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 105, in get_rows_content
                  pa_table = rows_index.query(offset=0, length=rows_max_number)
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 586, in query
                  return self.parquet_index.query(offset=offset, length=length)
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 444, in query
                  raise SchemaMismatchError("Parquet files have different schema.", err)
              libcommon.parquet_utils.SchemaMismatchError: ('Parquet files have different schema.', ArrowInvalid('Schema at index 1 was different: \nmean_forward_time: double\ntask_distribution: double\nmean_query_length: double\nmean_reference_tokens_per_second: double\nmean_query_tokens_per_second: double\nunique_challenges: double\nmean_reward: double\nzero_rewards: double\nstatus_codes: double\ncompletion_tokens_per_second: double\n__index_level_0__: string\nvs\n__index_level_0__: int64'))
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 322, in compute
                  compute_first_rows_from_parquet_response(
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 119, in compute_first_rows_from_parquet_response
                  return create_first_rows_response(
                File "/src/libs/libcommon/src/libcommon/viewer_utils/rows.py", line 135, in create_first_rows_response
                  rows_content = get_rows_content(rows_max_number)
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 114, in get_rows_content
                  raise SplitParquetSchemaMismatchError(
              libcommon.exceptions.SplitParquetSchemaMismatchError: Split parquet files being processed have different schemas. Ensure all files have identical column names.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/utils.py", line 96, in get_rows_or_raise
                  return get_rows(
                File "/src/libs/libcommon/src/libcommon/utils.py", line 183, in decorator
                  return func(*args, **kwargs)
                File "/src/services/worker/src/worker/utils.py", line 73, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1389, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 282, in __iter__
                  for key, pa_table in self.generate_tables_fn(**self.kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 97, in _generate_tables
                  yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 75, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              stake: float
              emission: float
              trust: float
              validator_trust: float
              dividends: float
              incentive: float
              ranks: float
              total_stake: float
              consensus: float
              validator_permit: bool
              last_update: int64
              block: int64
              netuid: int64
              uid: int64
              hotkey: string
              coldkey: string
              ip: string
              timestamp: timestamp[ns]
              __index_level_0__: int64
              -- schema metadata --
              pandas: '{"index_columns": ["__index_level_0__"], "column_indexes": [{"na' + 2412
              to
              {'mean_forward_time': Value(dtype='float64', id=None), 'task_distribution': Value(dtype='float64', id=None), 'mean_query_length': Value(dtype='float64', id=None), 'mean_reference_tokens_per_second': Value(dtype='float64', id=None), 'mean_query_tokens_per_second': Value(dtype='float64', id=None), 'unique_challenges': Value(dtype='float64', id=None), 'mean_reward': Value(dtype='float64', id=None), 'zero_rewards': Value(dtype='float64', id=None), 'status_codes': Value(dtype='float64', id=None), 'completion_tokens_per_second': Value(dtype='float64', id=None), '__index_level_0__': Value(dtype='string', id=None)}
              because column names don't match
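To confirm the mismatch outside the viewer, the sketch below compares the Arrow schemas of the repository's parquet files. It is an illustration, not part of the error output above: the local `data/` directory is an assumed download location, and the script simply prints every schema that differs from the first file's.

```python
# Diagnostic sketch (assumes a local copy of the repo's parquet files in ./data/).
# Prints every schema that differs from the first file's schema, reproducing
# the mismatch reported by the viewer.
from pathlib import Path

import pyarrow.parquet as pq

files = sorted(Path("data").glob("**/*.parquet"))  # hypothetical local layout
reference = pq.read_schema(files[0])
for path in files[1:]:
    schema = pq.read_schema(path)
    if not schema.equals(reference):
        print(f"{path.name} differs from {files[0].name}:")
        print(schema)
```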

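One part of the reported difference is the pandas index column `__index_level_0__`, stored as int64 in some files and as string in others. A minimal sketch, assuming the files are regenerated from pandas DataFrames (file names are placeholders), writes them without the implicit index so that column disappears entirely:

```python
# Rewrite sketch: index=False stops pandas from persisting its index as the
# "__index_level_0__" column, whose dtype (int64 vs. string) differs between
# files in the traceback above. File names are hypothetical.
import pandas as pd

df = pd.read_parquet("stats.parquet")              # hypothetical input file
df.to_parquet("stats_fixed.parquet", index=False)  # no __index_level_0__ column
```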
Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
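The larger issue is that the two schemas describe entirely different tables (network metagraph columns versus per-miner statistics), so they cannot share one split. Until the files are separated into distinct configs on the Hub, a hedged workaround is to load each group of files on its own with the generic parquet builder; the glob patterns below are hypothetical and would need to match the repository's actual layout.

```python
# Workaround sketch: load each group of parquet files as its own dataset so
# incompatible schemas are never concatenated. The glob patterns are
# hypothetical placeholders for the repository's actual file layout.
from datasets import load_dataset

metagraph = load_dataset(
    "parquet",
    data_files="data/metagraph-*.parquet",  # hypothetical pattern
    split="train",
)
stats = load_dataset(
    "parquet",
    data_files="data/stats-*.parquet",      # hypothetical pattern
    split="train",
)
print(metagraph.features)
print(stats.features)
```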
