# AutoTrain Dataset for project: bam-v2

## Dataset Description

This dataset has been automatically processed by AutoTrain for project bam-v2.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:
```json
[
  {
    "feat_unix": 1548158400,
    "feat_date": "2019-01-22 12:00:00",
    "id": "BTC/USD",
    "feat_open": 3543.58,
    "feat_high": 3590.0,
    "feat_low": 3523.1,
    "target": 3557.860107421875,
    "feat_Volume BTC": 3593298.05,
    "feat_Volume USD": 1009.88,
    "feat_Volume U": null,
    "feat_Date": null,
    "feat_Open": null,
    "feat_High": null,
    "feat_Low": null,
    "feat_Adj Close": null,
    "feat_Volume": null
  },
  {
    "feat_unix": 1627473600,
    "feat_date": "2021-07-28 12:00:00",
    "id": "BTC/USD",
    "feat_open": 40786.1,
    "feat_high": 40900.0,
    "feat_low": 39601.35,
    "target": 39708.12109375,
    "feat_Volume BTC": 265.2041301,
    "feat_Volume USD": 10530757.42,
    "feat_Volume U": null,
    "feat_Date": null,
    "feat_Open": null,
    "feat_High": null,
    "feat_Low": null,
    "feat_Adj Close": null,
    "feat_Volume": null
  }
]
```
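In the samples above, `feat_date` is simply the UTC rendering of the `feat_unix` timestamp. A minimal stdlib sketch of that relationship (the helper name is illustrative, not part of the dataset):

```python
from datetime import datetime, timezone

def unix_to_feat_date(unix_ts: int) -> str:
    """Render a Unix timestamp the way the feat_date column does (UTC)."""
    return datetime.fromtimestamp(unix_ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(unix_to_feat_date(1548158400))  # → 2019-01-22 12:00:00
```

The same conversion reproduces the second sample's `feat_date` (`1627473600` → `2021-07-28 12:00:00`), so downstream code can safely treat the two columns as redundant encodings of one field.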
### Dataset Fields

The dataset has the following fields (also called "features"):
```json
{
  "feat_unix": "Value(dtype='int64', id=None)",
  "feat_date": "Value(dtype='string', id=None)",
  "id": "Value(dtype='string', id=None)",
  "feat_open": "Value(dtype='float64', id=None)",
  "feat_high": "Value(dtype='float64', id=None)",
  "feat_low": "Value(dtype='float64', id=None)",
  "target": "Value(dtype='float32', id=None)",
  "feat_Volume BTC": "Value(dtype='float64', id=None)",
  "feat_Volume USD": "Value(dtype='float64', id=None)",
  "feat_Volume U": "Value(dtype='float64', id=None)",
  "feat_Date": "Value(dtype='string', id=None)",
  "feat_Open": "Value(dtype='float64', id=None)",
  "feat_High": "Value(dtype='float64', id=None)",
  "feat_Low": "Value(dtype='float64', id=None)",
  "feat_Adj Close": "Value(dtype='float64', id=None)",
  "feat_Volume": "Value(dtype='int64', id=None)"
}
```
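Once a row is materialized in Python, these Arrow dtypes map onto plain Python types, with `null` becoming `None`. A hedged stdlib sketch of checking one record against a subset of the schema (the `PY_TYPES` mapping and `check_record` helper are illustrative, not part of the dataset):

```python
# Map the declared Arrow dtypes onto the Python types seen in a materialized row.
PY_TYPES = {"int64": int, "float64": float, "float32": float, "string": str}

# A subset of the declared schema, for illustration.
SCHEMA = {
    "feat_unix": "int64",
    "feat_date": "string",
    "id": "string",
    "feat_open": "float64",
    "target": "float32",
}

def check_record(record: dict, schema: dict) -> list:
    """Return names of fields whose value is neither None nor the declared type."""
    return [
        name for name, dtype in schema.items()
        if record.get(name) is not None and not isinstance(record[name], PY_TYPES[dtype])
    ]

row = {"feat_unix": 1548158400, "feat_date": "2019-01-22 12:00:00",
       "id": "BTC/USD", "feat_open": 3543.58, "target": 3557.860107421875}
print(check_record(row, SCHEMA))  # → []
```

Treating `None` as valid for every field matters here: as the samples show, the capitalized columns (`feat_Open`, `feat_Volume`, ...) are null for `BTC/USD` rows.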
### Dataset Splits

This dataset is split into a train and a validation split. The split sizes are as follows:

| Split name | Num samples |
|---|---|
| train | 41362 |
| valid | 10349 |
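The counts above amount to roughly an 80/20 train/validation split:

```python
train, valid = 41362, 10349
total = train + valid
print(total)                    # → 51711
print(round(train / total, 2))  # → 0.8
```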