Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError

Exception: DatasetGenerationCastError

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 2 new columns ({'message', 'data'}) and 7 missing columns ({'query', 'toolName', 'hostname', 'parameters', 'pid', 'status', 'time'}). This happened while the json dataset builder was generating data using hf://datasets/evalstate/query_log_test/logs/2025-07-24/logs-2025-07-24T14-46-08-431Z-ec56d497-11c7-4a0c-9b7a-d18cd73c2a1e.jsonl (at revision 4c5fdd03dcc819b5b5aa960bb4ae8aa25ee665ab). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

    Traceback (most recent call last):
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
        writer.write_table(table)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 644, in write_table
        pa_table = table_cast(pa_table, self._schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
        return cast_table_to_schema(table, schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
        raise CastError(
    datasets.table.CastError: Couldn't cast
    message: string
    level: int64
    timestamp: string
    sessionId: string
    data: string
    to
    {'level': Value('int64'), 'time': Value('string'), 'pid': Value('int64'), 'hostname': Value('string'), 'query': Value('string'), 'toolName': Value('string'), 'parameters': Value('string'), 'status': Value('string'), 'sessionId': Value('string'), 'timestamp': Value('string')}
    because column names don't match

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
        parquet_operations = convert_to_parquet(builder)
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
        builder.download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
        self._download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
        self._prepare_split(split_generator, **prepare_split_kwargs)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
        for job_id, done, content in self._prepare_split_single(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
        raise DatasetGenerationCastError.from_cast_error(
    datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
    All the data files must have the same columns, but at some point there are 2 new columns ({'message', 'data'}) and 7 missing columns ({'query', 'toolName', 'hostname', 'parameters', 'pid', 'status', 'time'}).
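One way to resolve the mismatch is to rewrite the offending log files so that every record carries the same set of keys. The snippet below is a minimal sketch, not the dataset's actual tooling: it assumes the JSONL logs sit under a local `logs/` directory, and it fills the missing columns from the preview's schema with nulls while dropping the extra `message`/`data` keys.

```python
import glob
import json
from pathlib import Path

# Target columns, taken from the schema shown in the preview table below.
COLUMNS = ["level", "time", "pid", "hostname", "query", "toolName",
           "parameters", "status", "sessionId", "timestamp"]

def normalize_file(path: str) -> None:
    """Rewrite one JSONL log file so every record has exactly COLUMNS keys."""
    records = [json.loads(line)
               for line in Path(path).read_text().splitlines() if line.strip()]
    # Keys outside COLUMNS (e.g. 'message', 'data') are dropped; missing keys become null.
    fixed = [{col: rec.get(col) for col in COLUMNS} for rec in records]
    Path(path).write_text("\n".join(json.dumps(r) for r in fixed) + "\n")

# 'logs/' is an assumed local checkout of the dataset's logs directory.
for jsonl in glob.glob("logs/**/*.jsonl", recursive=True):
    normalize_file(jsonl)
```

The alternative the error message suggests, separate configurations, is declared in the dataset's README metadata rather than in code; see the manual-configuration docs linked above.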
| level (int64) | time (string) | pid (int64) | hostname (string) | query (string) | toolName (string) | parameters (string) | status (string) | sessionId (string) | timestamp (string) |
|---|---|---|---|---|---|---|---|---|---|
| 30 | 2025-07-24T14:17:19.416Z | 69,827 | SSMITH-PC | llama | model_search | {"query":"llama","author":"","task":"","sort":"trendingScore","limit":20} | success | aed03a59-0de6-4ab0-811e-991e73fc0bd0 | 2025-07-24T14:17:19.416Z |
| 30 | 2025-07-24T14:20:06.336Z | 69,827 | SSMITH-PC | llama | model_search | {"query":"llama","author":"meta","task":"","sort":"trendingScore","limit":20} | success | aed03a59-0de6-4ab0-811e-991e73fc0bd0 | 2025-07-24T14:20:06.336Z |
| 30 | 2025-07-24T14:20:12.059Z | 69,827 | SSMITH-PC | llama | model_search | {"query":"llama","author":"meta","task":"","library":"transformers","sort":"trendingScore","limit":20} | success | aed03a59-0de6-4ab0-811e-991e73fc0bd0 | 2025-07-24T14:20:12.059Z |
| 30 | 2025-07-24T14:21:03.222Z | 69,827 | SSMITH-PC | evalstate | space_search | {"limit":10,"mcp":false} | success | aed03a59-0de6-4ab0-811e-991e73fc0bd0 | 2025-07-24T14:21:03.222Z |
| 30 | 2025-07-24T14:21:05.119Z | 69,827 | SSMITH-PC | evalstate | space_search | {"limit":10,"mcp":true} | success | aed03a59-0de6-4ab0-811e-991e73fc0bd0 | 2025-07-24T14:21:05.119Z |
| 30 | 2025-07-24T14:21:58.159Z | 69,827 | SSMITH-PC | how do i use the diffusers library | hf_doc_search | {} | success | aed03a59-0de6-4ab0-811e-991e73fc0bd0 | 2025-07-24T14:21:58.159Z |
| 30 | 2025-07-24T14:24:12.411Z | 69,827 | SSMITH-PC | kazakh | dataset_search | {"query":"kazakh ","tags":[],"limit":20} | success | aed03a59-0de6-4ab0-811e-991e73fc0bd0 | 2025-07-24T14:24:12.411Z |
| 30 | null | null | null | null | null | null | null | ec56d497-11c7-4a0c-9b7a-d18cd73c2a1e | 2025-07-24T14:46:03.201Z |
| 30 | 2025-07-24T15:05:31.866Z | 96,899 | SSMITH-PC | herd of llamas | paper_search | {"results_limit":12,"concise_only":true} | success | 59d6700d-48f2-4a9b-939f-54dce0297fa4 | 2025-07-24T15:05:31.866Z |
| 30 | 2025-07-24T15:06:33.561Z | 96,899 | SSMITH-PC | herd of llamas | paper_search | {"results_limit":6,"concise_only":true} | success | 59d6700d-48f2-4a9b-939f-54dce0297fa4 | 2025-07-24T15:06:33.561Z |
| 30 | 2025-07-24T15:06:41.094Z | 96,899 | SSMITH-PC | kazakh language | dataset_search | {"query":"kazakh language","tags":[],"limit":20} | success | 59d6700d-48f2-4a9b-939f-54dce0297fa4 | 2025-07-24T15:06:41.094Z |
| 30 | 2025-07-24T15:06:48.133Z | 96,899 | SSMITH-PC | tool calling | dataset_search | {"query":"tool calling","tags":[],"limit":20} | success | 59d6700d-48f2-4a9b-939f-54dce0297fa4 | 2025-07-24T15:06:48.133Z |
| 30 | 2025-07-24T15:36:13.786Z | 17,127 | SSMITH-PC | Byzantine fault tolerance distributed consensus algorithms sentiment analysis | hf_doc_search | {"product":"transformers"} | success | f69fd1f0-70cd-4475-87a1-e0138879b8f6 | 2025-07-24T15:36:13.786Z |
| 30 | 2025-07-24T15:41:30.450Z | 17,127 | SSMITH-PC | federated learning Byzantine fault tolerance consensus algorithms | hf_doc_search | {"product":""} | success | f69fd1f0-70cd-4475-87a1-e0138879b8f6 | 2025-07-24T15:41:30.450Z |
| 30 | 2025-07-24T15:42:02.664Z | 17,127 | SSMITH-PC | Byzantine fault tolerance consensus algorithms distributed systems | paper_search | {"results_limit":12,"concise_only":false} | success | f69fd1f0-70cd-4475-87a1-e0138879b8f6 | 2025-07-24T15:42:02.664Z |
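For local analysis, a preview like the one above can be approximated by reading the JSONL files directly with pandas, which tolerates the mixed schemas by filling missing columns with NaN. A rough sketch, again assuming a local `logs/` directory rather than the hosted dataset:

```python
import glob
import pandas as pd

# Each file becomes a DataFrame; concat aligns on column names, so files with the
# {'message', 'data'} schema simply get NaN in the query-log columns (cf. the null row above).
frames = [pd.read_json(path, lines=True)
          for path in glob.glob("logs/**/*.jsonl", recursive=True)]
logs = pd.concat(frames, ignore_index=True)

# Example: count calls per tool and status.
print(logs.groupby(["toolName", "status"]).size())
```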
No dataset card yet.

Downloads last month: 16