Dataset Card for US Employment Discrimination Cases Dataset
Dataset Summary
This dataset contains metadata for 167,093 employment discrimination cases in the United States. The data was collected from CourtListener.com and is limited to precedential opinions. The dataset covers cases filed from May 2, 1785, to August 1, 2024.
Languages
The dataset is in English.
Dataset Structure
Data Fields
- `source_file`: The name of the HTML file from which the data was extracted
- `permalink`: The unique URL for the opinion on CourtListener.com
- `name`: The case name, typically in the format "Party A v. Party B"
- `date_filed`: The date the opinion was filed, in YYYY-MM-DD format
- `status`: The status of the opinion, which is "Precedential" for all entries in this dataset
- `citations`: The official citation(s) for the opinion
- `docket_number`: The docket number assigned to the case
- `description`: A brief excerpt or summary of the case, often mentioning key aspects related to employment discrimination
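A minimal sketch of loading the dataset with the Hugging Face `datasets` library and inspecting the fields listed above. The repository ID and split name below are placeholder assumptions, not values taken from this card; substitute the actual Hub ID shown on this page.

```python
from datasets import load_dataset

# Hypothetical placeholder: substitute this dataset's actual Hugging Face Hub ID.
DATASET_ID = "your-namespace/us-employment-discrimination-cases"

# Load the split (split name assumed to be "train") and inspect the schema described above.
ds = load_dataset(DATASET_ID, split="train")
print(ds.column_names)  # e.g. source_file, permalink, name, date_filed, status, ...

# Look at a single record.
row = ds[0]
print(row["name"], row["date_filed"], row["docket_number"])
```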
Dataset Creation
Source Data
Initial Data Collection and Normalization
- Data Type: Precedential opinions
- Subject Matter: Employment discrimination cases
- Date Range: 1785-05-02 to 2024-08-01
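As a quick sanity check of the collection criteria above, the sketch below converts the split to a pandas DataFrame and confirms the status values and the stated date range. It reuses the hypothetical placeholder repository ID from the previous snippet and is an illustration only, not part of the dataset's tooling.

```python
import pandas as pd
from datasets import load_dataset

# Hypothetical placeholder, as in the previous sketch.
DATASET_ID = "your-namespace/us-employment-discrimination-cases"

df = load_dataset(DATASET_ID, split="train").to_pandas()
df["date_filed"] = pd.to_datetime(df["date_filed"], errors="coerce")

print("Status values:", df["status"].unique())     # card states all entries are "Precedential"
print("Earliest filing:", df["date_filed"].min())  # card states 1785-05-02
print("Latest filing:", df["date_filed"].max())    # card states 2024-08-01
```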
Who are the source language producers?
The source language producers are judges and legal professionals who authored the court opinions.
Considerations for Using the Data
Social Impact of Dataset
This dataset can be used for various purposes, including:
- Legal research on employment discrimination trends over time
- Analysis of precedential cases in different jurisdictions
- Studying the evolution of employment law and protected classes
- Text analysis of legal language in discrimination cases
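For the first use case above (trends over time), here is a short, hedged sketch that counts opinions per decade from the `date_filed` field. It reuses the DataFrame `df` built in the previous snippet.

```python
# Count precedential opinions per decade of filing.
years = df["date_filed"].dt.year.dropna().astype(int)
cases_per_decade = (years // 10 * 10).value_counts().sort_index()
print(cases_per_decade.tail(10))  # the ten most recent decades
```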
Other Known Limitations
- All opinions in this dataset are of "Precedential" status, meaning they can be cited as legal authority in future cases.
- The dataset spans over two centuries, providing a comprehensive historical view of employment discrimination case law in the United States.
- The 'description' field may contain keywords related to discrimination, employment, or specific protected characteristics relevant to the case.
Additional Information
Licensing Information
This dataset is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Disclaimer
While efforts have been made to ensure accuracy, users should verify critical information against original sources before relying on it for legal or academic purposes.