CEED: California EarthquakE Dataset for Machine Learning and Cloud Computing

The California EarthquakE Dataset (CEED) is a dataset of earthquake waveforms and metadata for machine learning and cloud computing. The dataset structure is shown below, and you can find more information about the format at AI4EPS.

 Group: / len:60424
  |- Group: /ci38457511 len:35
  |  |-* begin_time = 2019-07-06T03:19:23.668000
  |  |-* depth_km = 8.0
  |  |-* end_time = 2019-07-06T03:21:23.668000
  |  |-* event_id = ci38457511
  |  |-* event_time = 2019-07-06T03:19:53.040000
  |  |-* event_time_index = 2937
  |  |-* latitude = 35.7695
  |  |-* longitude = -117.5993
  |  |-* magnitude = 7.1
  |  |-* magnitude_type = w
  |  |-* nt = 12000
  |  |-* nx = 35
  |  |-* sampling_rate = 100
  |  |-* source = SC
  |  |- Dataset: /ci38457511/CI.CCC..HH (shape:(3, 12000))
  |  |  |- (dtype=float32)
  |  |  |  |-* azimuth = 141.849479
  |  |  |  |-* back_azimuth = 321.986302
  |  |  |  |-* component = ENZ
  |  |  |  |-* depth_km = -0.67
  |  |  |  |-* distance_km = 34.471389
  |  |  |  |-* dt_s = 0.01
  |  |  |  |-* elevation_m = 670.0
  |  |  |  |-* event_id = ['ci38457511' 'ci38457511' 'ci37260300']
  |  |  |  |-* instrument = HH
  |  |  |  |-* latitude = 35.52495
  |  |  |  |-* local_depth_m = 0.0
  |  |  |  |-* location = 
  |  |  |  |-* longitude = -117.36453
  |  |  |  |-* network = CI
  |  |  |  |-* p_phase_index = 3575
  |  |  |  |-* p_phase_polarity = U
  |  |  |  |-* p_phase_score = 0.8
  |  |  |  |-* p_phase_status = manual
  |  |  |  |-* p_phase_time = 2019-07-06T03:19:59.422000
  |  |  |  |-* phase_index = [ 3575  4184 11826]
  |  |  |  |-* phase_picking_channel = ['HHZ' 'HNN' 'HHZ']
  |  |  |  |-* phase_polarity = ['U' 'N' 'N']
  |  |  |  |-* phase_remark = ['i' 'e' 'e']
  |  |  |  |-* phase_score = [0.8 0.5 0.5]
  |  |  |  |-* phase_status = manual
  |  |  |  |-* phase_time = ['2019-07-06T03:19:59.422000' '2019-07-06T03:20:05.509000' '2019-07-06T03:21:21.928000']
  |  |  |  |-* phase_type = ['P' 'S' 'P']
  |  |  |  |-* s_phase_index = 4184
  |  |  |  |-* s_phase_polarity = N
  |  |  |  |-* s_phase_score = 0.5
  |  |  |  |-* s_phase_status = manual
  |  |  |  |-* s_phase_time = 2019-07-06T03:20:05.509000
  |  |  |  |-* snr = [ 637.9865898   286.9100766  1433.04052911]
  |  |  |  |-* station = CCC
  |  |  |  |-* unit = 1e-6m/s
  |  |- Dataset: /ci38457511/CI.CCC..HN (shape:(3, 12000))
  |  |  |- (dtype=float32)
  |  |  |  |-* azimuth = 141.849479
  |  |  |  |-* back_azimuth = 321.986302
  |  |  |  |-* component = ENZ
  |  |  |  |-* depth_km = -0.67
  |  |  |  |-* distance_km = 34.471389
  |  |  |  |-* dt_s = 0.01
  |  |  |  |-* elevation_m = 670.0
  |  |  |  |-* event_id = ['ci38457511' 'ci38457511' 'ci37260300']
  ......
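
If you work with the underlying HDF5 files directly (for example after downloading them from the dataset repository), the layout above can be walked with h5py. The snippet below is only a minimal sketch; "ceed_example.h5" is a placeholder filename for whichever waveform file you downloaded.

import h5py

# Minimal sketch of walking the layout shown above.
# "ceed_example.h5" is a placeholder; substitute the file you downloaded.
with h5py.File("ceed_example.h5", "r") as fp:
    for event_id, event in fp.items():            # e.g. "ci38457511"
        print(event_id, dict(event.attrs))        # event-level metadata
        for trace_id, trace in event.items():     # e.g. "CI.CCC..HH"
            print(trace_id, trace.shape, trace.attrs["component"])
            waveform = trace[:]                   # (3, nt) float32 array
        break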

Getting Started

Requirements

  • datasets
  • h5py
  • fsspec
  • pytorch

Usage

Import the necessary packages:

import h5py
import numpy as np
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

We have 6 configurations for the dataset:

  • "station"
  • "event"
  • "station_train"
  • "event_train"
  • "station_test"
  • "event_test"

"station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix "_train" and "_test" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.

Each sample of the "station" configurations is a dictionary with the following keys (a small conversion sketch follows the list):

  • data: the waveform with shape (3, nt); the default time length nt is 8192
  • begin_time: the begin time of the waveform data
  • end_time: the end time of the waveform data
  • phase_time: the phase arrival time
  • phase_index: the time point index of the phase arrival time
  • phase_type: the phase type
  • phase_polarity: the phase polarity in ('U', 'D', 'N')
  • event_time: the event time
  • event_time_index: the time point index of the event time
  • event_location: the event location with shape (3,), including latitude, longitude, depth
  • station_location: the station location with shape (3,), including latitude, longitude and depth
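
As a quick illustration of how these fields fit together, the sketch below (not part of the dataset API; the helper name and the Gaussian label width sigma are our own choices) converts one station sample into a waveform tensor and a simple Gaussian phase-pick target:

import numpy as np
import torch

def station_sample_to_tensors(sample, sigma=10.0):
    # (3, nt) waveform
    waveform = torch.as_tensor(np.asarray(sample["data"]), dtype=torch.float32)
    nt = waveform.shape[-1]
    # one Gaussian target per pick, centered at the pick index
    t = torch.arange(nt, dtype=torch.float32)
    target = torch.stack(
        [torch.exp(-0.5 * ((t - float(idx)) / sigma) ** 2) for idx in sample["phase_index"]]
    )
    return waveform, target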

Each sample of the "event" configurations is a dictionary with the following keys (see the sketch after the list):

  • data: the waveforms with shape (n_station, 3, nt); the default time length nt is 8192
  • begin_time: the begin time of the waveform data
  • end_time: the end time of the waveform data
  • phase_time: the phase arrival time with shape (n_station,)
  • phase_index: the time point index of the phase arrival time with shape (n_station,)
  • phase_type: the phase type with shape (n_station,)
  • phase_polarity: the phase polarity in ('U', 'D', 'N') with shape (n_station,)
  • event_time: the event time
  • event_time_index: the time point index of the event time
  • event_location: the space-time coordinates of the event with shape (n_station, 3)
  • station_location: the space coordinates of the station with shape (n_station, 3), including latitude, longitude and depth
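
Analogously, the fixed-size fields of an event sample can be stacked into per-station tensors. The sketch below is illustrative only and the helper name is hypothetical:

import numpy as np
import torch

def event_sample_to_tensors(sample):
    data = torch.as_tensor(np.asarray(sample["data"]), dtype=torch.float32)                  # (n_station, 3, nt)
    stations = torch.as_tensor(np.asarray(sample["station_location"]), dtype=torch.float32)  # (n_station, 3)
    phase_index = torch.as_tensor(np.asarray(sample["phase_index"]), dtype=torch.long)       # (n_station,)
    return data, stations, phase_index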

The default configuration is station_test. You can select a configuration with the name argument. For example:

# load dataset
# NOTE: streaming (IterableDataset) is hard to support because of how HDF5 files are read,
# so we recommend loading the dataset directly and converting it to an iterable later.
# The dataset is very large, so the first load will take a while to download.

# to load "station_test" with test split
ceed = load_dataset("AI4EPS/CEED", split="test")
# or
ceed = load_dataset("AI4EPS/CEED", name="station_test", split="test")

# to load "event" with train split
ceed = load_dataset("AI4EPS/CEED", name="event", split="train")

Example of loading the dataset

ceed = load_dataset("AI4EPS/CEED", name="station_test", split="test")

# print the first sample of the dataset
for example in ceed:
    print("\nIterable test\n")
    print(example.keys())
    for key in example.keys():
        if key == "data":
            print(key, np.array(example[key]).shape)
        else:
            print(key, example[key])
    break

# wrap the dataset in a PyTorch DataLoader
ceed = ceed.with_format("torch")
dataloader = DataLoader(ceed, batch_size=8, num_workers=0, collate_fn=lambda x: x)

for batch in dataloader:
    print("\nDataloader test\n")
    print(f"Batch size: {len(batch)}")
    print(batch[0].keys())
    for key in batch[0].keys():
        if key == "data":
            print(key, np.array(batch[0][key]).shape)
        else:
            print(key, batch[0][key])
    break
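
The collate_fn=lambda x: x above simply returns each batch as a list of sample dictionaries. If you prefer batched tensors, a custom collate function can stack the fixed-size fields; the sketch below continues the station-based example, assumes all samples share the same nt, and uses our own helper name stack_station_batch:

# stack fixed-size waveforms into a (batch, 3, nt) tensor and
# keep ragged pick fields as plain Python lists
def stack_station_batch(samples):
    return {
        "data": torch.stack([torch.as_tensor(s["data"], dtype=torch.float32) for s in samples]),
        "phase_index": [s["phase_index"] for s in samples],
        "phase_type": [s["phase_type"] for s in samples],
    }

dataloader = DataLoader(ceed, batch_size=8, num_workers=0, collate_fn=stack_station_batch)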

Extension

If you want to introduce new features into the labels, we recommend making a copy of CEED.py and modifying the _generate_examples method. Check AI4EPS/EQNet for an example. To load the dataset with your modified script, specify the path to the script in the load_dataset function:

ceed = load_dataset("path/to/your/CEED.py", name="station_test", split="test", trust_remote_code=True)