---
dataset_name: hlo-feature-dataset
pretty_name: HLO Feature Dataset for Deep Learning Resource Estimation
dataset_type: graph-and-tabular
license: apache-2.0
task_categories:
  - graph-ml
  - tabular-regression
language: en
tags:
  - HPC
  - resource-prediction
  - XLA
  - compiler-features
  - deep-learning
  - graph-learning
  - scheduling
size_categories:
  - 1K<n<10K
source_datasets:
  - custom
dataset_summary: >
  The HLO Feature Dataset contains High-Level Optimizer (HLO) graph features and
  metadata extracted from deep learning training workloads. It is designed for
  tasks such as runtime prediction, resource estimation, and graph-based
  machine learning in HPC environments.

  Each entry pairs model configuration metadata with compiler graph data stored
  in `.npz` format.

  Ideal for ML system optimization studies, GNN research, and AI workload
  scheduling.
structured_data:
  features:
    - name: batch
      type: integer
    - name: epochs
      type: integer
    - name: learn_rate
      type: float
    - name: gpu_core_count
      type: integer
    - name: gpu_memory_size
      type: integer
    - name: fit_time
      type: float
    - name: npz_path
      type: string
  graph_data:
    node_features: node_feat
    edge_index: edge_index
    additional_keys:
      - node_opcode
      - node_config_ids
      - node_splits
usage_example: |
  ```python
  from datasets import load_dataset
  import numpy as np

  dataset = load_dataset("your-username/hlo-feature-dataset")
  sample = dataset['train'][0]

  graph_data = np.load(sample['npz_path'])
  node_features = graph_data['node_feat']
  edges = graph_data['edge_index']
  ```
---

# HLO Feature Dataset for Deep Learning Resource Estimation


## Dataset Summary

The HLO Feature Dataset is a collection of compiler-level graph features (HLO graphs) extracted from deep learning training workloads. Together with detailed metadata (model configurations, GPU specifications), it enables machine learning approaches to:

- ⏱️ Training Time Prediction
- 📉 Resource Consumption Estimation
- HPC and GPU Scheduling Optimization
- 🧩 Graph-based Neural Architecture Analysis

This dataset is ideal for experimenting with regression models (e.g., XGBoost) and Graph Neural Networks (GNNs) using compiler features.


## Supported Tasks

- ⚙️ Runtime & Resource Prediction: Predict training time (`fit_time`) from HLO features.
- 📊 ML for Systems Optimization: Use tabular + graph data for AI workload management.
- 🔗 Graph Representation Learning: Apply GNNs to HLO graphs (`node_feat`, `edge_index`).

## Dataset Structure

Each entry includes:

- Metadata: from `dataset-new.csv` (model, optimizer, GPU specs, timing metrics, etc.)
- HLO Graph Features: `.npz` files (see the inspection sketch below) containing:
  - `node_opcode`, `node_feat`, `edge_index`, `node_config_ids`, `node_splits`
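To check what a given `.npz` file holds, you can list its arrays directly. This is a minimal inspection sketch; the file name is a placeholder for any value from the metadata's `npz_path` column:

```python
import numpy as np

# Hypothetical path: substitute any value from the metadata's `npz_path` column.
with np.load("sample_graph.npz") as graph:
    for key in graph.files:  # e.g., node_feat, edge_index, node_opcode, ...
        print(key, graph[key].shape, graph[key].dtype)
```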

## Usage Example

This example demonstrates how to load the metadata, select features, and train an XGBoost model to predict training time (`fit_time`), mirroring the baseline notebook described below.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

# Load metadata CSV
df = pd.read_csv('dataset-new.csv')

# Example feature selection (categorical columns are omitted here;
# they would need encoding before use)
X = df[['batch', 'epochs', 'learn_rate', 'gpu_core_count', 'gpu_memory_size']]
y = df['fit_time']

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize and fit the XGBoost regressor
xgb_model = XGBRegressor(n_estimators=100, learning_rate=0.1, max_depth=6, random_state=42)
xgb_model.fit(X_train, y_train)

# Evaluate with RMSE (sqrt of MSE, for compatibility across scikit-learn versions)
preds = xgb_model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, preds))
print(f"RMSE: {rmse:.4f}")
```
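The metadata columns above can be complemented with simple graph-level statistics derived from the `.npz` files. The sketch below is one possible augmentation, assuming each row's `npz_path` resolves to a readable file and that `edge_index` stores one edge per row:

```python
# Hedged sketch: add coarse graph statistics as extra tabular features.
def graph_stats(path):
    with np.load(path) as g:
        return pd.Series({
            'num_nodes': g['node_feat'].shape[0],
            # Assumes edge_index has shape (num_edges, 2); adjust if transposed.
            'num_edges': g['edge_index'].shape[0],
        })

graph_feats = df['npz_path'].apply(graph_stats)
X_aug = pd.concat([X, graph_feats], axis=1)
```

Whether these statistics improve the regression is an empirical question; they are cheap to compute and easy to drop if uninformative.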

## Example Notebooks

### 🚀 Baseline: XGBoost for Resource Estimation

A sample baseline implementation using XGBoost is provided to demonstrate how to predict resource metrics such as `fit_time` from the dataset's metadata.

📥 Download the notebook from the repository: `Baseline_XGBoost_Resource_Estimation.ipynb`

This notebook covers:

- Loading and preprocessing metadata from `dataset-new.csv`
- Training an XGBoost regressor to predict training time
- Evaluating model performance (e.g., RMSE)

Note: Make sure to adjust paths when cloning the dataset locally or integrating with the Hugging Face `datasets` API (see the sketch below).
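If you work from a local clone, one way to resolve relative `npz_path` values is to prefix them with the clone's root directory. A minimal sketch, with a hypothetical clone location:

```python
import os

# Hypothetical location of a local clone of this dataset.
LOCAL_ROOT = '/path/to/hlo-feature-dataset'

df['npz_path'] = df['npz_path'].apply(lambda p: os.path.join(LOCAL_ROOT, p))
```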


## Loading HLO Graph Features

For graph-based ML tasks, load the `.npz` files:

```python
npz_file = df.iloc[0]['npz_path']
graph_data = np.load(npz_file)

node_features = graph_data['node_feat']
edges = graph_data['edge_index']

print("Node Feature Shape:", node_features.shape)
print("Edge Index Shape:", edges.shape)
```
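These arrays map naturally onto graph learning frameworks. The following is a hedged sketch for PyTorch Geometric (not a dependency of this dataset), assuming `edge_index` stores one edge per row and therefore needs transposing to PyG's `(2, num_edges)` convention:

```python
import numpy as np
import torch
from torch_geometric.data import Data

graph_data = np.load(df.iloc[0]['npz_path'])

data = Data(
    x=torch.from_numpy(graph_data['node_feat']).float(),
    # Assumes shape (num_edges, 2); drop the transpose if already (2, num_edges).
    edge_index=torch.from_numpy(graph_data['edge_index']).long().t().contiguous(),
)
data.node_opcode = torch.from_numpy(graph_data['node_opcode'])
print(data)
```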


---

## License
This dataset is released under the Apache-2.0 license, as declared in the metadata above.

---

## Contributions
Open to contributions! Feel free to suggest improvements or share your models trained on this dataset.