repo_id | file_path | content | __index_level_0__
---|---|---|---|
rapidsai_public_repos/cloud-ml-examples/azure
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/README.md
|
### Example Notebooks to run RAPIDS with Azure Kubernetes Service and Dask Kubernetes
- Detailed instructions to set up RAPIDS with AKS are in the markdown file [Detailed_setup_guide.md](Detailed_setup_guide.md). Go through this before you try to run any of the other notebooks.
- Shorter example notebook using Dask + RAPIDS + XGBoost in [MNMG_XGBoost.ipynb](./MNMG_XGBoost.ipynb)
- Full example with performance sweeps over multiple algorithms and larger dataset in [Dask_cuML_Exploration_Full.ipynb](./Dask_cuML_Exploration_Full.ipynb)
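
Both notebooks bring up their Dask cluster the same way: they build scheduler and worker pod templates from the YAML specs under `podspecs/azure/`, create a `KubeCluster`, and attach a `dask.distributed.Client`. A minimal sketch of that pattern (assuming the spec paths used in the notebooks) looks like:

```python
import yaml
import dask
from dask_kubernetes import KubeCluster, make_pod_from_dict
from dask.distributed import Client

def pod_from_yaml(path):
    # Expand environment variables (e.g. registry names) before building the pod template
    with open(path) as f:
        spec = dask.config.expand_environment_variables(yaml.safe_load(f))
    return make_pod_from_dict(spec)

worker_pod = pod_from_yaml("./podspecs/azure/cuda-worker-specs.yml")
sched_pod = pod_from_yaml("./podspecs/azure/scheduler-specs.yml")

cluster = KubeCluster(pod_template=worker_pod, scheduler_pod_template=sched_pod)
cluster.scale(4)           # request four GPU workers
client = Client(cluster)   # connect Dask to the AKS-hosted scheduler
```

See the notebooks for the full configuration (TLS certificates, timeouts, and logging via `dask.config.set`).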
| 0 |
rapidsai_public_repos/cloud-ml-examples/azure
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/nvidia-device-plugin-ds.yml
|
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nvidia-device-plugin-daemonset
namespace: gpu-resources
spec:
selector:
matchLabels:
name: nvidia-device-plugin-ds
updateStrategy:
type: RollingUpdate
template:
metadata:
# Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
# reserves resources for critical add-on pods so that they can be rescheduled after
# a failure. This annotation works in tandem with the toleration below.
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: nvidia-device-plugin-ds
spec:
tolerations:
# Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
# This, along with the annotation above marks this pod as a critical add-on.
- key: CriticalAddonsOnly
operator: Exists
- key: nvidia.com/gpu
operator: Exists
effect: NoSchedule
containers:
- image: mcr.microsoft.com/oss/nvidia/k8s-device-plugin:1.11
name: nvidia-device-plugin-ctr
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
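Applying this manifest (for example, `kubectl apply -f nvidia-device-plugin-ds.yml`) makes the AKS GPU nodes advertise the `nvidia.com/gpu` resource that the Dask CUDA worker pods request. As a rough sketch (not part of the original example), the same can be done from Python with the official `kubernetes` client and then verified by listing node capacity; the file name and namespace below simply mirror this manifest:
from kubernetes import client, config, utils

config.load_kube_config()                                     # use the local kubeconfig for the AKS cluster
k8s = client.ApiClient()
utils.create_from_yaml(k8s, "nvidia-device-plugin-ds.yml")    # creates the DaemonSet defined above

# Confirm that GPU nodes now report an allocatable nvidia.com/gpu resource
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.allocatable.get("nvidia.com/gpu", "0"))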
| 0 |
rapidsai_public_repos/cloud-ml-examples/azure
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/Dask_cuML_Exploration_Full.ipynb
|
import certifi
import cudf
import cuml
import cupy as cp
import numpy as np
import os
import pandas as pd
import random
import seaborn as sns
import time
import yaml
from functools import partial
from math import cos, sin, asin, sqrt, pi
from tqdm import tqdm
from typing import Optional
import dask
import dask.array as da
import dask_cudf
from dask_kubernetes import KubeCluster, make_pod_from_dict
from dask.distributed import Client, WorkerPlugin, wait, progress
class SimpleTimer:
def __init__(self):
self.start = None
self.end = None
self.elapsed = None
def __enter__(self):
self.start = time.perf_counter_ns()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.end = time.perf_counter_ns()
self.elapsed = self.end - self.start
def create_pod_from_yaml(yaml_file):
with open(yaml_file, 'r') as reader:
d = yaml.safe_load(reader)
d = dask.config.expand_environment_variables(d)
return make_pod_from_dict(d)
def build_worker_and_scheduler_pods(sched_spec, worker_spec):
assert os.path.isfile(sched_spec)
assert os.path.isfile(worker_spec)
sched_pod = create_pod_from_yaml(sched_spec)
worker_pod = create_pod_from_yaml(worker_spec)
return sched_pod, worker_pod
dask.config.set({"logging.kubernetes": "info",
"logging.distributed": "info",
"kubernetes.scheduler-service-type": "LoadBalancer",
"kubernetes.idle-timeout": None,
"kubernetes.scheduler-service-wait-timeout": 3600,
"kubernetes.deploy-mode": "remote",
"kubernetes.logging": "info",
"distributed.logging": "info",
"distributed.scheduler.idle-timeout": None,
"distributed.scheduler.locks.lease-timeout": None,
"distributed.comm.timeouts.connect": 3600,
"distributed.comm.tls.ca-file": certifi.where()})
sched_spec_path = "./podspecs/azure/scheduler-specs.yml"
worker_spec_path = "./podspecs/azure/cuda-worker-specs.yml"
sched_pod, worker_pod = build_worker_and_scheduler_pods(sched_spec=sched_spec_path,
                                                        worker_spec=worker_spec_path)
cluster = KubeCluster(pod_template=worker_pod,
                      scheduler_pod_template=sched_pod)
client = Client(cluster)
scheduler_address = cluster.scheduler_address
client
def scale_workers(client, n_workers, timeout=300):
client.cluster.scale(n_workers)
m = len(client.has_what().keys())
start = end = time.perf_counter_ns()
while ((m != n_workers) and (((end - start) / 1e9) < timeout) ):
time.sleep(5)
m = len(client.has_what().keys())
end = time.perf_counter_ns()
if (((end - start) / 1e9) >= timeout):
raise RuntimeError(f"Failed to rescale cluster in {timeout} sec."
"Try increasing timeout for very large containers, and verify available compute resources.")
scale_workers(client, 4, timeout=600)
def construct_worker_pool(client, n_workers, auto_scale=False, timeout=300):
workers = [w for w in client.has_what().keys()]
if (len(workers) < n_workers):
if (auto_scale):
scale_workers(client=client, n_workers=n_workers, timeout=timeout)
workers = [w for w in client.has_what().keys()]
else:
print("Attempt to construct worker pool larger than available worker set, and auto_scale is False."
" Returning entire pool.")
else:
workers = random.sample(population=workers, k=n_workers)
return workers
def estimate_df_rows(client, files, storage_opts={}, testpct=0.01):
workers = client.has_what().keys()
est_size = 0
for file in files:
if (file.endswith('.csv')):
df = dask_cudf.read_csv(file, npartitions=len(workers), storage_options=storage_opts)
elif (file.endswith('.parquet')):
df = dask_cudf.read_parquet(file, npartitions=len(workers), storage_options=storage_opts)
# Select only the index column from our subsample
est_size += (df.sample(frac=testpct).iloc[:,0].shape[0] / testpct).compute()
del df
return est_size
def pretty_print(scheduler_dict):
print(f"All workers for scheduler id: {scheduler_dict['id']}, address: {scheduler_dict['address']}")
for worker in scheduler_dict['workers']:
print(f"Worker: {worker} , gpu_machines: {scheduler_dict['workers'][worker]['gpu']}")
def clean(df_part, remap, must_haves):
"""
This function performs the various clean up tasks for the data
and returns the cleaned dataframe.
"""
tmp = {col:col.strip().lower() for col in list(df_part.columns)}
df_part = df_part.rename(columns=tmp)
# rename using the supplied mapping
df_part = df_part.rename(columns=remap)
# iterate through columns in this df partition
for col in df_part.columns:
# drop anything not in our expected list
if col not in must_haves:
df_part = df_part.drop(col, axis=1)
continue
# fixes datetime error found by Ty Mckercher and fixed by Paul Mahler
if df_part[col].dtype == 'object' and col in ['pickup_datetime', 'dropoff_datetime']:
df_part[col] = df_part[col].astype('datetime64[ms]')
continue
# if column was read as a string, recast as float
if df_part[col].dtype == 'object':
df_part[col] = df_part[col].astype('float32')
else:
# downcast from 64bit to 32bit types
# Tesla T4 are faster on 32bit ops
if 'int' in str(df_part[col].dtype):
df_part[col] = df_part[col].astype('int32')
if 'float' in str(df_part[col].dtype):
df_part[col] = df_part[col].astype('float32')
df_part[col] = df_part[col].fillna(-1)
return df_part
def coalesce_taxi_data(fraction, random_state):
base_path = 'gs://anaconda-public-data/nyc-taxi/csv'
# list of column names that need to be re-mapped
remap = {}
remap['tpep_pickup_datetime'] = 'pickup_datetime'
remap['tpep_dropoff_datetime'] = 'dropoff_datetime'
remap['ratecodeid'] = 'rate_code'
#create a list of columns & dtypes the df must have
must_haves = {
'pickup_datetime': 'datetime64[ms]',
'dropoff_datetime': 'datetime64[ms]',
'passenger_count': 'int32',
'trip_distance': 'float32',
'pickup_longitude': 'float32',
'pickup_latitude': 'float32',
'rate_code': 'int32',
'dropoff_longitude': 'float32',
'dropoff_latitude': 'float32',
'fare_amount': 'float32'
}
# apply a list of filter conditions to throw out records with missing or outlier values
query_frags = [
'fare_amount > 0 and fare_amount < 500',
'passenger_count > 0 and passenger_count < 6',
'pickup_longitude > -75 and pickup_longitude < -73',
'dropoff_longitude > -75 and dropoff_longitude < -73',
'pickup_latitude > 40 and pickup_latitude < 42',
'dropoff_latitude > 40 and dropoff_latitude < 42'
]
valid_months_2016 = [str(x).rjust(2, '0') for x in range(1, 7)]
valid_files_2016 = [f'{base_path}/2016/yellow_tripdata_2016-{month}.csv' for month in valid_months_2016]
df_2014_fractional = dask_cudf.read_csv(f'{base_path}/2014/yellow_*.csv', chunksize=25e6).sample(
frac=fraction, random_state=random_state)
df_2014_fractional = clean(df_2014_fractional, remap, must_haves)
df_2015_fractional = dask_cudf.read_csv(f'{base_path}/2015/yellow_*.csv', chunksize=25e6).sample(
frac=fraction, random_state=random_state)
df_2015_fractional = clean(df_2015_fractional, remap, must_haves)
df_2016_fractional = dask_cudf.read_csv(valid_files_2016, chunksize=25e6).sample(
frac=fraction, random_state=random_state)
df_2016_fractional = clean(df_2016_fractional, remap, must_haves)
df_taxi = dask.dataframe.multi.concat([df_2014_fractional, df_2015_fractional, df_2016_fractional])
df_taxi = df_taxi.query(' and '.join(query_frags))
return df_taxi
def taxi_csv_data_loader(client, response_dtype=np.float32, fraction=1.0, random_state=0):
response_id = 'fare_amount'
workers = client.has_what().keys()
km_fields = ['passenger_count', 'trip_distance', 'pickup_longitude', 'pickup_latitude', 'rate_code',
'dropoff_longitude', 'dropoff_latitude', 'fare_amount']
taxi_df = coalesce_taxi_data(fraction=fraction, random_state=random_state)
taxi_df = taxi_df[km_fields]
with dask.annotate(workers=set(workers)):
taxi_df = client.persist(collections=taxi_df)
X = taxi_df[taxi_df.columns.difference([response_id])].astype(np.float32)
y = taxi_df[response_id].astype(response_dtype)
wait(taxi_df)
return taxi_df, X, y
def taxi_parquet_data_loader(client, response_dtype=np.float32, fraction=1.0, random_state=0):
# list of column names that need to be re-mapped
remap = {}
remap['tpep_pickup_datetime'] = 'pickup_datetime'
remap['tpep_dropoff_datetime'] = 'dropoff_datetime'
remap['ratecodeid'] = 'rate_code'
#create a list of columns & dtypes the df must have
must_haves = {
'pickup_datetime': 'datetime64[ms]',
'dropoff_datetime': 'datetime64[ms]',
'passenger_count': 'int32',
'trip_distance': 'float32',
'pickup_longitude': 'float32',
'pickup_latitude': 'float32',
'rate_code': 'int32',
'dropoff_longitude': 'float32',
'dropoff_latitude': 'float32',
'fare_amount': 'float32'
}
# apply a list of filter conditions to throw out records with missing or outlier values
query_frags = [
'fare_amount > 0 and fare_amount < 500',
'passenger_count > 0 and passenger_count < 6',
'pickup_longitude > -75 and pickup_longitude < -73',
'dropoff_longitude > -75 and dropoff_longitude < -73',
'pickup_latitude > 40 and pickup_latitude < 42',
'dropoff_latitude > 40 and dropoff_latitude < 42'
]
workers = client.has_what().keys()
taxi_parquet_path = "gs://anaconda-public-data/nyc-taxi/nyc.parquet"
response_id = 'fare_amount'
fields = ['passenger_count', 'trip_distance', 'pickup_longitude', 'pickup_latitude', 'rate_code',
'dropoff_longitude', 'dropoff_latitude', 'fare_amount']
taxi_df = dask_cudf.read_parquet(taxi_parquet_path, npartitions=len(workers), chunksize=25e6).sample(
frac=fraction, random_state=random_state)
taxi_df = clean(taxi_df, remap, must_haves)
taxi_df = taxi_df.query(' and '.join(query_frags))
taxi_df = taxi_df[fields]
with dask.annotate(workers=set(workers)):
taxi_df = client.persist(collections=taxi_df)
wait(taxi_df)
X = taxi_df[taxi_df.columns.difference([response_id])].astype(np.float32)
y = taxi_df[response_id].astype(response_dtype)
return taxi_df, X, y
def record_elapsed_timings_to_df(df, timings, record_template, type, columns, write_to=None):
records = [dict(record_template, **{"sample_index": i,
"elapsed": elapsed,
"type": type})
for i, elapsed in enumerate(timings)]
df = df.append(other=records, ignore_index=True)
if (write_to):
df.to_csv(write_to, columns=columns)
return df
def collect_load_time_samples(load_func, count, return_final_sample=True, verbose=False):
timings = []
for m in tqdm(range(count)):
with SimpleTimer() as timer:
df, X, y = load_func()
timings.append(timer.elapsed)
if (return_final_sample):
return df, X, y, timings
return None, None, None, timings
def collect_func_time_samples(func, count, verbose=False):
timings = []
for k in tqdm(range(count)):
with SimpleTimer() as timer:
func()
timings.append(timer.elapsed)
return timings
def sweep_fit_func(model, func_id, require_compute, X, y, xy_fit, count):
_fit_func_attr = getattr(model, func_id)
if (require_compute):
if (xy_fit):
fit_func = partial(lambda X, y: _fit_func_attr(X, y).compute(), X, y)
else:
fit_func = partial(lambda X: _fit_func_attr(X).compute(), X)
else:
if (xy_fit):
fit_func = partial(_fit_func_attr, X, y)
else:
fit_func = partial(_fit_func_attr, X)
return collect_func_time_samples(func=fit_func, count=count)
def sweep_predict_func(model, func_id, require_compute, X, count):
_predict_func_attr = getattr(model, func_id)
predict_func = partial(lambda X: _predict_func_attr(X).compute(), X)
return collect_func_time_samples(func=predict_func, count=count)
def performance_sweep(client, model, data_loader, hardware_type, worker_counts=[1], samples=1, load_samples=1, max_data_frac=1.0,
predict_frac=0.05, scaling_type='weak', xy_fit=True, fit_requires_compute=False, update_workers_in_kwargs=True,
response_dtype=np.float32, out_path='./perf_sweep.csv', append_to_existing=False, model_name=None,
fit_func_id="fit", predict_func_id="predict", scaling_denom=None, model_args={}, model_kwargs={}):
"""
Primary performance sweep entrypoint.
Parameters
------------
client: Dask client associated with the cluster we're interested in collecting performance data for.
model: Model object on which to gather performance data. This will be created and destroyed,
once for each element of 'worker_counts'
data_loader: arbitrary data loading function that will be called to load the appropriate testing data.
Function that is responsible for loading and returning the data to be used for a given performance run. Function
signature must accept (client, fraction, and random_state). Client should be used to distribute data, and loaders
should utilize fraction and random_state with dask's dataframe.sample method to allow for control of how much data
is loaded.
When called, its return value should be of the form: df, X, y, where df is the full dask_cudf dataframe, X is a
dask_cudf dataframe which contains all explanatory variables that will be passed to the 'fit' function, and y is a
dask_cudf series or dataframe that contains response variables which should be passed to fit/predict as fit(X, y)
hardware_type: indicates the core hardware the current sweep is running on. ex. 'T4', 'V100', 'A100'
worker_counts: List indicating the number of workers that should be swept. Ex [1, 2, 4]
worker counts must fit within the cluster associated with 'client'; if the current Dask worker count differs
from what a given sweep requests, we attempt to scale the worker count automatically. NOTE: this does not
mean we will scale the available cluster nodes, just the number of deployed worker pods.
samples: number of fit/predict samples to record per worker count
load_samples: number of times to sample data loads. This effectively times how long 'data_loader' runs.
max_data_frac: maximum fraction of data to return.
Strong scaling: each run will utilize max_data_frac data.
Weak scaling: each run will utilize (current worker count) / (max worker count) * max_data_frac data.
predict_frac: fraction of training data used to test inference
scaling_type: values can be 'weak' or 'strong' indicating the type of scaling sweep to perform.
xy_fit: indicates whether or not the model's 'fit' function is of the form (X, y), when xy_fit is False, we assume that
fit is of the form (X), as is the case with various unsupervised methods ex. KNN.
fit_requires_compute: False generally, set this to True if the model's 'fit' function requires a corresponding '.compute()'
call to execute the required work.
update_workers_in_kwargs: Some algorithms accept a 'workers' list, much like DASK, and will require their kwargs to have
workers populated. Setting this flag handles this automatically.
response_dtype: defaults to np.float32, some algorithms require another dtype, such as int32
out_path: path where performance data csv should be saved
append_to_existing: When true, append results to an existing csv, otherwise overwrite.
model_name: Override what we output as the model name
fit_func_id: Defaults to 'fit', only set this if the model has a non-standard naming.
predict_func_id: Defaults to 'predict', only set this if the model has a non-standard predict naming.
scaling_denom: (weak scaling) defaults to max(workers) if unset. Specifies the maximum worker count that weak scaling
should scale against. For example, when using 1 worker in a weak scaling sweep, the worker will attempt to
process a fraction of the total data equal to 1/scaling_denom
model_args: args that will be passed to the model's constructor
model_kwargs: keyword args that will be passed to the model's constructor
Returns
--------
"""
cols = ['n_workers', 'sample_index', 'elapsed', 'type', 'algorithm', 'scaling_type', 'data_fraction', 'hardware']
perf_df = cudf.DataFrame(columns=cols)
if (append_to_existing):
try:
perf_df = cudf.read_csv(out_path)
except:
pass
model_name = model_name if model_name else str(model)
scaling_denom = scaling_denom if (scaling_denom is not None) else max(worker_counts)
max_data_frac = min(1.0, max_data_frac)
start_msg = f"Starting {scaling_type}-scaling performance sweep for:\n"
start_msg += f" model : {model_name}\n"
start_msg += f" data loader: {data_loader}.\n"
start_msg += f"Configuration\n"
start_msg += "==========================\n"
start_msg += f"{'Worker counts':<25} : {worker_counts}\n"
start_msg += f"{'Fit/Predict samples':<25} : {samples}\n"
start_msg += f"{'Data load samples':<25} : {load_samples}\n"
start_msg += f"- {'Max data fraction':<23} : {max_data_frac}\n"
start_msg += f"{'Model fit':<25} : {'X ~ y' if xy_fit else 'X'}\n"
start_msg += f"- {'Response DType':<23} : {response_dtype}\n"
start_msg += f"{'Writing results to':<25} : {out_path}\n"
start_msg += f"- {'Method':<23} : {'overwrite' if not append_to_existing else 'append'}\n"
print(start_msg, flush=True)
for n in worker_counts:
fraction = (n / scaling_denom) * max_data_frac if scaling_type == 'weak' else max_data_frac
record_template = {"n_workers": n, "type": "predict", "algorithm": model_name,
"scaling_type": scaling_type, "data_fraction": fraction, "hardware": hardware_type}
scale_workers(client, n)
print(f"Sampling <{load_samples}> load times with {n} workers.", flush=True)
load_func = partial(data_loader, client=client, response_dtype=response_dtype, fraction=fraction, random_state=0)
df, X, y, load_timings = collect_load_time_samples(load_func=load_func, count=load_samples)
perf_df = record_elapsed_timings_to_df(df=perf_df, timings=load_timings, type='load',
record_template=record_template, columns=cols, write_to=out_path)
print(f"Finished loading <{load_samples}> samples to <{n}> workers with a mean time of {np.mean(load_timings)/1e9:0.4f} sec.", flush=True)
print(f"Sweeping {model_name} '{fit_func_id}' with <{n}> workers. Sampling <{samples}> times.", flush=True)
if (update_workers_in_kwargs and 'workers' in model_kwargs):
model_kwargs['workers'] = workers = list(client.has_what().keys())
print(model_args, model_kwargs)
m = model(*model_args, **model_kwargs)
if (fit_func_id):
fit_timings = sweep_fit_func(model=m, func_id=fit_func_id,
require_compute=fit_requires_compute,
X=X, y=y, xy_fit=xy_fit, count=samples)
perf_df = record_elapsed_timings_to_df(df=perf_df, timings=fit_timings, type='fit',
record_template=record_template, columns=cols, write_to=out_path)
print(f"Finished gathering <{samples}>, 'fit' samples using <{n}> workers, with a mean time of {np.mean(fit_timings)/1e9:0.4f} sec.",
flush=True)
else:
print(f"Skipping fit sweep, fit_func_id is None")
if (predict_func_id):
print(f"Sweeping {model_name} '{predict_func_id}' with <{n}> workers. Sampling <{samples}> times.", flush=True)
predict_timings = sweep_predict_func(model=m, func_id=predict_func_id,
require_compute=True, X=X, count=samples)
perf_df = record_elapsed_timings_to_df(df=perf_df, timings=predict_timings, type='predict',
record_template=record_template, columns=cols, write_to=out_path)
print(f"Finished gathering <{samples}>, 'predict' samples using <{n}> workers, with a mean time of {np.mean(predict_timings)/1e9:0.4f} sec.",
flush=True)
else:
print(f"Skipping inference sweep. predict_func_id is None")
def simple_ci(df, fields, groupby):
gbdf = df[fields].groupby(groupby).agg(['mean', 'std', 'count'])
ci = (1.96 * gbdf['elapsed']['std'] / np.sqrt(gbdf['elapsed']['count']))  # 95% CI half-width: 1.96 * (std / sqrt(n))
ci_df = ci.reset_index()
ci_df['ci.low'] = gbdf['elapsed'].reset_index()['mean'] - ci_df[0]
ci_df['ci.high'] = gbdf['elapsed'].reset_index()['mean'] + ci_df[0]
return ci_df
def visualize_csv_data(csv_path):
df = cudf.read_csv(csv_path)
fields = ['elapsed', 'elapsed_sec', 'type', 'n_workers', 'hardware', 'scaling_type']
groupby = ['n_workers', 'type', 'hardware', 'scaling_type']
df['elapsed_sec'] = df['elapsed']/1e9
ci_df = simple_ci(df, fields, groupby=groupby)
# Rescale to seconds
ci_df[['ci.low', 'ci.high']] = ci_df[['ci.low', 'ci.high']]/1e9
# Print confidence intervals
print(ci_df[['hardware', 'n_workers', 'type', 'ci.low', 'ci.high']][ci_df['type'] != 'load'])
sns.set_theme(style="whitegrid")
sns.set(rc={'figure.figsize':(20, 10)}, font_scale=2)
# Boxplots for elapsed time at each worker count.
plot_df = df[fields][df[fields].type != 'load'].to_pandas()
ax = sns.catplot(data=plot_df, x="n_workers", y="elapsed_sec",
col="type", row="scaling_type", hue="hardware", kind="box",
height=8, order=None)
# Uncomment to test with Taxi Dataset
preload_data = False
append_to_existing = True
samples = 5
load_samples = 1
worker_counts = [4]
scaling_denom = 4
hardware_type = "V100"
max_data_frac = 0.75
scale_type = 'weak' # weak | strong
out_prefix = 'taxi_medium'
if (not preload_data):
data_loader = taxi_parquet_data_loader
else:
data = taxi_parquet_data_loader(client, fraction=max_data_frac)
data_loader = lambda client, response_dtype, fraction, random_state: data
if (not hardware_type):
raise RuntimeError("Please specify the hardware type for this run! ex. (T4, V100, A100)")
sweep_kwargs = {
'append_to_existing': append_to_existing,
'samples': samples,
'load_samples': load_samples,
'worker_counts': worker_counts,
'scaling_denom': scaling_denom,
'hardware_type': hardware_type,
'data_loader': data_loader,
'max_data_frac': max_data_frac,
'scaling_type': scale_type
}
taxi_parquet_path = ["gs://anaconda-public-data/nyc-taxi/nyc.parquet/*.parquet"]
estimated_rows = estimate_df_rows(client, files=taxi_parquet_path, testpct=0.0001)
print(estimated_rows)
# # Uncomment to sweep with the large Taxi Dataset
# preload_data = True
# append_to_existing = True
# samples = 5
# load_samples = 1
# worker_counts = [8]
# scaling_denom = 8
# hardware_type = "V100"
# data_loader = taxi_csv_data_loader
# max_data_frac = .5
# scale_type = 'weak'
# out_prefix = 'taxi_large'
# if (not preload_data):
# data_loader = taxi_csv_data_loader
# else:
# data = taxi_csv_data_loader(client, fraction=max_data_frac)
# data_loader = lambda client, response_dtype, fraction, random_state: data
# if (not hardware_type):
# raise RuntimeError("Please specify the hardware type for this run! ex. (T4, V100, A100)")
# sweep_kwargs = {
# 'append_to_existing': append_to_existing,
# 'samples': samples,
# 'load_samples': load_samples,
# 'worker_counts': worker_counts,
# 'scaling_denom': scaling_denom,
# 'hardware_type': hardware_type,
# 'data_loader': data_loader,
# 'max_data_frac': max_data_frac,
# 'scaling_type': scale_type
# }
remap = {}
remap['tpep_pickup_datetime'] = 'pickup_datetime'
remap['tpep_dropoff_datetime'] = 'dropoff_datetime'
remap['ratecodeid'] = 'rate_code'
#create a list of columns & dtypes the df must have
must_haves = {
'pickup_datetime': 'datetime64[ms]',
'dropoff_datetime': 'datetime64[ms]',
'passenger_count': 'int32',
'trip_distance': 'float32',
'pickup_longitude': 'float32',
'pickup_latitude': 'float32',
'rate_code': 'int32',
'dropoff_longitude': 'float32',
'dropoff_latitude': 'float32',
'fare_amount': 'float32'
}
# apply a list of filter conditions to throw out records with missing or outlier values
query_frags = [
'fare_amount > 0 and fare_amount < 500',
'passenger_count > 0 and passenger_count < 6',
'pickup_longitude > -75 and pickup_longitude < -73',
'dropoff_longitude > -75 and dropoff_longitude < -73',
'pickup_latitude > 40 and pickup_latitude < 42',
'dropoff_latitude > 40 and dropoff_latitude < 42'
]
workers = client.has_what().keys()
base_path = 'gcs://anaconda-public-data/nyc-taxi/csv'
with SimpleTimer() as timer_csv:
df_csv_2014 = dask_cudf.read_csv(f'{base_path}/2014/yellow_*.csv', chunksize=25e6)
df_csv_2014 = clean(df_csv_2014, remap, must_haves)
df_csv_2014 = df_csv_2014.query(' and '.join(query_frags))
with dask.annotate(workers=set(workers)):
df_csv_2014 = client.persist(collections=df_csv_2014)
wait(df_csv_2014)
print(df_csv_2014.columns)
rows_csv = df_csv_2014.iloc[:,0].shape[0].compute()
print(f"CSV load took {timer_csv.elapsed/1e9} sec. For {rows_csv} rows of data => {rows_csv/(timer_csv.elapsed/1e9)} rows/sec")
client.cancel(df_csv_2014)
with SimpleTimer() as timer_parquet:
df_parquet = dask_cudf.read_parquet(f'gs://anaconda-public-data/nyc-taxi/nyc.parquet/*', chunksize=25e6)
df_parquet = clean(df_parquet, remap, must_haves)
df_parquet = df_parquet.query(' and '.join(query_frags))
with dask.annotate(workers=set(workers)):
df_parquet = client.persist(collections=df_parquet)
wait(df_parquet)
print(df_parquet.columns)
rows_parquet = df_parquet.iloc[:,0].shape[0].compute()
print(f"Parquet load took {timer_parquet.elapsed/1e9} sec. For {rows_parquet} rows of data => {rows_parquet/(timer_parquet.elapsed/1e9)} rows/sec")
client.cancel(df_parquet)
speedup = (rows_parquet/(timer_parquet.elapsed/1e9))/(rows_csv/(timer_csv.elapsed/1e9))
print(speedup)
from cuml.dask.ensemble import RandomForestRegressor
rf_kwargs = {
"workers": client.has_what().keys(),
"n_estimators": 10,
"max_depth": 12
}
rf_csv_path = f"./{out_prefix}_random_forest_regression.csv"
performance_sweep(client=client, model=RandomForestRegressor,
**sweep_kwargs,
out_path=rf_csv_path,
response_dtype=np.int32,
model_kwargs=rf_kwargs)
rf_csv_path = f"./{out_prefix}_random_forest_regression.csv"
visualize_csv_data(rf_csv_path)
from cuml.dask.cluster import KMeans
kmeans_kwargs = {
"client": client,
"n_clusters": 12,
"max_iter": 371,
"tol": 1e-5,
"oversampling_factor": 3,
"max_samples_per_batch": 32768/2,
"verbose": False,
"init": 'random'
}
kmeans_csv_path = f'./{out_prefix}_kmeans.csv'
performance_sweep(client=client, model=KMeans,
**sweep_kwargs,
out_path=kmeans_csv_path,
xy_fit=False,
model_kwargs=kmeans_kwargs)
visualize_csv_data(kmeans_csv_path)
from cuml.dask.neighbors import NearestNeighbors
nn_kwargs = {}
nn_csv_path = f'./{out_prefix}_nn.csv'
performance_sweep(client=client, model=NearestNeighbors,
**sweep_kwargs,
out_path=nn_csv_path,
xy_fit=False,
predict_func_id='get_neighbors',
model_kwargs=nn_kwargs)
nn_csv_path = f'./{out_prefix}_nn.csv'
visualize_csv_data(nn_csv_path)
from cuml.dask.decomposition import TruncatedSVD
tsvd_kwargs = {
"client": client,
"n_components": 5
}
tsvd_csv_path = f'./{out_prefix}_tsvd.csv'
performance_sweep(client=client, model=TruncatedSVD,
**sweep_kwargs,
out_path=tsvd_csv_path,
xy_fit=False,
fit_requires_compute=True,
fit_func_id="fit_transform",
predict_func_id=None,
model_kwargs=tsvd_kwargs)
visualize_csv_data(tsvd_csv_path)
from cuml.dask.linear_model import Lasso as LassoRegression
lasso_kwargs = {
"client": client
}
lasso_csv_path = f'./{out_prefix}_lasso_regression.csv'
performance_sweep(client=client, model=LassoRegression,
**sweep_kwargs,
out_path=lasso_csv_path,
model_kwargs=lasso_kwargs)
visualize_csv_data(lasso_csv_path)
from cuml.dask.linear_model import ElasticNet as ElasticNetRegression
elastic_kwargs = {
"client": client,
}
enr_csv_path = f'./{out_prefix}_elastic_regression.csv'
performance_sweep(client=client, model=ElasticNetRegression,
**sweep_kwargs,
out_path=enr_csv_path,
model_kwargs=elastic_kwargs)
visualize_csv_data(enr_csv_path)
from cuml.dask.solvers import CD
# This uses model parallel Coordinate Descent
cd_kwargs = {
}
cd_csv_path = f'./{out_prefix}_multi_gpu_linear_regression.csv'
performance_sweep(client=client, model=CD,
**sweep_kwargs,
out_path=cd_csv_path,
model_kwargs=cd_kwargs)
visualize_csv_data(cd_csv_path)
import xgboost as xgb
xg_args = [client]
xg_kwargs = {
'params': {
'tree_method': 'gpu_hist',
},
'num_boost_round': 100
}
xgb_csv_path = f'./{out_prefix}_xgb.csv'
class XGBProxy():
"""
Create a simple API wrapper around XGBoost so that it supports the fit/predict workflow.
Parameters
-------------
data_loader: data loader object intended to be used by the performance sweep.
"""
def __init__(self, data_loader):
self.args = []
self.kwargs = {}
self.data_loader = data_loader
self.trained_model = None
def loader(self, client, response_dtype, fraction, random_state):
"""
Wrap the data loader method so that it creates a DMatrix from the returned data.
"""
df, X, y = self.data_loader(client, response_dtype, fraction, random_state)
dmatrix = xgb.dask.DaskDMatrix(client, X, y)
return dmatrix, dmatrix, dmatrix
def __call__(self, *args, **kwargs):
"""
Acts as a pseudo init function which initializes our model args.
"""
self.args = args
self.kwargs = kwargs
return self
def fit(self, X):
"""
Wrap dask.train, and store the model on our proxy object.
"""
if (self.trained_model):
del self.trained_model
self.trained_model = xgb.dask.train(*self.args,
dtrain=X,
evals=[(X, 'train')],
**self.kwargs)
return self
def predict(self, X):
assert(self.trained_model)
return xgb.dask.predict(*self.args, self.trained_model, X)
xgb_proxy = XGBProxy(data_loader)
performance_sweep(client=client, model=xgb_proxy, data_loader=xgb_proxy.loader, hardware_type=hardware_type,
worker_counts=worker_counts,
samples=samples,
load_samples=load_samples,
max_data_frac=max_data_frac,
scaling_type=scale_type,
out_path=xgb_csv_path,
append_to_existing=append_to_existing,
update_workers_in_kwargs=False,
xy_fit=False,
scaling_denom = scaling_denom,
model_args=xg_args,
model_kwargs=xg_kwargs)
visualize_csv_data(xgb_csv_path)
client.close()
cluster.close()
| 0 |
rapidsai_public_repos/cloud-ml-examples/azure
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/MNMG_XGBoost.ipynb
|
## Uncomment the following and install some libraries at the beginning.
# # If azureml is not present, install azureml-core.
# # Further, opendatasets is in preview mode, hence it is not available in the main SDK and needs to be installed separately.
# # Installing azureml-opendatasets for some reason installs an old version of pandas (1.0.0), which we upgrade to 1.2.4 to work with RAPIDS.
# # We do the same for the numpy and scipy libraries, which are also downgraded by azureml-opendatasets.
# ! pip install azureml-core
# ! pip install azureml-opendatasets
# ! pip install azureml-telemetry
# ! pip install pandas==1.2.4 # reverting pandas to 1.2.4
# ! pip install numpy==1.20.2 # reverting numpy to 1.20.2
# ! pip install scipy==1.6.0 # reverting scipy to 1.6.0
from dask.distributed import Client, WorkerPlugin, wait, progress, get_worker
from dask_kubernetes import KubeCluster, make_pod_from_dict
import dask_cudf
from azureml.opendatasets import NycTlcYellow
from dask_ml.model_selection import train_test_split
from cuml.dask.common import utils as dask_utils
from cuml.metrics import mean_squared_error
from cuml import ForestInference
import cudf
import xgboost as xgb
from datetime import datetime
from dateutil import parser
import numpy as np
from timeit import default_timer as timer
import certifi
import dask
import os
import time
import yaml
import numpy as np
class SimpleTimer:
def __init__(self):
self.start = None
self.end = None
self.elapsed = None
def __enter__(self):
self.start = time.perf_counter_ns()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.end = time.perf_counter_ns()
self.elapsed = self.end - self.start
def create_pod_from_yaml(yaml_file):
with open(yaml_file, 'r') as reader:
d = yaml.safe_load(reader)
d = dask.config.expand_environment_variables(d)
return make_pod_from_dict(d)
def build_worker_and_scheduler_pods(sched_spec, worker_spec):
assert os.path.isfile(sched_spec)
assert os.path.isfile(worker_spec)
sched_pod = create_pod_from_yaml(sched_spec)
worker_pod = create_pod_from_yaml(worker_spec)
return sched_pod, worker_pod
def scale_workers(client, n_workers, timeout=300):
client.cluster.scale(n_workers)
m = len(client.has_what().keys())
start = end = time.perf_counter_ns()
while ((m != n_workers) and (((end - start) / 1e9) < timeout) ):
time.sleep(5)
m = len(client.has_what().keys())
end = time.perf_counter_ns()
if (((end - start) / 1e9) >= timeout):
raise RuntimeError(f"Failed to rescale cluster in {timeout} sec."
"Try increasing timeout for very large containers, and verify available compute resources.")
dask.config.set({"logging.kubernetes": "info",
"logging.distributed": "info",
"kubernetes.scheduler-service-type": "LoadBalancer",
"kubernetes.idle-timeout": None,
"kubernetes.scheduler-service-wait-timeout": 3600,
"kubernetes.deploy-mode": "remote",
"kubernetes.logging": "info",
"distributed.logging": "info",
"distributed.scheduler.idle-timeout": None,
"distributed.scheduler.locks.lease-timeout": None,
"distributed.comm.timeouts.connect": 3600,
"distributed.comm.tls.ca-file": certifi.where()})
sched_spec_path = "./podspecs/azure/scheduler-specs.yml"
worker_spec_path = "./podspecs/azure/cuda-worker-specs.yml"
sched_pod, worker_pod = build_worker_and_scheduler_pods(sched_spec=sched_spec_path,
                                                        worker_spec=worker_spec_path)
cluster = KubeCluster(pod_template=worker_pod,
                      scheduler_pod_template=sched_pod)
client = Client(cluster)
scheduler_address = cluster.scheduler_address
scale_workers(client, 4, timeout=600)
npartitions = len(client.has_what().keys())
client
def pretty_print(scheduler_dict):
print(f"All workers for scheduler id: {scheduler_dict['id']}, address: {scheduler_dict['address']}")
for worker in scheduler_dict['workers']:
print(f"Worker: {worker} , gpu_machines: {scheduler_dict['workers'][worker]['gpu']}")
pretty_print(client.scheduler_info()) # will show information on the len(CUDA_VISIBLE_DEVICES) partitions
tic = timer()
start_date = parser.parse('2014-05-01') # lets start at 1st May 2014
end_date = parser.parse('2014-05-31') # Lets stop at 31st May 2014
nyc_tlc = NycTlcYellow(start_date=start_date, end_date=end_date)
nyc_tlc_df = nyc_tlc.to_pandas_dataframe()
toc = timer()
print(f"Wall clock time taken for this cell : {toc-tic} s")
print(nyc_tlc_df.shape) # Approx. 14 million rows
print(type(nyc_tlc_df))
print(nyc_tlc_df.head())
# since we are going to send the data to the server let's use the first 10million rows
nyc_tlc_df = nyc_tlc_df[:10000000]
print(nyc_tlc_df.shape)
import math
from math import cos, sin, asin, sqrt, pi
def haversine_distance_kernel(pickup_latitude_r, pickup_longitude_r, dropoff_latitude_r, dropoff_longitude_r, h_distance, radius):
for i, (x_1, y_1, x_2, y_2) in enumerate(zip(pickup_latitude_r, pickup_longitude_r, dropoff_latitude_r, dropoff_longitude_r,)):
x_1 = pi/180 * x_1
y_1 = pi/180 * y_1
x_2 = pi/180 * x_2
y_2 = pi/180 * y_2
dlon = y_2 - y_1
dlat = x_2 - x_1
a = sin(dlat/2)**2 + cos(x_1) * cos(x_2) * sin(dlon/2)**2
c = 2 * asin(sqrt(a))
# radius = 6371 # Radius of earth in kilometers # currently passed as input arguments
h_distance[i] = c * radius
def day_of_the_week_kernel(day, month, year, day_of_week):
for i, (d_1, m_1, y_1) in enumerate(zip(day, month, year)):
if month[i] <3:
shift = month[i]
else:
shift = 0
Y = year[i] - (month[i] < 3)
y = Y - 2000
c = 20
d = day[i]
m = month[i] + shift + 1
day_of_week[i] = (d + math.floor(m*2.6) + y + (y//4) + (c//4) -2*c)%7
def add_features(df):
df['hour'] = df['tpepPickupDateTime'].dt.hour
df['year'] = df['tpepPickupDateTime'].dt.year
df['month'] = df['tpepPickupDateTime'].dt.month
df['day'] = df['tpepPickupDateTime'].dt.day
df['diff'] = (df['tpepDropoffDateTime'] - df['tpepPickupDateTime']).dt.seconds # convert difference between dropoff and pickup into seconds
df['pickup_latitude_r'] = df['startLat']//.01*.01
df['pickup_longitude_r'] = df['startLon']//.01*.01
df['dropoff_latitude_r'] = df['endLat']//.01*.01
df['dropoff_longitude_r'] = df['endLon']//.01*.01
df = df.drop('tpepDropoffDateTime', axis=1)
df = df.drop('tpepPickupDateTime', axis =1)
df = df.apply_rows(haversine_distance_kernel,
incols=['pickup_latitude_r', 'pickup_longitude_r', 'dropoff_latitude_r', 'dropoff_longitude_r'],
outcols=dict(h_distance=np.float32),
kwargs=dict(radius=6371))
df = df.apply_rows(day_of_the_week_kernel,
incols=['day', 'month', 'year'],
outcols=dict(day_of_week=np.float32),
kwargs=dict())
df['is_weekend'] = (df['day_of_week']<2)
return df
def persist_train_infer_split(client, df, response_dtype, response_id, infer_frac=1.0, random_state=42, shuffle=True):
workers = client.has_what().keys()
X, y = df.drop([response_id], axis=1), df[response_id].astype('float32')
infer_frac = max(0, min(infer_frac, 1.0))
X_train, X_infer, y_train, y_infer = train_test_split(X, y, shuffle=True, random_state=random_state, test_size=infer_frac)
with dask.annotate(workers=set(workers)):
X_train, y_train = client.persist(
collections=[X_train, y_train])
if (infer_frac != 1.0):
with dask.annotate(workers=set(workers)):
X_infer, y_infer = client.persist(
collections=[X_infer, y_infer])
wait([X_train, y_train, X_infer, y_infer])
else:
X_infer = X_train
y_infer = y_train
wait([X_train, y_train])
return X_train, y_train, X_infer, y_infer
def clean(df_part, must_haves):
"""
This function performs the various clean up tasks for the data
and returns the cleaned dataframe.
"""
# iterate through columns in this df partition
for col in df_part.columns:
# drop anything not in our expected list
if col not in must_haves:
df_part = df_part.drop(col, axis=1)
continue
# fixes datetime error found by Ty Mckercher and fixed by Paul Mahler
if df_part[col].dtype == 'object' and col in ['tpepPickupDateTime', 'tpepDropoffDateTime']:
df_part[col] = df_part[col].astype('datetime64[ms]')
continue
# if column was read as a string, recast as float
if df_part[col].dtype == 'object':
df_part[col] = df_part[col].str.fillna('-1')
df_part[col] = df_part[col].astype('float32')
else:
# downcast from 64bit to 32bit types
# Tesla T4 are faster on 32bit ops
if 'int' in str(df_part[col].dtype):
df_part[col] = df_part[col].astype('int32')
if 'float' in str(df_part[col].dtype):
df_part[col] = df_part[col].astype('float32')
df_part[col] = df_part[col].fillna(-1)
return df_part
def taxi_data_loader(client, nyc_tlc_df, response_dtype=np.float32, infer_frac=1.0, random_state=0):
#create a list of columns & dtypes the df must have
must_haves = {
'tpepPickupDateTime': 'datetime64[ms]',
'tpepDropoffDateTime': 'datetime64[ms]',
'passengerCount': 'int32',
'tripDistance': 'float32',
'startLon': 'float32',
'startLat': 'float32',
'rateCodeId': 'int32',
'endLon': 'float32',
'endLat': 'float32',
'fareAmount': 'float32'
}
workers = client.has_what().keys()
response_id = 'fareAmount'
taxi_data = dask_cudf.from_cudf(cudf.from_pandas(nyc_tlc_df), npartitions=len(workers))
taxi_data = clean(taxi_data, must_haves)
taxi_data = taxi_data.map_partitions(add_features)
# Drop NaN values and convert to float32
taxi_data = taxi_data.dropna()
fields = ['passengerCount', 'tripDistance', 'startLon', 'startLat', 'rateCodeId',
'endLon', 'endLat', 'fareAmount', 'diff', 'h_distance', 'day_of_week', 'is_weekend']
taxi_data = taxi_data.astype("float32")
taxi_data = taxi_data[fields]
return persist_train_infer_split(client, taxi_data, response_dtype, response_id, infer_frac, random_state)
tic = timer()
X_train, y_train, X_infer, y_infer = taxi_data_loader(client, nyc_tlc_df, infer_frac=0.1, random_state=42)
toc = timer()
print(f"Wall clock time taken for ETL and persisting : {toc-tic} s")
pretty_print(client.scheduler_info()) # will show information on the len(CUDA_VISIBLE_DEVICES) partitions
params = {
'learning_rate': 0.15,
'max_depth': 8,
'objective': 'reg:squarederror',
'subsample': 0.7,
'colsample_bytree': 0.7,
'min_child_weight': 1,
'gamma': 1,
'silent': True,
'verbose_eval': True,
'booster' : 'gbtree', # 'gblinear' not implemented in dask
'eval_metric': 'rmse',
'tree_method':'gpu_hist',
'num_boost_rounds': 100
}
data_train = xgb.dask.DaskDMatrix(client, X_train, y_train)
tic = timer()
xgboost_output = xgb.dask.train(client, params,data_train,
num_boost_round=params['num_boost_rounds'])
xgb_gpu_model = xgboost_output['booster']
toc = timer()
print(f"Wall clock time taken for this cell : {toc-tic} s")
model_filename = 'trained-model_nyctaxi.xgb'
xgb_gpu_model.save_model(model_filename)
_y_test = y_infer.compute()
wait(_y_test)
d_test = xgb.dask.DaskDMatrix(client, X_infer)
tic = timer()
y_pred = xgb.dask.predict(client, xgb_gpu_model, d_test)
y_pred= y_pred.compute()
wait(y_pred)
toc = timer()
print(f"Wall clock time taken for xgb.dask.predict : {toc-tic} s")
tic = timer()
y_pred = xgb.dask.inplace_predict(client, xgb_gpu_model, X_infer)
y_pred = y_pred.compute()
wait(y_pred)
toc = timer()
print(f"Wall clock time taken for inplace inference : {toc-tic} s")
tic = timer()
print("Calculating MSE")
score = mean_squared_error(y_pred, _y_test)
print("Workflow Complete - RMSE: ", np.sqrt(score))
toc = timer()
print(f"Wall clock time taken for this cell : {toc-tic} s")
from cuml import ForestInference
from dask.distributed import get_worker
workers = client.has_what().keys()
print(workers)
n_workers = len(workers)
n_partitions = n_workers
def unzipFile(zipname):
worker = get_worker()
import zipfile
import os
with zipfile.ZipFile(os.path.join(worker.local_directory, zipname)) as zf:
zf.extractall(worker.local_directory)
def checkOrMakeLocalDir():
worker = get_worker()
import os
if not os.path.exists(worker.local_directory):
os.makedirs(worker.local_directory)
def workerModelInit(model_file):
# this function will run in each worker and initialize the worker
import os
worker = get_worker()
worker.data["fil_model"] = ForestInference.load(filename=os.path.join(worker.local_directory, model_file),model_type='xgboost')
def predict(input_df):
# this function will run in each worker and predict
worker = get_worker()
return worker.data["fil_model"].predict(input_df)
def persistModelonWorkers(client, zip_file_name, model_file_name):
import zipfile
zf = zipfile.ZipFile(zip_file_name, mode='w')
zf.write(f"./{model_file_name}")
zf.close()
# check to see if local directory present in workers
# if not present make it
fut = client.run(checkOrMakeLocalDir)
wait(fut)
# upload the zip file in workers
fut = client.upload_file(f"./{zip_file_name}")
wait(fut)
# unzip file in the workers
fut = client.run(unzipFile, zip_file_name)
wait(fut)
# load model using FIL in workers
fut = client.run(workerModelInit, model_file_name)
wait(fut)
%%time
persistModelonWorkers(client, "zipfile_write.zip", "trained-model_nyctaxi.xgb")
tic = timer()
predictions = X_infer.map_partitions(predict, meta="float") # this is like MPI reduce
y_pred = predictions.compute()
wait(y_pred)
toc = timer()
print(f"Wall clock time taken for this cell : {toc-tic} s")
rows_csv = X_infer.iloc[:,0].shape[0].compute()
print(f"It took {toc-tic} seconds to predict on {rows_csv} rows using FIL distributed across the workers")
tic = timer()
score = mean_squared_error(y_pred, _y_test)
toc = timer()
print("Final - RMSE: ", np.sqrt(score))
client.close()
cluster.close()
| 0 |
rapidsai_public_repos/cloud-ml-examples/azure
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/Dockerfile
|
FROM docker.io/rapidsai/rapidsai-core:21.06-cuda11.2-runtime-ubuntu18.04-py3.8
RUN source activate rapids \
&& pip install xgboost \
&& pip install gcsfs \
&& pip install adlfs
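The pod specs in this directory pull an image tagged like `<username>.azurecr.io/aks-mnmg/dask-unified:21.06`, so the image built from this Dockerfile needs to be pushed to your Azure Container Registry under that tag. Plain `docker build` / `docker push` works; as a rough sketch under the same assumptions (the registry name is a placeholder and you have already logged in, e.g. with `az acr login`), the docker Python SDK can do it too:
import docker

registry = "<username>.azurecr.io"                      # placeholder: your ACR login server
tag = f"{registry}/aks-mnmg/dask-unified:21.06"

docker_client = docker.from_env()
# Build the image from this Dockerfile (assumed to be in the current directory)
image, build_logs = docker_client.images.build(path=".", tag=tag)
# Push the tagged image to ACR so the AKS pod specs can pull it
for line in docker_client.images.push(tag, stream=True, decode=True):
    print(line)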
| 0 |
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/podspecs
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/podspecs/azure/cpu-worker-specs.yml
|
kind: Pod
metadata:
labels:
dask_type: "worker"
spec:
restartPolicy: Never
tolerations:
- key: "daskrole"
operator: "Equal"
value: "worker"
effect: "NoSchedule"
containers:
- image: <username>.azurecr.io/aks-mnmg/dask-unified:21.06
imagePullPolicy: IfNotPresent
args: [ dask-worker, $(DASK_SCHEDULER_ADDRESS) ]
name: dask-worker
resources:
limits:
cpu: "4"
memory: 25G
#nvidia.com/gpu: 1
requests:
cpu: "4"
memory: 25G
imagePullSecrets:
- name: "aks-secret"
| 0 |
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/podspecs
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/podspecs/azure/cuda-worker-specs.yml
|
kind: Pod
metadata:
labels:
dask_type: "worker"
spec:
restartPolicy: Never
tolerations:
- key: "daskrole"
operator: "Equal"
value: "worker"
effect: "NoSchedule"
containers:
- image: <username>.azurecr.io/aks-mnmg/dask-unified:21.06
imagePullPolicy: IfNotPresent
args: [ dask-cuda-worker, $(DASK_SCHEDULER_ADDRESS), --rmm-managed-memory ]
name: dask-cuda-worker
resources:
limits:
cpu: "4"
memory: 40G
nvidia.com/gpu: 1
requests:
cpu: "4"
memory: 25G
imagePullSecrets:
- name: "aks-secret"
| 0 |
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/podspecs
|
rapidsai_public_repos/cloud-ml-examples/azure/kubernetes/podspecs/azure/scheduler-specs.yml
|
kind: Pod
metadata:
labels:
dask_type: "scheduler"
spec:
restartPolicy: Never
tolerations:
- key: "daskrole"
operator: "Equal"
value: "scheduler"
effect: "NoSchedule"
containers:
- image: <username>.azurecr.io/aks-mnmg/dask-unified:21.06
imagePullPolicy: IfNotPresent
args: [ dask-scheduler ]
name: dask-scheduler
resources:
limits:
cpu: "3"
memory: 40G
#nvidia.com/gpu: 1
requests:
cpu: "3"
memory: 25G
imagePullSecrets:
- name: "aks-secret"
| 0 |
rapidsai_public_repos/cloud-ml-examples/k8s-dask
|
rapidsai_public_repos/cloud-ml-examples/k8s-dask/notebooks/xgboost-gpu-hpo-job-parallel-k8s.ipynb
|
# Choose the same RAPIDS image you used for launching the notebook session
rapids_image = "rapidsai/rapidsai-core:22.10-cuda11.5-runtime-ubuntu20.04-py3.9"
# Use the number of worker nodes in your Kubernetes cluster.
n_workers = 4
from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(name="rapids-dask",
image=rapids_image,
worker_command="dask-cuda-worker",
n_workers=n_workers,
resources={"limits": {"nvidia.com/gpu": "1"}},
env={"DISABLE_JUPYTER": "true",
"EXTRA_PIP_PACKAGES":
"git+https://github.com/optuna/optuna.git@bc6c05dc655aab7e7a02e91e7306609f2a4524ec"})
cluster
from dask.distributed import Client
client = Client(cluster)
def objective(trial):
x = trial.suggest_uniform("x", -10, 10)
return (x - 2) ** 2
import optuna
from dask.distributed import wait
# Number of hyperparameter combinations to try in parallel
n_trials = 100
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(direction="minimize", storage=dask_storage)
futures = []
for i in range(0, n_trials, n_workers * 4):
iter_range = (i, min([i + n_workers * 4, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(study.optimize, objective, n_trials=1, pure=False)
for _ in range(*iter_range)
]
}
)
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
_ = wait(partition["futures"])
study.best_params
study.best_value
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, KFold
import xgboost as xgb
from optuna.samplers import RandomSampler
def objective(trial):
X, y = load_breast_cancer(return_X_y=True)
params = {
"n_estimators": 10,
"verbosity": 0,
"tree_method": "gpu_hist",
# L2 regularization weight.
"lambda": trial.suggest_float("lambda", 1e-8, 100.0, log=True),
# L1 regularization weight.
"alpha": trial.suggest_float("alpha", 1e-8, 100.0, log=True),
# sampling according to each tree.
"colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
"max_depth": trial.suggest_int("max_depth", 2, 10, step=1),
# minimum child weight, larger the term more conservative the tree.
"min_child_weight": trial.suggest_float("min_child_weight", 1e-8, 100, log=True),
"learning_rate": trial.suggest_float("learning_rate", 1e-8, 1.0, log=True),
# defines how selective algorithm is.
"gamma": trial.suggest_float("gamma", 1e-8, 1.0, log=True),
"grow_policy": "depthwise",
"eval_metric": "logloss"
}
clf = xgb.XGBClassifier(**params)
fold = KFold(n_splits=5, shuffle=True, random_state=0)
score = cross_val_score(clf, X, y, cv=fold, scoring='neg_log_loss')
return score.mean()
# Number of hyperparameter combinations to try in parallel
n_trials = 250
# Optimize in parallel on your Dask cluster
backend_storage = optuna.storages.InMemoryStorage()
dask_storage = optuna.integration.DaskStorage(storage=backend_storage, client=client)
study = optuna.create_study(direction="maximize",
sampler=RandomSampler(seed=0),
storage=dask_storage)
futures = []
for i in range(0, n_trials, n_workers * 4):
iter_range = (i, min([i + n_workers * 4, n_trials]))
futures.append(
{
"range": iter_range,
"futures": [
client.submit(study.optimize, objective, n_trials=1, pure=False)
for _ in range(*iter_range)
]
}
)
for partition in futures:
iter_range = partition["range"]
print(f"Testing hyperparameter combinations {iter_range[0]}..{iter_range[1]}")
_ = wait(partition["futures"])
study.best_params
study.best_value
from optuna.visualization.matplotlib import plot_optimization_history, plot_param_importances
plot_optimization_history(study)
plot_param_importances(study)
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_ec2_mnmg.ipynb
|
# !pip install "dask-cloudprovider[aws]"
import math
from datetime import datetime
import cudf
import dask
import dask_cudf
import numpy as np
from cuml.dask.common import utils as dask_utils
from cuml.dask.ensemble import RandomForestRegressor
from cuml.metrics import mean_squared_error
from dask_cloudprovider.aws import EC2Cluster
from dask.distributed import Client
from dask_ml.model_selection import train_test_split
from dateutil import parser
import configparser, os, contextlib
def get_aws_credentials(*, aws_profile="default"):
parser = configparser.RawConfigParser()
parser.read(os.path.expanduser('~/.aws/config'))
config = parser.items(
f"profile {aws_profile}" if aws_profile != "default" else "default"
)
parser.read(os.path.expanduser('~/.aws/credentials'))
credentials = parser.items(aws_profile)
all_credentials = {key.upper(): value for key, value in [*config, *credentials]}
with contextlib.suppress(KeyError):
all_credentials["AWS_REGION"] = all_credentials.pop("REGION")
return all_credentials
n_workers = 2
n_gpus_per_worker = 4
security_group = "sg-dask"
region_name = "us-east-1"
cluster = EC2Cluster(env_vars=get_aws_credentials(),
instance_type="g4dn.12xlarge", # 4 T4 GPUs
docker_image="rapidsai/rapidsai:21.06-cuda11.0-runtime-ubuntu18.04-py3.8",
worker_class="dask_cuda.CUDAWorker",
worker_options = {'rmm-managed-memory':True},
security_groups=[security_group],
docker_args = '--shm-size=256m',
n_workers=n_workers,
security=False,
availability_zone="",
region=region_name)
client = Client(cluster)
client
%%time
client.wait_for_workers(n_workers*n_gpus_per_worker)
client
# create a list of all columns & dtypes the df must have for reading
col_dtype = {
'VendorID': 'int32',
'tpep_pickup_datetime': 'datetime64[ms]',
'tpep_dropoff_datetime': 'datetime64[ms]',
'passenger_count': 'int32',
'trip_distance': 'float32',
'pickup_longitude': 'float32',
'pickup_latitude': 'float32',
'RatecodeID': 'int32',
'store_and_fwd_flag': 'int32',
'dropoff_longitude': 'float32',
'dropoff_latitude': 'float32',
'payment_type':'int32',
'fare_amount': 'float32',
'extra':'float32',
'mta_tax':'float32',
'tip_amount': 'float32',
'total_amount': 'float32',
'tolls_amount': 'float32',
'improvement_surcharge': 'float32',
}
taxi_df = dask_cudf.read_csv("https://storage.googleapis.com/anaconda-public-data/nyc-taxi/csv/2016/yellow_tripdata_2016-02.csv",
dtype=col_dtype)
# Dictionary of required columns and their datatypes
must_haves = {
'pickup_datetime': 'datetime64[ms]',
'dropoff_datetime': 'datetime64[ms]',
'passenger_count': 'int32',
'trip_distance': 'float32',
'pickup_longitude': 'float32',
'pickup_latitude': 'float32',
'rate_code': 'int32',
'dropoff_longitude': 'float32',
'dropoff_latitude': 'float32',
'fare_amount': 'float32'
}
def clean(ddf, must_haves):
# strip extraneous spaces from column names and convert them to lowercase
tmp = {col:col.strip().lower() for col in list(ddf.columns)}
ddf = ddf.rename(columns=tmp)
ddf = ddf.rename(columns={
'tpep_pickup_datetime': 'pickup_datetime',
'tpep_dropoff_datetime': 'dropoff_datetime',
'ratecodeid': 'rate_code'
})
ddf['pickup_datetime'] = ddf['pickup_datetime'].astype('datetime64[ms]')
ddf['dropoff_datetime'] = ddf['dropoff_datetime'].astype('datetime64[ms]')
for col in ddf.columns:
if col not in must_haves:
ddf = ddf.drop(columns=col)
continue
if ddf[col].dtype == 'object':
# Fixing error: could not convert arg to str
ddf = ddf.drop(columns=col)
else:
# downcast from 64bit to 32bit types
# Tesla T4 are faster on 32bit ops
if 'int' in str(ddf[col].dtype):
ddf[col] = ddf[col].astype('int32')
if 'float' in str(ddf[col].dtype):
ddf[col] = ddf[col].astype('float32')
ddf[col] = ddf[col].fillna(-1)
return ddf
taxi_df = taxi_df.map_partitions(clean, must_haves, meta=must_haves)
## add features
taxi_df['hour'] = taxi_df['pickup_datetime'].dt.hour.astype('int32')
taxi_df['year'] = taxi_df['pickup_datetime'].dt.year.astype('int32')
taxi_df['month'] = taxi_df['pickup_datetime'].dt.month.astype('int32')
taxi_df['day'] = taxi_df['pickup_datetime'].dt.day.astype('int32')
taxi_df['day_of_week'] = taxi_df['pickup_datetime'].dt.weekday.astype('int32')
taxi_df['is_weekend'] = (taxi_df['day_of_week']>=5).astype('int32')
#calculate the time difference between dropoff and pickup.
taxi_df['diff'] = taxi_df['dropoff_datetime'].astype('int32') - taxi_df['pickup_datetime'].astype('int32')
taxi_df['diff']=(taxi_df['diff']/1000).astype('int32')
taxi_df['pickup_latitude_r'] = taxi_df['pickup_latitude']//.01*.01
taxi_df['pickup_longitude_r'] = taxi_df['pickup_longitude']//.01*.01
taxi_df['dropoff_latitude_r'] = taxi_df['dropoff_latitude']//.01*.01
taxi_df['dropoff_longitude_r'] = taxi_df['dropoff_longitude']//.01*.01
taxi_df = taxi_df.drop('pickup_datetime', axis=1)
taxi_df = taxi_df.drop('dropoff_datetime', axis=1)
def haversine_dist(df):
import cuspatial
h_distance = cuspatial.haversine_distance(df['pickup_longitude'], df['pickup_latitude'], df['dropoff_longitude'], df['dropoff_latitude'])
df['h_distance']= h_distance
df['h_distance']= df['h_distance'].astype('float32')
return df
taxi_df = taxi_df.map_partitions(haversine_dist)
# Split into training and validation sets
X, y = taxi_df.drop(["fare_amount"], axis=1).astype('float32'), taxi_df["fare_amount"].astype('float32')
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=True)
workers = client.has_what().keys()
X_train, X_test, y_train, y_test = dask_utils.persist_across_workers(client,
[X_train, X_test, y_train, y_test],
workers=workers)
# create cuml.dask RF regressor
cu_dask_rf = RandomForestRegressor(ignore_empty_partitions=True)
# fit RF model
cu_dask_rf = cu_dask_rf.fit(X_train, y_train)
# predict on validation set
y_pred = cu_dask_rf.predict(X_test)
# compute RMSE
score = mean_squared_error(y_pred.compute().to_array(), y_test.compute().to_array())
print("Workflow Complete - RMSE: ", np.sqrt(score))
# Clean up resources
client.close()
cluster.close()
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/aws/README.md
|
# RAPIDS on AWS
There are a few example notebooks to help you get started with running RAPIDS on AWS. Here are the instructions to setup the environment locally to run the examples.
Sections in README
1. Instructions for Running RAPIDS + SageMaker HPO
2. Instructions to run multi-node multi-GPU (MNMG) example on EC2
## 1. Instructions for Running RAPIDS + SageMaker HPO
0. Upload train/test data to S3
- We offer the dataset for this demo in a public bucket hosted in either the `us-east-1` or `us-west-2` regions:
> https://s3.console.aws.amazon.com/s3/buckets/sagemaker-rapids-hpo-us-east-1/
> https://s3.console.aws.amazon.com/s3/buckets/sagemaker-rapids-hpo-us-west-2/
1. Create a SageMaker Notebook Instance
- Sign in to the Amazon SageMaker console at
> https://console.aws.amazon.com/sagemaker/
- Choose **Notebook Instances**, then choose 'Create notebook instance'.
    - Note that this notebook is for SageMaker notebook instances only; instructions for running RAPIDS in SageMaker Studio can be found in the **sagemaker_studio** directory.
<img src='img/sagemaker_notebook_instance.png'>
2. On the Create notebook instance page, provide the following information (if a field is not mentioned, leave the default values):
- For **Notebook instance name**, type a name for your notebook instance.
- For **Instance type**, we recommend you choose a lightweight instance (e.g., ml.t2.medium) since the notebook instance will only be used to build the container and launch work.
- For **IAM role**, choose Create a new role, then choose Create role.
- For **Git repositories**, choose 'Clone a public Git repository to this notebook instance only' and add the cloud-ml-examples repository to the URL
> https://github.com/rapidsai/cloud-ml-examples
- Choose 'Create notebook instance'.
    - In a few minutes, Amazon SageMaker launches an ML compute instance. When it's ready, you should see several links appear in the Actions tab of the **Notebook Instances** section; click on **Open JupyterLab** to launch into the notebook.
> Note: If you see Pending to the right of the notebook instance in the Status column, your notebook is still being created. The status will change to InService when the notebook is ready for use.
3. Run Notebook
- Once inside JupyterLab you should be able to navigate to the notebook in the root directory named **rapids_sagemaker_hpo.ipynb**
## 2. Instructions to run MNMG example on EC2
We recommend using the RAPIDS docker image on your local system and the same image in the notebook so that the library versions match exactly. You can also achieve this with conda environments for RAPIDS.
For example, the `rapids_ec2_mnmg.ipynb` notebook uses the `rapidsai/rapidsai:21.06-cuda11.0-runtime-ubuntu18.04-py3.8` docker image; to pull and run it, use the following command. The `-v` flag sets the volume you'd like to mount on the docker container, so that the changes you make within the container are also present on your local system. Make sure to change `local/path` to the path which contains this repository.
`docker run --runtime nvidia --rm -it -p 8888:8888 -p 8787:8787 -v /local/path:/docker/path rapidsai/rapidsai:21.06-cuda11.0-runtime-ubuntu18.04-py3.8`
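
Once the container is running, the MNMG notebook spins up a Dask cluster over the local GPUs. As a minimal sketch (assuming the `dask_cuda` and `dask.distributed` packages that ship with the RAPIDS image), the cluster setup looks roughly like this:

```python
# Minimal sketch: start a single-node, multi-GPU Dask cluster inside the
# RAPIDS container and connect a client to it (dask_cuda creates one worker
# per visible GPU).
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

cluster = LocalCUDACluster()
client = Client(cluster)
print(client)
```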
## Instructions for Running RAPIDS + SageMaker Studio
0. Upload train/test data to S3
- We offer a dataset for the HPO demo in a public bucket hosted in either the `us-east-1` or `us-west-2` regions:
> https://s3.console.aws.amazon.com/s3/buckets/sagemaker-rapids-hpo-us-east-1/
> https://s3.console.aws.amazon.com/s3/buckets/sagemaker-rapids-hpo-us-west-2/
1. Create/open a SageMaker Studio session
- Choose **Amazon SageMaker Studio**, and set up a domain if one does not already exist in the region. See the Quick start procedure for details:
> https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-quick-start.html
- Add a user to the SageMaker Studio Control Panel (if one does not already exist), and Open Studio to start a session.
2. Within the SageMaker Studio session, clone this repository
- Click the Git icon on the far left of the screen (second button, below the folder icon), select Clone a Repository, and paste:
> https://github.com/rapidsai/cloud-ml-examples
- After cloning, you should see the directory **cloud-ml-examples** in your file browser.
3. Run desired notebook
- Within the root directory **cloud-ml-examples**, navigate to **aws**, and open and run the rapids_studio_hpo notebook.
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_hpo.ipynb
|
import sagemaker
import string
import randomexecution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
account=!(aws sts get-caller-identity --query Account --output text)
region=!(aws configure get region)account, regionestimator_info = {
'rapids_container': 'rapidsai/rapidsai-cloud-ml:latest',
'ecr_image': 'sagemaker-rapids-cloud-ml:latest',
'ecr_repository': 'sagemaker-rapids-cloud-ml'
}%%time
!docker pull {estimator_info['rapids_container']}ECR_container_fullname = f"{account[0]}.dkr.ecr.{region[0]}.amazonaws.com/{estimator_info['ecr_image']}"ECR_container_fullnameprint( f"source : {estimator_info['rapids_container']}\n"
f"destination : {ECR_container_fullname}")docker_login_str = !(aws ecr get-login --region {region[0]} --no-include-email)repository_query = !(aws ecr describe-repositories --repository-names {estimator_info['ecr_repository']})
if repository_query[0] == '':
!(aws ecr create-repository --repository-name {estimator_info['ecr_repository']})%%time
!docker push {ECR_container_fullname}s3_data_input = f"s3://sagemaker-rapids-hpo-{region[0]}/1_year"
s3_model_output = f"s3://{session.default_bucket()}/trained-models"# please choose HPO search ranges
hyperparameter_ranges = {
'max_depth' : sagemaker.parameter.IntegerParameter ( 5, 15 ),
'num_boost_round' : sagemaker.parameter.IntegerParameter ( 100, 500 ),
'max_features' : sagemaker.parameter.ContinuousParameter ( 0.1, 1.0 ),
}# please choose total number of HPO experiments[ we have set this number very low to allow for automated CI testing ]
max_jobs = 2# please choose number of experiments that can run in parallel
max_parallel_jobs = 2max_duration_of_experiment_seconds = 60 * 60 * 24# we will recommend a compute instance type, feel free to modify
instance_type = 'ml.p3.2xlarge' #recommend_instance_type(ml_workflow_choice, dataset_directory) # please choose whether spot instances should be used
use_spot_instances_flag = False# 'volume_size' - EBS volume size in GB, default = 30
estimator_params = {
'image_uri': ECR_container_fullname,
'role': execution_role,
'instance_type': instance_type,
'instance_count': 1,
'input_mode': 'File',
'output_path': s3_model_output,
'use_spot_instances': use_spot_instances_flag,
'max_run': max_duration_of_experiment_seconds, # 24 hours
'sagemaker_session': session,
}
if use_spot_instances_flag == True:
estimator_params.update({'max_wait' : max_duration_of_experiment_seconds + 1})estimator = sagemaker.estimator.Estimator(**estimator_params)estimator.fit(inputs = s3_data_input)metric_definitions = [{'Name': 'final-score', 'Regex': 'final-score: (.*);'}]objective_metric_name = 'final-score'hpo = sagemaker.tuner.HyperparameterTuner(estimator=estimator,
metric_definitions=metric_definitions,
objective_metric_name=objective_metric_name,
objective_type='Maximize',
hyperparameter_ranges=hyperparameter_ranges,
strategy='Random',
max_jobs=max_jobs,
max_parallel_jobs=max_parallel_jobs)tuning_job_name = 'unified-hpo-19-' + ''.join(random.choices(string.digits, k = 5))hpo.fit( inputs=s3_data_input,
job_name=tuning_job_name,
wait=True,
logs='All')
hpo.wait() # block until the .fit call above is completedsagemaker.HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/aws/helper_functions.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import random
import uuid
import boto3
import os
import traceback
def recommend_instance_type(code_choice, dataset_directory):
"""
Based on the code and [airline] dataset-size choices we recommend
instance types that we've tested and are known to work.
Feel free to ignore/make a different choice.
"""
recommended_instance_type = None
if 'CPU' in code_choice and dataset_directory in ['1_year', '3_year', 'NYC_taxi']: # noqa
detail_str = '16 cpu cores, 64GB memory'
recommended_instance_type = 'ml.m5.4xlarge'
elif 'CPU' in code_choice and dataset_directory in ['10_year']:
detail_str = '96 cpu cores, 384GB memory'
recommended_instance_type = 'ml.m5.24xlarge'
if code_choice == 'singleGPU':
detail_str = '1x GPU [ V100 ], 16GB GPU memory, 61GB CPU memory'
recommended_instance_type = 'ml.p3.2xlarge'
assert (dataset_directory not in ['10_year']) # ! switch to multi-GPU
elif code_choice == 'multiGPU':
detail_str = '4x GPUs [ V100 ], 64GB GPU memory, 244GB CPU memory'
recommended_instance_type = 'ml.p3.8xlarge'
print(f'recommended instance type : {recommended_instance_type} \n'
f'instance details : {detail_str}')
return recommended_instance_type
def validate_dockerfile(rapids_base_container, dockerfile_name='Dockerfile'):
""" Validate that our desired rapids base image matches the Dockerfile """
with open(dockerfile_name, 'r') as dockerfile_handle:
if rapids_base_container not in dockerfile_handle.read():
            raise Exception('Dockerfile base layer [i.e. FROM statement] does'
' not match the variable rapids_base_container')
def summarize_choices(s3_data_input, s3_model_output, code_choice,
algorithm_choice, cv_folds,
instance_type, use_spot_instances_flag,
search_strategy, max_jobs, max_parallel_jobs,
max_duration_of_experiment_seconds):
"""
Print the configuration choices,
often useful before submitting large jobs
"""
print(f's3 data input =\t{s3_data_input}')
print(f's3 model output =\t{s3_model_output}')
print(f'compute =\t{code_choice}')
print(f'algorithm =\t{algorithm_choice}, {cv_folds} cv-fold')
print(f'instance =\t{instance_type}')
print(f'spot instances =\t{use_spot_instances_flag}')
print(f'hpo strategy =\t{search_strategy}')
print(f'max_experiments =\t{max_jobs}')
print(f'max_parallel =\t{max_parallel_jobs}')
print(f'max runtime =\t{max_duration_of_experiment_seconds} sec')
def summarize_hpo_results(tuning_job_name):
"""
Query tuning results and display the best score,
parameters, and job-name
"""
hpo_results = boto3.Session().client(
'sagemaker'
).describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuning_job_name
)
best_job = hpo_results['BestTrainingJob']['TrainingJobName']
best_score = hpo_results['BestTrainingJob']['FinalHyperParameterTuningJobObjectiveMetric']['Value'] # noqa
best_params = hpo_results['BestTrainingJob']['TunedHyperParameters']
print(f'best score: {best_score}')
print(f'best params: {best_params}')
print(f'best job-name: {best_job}')
return hpo_results
def download_best_model(bucket, s3_model_output, hpo_results, local_directory):
""" Download best model from S3"""
try:
target_bucket = boto3.resource('s3').Bucket(bucket)
path_prefix = os.path.join(
s3_model_output.split('/')[-1],
hpo_results['BestTrainingJob']['TrainingJobName'],
'output'
)
objects = target_bucket.objects.filter(Prefix=path_prefix)
for obj in objects:
path, filename = os.path.split(obj.key)
local_filename = os.path.join(
local_directory,
'best_' + filename
)
s3_path_to_model = os.path.join(
's3://',
bucket,
path_prefix,
filename
)
target_bucket.download_file(obj.key, local_filename)
print(f'Successfully downloaded best model\n'
f'> filename: {local_filename}\n'
f'> local directory : {local_directory}\n\n'
f'full S3 path : {s3_path_to_model}')
return local_filename, s3_path_to_model
except Exception as download_error:
print(f'! Unable to download best model: {download_error}')
return None
def new_job_name_from_config(dataset_directory, region, code_choice,
algorithm_choice, cv_folds,
instance_type, trim_limit=32):
"""
Build a jobname string that captures the HPO configuration options.
    This is helpful for interpreting logs and for general book-keeping
"""
job_name = None
try:
if dataset_directory in ['1_year', '3_year', '10_year']:
data_choice_str = 'air'
validate_region(region)
elif dataset_directory in ['NYC_taxi']:
data_choice_str = 'nyc'
validate_region(region)
else:
data_choice_str = 'byo'
code_choice_str = code_choice[0] + code_choice[-3:]
if 'randomforest' in algorithm_choice.lower():
algorithm_choice_str = 'RF'
if 'xgboost' in algorithm_choice.lower():
algorithm_choice_str = 'XGB'
if 'kmeans' in algorithm_choice.lower():
algorithm_choice_str = 'KMeans'
# instance_type_str = '-'.join(instance_type.split('.')[1:])
random_str = ''.join(random.choices(uuid.uuid4().hex, k=trim_limit))
job_name = f"{data_choice_str}-{code_choice_str}"\
f"-{algorithm_choice_str}-{cv_folds}cv"\
f"-{random_str}"
job_name = job_name[:trim_limit]
print(f'generated job name : {job_name}\n')
except Exception:
traceback.print_exc()
return job_name
def validate_region(region):
"""
Check that the current [compute] region is one of the
two regions where the demo data is hosted
"""
if isinstance(region, list):
region = region[0]
if region not in ['us-east-1', 'us-west-2']:
raise Exception('Unsupported region based on demo data location,'
' please switch to us-east-1 or us-west-2')
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_intro.ipynb
|
import cudf
url = "https://github.com/plotly/datasets/raw/master/tips.csv"
tips_df = cudf.read_csv(url)
tips_df['tip_percentage'] = tips_df['tip']/tips_df['total_bill']*100
# Display average tip by dining party size
print(tips_df.groupby('size').tip_percentage.mean())from cuml import make_regression, train_test_split
from cuml.linear_model import LinearRegression as cuLinearRegression
from cuml.pipeline import Pipeline as cuPipeline
from cuml.preprocessing import StandardScaler as cuStandardScaler
from cuml.metrics.regression import r2_score
from sklearn.linear_model import LinearRegression as skLinearRegression
from sklearn.pipeline import Pipeline as skPipeline
from sklearn.preprocessing import StandardScaler as skStandardScaler
# Define parameters
n_samples = 2**19  # sized to fit on GPUs with less than 16GB RAM; increase (e.g., to 2**20) if you have more GPU memory
n_features = 399
random_state = 23%%time
# Generate data
X, y = make_regression(n_samples=n_samples, n_features=n_features, random_state=random_state)
X = cudf.DataFrame(X)
y = cudf.DataFrame(y)[0]
X_cudf, X_cudf_test, y_cudf, y_cudf_test = train_test_split(X, y, test_size = 0.2, random_state=random_state)# Copy dataset from GPU memory to host memory (CPU)
# This is done to later compare CPU and GPU results
X_train = X_cudf.to_pandas()
X_test = X_cudf_test.to_pandas()
y_train = y_cudf.to_pandas()
y_test = y_cudf_test.to_pandas()%%time
ols_sk = skLinearRegression(fit_intercept=True, n_jobs=-1)
pipe_sk = skPipeline(steps=[('scaler', skStandardScaler()),
('linear', ols_sk)])
pipe_sk.fit(X_train, y_train)%%time
predict_sk = pipe_sk.predict(X_test)%%time
r2_score_sk = r2_score(y_cudf_test, predict_sk)%%time
ols_cuml = cuLinearRegression(fit_intercept=True, algorithm='eig')
pipe_cuml = cuPipeline(steps=[('scaler', cuStandardScaler()),
('linear', ols_cuml)])
pipe_cuml.fit(X_cudf, y_cudf)%%time
predict_cuml = pipe_cuml.predict(X_cudf_test)%%time
r2_score_cuml = r2_score(y_cudf_test, predict_cuml)print(f"R^2 score (SKL): {r2_score_sk}")
print(f"R^2 score (cuML): {r2_score_cuml}")
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_hpo_extended.ipynb
|
%pip install --upgrade boto3import sagemaker
from helper_functions import *execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
account=!(aws sts get-caller-identity --query Account --output text)
region=!(aws configure get region)account, region# please choose dataset S3 bucket and directory
data_bucket = 'sagemaker-rapids-hpo-' + region[0]
dataset_directory = '10_year' # '1_year', '3_year', '10_year', 'NYC_taxi'
# please choose output bucket for trained model(s)
model_output_bucket = session.default_bucket()s3_data_input = f"s3://{data_bucket}/{dataset_directory}"
s3_model_output = f"s3://{model_output_bucket}/trained-models"
best_hpo_model_local_save_directory = os.getcwd()# please choose learning algorithm
algorithm_choice = 'XGBoost'
assert (algorithm_choice in ['XGBoost', 'RandomForest', 'KMeans'])# please choose cross-validation folds
cv_folds = 10
assert (cv_folds >= 1)# please choose code variant
ml_workflow_choice = 'multiGPU'
assert (ml_workflow_choice in ['singleCPU', 'singleGPU', 'multiCPU', 'multiGPU'])# please choose HPO search ranges
hyperparameter_ranges = {
'max_depth' : sagemaker.parameter.IntegerParameter ( 5, 15 ),
'n_estimators' : sagemaker.parameter.IntegerParameter ( 100, 500 ),
'max_features' : sagemaker.parameter.ContinuousParameter ( 0.1, 1.0 ),
} # see note above for adding additional parametersif 'XGBoost' in algorithm_choice:
# number of trees parameter name difference b/w XGBoost and RandomForest
hyperparameter_ranges['num_boost_round'] = hyperparameter_ranges.pop('n_estimators')if 'KMeans' in algorithm_choice:
hyperparameter_ranges = {
'n_clusters' : sagemaker.parameter.IntegerParameter ( 2, 20 ),
'max_iter' : sagemaker.parameter.IntegerParameter ( 100, 500 ),
}# please choose HPO search strategy
search_strategy = 'Random'
assert (search_strategy in ['Random', 'Bayesian'])# please choose total number of HPO experiments[ we have set this number very low to allow for automated CI testing ]
max_jobs = 100# please choose number of experiments that can run in parallel
max_parallel_jobs = 10max_duration_of_experiment_seconds = 60 * 60 * 24# we will recommend a compute instance type, feel free to modify
instance_type = recommend_instance_type(ml_workflow_choice, dataset_directory) # please choose whether spot instances should be used
use_spot_instances_flag = Truesummarize_choices(s3_data_input, s3_model_output, ml_workflow_choice, algorithm_choice,
cv_folds, instance_type, use_spot_instances_flag, search_strategy,
max_jobs, max_parallel_jobs, max_duration_of_experiment_seconds)%cd code# %load train.py# %load workflows/MLWorkflowSingleGPU.pyrapids_base_container = 'rapidsai/rapidsai-core:22.12-cuda11.5-runtime-ubuntu18.04-py3.9'image_base = 'rapids-sagemaker-mnmg-100'
image_tag = rapids_base_container.split(':')[1]ecr_fullname = f"{account[0]}.dkr.ecr.{region[0]}.amazonaws.com/{image_base}:{image_tag}"ecr_fullnamewith open('Dockerfile', 'w') as dockerfile:
dockerfile.writelines( f'FROM {rapids_base_container} \n\n'
f'ENV AWS_DATASET_DIRECTORY="{dataset_directory}"\n'
f'ENV AWS_ALGORITHM_CHOICE="{algorithm_choice}"\n'
f'ENV AWS_ML_WORKFLOW_CHOICE="{ml_workflow_choice}"\n'
f'ENV AWS_CV_FOLDS="{cv_folds}"\n')%%writefile -a Dockerfile
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids \
&& pip3 install sagemaker-training cupy-cuda115 flask dask-ml \
&& pip3 install --upgrade protobuf
# path where SageMaker looks for code when container runs in the cloud
ENV CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $CLOUD_PATH/entrypoint.sh
WORKDIR $CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]validate_dockerfile(rapids_base_container)
!cat Dockerfile%%time
!docker build . -t $ecr_fullname -f Dockerfiledocker_login_str = !(aws ecr get-login --region {region[0]} --no-include-email)repository_query = !(aws ecr describe-repositories --repository-names $image_base)
if repository_query[0] == '':
!(aws ecr create-repository --repository-name $image_base)# 'volume_size' - EBS volume size in GB, default = 30
estimator_params = {
'image_uri': ecr_fullname,
'role': execution_role,
'instance_type': instance_type,
'instance_count': 2,
'input_mode': 'File',
'output_path': s3_model_output,
'use_spot_instances': use_spot_instances_flag,
'max_run': max_duration_of_experiment_seconds, # 24 hours
'sagemaker_session': session,
}
if use_spot_instances_flag == True:
estimator_params.update({'max_wait' : max_duration_of_experiment_seconds + 1})estimator = sagemaker.estimator.Estimator(**estimator_params)summarize_choices(s3_data_input, s3_model_output, ml_workflow_choice, algorithm_choice,
cv_folds, instance_type, use_spot_instances_flag, search_strategy,
max_jobs, max_parallel_jobs, max_duration_of_experiment_seconds )job_name = new_job_name_from_config(dataset_directory, region, ml_workflow_choice,
algorithm_choice, cv_folds,
instance_type )estimator.fit(inputs = s3_data_input, job_name = job_name.lower())metric_definitions = [{'Name': 'final-score', 'Regex': 'final-score: (.*);'}]objective_metric_name = 'final-score'hpo = sagemaker.tuner.HyperparameterTuner(estimator=estimator,
metric_definitions=metric_definitions,
objective_metric_name=objective_metric_name,
objective_type='Maximize',
hyperparameter_ranges=hyperparameter_ranges,
strategy=search_strategy,
max_jobs=max_jobs,
max_parallel_jobs=max_parallel_jobs)summarize_choices( s3_data_input, s3_model_output, ml_workflow_choice, algorithm_choice,
cv_folds, instance_type, use_spot_instances_flag, search_strategy,
max_jobs, max_parallel_jobs, max_duration_of_experiment_seconds )# tuning_job_name = new_job_name_from_config(dataset_directory, region, ml_workflow_choice,
# algorithm_choice, cv_folds,
# # instance_type)
# hpo.fit( inputs=s3_data_input,
# job_name=tuning_job_name,
# wait=True,
# logs='All')
# hpo.wait() # block until the .fit call above is completedtuning_job_name= 'air-mGPU-XGB-10cv-527fd372fa4d8d'hpo_results = summarize_hpo_results(tuning_job_name)sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()local_filename, s3_path_to_best_model = download_best_model(model_output_bucket, s3_model_output,
hpo_results, best_hpo_model_local_save_directory)endpoint_model = sagemaker.model.Model(image_uri=ecr_fullname,
role=execution_role,
model_data=s3_path_to_best_model)ecr_fullnameDEMO_SERVING_FLAG = True
if DEMO_SERVING_FLAG:
endpoint_model.deploy(initial_instance_count=1,
instance_type='ml.g4dn.2xlarge') #'ml.p3.2xlarge'if DEMO_SERVING_FLAG:
predictor = sagemaker.predictor.Predictor(
endpoint_name=str(endpoint_model.endpoint_name),
sagemaker_session=session
)
if dataset_directory in ['1_year', '3_year', '10_year']:
on_time_example = [2019.0, 4.0, 12.0, 2.0, 3647.0, 20452.0, 30977.0, 33244.0, 1943.0, -9.0, 0.0, 75.0, 491.0] # 9 minutes early departure
late_example = [2018.0, 3.0, 9.0, 5.0, 2279.0, 20409.0, 30721.0, 31703.0, 733.0, 123.0, 1.0, 61.0, 200.0]
example_payload = str(list([on_time_example, late_example]))
else:
example_payload = '' # fill in a sample payload
result = predictor.predict(example_payload)
print( result )# if DEMO_SERVING_FLAG:
# predictor.delete_endpoint()
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_studio_hpo.ipynb
|
import sagemaker
from helper_functions import *execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
account=!(aws sts get-caller-identity --query Account --output text)
region = [session.boto_region_name]account, region# please choose dataset S3 bucket and directory
data_bucket = 'sagemaker-rapids-hpo-' + region[0]
dataset_directory = '3_year' # '1_year', '3_year', '10_year', 'NYC_taxi'
# please choose output bucket for trained model(s)
model_output_bucket = session.default_bucket()s3_data_input = f"s3://{data_bucket}/{dataset_directory}"
s3_model_output = f"s3://{model_output_bucket}/trained-models"# please choose HPO search ranges
hyperparameter_ranges = {
'max_depth' : sagemaker.parameter.IntegerParameter ( 5, 15 ),
'num_boost_round' : sagemaker.parameter.IntegerParameter ( 100, 500 ),
'max_features' : sagemaker.parameter.ContinuousParameter ( 0.1, 1.0 ),
}# please choose total number of HPO experiments[ we have set this number very low to allow for automated CI testing ]
max_jobs = 2# please choose number of experiments that can run in parallel
max_parallel_jobs = 2max_duration_of_experiment_seconds = 60 * 60 * 24# we will recommend a compute instance type, feel free to modify
instance_type = 'ml.p3.2xlarge' # recommend_instance_type(ml_workflow_choice, dataset_directory)# please choose whether spot instances should be used
use_spot_instances_flag = True%cd coderapids_base_container = 'rapidsai/rapidsai-cloud-ml:latest'image_base = 'cloud-ml-sagemaker'
image_tag = rapids_base_container.split(':')[1]ecr_fullname = f"{account[0]}.dkr.ecr.{region[0]}.amazonaws.com/{image_base}:{image_tag}"
ecr_fullnamevalidate_dockerfile(rapids_base_container)
!cat Dockerfile%%time
!sm-docker build . --repository cloud-ml-sagemaker:latest# 'volume_size' - EBS volume size in GB, default = 30
estimator_params = {
'image_uri': ecr_fullname,
'role': execution_role,
'instance_type': instance_type,
'instance_count': 1,
'input_mode': 'File',
'output_path': s3_model_output,
'use_spot_instances': use_spot_instances_flag,
'max_run': max_duration_of_experiment_seconds, # 24 hours
'sagemaker_session': session,
}
if use_spot_instances_flag == True:
estimator_params.update({'max_wait' : max_duration_of_experiment_seconds + 1})estimator = sagemaker.estimator.Estimator(**estimator_params)estimator.fit(inputs = s3_data_input)metric_definitions = [{'Name': 'final-score', 'Regex': 'final-score: (.*);'}]objective_metric_name = 'final-score'hpo = sagemaker.tuner.HyperparameterTuner(estimator=estimator,
metric_definitions=metric_definitions,
objective_metric_name=objective_metric_name,
objective_type='Maximize',
hyperparameter_ranges=hyperparameter_ranges,
strategy='Random',
max_jobs=max_jobs,
max_parallel_jobs=max_parallel_jobs)import random
import string
tuning_job_name = 'unified-hpo-' + ''.join(random.choices(string.digits, k = 5))hpo.fit( inputs=s3_data_input,
job_name=tuning_job_name,
wait=True,
logs='All')
hpo.wait() # block until the .fit call above is completedhpo_results = summarize_hpo_results(tuning_job_name)sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_higgs/sagemaker_rapids_higgs.ipynb
|
import sagemaker
import time
import boto3execution_role = sagemaker.get_execution_role()
session = sagemaker.Session()
region = boto3.Session().region_name
account = boto3.client('sts').get_caller_identity().get('Account')account, regions3_data_dir = session.upload_data(path='dataset', key_prefix='dataset/higgs-dataset')s3_data_direstimator_info = {
'rapids_container':'rapidsai/rapidsai-core:22.12-cuda11.5-runtime-ubuntu18.04-py3.9',
'ecr_image':'sagemaker-rapids-higgs:22.12-cuda11.5-runtime-ubuntu18.04-py3.9',
'ecr_repository':'sagemaker-rapids-higgs'
}%%time
!docker pull {estimator_info['rapids_container']}ECR_container_fullname = f"{account}.dkr.ecr.{region}.amazonaws.com/{estimator_info['ecr_image']}"ECR_container_fullname print( f"source : {estimator_info['rapids_container']}\n"
f"destination : {ECR_container_fullname}")hyperparams={
'n_estimators' : 15,
'max_depth' : 5,
'n_bins' : 8,
'split_criterion' : 0, # GINI:0, ENTROPY:1
'bootstrap' : 0, # true: sample with replacement, false: sample without replacement
'max_leaves' : -1, # unlimited leaves
'max_features' : 0.2,
}from sagemaker.estimator import Estimator
rapids_estimator = Estimator(image_uri=ECR_container_fullname,
role=execution_role,
instance_count=1,
instance_type='ml.p3.2xlarge', #'local_gpu'
max_run=60 * 60 * 24,
max_wait=(60 * 60 * 24)+1,
use_spot_instances=True,
hyperparameters=hyperparams,
metric_definitions=[{'Name': 'test_acc', 'Regex': 'test_acc: ([0-9\\.]+)'}])%%time
rapids_estimator.fit(inputs = s3_data_dir)from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {
'n_estimators' : IntegerParameter(10, 200),
'max_depth' : IntegerParameter(1, 22),
'n_bins' : IntegerParameter(5, 24),
'split_criterion' : CategoricalParameter([0, 1]),
'bootstrap' : CategoricalParameter([True, False]),
'max_features' : ContinuousParameter(0.01, 0.5),
}from sagemaker.estimator import Estimator
rapids_estimator = Estimator(image_uri=ECR_container_fullname,
role=execution_role,
instance_count=2,
instance_type='ml.p3.8xlarge',
max_run=60 * 60 * 24,
max_wait=(60 * 60 * 24)+1,
use_spot_instances=True,
hyperparameters=hyperparams,
metric_definitions=[{'Name': 'test_acc', 'Regex': 'test_acc: ([0-9\\.]+)'}])tuner = HyperparameterTuner(rapids_estimator,
objective_metric_name='test_acc',
hyperparameter_ranges=hyperparameter_ranges,
strategy='Bayesian',
max_jobs=2,
max_parallel_jobs=2,
objective_type='Maximize',
metric_definitions=[{'Name': 'test_acc', 'Regex': 'test_acc: ([0-9\\.]+)'}])job_name = 'rapidsHPO' + time.strftime('%Y-%m-%d-%H-%M-%S-%j', time.gmtime())
tuner.fit({'dataset': s3_data_dir}, job_name=job_name)aws ecr delete-repository --force --repository-name
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_higgs/README.md
|
This repository contains code and config files supporting the following blog post:
https://medium.com/@shashankprasanna/running-rapids-experiments-at-scale-using-amazon-sagemaker-d516420f165b
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_higgs
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_higgs/docker/rapids-higgs.py
|
#!/usr/bin/env python
# coding: utf-8
from cuml import RandomForestClassifier as cuRF
from cuml.preprocessing.model_selection import train_test_split
import cudf
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score
import os
from urllib.request import urlretrieve
import gzip
import argparse
def main(args):
# SageMaker options
model_dir = args.model_dir
data_dir = args.data_dir
col_names = ['label'] + ["col-{}".format(i) for i in range(2, 30)] # Assign column names
dtypes_ls = ['int32'] + ['float32' for _ in range(2, 30)] # Assign dtypes to each column
data = cudf.read_csv(data_dir+'HIGGS.csv', names=col_names, dtype=dtypes_ls)
X_train, X_test, y_train, y_test = train_test_split(data, 'label', train_size=0.70)
# Hyper-parameters
hyperparams={
'n_estimators' : args.n_estimators,
'max_depth' : args.max_depth,
'n_bins' : args.n_bins,
'split_criterion' : args.split_criterion,
'bootstrap' : args.bootstrap,
'max_leaves' : args.max_leaves,
'max_features' : args.max_features
}
cu_rf = cuRF(**hyperparams)
cu_rf.fit(X_train, y_train)
print("test_acc:", accuracy_score(cu_rf.predict(X_test), y_test)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# Hyper-parameters
parser.add_argument('--n_estimators', type=int, default=20)
parser.add_argument('--max_depth', type=int, default=16)
parser.add_argument('--n_bins', type=int, default=8)
parser.add_argument('--split_criterion', type=int, default=0)
parser.add_argument('--bootstrap', type=bool, default=True)
parser.add_argument('--max_leaves', type=int, default=-1)
parser.add_argument('--max_features', type=float, default=0.2)
# SageMaker parameters
parser.add_argument('--model_dir', type=str)
parser.add_argument('--model_output_dir', type=str, default='/opt/ml/output/')
parser.add_argument('--data_dir', type=str, default='/opt/ml/input/data/dataset/')
args = parser.parse_args()
main(args)
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_higgs
|
rapidsai_public_repos/cloud-ml-examples/aws/rapids_sagemaker_higgs/docker/Dockerfile
|
FROM rapidsai/rapidsai-core:22.12-cuda11.5-runtime-ubuntu18.04-py3.9
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids \
&& pip3 install sagemaker-training cupy-cuda115 flask \
&& pip3 install --upgrade protobuf
# Copies the training code inside the container
COPY rapids-higgs.py /opt/ml/code/rapids-higgs.py
# Defines rapids-higgs.py as script entry point
ENV SAGEMAKER_PROGRAM rapids-higgs.py
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/gpu_tree_shap/gpu_tree_shap.ipynb
|
import io
import os
import boto3
import sagemaker
import time
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-inference-script-mode"from sagemaker.inputs import TrainingInput
from sagemaker.xgboost.estimator import XGBoost
job_name = "DEMO-xgboost-inference-script-mode-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
print("Training job", job_name)
hyperparameters = {
"max_depth": "6",
"eta": "0.3",
"gamma": "0",
"min_child_weight": "1",
"subsample": "1",
"objective": "reg:squarederror",
"num_round": "500",
"verbosity": "1",
# "tree_method": "hist", "predictor": "cpu_predictor", # for CPU version
# dataset-specific params
# "sklearn_dataset": "sklearn.datasets.fetch_california_housing()", # uncomment to use California housing dataset
"content_type": "csv", # comment out when using California housing dataset
"label_column": "17", # comment out when using California housing dataset
}
instance_type = "ml.g4dn.xlarge" # "ml.c5.xlarge" for CPU, "ml.g4dn.xlarge" for GPU
xgb_script_mode_estimator = XGBoost(
entry_point="train.py",
hyperparameters=hyperparameters,
role=role,
instance_count=1,
instance_type=instance_type,
framework_version="1.3-1",
output_path="s3://{}/{}/{}/output".format(bucket, prefix, job_name),
)"""
Since the estimator requires a valid file type but we are specifying a sklearn_dataset,
we pass in a path to a tiny csv file which will not be used.
"""
content_type = "text/csv" # MIME type
train_input = TrainingInput(
"s3://sagemaker-rapids-hpo-us-east-1/dummy_data.csv", content_type=content_type
)
# Example of using a public CSV dataset - remember to remove "sklearn_dataset" hyperparameter
# Comment out when using California housing dataset
train_input = TrainingInput(
"s3://sagemaker-rapids-hpo-us-east-1/NYC_taxi/NYC_taxi_tripdata_2020-01.csv", content_type="text/csv"
)%%time
xgb_script_mode_estimator.fit({"train": train_input}, job_name=job_name)from sagemaker.xgboost.model import XGBoostModel
model_data = xgb_script_mode_estimator.model_data
print(model_data)
xgb_inference_model = XGBoostModel(
model_data=model_data,
role=role,
entry_point="inference.py",
framework_version="1.3-1",
)predictor = xgb_inference_model.deploy(
initial_instance_count=1,
instance_type=instance_type,
serializer=None, deserializer=None,
)print(predictor.serializer)
predictor.serializer = sagemaker.serializers.CSVSerializer() # for NYC_taxi predictions. Comment out for sklearn predictionsimport pandas as pd
data = pd.read_csv('s3://sagemaker-rapids-hpo-us-east-1/NYC_taxi/NYC_taxi_tripdata_2020-01.csv')
X = data.iloc[:,:-1]cutoff = 0
input_data = []
for _, row in X.iterrows():
cutoff += 1
if cutoff > 20000:
break
to_predict = []
for i in range(row.shape[0]):
to_predict.append(row[i])
input_data.append(to_predict)# input_data = "sklearn.datasets.fetch_california_housing()" # uncomment to make predictions on California housing dataset
predictor_input = str(input_data) + ", predict"
predictions = predictor.predict(predictor_input)import numpy as np
def clean_array(arr, three_dim=False):
cleaned_list = []
arr_count = 0
for num in arr:
if '[' in num:
arr_count += 1
num = num.replace('[', '')
cleaned_list.append(float(num))
elif ']' in num:
num = num.replace(']', '')
cleaned_list.append(float(num))
else:
cleaned_list.append(float(num))
array = np.array(cleaned_list, dtype='float32')
if three_dim: # shap_interactions will be 3D
y = int( len(array) / arr_count )
x = int( arr_count / y )
array = array.reshape(x, y, y)
elif(arr_count > 1):
y = int( len(array) / arr_count )
array = array.reshape(arr_count, y)
return array
predictions = clean_array(predictions[0])predictor_input = str(input_data) + ", pred_contribs"
start = time.time()
shap_values = predictor.predict(predictor_input)
print("SHAP time {}".format(time.time() - start))
shap_values = clean_array(shap_values[0])predictor_input = str(input_data) + ", pred_interactions"
start = time.time()
shap_interactions = predictor.predict(predictor_input)
print("SHAP interactions time {}".format(time.time() - start))
shap_interactions = clean_array(shap_interactions[0], three_dim=True)predictor.delete_endpoint()
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/gpu_tree_shap/train.py
|
from __future__ import print_function
import argparse
import json
import logging
import os
import pickle as pkl
import pandas as pd
import xgboost as xgb
from sagemaker_containers import entry_point
from sagemaker_xgboost_container import distributed
# from sagemaker_xgboost_container.data_utils import get_dmatrix
import time
import shutil
import csv
# Heavily draws from AWS's data_utils.py script:
# https://github.com/aws/sagemaker-xgboost-container/blob/cf2a05a525606225e91d7e588f57143827cbe3f7/src/sagemaker_xgboost_container/data_utils.py
# but we use our own version because the AWS version does not allow you to specify the label column.
def get_dmatrix(data_path, content_type, label_column, csv_weights=0):
"""Create Data Matrix from CSV file.
Assumes that sanity validation for content type has been done.
:param data_path: Either directory or file
:param content_type: Only supports "csv"
:param label_column: Integer corresponding to index of the label column, starting with 0
:param csv_weights: 1 if the instance weights are in the second column of csv file; otherwise, 0
:return: xgb.DMatrix or None
"""
if "csv" not in content_type.lower():
raise Exception("File type '{}' not supported".format(content_type))
if not isinstance(data_path, list):
if not os.path.exists(data_path):
logging.info('File path {} does not exist!'.format(data_path))
return None
files_path = get_files_path(data_path)
else:
# Create a directory with symlinks to input files.
files_path = "/tmp/sagemaker_xgboost_input_data"
shutil.rmtree(files_path, ignore_errors=True)
os.mkdir(files_path)
for path in data_path:
if not os.path.exists(path):
return None
if os.path.isfile(path):
os.symlink(path, os.path.join(files_path, os.path.basename(path)))
else:
for file in os.scandir(path):
os.symlink(file, os.path.join(files_path, file.name))
dmatrix = get_csv_dmatrix(files_path, label_column, csv_weights)
return dmatrix
def get_files_path(data_path):
if os.path.isfile(data_path):
files_path = data_path
else:
for root, dirs, files in os.walk(data_path):
if dirs == []:
files_path = root
break
return files_path
def get_csv_dmatrix(files_path, label_column, csv_weights):
"""Get Data Matrix from CSV data in file mode.
Infer the delimiter of data from first line of first data file.
:param files_path: File path where CSV formatted training data resides, either directory or file
:param label_column: Integer corresponding to index of the label column, starting with 0
:param csv_weights: 1 if instance weights are in second column of CSV data; else 0
:return: xgb.DMatrix
"""
csv_file = files_path if os.path.isfile(files_path) else [
f for f in os.listdir(files_path) if os.path.isfile(os.path.join(files_path, f))][0]
with open(os.path.join(files_path, csv_file)) as read_file:
sample_csv_line = read_file.readline()
delimiter = _get_csv_delimiter(sample_csv_line)
try:
if csv_weights == 1:
dmatrix = xgb.DMatrix(
'{}?format=csv&label_column={}&delimiter={}&weight_column=1'.format(files_path, label_column, delimiter))
else:
dmatrix = xgb.DMatrix('{}?format=csv&label_column={}&delimiter={}'.format(files_path, label_column, delimiter))
except Exception as e:
raise Exception("Failed to load csv data with exception:\n{}".format(e))
return dmatrix
def _get_csv_delimiter(sample_csv_line):
try:
delimiter = csv.Sniffer().sniff(sample_csv_line).delimiter
logging.info("Determined delimiter of CSV input is \'{}\'".format(delimiter))
except Exception as e:
raise Exception("Could not determine delimiter on line {}:\n{}".format(sample_csv_line[:50], e))
return delimiter
def _xgb_train(params, dtrain, evals, num_boost_round, model_dir, is_master):
"""Run xgb train on arguments given with rabit initialized.
This is our rabit execution function.
:param args_dict: Argument dictionary used to run xgb.train().
:param is_master: True if current node is master host in distributed training,
or is running single node training job.
Note that rabit_run will include this argument.
"""
start = time.time()
booster = xgb.train(params=params, dtrain=dtrain, evals=evals, num_boost_round=num_boost_round)
logging.info("XGBoost training time {}".format(time.time() - start))
if is_master:
model_location = model_dir + "/xgboost-model"
pkl.dump(booster, open(model_location, "wb"))
logging.info("Stored trained model at {}".format(model_location))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# Hyperparameters are described here.
parser.add_argument(
"--max_depth",
type=int,
)
parser.add_argument("--eta", type=float)
parser.add_argument("--gamma", type=int)
parser.add_argument("--min_child_weight", type=int)
parser.add_argument("--subsample", type=float)
parser.add_argument("--verbosity", type=int)
parser.add_argument("--objective", type=str)
parser.add_argument("--num_round", type=int)
parser.add_argument("--tree_method", type=str, default="gpu_hist") # "auto", "hist", or "gpu_hist"
parser.add_argument("--predictor", type=str, default="gpu_predictor") # "auto"
# e.g., 'sklearn.datasets.fetch_california_housing()'
parser.add_argument("--sklearn_dataset", type=str, default="None")
# specify file type
parser.add_argument("--content_type", type=str)
# if csv, should specify a label column
parser.add_argument("--label_column", type=int)
# Sagemaker specific arguments. Defaults are set in the environment variables.
parser.add_argument("--output_data_dir", type=str, default=os.environ.get("SM_OUTPUT_DATA_DIR"))
parser.add_argument("--model_dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
parser.add_argument("--validation", type=str, default=os.environ.get("SM_CHANNEL_VALIDATION"))
parser.add_argument("--sm_hosts", type=str, default=os.environ.get("SM_HOSTS"))
parser.add_argument("--sm_current_host", type=str, default=os.environ.get("SM_CURRENT_HOST"))
args, _ = parser.parse_known_args()
# Get SageMaker host information from runtime environment variables
sm_hosts = json.loads(args.sm_hosts)
sm_current_host = args.sm_current_host
sklearn_dataset = args.sklearn_dataset
if "None" in sklearn_dataset:
dtrain = get_dmatrix(args.train, args.content_type, args.label_column)
try:
dval = get_dmatrix(args.validation, args.content_type, args.label_column)
except Exception:
dval = None
else: # Use a dataset from sklearn.datasets
import sklearn.datasets
try:
# e.g., sklearn_dataset = "sklearn.datasets.fetch_california_housing()"
data = eval(sklearn_dataset)
except Exception:
raise ValueError("Function {} is not supported. Try something like 'sklearn.datasets.fetch_california_housing()'"
.format(sklearn_dataset))
X = data.data
y = data.target
dtrain = xgb.DMatrix(X, y)
dval = None
watchlist = (
[(dtrain, "train"), (dval, "validation")] if dval is not None else [(dtrain, "train")]
)
train_hp = {
"max_depth": args.max_depth,
"eta": args.eta,
"gamma": args.gamma,
"min_child_weight": args.min_child_weight,
"subsample": args.subsample,
"verbosity": args.verbosity,
"objective": args.objective,
"tree_method": args.tree_method,
"predictor": args.predictor,
}
xgb_train_args = dict(
params=train_hp,
dtrain=dtrain,
evals=watchlist,
num_boost_round=args.num_round,
model_dir=args.model_dir,
)
if len(sm_hosts) > 1:
# Wait until all hosts are able to find each other
entry_point._wait_hostname_resolution()
# Execute training function after initializing rabit.
distributed.rabit_run(
exec_fun=_xgb_train,
args=xgb_train_args,
include_in_training=(dtrain is not None),
hosts=sm_hosts,
current_host=sm_current_host,
update_rabit_args=True,
)
else:
# If single node training, call training method directly.
if dtrain:
xgb_train_args["is_master"] = True
_xgb_train(**xgb_train_args)
else:
raise ValueError("Training channel must have data to train model.")
def model_fn(model_dir):
"""Deserialize and return fitted model.
Note that this should have the same name as the serialized model in the _xgb_train method
"""
model_file = "xgboost-model"
booster = pkl.load(open(os.path.join(model_dir, model_file), "rb"))
return booster
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/gpu_tree_shap/inference.py
|
import json
import os
import pickle as pkl
import numpy as np
import sagemaker_xgboost_container.encoder as xgb_encoders
import xgboost as xgb
def model_fn(model_dir):
"""
Deserialize and return fitted model.
"""
model_file = "xgboost-model"
booster = pkl.load(open(os.path.join(model_dir, model_file), "rb"))
return booster
def transform_fn(model, request_body, content_type, accept_type):
"""
The SageMaker XGBoost model server receives the request data body and the content type,
we first need to create a DMatrix (an object that can be passed to predict)
"""
multiple_predictions_flag = False
if "csv" not in content_type:
# request_body is a bytes object, which we decode to a string
request_body = request_body.decode()
# request_body is of the form 'dataset, predict_function'
# e.g. 'sklearn.datasets.fetch_california_housing(), pred_contribs'
# comma separated: '[[var1, var2], [var3, var4], ..., varx]], pred_contribs'
prediction_methods = ["predict", "pred_contribs", "pred_interactions"]
if request_body.split(', ')[-1] in prediction_methods:
if "[[" in request_body:
multiple_predictions_flag = True
dataset = json.loads(", ".join(request_body.split(', ')[:-1]))
else:
# "var1, var2, var3, var4, ..., varx, pred_contribs"
dataset = ", ".join(request_body.split(', ')[:-1])
predict = request_body.split(', ')[-1]
else:
dataset = request_body
predict = "predict"
if "sklearn.datasets" in dataset:
import sklearn.datasets
try:
data = eval(dataset)
except Exception:
raise ValueError("Function {} is not supported. Try something like 'sklearn.datasets.fetch_california_housing()'"
.format(dataset))
X = data.data
y = data.target
dmat = xgb.DMatrix(X, y)
input_data = dmat
elif content_type == "text/libsvm":
input_data = xgb_encoders.libsvm_to_dmatrix(dataset)
elif content_type == "text/csv":
if multiple_predictions_flag:
from pandas import DataFrame
dataset = DataFrame(dataset)
# this is for the NYC Taxi columns - may have to adjust for other CSV inputs
dataset.columns = ['f0', 'f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16']
input_data = xgb.DMatrix(dataset)
else:
input_data = xgb_encoders.csv_to_dmatrix(dataset)
else:
raise ValueError("Content type {} is not supported.".format(content_type))
"""
Now that we have the DMatrix and a prediction method,
we invoke the predict method and return the output.
"""
if "predict" in predict:
predictions = model.predict(input_data)
return str(predictions.tolist())
elif "pred_contribs" in predict:
shap_values = model.predict(input_data, pred_contribs=True)
return str(shap_values.tolist())
elif "pred_interactions" in predict:
shap_interactions = model.predict(input_data, pred_interactions=True)
return str(shap_interactions.tolist())
else:
raise ValueError("Prediction parameter {} is not supported.".format(predict))
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/code/entrypoint.sh
|
#!/bin/bash
source activate rapids
if [[ "$1" == "serve" ]]; then
echo -e "@ entrypoint -> launching serving script \n"
python serve.py
else
echo -e "@ entrypoint -> launching training script \n"
python train.py
fi
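# Invocation sketch (illustrative, not part of the original script): SageMaker
# starts the container with "train" or "serve" as its first argument, e.g.
#   docker run <image> train   -> python train.py
#   docker run <image> serve   -> python serve.py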
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/code/train.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
import traceback
import logging
from HPOConfig import HPOConfig
from MLWorkflow import create_workflow
def train():
hpo_config = HPOConfig(input_args=sys.argv[1:])
ml_workflow = create_workflow(hpo_config)
# cross-validation to improve robustness via multiple train/test reshuffles
for i_fold in range(hpo_config.cv_folds):
# ingest
dataset = ml_workflow.ingest_data()
# handle missing samples [ drop ]
dataset = ml_workflow.handle_missing_data(dataset)
# split into train and test set
X_train, X_test, y_train, y_test = ml_workflow.split_dataset(
dataset,
random_state=i_fold
)
# train model
trained_model = ml_workflow.fit(X_train, y_train)
# use trained model to predict target labels of test data
predictions = ml_workflow.predict(trained_model, X_test)
# score test set predictions against ground truth
score = ml_workflow.score(y_test, predictions)
# save trained model [ if it sets a new-high score ]
ml_workflow.save_best_model(score, trained_model)
# restart cluster to avoid memory creep [ for multi-CPU/GPU ]
ml_workflow.cleanup(i_fold)
# emit final score to cloud HPO [i.e., SageMaker]
ml_workflow.emit_final_score()
def configure_logging():
hpo_log = logging.getLogger('hpo_log')
log_handler = logging.StreamHandler()
log_handler.setFormatter(
logging.Formatter('%(asctime)-15s %(levelname)8s %(name)s %(message)s')
)
hpo_log.addHandler(log_handler)
hpo_log.setLevel(logging.DEBUG)
hpo_log.propagate = False
if __name__ == "__main__":
configure_logging()
try:
train()
sys.exit(0) # success exit code
except Exception:
traceback.print_exc()
sys.exit(-1) # failure exit code
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/code/MLWorkflow.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from abc import abstractmethod
import functools
import time
import logging
hpo_log = logging.getLogger('hpo_log')
def create_workflow(hpo_config):
""" Workflow Factory [instantiate MLWorkflow based on config] """
if hpo_config.compute_type == 'single-CPU':
from workflows.MLWorkflowSingleCPU import MLWorkflowSingleCPU
return MLWorkflowSingleCPU(hpo_config)
if hpo_config.compute_type == 'multi-CPU':
from workflows.MLWorkflowMultiCPU import MLWorkflowMultiCPU
return MLWorkflowMultiCPU(hpo_config)
if hpo_config.compute_type == 'single-GPU':
from workflows.MLWorkflowSingleGPU import MLWorkflowSingleGPU
return MLWorkflowSingleGPU(hpo_config)
if hpo_config.compute_type == 'multi-GPU':
from workflows.MLWorkflowMultiGPU import MLWorkflowMultiGPU
return MLWorkflowMultiGPU(hpo_config)
class MLWorkflow():
@abstractmethod
def ingest_data(self): pass
@abstractmethod
def handle_missing_data(self, dataset): pass
@abstractmethod
def split_dataset(self, dataset, i_fold): pass
@abstractmethod
def fit(self, X_train, y_train): pass
@abstractmethod
def predict(self, trained_model, X_test): pass
@abstractmethod
def score(self, y_test, predictions): pass
@abstractmethod
def save_trained_model(self, score, trained_model): pass
@abstractmethod
def cleanup(self, i_fold): pass
@abstractmethod
def emit_final_score(self): pass
def timer_decorator(target_function):
@functools.wraps(target_function)
def timed_execution_wrapper(*args, **kwargs):
start_time = time.perf_counter()
result = target_function(*args, **kwargs)
exec_time = time.perf_counter() - start_time
hpo_log.info(f" --- {target_function.__name__}"
f" completed in {exec_time:.5f} s")
return result
return timed_execution_wrapper
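# Usage sketch (illustrative, not part of the original module): workflow
# implementations can wrap any pipeline step with @timer_decorator so that
# its runtime is logged via hpo_log, e.g.
#
#   @timer_decorator
#   def ingest_data(self):
#       ...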
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/code/Dockerfile
|
FROM rapidsai/rapidsai-core:22.12-cuda11.5-runtime-ubuntu18.04-py3.9
ENV AWS_DATASET_DIRECTORY="10_year"
ENV AWS_ALGORITHM_CHOICE="XGBoost"
ENV AWS_ML_WORKFLOW_CHOICE="multiGPU"
ENV AWS_CV_FOLDS="10"
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids \
&& pip3 install sagemaker-training cupy-cuda115 flask dask-ml \
&& pip3 install --upgrade protobuf
# path where SageMaker looks for code when container runs in the cloud
ENV CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $CLOUD_PATH/entrypoint.sh
WORKDIR $CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/code/HPOConfig.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import argparse
import glob
import pprint
import HPODatasets
import logging
hpo_log = logging.getLogger('hpo_log')
class HPOConfig(object):
""" Cloud integrated RAPIDS HPO functionality with AWS SageMaker focus """
sagemaker_directory_structure = {
'train_data': '/opt/ml/input/data/training',
'model_store': '/opt/ml/model',
'output_artifacts': '/opt/ml/output'
}
def __init__(self, input_args,
directory_structure=sagemaker_directory_structure,
worker_limit=None):
# parse configuration from job-name
(self.dataset_type, self.model_type,
self.compute_type, self.cv_folds) = self.parse_configuration()
# parse input parameters for HPO
self.model_params = self.parse_hyper_parameter_inputs(input_args)
# parse dataset files/paths and dataset columns, labels, dtype
(self.target_files, self.input_file_type,
self.dataset_columns, self.label_column,
self.dataset_dtype) = self.detect_data_inputs(directory_structure)
self.model_store_directory = directory_structure['model_store']
self.output_artifacts_directory = directory_structure['output_artifacts'] # noqa
def parse_configuration(self):
""" Parse the ENV variables [ set in the dockerfile ]
to determine configuration settings """
hpo_log.info('\nparsing configuration from environment settings...')
dataset_type = 'Airline'
model_type = 'RandomForest'
compute_type = 'single-GPU'
cv_folds = 3
try:
# parse dataset choice
dataset_selection = os.environ['AWS_DATASET_DIRECTORY'].lower()
if dataset_selection in ['1_year', '3_year', '10_year']:
dataset_type = 'Airline'
elif dataset_selection in ['nyc_taxi']:
dataset_type = 'NYCTaxi'
else:
dataset_type = 'BYOData'
# parse model type
model_selection = os.environ['AWS_ALGORITHM_CHOICE'].lower()
if model_selection in ['randomforest']:
model_type = 'RandomForest'
elif model_selection in ['xgboost']:
model_type = 'XGBoost'
elif model_selection in ['kmeans']:
model_type = 'KMeans'
# parse compute choice
compute_selection = os.environ['AWS_ML_WORKFLOW_CHOICE'].lower()
if 'multigpu' in compute_selection:
compute_type = 'multi-GPU'
elif 'multicpu' in compute_selection:
compute_type = 'multi-CPU'
elif 'singlecpu' in compute_selection:
compute_type = 'single-CPU'
elif 'singlegpu' in compute_selection:
compute_type = 'single-GPU'
# parse CV folds
cv_folds = int(os.environ['AWS_CV_FOLDS'])
except KeyError as error:
hpo_log.info(f'Configuration parser failed : {error}')
assert (dataset_type in ['Airline', 'NYCTaxi', 'BYOData'])
assert (model_type in ['RandomForest', 'XGBoost', 'KMeans'])
assert (compute_type in ['single-GPU', 'multi-GPU',
'single-CPU', 'multi-CPU'])
assert (cv_folds >= 1)
hpo_log.info(f' Dataset: {dataset_type}\n'
f' Compute: {compute_type}\n'
f' Algorithm: {model_type}\n'
f' CV_folds: {cv_folds}\n')
return dataset_type, model_type, compute_type, cv_folds
def parse_hyper_parameter_inputs(self, input_args):
""" Parse hyperparmeters provided by the HPO orchestrator """
hpo_log.info('parsing model hyperparameters from command line arguments...log') # noqa
parser = argparse.ArgumentParser()
if 'XGBoost' in self.model_type:
# intentionally breaking PEP8 below for argument alignment
parser.add_argument( '--max_depth', type = int, default = 5 ) # noqa
parser.add_argument( '--num_boost_round', type = int, default = 10 ) # noqa
parser.add_argument( '--subsample', type = float, default = .9 ) # noqa
parser.add_argument( '--learning_rate', type = float, default = 0.3) # noqa
parser.add_argument( '--reg_lambda', type = float, default = 1) # noqa
parser.add_argument( '--gamma', type = float, default = 0. ) # noqa
parser.add_argument( '--alpha', type = float, default = 0. ) # noqa
parser.add_argument( '--seed', type = int, default = 0 ) # noqa
args, unknown_args = parser.parse_known_args(input_args)
model_params = {
'max_depth': args.max_depth,
'num_boost_round': args.num_boost_round,
'learning_rate': args.learning_rate,
'gamma': args.gamma,
'lambda': args.reg_lambda,
'random_state': args.seed,
'verbosity': 0,
'seed': args.seed,
'objective': 'binary:logistic'
}
if 'single-CPU' in self.compute_type:
model_params.update({'nthreads': os.cpu_count()})
if 'GPU' in self.compute_type:
model_params.update({'tree_method': 'gpu_hist'})
else:
model_params.update({'tree_method': 'hist'})
elif 'RandomForest' in self.model_type:
# intentionally breaking PEP8 below for argument alignment
parser.add_argument( '--max_depth' , type = int, default = 5) # noqa
parser.add_argument( '--n_estimators', type = int, default = 10) # noqa
parser.add_argument( '--max_features', type = float, default = 1.0) # noqa
parser.add_argument( '--n_bins' , type = float, default = 64) # noqa
parser.add_argument( '--bootstrap' , type = bool, default = True) # noqa
parser.add_argument( '--random_state', type = int, default = 0) # noqa
args, unknown_args = parser.parse_known_args(input_args)
model_params = {
'max_depth': args.max_depth,
'n_estimators': args.n_estimators,
'max_features': args.max_features,
'n_bins': args.n_bins,
'bootstrap': args.bootstrap,
'random_state': args.random_state
}
elif 'KMeans' in self.model_type:
parser.add_argument( '--n_clusters' , type = int, default = 8)
parser.add_argument( '--max_iter' , type = int, default = 300)
parser.add_argument( '--random_state', type = int, default = 1)
compute_selection = os.environ['AWS_ML_WORKFLOW_CHOICE'].lower()
if 'gpu' in compute_selection: # 'singlegpu' or 'multigpu'
parser.add_argument( '--init' , type = str, default = 'scalable-k-means++')
elif 'cpu' in compute_selection:
parser.add_argument( '--init' , type = str, default = 'k-means++')
args, unknown_args = parser.parse_known_args(input_args)
model_params = {
'n_clusters': args.n_clusters,
'max_iter': args.max_iter,
'random_state': args.random_state,
'init': args.init
}
else:
raise Exception(f"!error: unknown model type {self.model_type}")
hpo_log.info(pprint.pformat(model_params, indent=5))
return model_params
def detect_data_inputs(self, directory_structure):
"""
Scan mounted data directory to determine files to ingest.
Notes: single-CPU pandas read_parquet needs a directory input
single-GPU cudf read_parquet needs a list of files
multi-CPU/GPU can accept either a list or a directory
"""
parquet_files = glob.glob(
os.path.join(directory_structure['train_data'], '*.parquet')
)
csv_files = glob.glob(
os.path.join(directory_structure['train_data'], '*.csv')
)
if len(csv_files):
hpo_log.info('CSV input files detected')
target_files = csv_files
input_file_type = 'CSV'
elif len(parquet_files):
hpo_log.info('Parquet input files detected')
"""
if 'single-CPU' in self.compute_type:
# pandas read_parquet needs a directory input - no longer the case with newest pandas
target_files = directory_structure['train_data'] + '/'
else:
"""
target_files = parquet_files
input_file_type = 'Parquet'
else:
raise Exception("! No [CSV or Parquet] input files detected")
n_datafiles = len(target_files)
assert (n_datafiles > 0)
pprint.pprint(target_files)
hpo_log.info(f'detected {n_datafiles} files as input')
if 'Airline' in self.dataset_type:
dataset_columns = HPODatasets.airline_feature_columns
dataset_label_column = HPODatasets.airline_label_column
dataset_dtype = HPODatasets.airline_dtype
elif 'NYCTaxi' in self.dataset_type:
dataset_columns = HPODatasets.nyctaxi_feature_columns
dataset_label_column = HPODatasets.nyctaxi_label_column
dataset_dtype = HPODatasets.nyctaxi_dtype
elif 'BYOData' in self.dataset_type:
dataset_columns = HPODatasets.BYOD_feature_columns
dataset_label_column = HPODatasets.BYOD_label_column
dataset_dtype = HPODatasets.BYOD_dtype
return (target_files, input_file_type, dataset_columns,
dataset_label_column, dataset_dtype)
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/code/HPODatasets.py
|
""" Airline Dataset target label and feature column names """
airline_label_column = 'ArrDel15'
airline_feature_columns = ['Year', 'Quarter', 'Month', 'DayOfWeek',
'Flight_Number_Reporting_Airline',
'DOT_ID_Reporting_Airline',
'OriginCityMarketID', 'DestCityMarketID',
'DepTime', 'DepDelay', 'DepDel15', 'ArrDel15',
'AirTime', 'Distance']
airline_dtype = 'float32'
""" NYC TLC Trip Record Data target label and feature column names """
nyctaxi_label_column = 'above_average_tip'
nyctaxi_feature_columns = ['VendorID',
'tpep_pickup_datetime', 'tpep_dropoff_datetime',
'passenger_count',
'trip_distance',
'RatecodeID',
'store_and_fwd_flag',
'PULocationID',
'DOLocationID',
'payment_type', 'fare_amount',
'extra', 'mta_tax', 'tolls_amount',
'improvement_surcharge', 'total_amount',
'congestion_surcharge', 'above_average_tip']
nyctaxi_dtype = 'float32'
""" Insert your dataset here! """
BYOD_label_column = '' # e.g., nyctaxi_label_column
BYOD_feature_columns = [] # e.g., nyctaxi_feature_columns
BYOD_dtype = None # e.g., nyctaxi_dtype
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/code/serve.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os
import sys
import traceback
import joblib
import glob
import json
import time
import xgboost
import numpy
import flask
from flask import Flask, Response
import logging
from functools import lru_cache
try:
""" check for GPU via library imports """
import cupy
from cuml import ForestInference
GPU_INFERENCE_FLAG = True
except ImportError as gpu_import_error:
GPU_INFERENCE_FLAG = False
print(f'\n!GPU import error: {gpu_import_error}\n')
# set to true to print incoming request headers and data
DEBUG_FLAG = False
def serve(xgboost_threshold=0.5):
""" Flask Inference Server for SageMaker hosting of RAPIDS Models """
app = Flask(__name__)
logging.basicConfig(level=logging.DEBUG)
if GPU_INFERENCE_FLAG:
app.logger.info('GPU Model Serving Workflow')
app.logger.info(f'> {cupy.cuda.runtime.getDeviceCount()}'
f' GPUs detected \n')
else:
app.logger.info('CPU Model Serving Workflow')
app.logger.info(f'> {os.cpu_count()} CPUs detected \n')
@app.route("/ping", methods=["GET"])
def ping():
""" SageMaker required method, ping heartbeat """
return Response(response="\n", status=200)
@lru_cache()
def load_trained_model():
"""
Cached loading of trained [ XGBoost, RandomForest, or KMeans ] model into memory
Note: Models selected via filename parsing, edit if necessary
"""
xgb_models = glob.glob('/opt/ml/model/*_xgb')
rf_models = glob.glob('/opt/ml/model/*_rf')
kmeans_models = glob.glob('/opt/ml/model/*_kmeans')
app.logger.info(f'detected xgboost models : {xgb_models}')
app.logger.info(f'detected randomforest models : {rf_models}')
app.logger.info(f'detected kmeans models : {kmeans_models}\n\n')
model_type = None
start_time = time.perf_counter()
if len(xgb_models):
model_type = 'XGBoost'
model_filename = xgb_models[0]
if GPU_INFERENCE_FLAG:
# FIL
reloaded_model = ForestInference.load(model_filename)
else:
# native XGBoost
reloaded_model = xgboost.Booster()
reloaded_model.load_model(fname=model_filename)
elif len(rf_models):
model_type = 'RandomForest'
model_filename = rf_models[0]
reloaded_model = joblib.load(model_filename)
elif len(kmeans_models):
model_type = 'KMeans'
model_filename = kmeans_models[0]
reloaded_model = joblib.load(model_filename)
else:
raise Exception('! No trained models detected')
exec_time = time.perf_counter() - start_time
app.logger.info(f'> model {model_filename} '
f'loaded in {exec_time:.5f} s \n')
return reloaded_model, model_type, model_filename
@app.route("/invocations", methods=["POST"])
def predict():
"""
Run CPU or GPU inference on input data,
called every time an incoming request arrives
"""
# parse user input
try:
if DEBUG_FLAG:
app.logger.debug(flask.request.headers)
app.logger.debug(flask.request.content_type)
app.logger.debug(flask.request.get_data())
string_data = json.loads(flask.request.get_data())
query_data = numpy.array(string_data)
except Exception:
return Response(
response="Unable to parse input data"
"[ should be json/string encoded list of arrays ]",
status=415,
mimetype='text/csv'
)
# cached [reloading] of trained model to process incoming requests
reloaded_model, model_type, model_filename = load_trained_model()
try:
start_time = time.perf_counter()
if model_type == 'XGBoost':
app.logger.info('running inference using XGBoost model :'
f'{model_filename}')
if GPU_INFERENCE_FLAG:
predictions = reloaded_model.predict(query_data)
else:
dm_deserialized_data = xgboost.DMatrix(query_data)
predictions = reloaded_model.predict(dm_deserialized_data)
predictions = (predictions > xgboost_threshold) * 1.0
elif model_type == 'RandomForest':
app.logger.info('running inference using RandomForest model :'
f'{model_filename}')
if 'gpu' in model_filename and not GPU_INFERENCE_FLAG:
raise Exception('attempting to run CPU inference '
'on a GPU trained RandomForest model')
predictions = reloaded_model.predict(
query_data.astype('float32'))
elif model_type == 'KMeans':
app.logger.info('running inference using KMeans model :'
f'{model_filename}')
if 'gpu' in model_filename and not GPU_INFERENCE_FLAG:
raise Exception('attempting to run CPU inference '
'on a GPU trained KMeans model')
predictions = reloaded_model.predict(
query_data.astype('float32'))
app.logger.info(f'\n predictions: {predictions} \n')
exec_time = time.perf_counter() - start_time
app.logger.info(f' > inference finished in {exec_time:.5f} s \n')
# return predictions
return Response(response=json.dumps(predictions.tolist()),
status=200, mimetype='text/csv')
# error during inference
except Exception as inference_error:
app.logger.error(inference_error)
return Response(response=f"Inference failure: {inference_error}\n",
status=400, mimetype='text/csv')
# initial [non-cached] reload of trained model
reloaded_model, model_type, model_filename = load_trained_model()
# trigger start of Flask app
app.run(host="0.0.0.0", port=8080)
if __name__ == "__main__":
try:
serve()
sys.exit(0) # success exit code
except Exception:
traceback.print_exc()
sys.exit(-1) # failure exit code
"""
airline model inference test [ 3 non-late flights, and a one late flight ]
curl -X POST --header "Content-Type: application/json" --data '[[ 2019.0, 4.0, 12.0, 2.0, 3647.0, 20452.0, 30977.0, 33244.0, 1943.0, -9.0, 0.0, 75.0, 491.0 ], [0.6327389486117129, 0.4306956773589715, 0.269797132011095, 0.9802453595689266, 0.37114359481679515, 0.9916185580669782, 0.07909626511279289, 0.7329633329905694, 0.24776047025280235, 0.5692037733986525, 0.22905629196095134, 0.6247424302941754, 0.2589150304037847], [0.39624412725991653, 0.9227953615174843, 0.03561991722126401, 0.7718573109543159, 0.2700874862088877, 0.9410675866419298, 0.6185692299959633, 0.486955878112717, 0.18877072081876722, 0.8266565188148121, 0.7845597219675844, 0.6534800630725327, 0.97356320515559], [ 2018.0, 3.0, 9.0, 5.0, 2279.0, 20409.0, 30721.0, 31703.0, 733.0, 123.0, 1.0, 61.0, 200.0 ]]' http://0.0.0.0:8080/invocations
"""
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/workflows/MLWorkflowSingleCPU.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
import os
import pandas
import xgboost
import joblib
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
from MLWorkflow import MLWorkflow, timer_decorator
import logging
hpo_log = logging.getLogger('hpo_log')
class MLWorkflowSingleCPU(MLWorkflow):
""" Single-CPU Workflow """
def __init__(self, hpo_config):
hpo_log.info('Single-CPU Workflow')
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
@timer_decorator
def ingest_data(self):
""" Ingest dataset, CSV and Parquet supported """
if self.dataset_cache is not None:
hpo_log.info('> skipping ingestion, using cache')
return self.dataset_cache
if 'Parquet' in self.hpo_config.input_file_type:
hpo_log.info('> parquet data ingestion')
# assert isinstance(self.hpo_config.target_files, str)
filepath = self.hpo_config.target_files
dataset = pandas.read_parquet(filepath,
columns=self.hpo_config.dataset_columns, # noqa
engine='pyarrow')
elif 'CSV' in self.hpo_config.input_file_type:
hpo_log.info('> csv data ingestion')
if isinstance(self.hpo_config.target_files, list):
filepath = self.hpo_config.target_files[0]
elif isinstance(self.hpo_config.target_files, str):
filepath = self.hpo_config.target_files
dataset = pandas.read_csv(filepath,
names=self.hpo_config.dataset_columns,
dtype=self.hpo_config.dataset_dtype,
header=0)
hpo_log.info(f'\t dataset shape: {dataset.shape}')
self.dataset_cache = dataset
return dataset
@timer_decorator
def handle_missing_data(self, dataset):
""" Drop samples with missing data [ inplace ] """
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with sklearn KFold
"""
hpo_log.info('> train-test split')
label_column = self.hpo_config.label_column
X_train, X_test, y_train, y_test = \
train_test_split(dataset.loc[:, dataset.columns != label_column],
dataset[label_column], random_state=random_state)
return (X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype))
@timer_decorator
def fit(self, X_train, y_train):
""" Fit decision tree model """
if 'XGBoost' in self.hpo_config.model_type:
hpo_log.info('> fit xgboost model')
dtrain = xgboost.DMatrix(data=X_train, label=y_train)
num_boost_round = self.hpo_config.model_params['num_boost_round']
trained_model = xgboost.train(dtrain=dtrain,
params=self.hpo_config.model_params,
num_boost_round=num_boost_round)
elif 'RandomForest' in self.hpo_config.model_type:
hpo_log.info('> fit randomforest model')
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params['n_estimators'],
max_depth=self.hpo_config.model_params['max_depth'],
max_features=self.hpo_config.model_params['max_features'],
bootstrap=self.hpo_config.model_params['bootstrap'],
n_jobs=-1
).fit(X_train, y_train)
elif 'KMeans' in self.hpo_config.model_type:
hpo_log.info('> fit kmeans model')
trained_model = KMeans(
n_clusters=self.hpo_config.model_params['n_clusters'],
max_iter=self.hpo_config.model_params['max_iter'],
random_state=self.hpo_config.model_params['random_state'],
init=self.hpo_config.model_params['init']
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
""" Inference with the trained model on the unseen test data """
hpo_log.info('> predict with trained model ')
if 'XGBoost' in self.hpo_config.model_type:
dtest = xgboost.DMatrix(X_test)
predictions = trained_model.predict(dtest)
predictions = (predictions > threshold) * 1.0
elif 'RandomForest' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
elif 'KMeans' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
return predictions
@timer_decorator
def score(self, y_test, predictions):
""" Score predictions vs ground truth labels on test data """
dataset_dtype = self.hpo_config.dataset_dtype
score = accuracy_score(y_test.astype(dataset_dtype),
predictions.astype(dataset_dtype))
hpo_log.info(f'\t score = {score}')
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename='saved_model'):
""" Persist/save model that sets a new high score """
if score > self.best_score:
self.best_score = score
hpo_log.info('> saving high-scoring model')
output_filename = os.path.join(
self.hpo_config.model_store_directory,
filename
)
if 'XGBoost' in self.hpo_config.model_type:
trained_model.save_model(f'{output_filename}_scpu_xgb')
elif 'RandomForest' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_scpu_rf')
elif 'KMeans' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_scpu_kmeans')
def cleanup(self, i_fold):
hpo_log.info('> end of fold \n')
def emit_final_score(self):
""" Emit score for parsing by the cloud HPO orchestrator """
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f'total_time = {exec_time:.5f} s ')
if self.hpo_config.cv_folds > 1:
hpo_log.info(f'fold scores : {self.cv_fold_scores} \n')
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f'final-score: {final_score}; \n')
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/workflows/MLWorkflowMultiCPU.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
import os
import dask
from dask.distributed import LocalCluster, Client, wait
import xgboost
import joblib
from dask_ml.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
import logging
import warnings
from MLWorkflow import MLWorkflow, timer_decorator
hpo_log = logging.getLogger('hpo_log')
warnings.filterwarnings("ignore")
class MLWorkflowMultiCPU(MLWorkflow):
""" Multi-CPU Workflow """
def __init__(self, hpo_config):
hpo_log.info('Multi-CPU Workflow')
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
self.cluster, self.client = self.cluster_initialize()
@timer_decorator
def cluster_initialize(self):
""" Initialize dask CPU cluster """
cluster = None
client = None
self.n_workers = os.cpu_count()
cluster = LocalCluster(n_workers=self.n_workers)
client = Client(cluster)
hpo_log.info(f'dask multi-CPU cluster with {self.n_workers} workers ')
dask.config.set({
'temporary_directory': self.hpo_config.output_artifacts_directory,
'logging': {'loggers': {'distributed.nanny': {'level': 'CRITICAL'}}} # noqa
})
return cluster, client
def ingest_data(self):
""" Ingest dataset, CSV and Parquet supported """
if self.dataset_cache is not None:
hpo_log.info('> skipping ingestion, using cache')
return self.dataset_cache
if 'Parquet' in self.hpo_config.input_file_type:
hpo_log.info('> parquet data ingestion')
dataset = dask.dataframe.read_parquet(
self.hpo_config.target_files,
columns=self.hpo_config.dataset_columns
)
elif 'CSV' in self.hpo_config.input_file_type:
hpo_log.info('> csv data ingestion')
dataset = dask.dataframe.read_csv(
self.hpo_config.target_files,
names=self.hpo_config.dataset_columns,
dtype=self.hpo_config.dataset_dtype,
header=0
)
hpo_log.info(f'\t dataset len: {len(dataset)}')
self.dataset_cache = dataset
return dataset
def handle_missing_data(self, dataset):
""" Drop samples with missing data [ inplace ] """
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with dask_ml KFold
"""
hpo_log.info('> train-test split')
label_column = self.hpo_config.label_column
train, test = train_test_split(dataset, random_state=random_state)
# build X [ features ], y [ labels ] for the train and test subsets
y_train = train[label_column]
X_train = train.drop(label_column, axis=1)
y_test = test[label_column]
X_test = test.drop(label_column, axis=1)
# persist
X_train = X_train.persist()
y_train = y_train.persist()
wait([X_train, y_train])
return (X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype))
@timer_decorator
def fit(self, X_train, y_train):
""" Fit decision tree model """
if 'XGBoost' in self.hpo_config.model_type:
hpo_log.info('> fit xgboost model')
dtrain = xgboost.dask.DaskDMatrix(self.client, X_train, y_train)
num_boost_round = self.hpo_config.model_params['num_boost_round']
xgboost_output = xgboost.dask.train(
self.client,
self.hpo_config.model_params,
dtrain,
num_boost_round=num_boost_round
)
trained_model = xgboost_output['booster']
elif 'RandomForest' in self.hpo_config.model_type:
hpo_log.info('> fit randomforest model')
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params['n_estimators'],
max_depth=self.hpo_config.model_params['max_depth'],
max_features=self.hpo_config.model_params['max_features'],
n_jobs=-1
).fit(X_train, y_train.astype('int32'))
elif 'KMeans' in self.hpo_config.model_type:
hpo_log.info('> fit kmeans model')
trained_model = KMeans(
n_clusters=self.hpo_config.model_params['n_clusters'],
max_iter=self.hpo_config.model_params['max_iter'],
random_state=self.hpo_config.model_params['random_state'],
init=self.hpo_config.model_params['init'],
n_jobs=-1 # Deprecated since version 0.23 and will be removed in 1.0 (renaming of 0.25)
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
""" Inference with the trained model on the unseen test data """
hpo_log.info('> predict with trained model ')
if 'XGBoost' in self.hpo_config.model_type:
dtest = xgboost.dask.DaskDMatrix(self.client, X_test)
predictions = xgboost.dask.predict(
self.client,
trained_model,
dtest
)
predictions = (predictions > threshold) * 1.0
elif 'RandomForest' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
elif 'KMeans' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
return predictions
@timer_decorator
def score(self, y_test, predictions):
""" Score predictions vs ground truth labels on test data """
hpo_log.info('> score predictions')
score = accuracy_score(
y_test.astype(self.hpo_config.dataset_dtype),
predictions.astype(self.hpo_config.dataset_dtype)
)
hpo_log.info(f'\t score = {score}')
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename='saved_model'):
""" Persist/save model that sets a new high score """
if score > self.best_score:
self.best_score = score
hpo_log.info('> saving high-scoring model')
output_filename = os.path.join(
self.hpo_config.model_store_directory,
filename
)
if 'XGBoost' in self.hpo_config.model_type:
trained_model.save_model(f'{output_filename}_mcpu_xgb')
elif 'RandomForest' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_mcpu_rf')
elif 'KMeans' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_mcpu_kmeans')
@timer_decorator
async def cleanup(self, i_fold):
"""
Close and restart the cluster when multiple cross validation
folds are used to prevent memory creep.
"""
if i_fold == self.hpo_config.cv_folds - 1:
hpo_log.info('> done all folds; closing cluster\n')
await self.client.close()
await self.cluster.close()
elif i_fold < self.hpo_config.cv_folds - 1:
hpo_log.info('> end of fold; reinitializing cluster\n')
await self.client.close()
await self.cluster.close()
self.cluster, self.client = self.cluster_initialize()
def emit_final_score(self):
""" Emit score for parsing by the cloud HPO orchestrator """
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f'total_time = {exec_time:.5f} s ')
if self.hpo_config.cv_folds > 1:
hpo_log.info(f'fold scores : {self.cv_fold_scores} \n')
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f'final-score: {final_score}; \n')
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/workflows/MLWorkflowMultiGPU.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
import os
import dask
import dask_cudf
from dask_cuda import LocalCUDACluster
from dask.distributed import wait, Client
import cupy
import xgboost
import joblib
from dask_ml.model_selection import train_test_split
from cuml.dask.common.utils import persist_across_workers
from cuml.dask.ensemble import RandomForestClassifier
from cuml.dask.cluster import KMeans
from cuml.metrics import accuracy_score
from MLWorkflow import MLWorkflow, timer_decorator
import logging
import warnings
hpo_log = logging.getLogger('hpo_log')
warnings.filterwarnings("ignore")
class MLWorkflowMultiGPU(MLWorkflow):
""" Multi-GPU Workflow """
def __init__(self, hpo_config):
hpo_log.info('Multi-GPU Workflow')
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
self.cluster, self.client = self.cluster_initialize()
@timer_decorator
def cluster_initialize(self):
""" Initialize dask GPU cluster"""
cluster = None
client = None
self.n_workers = cupy.cuda.runtime.getDeviceCount()
cluster = LocalCUDACluster(n_workers=self.n_workers)
client = Client(cluster)
hpo_log.info(f'dask multi-GPU cluster with {self.n_workers} workers ')
dask.config.set({
'temporary_directory': self.hpo_config.output_artifacts_directory,
'logging': {'loggers': {'distributed.nanny': {'level': 'CRITICAL'}}} # noqa
})
return cluster, client
def ingest_data(self):
""" Ingest dataset, CSV and Parquet supported [ async/lazy ]"""
if self.dataset_cache is not None:
hpo_log.info('> skipping ingestion, using cache')
return self.dataset_cache
if 'Parquet' in self.hpo_config.input_file_type:
hpo_log.info('> parquet data ingestion')
dataset = dask_cudf.read_parquet(
self.hpo_config.target_files,
columns=self.hpo_config.dataset_columns
)
elif 'CSV' in self.hpo_config.input_file_type:
hpo_log.info('> csv data ingestion')
dataset = dask_cudf.read_csv(
self.hpo_config.target_files,
names=self.hpo_config.dataset_columns,
header=0
)
hpo_log.info(f'\t dataset len: {len(dataset)}')
self.dataset_cache = dataset
return dataset
def handle_missing_data(self, dataset):
""" Drop samples with missing data [ inplace ] """
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with dask_ml KFold
"""
hpo_log.info('> train-test split')
label_column = self.hpo_config.label_column
train, test = train_test_split(dataset, random_state=random_state)
# build X [ features ], y [ labels ] for the train and test subsets
y_train = train[label_column]
X_train = train.drop(label_column, axis=1)
y_test = test[label_column]
X_test = test.drop(label_column, axis=1)
# force execution
X_train, y_train, X_test, y_test = persist_across_workers(
self.client,
[X_train, y_train, X_test, y_test],
workers=self.client.has_what().keys()
)
# wait!
wait([X_train, y_train, X_test, y_test])
return (X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype))
@timer_decorator
def fit(self, X_train, y_train):
""" Fit decision tree model """
if 'XGBoost' in self.hpo_config.model_type:
hpo_log.info('> fit xgboost model')
dtrain = xgboost.dask.DaskDMatrix(self.client, X_train, y_train)
num_boost_round = self.hpo_config.model_params['num_boost_round']
xgboost_output = xgboost.dask.train(
self.client,
self.hpo_config.model_params, dtrain,
num_boost_round=num_boost_round
)
trained_model = xgboost_output['booster']
elif 'RandomForest' in self.hpo_config.model_type:
hpo_log.info('> fit randomforest model')
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params['n_estimators'],
max_depth=self.hpo_config.model_params['max_depth'],
max_features=self.hpo_config.model_params['max_features'],
n_bins=self.hpo_config.model_params['n_bins']
).fit(X_train, y_train.astype('int32'))
elif 'KMeans' in self.hpo_config.model_type:
hpo_log.info('> fit kmeans model')
trained_model = KMeans(
n_clusters=self.hpo_config.model_params['n_clusters'],
max_iter=self.hpo_config.model_params['max_iter'],
random_state=self.hpo_config.model_params['random_state'],
init=self.hpo_config.model_params['init']
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
""" Inference with the trained model on the unseen test data """
hpo_log.info('> predict with trained model ')
if 'XGBoost' in self.hpo_config.model_type:
dtest = xgboost.dask.DaskDMatrix(self.client, X_test)
predictions = xgboost.dask.predict(
self.client,
trained_model,
dtest
).compute()
predictions = (predictions > threshold) * 1.0
elif 'RandomForest' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test).compute()
elif 'KMeans' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test).compute()
return predictions
@timer_decorator
def score(self, y_test, predictions):
""" Score predictions vs ground truth labels on test data """
hpo_log.info('> score predictions')
y_test = y_test.compute()
score = accuracy_score(
y_test.astype(self.hpo_config.dataset_dtype),
predictions.astype(self.hpo_config.dataset_dtype)
)
hpo_log.info(f'\t score = {score}')
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename='saved_model'):
""" Persist/save model that sets a new high score """
if score > self.best_score:
self.best_score = score
hpo_log.info('> saving high-scoring model')
output_filename = os.path.join(
self.hpo_config.model_store_directory,
filename
)
if 'XGBoost' in self.hpo_config.model_type:
trained_model.save_model(f'{output_filename}_mgpu_xgb')
elif 'RandomForest' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_mgpu_rf')
elif 'KMeans' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_mgpu_kmeans')
@timer_decorator
async def cleanup( self, i_fold):
"""
Close and restart the cluster when multiple cross validation folds
are used to prevent memory creep.
"""
if i_fold == self.hpo_config.cv_folds - 1:
hpo_log.info('> done all folds; closing cluster')
await self.client.close()
await self.cluster.close()
elif i_fold < self.hpo_config.cv_folds - 1:
hpo_log.info('> end of fold; reinitializing cluster')
await self.client.close()
await self.cluster.close()
self.cluster, self.client = self.cluster_initialize()
def emit_final_score(self):
""" Emit score for parsing by the cloud HPO orchestrator """
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f'total_time = {exec_time:.5f} s ')
if self.hpo_config.cv_folds > 1:
hpo_log.info(f'fold scores : {self.cv_fold_scores}')
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f'final-score: {final_score};')
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/workflows/MLWorkflowSingleGPU.py
|
#
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import time
import os
import cudf
import xgboost
import joblib
from cuml.model_selection import train_test_split
from cuml.ensemble import RandomForestClassifier
from cuml.cluster import KMeans
from cuml.metrics import accuracy_score
from MLWorkflow import MLWorkflow, timer_decorator
import logging
hpo_log = logging.getLogger('hpo_log')
class MLWorkflowSingleGPU(MLWorkflow):
""" Single-GPU Workflow """
def __init__(self, hpo_config):
hpo_log.info('Single-GPU Workflow \n')
self.start_time = time.perf_counter()
self.hpo_config = hpo_config
self.dataset_cache = None
self.cv_fold_scores = []
self.best_score = -1
@timer_decorator
def ingest_data(self):
""" Ingest dataset, CSV and Parquet supported """
if self.dataset_cache is not None:
hpo_log.info('skipping ingestion, using cache')
return self.dataset_cache
if 'Parquet' in self.hpo_config.input_file_type:
dataset = cudf.read_parquet(self.hpo_config.target_files,
columns=self.hpo_config.dataset_columns) # noqa
elif 'CSV' in self.hpo_config.input_file_type:
if isinstance(self.hpo_config.target_files, list):
filepath = self.hpo_config.target_files[0]
elif isinstance(self.hpo_config.target_files, str):
filepath = self.hpo_config.target_files
hpo_log.info(self.hpo_config.dataset_columns)
dataset = cudf.read_csv(filepath,
names=self.hpo_config.dataset_columns,
header=0)
hpo_log.info(f'ingested {self.hpo_config.input_file_type} dataset;'
f' shape = {dataset.shape}')
self.dataset_cache = dataset
return dataset
@timer_decorator
def handle_missing_data(self, dataset):
""" Drop samples with missing data [ inplace ] """
dataset = dataset.dropna()
return dataset
@timer_decorator
def split_dataset(self, dataset, random_state):
"""
Split dataset into train and test data subsets,
currently using CV-fold index for randomness.
Plan to refactor with sklearn KFold
"""
hpo_log.info('> train-test split')
label_column = self.hpo_config.label_column
X_train, X_test, y_train, y_test = \
train_test_split(dataset, label_column,
random_state=random_state)
return (X_train.astype(self.hpo_config.dataset_dtype),
X_test.astype(self.hpo_config.dataset_dtype),
y_train.astype(self.hpo_config.dataset_dtype),
y_test.astype(self.hpo_config.dataset_dtype))
@timer_decorator
def fit(self, X_train, y_train):
""" Fit decision tree model """
if 'XGBoost' in self.hpo_config.model_type:
hpo_log.info('> fit xgboost model')
dtrain = xgboost.DMatrix(data=X_train, label=y_train)
num_boost_round = self.hpo_config.model_params['num_boost_round']
trained_model = xgboost.train(dtrain=dtrain,
params=self.hpo_config.model_params,
num_boost_round=num_boost_round)
elif 'RandomForest' in self.hpo_config.model_type:
hpo_log.info('> fit randomforest model')
trained_model = RandomForestClassifier(
n_estimators=self.hpo_config.model_params['n_estimators'],
max_depth=self.hpo_config.model_params['max_depth'],
max_features=self.hpo_config.model_params['max_features'],
n_bins=self.hpo_config.model_params['n_bins']
).fit(X_train, y_train.astype('int32'))
elif 'KMeans' in self.hpo_config.model_type:
hpo_log.info('> fit kmeans model')
trained_model = KMeans(
n_clusters=self.hpo_config.model_params['n_clusters'],
max_iter=self.hpo_config.model_params['max_iter'],
random_state=self.hpo_config.model_params['random_state'],
init=self.hpo_config.model_params['init']
).fit(X_train)
return trained_model
@timer_decorator
def predict(self, trained_model, X_test, threshold=0.5):
""" Inference with the trained model on the unseen test data """
hpo_log.info('predict with trained model ')
if 'XGBoost' in self.hpo_config.model_type:
dtest = xgboost.DMatrix(X_test)
predictions = trained_model.predict(dtest)
predictions = (predictions > threshold) * 1.0
elif 'RandomForest' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
elif 'KMeans' in self.hpo_config.model_type:
predictions = trained_model.predict(X_test)
return predictions
@timer_decorator
def score(self, y_test, predictions):
""" Score predictions vs ground truth labels on test data """
dataset_dtype = self.hpo_config.dataset_dtype
score = accuracy_score(y_test.astype(dataset_dtype),
predictions.astype(dataset_dtype))
hpo_log.info(f'score = {round(score,5)}')
self.cv_fold_scores.append(score)
return score
def save_best_model(self, score, trained_model, filename='saved_model'):
""" Persist/save model that sets a new high score """
if score > self.best_score:
self.best_score = score
hpo_log.info('saving high-scoring model')
output_filename = os.path.join(
self.hpo_config.model_store_directory,
filename
)
if 'XGBoost' in self.hpo_config.model_type:
trained_model.save_model(f'{output_filename}_sgpu_xgb')
elif 'RandomForest' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_sgpu_rf')
elif 'KMeans' in self.hpo_config.model_type:
joblib.dump(trained_model, f'{output_filename}_sgpu_kmeans')
def cleanup(self, i_fold):
hpo_log.info('end of cv-fold \n')
def emit_final_score(self):
""" Emit score for parsing by the cloud HPO orchestrator """
exec_time = time.perf_counter() - self.start_time
hpo_log.info(f'total_time = {exec_time:.5f} s ')
if self.hpo_config.cv_folds > 1:
hpo_log.info(f'cv-fold scores : {self.cv_fold_scores} \n')
# average over CV folds
final_score = sum(self.cv_fold_scores) / len(self.cv_fold_scores)
hpo_log.info(f'final-score: {final_score}; \n')
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/local_testing/Dockerfile.14
|
FROM rapidsai/rapidsai:0.14-cuda11.0-base-ubuntu18.04-py3.7
ENV AWS_DATASET_DIRECTORY="1_year"
ENV AWS_ALGORITHM_CHOICE="XGBoost"
ENV AWS_ML_WORKFLOW_CHOICE="singleGPU"
ENV AWS_CV_FOLDS="3"
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids && pip3 install sagemaker-training \
&& conda install -c anaconda flask \
&& conda install -c conda-forge dask-ml
# path where SageMaker looks for code when container runs in the cloud
ENV AWS_CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $AWS_CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $AWS_CLOUD_PATH/entrypoint.sh
WORKDIR $AWS_CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/local_testing/build_and_run_local_hpo.sh
|
#!/usr/bin/env bash
# launch container with local directory paths mounted to mirror SageMaker
echo 'run RAPIDS HPO container with local directory mirroring SageMaker paths'
# --------------------------------
# decide what runs in this script
# --------------------------------
# test multiple configurations [ xgboost/rf and single/multi-cpu/gpu ]
RUN_TESTS_FLAG=true
# run HPO container in training mode
RUN_TRAINING_FLAG=true
# run HPO container in serving mode, with or without GPU inference
RUN_SERVING_FLAG=false
GPU_SERVING_FLAG=true
# --------------------------------
# directory and dataset choices
# --------------------------------
# SageMaker directory structure [ container internal] which we'll build on
SAGEMAKER_ROOT_DIR="/opt/ml"
# path to local directory which we'll set up to mirror cloud structure
LOCAL_TEST_DIR=~/local_sagemaker
# declare location of local Parquet and/or CSV datasets
CSV_DATA=/home/m/data/NYC_taxi
PARQUET_DATA=/home/m/data/1_year_2019
# by default script runs from /cloud-ml-examples/aws/code/local_testing
CODE_PATH=../
# expand relative to full paths for docker
LOCAL_TEST_DIR=$(realpath ${LOCAL_TEST_DIR})
# clear directories before adding code
rm -rf ${LOCAL_TEST_DIR}/code/*
rm -rf ${LOCAL_TEST_DIR}/output/*
# create directory structure to replicate SageMaker
mkdir -p ${LOCAL_TEST_DIR}/code
mkdir -p ${LOCAL_TEST_DIR}/code/workflows
mkdir -p ${LOCAL_TEST_DIR}/model
mkdir -p ${LOCAL_TEST_DIR}/output
mkdir -p ${LOCAL_TEST_DIR}/input/config
mkdir -p ${LOCAL_TEST_DIR}/input/data/training
# --------------------------------
# build container
# --------------------------------
# select which version of the RAPIDS container is used as base for HPO
if [ "$1" == "14" ]; then
# previous
RAPIDS_VERSION="14"
REPO_PREFIX="rapidsai/rapidsai"
CUDA_VERSION="10.2"
RUNTIME_OR_BASE="base"
elif [ "$1" == "16" ]; then
# next
RAPIDS_VERSION="16"
REPO_PREFIX="rapidsai/rapidsai-nightly"
CUDA_VERSION="11.0"
RUNTIME_OR_BASE="base"
else
# stable [ default ]
RAPIDS_VERSION="15"
REPO_PREFIX="rapidsai/rapidsai"
CUDA_VERSION="10.2"
RUNTIME_OR_BASE="base"
fi
DOCKERFILE_NAME="Dockerfile.$RAPIDS_VERSION"
CONTAINER_IMAGE="cloud-ml-sagemaker"
CONTAINER_TAG="0.$RAPIDS_VERSION-cuda$CUDA_VERSION-$RUNTIME_OR_BASE-ubuntu18.04-py3.7"
JUPYTER_PORT="8899"
# build the container locally
echo "pull build and tag container"
sudo docker pull ${REPO_PREFIX}:${CONTAINER_TAG}
sudo docker build ${CODE_PATH} --tag ${CONTAINER_IMAGE}:${CONTAINER_TAG} -f ${CODE_PATH}local_testing/${DOCKERFILE_NAME}
# copy custom logic into local folder
cp -r ${CODE_PATH} ${LOCAL_TEST_DIR}/code
# --------------------------------
# launch command
# --------------------------------
function launch_container {
# train or serve
RUN_COMMAND=${1-"train"}
# mounted dataset choice
LOCAL_DATA_DIR=${2:-$PARQUET_DATA}
# configuration settings
AWS_DATASET_DIRECTORY=${3:-"1_year"}
AWS_ALGORITHM_CHOICE=${4:-"xgboost"}
AWS_ML_WORKFLOW_CHOICE=${5:-"singlegpu"}
# GPUs en/dis-abled within container
GPU_ENABLED_FLAG=${6:-true}
AWS_CV_FOLDS=${7:-"1"}
JOB_NAME="local-test"
# select whether GPUs are enabled
if $GPU_ENABLED_FLAG; then
GPU_ENUMERATION="--gpus all"
else
GPU_ENUMERATION=""
fi
sudo docker run --rm -it \
${GPU_ENUMERATION} \
-p $JUPYTER_PORT:8888 -p 8080:8080 \
--env SM_TRAINING_ENV='{"job_name":''"'${JOB_NAME}'"''}'\
--env AWS_DATASET_DIRECTORY=${AWS_DATASET_DIRECTORY} \
--env AWS_ALGORITHM_CHOICE=${AWS_ALGORITHM_CHOICE} \
--env AWS_ML_WORKFLOW_CHOICE=${AWS_ML_WORKFLOW_CHOICE} \
--env AWS_CV_FOLDS=${AWS_CV_FOLDS} \
-v ${LOCAL_TEST_DIR}:${SAGEMAKER_ROOT_DIR} \
-v ${LOCAL_DATA_DIR}:${SAGEMAKER_ROOT_DIR}/input/data/training \
--workdir ${SAGEMAKER_ROOT_DIR}/code \
${CONTAINER_IMAGE}:${CONTAINER_TAG} ${RUN_COMMAND}
}
# --------------------------------
# test definitions
# --------------------------------
function test_multiple_configurations {
# dataset
for idataset in {1..2}
do
if (( $idataset==1 )); then
DATASET_CHOICE=$PARQUET_DATA
AWS_DATASET_DIRECTORY="1_year"
else
DATASET_CHOICE=$CSV_DATA
AWS_DATASET_DIRECTORY="nyc_taxi"
fi
echo "dataset directory: ${AWS_DATASET_DIRECTORY}"
# algorithm
for ialgorithm in {1..2}
do
if (( $ialgorithm==1 )); then
AWS_ALGORITHM_CHOICE="xgboost"
else
AWS_ALGORITHM_CHOICE="randomforest"
fi
# workflow
for iworkflow in {1..4}
do
if (( $iworkflow==1 )); then
AWS_ML_WORKFLOW_CHOICE="singlegpu"
GPU_ENABLED_FLAG=true
elif (( $iworkflow==2 )); then
AWS_ML_WORKFLOW_CHOICE="multigpu"
GPU_ENABLED_FLAG=true
elif (( $iworkflow==3 )); then
AWS_ML_WORKFLOW_CHOICE="singlecpu"
GPU_ENABLED_FLAG=false
elif (( $iworkflow==4 )); then
AWS_ML_WORKFLOW_CHOICE="multicpu"
GPU_ENABLED_FLAG=false
fi
echo -e "----------------------------------------------\n"
echo -e " starting test "
echo -e "----------------------------------------------\n"
launch_container "train" $DATASET_CHOICE $AWS_DATASET_DIRECTORY $AWS_ALGORITHM_CHOICE $AWS_ML_WORKFLOW_CHOICE $GPU_ENABLED_FLAG
echo -e "--- end of test #${iconfig} ---\n"
done
done
done
return
}
# --------------------------------
# execute selected choices
# --------------------------------
# launch container in multiple configurations
if $RUN_TESTS_FLAG; then
test_multiple_configurations
fi
# launch container in training mode
if $RUN_TRAINING_FLAG; then
# delete previous models if re-training
rm -rf ${LOCAL_TEST_DIR}/model/*
launch_container "train"
fi
# launch container in serving mode
if $RUN_SERVING_FLAG; then
launch_container "serve" "" "${GPU_SERVING_FLAG}"
fi
exit
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/local_testing/Dockerfile.16
|
FROM rapidsai/rapidsai-nightly:0.16-cuda11.0-base-ubuntu18.04-py3.7
ENV AWS_DATASET_DIRECTORY="1_year"
ENV AWS_ALGORITHM_CHOICE="XGBoost"
ENV AWS_ML_WORKFLOW_CHOICE="singleGPU"
ENV AWS_CV_FOLDS="3"
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids && pip3 install sagemaker-training \
&& conda install -c anaconda flask \
&& conda install -c conda-forge dask-ml
# path where SageMaker looks for code when container runs in the cloud
ENV AWS_CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $AWS_CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $AWS_CLOUD_PATH/entrypoint.sh
WORKDIR $AWS_CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws/code
|
rapidsai_public_repos/cloud-ml-examples/aws/code/local_testing/Dockerfile.15
|
FROM rapidsai/rapidsai:0.15-cuda11.0-base-ubuntu18.04-py3.7
ENV AWS_DATASET_DIRECTORY="1_year"
ENV AWS_ALGORITHM_CHOICE="XGBoost"
ENV AWS_ML_WORKFLOW_CHOICE="singleGPU"
ENV AWS_CV_FOLDS="3"
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
# add sagemaker-training-toolkit [ requires build tools ], flask [ serving ], and dask-ml
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
&& source activate rapids && pip3 install sagemaker-training \
&& conda install -c anaconda flask \
&& conda install -c conda-forge dask-ml
# path where SageMaker looks for code when container runs in the cloud
ENV AWS_CLOUD_PATH="/opt/ml/code"
# copy our latest [local] code into the container
COPY . $AWS_CLOUD_PATH
# make the entrypoint script executable
RUN chmod +x $AWS_CLOUD_PATH/entrypoint.sh
WORKDIR $AWS_CLOUD_PATH
ENTRYPOINT ["./entrypoint.sh"]
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/environment_setup/README.md
|
## **Augment SageMaker with a RAPIDS Conda Kernel**
This section describes the process required to augment a SageMaker notebook instance with a RAPIDS conda environment.
The RAPIDS Ops team builds and publishes the latest RAPIDS release as a packed conda tarball.
> e.g.: https://data.rapids.ai/conda-pack/rapidsai/rapids22.06_cuda11.5_py3.9.tar.gz
We will use this packed conda environment to augment the set of Jupyter ipython kernels available in our SageMaker notebook instance.
The key steps of this are as follows:
1. During SageMaker Notebook Instance Startup
- Select a RAPIDS compatible GPU as the SageMaker Notebook instance type (e.g., ml.p3.2xlarge)
- Attach the lifecycle configuration (via the 'Additional Options' dropdown) provided in this directory
2. Launch the instance
3. Once Jupyter is accessible, select the 'rapids-XX' kernel when working with a new notebook (a quick sanity check is shown below).
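After the instance starts, it can help to confirm the environment unpacked correctly from a terminal on the notebook instance. This is a minimal sketch that assumes the lifecycle configuration unpacked the environment to `/home/ec2-user/rapids_kernel` (as in the lifecycle script in this directory):
```bash
# activate the unpacked RAPIDS environment and confirm the libraries import cleanly
source /home/ec2-user/rapids_kernel/bin/activate
python -c "import cudf, cuml; print('RAPIDS', cudf.__version__)"
```
If the import succeeds, the kernel registered by the lifecycle script (e.g., `rapids-2206`) should appear in the Jupyter launcher.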
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/environment_setup/lifecycle_script
|
#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
mkdir -p rapids_kernel
cd rapids_kernel
wget -q https://data.rapids.ai/conda-pack/rapidsai/rapids22.06_cuda11.5_py3.8.tar.gz
echo "wget completed"
tar -xzf *.gz
echo "unzip completed"
source /home/ec2-user/rapids_kernel/bin/activate
conda-unpack
echo "unpack completed"
# optionally install AutoGluon for AutoML GPU demo
# source /home/ec2-user/rapids_kernel/bin/activate && pip install --pre autogluon
python -m ipykernel install --user --name rapids-2206
echo "kernel install completed"
EOF
| 0 |
rapidsai_public_repos/cloud-ml-examples/aws
|
rapidsai_public_repos/cloud-ml-examples/aws/autogluon/autogluon_airline.ipynb
|
import warnings
warnings.filterwarnings('ignore')

from autogluon.tabular import TabularDataset, TabularPredictor
from autogluon.core.utils import generate_train_test_split

path_prefix = 'https://sagemaker-rapids-hpo-us-west-2.s3-us-west-2.amazonaws.com/autogluon/'
path_train = path_prefix + 'train_data.parquet'
data = TabularDataset(path_train)
data

LABEL = 'target'
SAMPLE = 1_000_000

if SAMPLE is not None and SAMPLE < len(data):
    data = data.sample(n=SAMPLE, random_state=0)

data.shape

train_data, test_data, train_labels, test_labels = generate_train_test_split(
    X=data.drop(LABEL, axis=1),
    y=data[LABEL],
    problem_type='binary',
    test_size=0.1
)
train_data[LABEL] = train_labels
test_data[LABEL] = test_labels

from autogluon.tabular.models.rf.rf_rapids_model import RFRapidsModel
from autogluon.tabular.models.knn.knn_rapids_model import KNNRapidsModel
from autogluon.tabular.models.lr.lr_rapids_model import LinearRapidsModel

predictor = TabularPredictor(
    label=LABEL,
    verbosity=3,
).fit(
    train_data=train_data,
    hyperparameters={
        KNNRapidsModel : {},
        LinearRapidsModel : {},
        RFRapidsModel : {'n_estimators': 100},
        'XGB': {'ag_args_fit': {'num_gpus': 1}, 'tree_method': 'gpu_hist', 'ag.early_stop': 10000},
    },
    time_limit=2000,
)

leaderboard = predictor.leaderboard()
leaderboard = predictor.leaderboard(test_data)
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/ci/axis.yaml
|
CUDA_VER:
- "11.5"
- "11.2"
- "11.0"
IMG_TYPE:
- base
LINUX_VER:
- ubuntu20.04
PYTHON_VER:
- "3.9"
RAPIDS_VER:
- "22.10"
| 0 |
rapidsai_public_repos/cloud-ml-examples
|
rapidsai_public_repos/cloud-ml-examples/ci/run.sh
|
#!/bin/bash
set -e
# Overwrite HOME to WORKSPACE
export HOME=$WORKSPACE
# Install gpuCI tools
curl -s https://raw.githubusercontent.com/rapidsai/gpuci-tools/main/install.sh | bash
source ~/.bashrc
cd ~
# Set vars
export DOCKER_IMG="rapidsai/rapidsai-cloud-ml"
export DOCKER_TAG="${RAPIDS_VER}-cuda${CUDA_VER}-${IMG_TYPE}-${LINUX_VER}-py${PYTHON_VER}"
export DOCKERFILE="common/docker/Dockerfile.training.unified"
# Show env
gpuci_logger "Exposing current environment..."
env
# Print dockerfile
gpuci_logger ">>>> BEGIN Dockerfile <<<<"
cat ${DOCKERFILE}
gpuci_logger ">>>> END Dockerfile <<<<"
# Docker Login
echo "${DH_TOKEN}" | docker login --username "${DH_USER}" --password-stdin
# Build Image
gpuci_logger "Starting build..."
set -x # Print build command
docker build \
--pull \
--squash \
--build-arg "RAPIDS_VER=${RAPIDS_VER}" \
--build-arg "CUDA_VER=${CUDA_VER}" \
--build-arg "IMG_TYPE=${IMG_TYPE}" \
--build-arg "LINUX_VER=${LINUX_VER}" \
--build-arg "PYTHON_VER=${PYTHON_VER}" \
-t "${DOCKER_IMG}:${DOCKER_TAG}" \
-f "${DOCKERFILE}" \
.
set +x
# List image info
gpuci_logger "Displaying image info..."
docker images ${DOCKER_IMG}:${DOCKER_TAG}
# Upload image
gpuci_logger "Starting upload..."
GPUCI_RETRY_MAX=5
GPUCI_RETRY_SLEEP=120
gpuci_retry docker push ${DOCKER_IMG}:${DOCKER_TAG}
if [ "$DOCKER_TAG" = "${RAPIDS_VER}-cuda11.5-base-ubuntu20.04-py3.8" ]; then
docker tag ${DOCKER_IMG}:${DOCKER_TAG} ${DOCKER_IMG}:latest
gpuci_retry docker push ${DOCKER_IMG}:latest
fi
| 0 |
rapidsai_public_repos/cloud-ml-examples/common
|
rapidsai_public_repos/cloud-ml-examples/common/code/create_packed_conda_env
|
#!/usr/bin/env bash
# Copyright (c) 2019-2021, NVIDIA CORPORATION.
# RAPIDS conda packing script
# This script creates (packs) or unpacks a RAPIDS conda environment,
# and can be called with various options to customize the environment
# as needed (see the help output for details)
# Abort script on first error
set -e
NUMARGS=$#
ARGS=$*
VALIDARGS="-h --help help -v --verbose --action --cuda --python --rapids --rapids-channel --time-actions"
HELP="$0 [<target> ...] [<flag> ...]
-v, --verbose - verbose build mode
-h, --help - print this text
--action [pack|unpack] - action to take (default: pack)
--cuda [version] - cuda version to install (default: 11.5)
--python [version] - python version to install (default: 3.9)
--rapids [version] - rapids version to install (default: 22.10)
--rapids-channel [ch] - rapids channel to install from [rapidsai|rapidsai-nightly] (default: rapidsai)
--unpack-path [path] - path where we should unpack the conda environment
requires '--action unpack' (default: ./rapids_[rapids version]_py[python version])
"
#--time-actions [flag] - flag indicating if commands should include timing information [0|1] (default: 0)
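# Example invocations (a sketch; the unpack example assumes the packed tarball
# produced by the pack step is in the current working directory):
# ./create_packed_conda_env --action pack --rapids 22.10 --python 3.9 --cuda 11.5
# ./create_packed_conda_env --action unpack --rapids 22.10 --python 3.9 --unpack-path ./rapids_22.10_py3.9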
VERBOSE=0
ACTIVATE=$(dirname `which conda`)/../bin/activate
declare -A argvals
argvals["--action"]="pack"
argvals["--cuda"]="11.5"
argvals["--python"]="3.9"
argvals["--rapids"]="22.10"
argvals["--rapids-channel"]="rapidsai"
function usage() {
echo "Usage: $HELP"
}
function hasArg {
(( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}
if hasArg -h || hasArg --help || hasArg help; then
echo "${HELP}"
exit 0
fi
if hasArg -v || hasArg --verbose; then
VERBOSE=1
fi
# Check for valid usage and process arguments
if (( ${NUMARGS} != 0 )); then
idx=0
prev=""
for arg in $ARGS; do
if ! (echo " ${VALIDARGS} " | grep -q " ${arg} "); then
if [[ ${arg} == -* ]]; then
echo "Option $idx is invalid: ${arg}"
exit 1
else
if (( $VERBOSE == 1 )); then
echo "Setting $prev value as $arg"
fi
argvals["$prev"]="$arg"
fi
fi
prev=$arg
let idx=idx+1
done
fi
argvals["--unpack-path"]="./rapids_${argvals["--rapids"]}_py${argvals["--python"]}"
ACTION=${argvals["--action"]}
CUDA_VERSION=${argvals["--cuda"]}
PYTHON_VERSION=${argvals["--python"]}
RAPIDS_VERSION=${argvals["--rapids"]}
RAPIDS_CHANNEL=${argvals["--rapids-channel"]}
UNPACK_PATH=${argvals["--unpack-path"]}
CONDA_ENV_NAME="rapids${RAPIDS_VERSION}_py${PYTHON_VERSION}"
if [[ "$ACTION" == "pack" ]]; then
echo "Creating CONDA environment $CONDA_ENV_NAME"
conda create -y --name=$CONDA_ENV_NAME python=$PYTHON_VERSION
source $ACTIVATE $CONDA_ENV_NAME
echo "Installing conda-pack"
pip install ipykernel
conda install -y -c conda-forge conda-pack
echo "Installing RAPIDS libraries (this can take a while)"
time conda install -y -c $RAPIDS_CHANNEL -c nvidia -c conda-forge \
rapids=$RAPIDS_VERSION python=$PYTHON_VERSION cudatoolkit=$CUDA_VERSION
echo "Packing conda environment"
conda-pack -n $CONDA_ENV_NAME -o ${CONDA_ENV_NAME}.tar.gz
else
echo "Unpacking into $UNPACK_PATH"
mkdir -p "$UNPACK_PATH"
tar -xzf ${CONDA_ENV_NAME}.tar.gz -C "$UNPACK_PATH"
echo "Updating conda environment"
source "$UNPACK_PATH/bin/activate"
conda-unpack
python -m ipykernel install --user --name $CONDA_ENV_NAME
fi
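# Example usage (illustrative; the versions shown are this script's defaults):
# Pack: creates rapids22.10_py3.9.tar.gz in the current directory
#   bash ./create_packed_conda_env --action pack --rapids 22.10 --python 3.9 --cuda 11.5
# Unpack: extracts the archive and registers a Jupyter kernel for it
#   bash ./create_packed_conda_env --action unpack --rapids 22.10 --python 3.9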
| 0 |
rapidsai_public_repos/cloud-ml-examples/common
|
rapidsai_public_repos/cloud-ml-examples/common/docker/DockerHubREADME.md
|
# RAPIDS Cloud Machine Learning
RAPIDS is a suite of open-source libraries that bring GPU acceleration to data science pipelines. Users building cloud-based machine learning experiments can take advantage of this acceleration throughout their workloads to build models faster, cheaper, and more easily on the cloud platform of their choice. The [cloud-ml-examples](https://github.com/rapidsai/cloud-ml-examples) repository provides example notebooks and "getting started" code samples, and this Docker repository provides a ready-to-run container with RAPIDS and the libraries/SDKs for AWS SageMaker, Azure ML, and Google AI Platform.
**NOTE:** Review our [prerequisites](#prerequisites) section to ensure your system meets the minimum requirements for RAPIDS.
### Current Version - RAPIDS v22.10
The RAPIDS images are based on [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda), and are intended to be drop-in replacements for the corresponding CUDA
images in order to make it easy to add RAPIDS libraries while maintaining support for existing CUDA applications.
### Image Tag Naming Scheme
The tag naming scheme for RAPIDS images incorporates key platform details into the tag as shown below:
```
22.06-cuda11.5-base-ubuntu20.04-py3.9
  ^      ^      ^        ^        ^
  |      |     type      |  python version
  |      |               |
  |  cuda version        |
  |                      |
RAPIDS version      linux version
```
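As an illustration of how a tag is used (this assumes the image is published as `rapidsai/rapidsai-cloud-ml`; substitute the repository name shown on this Docker Hub page if it differs):
```
docker pull rapidsai/rapidsai-cloud-ml:22.06-cuda11.5-base-ubuntu20.04-py3.9
docker run --gpus all --rm -it rapidsai/rapidsai-cloud-ml:22.06-cuda11.5-base-ubuntu20.04-py3.9
```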
## Prerequisites
- NVIDIA Pascal™ GPU architecture or better
- CUDA [11.0 - 11.5](https://developer.nvidia.com/cuda-downloads) with a compatible NVIDIA driver
- Ubuntu 20.04 or CentOS 7
- Docker CE v18+
- [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
## More Information
Check out the [RAPIDS HPO](https://rapids.ai/hpo.html) webpage for video tutorials and blog posts.
Please submit issues with the container to this GitHub repository: https://github.com/rapidsai/docker
For issues with cloud-ml-examples file an issue in: https://github.com/rapidsai/cloud-ml-examples
| 0 |
rapidsai_public_repos/cloud-ml-examples/common
|
rapidsai_public_repos/cloud-ml-examples/common/docker/Dockerfile.training.unified
|
ARG RAPIDS_VER="22.10"
ARG CUDA_VER="11.5"
ARG IMG_TYPE="base"
ARG LINUX_VER="ubuntu20.04"
ARG PYTHON_VER="3.9"
FROM rapidsai/rapidsai-core:${RAPIDS_VER}-cuda${CUDA_VER}-${IMG_TYPE}-${LINUX_VER}-py${PYTHON_VER}
# ensure printed output/log-messages retain correct order
ENV PYTHONUNBUFFERED=True
RUN apt update -y \
&& apt install -y --no-install-recommends build-essential \
&& apt autoremove -y \
&& apt clean -y \
&& rm -rf /var/lib/apt/lists/*
# AWS Requirements
ARG aws_dataset_directory="1_year"
ARG aws_algorithm_choice="XGBoost"
ARG aws_ml_workflow_choice="singleGPU"
ARG aws_cv_folds="3"
ARG aws_cloud_path="/opt/ml/code"
#---------------------------------------------------
ENV AWS_DATASET_DIRECTORY=${aws_dataset_directory}
ENV AWS_ALGORITHM_CHOICE=${aws_algorithm_choice}
ENV AWS_ML_WORKFLOW_CHOICE=${aws_ml_workflow_choice}
ENV AWS_CV_FOLDS=${aws_cv_folds}
ENV AWS_CLOUD_PATH=${aws_cloud_path}
ENV RAPIDS_AWS_INSTALL_PATH=${aws_cloud_path}
#===================================================
## Azure Requirements
ARG azure_install_path="/opt/rapids/azure"
#----------------------------------------------
ENV RAPIDS_AZURE_INSTALL_PATH=${azure_install_path}
#==============================================
## GCP Requirements
ARG gcp_install_path="/opt/rapids/gcp"
#----------------------------------------------
ENV RAPIDS_GCP_INSTALL_PATH=${gcp_install_path}
#==============================================
# AWS Install
RUN mkdir -p ${RAPIDS_AWS_INSTALL_PATH}
COPY aws/code/entrypoint.sh \
aws/code/HPOConfig.py \
aws/code/HPODatasets.py \
aws/code/MLWorkflow.py \
aws/code/serve.py \
aws/code/train.py \
${aws_cloud_path}"/"
COPY aws/code/workflows/MLWorkflowMultiCPU.py \
aws/code/workflows/MLWorkflowMultiGPU.py \
aws/code/workflows/MLWorkflowSingleCPU.py \
aws/code/workflows/MLWorkflowSingleGPU.py \
${aws_cloud_path}"/workflows/"
RUN . /opt/conda/etc/profile.d/conda.sh \
&& conda activate rapids \
&& conda install -c conda-forge \
flask \
dask-ml \
&& conda clean --all \
&& pip install \
sagemaker-training \
protobuf==3.20.1 \
&& pip cache purge
# Azure Install
RUN mkdir -p ${RAPIDS_AZURE_INSTALL_PATH}
COPY azure/* \
/opt/rapids/azure/
# azureml-sdk installs pyarrow=3.0.0 (issue: https://github.com/rapidsai/cloud-ml-examples/issues/165)
# RUN . /opt/conda/etc/profile.d/conda.sh \
# && conda activate rapids \
# && pip install \
# azureml-sdk \
# azureml-widgets \
# azureml-mlflow \
# optuna \
# dask-optuna \
# && pip cache purge
# GCP Install
RUN mkdir -p ${RAPIDS_GCP_INSTALL_PATH}
COPY gcp/docker/infrastructure/* \
/opt/rapids/gcp/
RUN . /opt/conda/etc/profile.d/conda.sh \
&& conda activate rapids \
&& conda install -c conda-forge \
gcsfs \
sqlalchemy \
ray-tune \
conda-forge/label/cloudml_hypertune_dev::cloudml-hypertune \
&& conda clean --all
RUN mkdir -p /opt/rapids_cloudml
COPY common/docker/infrastructure/* \
/opt/rapids_cloudml/
## Unified entrypoint
WORKDIR "/opt/rapids_cloudml"
ENTRYPOINT [ "bash", "/opt/rapids_cloudml/entrypoint.sh" ]
| 0 |
rapidsai_public_repos/cloud-ml-examples/common/docker
|
rapidsai_public_repos/cloud-ml-examples/common/docker/infrastructure/entrypoint.sh
|
source /conda/etc/profile.d/conda.sh
conda activate rapids
ARGS=( "$@" )
EXEC_CONTEXT=""
# If we're doing SageMaker HPO, this file will exist
aws_hpo_params_path="/opt/ml/input/config/hyperparameters.json"
if [[ -f "${aws_hpo_params_path}" ]]; then
EXEC_CONTEXT="aws_sagemaker_hpo"
fi
# If we're doing GCP AI-Platform HPO, a number of AIP_XXX values will be set.
if [[ -n "$CLOUD_ML_HP_METRIC_FILE" ]]; then
EXEC_CONTEXT="gcp_aip_hpo"
fi
if [[ $EXEC_CONTEXT == "aws_sagemaker_hpo" ]]; then
## SageMaker
echo "Running SageMaker HPO entrypoint."
cd ${AWS_CLOUD_PATH} || exit 1
if [[ "$1" == "serve" ]]; then
echo -e "@ entrypoint -> launching serving script \n"
python serve.py
else
echo -e "@ entrypoint -> launching training script \n"
python train.py
fi
elif [[ $EXEC_CONTEXT == "gcp_aip_hpo" ]]; then
# GCP
echo "Running GCP AI-Platform HPO entrypoint."
cd /opt/rapids/gcp || exit 1
echo "Running: entrypoint.py ${ARGS[@]}"
python entrypoint.py ${ARGS[@]}
else
# Azure
# TODO: Azure workflow is substantially different.
echo "Running AzureML HPO entrypoint."
cd /opt/rapids/azure || exit 1
echo "Running: bash ${ARGS[@]}"
bash "${ARGS[@]}"
fi
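# Notes on exercising the dispatch above locally (illustrative only):
# - The SageMaker branch is selected when /opt/ml/input/config/hyperparameters.json exists.
# - The GCP branch is selected when CLOUD_ML_HP_METRIC_FILE is set, e.g.:
#     CLOUD_ML_HP_METRIC_FILE=/tmp/metrics bash entrypoint.sh --some-arg
#   (here "--some-arg" is a hypothetical argument forwarded to entrypoint.py)
# - Otherwise the AzureML branch runs the supplied command via bash.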
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/integration/README.md
|
# <div align="left"><img src="https://rapids.ai/assets/images/rapids_logo.png" width="90px"/> Integration
RAPIDS - combined conda package for all of RAPIDS libraries
## RAPIDS Meta-packages
The conda recipe in the `conda` folder provides the RAPIDS meta-packages, which when installed will provide the latest RAPIDS libraries for the given version.
See the [README](conda/recipes/README.md) for more information about the meta-packages and how to update versions.
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/integration/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2019 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/integration/conda
|
rapidsai_public_repos/integration/conda/recipes/README.md
|
# <div align="left"><img src="https://rapids.ai/assets/images/rapids_logo.png" width="90px"/> Meta-packages
## Overview
These packages provide one-line installs for RAPIDS as well as environment
setups for RAPIDS users and the RAPIDS [containers](https://github.com/rapidsai/build).
## Meta-packages
### Package Availability
These meta-packages are available in two channels:
Channel Name | Purpose
--- | ---
`rapidsai` | Release versions of the packages; tied to a stable release of RAPIDS
`rapidsai-nightly` | Nightly versions of the packages; allows for install of WIP nightly versions of RAPIDS
### Install Packages
The install meta-packages are for RAPIDS installation and version pinning of core
libraries to a RAPIDS release:
Package Name | Purpose
--- | ---
`rapids` | Provide a one package install for all RAPIDS libraries, version matched to a RAPIDS release
`rapids-xgboost` | Defines the version of `xgboost` used for a RAPIDS release
## Managing Versions
Packages without version restrictions do not need to use the following process
and can be simply added as a `conda` package name to the recipe. For all other
packages, follow this process to add/update versions used across all
meta-packages:
1. Examine the `meta.yaml` recipe to be modified
2. Check if there is a pre-existing version definition like
```
cupy {{ cupy_version }}
```
3. If so, skip to the section [Updating Versions](#updating-versions)
4. If not, continue with the section [Adding Versions](#adding-versions)
### Adding Versions
For new packages or those that do not have defined versions they need to be
added.
#### Modifying Recipes
To add a package with versioning to the recipe we need the `PACKAGE_NAME` and
the `VERSIONING_NAME` added to the file.
- `PACKAGE_NAME` - is the conda package name
- `VERSIONING_NAME` - is the conda package name with `-` replaced with `_` and a suffix of `_version` added
- For example
- `cupy` would become `cupy_version`
- `scikit-learn` would become `scikit_learn_version`
Once the `PACKAGE_NAME` and `VERSIONING_NAME` are ready, we can add them to
the `meta.yml` as follows:
```
PACKAGE_NAME {{ VERSIONING_NAME }}
```
- **NOTE:** The `VERSIONING_NAME` must be surrounded by the `{{ }}` for the substitution to work.
Using our examples of `cupy` and `scikit-learn` we would have these entries in
the `meta.yaml`:
```
cupy {{ cupy_version }}
```
```
scikit-learn {{ scikit_learn_version }}
```
#### Modifying Versions File
The file `versions.yaml` in `conda/recipes` defines the versions used by CI for testing in PRs and for conda builds.
In this file we specify the version for the newly created `VERSIONING_NAME`.
For each `VERSIONING_NAME` we need a `VERSION_SPEC`. This can be any of the
standard `conda` version specifiers:
```
>=1.8.0
>=0.48,<0.49
>=7.0,<8.0.0a0
=2.5
```
##### TIP - Correct version specs
**NOTE:** `=2.5.*` is not a valid version spec. Please use `=2.5` instead
which will be interpreted as `=2.5.*`. Otherwise `conda build` throws a
warning message with the definition of `.*`. For example:
```
WARNING conda.models.version:get_matcher(531): Using .* with relational operator
is superfluous and deprecated and will be removed in a future version of conda.
Your spec was 0.23.*, but conda is ignoring the .* and treating it as 0.23
```
Putting this together, the versions file would add the following for each
`VERSIONING_NAME`:
```
VERSIONING_NAME:
- 'VERSION_SPEC'
```
Using our examples of `cupy` and `scikit-learn` we would have these entries in
`versions.yaml`:
```
cupy_version:
- '>=7.0,<8.0.0a0'
```
```
scikit_learn_version:
- '=0.21.3'
```
### Updating Versions
Edit the `versions.yaml` file in `conda/recipes` and update the `VERSION_SPEC`
as desired. If there is no defined version spec, see [Modifying Versions File](#modifying-versions-file)
for information on how to add one.
| 0 |
rapidsai_public_repos/integration/conda
|
rapidsai_public_repos/integration/conda/recipes/versions.yaml
|
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
# Versions for `rapids-xgboost` meta-pkg
xgboost_version:
- '=1.7.6'
cuda11_cuda_python_version:
- '>=11.7.1,<12.0a'
cuda12_cuda_python_version:
- '>=12.0.0,<13.0a'
cupy_version:
- '>=12.0.0'
nccl_version:
- '>=2.9.9,<3.0a0'
networkx_version:
- '>=2.5.1'
numba_version:
- '>=0.57'
numpy_version:
- '>=1.21'
nvtx_version:
- '>=0.2.1,<0.3'
ucx_version:
- '>=1.14.1'
| 0 |
rapidsai_public_repos/integration/conda/recipes
|
rapidsai_public_repos/integration/conda/recipes/rapids-xgboost/meta.yaml
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
{% set rapids_version = environ.get('GIT_DESCRIBE_TAG', '0.0.0.dev').lstrip('v') %}
{% set major_minor_version = rapids_version.split('.')[0] + '.' + rapids_version.split('.')[1] %}
{% set cuda_version = '.'.join(environ['RAPIDS_CUDA_VERSION'].split('.')[:2]) %}
{% set cuda_major = cuda_version.split('.')[0] %}
{% set py_version = environ['CONDA_PY'] %}
{% set date_string = environ['RAPIDS_DATE_STRING'] %}
###
# Versions referenced below are set in `conda/recipes/*versions.yaml` except for
# those set above (e.g. `cuda_version`)
###
package:
name: rapids-xgboost
version: {{ rapids_version }}
source:
git_url: ../../..
build:
number: {{ GIT_DESCRIBE_NUMBER }}
string: cuda{{ cuda_major }}_py{{ py_version }}_{{ date_string }}_{{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }}
requirements:
host:
- python
- cuda-version ={{ cuda_version }}
run:
- {{ pin_compatible('cuda-version', max_pin='x', min_pin='x') }}
{% if cuda_major == "11" %}
- cudatoolkit
{% endif %}
- nccl {{ nccl_version }}
- python
- libxgboost {{ xgboost_version }} rapidsai_h*
- xgboost {{ xgboost_version }} rapidsai_py*
test:
requires:
- cuda-version ={{ cuda_version }}
commands:
- exit 0
about:
home: https://rapids.ai/
license: Custom
license_file: conda/recipes/rapids-xgboost/LICENSE
summary: 'RAPIDS + DMLC XGBoost Integration'
description: |
Meta-package for RAPIDS + DMLC XGBoost integration; version matched for RAPIDS releases.
doc_url: https://docs.rapids.ai/
dev_url: https://github.com/rapidsai/xgboost
| 0 |
rapidsai_public_repos/integration/conda/recipes
|
rapidsai_public_repos/integration/conda/recipes/rapids-xgboost/LICENSE
|
The license of this package is a combination of the dependent packages contained herein.
| 0 |
rapidsai_public_repos/integration/conda/recipes
|
rapidsai_public_repos/integration/conda/recipes/rapids/meta.yaml
|
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
{% set rapids_version = environ.get('GIT_DESCRIBE_TAG', '0.0.0.dev').lstrip('v') %}
{% set major_minor_version = rapids_version.split('.')[0] + '.' + rapids_version.split('.')[1] %}
{% set cuda_version = '.'.join(environ['RAPIDS_CUDA_VERSION'].split('.')[:2]) %}
{% set cuda_major = cuda_version.split('.')[0] %}
{% set py_version = environ['CONDA_PY'] %}
{% set date_string = environ['RAPIDS_DATE_STRING'] %}
###
# Versions referenced below are set in `conda/recipes/*versions.yaml` except for
# those set above (e.g. `cuda_version`)
###
package:
name: rapids
version: {{ rapids_version }}
source:
git_url: ../../..
build:
number: {{ GIT_DESCRIBE_NUMBER }}
string: cuda{{ cuda_major }}_py{{ py_version }}_{{ date_string }}_{{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }}
requirements:
host:
- python
- cuda-version ={{ cuda_version }}
run:
- {{ pin_compatible('cuda-version', max_pin='x', min_pin='x') }}
{% if cuda_major == "11" %}
- cuda-python {{ cuda11_cuda_python_version }}
- cudatoolkit
{% else %}
- cuda-python {{ cuda12_cuda_python_version }}
{% endif %}
- cupy {{ cupy_version }}
- nccl {{ nccl_version }}
- networkx {{ networkx_version }}
- numba {{ numba_version }}
- numpy {{ numpy_version }}
- nvtx {{ nvtx_version }}
- python
- cudf ={{ major_minor_version }}.*
- cugraph ={{ major_minor_version }}.*
- cuml ={{ major_minor_version }}.*
- cucim ={{ major_minor_version }}.*
- cuspatial ={{ major_minor_version }}.*
- cuproj ={{ major_minor_version }}.*
- custreamz ={{ major_minor_version }}.*
- cuxfilter ={{ major_minor_version }}.*
- dask-cuda ={{ major_minor_version }}.*
- rapids-xgboost ={{ major_minor_version }}.*
- rmm ={{ major_minor_version }}.*
- pylibcugraph ={{ major_minor_version }}.*
- libcugraph_etl ={{ major_minor_version }}.*
{% if cuda_major == "11" %}
- ptxcompiler # CUDA enhanced compat. See https://github.com/rapidsai/ptxcompiler
{% endif %}
- conda-forge::ucx {{ ucx_version }}
test:
requires:
- cuda-version ={{ cuda_version }}
commands:
- exit 0
about:
home: https://rapids.ai/
license: Custom
license_file: conda/recipes/rapids/LICENSE
summary: 'RAPIDS Suite - Open GPU Data Science'
description: |
Meta-package for the RAPIDS suite of software libraries. RAPIDS gives you the freedom to execute end-to-end data science
and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization,
but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
doc_url: https://docs.rapids.ai/
dev_url: https://github.com/rapidsai/
| 0 |
rapidsai_public_repos/integration/conda/recipes
|
rapidsai_public_repos/integration/conda/recipes/rapids/LICENSE
|
The license of this package is a combination of the dependent packages contained herein.
| 0 |
rapidsai_public_repos/integration
|
rapidsai_public_repos/integration/ci/build_python.sh
|
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
set -euo pipefail
source rapids-env-update
CONDA_CONFIG_FILE="conda/recipes/versions.yaml"
rapids-print-env
rapids-logger "Build rapids-xgboost"
rapids-conda-retry mambabuild \
--use-local \
--variant-config-files "${CONDA_CONFIG_FILE}" \
conda/recipes/rapids-xgboost
rapids-logger "Build rapids"
rapids-conda-retry mambabuild \
--use-local \
--variant-config-files "${CONDA_CONFIG_FILE}" \
conda/recipes/rapids
rapids-upload-conda-to-s3 python
| 0 |
rapidsai_public_repos/integration
|
rapidsai_public_repos/integration/ci/conda-pack.sh
|
#!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
set -e
RAPIDS_VER="23.12"
VERSION_DESCRIPTOR="a"
CONDA_USERNAME="rapidsai-nightly"
if [ "$GITHUB_REF_TYPE" = "tag" ]; then
VERSION_DESCRIPTOR=""
CONDA_USERNAME="rapidsai"
fi
CUDA_VERSION="${RAPIDS_CUDA_VERSION%.*}"
CONDA_ENV_NAME="rapids${RAPIDS_VER}${VERSION_DESCRIPTOR}_cuda${CUDA_VERSION}_py${RAPIDS_PY_VERSION}"
echo "Install conda-pack"
rapids-mamba-retry install -n base -c conda-forge "conda-pack"
echo "Creating conda environment $CONDA_ENV_NAME"
rapids-mamba-retry create -y -n $CONDA_ENV_NAME \
-c $CONDA_USERNAME -c conda-forge -c nvidia \
"rapids=$RAPIDS_VER" \
"cuda-version=$CUDA_VERSION" \
"python=$RAPIDS_PY_VERSION"
echo "Packing conda environment"
conda-pack --quiet --ignore-missing-files -n "$CONDA_ENV_NAME" -o "${CONDA_ENV_NAME}.tar.gz"
export AWS_DEFAULT_REGION="us-east-2"
echo "Upload packed conda"
aws s3 cp --only-show-errors --acl public-read "${CONDA_ENV_NAME}.tar.gz" "s3://rapidsai-data/conda-pack/${CONDA_USERNAME}/${CONDA_ENV_NAME}.tar.gz"
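# Consumers can fetch and unpack the published archive with the standard conda-pack
# workflow (illustrative; fill in the channel and environment name from above):
#   wget https://rapidsai-data.s3.us-east-2.amazonaws.com/conda-pack/<channel>/<env-name>.tar.gz
#   mkdir -p rapids-env && tar -xzf <env-name>.tar.gz -C rapids-env
#   source rapids-env/bin/activate && conda-unpack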
| 0 |
rapidsai_public_repos/integration/ci
|
rapidsai_public_repos/integration/ci/release/update-version.sh
|
#!/bin/bash
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
###############################
# Integration Version Updater #
###############################
## Usage
# bash update-version.sh <new_version>
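# e.g. (version format YY.MM.PP): bash ci/release/update-version.sh 23.12.00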
# Workaround for macOS, where BSD sed doesn't support the same flags as GNU sed
# Install GNU sed on macOS with `brew install gnu-sed`
unameOut="$(uname -s)"
case "${unameOut}" in
Linux*) sedCmd=sed;;
Darwin*) sedCmd=gsed;;
*) echo "Unknown OS"; exit 1;;
esac
# Format is YY.MM.PP - no leading 'v' or trailing 'a'
NEXT_FULL_TAG=$1
# Get current version
CURRENT_TAG=$(git tag --merged HEAD | grep -xE '^v.*' | sort --version-sort | tail -n 1 | tr -d 'v')
CURRENT_MAJOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[1]}')
CURRENT_MINOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[2]}')
CURRENT_PATCH=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[3]}' | tr -d 'a')
CURRENT_SHORT_TAG=${CURRENT_MAJOR}.${CURRENT_MINOR}
#Get <major>.<minor> for next version
NEXT_MAJOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[1]}')
NEXT_MINOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[2]}')
NEXT_SHORT_TAG=${NEXT_MAJOR}.${NEXT_MINOR}
echo "Preparing release $CURRENT_TAG => $NEXT_FULL_TAG"
# Inplace sed replace; workaround for Linux and Mac
function sed_runner() {
$sedCmd -i.bak ''"$1"'' $2 && rm -f ${2}.bak
}
sed_runner "/RAPIDS_VER=/ s/[0-9][0-9].[0-9][0-9]/${NEXT_SHORT_TAG}/" ci/conda-pack.sh
for FILE in .github/workflows/*.yaml; do
sed_runner "/shared-workflows/ s/@.*/@branch-${NEXT_SHORT_TAG}/g" "${FILE}"
done
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/cugraph-pg/README.md
|
# cugraph-pg
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dask-cuda-benchmarks/.pre-commit-config.yaml
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
repos:
- repo: https://github.com/PyCQA/isort
rev: 5.12.0
hooks:
- id: isort
types: [python]
- repo: https://github.com/psf/black
rev: 22.10.0
hooks:
- id: black
types: [python]
- repo: https://github.com/PyCQA/flake8
rev: 5.0.4
hooks:
- id: flake8
args: ["--config=setup.cfg"]
types: [python]
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dask-cuda-benchmarks/setup.cfg
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
[flake8]
filename = *.py
max-line-length = 88
extend-ignore =
# line break before binary operator
W503,
# whitespace before :
E203
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dask-cuda-benchmarks/pyproject.toml
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
[tool.black]
line-length = 88
target-version = ["py38"]
include = '\.pyi?$'
[tool.isort]
atomic = true
profile = "black"
line_length = 88
skip_gitignore = true
known_dask = """
dask
distributed
dask_cuda
"""
known_rapids = """
rmm
cudf
strings_udf
"""
default_section = "THIRDPARTY"
sections = "FUTURE,STDLIB,THIRDPARTY,DASK,RAPIDS,FIRSTPARTY,LOCALFOLDER"
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dask-cuda-benchmarks/README.md
|
# RAPIDS benchmarks
This repository contains a collection of benchmarks and run scripts
for single- and multi-node benchmarking of RAPIDS components, with a
focus on [dask/distributed](https://dask.org) with
[cuDF](https://github.com/rapidsai/cudf)- and
[CuPy](https://github.com/cupy/cupy)-accelerated backends.
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dask-cuda-benchmarks/CONTRIBUTING.md
|
# Contributing
If you are interested in contributing to dask-cuda-benchmarks, your contributions will fall
into three categories:
1. You want to report a bug, feature request, or documentation issue
- File an [issue](https://github.com/rapidsai/dask-cuda-benchmarks/issues/new)
describing what you encountered or what you want to see changed.
- The RAPIDS team will evaluate the issues and triage them, scheduling
them for a release. If you believe the issue needs priority attention
comment on the issue to notify the team.
2. You want to propose a new Feature and implement it
- Post about your intended feature, and we can discuss the design and
implementation.
- Once we agree that the plan looks good, go ahead and implement
it, and submit a pull request. Your contributions will
need a `Signed-Off-By` line (see the example after this list).
3. You want to implement a feature or bug-fix for an outstanding issue
- If you need more context on a particular issue, please ask and we will
provide.
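If you have not used sign-offs before, `git commit -s` adds the required trailer automatically from your configured git identity; a minimal example (standard git behavior, shown only for illustration):
```
git commit -s -m "Add <your change description>"
```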
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dask-cuda-benchmarks/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks
|
rapidsai_public_repos/dask-cuda-benchmarks/analysis/make-multi-node-charts.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
from collections.abc import Iterable
from itertools import chain
from pathlib import Path
import altair as alt
import numpy as np
import pandas as pd
import typer
from altair import datum, expr
from altair.utils import sanitize_dataframe
def hmean(a):
"""Harmonic mean"""
if len(a):
return 1 / np.mean(1 / a)
else:
return 0
def hstd(a):
"""Harmonic standard deviation"""
if len(a):
rmean = np.mean(1 / a)
rvar = np.var(1 / a)
return np.sqrt(rvar / (len(a) * rmean**4))
else:
return 0
def remove_warmup(df):
summary = df.groupby("num_workers")
return df.loc[
df.wallclock.values
< (summary.wallclock.mean() + summary.wallclock.std() * 2)[
df.num_workers
].values
].copy()
def process_output(directories: Iterable[Path]):
all_merge_data = []
all_transpose_data = []
for d in directories:
ucx_version = d.name
date = d.parent.name
date = pd.to_datetime(date)
dfs = []
for f in chain(
d.glob("nnodes*cudf-merge-dask.json"),
d.glob("nnodes*cudf-merge-explicit-comms.json"),
):
merge_df = pd.read_json(f)
dfs.append(merge_df)
if dfs:
merge_df = pd.concat(dfs, ignore_index=True)
merge_df["date"] = date
merge_df["ucx_version"] = ucx_version
all_merge_data.append(merge_df)
dfs = []
for f in d.glob("nnodes*transpose-sum.json"):
transpose_df = pd.read_json(f)
dfs.append(transpose_df)
if dfs:
transpose_df = pd.concat(dfs, ignore_index=True)
transpose_df["date"] = date
transpose_df["ucx_version"] = ucx_version
all_transpose_data.append(transpose_df)
merge_df = pd.concat(all_merge_data, ignore_index=True)
transpose_df = pd.concat(all_transpose_data, ignore_index=True)
# These are useless results for now
merge_df = merge_df.loc[lambda df: df.num_workers < 256]
transpose_df = transpose_df.loc[lambda df: df.num_workers < 256]
return merge_df, transpose_df
def summarise_merge_data(df):
# data = data.groupby(["num_workers", "backend", "date"], as_index=False).mean()
df["throughput"] = (df.data_processed / df.wallclock / df.num_workers) / 1e9
grouped = df.groupby(["date", "num_workers", "backend", "ucx_version"])
throughput = grouped["throughput"]
throughput = throughput.aggregate(throughput_mean=hmean, throughput_std=hstd)
grouped = grouped.mean(numeric_only=True).drop(columns="throughput")
grouped = grouped.merge(
throughput, on=["date", "num_workers", "backend", "ucx_version"]
).reset_index()
tmp = grouped.loc[
lambda df: (df.backend == "dask") & (df.ucx_version == "ucx-1.12.1")
].copy()
tmp["backend"] = "no-dask"
# distributed-joins measurements
for n, bw in zip(
[8, 16, 32, 64, 128, 256],
[5.4875, 4.325, 3.56875, 2.884375, 2.090625, 1.71835937],
):
tmp.loc[lambda df: df.num_workers == n, "throughput_mean"] = bw
tmp["throughput_std"] = 0
return pd.concat([grouped, tmp], ignore_index=True)
def summarise_transpose_data(df):
# df = remove_warmup(df)
df["throughput"] = (df.data_processed / df.wallclock / df.num_workers) / 1e9
grouped = df.groupby(["date", "num_workers", "ucx_version"])
throughput = grouped["throughput"]
throughput = throughput.aggregate(
throughput_mean=hmean,
throughput_std=hstd,
wallclock_mean="mean",
wallclock_std="std",
)
grouped = grouped.mean(numeric_only=True).drop(columns="throughput")
df = grouped.merge(
throughput, on=["date", "num_workers", "ucx_version"]
).reset_index()
return df
def make_merge_chart(df):
data = (
alt.Chart(df)
.encode(
x=alt.X("date:T", title="Date"),
)
.transform_calculate(category="datum.backend + '-' + datum.ucx_version")
)
selector = alt.selection(
type="point", fields=["category"], bind="legend", name="selected-version"
)
line = data.mark_line(point=True).encode(
y=alt.Y("throughput_mean:Q", title="Throughput [GB/s/GPU]"),
color="category:N",
opacity=alt.condition(selector, alt.value(1), alt.value(0.25)),
)
band = (
data.mark_area()
.transform_calculate(
y=expr.toNumber(datum.throughput_mean)
- expr.toNumber(datum.throughput_std),
y2=expr.toNumber(datum.throughput_mean)
+ expr.toNumber(datum.throughput_std),
)
.encode(
y="y:Q",
y2="y2:Q",
color="category:N",
opacity=alt.condition(selector, alt.value(0.3), alt.value(0.025)),
)
)
chart = line + band
return (
chart.add_params(selector)
.properties(width=600)
.facet(
facet=alt.Text(
"num_workers:N",
title="Number of GPUs",
# Hacky, since faceting on quantitative data is
# not allowed? And sorting is lexicographic.
sort=["8", "16", "32", "64", "128"],
),
columns=2,
)
)
def make_transpose_chart(df):
data = alt.Chart(df).encode(
x=alt.X("date:T", title="Date"),
)
selector = alt.selection(
type="point", fields=["ucx_version"], bind="legend", name="selected-version"
)
line = data.mark_line(point=True).encode(
y=alt.Y("throughput_mean:Q", title="Throughput [GB/s/GPU]"),
color="ucx_version:N",
opacity=alt.condition(selector, alt.value(1), alt.value(0.25)),
)
band = (
data.mark_area()
.transform_calculate(
y=expr.toNumber(datum.throughput_mean)
- expr.toNumber(datum.throughput_std),
y2=expr.toNumber(datum.throughput_mean)
+ expr.toNumber(datum.throughput_std),
)
.encode(
y="y:Q",
y2="y2:Q",
opacity=alt.condition(selector, alt.value(0.3), alt.value(0.025)),
color="ucx_version:N",
)
)
chart = line + band
return (
chart.add_params(selector)
.properties(width=600)
.facet(
facet=alt.Text(
"num_workers:N",
title="Number of GPUs",
# Hacky, since faceting on quantitative data is
# not allowed? And sorting is lexicographic.
sort=["8", "16", "32", "64", "128"],
),
columns=2,
)
)
def main(
data_directory: Path = typer.Argument(..., help="Directory storing raw results"),
output_directory: Path = typer.Argument(..., help="Directory storing outputs"),
make_charts: bool = typer.Option(True, help="Make HTML pages for charts?"),
):
merge, transpose = process_output(data_directory.glob("*/ucx-*"))
merge = summarise_merge_data(merge)
transpose = summarise_transpose_data(transpose)
merge_filename = output_directory / "multi-node-merge.csv"
transpose_filename = output_directory / "multi-node-transpose.csv"
sanitize_dataframe(merge).to_csv(merge_filename, index=False)
sanitize_dataframe(transpose).to_csv(transpose_filename, index=False)
if make_charts:
merge = make_merge_chart(f"./{merge_filename.name}")
transpose = make_transpose_chart(f"./{transpose_filename.name}")
merge.save(merge_filename.with_suffix(".html"))
transpose.save(transpose_filename.with_suffix(".html"))
if __name__ == "__main__":
typer.run(main)
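# Example invocation (this script is normally driven by analysis/pull-and-update-data.sh):
#   python make-multi-node-charts.py <local-data>/multi-node/ <website-directory> --make-charts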
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks
|
rapidsai_public_repos/dask-cuda-benchmarks/analysis/pull-and-update-data.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
set -ex
SINGLE_NODE_REMOTE_LOCATION=$1
MULTI_NODE_REMOTE_LOCATION=$2
LOCAL_DATA_LOCATION=$3
WEBSITE_DIRECTORY=$4
rsync -rvupm ${SINGLE_NODE_REMOTE_LOCATION} ${LOCAL_DATA_LOCATION}/single-node \
--filter '+ */' \
--filter '+ local*.log' \
--filter '+ ucx-py-bandwidth.csv' \
--filter '- *' \
--max-size=50K
if [[ ! -d ${WEBSITE_DIRECTORY} ]]; then
echo "Output directory ${WEBSITE_DIRECTORY} not found!"
exit 1
fi
LOC=$(dirname $0)
python ${LOC}/make-single-node-charts.py ${LOCAL_DATA_LOCATION}/single-node/ ${WEBSITE_DIRECTORY} --make-charts
rsync -rvupm ${MULTI_NODE_REMOTE_LOCATION} ${LOCAL_DATA_LOCATION}/multi-node/ \
--filter '+ */' \
--filter '+ *.json' \
--filter '- *'
python ${LOC}/make-multi-node-charts.py ${LOCAL_DATA_LOCATION}/multi-node/ ${WEBSITE_DIRECTORY} --make-charts
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks
|
rapidsai_public_repos/dask-cuda-benchmarks/analysis/make-single-node-charts.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
import ast
import json
import re
from collections.abc import Callable
from functools import partial
from itertools import chain
from operator import itemgetter, methodcaller
from pathlib import Path
from typing import cast
from warnings import warn
import altair as alt
import numpy as np
import pandas as pd
import typer
from altair import datum
from altair.utils import sanitize_dataframe
from dask.utils import parse_bytes, parse_timedelta
def hmean(a):
"""Harmonic mean"""
if len(a):
return 1 / np.mean(1 / a)
else:
return 0
def hstd(a):
"""Harmonic standard deviation"""
if len(a):
rmean = np.mean(1 / a)
rvar = np.var(1 / a)
return np.sqrt(rvar / (len(a) * rmean**4))
else:
return 0
def parse_filename(filename: Path):
splits = set(filename.stem.split("_"))
if "ucx" in splits:
return {
"protocol": "ucx",
"nvlink": "nvlink" in splits,
"infiniband": "ib" in splits,
"tcp": "tcp" in splits,
}
else:
assert "tcp" in splits
return {"protocol": "tcp", "nvlink": False, "infiniband": False, "tcp": True}
def parse_merge(dirname: Path):
filenames = sorted(dirname.glob("local_cudf_merge*.log"))
series = []
try:
df = pd.read_csv(dirname / "ucx-py-bandwidth.csv")
(numpy_version,) = set(df["NumPy Version"])
(cupy_version,) = set(df["CuPy Version"])
(rmm_version,) = set(df["RMM Version"])
(ucx_version,) = set(df["UCX Version"])
(ucxpy_version,) = set(df["UCX-Py Version"])
(ucx_revision,) = set(df["UCX Revision"])
version_info = {
"numpy_version": numpy_version,
"cupy_version": cupy_version,
"rmm_version": rmm_version,
"ucx_version": ucx_version,
"ucxpy_version": ucxpy_version,
"ucx_revision": ucx_revision,
}
except (FileNotFoundError, ValueError):
version_info = {
"numpy_version": None,
"cupy_version": None,
"rmm_version": None,
"ucx_version": None,
"ucxpy_version": None,
"ucx_revision": None,
}
for filename in filenames:
with open(filename, "r") as f:
fileinfo = parse_filename(filename)
data = f.read()
start = data.find("Merge benchmark")
if start < 0:
warn(f"Can't parse data for {filename}")
continue
data = data[start:].strip().split("\n")
info = []
_, _, *data = data
data = iter(data)
for line in data:
if line.startswith("="):
break
header, val = line.split("|")
info.append((header, val))
line, *data = data
if line.startswith("Wall-clock") or line.startswith("Wall clock"):
_, *data = data
else:
raise RuntimeError(f"Invalid file format {filename}")
data = iter(data)
for line in data:
if line.startswith("="):
break
header, val = line.split("|")
info.append((header, val))
line, *data = data
if line.startswith("Throughput"):
line, *data = data
if line.startswith("Bandwidth"): # New format
line, *data = data
assert line.startswith("Wall clock"), filename
else:
assert line.startswith("Wall-Clock") or line.startswith(
"Wall clock"
), filename
line, *data = data
assert line.startswith("=")
line, *data = data
assert line.startswith("(w1,w2)")
_, *data = data
for line in data:
if line.startswith("Worker index"):
break
try:
header, val = line.split("|")
info.append((header, val))
except ValueError:
continue
mangled_info = []
name_map = {
"backend": (lambda a: "backend", str),
"merge type": (lambda a: "merge_type", str),
"rows-per-chunk": (lambda a: "rows_per_chunk", int),
"base-chunks": (lambda a: "base_chunks", int),
"other-chunks": (lambda a: "other_chunks", int),
"broadcast": (lambda a: "broadcast", str),
"protocol": (lambda a: "protocol", lambda a: a),
"device(s)": (lambda a: "devices", lambda a: tuple(map(int, a.split(",")))),
"rmm-pool": (lambda a: "rmm_pool", ast.literal_eval),
"frac-match": (lambda a: "frac_match", float),
"tcp": (lambda a: "tcp", str),
"ib": (lambda a: "ib", str),
"infiniband": (lambda a: "ib", str),
"nvlink": (lambda a: "nvlink", str),
"data-processed": (lambda a: "data_processed", parse_bytes),
"data processed": (lambda a: "data_processed", parse_bytes),
"rmm pool": (lambda a: "rmm_pool", ast.literal_eval),
"worker thread(s)": (lambda a: "worker_threads", int),
"number of workers": (lambda a: "num_workers", int),
}
wallclocks = []
bandwidths = []
for name, val in info:
name = name.strip().lower()
val = val.strip()
if name in {"tcp", "ib", "nvlink", "infiniband"}:
# Get these from the filename
continue
try:
mangle_name, mangle_val = name_map[name]
name = mangle_name(name)
val = mangle_val(val)
mangled_info.append((name, val))
except KeyError:
if name.startswith("("):
source, dest = map(int, name[1:-1].split(","))
*bw_quartiles, data_volume = val.split("/s")
bw_quartiles = tuple(map(parse_bytes, bw_quartiles))
data_volume = parse_bytes(data_volume.strip()[1:-1])
bw = [
("source_device", source),
("destination_device", dest),
("data_volume", data_volume),
]
for n, q in zip(
(
"bandwidth_quartile_25",
"bandwidth_quartile_50",
"bandwidth_quartile_75",
),
bw_quartiles,
):
bw.append((n, q))
bandwidths.append(bw)
else:
wallclocks.append((parse_timedelta(name), parse_bytes(val[:-2])))
wallclocks = np.asarray(wallclocks)
num_gpus = 8
wallclocks[:, 1] /= num_gpus
mangled_info.append(("wallclock_mean", np.mean(wallclocks[:, 0])))
mangled_info.append(("wallclock_std", np.std(wallclocks[:, 0])))
mangled_info.append(("throughput_mean", hmean(wallclocks[:, 1])))
mangled_info.append(("throughput_std", hstd(wallclocks[:, 1])))
mangled_info.append(("nreps", len(wallclocks)))
date, _ = re.match(".*(202[0-9]{5})([0-9]{4}).*", str(filename)).groups()
date = pd.to_datetime(f"{date}")
mangled_info.append(("timestamp", date))
mangled_info = dict(mangled_info)
assert mangled_info["protocol"] == fileinfo["protocol"]
mangled_info.update(fileinfo)
mangled_info.update(version_info)
series.append(pd.Series(mangled_info))
# for bw in bandwidths:
# series.append(pd.Series(mangled_info | dict(bw)))
return series
def parse_transpose(dirname: Path):
filenames = sorted(dirname.glob("local_cupy_transpose*.log"))
series = []
try:
df = pd.read_csv(dirname / "ucx-py-bandwidth.csv")
(numpy_version,) = set(df["NumPy Version"])
(cupy_version,) = set(df["CuPy Version"])
(rmm_version,) = set(df["RMM Version"])
(ucx_version,) = set(df["UCX Version"])
(ucxpy_version,) = set(df["UCX-Py Version"])
(ucx_revision,) = set(df["UCX Revision"])
version_info = {
"numpy_version": numpy_version,
"cupy_version": cupy_version,
"rmm_version": rmm_version,
"ucx_version": ucx_version,
"ucxpy_version": ucxpy_version,
"ucx_revision": ucx_revision,
}
except (FileNotFoundError, ValueError):
version_info = {
"numpy_version": None,
"cupy_version": None,
"rmm_version": None,
"ucx_version": None,
"ucxpy_version": None,
"ucx_revision": None,
}
for filename in filenames:
old_format = True
with open(filename, "r") as f:
fileinfo = parse_filename(filename)
data = f.read()
start = data.find("Roundtrip benchmark")
if start < 0:
warn(f"Can't parse data for {filename}")
continue
data = data[start:].strip().split("\n")
info = []
_, _, *data = data
data = iter(data)
for line in data:
if line.startswith("="):
break
header, val = line.split("|")
info.append((header, val))
line, *data = data
if line.startswith("Wall-clock") or line.startswith("Wall clock"):
_, x = line.split("|")
if x.strip().lower() == "throughput":
old_format = False
_, *data = data
else:
raise RuntimeError(f"Invalid file format {filename}")
data = iter(data)
for line in data:
if line.startswith("="):
break
header, val = line.split("|")
info.append((header, val))
line, *data = data
if old_format:
assert line.startswith("(w1,w2)")
_, *data = data
else:
assert line.startswith("Throughput")
line, *data = data
assert line.startswith("Bandwidth")
line, *data = data
assert line.startswith("Wall clock")
line, *data = data
assert line.startswith("=")
line, *data = data
assert line.startswith("(w1,w2)")
_, *data = data
for line in data:
if line.startswith("Worker index"):
break
try:
header, val = line.split("|")
info.append((header, val))
except ValueError:
continue
mangled_info = []
name_map = {
"operation": (lambda a: "operation", lambda a: a),
"backend": (lambda a: "backend", lambda a: a),
"array type": (lambda a: "array_type", lambda a: a),
"user size": (lambda a: "user_size", int),
"user second size": (lambda a: "user_second_size", int),
"user chunk-size": (lambda a: "user_chunk_size", int),
"user chunk size": (lambda a: "user_chunk_size", int),
# TODO, what to do with these tuples?
"compute shape": (
lambda a: "compute_shape",
lambda a: tuple(map(int, a[1:-1].split(","))),
),
"compute chunk-size": (
lambda a: "compute_chunk_size",
lambda a: tuple(map(int, a[1:-1].split(","))),
),
"compute chunk size": (
lambda a: "compute_chunk_size",
lambda a: tuple(map(int, a[1:-1].split(","))),
),
"tcp": (lambda a: "tcp", str),
"ib": (lambda a: "ib", str),
"infiniband": (lambda a: "ib", str),
"nvlink": (lambda a: "nvlink", str),
"ignore-size": (lambda a: "ignore_size", parse_bytes),
"ignore size": (lambda a: "ignore_size", parse_bytes),
"data processed": (lambda a: "data_processed", parse_bytes),
"rmm pool": (lambda a: "rmm_pool", ast.literal_eval),
"protocol": (lambda a: "protocol", lambda a: a),
"device(s)": (lambda a: "devices", lambda a: tuple(map(int, a.split(",")))),
"worker thread(s)": (
lambda a: "worker_threads",
lambda a: tuple(map(int, a.split(","))),
),
"number of workers": (lambda a: "num_workers", int),
}
wallclocks = []
bandwidths = []
for name, val in info:
name = name.strip().lower()
val = val.strip()
if name in {"tcp", "ib", "nvlink", "infiniband"}:
# Get these from the filename
continue
try:
mangle_name, mangle_val = name_map[name]
name = mangle_name(name)
val = mangle_val(val)
mangled_info.append((name, val))
except KeyError:
if name.startswith("("):
source, dest = map(int, name[1:-1].split(","))
*bw_quartiles, data_volume = val.split("/s")
bw_quartiles = tuple(map(parse_bytes, bw_quartiles))
data_volume = parse_bytes(data_volume.strip()[1:-1])
bw = [
("source_device", source),
("destination_device", dest),
("data_volume", data_volume),
]
for n, q in zip(
(
"bandwidth_quartile_25",
"bandwidth_quartile_50",
"bandwidth_quartile_75",
),
bw_quartiles,
):
bw.append((n, q))
bandwidths.append(bw)
else:
wallclocks.append(parse_timedelta(name))
if old_format:
assert int(val) == 100
wallclocks = np.asarray(wallclocks)
rows = dict(mangled_info)["user_size"]
data_volume = rows * rows * 8
mangled_info.append(("data_processed", data_volume))
m = wallclocks.mean()
s = wallclocks.std()
num_gpus = 8
mangled_info.append(("wallclock_mean", m))
mangled_info.append(("wallclock_std", s))
mangled_info.append(("throughput_mean", (data_volume / num_gpus) / m))
mangled_info.append(
("throughput_std", (data_volume / num_gpus) * s / (m * (m + s)))
)
mangled_info.append(("nreps", len(wallclocks)))
date, _ = re.match(".*(202[0-9]{5})([0-9]{4}).*", str(filename)).groups()
date = pd.to_datetime(f"{date}")
mangled_info.append(("timestamp", date))
mangled_info = dict(mangled_info)
assert mangled_info["protocol"] == fileinfo["protocol"]
mangled_info.update(fileinfo)
mangled_info.update(version_info)
series.append(pd.Series(mangled_info))
# for bw in bandwidths:
# series.append(pd.Series(mangled_info | dict(bw)))
return series
def get_merge_metadata(df):
candidates = [
"backend",
"merge_type",
"rows_per_chunk",
"devices",
"rmm_pool",
"frac_match",
"nreps",
]
meta = {}
for candidate in candidates:
try:
(val,) = set(df[candidate])
meta[candidate] = val
except ValueError:
continue
for key in meta:
del df[key]
return df, meta
def get_transpose_metadata(df):
candidates = [
"operation",
"user_size",
"user_second_size",
"compute_shape",
"compute_chunk_size",
"devices",
"worker_threads",
"nreps",
]
meta = {}
for candidate in candidates:
try:
(val,) = set(df[candidate])
meta[candidate] = val
except ValueError:
continue
for key in meta:
del df[key]
return df, meta
def is_new_data(p, known_dates):
return p.is_dir() and p.parent.name[:8] not in known_dates
def get_results(
data_directory: Path,
csv_name: Path,
meta_name: Path,
parser: Callable[[Path], list[pd.Series]],
extract_metadata: Callable[[pd.DataFrame], tuple[pd.DataFrame, dict]],
) -> tuple[pd.DataFrame, dict]:
if csv_name.exists():
existing = pd.read_csv(csv_name)
existing["timestamp"] = existing.timestamp.astype(np.datetime64)
known_dates = set(
map(
methodcaller("strftime", "%Y%m%d"),
pd.DatetimeIndex(existing.timestamp).date,
)
)
else:
known_dates = set()
existing = None
if meta_name.exists():
with open(meta_name, "r") as f:
meta = json.load(f)
else:
meta = None
df = pd.DataFrame(
chain.from_iterable(
map(
parser,
sorted(
filter(
partial(is_new_data, known_dates=known_dates),
data_directory.glob("*/*/"),
)
),
)
)
)
if not df.empty:
df, meta = extract_metadata(df)
if existing is not None:
df = pd.concat([existing, df]).sort_values("timestamp", kind="mergesort")
assert meta is not None
return sanitize_dataframe(df), meta
def create_throughput_chart(
data: str, protocol: str, nvlink: str, infiniband: str, tcp: str
) -> alt.LayerChart:
selector = alt.selection(type="point", fields=["ucx_version"], bind="legend")
base = (
alt.Chart(data)
.transform_filter(
{
"and": [
alt.FieldEqualPredicate(field="protocol", equal=protocol),
# CSV read is stringly-typed
alt.FieldEqualPredicate(field="nvlink", equal=nvlink),
alt.FieldEqualPredicate(field="infiniband", equal=infiniband),
alt.FieldEqualPredicate(field="tcp", equal=tcp),
]
}
)
.transform_calculate(
throughput_mean=datum.throughput_mean / 1e9,
throughput_std=datum.throughput_std / 1e9,
)
.encode(x=alt.X("timestamp:T", axis=alt.Axis(title="Date")))
)
throughput = base.mark_line().encode(
alt.Y("throughput_mean:Q", axis=alt.Axis(title="Throughput [GB/s/GPU]")),
color="ucx_version:N",
opacity=alt.condition(selector, alt.value(1), alt.value(0.2)),
)
throughput_ci = (
base.mark_area()
.transform_calculate(
y=datum.throughput_mean - datum.throughput_std,
y2=datum.throughput_mean + datum.throughput_std,
)
.encode(
y="y:Q",
y2="y2:Q",
color="ucx_version:N",
opacity=alt.condition(selector, alt.value(0.3), alt.value(0.01)),
)
)
return cast(
alt.LayerChart, alt.layer(throughput, throughput_ci).add_params(selector)
)
MERGE_TEMPLATE = """<!DOCTYPE html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/npm/vega@{vega_version}"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-lite@{vegalite_version}"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-embed@{vegaembed_version}"></script>
</head>
<title>
dask-cuda local cudf merge performance
</title>
<body>
<h1>
Historical performance of CUDF merge
</h1>
Legend entries are clickable to highlight that set of data;
filled regions show standard deviation confidence intervals for
throughput (using harmonic means and standard deviations,
because throughput is a rate-based statistic).
<h2>Hardware setup</h2>
Single node DGX-1 with 8 V100 cards. In-node NVLink bisection
bandwidth 150GB/s (per <a
href="https://images.nvidia.com/content/pdf/dgx1-v100-system-architecture-whitepaper.pdf">whitepaper</a>).
<h2>Benchmark setup</h2>
{metadata}
{divs}
<script type="text/javascript">
{embeds}
</script>
</body>
</html>
"""
TRANSPOSE_TEMPLATE = """<!DOCTYPE html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/npm/vega@{vega_version}"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-lite@{vegalite_version}"></script>
<script src="https://cdn.jsdelivr.net/npm/vega-embed@{vegaembed_version}"></script>
</head>
<title>
dask-cuda local cupy transpose performance
</title>
<body>
<h1>
Historical performance of cupy transpose
</h1>
Legend entries are clickable to highlight that set of data;
filled regions show standard deviation confidence intervals for
throughput (using harmonic means and standard deviations,
because throughput is a rate-based statistic).
<h2>Hardware setup</h2>
Single node DGX-1 with 8 V100 cards. In-node NVLink bisection
bandwidth 150GB/s (per <a
href="https://images.nvidia.com/content/pdf/dgx1-v100-system-architecture-whitepaper.pdf">whitepaper</a>).
<h2>Benchmark setup</h2>
{metadata}
{divs}
<script type="text/javascript">
{embeds}
</script>
</body>
</html>
"""
def make_chart(template: str, csv_name: str, metadata: dict, output_file: Path):
divs = []
embeds = []
for i, (protocol, nv, ib, tcp_over, name) in enumerate(
[
("ucx", "True", "True", "False", "UCX NVLink + InfiniBand"),
("ucx", "True", "False", "False", "UCX NVLink only"),
("ucx", "False", "True", "False", "UCX InfiniBand only"),
("ucx", "False", "False", "True", "TCP over UCX"),
("tcp", "False", "False", "True", "Standard TCP"),
]
):
throughput = create_throughput_chart(csv_name, protocol, nv, ib, tcp_over)
divs.append(f"<h2>{name}</h2>")
divs.append("<h3>Throughput/worker</h3>")
divs.append(f'<div id="vis_throughput{i}"></div>')
embeds.append(
f"vegaEmbed('#vis_throughput{i}', {throughput.to_json(indent=None)})"
".catch(console.error);"
)
table_metadata = "\n".join(
chain(
["<table><th>Name</th><th>Value</th>"],
(
f"<tr><td>{k}</td><td>{v}</td></tr>"
for k, v in sorted(metadata.items(), key=itemgetter(0))
),
["</table>"],
)
)
with open(output_file, "w") as f:
f.write(
template.format(
vega_version=alt.VEGA_VERSION,
vegalite_version=alt.VEGALITE_VERSION,
vegaembed_version=alt.VEGAEMBED_VERSION,
metadata=table_metadata,
divs="\n".join(divs),
embeds="\n".join(embeds),
)
)
def main(
data_directory: Path = typer.Argument(..., help="Directory storing raw results"),
output_directory: Path = typer.Argument(..., help="Directory storing outputs"),
make_charts: bool = typer.Option(True, help="Make HTML pages for charts?"),
):
merge_df, merge_meta = get_results(
data_directory,
output_directory / "single_node_merge_performance.csv",
output_directory / "single_node_merge_performance-metadata.json",
parse_merge,
get_merge_metadata,
)
transpose_df, transpose_meta = get_results(
data_directory,
output_directory / "single_node_transpose_performance.csv",
output_directory / "single_node_transpose_performance-metadata.json",
parse_transpose,
get_transpose_metadata,
)
merge_df.to_csv(output_directory / "single_node_merge_performance.csv", index=False)
transpose_df.to_csv(
output_directory / "single_node_transpose_performance.csv", index=False
)
with open(
output_directory / "single_node_merge_performance-metadata.json", "w"
) as f:
json.dump(merge_meta, f)
with open(
output_directory / "single_node_transpose_performance-metadata.json", "w"
) as f:
json.dump(transpose_meta, f)
if make_charts:
make_chart(
MERGE_TEMPLATE,
"single_node_merge_performance.csv",
merge_meta,
output_directory / "single-node-merge.html",
)
make_chart(
TRANSPOSE_TEMPLATE,
"single_node_transpose_performance.csv",
merge_meta,
output_directory / "single-node-transpose.html",
)
if __name__ == "__main__":
typer.run(main)
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks
|
rapidsai_public_repos/dask-cuda-benchmarks/analysis/build-and-submit.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
set -ex
DOCKER_BUILD_SERVER=$1
DOCKER_BUILD_DIRECTORY=$3
JOB_SUBMISSION_SERVER=$2
JOB_SUBMISSION_DIRECTORY=$4
ssh ${DOCKER_BUILD_SERVER} "(cd ${DOCKER_BUILD_DIRECTORY}; ./build-images.sh)"
ssh ${JOB_SUBMISSION_SERVER} "(cd ~/${JOB_SUBMISSION_DIRECTORY}/docker; ./pull-images.sh)"
ssh ${JOB_SUBMISSION_SERVER} "(cd ~/${JOB_SUBMISSION_DIRECTORY}; for n in 1 2 4 8 16; do sbatch --nodes \$n job.slurm; done)"
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/merge-outputs.py
|
import glob
import os
from itertools import chain
import altair as alt
import click
import numpy as np
import pandas as pd
from altair import datum, expr
from altair.utils import sanitize_dataframe
def hmean(a):
"""Harmonic mean"""
if len(a):
return 1 / np.mean(1 / a)
else:
return 0
def hstd(a):
"""Harmonic standard deviation"""
if len(a):
rmean = np.mean(1 / a)
rvar = np.var(1 / a)
return np.sqrt(rvar / (len(a) * rmean**4))
else:
return 0
def remove_warmup(df):
summary = df.groupby("num_workers")
return df.loc[
df.wallclock.values
< (summary.wallclock.mean() + summary.wallclock.std() * 2)[
df.num_workers
].values
].copy()
def process_output(directories):
all_merge_data = []
all_transpose_data = []
for d in directories:
_, date, ucx_version = d.split("/")
date = pd.to_datetime(date)
dfs = []
for f in chain(
glob.glob(os.path.join(d, "nnodes*cudf-merge-dask.json")),
glob.glob(os.path.join(d, "nnodes*cudf-merge-explicit-comms.json")),
):
merge_df = pd.read_json(f)
dfs.append(merge_df)
if dfs:
merge_df = pd.concat(dfs, ignore_index=True)
merge_df["date"] = date
merge_df["ucx_version"] = ucx_version
all_merge_data.append(merge_df)
dfs = []
for f in glob.glob(os.path.join(d, "nnodes*transpose-sum.json")):
transpose_df = pd.read_json(f)
dfs.append(transpose_df)
if dfs:
transpose_df = pd.concat(dfs, ignore_index=True)
transpose_df["date"] = date
transpose_df["ucx_version"] = ucx_version
all_transpose_data.append(transpose_df)
merge_df = pd.concat(all_merge_data, ignore_index=True)
transpose_df = pd.concat(all_transpose_data, ignore_index=True)
return merge_df, transpose_df
def summarise_merge_data(df):
# data = data.groupby(["num_workers", "backend", "date"], as_index=False).mean()
df["throughput"] = (df.data_processed / df.wallclock / df.num_workers) / 1e9
grouped = df.groupby(["date", "num_workers", "backend", "ucx_version"])
throughput = grouped["throughput"]
throughput = throughput.aggregate(throughput_mean=hmean, throughput_std=hstd)
grouped = grouped.mean().drop(columns="throughput")
grouped = grouped.merge(
throughput, on=["date", "num_workers", "backend", "ucx_version"]
).reset_index()
tmp = grouped.loc[
lambda df: (df.backend == "dask") & (df.ucx_version == "ucx-1.12.1")
].copy()
tmp["backend"] = "no-dask"
# distributed-joins measurements
for n, bw in zip(
[8, 16, 32, 64, 128, 256],
[5.4875, 4.325, 3.56875, 2.884375, 2.090625, 1.71835937],
):
tmp.loc[lambda df: df.num_workers == n, "throughput_mean"] = bw
tmp["throughput_std"] = 0
return pd.concat([grouped, tmp], ignore_index=True)
def summarise_transpose_data(df):
# df = remove_warmup(df)
df["throughput"] = (df.data_processed / df.wallclock / df.num_workers) / 1e9
grouped = df.groupby(["date", "num_workers", "ucx_version"])
throughput = grouped["throughput"]
throughput = throughput.aggregate(
throughput_mean=hmean,
throughput_std=hstd,
wallclock_mean="mean",
wallclock_std="std",
)
grouped = grouped.mean().drop(columns="throughput")
df = grouped.merge(
throughput, on=["date", "num_workers", "ucx_version"]
).reset_index()
return df
def make_merge_chart(df):
data = (
alt.Chart(df)
.encode(
x=alt.X("date:T", title="Date"),
)
.transform_calculate(category="datum.backend + '-' + datum.ucx_version")
)
selector = alt.selection(
type="point", fields=["category"], bind="legend", name="selected-version"
)
line = data.mark_line(point=True).encode(
y=alt.Y("throughput_mean:Q", title="Throughput [GB/s/GPU]"),
color="category:N",
opacity=alt.condition(selector, alt.value(1), alt.value(0.25)),
)
band = (
data.mark_area()
.transform_calculate(
y=expr.toNumber(datum.throughput_mean)
- expr.toNumber(datum.throughput_std),
y2=expr.toNumber(datum.throughput_mean)
+ expr.toNumber(datum.throughput_std),
)
.encode(
y="y:Q",
y2="y2:Q",
color="category:N",
opacity=alt.condition(selector, alt.value(0.3), alt.value(0.025)),
)
)
chart = line + band
return chart.add_params(selector).facet(
facet=alt.Text(
"num_workers:N",
title="Number of GPUs",
# Hacky, since faceting on quantitative data is
# not allowed? And sorting is lexicographic.
sort=["8", "16", "32", "64", "128", "256"],
),
columns=3,
)
def make_transpose_chart(df):
data = alt.Chart(df).encode(
x=alt.X("date:T", title="Date"),
)
selector = alt.selection(
type="point", fields=["ucx_version"], bind="legend", name="selected-version"
)
line = data.mark_line(point=True).encode(
y=alt.Y("throughput_mean:Q", title="Throughput [GB/s/GPU]"),
color="ucx_version:N",
opacity=alt.condition(selector, alt.value(1), alt.value(0.25)),
)
band = (
data.mark_area()
.transform_calculate(
y=expr.toNumber(datum.throughput_mean)
- expr.toNumber(datum.throughput_std),
y2=expr.toNumber(datum.throughput_mean)
+ expr.toNumber(datum.throughput_std),
)
.encode(
y="y:Q",
y2="y2:Q",
opacity=alt.condition(selector, alt.value(0.3), alt.value(0.025)),
color="ucx_version:N",
)
)
chart = line + band
return chart.add_params(selector).facet(
facet=alt.Text(
"num_workers:N",
title="Number of GPUs",
# Hacky, since faceting on quantitative data is
# not allowed? And sorting is lexicographic.
sort=["8", "16", "32", "64", "128", "256"],
),
columns=3,
)
@click.command()
@click.argument("merge_filename")
@click.argument("transpose_filename")
@click.option("--charts/--no-charts", type=bool, default=False, help="Make charts?")
def main(merge_filename: str, transpose_filename: str, charts: bool):
directories = glob.glob("outputs/*/ucx-*")
merge, transpose = process_output(directories)
merge = summarise_merge_data(merge)
transpose = summarise_transpose_data(transpose)
sanitize_dataframe(merge).to_csv(merge_filename, index=False)
sanitize_dataframe(transpose).to_csv(transpose_filename, index=False)
if charts:
merge = make_merge_chart(f"./{merge_filename}")
transpose = make_transpose_chart(f"./{transpose_filename}")
merge_basename, _ = os.path.splitext(os.path.basename(merge_filename))
transpose_basename, _ = os.path.splitext(os.path.basename(transpose_filename))
merge.save(f"{merge_basename}.html")
transpose.save(f"{transpose_basename}.html")
if __name__ == "__main__":
main()
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/gc-workers.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
import click
from distributed import Client
def cleanup_lru_cache():
import gc
from distributed.worker import cache_loads
cache_loads.clear()
gc.collect()
@click.command()
@click.argument("scheduler_file", type=str)
def main(scheduler_file):
client = Client(scheduler_file=scheduler_file)
client.run(cleanup_lru_cache)
client.close()
if __name__ == "__main__":
main()
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/README.md
|
## Run scripts for benchmarking on Draco
These scripts run benchmarks from
[`dask-cuda`](https://github.com/rapidsai/dask-cuda) in a multi-node
setting. These are set up to run on Draco.
Draco is a SLURM-based system that uses pyxis and enroot for
containerisation.
These scripts assume that the containers are already imported and
available as squashfs images at `$HOME/workdir/enroot-images/`.
`$HOME/workdir` should be a symlink to a location on a parallel filesystem
on Draco (it is resolved via `readlink -f`).
Since the main goal is to benchmark the performance of different
[UCX](https://github.com/openucx/ucx) versions, images are named
`ucx-py-$UCX_VERSION-$DATE.sqsh`, where `UCX_VERSION` is one of
`v1.12.x`, `v1.13.x`, `v1.14.x`, or `master`, and `DATE` is the date of
the image.
`job.slurm` is the batch submission script, set up to request an
allocation with eight GPUs/node and then run all UCX versions with
images from the date of submission on the requested number of nodes.
In this loop, the run itself is controlled by `job.sh`.
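For example, to sweep over node counts by hand (this mirrors the loop in
`build-and-submit.sh`; the partition, account and walltime are set inside
`job.slurm`):
```
for n in 1 2 4 8 16; do
    sbatch --nodes $n job.slurm
done
```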
Note that there is a [bug in UCX
v1.13.x](https://github.com/openucx/ucx/issues/8461) that causes
crashes on more than four nodes, so we skip that image if the
requested number of nodes is greater than four.
On node 0, `job.sh` starts the distributed scheduler, a dask cuda
worker (using eight GPUs), and eventually the client scripts; on all
other nodes we just start workers.
`job.sh` runs inside the container and expects to see the environment
variables `RUNDIR`, `OUTDIR`, and `SCRATCHDIR`, which point at
directories mounted in from outside the container (`job.slurm` sets
this up).
### Recommended scaling
Up to 16 nodes is reasonable.
### Docker images
The `docker` subdirectory contains a Dockerfile that builds images
suitable for running. `build-images.sh` builds images for each version
of UCX we want to test. You'll need to adapt the container registry
location to somewhere appropriate. `pull-images.sh` is a script that can
be run on the Draco frontend to import the images, as sketched below.
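A rough sketch of that workflow (assuming you have push access to the
registry configured in `build-images.sh` and can obtain a Draco
allocation for the enroot import):
```
# On the Docker build host
cd docker && ./build-images.sh

# On the Draco frontend (imports squashfs images via enroot)
cd docker && ./pull-images.sh
```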
## Extracting data
For now, data are extracted from the output runs through separate
scripts. Assuming one has the `outputs` directory available locally,
then `python merge-outputs.py --charts merge-data.csv
transpose-data.csv` will munge all data and use
[altair](https://altair-viz.github.io) to produce simple HTML pages
that contain plots. You'll need a pre-release version of altair.
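A pre-release build can typically be installed with something like:
```
pip install --pre altair
```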
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/job.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
source /opt/conda/etc/profile.d/conda.sh
source /opt/conda/etc/profile.d/mamba.sh
mamba activate ucx
SCHED_FILE=${SCRATCHDIR}/scheduler-${SLURM_JOBID}.json
if [[ $SLURM_PROCID == 0 ]]; then
echo "******* UCX INFORMATION *********"
ucx_info -v
fi
NGPUS=$(nvidia-smi -L | wc -l)
export EXPECTED_NUM_WORKERS=$((SLURM_JOB_NUM_NODES * NGPUS))
export UCX_HANDLE_ERRORS=bt,freeze
# https://github.com/openucx/ucx/issues/8639
export UCX_RNDV_SCHEME=get_zcopy
export PROTOCOL=ucx
# FIXME is the interface correct?
export COMMON_ARGS="--protocol ${PROTOCOL} \
--interface ibp132s0 \
--scheduler-file ${SCRATCHDIR}/scheduler-${SLURM_JOBID}.json"
export PROTOCOL_ARGS=""
export WORKER_ARGS="--local-directory /tmp/dask-${SLURM_PROCID} \
--multiprocessing-method forkserver"
export PTXCOMPILER_CHECK_NUMBA_CODEGEN_PATCH_NEEDED=0
export PTXCOMPILER_KNOWN_DRIVER_VERSION=11.2
export PTXCOMPILER_KNOWN_RUNTIME_VERSION=11.2
# Still needed?
export UCX_MEMTYPE_CACHE=n
NUM_WORKERS=$(printf "%03d" ${EXPECTED_NUM_WORKERS})
UCX_VERSION=$(python -c "import ucp; print('.'.join(map(str, ucp.get_ucx_version())))")
OUTPUT_DIR=${OUTDIR}/ucx-${UCX_VERSION}
# Idea: we allocate ntasks-per-node for workers, but those are started in the
# background by dask-cuda-worker.
# So we need to pick one process per node to run the worker commands.
# This assumes that the mapping from nodes to ranks is dense and contiguous. If
# there is rank-remapping then something more complicated would be needed.
if [[ $(((SLURM_PROCID / SLURM_NTASKS_PER_NODE) * SLURM_NTASKS_PER_NODE)) == ${SLURM_PROCID} ]]; then
# rank zero starts scheduler and client as well
if [[ $SLURM_NODEID == 0 ]]; then
echo "Environment status"
mkdir -p $OUTPUT_DIR
python ${RUNDIR}/get-versions.py ${OUTPUT_DIR}/version-info.json
mamba list --json > ${OUTPUT_DIR}/environment-info.json
echo "${SLURM_PROCID} on node ${SLURM_NODEID} starting scheduler/client"
dask scheduler \
--no-dashboard \
${COMMON_ARGS} &
sleep 6
dask cuda worker \
--no-dashboard \
${COMMON_ARGS} \
${PROTOCOL_ARGS} \
${WORKER_ARGS} &
# Weak scaling
# Only the first run initializes the RMM pool, which is then set up on
# workers. After that the clients connect to workers with a pool already
# in place, so we pass --disable-rmm-pool
python -m dask_cuda.benchmarks.local_cudf_merge \
-c 40_000_000 \
--frac-match 0.6 \
--runs 30 \
${COMMON_ARGS} \
${PROTOCOL_ARGS} \
--backend dask \
--output-basename ${OUTPUT_DIR}/nnodes-${NUM_WORKERS}-cudf-merge-dask \
--multiprocessing-method forkserver \
--no-show-p2p-bandwidth \
|| /bin/true # always exit cleanly
python ${RUNDIR}/gc-workers.py ${SCHED_FILE} || /bin/true
python -m dask_cuda.benchmarks.local_cudf_merge \
-c 40_000_000 \
--frac-match 0.6 \
--runs 30 \
${COMMON_ARGS} \
${PROTOCOL_ARGS} \
--disable-rmm-pool \
--backend dask-noop \
--no-show-p2p-bandwidth \
--output-basename ${OUTPUT_DIR}/nnodes-${NUM_WORKERS}-cudf-merge-dask-noop \
--multiprocessing-method forkserver \
|| /bin/true # always exit cleanly
python ${RUNDIR}/gc-workers.py ${SCHED_FILE} || /bin/true
python -m dask_cuda.benchmarks.local_cudf_merge \
-c 40_000_000 \
--frac-match 0.6 \
--runs 30 \
${COMMON_ARGS} \
${PROTOCOL_ARGS} \
--disable-rmm-pool \
--backend explicit-comms \
--output-basename ${OUTPUT_DIR}/nnodes-${NUM_WORKERS}-cudf-merge-explicit-comms \
--multiprocessing-method forkserver \
--no-show-p2p-bandwidth \
|| /bin/true # always exit cleanly
python ${RUNDIR}/gc-workers.py ${SCHED_FILE} || /bin/true
python -m dask_cuda.benchmarks.local_cupy \
-o transpose_sum \
-s 50000 \
-c 2500 \
--runs 30 \
--disable-rmm-pool \
${COMMON_ARGS} \
${PROTOCOL_ARGS} \
--output-basename ${OUTPUT_DIR}/nnodes-${NUM_WORKERS}-transpose-sum \
--multiprocessing-method forkserver \
--no-show-p2p-bandwidth \
|| /bin/true
python ${RUNDIR}/gc-workers.py ${SCHED_FILE} || /bin/true
# Strong scaling
python -m dask_cuda.benchmarks.local_cupy \
-o transpose_sum \
-s 50000 \
-c 2500 \
--runs 30 \
--disable-rmm-pool \
${COMMON_ARGS} \
${PROTOCOL_ARGS} \
--backend dask-noop \
--no-show-p2p-bandwidth \
--shutdown-external-cluster-on-exit \
--output-basename ${OUTPUT_DIR}/nnodes-${NUM_WORKERS}-transpose-sum-noop \
--multiprocessing-method forkserver \
|| /bin/true
else
echo "${SLURM_PROCID} on node ${SLURM_NODEID} starting worker"
sleep 6
dask cuda worker \
--no-dashboard \
${COMMON_ARGS} \
${PROTOCOL_ARGS} \
${WORKER_ARGS} \
|| /bin/true # always exit cleanly
fi
else
echo "${SLURM_PROCID} on node ${SLURM_NODEID} sitting in background"
fi
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/job.slurm
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
#SBATCH -p batch_dgx1_m2
#SBATCH -t 02:00:00
#SBATCH -A sw_rapids_testing
#SBATCH --nv-meta=ml-model.rapids-benchmarks
#SBATCH --gpus-per-node 8
#SBATCH --ntasks-per-node 1
#SBATCH --cpus-per-task 16
#SBATCH -e slurm-%x-%J.err
#SBATCH -o slurm-%x-%J.out
#SBATCH --job-name dask-cuda-bench
DATE=$(date +%Y%m%d)
export RUNDIR_HOST=$(readlink -f $(pwd))
export OUTDIR_HOST=$(readlink -f $(pwd)/outputs/${DATE})
export SCRATCHDIR_HOST=$(readlink -f $(pwd)/scratch)
mkdir -p ${OUTDIR_HOST}
mkdir -p ${SCRATCHDIR_HOST}
export RUNDIR=/root/rundir
export OUTDIR=/root/outdir
export SCRATCHDIR=/root/scratchdir
export JOB_SCRIPT=${RUNDIR}/job.sh
for ucx_version in v1.12.x v1.13.x v1.14.x master; do
if [ $ucx_version == "v1.13.x" -a $SLURM_JOB_NUM_NODES -ge 4 ]; then
continue
fi
export CONTAINER=$(readlink -f ~/workdir/enroot-images/ucx-py-${ucx_version}-${DATE}.sqsh)
echo "************************"
echo "Running ${CONTAINER}"
echo "***********************"
srun --container-image=${CONTAINER} --no-container-mount-home \
--container-mounts=${RUNDIR_HOST}:${RUNDIR}:ro,${OUTDIR_HOST}:${OUTDIR}:rw,${SCRATCHDIR_HOST}:${SCRATCHDIR}:rw \
${JOB_SCRIPT}
done
NNODES=$(printf "%02d" $SLURM_JOB_NUM_NODES)
for file in slurm-$SLURM_JOB_NAME-$SLURM_JOB_ID*; do
mv $file ${OUTDIR_HOST}/${file/-$SLURM_JOB_ID/-$NNODES}
done
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/get-versions.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
import json
import subprocess
import click
def get_versions():
import cupy
import numpy
import ucp
import dask
import dask_cuda
import distributed
import cudf
import rmm
ucx_info = subprocess.check_output(["ucx_info", "-v"]).decode().strip()
revmarker = "revision "
revloc = ucx_info.find(revmarker)
if revloc >= 0:
ucx_revision, *_ = ucx_info[revloc + len(revmarker) :].split("\n")
else:
ucx_revision = ucx_info # keep something so we can get it back later
return {
"numpy": numpy.__version__,
"cupy": cupy.__version__,
"rmm": rmm.__version__,
"ucp": ucp.__version__,
"ucx": ucx_revision,
"dask": dask.__version__,
"distributed": distributed.__version__,
"dask_cuda": dask_cuda.__version__,
"cudf": cudf.__version__,
}
@click.command()
@click.argument("output_file", type=str)
def main(output_file):
with open(output_file, "w") as f:
json.dump(get_versions(), f)
if __name__ == "__main__":
main()
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/docker/pull-images.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
DATE=$(date +%Y%m%d)
DOCKER_HOST=gitlab-master.nvidia.com
REPO=lmitchell/docker
OUTPUT_DIR=$(readlink -f ~/workdir/enroot-images)
for ucx_version in v1.12.x v1.13.x v1.14.x master; do
TAG=${DOCKER_HOST}\#${REPO}:ucx-py-${ucx_version}-${DATE}
srun -p interactive_dgx1_m2 -t 00:30:00 -A sw_rapids_testing \
--nv-meta=ml-model.rapids-debug --gpus-per-node 0 --nodes 1 \
--exclusive \
enroot import -o ${OUTPUT_DIR}/ucx-py-${ucx_version}-${DATE}.sqsh docker://${TAG}
done
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/docker/build-ucx.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
set -ex
UCX_VERSION_TAG=${1:-"v1.13.0"}
CONDA_HOME=${2:-"/opt/conda"}
CONDA_ENV=${3:-"ucx"}
CUDA_HOME=${4:-"/usr/local/cuda"}
# Send any remaining arguments to configure
CONFIGURE_ARGS=${@:5}
source ${CONDA_HOME}/etc/profile.d/conda.sh
source ${CONDA_HOME}/etc/profile.d/mamba.sh
mamba activate ${CONDA_ENV}
git clone https://github.com/openucx/ucx.git
cd ucx
git checkout ${UCX_VERSION_TAG}
./autogen.sh
mkdir build-linux && cd build-linux
../contrib/configure-release --prefix=${CONDA_PREFIX} --with-sysroot --enable-cma \
--enable-mt --enable-numa --with-gnu-ld --with-rdmacm --with-verbs \
--with-cuda=${CUDA_HOME} \
${CONFIGURE_ARGS}
make -j install
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/docker/environment.yml
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
channels:
- rapidsai-nightly
- dask/label/dev
- numba
- conda-forge
- nvidia
dependencies:
- dask
- distributed
- cudf
- dask-cudf
- cupy
- rmm
- dask-cuda
- dask-cudf
- pynvml>=11.0.0,<11.5
- numba>=0.46
- cudatoolkit=11.2
- python=3.10
- setuptools
- psutil
- cython>=0.29.14,<3.0.0a0
- pytest
- pytest-asyncio
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/docker/UCXPy-rdma-core.dockerfile
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
ARG CUDA_VERSION=11.2.2
ARG DISTRIBUTION_VERSION=ubuntu20.04
FROM nvidia/cuda:${CUDA_VERSION}-devel-${DISTRIBUTION_VERSION}
# Tag to checkout from UCX repository
ARG UCX_VERSION_TAG=v1.12.x
# Where to install conda, and what to name the created environment
ARG CONDA_HOME=/opt/conda
ARG CONDA_ENV=ucx
# Name of conda spec file in the current working directory that
# will be used to build the conda environment.
ARG CONDA_ENV_SPEC=environment.yml
ENV CONDA_ENV="${CONDA_ENV}"
ENV CONDA_HOME="${CONDA_HOME}"
# Where cuda is installed
ENV CUDA_HOME="/usr/local/cuda"
SHELL ["/bin/bash", "-c"]
RUN apt-get update -y \
&& apt-get --fix-missing upgrade -y \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends tzdata \
&& apt-get install -y \
automake \
dh-make \
git \
libcap2 \
libnuma-dev \
libtool \
make \
pkg-config \
udev \
curl \
librdmacm-dev \
rdma-core \
&& apt-get autoremove -y \
&& apt-get clean
RUN curl -fsSL https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh \
-o /minimamba.sh \
&& bash /minimamba.sh -b -p ${CONDA_HOME} \
&& rm /minimamba.sh
ENV PATH="${CONDA_HOME}/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:${CUDA_HOME}/bin"
WORKDIR /root
COPY ${CONDA_ENV_SPEC} /root/conda-env.yml
COPY build-ucx.sh /root/build-ucx.sh
COPY build-ucx-py.sh /root/build-ucx-py.sh
COPY post-install.sh /root/post-install.sh
RUN mamba env create -n ${CONDA_ENV} --file /root/conda-env.yml
RUN bash ./build-ucx.sh ${UCX_VERSION_TAG} ${CONDA_HOME} ${CONDA_ENV} ${CUDA_HOME}
RUN bash ./build-ucx-py.sh ${CONDA_HOME} ${CONDA_ENV}
RUN bash ./post-install.sh ${CONDA_HOME} ${CONDA_ENV}
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/docker/post-install.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
set -ex
CONDA_HOME=${1:-"/opt/conda"}
CONDA_ENV=${2:-"ucx"}
source ${CONDA_HOME}/etc/profile.d/conda.sh
source ${CONDA_HOME}/etc/profile.d/mamba.sh
mamba activate ${CONDA_ENV}
git clone https://github.com/gjoseph92/dask-noop.git
pip install --no-deps dask-noop/
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/docker/build-images.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
DATE=$(date +%Y%m%d)
DOCKER_HOST=gitlab-master.nvidia.com:5005
REPO=lmitchell/docker
for ucx_version in v1.12.x v1.13.x v1.14.x master; do
TAG=${DOCKER_HOST}/${REPO}:ucx-py-${ucx_version}-${DATE}
docker build --build-arg UCX_VERSION_TAG=${ucx_version} --no-cache -t ${TAG} -f UCXPy-rdma-core.dockerfile .
docker push ${TAG}
done
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco
|
rapidsai_public_repos/dask-cuda-benchmarks/runscripts/draco/docker/build-ucx-py.sh
|
#!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
set -ex
CONDA_HOME=${1:-"/opt/conda"}
CONDA_ENV=${2:-"ucx"}
source ${CONDA_HOME}/etc/profile.d/conda.sh
source ${CONDA_HOME}/etc/profile.d/mamba.sh
mamba activate ${CONDA_ENV}
git clone https://github.com/rapidsai/ucx-py.git
pip install -v ucx-py/
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/src
|
rapidsai_public_repos/dask-cuda-benchmarks/src/distributed_merge/cudf_merge.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
import asyncio
import sys
from enum import Enum
from itertools import chain
from typing import TYPE_CHECKING, Any, Optional, Tuple
import cupy as cp
import numpy as np
import typer
from cuda import cuda, cudart
from mpi4py import MPI
from ucp._libs import ucx_api
from ucp._libs.arr import Array
import cudf
import rmm
try:
import nvtx
except ImportError:
class nvtx:
@staticmethod
def noop(*args, **kwargs):
pass
push_range = noop
pop_range = noop
@staticmethod
def annotate(*args, **kwargs):
def noop_wrapper(fn):
return fn
return noop_wrapper
# UCP must be imported after cudaSetDevice on each rank (for correct IPC
# registration?), why?
if TYPE_CHECKING:
import ucp
else:
ucp = None
def format_bytes(b):
return f"{b/1e9:.2f} GB"
class Request:
__slots__ = ("n",)
n: int
def __init__(self):
self.n = 0
class CommunicatorBase:
def __init__(self, comm: MPI.Intracomm):
self.mpicomm = comm
self.rank = comm.rank
self.size = comm.size
def _send(self, ep, msg: "ucx_api.arr.Array", tag: int, request: Optional[Request]):
raise NotImplementedError()
def _recv(self, msg: "ucx_api.arr.Array", tag: int, request: Optional[Request]):
raise NotImplementedError()
@nvtx.annotate(domain="MERGE")
def wireup(self):
# Perform an all-to-all to wire up endpoints
sendbuf = np.zeros(self.size, dtype=np.uint8)
recvbuf = np.empty_like(sendbuf)
request = self.ialltoall(sendbuf, recvbuf)
self.wait(request)
def isend(
self,
buf: np.ndarray,
dest: int,
tag: int = 0,
request: Optional[Request] = None,
) -> Optional["ucx_api.UCXRequest"]:
msg = Array(buf)
# Tag matching to distinguish by source.
comm_tag = (tag << 32) | self.rank
return self._send(self.endpoints[dest], msg, comm_tag, request)
def irecv(
self,
buf: np.ndarray,
source: int,
tag: int = 0,
request: Optional[Request] = None,
) -> Optional["ucx_api.UCXRequest"]:
msg = Array(buf)
comm_tag = (tag << 32) | source
return self._recv(msg, comm_tag, request)
def __getattr__(self, name):
try:
return getattr(self.mpicomm, name)
except AttributeError:
raise AttributeError(f"No support for {name}")
class AsyncIOCommunicator(CommunicatorBase):
@nvtx.annotate(domain="MERGE", message="AsyncIO-init")
def __init__(self, comm: MPI.Intracomm):
super().__init__(comm)
self.event_loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.event_loop)
self.address = ucp.get_worker_address()
buf = np.array(self.address)
addresses = np.empty((comm.size, self.address.length), dtype=buf.dtype)
comm.Allgather(buf, addresses)
self.endpoints = self.event_loop.run_until_complete(
asyncio.gather(
*(
ucp.create_endpoint_from_worker_address(
ucp.get_ucx_address_from_buffer(address)
)
for address in addresses
)
)
)
@nvtx.annotate(domain="MERGE")
def ialltoall(self, sendbuf: np.ndarray, recvbuf: np.ndarray):
return asyncio.gather(
*chain(
(
self.irecv(recvbuf[i, ...], source=i, tag=10)
for i in range(self.size)
),
(self.isend(sendbuf[i, ...], dest=i, tag=10) for i in range(self.size)),
)
)
@nvtx.annotate(domain="MERGE")
def ialltoallv(self, sendbuf, sendcounts, recvbuf, recvcounts):
requests = []
off = 0
for i in range(self.size):
count = recvcounts[i]
requests.append(self.irecv(recvbuf[off : off + count], source=i, tag=10))
off += count
off = 0
for i in range(self.size):
count = sendcounts[i]
requests.append(self.isend(sendbuf[off : off + count], dest=i, tag=10))
off += count
return asyncio.gather(*requests)
def _send(self, ep, msg, tag, _request):
return ep.send(msg, tag=tag, force_tag=True)
def _recv(self, msg, tag, _request):
return ucp.recv(msg, tag=tag)
@nvtx.annotate(domain="MERGE")
def wait(self, request):
return self.event_loop.run_until_complete(request)
@nvtx.annotate(domain="MERGE")
def waitall(self, requests):
return self.event_loop.run_until_complete(asyncio.gather(*requests))
class UCXCommunicator(CommunicatorBase):
@staticmethod
def _callback(ucx_req, exception, msg, req):
assert exception is None
req.n -= 1
@nvtx.annotate(domain="MERGE", message="UCXPy-init")
def __init__(self, comm: MPI.Intracomm):
super().__init__(comm)
ctx = ucx_api.UCXContext(feature_flags=(ucx_api.Feature.TAG,))
self.worker = ucx_api.UCXWorker(ctx)
self.address = self.worker.get_address()
buf = np.array(self.address)
addresses = np.empty((comm.size, self.address.length), dtype=buf.dtype)
comm.Allgather(buf, addresses)
self.endpoints = tuple(
ucx_api.UCXEndpoint.create_from_worker_address(
self.worker,
ucx_api.UCXAddress.from_buffer(address),
endpoint_error_handling=True,
)
for address in addresses
)
def _send(self, ep, msg, tag, request):
req = request or Request()
if (
ucx_api.tag_send_nb(
ep,
msg,
msg.nbytes,
tag,
cb_func=UCXCommunicator._callback,
cb_args=(msg, req),
)
is not None
):
req.n += 1
return req
def _recv(self, msg, tag, request):
req = request or Request()
if (
ucx_api.tag_recv_nb(
self.worker,
msg,
msg.nbytes,
tag,
cb_func=UCXCommunicator._callback,
cb_args=(msg, req),
)
is not None
):
req.n += 1
return req
@nvtx.annotate(domain="MERGE")
def wait(self, request):
while request.n > 0:
self.worker.progress()
@nvtx.annotate(domain="MERGE")
def waitall(self, requests):
while any(r.n > 0 for r in requests):
self.worker.progress()
@nvtx.annotate(domain="MERGE")
def ialltoall(self, sendbuf: np.ndarray, recvbuf: np.ndarray):
req = Request()
for i in range(self.size):
req = self.irecv(recvbuf[i, ...], source=i, tag=10, request=req)
for i in range(self.size):
req = self.isend(sendbuf[i, ...], dest=i, tag=10, request=req)
return req
@nvtx.annotate(domain="MERGE")
def ialltoallv(self, sendbuf, sendcounts, recvbuf, recvcounts):
req = Request()
off = 0
for i in range(self.size):
count = recvcounts[i]
req = self.irecv(recvbuf[off : off + count], source=i, tag=10, request=req)
off += count
off = 0
for i in range(self.size):
count = sendcounts[i]
req = self.isend(sendbuf[off : off + count], dest=i, tag=10, request=req)
off += count
return req
class MPICommunicator(CommunicatorBase):
def __init__(self, comm: MPI.Intracomm):
self.mpicomm = comm
def ialltoall(self, send, recv):
return (self.mpicomm.Ialltoall(send, recv),)
def ialltoallv(self, sendbuf, sendcounts, recvbuf, recvcounts):
return self.mpicomm.Ialltoallv((sendbuf, sendcounts), (recvbuf, recvcounts))
def wireup(self):
self.mpicomm.Barrier()
def wait(self, request):
return MPI.Request.Wait(request)
def waitall(self, requests):
return MPI.Request.Waitall(requests)
def __getattr__(self, name):
try:
return getattr(self.mpicomm, name)
except AttributeError:
raise AttributeError(f"No support for {name}")
@nvtx.annotate(domain="MERGE")
def initialize_rmm(device: int):
# Work around cuda-python initialization bugs
from rmm.allocators.cupy import rmm_cupy_allocator
_, dev = cudart.cudaGetDevice()
cuda.cuDevicePrimaryCtxRelease(dev)
cuda.cuDevicePrimaryCtxReset(dev)
cudart.cudaSetDevice(device)
# It should be possible to just do
# cudart.cudaSetDevice(device)
# but this doesn't setup cudart.cudaGetDevice() correctly right now
rmm.reinitialize(
pool_allocator=True,
managed_memory=False,
devices=device,
)
cp.cuda.set_allocator(rmm_cupy_allocator)
@nvtx.annotate(domain="MERGE")
def build_dataframes(
comm: CommunicatorBase,
chunk_size: int,
match_fraction: float,
) -> Tuple[cudf.DataFrame, cudf.DataFrame]:
cp.random.seed(10)
rng = cp.random
rank = comm.rank
size = comm.size
start = chunk_size * rank
stop = start + chunk_size
left = cudf.DataFrame(
{
"key": cp.arange(start, stop, dtype=np.int64),
"payload": cp.arange(start, stop, dtype=np.int64),
}
)
piece_size = chunk_size // size
piece_size_used = max(int(piece_size * match_fraction), 1)
arrays = []
for i in range(size):
start = chunk_size * i + piece_size * rank
stop = start + piece_size
arrays.append(cp.arange(start, stop, dtype=np.int64))
key_match = cp.concatenate(
[rng.permutation(array)[:piece_size_used] for array in arrays], axis=0
)
missing = chunk_size - key_match.shape[0]
start = chunk_size * size + chunk_size * rank
stop = start + missing
key_no_match = cp.arange(start, stop, dtype=np.int64)
key = cp.concatenate([key_match, key_no_match], axis=0)
right = cudf.DataFrame(
{
"key": rng.permutation(key),
"payload": cp.arange(
chunk_size * rank, chunk_size * (rank + 1), dtype=np.int64
),
}
)
return (left, right)
@nvtx.annotate(domain="MERGE")
def partition_by_hash(
df: cudf.DataFrame,
npartitions: int,
) -> Tuple[cudf.DataFrame, np.ndarray]:
hash_partition = cudf._lib.hash.hash_partition
columns = ["key"]
key_indices = [df._column_names.index(k) for k in columns]
output_columns, offsets = hash_partition([*df._columns], key_indices, npartitions)
out_df = cudf.DataFrame(dict(zip(df._column_names, output_columns)))
counts = np.concatenate([np.diff(offsets), [len(out_df) - offsets[-1]]]).astype(
np.int32
)
return out_df, counts
@nvtx.annotate(domain="MERGE")
def exchange_by_hash_bucket(
comm: CommunicatorBase,
left: cudf.DataFrame,
right: cudf.DataFrame,
) -> Tuple[cudf.DataFrame, cudf.DataFrame]:
left_send_df, left_sendcounts = partition_by_hash(left, comm.size)
nvtx.push_range(domain="MERGE", message="Allocate left")
left_recvcounts = np.zeros(comm.size, dtype=np.int32)
comm.wait(comm.ialltoall(left_sendcounts, left_recvcounts))
nrows = left_recvcounts.sum()
left_recv_df = cudf.DataFrame(
{name: cp.empty(nrows, dtype=left[name].dtype) for name in left.columns}
)
nvtx.pop_range(domain="MERGE")
requests = list(
comm.ialltoallv(
left_send_df[name].values,
left_sendcounts,
left_recv_df[name].values,
left_recvcounts,
)
for name in left_send_df.columns
)
right_send_df, right_sendcounts = partition_by_hash(right, comm.size)
nvtx.push_range(domain="MERGE", message="Allocate right")
right_recvcounts = np.zeros(comm.size, dtype=np.int32)
comm.wait(comm.ialltoall(right_sendcounts, right_recvcounts))
nrows = right_recvcounts.sum()
right_recv_df = cudf.DataFrame(
{name: cp.empty(nrows, dtype=right[name].dtype) for name in right.columns}
)
nvtx.pop_range(domain="MERGE")
requests.extend(
comm.ialltoallv(
right_send_df[name].values,
right_sendcounts,
right_recv_df[name].values,
right_recvcounts,
)
for name in right_send_df.columns
)
comm.waitall(requests)
return left_recv_df, right_recv_df
@nvtx.annotate(domain="MERGE")
def distributed_join(
comm: CommunicatorBase,
left: cudf.DataFrame,
right: cudf.DataFrame,
) -> cudf.DataFrame:
left, right = exchange_by_hash_bucket(comm, left, right)
nvtx.push_range(domain="MERGE", message="cudf_merge")
val = left.merge(right, on="key")
nvtx.pop_range(domain="MERGE")
return val
def sync_print(comm: CommunicatorBase, val: Any) -> None:
if comm.rank == 0:
print(f"[{comm.rank}]\n{val}", flush=True)
for source in range(1, comm.size):
val = comm.recv(source=source)
print(f"[{source}]\n{val}", flush=True)
else:
comm.send(f"{val}", dest=0)
def one_print(comm: CommunicatorBase, val: Any) -> None:
if comm.rank == 0:
print(f"{val}", flush=True)
@nvtx.annotate(domain="MERGE")
def bench_once(
comm: CommunicatorBase,
left: cudf.DataFrame,
right: cudf.DataFrame,
) -> float:
start = MPI.Wtime()
_ = distributed_join(comm, left, right)
end = MPI.Wtime()
val = np.array(end - start, dtype=float)
comm.Allreduce(MPI.IN_PLACE, val, op=MPI.MAX)
return float(val)
class CommunicatorType(str, Enum):
MPI = "mpi"
UCXPY_ASYNC = "ucxpy-asyncio"
UCXPY_NB = "ucxpy-nb"
def main(
rows_per_rank: int = typer.Option(
1000, help="Number of dataframe rows on each rank"
),
match_fraction: float = typer.Option(
0.3, help="Fraction of rows that should match"
),
communicator_type: CommunicatorType = typer.Option(
CommunicatorType.UCXPY_NB, help="Which communicator to use"
),
warmup_iterations: int = typer.Option(
2, help="Number of warmup iterations that are not benchmarked"
),
iterations: int = typer.Option(10, help="Number of iterations to benchmark"),
gpus_per_node: Optional[int] = typer.Option(
None,
help="Number of GPUs per node, used to assign MPI ranks to GPUs, "
"if not provided will use cuDeviceGetCount",
),
):
global ucp
mpicomm = MPI.COMM_WORLD
cuda.cuInit(0)
if gpus_per_node is None:
gpus_per_node: int
err, gpus_per_node = cuda.cuDeviceGetCount()
if err != 0:
raise RuntimeError("Can't get device count")
initialize_rmm(mpicomm.rank % gpus_per_node)
# Must happen after initializing RMM (which sets up device contexts)
if communicator_type != CommunicatorType.MPI:
import ucp
ucp.init()
if communicator_type == CommunicatorType.UCXPY_ASYNC:
comm = AsyncIOCommunicator(mpicomm)
elif communicator_type == CommunicatorType.UCXPY_NB:
comm = UCXCommunicator(mpicomm)
elif communicator_type == CommunicatorType.MPI:
comm = MPICommunicator(mpicomm)
else:
raise ValueError(f"Unsupported communicator type {communicator_type}")
start = MPI.Wtime()
left, right = build_dataframes(comm, rows_per_rank, match_fraction)
end = MPI.Wtime()
duration = np.asarray(end - start, dtype=float)
comm.Allreduce(MPI.IN_PLACE, duration, op=MPI.MAX)
one_print(comm, f"Dataframe build: {duration:.2g}s")
start = MPI.Wtime()
comm.wireup()
end = MPI.Wtime()
duration = np.asarray(end - start, dtype=float)
comm.Allreduce(MPI.IN_PLACE, duration, op=MPI.MAX)
one_print(comm, f"Wireup time: {duration:.2g}s")
for _ in range(warmup_iterations):
bench_once(comm, left, right)
comm.Barrier()
total = 0
nvtx.push_range(domain="MERGE", message="Benchmarking")
for _ in range(iterations):
duration = bench_once(comm, left, right)
one_print(comm, f"Total join time: {duration:.2g}s")
total += duration
nvtx.pop_range(domain="MERGE")
def nbytes(df):
size = np.asarray(len(df) * sum(t.itemsize for t in df.dtypes), dtype=np.int64)
comm.Allreduce(MPI.IN_PLACE, size, op=MPI.SUM)
return size
data_volume = nbytes(left) + nbytes(right)
mean_duration = total / iterations
throughput = data_volume / mean_duration
one_print(comm, "Dataframe type: cudf")
one_print(comm, f"Rows per rank: {rows_per_rank}")
one_print(comm, f"Communicator type: {communicator_type}")
one_print(comm, f"Data processed: {format_bytes(data_volume)}")
one_print(comm, f"Mean join time: {mean_duration:.2g}s")
one_print(comm, f"Throughput: {format_bytes(throughput)}/s")
one_print(comm, f"Throughput/rank: {format_bytes(throughput/comm.size)}/s/rank")
one_print(comm, f"Total ranks: {comm.size}")
if __name__ == "__main__":
if "--help" in sys.argv:
# Only print help on a single rank.
if MPI.COMM_WORLD.rank == 0:
typer.run(main)
else:
typer.run(main)
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/src
|
rapidsai_public_repos/dask-cuda-benchmarks/src/distributed_merge/README.md
|
## Overview
This implements some distributed memory joins using CUDF and Pandas
built on top of MPI and UCX-Py.
The CUDF implementation uses MPI for UCX bringup (so UCX must be
CUDA-aware, but the MPI need not be), but then the core all-to-all is
performed using UCX-Py calls.
The Pandas implementation just uses MPI.
### Dependencies
- `ucx-py`
- `cudf`
- `rmm`
- `mpi4py`
- `cupy`
- `numpy`
- `pandas`
- `typer`
- `nvtx` (optional, for hookup with [Nsight
Systems](https://developer.nvidia.com/nsight-systems))
## Algorithm
A straightforward in-core implementation:
1. Bucket the rows of the dataframe by a deterministic hash (one
bucket per rank)
2. Exchange sizes and data using `MPI_Alltoall`-like and
`MPI_Alltoallv`-like patterns respectively. The pandas
implementation uses the MPI collectives directly; the cudf
implementation can use MPI (if it is CUDA-aware), or else uses UCX
non-blocking point-to-point tagged send/receives.
3. Locally merge exchanged data
A more complicated approach is described in [Gao and Sakharnykh,
_Scaling Joins to a Thousand GPUs_, ADMS
2021](http://www.adms-conf.org/2021-camera-ready/gao_adms21.pdf).
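## Running
A minimal sketch of how one might launch the benchmarks (the flag names
follow the `typer` options in `cudf_merge.py` and `pandas_merge.py`;
adjust the launcher, rank count and problem sizes to your system):
```
# cudf + UCX-Py non-blocking communicator, one rank per GPU
mpirun -np 8 python cudf_merge.py \
    --rows-per-rank 1000000 \
    --match-fraction 0.3 \
    --communicator-type ucxpy-nb \
    --iterations 10

# pandas + MPI
mpirun -np 8 python pandas_merge.py \
    --rows-per-rank 1000000 \
    --match-fraction 0.3 \
    --iterations 10
```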
| 0 |
rapidsai_public_repos/dask-cuda-benchmarks/src
|
rapidsai_public_repos/dask-cuda-benchmarks/src/distributed_merge/pandas_merge.py
|
# Copyright (c) 2022, NVIDIA CORPORATION.
# SPDX-License-Identifier: Apache-2.0
import sys
from typing import Any, Tuple
import numpy as np
import pandas as pd
import typer
from mpi4py import MPI
from pandas._libs import algos as libalgos
from pandas.core.util.hashing import hash_pandas_object
try:
import nvtx
except ImportError:
class nvtx:
@staticmethod
def noop(*args, **kwargs):
pass
push_range = noop
pop_range = noop
@staticmethod
def annotate(*args, **kwargs):
def noop_wrapper(fn):
return fn
return noop_wrapper
DataFrame = pd.DataFrame
def format_bytes(b):
return f"{b/1e9:.2f} GB"
@nvtx.annotate(domain="MERGE")
def build_dataframes(
comm: MPI.Intracomm,
chunk_size: int,
match_fraction: float,
) -> Tuple[DataFrame, DataFrame]:
np.random.seed(10)
rng = np.random
rank = comm.rank
size = comm.size
start = chunk_size * rank
stop = start + chunk_size
left = pd.DataFrame(
{
"key": np.arange(start, stop, dtype=np.int64),
"payload": np.arange(start, stop, dtype=np.int64),
}
)
piece_size = chunk_size // size
piece_size_used = max(int(piece_size * match_fraction), 1)
arrays = []
for i in range(size):
start = chunk_size * i + piece_size * rank
stop = start + piece_size
arrays.append(np.arange(start, stop, dtype=np.int64))
key_match = np.concatenate(
[rng.permutation(array)[:piece_size_used] for array in arrays], axis=0
)
missing = chunk_size - key_match.shape[0]
start = chunk_size * size + chunk_size * rank
stop = start + missing
key_no_match = np.arange(start, stop, dtype=np.int64)
key = np.concatenate([key_match, key_no_match], axis=0)
right = pd.DataFrame(
{
"key": rng.permutation(key),
"payload": np.arange(
chunk_size * rank, chunk_size * (rank + 1), dtype=np.int64
),
}
)
return (left, right)
@nvtx.annotate(domain="MERGE")
def partition_by_hash(
df: DataFrame,
npartitions: int,
) -> Tuple[DataFrame, np.ndarray]:
indexer, locs = libalgos.groupsort_indexer(
(hash_pandas_object(df["key"], index=False) % npartitions)
.astype(np.int32)
.values.view()
.astype(np.intp, copy=False),
npartitions,
)
return df.take(indexer), locs[1:].astype(np.int32)
@nvtx.annotate(domain="MERGE")
def exchange_by_hash_bucket(
comm: MPI.Intracomm,
left: DataFrame,
right: DataFrame,
) -> Tuple[DataFrame, DataFrame]:
left_send_df, left_sendcounts = partition_by_hash(left, comm.size)
nvtx.push_range(domain="MERGE", message="Allocate left")
left_recvcounts = np.zeros(comm.size, dtype=np.int32)
comm.Alltoall(left_sendcounts, left_recvcounts)
nrows = left_recvcounts.sum()
left_recv_df = pd.DataFrame(
{name: np.empty(nrows, dtype=left[name].dtype) for name in left.columns}
)
nvtx.pop_range(domain="MERGE")
requests = list(
comm.Ialltoallv(
(left_send_df[name].values, left_sendcounts),
(left_recv_df[name].values, left_recvcounts),
)
for name in left_send_df.columns
)
right_send_df, right_sendcounts = partition_by_hash(right, comm.size)
nvtx.push_range(domain="MERGE", message="Allocate right")
right_recvcounts = np.zeros(comm.size, dtype=np.int32)
comm.Alltoall(right_sendcounts, right_recvcounts)
nrows = right_recvcounts.sum()
right_recv_df = pd.DataFrame(
{name: np.empty(nrows, dtype=right[name].dtype) for name in right.columns}
)
nvtx.pop_range(domain="MERGE")
requests.extend(
comm.Ialltoallv(
(right_send_df[name].values, right_sendcounts),
(right_recv_df[name].values, right_recvcounts),
)
for name in right_send_df.columns
)
MPI.Request.Waitall(requests)
return left_recv_df, right_recv_df
@nvtx.annotate(domain="MERGE")
def distributed_join(
comm: MPI.Intracomm,
left: DataFrame,
right: DataFrame,
) -> DataFrame:
left, right = exchange_by_hash_bucket(comm, left, right)
nvtx.push_range(domain="MERGE", message="pandas_merge")
val = left.merge(right, on="key")
nvtx.pop_range(domain="MERGE")
return val
def sync_print(comm: MPI.Intracomm, val: Any) -> None:
if comm.rank == 0:
print(f"[{comm.rank}]\n{val}", flush=True)
for source in range(1, comm.size):
val = comm.recv(source=source)
print(f"[{source}]\n{val}", flush=True)
else:
comm.send(f"{val}", dest=0)
def one_print(comm: MPI.Intracomm, val: Any) -> None:
if comm.rank == 0:
print(f"{val}", flush=True)
@nvtx.annotate(domain="MERGE")
def bench_once(
comm: MPI.Intracomm,
left: DataFrame,
right: DataFrame,
) -> float:
start = MPI.Wtime()
_ = distributed_join(comm, left, right)
end = MPI.Wtime()
val = np.array(end - start, dtype=float)
comm.Allreduce(MPI.IN_PLACE, val, op=MPI.MAX)
return float(val)
def main(
rows_per_rank: int = typer.Option(
1000, help="Number of dataframe rows on each rank"
),
match_fraction: float = typer.Option(
0.3, help="Fraction of rows that should match"
),
warmup_iterations: int = typer.Option(
2, help="Number of warmup iterations that are not benchmarked"
),
iterations: int = typer.Option(10, help="Number of iterations to benchmark"),
):
comm = MPI.COMM_WORLD
start = MPI.Wtime()
left, right = build_dataframes(comm, rows_per_rank, match_fraction)
end = MPI.Wtime()
duration = comm.allreduce(end - start, op=MPI.MAX)
one_print(comm, f"Dataframe build: {duration:.2g}s")
for _ in range(warmup_iterations):
bench_once(comm, left, right)
comm.Barrier()
total = 0
nvtx.push_range(domain="MERGE", message="Benchmarking")
for _ in range(iterations):
duration = bench_once(comm, left, right)
one_print(comm, f"Total join time: {duration:.2g}s")
total += duration
nvtx.pop_range(domain="MERGE")
def nbytes(df):
return comm.allreduce(len(df) * sum(t.itemsize for t in df.dtypes), op=MPI.SUM)
data_volume = nbytes(left) + nbytes(right)
mean_duration = total / iterations
throughput = data_volume / mean_duration
one_print(comm, "Dataframe type: pandas")
one_print(comm, f"Rows per rank: {rows_per_rank}")
one_print(comm, f"Data processed: {format_bytes(data_volume)}")
one_print(comm, f"Mean join time: {mean_duration:.2g}s")
one_print(comm, f"Throughput: {format_bytes(throughput)}/s")
one_print(comm, f"Throughput/rank: {format_bytes(throughput/comm.size)}/s/rank")
one_print(comm, f"Total ranks: {comm.size}")
if __name__ == "__main__":
if "--help" in sys.argv:
# Only print help on a single rank.
if MPI.COMM_WORLD.rank == 0:
typer.run(main)
else:
typer.run(main)
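# Example invocation (illustrative only; the script name and rank count are
# assumptions, not part of the original source):
#   mpirun -np 4 python join_benchmark.py --rows-per-rank 1000000 --iterations 10
# typer derives the CLI flags from the option names in main()
# (rows_per_rank -> --rows-per-rank, etc.).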
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/asvdb/README.md
|
# ASVDb
Python and command-line interface to an ASV "database", as described [here](https://asv.readthedocs.io/en/stable/dev.html?highlight=%24results_dir#benchmark-suite-layout-and-file-formats).
`asvdb` can be used for creating and updating an ASV database from another benchmarking tool, notebook, test code, etc., where the results in that database can then be viewed in ASV, just as if the benchmarks were written directly for ASV. Likewise, `asvdb` can be used to read an existing ASV database and extract results for use in other tools or reporting mechanisms, such as a spreadsheet or other analysis tool.
The `asvdb` package includes both the Python API and a command-line tool for easy use from shell scripts if necessary.

## Examples:
### `asvdb` Python library - Read results from the "database"
```
>>> import asvdb
>>> db = asvdb.ASVDb("/path/to/benchmarks/asv")
>>>
>>> results = db.getResults() # Get a list of (BenchmarkInfo obj, [BenchmarkResult obj, ...]) tuples.
>>> len(results)
9
>>> firstResult = results[0]
>>> firstResult[0]
BenchmarkInfo(machineName='my_machine', cudaVer='9.2', osType='debian', pythonVer='3.6', commitHash='f6242e77bf32ed12c78ddb3f9a06321b2fd11806', commitTime=1589322352000, gpuType='Tesla V100-SXM2-32GB', cpuType='x86_64', arch='x86_64', ram='540954406912')
>>> len(firstResult[1])
132
>>> firstResult[1][0]
BenchmarkResult(funcName='bench_algos.bench_create_edgelist_time', result=0.46636209040880205, argNameValuePairs=[('csvFileName', '../datasets/csv/undirected/hollywood.csv')], unit='seconds')
>>>
```
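Building on `getResults()`, the following is a small sketch, not part of `asvdb` itself, that averages the recorded result per benchmark function across all runs (the db path is a placeholder):
```
from collections import defaultdict
from asvdb import ASVDb

db = ASVDb("/path/to/benchmarks/asv")
perFunc = defaultdict(list)
# getResults() returns a list of (BenchmarkInfo obj, [BenchmarkResult obj, ...]) tuples
for bInfo, bResults in db.getResults():
    for r in bResults:
        if r.result is not None:  # a failed run may have stored a null result
            perFunc[r.funcName].append(r.result)
for funcName, values in sorted(perFunc.items()):
    print(f"{funcName}: mean={sum(values)/len(values):.4g} over {len(values)} runs")
```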
### `asvdb` Python library - Add benchmark results to the "database"
```
import platform
import psutil
from asvdb import utils, BenchmarkInfo, BenchmarkResult, ASVDb
# Create a BenchmarkInfo object describing the benchmarking environment.
# This can/should be reused when adding multiple results from the same environment.
uname = platform.uname()
(commitHash, commitTime) = utils.getCommitInfo() # gets commit info from CWD by default
bInfo = BenchmarkInfo(machineName=uname.machine,
cudaVer="10.0",
osType="%s %s" % (uname.system, uname.release),
pythonVer=platform.python_version(),
commitHash=commitHash,
commitTime=commitTime,
gpuType="n/a",
cpuType=uname.processor,
arch=uname.machine,
ram="%d" % psutil.virtual_memory().total)
# Create result objects for each benchmark result. Each result object
# represents a result from a single benchmark run, including any specific
# parameter settings the benchmark used (i.e. arg values to a benchmark function)
bResult1 = BenchmarkResult(funcName="myAlgoBenchmarkFunc",
argNameValuePairs=[
("iterations", 100),
("dataset", "januaryData")
],
result=301.23)
bResult2 = BenchmarkResult(funcName="myAlgoBenchmarkFunc",
argNameValuePairs=[
("iterations", 100),
("dataset", "februaryData")
],
result=287.93)
# Create an interface to an ASV "database" to write the results to.
(repo, branch) = utils.getRepoInfo() # gets repo info from CWD by default
db = ASVDb(dbDir="/datasets/benchmarks/asv",
repo=repo,
branches=[branch])
# Each addResult() call adds the result and creates/updates all JSON files
db.addResult(bInfo, bResult1)
db.addResult(bInfo, bResult2)
```
This results in an `asv.conf.json` file in `/datasets/benchmarks/asv` containing:
```
{
"results_dir": "results",
"html_dir": "html",
"repo": <the repo URL>,
"branches": [
<the branch name>
],
"version": 1.0
}
```
and `results/benchmarks.json` containing:
```
{
"myAlgoBenchmarkFunc": {
"code": "myAlgoBenchmarkFunc",
"name": "myAlgoBenchmarkFunc",
"param_names": [
"iterations",
"dataset"
],
"params": [
[
100,
100
],
[
"januaryData",
"februaryData"
]
],
"timeout": 60,
"type": "time",
"unit": "seconds",
"version": 2
},
"version": 2
}
```
a `<machine>/machine.json` file containing:
```
{
"arch": "x86_64",
"cpu": "x86_64",
"gpu": "n/a",
"cuda": "10.0",
"machine": "x86_64",
"os": "Linux 4.4.0-146-generic",
"ram": "540955688960",
"version": 1
}
```
and a `<machine>/<commit hash>.json` file containing:
```
{
"params": {
"gpu": "n/a",
"cuda": "10.0",
"machine": "x86_64",
"os": "Linux 4.4.0-146-generic",
"python": "3.7.1"
},
"requirements": {},
"results": {
"myAlgoBenchmarkFunc": {
"params": [
[
100,
100
],
[
"januaryData",
"februaryData"
]
],
"result": [
301.23,
287.93
]
}
},
"commit_hash": "c551640ca829c32f520771306acc2d177398b721",
"date": "156812889600",
"python": "3.7.1",
"version": 1
}
```
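For parameterized benchmarks, ASV stores the parameter values as per-parameter lists and the results as a flat list ordered by the cartesian product of those lists. The sketch below is illustrative only (values are simplified from the example above), but the ordering convention matches how `asvdb` reads results back in (see `__readResults` in `asvdb/asvdb.py`):
```
import itertools

# result[i] corresponds to the i-th combination in the cartesian product
# of the per-parameter value lists.
param_names = ["iterations", "dataset"]
param_values = [[100], ["januaryData", "februaryData"]]
results = [301.23, 287.93]

for combo, result in zip(itertools.product(*param_values), results):
    print(dict(zip(param_names, combo)), "->", result)
```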
### `asvdb` CLI tool
- Print the number of results in the database
```
user@machine> asvdb --read-from=./my_asv_dir \
--exec-once="i=0" \
--exec="i+=1" \
--exec-once="print(i)"
2040
```
This uses `--exec-once` to initialize a variable `i` to 0, `--exec` to execute `i+=1` for each row (result), then another `--exec-once` to print the final value of `i`. Unlike `--exec`, `--exec-once` only executes once, as opposed to once-per-row.
An easier way using shell tools is to just print any key from a row (`funcName` for example) and count the number of lines printed. This works because `--print` is executed for every row, and each row represents one unique result:
```
user@machine> asvdb --read-from=./my_asv_dir --print="funcName" | wc -l
2040
```
- Check which branches are in the database
```
user@machine> asvdb --read-from=./my_asv_dir \
--exec-once="branches=set()" \
--exec="branches.add(branch)" \
--exec-once="print(branches)"
{'branch-0.14', 'branch-0.15'}
```
or slightly easier using shell tools:
```
user@machine> asvdb --read-from=./my_asv_dir --print="branch" | sort -u
branch-0.14
branch-0.15
```
In the above example, the `sort -u` is used to limit the output to only unique items. As mentioned, `asvdb` actions (except `--exec-once`) operate on every row, so the print expression will be applied to each row, resulting in 2040 prints (one per result).
- Get the results for a specific benchmark, with specific param values, for all commits. **This is a quick and easy way to check for a regression right from your shell!**
```
user@machine> asvdb --read-from=./my_asv_dir \
--filter="funcName=='bench_algos.bench_pagerank_time' \
and argNameValuePairs==[('dataset', '../datasets/csv/directed/cit-Patents.csv'), ('managed_mem', 'False'), ('pool_allocator', 'True')]" \
--print="commitHash, result, unit"
c29c3e359d1d945ef32b6867809a331f460d3e46 0.09114686909640984 seconds
8f077b8700cc5d1b4632c429557eaed6057e03a1 0.09145867270462334 seconds
ff154939008654e62b6696cee825dc971c544b5b 0.08477148889370165 seconds
da0a9f8e66696a4c6683055bc22c7378b7430041 0.08885913200959392 seconds
e5ae3c3fcd1f414dea2be83e0564f09fe3365ea9 0.08390960488084279 seconds
```
Using `--exec` actions and some python-fu, `asvdb` itself can even show the regressions:
```
user@machine> asvdb --read-from=./my_asv_dir \
--exec-once="regressions=[]; prev=0" \
--filter="funcName=='bench_algos.bench_pagerank_time' \
and argNameValuePairs==[('dataset', '../datasets/csv/directed/cit-Patents.csv'), ('managed_mem', 'False'), ('pool_allocator', 'True')]" \
--exec="t=prev; prev=result; d=result-t; regressions=regressions+[(commitHash, d)] if d>0.0001 else regressions" \
--exec-once="print('Regressions:\n'+'\n'.join([f'commit: {r[0]} delta: {r[1]}' for r in regressions[1:]]))"
Regressions:
commit: 8f077b8700cc5d1b4632c429557eaed6057e03a1 delta: 0.00031180360821349284
commit: da0a9f8e66696a4c6683055bc22c7378b7430041 delta: 0.00408764311589227
```
_Note: since `asvdb` reads and writes databases that are (obviously) compatible with [airspeed velocity (ASV)](https://github.com/airspeed-velocity/asv), the `asv` CLI is another excellent option for finding regressions from the command line, as described [here](https://asv.readthedocs.io/en/stable/commands.html#asv-compare)_
- Get the requirements (dependencies) used for a specific commit
```
user@machine> asvdb --read-from=./my_asv_dir \
--filter="commitHash=='c29c3e359d1d945ef32b6867809a331f460d3e46'" \
--print="requirements"|sort -u
{'cudf': '0.14.200528', 'packageA': '0.0.6', 'packageB': '0.9.5'}
```
Even though this limits the rows to just one commit (by using the `--filter` action), there are still several results from the various runs done on that commit, hence the `sort -u`.
- Change the unit string for specific benchmarks. This example first prints the current unit (seconds), then changes it to milliseconds, then prints it again to confirm the change:
```
user@machine> asvdb --read-from=./my_asv_dir \
--filter="funcName=='bench_algos.bench_pagerank_time'" \
--print=unit|sort -u
seconds
user@machine> asvdb --read-from=./my_asv_dir \
--filter="funcName=='bench_algos.bench_pagerank_time'" \
--exec="unit='milliseconds'" \
--write-to=./my_asv_dir
user@machine> asvdb --read-from=./my_asv_dir \
--filter="funcName=='bench_algos.bench_pagerank_time'" \
--print=unit|sort -u
milliseconds
```
_Note: changing the unit to `milliseconds` here is just to illustrate how `asvdb` can update the database. Not all reporting tools that read from the database may recognize different unit strings._
- Change an individual result in the database, in this case, the latest result for `pagerank_gpumem` to the value `1234567` for only the benchmark run on `ubuntu-16.04`, python `3.6`, CUDA `10.1`, with a specific arg combination. This example uses individual `--filter` actions to reduce the rows down to a single row so the `--exec` applies to only one, then it writes the row back to the same database it read from:
```
user@machine> asvdb --read-from=cugraph-e2e \
--exec-once="latest=0" \
--exec="latest=max(latest, commitTime)" \
\
--filter="commitTime==latest" \
--filter="osType=='ubuntu-16.04'" \
--filter="pythonVer=='3.6'" \
--filter="cudaVer=='10.1'" \
--filter="funcName=='bench_algos.bench_pagerank_gpumem'" \
--filter="argNameValuePairs==[('dataset', '../datasets/csv/undirected/hollywood.csv'), ('managed_mem', 'True'), ('pool_allocator', 'True')]" \
\
--print="'Modifying result for:', commitHash, funcName, osType, pythonVer, cudaVer" \
\
--exec="result=1234567" \
\
--write-to=cugraph-e2e
Modifying result for: e630ffb768af8af95d189ba5775ce6dad38476a2 bench_algos.bench_pagerank_gpumem ubuntu-16.04 3.6 10.1
```
The initial `--exec-once` and `--exec` actions find the latest commit for all rows and save it to a new variable named `latest`, which is then used in a later `--filter` action. The individual `--filter` actions make the command much easier to read, but are less efficient than combining them into a single filter expression. This is because each `--filter` action is applied to each row, and the resulting filtered list of rows is then passed to the next action to run on each of those rows, and so on. Instead, the `--filter` expressions could be combined into a single but harder-to-read expression that only makes a single pass through all the rows:
```
asvdb --read-from=cugraph-e2e --exec-once="latest=0" --exec="latest=max(latest, commitTime)" --filter="osType=='ubuntu-16.04' and pythonVer=='3.6' and cudaVer=='10.1' and commitTime==latest and funcName=='bench_algos.bench_pagerank_gpumem' and argNameValuePairs==[('dataset', '../datasets/csv/undirected/hollywood.csv'), ('managed_mem', 'True'), ('pool_allocator', 'True')]" --print="'Modifying result for:', commitHash, funcName, osType, pythonVer, cudaVer" --exec="result=0" --write-to=cugraph-e2e
Modifying result for: e630ffb768af8af95d189ba5775ce6dad38476a2 bench_algos.bench_pagerank_gpumem ubuntu-16.04 3.6 10.1
```
FWIW, the performance impact of the former is probably negligible and worth the improved readability/maintainability. For instance, on my system when using the `time` command for both examples, there was only a 0.124 second difference.
- Read an existing database and create a new database containing only the latest commit from branch-0.14 and branch-0.15
```
user@machine> asvdb --read-from=./my_asv_dir \
--print="commitTime, branch, commitHash"|sort -u
1591733122000 branch-0.14 da0a9f8e66696a4c6683055bc22c7378b7430041
1591733228000 branch-0.14 e5ae3c3fcd1f414dea2be83e0564f09fe3365ea9
1591733272000 branch-0.15 ff154939008654e62b6696cee825dc971c544b5b
1591733292000 branch-0.14 c29c3e359d1d945ef32b6867809a331f460d3e46
1591738722000 branch-0.15 8f077b8700cc5d1b4632c429557eaed6057e03a1
user@machine> asvdb --read-from=./my_asv_dir \
--exec-once="latest={}" \
--exec="latest[branch]=max(commitTime, latest.get(branch,0))" \
\
--filter="branch in ['branch-0.14', 'branch-0.15'] and commitTime==latest[branch]" \
\
--write-to=./new_asv_dir
user@machine> asvdb --read-from=./new_asv_dir \
--print="commitTime, branch, commitHash"|sort -u
1591733292000 branch-0.14 c29c3e359d1d945ef32b6867809a331f460d3e46
1591738722000 branch-0.15 8f077b8700cc5d1b4632c429557eaed6057e03a1
```
In the above example, an existing database is read from and a new database is written to using only the latest commits for `branch-0.14` and `branch-0.15` from the existing db. This is done using several actions chained together (an equivalent Python-API sketch is shown after this list):
1) initialize a dict named `latest` used to hold the latest `commitTime` for each `branch`
2) evaluate each row to update `latest` for the row's `branch` with the `commitTime` of the (potentially) higher time value
3) filter the rows to include only branches that are `branch-0.14` or `branch-0.15` **and** have the latest `commitTime`
4) finally, write the resulting rows to the new database.
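The same result could also be produced with the `asvdb` Python API. The following is a rough sketch only (paths and branch names are taken from the example above; error handling is omitted):
```
from asvdb import ASVDb

src = ASVDb("./my_asv_dir")
src.loadConfFile()         # populates src.repo / src.branches
rows = src.getResults()    # [(BenchmarkInfo, [BenchmarkResult, ...]), ...]

# latest commitTime seen on each branch
latest = {}
for info, _ in rows:
    latest[info.branch] = max(info.commitTime, latest.get(info.branch, 0))

dst = ASVDb("./new_asv_dir", repo=src.repo, branches=src.branches)
for info, results in rows:
    if info.branch in ("branch-0.14", "branch-0.15") and info.commitTime == latest[info.branch]:
        dst.addResults(info, results)
```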
## `asvdb` CLI:
From the help:
```
usage: asvdb [-h] [--version] [--read-from PATH] [--list-keys] [--filter EXPR]
[--exec CMD] [--exec-once CMD] [--print PRINTEXPR]
[--write-to PATH]
Examine or update an ASV 'database' row-by-row.
optional arguments:
-h, --help show this help message and exit
--version Print the current version of asvdb and exit.
--read-from PATH Path to ASV db dir to read data from.
--list-keys List all keys found in the database to STDOUT.
--filter EXPR Action which filters the current results based on the
evaluation of EXPR.
--exec CMD Action which executes CMD on each of the current results.
--exec-once CMD Action which executes CMD once (is not executed for each
result).
--print PRINTEXPR Action which evaluates PRINTEXPR in a print() statement
for each of the current results.
--write-to PATH Path to ASV db dir to write data to. PATH is created if
it does not exist.
The database is read and each 'row' (an individual result and its context) has
the various expressions evaluated in the context of the row (see --list-keys for
all the keys that can be used in an expression/command). Each action can
potentially modify the list of rows for the next action. Actions can be chained
to perform complex queries or updates, and all actions are performed in the
order which they were specified on the command line.
The --exec-once action is an exception in that it does not execute on every row,
but instead only once in the context of the global namespace. This allows for
the creation of temp vars or other setup steps that can be used in
expressions/commands in subsequent actions. Like other actions, --exec-once can
be chained with other actions and called multiple times.
The final list of rows will be written to the destination database specified by
--write-to, if provided. If the path to the destination database does not exist,
it will be created. If the destination database does exist, it will be updated
with the results in the final list of rows.
Remember, an ASV database stores results based on the commitHash, so modifying
the commitHash for a result and writing it back to the same database results in a
new, *additional* result as opposed to a modified one. All updates to the
database specified by --write-to either modify an existing result or add new
results, and results cannot be removed from a database. In order to effectively
remove results, a user can --write-to a new database with only the results they
want, then replace the original with the new using file system commands (rm the
old one, mv the new one to the old one's name, etc.)
```
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/asvdb/CHANGELOG.md
|
# asvdb (unreleased)
## New Features
- ...
## Improvements
- ...
## Bug Fixes
- ...
# asvdb 0.3.3 (19 Jun 2020)
- Initial release
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/asvdb/build.sh
|
#!/bin/bash
set -e
UPLOAD_FILE=`conda build ./conda --output`
UPLOAD_FILES=$(echo ${UPLOAD_FILE}|sed -e 's/\-py[0-9][0-9]/\-py36/')
UPLOAD_FILES="${UPLOAD_FILES} $(echo ${UPLOAD_FILE}|sed -e 's/\-py[0-9][0-9]/\-py37/')"
UPLOAD_FILES="${UPLOAD_FILES} $(echo ${UPLOAD_FILE}|sed -e 's/\-py[0-9][0-9]/\-py38/')"
conda build --variants="{python: [3.6, 3.7, 3.8]}" ./conda
if [ "$1" = "--publish" ]; then
anaconda upload ${UPLOAD_FILES}
fi
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/asvdb/CONTRIBUTING.md
|
# Contributing to asvdb
If you are interested in contributing to asvdb, your contributions will fall
into three categories:
1. You want to report a bug, feature request, or documentation issue
- File an [issue](https://github.com/rapidsai/asvdb/issues/new/choose)
describing what you encountered or what you want to see changed.
- The RAPIDS team will evaluate the issues and triage them, scheduling
them for a release. If you believe the issue needs priority attention,
comment on the issue to notify the team.
2. You want to propose a new Feature and implement it
- Post about your intended feature, and we shall discuss the design and
implementation.
- Once we agree that the plan looks good, go ahead and implement it, using
the [code contributions](#code-contributions) guide below.
3. You want to implement a feature or bug-fix for an outstanding issue
- Follow the [code contributions](#code-contributions) guide below.
- If you need more context on a particular issue, please ask and we shall
provide.
## Code contributions
### Your first issue
1. Read the project's [README.md](https://github.com/rapidsai/asvdb/blob/main/README.md)
to learn how to setup the development environment
2. Find an issue to work on. The best way is to look for the [good first issue](https://github.com/rapidsai/asvdb/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
or [help wanted](https://github.com/rapidsai/asvdb/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) labels
3. Comment on the issue saying you are going to work on it
4. Code! Make sure to update unit tests!
5. When done, [create your pull request](https://github.com/rapidsai/asvdb/compare)
6. Verify that CI passes all [status checks](https://help.github.com/articles/about-status-checks/). Fix if needed
7. Wait for other developers to review your code and update code as needed
8. Once reviewed and approved, a RAPIDS developer will merge your pull request
Remember, if you are unsure about anything, don't hesitate to comment on issues
and ask for clarifications!
### Seasoned developers
Once you have gotten your feet wet and are more comfortable with the code, you
can look at the prioritized issues of our next release in our [project boards](https://github.com/rapidsai/asvdb/projects).
> **Pro Tip:** Always look at the release board with the highest number for
> issues to work on. This is where RAPIDS developers also focus their efforts.
Look at the unassigned issues, and find an issue you are comfortable with
contributing to. Start with _Step 3_ from above, commenting on the issue to let
others know you are working on it. If you have any questions related to the
implementation of the issue, ask them in the issue instead of the PR.
## Attribution
Portions adopted from https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/asvdb/setup.py
|
from setuptools import setup
setup(name="asvdb",
version="0.4.2",
packages=["asvdb"],
install_requires=["botocore", "boto3"],
description='ASV "database" interface',
entry_points={
"console_scripts": [
"asvdb = asvdb.__main__:main"
]
},
)
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/asvdb/LICENSE
|
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2020 NVIDIA Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/asvdb
|
rapidsai_public_repos/asvdb/tests/test_asvdb.py
|
from os import path
import os
import tempfile
import json
import threading
import time
import pytest
import boto3
datasetName = "dolphins.csv"
algoRunResults = [('loadDataFile', 3.2228727098554373),
('createGraph', 3.00713360495865345),
('pagerank', 3.00899268127977848),
('bfs', 3.004273353144526482),
('sssp', 3.004624705761671066),
('jaccard', 3.0025573652237653732),
('louvain', 3.32631026208400726),
('weakly_connected_components', 3.0034315641969442368),
('overlap', 3.002147899940609932),
('triangles', 3.2544921860098839),
('spectralBalancedCutClustering', 3.03329935669898987),
('spectralModularityMaximizationClustering', 3.011258183047175407),
('renumber', 3.001620553433895111),
('view_adj_list', 3.000927431508898735),
('degree', 3.0016251634806394577),
('degrees', None)]
repo = "myrepo"
branch = "my_branch"
commitHash = "809a1569e8a2ff138cdde4d9c282328be9dcad43"
commitTime = 1590007324
machineName = "my_machine"
def createAndPopulateASVDb(dbDir):
from asvdb import ASVDb, BenchmarkInfo
db = ASVDb(dbDir, repo, [branch])
bInfo = BenchmarkInfo(machineName=machineName,
cudaVer="9.2",
osType="linux",
pythonVer="3.6",
commitHash=commitHash,
commitTime=commitTime,
branch=branch,
gpuType="n/a",
cpuType="x86_64",
arch="my_arch",
ram="123456")
return addResultsForInfo(db, bInfo)
def addResultsForInfo(db, bInfo):
from asvdb import ASVDb, BenchmarkResult
for (algoName, exeTime) in algoRunResults:
bResult = BenchmarkResult(funcName=algoName,
argNameValuePairs=[("dataset", datasetName)],
result=exeTime)
db.addResult(bInfo, bResult)
return db
def test_addResult():
asvDir = tempfile.TemporaryDirectory()
db = createAndPopulateASVDb(asvDir.name)
asvDir.cleanup()
def test_addResults():
asvDir = tempfile.TemporaryDirectory()
from asvdb import ASVDb, BenchmarkInfo, BenchmarkResult
dbDir = asvDir.name
db = ASVDb(dbDir, repo, [branch])
bInfo = BenchmarkInfo(machineName=machineName,
cudaVer="9.2",
osType="linux",
pythonVer="3.6",
commitHash=commitHash,
commitTime=commitTime,
branch=branch,
gpuType="n/a",
cpuType="x86_64",
arch="my_arch",
ram="123456")
resultList = []
for (algoName, exeTime) in algoRunResults:
bResult = BenchmarkResult(funcName=algoName,
argNameValuePairs=[("dataset", datasetName)],
result=exeTime)
resultList.append(bResult)
db.addResults(bInfo, resultList)
# read back in and check
dbCheck = ASVDb(dbDir, repo, [branch])
retList = dbCheck.getResults()
assert len(retList) == 1
assert retList[0][0] == bInfo
assert len(retList[0][1]) == len(algoRunResults)
assert resultList == retList[0][1]
asvDir.cleanup()
def test_addResultWithRandomParamOrder():
"""
Ensures that parameterized results can be added in any order (instead of
assuming an order that matches the order of the computed cartesian product
of the params)
"""
from asvdb import ASVDb, BenchmarkInfo, BenchmarkResult
asvDir = tempfile.TemporaryDirectory()
dbDir = asvDir.name
db = ASVDb(dbDir, repo, [branch])
bInfo = BenchmarkInfo(machineName=machineName,
cudaVer="9.2",
osType="linux",
pythonVer="3.6",
commitHash=commitHash,
commitTime=commitTime,
branch=branch,
gpuType="n/a",
cpuType="x86_64",
arch="my_arch",
ram="123456")
# Results are NOT ordered the same as the expected computed cartesian
# product order (10,1),(10,2),(10,4)...
bResultList = [
{'funcName': 'bfs', 'result': 0.012093378696590662,
'argNameValuePairs': [('scale', 10), ('ngpus', 1)]},
{'funcName': 'bfs', 'result': 0.01159052224829793,
'argNameValuePairs': [('scale', 11), ('ngpus', 1)]},
{'funcName': 'bfs', 'result': 0.19485003012232482,
'argNameValuePairs': [('scale', 10), ('ngpus', 2)]},
{'funcName': 'bfs', 'result': 0.18941782205365598,
'argNameValuePairs': [('scale', 11), ('ngpus', 2)]},
{'funcName': 'bfs', 'result': 0.22938291303580627,
'argNameValuePairs': [('scale', 10), ('ngpus', 4)]},
{'funcName': 'bfs', 'result': 0.3148591748904437,
'argNameValuePairs': [('scale', 11), ('ngpus', 8)]},
{'funcName': 'bfs', 'result': 0.30646290094591677,
'argNameValuePairs': [('scale', 10), ('ngpus', 8)]},
{'funcName': 'bfs', 'result': 0.23198354698251933,
'argNameValuePairs': [('scale', 11), ('ngpus', 4)]},
]
for bResult_dic in bResultList:
bResult = BenchmarkResult(**bResult_dic)
db.addResult(bInfo, bResult)
# read back in and check
dbCheck = ASVDb(dbDir, repo, [branch])
retList = dbCheck.getResults()
assert len(retList) == 1
assert retList[0][0] == bInfo
assert len(retList[0][1]) == len(bResultList)
# Ensure each result is present
for bResult_dic in bResultList:
bResult = BenchmarkResult(**bResult_dic)
assert bResult in retList[0][1]
asvDir.cleanup()
def test_writeWithoutRepoSet():
from asvdb import ASVDb
tmpDir = tempfile.TemporaryDirectory()
asvDirName = path.join(tmpDir.name, "dir_that_does_not_exist")
db1 = ASVDb(asvDirName)
with pytest.raises(AttributeError):
db1.updateConfFile()
def test_asvDirDNE():
from asvdb import ASVDb
tmpDir = tempfile.TemporaryDirectory()
asvDirName = path.join(tmpDir.name, "dir_that_does_not_exist")
repo = "somerepo"
branch1 = "branch1"
db1 = ASVDb(asvDirName, repo, [branch1])
db1.updateConfFile()
confFile = path.join(asvDirName, "asv.conf.json")
with open(confFile) as fobj:
j = json.load(fobj)
branches = j["branches"]
assert branches == [branch1]
tmpDir.cleanup()
def test_newBranch():
from asvdb import ASVDb
asvDir = tempfile.TemporaryDirectory()
repo = "somerepo"
branch1 = "branch1"
branch2 = "branch2"
db1 = ASVDb(asvDir.name, repo, [branch1])
db1.updateConfFile()
db2 = ASVDb(asvDir.name, repo, [branch2])
db2.updateConfFile()
confFile = path.join(asvDir.name, "asv.conf.json")
with open(confFile) as fobj:
j = json.load(fobj)
branches = j["branches"]
assert branches == [branch1, branch2]
asvDir.cleanup()
def test_gitExtension():
from asvdb import ASVDb
asvDir = tempfile.TemporaryDirectory()
repo = "somerepo"
branch1 = "branch1"
db1 = ASVDb(asvDir.name, repo, [branch1])
db1.updateConfFile()
confFile = path.join(asvDir.name, "asv.conf.json")
with open(confFile) as fobj:
j = json.load(fobj)
repo = j["repo"]
assert repo.endswith(".git")
asvDir.cleanup()
def test_concurrency():
from asvdb import ASVDb, BenchmarkInfo, BenchmarkResult
tmpDir = tempfile.TemporaryDirectory()
asvDirName = path.join(tmpDir.name, "dir_that_does_not_exist")
repo = "somerepo"
branch1 = "branch1"
db1 = ASVDb(asvDirName, repo, [branch1])
db2 = ASVDb(asvDirName, repo, [branch1])
db3 = ASVDb(asvDirName, repo, [branch1])
# Use the writeDelay member var to insert a delay during write to properly
# test collisions by making writes slow.
db1.writeDelay = 10
db2.writeDelay = 10
bInfo = BenchmarkInfo()
bResult1 = BenchmarkResult(funcName="somebenchmark1", result=43)
bResult2 = BenchmarkResult(funcName="somebenchmark2", result=43)
bResult3 = BenchmarkResult(funcName="somebenchmark3", result=43)
# db1 or db2 should be actively writing the result (because the writeDelay is long)
# and db3 should be blocked.
t1 = threading.Thread(target=db1.addResult, args=(bInfo, bResult1))
t2 = threading.Thread(target=db2.addResult, args=(bInfo, bResult2))
t3 = threading.Thread(target=db3.addResult, args=(bInfo, bResult3))
t1.start()
t2.start()
time.sleep(0.5) # ensure t3 tries to write last
t3.start()
# Check that db3 is blocked - if locking wasn't working, it would have
# finished since it has no writeDelay.
t3.join(timeout=0.5)
assert t3.is_alive() is True
# Cancel db1 and db2, allowing db3 to write and finish
db1.cancelWrite = True
db2.cancelWrite = True
t3.join(timeout=11)
assert t3.is_alive() is False
t1.join()
t2.join()
t3.join()
# Check that db3 wrote its result
with open(path.join(asvDirName, "results", "benchmarks.json")) as fobj:
jo = json.load(fobj)
assert "somebenchmark3" in jo
#print(jo)
tmpDir.cleanup()
def test_concurrency_stress():
from asvdb import ASVDb, BenchmarkInfo, BenchmarkResult
tmpDir = tempfile.TemporaryDirectory()
asvDirName = path.join(tmpDir.name, "dir_that_does_not_exist")
repo = "somerepo"
branch1 = "branch1"
num = 32
dbs = []
threads = []
allFuncNames = []
bInfo = BenchmarkInfo(machineName=machineName)
for i in range(num):
db = ASVDb(asvDirName, repo, [branch1])
db.writeDelay=0.5
dbs.append(db)
funcName = f"somebenchmark{i}"
bResult = BenchmarkResult(funcName=funcName, result=43)
allFuncNames.append(funcName)
t = threading.Thread(target=db.addResult, args=(bInfo, bResult))
threads.append(t)
for i in range(num):
threads[i].start()
for i in range(num):
threads[i].join()
# There should be num unique results in the db after (re)reading. Pick any
# of the db instances to read, they should all see the same results.
results = dbs[0].getResults()
assert len(results[0][1]) == num
# Simply check that all unique func names were read back in.
allFuncNamesCheck = [r.funcName for r in results[0][1]]
assert sorted(allFuncNames) == sorted(allFuncNamesCheck)
tmpDir.cleanup()
def test_s3_concurrency():
from asvdb import ASVDb, BenchmarkInfo, BenchmarkResult
tmpDir = tempfile.TemporaryDirectory(suffix='asv')
asvDirName = "s3://gpuci-cache-testing/asvdb"
resource = boto3.resource('s3')
bucketName = "gpuci-cache-testing"
benchmarkKey = "asvdb/results/benchmarks.json"
repo = "somerepo"
branch1 = "branch1"
db1 = ASVDb(asvDirName, repo, [branch1])
db2 = ASVDb(asvDirName, repo, [branch1])
db3 = ASVDb(asvDirName, repo, [branch1])
# Use the writeDelay member var to insert a delay during write to properly
# test collisions by making writes slow.
db1.writeDelay = 10
db2.writeDelay = 10
bInfo = BenchmarkInfo()
bResult1 = BenchmarkResult(funcName="somebenchmark1", result=43)
bResult2 = BenchmarkResult(funcName="somebenchmark2", result=43)
bResult3 = BenchmarkResult(funcName="somebenchmark3", result=43)
# db1 or db2 should be actively writing the result (because the writeDelay is long)
# and db3 should be blocked.
t1 = threading.Thread(target=db1.addResult, args=(bInfo, bResult1))
t2 = threading.Thread(target=db2.addResult, args=(bInfo, bResult2))
t3 = threading.Thread(target=db3.addResult, args=(bInfo, bResult3))
t1.start()
t2.start()
time.sleep(0.5) # ensure t3 tries to write last
t3.start()
# Check that db3 is blocked - if locking wasn't working, it would have
# finished since it has no writeDelay.
t3.join(timeout=0.5)
assert t3.is_alive() is True
# Cancel db1 and db2, allowing db3 to write and finish
db1.cancelWrite = True
db2.cancelWrite = True
t3.join(timeout=11)
assert t3.is_alive() is False
t1.join()
t2.join()
t3.join()
# Check that db3 wrote its result
os.makedirs(path.join(tmpDir.name, "asvdb/results"))
resource.Bucket(bucketName).download_file(benchmarkKey, path.join(tmpDir.name, benchmarkKey))
with open(path.join(tmpDir.name, benchmarkKey)) as fobj:
jo = json.load(fobj)
assert "somebenchmark3" in jo
tmpDir.cleanup()
db3.s3Resource.Bucket(db3.bucketName).objects.filter(Prefix="asvdb/").delete()
def test_s3_concurrency_stress():
from asvdb import ASVDb, BenchmarkInfo, BenchmarkResult
asvDirName = "s3://gpuci-cache-testing/asvdb"
bucketName = "gpuci-cache-testing"
repo = "somerepo"
branch1 = "branch1"
num = 32
dbs = []
threads = []
allFuncNames = []
bInfo = BenchmarkInfo(machineName=machineName, cudaVer="Test", osType="Test", pythonVer="Test", commitHash="Test")
for i in range(num):
db = ASVDb(asvDirName, repo, [branch1])
db.debugPrint = True
db.writeDelay=0.5
dbs.append(db)
funcName = f"somebenchmark{i}"
bResult = BenchmarkResult(funcName=funcName, result=43)
allFuncNames.append(funcName)
t = threading.Thread(target=db.addResult, args=(bInfo, bResult))
threads.append(t)
for i in range(num):
threads[i].start()
for i in range(num):
threads[i].join()
# There should be num unique results in the db after (re)reading. Pick any
# of the db instances to read, they should all see the same results.
results = dbs[0].getResults()
assert len(results[0][1]) == num
# Simply check that all unique func names were read back in.
allFuncNamesCheck = [r.funcName for r in results[0][1]]
assert sorted(allFuncNames) == sorted(allFuncNamesCheck)
boto3.resource("s3").Bucket(bucketName).objects.filter(Prefix="asvdb/").delete()
def test_read():
from asvdb import ASVDb
tmpDir = tempfile.TemporaryDirectory()
asvDirName = path.join(tmpDir.name, "dir_that_did_not_exist_before")
createAndPopulateASVDb(asvDirName)
db1 = ASVDb(asvDirName)
db1.loadConfFile()
# asvdb always ensures repos end in .git
assert db1.repo == f"{repo}.git"
assert db1.branches == [branch]
# getInfo() returns a list of BenchmarkInfo objs
biList = db1.getInfo()
assert len(biList) == 1
bi = biList[0]
assert bi.machineName == machineName
assert bi.commitHash == commitHash
assert bi.commitTime == commitTime
assert bi.branch == branch
# getResults() returns a list of tuples:
# (BenchmarkInfo obj, [BenchmarkResult obj, ...])
brList = db1.getResults()
assert len(brList) == len(biList)
assert brList[0][0] == bi
results = brList[0][1]
assert len(results) == len(algoRunResults)
br = results[0]
assert br.funcName == algoRunResults[0][0]
assert br.argNameValuePairs == [("dataset", datasetName)]
assert br.result == algoRunResults[0][1]
def test_getFilteredResults():
from asvdb import ASVDb, BenchmarkInfo
tmpDir = tempfile.TemporaryDirectory()
asvDirName = path.join(tmpDir.name, "dir_that_did_not_exist_before")
db = ASVDb(asvDirName, repo, [branch])
bInfo1 = BenchmarkInfo(machineName=machineName,
cudaVer="9.2",
osType="linux",
pythonVer="3.6",
commitHash=commitHash,
commitTime=commitTime)
bInfo2 = BenchmarkInfo(machineName=machineName,
cudaVer="10.1",
osType="linux",
pythonVer="3.7",
commitHash=commitHash,
commitTime=commitTime)
bInfo3 = BenchmarkInfo(machineName=machineName,
cudaVer="10.0",
osType="linux",
pythonVer="3.7",
commitHash=commitHash,
commitTime=commitTime)
addResultsForInfo(db, bInfo1)
addResultsForInfo(db, bInfo2)
addResultsForInfo(db, bInfo3)
# should only return results associated with bInfo1
brList1 = db.getResults(filterInfoObjList=[bInfo1])
assert len(brList1) == 1
assert brList1[0][0] == bInfo1
assert len(brList1[0][1]) == len(algoRunResults)
# should only return results associated with bInfo1 or bInfo3
brList1 = db.getResults(filterInfoObjList=[bInfo1, bInfo3])
assert len(brList1) == 2
assert brList1[0][0] in [bInfo1, bInfo3]
assert brList1[1][0] in [bInfo1, bInfo3]
assert brList1[0][0] != brList1[1][0]
assert len(brList1[0][1]) == len(algoRunResults)
assert len(brList1[1][1]) == len(algoRunResults)
| 0 |
rapidsai_public_repos/asvdb
|
rapidsai_public_repos/asvdb/conda/meta.yaml
|
{% set version = load_setup_py_data().get('version') %}
package:
name: asvdb
version: {{ version }}
source:
path: ..
build:
string: {{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }}
script: {{ PYTHON }} -m pip install . --no-deps
noarch: python
requirements:
host:
- python
run:
- python
- boto3
- botocore
test:
imports:
- asvdb
about:
home: https://github.com/rapidsai/asvdb
license: Apache 2.0
| 0 |
rapidsai_public_repos/asvdb
|
rapidsai_public_repos/asvdb/asvdb/asvdb.py
|
import json
import os
from os import path
from pathlib import Path
import tempfile
import itertools
import glob
import time
import random
import stat
from urllib.parse import urlparse
from botocore import exceptions
import boto3
BenchmarkInfoKeys = set([
"machineName",
"cudaVer",
"osType",
"pythonVer",
"commitHash",
"commitTime",
"branch",
"gpuType",
"cpuType",
"arch",
"ram",
"gpuRam",
"requirements",
])
BenchmarkResultKeys = set([
"funcName",
"result",
"argNameValuePairs",
"unit",
])
class BenchmarkInfo:
"""
Meta-data describing the environment for a benchmark or set of benchmarks.
"""
def __init__(self, machineName="", cudaVer="", osType="", pythonVer="",
commitHash="", commitTime=0, branch="",
gpuType="", cpuType="", arch="", ram="", gpuRam="",
requirements=None):
self.machineName = machineName
self.cudaVer = cudaVer
self.osType = osType
self.pythonVer = pythonVer
self.commitHash = commitHash
self.commitTime = int(commitTime)
self.branch = branch
self.gpuType = gpuType
self.cpuType = cpuType
self.arch = arch
self.ram = ram
self.gpuRam = gpuRam
self.requirements = requirements or {}
def __repr__(self):
return (f"{self.__class__.__name__}(machineName='{self.machineName}'"
f", cudaVer='{self.cudaVer}'"
f", osType='{self.osType}'"
f", pythonVer='{self.pythonVer}'"
f", commitHash='{self.commitHash}'"
f", commitTime={self.commitTime}"
f", branch={self.branch}"
f", gpuType='{self.gpuType}'"
f", cpuType='{self.cpuType}'"
f", arch='{self.arch}'"
f", ram={repr(self.ram)}"
f", gpuRam={repr(self.gpuRam)}"
f", requirements={repr(self.requirements)}"
")")
def __eq__(self, other):
return (self.machineName == other.machineName) \
and (self.cudaVer == other.cudaVer) \
and (self.osType == other.osType) \
and (self.pythonVer == other.pythonVer) \
and (self.commitHash == other.commitHash) \
and (self.commitTime == other.commitTime) \
and (self.branch == other.branch) \
and (self.gpuType == other.gpuType) \
and (self.cpuType == other.cpuType) \
and (self.arch == other.arch) \
and (self.ram == other.ram) \
and (self.gpuRam == other.gpuRam) \
and (self.requirements == other.requirements)
class BenchmarkResult:
"""
The result of a benchmark run for a particular benchmark function, given
specific args.
"""
def __init__(self, funcName, result, argNameValuePairs=None, unit=None):
self.funcName = funcName
self.argNameValuePairs = self.__sanitizeArgNameValues(argNameValuePairs)
self.result = result
self.unit = unit or "seconds"
def __sanitizeArgNameValues(self, argNameValuePairs):
if argNameValuePairs is None:
return []
return [(n, str(v if v is not None else "NaN")) for (n, v) in argNameValuePairs]
def __repr__(self):
return (f"{self.__class__.__name__}(funcName='{self.funcName}'"
f", result={repr(self.result)}"
f", argNameValuePairs={repr(self.argNameValuePairs)}"
f", unit='{self.unit}'"
")")
def __eq__(self, other):
return (self.funcName == other.funcName) \
and (self.argNameValuePairs == other.argNameValuePairs) \
and (self.result == other.result) \
and (self.unit == other.unit)
class ASVDb:
"""
A "database" of benchmark results consumable by ASV.
https://asv.readthedocs.io/en/stable/dev.html?highlight=%24results_dir#benchmark-suite-layout-and-file-formats
"""
confFileName = "asv.conf.json"
defaultResultsDirName = "results"
defaultHtmlDirName = "html"
defaultConfVersion = 1
benchmarksFileName = "benchmarks.json"
machineFileName = "machine.json"
lockfilePrefix = ".asvdbLOCK"
def __init__(self, dbDir,
repo=None, branches=None, projectName=None, commitUrl=None):
"""
dbDir - directory containing the ASV results, config file, etc.
repo - the repo associated with all results in the DB.
branches - https://asv.readthedocs.io/en/stable/asv.conf.json.html#branches
projectName - the name of the project to display in ASV reports
commitUrl - the URL ASV will use in reports to redirect users to when
they click on a data point. This is typically a Github
project URL that shows the contents of a commit.
"""
self.dbDir = dbDir
self.repo = repo
self.branches = branches
self.projectName = projectName
self.commitUrl = commitUrl
self.machineFileExt = path.join(self.defaultResultsDirName, "*", self.machineFileName)
self.confFileExt = self.confFileName
self.confFilePath = path.join(self.dbDir, self.confFileName)
self.confVersion = self.defaultConfVersion
self.resultsDirName = self.defaultResultsDirName
self.resultsDirPath = path.join(self.dbDir, self.resultsDirName)
self.htmlDirName = self.defaultHtmlDirName
self.benchmarksFileExt = path.join(self.defaultResultsDirName, self.benchmarksFileName)
self.benchmarksFilePath = path.join(self.resultsDirPath, self.benchmarksFileName)
# Each ASVDb instance must have a unique lockfile name to identify other
# instances that may be setting locks.
self.lockfileName = "%s-%s-%s" % (self.lockfilePrefix, os.getpid(), time.time())
self.lockfileTimeout = 5 # seconds
# S3-related attributes
if self.__isS3URL(dbDir):
self.s3Resource = boto3.resource("s3")
self.bucketName = urlparse(self.dbDir, allow_fragments=False).netloc
self.bucketKey = urlparse(self.dbDir, allow_fragments=False).path.lstrip('/')
########################################
# Testing and debug members
self.debugPrint = False
# adds a delay during write operations to easily test write collision
# handling.
self.writeDelay = 0
# To "cancel" write operations that are being delayed.
self.cancelWrite = False
###########################################################################
# Public API
###########################################################################
def loadConfFile(self):
"""
Read the ASV conf file on disk and set - or possibly overwrite - the
member variables with the contents of the file.
"""
self.__assertDbDirExists()
try:
self.__getLock(self.dbDir)
# FIXME: check if confFile exists
self.__downloadIfS3()
d = self.__loadJsonDictFromFile(self.confFilePath)
self.resultsDirName = d.get("results_dir", self.resultsDirName)
self.resultsDirPath = path.join(self.dbDir, self.resultsDirName)
self.benchmarksFilePath = path.join(self.resultsDirPath, self.benchmarksFileName)
self.htmlDirName = d.get("html_dir", self.htmlDirName)
self.repo = d.get("repo")
self.branches = d.get("branches", [])
self.projectName = d.get("project")
self.commitUrl = d.get("show_commit_url")
self.__uploadIfS3()
finally:
self.__releaseLock(self.dbDir)
self.__removeLocalS3Copy()
def updateConfFile(self):
"""
Update the ASV conf file with the values passed in to the CTOR. This
also ensures the object is up-to-date with any changes to the conf file
that may have been done by other ASVDb instances.
"""
self.__ensureDbDirExists()
try:
self.__getLock(self.dbDir)
self.__downloadIfS3()
if self.__waitForWrite():
self.__updateConfFile()
self.__uploadIfS3()
finally:
self.__releaseLock(self.dbDir)
self.__removeLocalS3Copy()
def addResult(self, benchmarkInfo, benchmarkResult):
"""
Add the benchmarkResult associated with the benchmarkInfo to the DB.
This will also update the conf file with the CTOR args if not done
already.
"""
self.__ensureDbDirExists()
try:
self.__getLock(self.dbDir)
self.__downloadIfS3(bInfo=benchmarkInfo)
if self.__waitForWrite():
self.__updateFilesForInfo(benchmarkInfo)
self.__updateFilesForResult(benchmarkInfo, benchmarkResult)
self.__uploadIfS3()
finally:
self.__releaseLock(self.dbDir)
self.__removeLocalS3Copy()
def addResults(self, benchmarkInfo, benchmarkResultList):
"""
Add each benchmarkResult obj in benchmarkResultList associated with
benchmarkInfo to the DB. This will also update the conf file with the
CTOR args if not done already.
"""
self.__ensureDbDirExists()
try:
self.__getLock(self.dbDir)
self.__downloadIfS3(bInfo=benchmarkInfo)
if self.__waitForWrite():
self.__updateFilesForInfo(benchmarkInfo)
for resultObj in benchmarkResultList:
self.__updateFilesForResult(benchmarkInfo, resultObj)
self.__uploadIfS3()
finally:
self.__releaseLock(self.dbDir)
self.__removeLocalS3Copy()
def getInfo(self):
"""
Return a list of BenchmarkInfo objs from reading the db files on disk.
"""
self.__assertDbDirExists()
try:
self.__getLock(self.dbDir)
self.__downloadIfS3()
retList = self.__readResults(infoOnly=True)
finally:
self.__releaseLock(self.dbDir)
self.__removeLocalS3Copy()
return retList
def getResults(self, filterInfoObjList=None):
"""
Return a list of (BenchmarkInfo obj, [BenchmarkResult obj, ...]) tuples
from reading the db files on disk. filterInfoObjList is expected to be
a list of BenchmarkInfo objs, and if provided will be used to return
results for only those BenchmarkInfo objs.
"""
self.__assertDbDirExists()
try:
self.__getLock(self.dbDir)
self.__downloadIfS3(results=True)
retList = self.__readResults(filterByInfoObjs=filterInfoObjList)
finally:
self.__releaseLock(self.dbDir)
self.__removeLocalS3Copy()
return retList
###########################################################################
# Private methods. These should not be called by clients. Among other
# things, public methods use proper locking to ensure atomic operations
# and these do not.
###########################################################################
def __readResults(self, infoOnly=False, filterByInfoObjs=None):
"""
Main "read" method responsible for reading ASV JSON files and creating
BenchmarkInfo and BenchmarkResult objs.
If infoOnly==True, returns a list of only BenchmarkInfo objs, otherwise
returns a list of tuples containing (BenchmarkInfo obj, [BenchmarkResult
obj, ...]) to represent each BenchmarkInfo object and all the
BenchmarkResult objs associated with it.
filterByInfoObjs can be set to only return BenchmarkInfo objs and their
results that match at least one of the BenchmarkInfo objs in the
filterByInfoObjs list (the list is treated as ORd).
"""
retList = []
resultsPath = Path(self.resultsDirPath)
# benchmarks.json contains meta-data about the individual benchmarks,
# which is only needed for returning results.
if not(infoOnly):
benchmarksJsonFile = resultsPath / self.benchmarksFileName
if benchmarksJsonFile.exists():
bDict = self.__loadJsonDictFromFile(benchmarksJsonFile.as_posix())
else:
# FIXME: test
raise FileNotFoundError(f"{benchmarksJsonFile.as_posix()}")
for machineDir in resultsPath.iterdir():
# Each subdir under the results dir contains all results for an
# individual machine. The only non-dir (file) that may need to be
# read in the results dir is benchmarks.json, which would have been
# read above.
if machineDir.is_dir():
# Inside the individual machine dir, look for and read
# machine.json first. Assume this is not a valid results dir if
# no machine file and skip.
machineJsonFile = machineDir / self.machineFileName
if machineJsonFile.exists():
mDict = self.__loadJsonDictFromFile(
machineJsonFile.as_posix())
else :
continue
# Read each results file and populate the machineResults list.
# This will contain either BenchmarkInfo objs or tuples of
# (BenchmarkInfo, [BenchmarkResult objs, ...]) based on infoOnly
machineResults = []
for resultsFile in machineDir.iterdir():
if resultsFile == machineJsonFile:
continue
rDict = self.__loadJsonDictFromFile(resultsFile.as_posix())
resultsParams = rDict.get("params", {})
# Each results file has a single BenchmarkInfo obj
# describing it.
bi = BenchmarkInfo(
machineName=mDict.get("machine", ""),
cudaVer=resultsParams.get("cuda", ""),
osType=resultsParams.get("os", ""),
pythonVer=resultsParams.get("python", ""),
commitHash=rDict.get("commit_hash", ""),
commitTime=rDict.get("date", ""),
branch=rDict.get("branch", ""),
gpuType=mDict.get("gpu", ""),
cpuType=mDict.get("cpu", ""),
arch=mDict.get("arch", ""),
ram=mDict.get("ram", ""),
gpuRam=mDict.get("gpuRam", ""),
requirements=rDict.get("requirements", {})
)
# If a filter was specified, at least one EXACT MATCH to the
# BenchmarkInfo obj must be present.
if filterByInfoObjs and not(bi in filterByInfoObjs):
continue
if infoOnly:
machineResults.append(bi)
else:
# FIXME: if results not in rDict, throw better error
resultsDict = rDict["results"]
# Populate the list of BenchmarkResult objs associated
# with the BenchmarkInfo obj
resultObjs = []
for benchmarkName in resultsDict:
# benchmarkSpec is the entry in benchmarks.json,
# which is needed for the param names
if benchmarkName not in bDict:
print("WARNING: Encountered benchmark name "
"that is not in "
f"{self.benchmarksFileName}: "
f"file: {resultsFile.as_posix()} "
f"invalid name\"{benchmarkName}\", skipping.")
continue
benchmarkSpec = bDict[benchmarkName]
# benchmarkResults is the entry in this particular
# result file for this benchmark
benchmarkResults = resultsDict[benchmarkName]
paramNames = benchmarkSpec["param_names"]
paramValues = benchmarkResults["params"]
results = benchmarkResults["result"]
# Inverse of the write operation described in
# self.__updateResultJson()
paramsCartProd = list(itertools.product(*paramValues))
for (paramValueCombo, result) in zip(paramsCartProd, results):
br = BenchmarkResult(
funcName=benchmarkName,
argNameValuePairs=zip(paramNames, paramValueCombo),
result=result)
unit = benchmarkSpec.get("unit")
if unit is not None:
br.unit = unit
resultObjs.append(br)
machineResults.append((bi, resultObjs))
retList += machineResults
return retList
def __updateFilesForInfo(self, benchmarkInfo):
"""
Updates all the db files that are affected by a new BenchmarkInfo obj.
"""
# special case: if the benchmarkInfo has a new branch specified,
# update self.branches so the conf file includes the new branch
# name.
newBranch = benchmarkInfo.branch
if newBranch and newBranch not in self.branches:
self.branches.append(newBranch)
# The comments below assume default dirname values (mainly
# "results"), which can be changed in the asv.conf.json file.
#
# <self.dbDir>/asv.conf.json
self.__updateConfFile()
# <self.dbDir>/results/<machine dir>/machine.json
self.__updateMachineJson(benchmarkInfo)
def __updateFilesForResult(self, benchmarkInfo, benchmarkResult):
"""
Updates all the db files that are affected by a new BenchmarkResult
obj. This also requires the corresponding BenchmarkInfo obj since some
results files also include info data.
"""
# <self.dbDir>/results/benchmarks.json
self.__updateBenchmarkJson(benchmarkResult)
# <self.dbDir>/results/<machine dir>/<result file name>.json
self.__updateResultJson(benchmarkResult, benchmarkInfo)
def __assertDbDirExists(self):
# FIXME: update to support S3 - this method should return True if
# self.dbDir is a valid S3 URL or a valid path on disk.
if self.__isS3URL(self.dbDir):
self.s3Resource.Bucket(self.bucketName).objects
else:
if not(path.isdir(self.dbDir)):
raise FileNotFoundError(f"{self.dbDir} does not exist or is "
"not a directory")
def __ensureDbDirExists(self):
# FIXME: for S3 support, if self.dbDir is an S3 URL then simply check if
# it's valid and exists, but don't try to create it (raise an exception
# if it does not exist). For a local file path, create it if it does
# not exist, like already being done below.
if self.__isS3URL(self.dbDir):
self.s3Resource.Bucket(self.bucketName).objects
else:
if not(path.exists(self.dbDir)):
os.mkdir(self.dbDir)
# Hack: os.mkdir() seems to return before the filesystem catches up,
# so pause before returning to help ensure the dir actually exists
time.sleep(0.1)
def __updateConfFile(self):
"""
Update the conf file with the settings in this ASVDb instance.
"""
if self.repo is None:
raise AttributeError("repo must be set to non-None before "
f"writing {self.confFilePath}")
d = self.__loadJsonDictFromFile(self.confFilePath)
# ASVDb is git-only for now, so ensure .git extension
d["repo"] = self.repo + (".git" if not self.repo.endswith(".git") else "")
currentBranches = d.get("branches", [])
d["branches"] = currentBranches + [b for b in (self.branches or []) if b not in currentBranches]
d["version"] = self.confVersion
d["project"] = self.projectName or self.repo.replace(".git", "").split("/")[-1]
d["show_commit_url"] = self.commitUrl or \
(self.repo.replace(".git", "") \
+ ("/" if not self.repo.endswith("/") else "") \
+ "commit/")
self.__writeJsonDictToFile(d, self.confFilePath)
def __updateBenchmarkJson(self, benchmarkResult):
# The following is an example of the schema ASV expects for
# `benchmarks.json`. If param names are A, B, and C
#
# {
# "<algo name>": {
# "code": "",
# "name": "<algo name>",
# "param_names": [
# "A", "B", "C"
# ],
# "params": [
# [<value1 for A>,
# <value2 for A>,
# ],
# [<value1 for B>,
# <value2 for B>,
# ],
# [<value1 for C>,
# <value2 for C>,
# ],
# ],
# "timeout": 60,
# "type": "time",
# "unit": "seconds",
# "version": 1,
# }
# }
newParamNames = []
newParamValues = []
for (n, v) in benchmarkResult.argNameValuePairs:
newParamNames.append(n)
newParamValues.append(v)
d = self.__loadJsonDictFromFile(self.benchmarksFilePath)
benchDict = d.setdefault(benchmarkResult.funcName,
self.__getDefaultBenchmarkDescrDict(
benchmarkResult.funcName, newParamNames))
benchDict["unit"] = benchmarkResult.unit
existingParamNames = benchDict["param_names"]
existingParamValues = benchDict["params"]
numExistingParams = len(existingParamNames)
numExistingParamValues = len(existingParamValues)
numNewParams = len(newParamNames)
# Check for the case where a result came in for the function, but it has
# a different number of args vs. what was saved previously
if numExistingParams != numNewParams:
raise ValueError("result for %s had %d params in benchmarks.json, "
"but new result has %d params" \
% (benchmarkResult.funcName, numExistingParams,
numNewParams))
numParams = numNewParams
cartProd = list(itertools.product(*existingParamValues))
if tuple(newParamValues) not in cartProd:
if numExistingParamValues == 0:
for newVal in newParamValues:
existingParamValues.append([newVal])
else:
for i in range(numParams):
if newParamValues[i] not in existingParamValues[i]:
existingParamValues[i].append(newParamValues[i])
d[benchmarkResult.funcName] = benchDict
# a version key must always be present in self.benchmarksFilePath,
# "current" ASV version requires this to be 2 (or higher?)
d["version"] = 2
self.__writeJsonDictToFile(d, self.benchmarksFilePath)
def __updateMachineJson(self, benchmarkInfo):
# The following is an example of the schema ASV expects for
# `machine.json`.
# {
# "arch": "x86_64",
# "cpu": "Intel, ...",
# "machine": "sm01",
# "os": "Linux ...",
# "ram": "123456",
# "version": 1,
# }
machineFilePath = path.join(self.resultsDirPath,
benchmarkInfo.machineName,
self.machineFileName)
d = self.__loadJsonDictFromFile(machineFilePath)
d["arch"] = benchmarkInfo.arch
d["cpu"] = benchmarkInfo.cpuType
d["gpu"] = benchmarkInfo.gpuType
#d["cuda"] = benchmarkInfo.cudaVer
d["machine"] = benchmarkInfo.machineName
#d["os"] = benchmarkInfo.osType
d["ram"] = benchmarkInfo.ram
d["gpuRam"] = benchmarkInfo.gpuRam
d["version"] = 1
self.__writeJsonDictToFile(d, machineFilePath)
def __updateResultJson(self, benchmarkResult, benchmarkInfo):
# The following is an example of the schema ASV expects for
# '<machine>-<commit_hash>.json'. If param names are A, B, and C
#
# {
# "params": {
# "cuda": "9.2",
# "gpu": "Tesla ...",
# "machine": "sm01",
# "os": "Linux ...",
# "python": "3.7",
# },
# "requirements": {},
# "results": {
# "<algo name>": {
# "params": [
# [<value1 for A>,
# <value2 for A>,
# ],
# [<value1 for B>,
# <value2 for B>,
# ],
# [<value1 for C>,
# <value2 for C>,
# ],
# ]
# "result": [
# <result1>,
# <result2>,
# ]
# },
# },
# "commit_hash": "321e321321eaf",
# "date": 12345678,
# "python": "3.7",
# "version": 1,
# }
resultsFilePath = self.__getResultsFilePath(benchmarkInfo)
d = self.__loadJsonDictFromFile(resultsFilePath)
d["params"] = {"gpu": benchmarkInfo.gpuType,
"cuda": benchmarkInfo.cudaVer,
"machine": benchmarkInfo.machineName,
"os": benchmarkInfo.osType,
"python": benchmarkInfo.pythonVer,
}
d["requirements"] = benchmarkInfo.requirements
allResultsDict = d.setdefault("results", {})
resultDict = allResultsDict.setdefault(benchmarkResult.funcName, {})
existingParamValuesList = resultDict.setdefault("params", [])
existingResultValueList = resultDict.setdefault("result", [])
# ASV uses the cartesian product of the param values for looking up the
# result for a particular combination of param values. For example:
# "params": [["a"], ["b", "c"], ["d", "e"]] results in: [("a", "b",
# "d"), ("a", "b", "e"), ("a", "c", "d"), ("a", "c", "e")] and each
# combination of param values has a result, with the results for the
# corresponding param values in the same order. If a result for a set
# of param values does not exist, use None.
# store existing results in map based on cartesian product of all
# current params.
paramsCartProd = list(itertools.product(*existingParamValuesList))
# Assume there is an equal number of results for cartProd values
# (some will be None)
paramsResultMap = dict(zip(paramsCartProd, existingResultValueList))
# FIXME: don't assume these are ordered properly (i.e. the same way as
# defined in benchmarks.json)
newResultParamValues = tuple(v for (_, v) in benchmarkResult.argNameValuePairs)
# Update the "params" lists with the new param settings for the new result.
# Only add values that are not already present
numExistingParamValues = len(existingParamValuesList)
if numExistingParamValues == 0:
for newParamValue in newResultParamValues:
existingParamValuesList.append([newParamValue])
results = [benchmarkResult.result]
else:
for i in range(numExistingParamValues):
if newResultParamValues[i] not in existingParamValuesList[i]:
existingParamValuesList[i].append(newResultParamValues[i])
# Add the new result
paramsResultMap[newResultParamValues] = benchmarkResult.result
# Re-compute the cartesian product of all param values now that the
# new values are added. Use this to determine where to place the new
# result in the result list.
results = []
for paramVals in itertools.product(*existingParamValuesList):
results.append(paramsResultMap.get(paramVals))
resultDict["params"] = existingParamValuesList
resultDict["result"] = results
d["commit_hash"] = benchmarkInfo.commitHash
d["branch"] = benchmarkInfo.branch
d["date"] = int(benchmarkInfo.commitTime)
d["python"] = benchmarkInfo.pythonVer
d["version"] = 1
self.__writeJsonDictToFile(d, resultsFilePath)
def __getDefaultBenchmarkDescrDict(self, funcName, paramNames):
return {"code": funcName,
"name": funcName,
"param_names": paramNames,
"params": [],
"timeout": 60,
"type": "time",
"unit": "seconds",
"version": 2,
}
def __getResultsFilePath(self, benchmarkInfo):
# The path to the resultsFile will be based on additional params present
# in the benchmarkInfo obj.
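# For example, a BenchmarkInfo with machineName "sm01", commitHash "abc123",
# pythonVer "3.7", cudaVer "10.0", and osType "Linux" (hypothetical values)
# would map to:
#   <resultsDirPath>/sm01/abc123-python3.7-cuda10.0-Linux.json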
fileNameParts = [benchmarkInfo.commitHash,
"python%s" % benchmarkInfo.pythonVer,
"cuda%s" % benchmarkInfo.cudaVer,
benchmarkInfo.osType,
]
fileName = "-".join(fileNameParts) + ".json"
return path.join(self.resultsDirPath,
benchmarkInfo.machineName,
fileName)
def __loadJsonDictFromFile(self, jsonFile):
"""
Return a dictionary representing the contents of jsonFile by
either reading in the existing file or returning {}
"""
if path.exists(jsonFile):
with open(jsonFile) as fobj:
# FIXME: ideally this could use flock(), but some situations do
# not allow grabbing a file lock (NFS?)
# fcntl.flock(fobj, fcntl.LOCK_EX)
# FIXME: error checking
return json.load(fobj)
return {}
def __writeJsonDictToFile(self, jsonDict, filePath):
# FIXME: error checking
dirPath = path.dirname(filePath)
if not path.isdir(dirPath):
os.makedirs(dirPath)
with open(filePath, "w") as fobj:
# FIXME: ideally this could use flock(), but some situations do not
# allow grabbing a file lock (NFS?)
# fcntl.flock(fobj, fcntl.LOCK_EX)
json.dump(jsonDict, fobj, indent=2)
###########################################################################
# ASVDb private locking methods
###########################################################################
def __getLock(self, dirPath):
if self.__isS3URL(dirPath):
self.__getS3Lock()
else:
self.__getLocalFileLock(dirPath)
def __getLocalFileLock(self, dirPath):
"""
Gets a lock on dirPath against other ASVDb instances (in other
processes, possibly on other machines) using the following technique:
* Check for other locks and clear them if they've been seen for longer
than self.lockfileTimeout (do this to help cleanup after others that
may have died prematurely)
* Once all locks are clear - either by their owner because they
finished their read/write, or by removing them because they're
presumed dead - create a lock for this instance
* If a race condition was detected, probably because multiple ASVDbs
saw all locks were cleared at the same time and created their locks
at the same time, remove this lock, and wait a random amount of time
before trying again. The random time prevents yet another race.
"""
otherLockfileTimes = {}
thisLockfile = path.join(dirPath, self.lockfileName)
# FIXME: This shouldn't be needed? But if so, be smarter about
# preventing an infinite loop?
i = 0
while i < 1000:
# Keep checking for other locks to clear
self.__updateOtherLockfileTimes(dirPath, otherLockfileTimes)
# FIXME: potential infinite loop due to starvation?
otherLocks = list(otherLockfileTimes.keys())
while otherLocks:
if self.debugPrint:
print(f"This lock file will be {thisLockfile} but other "
f"locks present: {otherLocks}, waiting to try to "
"lock again...")
time.sleep(0.2)
self.__updateOtherLockfileTimes(dirPath, otherLockfileTimes)
otherLocks = list(otherLockfileTimes.keys())
# All clear, create lock
if self.debugPrint:
print(f"All clear, setting lock {thisLockfile}")
self.__createLockfile(dirPath)
# Check for a race condition where another lock could have been created
# while creating the lock for this instance.
self.__updateOtherLockfileTimes(dirPath, otherLockfileTimes)
# If another lock snuck in while this instance was creating its
# lock, remove this lock and wait a random amount of time before
# trying again (random time to prevent another race condition with
# the competing instance, this way someone will clearly get there
# first)
if otherLockfileTimes:
self.__releaseLock(dirPath)
randTime = (int(5 * random.random()) + 1) + random.random()
if self.debugPrint:
print(f"Collision - waiting {randTime} seconds before "
"trying to lock again.")
time.sleep(randTime)
else:
break
i += 1
def __releaseLock(self, dirPath):
if self.__isS3URL(dirPath):
self.__releaseS3Lock()
else:
self.__releaseLocalFileLock(dirPath)
def __releaseLocalFileLock(self, dirPath):
thisLockfile = path.join(dirPath, self.lockfileName)
if self.debugPrint:
print(f"Removing lock {thisLockfile}")
self.__removeFiles([thisLockfile])
def __updateOtherLockfileTimes(self, dirPath, lockfileTimes):
"""
Remove lockfiles that have "timed out", probably because their process
was killed. This will never remove the lockfile for this instance. As a
side effect, update the lockfileTimes dict with the discovery time of any
new lockfiles and drop entries for lockfiles that are no longer present
on disk.
"""
thisLockfile = path.join(dirPath, self.lockfileName)
now = time.time()
expired = []
allLockfiles = glob.glob(path.join(dirPath, self.lockfilePrefix) + "*")
if self.debugPrint:
print(f" This lockfile is {thisLockfile}, allLockfiles is "
f"{allLockfiles}, lockfileTimes is {lockfileTimes}")
# Remove lockfiles from the lockfileTimes dict that are no longer
# present on disk
lockfilesToRemove = set(lockfileTimes.keys()) - set(allLockfiles)
for removedLockfile in lockfilesToRemove:
lockfileTimes.pop(removedLockfile)
# check for expired lockfiles while also setting the discovery time on
# new lockfiles in the lockfileTimes dict.
for lockfile in allLockfiles:
if lockfile == thisLockfile:
continue
if (now - lockfileTimes.setdefault(lockfile, now)) > \
self.lockfileTimeout:
expired.append(lockfile)
if self.debugPrint:
print(f" This lockfile is {thisLockfile}, lockfileTimes is "
f"{lockfileTimes}, now is {now}, expired is {expired}")
self.__removeFiles(expired)
def __createLockfile(self, dirPath):
"""
low-level lockfile creation - consider calling __getLock() instead.
"""
thisLockfile = path.join(dirPath, self.lockfileName)
open(thisLockfile, "w").close()
# Make the lockfile read/write to all so others can remove it if this
# process dies prematurely
os.chmod(thisLockfile, (stat.S_IRUSR | stat.S_IWUSR
| stat.S_IRGRP | stat.S_IWGRP
| stat.S_IROTH | stat.S_IWOTH))
###########################################################################
# S3 Locking methods
###########################################################################
def __getS3Lock(self):
thisLockfile = path.join(self.bucketKey, self.lockfileName)
# FIXME: This shouldn't be needed? But if so, be smarter about
# preventing an infinite loop?
i = 0
# otherLockfileTimes is a tuple representing (<List of lockfiles>, <Length of List>)
otherLockfileTimes = ([], 0)
while i < 1000:
otherLockfileTimes = self.__updateS3LockfileTimes()
debugCounter = 0
while otherLockfileTimes[1] != 0:
if self.debugPrint:
lockfileList = []
for each in otherLockfileTimes[0]:
lockfileList.append(each.key)
print(f"This lock file will be {thisLockfile} but other "
f"locks present: {lockfileList}, waiting to try to "
"lock again...")
time.sleep(1)
otherLockfileTimes = self.__updateS3LockfileTimes()
# All clear, create lock
if self.debugPrint:
print(f"All clear, setting lock {thisLockfile}")
self.s3Resource.Object(self.bucketName, thisLockfile).put()
# Give S3 time to see the new lock
time.sleep(1)
# Check for a race condition where another lock could have been created
# while creating the lock for this instance.
otherLockfileTimes = self.__updateS3LockfileTimes()
if otherLockfileTimes[1] != 0:
self.__releaseS3Lock()
randTime = (int(30 * random.random()) + 5) + random.random()
if self.debugPrint:
print(f"Collision - waiting {randTime} seconds before "
"trying to lock again.")
time.sleep(randTime)
else:
break
i += 1
def __updateS3LockfileTimes(self):
# Find lockfiles in S3 Bucket
response = self.s3Resource.Bucket(self.bucketName).objects \
.filter(Prefix=path.join(self.bucketKey, self.lockfilePrefix))
length = 0
for lockfile in response:
length += 1
if self.lockfileName in lockfile.key:
lockfile.delete()
length -= 1
return (response, length)
def __releaseS3Lock(self):
thisLockfile = path.join(self.bucketKey, self.lockfileName)
if self.debugPrint:
print(f"Removing lock {thisLockfile}")
self.s3Resource.Object(self.bucketName, thisLockfile).delete()
###########################################################################
# S3 utilities
###########################################################################
def __downloadIfS3(self, bInfo=BenchmarkInfo(), results=False):
def downloadS3(bucket, ext):
bucket.download_file(
path.join(self.bucketKey, ext),
path.join(self.localS3Copy.name, ext)
)
if not self.__isS3URL(self.dbDir):
return
self.localS3Copy = tempfile.TemporaryDirectory()
os.makedirs(path.join(self.localS3Copy.name, self.defaultResultsDirName))
bucket = self.s3Resource.Bucket(self.bucketName)
# If results isn't set, only download key files, else download key files and results
if results == False:
keyFileExts = [self.confFileExt, self.machineFileExt, self.benchmarksFileExt]
# Use try/except to catch "Not Found" errors and continue; this avoids additional API calls
for fileExt in keyFileExts:
try:
downloadS3(bucket, fileExt)
except exceptions.ClientError as e:
err = "Not Found"
if err not in e.response["Error"]["Message"]:
raise
# Download specific result file for updating results if BenchmarkInfo is sent
try:
if bInfo.machineName != "":
commitHash, pyVer, cuVer, osType = bInfo.commitHash, bInfo.pythonVer, bInfo.cudaVer, bInfo.osType
filename = f"{commitHash}-python{pyVer}-cuda{cuVer}-{osType}.json"
os.makedirs(path.join(self.localS3Copy.name, self.defaultResultsDirName, bInfo.machineName), exist_ok=True)
resultFileExt = path.join(self.defaultResultsDirName, bInfo.machineName, filename)
downloadS3(bucket, resultFileExt)
except exceptions.ClientError as e:
err = "Not Found"
if err not in e.response["Error"]["Message"]:
raise
else:
try:
downloadS3(bucket, self.confFileExt)
except exceptions.ClientError as e:
err = "Not Found"
if err not in e.response["Error"]["Message"]:
raise
try:
resultsBucketPath = path.join(self.bucketKey, self.defaultResultsDirName)
resultsLocalPath = path.join(self.localS3Copy.name, self.defaultResultsDirName)
# Loop over ASV results folder and download everything.
# objectExt represents the file extension starting from the base resultsBucketPath
# For example: resultsBucketPath = "asvdb/results"
# : objectKey = "asvdb/results/machine_name/results.json"
# : objectExt = "machine_name/results.json"
for bucketObj in bucket.objects.filter(Prefix=resultsBucketPath):
objectExt = bucketObj.key.replace(resultsBucketPath + "/", "")
if len(objectExt.split("/")) > 1:
os.makedirs(path.join(resultsLocalPath, objectExt.split("/")[0]), exist_ok=True)
bucket.download_file(bucketObj.key, path.join(resultsLocalPath, objectExt))
except exceptions.ClientError as e:
err = "Not Found"
if err not in e.response["Error"]["Message"]:
raise e
# Set all the internal locations to point to the downloaded files:
self.confFilePath = path.join(self.localS3Copy.name, self.confFileName)
self.resultsDirPath = path.join(self.localS3Copy.name, self.resultsDirName)
self.benchmarksFilePath = path.join(self.resultsDirPath, self.benchmarksFileName)
def __uploadIfS3(self):
def recursiveUpload(base, ext=""):
root, dirs, files = next(os.walk(path.join(base, ext), topdown=True))
# Upload files in this folder
for name in files:
self.s3Resource.Bucket(self.bucketName) \
.upload_file(path.join(base, ext, name), path.join(self.bucketKey, ext, name))
# Recurse into each subfolder; build each subfolder's path from the
# current ext so sibling folders do not accumulate into one another's paths
if len(dirs) != 0:
for folder in dirs:
recursiveUpload(base, path.join(ext, folder))
if self.__isS3URL(self.dbDir):
recursiveUpload(self.localS3Copy.name)
# Give S3 time to see the new uploads before releasing lock
time.sleep(1)
def __removeLocalS3Copy(self):
if not self.__isS3URL(self.dbDir):
return
self.localS3Copy.cleanup()
self.localS3Copy = None
self.confFilePath = path.join(self.dbDir, self.confFileName)
self.resultsDirPath = path.join(self.dbDir, self.resultsDirName)
self.benchmarksFilePath = path.join(self.resultsDirPath, self.benchmarksFileName)
###########################################################################
def __removeFiles(self, fileList):
for f in fileList:
try:
os.remove(f)
except FileNotFoundError:
pass
def __isS3URL(self, url):
"""
Returns True if url is a S3 URL, False otherwise.
"""
if url.startswith("s3:"):
return True
return False
def __waitForWrite(self):
"""
Testing helper: pause for self.writeDelay seconds, or until
self.cancelWrite turns True. Always set self.cancelWrite back to False
so future writes can take place by default.
Return True to indicate the write operation should take place, or False
to cancel it, based on whether the write was cancelled.
"""
if not(self.cancelWrite):
st = now = time.time()
while ((now - st) < self.writeDelay) and not(self.cancelWrite):
time.sleep(0.01)
now = time.time()
retVal = not(self.cancelWrite)
self.cancelWrite = False
return retVal
| 0 |
rapidsai_public_repos/asvdb
|
rapidsai_public_repos/asvdb/asvdb/__init__.py
|
from .asvdb import (
ASVDb,
BenchmarkInfo,
BenchmarkResult,
BenchmarkInfoKeys,
BenchmarkResultKeys,
)
from . import utils
| 0 |
rapidsai_public_repos/asvdb
|
rapidsai_public_repos/asvdb/asvdb/utils.py
|
import subprocess
def getRepoInfo():
out = getCommandOutput("git remote -v")
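# `git remote -v` typically prints lines such as
#   origin  https://github.com/org/repo.git (fetch)
# (the URL here is hypothetical); the repo URL is taken from the second
# whitespace-separated field of the last line.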
repo = out.split("\n")[-1].split()[1]
branch = getCommandOutput("git rev-parse --abbrev-ref HEAD")
return (repo, branch)
def getCommandOutput(cmd):
result = subprocess.run(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=True)
stdout = result.stdout.decode().strip()
if result.returncode == 0:
return stdout
stderr = result.stderr.decode().strip()
raise RuntimeError("Problem running '%s' (STDOUT: '%s' STDERR: '%s')"
% (cmd, stdout, stderr))
def getCommitInfo():
commitHash = getCommandOutput("git rev-parse HEAD")
commitTime = getCommandOutput("git log -n1 --pretty=%%ct %s" % commitHash)
return (commitHash, str(int(commitTime)*1000))
def getCudaVer():
# FIXME
return "10.0"
def getGPUModel():
# FIXME
return "some GPU"
| 0 |
rapidsai_public_repos/asvdb
|
rapidsai_public_repos/asvdb/asvdb/__main__.py
|
import argparse
from os import path
import asvdb
DESCRIPTION = "Examine or update an ASV 'database' row-by-row."
EPILOG = """
The database is read and each 'row' (an individual result and its context) has
the various expressions evaluated in the context of the row (see --list-keys for
all the keys that can be used in an expression/command). Each action can
potentially modify the list of rows for the next action. Actions can be chained
to perform complex queries or updates, and all actions are performed in the
order in which they were specified on the command line.
The --exec-once action is an exception in that it does not execute on every row,
but instead only once in the context of the global namespace. This allows for
the creation of temp vars or other setup steps that can be used in
expressions/commands in subsequent actions. Like other actions, --exec-once can
be chained with other actions and called multiple times.
The final list of rows will be written to the destination database specified by
--write-to, if provided. If the path to the destination database does not exist,
it will be created. If the destination database does exist, it will be updated
with the results in the final list of rows.
Remember, an ASV database stores results based on the commitHash, so modifying
the commitHash for a result and writing it back to the same database results in a
new, *additional* result as opposed to a modified one. All updates to the
database specified by --write-to either modify an existing result or add new
results, and results cannot be removed from a database. In order to effectively
remove results, a user can --write-to a new database with only the results they
want, then replace the original with the new using file system commands (rm the
old one, mv the new one to the old one's name, etc.)
"""
def parseArgs(argv=None):
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description=DESCRIPTION,
epilog=EPILOG
)
parser.add_argument("--version", action="store_true",
help="Print the current verison of asvdb and exit.")
parser.add_argument("--read-from", type=str, metavar="PATH",
help="Path to ASV db dir to read data from.")
parser.add_argument("--list-keys", action="store_true",
help="List all keys found in the database to STDOUT.")
parser.add_argument("--filter", metavar="EXPR", dest="cmds",
type=_storeActionArg("filter"), action="append",
help="Action which filters the current results based "
"on the evaluation of %(metavar)s.")
parser.add_argument("--exec", metavar="CMD", dest="cmds",
type=_storeActionArg("exec"), action="append",
help="Action which executes %(metavar)s on each of the "
"current results.")
parser.add_argument("--exec-once", metavar="CMD", dest="cmds",
type=_storeActionArg("exec_once"), action="append",
help="Action which executes %(metavar)s once (is not "
"executed for each result).")
parser.add_argument("--print", metavar="PRINTEXPR", dest="cmds",
type=_storeActionArg("print"), action="append",
help="Action which evaluates %(metavar)s in a print() "
"statement for each of the current results.")
parser.add_argument("--write-to", type=str, metavar="PATH",
help="Path to ASV db dir to write data to. %(metavar)s "
"is created if it does not exist.")
return parser.parse_args(argv)
def _storeActionArg(cmdName):
"""
Return a callable to be called by argparse that returns a tuple containing
cmdName and the option given on the command line.
"""
def callable(stringOpt):
if not stringOpt:
raise argparse.ArgumentTypeError("Cannot be empty")
return (cmdName, stringOpt)
return callable
def openAsvdbAtPath(dbDir, repo=None, branches=None,
projectName=None, commitUrl=None):
"""
Either reads the ASV db at dbDir and creates a new db object, or creates a
new db object and sets up the db at dbDir for (presumably) writing new
results to.
"""
db = asvdb.ASVDb(dbDir, repo=repo, branches=branches,
projectName=projectName, commitUrl=commitUrl)
if path.isdir(dbDir):
db.loadConfFile()
else:
db.updateConfFile()
return db
def createNamespace(benchmarkInfo, benchmarkResult):
"""
Creates a dictionary representing a namespace containing the member
vars/values of the benchmarkInfo and benchmarkResult passed in, for use
when eval'ing/exec'ing expressions. This is usually used in place of
locals() in calls to eval() or exec().
"""
namespace = dict(benchmarkInfo.__dict__)
namespace.update(benchmarkResult.__dict__)
return namespace
def updateObjsFromNamespace(benchmarkInfo, benchmarkResult, namespace):
"""
Update the benchmarkInfo and benchmarkResult objects passed in with the
contents of the namespace dict. The objects are updated based on the key
name (eg. a key of commitHash updates benchmarkInfo.commitHash since
commitHash is a member of the BenchmarkInfo class). Any other keys that
aren't members of either class end up updating the global namespace.
"""
for attr in asvdb.BenchmarkInfoKeys:
setattr(benchmarkInfo, attr, namespace.pop(attr))
for attr in asvdb.BenchmarkResultKeys:
setattr(benchmarkResult, attr, namespace.pop(attr))
# All leftover vars in the namespace should be applied to the global
# namespace. This allows exec commands to store intermediate values.
globals().update(namespace)
def filterResults(resultTupleList, expr):
"""
Return a new list of results containing objects that evaluate as True when
the expression is applied to them.
"""
newResultTupleList = []
for (benchmarkInfo, benchmarkResults) in resultTupleList:
resultsForInfo = []
for resultObj in benchmarkResults:
namespace = createNamespace(benchmarkInfo, resultObj)
if eval(expr, globals(), namespace):
resultsForInfo.append(resultObj)
if resultsForInfo:
newResultTupleList.append((benchmarkInfo, resultsForInfo))
return newResultTupleList
def printResults(resultTupleList, expr):
"""
Print the evaluated print expression for each result in resultTupleList.
"""
for (benchmarkInfo, benchmarkResults) in resultTupleList:
for resultObj in benchmarkResults:
namespace = createNamespace(benchmarkInfo, resultObj)
eval(f"print({expr})", globals(), namespace)
return resultTupleList
def execResults(resultTupleList, code):
"""
Run the code on each result in the list. This likely results in modified
objects and possibly new variables in the global namespace.
"""
for (benchmarkInfo, benchmarkResults) in resultTupleList:
for resultObj in benchmarkResults:
namespace = createNamespace(benchmarkInfo, resultObj)
exec(code, globals(), namespace)
updateObjsFromNamespace(benchmarkInfo, resultObj, namespace)
return resultTupleList
def execOnce(resultTupleList, code):
"""
Run the code once, ignoring the result list and updating the global
namespace.
"""
exec(code, globals())
return resultTupleList
def updateDb(dbObj, resultTupleList):
"""
Write the results to the dbObj.
"""
for (benchmarkInfo, benchmarkResults) in resultTupleList:
dbObj.addResults(benchmarkInfo, benchmarkResults)
def main():
cmdMap = {"filter": filterResults,
"print": printResults,
"exec": execResults,
"exec_once": execOnce,
}
args = parseArgs()
if args.version:
print(asvdb.__version__)
return
if args.list_keys:
for k in set.union(asvdb.BenchmarkInfoKeys, asvdb.BenchmarkResultKeys):
print(k)
else:
if args.read_from is None:
raise RuntimeError("--read-from must be specified")
fromDb = openAsvdbAtPath(args.read_from)
results = fromDb.getResults()
for (cmd, expr) in args.cmds or []:
results = cmdMap[cmd](results, expr)
if args.write_to:
toDb = openAsvdbAtPath(args.write_to,
repo=fromDb.repo,
branches=fromDb.branches,
projectName=fromDb.projectName,
commitUrl=fromDb.commitUrl)
updateDb(toDb, results)
if __name__ == "__main__":
main()
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dependency-file-generator/.pre-commit-config.yaml
|
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: 'v4.3.0'
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
- id: check-builtin-literals
- id: check-executables-have-shebangs
- id: check-json
- id: check-yaml
- id: debug-statements
- id: requirements-txt-fixer
- repo: https://github.com/asottile/pyupgrade
rev: 'v3.1.0'
hooks:
- id: pyupgrade
args:
- --py38-plus
- repo: https://github.com/PyCQA/isort
rev: '5.12.0'
hooks:
- id: isort
- repo: https://github.com/psf/black
rev: '22.10.0'
hooks:
- id: black
- repo: https://github.com/PyCQA/flake8
rev: '5.0.4'
hooks:
- id: flake8
args:
- --show-source
- repo: https://github.com/python-jsonschema/check-jsonschema
rev: 0.21.0
hooks:
- id: check-metaschema
files: ^src/rapids_dependency_file_generator/schema.json$
- id: check-jsonschema
files: ^tests/examples/([^/]*)/dependencies.yaml$
args: ["--schemafile", "src/rapids_dependency_file_generator/schema.json"]
- id: check-github-workflows
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dependency-file-generator/package.json
|
{
"name": "rapids-dependency-file-generator",
"version": "1.7.1",
"description": "`rapids-dependency-file-generator` is a Python CLI tool that generates conda `environment.yaml` files and `requirements.txt` files from a single YAML file, typically named `dependencies.yaml`.",
"repository": {
"type": "git",
"url": "git+https://github.com/rapidsai/dependency-file-generator.git"
},
"author": "",
"license": "Apache-2.0",
"bugs": {
"url": "https://github.com/rapidsai/dependency-file-generator/issues"
},
"homepage": "https://github.com/rapidsai/dependency-file-generator",
"devDependencies": {
"@semantic-release/exec": "^6.0.3",
"@semantic-release/git": "^10.0.1",
"semantic-release": "^20.1.0"
}
}
| 0 |
rapidsai_public_repos
|
rapidsai_public_repos/dependency-file-generator/.pre-commit-hooks.yaml
|
- id: rapids-dependency-file-generator
name: RAPIDS dependency file generator
description: Update dependency files according to the RAPIDS dependencies spec
entry: rapids-dependency-file-generator
language: python
files: "dependencies.yaml"
pass_filenames: false
| 0 |